The responder name is now cached in LLURLRequest
(ResponderBase::getName() must return a string literal).
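A minimal sketch of that caching, assuming a member name like
mResponderName (only the getName()-returns-a-string-literal requirement
is from the text, the rest is illustrative):

    // Hypothetical sketch; not the real LLURLRequest interface.
    class ResponderBase
    {
    public:
        virtual ~ResponderBase() { }
        // Must return a string literal (static storage), because the name
        // is cached as a plain pointer and never copied or freed.
        virtual char const* getName(void) const = 0;
    };

    class LLURLRequest_sketch
    {
    public:
        LLURLRequest_sketch(ResponderBase const& responder)
            : mResponderName(responder.getName()) { }   // cache once, up front
        char const* responderName(void) const { return mResponderName; }
    private:
        char const* mResponderName;   // points into static storage
    };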
The run time (in the main thread) per state machine is now accumulated
in AIStateMachine (instead of AIEngine::QueueElement).
When AIStateMachine::mainloop runs longer than StateMachineMaxTime
then a warning is printed that now includes the time spent in the
slowest state machine (that frame) and (if it is an LLURLRequest)
what the corresponding responder is. Also the total accumulated run
time of that state machine is printed.
From this it can be concluded that the only responder currently
regularly holding up the main thread is LLMeshLODResponder (mostly 30 to
100 ms, but with spikes in the 1 to 2 second range sometimes).
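A rough sketch of the bookkeeping described above; StateMachineMaxTime
is from the text, but the member names, the millisecond unit and the
timer interface are assumptions:

    // Hypothetical sketch of per-state-machine run time accumulation.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct StateMachine
    {
        double mRuntime;              // total accumulated run time (ms)
        char const* mResponderName;   // non-NULL only for LLURLRequest-like machines
        void multiplex(void) { /* run one step of this machine */ }
    };

    static double const StateMachineMaxTime = 50.0;   // warning threshold (assumed)

    void engine_mainloop(std::vector<StateMachine*> const& queue, double (*now_ms)(void))
    {
        double frame_start = now_ms();
        StateMachine* slowest = 0;
        double slowest_delta = 0.0;
        for (std::size_t i = 0; i < queue.size(); ++i)
        {
            double start = now_ms();
            queue[i]->multiplex();
            double delta = now_ms() - start;
            queue[i]->mRuntime += delta;   // accumulate in the machine, not the queue element
            if (delta > slowest_delta) { slowest_delta = delta; slowest = queue[i]; }
        }
        if (now_ms() - frame_start > StateMachineMaxTime && slowest)
            std::printf("WARNING: slowest state machine: %.1f ms this frame, "
                        "%.1f ms accumulated, responder: %s\n",
                        slowest_delta, slowest->mRuntime,
                        slowest->mResponderName ? slowest->mResponderName : "none");
    }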
This fixes a bug where unref() was called when a state machine was
aborted before it reached bs_initialized. Debug code was added to detect
errors related to that.
In order to run HTTPGetResponder in any thread, I needed direct access
to LLHTTPClient::request, so I had to move that to the header file,
and therefore had to move ERequestAction from LLURLRequest to
LLHTTPClient to avoid include problems.
With this, textures are fetched with no latency: a call to
LLHTTPClient::request runs all the way till the state machine is idle
(AICurlEasyRequestStateMachine_waitAdded). There is a small delay until the
curl thread wakes up, which then processes the request and opens the url
etc. When the transaction is finished, it calls
AIStateMachine::advance_state(AICurlEasyRequestStateMachine_removed_after_finished)
which subsequently doesn't return until the state machine is completely
finished (bs_killed). The LLURLRequest isn't deleted yet at that point,
because the AITimer of the LLURLRequest runs in the main thread: it is
aborted, but only deleted the next time the main thread state engines
run, and the timer keeps an LLPointer to its parent, the LLURLRequest,
so only then is the LLURLRequest object destructed. This, however, has
nothing to do with the texture-bandwidth loop.
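A sketch of that flow as a state switch; the two state names
AICurlEasyRequestStateMachine_waitAdded and
AICurlEasyRequestStateMachine_removed_after_finished are from the text,
the initial state and the helper calls are assumptions:

    // Hypothetical sketch; not the real AIStateMachine/AICurl API.
    enum state_type
    {
        AICurlEasyRequestStateMachine_addRequest,             // assumed initial state
        AICurlEasyRequestStateMachine_waitAdded,
        AICurlEasyRequestStateMachine_removed_after_finished
    };

    class AICurlEasyRequestStateMachine_sketch
    {
    public:
        void multiplex_impl(state_type run_state)
        {
            switch (run_state)
            {
                case AICurlEasyRequestStateMachine_addRequest:
                    // Hand the easy handle to the curl thread and go idle. This is
                    // where the main-thread call (LLHTTPClient::request) returns.
                    set_state(AICurlEasyRequestStateMachine_waitAdded);
                    idle();
                    break;
                case AICurlEasyRequestStateMachine_waitAdded:
                    // Nothing to do until the curl thread calls advance_state(
                    // AICurlEasyRequestStateMachine_removed_after_finished).
                    idle();
                    break;
                case AICurlEasyRequestStateMachine_removed_after_finished:
                    // Runs in the curl thread (the thread that called advance_state)
                    // and carries the machine all the way to bs_killed.
                    finish();
                    break;
            }
        }
    private:
        void set_state(state_type) { }
        void idle(void) { }
        void finish(void) { }
    };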
This reverts commit ef35aa7954
because it contained too many wrong things that I won't be
using. I'll re-commit the parts of it that I do want to keep.
This work extends AIStateMachine to run multiplex() in the thread
that calls run(), cont() or set_state(). Note that all three
eventually call locked_cont(), so that's where multiplex() is called
from. Calling multiplex() means "running the state machine", as in
"calling multiplex_impl".
Currently only LLURLRequest uses this feature, and then only
for the HTTPGetResponder, and only for the initializing,
start-up and normal finish states.
A current/remaining problem is that we run into a situation where
the curl thread runs a state machine to its finish and kills it,
while the main thread is also 'running' it and tries to call
multiplex() while the state machine isn't running anymore.
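A sketch of how the three entry points can funnel into locked_cont();
the function names are from the text, the locking and bodies are
assumptions (and the race above is exactly the hard part such a sketch
glosses over):

    // Hypothetical sketch; not the real AIStateMachine implementation.
    #include <mutex>

    class AIStateMachine_sketch
    {
    public:
        AIStateMachine_sketch(void) : mState(0) { }

        void run(void)            { std::lock_guard<std::mutex> lock(mMutex); locked_cont(); }
        void cont(void)           { std::lock_guard<std::mutex> lock(mMutex); locked_cont(); }
        void set_state(int state) { std::lock_guard<std::mutex> lock(mMutex); mState = state; locked_cont(); }

    private:
        void locked_cont(void)
        {
            // All three entry points end up here, so the state machine is run
            // (multiplex(), which calls multiplex_impl()) in whatever thread
            // made the call -- the curl thread as well as the main thread.
            multiplex();
        }
        void multiplex(void) { /* would call multiplex_impl(mState) */ }

        std::mutex mMutex;
        int mState;
    };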
Before, every HEAD and GET request allowed redirection by default,
without setting a limit on the number of redirections. This caused
an infinite redirect loop when connecting to marketplace, in combination
with the bug that we did not allow cookies.
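The fix, in plain libcurl terms, amounts to something like the
following; the concrete redirect limit and the use of the in-memory
cookie engine are assumptions, only the problem (unlimited redirects,
no cookies) is from the text:

    #include <curl/curl.h>

    void configure_redirects(CURL* handle)
    {
        curl_easy_setopt(handle, CURLOPT_FOLLOWLOCATION, 1L);  // still follow redirects
        curl_easy_setopt(handle, CURLOPT_MAXREDIRS, 5L);       // but never endlessly
        // Enable libcurl's in-memory cookie engine, so a redirect that sets a
        // cookie (as marketplace does) no longer loops forever.
        curl_easy_setopt(handle, CURLOPT_COOKIEFILE, "");
    }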
ResponderHeadersOnly is a base class for responders that use
LLHTTPClient::head or LLHTTPClient::getHeaderOnly. It already
has a needsHeaders() that returns true and only allows for
completedHeaders to be overridden.
I removed the CURLOPT_HEADER option for these cases, because
that only causes the headers to be sent to the writeCallback
as if they are part of the body, in addition to the headerCallback.
That gave rise to some confusion in the existing code (i.e.,
unexpected errors when trying to decode the body as LLSD, and
duplicated 'low speed' information for the Timeout policy code).
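In libcurl terms the headers-only set-up then looks roughly like this;
the callback names are hypothetical, the real ones live in the
Responder/buffer classes:

    #include <curl/curl.h>

    // Headers go here (status line, "Content-Length:", etc.).
    static size_t header_cb(char*, size_t size, size_t nmemb, void*)
    {
        return size * nmemb;
    }

    // Body only -- safe to decode as LLSD, no duplicated header data.
    static size_t write_cb(char*, size_t size, size_t nmemb, void*)
    {
        return size * nmemb;
    }

    void configure_headers_only(CURL* handle)
    {
        curl_easy_setopt(handle, CURLOPT_HEADERFUNCTION, header_cb);
        curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, write_cb);
        // Deliberately NOT setting CURLOPT_HEADER: that would copy the headers
        // into write_cb as if they were part of the body.
        curl_easy_setopt(handle, CURLOPT_NOBODY, 1L);   // HEAD-style request (assumed)
    }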
Every curl transaction is an AICurlEasyRequestStateMachine which has an
AICurlEasyRequest as member, which is a reference counting pointer to
a ThreadSafeBufferedCurlEasyRequest. And now BufferedCurlEasyRequest is
derived from CurlEasyRequest, which is derived from CurlEasyHandle, but
neither is used separately.
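The resulting structure, sketched with only the class names taken from
the text (the members shown are assumptions):

    class CurlEasyHandle { /* wraps the raw CURL* handle */ };
    class CurlEasyRequest : public CurlEasyHandle { /* adds request set-up */ };
    class BufferedCurlEasyRequest : public CurlEasyRequest { /* adds I/O buffers */ };

    // Thread-safe, reference-counted wrapper (locking and counting omitted here).
    class ThreadSafeBufferedCurlEasyRequest
    {
        BufferedCurlEasyRequest mRequest;
    };

    // Reference counting pointer to the above.
    class AICurlEasyRequest
    {
        ThreadSafeBufferedCurlEasyRequest* mPtr;
    };

    // Every curl transaction: a state machine owning one such pointer as member.
    class AICurlEasyRequestStateMachine
    {
        AICurlEasyRequest mCurlEasyRequest;
    };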
* Moved Responder stuff to LLHTTPClient.
* Renamed LLHTTPClient::Responder to LLHTTPClient::ResponderWithResult.
* Deleted LLHTTPClientAdapter and LLHTTPClientInterface.
* Renamed AICurlInterface::TransferInfo to AITransferInfo and moved it
to llhttpclient.h
* Removed 'CURLcode code' argument from completed_headers.
* Removed LLCurlRequest and replaced its last usage with LLHTTPClient API calls.
* Deleted dead code.
* Renamed all the get4/post4/put4/getByteRange4 etc, back to their
original name without the '4'.
Minor changes like comment fixes and addition of accessors
that will be needed for future commits.
Also removed Responder::fatalError as it was never used.
Moved CURLOPT_ENCODING from CurlEasyRequest::setPost_raw, and
CURLOPT_SSL_VERIFYPEER and CURLOPT_SSL_VERIFYHOST from
CurlResponderBuffer::prepRequest, to LLURLRequest::configure,
enabling the debug setting NoVerifySSLCert for the latter
two to work as follows: the old behavior if "NoVerifySSLCert"
is not set, and skip both checks if it is set. However, if
the (new) bool mIsAuth is set, the behavior of
LLXMLRPCTransaction::Impl::init is used. This is so that in a
next commit we can replace LLXMLRPCTransaction with LLURLRequest:
LLXMLRPCTransaction::Impl::init will be removed. For the same
reason, when the new boolean mNoCompression is set,
CURLOPT_ENCODING is set to "identity"; otherwise the old
behavior (of clearing it) is used.
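In libcurl terms the intended configure() behavior comes down to
something like this; gNoVerifySSLCert stands in for the debug setting,
and the exact mIsAuth branch (taken from LLXMLRPCTransaction::Impl::init)
is not reproduced:

    #include <curl/curl.h>

    void configure_sketch(CURL* handle, bool no_verify_ssl_cert,
                          bool is_auth, bool no_compression)
    {
        if (is_auth)
        {
            // Follow the (not reproduced here) behavior of
            // LLXMLRPCTransaction::Impl::init for authentication requests.
        }
        else
        {
            bool verify = !no_verify_ssl_cert;   // "NoVerifySSLCert" set: check neither
            curl_easy_setopt(handle, CURLOPT_SSL_VERIFYPEER, verify ? 1L : 0L);
            curl_easy_setopt(handle, CURLOPT_SSL_VERIFYHOST, verify ? 2L : 0L);
        }
        // "identity" disables compression; the empty string (interpreting
        // "clearing it") lets libcurl accept any encoding it supports.
        curl_easy_setopt(handle, CURLOPT_ENCODING, no_compression ? "identity" : "");
    }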
Introduces AIHTTPTimeoutPolicy objects which do not just
specify a single "timeout" in seconds, but a plethora of
timings related to the life cycle of the average HTTP
transaction.
This knowledge is then moved into the Responder being
used, instead of floating constants hardcoded in the
callers of HTTP requests. This assumes that the same
timeout policy is wanted for each transaction that
uses the same Responder, which can be enforced if needed.
I added an AIHTTPTimeoutPolicy for EVERY responder,
only to make it easier later to tune timeout values
and/or to get feedback about which responder runs
into HTTP errors in debug output (especially timeouts),
so that they can be tuned later. If we already understood
exactly what we were doing, then most responders could
have been left alone and just returned the default timeout
policy: by far most timeout policies are currently just
a copy of the default policy.
This commit is not finished... It's a work in progress
(viewer runs fine with it though).
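A sketch of what such a policy object could hold; the field names and
values are assumptions, only the idea (several timings per transaction
life cycle, one policy per Responder) is from the text:

    // Hypothetical sketch; not the real AIHTTPTimeoutPolicy.
    struct AIHTTPTimeoutPolicy_sketch
    {
        unsigned int mConnectTimeout;   // seconds to establish the connection
        unsigned int mReplyDelay;       // seconds to wait for the first reply
        unsigned int mLowSpeedTime;     // seconds the transfer may stay below...
        unsigned int mLowSpeedLimit;    // ...this many bytes/second before aborting
        unsigned int mMaxTotalDelay;    // hard cap on the whole transaction
    };

    // Default policy; most per-responder policies are currently just copies of it.
    static AIHTTPTimeoutPolicy_sketch const default_policy = { 30, 60, 30, 70, 600 };

    class ExampleResponder_sketch
    {
    public:
        // Every responder returns its own policy, so timeouts can be tuned (and
        // reported in debug output) per responder.
        virtual AIHTTPTimeoutPolicy_sketch const& getHTTPTimeoutPolicy(void) const
        {
            return default_policy;
        }
        virtual ~ExampleResponder_sketch() { }
    };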
For POST and PUT, libcurl by default adds an "Expect: 100-continue"
header and waits for the server to reply with a 100 response.
However, since the server never does that (LL comment in the
code: "I'm not sure what it means"), it's up to libcurl to continue
anyway after a while, and that part is apparently bugged, causing
people to sometimes not be able to log in. By suppressing the
header, libcurl doesn't wait for it and we work around this bug.
The commit introduces a much improved CurlEasyRequest::setPost
method that enforces the above for POST, but that also no longer
uses CURLOPT_COPYPOSTFIELDS (but CURLOPT_POSTFIELDS), no longer
making a copy of the body of what we are going to send to the server.
Instead it uses a new object, derived from AIPostField, to keep track
of this data and to dispose of it once the transaction is complete
(and no sooner).
Copied from llcorehttp, we now also always set a "Connection:
keep-alive" and a "Keep-alive: 300" header for POST.
This was already done for texture downloads (HttpOpRequest),
but now we do it always :p. Might need to be changed in the
future, but currently those headers are ignored by the server
anyway.
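In plain libcurl calls the described setPost() behavior amounts to
roughly the following; AIPostField itself is not reproduced, the raw
buffer stands in for it:

    #include <curl/curl.h>

    void set_post_sketch(CURL* handle, curl_slist*& headers, char const* data, long size)
    {
        // Suppress the "Expect: 100-continue" header that libcurl adds by default
        // for POST, so it never waits for a 100 response the server won't send.
        headers = curl_slist_append(headers, "Expect:");
        // Keep-alive headers, copied from what llcorehttp does for texture downloads.
        headers = curl_slist_append(headers, "Connection: keep-alive");
        headers = curl_slist_append(headers, "Keep-alive: 300");
        curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);

        // CURLOPT_POSTFIELDS instead of CURLOPT_COPYPOSTFIELDS: libcurl does not
        // copy the body; the caller (an AIPostField-derived object in the real
        // code) must keep it alive until the transaction completes, and free it
        // no sooner.
        curl_easy_setopt(handle, CURLOPT_POST, 1L);
        curl_easy_setopt(handle, CURLOPT_POSTFIELDS, data);
        curl_easy_setopt(handle, CURLOPT_POSTFIELDSIZE, size);
    }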
This code has been in the viewer source for a long time,
and hasn't been used for a long time (the furthest back that
I checked was Snowglobe 1.4).
Most notably, this removes LLContextURLExtractor and code
that used it because that required an API where AICurlEasyHandle
is created before a URL is known, which gets in the way of
reusing connections.
Note that in the code has_curl_request was, and still is, always false.
However, instead of deleting all code paths that are only executed
when has_curl_request would be true, I fixed the code to work as
intended with my current implementation, which also results in
LLCurlRequests never expiring. This way things won't break
unexpectedly when this ever changes.
Since on this branch isValid was still only called (the rest was
already removed) to check whether the curl download expired, I took
the liberty of renaming isValid to hasNotExpired.
Conflicts:
indra/llmessage/llcurl.cpp
indra/llmessage/llcurl.h
indra/newview/app_settings/settings.xml
indra/newview/llappviewer.cpp
indra/newview/llmeshrepository.cpp
Resolved:
indra/llmessage/llcurl.cpp:
Basically removed (not used anyway)
indra/llmessage/llcurl.h:
Basically removed (just includes aicurl.h now)
indra/newview/app_settings/settings.xml:
CurlUseMultipleThreads was removed.
CurlMaximumNumberOfHandles and CurlRequestTimeOut
are still in there, but unused at the moment.
indra/newview/llappviewer.cpp:
CurlMaximumNumberOfHandles and CurlRequestTimeOut
are unused at the moment.
indra/newview/llmeshrepository.cpp:
Lock mSignal always (is unlocked inside wait()).
Use mSignal lock to see if we are waiting; remove mWaiting.
Return false from the MeshFetch functions iff we have to retry
an HTTP fetch. Catch the error exception thrown by getByteRange
instead of using its return value (it always returns true
anyway).