This adds mStatus, mReason and mContent to ResponderBase
and fills those in instead of passing them to member functions.
The added danger here is that code can now accidentally
access these variables before they have been given a
correct value.
Affected members of ResponderBase (that now take fewer arguments):
decode_llsd_body, decode_raw_body, completedHeaders,
completed -> httpCompleted, result -> httpSuccess,
errorWithContent and error -> httpFailure.
New API:
ResponderBase::setResult
ResponderBase::getStatus()
ResponderBase::getReason()
ResponderBase::getContent()
ResponderBase::getResponseHeaders() (returns AIHTTPReceivedHeaders though, not LLSD)
ResponderBase::dumpResponse()
ResponderWithCompleted::completeResult
ResponderWithResult::failureResult (previously pubErrorWithContent)
ResponderWithResult::successResult (previously pubResult)
Not implemented:
getHTTPMethod() - use getName() instead, which returns the class name of the responder.
completedHeaders() is still called as usual, although you can ignore
it (not implement it in a derived responder) and call getResponseHeaders()
instead, provided you implement needsHeaders() and have it return true.
Classes derived from ResponderHeadersOnly however do not have
that option: they still must implement completedHeaders(),
and then call getResponseHeaders() or just access mReceivedHeaders
directly, as usual.
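To illustrate the new API, a minimal sketch of a derived responder
(the exact virtual function signatures here are assumptions; only the
API names listed above are real):

    // Sketch of a hypothetical derived responder.
    class ExampleResponder : public ResponderWithResult {
    public:
      /*virtual*/ bool needsHeaders(void) const { return true; } // Have getResponseHeaders() filled in.

      /*virtual*/ void httpSuccess(void)
      {
        // mStatus, mReason and mContent were already set through setResult.
        LLSD const& content = getContent();
        AIHTTPReceivedHeaders const& headers = getResponseHeaders(); // Not LLSD.
        // ... process content and headers ...
      }

      /*virtual*/ void httpFailure(void)
      {
        llwarns << getName() << " failed with " << getStatus() << " " << getReason() << llendl;
        dumpResponse();
      }
    };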
It turns out that the only responders that want to get the redirect
status codes themselves are the ones that already had a
redirect_status_ok() exception.
Added SEND_UDP_REQ and WAIT_UDP_REQ fetch states.
Parse 'content-range' from headers (see the sketch below).
Purge 'complete' textures from the cache if they lack an 'end of codestream' marker.
Added boostlevel/category to textureview display.
More diagnostic output.
Discard handling tweaks/bugfixes from v3.
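The 'content-range' parsing mentioned above could look roughly like this
(a sketch; the real function, its name and types may differ):

    // Hypothetical parser for a header value like "bytes 500-999/4096".
    #include <cstdio>
    #include <string>

    typedef unsigned int U32;   // Stand-in for the viewer's U32.

    bool parse_content_range(std::string const& value, U32& first, U32& last, U32& total)
    {
      unsigned int f, l, t;
      // Note: a total of "*" (unknown length) is not handled in this sketch.
      if (std::sscanf(value.c_str(), "bytes %u-%u/%u", &f, &l, &t) != 3)
        return false;
      first = f; last = l; total = t;
      return true;
    }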
This splits the AIPerService queue up into four queues: one for each
"capability type". On Second Life that doesn't make a difference in
itself for textures, because the texture service only serves one
capability type: textures. Other services however can serve two types,
while on Avination - which currently has only one service for everything
- this really makes a difference, because that single service now has
four queues.
More important however is that the administration of how many requests
are in the "pipeline" (from approving that a new HTTP request may be
added for a given service until curl finished it) is now per capability
type (or per service/capability type pair, actually). This means downloads
of a certain capability type (textures, inventory, mesh, other) will no
longer stall because unapproved requests cluttered the queue of a given
service.
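Schematically the new administration looks like this (the member names
are illustrative, not the actual implementation):

    #include <deque>

    // Illustrative: one queue plus a pipeline counter per
    // (service, capability type) pair.
    struct QueuedRequest { };   // Placeholder for a queued curl request.

    enum AICapabilityType { cap_texture, cap_inventory, cap_mesh, cap_other, number_of_capability_types };

    class AIPerService {
      struct CapabilityType {
        std::deque<QueuedRequest> mQueuedRequests; // Requests waiting to be added to curl.
        int mAdded;                                // In the "pipeline": approved until curl finished them.
      };
      CapabilityType mCapabilityType[number_of_capability_types];
    };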
Moreover, before, when a request finished, it would only look for a
new request in the queue of the service that just finished. This simple
algorithm worked when there were no 'PerService' objects and only one
'Curl' queue: if anything was queued, then that was because there
were running requests, and when one of those running requests finished
it made sense to see if one of those queued requests could be added now.
However, after adding multiple queues, one for each service, it could
happen that service A had queued requests while only requests from
service B were actually running: only requests of B would ever finish
and the requests of A would be queued forever.
With this patch the algorithm is to look alternately first in the
texture request queue and then in the inventory request queue - or vice
versa - and if those are empty, to look for a request of a different
type. If that cannot be found either, look for a request in another
service. This is still not optimal and subject to change.
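In pseudo-C++ the selection order is (a sketch using the AICapabilityType
enum from the sketch above; the helper functions are hypothetical):

    // Sketch of the new selection algorithm.
    bool AIPerService::add_queued_request(void)
    {
      static bool texture_first = true;            // Alternates between calls.
      AICapabilityType first  = texture_first ? cap_texture : cap_inventory;
      AICapabilityType second = texture_first ? cap_inventory : cap_texture;
      texture_first = !texture_first;
      // 1. Alternating: the texture queue first and then the inventory queue, or vice versa.
      if (add_from(first) || add_from(second))
        return true;
      // 2. Otherwise, a request of a different type of this service.
      if (add_from(cap_mesh) || add_from(cap_other))
        return true;
      // 3. Finally, look for a request in another service.
      return add_from_another_service();
    }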
The inventory bulk fetch is not thread-safe, so it doesn't start
right away, causing the approvement not to be honored upon return from
post_approved (formerly post_nb).
This patch renames wantsMoreHTTPRequestsFor to approveHTTPRequestFor,
and has it return NULL or an AIPerService::Approvement object.
The latter is now passed to the CurlEasyHandle object instead of just a
boolean mQueueIfTooMuchBandwidthUsage, and the Approvement is then
honored by the state machine right after the request is actually added
to the command queue.
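The intended flow, schematically (everything except approveHTTPRequestFor
and Approvement is an assumption here):

    // Sketch: ask for approval and hand it to the easy handle.
    AIPerService::Approvement* approved = AIPerService::approveHTTPRequestFor(per_service);
    if (approved)
    {
      // Replaces the old boolean mQueueIfTooMuchBandwidthUsage.
      easy_handle->set_approvement(approved);      // Hypothetical setter on CurlEasyHandle.
      // Later, in the state machine, right after the request was really
      // added to the command queue:
      //   approved->honored();                    // Hypothetical; marks the approvement as used.
    }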
This should avoid a flood of inventory requests in case
approveHTTPRequestFor is called multiple times before the main thread
adds the requests to the command queue. I don't think that actually ever
happens, but I added debug code (to find some problem) that checks
everything so strictly that I need to be this precise in order to
do that testing.
Most notably getMesh (the only one possibly using any significant
bandwidth), but in general every type of request that just has to
happen anyway, and in the order requested: these are just passed
to the curl thread, but now the curl thread will queue them and hold
back if the (general) service they use is loaded too heavily.
Don't pass arguments to wantsMoreHTTPRequestsFor, but use globals in
llmessage: AIPerService::sHTTPThrottleBandwidth125 and
AIPerService::sNoHTTPBandwidthThrottling instead.
This is needed later on.
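So inside llmessage the check becomes something like (sketch; the
used_bandwidth variable is hypothetical):

    // Sketch of using the new globals instead of passed arguments.
    if (AIPerService::sNoHTTPBandwidthThrottling ||
        used_bandwidth <= AIPerService::sHTTPThrottleBandwidth125)
    {
      // OK to approve more HTTP requests for this service.
    }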
This also makes the viewer immune to grids that send the FetchInventory2 et al
capabilities regardless of whether we requested them (in fact, we always
request them now: we need them when someone switches in the middle of a session).
Note that (I tested this) textures could already be switched between
HTTP and UDP without relogging.
Add '(UDP)' after Objects, to show that this is UDP bandwidth.
Do not add the received HTTP texture bytes to gTextureList.sTextureBits,
making it (and the 'UDP Textures' graph) indeed pure UDP.
Adds throttling based on the average bandwidth usage per HTTP service.
Since only HTTP textures are using this, they are still starved by other
services like inventory and mesh downloads. Also, the maximum number of
connections per service will need to be moved to the PerService
class and tuned dynamically: reducing the number of connections is
the first thing to do when too much bandwidth is being used.
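The per-service gate is, schematically (the members shown are assumptions):

    // Sketch: deny new requests while the average bandwidth used by this
    // service (over roughly the last second) exceeds its maximum.
    bool AIPerService::bandwidth_throttled(void) const
    {
      return mAverageHTTPBandwidth > mMaxHTTPBandwidth;  // Hypothetical members.
    }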
I also added a graph for HTTP texture bandwidth to the stats floater.
For some reason the average bandwidth (over 1 second) looks almost like
scattered noise... weird for something that is averaged...
Rationale: LL is doing all throttling per service (host:port), not per
service hostname. Also, textures and capabilities use the same host: the
sim you are connected to. Splitting the queues up on a per-service basis
will stop the textures from blocking a capability request.
After this commit things compile again :).
The HTTP bandwidth throttling is not yet implemented. In the next commit
I'll put back a temporary fix that just does it the "old way"...
Points to the AIPerHostRequestQueue instance corresponding to the
hostname in LLTextureFetchWorker::mUrl. This is basically a cache, as we
could of course just retrieve that instance from mUrl any time it is
needed. Needed in a future commit.
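Schematically (the member and helper names are hypothetical):

    // Sketch: cache the instance once instead of looking it up from
    // mUrl every time it is needed.
    class LLTextureFetchWorker {
      AIPerHostRequestQueuePtr mPerHostPtr;        // Hypothetical member and pointer type.
      // ...
    };

    // Set once, as soon as mUrl is known:
    mPerHostPtr = AIPerHostRequestQueue::instance(extract_hostname(mUrl));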
Replaces LLTextureFetch::getNumHTTPRequests.
Returns AICurlInterface::Stats::running_handles.
This is work in progress that temporarily doesn't compile because
LLTextureFetch::getNumHTTPRequests is still being used somewhere.
All HTTP timing is done by AIHTTPTimeoutPolicy.
Inside LLTextureFetchWorker::doWork, when mState == SEND_HTTP_REQ,
mCanUseHTTP is true, throttling is not in effect and mUrl is not empty,
mLoaded is set to FALSE, mState is set to WAIT_HTTP_REQ and
LLHTTPClient::request is called, which starts the download by curl.
A callback to LLTextureFetchWorker::callbackHttpGet is guaranteed,
which causes mLoaded to be set to TRUE (HTTPGetResponder::completedRaw
calls LLTextureFetchWorker::callbackHttpGet, which sets mRequestedSize to -1
if there was an error, and mLoaded to TRUE).
Being in state WAIT_HTTP_REQ, once mLoaded == TRUE (and mRequestedSize
is -1), the different timeout errors are handled.
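In simplified pseudo-C++, what the above describes (a sketch; 'throttled'
stands for the throttling test):

    // Simplified sketch of this part of LLTextureFetchWorker::doWork.
    if (mState == SEND_HTTP_REQ)
    {
      if (mCanUseHTTP && !throttled && !mUrl.empty())
      {
        mLoaded = FALSE;
        mState = WAIT_HTTP_REQ;
        // Starts the download by curl; a callback to callbackHttpGet
        // (via HTTPGetResponder::completedRaw) is guaranteed.
        LLHTTPClient::request(/* ... */);
      }
    }
    else if (mState == WAIT_HTTP_REQ && mLoaded)   // callbackHttpGet set mLoaded to TRUE.
    {
      if (mRequestedSize == -1)                    // callbackHttpGet signalled an error.
      {
        // Handle the different timeout errors here.
      }
    }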
When uploading finishes but this is not detected, the timeout should be
the "reply delay", the time that the server takes before it replies, and not
CurlTimeoutLowSpeedTime. This patch adds code that takes this failure
into account (it happened only ONCE for me, on Metropolis, while flying
around and using trickle (not sure if that is relevant), so it's not
that likely to improve anything in practice. Note that it is
detected by an assertion when it happens, so we can safely assume
it normally never happened on SL).
* Generalized PUT / POST configuration by adding
CurlEasyRequest::setPut, which now also supports keep-alive (which
still isn't used).
* Upload content length is now stored in CurlEasyRequest::mContentLength.
* CurlEasyRequest::has_stalled() now returns false if it is possible
that the 'upload finished' detection failed AND calls upload_finished()
itself in that case, so it is no longer 'const'.
* If low speed is detected exactly when the last bytes are being
sent (an unlikely scenario), then the upload gets 4 more seconds,
after which it switches to CurlTimeoutReplyDelay.
* Added EDoesAuthentication and EAllowCompressedReply to replace
booleans, for readability and type-safety, like EKeepAlive before them
(see the sketch after this list). Note that this change inverts the
meaning of the compression related parameter.
* Unrelated: removed an unnecessary #include "llurlrequest.h" from
llxmlrpcresponder.h
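For example, the gain at a call site (a sketch; the actual parameter
lists differ):

    // Sketch: an enum instead of a boolean.
    enum EKeepAlive { no_keep_alive, keep_alive };

    // Before: request.setPost(buffer, size, true);  // What does 'true' mean here?
    // After:
    request.setPut(buffer, size, keep_alive);        // Self-documenting, and type-safe:
                                                     // passing a plain bool no longer compiles.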
This fixes a bug where unref() was called when a state machine was
aborted before it reached bs_initialized. Debug code was added to detect
errors related to that.
In order to run HTTPGetResponder in any thread, I needed direct access
to LLHTTPClient::request, so I had to move that to the header file,
and therefore had to move ERequestAction from LLURLRequest to
LLHTTPClient to avoid include problems.
With this, textures are fetched with no latency: the call to
LLHTTPClient::request runs all the way until the state machine is idle
(AICurlEasyRequestStateMachine_waitAdded). There is a small delay until the
curl thread wakes up, which then processes the request and opens the url
etc. When the transaction is finished, it calls
AIStateMachine::advance_state(AICurlEasyRequestStateMachine_removed_after_finished),
which subsequently doesn't return until the state machine is completely
finished (bs_killed). The LLURLRequest isn't deleted yet at that point,
because the AITimer of the LLURLRequest runs in the main thread: it is
aborted, but only deleted the next time the main thread state engines run,
and since the timer keeps an LLPointer to its parent, the LLURLRequest,
only then is the LLURLRequest object destructed. This
however has nothing to do with the texture-bandwidth loop.
This reverts commit ef35aa7954
because it contained too many wrong things that I won't be
using. I'll re-commit the parts of it that I do
want to keep.
This work extends AIStateMachine to run multiplex() in the thread
that calls run(), cont() or set_state(). Note that all three
eventually call locked_cont(), so that's where multiplex() is called
from. Calling multiplex() means "running the state machine", as in
"calling multiplex_impl".
Currently only LLURLRequest uses this feature, and then only
for the HTTPGetResponder, and only for the initializing,
start-up and normal finish states.
A current/remaining problem is that we can run into a situation where
the curl thread runs a state machine to its finish and kills it,
while the main thread is also 'running' it and tries to call
multiplex() while the state machine isn't running anymore.