This adds mStatus, mReason and mContent to ResponderBase
and fills those in instead of passing them to the member functions.
The added danger here is that code can now accidentally
access these variables before they have been given a
correct value.
Affected members of ResponderBase (which now take fewer arguments):
decode_llsd_body, decode_raw_body, completedHeaders,
completed -> httpCompleted, result -> httpSuccess,
errorWithContent and error -> httpFailure.
New API:
ResponderBase::setResult
ResponderBase::getStatus()
ResponderBase::getReason()
ResponderBase::getContent()
ResponderBase::getResponseHeaders() (returns AIHTTPReceivedHeaders though, not LLSD)
ResponderBase::dumpResponse()
ResponderWithCompleted::completeResult
ResponderWithResult::failureResult (previously pubErrorWithContent)
ResponderWithResult::successResult (previously pubResult)
Not implemented:
getHTTPMethod() - use getName() instead which returns the class name of the responder.
completedHeaders() is still called as usual, although you can ignore
it (not implement it in a derived responder) and call getResponseHeaders()
instead, provided you implement needsHeaders() and have it return true.
However, classes derived from ResponderHeadersOnly get no default
completedHeaders(), so they must still implement it themselves,
and then call getResponseHeaders() or just access mReceivedHeaders
directly, as usual.
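A minimal sketch of the resulting shape of a derived responder; the stand-in
types and the exact signatures below are simplified assumptions for
illustration, not the real headers:

    #include <iostream>
    #include <string>

    // Stand-ins for the real viewer types (assumptions, illustration only).
    struct LLSD { };
    typedef int S32;

    class ResponderBase
    {
      public:
        ResponderBase() : mStatus(0) { }
        virtual ~ResponderBase() { }
        virtual char const* getName() const = 0;   // used instead of getHTTPMethod()
      protected:
        // The response is now stored in members and read through accessors,
        // instead of being passed as arguments to the virtual callbacks.
        S32 getStatus() const { return mStatus; }
        std::string const& getReason() const { return mReason; }
        LLSD const& getContent() const { return mContent; }
        std::string dumpResponse() const { return std::string(getName()) + ": " + mReason; }
        S32 mStatus;
        std::string mReason;
        LLSD mContent;
    };

    // Stand-in for ResponderWithResult: the callbacks take no arguments anymore.
    class ResponderWithResult : public ResponderBase
    {
      public:
        virtual void httpSuccess() = 0;   // was: result(LLSD const& content)
        virtual void httpFailure() = 0;   // was: error / errorWithContent(...)
    };

    // Hypothetical derived responder.
    class MyResponder : public ResponderWithResult
    {
      public:
        /*virtual*/ char const* getName() const { return "MyResponder"; }
        /*virtual*/ void httpSuccess()
        {
          LLSD const& body = getContent();     // was an argument before
          (void)body;                          // ... handle the result ...
        }
        /*virtual*/ void httpFailure()
        {
          std::cerr << dumpResponse() << std::endl;
        }
    };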
This splits the AIPerService queue up into four queues: one for each
"capability type". On Second Life that doesn't make a difference in
itself for textures, because the texture service only serves one
capability type: textures. Other services, however, can serve two types,
while on Avination - which currently has only one service for everything
- this really makes a difference, because that single service now has
four queues.
More importantly, however, the administration of how many requests
are in the "pipeline" (from approving that a new HTTP request may be
added for a given service, until curl finishes it) is now per capability
type (or rather per service/capability type pair). This means downloads
of a certain capability type (textures, inventory, mesh, other) will no
longer stall because unapproved requests cluttered the queue for a given
service.
Moreover, previously when a request finished, it would only look for a
new request in the queue of the service that just finished. This simple
algorithm worked when there were no 'PerService' objects and only one
'Curl' queue: if anything was queued, that was because there were
running requests, and when one of those running requests finished
it made sense to see if one of the queued requests could be added now.
However, after adding multiple queues, one for each service, it could
happen that service A had queued requests while only requests from
service B were actually running: only requests of B would ever finish,
and the requests of A would stay queued forever.
With this patch the algorithm is to look alternately first in the
texture request queue and then in the inventory request queue - or vice
versa - and if neither has a request, look for a request of a different
type. If that also cannot be found, look for a request in another
service. This is still not optimal and subject to change.
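A rough, self-contained sketch of that lookup order (all names below are
hypothetical; the real code lives in AIPerService and differs in detail):

    #include <deque>

    // Hypothetical capability types; one request queue per type.
    enum CapabilityType { cap_texture, cap_inventory, cap_mesh, cap_other,
                          number_of_capability_types };

    struct Request { /* ... */ };

    struct PerService
    {
      std::deque<Request> mQueue[number_of_capability_types];

      // Pick the next queued request after one finished. The caller toggles
      // 'texture_first' on every call to get the alternating behaviour
      // described above. Returns true if a request was dequeued into 'out'.
      bool dequeue_next(bool texture_first, Request& out)
      {
        CapabilityType const preferred[2] = { cap_texture, cap_inventory };
        for (int i = 0; i < 2; ++i)
        {
          CapabilityType ct = preferred[texture_first ? i : 1 - i];
          if (!mQueue[ct].empty())
          {
            out = mQueue[ct].front();
            mQueue[ct].pop_front();
            return true;
          }
        }
        // Neither textures nor inventory: try the remaining capability types.
        for (int ct = cap_mesh; ct < number_of_capability_types; ++ct)
        {
          if (!mQueue[ct].empty())
          {
            out = mQueue[ct].front();
            mQueue[ct].pop_front();
            return true;
          }
        }
        return false;   // Nothing here; the caller would try another service next.
      }
    };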
The old code could do up to four requests with only one approval.
Now we just start to assemble the four types of requests until either we
can get approval for one, or one of them gets too large. This way we
still request everything in the same order, and at LEAST as many per
call as before, assuming we get the approval of course.
The result should actually be faster, because now we will request up to 5
folders or items per capability, and not spread those 5 out over 2 to 4
capability requests.
The inventory bulk fetch is not thread-safe, so it doesn't start
right away, which means the approval is not yet honored upon return from
post_approved (formerly post_nb).
This patch renames wantsMoreHTTPRequestsFor to approveHTTPRequestFor,
and has it return NULL or an AIPerService::Approvement object.
The latter is now passed to the CurlEasyHandle object instead of just a
boolean mQueueIfTooMuchBandwidthUsage, and the Approvement is then
honored by the state machine right after the request is actually added
to the command queue.
This should avoid a flood of inventory requests in the case that
approveHTTPRequestFor is called multiple times before the main thread
adds the requests to the command queue. I don't think that actually ever
happens, but I added debug code (to find some problem) that checks
everything so damn strictly that I need to be this precise in order to
do that testing.
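The idea behind the Approvement object, in a self-contained sketch (the member
names and the use of std::shared_ptr are assumptions; the real class is
reference-counted differently):

    #include <memory>

    // A token representing one approved but not-yet-queued request, so the
    // per-service bookkeeping stays exact even when approval and queuing
    // happen on different threads.
    class Approvement
    {
      public:
        Approvement() : mHonored(false) { /* e.g. ++approved_but_not_added */ }
        ~Approvement() { if (!mHonored) { /* never queued: undo the reservation */ } }
        void honored() { mHonored = true; /* reservation becomes a pipelined request */ }
      private:
        bool mHonored;
    };

    typedef std::shared_ptr<Approvement> Approvement_ptr;

    // approveHTTPRequestFor would return NULL when the service is too busy,
    // or such a token when one more request may be added.
    Approvement_ptr approveHTTPRequestFor(bool capacity_left)
    {
      if (!capacity_left)
        return Approvement_ptr();
      return Approvement_ptr(new Approvement);
    }

The token travels with the CurlEasyHandle, and it is honored right after the
request is actually added to the command queue.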
Most notably getMesh (the only one possibly using any significant
bandwidth), but in general every type of request that just has to
happen anyway, and in the order requested: they are simply passed
to the curl thread, but now the curl thread will queue them and hold
back if the (general) service they use is too heavily loaded.
Don't pass arguments to wantsMoreHTTPRequestsFor, but use globals in
llmessage: AIPerService::sHTTPThrottleBandwidth125 and
AIPerService::sNoHTTPBandwidthThrottling instead.
This is needed later on.
This also makes the viewer immune to grids that send the FetchInventory2 et al
capabilities regardless of whether we requested them (in fact, we always
request them now: we need them when someone switches in the middle of a session).
Note that (I tested this) textures could already be switched between
HTTP and UDP without relogging.
Call AIPerService::wantsMoreHTTPRequestsFor in
LLInventoryModelBackgroundFetch::bulkFetch to determine if curl is ready
for the next batch or not, instead of using inaccurate heuristic code
that is just guessing a bit.
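Roughly the shape of that check (the exact signature of
wantsMoreHTTPRequestsFor is an assumption):

    #include <string>

    // Stand-in declaration; the real AIPerService lives in llmessage.
    struct AIPerService
    {
      static bool wantsMoreHTTPRequestsFor(std::string const& capability_url)
      {
        (void)capability_url;
        return true;           // the real code checks the per-service pipeline
      }
    };

    // Sketch of the top of bulkFetch: ask the per-service bookkeeping whether
    // curl can take another batch, instead of guessing from local counters.
    void bulkFetch(std::string const& cap_url)
    {
      if (!AIPerService::wantsMoreHTTPRequestsFor(cap_url))
        return;                // not ready; try again on the next call
      // ... assemble and post the next batch of folder/item requests ...
    }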
Doing this resulted in a 404 on Aditi, and although that was a server
bug, it still doesn't seem to make much sense to do the request in the
first place.
* Moved Responder stuff to LLHTTPClient.
* Renamed LLHTTPClient::Responder to LLHTTPClient::ResponderWithResult.
* Deleted LLHTTPClientAdapter and LLHTTPClientInterface.
* Renamed AICurlInterface::TransferInfo to AITransferInfo and moved it
to llhttpclient.h
* Removed 'CURLcode code' argument from completed_headers.
* Removed LLCurlRequest and replaced its last usage with LLHTTPClient API calls.
* Deleted dead code.
* Renamed all the get4/post4/put4/getByteRange4 etc. back to their
original names without the '4'.
Introduces AIHTTPTimeoutPolicy objects which do not just
specify a single "timeout" in seconds, but a plethora of
timings related to the life cycle of the average HTTP
transaction.
This knowledge is then moved to the Responder being
used, instead of floating constants hardcoded in the
callers of HTTP requests. This assumes that the same
timeout policy is wanted for every transaction that
uses the same Responder, which can be enforced if needed.
I added an AIHTTPTimeoutPolicy for EVERY responder,
only to make it easier to tune timeout values later
and/or to get feedback in debug output about which
responder runs into HTTP errors (especially timeouts),
so that they can be tuned. If we already understood
exactly what we were doing, then most responders could
have been left alone to just return the default timeout
policy: by far most timeout policies are currently just
a copy of the default policy.
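For reference, a sketch of the shape this takes (field names, default values
and the getHTTPTimeoutPolicy() accessor are assumptions, not the real
interface):

    // Sketch only: the real AIHTTPTimeoutPolicy has more fields.
    struct AIHTTPTimeoutPolicy
    {
      char const* mName;             // responder name, for debug output
      unsigned int mConnectTimeout;  // seconds to establish the connection
      unsigned int mLowSpeedTime;    // seconds the transfer may stay too slow
      unsigned int mTotalDelay;      // seconds for the whole transaction

      AIHTTPTimeoutPolicy(char const* name,
                          unsigned int connect = 30,    // placeholder defaults
                          unsigned int low_speed = 60,
                          unsigned int total = 600)
        : mName(name), mConnectTimeout(connect),
          mLowSpeedTime(low_speed), mTotalDelay(total) { }
    };

    // One (currently mostly default-valued) policy object per responder class,
    // so each can be tuned independently and identified by name in debug output.
    AIHTTPTimeoutPolicy myResponder_timeout("MyResponder");

    class MyResponder /* : public LLHTTPClient::ResponderWithResult */
    {
      public:
        // Hypothetical accessor handing the policy to the curl code.
        AIHTTPTimeoutPolicy const& getHTTPTimeoutPolicy() const
        {
          return myResponder_timeout;
        }
    };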
This commit is not finished... It's a work in progress
(viewer runs fine with it though).