This splits the AIPerService queue up into four queues: one for each
"capability type". On Second Life that does not make a difference for
textures by itself, because the texture service only serves one
capability type: textures. Other services, however, can serve two types,
while on Avination - which currently only has one service for everything
- this really makes a difference, because that single service now has
four queues.
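Roughly, that split could be pictured as in the sketch below; the names
and types are illustrative, not the actual AIPerService members. Each
service object holds one queue per capability type instead of a single
queue.

    #include <deque>
    #include <memory>

    struct QueuedRequest { /* payload of one not-yet-started HTTP request */ };

    // Hypothetical capability type enum; the real code may name these differently.
    enum AICapabilityType {
      cap_texture,
      cap_inventory,
      cap_mesh,
      cap_other,
      number_of_capability_types
    };

    class PerServiceSketch {
     public:
      // Queue a request under its capability type, instead of in one per-service queue.
      void queue(AICapabilityType type, std::shared_ptr<QueuedRequest> request) {
        mQueues[type].push_back(std::move(request));
      }

     private:
      // Four queues per service; on a grid with a single service for everything
      // (e.g. Avination) this still gives four independent queues.
      std::deque<std::shared_ptr<QueuedRequest>> mQueues[number_of_capability_types];
    };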
More important, however, is that the administration of how many requests
are in the "pipeline" (from approving that a new HTTP request may be
added for a given service, until curl finishes it) is now per capability
type (or rather per service/capability type pair). This means that downloads
of a certain capability type (textures, inventory, mesh, other) will no
longer stall because unapproved requests clutter the queue of a given
service.
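That bookkeeping could look roughly like the sketch below; the counter
name and the limit are assumptions, not the actual AIPerService
administration. The count goes up when a request is approved and down
when curl finishes it, so throttling one capability type no longer
blocks the others.

    // Illustrative per-capability "pipeline" counter (names and limit are assumed).
    struct CapabilityPipelineSketch {
      int in_flight = 0;        // approved requests that curl has not finished yet
      int max_in_flight = 8;    // illustrative throttle limit for this capability type

      // Called when deciding whether a new HTTP request may be added for this
      // service/capability type pair.
      bool approve_one() {
        if (in_flight >= max_in_flight)
          return false;         // throttled: only this capability type is affected
        ++in_flight;
        return true;
      }

      // Called when curl finished a request of this capability type.
      void finished_one() { --in_flight; }
    };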
Moreover, before, when a request finished, only the queue of the service
that just finished was searched for a new request. This simple
algorithm worked when there were no 'PerService' objects and only one
'Curl' queue: if anything was queued, that was because there
were running requests, and when one of those running requests finished
it made sense to see if one of the queued requests could be added now.
However, after adding multiple queues, one for each service, it could
happen that service A had queued requests while only requests from
service B were actually running: only requests of B would ever finish,
and the requests of A would stay queued forever.
With this patch the algorithm is to look alternately in the
texture request queue first and then in the inventory request queue, or
vice versa; if neither has a queued request, look for a request of a
different type. If that also cannot be found, look for a request in
another service. This is still not optimal and subject to change.
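A minimal sketch of that lookup order, assuming per-capability-type
queues as in the earlier sketch (the real AIPerService code differs in
detail; only the order is the point here):

    #include <deque>

    enum AICapabilityType { cap_texture, cap_inventory, cap_mesh, cap_other,
                            number_of_capability_types };
    struct QueuedRequest { };

    class PickNextSketch {
     public:
      // Returns true when a queued request was found and dequeued into 'out'.
      bool pick_next(QueuedRequest& out) {
        // Alternate which of texture/inventory is tried first on every call.
        AICapabilityType first  = mTextureFirst ? cap_texture   : cap_inventory;
        AICapabilityType second = mTextureFirst ? cap_inventory : cap_texture;
        mTextureFirst = !mTextureFirst;

        if (pop(first, out) || pop(second, out))
          return true;
        // Neither texture nor inventory is queued: try the remaining types.
        return pop(cap_mesh, out) || pop(cap_other, out);
        // If this also fails, the caller would go on to look at another service
        // (not shown here).
      }

     private:
      bool pop(AICapabilityType type, QueuedRequest& out) {
        if (mQueues[type].empty())
          return false;
        out = mQueues[type].front();
        mQueues[type].pop_front();
        return true;
      }

      bool mTextureFirst = true;
      std::deque<QueuedRequest> mQueues[number_of_capability_types];
    };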
If AIPerService::add_queued_to fails because a new request is throttled,
then neither add the request to the end of the queue nor remove it from
the queue: do nothing. It makes no sense to move the request to the back,
because all queued requests belong to the same service and therefore all
of them will be either throttled or not.
Note: still need to fix that in this case we should look in the queues of
other services.
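A sketch of the intended behaviour, with an assumed signature (the real
AIPerService::add_queued_to takes different arguments): when the throttle
check fails, the function simply returns without touching the queue.

    #include <deque>

    struct QueuedRequest { };

    bool approve_one_more();                  // assumed throttle check; false when throttled
    void hand_to_curl(QueuedRequest const&);  // assumed hand-off to the curl thread

    bool add_queued_to_sketch(std::deque<QueuedRequest>& queue)
    {
      if (queue.empty())
        return false;                  // nothing queued for this capability type
      if (!approve_one_more())
        return false;                  // throttled: leave the queue exactly as it is;
                                       // every queued entry belongs to the same service,
                                       // so moving one to the back would change nothing.
      QueuedRequest request = queue.front();
      queue.pop_front();               // only dequeue once the request is approved
      hand_to_curl(request);
      return true;
    }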
The inventory bulk fetch is not thread-safe, so it doesn't start
right away, causing the approval not to be honored upon return from
post_approved (formerly post_nb).
This patch renames wantsMoreHTTPRequestsFor to approveHTTPRequestFor
and has it return either NULL or an AIPerService::Approvement object.
The latter is now passed to the CurlEasyHandle object instead of just a
boolean mQueueIfTooMuchBandwidthUsage, and the Approvement is then
honored by the state machine right after the request is actually added
to the command queue.
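Roughly, that hand-off could be pictured as below. The member names and
the use of a standard smart pointer are assumptions made for this sketch
(the real code uses its own reference counting), but the idea is that an
approval that is never used is given back, and a used one is only marked
as honored after the request really entered the command queue.

    #include <memory>

    // Simplified stand-in for AIPerService::Approvement.
    class ApprovementSketch {
     public:
      ~ApprovementSketch() {
        if (!mHonored)
          release_slot();              // an approval that was never used is given back
      }
      void honored() { mHonored = true; }

     private:
      void release_slot() { /* return the reserved pipeline slot; omitted here */ }
      bool mHonored = false;
    };

    // Stand-in for the CurlEasyHandle side: it now carries the approval object
    // instead of only a boolean mQueueIfTooMuchBandwidthUsage.
    struct CurlEasyHandleSketch {
      std::shared_ptr<ApprovementSketch> mApprovement;

      // Called by the state machine right after the request was added to the
      // command queue; only now is the approval honored.
      void added_to_command_queue() {
        if (mApprovement) {
          mApprovement->honored();
          mApprovement.reset();
        }
      }
    };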
This should avoid a flood of inventory requests in case
approveHTTPRequestFor is called multiple times before the main thread
adds the requests to the command queue. I don't think that actually ever
happens, but I added debug code (to find some problem) that checks
everything so strictly that I need to be this precise in order to do
that testing.
* Removed the 'RequestQueue' from other PerServiceRequestQueue occurrences
in the code.
* Made wantsMoreHTTPRequestsFor and checkBandwidthUsage thread-safe (by
grouping the static variables of AIPerService into ThreadSafe groups).
Move the destructor (and copy constructor, while I was at it) to the .cpp
file in order to avoid instantiating the destructor of
boost::intrusive_ptr<ThreadSafeBufferedCurlEasyRequest> from a header,
which would require the class ThreadSafeBufferedCurlEasyRequest
to be defined in that header, which is unnecessary. In other words,
this avoids the need to include "aicurl.h" in headers using
AIPerService[Ptr].
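The pattern looks roughly like this (file and class names simplified for
the sketch): because the destructor and copy constructor are only
declared in the header, boost::intrusive_ptr<ThreadSafeBufferedCurlEasyRequest>
is never destroyed or copied there, and a forward declaration of the
pointee suffices.

    // --- header (sketch) ---
    #include <boost/intrusive_ptr.hpp>

    class ThreadSafeBufferedCurlEasyRequest;      // forward declaration is enough here

    class PerServiceHolderSketch {
     public:
      PerServiceHolderSketch();
      PerServiceHolderSketch(PerServiceHolderSketch const& other);
      ~PerServiceHolderSketch();                  // defined out of line, in the .cpp
     private:
      boost::intrusive_ptr<ThreadSafeBufferedCurlEasyRequest> mRequest;
    };

    // --- .cpp (sketch) ---
    // #include "aicurl.h"                        // only here is the full type needed
    // PerServiceHolderSketch::PerServiceHolderSketch() { }
    // PerServiceHolderSketch::PerServiceHolderSketch(PerServiceHolderSketch const& other)
    //   : mRequest(other.mRequest) { }
    // PerServiceHolderSketch::~PerServiceHolderSketch() { }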
Also fixed indentation of a comment.