This is necessary for OpenSim mega regions, which can have up to 16
long poll connections for a single service; upping the maximum number
of connections per service just for that is clearly nonsense.
I also changed long poll connections to no longer "use" the "Other"
capability type (visible in the HTTP debug console, where the debug
info for the Other capability type now turns grey when a service only
has event poll connections). As a result, additional connections
become available for textures, mesh and inventory (the other
capability types) on OpenSim, which has all types on the same
service, because Other no longer constantly reserves a full share of
the available connections. This makes the actual number of
connections used for textures and mesh much closer to what it is on
Second Life.
Apparently LLViewerMediaImpl::navigateInternal could be called with
an empty url. That led to a curl request with an empty url, which
failed because an empty url results in an empty 'service' string,
which is not allowed.
Patched the code to ignore empty media urls and hardened AICurl
to deal with any remaining cases.
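A minimal sketch of such a guard, assuming the url is available as a
member (the member name and the log tag below are illustrative, not
taken from the actual viewer code):

    void LLViewerMediaImpl::navigateInternal()
    {
        // Ignore empty media urls: starting a curl request with an
        // empty url would produce an empty 'service' string, which
        // AICurl rejects. (mMediaURL is an assumed member name.)
        if (mMediaURL.empty())
        {
            LL_WARNS("Media") << "navigateInternal called with an empty url; ignoring." << LL_ENDL;
            return;
        }
        // ... proceed with the actual navigation ...
    }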
I need to parse the debug output of libcurl for this, and choose
not to do that for a Release build - at which point it makes little
sense to even do it for a Debug build, since the Alpha releases
are 'Release' builds too. Even though it would probably have no
impact on FPS, there would be maybe two people looking at that
number and understanding it.
This makes more sense when comparing the threshold against 2 times
the maximum allowed total connections.
As a result, textures won't stall when there are around 128 mesh
requests queued.
Keep track, for each service, of which capability types have been
used and which are in use at the moment.
Update the http debug console to only show services that have at
least one capability type marked as used (this resets upon a restart
of the debug console), and show previously used but currently unused
capability types in grey.
Update CapabilityType::mConcurrentConnections based on usage of the
capability type (CT): each currently in-use CT gets an (approximately)
equal portion of the available number of connections, while currently
unused CTs get 1 connection for future use, so that requests can and
will be added to them if they occur. If a CT is currently not in use
but was used before, then its connection (but at most one connection)
is kept in reserve. For example, if there are 8 connections available
and a service served textures and mesh in the past, but currently
there are no texture downloads, then mesh gets at most 7 connections,
so that at all times there is a connection available for textures.
When one texture is added, both get 4 connections.
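A sketch of this division (the function and variable names are made
up for illustration; the real code lives in AIPerService):

    #include <algorithm>

    // Distribute 'total' available connections over the capability
    // types. used[ct]: CT was used before; in_use[ct]: CT currently
    // has requests. All names in this sketch are assumptions.
    void redivide_connections(int total, bool const* used,
                              bool const* in_use,
                              int* concurrent_connections, int count)
    {
        int active = 0;     // Number of CTs currently in use.
        int reserved = 0;   // One connection kept in reserve per
                            // previously used, currently unused CT.
        for (int ct = 0; ct < count; ++ct)
        {
            if (in_use[ct])
                ++active;
            else if (used[ct])
                ++reserved;
        }
        for (int ct = 0; ct < count; ++ct)
        {
            // Unused CTs get 1 connection so that a first request can
            // still be added; in-use CTs share the rest approximately
            // equally, never dropping below one connection.
            concurrent_connections[ct] =
                in_use[ct] ? std::max(1, (total - reserved) / active) : 1;
        }
    }

With 8 connections, textures used but idle and mesh in use, this
gives mesh max(1, (8 - 1) / 1) = 7; as soon as one texture request
arrives, both get 8 / 2 = 4, matching the example above.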
Prepares for throttling number of connections per capability type.
The value is currently left at AIPerService::mConcurrentConnections,
so it has no effect yet.
This splits the AIPerService queue up into four queues: one for each
"capability type". On Second Life that doesn't make a difference in
itself for textures, because the texture service only serves one
capability type: textures. Other services however can serve two
types, while on Avination - which currently only has one service for
everything - this really makes a difference, because that single
service now has four queues.
More important, however, is that the administration of how many
requests are in the "pipeline" (from approving that a new HTTP
request may be added for a given service until curl finishes it) is
now per capability type (or per service/capability type pair,
actually). This means downloads of a certain capability type
(textures, inventory, mesh, other) will no longer stall because
unapproved requests cluttered the queue for a given service.
Moreover, before this change, when a request finished it would only
look for a new request in the queue of the service that just
finished. This simple algorithm worked when there were no
'PerService' objects and only one 'Curl' queue: if anything was
queued then that was because there were running requests, and when
one of those running requests finished it made sense to see if one of
the queued requests could be added now. However, after adding
multiple queues, one for each service, it could happen that service A
had queued requests while only requests from service B were actually
running: only requests of B would ever finish, and the requests of A
would be queued forever.
With this patch the algorithm is to alternately look first in the
texture request queue and then in the inventory request queue - or
vice versa - and if neither has a request, to look for a request of a
different type. If that cannot be found either, look for a request in
another service. This is still not optimal and subject to change.
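A sketch of that selection order (AICapabilityType and the helpers
below are simplified stand-ins; try_other_services in particular is a
made-up placeholder):

    enum AICapabilityType { cap_texture, cap_inventory, cap_mesh, cap_other };

    bool add_queued_to(AICapabilityType ct);  // Try one CT queue of this service.
    bool try_other_services();                // Fall back to other services.

    // Called when a request finished: try to start a queued one.
    bool pick_next_request()
    {
        static bool alternate = false;
        alternate = !alternate;  // Textures and inventory take turns going first.
        AICapabilityType const first  = alternate ? cap_texture : cap_inventory;
        AICapabilityType const second = alternate ? cap_inventory : cap_texture;
        // 1. Alternate between the texture and inventory queues.
        if (add_queued_to(first) || add_queued_to(second))
            return true;
        // 2. Then any other capability type of the same service.
        if (add_queued_to(cap_mesh) || add_queued_to(cap_other))
            return true;
        // 3. Finally, look for a queued request in another service.
        return try_other_services();
    }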
If AIPerService::add_queued_to fails because a new request is
throttled, then do not add the request to the end of the queue, nor
remove it from the queue: do nothing. It makes no sense to move the
request to the back, because they all belong to the same service and
all of them will be either throttled or not.
Note: Still need to fix that in this case we should look in the
queues of other services.
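What add_queued_to now does for a throttled capability type, sketched
(the queue layout and helper names are assumptions):

    #include <deque>

    struct QueuedRequest { /* ... */ };
    void start_request(QueuedRequest const& request);  // Assumed helper.

    struct PerCapabilityType {
        std::deque<QueuedRequest> mQueuedRequests;
        bool mThrottled;  // Set when this CT reached its connection limit.
    };

    // Try to hand the front request of this CT's queue to curl.
    // Returns false when nothing was added.
    bool add_queued_to(PerCapabilityType& ct)
    {
        if (ct.mQueuedRequests.empty())
            return false;
        if (ct.mThrottled)
        {
            // Throttled: leave the request where it is. Moving it to
            // the back would be pointless; every request in this queue
            // belongs to the same service, so all are throttled alike.
            return false;
        }
        start_request(ct.mQueuedRequests.front());
        ct.mQueuedRequests.pop_front();
        return true;
    }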
The inventory bulk fetch is not thread-safe, so it doesn't start
right away, causing the approvement not to be honored upon return
from post_approved (formerly post_nb).
This patch renames wantsMoreHTTPRequestsFor to approveHTTPRequestFor,
and has it return NULL or an AIPerService::Approvement object.
The latter is now passed to the CurlEasyHandle object instead of just
a boolean mQueueIfTooMuchBandwidthUsage, and the Approvement is then
honored by the state machine right after the request is actually
added to the command queue.
This should avoid a flood of inventory requests in case
approveHTTPRequestFor is called multiple times before the main thread
adds the requests to the command queue. I don't think that ever
actually happens, but I added debug code (to find some problem) that
checks everything so damn strictly that I need to be this precise in
order to do that testing.
* Removed the 'RequestQueue' from other PerServiceRequestQueue
  occurrences in the code.
* Made wantsMoreHTTPRequestsFor and checkBandwidthUsage threadsafe
  (by grouping the static variables of AIPerService into ThreadSafe
  groups).
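The grouping, illustrated with a plain std::mutex (the project uses
its own thread-safety wrappers; the state members below are made up):

    #include <mutex>

    // Statics that used to be separate globals, grouped so that they
    // are always read and updated together under one lock.
    struct ThrottleState {
        int mUsedBandwidth;
        int mQueuedRequests;
    };

    static std::mutex sThrottleMutex;
    static ThrottleState sThrottleState;

    bool checkBandwidthUsage(int max_bandwidth)
    {
        std::lock_guard<std::mutex> lock(sThrottleMutex);
        // Both members are guaranteed consistent while we decide.
        return sThrottleState.mUsedBandwidth < max_bandwidth;
    }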
Move the destructor (and the copy constructor, while I was at it) to
the .cpp file in order to avoid instantiating the destructor of
boost::intrusive_ptr<ThreadSafeBufferedCurlEasyRequest> from a
header, which would require the class ThreadSafeBufferedCurlEasyRequest
to be defined in that header, which is unnecessary. In other words,
this avoids the need to include "aicurl.h" in headers using
AIPerService[Ptr].
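The pattern, sketched (file and member names are assumptions; only
the header/source split matters):

    // aiperservice.h -- a forward declaration suffices here.
    #include <deque>
    #include <boost/intrusive_ptr.hpp>

    class ThreadSafeBufferedCurlEasyRequest;

    class AIPerService {
        std::deque<boost::intrusive_ptr<ThreadSafeBufferedCurlEasyRequest> > mQueuedRequests;
      public:
        AIPerService(AIPerService const& orig);  // Declared here, but
        ~AIPerService();                         // defined in the .cpp:
    };

    // aiperservice.cpp -- here the complete type is known.
    #include "aicurl.h"

    AIPerService::AIPerService(AIPerService const& orig)
      : mQueuedRequests(orig.mQueuedRequests) { }
    AIPerService::~AIPerService() { }  // intrusive_ptr's destructor is
                                       // instantiated here, not in the header.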
Also fixed indentation of a comment.