I need to parse the debug output of libcurl for this, and choose not to
do that for a Release build - at which point it makes little sense to
even do it for a Debug build, since the Alpha releases are 'Release'
builds too. Even though it would probably have no impact on FPS, there
would be maybe two people looking at that number and understanding it.
The automatic adjustment sets the connect timeout to 3 seconds, which might
be a bit on the short side sometimes.
Also remove the HACK and use CurlTimeoutConnect again.
The default is still 10 seconds, but is overridden to be 30 for mesh
now (which needs a reconnect for every mesh, since LL does not support
connection reuse for mesh). Note that while I had the connect timeout
at 30, every successful connect I saw was under 8 seconds: over 8
seconds is a sure fail (except for mesh, where in rare cases there
were connect times of 15 seconds, never 14 or 16).
On quiet servers, the connect time is normally sub-second. This is
the bulk of connections for my own SL use.
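A minimal sketch of how such a per-request connect timeout can be applied with the standard libcurl option CURLOPT_CONNECTTIMEOUT; the 10 and 30 second values are the ones mentioned above, while the helper function and its signature are purely illustrative.

    #include <curl/curl.h>

    // Sketch only: apply the connect timeout to an easy handle.
    // 10 seconds is the default mentioned above; mesh requests get 30
    // because every mesh download needs a fresh connect.
    static void set_connect_timeout(CURL* handle, bool is_mesh_request)
    {
        long const default_timeout = 10;  // CurlTimeoutConnect default (seconds)
        long const mesh_timeout = 30;     // override for mesh (no connection reuse)
        curl_easy_setopt(handle, CURLOPT_CONNECTTIMEOUT,
                         is_mesh_request ? mesh_timeout : default_timeout);
    }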
XMLRPCResponder is used for login; the login server is an HTTP 1.0
server that doesn't support connection reuse.
It's cleaner to close connections from our side too.
This changes
0x7fffdcf93700 CURLIO 0x7fffcd86db40 * Connection #0 to host login.agni.lindenlab.com left intact
into
0x7fffdcf93700 CURLIO 0x7fffcd8b0bd0 * Closing connection #0
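One way to get libcurl to close the connection from our side is the standard CURLOPT_FORBID_REUSE option; this is only a sketch of the idea, not necessarily how the XMLRPCResponder change was implemented.

    #include <curl/curl.h>

    // Sketch: tell libcurl to close the connection when the login transfer
    // is done, instead of leaving it "intact" in the connection cache
    // (the login server is HTTP 1.0 and won't reuse it anyway).
    static void close_after_use(CURL* handle)
    {
        curl_easy_setopt(handle, CURLOPT_FORBID_REUSE, 1L);
    }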
This makes more sense when comparing the threshold against 2 times the
maximum allowed total connections.
As a result, textures won't stall when there are around 128 mesh
requests queued.
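Roughly, the comparison described above; only the "2 times the maximum allowed total connections" relation is from the text, the function and parameter names are made up for illustration.

    // Illustrative only: hold back further queued requests for a service once
    // the number already in the pipeline reaches twice the maximum allowed
    // total connections.
    bool below_threshold(int requests_in_pipeline, int max_total_connections)
    {
        return requests_in_pipeline < 2 * max_total_connections;
    }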
Keep track of which capability types have been used, and which are in
use at the moment, for each service.
Update the http debug console to only show services
that have at least one capability type marked as used (this resets upon
restart of the debug console) and show previously used but currently
unused capability types in grey.
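A minimal sketch of the kind of bookkeeping this describes; the bitmask approach and all names here are hypothetical, the real AIPerService code may track this differently.

    #include <stdint.h>

    // Hypothetical per-service bookkeeping: one bit per capability type.
    enum CapabilityType { CT_TEXTURE, CT_INVENTORY, CT_MESH, CT_OTHER, CT_COUNT };

    struct ServiceUsage
    {
        uint8_t used_ever;   // CTs used at least once (reset when the debug console restarts)
        uint8_t in_use;      // CTs with requests in flight or queued right now

        ServiceUsage() : used_ever(0), in_use(0) { }

        void mark_used(CapabilityType ct) { used_ever |= 1 << ct; in_use |= 1 << ct; }
        void mark_idle(CapabilityType ct) { in_use &= ~(1 << ct); }

        // Shown in grey in the debug console: used before, but not right now.
        bool previously_used(CapabilityType ct) const
        {
            return (used_ever & (1 << ct)) && !(in_use & (1 << ct));
        }
    };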
Update CapabilityType::mConcurrentConnections based on usage of the
capability type (CT): each currently in-use CT gets an (approximately)
equal portion of the available number of connections, and currently
unused CTs get 1 connection for future use, so that requests can and
will be added to them if they occur. If a CT is currently not in use
but was used before, then its connection (but at most one connection)
is kept in reserve. For example, if there are 8 connections available
and a service served textures and mesh in the past, but currently
there are no texture downloads, then mesh gets at most 7 connections,
so that at all times there is a connection available for textures.
When one texture is added, both get 4 connections.
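The distribution could look roughly like this; a sketch with made-up names, the real update of CapabilityType::mConcurrentConnections in AIPerService is not necessarily identical.

    #include <algorithm>

    // Sketch: divide a service's available connections over its capability
    // types.  in_use[ct] means the CT has requests right now; used_before[ct]
    // means it was used at some point in the past.
    enum { CT_TEXTURE, CT_INVENTORY, CT_MESH, CT_OTHER, CT_COUNT };

    void distribute_connections(int available,
                                bool const in_use[CT_COUNT],
                                bool const used_before[CT_COUNT],
                                int max_connections[CT_COUNT])
    {
        int in_use_count = 0, reserved = 0;
        for (int ct = 0; ct < CT_COUNT; ++ct)
        {
            if (in_use[ct]) ++in_use_count;
            else if (used_before[ct]) ++reserved;   // keep (at most) one connection in reserve
        }
        for (int ct = 0; ct < CT_COUNT; ++ct)
        {
            if (in_use[ct])
                // In-use CTs share what is not reserved (approximately) equally.
                max_connections[ct] = std::max(1, (available - reserved) / std::max(1, in_use_count));
            else
                // Unused CTs get 1 connection so future requests can be added.
                max_connections[ct] = 1;
        }
    }
    // With 8 connections, mesh in use and textures only used before:
    // mesh gets (8 - 1) / 1 = 7.  As soon as one texture request is added,
    // both get 8 / 2 = 4.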
Prepares for throttling the number of connections per capability type.
The value is currently left at AIPerService::mConcurrentConnections,
so it has no effect yet.
This fires so rarely that it is no longer worth looking into
(the only problem resulting from it is a wrong timeout on one
of the http requests); it therefore no longer makes sense to
have it triggered in alpha builds at all, since those do not
produce enough debug output to understand what caused it in the
first place.
Condenses the three moderation responders to one.
Adds Speakers settings.
Uses a dedicated timer class to track speakers on the list.
Adds a temporary getSpeakerManager() function to LLFloaterIMPanel to play the role of LLIMModel::getSpeakerManager.
Migrate a bunch of classes out of llvoiceclient.* and into new llvoicevivox.*
Update most of these to match their counterparts
Introduce VoiceFonts support (voice morphing, floater still to come)
Support for a bunch of v3 voice settings.
Move volume settings management from LLMutelist into LLSpeakerVolumeStorage
Support for Avaline mutes (WIP)
Adds voice section to LLAgent
Moved llfloatervoicedevicesettings to llpanelvoicedevicesettings; v3's voice device panel design is more intuitive.
This splits the AIPerService queue up into four queues: one for each
"capability type". On Second Life that doesn't make a difference in
itself for textures, because the texture service only serves one
capability type: textures. Other services, however, can serve two types,
while on Avination - which currently has only one service for everything
- this really makes a difference, because that single service now has
four queues.
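In rough terms, the per-service data goes from one queue to one queue per capability type; a simplified sketch with hypothetical names, the real AIPerService class has much more state than this.

    #include <deque>

    struct QueuedRequest;   // a not-yet-started HTTP request
    enum CapabilityType { CT_TEXTURE, CT_INVENTORY, CT_MESH, CT_OTHER, CT_COUNT };

    struct PerService
    {
        // Before: a single queue of pending requests per service.
        // After: one queue per (service, capability type) pair.
        std::deque<QueuedRequest*> queued[CT_COUNT];
    };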
More important, however, is that the administration of how many requests
are in the "pipeline" (from approving that a new HTTP request may be
added for a given service, until curl finished it) is now per capability
type (or per service/capability-type pair, actually). This means downloads
of a certain capability type (textures, inventory, mesh, other) will no
longer stall because unapproved requests cluttered the queue of a given
service.
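A sketch of the accounting change with made-up names: the count of approved-but-not-yet-finished requests is kept per capability type of a service rather than per service.

    // Sketch: per-capability-type "pipeline" counter, so a pile-up of one
    // type no longer blocks approval of the other types.
    struct CapabilityTypeState
    {
        int in_pipeline;        // approved/added requests curl hasn't finished yet
        int max_in_pipeline;    // illustrative per-CT limit

        CapabilityTypeState() : in_pipeline(0), max_in_pipeline(32) { }

        bool may_approve() const { return in_pipeline < max_in_pipeline; }
        void added()    { ++in_pipeline; }
        void finished() { --in_pipeline; }
    };

    struct PerService
    {
        CapabilityTypeState capability_type[4];   // textures, inventory, mesh, other
    };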
Moreover, before, when a request finished, it would only look for a
new request in the queue of the service that just finished. This simple
algorithm worked when there were no 'PerService' objects, and only one
'Curl' queue: if anything was queued, that was because there
were running requests, and when one of those running requests finished
it made sense to see if one of the queued requests could be added now.
However, after adding multiple queues, one for each service, it could
happen that service A had queued requests while only requests of
service B were actually running: only requests of B would ever finish
and the requests of A would stay queued forever.
With this patch the algorithm is to look alternately first in the
texture request queue and then in the inventory request queue - or vice
versa - and if neither has anything queued, look for a request of a
different type. If that also cannot be found, look for a request in
another service. This is still not optimal and subject to change.
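A sketch of that selection order; all names are hypothetical and the alternation is passed in by the caller here, whereas the real AIPerService code is more involved.

    #include <cstddef>
    #include <deque>

    enum CapabilityType { CT_TEXTURE, CT_INVENTORY, CT_MESH, CT_OTHER, CT_COUNT };
    struct QueuedRequest;

    struct PerService
    {
        std::deque<QueuedRequest*> queued[CT_COUNT];
    };

    // When a request finished, pick the next queued request of this service:
    // start alternately at the texture queue or at the inventory queue, then
    // fall back to the remaining types.  Return NULL when this service has
    // nothing queued, so the caller can try another service.
    QueuedRequest* pick_next(PerService& service, bool texture_first)
    {
        CapabilityType const order_a[CT_COUNT] = { CT_TEXTURE, CT_INVENTORY, CT_MESH, CT_OTHER };
        CapabilityType const order_b[CT_COUNT] = { CT_INVENTORY, CT_TEXTURE, CT_MESH, CT_OTHER };
        CapabilityType const* order = texture_first ? order_a : order_b;

        for (int i = 0; i < CT_COUNT; ++i)
        {
            std::deque<QueuedRequest*>& queue = service.queued[order[i]];
            if (!queue.empty())
            {
                QueuedRequest* request = queue.front();
                queue.pop_front();
                return request;
            }
        }
        return NULL;   // nothing queued here; look in the queues of other services
    }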
If AIPerService::add_queued_to fails because a new request is throttled,
then do not add the request to the end of the queue, nor remove it from
the queue: do nothing. It makes no sense to move the request to the back,
because all requests in the queue belong to the same service and all of
them will be either throttled or not.
Note: still need to fix that in this case we should look in the queues of
other services.
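In pseudo-C++ terms, the throttled case becomes a no-op; a sketch with hypothetical helper functions.

    #include <deque>

    struct QueuedRequest;                    // hypothetical pending request
    bool new_requests_throttled();           // hypothetical throttle check
    void start_request(QueuedRequest*);      // hypothetical: hand it to curl

    // Sketch: when the front request cannot be started because new requests
    // are throttled, leave the queue exactly as it is.  Rotating it would not
    // help: everything in it belongs to the same service and would be
    // throttled just the same.
    bool add_queued_request(std::deque<QueuedRequest*>& queue)
    {
        if (queue.empty() || new_requests_throttled())
            return false;                    // do nothing: no pop, no move-to-back
        start_request(queue.front());
        queue.pop_front();
        return true;
    }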
The inventory bulk fetch is not thread-safe, so it doesn't start
right away, causing the approvement not to be honored upon return from
post_approved (formerly post_nb).
This patch renames wantsMoreHTTPRequestsFor to approveHTTPRequestFor,
and has it return NULL or an AIPerService::Approvement object.
The latter is now passed to the CurlEasyHandle object instead of just a
boolean mQueueIfTooMuchBandwidthUsage, and the Approvement is then
honored by the state machine right after the request is actually added
to the command queue.
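A minimal sketch of the approve-then-honor idea; only the names approveHTTPRequestFor and Approvement come from the text, the member names and the rest are guessed for illustration.

    // Sketch: instead of answering "yes/no" with a bool, the approval is an
    // object that is only honored once the request is really added.
    class Approvement
    {
    public:
        Approvement() : mHonored(false) { }
        void honor()         { mHonored = true; }   // request actually added to the command queue
        bool honored() const { return mHonored; }
    private:
        bool mHonored;
    };

    // Returns NULL when the request is not approved (queue it instead),
    // otherwise an object that the state machine honors right after the
    // request is actually added to the command queue.
    Approvement* approveHTTPRequestFor(/* service, capability type */);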
Honoring the approval only at that point should avoid a flood of
inventory requests in case approveHTTPRequestFor is called multiple
times before the main thread adds the requests to the command queue.
I don't think that actually ever happens, but I added debug code (to
find some problem) that checks everything so strictly that I need to be
this precise in order to do that testing.
* Removed the 'RequestQueue' from other PerServiceRequestQueue occurrences
in the code.
* Made wantsMoreHTTPRequestsFor and checkBandwidthUsage thread-safe (by
grouping the static variables of AIPerService into ThreadSafe groups;
a sketch of the idea follows below).
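The grouping could look like this in spirit; a generic mutex-guarded struct with made-up members, whereas the viewer has its own ThreadSafe wrapper templates that are not reproduced here.

    #include <mutex>

    // Sketch: instead of separate static variables that several threads poke
    // at, group them in one struct and only access it while holding its mutex.
    struct ThrottleState
    {
        int requests_in_pipeline;
        int used_bandwidth;
        ThrottleState() : requests_in_pipeline(0), used_bandwidth(0) { }
    };

    static std::mutex s_throttle_mutex;
    static ThrottleState s_throttle_state;

    bool wants_more_http_requests()   // illustrative stand-in, not the real function
    {
        std::lock_guard<std::mutex> lock(s_throttle_mutex);
        return s_throttle_state.requests_in_pipeline < 64;   // made-up threshold
    }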
Most notably getMesh (the only one possibly using any significant
bandwidth), but in general every type of request that just has to
happen anyway, and in the order requested: they are still just passed
to the curl thread, but now the curl thread will queue them and hold
back if the (general) service they use is loaded too heavily.
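Conceptually, with hypothetical names, the curl thread side looks like this sketch: requests stay in order, but are only started while the service is not too heavily loaded.

    #include <deque>

    struct Request;                        // hypothetical pending request
    bool service_too_loaded();             // hypothetical per-service load check
    void start_request(Request*);          // hypothetical: start the curl transfer

    // Sketch: the caller still hands requests over in order; the curl thread
    // queues them and only starts them while the service is not too loaded.
    void drain_in_order(std::deque<Request*>& queue)
    {
        while (!queue.empty() && !service_too_loaded())
        {
            start_request(queue.front());
            queue.pop_front();
        }
    }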