Commit Graph

103 Commits

Author SHA1 Message Date
Aleric Inglewood
8c6d51cb71 No longer support DEBUG_CURLIO when libcwd isn't installed. 2014-08-25 17:58:45 +02:00
Aleric Inglewood
f8d9bcf487 Don't reset easy handles after using them.
We create a new easy handle every time.
2013-11-24 19:15:48 +01:00
Aleric Inglewood
ec436b0e3b Improve upload_finished detection for HTTP requests.
We detect the end of an upload when libcurl informs us
that it is no longer interested in knowing whether a socket
is writable (which it tells us, because we are the ones calling
select(), by means of a call to MultiHandle::socket_callback).

However, this does not *always* work.
The debug output might look like this:

[...initialization and first part of a TLS handshake...]
0x7f759f0ad700   CURL        :   select(55, {17, 39, 46, 47, 48, 49, 50, 51, 52, 54}, NULL, NULL, timeout = 1 ms) = 7
0x7f759f0ad700   CURLTR      :   curl_multi_socket_action((CURLM*)0x7f758c002920, 46, CURL_CSELECT_IN, <unfinished>	<--- Calling curl_multi_socket_action (socket 46 is readable)
0x7f759f0ad700   CURLIO      0x13fce7b0       * SSLv3, TLS change cipher, Client hello (1):
0x7f759f0ad700   CURLIO      0x13fce7b0       S> 1 bytes
0x7f759f0ad700   CURLIO      0x13fce7b0       * SSLv3, TLS handshake, Finished (20):
0x7f759f0ad700   CURLIO      0x13fce7b0       S> 16 bytes
0x7f759f0ad700   CURLIO      0x13fce7b0       * SSL connection using AES256-SHA
0x7f759f0ad700   CURLIO      0x13fce7b0       * Server certificate:
0x7f759f0ad700   CURLIO      0x13fce7b0       *          subject: C=US; ST=California; L=San Francisco; O=Linden Lab, Inc.; CN=*.agni.lindenlab.com; emailAddress=root@lindenlab.com
0x7f759f0ad700   CURLIO      0x13fce7b0       *          start date: 2012-09-20 22:55:47 GMT
0x7f759f0ad700   CURLIO      0x13fce7b0       *          expire date: 2015-09-20 22:55:47 GMT
0x7f759f0ad700   CURLIO      0x13fce7b0       *          issuer: C=US; ST=California; L=San Francisco; O=Linden Lab, Inc.; OU=Linden Lab Certificate Authority; CN=Linden Lab Certificate Authority; emailAddress=ca@lindenlab.com
0x7f759f0ad700   CURLIO      0x13fce7b0       *          SSL certificate verify ok.
0x7f759f0ad700   CURLIO      0x13fce7b0       * STATE: PROTOCONNECT => DO handle 0x7f758d62d810; (connection #2)
0x7f759f0ad700   CURLIO      0x13fce7b0       H< POST /cap/d67d4540-4504-b3d1-1d26-112010e88bd8 HTTP/1.1\r\nHost: sim10480.agni.lindenlab.com:12043\r\nAccept: */*\r\nAccept-Encoding: deflate, gzip\r\nContent-Type: application/llsd+xml\r\nX-SecondLife-UDP-Listen-Port: 32837\r\nConnection: keep-alive\r\nKeep-alive: 300\r\nContent-Length: 333\r\n\r\n
0x7f759f0ad700   CURLIO      0x13fce7b0       * STATE: DO => DO_DONE handle 0x7f758d62d810; (connection #2)
0x7f759f0ad700   CURL        :       Entering BufferedCurlEasyRequest::curlProgressCallback(0x13fce7b0, 0, 0, 333, 0)	<=== THIS IS NEW: total to upload: 333 bytes, total uploaded: 0 bytes.
0x7f759f0ad700   CURLIO      0x13fce7b0       * STATE: DO_DONE => WAITPERFORM handle 0x7f758d62d810; (connection #2)
0x7f759f0ad700   CURL        :       Entering BufferedCurlEasyRequest::curlProgressCallback(0x13fce7b0, 0, 0, 333, 0)
0x7f759f0ad700   CURLIO      0x13fce7b0       * STATE: WAITPERFORM => PERFORM handle 0x7f758d62d810; (connection #2)
0x7f759f0ad700   CURL        :       Entering BufferedCurlEasyRequest::curlProgressCallback(0x13fce7b0, 0, 0, 333, 0)
0x7f759f0ad700   CURL        :       Entering BufferedCurlEasyRequest::curlProgressCallback(0x13fce7b0, 0, 0, 333, 0)
0x7f759f0ad700   CURL        :       Entering BufferedCurlEasyRequest::curlProgressCallback(0x13fce7b0, 0, 0, 333, 0)
0x7f759f0ad700   CURLTR      :       curl_easy_getinfo((CURL*)0x13fcf320, CURLINFO_PRIVATE, 0x7f759f0abdb0) = CURLE_OK
0x7f759f0ad700   CURL        :       Entering MultiHandle::socket_callback((CURL*)0x13fcf320, 46, CURL_POLL_INOUT, 0x7f758c001508, 0x7f758c2ce4e0) [CURLINFO_PRIVATE = 0x13fce7b0]
0x7f759f0ad700   CURL        :         CurlSocketInfo::set_action(CURL_POLL_IN --> CURL_POLL_INOUT) [0x13fce7b0]	<=== WAITING FOR WRITING BUT ALREADY WAITING FOR INPUT!
0x7f759f0ad700   CURL        0x13fce7b0          reset_lowspeed: mLowSpeedClock = 138530973390; mStalled = -1
0x7f759f0ad700   CURLTR      :   <continued> {11}) = 0									<--- Returning from curl_multi_socket_action
0x7f759f0ad700   CURL        :   select(58, {17, 39, 46, 47, 48, 49, 50, 51, 52, 54}, {36, 46, 57}, NULL, timeout = 756 ms) = 5
0x7f759f0ad700   CURLTR      :   curl_multi_socket_action((CURLM*)0x7f758c002920, 46, CURL_CSELECT_OUT, <unfinished>	<=== SOCKET AVAILABLE FOR WRITING
0x7f759f0ad700   CURLIO      0x13fce7b0       * additional stuff not fine transfer.c:1037: 0 0
0x7f759f0ad700   CURLIO      0x13fce7b0       D< 333 bytes: "<llsd><map><key>folders</key><array><map><key>fetch_folders</key>...etc"	<=== WRITING *ALL* DATA
0x7f759f0ad700   CURL        :       Entering BufferedCurlEasyRequest::curlProgressCallback(0x13fce7b0, 0, 0, 333, 333)	<=== THIS IS NEW, DETECT END OF UPLOAD
0x7f759f0ad700   CURL        0x13fce7b0          upload_finished: mStalled set to Time_10ms (138530973390) + 6000 (60 seconds)
0x7f759f0ad700   CURL        :       Entering BufferedCurlEasyRequest::curlProgressCallback(0x13fce7b0, 0, 0, 333, 333)
0x7f759f0ad700   CURL        :   select(58, {17, 39, 46, 47, 50, 52, 54}, {36, 46, 57}, NULL, timeout = 0 ms) = 8
0x7f759f0ad700   CURLTR      :   curl_multi_socket_action((CURLM*)0x7f758c002920, 46, CURL_CSELECT_IN|CURL_CSELECT_OUT, <unfinished>
0x7f759f0ad700   CURLIO      0x13fce7b0       * HTTP 1.1 or later with persistent connection, pipelining supported
0x7f759f0ad700   CURLIO      0x13fce7b0       H> HTTP/1.1 200 OK\r\n

...and the server already replies without a call to MultiHandle::socket_callback between
writing and reading, because the socket was already being watched for input anyway
(in this case because it was waiting for the SSL/TLS handshake before, but
the same happens when the data has to be written in more than one write).

Using the progress callback, however, we are able to detect the end of the
upload right after writing.
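For illustration, a minimal standalone sketch of this progress-callback detection,
using libcurl's newer CURLOPT_XFERINFOFUNCTION interface (assumed names; not the
viewer's actual BufferedCurlEasyRequest code):

    #include <curl/curl.h>
    #include <atomic>

    // Sketch: detect "upload finished" from the progress callback instead of
    // waiting for libcurl to drop interest in socket writability.
    struct UploadState {
      std::atomic<bool> upload_finished{false};
    };

    static int progress_cb(void* clientp, curl_off_t /*dltotal*/, curl_off_t /*dlnow*/,
                           curl_off_t ultotal, curl_off_t ulnow)
    {
      UploadState* state = static_cast<UploadState*>(clientp);
      // The end of the upload is visible as soon as ulnow reaches ultotal,
      // even when no socket_callback happens between writing and reading.
      if (ultotal > 0 && ulnow == ultotal && !state->upload_finished.exchange(true)) {
        // e.g. start the "waiting for reply" timeout here (mStalled in the real code).
      }
      return 0;  // non-zero would abort the transfer
    }

    void attach_progress_detection(CURL* easy, UploadState* state)
    {
      curl_easy_setopt(easy, CURLOPT_XFERINFOFUNCTION, progress_cb);
      curl_easy_setopt(easy, CURLOPT_XFERINFODATA, state);
      curl_easy_setopt(easy, CURLOPT_NOPROGRESS, 0L);  // callbacks are off by default
    }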
2013-11-24 19:13:17 +01:00
Aleric Inglewood
f7614dced3 Fix possible problem with negative mQueuedCommands
mQueuedCommands was unsigned, but can briefly become negative.
Unfortunately this means I had to remove the assert that
keeps track of possible errors, but I believe it has already
been proven to work anyway.

The reason it can become negative is that its meaning is
"Number of added ADD (request) commands minus number of REMOVE (request)
commands". Hence, it is incremented when an ADD command is added to the
queue and decremented when that ADD command is removed from the queue.
However, it is first decremented when a REMOVE command is added to the
queue and then incremented again when that REMOVE command is removed
again from the queue. Such a remove command is currently extremely rare:
it only happens when curl fails to time out - and the backup timeout of
the statemachine fires after 10 minutes of a curl request being idle.
That should normally only happen if a request never reached curl in the
first place and is queued in the curl thread, waiting to be added to curl,
aka: -0-0-1,{...} for 10 minutes...

Now that, in turn, should be impossible (since the "don't count long poll
connections" change) unless the Curl* Debug Setting variables are changed
to wrong values... but I can only guess here.
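A toy sketch of the counting rule described above (assumed names; the real
counter lives in the viewer's AICurl bookkeeping and is not necessarily atomic):

    #include <atomic>
    #include <cstdint>

    // mQueuedCommands = (#ADD commands queued) - (#REMOVE commands queued),
    // so it must be signed: a REMOVE can be queued while the count is zero.
    class QueuedCommandCounter {
    public:
      void add_command_queued()       { ++mQueuedCommands; }  // ADD pushed onto the queue
      void add_command_processed()    { --mQueuedCommands; }  // ADD popped from the queue
      void remove_command_queued()    { --mQueuedCommands; }  // REMOVE pushed onto the queue
      void remove_command_processed() { ++mQueuedCommands; }  // REMOVE popped from the queue

      std::int32_t value() const { return mQueuedCommands.load(); }

    private:
      std::atomic<std::int32_t> mQueuedCommands{0};           // signed on purpose
    };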
2013-10-06 15:27:40 +02:00
Aleric Inglewood
723f4963e0 Do not count long poll connections against CurlConcurrentConnectionsPerService
This is necessary for opensim mega regions that can have up to 16 long
poll connections for a single service, while upping the maximum number
of connections per service just for that is clearly nonsense.

I also changed long poll connections so that they no longer "use" the
"Other" capability type (visible in the HTTP debug console: the debug
info for the Other capability type turns grey when it only has event
poll connections). As a result, additional connections become available
for textures, mesh and inventory (the other capability types) on
opensim, which has all types on the same service, because Other
no longer constantly reserves a full share of the available
connections. This makes the actual number of connections used for
textures and mesh a lot more like it is on Second Life.
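A sketch of the counting rule this commit introduces, under assumed names
(the real AIPerService bookkeeping differs):

    // Event/long poll connections are excluded when comparing against
    // CurlConcurrentConnectionsPerService.
    struct ConnectionCounts {
      int total = 0;      // all added connections for this service
      int long_poll = 0;  // event poll connections among them
    };

    bool may_add_connection(ConnectionCounts const& counts, int max_per_service)
    {
      return (counts.total - counts.long_poll) < max_per_service;
    }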
2013-09-30 17:39:17 +02:00
Aleric Inglewood
1c0f87d82f Let curl follow redirects by default.
Turns out that the only responders that want to get the redirect
status codes themselves are the ones that already had a
redirect_status_ok() exception.
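A minimal sketch of the default (the function name and redirect cap below are
illustrative, not the viewer's actual setup code):

    #include <curl/curl.h>

    void configure_redirects(CURL* easy, bool responder_wants_redirect_status)
    {
      if (!responder_wants_redirect_status) {
        // Default: let libcurl follow 3xx responses transparently.
        curl_easy_setopt(easy, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(easy, CURLOPT_MAXREDIRS, 5L);   // illustrative cap
      }
      // Responders with a redirect_status_ok() exception receive the 3xx
      // status themselves, so CURLOPT_FOLLOWLOCATION stays at its default (off).
    }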
2013-09-28 20:41:10 +02:00
Aleric Inglewood
4b0b83797c CURLIO (and libcwd): add number of established connections to http console.
I need to parse the debug output of libcurl for this, and chose
not to do that for a Release build - at which point it makes little
sense to even do it for a Debug build, since the Alpha releases
are 'Release' too. Even though it would probably have no impact on FPS,
there would only be about two people looking at that number and
understanding it.
2013-07-04 00:02:28 +02:00
Aleric Inglewood
59716ba86b Added progress meter in HTTP debug console. 2013-07-02 02:12:34 +02:00
Aleric Inglewood
982921e495 Add AICurlTimer - timer support for the curl thread.
Not used yet. Deliberately not threadsafe.
Usage is the same as AIFrameTimer, upon which it was based.
2013-07-01 15:41:44 +02:00
Aleric Inglewood
f74ad5eafa Fix global connection threshold not to count non-approved queued requests as being in the pipeline.
This makes more sense when comparing the threshold against 2 times the
maximum allowed total connections.

As a result, textures won't stall when there are around 128 mesh
requests queued.
2013-06-28 00:09:10 +02:00
Aleric Inglewood
73ea6eb80e Rewrite of AIPerService::add_queued_to
This should fix any 'stalls' with zero added requests for mesh.
2013-06-28 00:09:09 +02:00
Aleric Inglewood
32681403c6 Fixed a typo: mConcurrectConnections -> mConcurrentConnections
That's what you get for only copy&pasting variable names.
2013-06-26 13:23:10 +02:00
Aleric Inglewood
1d04b4af24 Actually use CapabilityType::mConcurrentConnections for Approvements. 2013-06-24 18:57:42 +02:00
Aleric Inglewood
719f57e00d Fix documentation at top of AIPerService::approveHTTPRequestFor 2013-06-24 18:37:57 +02:00
Aleric Inglewood
af9018f35f Add AIPerService::CapabilityType::mConcurrectConnections
Prepares for throttling number of connections per capability type.
The value is currently left at AIPerService::mConcurrectConnections,
so it has no effect yet.
2013-06-23 18:09:09 +02:00
Aleric Inglewood
87176418a2 Change CurlConcurrentConnectionsPerService to U16 and clamp it between 1 and 32.
Also fixes a bug where going from 1 to 2 would not update
AIPerService::mConcurrectConnections correctly.
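The clamp itself is simple; a sketch in standard C++ (the viewer uses its own
llclamp helper):

    #include <algorithm>
    #include <cstdint>

    std::uint16_t clamp_concurrent_connections(std::uint16_t requested)
    {
      // CurlConcurrentConnectionsPerService is kept in [1, 32].
      return std::clamp<std::uint16_t>(requested, 1, 32);
    }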
2013-06-23 18:05:23 +02:00
Aleric Inglewood
ee59617c5e Always add new http requests to the back of the queue. 2013-06-22 22:20:37 +02:00
Aleric Inglewood
c5489932cf Fix ASSERT(!mUploadFinished || mBeingRedirected)
Hopefully this is the last case.
2013-06-04 00:31:44 +02:00
Aleric Inglewood
98badb94da Add AIHTTPView, a HTTP Debug Console - press Ctrl-Shift-7 2013-06-01 16:14:32 +02:00
Siana Gearz
bc145c95fb Silence VC10 warnings 2013-05-26 05:43:28 +02:00
Aleric Inglewood
cf4c4a72c2 Added AICapabilityType and related.
This splits the AIPerService queue up into four queues: one for each
"capability type". On Second Life that doesn't make a difference in
itself for textures because the texture service only serves one
capability type: textures. Other services, however, can serve two types,
while on Avination - which currently has only one service for everything
- this really makes a difference, because that single service now has
four queues.

More important, however, is that the administration of how many requests
are in the "pipeline" (from approving that a new HTTP request may be
added for a given service until curl finishes it) is now per capability
type (or actually per service/capability type pair). This means downloads
of a certain capability type (textures, inventory, mesh, other) will no
longer stall because unapproved requests cluttered the queue for a given
service.

Moreover, previously when a request finished, we would only look for a
new request in the queue of the service that just finished. This simple
algorithm worked when there were no 'PerService' objects and only one
'Curl' queue: if anything was queued, that was because there
were running requests, and when one of those running requests finished
it made sense to see if one of the queued requests could be added now.
However, after adding multiple queues, one for each service, it could
happen that service A had queued requests while only requests from
service B were actually running: only requests of B would ever finish
and the requests of A would be queued forever.

With this patch the algorithm is to look alternately first in the
texture request queue and then in the inventory request queue - or vice
versa - and if there are none of those, look for a request of a different
type. If that also cannot be found, look for a request in another
service. This is still not optimal and subject to change.
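A sketch of this alternating dequeue policy, under assumed names (the real
AIPerService code differs in detail):

    #include <array>
    #include <deque>
    #include <initializer_list>

    enum CapabilityType { CT_TEXTURE, CT_INVENTORY, CT_MESH, CT_OTHER, CT_COUNT };

    struct Request { int id = 0; /* ... */ };

    class PerServiceQueues {
    public:
      // Pop the next request: alternate between texture and inventory first,
      // then fall back to the remaining capability types.
      bool pop_next(Request& out)
      {
        CapabilityType first  = mTextureFirst ? CT_TEXTURE   : CT_INVENTORY;
        CapabilityType second = mTextureFirst ? CT_INVENTORY : CT_TEXTURE;
        mTextureFirst = !mTextureFirst;            // alternate on every call

        for (CapabilityType ct : { first, second, CT_MESH, CT_OTHER }) {
          std::deque<Request>& q = mQueues[ct];
          if (!q.empty()) { out = q.front(); q.pop_front(); return true; }
        }
        return false;  // caller may then look at the queues of another service
      }

      void push(CapabilityType ct, Request const& r) { mQueues[ct].push_back(r); }

    private:
      std::array<std::deque<Request>, CT_COUNT> mQueues;
      bool mTextureFirst = true;
    };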
2013-05-21 23:34:23 +02:00
Aleric Inglewood
67e88561dc Queue/throttle fix.
If AIPerService::add_queued_to fails because a new request is throttled,
then do not add the request to the end of the queue, nor remove it from
the queue: do nothing. It makes no sense to move the request to the back,
because all requests in the queue belong to the same service and all of
them will either be throttled or not.

Note: still need to fix that in this case we should look in the queues of
other services.
2013-05-12 18:10:49 +02:00
Aleric Inglewood
929badb110 Let statemachine honor approvements.
The inventory bulk fetch is not thread-safe, so it doesn't start
right away, causing the approvement not to be honored upon return from
post_approved (formerly post_nb).

This patch renames wantsMoreHTTPRequestsFor to approveHTTPRequestFor,
and has it return NULL or an AIPerService::Approvement object.
The latter is now passed to the CurlEasyHandle object instead of just a
boolean mQueueIfTooMuchBandwidthUsage, and then the Approvement is
honored by the state machine right after the request is actually added
to the command queue.

This should avoid a flood of inventory requests in the case that
approveHTTPRequestFor is called multiple times before the main thread
adds the requests to the command queue. I don't think that ever actually
happens, but I added debug code (to find some problem) that checks
everything so damn strictly that I need to be this precise in order to
do that testing.
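A sketch of the approve-then-honor flow described above, with assumed names and
simplified semantics (the real AIPerService::Approvement bookkeeping differs):

    #include <memory>

    class Approvement {
    public:
      // Called by the state machine right after the request is really added
      // to the command queue; only then does it count against the service.
      void honored() { mHonored = true; }
      bool was_honored() const { return mHonored; }
    private:
      bool mHonored = false;
    };

    using Approvement_ptr = std::shared_ptr<Approvement>;

    struct EasyHandleSketch {
      Approvement_ptr mApprovement;   // replaces the old boolean flag

      void added_to_command_queue()
      {
        if (mApprovement)
          mApprovement->honored();    // honor the approval at the actual add
      }
    };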
2013-05-12 04:19:44 +02:00
Aleric Inglewood
d2af4a3c23 WIP: comment out llasserts.
This way my master isn't totally broken, at least ;).
2013-05-07 23:02:56 +02:00
Aleric Inglewood
5b1984ed0c WIP: Added AIPerService::Approvement 2013-05-07 19:36:10 +02:00
Aleric Inglewood
e3fec7c715 AIPerService improvements.
* Removed the 'RequestQueue' from other PerServiceRequestQueue occurrences
  in the code.
* Made wantsMoreHTTPRequestsFor and checkBandwidthUsage threadsafe (by
  grouping the static variables of AIPerService into ThreadSafe
  groups).
2013-05-07 16:10:09 +02:00
Aleric Inglewood
84e7f15dc5 Fix initialization list order (compiler warnings) 2013-05-06 03:26:22 +02:00
Aleric Inglewood
1d629438c0 Add HTTP bandwidth throttling for every other responder.
Most notably getMesh (the only one possibly using any significant
bandwidth), but in general every type of request that just has to
happen anyway and in the order requested: such requests are just passed
to the curl thread, but now the curl thread will queue them and hold
them back if the (general) service they use is loaded too heavily.
2013-05-06 02:54:10 +02:00
Aleric Inglewood
75a45f501e Moved a part of AIPerService::wantsMoreHTTPRequestsFor to a new function.
The new function is AIPerService::checkBandwidthUsage.
No functional changes were made.
2013-05-06 02:54:10 +02:00
Aleric Inglewood
fd3e8e4a23 Preparation for AIPerService::checkBandwidthUsage.
Don't pass arguments to wantsMoreHTTPRequestsFor, but use globals in
llmessage: AIPerService::sHTTPThrottleBandwidth125 and
AIPerService::sNoHTTPBandwidthThrottling instead.

This is needed later on.
2013-05-06 02:54:03 +02:00
Aleric Inglewood
32a5494c79 Limit sMaxPipelinedRequests changes to once per 40ms, plus code clean up.
Mostly renaming stuff and moving static variables to class AIPerService.
2013-05-04 20:02:21 +02:00
Aleric Inglewood
873b61c39d Fix comment. 2013-04-30 22:16:29 +02:00
Aleric Inglewood
ebfb76c284 Renamed AIPerServiceRequestQueue[Ptr] to AIPerService[Ptr]
This is because AIPerService now contains a lot more than just the
request queue.
2013-04-26 19:20:10 +02:00
Aleric Inglewood
6c1335af50 Moved connect limits per service to AIPerServiceRequestQueue.
Added AIPerServiceRequestQueue::mConcurrectConnections and
AIPerServiceRequestQueue::mMaxPipelinedRequests.
2013-04-26 19:13:18 +02:00
Aleric Inglewood
304e2b4958 Pass LLControlGroup* to AICurlInterface::startCurlThread instead of each Debug Setting separately. 2013-04-26 16:47:22 +02:00
Aleric Inglewood
45e6b7975f First version of HTTP bandwidth throttling.
Adds throttling based on average bandwidth usage per HTTP service.
Since only HTTP textures are using this, they are still starved by other
services like inventory and mesh downloads. Also, it will be necessary to
move the maximum number of connections per service to the PerService
class and dynamically tune them: reducing the number of connections is
the first thing to do when using too much bandwidth.

I also added a graph for HTTP texture bandwidth to the stats floater.
For some reason the average bandwidth (over 1 second) looks almost like
scattered noise... weird for something that is averaged...
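A sketch of the kind of per-service bandwidth check implied here, with assumed
names and a plain exponential average (the real AIPerService bookkeeping differs):

    #include <cstddef>

    class ServiceBandwidth {
    public:
      void add_bytes(std::size_t n) { mBytesThisSecond += n; }

      // Called once per second: fold the last second into the running average.
      void tick()
      {
        double const alpha = 0.25;   // smoothing factor (assumed)
        mAverageBytesPerSec = alpha * mBytesThisSecond
                            + (1.0 - alpha) * mAverageBytesPerSec;
        mBytesThisSecond = 0;
      }

      // When true, newly requested transfers for this service are queued
      // instead of being added to curl.
      bool over_limit(double max_bytes_per_sec) const
      {
        return mAverageBytesPerSec > max_bytes_per_sec;
      }

    private:
      std::size_t mBytesThisSecond = 0;
      double      mAverageBytesPerSec = 0.0;
    };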
2013-04-24 22:04:21 +02:00
Latif Khalifa
f1ce3c89c3 Silence warnings 2013-04-23 12:57:02 +02:00
Aleric Inglewood
734d2e658d Rename HTTPTimeout::sClockCount (in clock ticks) to sTime_10ms (in 10ms units).
Note that HTTPTimeout::sClockWidth is no longer used for HTTPTimeout (as
its value effectively became a constant of 0.01, the fraction 10ms / 1s).
A new variable, HTTPTimeout::sClockWidth_10ms, is used to calculate
sTime_10ms (only).

Also, HTTPTimeout::mStalled and HTTPTimeout::mLowSpeedClock changed units
to the same as sTime_10ms (time since epoch in 10ms units).
2013-04-09 22:27:23 +02:00
Aleric Inglewood
fce106f7e2 PerHost became PerService
Reflects the fact that we include the port number in its name.
2013-04-09 05:06:32 +02:00
Aleric Inglewood
29a1d0e4e1 Renamed aicurlperhost.{cpp,h} --> aicurlperservice.{cpp,h}
Next commit will rename everything else.
2013-04-09 04:40:47 +02:00
Aleric Inglewood
3af89dd685 Temporarily add old bandwidth throttle method back. 2013-04-09 04:25:28 +02:00
Aleric Inglewood
748d339ee6 Move decision whether or not to add new HTTP request from texture fetcher to AICurl
After this commit things compile again :).
The HTTP bandwidth throttling is not yet implemented. I'll put a
temporary fix back in the next commit that just does it the "old way"...
2013-04-08 22:46:01 +02:00
Aleric Inglewood
79bcb1ec07 Print info in texture console about curl request queuing
This also includes non-texture requests (not sure if it's worth it to
explicitly separate out this data for just textures).

The texture console now prints: HTTP:c/q/a/r
where c = number of 'add' request commands in the command queue
(minus the number of 'remove' request commands in the command queue, not
taking into account the command_being_processed, and not entirely
thread-safe with regard to a request moving on to 'q' or 'a' next: it is
possible that a request is no longer counted in 'c' but is not yet added
to 'q' or 'a'),
q = number of requests queued (throttled) by the curl thread,
a = number of requests actually added (to the multi handle),
r = last value of 'running handles' returned by libcurl.

Obviously, c and q should be small (0 or 1) most of the time, and a and r
should be equal and maxed out. This turns out to be the case.
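For what it's worth, the console line itself is just formatting (the counter
arguments below are placeholders for the real AICurl statistics):

    #include <cstdio>
    #include <string>

    std::string http_console_line(int c, int q, int a, int r)
    {
      // c = net 'add' commands still in the command queue,
      // q = queued (throttled) requests, a = added to the multi handle,
      // r = last 'running handles' value reported by libcurl.
      char buf[64];
      std::snprintf(buf, sizeof(buf), "HTTP:%d/%d/%d/%d", c, q, a, r);
      return buf;
    }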
2013-04-08 06:41:07 +02:00
Aleric Inglewood
c20e33c7f4 Added AIPerHostRequestQueue::queued_commands()
Returns the number of add commands minus the number of remove commands
in the command_queue for this host.
2013-04-08 06:41:07 +02:00
Aleric Inglewood
7866c68ab2 Moved PerHostRequestQueue[Ptr] outside of AICurlPrivate.
Renamed PerHostRequestQueue and PerHostRequestQueuePtr to
AIPerHostRequestQueue and AIPerHostRequestQueuePtr respectively and
moved them to the global namespace. This is in preparation for using them
in the texture fetcher (as a function of the hostname of the url that is
constructed there).
2013-04-08 06:41:07 +02:00
Aleric Inglewood
db7c378160 Added MultiHandle::sTotalAdded
This new variable is updated to contain the total number of requests in
MultiHandle::mAddedEasyRequests. It can be a static because MultiHandle
is a singleton (there is only one curl thread) and it has to be an
(atomic) static because we don't want to have to take the lock on
MultiHandle to access this variable.
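A sketch of that idea under simplified names (the real MultiHandle member
functions differ):

    #include <atomic>

    class MultiHandle {
    public:
      void easy_request_added()   { sTotalAdded.fetch_add(1, std::memory_order_relaxed); }
      void easy_request_removed() { sTotalAdded.fetch_sub(1, std::memory_order_relaxed); }

      // Safe to call from any thread (e.g. the texture console in the main
      // thread) without taking the MultiHandle lock.
      static int total_added() { return sTotalAdded.load(std::memory_order_relaxed); }

    private:
      static std::atomic<int> sTotalAdded;   // static: MultiHandle is a singleton
    };

    std::atomic<int> MultiHandle::sTotalAdded{0};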
2013-04-08 06:40:48 +02:00
Aleric Inglewood
f58bc148ba Add a 'size' to command_queue.
This adds a size (command_queue_st::size) to the ThreadSafe deque<Command>
that was AICurlPrivate::command_queue; the size is kept equal to the number
of 'add' commands in the deque minus the number of 'remove' commands in the
deque. The deque itself is now command_queue_st::commands.
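A sketch of the resulting structure, with a plain mutex standing in for the
viewer's ThreadSafe wrapper (names are assumptions):

    #include <deque>
    #include <mutex>

    enum CommandType { CMD_ADD, CMD_REMOVE };
    struct Command { CommandType type; /* easy handle, ... */ };

    struct command_queue_st {
      std::mutex          mutex;     // stand-in for the ThreadSafe wrapper
      std::deque<Command> commands;  // the queue itself
      int                 size = 0;  // #'add' commands minus #'remove' commands

      void push(Command const& cmd)
      {
        std::lock_guard<std::mutex> lock(mutex);
        commands.push_back(cmd);
        size += (cmd.type == CMD_ADD) ? 1 : -1;
      }
    };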
2013-04-08 06:39:51 +02:00
Aleric Inglewood
07201a5cfe Add AICurlInterface::getNumHTTPRunning
Replaces LLTextureFetch::getNumHTTPRequests.
Returns AICurlInterface::Stats::running_handles.
This is work in progress that temporarily doesn't compile because
LLTextureFetch::getNumHTTPRequests is still being used somewhere.
2013-04-08 06:39:06 +02:00
Aleric Inglewood
b20886a481 Allow TOS redirect. Fix upload finished detection when redirecting.
This fixes https://code.google.com/p/singularity-viewer/issues/detail?id=705

Adds 'bool redirect_status_ok(void) const { return true; }' to LLIamHere,
because it's ok to receive a 302 status there. Likewise added to LLIamHereVoice,
because that has the same comment in its error() method.

Also fixes the problem that if two redirects occur in a row, the
upload_finished detection asserted, because it would treat the third
time that libcurl turned off writing to the socket as a failure (the
second time wasn't a problem because mUploadFinished was reset upon
receiving the first 302 header, but not upon receiving the second
header).
2013-03-25 04:41:07 +01:00
Aleric Inglewood
835240fda1 Fix crash in LLTextureFetch::getWorker upon viewer exit.
This is now necessary since the curl thread no longer syncs with the
main thread: it is possible that a request finishes after a texture
fetch thread was shut down but before curl was stopped, resulting in curl
calling BufferedCurlEasyRequest::processOutput while objects that the
responder uses were already destructed (most notably
LLTextureFetch itself).
2013-03-21 20:26:01 +01:00