Commit Graph

392 Commits

Siana Gearz
bc145c95fb Silence VC10 warnings 2013-05-26 05:43:28 +02:00
Aleric Inglewood
ed8fa9229f Merge remote-tracking branch 'singu/master' 2013-05-21 23:58:53 +02:00
Aleric Inglewood
cf4c4a72c2 Added AICapabilityType and related.
This splits the AIPerService queue up into four queues: one for each
"capability type". On Second Life that doesn't make a difference in
itself for textures, because the texture service only serves one
capability type: textures. Other services, however, can serve two types,
while on Avination - which currently has only one service for everything
- this really makes a difference, because that single service now has
four queues.

More important, however, is that the administration of how many requests
are in the "pipeline" (from approving that a new HTTP request may be
added for a given service until curl finishes it) is now per capability
type (or rather per service/capability type pair). This means downloads
of a certain capability type (textures, inventory, mesh, other) will no
longer stall because unapproved requests cluttered the queue of a given
service.

Moreover, before this patch, when a request finished, it would only look
for a new request in the queue of the service that just finished. This
simple algorithm worked when there were no 'PerService' objects and only
one 'Curl' queue: if anything was queued, that was because there were
running requests, and when one of those running requests finished it
made sense to see if one of the queued requests could be added now.
However, after adding multiple queues, one for each service, it could
happen that service A had queued requests while only requests from
service B were actually running: only requests of B would ever finish,
and the requests of A would stay queued forever.

With this patch the algorithm alternates: look first in the texture
request queue and then in the inventory request queue - or vice versa -
and if both are empty, look for a request of a different type. If that
also cannot be found, look for a request in another service. This is
still not optimal and subject to change.
2013-05-21 23:34:23 +02:00
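
A minimal sketch of that alternating lookup, in C++ with illustrative
names (the real AIPerService queue layout and interface differ):

    // Sketch only: alternate between the texture and inventory queues,
    // then fall back to the remaining capability types.
    #include <cstddef>
    #include <deque>

    enum AICapabilityType { cap_texture, cap_inventory, cap_mesh, cap_other,
                            number_of_capability_types };
    struct Request { };

    struct PerServiceQueues {
      std::deque<Request*> queues[number_of_capability_types];
      bool mTextureFirst;

      PerServiceQueues() : mTextureFirst(true) { }

      Request* next_request()
      {
        AICapabilityType const first  = mTextureFirst ? cap_texture   : cap_inventory;
        AICapabilityType const second = mTextureFirst ? cap_inventory : cap_texture;
        mTextureFirst = !mTextureFirst;        // Alternate on every call.
        AICapabilityType const order[] = { first, second, cap_mesh, cap_other };
        for (int i = 0; i < number_of_capability_types; ++i)
        {
          std::deque<Request*>& q = queues[order[i]];
          if (!q.empty())
          {
            Request* r = q.front();
            q.pop_front();
            return r;
          }
        }
        return NULL;  // Caller would then try the queues of another service.
      }
    };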
Siana Gearz
63f152e78d Merge branch 'master' of https://github.com/AlericInglewood/SingularityViewer 2013-05-13 02:43:49 +02:00
Drake Arconis
eef53e3eac Merge branch 'master' of git://github.com/singularity-viewer/SingularityViewer 2013-05-12 10:08:49 -07:00
Aleric Inglewood
67e88561dc Queue/throttle fix.
If AIPerService::add_queued_to fails because a new request is throttled,
then do not add the request to the end of the queue, nor remove it from
the queue; do nothing. It makes no sense to move the request to the back,
because all requests in the queue belong to the same service and all of
them will be either throttled or not.

Note: it still needs to be fixed that in this case we should look in the
queues of other services.
2013-05-12 18:10:49 +02:00
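
A sketch of that control flow, with hypothetical names (the real
add_queued_to signature differs):

    // Illustrative only: on throttle, leave the queue exactly as it is.
    #include <deque>

    struct Request { };
    bool service_throttled() { return false; }  // Stand-in for the real check.

    bool add_queued_to(std::deque<Request*>& queue)
    {
      if (queue.empty())
        return false;
      if (service_throttled())
        return false;        // Do nothing: rotating the front request to the
                             // back is pointless, since every request here
                             // belongs to the same (throttled) service.
      Request* request = queue.front();
      queue.pop_front();
      // ... hand 'request' to the curl thread ...
      (void)request;
      return true;
    }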
Aleric Inglewood
f8aac1f3dd Let CurlEasyHandle::approved() stay consistent throughout its lifetime. 2013-05-12 17:56:29 +02:00
Aleric Inglewood
929badb110 Let state machine honor approvements.
The inventory bulk fetch is not thread-safe, so it doesn't start
right away, causing the approvement not to be honored upon return from
post_approved (formerly post_nb).

This patch renames wantsMoreHTTPRequestsFor to approveHTTPRequestFor,
and has it return NULL or an AIPerService::Approvement object.
The latter is now passed to the CurlEasyHandle object instead of just the
boolean mQueueIfTooMuchBandwidthUsage, and then the Approvement is
honored by the state machine right after the request is actually added
to the command queue.

This should avoid a flood of inventory requests in the case that
approveHTTPRequestFor is called multiple times before the main thread
adds the requests to the command queue. I don't think that actually ever
happens, but I added debug code (to find some problem) that checks
everything so damn strictly that I need to be this precise in order to
do that testing.
2013-05-12 04:19:44 +02:00
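
A minimal sketch of such an approval token, with made-up members (the
real AIPerService::Approvement is reference-counted and tied to the curl
administration):

    // Illustrative only; not the real Approvement class.
    #include <cassert>
    #include <cstddef>

    class Approvement {
    public:
      Approvement() : mHonored(false) { }
      ~Approvement() { assert(mHonored); }  // Must be honored exactly once.
      void honored() { mHonored = true; }   // Call once the request really
                                            // entered the command queue.
    private:
      bool mHonored;
    };

    // Stand-in for approveHTTPRequestFor: NULL means "throttled, queue it";
    // otherwise the caller owns an approval it must honor later.
    Approvement* approveHTTPRequestFor(bool too_much_bandwidth)
    {
      return too_much_bandwidth ? NULL : new Approvement;
    }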
Aleric Inglewood
d2af4a3c23 WIP: comment out llasserts.
This way my master isn't totally broken, at least ;).
2013-05-07 23:02:56 +02:00
Aleric Inglewood
5b1984ed0c WIP: Added AIPerService::Approvement 2013-05-07 19:36:10 +02:00
Aleric Inglewood
e3fec7c715 AIPerService improvements.
* Removed the 'RequestQueue' from other PerServiceRequestQueue occurrences
  in the code.
* Made wantsMoreHTTPRequestsFor and checkBandwidthUsage thread-safe (by
  grouping the static variables of AIPerService into ThreadSafe groups).
2013-05-07 16:10:09 +02:00
Aleric Inglewood
410f9d92a5 Minor improvement of coding style. 2013-05-07 02:01:31 +02:00
Aleric Inglewood
84e7f15dc5 Fix initialization list order (compiler warnings) 2013-05-06 03:26:22 +02:00
Drake Arconis
73b10e6b29 Made compiler detection more reliable and cleanup clang warnings 2013-05-05 17:56:16 -07:00
Aleric Inglewood
1d629438c0 Add HTTP bandwidth throttling for every other responder.
Most notably getMesh (the only one possibly using any significant
bandwidth), but in general every type of request that just has to
happen anyway, in the order requested: such requests are simply passed
to the curl thread, but now the curl thread will queue them and hold
them back if the (general) service they use is loaded too heavily.
2013-05-06 02:54:10 +02:00
Aleric Inglewood
75a45f501e Moved a part of AIPerService::wantsMoreHTTPRequestsFor to a new function.
The new function is AIPerService::checkBandwidthUsage.
No functional changes were made.
2013-05-06 02:54:10 +02:00
Aleric Inglewood
fd3e8e4a23 Preparation for AIPerService::checkBandwidthUsage.
Don't pass arguments to wantsMoreHTTPRequestsFor, but use globals in
llmessage: AIPerService::sHTTPThrottleBandwidth125 and
AIPerService::sNoHTTPBandwidthThrottling instead.

This is needed later on.
2013-05-06 02:54:03 +02:00
Aleric Inglewood
32a5494c79 Limit sMaxPipelinedRequests changes to once per 40ms, plus code cleanup.
Mostly renaming stuff and moving static variables to class AIPerService.
2013-05-04 20:02:21 +02:00
Aleric Inglewood
873b61c39d Fix comment. 2013-04-30 22:16:29 +02:00
Aleric Inglewood
74023d5303 Compile fix.
Move the destructor (and copy constructor, while I was at it) to the .cpp
file in order to avoid instantiating the destructor of
boost::intrusive_ptr<ThreadSafeBufferedCurlEasyRequest> from a header,
which would require the class ThreadSafeBufferedCurlEasyRequest
to be defined in that header, which is unnecessary. In other words,
this avoids the need to include "aicurl.h" in headers using
AIPerService[Ptr].

Also fixed indentation of a comment.
2013-04-30 20:23:42 +02:00
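
A minimal sketch of the technique, with illustrative file names (the
real header is aicurlperservice.h):

    // perservice.h -- only a forward declaration is needed here.
    #include <boost/intrusive_ptr.hpp>

    class ThreadSafeBufferedCurlEasyRequest;    // Incomplete type.

    class AIPerService {
    public:
      AIPerService();
      AIPerService(AIPerService const& other);  // Declared here,
      ~AIPerService();                          // defined in the .cpp, so the
                                                // intrusive_ptr destructor is
                                                // not instantiated from the header.
    private:
      boost::intrusive_ptr<ThreadSafeBufferedCurlEasyRequest> mRequest;
    };

    // perservice.cpp -- only this file needs the full type.
    // #include "aicurl.h"  // Defines ThreadSafeBufferedCurlEasyRequest.
    // AIPerService::AIPerService() { }
    // AIPerService::AIPerService(AIPerService const& other) : mRequest(other.mRequest) { }
    // AIPerService::~AIPerService() { }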
Aleric Inglewood
ebfb76c284 Renamed AIPerServiceRequestQueue[Ptr] to AIPerService[Ptr]
This is because AIPerService now contains a lot more than just the
request queue.
2013-04-26 19:20:10 +02:00
Aleric Inglewood
6c1335af50 Moved connect limits per service to AIPerServiceRequestQueue.
Added AIPerServiceRequestQueue::mConcurrentConnections and
AIPerServiceRequestQueue::mMaxPipelinedRequests.
2013-04-26 19:13:18 +02:00
Aleric Inglewood
304e2b4958 Pass LLControlGroup* to AICurlInterface::startCurlThread instead of each Debug Setting separately. 2013-04-26 16:47:22 +02:00
Aleric Inglewood
45e6b7975f First version of HTTP bandwidth throttling.
Adds throttling based on average bandwidth usage per HTTP service.
Since only HTTP textures are using this, they are still starved by other
services like inventory and mesh downloads. Also, the maximum number of
connections per service will need to be moved to the PerService class
and tuned dynamically: reducing the number of connections is the first
thing to do when using too much bandwidth.

I also added a graph for HTTP texture bandwidth to the stats floater.
For some reason the average bandwidth (over 1 second) looks almost like
scattered noise... weird for something that is averaged...
2013-04-24 22:04:21 +02:00
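
A sketch of a per-service average-bandwidth check of this kind, with
made-up names and a made-up window (the real throttling lives in
AIPerService and is driven by Debug Settings):

    // Illustrative only: sliding-window average of bytes received for one
    // service over the last second.
    #include <cstddef>
    #include <deque>
    #include <utility>

    class BandwidthMeter {
    public:
      explicit BandwidthMeter(double max_bytes_per_sec)
        : mMax(max_bytes_per_sec), mWindowBytes(0.0) { }

      void received(double now, std::size_t bytes)
      {
        mSamples.push_back(std::make_pair(now, bytes));
        mWindowBytes += bytes;
        expire(now);
      }

      // False means: hold new requests for this service back.
      bool may_add_request(double now)
      {
        expire(now);
        return mWindowBytes <= mMax;   // Bytes seen in the last 1 s window.
      }

    private:
      void expire(double now)
      {
        while (!mSamples.empty() && mSamples.front().first < now - 1.0)
        {
          mWindowBytes -= mSamples.front().second;
          mSamples.pop_front();
        }
      }

      double mMax;
      double mWindowBytes;
      std::deque<std::pair<double, std::size_t> > mSamples;
    };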
Latif Khalifa
f1ce3c89c3 Silence warnings 2013-04-23 12:57:02 +02:00
Aleric Inglewood
734d2e658d Rename HTTPTimeout::sClockCount (in clock ticks) to sTime_10ms (in 10ms units).
Note that HTTPTimeout::sClockWidth is no longer used for HTTPTimeout (it
is as if its value became the constant 0.01, the fraction 10ms / 1s).
A new variable, HTTPTimeout::sClockWidth_10ms, is used to calculate
sTime_10ms (only).

Also, HTTPTimeout::mStalled and HTTPTimeout::mLowSpeedClock changed units
to those of sTime_10ms (time since epoch in 10ms units).
2013-04-09 22:27:23 +02:00
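
A sketch of the implied unit conversion, assuming a clock rate for
illustration (names mirror the commit; the values are made up):

    // Illustrative arithmetic: clock ticks to 10ms units.
    #include <stdint.h>
    #include <cstdio>

    int main()
    {
      uint64_t const ticks_per_second = 1000000;  // Assumed clock rate (1 MHz).
      // 10ms units per clock tick; plays the role of sClockWidth_10ms.
      double const sClockWidth_10ms = 100.0 / ticks_per_second;

      uint64_t const clock_count = 123456789;     // Ticks since epoch.
      uint64_t const sTime_10ms = (uint64_t)(clock_count * sClockWidth_10ms);

      // 123456789 ticks at 1 MHz = 123.456789 s = 12345 complete 10ms units.
      std::printf("%llu\n", (unsigned long long)sTime_10ms);
      return 0;
    }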
Aleric Inglewood
fce106f7e2 PerHost became PerService
Reflect the fact that we include the port number in its name.
2013-04-09 05:06:32 +02:00
Aleric Inglewood
29a1d0e4e1 Renamed aicurlperhost.{cpp,h} --> aicurlperservice.{cpp,h}
Next commit will rename everything else.
2013-04-09 04:40:47 +02:00
Aleric Inglewood
bb948ce6d5 Use host:port as key for the "PerHost" request queue, instead of just the hostname.
Rationale: LL does all throttling per service (host:port), not per
hostname. Also, textures and capabilities use the same host: the sim you
are connected to. Splitting the queues up on a per-service basis will
stop the textures from blocking a capability request.
2013-04-09 04:32:36 +02:00
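
A sketch of deriving such a host:port key from a URL, with a
hypothetical helper name (the real code has extract_canonical_hostname;
this is not the viewer's actual parser):

    // Illustrative only: return "host:port" from a URL, falling back to
    // the scheme's default port when none is given.
    #include <string>

    std::string extract_service_key(std::string const& url)
    {
      std::string::size_type const scheme_end = url.find("://");
      bool const https = url.compare(0, 5, "https") == 0;
      std::string::size_type const start =
          (scheme_end == std::string::npos) ? 0 : scheme_end + 3;
      std::string::size_type const end = url.find('/', start);
      std::string authority = url.substr(start,
          end == std::string::npos ? std::string::npos : end - start);
      if (authority.find(':') == std::string::npos)
        authority += https ? ":443" : ":80";
      return authority;
    }

    // extract_service_key("http://sim.example.com:12046/cap/x")
    //   --> "sim.example.com:12046"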
Aleric Inglewood
3af89dd685 Temporarily add old bandwidth throttle method back. 2013-04-09 04:25:28 +02:00
Aleric Inglewood
748d339ee6 Move decision whether or not to add new HTTP request from texture fetcher to AICurl
After this commit things compile again :).
The HTTP bandwidth throttling is not yet implemented; the next commit
puts back a temporary fix that just does it the "old way"...
2013-04-08 22:46:01 +02:00
Aleric Inglewood
79bcb1ec07 Print info in texture console about curl request queuing
This also includes non-texture requests (not sure whether it's worth it
to explicitly separate out this data for just textures).

The texture console now prints: HTTP:c/q/a/r
Where c = the number of 'add' request commands in the command queue
(minus the number of 'remove' commands in the command queue; this does
not take command_being_processed into account, nor is it entirely
thread-safe with regard to requests moving on to 'q' or 'a': it is
possible that a request is no longer counted in 'c' but not yet added
to 'q' or 'a').
q = the number of queued (throttled) requests (by the curl thread).
a = the number of requests actually added (to the multi handle).
r = the last value of 'running handles' returned by libcurl.

Obviously, c and q should be small (0 or 1) most of the time and a and r
should be equal and maxed out. This turns out to be the case.
2013-04-08 06:41:07 +02:00
Aleric Inglewood
c20e33c7f4 Added AIPerHostRequestQueue::queued_commands()
Returns the number of add commands minus the number of remove commands
in the command_queue for this host.
2013-04-08 06:41:07 +02:00
Aleric Inglewood
d26241c0f2 Move extract_canonical_hostname to AIPerHostRequestQueue::extract_canonical_hostname.
This is to make it available to the texture fetcher.
2013-04-08 06:41:07 +02:00
Aleric Inglewood
7866c68ab2 Moved PerHostRequestQueue[Ptr] outside of AICurlPrivate.
Renamed PerHostRequestQueue and PerHostRequestQueuePtr to
AIPerHostRequestQueue and AIPerHostRequestQueuePtr respectively and
moved them to the global namespace. This is in preparation for using
them in the texture fetcher (as a function of the hostname of the URL
that is constructed there).
2013-04-08 06:41:07 +02:00
Aleric Inglewood
db7c378160 Added MultiHandle::sTotalAdded
This new variable is updated to contain the total number of requests in
MultiHandle::mAddedEasyRequests. It can be a static because MultiHandle
is a singleton (there is only one curl thread), and it has to be an
(atomic) static because we don't want to have to take the lock on
MultiHandle to access this variable.
2013-04-08 06:40:48 +02:00
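
A sketch of such a counter, using C++11 atomics for illustration (the
viewer's own code predates std::atomic and uses its own wrapper):

    // Illustrative only; not the real MultiHandle.
    #include <atomic>
    #include <cstddef>

    class MultiHandle {
    public:
      void added_request()   { sTotalAdded.fetch_add(1, std::memory_order_relaxed); }
      void removed_request() { sTotalAdded.fetch_sub(1, std::memory_order_relaxed); }

      // Safe to read from any thread without holding the MultiHandle lock.
      static std::size_t total_added()
      { return sTotalAdded.load(std::memory_order_relaxed); }

    private:
      // One instance suffices: MultiHandle is a singleton.
      static std::atomic<std::size_t> sTotalAdded;
    };

    std::atomic<std::size_t> MultiHandle::sTotalAdded(0);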
Aleric Inglewood
17455e2442 Add PerHostRequestQueue::sTotalQueued
The new variable is updated to contain the sum of the size of
PerHostRequestQueue::mQueuedRequests of every PerHostRequestQueue
object, and thus the total number of queued requests for all hosts
together.

Also added PerHostRequestQueue::host_queued_plus_added_size(), which
returns the sum of queued requests plus the requests already added
to the multi handle for this particular hostname.
2013-04-08 06:39:52 +02:00
Aleric Inglewood
f58bc148ba Add a 'size' to command_queue.
This adds a size (command_queue_st::size) to the ThreadSafe
deque<Command> that was AICurlPrivate::command_queue; the size is kept
equal to the number of 'add' commands in the deque minus the number of
'remove' commands. The deque itself is now command_queue_st::commands.
2013-04-08 06:39:51 +02:00
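
A sketch of the size-tracked queue, with the locking left out and names
taken from the message (the real code wraps this in the viewer's
ThreadSafe template and takes the corresponding lock around every
access):

    // Illustrative only.
    #include <deque>

    struct Command { bool is_add; };

    struct command_queue_st {
      std::deque<Command> commands;   // Was AICurlPrivate::command_queue itself.
      int size;                       // #'add' commands minus #'remove' commands.

      command_queue_st() : size(0) { }

      void push(Command const& command)
      {
        commands.push_back(command);
        size += command.is_add ? 1 : -1;   // Keep the invariant in one place.
      }

      Command pop()
      {
        Command command = commands.front();
        commands.pop_front();
        size -= command.is_add ? 1 : -1;
        return command;
      }
    };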
Aleric Inglewood
07201a5cfe Add AICurlInterface::getNumHTTPRunning
Replaces LLTextureFetch::getNumHTTPRequests.
Returns AICurlInterface::Stats::running_handles.
This is work in progress that temporarily doesn't compile because
LLTextureFetch::getNumHTTPRequests is still being used somewhere.
2013-04-08 06:39:06 +02:00
Aleric Inglewood
ac84e02018 Do not use a timer for HTTP get in LLTextureFetchWorker::doWork
All HTTP timing is done by AIHTTPTimeoutPolicy.

Inside LLTextureFetchWorker::doWork, when mState == SEND_HTTP_REQ,
mCanUseHTTP is true, throttling is not in effect and mURL is not empty,
mLoaded is set to FALSE, mState is set to WAIT_HTTP_REQ, and
LLHTTPClient::request is called, which starts the download via curl.
A callback to LLTextureFetchWorker::callbackHttpGet is guaranteed,
which causes mLoaded to be set to TRUE (HTTPGetResponder::completedRaw
calls LLTextureFetchWorker::callbackHttpGet, which sets mRequestedSize
to -1 if there was an error, and mLoaded to TRUE).

Being in state WAIT_HTTP_REQ, once mLoaded == TRUE (and mRequestedSize
is -1), the different timeout errors are handled.
2013-03-27 22:37:21 +01:00
Aleric Inglewood
524fdf033d Merge remote-tracking branch 'singu/master' 2013-03-26 21:10:00 +01:00
Aleric Inglewood
b20886a481 Allow TOS redirect. Fix upload finished detection when redirecting.
This fixes https://code.google.com/p/singularity-viewer/issues/detail?id=705

Adds 'bool redirect_status_ok(void) const { return true; }' to LLIamHere,
because it's ok to receive a 302 status there. Likewise added to LLIamHereVoice,
because that has the same comment in its error() method.

Also fixes the problem that if two redirects occur in a row, the
upload_finished detection asserted, because it would treat the third
time that libcurl turned off writing to the socket as a failure (the
second time wasn't a problem because mUploadFinished was reset upon
receiving the first 302 header, but not upon receiving the second
header).
2013-03-25 04:41:07 +01:00
Latif Khalifa
afc137bb3f Merge remote-tracking branch 'aleric/master' 2013-03-24 14:22:28 +01:00
Aleric Inglewood
712c46a74e Add comment with regard to LLSD in the body of pages with an HTTP error code. 2013-03-21 20:56:21 +01:00
Aleric Inglewood
835240fda1 Fix crash in LLTextureFetch::getWorker upon viewer exit.
This is now necessary since the curl thread no longer syncs with the
main thread: it is possible that a request finishes after a texture
fetch thread was shut down but before curl was stopped, so that curl
calls BufferedCurlEasyRequest::processOutput while objects that the
responder uses were already destructed (most notably
LLTextureFetch itself).
2013-03-21 20:26:01 +01:00
Drake Arconis
0eeddb0607 Merge branch 'master' of git://github.com/AlericInglewood/SingularityViewer into Cupcake
Conflicts:
	indra/newview/llappviewer.cpp
2013-03-20 08:31:04 -04:00
Aleric Inglewood
4be1d057bc Fix windows compile warnings. 2013-03-19 23:59:15 +01:00
Drake Arconis
d345630b2c Merge remote-tracking branch 'Aleric/master' into Cupcake 2013-03-11 19:55:38 -04:00
Aleric Inglewood
7de15e7acd Update AIHTTPTimeoutPolicy objects when their base changes.
Actually propagate changes to CurlTimeout* Debug Settings to the timeout policy objects.
2013-03-10 20:59:48 +01:00
Aleric Inglewood
5a8308109b Lowered CurlTimeoutLowSpeedLimit from 56 kB/s to 7 kB/s.
This value really IS in bytes/s (not for the total), and apparently 56
kB/s is too optimistic. The value was used by LL for transfers that went
beyond the total download time (2 minutes?), adding enough seconds per
received byte to the timeout to allow a download at 56 kB/s to finish.

Our meaning is different, however: we time out immediately whenever
the download drops below this speed.

Perhaps a better algorithm is where the speed demand is based on the
total size of the download, but I'm not sure we always know the size of
downloads at this point.
2013-03-10 17:45:39 +01:00