It turns out that it's possible to receive data before an upload has
finished, when the server sends HTTP_BAD_REQUEST in the middle of the
upload (this happens to me when I try to upload a 6000x6000 image to my
profile feed, which is a 44 MB file). In that case the finished-upload
detection did not actually fail, so we shouldn't assert.
Although it should never happen that a file descriptor is suddenly
closed, it turns out that this does happen on Linux 64-bit when using
FMODex... Not really sure how useful this is, but at least now the
viewer just continues to work, as if -say- the socket was closed
remotely. Before, the curl thread would go into a tight loop that it
wouldn't recover from until the watchdog thread terminated the viewer.
When linking against the prebuilt libcrypto.so.1.0.0 and
libssl.so.1.0.0 there is a problem running the viewer in
gdb (on Debian), because gdb links with libpython2.7.so,
which needs a newer OpenSSL on my system (OPENSSL_1.0.1).
The patch allows working around this problem by defining
-DUSE_SYSTEM_OPENSSL:BOOL=ON when configuring, which then
causes the viewer to be compiled against the system libs.
(If you already installed the prebuilt, you have to remove
it manually with script/install.py --uninstall openSSL).
However, the prebuilt libcurl expects a function
SSLv2_client_method, which is not present in my system's
OpenSSL libs. A stub was added, which is possible because
the function in question isn't used anyway.
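For illustration, a minimal stub could look like the sketch below (the
actual stub in the patch may live elsewhere and be guarded differently);
since libcurl only needs the symbol to resolve at link time and never
calls it, returning NULL is harmless:

    #include <openssl/ssl.h>

    // Stub for the SSLv2 client method that the prebuilt libcurl still
    // references. It is never actually called, because SSLv2 is not used.
    extern "C" const SSL_METHOD* SSLv2_client_method(void)
    {
        return NULL;
    }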
Add CURLTR debug channel for libcurl API calls,
and use CURLIO only for libcurl debug output.
Note: need to set gDebugCurlTerse to true for
filtering to take effect, then pass 'debug_on'
to the LLHttpClient methods that require debugging.
Adds a std::map from hostname (or URL) to PerHostRequestQueue
objects. The latter keeps track of the number of added curl easy
requests, decides whether or not a new request should be throttled,
and provides the queue in which throttled requests wait.
At the moment CurlConcurrentConnectionsPerHost is set to 16,
because without LL supporting connection reuse things really don't
work if we limit it to 2. CurlConcurrentConnectionsPerHost is
also set to non-persistent so that we can easily change it in the
future (once we decide on its final value it can be made persistent).
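For illustration only (apart from PerHostRequestQueue the names below
are made up, and the real implementation differs in detail), the
per-host structure is roughly:

    #include <deque>
    #include <map>
    #include <string>

    struct Request { std::string url; };             // stand-in for the easy request type

    int const concurrent_connections_per_host = 16;  // CurlConcurrentConnectionsPerHost

    class PerHostRequestQueue
    {
    public:
        PerHostRequestQueue() : mAdded(0) { }
        // True if a new request for this host must be queued instead of
        // being added to the multi handle right away.
        bool throttled() const { return mAdded >= concurrent_connections_per_host; }
        void added_to_multi_handle()     { ++mAdded; }
        void removed_from_multi_handle() { --mAdded; }
        void queue(Request const& request) { mQueuedRequests.push_back(request); }
    private:
        int mAdded;                           // easy requests currently added for this host
        std::deque<Request> mQueuedRequests;  // throttled requests, waiting their turn
    };

    // hostname --> per-host bookkeeping and queue
    std::map<std::string, PerHostRequestQueue> per_host_request_queue;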
Moved AICurlPrivate::Stats to AICurlInterface::Stats and added several
counters that keep track of the number of existing instances of
AICurlEasyRequest, AICurlEasyRequestStateMachine,
BufferedCurlEasyRequest, ResponderBase and
ThreadSafeBufferedCurlEasyRequest, respectively.
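The counters follow the usual instance-counting pattern; a minimal
sketch (only the Stats struct and the ResponderBase class name come
from the text above; the counter name and its type are made up):

    #include <atomic>

    namespace AICurlInterface {
        struct Stats {
            static std::atomic<int> ResponderBase_count;  // hypothetical counter name
            // ... one counter per tracked class ...
        };
        std::atomic<int> Stats::ResponderBase_count(0);
    }

    class ResponderBase
    {
    public:
        ResponderBase()          { ++AICurlInterface::Stats::ResponderBase_count; }
        virtual ~ResponderBase() { --AICurlInterface::Stats::ResponderBase_count; }
    };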
Rename check_run_count to check_msg_queue, because the whole 'run count'
approach is flawed anyway: the author of libcurl told me that THE way
to check for finished curl handles is to just call curl_multi_info_read
every time; it's extremely fast. Any test that attempts to avoid that
call is nonsense anyway.
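For reference, the drain loop looks roughly like this
(curl_multi_info_read and friends are the real libcurl API; the
surrounding function is only a sketch of what check_msg_queue does):

    #include <curl/curl.h>

    void check_msg_queue(CURLM* multi_handle)
    {
        CURLMsg* msg;
        int msgs_in_queue;
        // Drain the message queue; this is cheap even when it is empty.
        while ((msg = curl_multi_info_read(multi_handle, &msgs_in_queue)))
        {
            if (msg->msg == CURLMSG_DONE)
            {
                CURL* easy_handle = msg->easy_handle;
                CURLcode result = msg->data.result;
                // Hand 'easy_handle' and 'result' back to the owner of the
                // request, then detach the easy handle from the multi handle.
                curl_multi_remove_handle(multi_handle, easy_handle);
                (void)result;
            }
        }
    }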
The assertion might have failed because we are comparing the current
number of easy handles with the number of running handles of
'a while ago'. It is possible that an easy handle was removed in
the meantime.
In order to check whether that hypothesis is right, I moved the
assertion to directly below the call to curl_multi_socket_action,
where it should hold. If this new assertion doesn't trigger, then
the hypothesis was right and this is fixed.
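Schematically, the new placement is something like this
(curl_multi_socket_action is the real libcurl call; the surrounding
names, and the use of assert instead of the viewer's own assertion
macro, are illustrative):

    #include <cassert>
    #include <curl/curl.h>

    void handle_socket_event(CURLM* multi_handle, curl_socket_t sockfd,
                             int ev_bitmask, int added_easy_handles)
    {
        int running_handles = 0;
        CURLMcode res = curl_multi_socket_action(multi_handle, sockfd,
                                                 ev_bitmask, &running_handles);
        assert(res == CURLM_OK);
        // Checked right after the call, so no easy handle can have been
        // removed between obtaining running_handles and this comparison.
        assert(added_easy_handles >= running_handles);
        (void)res;
    }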
Every curl transaction is an AICurlEasyRequestStateMachine, which has an
AICurlEasyRequest as member, which is a reference-counting pointer to
a ThreadSafeBufferedCurlEasyRequest. And now BufferedCurlEasyRequest is
derived from CurlEasyRequest, which is derived from CurlEasyHandle, but
neither is used separately.
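Schematically (only the class names are real; the bodies and the
member name are illustrative):

    class CurlEasyHandle { /* wraps the CURL* easy handle */ };
    class CurlEasyRequest : public CurlEasyHandle { /* adds request set-up */ };
    class BufferedCurlEasyRequest : public CurlEasyRequest { /* adds I/O buffers */ };

    // Thread-safe wrapper that owns a BufferedCurlEasyRequest.
    class ThreadSafeBufferedCurlEasyRequest { /* ... */ };

    // Reference-counting pointer to a ThreadSafeBufferedCurlEasyRequest.
    class AICurlEasyRequest { /* ... */ };

    // One state machine per curl transaction.
    class AICurlEasyRequestStateMachine
    {
        AICurlEasyRequest mCurlEasyRequest;   // the reference-counting pointer member
    };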
* Moved Responder stuff to LLHTTPClient.
* Renamed LLHTTPClient::Responder to LLHTTPClient::ResponderWithResult.
* Deleted LLHTTPClientAdapter and LLHTTPClientInterface.
* Renamed AICurlInterface::TransferInfo to AITransferInfo and moved it
to llhttpclient.h
* Removed 'CURLcode code' argument from completed_headers.
Bug fixes:
AICurlEasyRequestStateMachine didn't delete itself.
curl_multi_socket_action calls were made for potentially already removed sockets.
The curl thread wasn't terminated.
Move applyProxySettings to CurlEasyRequest and call it from
applyDefaultOptions.
Use AIThreadSafe for LLProxy for more robust thread safety.
(This forces correct locking, checks that the unshared vars
are indeed unshared, and makes it easy to use read/write locking,
which might be important in this case: we do a lot of read-only
accesses to it.)
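The underlying idea, sketched with standard C++ primitives as a
stand-in (this is not the real AIThreadSafe API; it only illustrates
that the wrapped object is reachable solely through an accessor that
holds the appropriate lock for its lifetime):

    #include <shared_mutex>

    template<typename T>
    class ThreadSafeWrapper
    {
    public:
        // Read accessor: holds a shared (read) lock for its lifetime.
        class ReadAccess
        {
        public:
            explicit ReadAccess(ThreadSafeWrapper& wrapper)
                : mLock(wrapper.mMutex), mData(wrapper.mData) { }
            T const* operator->() const { return &mData; }
        private:
            std::shared_lock<std::shared_mutex> mLock;
            T const& mData;
        };
        // Write accessor: holds an exclusive (write) lock for its lifetime.
        class WriteAccess
        {
        public:
            explicit WriteAccess(ThreadSafeWrapper& wrapper)
                : mLock(wrapper.mMutex), mData(wrapper.mData) { }
            T* operator->() const { return &mData; }
        private:
            std::unique_lock<std::shared_mutex> mLock;
            T& mData;
        };
    private:
        std::shared_mutex mMutex;
        T mData;                  // only reachable through the accessors above
    };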
Conflicts:
indra/llmessage/llcurl.cpp
indra/llmessage/llcurl.h
indra/newview/app_settings/settings.xml
indra/newview/llappviewer.cpp
indra/newview/llmeshrepository.cpp
Resolved:
indra/llmessage/llcurl.cpp:
Basically removed (not used anyway)
indra/llmessage/llcurl.h:
Basically removed (just includes aicurl.h now)
indra/newview/app_settings/settings.xml:
CurlUseMultipleThreads was removed.
CurlMaximumNumberOfHandles and CurlRequestTimeOut
are still in there, but unused at the moment.
indra/newview/llappviewer.cpp:
CurlMaximumNumberOfHandles and CurlRequestTimeOut
are unused at the moment.
indra/newview/llmeshrepository.cpp:
Lock mSignal always (is unlocked inside wait()).
Use mSignal lock to see if we are waiting; remove mWaiting.
Return false from the MeshFetch functions iff we have to retry
an HTTP fetch. Catch the error exception thrown by getByteRange
instead of using its return value (it always returns true
anyway).
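A rough sketch of that waiting scheme, with standard C++ primitives
standing in for the viewer's mSignal: the worker holds the lock except
while it is blocked inside wait(), so another thread can use try_lock()
to detect whether the worker is currently waiting, without a separate
mWaiting flag:

    #include <condition_variable>
    #include <mutex>

    std::mutex gSignalMutex;              // stand-in for mSignal's mutex
    std::condition_variable gSignal;      // stand-in for mSignal

    void worker_loop()
    {
        std::unique_lock<std::mutex> lock(gSignalMutex);   // held while working
        for (;;)
        {
            gSignal.wait(lock);   // the lock is released only while waiting
            // ... process queued requests while still holding the lock ...
        }
    }

    bool worker_is_waiting()
    {
        // If we can grab the lock, the worker must be blocked inside wait().
        if (gSignalMutex.try_lock())
        {
            gSignalMutex.unlock();
            return true;
        }
        return false;
    }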