Since that fakes the libcwd macros, this is needed to compile
the recent change that allows turning on debugging output
only for certain statemachine objects when DEBUG_CURLIO is defined
but libcwd is not used (CWDEBUG is undefined).
AICondition is like AIThreadSafeSimpleDC (it is derived from it),
and wraps some variable, protecting it with a built-in mutex.
While in a statemachine, one can call wait(condition) to make
the state machine go idle, and call condition.signal() to
wake it (or another state machine) up again.
Normally a condition variable is used as follows:
  condition.lock();
  while (!whatwerewaitingfor)
  {
    condition.wait();
  }
  // Here the condition is guaranteed to be true and we're
  // still in the critical area of the mutex.
  condition.unlock();
There the thread blocks in wait(). The state machine, however,
returns the CPU to the thread (the AIEngine), so there is no
while loop involved, and our wait() doesn't even unlock
the mutex: that happens because the state machine returns:
  condition_wat condition_w(condition);    // Lock condition.
  if (!condition_w->whatwerewaitingfor())  // The variable(s) involved can only be accessed when condition is locked.
  {
    wait(condition);  // Causes the state machine to go idle. This does not block.
    break;            // Leave scope (unlock condition) and return to the mainloop - but as an idle state machine.
  }
  // Here the condition is guaranteed to be true and we're
  // still in the critical area of the condition variable.
In this case, when condition.signal() is called, the thread
doesn't return from wait() with a locked mutex - instead the
state machine executes the same state again, entering from
the top: locking the condition and doing the test again,
just as it would if this were a while loop.
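For illustration, the signalling side could look like this (a sketch only;
the setter is hypothetical, while condition_wat, whatwerewaitingfor() and
signal() are the names used above):
  {
    condition_wat condition_w(condition);   // Lock condition.
    condition_w->set_whatwerewaitingfor();  // Hypothetical setter: makes whatwerewaitingfor() return true.
  }
  condition.signal();                       // Wake up the idle state machine; it re-enters the
                                            // state from the top and re-evaluates the test.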
This can be used to switch to a specific engine.
If the state machine is not running in the passed engine,
then it performs a yield to that engine, otherwise it keeps
running. Hence, putting this at the top of a state
guarantees that it runs in that engine.
For example:
  case FrontEnd_done:
  {
    // Unlock must be called by the same thread that locked it.
    if (yield_if_not(&gStateMachineThreadEngine))
    {
      break;
    }
    // Here we are running in gStateMachineThreadEngine.
    ...
The patch is exclusively libcwd related.
Turns off output from statemachines to dc::statemachine by default.
Allows the debug output to be enabled on a per-statemachine basis (at
compile time).
https://code.google.com/p/singularity-viewer/issues/detail?id=848
If advance_state is called, then the state is NOT changed (only
sub_state_w->advance_state is set), but sub_state_w->skip_idle is set to
true in order to ignore the next call to idle (if the state machine is
already in the middle of executing a state). Therefore, this assert
would trigger. The solution is to not trigger when skip_idle is set.
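A possible shape of the relaxed assertion (a sketch only; it combines the
condition quoted below with the skip_idle exception, the actual code may
differ):
  // Do not trigger when skip_idle is set: in that case advance_state only
  // set sub_state_w->advance_state and skip_idle, without changing the run state.
  ASSERT(sub_state_r->skip_idle ||
         !(need_new_run && !mYieldEngine && sub_state_r->run_state == run_state));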
Fixes a possible ASSERT(!(need_new_run && !mYieldEngine &&
sub_state_r->run_state == run_state)).
I went over all multiplex_impl functions and this is the only case that I
found where it could return without changing the state and without
calling either idle() or yield().
This fixes
https://code.google.com/p/singularity-viewer/issues/detail?id=736
The problem was that we need to keep the 'user'-derived THREAD_IMPL
alive in the thread, and therefore used an LLPointer<THREAD_IMPL>
(as base class of AIStateMachineThread<THREAD_IMPL>), so that
THREAD_IMPL, derived from AIThreadImpl, had to be derived from
LLThreadSafeRefCount. However, AIStateMachineThread<THREAD_IMPL> also
needed to be a state machine itself and is derived from
AIStateMachineThreadBase, which derives from AIStateMachine, which is ALSO
derived from LLThreadSafeRefCount - a ref count that in this case wasn't
really needed. An attempt to deactivate it by calling ref() from the
constructor of AIStateMachineThreadBase failed because LLThreadSafeRefCount
insists that its ref count mRef is exactly zero when it is being
deleted.
The chosen solution is to remove the ref count from AIThreadImpl and use
the LLThreadSafeRefCount base class of AIStateMachineThreadBase. The
result is that not only THREAD_IMPL, but also the state machine object
is kept alive, but that doesn't seem like a problem.
Thus, instead of passing an AIThreadImpl* to
AIStateMachineThreadBase::Thread, we now pass an
AIStateMachineThreadBase* to it to keep the whole
AIStateMachineThread<THREAD_IMPL> object alive, which now has a THREAD_IMPL
as member. This member can then be accessed through a virtual
function impl(). Another result of this change is that the 'user' (the
class derived from AIThreadImpl, THREAD_IMPL) now has to deal with the
LLPointer, and use LLPointer<AIStateMachineThread<THREAD_IMPL> >
instead of just AIStateMachineThread<THREAD_IMPL>, and also allocate
this object themselves. The access from there then changes into a '->' to
access the state machine (as opposed to a '.') and '->thread_impl()' to
access the THREAD_IMPL object (as opposed to a '->').
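To illustrate the resulting usage (MyThreadImpl and the exact signatures are
assumptions; LLPointer, AIStateMachineThread, AIThreadImpl, run() and
thread_impl() are the names mentioned here):
  class MyThreadImpl : public AIThreadImpl
  {
    // ... user code that is run in the thread ...
  };

  // The 'user' now allocates the object themselves and holds it in an LLPointer.
  LLPointer<AIStateMachineThread<MyThreadImpl> > thread_sm =
      new AIStateMachineThread<MyThreadImpl>;
  thread_sm->run();          // '->' (instead of '.') to access the state machine.
  thread_sm->thread_impl();  // '->thread_impl()' (instead of '->') to access the THREAD_IMPL object.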
This fixes
https://code.google.com/p/singularity-viewer/issues/detail?id=714
The problem was a typo in AIStateMachine::sleep: a >= that should have been
<=. This caused a state machine that uses yield_ms() to never run again
when, at its next run, it should already have run (i.e. the yield had
already expired). AIFilePicker is the only state machine that currently
uses yield_ms, and I hadn't spotted this because I don't have plugin
messages on by default, which made my viewer just that much faster that the
yield never turned out to have already expired at the first run (the case
that should make it expire immediately).
The rest of the changes in this commit are just minor improvements /
conformance to the EXAMPLE_CODE in aistatemachine.cpp.
When a state machine is aborted after it switched to bs_initialize,
but before it executed initialize_impl(), then we should set the
state back to bs_reset and abort cleanly by switching to bs_killed
and then handle that. Before, the state was set to bs_abort, resulting
in calling abort_impl(), finish_impl(), the callback and unref()
for a not yet initialized state machine! (detected by the assert that
makes sure that ref()/unref() are called in balance).
The responder name is now cached in LLURLRequest
(ResponderBase::getName() must return a string literal).
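In practice that requirement looks something like this for a responder (a
sketch; the class name and the char const* return type are assumptions,
ResponderBase::getName() is the name used above):
  class MyResponder : public ResponderBase
  {
  public:
    // Must return a string literal, because the returned name is cached by LLURLRequest.
    /*virtual*/ char const* getName(void) const { return "MyResponder"; }
  };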
The run time (in the main thread) per state machine is now accumulated
in AIStateMachine (instead of AIEngine::QueueElement).
When AIStateMachine::mainloop runs longer than StateMachineMaxTime,
a warning is printed that now includes the time spent in the
slowest state machine (that frame) and (if it is an LLURLRequest)
what the corresponding responder is. Also the total accumulated run
time of that state machine is printed.
From this it can be concluded that the only responder currently
regularly holding up the main thread is LLMeshLODResponder (mostly 30 to
100 ms, but sometimes with spikes in the 1 to 2 second range).
This fixes a bug where unref() was called when a state machine was
aborted before it reached bs_initialized. Debug code was added to detect
errors related to that.
In order to run HTTPGetResponder in any thread, I needed direct access
to LLHTTPClient::request, so I had to move that to the header file,
and therefore had to move ERequestAction from LLURLRequest to
LLHTTPClient to avoid include problems.
With this, textures are fetched with no latency: a call to
LLHTTPClient::request runs all the way until the state machine is idle
(AICurlEasyRequestStateMachine_waitAdded). There is a small delay until the
curl thread wakes up, which then processes the request and opens the url
etc. When the transaction is finished, it calls
AIStateMachine::advance_state(AICurlEasyRequestStateMachine_removed_after_finished),
which subsequently doesn't return until the state machine is completely
finished (bs_killed). The LLURLRequest isn't deleted yet at that point
because the AITimer of the LLURLRequest runs in the main thread: it is
aborted, but only deleted the next time the main thread state engines run,
and since the timer keeps an LLPointer to its parent, the
LLURLRequest, only then is the LLURLRequest object destructed. This
however has nothing to do with the texture-bandwidth loop.
This reverts commit ef35aa7954
because it contained too many wrong things that I won't be
using. I'll re-commit the parts of it that I do
want to keep afterwards.
This work extends AIStateMachine to run multiplex() in the thread
that calls run(), cont() or set_state(). Note that all three
eventually call locked_cont(), so that's where multiplex() is called
from. Calling multiplex() means "running the state machine", as in
"calling multiplex_impl".
Currently only LLURLRequest uses this feature, and then only
for the HTTPGetResponder, and even then only for the initializing,
start-up and normal finish states.
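Conceptually (a sketch, not the actual implementation; only the function
names come from this message):
  void AIStateMachine::locked_cont(void)
  {
    // run(), cont() and set_state() all end up here, so multiplex() (and
    // therefore multiplex_impl() for the current state) is executed in
    // whatever thread made that call.
    ...
    multiplex();
  }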
A current/remaining problem is that we run into a situation where
the curl thread runs a statemachine to its finish and kills it,
while the main thread is also 'running' it and tries to call
multiplex while the statemachine isn't running anymore.
As idle statemachines aren't in any list, it's not possible
(without adding such a list) to delete them. I don't think
that there are any active statemachines left at the end
of flush anyway, but killing them doesn't make much sense if
we can't get them all: there will always be statemachines
left, namely those that were idle at the moment the viewer was
quit.
Since we changed CurlResponderBuffer to be derived from CurlEasyRequest
and therefore changed its name to BufferedCurlEasyRequest, we should
also rename AICurlResponderBufferEvents to
AIBufferedCurlEasyRequestEvents.
This commit also fixes C++ comments in several places to reflect the
previous name change.