Synopsis:
There are two independent processes: the main game process and the background loading process. The purpose of having two processes is to maximize CPU utilization, so that the main process can do other things (i.e. manipulating data, creating objects, and performing calculations) while the background process focuses on loading.
To manage this concurrency, synchronization between the two processes must be implemented to protect the data they share. The main task of this assignment is therefore to synchronize the two processes, maximize CPU efficiency, and satisfy the requirements of mutual exclusion, progress, and bounded waiting.
Solution:
The pending queue and the completed queue store the requests and are separate from each other. Both are shared between the main process and the background process. The main process places requests on the pending queue and removes requests from the completed queue. Conversely, the background process removes requests from the pending queue and stores completed requests on the completed queue.
Synchronization therefore needs to take place at two different junctions, one for each queue, so that only one process may operate on a given queue at a time.
Synchronization was achieved by implementing two sets of three counting semaphores, each set guarding one queue. Mutual exclusion is achieved because the semaphores mutexPending and mutexCompleted ensure that only one process at a time can act on the given queue. For both processes to act on the same queue at the same time, mutexPending or mutexCompleted would have to have a count greater than one, which is impossible in our implementation: each is initialized to one and is only signalled after a matching wait.
With respect to the progress condition, our implementation guarantees progress because deadlock cannot occur. Neither process can wait indefinitely on the other, since deadlock would require one of the queues to be simultaneously full and empty, which is impossible. For example, suppose the main process is blocked on wait(emptyPending), waiting for at least one free space in the pending queue, while at the same time the loading process is blocked on wait(mul_NumAsyncRequests), waiting for at least one item in the pending queue; the pending queue would have to be both full and empty, so this situation cannot arise. A similar argument applies to the completed queue, with the main and loading processes swapping roles, acting as consumer and producer respectively.
As for the bounded waiting condition, it is satisfied for both processes as a consequence of both queues being bounded buffers. It is impossible for one process to dominate and continuously prevent the other from getting a chance to execute. Consider the worst case: if the bound on the buffer is 1000, the producer (either the main or the loading process) can place at most 1000 requests on the queue before it is forced to wait for the consumer to consume at least one item.
With respect to CPU utilization, the solution above maximizes CPU efficiency because the loading process only executes when an item exists in the pending queue, and the main process only calls CFile::Service() when an item exists in the completed queue. These two constraints limit wasted CPU time and allow the main process to focus on getting more done, making it more efficient.
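The two-queue scheme described above can be sketched concretely. The following is a minimal Python sketch, using threads in place of the two processes and fixed demo values (K, N, and the request strings are illustrative, not from the original); the semaphore names mirror the pseudocode. For simplicity the main thread produces all requests and then drains the completed queue, rather than interleaving the two as the real main loop does.

```python
import threading
from collections import deque

K = 4  # buffer bound (k = BUFSIZ in the pseudocode); demo value
N = 8  # number of requests in this demo; kept small enough for the
       # produce-then-drain ordering used here to avoid filling both queues

# Shared state mirroring the pseudocode's initialization section.
pending, completed = deque(), deque()
mutexPending = threading.Semaphore(1)          # mutual exclusion, pending queue
mul_NumAsyncRequests = threading.Semaphore(0)  # filled slots, pending queue
emptyPending = threading.Semaphore(K)          # empty slots, pending queue
mutexCompleted = threading.Semaphore(1)        # mutual exclusion, completed queue
mul_NumCompletedAsyncRequests = threading.Semaphore(0)  # filled slots, completed
emptyCompleted = threading.Semaphore(K)        # empty slots, completed queue

results = []

def main_process():
    for i in range(N):
        request = f"request-{i}"          # stands in for CFile::Create(...)
        emptyPending.acquire()            # wait(emptyPending)
        with mutexPending:                # wait(mutexPending) / signal(mutexPending)
            pending.append(request)
        mul_NumAsyncRequests.release()    # signal(mul_NumAsyncRequests)
    # Drain the completed queue (the CFile::Service path in the pseudocode).
    for _ in range(N):
        mul_NumCompletedAsyncRequests.acquire()
        with mutexCompleted:
            results.append(completed.popleft())
        emptyCompleted.release()

def loading_process():
    for _ in range(N):
        mul_NumAsyncRequests.acquire()    # wait for a pending request
        with mutexPending:
            item = pending.popleft()
        emptyPending.release()            # a pending slot has freed up
        result = item + "-done"           # stands in for ExecuteAsyncFileOperation
        emptyCompleted.acquire()          # wait for a free completed slot
        with mutexCompleted:
            completed.append(result)
        mul_NumCompletedAsyncRequests.release()

t1 = threading.Thread(target=main_process)
t2 = threading.Thread(target=loading_process)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)
```

Because each queue has a single producer and a single consumer, requests come back in the order they were submitted, and each queue's mutex is held only for the brief append/pop, never across a blocking wait.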
Note:
Our solution to the problem was predominantly based on the May 30, 2003 in-class handout pertaining to the Producer/Consumer Bounded Buffer (solution). The actual solution was implemented using different variable names, however.
Pseudocode of solution:
Initialization of shared data:
semaphore mutexPending, mul_NumAsyncRequests, emptyPending;
semaphore mutexCompleted, mul_NumCompletedAsyncRequests,
emptyCompleted;
Queue pending, completed;
// mutual exclusion for the pending queue
mutexPending.count:=1;
// number of filled spaces in the pending queue
mul_NumAsyncRequests.count:=0;
// number of empty spaces in the pending queue (k = BUFSIZ)
emptyPending.count:=k;
// mutual exclusion for the completed queue
mutexCompleted.count:=1;
// number of filled spaces in the completed queue
mul_NumCompletedAsyncRequests.count:=0;
// number of empty spaces in the completed queue (k = BUFSIZ)
emptyCompleted.count:=k;
Main:
repeat
// Create Read/Write/Seek request
CAsyncFileRequest = CFile::Create(Read or Write or Seek);
// Wait for pending queue to have a free space
wait(emptyPending);
// Wait for mutexPending to equal one, then decrement
// mutexPending and enter the CS
wait(mutexPending);
// Append CAsyncFileRequest to pending queue
pending.store(CAsyncFileRequest);
// Increment mutexPending now that process has left the CS
signal(mutexPending);
// Signal that a request has been added to the pending queue
signal(mul_NumAsyncRequests);
// Perform other operations. Manipulate textures, create
// game environment, perform calculations, etc.
// If the completed queue is empty, skip this section because
// synchronization of the two processes is not necessary
if (mul_NumCompletedAsyncRequests > 0) {
// Call CFile::Service() to remove item from completed queue
completedRequest = CFile::Service();
// Manipulate data that was just loaded, etc.
doSomethingTo(completedRequest);
// Call AsyncOperationComplete() on callback object
AsyncOperationComplete();
}
forever
Loading:
repeat
// Wait for a request to be placed in the pending queue
wait(mul_NumAsyncRequests);
// Decrement mutexPending and enter CS
wait(mutexPending);
// Retrieve a CAsyncFileRequest from the pending queue
item = pending.retrieve();
// Increment mutexPending to indicate that process has left CS
signal(mutexPending);
// Signal that another space has freed up in the pending queue
signal(emptyPending);
// Perform the file operation on the request that was just removed
// from the pending queue
requestResult = ExecuteAsyncFileOperation(item);
// Wait for completed queue to have a free space
wait(emptyCompleted);
// Wait for mutexCompleted to equal one, then decrement
// mutexCompleted and enter the CS. Basically, this process is
// synchronizing with the main process to make sure that the
// main process is not operating on the completed queue
wait(mutexCompleted);
// Storing the completed request into the completed queue
completed.store(requestResult);
// Indicate that process is no longer in CS. mutexCompleted has been
// incremented.
signal(mutexCompleted);
// Indicate that an item has been added to the completed queue
signal(mul_NumCompletedAsyncRequests);
forever
CFile::Service:
// decrement mul_NumCompletedAsyncRequests
wait(mul_NumCompletedAsyncRequests);
// Wait for mutexCompleted to equal one, then decrement mutexCompleted
// and enter the CS
wait(mutexCompleted);
// Retrieving a completed request from the completed queue
result = completed.retrieve();
// Increment mutexCompleted to indicate that process is no longer in CS
signal(mutexCompleted);
// Signal that another space has been freed up in completed queue
signal(emptyCompleted);
return result;
Queue Methods:
store(item):
buf[in]:=item;
in:=(in+1) mod k;
retrieve():
item:=buf[out];
out:=(out+1) mod k;
return item;
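The circular-buffer queue methods above can be sketched as a small class. This is an illustrative Python rendering (the class name BoundedQueue is ours); the array name buf and bound k follow the pseudocode, and the b[out] in retrieve is taken to be the same buf array. As in the solution, no full/empty checks appear here because the semaphores guarantee store() is never called on a full queue and retrieve() never on an empty one.

```python
class BoundedQueue:
    """Circular buffer of fixed capacity k, as in the pseudocode.

    The counting semaphores (emptyPending/mul_NumAsyncRequests, etc.)
    ensure callers never overfill or underflow the buffer, so the
    methods themselves need no bounds checks.
    """

    def __init__(self, k):
        self.k = k
        self.buf = [None] * k
        self.in_ = 0   # index of the next free slot ('in' is a Python keyword)
        self.out = 0   # index of the oldest filled slot

    def store(self, item):
        self.buf[self.in_] = item
        self.in_ = (self.in_ + 1) % self.k   # wrap around the fixed array

    def retrieve(self):
        item = self.buf[self.out]
        self.out = (self.out + 1) % self.k   # wrap around the fixed array
        return item
```

Because both indices wrap modulo k, the same k array slots are reused indefinitely, which is why the buffer (and hence the waiting) is bounded.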