1. Find definitions for eight terms and concepts used in threaded programming:
1. Thread Synchronisation
Selvam (2004) frames thread synchronisation in terms of the simultaneous execution of multiple threads and/or processes, which allows a program to provide a wider range of services than a single process could. In a multithreaded environment, each thread has its own local thread stack and registers. If multiple threads read from and write to the same resource, the value may not end up being correct. For example, say our application contains two threads, one thread reading content from a file and another thread writing content to the file. If the write thread tries to write and the read thread tries to read the same data at the same time, the data might become corrupted. In this situation we want to lock the file access. Thread synchronisation has two states: signaled and non-signaled (Selvam, 2004).
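To make this concrete, here is a minimal Python sketch of the file example above using the threading module. The file name, loop counts and record text are made up for illustration; the point is that the lock stops a read from ever overlapping a write.

    import threading

    lock = threading.Lock()
    PATH = "demo.txt"                          # throwaway file, name is arbitrary

    def writer():
        for i in range(5):
            with lock:                         # only one thread touches the file at a time
                with open(PATH, "w") as f:
                    f.write("record %d" % i)

    def reader():
        for _ in range(5):
            with lock:
                with open(PATH) as f:
                    print("read: " + f.read())  # the lock guarantees this read never overlaps a write

    with open(PATH, "w") as f:                 # seed the file so the first read succeeds
        f.write("record 0")

    t1 = threading.Thread(target=writer)
    t2 = threading.Thread(target=reader)
    t1.start(); t2.start()
    t1.join(); t2.join()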
2. Locks
The Wikipedia (2010) article on locks defines a lock as a synchronisation mechanism put in place to control access to resources shared between threads and processes. Locks are used to ensure the integrity of data resources by guaranteeing that no two threads can access the same data source at the same time.
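A typical use in Python is guarding a shared counter. This is a small sketch I put together (the counter, loop size and number of threads are arbitrary), not anything from the cited article:

    import threading

    counter = 0
    counter_lock = threading.Lock()

    def worker():
        global counter
        for _ in range(100000):
            with counter_lock:                 # without this, increments from different threads can be lost
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                             # 400000 every time, because the lock serialises the updates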
3. Deadlock
The Wikipedia (2010) article states that deadlock refers to a specific condition when two or more processes are each waiting for each other to release a resource, or more than two processes are waiting for resources in a circular chain. Deadlock is a common problem in multiprocessing where many processes share a specific type of mutually exclusive resource known as a software lock or soft lock. Computers intended for the time-sharing and/or real-time markets are often equipped with a hardware lock (or hard lock) which guarantees exclusive access to processes, forcing serialized access. Deadlocks are particularly troubling because there is no general solution to avoid (soft) deadlocks.
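The classic way this happens is two threads taking two locks in opposite orders. A tiny hypothetical Python sketch (the lock and task names are mine) shows the circular wait:

    import threading

    lock_a = threading.Lock()
    lock_b = threading.Lock()

    def task_one():
        with lock_a:
            with lock_b:                       # waits for lock_b while holding lock_a
                pass

    def task_two():
        with lock_b:
            with lock_a:                       # waits for lock_a while holding lock_b
                pass

    # If task_one and task_two run in separate threads, each can end up holding one
    # lock and waiting forever for the other: a deadlock. The usual cure is to make
    # every thread acquire the locks in the same fixed order (always lock_a first).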
4. Semaphores
According to the Wikipedia (2010) article a semaphore is a protected variable or abstract data type which constitutes the classic method for restricting access to shared resources such as shared memory in a parallel programming environment. A counting semaphore is a counter for a set of available resources, rather than a locked/unlocked flag of a single resource.
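As I understand it, Python's threading.Semaphore behaves like that counting semaphore. In this invented sketch the semaphore stands for a pool of two "connections", and the sleep is a stand-in for real work:

    import threading, time

    connections = threading.Semaphore(2)      # pretend only 2 connections are available

    def query(n):
        with connections:                      # blocks while both connections are in use
            time.sleep(0.1)                    # stand-in for real work
            print("query %d done" % n)

    threads = [threading.Thread(target=query, args=(i,)) for i in range(6)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()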
5. Mutex (mutual exclusion)
According to the Wikipedia (2010) article, Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of computer code called critical sections. A critical section is a piece of code where a process or thread accesses a common resource. The critical section by itself is not a mechanism or algorithm for mutual exclusion; a program, process, or thread can have a critical section in it without any mechanism or algorithm that implements mutual exclusion. Examples of such resources are fine-grained flags, counters or queues, used to communicate between code that runs concurrently, such as an application and its interrupt handlers. Synchronising access to those resources is an acute problem because a thread can be stopped or started at any time. To illustrate: suppose a section of code is altering a piece of data over several program steps when another thread, perhaps triggered by some unpredictable event, starts executing. If this second thread reads from the same piece of data, the data, which is in the process of being overwritten, is in an inconsistent and unpredictable state. If the second thread tries overwriting that data, the ensuing state will probably be unrecoverable. Shared data accessed by critical sections of code must therefore be protected, so that other processes which read from or write to it are excluded from running at the same time.
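Python's threading.Lock can serve as the mutex around a critical section. In this made-up sketch the two fields of the account dictionary must always agree, so the update that touches both is wrapped in the mutex:

    import threading

    account = {"balance": 100, "history_total": 100}   # invariant: the two fields must stay equal
    mutex = threading.Lock()

    def deposit(amount):
        # These two assignments are the critical section: without the mutex another
        # thread could run between them and see the fields out of step.
        with mutex:
            account["balance"] += amount
            account["history_total"] += amount

    threads = [threading.Thread(target=deposit, args=(1,)) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(account)                             # both fields end at 200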
6. Thread
Threads are lightweight processes. Ince (2005) defines a thread as the execution of a chunk of code which can be carried out in parallel with the execution of other chunks of code.
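Creating one in Python is only a couple of lines; the function and the "worker-1" name here are arbitrary:

    import threading

    def greet(name):
        print("hello from " + name)

    t = threading.Thread(target=greet, args=("worker-1",))
    t.start()                                  # greet() now runs in parallel with the main program
    t.join()                                   # wait for it to finish before carrying on
    print("main program done")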
7. Event
According to Wikipedia (2010), in computing an event is an action that is usually initiated outside the scope of a program and that is handled by a piece of code inside the program. Typically, events are handled synchronously with the program flow; that is, the program has one or more dedicated places where events are handled. Typical sources of events include the user (for example, pressing a key on the keyboard) and hardware devices such as a timer. A computer program that changes its behavior in response to events is said to be event-driven, often with the goal of being interactive.
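Python's threading.Event gives a simple way to play with this idea. In this sketch the half-second sleep just stands in for the user pressing a key:

    import threading, time

    key_pressed = threading.Event()            # stands in for an external event source

    def handler():
        key_pressed.wait()                     # block until the event is signalled
        print("event received, handling it")

    t = threading.Thread(target=handler)
    t.start()
    time.sleep(0.5)                            # pretend the user takes half a second to act
    key_pressed.set()                          # signal the event
    t.join()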
8. Waitable timer.
The Microsoft Developer Network (2010) states that a waitable timer object is a synchronisation object whose state is set to signaled when the specified due time arrives. There are two types of waitable timers that can be created: manual-reset and synchronisation. A timer of either type can also be a periodic timer.
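Python does not expose the Windows waitable timer object directly, but a rough analogue of the idea (something that becomes signaled when the due time arrives) can be sketched with threading.Timer and threading.Event; the 2-second due time is just an example I picked:

    import threading

    fired = threading.Event()

    def on_due_time():
        fired.set()                            # becomes signalled when the due time arrives

    timer = threading.Timer(2.0, on_due_time)  # due in 2 seconds
    timer.start()
    fired.wait()                               # a waiting thread is released once the timer fires
    print("timer signalled")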
2. A simple demonstration of the threading module in Python (threaddemo.py) that uses both a lock and semaphore to control concurrency is by Ted Herman at the University of Iowa. The code and sample output below are worth a look. Report your findings.
Not being a programmer, it took a fair degree of time and research to determine what was required here - oh, I need to download Python.... more software to download!! Arghhh. Anyway, once that was out of the way and I ran the program, from what I can tell this particular program creates 10 tasks and bounds them with a semaphore so that no more than 3 of the 10 can run at the same time. The program only allows 3 tasks to run concurrently; when a task completes, another task starts. There is probably much more to it, but I don't have the level of programming knowledge to really pick this apart. A rough sketch of what I think the core idea is appears below.
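This isn't Ted Herman's code, just my guess at the shape of what it does: 10 tasks behind a semaphore of 3, so only 3 run at once and a waiting task starts as each one finishes. The sleep is a stand-in for whatever work the real tasks do.

    import threading, time

    gate = threading.Semaphore(3)              # at most 3 tasks past this point at once

    def task(n):
        with gate:
            print("task %d running" % n)
            time.sleep(0.2)                    # stand-in for real work
            print("task %d finished" % n)      # leaving the with-block releases the semaphore

    threads = [threading.Thread(target=task, args=(i,)) for i in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()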