Thursday, August 27, 2009

Graphs

Unsafe State in Resource-Allocation Graph

[Figure omitted.]

Resource-Allocation Graph for Deadlock Avoidance

[Figure omitted.]


Resource-Allocation Graph with a Cycle but No Deadlock

[Figure omitted.]


Example of a Resource-Allocation Graph with a Deadlock

[Figure omitted.]


Example of a Resource-Allocation Graph

[Figure omitted.]


Resource Allocation Graph

Resource allocation graph & Wait-for graph




A resource-allocation graph consists of a set of vertices V and a set of edges E.

* V is partitioned into two types:

> P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.

> R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.

* request edge – directed edge Pi ---> Rj
* assignment edge – directed edge Rj ---> Pi




How can you tell whether there is a deadlock from the resource-allocation graph?



* If the graph contains no cycles, then there is no deadlock.

* If the graph contains a cycle:
- if there is only one instance per resource type, then there is a deadlock.
- if there are several instances per resource type, there is a possibility of deadlock.
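These rules can be checked mechanically. Below is a minimal Python sketch (the process and resource names are made up for illustration) that stores request edges (Pi ---> Rj) and assignment edges (Rj ---> Pi) in one directed adjacency map and looks for a cycle with depth-first search. Under the single-instance-per-resource-type assumption, finding a cycle means finding a deadlock.

```python
# Sketch: cycle detection in a resource-allocation graph.
# Request edges (process -> resource) and assignment edges
# (resource -> process) live in one directed adjacency map.

def has_cycle(graph):
    """Iterative DFS with white/gray/black marking; a back edge
    to a gray (in-progress) node means the graph has a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}
    for start in graph:
        if color[start] != WHITE:
            continue
        color[start] = GRAY
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for nxt in it:
                if color.get(nxt, WHITE) == GRAY:
                    return True          # back edge -> cycle
                if color.get(nxt, WHITE) == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(graph.get(nxt, []))))
                    advanced = True
                    break
            if not advanced:
                color[node] = BLACK      # fully explored
                stack.pop()
    return False

# P1 requests R1, R1 is assigned to P2, P2 requests R2,
# R2 is assigned to P1: a cycle, hence (with single-instance
# resources) a deadlock.
rag = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
print(has_cycle(rag))  # True
```

With multiple instances per resource type, a cycle found this way only signals the *possibility* of deadlock, matching the second rule above.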

Thursday, August 20, 2009

Methods For Handling Deadlocks

*Deadlock Prevention

- Disallow one of the four necessary conditions for deadlock.

*Deadlock Avoidance

- Do not grant a resource request if the allocation has the potential to lead to a deadlock.

*Deadlock Detection

- Always grant resource requests when possible. Periodically check for deadlock. If a deadlock exists, recover from it.

*Ignore the problem

- Makes sense if the likelihood of deadlock is very low.

Deadlock Recovery

*Abort all deadlocked processes and release their resources - too drastic - will lead to loss of work.

*Abort one process at a time, releasing its resources, until no deadlock remains.
How do we determine which process to abort first? - by priority ordering, or the process which has done the least work.

*Selectively restart processes from a previous checkpoint, i.e. before a process claimed any resources - difficult to achieve, sometimes impossible.

*Successively withdraw resources from a process and give them to another process until the deadlock is broken. How to choose which processes and which resources?

1) Complex decisions, due to the large number of processes present within a system.

2) Difficult to automate.

3) An operator could resolve conflicts - BUT this requires the operator to have skill and an understanding of what the processes are actually doing.

Deadlock Characterization

A deadlock can occur if the following four conditions hold
simultaneously:

1. Mutual exclusion: at least one resource must be held in a
nonsharable mode.

2. Hold and wait: a process must be holding at least one
resource and waiting to acquire additional resources held by
other processes.

3. No preemption: resources can’t be preempted.

4. Circular wait: there exists a set {P0, P1, …, Pn} of waiting
processes such that P0 is waiting for a resource that is held by
P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is
waiting for a resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.

Deadlock Detection

• Allow the system to enter a deadlock state

• Detection algorithm

• Recovery scheme

Deadlock Prevention

• Ensure at least one necessary condition does not hold:

– Mutual Exclusion: not required for sharable resources; must hold for non-sharable resources.

– Hold-and-Wait: a process cannot request new resources while holding resources.
  • Protocol 1: request all resources before the process begins execution.
  • Protocol 2: request resources only when the process has none.
  • Low resource utilization; starvation possible.

– No Preemption: preempt resources from processes.
  • Protocol 1: if a request cannot be satisfied, then preempt all resources held and block.
  • Protocol 2: preempt only if the resources are needed by another running process.
  • Requires resource state to be easily restored.

– Circular Wait: block any request that results in a cycle.
  • Impose a total ordering of all resource types, and require that each process requests resources in an increasing order.
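The total-ordering rule for circular-wait prevention is easy to enforce in code. Here is a minimal Python sketch; the resource names and rank numbers are assumptions made up for this example, not from the notes:

```python
# Sketch: prevent circular wait by imposing a total order on resource
# types and refusing any request that is not in strictly increasing
# rank order relative to the resources already held.

RANK = {"tape": 1, "disk": 5, "printer": 12}   # hypothetical ordering

class OrderedAcquirer:
    def __init__(self):
        self.held = []                     # resources held, in rank order

    def request(self, resource):
        rank = RANK[resource]
        if self.held and rank <= RANK[self.held[-1]]:
            # Granting this would allow a cycle of waits to form.
            raise RuntimeError(
                f"denied: {resource} (rank {rank}) must be requested "
                f"before {self.held[-1]}")
        self.held.append(resource)
        return True

p = OrderedAcquirer()
p.request("tape")      # ok: rank 1
p.request("printer")   # ok: rank 12 > 1
# p.request("disk")    # would raise: rank 5 <= 12
```

Because every process climbs the same ranking, no cycle of "holds X, waits for Y" can close, so the circular-wait condition can never hold.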

Monday, August 17, 2009

REAL-TIME SCHEDULING

  • Hard real-time systems – required to complete a critical task within a guaranteed amount of time
  • Soft real-time computing – requires that critical processes receive priority over less fortunate ones

Thursday, August 13, 2009

Thread Scheduling

Across platforms, thread scheduling tends to be based on at least the following criteria:

*a priority, or in fact usually multiple "priority" settings that we'll discuss below;

*a quantum, or number of allocated timeslices of CPU, which essentially determines the amount of CPU time a thread is allotted before it is forced to yield the CPU to another thread of the same or lower priority (the system will keep track of the remaining quantum at any given time, plus its default quantum, which could depend on thread type and/or system configuration);

*a state, notably "runnable" vs "waiting";

*metrics about the behaviour of threads, such as recent CPU usage or the time since it last ran (i.e. had a share of CPU), or the fact that it has "just received an event it was waiting for".

Most systems use what we might dub priority-based round-robin scheduling to some extent. The general principles are:

*a thread of higher priority (which is a function of base and local priorities) will preempt a thread of lower priority;

*otherwise, threads of equal priority will essentially take turns at getting an allocated slice or quantum of CPU;


*there are a few extra "tweaks" to make things work.
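The two principles above can be sketched with a few lines of Python. This is a toy model, not any particular kernel's scheduler; the class name, priority levels, and thread names are all made up for illustration:

```python
from collections import deque

# Sketch of priority-based round-robin: runnable threads sit in one
# FIFO queue per priority level; the scheduler always serves the
# highest non-empty level, rotating threads within that level.

class PriorityRR:
    def __init__(self, levels=8):
        self.queues = [deque() for _ in range(levels)]  # index 0 = highest

    def make_runnable(self, thread, priority):
        self.queues[priority].append(thread)

    def pick_next(self):
        for q in self.queues:            # highest priority wins
            if q:
                thread = q.popleft()
                q.append(thread)         # back to the tail of its level
                return thread
        return None                      # nothing runnable: idle

sched = PriorityRR()
sched.make_runnable("ui_thread", 0)
sched.make_runnable("worker_a", 3)
sched.make_runnable("worker_b", 3)
print(sched.pick_next())  # ui_thread
```

As long as `ui_thread` stays runnable it is always chosen (higher priority preempts lower); once it blocks, `worker_a` and `worker_b` take turns, which is the round-robin behaviour within an equal-priority level.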

Multiprocessor Scheduling

  • In computer science, multiprocessor scheduling is an NP-complete optimization problem. The problem statement is: "Given a set J of jobs where job ji has length li, and a number of processors m, what is the minimum possible time required to schedule all jobs in J on m processors such that none overlap?" The applications of this problem are numerous, but are, as suggested by the name of the problem, most strongly associated with the scheduling of computational tasks in a multiprocessor environment.
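Because the problem is NP-complete, practical systems use heuristics. A classic one is longest-processing-time-first (LPT): sort jobs by decreasing length and always assign the next job to the least-loaded processor; its makespan is provably within 4/3 of optimal. The job lengths below are made-up values for illustration:

```python
import heapq

# Sketch: LPT greedy heuristic for the multiprocessor scheduling
# problem. Keep a min-heap of per-processor loads; each job goes
# to the currently least-loaded processor, longest jobs first.

def lpt_makespan(job_lengths, m):
    loads = [0] * m
    heapq.heapify(loads)
    for length in sorted(job_lengths, reverse=True):
        lightest = heapq.heappop(loads)        # least-loaded processor
        heapq.heappush(loads, lightest + length)
    return max(loads)                          # makespan

print(lpt_makespan([7, 5, 4, 3, 3], 2))  # 12 (optimal is 11: {7,4} vs {5,3,3})
```

The example also shows why this is only a heuristic: LPT returns 12, while an exhaustive search over assignments would find the optimal makespan of 11.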

Monday, August 10, 2009

Scheduling Algorithms

1.First-come, first-served (FCFS) scheduling
2.Shortest-job first (SJF) scheduling
3.Priority scheduling
4.Round-robin scheduling
5.Multilevel queue scheduling
6.Multilevel feedback queue scheduling

- First-come, First-served (FCFS) scheduling is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes.

- Shortest-job-first (SJF) scheduling is provably optimal, providing the shortest average waiting time. Implementing SJF scheduling is difficult because predicting the length of the next CPU burst is difficult. The SJF algorithm is a special case of the general priority-scheduling algorithm, which simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from starvation. Aging is a technique to prevent starvation.
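For the nonpreemptive case with all jobs arriving at time 0, SJF's average waiting time is easy to compute. The burst lengths below are made-up values for illustration:

```python
# Sketch: average waiting time under nonpreemptive SJF when every
# job arrives at time 0. Each job waits for the total length of all
# shorter jobs scheduled before it.

def sjf_avg_wait(bursts):
    waiting, clock = 0, 0
    for burst in sorted(bursts):   # shortest job first
        waiting += clock           # this job waited until 'clock'
        clock += burst
    return waiting / len(bursts)

print(sjf_avg_wait([6, 8, 7, 3]))  # 7.0
```

Running the same four bursts in arrival order (FCFS) would give waits of 0, 6, 14, and 21, for an average of 10.25, which illustrates why SJF is optimal for average waiting time.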

- Round-robin (RR) scheduling is more appropriate for a time-shared (interactive) system. RR scheduling allocates the CPU to the first process in the ready queue for q time units, where q is the time quantum. After q time units, if the process has not relinquished the CPU, it is preempted and the process is put at the tail of the ready queue. The major problem is the selection of the time quantum. If the quantum is too large, RR scheduling degenerates to FCFS scheduling; if the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive.
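The RR mechanism described above can be sketched directly: a FIFO ready queue, a quantum q, and preempted processes re-queued at the tail. The burst lengths are made-up values, and all processes are assumed to arrive at time 0:

```python
from collections import deque

# Sketch: round-robin scheduling with time quantum q. Each process
# runs for at most q time units; if it still has work left it is
# preempted and put at the tail of the ready queue.

def rr_completion_times(bursts, q):
    ready = deque(enumerate(bursts))   # (pid, remaining burst)
    clock, done = 0, {}
    while ready:
        pid, remaining = ready.popleft()
        run = min(q, remaining)
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))   # preempted
        else:
            done[pid] = clock                      # finished at 'clock'
    return done

print(rr_completion_times([24, 3, 3], q=4))  # {1: 7, 2: 10, 0: 30}
```

With q=4 the two short processes finish quickly (times 7 and 10) instead of waiting behind the 24-unit burst; with a very large q (say q=25) the same input degenerates to FCFS order, matching the quantum-selection trade-off described above.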

The FCFS algorithm is nonpreemptive; the RR algorithm is preemptive. The SJF and priority algorithms may be either preemptive or nonpreemptive.
Multilevel queue algorithms allow different algorithms to be used for various classes of processes. The most common is a foreground interactive queue which uses RR scheduling, and a background batch queue, which uses FCFS scheduling. Multilevel feedback queues allow processes to move from one queue to another.
Because such a wide variety of scheduling algorithms are available, we need methods to select among them. Analytic methods use mathematical analysis to determine the performance of an algorithm. Simulation methods determine performance by imitating the scheduling algorithm on a “representative” sample of processes, and computing the resulting performance.
Operating systems supporting threads at the kernel level must schedule threads - not processes - for execution. This is the case with Solaris 2 and Windows 2000, where both systems schedule threads using a preemptive, priority-based scheduling algorithm, including support for real-time threads.

The Linux process scheduler also uses a priority-based algorithm with real-time support. The scheduling algorithms for these three operating systems typically favor interactive over batch and CPU-bound processes.