Thursday, June 25, 2009

Hardware Protection

•Dual-Mode Operation-sharing system resources requires the operating system to ensure that an incorrect program cannot cause other programs to execute incorrectly.

-provides hardware support to differentiate between at least two modes of operation:

1. User mode-execution done on behalf of a user.

2. Monitor mode-execution done on behalf of the operating system.


•I/O Protection-All I/O instructions are privileged instructions.

-must ensure that a user program can never gain control of the computer in monitor mode.


•Memory Protection-Memory protection is a way to control memory access rights on a computer, and is a part of nearly every modern operating system. The main purpose of memory protection is to prevent a process from accessing memory that has not been allocated to it. This prevents a bug within a process from affecting other processes, or the operating system itself. Memory protection also makes a Rootkit more difficult to implement. Memory protection is a behavior that is distinct from ASLR and the NX bit.
•CPU Protection-a timer interrupts the CPU after a fixed period, so a user program cannot get stuck in an infinite loop and never return control to the OS (a toy sketch of these protection checks appears after this list).
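A toy C simulation of these protection ideas (not real hardware, just the concept): a mode bit, base/limit registers for memory protection, a privileged I/O check, and a timer tick that hands control back to the "OS". All names and values are illustrative.

    /* Toy simulation of dual-mode, memory, I/O and CPU protection. */
    #include <stdio.h>

    enum mode { USER, MONITOR };
    static enum mode cpu_mode = USER;

    /* Base/limit registers: the user program may only touch [base, base+limit). */
    static unsigned base = 1000, limit = 500;

    static void trap(const char *why)        /* switch to monitor mode on a violation */
    {
        cpu_mode = MONITOR;
        printf("TRAP: %s -- OS takes over\n", why);
    }

    static void access_memory(unsigned addr) /* memory protection check */
    {
        if (cpu_mode == USER && (addr < base || addr >= base + limit))
            trap("illegal memory access");
        else
            printf("access to %u allowed\n", addr);
    }

    static void do_io(void)                  /* I/O protection: privileged instruction */
    {
        if (cpu_mode == USER)
            trap("privileged I/O instruction in user mode");
        else
            printf("I/O performed by the OS\n");
    }

    int main(void)
    {
        access_memory(1200);   /* inside [1000,1500): allowed */
        access_memory(4000);   /* outside the range: trapped  */

        cpu_mode = USER;
        do_io();               /* user code may not do I/O directly */

        /* CPU protection: a timer "interrupt" after a fixed number of ticks. */
        for (unsigned tick = 0; ; tick++) {
            if (tick == 3) { trap("timer expired"); break; }
            printf("user program running, tick %u\n", tick);
        }
        return 0;
    }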

Storage Hierarchy

* Caching-In computer science, a cache (pronounced /kæʃ/) is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive to fetch (owing to longer access time) or to compute, compared to the cost of reading the cache. In other words, a cache is a temporary storage area where frequently accessed data can be stored for rapid access. Once the data is stored in the cache, it can be used in the future by accessing the cached copy rather than re-fetching or recomputing the original data.
A cache has proven to be extremely effective in many areas of computing because access patterns in typical computer applications have locality of reference. There are several kinds of locality: data accessed close together in time shows temporal locality, while data located physically close together shows spatial locality (a small cache sketch follows these entries).


*Coherency-In computing, cache coherence (also cache coherency) refers to the integrity of data stored in local caches of a shared resource. Cache coherence is a special case of memory coherence.
When clients in a system maintain caches of a common memory resource, problems may arise with inconsistent data. This is particularly true of CPUs in a multiprocessing system. For example, if one client has a copy of a memory block from a previous read and another client changes that memory block, the first client could be left with an invalid cache of memory without any notification of the change. Cache coherence is intended to manage such conflicts and maintain consistency between cache and memory.

*Consistency-In the storage context, consistency means that all readers see a single agreed-upon value for each piece of data: when a datum is updated, the copies held in caches, in main memory, and on disk must not be allowed to silently disagree. A consistency model defines when an update made by one processor or process becomes visible to the others; cache coherence (above) is the hardware mechanism that enforces this for CPU caches, while file systems and distributed systems provide their own, often weaker, consistency guarantees.
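A minimal sketch of the caching idea in C, assuming a made-up direct-mapped cache with 8 lines sitting in front of a slow fetch function: repeated accesses to the same small working set (temporal locality) hit the cache instead of re-fetching the original data.

    #include <stdio.h>
    #include <stdbool.h>

    #define CACHE_LINES 8

    struct line { bool valid; unsigned tag; int data; };
    static struct line cache[CACHE_LINES];
    static int hits, misses;

    /* Stand-in for the expensive source (RAM, disk, or a long computation). */
    static int slow_fetch(unsigned addr) { return (int)(addr * 2); }

    static int cached_read(unsigned addr)
    {
        unsigned idx = addr % CACHE_LINES;        /* which line this address maps to */
        if (cache[idx].valid && cache[idx].tag == addr) {
            hits++;
            return cache[idx].data;               /* fast path: use the cached copy */
        }
        misses++;
        cache[idx].valid = true;                  /* slow path: fetch and remember it */
        cache[idx].tag = addr;
        cache[idx].data = slow_fetch(addr);
        return cache[idx].data;
    }

    int main(void)
    {
        /* Access a small working set repeatedly: after the first pass, everything hits. */
        for (int pass = 0; pass < 4; pass++)
            for (unsigned addr = 100; addr < 104; addr++)
                cached_read(addr);
        printf("hits=%d misses=%d\n", hits, misses);   /* expect 12 hits, 4 misses */
        return 0;
    }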

Storage Structure

•Main Memory-Computer data storage, often called storage or memory, refers to computer components, devices, and recording media that retain digital data used for computing for some interval of time. Computer data storage provides one of the core functions of the modern computer, that of information retention. It is one of the fundamental components of all modern computers, and coupled with a central processing unit (CPU, a processor), implements the basic computer model used since the 1940s. In contemporary usage, memory usually refers to a form of semiconductor storage known as random access memory (RAM) and sometimes other forms of fast but temporary storage. Similarly, storage today more commonly refers to mass storage - optical discs, forms of magnetic storage like hard disks, and other types slower than RAM, but of a more permanent nature. Historically, memory and storage were respectively called primary storage and secondary storage.

•Magnetic Disk-Magnetic storage and magnetic recording are terms from engineering referring to the storage of data on a magnetized medium. Magnetic storage uses different patterns of magnetization in a magnetizable material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads. As of 2009, magnetic storage media, primarily hard disks, are widely used to store computer data as well as audio and video signals. In the field of computing, the term magnetic storage is preferred and in the field of audio and video production, the term magnetic recording is more commonly used. The distinction is less technical and more a matter of preference.

•Storage Hierarchy-The hierarchical arrangement of storage in current computer architectures is called the memory hierarchy. It is designed to take advantage of memory locality in computer programs. Each level of the hierarchy has higher bandwidth, smaller size, and lower latency than the levels below it (a small locality demo follows this list).


•Moving Head Disk Mechanism-The read/write heads are attached to a disk arm that moves them as a unit across the spinning platters. Accessing a block requires a seek (positioning the head over the correct track) followed by rotational latency (waiting for the desired sector to rotate under the head), after which the data is transferred.


•Magnetic Tapes-Magnetic tape is a medium for magnetic recording generally consisting of a thin magnetizable coating on a long and narrow strip of plastic. Nearly all recording tape is of this type, whether used for recording audio or video or for computer data storage. It was originally developed in Germany, based on the concept of magnetic wire recording. Devices that record and play back audio and video using magnetic tape are generally called tape recorders and video tape recorders respectively. A device that stores computer data on magnetic tape can be called a tape drive, a tape unit, or a streamer.
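A rough demo of why the memory hierarchy matters, assuming a machine whose CPU caches are much faster than main memory: walking a large array sequentially is cache-friendly, while jumping through it with a large stride defeats the caches, so the second loop usually takes noticeably longer even though it performs the same number of additions. The array size and stride below are arbitrary choices.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)      /* 16M ints, about 64 MB: larger than typical caches */
    #define STRIDE 4096

    int main(void)
    {
        int *a = calloc((size_t)N, sizeof *a);
        if (!a) return 1;
        long sum = 0;

        clock_t t0 = clock();
        for (int i = 0; i < N; i++) sum += a[i];          /* sequential: cache friendly */
        clock_t t1 = clock();
        for (int s = 0; s < STRIDE; s++)                  /* strided: cache hostile */
            for (int i = s; i < N; i += STRIDE) sum += a[i];
        clock_t t2 = clock();

        printf("sequential: %.2fs   strided: %.2fs   (sum=%ld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
        free(a);
        return 0;
    }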

Tuesday, June 23, 2009

1) Bootstrap program

In computing, bootstrapping (from an old expression "to pull oneself up by one's bootstraps") is a technique by which a simple computer program activates a more complicated system of programs. In the start-up process of a computer system, a small program such as the BIOS initializes and tests the hardware, checks that peripherals and external memory devices are connected, then loads a program from one of them and passes control to it, thus allowing the loading of larger programs, such as an operating system.
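A runnable toy version of that start-up sequence. The disk array, read_sector(), and jump_to() below are stand-ins invented for illustration (a real first-stage loader is a few hundred bytes of assembly that calls firmware services), but the shape is the same: copy a larger program into memory, then pass control to it.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define SECTOR_SIZE 512
    #define KERNEL_SECTORS 4

    static uint8_t disk[KERNEL_SECTORS][SECTOR_SIZE];    /* pretend disk holding the "kernel" */
    static uint8_t memory[KERNEL_SECTORS * SECTOR_SIZE]; /* pretend RAM */

    static void read_sector(unsigned s, uint8_t *dest)   /* stand-in for a BIOS disk service */
    {
        memcpy(dest, disk[s], SECTOR_SIZE);
    }

    static void jump_to(uint8_t *addr)                   /* stand-in for jumping to the kernel */
    {
        printf("control passed to program loaded at %p: \"%s\"\n",
               (void *)addr, (const char *)addr);
    }

    int main(void)
    {
        strcpy((char *)disk[0], "hello from the larger program");  /* put a "kernel" on disk */

        /* The bootstrap program itself: small, and just enough to load the rest. */
        for (unsigned s = 0; s < KERNEL_SECTORS; s++)
            read_sector(s, memory + s * SECTOR_SIZE);
        jump_to(memory);
        return 0;
    }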

A different use of the term bootstrapping is using a compiler to compile itself: a small part of a compiler for a new programming language is first written in an existing language, and that partial compiler is then used to compile the rest of the compiler, which is written in the new language. This solves the "chicken and egg" causality dilemma.

2) Difference of interrupt and trap and their use.

The difference between an interrupt and a trap is that, in computing and operating systems, a trap is a type of synchronous interrupt typically caused by an exceptional condition (e.g. division by zero or invalid memory access) in a user process. A trap usually results in a switch to kernel mode, wherein the operating system performs some action before returning control to the originating process. In some usages, the term trap refers specifically to an interrupt intended to initiate a context switch to a monitor program or debugger.[1]

In SNMP, a trap is a type of PDU used to report an alert or other asynchronous event about a managed subsystem. An interrupt, by contrast, is an asynchronous signal indicating the need for attention, or a synchronous event in software indicating the need for a change in execution. A hardware interrupt causes the processor to save its state of execution via a context switch, and begin execution of an interrupt handler.
Software interrupts are usually implemented as instructions in the instruction set, which cause a context switch to an interrupt handler similar to a hardware interrupt.
Interrupts are a commonly used technique for computer multitasking, especially in real-time computing. Such a system is said to be interrupt-driven.[1]
An act of interrupting is referred to as an interrupt request (IRQ).
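A small demonstration of a synchronous trap, assuming x86 Linux where integer division by zero raises SIGFPE: the faulting instruction causes a switch into the kernel, which delivers the signal back to the process and runs the handler below. The handler exits rather than returning, since resuming the faulting instruction would simply trap again.

    #include <stdio.h>
    #include <signal.h>
    #include <unistd.h>

    static void on_trap(int sig)
    {
        (void)sig;
        const char msg[] = "trapped: SIGFPE (division by zero)\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);   /* async-signal-safe output */
        _exit(1);
    }

    int main(void)
    {
        signal(SIGFPE, on_trap);

        volatile int zero = 0;        /* volatile so the compiler cannot fold it away */
        printf("about to divide by zero...\n");
        printf("%d\n", 1 / zero);     /* the synchronous trap happens here */

        puts("never reached");
        return 0;
    }

Compiled and run on such a system, the program prints the handler's message instead of crashing silently.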

3)Monitor mode

In an operating system, monitor mode (also called kernel, supervisor, or privileged mode) is the CPU mode in which the operating system itself executes. A mode bit provided by the hardware distinguishes it from user mode: privileged instructions, such as I/O instructions and instructions that change the mode bit or set the timer, can only be executed in monitor mode. The CPU enters monitor mode on interrupts, traps, and system calls, and switches back to user mode before handing control to a user program. (In wireless networking, the same phrase, monitor mode or RFMON, names an unrelated 802.11 card mode that captures all traffic without associating with an access point.)

4)User mode

It is a non-privileged mode in which each process (i.e., a running instance of a program) starts out. It is non-privileged in that it is forbidden for processes in this mode to access those portions of memory (i.e., RAM) that have been allocated to the kernel or to other programs. The kernel is not a process, but rather a controller of processes, and it alone has access to all resources on the system.
When a user mode process (i.e., a process currently in user mode) wants to use a service that is provided by the kernel (i.e., access system resources other than the limited memory space that is allocated to the user program), it must switch temporarily into kernel mode, in which the kernel has unrestricted privileges, including permission to access any memory space or other resource on the system. When the kernel has satisfied the process's request, it restores the process to user mode.
This change in mode is termed a mode switch, which should not be confused with a context switch (i.e., the switching of the CPU from one process to another). On x86 Linux, the traditional way to switch from user mode to kernel mode is to issue the 0x80 software interrupt (i.e., to make a system call); newer processors provide dedicated instructions such as sysenter and syscall for the same purpose.
An interrupt is a signal to the operating system that an event has occurred, and it results in changes in the sequence of instructions that is executed by the CPU. In the case of a hardware interrupt, the signal originates from a hardware device such as a keyboard (e.g., when a user presses a key), mouse or system clock (a circuit that generates pulses at precise intervals that are used to coordinate the computer's activities). A software interrupt is an interrupt that originates in software, usually by a program in user mode.
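A minimal Linux-specific sketch of this user-to-kernel transition, using the standard libc getpid() wrapper and the raw syscall() interface; on older x86 systems the same request would have gone through interrupt 0x80.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <sys/types.h>

    int main(void)
    {
        pid_t a = getpid();             /* libc wrapper: traps into the kernel */
        long  b = syscall(SYS_getpid);  /* the same system call, made "by hand" */

        /* Both requests ran in kernel mode on our behalf and returned us to user mode. */
        printf("getpid() = %d, syscall(SYS_getpid) = %ld\n", (int)a, b);
        return 0;
    }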

5)Device status table

A device status table is a kernel data structure with one entry for each I/O device, recording the device's type, address, and state (idle or busy). When a device is busy, its entry also holds a queue of the requests waiting for it; the operating system consults and updates this table on each I/O interrupt to determine which device needs attention.
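A hedged sketch of what such a table might look like in C; the structure and field names are illustrative only, not taken from any particular kernel.

    #include <stddef.h>

    enum dev_state { DEV_IDLE, DEV_BUSY };

    struct io_request {
        unsigned long      block;    /* which block or address the request refers to */
        void              *buffer;   /* where the data goes */
        struct io_request *next;     /* next request waiting for this device */
    };

    struct device_entry {
        const char        *name;     /* e.g. "disk0", "printer0" */
        enum dev_state     state;    /* idle or busy */
        struct io_request *queue;    /* pending requests while the device is busy */
    };

    /* The table itself: consulted and updated on every I/O interrupt. */
    static struct device_entry device_table[] = {
        { "disk0",    DEV_IDLE, NULL },
        { "printer0", DEV_IDLE, NULL },
    };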

6)Direct memory access

Direct memory access (DMA) is a feature of modern computers and microprocessors that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Many hardware systems use DMA including disk drive controllers, graphics cards, network cards and sound cards. DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor system-on-chips, where its processing element is equipped with a local memory (often called scratchpad memory) and DMA is used for transferring data between the local memory and the main memory. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly a processing element inside a multi-core processor can transfer data to and from its local memory without occupying its processor time and allowing computation and data transfer concurrency.

Without DMA, using programmed input/output (PIO) mode for communication with peripheral devices, or load/store instructions in the case of multicore chips, the CPU is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work. With DMA, the CPU would initiate the transfer, do other operations while the transfer is in progress, and receive an interrupt from the DMA controller once the operation has been done. This is especially useful in real-time computing applications where not stalling behind concurrent operations is critical. Another and related application area is various forms of stream processing where it is essential to have data processing and transfer in parallel, in order to achieve sufficient throughput.
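A toy model of the sequence described above; the "register" layout is invented for illustration (real DMA controllers differ), and memcpy stands in for the hardware moving the data while the CPU is free to do other work.

    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>

    struct dma_controller {      /* pretend memory-mapped registers */
        const void *src;
        void       *dst;
        size_t      len;
        bool        done;        /* set by the "hardware" when the copy finishes */
    };

    static void dma_start(struct dma_controller *dma)
    {
        /* In real hardware this copy proceeds without the CPU; here memcpy stands in. */
        memcpy(dma->dst, dma->src, dma->len);
        dma->done = true;        /* would be signalled to the CPU as an interrupt */
    }

    int main(void)
    {
        char disk_buffer[] = "block read from disk";
        char memory_buffer[sizeof disk_buffer];

        struct dma_controller dma = { disk_buffer, memory_buffer, sizeof disk_buffer, false };
        dma_start(&dma);         /* CPU: program the transfer and move on */

        /* ... the CPU does other useful work here instead of copying bytes itself ... */

        if (dma.done)            /* "interrupt" received: transfer complete */
            printf("DMA finished: \"%s\"\n", memory_buffer);
        return 0;
    }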

7)Difference of RAM and DRAM

Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today, it takes the form of integrated circuits that allow stored data to be accessed in any order (i.e., at random). The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.[1]

By contrast, storage devices such as tapes, magnetic discs and optical discs rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than data transfer, and the retrieval time varies based on the physical location of the next item.

The word RAM is often associated with volatile types of memory (such as DRAM memory modules), where the information is lost after the power is switched off. Many other types of memory are RAM as well, including most types of ROM and a type of flash memory called NOR flash.

DRAM (dynamic random-access memory), by contrast, is one particular way of building RAM: each bit is stored as a charge on a tiny capacitor paired with a transistor. Because the charge leaks away, every cell must be periodically refreshed (read and rewritten), which is what makes it "dynamic". DRAM is cheap and dense, so main memory modules are made of it, whereas static RAM (SRAM), which stores each bit in a flip-flop and needs no refresh, is faster but more expensive and is used mainly for caches.

So the difference is one of category: RAM is the general class of random-access memory, and DRAM is the most common volatile implementation of it.

8)Main Memory

Refers to physical memory that is internal to the computer. The word main is used to distinguish it from external mass storage devices such as disk drives. Another term for main memory is RAM.

9)Magnetic Disk

Magnetic storage and magnetic recording are terms from engineering referring to the storage of data on a magnetized medium. Magnetic storage uses different patterns of magnetization in a magnetizable material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads. As of 2009, magnetic storage media, primarily hard disks, are widely used to store computer data as well as audio and video signals. In the field of computing, the term magnetic storage is preferred and in the field of audio and video production, the term magnetic recording is more commonly used. The distinction is less technical and more a matter of preference.

10)Storage Hierarchy

The range of memory and storage devices within the computer system, arranged from the slowest but largest (magnetic tape, optical and magnetic disks) up to the fastest but smallest (main memory, cache, and CPU registers).

Thursday, June 18, 2009

1) What’s the difference of OS in terms of users view and the system view?

The difference is one of perspective. From the user's view, an OS is judged by how convenient it makes the computer to use, by its efficiency (it should allow system resources to be used in an efficient manner), and by its ability to evolve (it should be constructed in such a way as to permit the effective development, testing, and introduction of new system functions without interfering with service).
From the system's view, the operating system is a resource manager: it manages the sharing of resources among competing entities, provides a number of common services that make applications easier to write, and serves as the interface between application programs and the hardware.

2) Explain the goals of OS.

  • The various roles or goals of an operating system generally revolve around the idea of “sharing nicely”.
  • An operating system manages resources, and these resources are often shared in one way or another among the programs that want to use them.

3) What’s the difference bet. batch systems, multiprogrammed systems, and time-sharing systems?

  • A batch system is one in which jobs are bundled together with the instructions necessary to allow them to be processed without intervention; jobs of a similar nature can be bundled together to further increase economy. A multiprogrammed system is one in which multiple tasks, also known as processes, share common processing resources such as the CPU; on a computer with a single CPU, only one task is actually running at any point in time, meaning the CPU is executing instructions for that task while the others wait. A time-sharing system extends multiprogramming by switching the CPU among jobs so frequently that many users, each at a terminal connected to a central server, can interact with their programs at the same time (a tiny round-robin sketch follows).
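A tiny round-robin sketch in C of the time-sharing idea from the answer above: each job gets a fixed quantum in turn, so all of them make progress and each user appears to have the machine to themselves. The user names and work amounts are made up.

    #include <stdio.h>

    struct job { const char *user; int work_left; };

    int main(void)
    {
        struct job jobs[] = { { "alice", 5 }, { "bob", 3 }, { "carol", 4 } };
        const int njobs = 3, quantum = 2;
        int remaining = 3;

        while (remaining > 0) {
            for (int i = 0; i < njobs; i++) {
                if (jobs[i].work_left <= 0) continue;
                int slice = jobs[i].work_left < quantum ? jobs[i].work_left : quantum;
                jobs[i].work_left -= slice;              /* run this job for one quantum */
                printf("%s runs for %d unit(s), %d left\n",
                       jobs[i].user, slice, jobs[i].work_left);
                if (jobs[i].work_left == 0) remaining--; /* job finished, drop it */
            }
        }
        return 0;
    }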

4) Advantages of Parallel system.

  • The advantage of a parallel system is that a computer with this architecture has higher performance and throughput and faster response times, makes efficient use of a fast interconnection network (which is essential for good performance), and can perform tasks that a non-parallel operating system could not. It can also use two hardware platforms with different performance characteristics to validate a conclusion or the data being processed.

5) Differentiate Symmetric multiprocessing and Asymmetric multiprocessing.

  • Asymmetric multiprocessing or ASMP is a type of multiprocessing supported in DEC's VMS V.3 as well as a number of older systems including TOPS-10 and OS-360. It varies greatly from the standard processing model that we see in personal computers today. Due to the complexity and unique nature of this architecture, it was not adopted by many vendors or programmers during its brief stint between 1970 - 1980.
    Whereas a symmetric multiprocessor or SMP treats all of the processing elements in the system identically, an ASMP system assigns certain tasks only to certain processors. In particular, only one processor may be responsible for fielding all of the interrupts in the system or perhaps even performing all of the I/O in the system. This makes the design of the I/O system much simpler, although it tends to limit the ultimate performance of the system. Graphics cards, physics cards and cryptographic accelerators which are subordinate to a CPU in modern computers can be considered a form of asymmetric multiprocessing. SMP is extremely common in the modern computing world; when people refer to "multi core" or "multi processing" they are most commonly referring to SMP.

6) Differentiate client-server system and peer-to-peer system.

  • The difference between a client-server and a peer-to-peer system is that a client-server system is a computing system composed of two logical parts: a server, which provides information or services, and a client, which requests them. On a network, for example, users can access server resources from their personal computers using client software (a minimal socket sketch follows). A peer-to-peer system is a networking method of delivering computer network services in which the participants share a portion of their own resources, such as processing power, disk storage, network bandwidth, or printing facilities. Such resources are provided directly to other participants without intermediary network hosts or servers.
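A minimal POSIX C sketch of the client-server shape described above (port 9000 and the greeting text are arbitrary choices, and error handling is trimmed for brevity): the server owns a resource, here just a message, and hands it to the client that requests it.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <sys/wait.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);      /* server: create listening socket */
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(9000);                    /* assumption: port 9000 is free */
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 1);

        if (fork() == 0) {                              /* child process acts as the client */
            int cli = socket(AF_INET, SOCK_STREAM, 0);
            connect(cli, (struct sockaddr *)&addr, sizeof addr);  /* request the service */
            char buf[64] = { 0 };
            read(cli, buf, sizeof buf - 1);
            printf("client received: %s\n", buf);
            close(cli);
            return 0;
        }

        int conn = accept(srv, NULL, NULL);             /* server: serve one client */
        const char msg[] = "hello from the server";
        write(conn, msg, sizeof msg);
        close(conn);
        close(srv);
        wait(NULL);                                     /* let the client finish */
        return 0;
    }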

7) Differentiate the design issues of OS between a stand-alone PC and a workstation connected to a network.

-The stand-alone PC runs in isolation, so its operating system only has to manage the local resources (CPU, memory, disks, and devices) for the users sitting at that machine. A workstation connected to a network must, in addition, deal with networking protocols, access to remote resources such as shared files and printers, user authentication and protection across machine boundaries, and keeping shared data consistent, so communication and security become central design issues.

8) Define the following types of OS: Batch, Time sharing, Real Time, Network, Distributed, Handheld

a. Batch-Batch systems are set up so that jobs can be run to completion without human interaction, so all input data is preselected through scripts or command-line parameters. This is in contrast to "online" or interactive programs, which prompt the user for such input. A program takes a set of data files as input, processes the data, and produces a set of output data files. This operating environment is termed "batch processing" because the input data are collected into batches of files and are processed in batches by the program.

b. Time Sharing-Time sharing is the sharing of a computing resource among many users by multitasking. Its introduction in the 1960s, and emergence as the prominent model of computing in the 1970s, marked a major shift in the history of computing. By allowing a large number of users to interact simultaneously with a single computer, time-sharing dramatically lowered the cost of providing computing while making the computing experience much more interactive.

c. Real Time-In computer science, real-time computing (RTC) is the study of hardware and software systems that are subject to a "real-time constraint", i.e., operational deadlines from event to system response. By contrast, a non-real-time system is one for which there is no deadline, even if fast response or high performance is desired or preferred. The needs of real-time software are often addressed in the context of real-time operating systems and synchronous programming languages, which provide frameworks on which to build real-time application software.

d. Network-Networked systems consist of multiple computers that are networked together, usually with a common operating system and shared resources. Users, however, are aware of the different computers that make up the system.

e. Distributed-Distributed computing deals with hardware and software systems containing more than one processing element or storage element, concurrent processes, or multiple programs, running under a loosely or tightly controlled regime.

f. Handheld-Handheld operating systems run on mobile devices such as handheld computers, PDAs, and handheld gaming devices; they have to work within the limited memory, slower processors, and small displays of such hardware.