Exercise Set Three

Chapter 8

Problem 8.3:  Given memory partitions of 100K, 500K, 200K, 300K and 600K (in order), how would each of the First-fit, Best-fit and Worst-fit algorithms place processes of 212K, 417K, 112K and 426K (in order)? Which algorithm makes the most efficient use of memory?

Answer:

  1.     First-fit
    1. 212K is put in the 500K partition.
    2. 417K is put in the 600K partition.
    3. 112K is put in the 288K partition (new partition: 288K = 500K - 212K).
    4. 426K must wait.
  2.     Best-fit
    1. 212K is put in the 300K partition.
    2. 417K is put in the 500K partition.
    3. 112K is put in the 200K partition.
    4. 426K is put in the 600K partition.
  3.     Worst-fit
    1. 212K is put in the 600K partition.
    2. 417K is put in the 500K partition.
    3. 112K is put in the 388K partition (new partition: 388K = 600K - 212K).
    4. 426K must wait.

In this example, Best-fit turns out to be the best.
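These placements can be checked with a small simulation. The helper below is illustrative (not from the text); it models each allocation as carving the request out of the chosen free hole, with the leftover hole keeping its position in the scan order:

```python
def place(holes, request, policy):
    """Return the index of the hole chosen under the policy, or None if the request must wait."""
    candidates = [i for i, h in enumerate(holes) if h >= request]
    if not candidates:
        return None
    if policy == "first":
        return candidates[0]                           # first hole big enough
    if policy == "best":
        return min(candidates, key=lambda i: holes[i]) # smallest hole big enough
    if policy == "worst":
        return max(candidates, key=lambda i: holes[i]) # largest hole

def simulate(policy, holes=(100, 500, 200, 300, 600),
             requests=(212, 417, 112, 426)):
    """Return the list of requests that must wait under the given policy."""
    holes = list(holes)
    waiting = []
    for r in requests:
        i = place(holes, r, policy)
        if i is None:
            waiting.append(r)
        else:
            holes[i] -= r   # shrink the hole in place
    return waiting
```

With these inputs, First-fit and Worst-fit each leave the 426K process waiting, while Best-fit places every process.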

Problem 8.6:    Why is it that, on a system with paging, a process cannot access memory it does not own? How could the operating system allow access to other memory? Why should it or should it not?

Answer:

    An address on a paging system consists of a logical page number and an offset. The physical frame is found by indexing a table with the logical page number to produce a physical frame number. Because the operating system controls the contents of this table, it can limit a process to accessing only those physical pages allocated to the process. There is no way for a process to refer to a page it does not own, because that page will not be in its page table. To allow such access, the operating system simply needs to add entries for non-process memory to the process's page table. This is useful when two or more processes need to exchange data: they just read and write to the same physical address (which may be at different logical addresses in each process). This makes for very efficient interprocess communication.
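A minimal sketch of this lookup, with dictionary-based page tables standing in for the real hardware structure (all names here are illustrative):

```python
PAGE_SIZE = 4096

def translate(page_table, logical_address):
    """Translate a logical address, trapping if the page is not mapped."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page not in page_table:
        # The process has no way to name a frame that is not in its table.
        raise MemoryError("protection trap: page %d not mapped" % page)
    return page_table[page] * PAGE_SIZE + offset

# Sharing for IPC: the OS enters the same physical frame (7) in both
# tables, at different logical pages in each process.
table_a = {0: 3, 1: 7}
table_b = {0: 9, 4: 7}
```

Both processes reach the same physical word through different logical addresses, and any unmapped page number traps instead of translating.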

Problem 8.9:    Consider a paging system with the page table stored in memory.

  1. If a memory reference takes 200 nanoseconds, how long does a paged memory reference take?
  2. If we add associative registers, and 75 percent of all page-table references are found in the associative registers, what is the effective memory reference time? (Assume that finding a page-table entry in the associative registers takes zero time, if the entry is there.)

Answer:

  1. 400 nanoseconds; 200 nanoseconds to access the page table and 200 nanoseconds to access the word in memory.
  2. Effective access time = 0.75 * (200 nanoseconds) + 0.25 * (400 nanoseconds) = 250 nanoseconds
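The arithmetic can be confirmed directly (times in nanoseconds; the function name is illustrative):

```python
MEM = 200  # one memory access, in nanoseconds

def effective_access(hit_ratio):
    """Effective access time given the associative-register hit ratio."""
    hit = MEM        # entry found in associative registers: one access
    miss = 2 * MEM   # page table in memory, then the word itself
    return hit_ratio * hit + (1 - hit_ratio) * miss
```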

Problem 8.12:    Consider the following segment table:
 
Segment    Base    Length
   0        219      600
   1       2300       14
   2         90      100
   3       1327      580
   4       1952       96

What are the physical addresses for the following logical addresses?

  1. 0,430
  2. 1,10
  3. 2,500
  4. 3,400
  5. 4,112

Answer:

  1. 219 + 430 = 649
  2. 2300 + 10 = 2310
  3. Illegal reference (offset 500 is not less than the segment length of 100); trap to the operating system.
  4. 1327 + 400 = 1727
  5. Illegal reference (offset 112 is not less than the segment length of 96); trap to the operating system.
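These translations follow from a base-plus-offset lookup with a length check; a small sketch, with the trap modeled as a raised exception:

```python
# Segment table from Problem 8.12: segment -> (base, length)
TABLE = {0: (219, 600), 1: (2300, 14), 2: (90, 100),
         3: (1327, 580), 4: (1952, 96)}

def physical(segment, offset):
    """Translate (segment, offset), trapping on an out-of-bounds offset."""
    base, length = TABLE[segment]
    if offset >= length:
        raise MemoryError("illegal reference, trap to operating system")
    return base + offset
```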

 


Chapter 9

Problem :    When do page faults occur? Describe the actions taken by the operating system when a page fault occurs.

Answer:

    A page fault occurs when an access is made to a page that has not been brought into main memory. The operating system verifies the memory access, aborting the program if it is invalid. If it is valid, a free frame is located and I/O is requested to read the needed page into the free frame. Upon completion of the I/O, the process table and page table are updated and the instruction is restarted.
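The sequence of steps can be sketched as a toy handler; all the structures here are simplified stand-ins for real OS data structures, not an actual interface:

```python
def access(page, page_table, memory, free_frames, backing_store, valid_pages):
    """Simulate one memory access; return (frame, faulted)."""
    if page not in valid_pages:
        # Invalid reference: the OS aborts the program.
        raise MemoryError("invalid access: abort program")
    if page in page_table:
        return page_table[page], False       # resident: no fault
    frame = free_frames.pop()                # locate a free frame
    memory[frame] = backing_store[page]      # I/O: read the page into the frame
    page_table[page] = frame                 # update the tables; restart access
    return frame, True
```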

Problem 9.5:    Suppose we have a demand-paged memory. The page table is held in registers. It takes 8 milliseconds to service a page fault if an empty frame is available or the replaced page is not modified, and 20 milliseconds if the replaced page is modified. Memory access time is 100 nanoseconds.
Assume that the page to be replaced is modified 70 percent of the time. What is the maximum acceptable page-fault rate for an effective access time of no more than 200 nanoseconds?

Answer:

    0.2 microseconds = (1 - P) * 0.1 microseconds + (0.3 P) * 8 milliseconds + (0.7 P) * 20 milliseconds
    0.1 = -0.1 P + 2,400 P + 14,000 P        (all terms in microseconds)
    0.1 ~ 16,400 P
    P ~ 0.000006
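Solving the effective-access-time equation for P directly (times in nanoseconds):

```python
MEM = 100                                      # memory access, ns
FAULT = 0.3 * 8_000_000 + 0.7 * 20_000_000     # average fault service: 16,400,000 ns
TARGET = 200                                   # required effective access time, ns

# TARGET = (1 - p) * MEM + p * FAULT  =>  p = (TARGET - MEM) / (FAULT - MEM)
p = (TARGET - MEM) / (FAULT - MEM)
```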


Problem :    Consider the two-dimensional array A:  var A: array[1..100] of array[1..100] of integer;
where A[1][1] is at location 200, in a paged memory system with pages of size 200. A small process is in page 0 (location 0 to 199) for manipulating the matrix; thus, every instruction fetch will be from page 0.
For three page frames, how many page faults are generated by the following array-initialization loops, using LRU replacement, and assuming page frame 1 has the process in it, and the other two are initially empty.

a.      for j := 1 to 100 do
            for i := 1 to 100 do
                A[i][j] := 0;

b.      for i := 1 to 100 do
            for j := 1 to 100 do
                A[i][j] := 0;
 

Answer:
    The array is stored in row-major order; that is, the first data page contains A[1][1], A[1][2], ..., A[2][100], the second page contains A[3][1], A[3][2], ..., A[4][100], and so on. With 100 x 100 integers and 200 integers per page, the array occupies data pages 1 through 50.

  1. The page reference string is 0, 1, 0, 1, 0, 2, 0, 2, ..., 0, 50, 0, 50, repeated for each of the 100 columns,
        and thus there will be 100 * 50 = 5000 page faults.
  2. The page reference string is 0, 1, 0, 1, ..., 0, 2, 0, 2, ..., 0, 50, 0, 50,
        and thus there will be 50 page faults.
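Both counts can be verified with a small LRU simulation, assuming (as above) 200-word pages, A[1][1] at location 200, and page 0 preloaded into one of the three frames:

```python
from collections import OrderedDict

def lru_faults(refs, frames, preload=()):
    """Count page faults for a reference string under LRU replacement."""
    mem = OrderedDict((p, None) for p in preload)
    faults = 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)            # refresh recency
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)   # evict least recently used
            mem[p] = None
    return faults

def page_refs(column_major):
    """Interleave instruction fetches (page 0) with data-page references."""
    if column_major:
        order = [(i, j) for j in range(1, 101) for i in range(1, 101)]
    else:
        order = [(i, j) for i in range(1, 101) for j in range(1, 101)]
    out = []
    for i, j in order:
        out.append(0)                                         # instruction fetch
        out.append((200 + (i - 1) * 100 + (j - 1)) // 200)    # data page of A[i][j]
    return out
```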

Problem 9.13:    A page-replacement algorithm should minimize the number of page faults. We can do this minimization by distributing heavily used pages evenly over all of memory, rather than having them compete for a small number of page frames. We can associate with each page frame a counter of the number of pages that are associated with that frame. Then, to replace a page, we search for the page frame with the smallest counter.

  1. Define a page-replacement algorithm using this basic idea. Specifically address the problems of
    1.  what the initial value of the counters is,
    2.  when counters are increased,
    3.  when counters are decreased, and
    4.  how the page to be replaced is selected.
  2. How many page faults occur for your algorithm for the following reference string, for four page frames? 1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2
  3. What is the minimum number of page faults for an optimal page-replacement strategy for the reference string in part b with four page frames?

Answer:

  1.     A page-replacement algorithm addressing these problems:
    1. Initial value of the counters: 0.
    2. Counters are increased whenever a new page is associated with that frame.
    3. Counters are decreased whenever one of the pages associated with that frame is no longer required.
    4. The page to be replaced is the one in the frame with the smallest counter. Use FIFO for breaking ties.
  2. 14 page faults
  3. 11 page faults
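Part (c) can be checked with Belady's optimal algorithm, which evicts the resident page whose next use lies furthest in the future:

```python
def opt_faults(refs, frames):
    """Count page faults under optimal (Belady) replacement."""
    mem = set()
    faults = 0
    for t, p in enumerate(refs):
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            future = refs[t + 1:]
            def next_use(q):
                # Distance to next use; pages never used again sort last.
                return future.index(q) if q in future else len(future)
            mem.remove(max(mem, key=next_use))
        mem.add(p)
    return faults

STRING = [1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2]
```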

Problem 9.14:    Consider a demand-paging system with a paging disk that has an average access and transfer time of 20 milliseconds. Addresses are translated through a page table in main memory, with an access time of 1 microsecond per memory access. Thus, each memory reference through the page table takes two accesses. To improve this time, we have added an associative memory that reduces access time to one memory reference, if the page-table entry is in the associative memory.
Assume that 80 percent of the accesses are in the associative memory, and that, of the remaining, 10 percent (or 2 percent of the total) cause page faults. What is the effective memory access time?

Answer:
Access time = (0.80) * (1 microsecond) + (0.18) * (2 microseconds) + (0.02) * (20,002 microseconds)
            = 401.2 microseconds
            ~ 0.4 milliseconds
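As a quick check (times in microseconds; the 20,002 on the fault path is 20 ms of disk service plus the two memory accesses through the page table):

```python
# 80% associative hit (1 access), 18% miss (2 accesses), 2% page fault.
eat = 0.80 * 1 + 0.18 * 2 + 0.02 * 20_002   # effective access time, microseconds
```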


Chapter 10

Problem : Give an example of an application in which data in a file should be accessed in the following order:

  1. Sequentially
  2. Randomly

Answer:

  1. Print the content of the file.
  2. Print the content of record i. This record can be found using hashing or index techniques.

Chapter 12

Problem 12.2:    Suppose that a disk drive has 5000 cylinders, numbered 0 to 4999. The drive is currently serving a request at cylinder 143, and the previous request was at cylinder 125. The queue of pending requests, in FIFO order, is

86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130.
Starting from the current head position, what is the total distance (in cylinders) that the disk arm moves to satisfy all the pending requests, for each of the following disk scheduling algorithms?
  1. FCFS
  2. SSTF
  3. SCAN
  4. LOOK
  5. C-SCAN
Answer:

  a. FCFS: 143 -> 86 -> 1470 -> 913 -> 1774 -> 948 -> 1509 -> 1022 -> 1750 -> 130. Total head movement: 7081 cylinders.
  b. SSTF: 143 -> 130 -> 86 -> 913 -> 948 -> 1022 -> 1470 -> 1509 -> 1750 -> 1774. Total head movement: 1745 cylinders.
  c. SCAN (the head is moving toward higher-numbered cylinders, since the previous request was at 125): 143 -> 913 -> 948 -> 1022 -> 1470 -> 1509 -> 1750 -> 1774 -> 4999 -> 130 -> 86. Total head movement: 9769 cylinders.
  d. LOOK: 143 -> 913 -> 948 -> 1022 -> 1470 -> 1509 -> 1750 -> 1774 -> 130 -> 86. Total head movement: 3319 cylinders.
  e. C-SCAN: 143 -> 913 -> 948 -> 1022 -> 1470 -> 1509 -> 1750 -> 1774 -> 4999 -> 0 -> 86 -> 130. Total head movement: 9985 cylinders.

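These schedules can be computed with a short simulation, assuming the head starts at 143 moving toward higher-numbered cylinders, and counting C-SCAN's return sweep to cylinder 0 in the total:

```python
QUEUE = [86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130]
HEAD, MAXCYL = 143, 4999

def distance(order, head=HEAD):
    """Total arm movement to serve cylinders in the given order."""
    total = 0
    for c in order:
        total += abs(c - head)
        head = c
    return total

def fcfs():
    return distance(QUEUE)

def sstf():
    pending, head, order = set(QUEUE), HEAD, []
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))  # shortest seek next
        order.append(nxt)
        pending.remove(nxt)
        head = nxt
    return distance(order)

def scan():   # sweep up to the last cylinder, then reverse
    up = sorted(c for c in QUEUE if c >= HEAD)
    down = sorted((c for c in QUEUE if c < HEAD), reverse=True)
    return distance(up + [MAXCYL] + down)

def look():   # reverse at the last pending request, not the disk edge
    up = sorted(c for c in QUEUE if c >= HEAD)
    down = sorted((c for c in QUEUE if c < HEAD), reverse=True)
    return distance(up + down)

def cscan():  # sweep up to the edge, jump to cylinder 0, continue upward
    up = sorted(c for c in QUEUE if c >= HEAD)
    low = sorted(c for c in QUEUE if c < HEAD)
    return distance(up + [MAXCYL, 0] + low)
```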

Chapter 13

Problem 13.3:    Consider the following I/O scenarios on a single-user PC.

  1. A mouse used with a graphical user interface
  2. A tape drive on a multitasking operating system (assume no device preallocation is available)
  3. A disk drive containing user files
  4. A graphics card with direct bus connection, accessible through memory-mapped I/O

For each of these I/O scenarios, would you design the operating system to use buffering, spooling, caching, or a combination? Would you use polled I/O, or interrupt-driven I/O? Give reasons for your choices.

Answer:

  1. Buffering may be needed to record mouse movement during times when higher-priority operations are taking place. Spooling and caching are inappropriate. Interrupt-driven I/O is most appropriate.
  2. Buffering may be needed to manage the throughput difference between the tape drive and the source or destination of the I/O. Caching can be used to hold copies of data that reside on the tape, for faster access. Spooling could be used to stage data to the device when multiple users desire to read from or write to it. Interrupt-driven I/O is likely to allow the best performance.
  3. Buffering can be used to hold data while in transit from user space to the disk, and vice versa. Caching can be used to hold disk-resident data for improved performance. Spooling is not necessary because disks are shared-access devices. Interrupt-driven I/O is best for devices such as disks that transfer data at slow rates.
  4. Buffering may be needed to control multiple access and for performance (double-buffering can be used to hold the next screen image while displaying the current one). Caching and spooling are not necessary due to the fast and shared-access nature of the device. Polling and interrupts are useful only for input and for I/O-completion detection, neither of which is needed for a memory-mapped device.

Problem 13.6:    Describe three circumstances under which blocking I/O should be used. Describe three circumstances under which nonblocking I/O should be used. Why not just implement nonblocking I/O and have processes busy-wait until their devices are ready?

Answer:

        Generally, blocking I/O is appropriate when the process will be waiting for only one specific event. Examples include a disk, tape, or keyboard read by an application program. Non-blocking I/O is useful when I/O may come from more than one source and the order of I/O arrival is not predetermined. Examples include network daemons listening to more than one network socket, window managers that accept mouse movement as well as keyboard input, and I/O-management programs, such as a copy command that copies data between I/O devices. In the last case, the program would optimize its performance by buffering the input and output and using non-blocking I/O to keep both devices fully occupied.
    Non-blocking I/O is more complicated for programmers, because of the asynchronous rendezvous that is needed when an I/O completes. Also, busy waiting is less efficient than interrupt-driven I/O, so overall system performance would decrease.
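The multi-source case can be sketched with Python's selectors module; here socket pairs stand in for two independent input devices, and one wait covers both sources without busy-waiting on either:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# Two socket pairs stand in for two independent input devices.
mouse_r, mouse_w = socket.socketpair()
kbd_r, kbd_w = socket.socketpair()
for s in (mouse_r, kbd_r):
    s.setblocking(False)                  # non-blocking reads
    sel.register(s, selectors.EVENT_READ)

mouse_w.sendall(b"move 3,4")
kbd_w.sendall(b"key a")

# A single select() call waits on both devices at once.
events = []
while len(events) < 2:
    for key, _ in sel.select(timeout=1):
        events.append(key.fileobj.recv(64))
```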