Saturday, 20 June 2020

Homework on Textbook Sections 5.4, 5.10, 5.7, 5.8, 5.13

For Questions 1 and 2
Suppose we have a processor with a base CPI of 1.3, assuming all references hit in the primary cache, and a clock rate of 1 GHz. Assume a main memory access time of 150 ns, including all the miss handling. Suppose the miss rate per instruction at the primary cache is 4%.
Question 1
What is the actual CPI?
Solution: 7.3 (the miss penalty is 150 ns / 1 ns per cycle = 150 cycles, so CPI = 1.3 + 0.04 × 150 = 7.3)

Question 2
How much faster will the processor be if we add a secondary cache that has a 10 ns access time and is large enough that the global miss rate to main memory is 1%?
Solution:
2.2813 (with the secondary cache, CPI = 1.3 + 0.04 × 10 + 0.01 × 150 = 3.2, so the speedup is 7.3 / 3.2 ≈ 2.2813)
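
For reference, a short Python sketch (not part of the assignment) that reproduces both answers under the stated assumptions, treating the 1 GHz clock as 1 ns per cycle:

    # Rough check of Questions 1 and 2; all miss penalties are in clock cycles.
    CLOCK_NS = 1.0            # 1 GHz clock -> 1 ns per cycle
    BASE_CPI = 1.3
    L1_MISS_RATE = 0.04       # primary-cache misses per instruction
    MEM_ACCESS_NS = 150.0     # main memory access time, including miss handling
    L2_ACCESS_NS = 10.0       # secondary-cache access time
    GLOBAL_MISS_RATE = 0.01   # misses per instruction that reach main memory

    mem_penalty = MEM_ACCESS_NS / CLOCK_NS   # 150 cycles
    l2_penalty = L2_ACCESS_NS / CLOCK_NS     # 10 cycles

    # Question 1: every primary-cache miss goes all the way to main memory.
    cpi_l1_only = BASE_CPI + L1_MISS_RATE * mem_penalty

    # Question 2: every primary miss pays the L2 penalty; global misses also pay memory.
    cpi_with_l2 = BASE_CPI + L1_MISS_RATE * l2_penalty + GLOBAL_MISS_RATE * mem_penalty

    print(cpi_l1_only)                # 7.3
    print(cpi_l1_only / cpi_with_l2)  # about 2.28125, i.e. the 2.2813 above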

For Questions 5 to 8
The RISC-V 32-bit architecture supports virtual memory with 32-bit virtual addresses mapping to 32-bit physical addresses. The page size is 4 Kbytes, and page table entries (PTEs) are 4 bytes each.
Translation is performed using a 2-level page table structure. Bits 31:22 of a virtual address index the first-level page table. If the selected first-level PTE is valid, it points to a second-level page table. Bits 21:12 of the virtual address then index that second-level page table. If the selected second-level PTE is valid, it points to the physical page.

Question 5
How many bytes are required for the first-level page table?
Solution:
4,096

Question 6
How many bytes are required for a second-level page table?
Solution:
4,096

Question 7
If the entire virtual address space were allocated, how many bytes would be required for page tables?
Solution:
4,198,400
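
As a quick check (a sketch using the figures above, not part of the assignment), the per-table size and the fully allocated total can be computed directly:

    # Questions 5-7: 4-byte PTEs, 10 index bits at each level (bits 31:22 and 21:12).
    PTE_BYTES = 4
    ENTRIES_PER_TABLE = 1 << 10                  # 1,024 entries per table

    table_bytes = ENTRIES_PER_TABLE * PTE_BYTES  # 4,096 bytes (Questions 5 and 6)

    # Question 7: the first-level table plus one second-level table per first-level entry.
    full_allocation = table_bytes + ENTRIES_PER_TABLE * table_bytes

    print(table_bytes, full_allocation)          # 4096 4198400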

Question 8
If the following ranges of virtual addresses are allocated:

    0x00000000 to 0x011FFFFF
    0x10000000 to 0x1FFFFFFF
    0x7FC00000 to 0x7FFFFFFF

How many bytes would be required for page tables?
Solution:
290,816 (the three ranges touch 5 + 64 + 1 = 70 distinct 4 MB regions, so 70 second-level tables plus the first-level table: 71 × 4,096 bytes)
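
The count can be reproduced with a short sketch (my own check, using the ranges exactly as given): each first-level entry covers a 4 MB region, so the second-level tables needed are the distinct 4 MB regions touched by the ranges.

    # Question 8: count distinct second-level tables for the allocated ranges.
    TABLE_BYTES = 4096
    L1_SHIFT = 22                 # bits 31:22 -> each first-level entry covers 4 MB

    ranges = [
        (0x00000000, 0x011FFFFF),
        (0x10000000, 0x1FFFFFFF),
        (0x7FC00000, 0x7FFFFFFF),
    ]

    second_level = set()
    for start, end in ranges:
        second_level.update(range(start >> L1_SHIFT, (end >> L1_SHIFT) + 1))

    total = (1 + len(second_level)) * TABLE_BYTES    # first-level table + 70 second-level
    print(len(second_level), total)                  # 70 290816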

Quiz on Textbook Sections 5.7, 5.8, 5.13

Question 1:
Match the following descriptions to the terms defined:

A technique that uses main memory as a “cache” for secondary storage
Solution: virtual memory

An address in main memory
Solution: physical address

An address that corresponds to a location in virtual space and is translated by address mapping to a physical address when memory is accessed
Solution: virtual address

The process by which a virtual address is mapped to an address used to access memory
Solution: address translation

Question 2:
In paged virtual memory, a virtual address is divided into a virtual page number and an offset within the page. The virtual page number is translated into a physical page number. The physical page number is joined with the offset to form the physical address.
Solution: True
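
As a toy illustration of this split (a sketch only, with a made-up one-entry page table, assuming 4 KB pages and therefore a 12-bit offset):

    # Split a virtual address into VPN and offset, translate, and reassemble.
    PAGE_SHIFT = 12
    PAGE_MASK = (1 << PAGE_SHIFT) - 1

    def translate(vaddr, page_table):
        vpn = vaddr >> PAGE_SHIFT        # virtual page number
        offset = vaddr & PAGE_MASK       # offset within the page (unchanged)
        ppn = page_table[vpn]            # look up the physical page number
        return (ppn << PAGE_SHIFT) | offset

    # Hypothetical mapping: virtual page 0x12345 -> physical page 0x00ABC.
    print(hex(translate(0x12345678, {0x12345: 0x00ABC})))   # 0xabc678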

Question 3:
Where is the page table located, and how is it found?
Solution: The page table is located in physical memory; its starting address is held in the Page Table Register.

Question 4:
If a page of virtual memory is not resident in main memory, how is the page accessed?
Solution: When translating the virtual address, the valid bit in the page table entry is found to be 0. An exception occurs, and the handler reads the page from secondary storage into main memory.

Question 5:
Why does a virtual memory system require a translation lookaside buffer (TLB)?

Solution: Without it, each virtual memory access would require two physical memory accesses, one for the page table entry and one for the translated location.
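
A toy sketch of the idea (not from the textbook): on a TLB hit, the translation is found without touching the in-memory page table at all.

    # A tiny fully associative TLB modelled as a dictionary from VPN to PPN.
    tlb = {}

    def translate_vpn(vpn, page_table):
        if vpn in tlb:                 # TLB hit: no page-table access needed
            return tlb[vpn]
        ppn = page_table[vpn]          # TLB miss: one extra memory access for the PTE
        tlb[vpn] = ppn                 # refill the TLB for future accesses
        return ppn

    page_table = {0x12345: 0x00ABC}        # hypothetical in-memory page table
    translate_vpn(0x12345, page_table)     # miss: fetches the PTE, fills the TLB
    translate_vpn(0x12345, page_table)     # hit: no page-table access this time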

Question 6:
What is the purpose of the reference bit in a page table entry?
Solution: It is used to implement approximate LRU replacement, by being set whenever the corresponding page is accessed.

Question 7:
Some processors use hardware to deal with a TLB miss, with circuits that locate the required PTE in physical memory and read it into the TLB. Other processors raise an exception on TLB miss and require the handler to read the PTE into the TLB. RISC-V uses the second approach.
Solution: True

Question 8:
Cache memory and virtual memory operate in different ways, since cache relies on locality of reference, whereas virtual memory does not.
Solution: False (virtual memory also relies on locality of reference)

Question 9
Match the following descriptions of causes of misses with the terms defined:

A cache miss caused by the first access to a block that has never been in the cache.
Solution:
compulsory miss

A cache miss that occurs because the cache, even with full associativity, cannot contain all the blocks needed to satisfy the request.
Solution:
capacity miss
           
A cache miss that occurs in a set-associative or direct-mapped cache when multiple blocks compete for the same set; such misses would be eliminated in a fully associative cache of the same size.
Solution:
conflict miss

Question 10
What is meant by the term "nonblocking cache"?
Solution:
A cache that allows an out-of-order processor to make references to the cache while the cache is handling an earlier miss.