This set of Computer Architecture Multiple Choice Questions & Answers (MCQs) focuses on Computer Architecture Set 21.

Q1 | To overcome the slow operating speeds of the secondary memory, we make use of faster flash drives.
  • true
  • false
Q2 | The fastest data access is provided using ________
  • caches
  • DRAMs
  • SRAMs
  • registers
Q3 | The memory inside the CPU that is used to store copies of data or instructions held in larger memories is called ________
  • level 1 cache
  • level 2 cache
  • registers
  • TLB
Q4 | The larger memory placed between the primary cache and the main memory is called ________
  • level 1 cache
  • level 2 cache
  • EEPROM
  • TLB
Q5 | The next level of the memory hierarchy after the L2 cache is ________
  • secondary storage
  • TLB
  • main memory
  • register
Q6 | The last level in the hierarchy of memory devices is ________
  • main memory
  • secondary memory
  • TLB
  • flash drives
Q7 | In the memory hierarchy, as the speed of operation increases, the memory size also increases.
  • true
  • false
Q8 | If we use flash drives instead of hard disks, then secondary storage can move above primary memory in the hierarchy.
  • true
  • false
Q9 | The reason for the implementation of the cache memory is ________
  • to increase the internal memory of the system
  • the difference in speeds of operation of the processor and memory
  • to reduce the memory access and cycle time
  • all of the mentioned
Q10 | The effectiveness of the cache memory is based on the property of
  • locality of reference
  • memory localisation
  • memory size
  • none of the mentioned
Q11 | The temporal aspect of the locality of reference means ________
  • that the recently executed instruction won’t be executed soon
  • that the recently executed instruction is temporarily not referenced
  • that the recently executed instruction will be executed soon again
  • none of the mentioned
Q12 | The spatial aspect of the locality of reference means ________
  • that the recently executed instruction is executed again next
  • that the recently executed instruction won’t be executed again
  • that the instruction executed will be executed at a later time
  • that the instructions in close proximity to the instruction just executed will be executed in the future
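Q11 and Q12 above can be made concrete with a short sketch. The function below is illustrative (the names `sum_array`, `data`, and `total` are assumptions, not from the quiz): a sequential loop exhibits both forms of locality that make caches effective.

```python
# Illustrative sketch of temporal and spatial locality in a plain loop.

def sum_array(data):
    """Sum a list sequentially.

    Temporal locality: the accumulator `total` (and the loop's own
    instructions) are reused on every iteration, so a cache keeps them hot.
    Spatial locality: elements are read at adjacent addresses, so fetching
    one cache block also brings in the neighbours needed next.
    """
    total = 0
    for i in range(len(data)):   # sequential indices -> spatial locality
        total += data[i]         # `total` reused each pass -> temporal locality
    return total

print(sum_array([1, 2, 3, 4]))   # 10
```

A strided or random access pattern over the same list would break the spatial part of this locality, which is why such patterns perform worse on real caches.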
Q13 | The correspondence between the main memory blocks and those in the cache is given by the ________
  • hash function
  • mapping function
  • locale function
  • assign function
Q14 | The algorithm used to remove old contents from the cache and place new contents into it is called
  • replacement algorithm
  • renewal algorithm
  • update algorithm
  • none of the mentioned
Q15 | The write-through procedure is used
  • to write onto the memory directly
  • to write and read from memory simultaneously
  • to write directly to the memory and the cache simultaneously
  • none of the mentioned
Q16 | The bit used to signify that the cache location has been updated is ________
  • dirty bit
  • update bit
  • reference bit
  • flag bit
Q17 | The copy-back protocol is used
  • to copy the contents of the memory onto the cache
  • to update the contents of the memory from the cache
  • to remove the contents of the cache and push them onto the memory
  • none of the mentioned
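The write policies behind Q15–Q17 can be sketched with a toy single-entry cache. This is a minimal model under stated assumptions (the class `TinyCache` and its method names are invented for illustration): write-through updates memory and cache together, while write-back (copy-back) sets a dirty bit and updates memory only when the entry is flushed.

```python
# Toy single-entry cache modelling write-through vs. write-back (copy-back).

class TinyCache:
    def __init__(self, memory, write_back=True):
        self.memory = memory          # backing store: dict addr -> value
        self.addr = None              # address of the one cached entry
        self.value = None
        self.dirty = False            # dirty bit: cache newer than memory?
        self.write_back = write_back

    def write(self, addr, value):
        if self.write_back:
            # copy-back / write-back: update only the cache, mark it dirty
            self._load(addr)
            self.value = value
            self.dirty = True
        else:
            # write-through: update the cache and memory simultaneously
            self.addr, self.value = addr, value
            self.memory[addr] = value

    def _load(self, addr):
        if self.addr != addr:
            self.flush()
            self.addr = addr
            self.value = self.memory.get(addr)

    def flush(self):
        # copy-back protocol: memory is updated from the cache on eviction
        if self.dirty and self.addr is not None:
            self.memory[self.addr] = self.value
            self.dirty = False

mem = {0: 7}
c = TinyCache(mem, write_back=True)
c.write(0, 42)
print(mem[0])   # 7: write-back deferred the memory update (dirty bit set)
c.flush()
print(mem[0])   # 42: copy-back updated memory from the cache
```

With `write_back=False` the first `print` would already show 42, which is the "simultaneous" behaviour the correct option in Q15 describes.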
Q18 | The approach where the memory contents are transferred directly to the processor from the memory is called
  • read-later
  • read-through
  • early-start
  • none of the mentioned
Q19 | In                   protocol the information is directly written into the main memory.
  • write through
  • write back
  • write first
  • none of the mentioned
Q20 | The only drawback of using the early-start protocol is ________
  • time delay
  • complexity of circuit
  • latency
  • high miss rate
Q21 | During a write operation, if the required block is not present in the cache, then ________ occurs.
  • write latency
  • write hit
  • write delay
  • write miss
Q22 | While using the direct mapping technique, in a 16-bit system the higher-order 5 bits are used for the ________
  • tag
  • block
  • word
  • id
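Q22's 5-bit tag implies a particular split of the 16-bit address. Assuming the common textbook layout consistent with the question (5-bit tag, 7-bit block, 4-bit word — i.e. 128 cache blocks of 16 words each; the 7/4 split is an assumption, only the 5-bit tag is stated), the fields can be extracted with shifts and masks:

```python
# Splitting a 16-bit address for direct mapping.
# Assumed layout: 5-bit tag | 7-bit block | 4-bit word.

TAG_BITS, BLOCK_BITS, WORD_BITS = 5, 7, 4
assert TAG_BITS + BLOCK_BITS + WORD_BITS == 16

def split_direct(addr):
    word  = addr & ((1 << WORD_BITS) - 1)                 # low 4 bits
    block = (addr >> WORD_BITS) & ((1 << BLOCK_BITS) - 1)  # middle 7 bits
    tag   = addr >> (WORD_BITS + BLOCK_BITS)               # high 5 bits (Q22)
    return tag, block, word

tag, block, word = split_direct(0b10110_0000011_0101)
print(tag, block, word)   # 22 3 5
```

On a lookup, the block field selects the single cache line the address can occupy, and only the stored tag is compared, which is what makes direct mapping cheap.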
Q23 | In direct mapping, the presence of the block in memory is checked with the help of the block field.
  • true
  • false
Q24 | In associative mapping, in a 16-bit system the tag field has ________ bits.
  • 12
  • 8
  • 9
  • 10
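The arithmetic behind Q24 is short: in fully associative mapping there is no block field, so only the word-offset bits are excluded from the tag. Assuming a 4-bit word field (an assumption consistent with the 16-bit examples above), the tag is 16 − 4 = 12 bits:

```python
# Tag width in fully associative mapping: everything except the word offset.
# Assumed: 16-bit addresses, 4-bit word field.

ADDR_BITS, WORD_BITS = 16, 4
TAG_BITS = ADDR_BITS - WORD_BITS
print(TAG_BITS)   # 12

def split_associative(addr):
    word = addr & ((1 << WORD_BITS) - 1)
    tag  = addr >> WORD_BITS      # 12-bit tag, compared against every line
    return tag, word

print(split_associative(0xABCD))  # (2748, 13)
```

Because the tag must be compared against every cache line in parallel, associative mapping needs more comparator hardware than direct mapping, which is the point of Q25 below.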
Q25 | Associative mapping is costlier than direct mapping.
  • true
  • false