
This set of High Performance Computing (HPC) Multiple Choice Questions & Answers (MCQs) focuses on High Performance Computing Set 7.

Q1 | Every node has to know when to communicate, that is, to __________
  • call the procedure
  • call for broadcast
  • call for communication
  • call the congestion
Q2 | The procedure is distributed and requires only point-to-point __________
  • synchronization
  • communication
  • both
  • none
Q3 | Renaming relative to the source is __________ with the source.
  • xor
  • xnor
  • and
  • nand
Q4 | A task dependency graph is __________
  • directed
  • undirected
  • directed acyclic
  • undirected acyclic
Q5 | In a task dependency graph, the longest directed path between any pair of start and finish nodes is called the __________
  • total work
  • critical path
  • task path
  • task length
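As background for Q4–Q5, here is a minimal Python sketch (not part of the original question set; the graph, task weights, and function name are invented for illustration) of computing the critical-path length of a task dependency DAG by relaxing tasks in topological order:

```python
from collections import defaultdict

def critical_path_length(tasks, deps):
    """Longest directed path through a task dependency DAG.

    tasks: {task: weight}; deps: list of (u, v) edges meaning u must
    finish before v can start.  Assumes the graph is acyclic.
    """
    succ = defaultdict(list)
    indeg = {t: 0 for t in tasks}
    for u, v in deps:
        succ[u].append(v)
        indeg[v] += 1
    # Process tasks in topological order, tracking the latest finish time.
    finish = {t: tasks[t] for t in tasks}
    ready = [t for t in tasks if indeg[t] == 0]
    while ready:
        u = ready.pop()
        for v in succ[u]:
            finish[v] = max(finish[v], finish[u] + tasks[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(finish.values())

# Diamond-shaped DAG with unit weights: a -> {b, c} -> d.
print(critical_path_length({"a": 1, "b": 1, "c": 1, "d": 1},
                           [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]))
# 3
```

The critical path bounds parallel execution time: no schedule, however many processors, can finish faster than the longest chain of dependent tasks.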
Q6 | Which of the following is not a granularity type?
  • coarse grain
  • large grain
  • medium grain
  • fine grain
Q7 | Which of the following is an example of data decomposition?
  • matrix multiplication
  • merge sort
  • quick sort
  • 15-puzzle
Q8 | Which problems can be handled by recursive decomposition?
  • backtracking
  • greedy method
  • divide and conquer
  • branch and bound
Q9 | In which decomposition does problem decomposition go hand in hand with its execution?
  • data decomposition
  • recursive decomposition
  • exploratory decomposition
  • speculative decomposition
Q10 | Which of the following is not an example of exploratory decomposition?
  • N-queens problem
  • 15-puzzle problem
  • tic-tac-toe
  • quick sort
Q11 | Topological sort can be applied to which of the following graphs?
  • undirected cyclic graphs
  • directed cyclic graphs
  • undirected acyclic graphs
  • directed acyclic graphs
Q12 | In most cases, a topological sort starts from a node which has __________
  • maximum degree
  • minimum degree
  • any degree
  • zero degree
Q13 | Which of the following is not an application of topological sorting?
  • finding the prerequisites of a task
  • finding deadlock in an operating system
  • finding a cycle in a graph
  • ordered statistics
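As background for Q11–Q13, a minimal sketch of Kahn's algorithm (function name and example graphs are invented for illustration): it starts from the zero in-degree nodes, and any vertex left unprocessed reveals a cycle, which is why topological sorting doubles as cycle detection:

```python
from collections import deque

def topo_sort(vertices, edges):
    """Kahn's algorithm: repeatedly remove zero in-degree vertices.
    Returns a topological order, or None if the graph has a cycle."""
    succ = {v: [] for v in vertices}
    indeg = {v: 0 for v in vertices}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in vertices if indeg[v] == 0)  # zero-degree start
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Vertices left over are all on or behind a cycle.
    return order if len(order) == len(vertices) else None

print(topo_sort("abcd", [("a", "b"), ("b", "c"), ("c", "d")]))  # ['a', 'b', 'c', 'd']
print(topo_sort("ab", [("a", "b"), ("b", "a")]))                # None (cycle)
```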
Q14 | __________s are defined before starting the execution of the algorithm
  • dynamic task
  • static task
  • regular task
  • one way task
Q15 | Which of the following is not an array distribution method of data partitioning?
  • block
  • cyclic
  • block cyclic
  • chunk
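As background for Q15, a small illustration (hypothetical helper, not a library API) of the block, cyclic, and block-cyclic array distributions, mapping each array index to the process that owns it:

```python
def distribute(n, p, scheme, block=2):
    """Map array indices 0..n-1 to p processes under the common
    array distribution schemes named in Q15."""
    if scheme == "block":
        size = -(-n // p)  # ceiling division: contiguous chunks
        return [i // size for i in range(n)]
    if scheme == "cyclic":
        return [i % p for i in range(n)]             # round-robin by element
    if scheme == "block-cyclic":
        return [(i // block) % p for i in range(n)]  # round-robin by block
    raise ValueError(scheme)

print(distribute(8, 2, "block"))         # [0, 0, 0, 0, 1, 1, 1, 1]
print(distribute(8, 2, "cyclic"))        # [0, 1, 0, 1, 0, 1, 0, 1]
print(distribute(8, 2, "block-cyclic"))  # [0, 0, 1, 1, 0, 0, 1, 1]
```

Block favors locality, cyclic favors load balance, and block-cyclic trades between the two; "chunk" is not a standard array distribution name.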
Q16 | Blocking optimization is used to improve temporal locality and reduce __________
  • hit miss
  • misses
  • hit rate
  • cache misses
Q17 | In CUDA, the 'unifying theme' of every form of parallelism is the __________
  • cda thread
  • pta thread
  • cuda thread
  • cud thread
Q18 | A topological sort of a directed acyclic graph is __________
  • always unique
  • always not unique
  • sometimes unique and sometimes not unique
  • always unique if the graph has an even number of vertices
Q19 | Threads are blocked together and executed in sets of 32 threads, called a __________
  • thread block
  • 32 thread
  • 32 block
  • unit block
Q20 | True or False: The threads in a thread block are distributed across SM units so that each thread is executed by one SM unit.
  • true
  • false
Q21 | When is the topological sort of a graph unique?
  • when there exists a Hamiltonian path in the graph
  • in the presence of multiple nodes with indegree 0
  • in the presence of a single node with indegree 0
  • in the presence of a single node with outdegree 0
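As background for Q18 and Q21, a sketch of the uniqueness test (helper name invented for illustration): the topological order is unique exactly when every step of Kahn's algorithm has a single zero in-degree candidate, which is equivalent to the graph containing a Hamiltonian path:

```python
def topo_sort_unique(vertices, edges):
    """True iff the DAG has exactly one topological order, i.e. at every
    step exactly one vertex has in-degree 0 (equivalently, the order
    forms a Hamiltonian path).  Assumes the graph is acyclic."""
    succ = {v: [] for v in vertices}
    indeg = {v: 0 for v in vertices}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = [v for v in vertices if indeg[v] == 0]
    while ready:
        if len(ready) > 1:
            return False  # a tie means more than one valid order
        u = ready.pop()
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return True

print(topo_sort_unique("abc", [("a", "b"), ("b", "c")]))  # True  (a path)
print(topo_sort_unique("abc", [("a", "b"), ("a", "c")]))  # False (b and c tie)
```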
Q22 | What is a high-performance multi-core processor that can be used to accelerate a wide variety of applications using parallel computing?
  • cpu
  • dsp
  • gpu
  • clu
Q23 | A good mapping does not depend on which of the following factors?
  • knowledge of task sizes
  • the size of data associated with tasks
  • characteristics of inter-task interactions
  • task overhead
Q24 | CUDA is a parallel computing platform and programming model.
  • true
  • false
Q25 | Which of the following is not a form of parallelism supported by CUDA?
  • vector parallelism - floating point computations are executed in parallel on wide vector units
  • thread level task parallelism - different threads execute different tasks
  • block and grid level parallelism - different blocks or grids execute different tasks
  • data parallelism - different threads and blocks process different parts of data in memory