This set of Multi-core Processor Multiple Choice Questions & Answers (MCQs) focuses on "Multi-core Processors – Set 2".

Q1 | Let S and Q be two semaphores initialized to 1, and let processes P0 and P1 execute the following statements: P0 executes wait(S); wait(Q); ---; signal(S); signal(Q); while P1 executes wait(Q); wait(S); ---; signal(Q); signal(S);. The above situation depicts a ____.
  • livelock
  • critical section
  • deadlock
  • mutual exclusion
Q2 | Which of the following conditions must be satisfied to solve the critical section problem?
  • mutual exclusion
  • progress
  • bounded waiting
  • all of the mentioned
Q3 | Bounded waiting implies that there exists a bound on the number of times a process is allowed to enter its critical section ____.
  • after a process has made a request to enter its critical section and before the request is granted
  • when another process is in its critical section
  • before a process has made a request to enter its critical section
  • none of the mentioned
Q4 | A minimum of ____ variable(s) is/are required to be shared between processes to solve the critical section problem.
  • one
  • two
  • three
  • four
Q5 | What are spinlocks?
  • locks that waste CPU cycles over critical sections of programs
  • locks that avoid time wastage in context switches
  • locks that work better on multiprocessor systems
  • all of the mentioned
Q6 | What is the main disadvantage of spinlocks?
  • they are not sufficient for many processes
  • they require busy waiting
  • they are unreliable sometimes
  • they are too complex for programmers
Q7 | What is the maximum number of processes that can be inside the critical section at any moment (the mutex being initialized to 1)?
  • 1
  • 2
  • 3
  • none of the mentioned
Q8 | Here, w1 and w2 are shared variables, which are initialized to false. Which one of the following statements is TRUE about the above construct?
  • it does not ensure mutual exclusion
  • it does not ensure bounded waiting
  • it requires that processes enter the critical section in strict alternation
  • it does not prevent deadlocks but ensures mutual exclusion
Q9 | The signal operation of a semaphore is basically built on the ____ system call.
  • continue()
  • wakeup()
  • getup()
  • start()
Q10 | Which directive must precede the directive #pragma omp sections (not necessarily immediately)?
  • #pragma omp section
  • #pragma omp parallel
  • none
  • #pragma omp master
Q11 | When compiling an OpenMP program with gcc, what flag must be included?
  • -fopenmp
  • #pragma omp parallel
  • -o hello
  • ./openmp
Q12 | The ____ directive specifies that the structured block should be executed by a team of threads running the same program, i.e., each thread executes the same code.
  • parallel
  • section
  • single
  • master
Q13 | The ____ clause specifies that the iterations of the loop must be executed in the same order as they would be in a serial program.
  • nowait
  • ordered
  • collapse
  • for loops
Q14 | The ____ of a parallel region includes the code of functions that are called (directly or indirectly) from within the parallel region.
  • lexical extent
  • static extent
  • dynamic extent
  • none of the above
Q15 | The ____ specifies that the iterations of the for loop should be executed in parallel by multiple threads.
  • sections construct
  • for pragma
  • single construct
  • parallel for construct
Q16 | The ____ function returns the number of threads that are active in the parallel section region.
  • omp_get_num_procs()
  • omp_get_num_threads()
  • omp_get_thread_num()
  • omp_set_num_threads()
Q17 | In a guided schedule, the size of the initial chunk is ____.
  • total_no_of_iterations / max_threads
  • total_no_of_remaining_iterations / max_threads
  • total_no_of_iterations / no_threads
  • total_no_of_remaining_iterations / no_threads
Q18 | Which of the following characteristics of parallel architectures affect parallelization?
  • performance
  • latency
  • bandwidth
  • accuracy
Q19 | Into how many machine instructions is the statement global_count += 5; typically translated?
  • 4 instructions
  • 3 instructions
  • 5 instructions
  • 2 instructions
Q20 | ____ generates log files of MPI calls.
  • mpicxx
  • mpilog
  • mpitrace
  • mpianim
Q21 | The ____ field of a message can be used by the receiving process to selectively screen messages.
  • dest
  • type
  • address
  • length
Q22 | ____ combines data from all processes, reducing them in this case and returning the result to a single process.
  • mpi_reduce
  • mpi_bcast
  • mpi_finalize
  • mpi_comm_size
Q23 | The easiest way to create communicators with new groups is with ____.
  • mpi_comm_rank
  • mpi_comm_create
  • mpi_comm_split
  • mpi_comm_group
Q24 | ____ is an object that holds information about the received message, including, for example, its actual count.
  • buff
  • count
  • tag
  • status
Q25 | The ____ operation similarly computes an element-wise reduction of vectors, but this time leaves the result scattered among the processes.
  • reduce-scatter
  • reduce (to-one)
  • allreduce
  • none of the above