
This set of Cloud Computing Multiple Choice Questions & Answers (MCQs) focuses on Cloud Computing Set 37.

Q1 | Point out the correct statement.
  • With Atmos, you can create your own cloud storage system or leverage a public cloud service with Atmos Online
  • IBM is a major player in cloud computing, particularly for businesses
  • In managed storage, the storage service provider makes storage capacity available to users
  • All of the mentioned
Q2 | Redundancy has to be implemented at the                   architectural level for effective results in cloud computing.
  • Lower
  • Higher
  • Middle
  • All of the mentioned
Q3 | Which of the following can manage data from CIFS and NFS file systems over HTTP networks?
  • StorageGRID
  • DataGrid
  • DiskGrid
  • All of the mentioned
Q4 | Point out the wrong statement.
  • AWS S3 essentially lets you create your own cloud storage
  • AWS created “availability zones” within regions, which are sets of systems that are isolated from one another
  • Amazon Web Services (AWS) adds redundancy to its IaaS systems by allowing EC2 virtual machine instances
  • None of the mentioned
Q5 | A                  is a logical unit that serves as the target for storage operations, such as the SCSI protocol READs and WRITEs.
  • GETs
  • PUN
  • LUN
  • All of the mentioned
Q6 | Which of the following use LUNs to define a storage volume that appears to a connected computer as a device?
  • SAN
  • iSCSI
  • Fibre Channel
  • All of the mentioned
Q7 | Which of the following protocols is used for discovering and retrieving objects from a cloud?
  • OCCI
  • SMTP
  • HTTP
  • All of the mentioned
Q8 | Which of the following disk operations is performed when a tenant is granted access to a virtual storage container?
  • CRUD
  • File system modifications
  • Partitioning
  • All of the mentioned
Q9 | Which of the following standards connects distributed hosts or tenants to their provisioned storage in the cloud?
  • CDMI
  • OCMI
  • COA
  • All of the mentioned
Q10 | IBM and                  have announced a major initiative to use Hadoop to support university courses in distributed computer programming.
  • Google Latitude
  • Android (operating system)
  • Google Variations
  • Google
Q11 | Point out the correct statement.
  • Hadoop is an ideal environment for extracting and transforming small volumes of data
  • Hadoop stores data in HDFS and supports data compression/decompression
  • The Giraph framework is less useful than a MapReduce job to solve graph and machine learning problems
  • None of the mentioned
Q12 | What license is Hadoop distributed under?
  • Apache License 2.0
  • Mozilla Public License
  • Shareware
  • Commercial
Q13 | Sun also has the Hadoop Live CD                 project, which allows running a fully functional Hadoop cluster using a live CD.
  • OpenOffice.org
  • OpenSolaris
  • GNU
  • Linux
Q14 | Hadoop achieves reliability by replicating the data across multiple hosts and hence does not require                  storage on hosts.
  • RAID
  • Standard RAID levels
  • ZFS
  • Operating system
Q15 | What was Hadoop written in?
  • Java (software platform)
  • Perl
  • Java (programming language)
  • Lua (programming language)
Q16 | The Hadoop list includes the HBase database, the Apache Mahout                   system, and matrix operations.
  • Machine learning
  • Pattern recognition
  • Statistical classification
  • Artificial intelligence
TOPIC 5.2: MAPREDUCE

Q17 | The Mapper implementation processes one line at a time via the                    method (see the WordCount sketch after Q18).
  • map
  • reduce
  • mapper
  • reducer
Q18 | Point out the correct statement.
  • Mapper maps input key/value pairs to a set of intermediate key/value pairs
  • Applications typically implement the Mapper and Reducer interfaces to provide the map and reduce methods
  • Mapper and Reducer interfaces form the core of the job
  • None of the mentioned
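
Note (Q17 and Q18): a minimal WordCount sketch against the classic org.apache.hadoop.mapred API; the class and field names are illustrative, not from this question set. With TextInputFormat, the framework calls map() once per line of input, and the application provides the map and reduce methods by implementing the Mapper and Reducer interfaces.

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    public class WordCount {

      // Maps input key/value pairs to a set of intermediate key/value pairs (Q18).
      public static class Map extends MapReduceBase
          implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        // The framework calls map() once per record; with TextInputFormat,
        // that is one line of the input at a time (Q17).
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            output.collect(word, ONE);          // emit (word, 1)
          }
        }
      }

      // Reduces the set of intermediate values that share a key.
      public static class Reduce extends MapReduceBase
          implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
          int sum = 0;
          while (values.hasNext()) {
            sum += values.next().get();
          }
          output.collect(key, new IntWritable(sum));  // emit (word, total)
        }
      }
    }
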
Q19 | The Hadoop MapReduce framework spawns one map task for each                       generated by the InputFormat for the job.
  • OutputSplit
  • InputSplit
  • InputSplitStream
  • All of the mentioned
Q20 | Users can control which keys (and hence records) go to which Reducer by implementing a custom                     (see the sketch after the options).
  • Partitioner
  • OutputSplit
  • Reporter
  • All of the mentioned
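
Note (Q20): a sketch of a custom Partitioner in the same classic org.apache.hadoop.mapred API; the class name and routing rule are hypothetical. The framework calls getPartition() for every intermediate key/value pair, and the returned index selects the Reducer that receives the key.

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Partitioner;

    // Hypothetical rule: keys that start with a digit all go to the last
    // reducer; every other key is spread by hash, as HashPartitioner does.
    public class FirstCharPartitioner implements Partitioner<Text, IntWritable> {

      public void configure(JobConf job) {
        // No tunable settings in this sketch.
      }

      public int getPartition(Text key, IntWritable value, int numPartitions) {
        String k = key.toString();
        if (!k.isEmpty() && Character.isDigit(k.charAt(0))) {
          return numPartitions - 1;
        }
        return (k.hashCode() & Integer.MAX_VALUE) % numPartitions;
      }
    }

A job opts in with conf.setPartitionerClass(FirstCharPartitioner.class); the numPartitions passed in equals the number of reduce tasks for the job (Q21).
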
Q21 | Point out the wrong statement.
  • The Mapper outputs are sorted and then partitioned per Reducer
  • The total number of partitions is the same as the number of reduce tasks for the job
  • The intermediate, sorted outputs are always stored in a simple (key-len, key, value-len, value) format
  • None of the mentioned
Q22 | Applications can use the                          to report progress and set application-level status messages (see the sketch after the options).
  • Partitioner
  • OutputSplit
  • Reporter
  • All of the mentioned
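
Note (Q22): a sketch of a Mapper using the Reporter handle the framework passes into map(); the counter enum and the 10000-record threshold are invented for illustration. setStatus(), incrCounter(), and progress() are the relevant Reporter calls.

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class ReportingMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {

      // Application-level counter group (hypothetical).
      private enum Records { SEEN }

      private long seen = 0;

      public void map(LongWritable key, Text value,
                      OutputCollector<Text, LongWritable> output, Reporter reporter)
          throws IOException {
        seen++;
        reporter.incrCounter(Records.SEEN, 1);                 // application counter
        if (seen % 10000 == 0) {
          reporter.setStatus("processed " + seen + " records"); // status message
          reporter.progress();   // tell the framework the task is still alive
        }
        output.collect(value, new LongWritable(1));
      }
    }
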
Q23 | The right level of parallelism for maps seems to be around                    maps per node.
  • 1-10
  • 10-100
  • 100-150
  • 150-200
Q24 | The number of reduces for the job is set by the user via                     (see the driver sketch after the options).
  • JobConf.setNumTasks(int)
  • JobConf.setNumReduceTasks(int)
  • JobConf.setNumMapTasks(int)
  • All of the mentioned
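
Note (Q24): a driver sketch showing where that call sits; the paths, the job name, and the value 8 are placeholders. Unlike setNumReduceTasks(int), JobConf.setNumMapTasks(int) is only a hint; the actual number of map tasks follows the InputSplits (Q19).

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class Driver {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(Driver.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(WordCount.Map.class);     // from the sketch under Q18
        conf.setReducerClass(WordCount.Reduce.class);

        conf.setNumReduceTasks(8);                    // the user sets this (Q24)

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
      }
    }
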
Q25 | The framework groups Reducer inputs by key in the                    stage.
  • Sort
  • Shuffle
  • Reduce
  • None of the mentioned