# High Performance Computing MCQ SPPU Unit 4

## High Performance Computing MCQ Questions and Answers

1. Mathematically, efficiency is

1. E = S/p
2. E = p/S
3. E·S = p/2
4. E = (p + E)/E

E = S/p
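The formula above can be sketched directly in Python; the timings used here are made-up numbers for illustration, not from the source:

```python
# Speedup S = T_serial / T_parallel; efficiency E = S / p.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    # Efficiency is speedup divided by the number of processing elements.
    return speedup(t_serial, t_parallel) / p

S = speedup(100.0, 25.0)        # hypothetical timings: 4x speedup
E = efficiency(100.0, 25.0, 8)  # 4 / 8 = 0.5, i.e. 50% efficiency
print(S, E)
```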

2. The cost of a parallel system is sometimes referred to as the ______ product.

1. work
2. processor time
3. both
4. none

both

3. In the scaling characteristics of parallel programs, the serial time Ts is

1. increase
2. constant
3. decreases
4. none

constant

4. Speedup tends to saturate and efficiency ______ as a consequence of Amdahl’s law.

1. increase
2. constant
3. decreases
4. none

decreases
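A quick numeric sketch of this saturation; the 10% serial fraction is an assumed value for illustration:

```python
# Amdahl's law: speedup = 1 / (f + (1 - f)/p) for serial fraction f.
def amdahl_speedup(serial_fraction, p):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# With f = 0.1, speedup saturates below 10 no matter how many
# processors we add, so efficiency (speedup / p) keeps decreasing.
for p in (2, 8, 64, 1024):
    s = amdahl_speedup(0.1, p)
    print(p, round(s, 2), round(s / p, 4))  # p, speedup, efficiency
```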

5. Scaled speedup is the speedup obtained when the problem size is ______ linearly with the number of processing elements.

1. increased
2. kept constant
3. decreased
4. dependent on problem size

increased

6. The n × n matrix is partitioned among n processors, with each processor storing a complete ______ of the matrix.

1. row
2. column
3. both
4. depend on processor

row
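A minimal serial sketch of this 1-D rowwise partitioning, with each loop iteration standing in for one processor's local work (the matrix values are illustrative):

```python
# Each of the n "processors" owns one complete row of an n x n matrix
# and computes one entry of y = A @ x.
n = 4
A = [[i + j for j in range(n)] for i in range(n)]
x = [1] * n

def local_matvec(row, x):
    # The work a single processor performs on its own row.
    return sum(a * b for a, b in zip(row, x))

y = [local_matvec(A[i], x) for i in range(n)]  # processor i owns row i
print(y)  # [6, 10, 14, 18]
```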

7. Cost-optimal parallel systems have an efficiency of ______

1. 1
2. n
3. logn
4. complex

1

8. The n × n matrix is partitioned among n² processors such that each processor owns a ______ element.

1. n
2. 2n
3. single
4. double

single

9. How many basic communication operations are used in matrix-vector multiplication?

1. 1
2. 2
3. 3
4. 4

3

10. The DNS algorithm for matrix multiplication uses

1. 1-D partitioning
2. 2-D partitioning
3. 3-D partitioning
4. both 1 and 2

3-D partitioning

## High Performance Computing MCQ SPPU

11. In pipelined execution, the steps contain

1. normalization
2. communication
3. elimination
4. all

all

12. The cost of the parallel algorithm is higher than the sequential run time by a factor of ______

1. 3/2
2. 2/3
3. 3*2
4. 2/3+3/2

3/2

13. The load imbalance problem in parallel Gaussian elimination can be alleviated by using a ______

1. mapping
2. acyclic
3. cyclic
4. both
5. none

cyclic
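A small sketch contrasting block and cyclic row mappings in Gaussian elimination, where rows above the pivot go idle as elimination proceeds (the n = 8, p = 2 sizes are assumed for illustration):

```python
# Map n matrix rows onto p processors in two ways and compare how many
# still-active rows each processor holds halfway through elimination.
n, p = 8, 2

block_map  = [i // (n // p) for i in range(n)]  # rows 0-3 -> proc 0, 4-7 -> proc 1
cyclic_map = [i % p for i in range(n)]          # rows alternate between procs

# After the first n//2 rows are eliminated, only rows n//2..n-1 still have work.
active = range(n // 2, n)
print([sum(1 for i in active if block_map[i] == q) for q in range(p)])   # [0, 4]
print([sum(1 for i in active if cyclic_map[i] == q) for q in range(p)])  # [2, 2]
```

With the block mapping one processor sits completely idle, while the cyclic mapping keeps the remaining work balanced.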

14. A parallel algorithm is evaluated by its runtime as a function of

1. the input size
2. the number of processors
3. the communication parameters
4. all

all

15. For a problem consisting of W units of work, p ______ W processors can be used optimally.

1. <=
2. >=
3. <
4. >

<=

16. C(W) ______ Θ(W) for optimality (necessary condition).

1. >
2. <
3. <=
4. equals

equals

17. Many interactions in practical parallel programs occur in a ______ pattern.

1. well-defined
2. zig-zag
3. reverse
4. straight

well-defined

18. Efficient implementation of basic communication operations can improve

1. performance
2. communication
3. algorithm
4. all

performance

19. Efficient use of basic communication operations can reduce

1. development effort
2. software quality
3. both
4. none

development effort

## High Performance Computing MCQ Questions

20. Group communication operations are built using ______ messaging primitives.

1. point-to-point
2. one-to-all
3. all-to-one
4. none

point-to-point

21. When one processor has a piece of data and needs to send it to everyone, the operation is

1. one-to-all
2. all-to-one
3. point-to-point
4. all of the above

one-to-all

22. The simplest way to broadcast is to sequentially send p-1 messages from the source to the other p-1 processors; this approach lacks

1. an algorithm
2. communication
3. concurrency

concurrency

23. In an eight-node ring, node ______ is the source of the broadcast.

1. 1
2. 2
3. 8
4. 0

0

24. The processors compute the ______ product of the vector element and the local matrix.

1. local
2. global
3. both
4. none

local

25. One-to-all broadcast uses

1. recursive doubling
2. simple algorithm
3. both
4. none

recursive doubling
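A sketch of recursive doubling on hypercube node labels, where a node's partner in each round is found by XOR-ing its label with the current step size, so the set of informed nodes doubles every round (p = 8 is an assumed size):

```python
# One-to-all broadcast by recursive doubling on p = 2^d nodes.
def broadcast_rounds(p, source=0):
    has = {source}  # nodes that currently hold the message
    rounds, step = 0, 1
    while len(has) < p:
        # Every informed node sends to its partner at distance `step`.
        has |= {node ^ step for node in has}
        step *= 2
        rounds += 1
    return rounds

print(broadcast_rounds(8))  # 3 rounds, i.e. log2(8)
```

Note the XOR on node labels: this is why bitwise operators such as XOR and AND show up in these algorithms.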

26. In a broadcast and reduction on a balanced binary tree, reduction is done in ______

1. recursive order
2. straight order
3. vertical order
4. parallel order

recursive order

27. If "X" is the message to broadcast, it initially resides at source node ______

1. 1
2. 2
3. 8
4. 0

0

28. The logical operators used in the algorithm are

1. XOR
2. AND
3. both
4. none

both

29. A generalization of broadcast in which each processor is

1. Source as well as destination
2. only source
3. only destination
4. none

Source as well as destination

30. The algorithm terminates in ______ steps.

1. p
2. p+1
3. p+2
4. p-1

p-1
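A serial simulation of the all-to-all broadcast on a ring (p = 5 is an assumed size), confirming that after p-1 steps of each node forwarding its most recently received message to its right neighbour, every node holds all p messages:

```python
# All-to-all broadcast on a p-node ring, simulated serially.
p = 5
inbox = [{i} for i in range(p)]  # node i starts with its own message
last = list(range(p))            # message each node forwards next step

for _ in range(p - 1):
    # Each node receives from its left neighbour what that node last sent.
    last = [last[(i - 1) % p] for i in range(p)]
    for i in range(p):
        inbox[i].add(last[i])

print(all(inbox[i] == set(range(p)) for i in range(p)))  # True
```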

31. Each node first sends to one of its neighbours the data it needs to

1. broadcast
2. identify
3. verify
4. none

broadcast

32. The second communication phase is a columnwise ______ broadcast of consolidated messages.

1. All-to-all
2. one -to-all
3. all-to-one
4. point-to-point

All-to-all

33. All nodes collect ______ messages corresponding to the √p nodes of their respective rows.

1. √p
2. p
3. p+1
4. p-1

√p

34. It is not possible to port the ______ to a higher-dimensional network.

1. Algorithm
2. hypercube
3. both
4. none

Algorithm

35. If we port the algorithm to a higher-dimensional network, it would cause

1. error
2. contention
3. recursion
4. none

contention

36. In the scatter operation, a ______ node sends a message to every other node.

1. single
2. double
3. triple
4. none

single

37. The gather operation is exactly the inverse of the ______

1. scatter operation
2. recursion operation
3. execution
4. none

scatter operation
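A minimal sketch of scatter and gather as inverse operations, with plain Python lists standing in for message buffers:

```python
# Scatter: the root splits its buffer into p equal chunks, one per node.
def scatter(data, p):
    chunk = len(data) // p
    return [data[i * chunk:(i + 1) * chunk] for i in range(p)]

# Gather: the exact inverse -- the root concatenates the per-node
# chunks back into a single buffer.
def gather(pieces):
    return [x for piece in pieces for x in piece]

data = list(range(8))
pieces = scatter(data, 4)
print(pieces)                  # [[0, 1], [2, 3], [4, 5], [6, 7]]
print(gather(pieces) == data)  # True: gather undoes scatter
```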

38. The communication pattern is similar to all-to-all broadcast, except in the ______

1. reverse order
2. parallel order
3. straight order
4. vertical order

reverse order