Parallel Computing - Introduction


Upload: sujata-regoti

Post on 13-Apr-2017



Page 1: Parallel Computing - Introduction

Parallel Computing

1. What is Parallel Computing and Why Use Parallel

Computing?

Serial Computing: In serial computing, a problem is broken down into a discrete series of instructions that are executed one at a time on a single processor.
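As a minimal sketch of the serial model, the hypothetical task below (summing the squares of a list) runs every instruction one after another on a single processor; the function name and task are illustrative, not from the original slides.

```python
# Serial computing: one instruction at a time on one processor.
def serial_sum_of_squares(data):
    total = 0
    for x in data:        # each iteration starts only after the previous one finishes
        total += x * x
    return total

print(serial_sum_of_squares([1, 2, 3, 4]))  # 30
```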

Sujata Regoti

Page 2: Parallel Computing - Introduction

Parallel Computing: In parallel computing, a problem is broken down into discrete parts that can be solved concurrently. Each part is further broken down into a series of instructions, which execute sequentially within that part; ultimately, the discrete parts execute concurrently on different processors.
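The same hypothetical sum-of-squares task can sketch this decomposition: the list is split into chunks, each chunk's instructions run sequentially, and the chunks run concurrently. A thread pool is used here so the sketch stays self-contained; for CPU-bound work in CPython, a process pool would be needed to put chunks on different cores.

```python
# Parallel computing: split the problem into parts, run the parts concurrently.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum_of_squares(chunk):
    # Within one part, instructions still execute sequentially.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_parts=2):
    size = max(1, (len(data) + n_parts - 1) // n_parts)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor() as pool:   # each chunk is handled by its own worker
        return sum(pool.map(chunk_sum_of_squares, chunks))

print(parallel_sum_of_squares([1, 2, 3, 4]))  # 30
```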

Advantages:

1. SAVE TIME AND/OR MONEY - cheap, commodity components can be combined into a parallel system

2. SOLVE LARGER / MORE COMPLEX PROBLEMS - not limited by the memory and speed of a single traditional computer

3. PROVIDE CONCURRENCY - multiple processors can work on different tasks at the same time

4. TAKE ADVANTAGE OF NON-LOCAL RESOURCES - compute resources across a wide area network can be used

5. MAKE BETTER USE OF UNDERLYING PARALLEL HARDWARE - even modern laptops are based on parallel architectures

Page 3: Parallel Computing - Introduction

2. Who uses Parallel Computing?

Companies working in the parallel computing area include Intel, IBM, Calligo Tech, TCS, and Wipro.

Fields :

● Science and Engineering

● Industrial and Commercial

● Global Applications


Page 4: Parallel Computing - Introduction

3. What is meant by a shared memory parallel computer? Explain its advantages and disadvantages.

General Characteristics:

> All processors can access all memory as a global address space

> Multiple processors can operate independently

> If one processor modifies a memory location, the change is visible to all other processors

> Historically, shared memory machines have been classified as UMA and NUMA, based upon memory access times

[Figures: UMA and NUMA shared memory architectures]

Advantages:

> Global address space provides a user-friendly programming perspective to

memory

> Data sharing between tasks is both fast and uniform due to the proximity of

memory to CPUs

Page 5: Parallel Computing - Introduction

Disadvantages:

> Lack of scalability between memory and CPUs. Adding more CPUs can geometrically increase traffic on the shared memory-CPU path and, for cache coherent systems, geometrically increase the traffic associated with cache/memory management.

> The programmer is responsible for synchronization constructs that ensure "correct" access to global memory.
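A minimal sketch of that programmer responsibility: several threads share one global counter (a stand-in for the global address space), and a lock guards each update so the read-modify-write steps of different threads cannot interleave. The counter and thread count are illustrative choices, not from the slides.

```python
# Shared memory: all threads see the same global counter, so updates
# must be synchronized by the programmer.
import threading

counter = 0                      # shared location in the global address space
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:               # without this, concurrent += can lose updates
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```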

4. Explain SIMD and MIMD with its architecture.

SIMD [ Single Instruction, Multiple Data ] :

> All processing units execute the same instruction at any given clock cycle

> Each processing unit can operate on a different data element

> Best suited for specialized problems characterized by a high degree of regularity,

such as graphics/image processing

> Synchronous (lockstep) and deterministic execution

> Two varieties: Processor Arrays and Vector Pipelines

> Most modern computers, particularly those with graphics processing units (GPUs), employ SIMD instructions and execution units.
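The SIMD idea can be modeled conceptually: one instruction ("multiply by a factor") applied across every element of a data vector. Real SIMD hardware performs all lanes in lockstep in a single clock cycle; this plain Python loop only illustrates the one-instruction, many-data-elements pattern, and the function name is a hypothetical label.

```python
# Conceptual SIMD model: the same instruction, a different data element per lane.
def simd_scale(vector, factor):
    # On SIMD hardware, all of these multiplies happen in lockstep.
    return [x * factor for x in vector]

print(simd_scale([1, 2, 3, 4], 2))  # [2, 4, 6, 8]
```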

Page 6: Parallel Computing - Introduction

MIMD [ Multiple Instruction, Multiple Data ] :

> Every processor may be executing a different instruction stream

> Every processor may be working with a different data stream

> Execution can be synchronous or asynchronous, deterministic or non-deterministic

> Currently, the most common type of parallel computer - most modern

supercomputers fall into this category.

> Many MIMD architectures also include SIMD execution sub-components
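MIMD can likewise be modeled conceptually: below, two threads run different instruction streams (one sums, one takes a maximum) on different data streams, asynchronously. The worker names and input lists are illustrative only.

```python
# Conceptual MIMD model: different instruction streams on different data streams.
import threading

results = {}

def summer(data):            # instruction stream 1, data stream 1
    results["sum"] = sum(data)

def maxer(data):             # instruction stream 2, data stream 2
    results["max"] = max(data)

t1 = threading.Thread(target=summer, args=([1, 2, 3],))
t2 = threading.Thread(target=maxer, args=([7, 5, 9],))
t1.start(); t2.start()       # the two streams execute concurrently
t1.join(); t2.join()
print(results["sum"], results["max"])  # 6 9
```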
