Parallel and Distributed Computing

Volume 2, Issue 3, Autumn 2001
Copyright © 2001 John Wiley & Sons, Ltd.
focus: software

Roland Wismüller
TU München, Institut für Informatik
SAB, Lehrstuhl für Rechnertechnik und Rechnerorganisation
D-80290 München, Germany
[email protected]

After a decade of research, parallel processing is just becoming a standard instrument of high-performance computing, used as naturally as any other specialised programming technique. ROLAND WISMÜLLER highlights some interesting trends in Concurrency and Computation: Practice and Experience.

With the ever-growing accuracy of simulation models, combined with new parallel architectures and more advanced programming methods and tools, parallel computing looks set to continue to develop exciting applications and even more exciting results.

In a recent issue of Concurrency and Computation: Practice and Experience on parallel programming, two of the papers address specific aspects of new parallel architectures. Rauch et al. are concerned with clusters of workstations or PCs. Their research is motivated by an important problem of clusters and other decentralised systems: how can we efficiently support the system administrators' task of keeping the software configuration of all machines consistent?

The work of Theobald et al. highlights a very different, but equally important, architectural trend: multi-threaded architectures. The research group follows a stepwise approach, in which off-the-shelf microprocessors are extended with various levels of special hardware to support multi-threaded execution. This allows them to explore the trade-off between hardware cost and performance.

While these papers suggest that in the realm of parallel architectures there is a movement towards new approaches, the remaining two articles indicate that programming support for parallel machines is instead characterised by a steady evolution.
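The configuration-consistency problem that motivates Rauch et al. can be sketched in a few lines. The code below is not taken from their paper; it is a minimal illustration of detecting configuration drift across cluster nodes, and the function names (`manifest_fingerprint`, `find_drift`) and the package data are invented for the example.

```python
import hashlib

def manifest_fingerprint(packages):
    """Hash a machine's installed-package manifest (name -> version),
    so that identically configured nodes produce identical fingerprints."""
    canonical = "\n".join(f"{name}={ver}" for name, ver in sorted(packages.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()

def find_drift(nodes, reference):
    """Return, per node, the packages whose versions differ from the
    reference configuration (as pairs of node version, reference version)."""
    drift = {}
    for node, packages in nodes.items():
        diffs = {name: (packages.get(name), reference.get(name))
                 for name in set(packages) | set(reference)
                 if packages.get(name) != reference.get(name)}
        if diffs:
            drift[node] = diffs
    return drift

reference = {"gcc": "2.95", "mpich": "1.2.1"}
nodes = {
    "node01": {"gcc": "2.95", "mpich": "1.2.1"},
    "node02": {"gcc": "2.95", "mpich": "1.2.0"},  # stale MPI library
}
print(find_drift(nodes, reference))
# → {'node02': {'mpich': ('1.2.0', '1.2.1')}}
```

A real administration tool must of course also gather these manifests efficiently from many machines and repair the drift, which is where the hard engineering lies; the sketch only captures the consistency check itself.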
Existing techniques and tools are noticeably enhanced in order to make them easier to use, more flexible, and more generally applicable. Cain et al. present an example from the area of performance analysis. The Performance Consultant automatically locates different classes of bottlenecks in a parallel program. By using a more suitable search strategy, both the search time and the accuracy have been improved. Schloegel et al. have developed enhanced methods of graph partitioning, which are vital for mesh-based parallel applications. Their improved algorithms account for multiple constraints that must be balanced at the same time, thus achieving good load balance even in applications that consist of multiple phases with different load characteristics.
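What it means to balance several constraints at once can be illustrated with a deliberately simplified greedy sketch. This is not the multilevel algorithm of Schloegel et al. (which, among other things, also minimises the edge cut of the mesh); it only shows the core idea that each vertex carries one weight per phase, and every part must be balanced in all phases simultaneously. The function name and the weight vectors are invented for the example.

```python
def greedy_multi_constraint_partition(weights, k):
    """Assign each vertex, carrying one weight per constraint (e.g. its
    load in each computational phase), to the part whose largest
    per-constraint load would grow the least."""
    nconstraints = len(weights[0])
    loads = [[0.0] * nconstraints for _ in range(k)]
    placement = {}
    # Place heavy vertices first so lighter ones can even things out later.
    order = sorted(range(len(weights)), key=lambda v: -sum(weights[v]))
    for v in order:
        best = min(range(k),
                   key=lambda p: max(loads[p][c] + weights[v][c]
                                     for c in range(nconstraints)))
        for c in range(nconstraints):
            loads[best][c] += weights[v][c]
        placement[v] = best
    return [placement[v] for v in range(len(weights))], loads

# Four vertices, two phases: vertex 0 is busy in phase 1, vertex 1 in phase 2.
weights = [(3, 1), (1, 3), (2, 2), (2, 2)]
assignment, loads = greedy_multi_constraint_partition(weights, k=2)
print(assignment, loads)
# → [0, 1, 0, 1] [[5.0, 3.0], [3.0, 5.0]]
```

A single-constraint partitioner that only balanced the summed weight could happily put vertices 0 and 1 in the same part, leaving one part idle during each phase; balancing per phase avoids exactly that.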

