

Monitor

provides helpful examples, a glossary, a bibliography and a thorough index. In places the sentence structure used is overly complex; however, this does not detract from the value of the material presented. The insights given on the project development process appear to be quite applicable to areas other than software design, e.g. general project planning.

If anything, the authors try to cover too much within 209 pages. For example, a discussion of the capabilities and merits of ten current languages is given in less than two pages. The more recent structured BASIC dialects (e.g. Microsoft QuickBasic) are not mentioned. A comparison of the abilities and applicability of the languages listed would have been appreciated. Discussion of LISP is limited to two lines, and no mention is made of its list-processing capabilities. Similar comments could be made for several of the other languages. Despite this shallowness in places, the book is certainly worth reading. The authors include a useful discussion of software project management, use of automated tools, and even a section on the importance of the working environment. I will be recommending that programmers in my research group study its contents and implement as much as possible.

A.P. WADE
Chemistry Department,
University of British Columbia,
Vancouver, Canada

Meeting Report

Frontiers in Computing, Amsterdam, The Netherlands, 9-11 December 1987

Massive parallelism, distributed processing, VLSI, knowledge-based systems, data flow machines, systolic arrays, optical components and neural networks - these were just a few of the buzz-words at the “Frontiers in Computing” conference.

On the first day of the conference, all attention was focussed on large-scale IT (Information Technology) programmes in Japan, Europe and the U.S.A. The so-called Fifth Generation Computer Systems (FGCS) project in Japan was a landmark initiative in this respect. After preliminary studies (1979-1981), the Institute for New Generation Computer Technology (ICOT) got off the ground in 1982. Its activities should yield a new type of computer: a KIPS (Knowledge and Information Processing System - to put it simply, a computer that actually understands its users). At ICOT, scientists work on three projects:
- theory (software development tools, logic programming, knowledge acquisition and artificial intelligence);
- architecture (VLSI technology, parallel and distributed processing, non-numerical and non-deterministic processing and knowledge-based systems);
- social impact (internationalisation, possible applications and business opportunities).

Ultimately, the Japanese efforts should yield a KIPS with an intelligent interface and automatic programming capabilities, reasoning at a speed of 100 million to one billion LIPS (1 LIPS = 1 Logical Inference Per Second, which corresponds to 100 to 1000 machine instructions per second), utilising a knowledge base of 100 to 1000 gigabytes (comparable to approximately 1000 compact discs, or, equivalently, 1.5 million floppy disks!). According to Professor H. Aiso of Keio University, the project is still proceeding on schedule.
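A quick sanity check of these figures is possible; note that the media capacities used below are our assumptions (roughly 650 MB per compact disc and 720 KB per floppy disk), since the report does not state them:

```python
# Back-of-the-envelope check of the quoted FGCS storage and speed targets.
# The media capacities are assumptions, not taken from the report:
CD_BYTES = 650 * 10**6       # assumed capacity of one compact disc
FLOPPY_BYTES = 720 * 10**3   # assumed capacity of one floppy disk
GB = 10**9

knowledge_base = 1000 * GB   # upper end of the 100-1000 gigabyte range

cds = knowledge_base // CD_BYTES            # about 1500 discs
floppies = knowledge_base // FLOPPY_BYTES   # about 1.4 million floppies

# 1 LIPS corresponds to 100-1000 machine instructions per second,
# so one billion LIPS is on the order of 1e11-1e12 instructions per second.
min_ips = 10**9 * 100
max_ips = 10**9 * 1000
```

Both outcomes are of the same order as the figures quoted above (1000 compact discs, 1.5 million floppy disks).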

The Japanese FGCS and Superspeed Computer projects were taken seriously in the U.S.A. right from the start, in 1982. Although long-range research tends not to be extremely popular in the U.S.A. (since it is hard to get the right people, and costs and risks run high), a number of executives of computer companies decided that some action had to be taken. This resulted in the initiation of MCC, the Microelectronics and Computer Technology Corporation, a joint venture in which, at present, more than twenty companies participate (as shareholders). MCC currently employs some 360 researchers, who are working on five different projects:

- advanced computer architectures (amongst other subjects: artificial intelligence and man-machine interfaces);
- software technology;
- VLSI and CAD (VLSI = Very Large Scale Integration; CAD = Computer-Aided Design);
- packaging/interconnect (how to make chips as fast and efficient as possible);
- electronic applications of high-temperature superconductors.

Proceeds of MCC tend not to be


products, but concepts and ideas.

In Europe, there is a similar institution, the ECRC (European Computer Industry Research Centre), a joint initiative of Bull, ICL and Siemens. The aim of the ECRC, again, is not to develop products but to gather fundamental know-how in the area of symbolic processing. In particular, interest focusses on the acquisition and representation of knowledge, reasoning mechanisms and models for the interaction between man and machine.

There are four research programmes within ECRC:
- logic programming and problem solving;
- knowledge bases;
- computer architectures (in this project researchers are developing a sequential PROLOG processor and a parallel PROLOG machine);
- man-machine interaction.

Better known than the ECRC is

the European ESPRIT programme. Its objective is to improve the co-operation between European companies, to improve the European “technology base” and to facilitate early standardisation. The programme was necessary because of the economic impact of IT, market fragmentation, the need to avoid duplicate efforts, the lack of skilled people, and the current speed with which innovations take place. At present, there are some 226 ESPRIT projects involving 2900 researchers from 420 organisations (and yet only one in five projects was actually admitted to the programme). Universities participate in three quarters of the projects, whereas small companies are involved in 57% of the projects.

If everything goes ahead as planned, ESPRIT II will be launched soon. (Note added in proof: the plans for ESPRIT II have been approved by the European Commission.) A novel aspect of this programme is the possibility of funding projects without any industrial involvement. This was done in order to stimulate pure basic research in the areas of microelectronics (optical components, quantum electronics, low-temperature electronics), artificial intelligence and cognitive science, as well as computer science.

In other lectures, attention was given to the activities of the Amsterdam Centre for Mathematics and Informatics (Centrum voor Wiskunde en Informatica, CWI) and the German Society for Mathematics and Data Processing (Gesellschaft für Mathematik und Datenverarbeitung), where - once again - the hot topics turned out to be parallel systems, scientific applications of supercomputers, VLSI-CAD and innovative computer systems. In addition, there were contributions discussing the DARPA Strategic Computing programme and the British Alvey programme. In these programmes too, the buzz-words are VLSI, novel computer architectures, software engineering, intelligent systems and man-machine interaction.

A common theme in all lectures was the paramount importance of the FGCS initiative; all other programmes were started in response to, and/or have been heavily influenced by, this particular project. This was the reason for naming the sessions on the first day after Professor Tohru Moto-oka, the “centre forward” of the FGCS project and a late victim of the mission of the Enola Gay.

On the second and third day of the conference, many parallel sessions had been organised addressing a wide variety of topics.

J.-P. Banâtre (IRISA/INRIA, France) offered one solution to the problem of programming a (massively) parallel computer. His group has developed the so-called Γ-model, which shows some resemblance to the process of chemical reactions. The Γ-operator takes as its arguments a reaction condition R and an action A, and operates on a multiset M. A multiset is a set in which elements may occur more than once, for instance {1, 2, 3, 3, 3, 4}. The formal definition of the Γ-operator for the two-dimensional case may be somewhat vague, but a few examples will readily expose its meaning:

if there is some pair {x1, x2} in M such that R(x1, x2) = true
then replace {x1, x2} in M by A(x1, x2)
else M
endif

Its “chemical analogue”: if there are two molecules in the reaction mixture that can react, remove them from the mixture and replace them by the product of their reaction, else do not change the reaction mixture.
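The reaction loop just described can be sketched in Python. This is a sequential rendering for illustration only, not Banâtre's implementation, and all names in it are ours:

```python
def gamma(R, A, M):
    """Two-dimensional Gamma-operator, rendered sequentially: while some
    pair of 'molecules' satisfies the reaction condition R, remove the
    pair and add the product(s) A(x1, x2); stop once the mixture is stable."""
    M = list(M)  # the multiset, represented as a list
    while True:
        # find any reacting pair of distinct positions
        pair = next(((i, j) for i in range(len(M))
                     for j in range(len(M))
                     if i != j and R(M[i], M[j])), None)
        if pair is None:
            return M          # no molecules can react any more
        i, j = pair
        product = list(A(M[i], M[j]))
        M = [m for k, m in enumerate(M) if k not in (i, j)] + product

# With R always true and A the sum, the whole multiset collapses
# into a single molecule carrying the total:
total = gamma(lambda a, b: True, lambda a, b: [a + b], [1, 2, 3, 4])
# total == [10]
```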

A few examples demonstrate its use:

- determining prime numbers:

sieve(n) = Γ(R, A)({2, 3, 4, ..., n})

where

R(x1, x2) = multiple(x1, x2)
A(x1, x2) = {x2}

In other words: if there are two numbers, one of which is a multiple of the other, then remove them from the multiset and add back the smaller number.

If n = 8, for example, the following “sequence of reactions” might occur: {2, 3, 4, 5, 6, 7, 8} → {2, 3, 5, 6, 7, 8} → {2, 3, 5, 7, 8} → {2, 3, 5, 7}.
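A direct sequential rendering of this sieve in Python (our sketch, not part of the Γ-proposal itself):

```python
def gamma_sieve(n):
    """Prime sieve in the Gamma style: while the mixture contains a pair
    in which one number is a multiple of the other, the reaction removes
    the multiple and keeps the divisor; the stable mixture is the primes."""
    M = set(range(2, n + 1))   # elements happen to be distinct, so a set suffices
    while True:
        # R(x1, x2): x1 is a proper multiple of x2
        pair = next(((x1, x2) for x1 in M for x2 in M
                     if x1 != x2 and x1 % x2 == 0), None)
        if pair is None:
            return sorted(M)   # stable mixture: only primes remain
        M.discard(pair[0])     # A: drop the multiple, keep the divisor

# gamma_sieve(8) == [2, 3, 5, 7]
```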

- computing factorials:

fact(n) = Γ(R, A)({1, 2, 3, 4, ..., n})

where

R(x1, x2) = true
A(x1, x2) = {x1 * x2}

For example, when n = 6, one possible sequence is {1, 2, 3, 4, 5, 6} → {2, 3, 4, 5, 6} → {6, 4, 5, 6} → {24, 5, 6} → {120, 6} → {720}.

- determining the minimum of a set of numbers:

minimum(M) = Γ(R, A)(M)

where

R(x1, x2) = (x1 ≤ x2)
A(x1, x2) = {x1}

For example: {4, 2, 7} → {2, 7} → {2}.
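Because R is always true in the factorial example, any reaction order yields the same stable mixture. This can be checked by reducing in random order (again our sequential sketch, with names of our own choosing):

```python
import random

def gamma_product(M, rng):
    """Factorial in the Gamma style: R(x1, x2) = true, A(x1, x2) = {x1*x2}.
    Every pair may react, so the pair is chosen at random here to mimic
    the formalism's non-determinism; any order ends in the same mixture."""
    M = list(M)
    while len(M) > 1:
        x1 = M.pop(rng.randrange(len(M)))   # pick any two molecules
        x2 = M.pop(rng.randrange(len(M)))
        M.append(x1 * x2)                   # replace them by their product
    return M[0]

rng = random.Random(0)
results = {gamma_product(range(1, 7), rng) for _ in range(20)}
# every random reaction order ends in the same stable mixture: 6! = 720
```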

Major advantages of the Γ-formalism are that it facilitates a compact programming style, that the programmer need say nothing about the sequence of the operations (non-determinism), and that the method can be implemented on an array of processors. The two-dimensional case (illustrated here), in which only two molecules at a time may react, has actually been implemented. The chief problem appears to be the detection of the end of the “reaction”. Nonetheless, the general concept seems interesting and promising.

A.R.G. Pudner (British Aerospace) spoke about the Declarative Language Machine (DLM), a real-time artificial intelligence computer on a plug-in board, based on the Motorola 68020 processor and developed in-house at BA. The DLM is at present probably the fastest AI computer available. British Aerospace still have to decide whether or not they will take on marketing the device.

T.J. Reynolds (University of Essex) discussed BRAVE, a PROLOG-based programming language with capabilities for parallel processing. The programmer enjoys a large amount of freedom in determining which parts of the program should be executed in parallel.

The two sorts of parallelism (AND- and OR-parallelism) can both be used in the following manner.

If a normal, sequential clause were to read:

c :- g1, g2.

then, if g1 and g2 share no variables, they may be executed in parallel (AND-parallelism), which is indicated as follows:

c :- g1 & g2.
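As a loose illustration only (Python threads standing in for BRAVE's parallel goal evaluation; the goal bodies and all names are ours), AND-parallelism amounts to evaluating the two variable-independent goals concurrently and succeeding only if both succeed:

```python
import threading

def g1(results):
    results["g1"] = (7 > 3)                  # an independent sub-goal

def g2(results):
    results["g2"] = "abc".startswith("a")    # another goal, sharing no variables with g1

def c():
    """AND-parallel clause c :- g1 & g2: run both goals concurrently,
    then succeed only if both goals succeeded."""
    results = {}
    threads = [threading.Thread(target=g, args=(results,)) for g in (g1, g2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                             # AND: wait for both goals
    return results["g1"] and results["g2"]
```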

OR-parallelism is appropriate if there are several clauses for one predicate. For instance, in the following example, the first two clauses are executed in parallel (indicated by the colon). The third clause is invoked only when both clauses fail (or on backtracking):

g :- p :
g :- q.
g :- r.
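OR-parallelism can be mimicked in the same loose fashion: the first two clauses race concurrently, and the third is consulted only when both fail. This is a hypothetical sketch, not BRAVE syntax or its implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def p():
    return None            # clause 1 body: fails (no solution)

def q():
    return "q-solution"    # clause 2 body: succeeds

def r():
    return "r-solution"    # clause 3 body: the sequential fallback

def g():
    """The first two clauses for g are tried concurrently (OR-parallelism);
    the third clause r is invoked only when both parallel clauses fail."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(clause) for clause in (p, q)]
        solutions = [f.result() for f in futures if f.result() is not None]
    return solutions[0] if solutions else r()
```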

A prototype version of BRAVE has been implemented on a machine consisting of three MC-68020 processors.

During a session about systolic arrays, two lectures were devoted to the WARP machine, developed at Carnegie-Mellon University. H.T. Kung discussed the architecture of the machine, of which over 15 copies have been installed at DARPA customers. WARP may be programmed in a high-level language. J.A. Webb discussed an application of WARP, namely FIDO, a complex vision system for an autonomously navigating vehicle.

B.S. Wherrett (Heriot-Watt University) and T. Yatagai (University of Tsukuba) discussed the topic of optical computers. Wherrett does not expect any practical systems to appear on the market-place in the near future. He does, however, expect the first optical chip within some three years. Important potential applications of optical computers are Fourier transforms and image-processing operations.

Finally, G. Matsumoto gave an account of Japanese efforts in the area of neurocomputing (“far beyond the frontiers in computing”, as he phrased it). In 1986 yet another ten-year programme was initiated, called the Bio-information Element Project. The project entails research into biomaterials for building future neurocomputers, as well as into resolving questions such as “how does the brain function?” and “how do neural networks function?”.

Overall, this was a fascinating conference at which several very exciting developments in the area of computer science were discussed. It was striking, however, to find that there is a huge communications gap between those who do the developing and those who are supposed to be using the fruits of their efforts.

References and further reading

For those interested in recent advances in computer science, some relevant references follow. The October 1987 issue of Scientific American (devoted entirely to “advanced computing”) may serve as a first introduction. Annual Reviews in Computer Science provides yearly updates at a higher level.

D. Gelernter, Domesticating parallelism, Computer, August (1986) 12-16.
K.A. Frenkel, Evaluating two massively parallel machines, Communications of the Association for Computing Machinery, 29 (1986) 752-758.
D.E. Shaw, Architecture and applications of a heterogeneous, massively parallel machine, Annual Reviews in Computer Science, 1 (1986) 139-151.
D.A. Patterson, Reduced instruction set computers, Communications of the Association for Computing Machinery, 28 (1985) 8-21.
C.L. Seitz, The cosmic cube, Communications of the Association for Computing Machinery, 28 (1985) 22-33.
G.C. Fox and S.W. Otto, Algorithms for concurrent processors, Physics Today, 37 (5) (1984) 50-59.
D. Parkinson, The distributed array processor (DAP), Computer Physics Communications, 28 (1983) 325-336.
L.S. Haynes, R.L. Lau, D.P. Siewiorek and D.W. Mizell, A survey of highly parallel computing, Computer, January (1982) 9-24.
J.B. Dennis, Data flow supercomputers, Computer, November (1980) 48-56.
K.E. Batcher, Design of a massively parallel processor, IEEE Transactions on Computers, C-29 (1980) 836-840.

GERARD J. KLEYWEGT
Department of NMR Spectroscopy,
University of Utrecht, Padualaan 8,
3584 CH Utrecht, The Netherlands


NEWS FROM THE CHEMOMETRICS SOCIETY

In 1987, elections were held to choose a new President and a new Secretary of the Chemometrics Society, in accordance with the decision of the Assembly of the Society at the Lerici Meeting in 1986.

The polls gave the following results:
- President: Prof. D. Luc Massart, Vrije Universiteit Brussel, Faculteit der Geneeskunde en Farmacie, Farmaceutisch Instituut, Laarbeeklaan 103, B-1090 Brussels, Belgium;
- Secretary: Dr. Wolfhard Wegscheider, Institut für Analytische Chemie, Mikro- und Radiochemie, Technische Universität Graz, Technikerstrasse 4, A-8010 Graz, Austria.

I was very glad to realize that participation in the vote was very high, and that both officers were almost unanimously elected.

As the elections were held after a great delay, I suggest that the officers be appointed for the triennium 1988-1990.

I welcome the new officers, and I am certain that, through their scientific prestige and their organizational capabilities, the Chemometrics Society will enjoy a very profitable triennium.

In giving up my presidency, I thank all the members of the Chemometrics Society, inviting them to actively endorse, in their own interest, all the initiatives of the Chemometrics Society, and especially of the two journals.

I ask the members to cooperate with the organizational structure of the Society, even in seemingly less important matters, e.g. by sending Dr. Wegscheider any change of address.

I congratulate the newly formed National Sections; some news I have received recently makes me hopeful that new national groups will very soon join the Chemometrics Society, thereby increasing its standing and professional base, and bringing it more into accordance with the aims for which it was constituted.

M. FORINA