Modula-2*: Language Overview
| Content Provider | Semantic Scholar |
|---|---|
| Author | Philippsen, Michael |
| Abstract | Most current programming languages for parallel machines, including *LISP [11], suffer from some or all of the following problems: Manual Virtualization. The programmer must write explicit code for mapping processes, whose number is problem dependent, onto the available processors, whose number is fixed. This task is not only tedious and repetitive, but also one that makes programs non-portable. Manual Data Allocation. Distribution of data over memory modules must be programmed explicitly to achieve adequate performance. Because of the tight coupling of data allocation with algorithms and the topology of the communication network, the resulting programs are difficult to comprehend and non-portable. Manual Communication. The programmer must implement inter-process communication by means of low-level message passing primitives. This approach results in code that is difficult to write, read, and verify, especially if asynchrony and indeterministic behavior is possible. The given topology of the communication network influences the design of algorithms and leads to non-portability. Choice between SIMD and MIMD. Parallel programming languages are either synchronous or asynchronous, reflecting whether the target machine is a SIMD or MIMD architecture. On SIMD machines, programs are restricted to total synchrony, even if that causes poor machine utilization. On MIMD machines, tightly synchronous execution is quite expensive to implement when needed. Since the choice is dictated by the available hardware rather than the problem, the resulting programs are often distorted and not portable between SIMD and MIMD architectures. Modula-2* provides solutions to the basic problems mentioned above. The language abstracts from the memory organization and from the number of physical processors. Mapping of data to processors is performed by the compiler, optionally supported by high-level directives provided by the programmer. Communication is not directly visible. Instead, reading and writing in a (virtually) shared address space subsumes communication. A shared memory, however, is not required. Parallelism is explicit, and the programmer can choose between synchronous and asynchronous execution modes at any level of granularity. Thus, programs can use SIMD mode for synchronous algorithms, or use MIMD mode where asynchronous concurrency is more appropriate. The two modes can be intermixed freely. The data-parallel approach, discussed in [9] and exemplified in languages such as *LISP, C*, and MPL, is currently quite successful because it reduces the machine dependence of parallel programs. Data parallelism extends a synchronous SIMD model with a global name space, which hides message passing between processing elements. It also makes the number of (virtual) processing elements a function of the problem size … |
| File Format | PDF, HTM / HTML |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
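
The abstract's central claim — that synchronous (SIMD-style) and asynchronous (MIMD-style) parallelism can be mixed freely at any granularity — is expressed in Modula-2* through the FORALL statement. A minimal sketch follows, assuming the FORALL syntax described in the published Modula-2* reports; the module name, array, and bounds are illustrative, not taken from the paper:

```modula-2
MODULE Sketch;

VAR A: ARRAY [0 .. 999] OF REAL;

BEGIN
  (* Synchronous mode: all iterations proceed in lock-step,
     so every right-hand side reads the old array values. *)
  FORALL i : [0 .. 998] IN SYNC
    A[i] := A[i + 1]
  END;

  (* Asynchronous mode: iterations run concurrently with no
     ordering guarantees; appropriate for independent work. *)
  FORALL i : [0 .. 999] IN PARALLEL
    A[i] := 2.0 * A[i]
  END
END Sketch.
```

Note that the choice of mode is attached to each FORALL rather than to the whole program, which is how Modula-2* avoids forcing the SIMD-versus-MIMD decision described in the abstract.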