{"title":"A high-level programming environment for distributed memory architectures","authors":"W. Giloi, A. Schramm","doi":"10.1007/3-540-48387-X_23","DOIUrl":"https://doi.org/10.1007/3-540-48387-X_23","url":null,"abstract":"","PeriodicalId":92432,"journal":{"name":"Proceedings. Euromicro International Conference on Parallel, Distributed, and Network-based Processing","volume":"7 1","pages":"217-222"},"PeriodicalIF":0.0,"publicationDate":"1999-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87017621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session 12: Parallel Programming","authors":"P. Milligan","doi":"10.1109/EMPDP.1994.592532","DOIUrl":"https://doi.org/10.1109/EMPDP.1994.592532","url":null,"abstract":"The drive to identify solutions to the many problems associated with the development of code for execution on parallel architectures continues to dominate research in this area. Apart from the problems associated with the lack of general purpose development environments there is the simple fact that once a parallel version of a program is produced there is no guarantee that it will have a satisfactory execution profile. Certain areas of programming, e.g. logic programming, pose highly specialized problems and there is a long standing and growing interest in solving these problems in a highly efficient manner. The papers contained in this section of the workshop programme can be divided into two groups. The first contains a paper which will have a broad range of applicability as it suggests how programs exhibiting finegrain characteristics can be restructured to produce coarsegrain equivalents with a much improved performance prolile. The second group of three papers have logic programming as a central feature and consider various performance and implementation related issues.","PeriodicalId":92432,"journal":{"name":"Proceedings. Euromicro International Conference on Parallel, Distributed, and Network-based Processing","volume":"22 1","pages":"494-495"},"PeriodicalIF":0.0,"publicationDate":"1994-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74612164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session 5: Networks And Communications","authors":"G. Ciccarella","doi":"10.1109/EMPDP.1994.592475","DOIUrl":"https://doi.org/10.1109/EMPDP.1994.592475","url":null,"abstract":"The need to support applications that are distributed on wide-local area networks or on multicomputer systems and that require large data transfer and low delay, drives the development of faster and more efficient ways of transporting and switching data. This work has led to the definition of new communication protocols and to the introduction in the telecommunication networks of fast packet-based data services, such as frame relay, switched multimegabit data services and asynchronous transfer mode (ATM) cell relay service.","PeriodicalId":92432,"journal":{"name":"Proceedings. Euromicro International Conference on Parallel, Distributed, and Network-based Processing","volume":"321 1","pages":"102"},"PeriodicalIF":0.0,"publicationDate":"1994-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75902976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session 9: Distributed Operating Systems","authors":"N. Scarabottolo","doi":"10.1109/EMPDP.1994.592505","DOIUrl":"https://doi.org/10.1109/EMPDP.1994.592505","url":null,"abstract":"Tlie aim of this session is to present some very interesting experiences related to various aspects of tlie impleiiieiitatioii of operating systems in parallel aiid distributed processing eiiviroiimeiits. Tlie iiiiportance of the runtime support in these eiiviroiuneiits is well understood: being tlie software layer between tlie liardivare architecture aiid the application software, it is up to the operating sj.steii1 to (try to) solve all problems related to Iiardiare independence, workload distribution, inter-processor coiiuiiunication, perfoniiance optimization, real-time behavior, etc. The first paper of tlie session - Sorne Ixszies ,for the Distributed Scliedziling Problem in tlie A402 Distributed Rea2-Tinie Object-0rierired ,Yj~tern. b!) B. Mecibah and A. Attoui (France) - concentrates on tlie scheduling problem of tlie cictive objects available in tlie MO2 object-oriented real-time model. aiming at supporting implemeiitatioli of a rcal-time database iiianageiiieiit systeiii. Active objects are characterized by an autonomous: event-driven behavior, iiidepeiideiit of the activation of their methods. Tlie proposed solution is a distributed scheduling algorithm adopting a heuristic approach for optimization. Tlie second paper of the session - A /.)is/r/bii/ed Algori tlim ,for E’nzi I/- 7 blercinr Ilynnm ic 7 i7.r k S’chedziling, by A. Baucli. E. Maehle and F.J. Mal-k~is (Geniiaiiy) - considers again the problcm of scheduling, but the aim is in this case to obtain a fault tolerant behavior in a ~iarallel s!.stcm .ittiout requiring a static redundancy based on full replication of tasks. Tlie basic idea for acliieving this fault tolerant behavior is to keep all input data sets of a","PeriodicalId":92432,"journal":{"name":"Proceedings. Euromicro International Conference on Parallel, Distributed, and Network-based Processing","volume":"4 1","pages":"300"},"PeriodicalIF":0.0,"publicationDate":"1994-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81294694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Load Balancing","authors":"R. McConnell","doi":"10.1109/EMPDP.1994.592502","DOIUrl":"https://doi.org/10.1109/EMPDP.1994.592502","url":null,"abstract":"As the price/performance ratio of parallel computers continues to fall many applications developers such as scientist and engineers with large computational problems are looking to non von Neuman computer technology to provide powerful throughput platforms. While the hardware technology behind such platforms are well matured the software environments do not provide the functionality necessary to allow ordinary applications developers to utilize them efficiently. One particular problem, which requires the development of new and innovative techniques, is the mapping of the work of a potentially concurrent computation to the processors of a multiprocessor system. Adjusting this mapping in order to complete the workload in the minimum possible time (i.e. to share the workload among the processors evenly and minimize inter processor communication) is known as load balancing. Deciding on the mapping before execution based on compile time information is known as static load balancing while adjust the mapping during the execution (via process migration) is known as dynamic load balancing. The papers in this session incorporate techniques for mapping processes both before and during execution in order to maintain an effective load balance. However one of the papers deals with the distributed computing environment while the other is in the area of object oriented programming environments for multiprocessors. The first paper in this session, entitled “The Efficient Management of Task Clusters in a Dynamic Load Balancer”, describes work being carried out on load balancing of multiuser disthbuted systems. A novel technique is proposed which deal with groups of subtasks, known as task clusters rather than single task units. This provides the advantage of letting the user submit several tasks, in a script type format, to the load balancer. The load balancer, which consists of a load manager running on each node in the system, can then distribute the subtasks across the nodes thus executing the task cluster in parallel. Allowing task clusters to be submitted to the load balancing system gives increase efficiency over submitting tasks separately. Currently two strategies for task cluster management are being considered by the authors. The two alternatives are based on extensions to either a bidding strategy or a probing strategy. The paper will compare the use of these two options. In addition the load balancing scheme has been implemented across a network of workstations and performance results from experiments which compare the scheme described in the paper with an old scheme will be included The second paper which is entitled “The Benefits of Migration in a Parallel Objects Programming Environment” deals with load balancing of distributed memory multiprocessors used for object oriented programming. The parallel objects environment is based on the active object model. Parallel object applications can be highly dynamic as new objects and new threads of execution","PeriodicalId":92432,"journal":{"name":"Proceedings. 
Euromicro International Conference on Parallel, Distributed, and Network-based Processing","volume":"5 1","pages":"42-"},"PeriodicalIF":0.0,"publicationDate":"1994-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76782187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Program Chairman's Introduction","authors":"S. Winter","doi":"10.1109/EMPDP.1994.592459","DOIUrl":"https://doi.org/10.1109/EMPDP.1994.592459","url":null,"abstract":"It has been my pleasure to participate in the organisation of the International Workshop on Parallel and Distributed Processing, the second organised by Euromicro. Following the tremendous success of last year's event, I am glad to report that interest in the Workshop has once again been very high, and I look forward to an exciting and stimulating Workshop. May I add nny own welcome to those of the local organisers from the University of Malaga.","PeriodicalId":92432,"journal":{"name":"Proceedings. Euromicro International Conference on Parallel, Distributed, and Network-based Processing","volume":"10 1","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"1994-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83829923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session 2: Parallelization","authors":"A. Tyrrell","doi":"10.1109/EMPDP.1994.592464","DOIUrl":"https://doi.org/10.1109/EMPDP.1994.592464","url":null,"abstract":"The design of parallel systems requires care and accuracy if the results obtained from the system are to be useful. This requirement means that an accurate model of the system to be implemented must be derived and this model carefully mapped onto the final hardware architecture. Many systems have concurrency inherent in their operation and this concurrency should be used to the full in any implementation of the final system. However, concurrency introduces a number of complications which could cause errors in the final system to occur. A careful choice must therefore be made to describe and implement such requirements. The design of software intended for a parallel implementation is demanding, requiring the understanding and application of proper design methods including techniques which exploit and control the parallel nature of the system. The software will comprise a set of processes in asynchronous concurrent execution where coordination is provided by synchronising interprocess communications. The designer is therefore presented with three interrelated problems:","PeriodicalId":92432,"journal":{"name":"Proceedings. Euromicro International Conference on Parallel, Distributed, and Network-based Processing","volume":"87 1","pages":"30-31"},"PeriodicalIF":0.0,"publicationDate":"1994-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91352978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}