Proceedings of the IEEE/ACM SC98 Conference: Latest Publications

Multilevel Algorithms for Multi-Constraint Graph Partitioning
Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10018
G. Karypis, Vipin Kumar
Abstract: Traditional graph partitioning algorithms compute a k-way partitioning of a graph such that the number of edges cut by the partitioning is minimized and each partition has an equal number of vertices. Minimizing the edge-cut can be considered the objective, and the requirement that the partitions be of equal size the constraint. In this paper we extend the partitioning problem by incorporating an arbitrary number of balancing constraints. In our formulation, a vector of weights is assigned to each vertex, and the goal is to produce a k-way partitioning that satisfies a balancing constraint associated with each weight while attempting to minimize the edge-cut. Applications of this multi-constraint graph partitioning problem include the parallel solution of multi-physics and multi-phase computations that underlie many existing and emerging large-scale scientific simulations. We present new multi-constraint graph partitioning algorithms based on the multilevel graph partitioning paradigm. Our work focuses on developing new types of heuristics for coarsening, initial partitioning, and refinement that can successfully handle multiple constraints. We experimentally evaluate the effectiveness of our multi-constraint partitioners on a variety of synthetically generated problems.
Citations: 498
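To make the multi-constraint formulation concrete, here is a small sketch (illustrative only, not the paper's algorithm) of the balance test the abstract describes: each vertex carries a vector of weights, and a k-way partitioning is balanced only if every weight component is balanced across partitions. The `tol` tolerance factor is an assumption for illustration.

```python
# Illustrative sketch: per-constraint balance checking for a k-way partitioning.
# part[v] gives the partition of vertex v; weights[v] is a tuple of m weights.

def constraint_loads(part, weights, k):
    """Sum each weight component per partition."""
    m = len(weights[0])
    loads = [[0.0] * m for _ in range(k)]
    for v, p in enumerate(part):
        for j in range(m):
            loads[p][j] += weights[v][j]
    return loads

def is_balanced(part, weights, k, tol=1.05):
    """Balanced iff, for every constraint j, the most loaded partition
    carries at most tol * (total_j / k) of that weight."""
    loads = constraint_loads(part, weights, k)
    m = len(weights[0])
    for j in range(m):
        total = sum(loads[p][j] for p in range(k))
        if max(loads[p][j] for p in range(k)) > tol * total / k:
            return False
    return True
```

The point of the formulation is that a partitioning must pass this test for every weight component simultaneously, which is what makes coarsening and refinement harder than in the single-constraint case.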
OVERTURE: An Object-Oriented Framework for High Performance Scientific Computing
Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10013
F. Bassetti, David L. Brown, K. Davis, W. Henshaw, D. Quinlan
Abstract: The Overture framework is an object-oriented environment for solving PDEs on serial and parallel architectures. It is a collection of C++ libraries that enables the use of finite difference and finite volume methods at a level that hides the details of the associated data structures, as well as the details of the parallel implementation. It is based on the A++/P++ array class library and is designed for solving problems on a structured grid or a collection of structured grids. In particular, it can use curvilinear grids, adaptive mesh refinement, and the composite overlapping grid method to represent problems with complex moving geometry. This paper introduces Overture, its motivation, and specifically the aspects of the design central to portability and high performance. In particular, we focus on the mechanisms within Overture that permit a hierarchy of abstractions and those that permit their efficiency on advanced serial and parallel architectures. We expect these same mechanisms to become increasingly important within other object-oriented frameworks in the future.
Citations: 19
Tuning Strassen's Matrix Multiplication for Memory Efficiency
Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10045
Mithuna Thottethodi, S. Chatterjee, A. Lebeck
Abstract: Strassen's algorithm for matrix multiplication gains its lower arithmetic complexity at the expense of reduced locality of reference, which makes it challenging to implement efficiently on a modern machine with a hierarchical memory system. We report on an implementation that uses several unconventional techniques to make the algorithm memory-friendly. First, the algorithm internally uses a non-standard array layout known as Morton order, based on a quad-tree decomposition of the matrix. Second, we dynamically select the recursion truncation point to minimize padding without affecting the performance of the algorithm, which we can do by virtue of the cache behavior of the Morton ordering. Each technique is critical for performance, and their combination in our code multiplies their effectiveness. Performance comparisons with competing implementations show that ours often outperforms the alternatives (by up to 25%). However, we also observe wide variability across platforms and matrix sizes, indicating that at this time no single implementation is a clear choice for all platforms or matrix sizes. We also note that the time required to convert matrices to and from Morton order is a noticeable fraction of execution time (5% to 15%); eliminating this overhead would further reduce our execution time.
Citations: 89
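The Morton (Z-order) layout mentioned in the abstract maps a 2D index to a 1D offset by interleaving the bits of the row and column indices, so each quadrant of the quad-tree decomposition occupies a contiguous range of memory. A minimal sketch (the `bits` width is an assumption for illustration):

```python
# Morton (Z-order) index: interleave the bits of row and col.
# Assumes both indices fit in `bits` bits.

def morton_index(row, col, bits=16):
    idx = 0
    for b in range(bits):
        idx |= ((row >> b) & 1) << (2 * b + 1)  # row bits go to odd positions
        idx |= ((col >> b) & 1) << (2 * b)      # col bits go to even positions
    return idx
```

Storing element (i, j) at offset `morton_index(i, j)` keeps each submatrix of the recursive decomposition contiguous, which is exactly the property that makes Strassen's recursion cache-friendly.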
The Penn State Computing Condominium Scheduling System
Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10002
Pawan Agnihotri, V. Agarwala, J. J. Nucciarone, K. Morooney, C. Das
Abstract: The Penn State RS/6000 SP is a uniquely acquired and operated computing facility. This 143-CPU machine, centrally located and jointly owned, is the result of collaboration between academic departments, research groups, and the central academic computing facility, and is the largest on-campus resource at Penn State for meeting high performance computing needs. Because of the machine's joint-ownership structure, its job scheduling requirements differ significantly from the usual methods of job and processor allocation on distributed memory parallel machines. After several years of adapting different queuing systems, primarily the Distributed Queuing System, to our needs, it became obvious that conventional scheduling systems did not serve the scheduling requirements unique to the Penn State SP, and we concluded that a robust and easily configurable system needed to be developed. We drew inspiration from and modeled our system on EASY; as with EASY, we use the application programming interface of LoadLeveler to implement our scheduler, named the Penn State Condominium Scheduler (PSCS). PSCS implements scheduling policy, while job execution on the machine is handled by LoadLeveler. PSCS is written to facilitate easy configuration and administration and has no processor architecture dependence, similar in this regard to LoadLeveler's native scheduler. PSCS incorporates three distinctive features: (i) node-owner affinity, which ensures fairness through allocation based on ownership; (ii) backfilling, which ensures efficient utilization of resources; and (iii) affinity for services provided, which matches jobs to processors based on memory, software, and other requirements. Jobs from users who own nodes in the SP complex have affinity to the processors they own, and those users are granted preferences according to their ownership level. Once the demand from the node owners is met, the next goal is to keep the machine as fully occupied with running jobs as possible, which is accomplished by backfilling. These features are the most important to the successful operation of multi-owner, centrally located, heterogeneous computing facilities.
Citations: 3
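The EASY-style backfilling that PSCS adopts can be sketched as follows. This is a simplified illustration, not the PSCS implementation (which additionally layers owner affinity and service matching on top of LoadLeveler's API): jobs are considered in FCFS order, the blocked head job gets a reservation at the earliest time enough nodes free up, and later jobs may jump ahead only if they finish before that reservation.

```python
# Simplified EASY-style backfilling (illustrative sketch).

def backfill(queue, free_nodes, now, running):
    """queue: FCFS list of (job_id, nodes_needed, est_runtime).
    running: list of (finish_time, nodes_held) for jobs already running.
    Returns the job_ids that may start now."""
    started = []
    queue = list(queue)
    # Start jobs from the head of the queue while they fit.
    while queue and queue[0][1] <= free_nodes:
        jid, need, _ = queue.pop(0)
        started.append(jid)
        free_nodes -= need
    if not queue:
        return started
    # Head job is blocked: find its reservation time ("shadow time"),
    # the earliest finish time by which enough nodes are available.
    head_need = queue[0][1]
    avail, shadow = free_nodes, now
    for finish, nodes in sorted(running):
        avail += nodes
        if avail >= head_need:
            shadow = finish
            break
    # Backfill later jobs that fit now and finish before the reservation,
    # so the head job is never delayed.
    for jid, need, est in queue[1:]:
        if need <= free_nodes and now + est <= shadow:
            started.append(jid)
            free_nodes -= need
    return started
```

The key property is that backfilled jobs can only use time the head job would have spent waiting anyway, which is why backfilling raises utilization without sacrificing fairness to the queue head.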
WebFlow - High-Level Programming Environment and Visual Authoring Toolkit for High Performance Distributed Computing
Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10046
E. Akarsu, Geoffrey C. Fox, W. Furmanski, T. Haupt
Abstract: We have developed a platform-independent, three-tier system called WebFlow. Visual authoring tools in the front end, integrated with a middle-tier network of servers that is based on industry standards and follows the distributed-object paradigm, facilitate seamless integration of commodity software components. We add high performance to these commodity systems by using the GLOBUS metacomputing toolkit as the back end. We have explained these ideas in general before; here, for the first time, we describe a fully operational example that is expected to be deployed in an NCSA Alliance Grand Challenge.
Citations: 58
Real-Time Image Segmentation for Image-Guided Surgery
Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10024
S. Warfield, F. Jolesz, R. Kikinis
Abstract: Image-guided surgery is an application for which high performance computing is increasingly becoming a critical technology. Advances in image-guided surgery techniques have made it possible to acquire images of a patient while the surgery is taking place, to align these images with high resolution 3D scans of the patient acquired preoperatively, and to merge intraoperative images from multiple imaging modalities. The application of these technologies has become a routine clinical procedure in some hospitals. However, as the range of procedures undertaken expands, it is becoming clear that image fusion and linear registration technology alone have limitations. We have developed a novel image segmentation algorithm that uses an individualized template of normal patient anatomy to compute the segmentation of intraoperative imaging data. Intraoperative image segmentation is highly data and compute intensive. To achieve accurate segmentation in a time frame compatible with surgical intervention, we developed a parallel version of our segmentation algorithm and implemented it on a symmetric multiprocessor architecture. We have studied the accuracy of the segmentation algorithm, and the scalability and bandwidth requirements of our parallel implementation.
Citations: 31
A Collaborative Framework for Distributed Microscopy
Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10050
B. Parvin, John R. Taylor, G. Cong
Abstract: This paper outlines the motivation, requirements, and architecture of a collaborative framework for distributed virtual microscopy. The requirements are specified in terms of (1) functionality, (2) scalability, (3) interactivity, and (4) safety and security. Functionality refers to what an instrument does and how; scalability to the number of instruments, vendor-specific desktop workstations, analysis programs, and collaborators that can be accessed; interactivity to how well the system can be steered for static or dynamic experiments; and safety and security to safe operation of an instrument coupled with user authentication, privacy, and integrity of data communication. To meet these requirements, the architecture introduces three types of services: Instrument Services (IS), Exchange Services (ES), and Computational Services (CS), which may reside on any host in the distributed system. The IS provide an abstraction for manipulating different types of microscopes; the ES provide common services required between different resources; and the CS provide analytical capabilities for data analysis and simulation. These services are brought together through CORBA and its enabling services, e.g., Event Services, Time Services, Naming Services, and Security Services. Two unique applications have been introduced into the CS for analyzing scientific images, either for instrument control or for recovery of a model of objects of interest: in situ electron microscopy, and recovery of 3D shape from holographic microscopy. The first application provides near real-time processing of the video stream for on-line quantitative analysis and uses that information for closed-loop servo control. The second reconstructs a 3D representation of an inclusion (a crystal structure in a matrix) from multiple views obtained through holographic electron microscopy. These applications require steering external stimuli or computational parameters toward a particular result. In a sense, "computational instruments" (symmetric multiprocessors) interact closely with data generated by "experimental instruments" (unique microscopes) to conduct new experiments and bring new functionality to these instruments. Both features exploit high-performance computing and low-latency networks to bring novel capabilities to unique scientific imaging instruments.
Citations: 7
Reducing Coherence Overhead of Barrier Synchronization in Software DSMs
Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10029
Jae Bum Lee, C. Jhon
Abstract: Software distributed shared memory (SDSM) systems usually have a large coherence granularity imposed by the underlying virtual memory page size. To alleviate coherence overheads such as the network traffic needed to preserve coherence, or page misses caused by false sharing, relaxed memory models are widely adopted in SDSM systems. In relaxed memory models, when a shared page is modified, invalidation requests to other copies are deferred until a synchronization point, and the requests are transferred only to the processor acquiring the synchronization variable. On a barrier, however, invalidation requests must be transferred to all processors participating in the barrier; as a result, barriers tend to induce heavy network traffic and may cause useless page misses through false sharing. In this paper, we propose a method to alleviate the coherence overhead of barrier synchronization in shared-memory parallel programs. It performs static analysis to examine data dependences between processors across global barriers, and then inserts special primitives into the program to exploit the dependency information at run time. The static analysis identifies code regions where a processor modifies data that will be used by only some of the other processors; at run time, with the help of the inserted primitives, the coherence messages for that data are transferred only to those processors. In particular, if the modified data will not be used by any other processor, the primitives ensure that the coherence messages are delivered only to the master processor when the parallel execution of the program finishes. We evaluated the performance of this method in a 16-node software DSM system supporting the AURC protocol. Program-driven simulation was performed with five benchmark programs: Jacobi, Red-black SOR, Expl, LU, and Water-nsquared. The experimental results show that our method can reduce coherence messages by up to about 98% and improve execution time by up to about 26%.
Citations: 10
Communication overlap in multi-tier parallel algorithms
Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10000
S. Baden, Stephen J. Fink
Abstract: Hierarchically organized multicomputers such as SMP clusters offer new opportunities and new challenges for high-performance computation, but realizing their full potential remains a formidable task. We present a hierarchical model of communication targeted to block-structured, bulk-synchronous applications running on dedicated clusters of symmetric multiprocessors. Our model supports node-level rather than processor-level communication as the fundamental operation and is optimized for aggregate patterns of regular section moves rather than point-to-point messages. These two capabilities work synergistically: they provide flexibility in overlapping communication and overcome deficiencies in the underlying communication layer on systems where inter-node communication bandwidth is at a premium. We have implemented our communication model in the KeLP2.0 run-time library and present empirical results for five applications running on a cluster of Digital AlphaServer 2100s. Four of the applications were able to overlap communication on a system that does not support overlap via non-blocking MPI message passing. Overall performance improvements due to our overlap strategy ranged from 12% to 28%.
Citations: 44
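The overlap pattern the abstract describes can be sketched generically. This is not the KeLP API, just an illustration of the idea using plain threads: a node-level proxy moves ghost-region data in the background while the workers compute on the interior, which needs no remote data; only the boundary computation waits for the exchange.

```python
# Generic sketch of communication/computation overlap (not the KeLP API):
# run the ghost-cell exchange concurrently with interior computation.

import threading

def step(grid, exchange_ghosts, compute_interior, compute_boundary):
    t = threading.Thread(target=exchange_ghosts, args=(grid,))
    t.start()                  # ghost exchange proceeds in the background
    compute_interior(grid)     # interior cells need no remote data
    t.join()                   # wait until fresh ghost cells have arrived
    compute_boundary(grid)     # boundary cells use the received ghosts
```

When the interior work is large enough to hide the transfer time, the communication cost effectively disappears from the critical path, which is the effect behind the 12% to 28% improvements reported above.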
Dynamic Repartitioning of Adaptively Refined Meshes
Pub Date: 1998-11-07 · DOI: 10.1109/SC.1998.10025
K. Schloegel, G. Karypis, Vipin Kumar
Abstract: One capability viewed as vital to the successful conduct of many large-scale numerical simulations is the ability to dynamically repartition the underlying adaptive finite element mesh among the processors so that the computation is balanced and interprocessor communication is minimized. This requires computing a sequence of partitions of the computational mesh during the course of the computation such that the amount of data migration needed to realize each subsequent partition is minimized, while all domains of a given partition contain a roughly equal amount of computational weight. Recently, parallel multilevel graph repartitioning techniques have been developed that can quickly compute high-quality repartitions for adaptive and dynamic meshes while minimizing the amount of data that must migrate between processors. These algorithms fall into two categories: schemes that compute a new partition from scratch and then intelligently remap it to the original partition (hereafter, scratch-remap schemes), and multilevel diffusion schemes. Scratch-remap schemes work quite well for graphs that are highly imbalanced in localized areas; on slightly to moderately imbalanced graphs, and on those in which imbalance occurs globally throughout the graph, however, they cause excessive vertex migration compared to multilevel diffusion algorithms. Diffusion-based schemes, on the other hand, work well for slightly imbalanced graphs and for those in which imbalance occurs globally, but perform poorly on graphs that are highly imbalanced in localized areas, since propagating diffusion over long distances produces excessive edge-cut and vertex migration. In this paper, we present two new schemes for adaptive repartitioning: Locally-Matched Multilevel Scratch-Remap (LMSR) and Wavefront Diffusion. The LMSR scheme performs purely local coarsening and partition remapping in a multilevel context. In Wavefront Diffusion, the flow of vertices moves in a wavefront from overbalanced to underbalanced domains. We present experimental evaluations of both algorithms on synthetically generated adaptive meshes as well as on application meshes. We show that LMSR decreases the amount of vertex migration required to balance the graph while producing repartitionings of quality similar to state-of-the-art scratch-remap schemes, and that it is more scalable in execution time than those schemes. We show that Wavefront Diffusion obtains significantly lower vertex migration requirements while maintaining edge-cut results similar to state-of-the-art multilevel diffusion algorithms, especially for highly imbalanced graphs. Furthermore, we compare Wavefront Diffusion with LMSR and show that the former will result in lower vert…
Citations: 19
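As background for the diffusion family of schemes the abstract contrasts, here is a minimal first-order diffusion step (generic load-balancing diffusion, not the paper's Wavefront Diffusion): each domain exchanges load with its neighbors in proportion to the load difference, so imbalance smooths out over successive steps while total load is conserved. The diffusion coefficient `alpha` is an assumption for illustration.

```python
# Generic first-order diffusion step for load balancing (illustrative only).

def diffusion_step(load, neighbors, alpha=0.25):
    """load: per-domain work; neighbors: adjacency list of the domain graph.
    Returns the new per-domain load after one diffusion sweep."""
    flow = [0.0] * len(load)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            # Positive flow into i when neighbor j is more loaded.
            flow[i] += alpha * (load[j] - load[i])
    return [l + f for l, f in zip(load, flow)]
```

The weakness the abstract points out follows directly from this formulation: when imbalance is concentrated in one region, load must diffuse across many intermediate domains, and every unit of flow corresponds to migrated vertices.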