Large-scale concurrent computing in artificial intelligence research

L. Gasser
{"title":"Large-scale concurrent computing in artificial intelligence research","authors":"L. Gasser","doi":"10.1145/63047.63089","DOIUrl":null,"url":null,"abstract":"Research in AI is slowly maturing, and body of accepted techniques for reasoning and for representing knowledge in simple, circumscribed domains now exists. But with the maturity of AI has come a growing awareness of the severe limitations of current techniques for constructing more complex problem solving or interpretation systems. We currently have inadequate means to gather, represent, store, organize, access, and manipulate the huge collections of knowledge required for complex problem solving. Existing systems can't reconfigure themselves in changing situations, nor can they incrementally adjust to new knowledge or new techniques. Large scale problem solvers (e.g. factory automation systems) cannot in principle completely model the world in which they exist, and must face problems of inconsistency, asynchrony, control and geographic distribution, etc. — they will have to work in “open systems.”\nMany solutions under consideration rely on concurrent computation, using either very fine grained “connectionist,” “neural computing” or “data parallel” approaches, or using larger grain collections of “objects,” “agents,” or “problem solving nodes” — techniques collectively termed “Distributed AI.” In this paper we characterize the needs for concurrency and parallelism in AI, with special attention to building medium to large grain adaptive problem solvers in open systems. In these systems the overriding concern is organizing the problem solving system's behavior — the “coordination problem.” Conventional distributed computing and parallel algorithms approaches allow a programmer to solve the coordination problem, and provide language constructs and concurrency control mechanisms with which a program can enact his solution. 
In Distributed AI, we attempt to improve adaptability by designing problem solvers which can both solve the coordination problem and enact the solution themselves.","PeriodicalId":299435,"journal":{"name":"Conference on Hypercube Concurrent Computers and Applications","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1989-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conference on Hypercube Concurrent Computers and Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/63047.63089","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

Research in AI is slowly maturing, and a body of accepted techniques for reasoning and for representing knowledge in simple, circumscribed domains now exists. But with the maturity of AI has come a growing awareness of the severe limitations of current techniques for constructing more complex problem-solving or interpretation systems. We currently have inadequate means to gather, represent, store, organize, access, and manipulate the huge collections of knowledge required for complex problem solving. Existing systems cannot reconfigure themselves in changing situations, nor can they incrementally adjust to new knowledge or new techniques. Large-scale problem solvers (e.g., factory automation systems) cannot in principle completely model the world in which they exist, and must face problems of inconsistency, asynchrony, control, geographic distribution, and so on — they will have to work in "open systems."

Many solutions under consideration rely on concurrent computation, using either very fine-grained "connectionist," "neural computing," or "data parallel" approaches, or using larger-grain collections of "objects," "agents," or "problem-solving nodes" — techniques collectively termed "Distributed AI." In this paper we characterize the needs for concurrency and parallelism in AI, with special attention to building medium- to large-grain adaptive problem solvers in open systems. In these systems the overriding concern is organizing the problem-solving system's behavior — the "coordination problem." Conventional distributed computing and parallel algorithms approaches allow a programmer to solve the coordination problem, and provide language constructs and concurrency-control mechanisms with which a program can enact that solution. In Distributed AI, we attempt to improve adaptability by designing problem solvers that can both solve the coordination problem and enact the solution themselves.
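The contrast the abstract draws — agents that solve the coordination problem themselves, rather than executing a programmer-fixed schedule — can be illustrated with a minimal sketch. The following is a hypothetical contract-net-style allocation (the `Agent` class, `bid`, and `allocate` names are illustrative, not from the paper): tasks are announced, each agent bids only on tasks it can handle, and the assignment emerges from the bids.

```python
class Agent:
    """A problem-solving node with a local view of what it can do."""

    def __init__(self, name, skills):
        self.name = name
        self.skills = skills  # maps task name -> local cost estimate

    def bid(self, task):
        # An agent bids only on tasks within its competence;
        # None signals "no bid."
        return self.skills.get(task)


def allocate(tasks, agents):
    """Announce each task and award it to the lowest bidder.

    The programmer fixes only the negotiation protocol; which agent
    does which task is decided by the agents' bids at run time.
    """
    awards = {}
    for task in tasks:
        bids = [(a.bid(task), a.name) for a in agents
                if a.bid(task) is not None]
        if bids:
            _cost, winner = min(bids)  # lowest estimated cost wins
            awards[task] = winner      # unbiddable tasks stay unassigned
    return awards


agents = [Agent("planner", {"plan": 2, "schedule": 5}),
          Agent("scheduler", {"schedule": 1})]
print(allocate(["plan", "schedule", "paint"], agents))
# {'plan': 'planner', 'schedule': 'scheduler'}
```

Note that "paint" receives no award: no agent can handle it, and in an open system the protocol must tolerate such gaps rather than assume a complete world model.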