{"title":"Framework to enable and test conversational assistant for APIs and RPAs","authors":"Jayachandu Bandlamudi, Kushal Mukherjee, Prerna Agarwal, Ritwik Chaudhuri, Rakesh Pimplikar, Sampath Dechu, Alex Straley, Anbumunee Ponniah, Renuka Sindhgatta","doi":"10.1002/aaai.12198","DOIUrl":"https://doi.org/10.1002/aaai.12198","url":null,"abstract":"<p>In the realm of business automation, conversational assistants are emerging as the primary method for making automation software accessible to users in various business sectors. Access to automation primarily occurs through application programming interface (APIs) and robotic process automation (RPAs). To effectively convert APIs and RPAs into chatbots on a larger scale, it is crucial to establish an automated process for generating data and training models that can recognize user intentions, identify questions for conversational slot filling, and provide recommendations for subsequent actions. In this paper, we present a technique for enhancing and generating natural language conversational artifacts from API specifications using large language models (LLMs). The goal is to utilize LLMs in the “build” phase to assist humans in creating skills for digital assistants. As a result, the system does not need to rely on LLMs during conversations with business users, leading to efficient deployment. Along with enabling digital assistants, our system employs LLMs as proxies to simulate human interaction and automatically evaluate the digital assistant's performance. Experimental results highlight the effectiveness of our proposed approach. Our system is deployed in the IBM Watson Orchestrate product for general availability.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"45 4","pages":"443-456"},"PeriodicalIF":2.5,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12198","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142851522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data-efficient graph learning: Problems, progress, and prospects
Kaize Ding, Yixin Liu, Chuxu Zhang, Jianling Wang
AI Magazine 45(4): 549-560. Published 2024-10-18. DOI: 10.1002/aaai.12200. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12200

Abstract: Graph-structured data, ranging from social networks and financial transaction networks to citation networks and gene regulatory networks, are widely used to model a myriad of real-world systems. As a prevailing architecture for modeling graph-structured data, graph neural networks (GNNs) have drawn much attention in both the academic and industrial communities over the past decades. Despite their success on different graph learning tasks, existing methods usually rely on learning from “big” data, requiring a large amount of labeled data for model training. In practice, however, real-world graphs often come with only “small” labeled data, because annotating and labeling graphs is time- and resource-consuming. It is therefore imperative to investigate graph machine learning (graph ML) with low-cost human supervision for low-resource settings where limited or even no labeled data is available. This paper examines an emerging research field, data-efficient graph learning, which aims to push the performance boundary of graph ML models using different kinds of low-cost supervision signals. Specifically, we outline the fundamental research problems, review the current progress, and discuss future prospects of data-efficient graph learning, aiming to illuminate the path for subsequent research in this field.
DCV²I: Leveraging deep vision models to support geographers' visual interpretation in dune segmentation
Anqi Lu, Zifeng Wu, Zheng Jiang, Wei Wang, Eerdun Hasi, Yi Wang
AI Magazine 45(4): 472-485. Published 2024-10-18. DOI: 10.1002/aaai.12199. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12199

Abstract: Visual interpretation is extremely important in human geography as the primary technique geographers use to identify, classify, and quantify geographic and topological objects or regions from photographic data. However, it is also time-consuming and requires overwhelming manual effort from professional geographers. This paper describes our interdisciplinary team's efforts to integrate computer vision models into geographers' visual image interpretation process and reduce their workload. Focusing on the dune segmentation task, we propose an approach called DCV²I, featuring a deep dune segmentation model that identifies dunes and labels their extents automatically. We also developed a tool that connects our model with ArcGIS, one of the most popular workbenches for visual interpretation, so geographers can refine the automatically generated dune segmentation without learning any computer vision or deep learning techniques. Our approach thus realizes a noninvasive change to geographers' visual interpretation routines, reducing their manual effort while causing minimal disruption to the workflows and tools they are familiar with. Deployment with a leading Chinese geography research institution demonstrated the potential of DCV²I in supporting geographers in researching and addressing dryland desertification.