Jean-Christophe Bélisle-Pipon, James Anibal, Ruth Bahr, Steven Bedrick, Oita Coleman, David Dorr, Barbara J Evans, Guy Fagherazzi, Alexander Gelbard, Satrajit Ghosh, Anita Ho, Christie Jackson, Dale Joachim, Lampros Kourtis, Andrea Krussel, Amir Lahav, Breanna Leuze, Bob MacDonald, Geralyn Miller, Vivek Mohan, Matthew Naunheim, Maria Powell, Anaïs Rameau, Sat Ramphal, Vardit Ravitsky, Charlie Reavis, Samantha Salvi Cruz, Jamie Toghranegar, Adam Vogel, Stephanie Watts, Joseph Yracheta, Robin Zhao, Yael Bensoussan
{"title":"2024语音人工智能研讨会的互动小组总结。","authors":"Jean-Christophe Bélisle-Pipon, James Anibal, Ruth Bahr, Steven Bedrick, Oita Coleman, David Dorr, Barbara J Evans, Guy Fagherazzi, Alexander Gelbard, Satrajit Ghosh, Anita Ho, Christie Jackson, Dale Joachim, Lampros Kourtis, Andrea Krussel, Amir Lahav, Breanna Leuze, Bob MacDonald, Geralyn Miller, Vivek Mohan, Matthew Naunheim, Maria Powell, Anaïs Rameau, Sat Ramphal, Vardit Ravitsky, Charlie Reavis, Samantha Salvi Cruz, Jamie Toghranegar, Adam Vogel, Stephanie Watts, Joseph Yracheta, Robin Zhao, Yael Bensoussan","doi":"10.3389/fdgth.2025.1484521","DOIUrl":null,"url":null,"abstract":"<p><p>The 2024 Voice AI Symposium presented by the Bridge2AI-Voice Consortium, was a 2-day event which took place May 1st-May 2nd in Tampa, FL. The event included four interactive panel sessions, which are summarized here. All four interactive panels featured an innovative format, designed to maximize engagement and facilitate deep discussions. Each panel began with a 45 min segment where moderators posed targeted questions to expert panelists, delving into complex topics within the field of voice AI. This was followed by a 45 min \"stakeholder forum,\" during which audience members asked questions and engaged in live interactive polls. Interactive polls stimulated meaningful conversation between panelists and attendees, and brought to light diverse viewpoints. Workshops were audio recorded and transcripts were assembled with assistance from generative A.I tools including Whisper Version 7.13.1 for audio transcription and ChatGPT version 4.0 for content summation. Content was then reviewed and edited by authors.</p>","PeriodicalId":73078,"journal":{"name":"Frontiers in digital health","volume":"7 ","pages":"1484521"},"PeriodicalIF":3.2000,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11983451/pdf/","citationCount":"0","resultStr":"{\"title\":\"Interactive Panel Summaries of the 2024 Voice AI Symposium.\",\"authors\":\"Jean-Christophe Bélisle-Pipon, James Anibal, Ruth Bahr, Steven Bedrick, Oita Coleman, David Dorr, Barbara J Evans, Guy Fagherazzi, Alexander Gelbard, Satrajit Ghosh, Anita Ho, Christie Jackson, Dale Joachim, Lampros Kourtis, Andrea Krussel, Amir Lahav, Breanna Leuze, Bob MacDonald, Geralyn Miller, Vivek Mohan, Matthew Naunheim, Maria Powell, Anaïs Rameau, Sat Ramphal, Vardit Ravitsky, Charlie Reavis, Samantha Salvi Cruz, Jamie Toghranegar, Adam Vogel, Stephanie Watts, Joseph Yracheta, Robin Zhao, Yael Bensoussan\",\"doi\":\"10.3389/fdgth.2025.1484521\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The 2024 Voice AI Symposium presented by the Bridge2AI-Voice Consortium, was a 2-day event which took place May 1st-May 2nd in Tampa, FL. The event included four interactive panel sessions, which are summarized here. All four interactive panels featured an innovative format, designed to maximize engagement and facilitate deep discussions. Each panel began with a 45 min segment where moderators posed targeted questions to expert panelists, delving into complex topics within the field of voice AI. This was followed by a 45 min \\\"stakeholder forum,\\\" during which audience members asked questions and engaged in live interactive polls. Interactive polls stimulated meaningful conversation between panelists and attendees, and brought to light diverse viewpoints. 
Workshops were audio recorded and transcripts were assembled with assistance from generative A.I tools including Whisper Version 7.13.1 for audio transcription and ChatGPT version 4.0 for content summation. Content was then reviewed and edited by authors.</p>\",\"PeriodicalId\":73078,\"journal\":{\"name\":\"Frontiers in digital health\",\"volume\":\"7 \",\"pages\":\"1484521\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2025-03-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11983451/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in digital health\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3389/fdgth.2025.1484521\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q1\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in digital health","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fdgth.2025.1484521","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Interactive Panel Summaries of the 2024 Voice AI Symposium.
The 2024 Voice AI Symposium, presented by the Bridge2AI-Voice Consortium, was a two-day event held May 1-2, 2024, in Tampa, FL. The event included four interactive panel sessions, which are summarized here. All four panels followed an innovative format designed to maximize engagement and facilitate deep discussion. Each panel began with a 45-minute segment in which moderators posed targeted questions to expert panelists, delving into complex topics within the field of voice AI. This was followed by a 45-minute "stakeholder forum," during which audience members asked questions and took part in live interactive polls. The polls stimulated meaningful conversation between panelists and attendees and brought diverse viewpoints to light. Workshops were audio-recorded, and transcripts were assembled with assistance from generative AI tools, including Whisper version 7.13.1 for audio transcription and ChatGPT version 4.0 for content summarization. The content was then reviewed and edited by the authors.