A Concept for Bio-Agentic Visual Communication: Bridging Swarm Intelligence with Biological Analogues

Bryan Starbuck, Hanlong Li, Bryan Cochran, Marc Weissburg, Bert Bras

Biomimetics, vol. 10, no. 9, published 2025-09-09. DOI: 10.3390/biomimetics10090605
Citations: 0
Abstract
Biological swarms communicate through decentralized, adaptive behaviors shaped by local interactions, selective attention, and symbolic signaling. These principles of animal communication enable robust coordination without centralized control or persistent connectivity. This work presents a proof of concept that identifies, evaluates, and translates biological communication strategies into a generative visual language for unmanned aerial vehicle (UAV) swarm agents operating in radio-frequency (RF)-denied environments. Drawing from natural exemplars such as bee waggle dancing, white-tailed deer flagging, and peacock feather displays, we construct a configuration space that encodes visual messages through trajectories and LED patterns. A large language model (LLM), preconditioned using retrieval-augmented generation (RAG), serves as a generative translation layer that interprets perception data and produces symbolic UAV responses. Five test cases evaluate the system's ability to preserve and adapt signal meaning through within-modality fidelity (maintaining symbolic structure in the same modality) and cross-modal translation (transferring meaning across motion and light). Covariance and eigenvalue-decomposition analysis demonstrates that this bio-agentic approach supports clear, expressive, and decentralized communication, with motion-based signaling achieving near-perfect clarity and expressiveness (0.992, 1.000), while LED-only and multi-signal cases showed partial success, maintaining high expressiveness (~1.000) but with much lower clarity (≤0.298).
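The abstract's covariance and eigenvalue-decomposition analysis can be illustrated with a minimal sketch. The metric definitions below are hypothetical stand-ins, not the paper's: "expressiveness" is taken as a normalized participation ratio of the covariance eigenvalues (how many independent signal dimensions carry variance), and "clarity" as the fraction of variance concentrated in the leading eigenvector. The feature names and synthetic data are likewise assumptions for illustration only.

```python
import numpy as np

# Toy data: rows are repeated executions of one symbolic UAV signal,
# columns are hypothetical features (trajectory curvature, speed,
# LED hue, LED blink rate).
rng = np.random.default_rng(0)
signal_samples = rng.normal(loc=[1.0, 0.5, 0.2, 0.8],
                            scale=[0.05, 0.05, 0.4, 0.4],
                            size=(50, 4))

# Covariance of the feature dimensions across executions.
cov = np.cov(signal_samples, rowvar=False)

# Eigendecomposition: eigenvalues describe how variance spreads
# across independent signal dimensions (eigvalsh since cov is symmetric).
eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)

# Hypothetical metrics (NOT the paper's definitions):
# participation ratio measures how many eigen-directions carry variance;
# normalizing by dimension count gives a 0-1 "expressiveness" score.
participation = eigvals.sum() ** 2 / (eigvals ** 2).sum()
expressiveness = participation / len(eigvals)

# "Clarity" as variance concentration along the dominant eigenvector.
clarity = eigvals.max() / eigvals.sum()

print(f"expressiveness ~ {expressiveness:.3f}, clarity ~ {clarity:.3f}")
```

Under these stand-in definitions, a signal repeated with variance spread evenly over all feature dimensions scores high on expressiveness, while one whose variance collapses onto a single dominant direction scores high on clarity, mirroring the trade-off the abstract reports between motion-only and LED-only signaling.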