{"title":"Widened Attention‐Enhanced Atrous Convolutional Network for Efficient Embedded Vision Applications under Resource Constraints","authors":"Meftahul Ferdaus, Mahdi Abdelguerfi, Kendall N. Niles, Ken Pathak, Joe Tom","doi":"10.1002/aisy.202300480","DOIUrl":"https://doi.org/10.1002/aisy.202300480","url":null,"abstract":"Onboard image analysis enables real‐time autonomous capabilities for unmanned platforms including aerial, ground, and aquatic drones. Performing classification on embedded systems, rather than transmitting data, allows rapid perception and decision‐making critical for time‐sensitive applications such as search and rescue, hazardous environment exploration, and military operations. To fully capitalize on these systems’ potential, specialized deep learning solutions are needed that balance accuracy and computational efficiency for time‐sensitive inference. This article introduces the widened attention‐enhanced atrous convolution‐based efficient network (WACEfNet), a new convolutional neural network designed specifically for real‐time visual classification challenges using resource‐constrained embedded devices. WACEfNet builds on EfficientNet and integrates innovative width‐wise feature processing, atrous convolutions, and attention modules to improve representational power without excessive overhead. Extensive benchmarking confirms state‐of‐the‐art performance from WACEfNet for aerial imaging applications while remaining suitable for embedded deployment. The improvements in accuracy and speed demonstrate the potential of customized deep learning advancements to unlock new capabilities for unmanned aerial vehicles and related embedded systems with tight size, weight, and power constraints. This research offers an optimized framework, combining widened residual learning and attention mechanisms, to meet the unique demands of high‐fidelity real‐time analytics across a variety of embedded perception paradigms.","PeriodicalId":7187,"journal":{"name":"Advanced Intelligent Systems","volume":"8 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141683437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiple Sampling Capsule Robot for Studying Gut Microbiome","authors":"Sanghyeon Park, M. Hoang, Jayoung Kim, Sukho Park","doi":"10.1002/aisy.202300625","DOIUrl":"https://doi.org/10.1002/aisy.202300625","url":null,"abstract":"Longitudinal analysis of the gut microbiota is crucial for understanding its relationship with gastrointestinal (GI) diseases and advancing diagnostics and treatments. Most ingestible sampling devices move passively within the GI tract, rely on physiological factors, and fail at multipoint sampling. This study proposes a multiple‐sampling capsule robot capable of collecting gut microbiota from various locations within the GI tract with minimal cross‐contamination. The proposed capsule comprises a body, a driving unit, six sampling tools, a central rod, and two heads. Electromagnetic field control facilitates control of the orientation and position of the capsule, particularly to align the channel of the capsule where the sample is collected facing downward. The capsule can collect six gut microbiota samples preventing contamination before and after sampling. The active locomotion and multiple sampling performance of the capsule are evaluated through basic performance tests (propulsion direction precision: 0.76 ± 0.52°, channel alignment precision: 0.84 ± 0.55°), phantom tests (average amount per sample: 10.3 ± 2.4 mg, cross‐contamination: 0.6 ± 0.4%), and ex‐vivo tests (average amount per sample: 9.9 ± 1.7 mg). The possibility of integration and clinical application of the capsule is confirmed through preclinical tests using a porcine model.","PeriodicalId":7187,"journal":{"name":"Advanced Intelligent Systems","volume":"32 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141273145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unraveling Multimodal Brain Signatures: Deciphering Transdiagnostic Dimensions of Psychopathology in Adolescents","authors":"Jing Xia, Nanguang Chen, Anqi Qiu","doi":"10.1002/aisy.202300577","DOIUrl":"https://doi.org/10.1002/aisy.202300577","url":null,"abstract":"Adolescent psychiatric disorders arise from intricate interactions of clinical histories and disruptions in brain development. While connections between psychopathology and brain functional connectivity are studied, the use of deep learning to elucidate overlapping neural mechanisms through multimodal brain images remains nascent. Utilizing two adolescent datasets—the Philadelphia Neurodevelopmental Cohort (PNC, n = 1100) and the Adolescent Brain Cognitive Development (ABCD, n = 7536)—this study employs interpretable neural networks and demonstrates that incorporating brain morphology, along with functional and structural networks, augments traditional clinical characteristics (age, gender, race, parental education, medical history, and trauma exposure). Predictive accuracy reaches 0.37–0.464 between real and predicted general psychopathology and four psychopathology dimensions (externalizing, psychosis, anxiety, and fear). The brain morphology and connectivities within the frontoparietal, default mode network, and visual associate networks are recurrent across general psychopathology and four psychopathology dimensions. Unique structural and functional pathways originating from the cerebellum, amygdala, and visual‐sensorimotor cortex are linked with these individual dimensions. Consistent findings across both PNC and ABCD affirm the generalizability. The results underscore the potential of diverse sensory inputs in steering executive processes tied to psychopathology dimensions in adolescents, hinting at neural avenues for targeted therapeutic interventions and preventive strategies.","PeriodicalId":7187,"journal":{"name":"Advanced Intelligent Systems","volume":"47 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141103467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint Situational Assessment‐Hierarchical Decision‐Making Framework for Maneuver Intent Decisions","authors":"Ruihai Chen, Hao Li, Guanwei Yan, Haojie Peng, Qian Zhang","doi":"10.1002/aisy.202300574","DOIUrl":"https://doi.org/10.1002/aisy.202300574","url":null,"abstract":"Decision‐making in unmanned combat aerial vehicles (UCAVs) presents a multifaceted challenge because of the complexity and dynamics of the flight environment, which leads to hurdles in training convergence, low decision validity, and the dimensionality catastrophe for decision‐making neural networks. A novel framework is proposed to address breaking down the complicated decision issues, which combines the strengths of graph convolutional networks in relation extraction with the ability of hierarchical reinforcement learning. To solve the problem of decision validity under high‐dimensional inputs, the joint framework is applied to the Maneuver Intent's decision, and a maneuver library‐based state space design method is suggested. The joint framework executes adaptable strategies and flight maneuvers to address the issue of training non‐convergence or task failure due to difficult‐to‐obtain reward signals across various scenarios. Then, the recurrent curriculum training and cross‐entropy rewards are designed to train decisions on different sub‐strategies. The experimental evaluation demonstrated more flexibility and adaptability in decision‐making problems under complex tasks compared to rule‐based and reinforcement learning baseline methods. The method proposed in this article provides a novel approach to resolving intricate decision problems, and which has certain theoretical significance and reference value for engineering applications.","PeriodicalId":7187,"journal":{"name":"Advanced Intelligent Systems","volume":"103 13","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140678886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Protective and Collision-Sensitive Gel-Skin: Visco-Elastomeric Polyvinyl Chloride Gel Rapidly Detects Robot Collision by Breaking Electrical Charge Accumulation Stability","authors":"Geonwoo Hwang, Jongseok Nam, Minki Kim, David Santiago Diaz Cortes, Ki-Uk Kyung","doi":"10.1002/aisy.202300583","DOIUrl":"https://doi.org/10.1002/aisy.202300583","url":null,"abstract":"Human–robot collaboration (HRC) is effective to improve productivity in industrial fields, based on the robot's fast and precise work and the human's flexible skill. To facilitate the HRC system, the first priority is to ensure safety in the event of accidents, such as collisions between robots and humans. Therefore, a protective and collision-sensitive robot skin, named Gel-Skin is proposed to guarantee the safety in HRC. The Gel-Skin is composed of polyvinyl chloride (PVC) gel, which is a functional material with piezoresistive characteristics and impact absorption capability. In particular, the PVC gel has a distinctive piezoresistive property that the relation between mechanical pressure and electrical resistance is tunable depending on an applied voltage. When a voltage is applied to the PVC gel, the electrical charges are accumulated around the anode and it shows increased piezoresistive sensitivity. In this study, it is verified for the PVC gel to exhibit the 4.78 times higher sensitivity by simply applying a voltage. This novel physical phenomenon enables the Gel-Skin to detect the collision rapidly. Finally, the Gel-Skin is applicated to a real robot system and it is verified that the Gel-Skin can detect a collision and absorb impact to ensure safety.","PeriodicalId":7187,"journal":{"name":"Advanced Intelligent Systems","volume":"20 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140569526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Active Whisker-Inspired Food Material Surface Property Measurement Using Deep-Learned Mechanosensor","authors":"Jieun Park, Minho Kim, Jinhyung Park, Myungrae Hong, Sunghoon Im, Damin Choi, Eunyoung Kim, Dohyeon Gong, Seokhaeng Huh, Seung-Un Jo, ChangHwan Kim, Je-Sung Koh, Seungyong Han, Daeshik Kang","doi":"10.1002/aisy.202300660","DOIUrl":"https://doi.org/10.1002/aisy.202300660","url":null,"abstract":"Rat whiskers are an exceptional sensing system, extracting information from their surrounding environment. Inspired by this concept, active whisker sensors measure various physical and geometric properties through contact with objects. However, previous research has focused on measuring the object geometry, often overlooking the potential for broader applications of the sensors. Herein, an active whisker sensor that enables simple measurement of the surface properties such as surface hardness and adhesiveness is reported. Composed of motor-, wire-, and crack-based mechanosensor, the active whisker sensor implements a tapping process inspired by the movement of a rat's whiskers to quickly evaluate the object surface. One area of potential application is the food industry. The active whisker sensors offer a new approach to measuring surface properties of viscoelastic and inelastic food that are difficult to measure with traditional bulky systems. Herein, it is validated that the tapping process can be used to measure the surface properties of a various foods. With the aid of machine learning algorithms, sensor can also recognize differences in the surface properties of bananas at different ripeness stages and classify them with 99% accuracy. In this report, the possibilities for applications of active whisker sensors, including food industry, robotics, and medical devices, are opened up.","PeriodicalId":7187,"journal":{"name":"Advanced Intelligent Systems","volume":"64 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139760053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Modular Robotic Platform for Biological Research: Cell Culture Automation and Remote Experimentation","authors":"Jungmin Hamm, Seonghyeon Lim, Jiae Park, Jiwon Kang, Injun Lee, Yoongeun Lee, Jiseok Kang, Youngjun Jo, Jaejin Lee, Seoyeong Lee, M. C. Ratri, A. I. Brilian, Seungyeon Lee, Seokhwan Jeong, Kwanwoo Shin","doi":"10.1002/aisy.202300566","DOIUrl":"https://doi.org/10.1002/aisy.202300566","url":null,"abstract":"Robotic arms are now commonplace in diverse settings and are poised to play a crucial role in automating laboratory tasks. However, biological experiments remain challenging for automation due to their dependence on human factors, such as researchers’ skills and experience. This article introduces robotic automation and remote control for both general and biological research tasks through a modularized platform comprising a robotic arm, auxiliary tools, and software. This platform facilitates fully automated or remote execution of key experiments in chemistry and biology, including liquid handling, mixing, cell seeding, culturing, and genetic manipulation. The robot interfaces seamlessly with standard laboratory equipment and operates remotely in real time through an online program. Integration of a vision system via robotic arm webcams ensures precise positioning and object localization, enhancing accuracy. This modularized robotic platform signifies a substantial advancement in lab automation, promising enhanced efficiency, reproducibility, and scientific progress compared to human‐led experiments.","PeriodicalId":7187,"journal":{"name":"Advanced Intelligent Systems","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139814864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Bio‐Inspired Neuromorphic Sensory System","authors":"Tong Wang, Xiao-Xue Wang, Juan Wen, Zhenya Shao, He-Ming Huang, Xin Guo","doi":"10.1002/aisy.202200047","DOIUrl":"https://doi.org/10.1002/aisy.202200047","url":null,"abstract":"The advent of the intelligent society leads to the exponential growth of information, imposing urgent requirements in a time‐ and energy‐efficient way to process information where data are generated. This issue can be addressed by the neuromorphic paradigm of computing inspired by biological sensory systems that build up the association between external stimuli and the response of an organism in real‐time; in the paradigm, a neuromorphic system is integrated with sensors to form an artificial sensory system. Herein, a neuromorphic sensory system with integrated capabilities of gas sensing, data storage, and processing is demonstrated. Leaky integrate‐and‐fire (LIF) neurons, the basic computing units in the system, are realized with volatile memristive device Pt/Ag/TaOx/Pt; sensory neurons, i.e., the LIF neurons connected with an array of gas sensors, detect gases and convert the chemical information of gases into neural spikes; synapses based on nonvolatile memristive device Pt/Ta/TaOx/Pt transmit the signals from sensory neurons to relay neurons according to synaptic weights, which are trained by the supervised spike‐rate dependent plasticity; relay neurons then process the signals from the synapses and classify gases. The approach of this work can also be applied to emulate other biological perceptions through the integration with different sensors.","PeriodicalId":7187,"journal":{"name":"Advanced Intelligent Systems","volume":"120 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87779988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Powder X‐Ray Diffraction Pattern Is All You Need for Machine‐Learning‐Based Symmetry Identification and Property Prediction","authors":"Byung Do Lee, Jin-Woong Lee, W. Park, Joonseo Park, Min-Young Cho, S. Singh, M. Pyo, K. Sohn","doi":"10.1002/aisy.202200042","DOIUrl":"https://doi.org/10.1002/aisy.202200042","url":null,"abstract":"Herein, data‐driven symmetry identification, property prediction, and low‐dimensional embedding from powder X‐Ray diffraction (XRD) patterns of inorganic crystal structure database (ICSD) and materials project (MP) entries are reported. For this purpose, a fully convolutional neural network (FCN), transformer encoder (T‐encoder), and variational autoencoder (VAE) are used. The results are compared to those obtained from a well‐established crystal graph convolutional neural network (CGCNN). A task‐specified small dataset that focuses on a narrow material system, knowledge (rule)‐based descriptor extraction, and significant data dimension reduction are not the main focus of this study. Conventional powder XRD patterns, which are most widely used in materials research, can be used as a significantly informative material descriptor for deep learning. Both the FCN and T‐encoder outperform the CGCNN for symmetry classification. For property prediction, the performance of the FCN concatenated with multilayer perceptron reaches the performance level of CGCNN. Machine‐learning‐driven material property prediction from the powder XRD pattern deserves appreciation because no such attempts have been made despite common XRD‐driven symmetry (and lattice size) prediction and phase identification. The ICSD and MP data are embedded in the 2D (or 3D) latent space through the VAE, and well‐separated clustering according to the symmetry and property is observed.","PeriodicalId":7187,"journal":{"name":"Advanced Intelligent Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88297752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}