{"title":"Corrigendum: Facial expression recognition method based on PSA-YOLO network.","authors":"Ruoling Ma, Ruoyuan Zhang","doi":"10.3389/fnbot.2023.1161411","DOIUrl":"https://doi.org/10.3389/fnbot.2023.1161411","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.3389/fnbot.2022.1057983.].</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1161411"},"PeriodicalIF":3.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10116054/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9629930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human-machine interface for two-dimensional steering control with the auricular muscles.","authors":"Daniel J L L Pinheiro, Jean Faber, Silvestro Micera, Solaiman Shokur","doi":"10.3389/fnbot.2023.1154427","DOIUrl":"https://doi.org/10.3389/fnbot.2023.1154427","url":null,"abstract":"<p><p>Human-machine interfaces (HMIs) can be used to decode a user's motor intention to control an external device. People that suffer from motor disabilities, such as spinal cord injury, can benefit from the uses of these interfaces. While many solutions can be found in this direction, there is still room for improvement both from a decoding, hardware, and subject-motor learning perspective. Here we show, in a series of experiments with non-disabled participants, a novel decoding and training paradigm allowing naïve participants to use their auricular muscles (AM) to control two degrees of freedom with a virtual cursor. AMs are particularly interesting because they are vestigial muscles and are often preserved after neurological diseases. Our method relies on the use of surface electromyographic records and the use of contraction levels of both AMs to modulate the velocity and direction of a cursor in a two-dimensional paradigm. We used a locking mechanism to fix the current position of each axis separately to enable the user to stop the cursor at a certain location. A five-session training procedure (20-30 min per session) with a 2D center-out task was performed by five volunteers. All participants increased their success rate (Initial: 52.78 ± 5.56%; Final: 72.22 ± 6.67%; median ± median absolute deviation) and their trajectory performances throughout the training. We implemented a dual task with visual distractors to assess the mental challenge of controlling while executing another task; our results suggest that the participants could perform the task in cognitively demanding conditions (success rate of 66.67 ± 5.56%). Finally, using the Nasa Task Load Index questionnaire, we found that participants reported lower mental demand and effort in the last two sessions. To summarize, all subjects could learn to control the movement of a cursor with two degrees of freedom using their AM, with a low impact on the cognitive load. Our study is a first step in developing AM-based decoders for HMIs for people with motor disabilities, such as spinal cord injury.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1154427"},"PeriodicalIF":3.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10277645/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9710003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Location estimation based on feature mode matching with deep network models.","authors":"Yu-Ting Bai, Wei Jia, Xue-Bo Jin, Ting-Li Su, Jian-Lei Kong","doi":"10.3389/fnbot.2023.1181864","DOIUrl":"https://doi.org/10.3389/fnbot.2023.1181864","url":null,"abstract":"<p><strong>Introduction: </strong>Global navigation satellite system (GNSS) signals can be lost in viaducts, urban canyons, and tunnel environments. It has been a significant challenge to achieve the accurate location of pedestrians during Global Positioning System (GPS) signal outages. This paper proposes a location estimation only with inertial measurements.</p><p><strong>Methods: </strong>A method is designed based on deep network models with feature mode matching. First, a framework is designed to extract the features of inertial measurements and match them with deep networks. Second, feature extraction and classification methods are investigated to achieve mode partitioning and to lay the foundation for checking different deep networks. Third, typical deep network models are analyzed to match various features. The selected models can be trained for different modes of inertial measurements to obtain localization information. The experiments are performed with the inertial mileage dataset from Oxford University.</p><p><strong>Results and discussion: </strong>The results demonstrate that the appropriate networks based on different feature modes have more accurate position estimation, which can improve the localization accuracy of pedestrians in GPS signal outages.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1181864"},"PeriodicalIF":3.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10303778/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9794611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A road adhesion coefficient-tire cornering stiffness normalization method combining a fractional-order multi-variable gray model with a LSTM network and vehicle direct yaw-moment robust control.","authors":"Yufeng Lian, Wenhuan Feng, Shuaishi Liu, Zhigen Nie","doi":"10.3389/fnbot.2023.1229808","DOIUrl":"https://doi.org/10.3389/fnbot.2023.1229808","url":null,"abstract":"<p><p>A normalization method of road adhesion coefficient and tire cornering stiffness is proposed to provide the significant information for vehicle direct yaw-moment control (DYC) system design. This method is carried out based on a fractional-order multi-variable gray model (FOMVGM) and a long short-term memory (LSTM) network. A FOMVGM is used to generate training data and testing data for LSTM network, and LSTM network is employed to predict tire cornering stiffness with road adhesion coefficient. In addition to that, tire cornering stiffness represented by road adhesion coefficient can be used to built vehicle lateral dynamic model and participate in DYC robust controller design. Simulations under different driving cycles are carried out to demonstrate the feasibility and effectiveness of the proposed normalization method of road adhesion coefficient and tire cornering stiffness and vehicle DYC robust control system, respectively.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1229808"},"PeriodicalIF":3.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10445168/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10058117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"<i>Elongating, entwining, and dragging</i>: mechanism for adaptive locomotion of tubificine worm blobs in a confined environment.","authors":"Taishi Mikami, Daiki Wakita, Ryo Kobayashi, Akio Ishiguro, Takeshi Kano","doi":"10.3389/fnbot.2023.1207374","DOIUrl":"https://doi.org/10.3389/fnbot.2023.1207374","url":null,"abstract":"<p><p>Worms often aggregate through physical connections and exhibit remarkable functions such as efficient migration, survival under environmental changes, and defense against predators. In particular, entangled blobs demonstrate versatile behaviors for their survival; they form spherical blobs and migrate collectively by flexibly changing their shape in response to the environment. In contrast to previous studies on the collective behavior of worm blobs that focused on locomotion in a flat environment, we investigated the mechanisms underlying their adaptive motion in confined environments, focusing on tubificine worm collectives. We first performed several behavioral experiments to observe the aggregation process, collective response to aversive stimuli, the motion of a few worms, and blob motion in confined spaces with and without pegs. We found the blob deformed and passed through a narrow passage using environmental heterogeneities. Based on these behavioral findings, we constructed a simple two-dimensional agent-based model wherein the flexible body of a worm was described as a cross-shaped agent that could deform, rotate, and translate. The simulations demonstrated that the behavioral findings were well-reproduced. Our findings aid in understanding how physical interactions contribute to generating adaptive collective behaviors in real-world environments as well as in designing novel swarm robotic systems consisting of soft agents.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1207374"},"PeriodicalIF":3.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10495593/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10626471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When neuro-robots go wrong: A review.","authors":"Muhammad Salar Khan, James L Olds","doi":"10.3389/fnbot.2023.1112839","DOIUrl":"https://doi.org/10.3389/fnbot.2023.1112839","url":null,"abstract":"<p><p>Neuro-robots are a class of autonomous machines that, in their architecture, mimic aspects of the human brain and cognition. As such, they represent unique artifacts created by humans based on human understanding of healthy human brains. European Union's Convention on Roboethics 2025 states that the design of all robots (including neuro-robots) must include provisions for the complete traceability of the robots' actions, analogous to an aircraft's flight data recorder. At the same time, one can anticipate rising instances of neuro-robotic failure, as they operate on imperfect data in real environments, and the underlying AI behind such neuro-robots has yet to achieve explainability. This paper reviews the trajectory of the technology used in neuro-robots and accompanying failures. The failures demand an explanation. While drawing on existing explainable AI research, we argue explainability in AI limits the same in neuro-robots. In order to make robots more explainable, we suggest potential pathways for future research.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1112839"},"PeriodicalIF":3.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9935594/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10826222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rethinking 1D convolution for lightweight semantic segmentation.","authors":"Chunyu Zhang, Fang Xu, Chengdong Wu, Chenglong Xu","doi":"10.3389/fnbot.2023.1119231","DOIUrl":"https://doi.org/10.3389/fnbot.2023.1119231","url":null,"abstract":"<p><p>Lightweight semantic segmentation promotes the application of semantic segmentation in tiny devices. The existing lightweight semantic segmentation network (LSNet) has the problems of low precision and a large number of parameters. In response to the above problems, we designed a full 1D convolutional LSNet. The tremendous success of this network is attributed to the following three modules: 1D multi-layer space module (1D-MS), 1D multi-layer channel module (1D-MC), and flow alignment module (FA). The 1D-MS and the 1D-MC add global feature extraction operations based on the multi-layer perceptron (MLP) idea. This module uses 1D convolutional coding, which is more flexible than MLP. It increases the global information operation, improving features' coding ability. The FA module fuses high-level and low-level semantic information, which solves the problem of precision loss caused by the misalignment of features. We designed a 1D-mixer encoder based on the transformer structure. It performed fusion encoding of the feature space information extracted by the 1D-MS module and the channel information extracted by the 1D-MC module. 1D-mixer obtains high-quality encoded features with very few parameters, which is the key to the network's success. The attention pyramid with FA (AP-FA) uses an AP to decode features and adds a FA module to solve the problem of feature misalignment. Our network requires no pre-training and only needs a 1080Ti GPU for training. It achieved 72.6 mIoU and 95.6 FPS on the Cityscapes dataset and 70.5 mIoU and 122 FPS on the CamVid dataset. We ported the network trained on the ADE2K dataset to mobile devices, and the latency of 224 ms proves the application value of the network on mobile devices. The results on the three datasets prove that the network generalization ability we designed is powerful. Compared to state-of-the-art lightweight semantic segmentation algorithms, our designed network achieves the best balance between segmentation accuracy and parameters. The parameters of LSNet are only 0.62 M, which is currently the network with the highest segmentation accuracy within 1 M parameters.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"17 ","pages":"1119231"},"PeriodicalIF":3.1,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9947531/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10789709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}