{"title":"Variable projection support vector machines and some applications using adaptive Hermite expansions","authors":"Tamas Dozsa, Federico Deuschle, Bram Cornelis, Peter Kovacs","doi":"10.1142/s0129065724500047","DOIUrl":"https://doi.org/10.1142/s0129065724500047","url":null,"abstract":"We introduce an extension of the classical support vector machine classification algorithm with adaptive orthogonal transformations. The proposed transformations are realized through so-called variable projection operators. This approach allows the classifier to learn an informative representation of the data during the training process. Furthermore, choosing the underlying adaptive transformations correctly allows for learning interpretable parameters. Since the gradients of the proposed transformations are known with respect to the learnable parameters, we focus on training the primal form of the modified SVM objectives using a stochastic subgradient method. We consider the possibility of using Mercer kernels with the proposed algorithms. We construct a case study using linear combinations of adaptive Hermite functions, where the proposed classification scheme outperforms the classical support vector machine approach. The proposed variable projection support vector machines provide a lightweight alternative to deep learning methods which incorporate automatic feature extraction.","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"191 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136312493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
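The record above describes two ingredients: projecting signals onto adaptive (dilated and translated) Hermite functions, and training the primal SVM objective with a stochastic subgradient method. A minimal sketch of both, assuming fixed rather than jointly optimized dilation/translation parameters; all function names are illustrative, not from the paper:

```python
import numpy as np

def hermite_functions(n_max, x):
    """Orthonormal Hermite functions h_0..h_{n_max-1} via the stable recurrence."""
    H = np.zeros((n_max, x.size))
    H[0] = np.pi ** -0.25 * np.exp(-x**2 / 2)
    if n_max > 1:
        H[1] = np.sqrt(2.0) * x * H[0]
    for n in range(2, n_max):
        H[n] = np.sqrt(2.0 / n) * x * H[n - 1] - np.sqrt((n - 1) / n) * H[n - 2]
    return H

def adaptive_features(signal, t, dilation, translation, n_max=6):
    """Expansion coefficients of a signal in dilated/translated Hermite functions."""
    basis = np.sqrt(dilation) * hermite_functions(n_max, dilation * (t - translation))
    return basis @ signal  # coefficient vector = learned-representation features

def train_linear_svm(X, y, lam=1e-2, lr=1e-2, epochs=200, seed=0):
    """Stochastic subgradient descent on the primal hinge-loss SVM objective."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            margin = y[i] * (X[i] @ w + b)
            # subgradient of lam/2*||w||^2 + max(0, 1 - margin)
            g_w = lam * w - (y[i] * X[i] if margin < 1 else 0.0)
            g_b = -y[i] if margin < 1 else 0.0
            w -= lr * g_w
            b -= lr * g_b
    return w, b
```

In the full method the dilation and translation would also receive (sub)gradient updates through the variable projection operator; here they are held fixed to keep the sketch short.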
{"title":"Neonatal White Matter Damage Analysis using DTI Super-resolution and Multi-modality Image Registration","authors":"Yi Wang, Yuan Zhang, Chi Ma, Rui Wang, Zhe Guo, Yu Shen, Miaomiao Wang, Hongying Meng","doi":"10.1142/s0129065724500011","DOIUrl":"https://doi.org/10.1142/s0129065724500011","url":null,"abstract":"Punctate White Matter Damage (PWMD) is a common neonatal brain disease, which can easily cause neurological disorders and strongly affect quality of life in terms of neuromotor and cognitive performance. In particular, the optimal treatment window can easily be missed at the neonatal stage, because PWMD is difficult to diagnose with currently existing methods. PWMD lesions are relatively conspicuous on T1-weighted Magnetic Resonance Imaging (T1 MRI), appearing as semi-oval, clustered or linear high-intensity signals. Diffusion Tensor Magnetic Resonance Imaging (DT-MRI, referred to as DTI) is a noninvasive technique that can be used to study brain microstructures in vivo and provide information on movement- and cognition-related nerve fiber tracts. Therefore, a new method was proposed that uses T1 MRI combined with DTI for better neonatal PWMD analysis, based on DTI super-resolution and multi-modality image registration. First, after preprocessing, neonatal DTI super-resolution was performed with a cubic B-spline interpolation algorithm in the Log-Euclidean space to raise the DTI resolution to fit the T1 MRIs and facilitate nerve fiber tractography. Second, the symmetric diffeomorphic registration algorithm and the inverse b0 image were selected for multi-modality image registration of DTI and T1 MRI. Finally, the 3D lesion models were combined with the fiber tractography results to analyze and predict the degree to which PWMD lesions affect fiber tracts. Extensive experiments demonstrated the effectiveness and superior performance of our proposed method. This streamlined technique can play an essential auxiliary role in diagnosing and treating neonatal PWMD.","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"148 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136312439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
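The super-resolution step above interpolates diffusion tensors in the Log-Euclidean space: tensors are mapped through the matrix logarithm, interpolated componentwise, and mapped back through the matrix exponential, which keeps every interpolated tensor positive-definite. A minimal sketch of that core idea, shown for the two-tensor midpoint rather than the full cubic B-spline weighting; function names are illustrative:

```python
import numpy as np

def sym_logm(T):
    """Matrix logarithm of a symmetric positive-definite tensor via eigendecomposition."""
    w, V = np.linalg.eigh(T)
    return (V * np.log(w)) @ V.T

def sym_expm(L):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(L)
    return (V * np.exp(w)) @ V.T

def log_euclidean_midpoint(T1, T2):
    """Interpolate two diffusion tensors at the midpoint in Log-Euclidean space.
    Unlike naive componentwise averaging, this stays positive-definite and avoids
    the tensor-swelling artifact; B-spline super-resolution replaces the 0.5/0.5
    weights with cubic spline weights over a neighborhood of voxels."""
    return sym_expm(0.5 * (sym_logm(T1) + sym_logm(T2)))
```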
{"title":"Effect of Action Units, Viewpoint and Immersion on Emotion Recognition Using Dynamic Virtual Faces.","authors":"Miguel A Vicente-Querol, Antonio Fernández-Caballero, Pascual González, Luz M González-Gualda, Patricia Fernández-Sotos, José P Molina, Arturo S García","doi":"10.1142/S0129065723500533","DOIUrl":"https://doi.org/10.1142/S0129065723500533","url":null,"abstract":"<p><p>Facial affect recognition is a critical skill in human interactions that is often impaired in psychiatric disorders. To address this challenge, tests have been developed to measure and train this skill. Recently, virtual human (VH) and virtual reality (VR) technologies have emerged as novel tools for this purpose. This study investigates the unique contributions of different factors in the communication and perception of emotions conveyed by VHs. Specifically, it examines the effects of the use of action units (AUs) in virtual faces, the positioning of the VH (frontal or mid-profile), and the level of immersion in the VR environment (desktop screen versus immersive VR). Thirty-six healthy subjects participated in each condition. Dynamic virtual faces (DVFs), VHs with facial animations, were used to represent the six basic emotions and the neutral expression. The results highlight the important role of the accurate implementation of AUs in virtual faces for emotion recognition. Furthermore, it is observed that frontal views outperform mid-profile views in both test conditions, while immersive VR shows a slight improvement in emotion recognition. This study provides novel insights into the influence of these factors on emotion perception and advances the understanding and application of these technologies for effective facial emotion recognition training.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"33 10","pages":"2350053"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41155867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-supervised EEG representation learning with contrastive predictive coding for post-stroke","authors":"Fangzhou Xu, Yihao Yan, Jianqun Zhu, Xinyi Chen, Licai Gao, Yanbing Liu, Weiyou Shi, Yitai Lou, Wei Wang, Jiancai Leng, Yang Zhang","doi":"10.1142/s0129065723500661","DOIUrl":"https://doi.org/10.1142/s0129065723500661","url":null,"abstract":"Stroke patients are prone to fatigue during the EEG acquisition procedure, and experiments place high demands on subjects' cognition and physical condition. Therefore, learning effective feature representations is very important. Deep learning networks have been widely used in motor imagery (MI) based brain-computer interfaces (BCIs). This paper proposes a contrastive predictive coding (CPC) framework based on the modified S-transform (MST) to generate MST-CPC feature representations. MST is used to acquire time-frequency features to improve the decoding performance for MI task recognition. EEG2Image is used to convert multi-channel one-dimensional EEG into two-dimensional EEG topography. High-level feature representations are generated by CPC, which consists of an encoder and an autoregressive model. Finally, the effectiveness of the generated features is verified by the k-means clustering algorithm. Our model generates features with high efficiency and a good clustering effect. After classification performance evaluation, the average classification accuracy of MI tasks is 89% based on 40 subjects. The proposed method can obtain effective feature representations and improve the performance of MI-BCI systems. A comparison of several self-supervised methods on the public dataset shows that the MST-CPC model achieves the highest average accuracy. This is a breakthrough in the combination of self-supervised learning and image processing of EEG signals. It is helpful to provide effective rehabilitation training for stroke patients to promote motor function recovery.","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135132847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
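CPC trains the encoder and autoregressive model mentioned above by predicting future embeddings and scoring the true future against negative samples with the InfoNCE loss. A minimal single-step sketch; the linear predictor W and the way negatives are drawn are simplifying assumptions, not the paper's exact setup:

```python
import numpy as np

def info_nce(c, z_pos, z_negs, W):
    """InfoNCE loss for one prediction step: the context vector c predicts a
    future embedding via W @ c; the prediction is scored (dot product) against
    the true future z_pos and negative embeddings z_negs. Lower loss means the
    encoder makes the true future more distinguishable from negatives."""
    pred = W @ c
    scores = np.concatenate([[z_pos @ pred], z_negs @ pred])
    scores -= scores.max()  # shift for numerical stability of the softmax
    # -log softmax probability assigned to the positive (index 0)
    return -scores[0] + np.log(np.exp(scores).sum())
```

In training, this loss would be minimized jointly over the encoder, the autoregressive context model, and one W per prediction horizon.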
{"title":"Convolutional Neural Networks Quantization with Double-Stage Squeeze-and-Threshold.","authors":"Binyi Wu, Bernd Waschneck, Christian Georg Mayr","doi":"10.1142/S0129065722500514","DOIUrl":"https://doi.org/10.1142/S0129065722500514","url":null,"abstract":"<p><p>It has been proven that, compared to using 32-bit floating-point numbers in the training phase, Deep Convolutional Neural Networks (DCNNs) can operate with low-precision during inference, thereby saving memory footprint and power consumption. However, neural network quantization is always accompanied by accuracy degradation. Here, we propose a quantization method called double-stage Squeeze-and-Threshold (double-stage ST) to close the accuracy gap with full-precision models. While accurate colors in pictures can be pleasing to the viewer, they are not necessary for distinguishing objects. The era of black and white television proves this idea. As long as the limited colors are filled reasonably for different objects, the objects can be well identified and distinguished. Our method utilizes the attention mechanism to adjust the activations and learn the thresholds to distinguish objects (features). We then divide the numerically rich activations into intervals (a limited variety of numerical values) by the learned thresholds. The proposed method supports both binarization and multi-bit quantization. Our method achieves state-of-the-art results. In binarization, ReActNet [Z. Liu, Z. Shen, S. Li, K. Helwegen, D. Huang and K. Cheng, arXiv:abs/2106.11309] trained with our method outperforms the previous state-of-the-art result by 0.2 percentage points. Whereas in multi-bit quantization, the top-1 accuracy of the 3-bit ResNet-18 [K. He, X. Zhang, S. Ren and J. Sun, Deep residual learning for image recognition, <i>2016 IEEE Conf. Computer Vision and Pattern Recognition, CVPR 2016</i>, 27-30 June 2016, Las Vegas, NV, USA (IEEE Computer Society, 2016), pp. 770-778] model exceeds the top-1 accuracy of its full-precision baseline model by 0.4 percentage points. The double-stage ST activation quantization method is easy to apply by inserting it before the convolution. Besides, the double-stage ST is detachable after training and introduces no computational cost at inference.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2250051"},"PeriodicalIF":8.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40376589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
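The interval-partition step described above, mapping numerically rich activations into a limited set of discrete levels by thresholds, can be sketched as follows. This shows only that partition step; the attention-based squeeze stage is omitted, and the thresholds are taken as given rather than learned:

```python
import numpy as np

def threshold_quantize(x, thresholds, levels):
    """Map continuous activations to discrete levels: each value falls into the
    interval its (sorted) thresholds define, and the whole interval is replaced
    by a single representative level. With 2**b - 1 thresholds and 2**b levels
    this yields b-bit activation quantization; one threshold gives binarization."""
    idx = np.digitize(x, thresholds)  # interval index per activation
    return levels[idx]
```

Training would push gradients into the thresholds (e.g. via a straight-through estimator), which is where the "learned thresholds" of the method come in; this sketch covers only the forward mapping.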
{"title":"Personalized Watch-Based Fall Detection Using a Collaborative Edge-Cloud Framework.","authors":"Anne Hee Ngu, Vangelis Metsis, Shuan Coyne, Priyanka Srinivas, Tarek Salad, Uddin Mahmud, Kyong Hee Chee","doi":"10.1142/S0129065722500484","DOIUrl":"https://doi.org/10.1142/S0129065722500484","url":null,"abstract":"<p><p>The majority of current smart health applications are deployed on a smartphone paired with a smartwatch. The phone is used as the computation platform or the gateway for connecting to the cloud, while the watch is used mainly as the data sensing device. In the case of fall detection applications for older adults, this kind of setup is not very practical, since it requires users to always keep their phones in proximity while doing their daily chores. When a person falls, in a moment of panic, it might be difficult to locate the phone in order to interact with the Fall Detection App and indicate whether they are fine or need help. This paper demonstrates the feasibility of running a real-time personalized deep-learning-based fall detection system on a smartwatch device using a collaborative edge-cloud framework. In particular, we present the software architecture we used for the collaborative framework, demonstrate how we automate the fall detection pipeline, design an appropriate UI for the small screen of the watch, and implement strategies for continuous data collection and automation of the personalization process within the limited computational and storage resources of a smartwatch. We also evaluate the usability of such a system with nine real-world older adult participants.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2250048"},"PeriodicalIF":8.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40701026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Convolutional Neural Networks-Based Framework for Early Identification of Dementia Using MRI of Brain Asymmetry.","authors":"Nitsa J Herzog, George D Magoulas","doi":"10.1142/S0129065722500538","DOIUrl":"https://doi.org/10.1142/S0129065722500538","url":null,"abstract":"<p><p>Computer-aided diagnosis of health problems and pathological conditions has become a substantial part of medical, biomedical, and computer science research. This paper focuses on the diagnosis of early and progressive dementia, building on the potential of deep learning (DL) models. The proposed computational framework exploits a magnetic resonance imaging (MRI) brain asymmetry biomarker, which has been associated with early dementia, and employs DL architectures for MRI image classification. Identification of early dementia is accomplished by an eight-layered convolutional neural network (CNN) as well as transfer learning of pretrained CNNs from ImageNet. Different instantiations of the proposed CNN architecture are tested. These are equipped with Softmax, support vector machine (SVM), linear discriminant (LD), or k-nearest neighbor (KNN) classification layers, assembled as a separate classification module, which are attached to the core CNN architecture. The initial imaging data were obtained from the MRI directory of the Alzheimer's disease neuroimaging initiative 3 (ADNI3) database. The independent testing dataset was created using image preprocessing and segmentation algorithms applied to unseen patients' imaging data. The proposed approach demonstrates a 90.12% accuracy in distinguishing cognitively normal subjects from patients with Alzheimer's disease (AD), and an 86.40% accuracy in detecting early mild cognitive impairment (EMCI).</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2250053"},"PeriodicalIF":8.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40359825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
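The brain-asymmetry biomarker above can be illustrated with the simplest possible construction: mirror an axial slice about the midsagittal axis and take the left-right intensity difference. Real pipelines first align the midline via registration; this sketch assumes a pre-aligned slice and is not the paper's exact preprocessing:

```python
import numpy as np

def asymmetry_map(slice_2d):
    """Left-right asymmetry of an axial MRI slice: flip about the vertical
    (midsagittal) axis and take the absolute intensity difference. A perfectly
    symmetric brain yields an all-zero map; localized atrophy on one side
    shows up as mirrored pairs of high-difference pixels."""
    mirrored = np.flip(slice_2d, axis=1)
    return np.abs(slice_2d - mirrored)
```

Maps like this (or crops of them) would then serve as CNN inputs in place of, or alongside, the raw slices.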