{"title":"Visual Analytics in Explaining Neural Networks with Neuron Clustering","authors":"Gülsüm Alicioğlu, Bo Sun","doi":"10.3390/ai5020023","DOIUrl":"https://doi.org/10.3390/ai5020023","url":null,"abstract":"Deep learning (DL) models have achieved state-of-the-art performance in many domains. The interpretation of their working mechanisms and decision-making process is essential because of their complex structure and black-box nature, especially for sensitive domains such as healthcare. Visual analytics (VA) combined with DL methods have been widely used to discover data insights, but they often encounter visual clutter (VC) issues. This study presents a compact neural network (NN) view design to reduce the visual clutter in explaining the DL model components for domain experts and end users. We utilized clustering algorithms to group hidden neurons based on their activation similarities. This design supports the overall and detailed view of the neuron clusters. We used a tabular healthcare dataset as a case study. The design for clustered results reduced visual clutter among neuron representations by 54% and connections by 88.7% and helped to observe similar neuron activations learned during the training process.","PeriodicalId":503525,"journal":{"name":"AI","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140736115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Single Image Super Resolution Using Deep Residual Learning","authors":"Moiz Hassan, K. Illanko, Xavier N Fernando","doi":"10.3390/ai5010021","DOIUrl":"https://doi.org/10.3390/ai5010021","url":null,"abstract":"Single Image Super Resolution (SSIR) is an intriguing research topic in computer vision where the goal is to create high-resolution images from low-resolution ones using innovative techniques. SSIR has numerous applications in fields such as medical/satellite imaging, remote target identification and autonomous vehicles. Compared to interpolation based traditional approaches, deep learning techniques have recently gained attention in SISR due to their superior performance and computational efficiency. This article proposes an Autoencoder based Deep Learning Model for SSIR. The down-sampling part of the Autoencoder mainly uses 3 by 3 convolution and has no subsampling layers. The up-sampling part uses transpose convolution and residual connections from the down sampling part. The model is trained using a subset of the VILRC ImageNet database as well as the RealSR database. Quantitative metrics such as PSNR and SSIM are found to be as high as 76.06 and 0.93 in our testing. We also used qualitative measures such as perceptual quality.","PeriodicalId":503525,"journal":{"name":"AI","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140222418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trust-Aware Reflective Control for Fault-Resilient Dynamic Task Response in Human–Swarm Cooperation","authors":"Yibei Guo, Yijiang Pang, Joseph Lyons, Michael Lewis, K. Sycara, Rui Liu","doi":"10.3390/ai5010022","DOIUrl":"https://doi.org/10.3390/ai5010022","url":null,"abstract":"Due to the complexity of real-world deployments, a robot swarm is required to dynamically respond to tasks such as tracking multiple vehicles and continuously searching for victims. Frequent task assignments eliminate the need for system calibration time, but they also introduce uncertainty from previous tasks, which can undermine swarm performance. Therefore, responding to dynamic tasks presents a significant challenge for a robot swarm compared to handling tasks one at a time. In human–human cooperation, trust plays a crucial role in understanding each other’s performance expectations and adjusting one’s behavior for better cooperation. Taking inspiration from human trust, this paper introduces a trust-aware reflective control method called “Trust-R”. Trust-R, based on a weighted mean subsequence reduced algorithm (WMSR) and human trust modeling, enables a swarm to self-reflect on its performance from a human perspective. It proactively corrects faulty behaviors at an early stage before human intervention, mitigating the negative influence of uncertainty accumulated from dynamic tasks. Three typical task scenarios {Scenario 1: flocking to the assigned destination; Scenario 2: a transition between destinations; and Scenario 3: emergent response} were designed in the real-gravity simulation environment, and a human user study with 145 volunteers was conducted. Trust-R significantly improves both swarm performance and trust in dynamic task scenarios, marking a pivotal step forward in integrating trust dynamics into swarm robotics.","PeriodicalId":503525,"journal":{"name":"AI","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140224103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Few-Shot Fine-Grained Image Classification: A Comprehensive Review","authors":"Jie Ren, Changmiao Li, Yaohui An, Weichuan Zhang, Changming Sun","doi":"10.3390/ai5010020","DOIUrl":"https://doi.org/10.3390/ai5010020","url":null,"abstract":"Few-shot fine-grained image classification (FSFGIC) methods refer to the classification of images (e.g., birds, flowers, and airplanes) belonging to different subclasses of the same species by a small number of labeled samples. Through feature representation learning, FSFGIC methods can make better use of limited sample information, learn more discriminative feature representations, greatly improve the classification accuracy and generalization ability, and thus achieve better results in FSFGIC tasks. In this paper, starting from the definition of FSFGIC, a taxonomy of feature representation learning for FSFGIC is proposed. According to this taxonomy, we discuss key issues on FSFGIC (including data augmentation, local and/or global deep feature representation learning, class representation learning, and task-specific feature representation learning). In addition, the existing popular datasets, current challenges and future development trends of feature representation learning on FSFGIC are also described.","PeriodicalId":503525,"journal":{"name":"AI","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140262664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comprehensive Review of AI Techniques for Addressing Algorithmic Bias in Job Hiring","authors":"Elham Albaroudi, Taha Mansouri, Ali Alameer","doi":"10.3390/ai5010019","DOIUrl":"https://doi.org/10.3390/ai5010019","url":null,"abstract":"The study comprehensively reviews artificial intelligence (AI) techniques for addressing algorithmic bias in job hiring. More businesses are using AI in curriculum vitae (CV) screening. While the move improves efficiency in the recruitment process, it is vulnerable to biases, which have adverse effects on organizations and the broader society. This research aims to analyze case studies on AI hiring to demonstrate both successful implementations and instances of bias. It also seeks to evaluate the impact of algorithmic bias and the strategies to mitigate it. The basic design of the study entails undertaking a systematic review of existing literature and research studies that focus on artificial intelligence techniques employed to mitigate bias in hiring. The results demonstrate that the correction of the vector space and data augmentation are effective natural language processing (NLP) and deep learning techniques for mitigating algorithmic bias in hiring. The findings underscore the potential of artificial intelligence techniques in promoting fairness and diversity in the hiring process with the application of artificial intelligence techniques. The study contributes to human resource practice by enhancing hiring algorithms’ fairness. It recommends the need for collaboration between machines and humans to enhance the fairness of the hiring process. The results can help AI developers make algorithmic changes needed to enhance fairness in AI-driven tools. This will enable the development of ethical hiring tools, contributing to fairness in society.","PeriodicalId":503525,"journal":{"name":"AI","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139858052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comprehensive Review of AI Techniques for Addressing Algorithmic Bias in Job Hiring","authors":"Elham Albaroudi, Taha Mansouri, Ali Alameer","doi":"10.3390/ai5010019","DOIUrl":"https://doi.org/10.3390/ai5010019","url":null,"abstract":"The study comprehensively reviews artificial intelligence (AI) techniques for addressing algorithmic bias in job hiring. More businesses are using AI in curriculum vitae (CV) screening. While the move improves efficiency in the recruitment process, it is vulnerable to biases, which have adverse effects on organizations and the broader society. This research aims to analyze case studies on AI hiring to demonstrate both successful implementations and instances of bias. It also seeks to evaluate the impact of algorithmic bias and the strategies to mitigate it. The basic design of the study entails undertaking a systematic review of existing literature and research studies that focus on artificial intelligence techniques employed to mitigate bias in hiring. The results demonstrate that the correction of the vector space and data augmentation are effective natural language processing (NLP) and deep learning techniques for mitigating algorithmic bias in hiring. The findings underscore the potential of artificial intelligence techniques in promoting fairness and diversity in the hiring process with the application of artificial intelligence techniques. The study contributes to human resource practice by enhancing hiring algorithms’ fairness. It recommends the need for collaboration between machines and humans to enhance the fairness of the hiring process. The results can help AI developers make algorithmic changes needed to enhance fairness in AI-driven tools. This will enable the development of ethical hiring tools, contributing to fairness in society.","PeriodicalId":503525,"journal":{"name":"AI","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139798182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhejun Zhang, Huiying Chen, Ruonan Huang, Lihong Zhu, Shengling Ma, Larry Leifer, Wei Liu
{"title":"Automated Classification of User Needs for Beginner User Experience Designers: A Kano Model and Text Analysis Approach Using Deep Learning","authors":"Zhejun Zhang, Huiying Chen, Ruonan Huang, Lihong Zhu, Shengling Ma, Larry Leifer, Wei Liu","doi":"10.3390/ai5010018","DOIUrl":"https://doi.org/10.3390/ai5010018","url":null,"abstract":"This study introduces a novel tool for classifying user needs in user experience (UX) design, specifically tailored for beginners, with potential applications in education. The tool employs the Kano model, text analysis, and deep learning to classify user needs efficiently into four categories. The data for the study were collected through interviews and web crawling, yielding 19 user needs from Generation Z users (born between 1995 and 2009) of LEGO toys (Billund, Denmark). These needs were then categorized into must-be, one-dimensional, attractive, and indifferent needs through a Kano-based questionnaire survey. A dataset of over 3000 online comments was created through preprocessing and annotating, which was used to train and evaluate seven deep learning models. The most effective model, the Recurrent Convolutional Neural Network (RCNN), was employed to develop a graphical text classification tool that accurately outputs the corresponding category and probability of user input text according to the Kano model. A usability test compared the tool’s performance to the traditional affinity diagram method. The tool outperformed the affinity diagram method in six dimensions and outperformed three qualities of the User Experience Questionnaire (UEQ), indicating a superior UX. The tool also demonstrated a lower perceived workload, as measured using the NASA Task Load Index (NASA-TLX), and received a positive Net Promoter Score (NPS) of 23 from the participants. These findings underscore the potential of this tool as a valuable educational resource in UX design courses. It offers students a more efficient and engaging and less burdensome learning experience while seamlessly integrating artificial intelligence into UX design education. This study provides UX design beginners with a practical and intuitive tool, facilitating a deeper understanding of user needs and innovative design strategies.","PeriodicalId":503525,"journal":{"name":"AI","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139869814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"New Convolutional Neural Network and Graph Convolutional Network-Based Architecture for AI Applications in Alzheimer’s Disease and Dementia-Stage Classification","authors":"Md Easin Hasan, A. Wagler","doi":"10.3390/ai5010017","DOIUrl":"https://doi.org/10.3390/ai5010017","url":null,"abstract":"Neuroimaging experts in biotech industries can benefit from using cutting-edge artificial intelligence techniques for Alzheimer’s disease (AD)- and dementia-stage prediction, even though it is difficult to anticipate the precise stage of dementia and AD. Therefore, we propose a cutting-edge, computer-assisted method based on an advanced deep learning algorithm to differentiate between people with varying degrees of dementia, including healthy, very mild dementia, mild dementia, and moderate dementia classes. In this paper, four separate models were developed for classifying different dementia stages: convolutional neural networks (CNNs) built from scratch, pre-trained VGG16 with additional convolutional layers, graph convolutional networks (GCNs), and CNN-GCN models. The CNNs were implemented, and then the flattened layer output was fed to the GCN classifier, resulting in the proposed CNN-GCN architecture. A total of 6400 whole-brain medical reasoning imaging scans were obtained from the Alzheimer’s Disease Neuroimaging Initiative database to train and evaluate the proposed methods. We applied the 5-fold cross-validation (CV) technique for all the models. We presented the results from the best fold out of the five folds in assessing the performance of the models developed in this study. Hence, for the best fold of the 5-fold CV, the above-mentioned models achieved an overall accuracy of 45.47%, 71.17%, 99.06%, and 100%, respectively. The CNN-GCN model, in particular, demonstrates excellent performance in classifying different stages of dementia. Understanding the stages of dementia can assist biotech industry researchers in uncovering molecular markers and pathways connected with each stage.","PeriodicalId":503525,"journal":{"name":"AI","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139876100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"New Convolutional Neural Network and Graph Convolutional Network-Based Architecture for AI Applications in Alzheimer’s Disease and Dementia-Stage Classification","authors":"Md Easin Hasan, A. Wagler","doi":"10.3390/ai5010017","DOIUrl":"https://doi.org/10.3390/ai5010017","url":null,"abstract":"Neuroimaging experts in biotech industries can benefit from using cutting-edge artificial intelligence techniques for Alzheimer’s disease (AD)- and dementia-stage prediction, even though it is difficult to anticipate the precise stage of dementia and AD. Therefore, we propose a cutting-edge, computer-assisted method based on an advanced deep learning algorithm to differentiate between people with varying degrees of dementia, including healthy, very mild dementia, mild dementia, and moderate dementia classes. In this paper, four separate models were developed for classifying different dementia stages: convolutional neural networks (CNNs) built from scratch, pre-trained VGG16 with additional convolutional layers, graph convolutional networks (GCNs), and CNN-GCN models. The CNNs were implemented, and then the flattened layer output was fed to the GCN classifier, resulting in the proposed CNN-GCN architecture. A total of 6400 whole-brain medical reasoning imaging scans were obtained from the Alzheimer’s Disease Neuroimaging Initiative database to train and evaluate the proposed methods. We applied the 5-fold cross-validation (CV) technique for all the models. We presented the results from the best fold out of the five folds in assessing the performance of the models developed in this study. Hence, for the best fold of the 5-fold CV, the above-mentioned models achieved an overall accuracy of 45.47%, 71.17%, 99.06%, and 100%, respectively. The CNN-GCN model, in particular, demonstrates excellent performance in classifying different stages of dementia. Understanding the stages of dementia can assist biotech industry researchers in uncovering molecular markers and pathways connected with each stage.","PeriodicalId":503525,"journal":{"name":"AI","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139816376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}