Machine Learning with Applications: Latest Articles

Dual radar vision: A feature fusion approach for advanced object detection in IoT radar networks
Machine Learning with Applications · Pub Date: 2025-07-16 · DOI: 10.1016/j.mlwa.2025.100703
Philipp Reitz, Tobias Veihelmann, Norman Franchi, Maximilian Lübke

Abstract: 60 GHz radar technology is one of the most promising movement-detector solutions for Internet of Things (IoT) applications. However, challenges remain in accurately classifying different objects and detecting small objects in multi-target scenarios. This work investigates whether sensor fusion between multiple radars can enhance object detection and classification performance. A one-stage detection architecture, designed around the features of the latest YOLO generations, performs fusion on the range-Doppler (RD) maps of two non-coherent, spatially separated radars. A complete physical 3D propagation simulation using ray tracing evaluates the fusion methods; this approach enables precise ground truth, since all unprocessed signal components are known, and guarantees a consistent, error-free reference. Results demonstrate that dynamic, attention-based fusion significantly improves detection and classification compared with static fusion in both homogeneous and heterogeneous radar setups.

Citations: 0
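As a toy illustration (not the paper's YOLO-based architecture), static fusion blends two range-Doppler maps with one fixed weight, while a dynamic, attention-style scheme derives per-cell weights from the maps themselves. A minimal NumPy sketch:

```python
import numpy as np

def static_fusion(rd_a, rd_b, w=0.5):
    """Static fusion: a fixed convex combination of two range-Doppler maps."""
    return w * rd_a + (1.0 - w) * rd_b

def dynamic_fusion(rd_a, rd_b):
    """Dynamic, attention-style fusion: per-cell weights come from a softmax
    over the two sensors, so the stronger local response dominates."""
    logits = np.stack([rd_a, rd_b])                # shape (2, range, doppler)
    weights = np.exp(logits - logits.max(axis=0))  # numerically stable softmax
    weights /= weights.sum(axis=0)
    return (weights * logits).sum(axis=0)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))  # stand-ins for two radars' RD maps
b = rng.standard_normal((4, 4))
fused = dynamic_fusion(a, b)
```

Because the per-cell weights are positive and sum to one, the fused map stays within the elementwise range of the two inputs; static fusion uses the same weight budget everywhere and cannot adapt it locally.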
Precision glass thermoforming assisted by neural networks
Machine Learning with Applications · Pub Date: 2025-07-14 · DOI: 10.1016/j.mlwa.2025.100701
Yuzhou Zhang, Mohan Hua, Jinan Liu, Haihui Ruan

Abstract: Many glass products require thermoformed geometry with high precision. However, the traditional approach of developing a thermoforming process by trial and error wastes time and resources and often fails to produce successful outcomes. Hence, there is a need for an efficient predictive model, replacing costly simulations or experiments, to assist the design of precision glass thermoforming. In this work, we report a surrogate model, based on a dimensionless back-propagation neural network (BPNN), that adequately predicts form errors, and thus compensates for them in mold design, using geometric features and process parameters as inputs. Our trials with simulation and industrial data indicate that the surrogate model predicts forming errors with adequate accuracy. Although perception errors (mold designers' decisions) and mold-fabrication errors make the industrial training data less reliable than simulation data, our preliminary training and testing still achieved reasonable consistency with the industrial data, suggesting that the surrogate models are directly implementable in the glass-manufacturing industry.

Citations: 0
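The compensation loop the abstract describes (predict the form error a design would produce, then adjust the mold) can be sketched generically. The surrogate below is a stand-in callable, and the 10% sag model is purely hypothetical:

```python
def compensate_mold(target_profile, surrogate, features, iterations=3):
    """Iteratively pre-compensate a mold profile: predict the deviation of the
    formed surface from the mold surface, then offset the design so the formed
    part lands on the target. `surrogate` is any callable
    (profile, features) -> predicted deviation per profile point."""
    mold = list(target_profile)
    for _ in range(iterations):
        deviation = surrogate(mold, features)
        mold = [t - d for t, d in zip(target_profile, deviation)]
    return mold

# Hypothetical surrogate: the formed surface overshoots the mold by 10%.
toy = lambda profile, feats: [0.1 * p for p in profile]
design = compensate_mold([1.0, 2.0], toy, {}, iterations=25)
```

With the toy surrogate the iteration converges to target / 1.1, i.e. the mold is made slightly shallower so the 10% overshoot lands the formed part exactly on target.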
Fine-tuned YOLO-based deep learning model for detecting malaria parasites and leukocytes in thick smear images: A Tanzanian case study
Machine Learning with Applications · Pub Date: 2025-07-12 · DOI: 10.1016/j.mlwa.2025.100687
Beston Lufyagila, Bonny Mgawe, Anael Sam

Abstract: Malaria remains a serious public health concern in developing countries, where accurate diagnosis is critical for effective treatment. Reliable and timely detection of malaria parasites and leukocytes is essential for precise parasitemia quantification. However, manual identification is labor-intensive, time-consuming, and prone to diagnostic errors, particularly in resource-limited settings. To address this challenge, this paper proposes a fine-tuned deep learning model for detecting malaria parasites and leukocytes in thick smear images. The model is based on the YOLOv10 and YOLOv11 object detection architectures, each independently trained, validated, and evaluated on a custom-annotated dataset collected from hospitals in Tanzania to ensure contextual relevance. Fivefold cross-validation, followed by statistical analysis, was used to identify the best-performing model. Results demonstrate that the optimized YOLOv11m model achieved the highest performance, with a statistically significant improvement (p < .001), attaining a mean mAP@50 of 86.2 % ± 0.3 % and a mean recall of 78.5 % ± 0.2 %. These findings highlight the potential of the proposed model to enhance diagnostic accuracy, support effective parasitemia quantification, and ultimately reduce malaria-related mortality in resource-limited settings.

Citations: 0
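The fold-level aggregation behind a figure like "86.2 % ± 0.3 %" is simply mean ± sample standard deviation over the cross-validation folds. A sketch with hypothetical per-fold scores (the paper does not publish fold-level values):

```python
from statistics import mean, stdev

def summarize_folds(scores):
    """Aggregate a per-fold metric (e.g. mAP@50) into mean and sample std."""
    return mean(scores), stdev(scores)

# Hypothetical mAP@50 from five folds, consistent with the reported summary.
map50 = [0.861, 0.864, 0.862, 0.860, 0.863]
m, s = summarize_folds(map50)
print(f"mAP@50: {100 * m:.1f} % \u00b1 {100 * s:.1f} %")
```

Reporting the fold spread alongside the mean is what makes the follow-up significance test between YOLOv10 and YOLOv11 variants meaningful.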
Predicting abnormality-guided multimodal linguistic semantics Arabic image captioning
Machine Learning with Applications · Pub Date: 2025-07-08 · DOI: 10.1016/j.mlwa.2025.100706
Nahla Aljojo, Hanin Ardah, Araek Tashkandi, Safa Habibullah

Abstract: Deep learning has significantly advanced image captioning, enabling models to generate accurate, descriptive sentences from visual content. While much progress has been made in English-language image captioning, Arabic remains underexplored despite its linguistic complexity and widespread usage. Existing Arabic image captioning systems suffer from limited datasets, insufficiently tuned models, and poor adaptation to Arabic morphology and semantics. These limitations hinder the development of accurate, coherent Arabic captions, especially in high-resource applications such as media indexing and content accessibility. This study aims to develop an effective Arabic image caption generator that addresses the shortage of research and tools in this domain. The goal is a robust model capable of generating semantically rich, syntactically accurate Arabic captions for visual inputs. The proposed system integrates a DenseNet201 convolutional neural network (CNN) for image feature extraction with a deep recurrent neural network using Long Short-Term Memory (RNN-LSTM) units for sequential caption generation. The model was trained and fine-tuned on a translated Arabic version of the Flickr8K dataset, consisting of over 8000 images, each paired with three Arabic captions. The fine-tuned DenseNet201 + LSTM model achieved BLEU-4 of 0.85, ROUGE-L of 0.90, METEOR of 0.72, CIDEr of 0.88, SPICE of 0.68, and a perplexity score of 1.1, surpassing baseline and prior models in Arabic image captioning tasks. This research provides a novel, end-to-end Arabic image captioning framework, addressing linguistic challenges through deep learning. It offers a benchmark model for future research and practical applications in Arabic-language image understanding.

Citations: 0
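BLEU-4, one of the metrics reported above, combines clipped n-gram precisions for n = 1..4 with a brevity penalty. The clipped-precision core can be sketched in a few lines (a simplification for illustration, not the paper's evaluation code):

```python
from collections import Counter

def modified_ngram_precision(candidate, reference, n):
    """Clipped n-gram precision, the building block of BLEU: each candidate
    n-gram's count is clipped by its count in the reference, so repeating a
    reference word cannot inflate the score."""
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    ref = Counter(tuple(reference[i:i + n])
                  for i in range(len(reference) - n + 1))
    if not cand:
        return 0.0
    clipped = sum(min(c, ref[g]) for g, c in Counter(cand).items())
    return clipped / len(cand)
```

For example, the degenerate candidate "the the the" against the reference "the cat" scores only 1/3 on unigrams, because the single reference "the" clips the repeated candidate counts.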
Feature engineering through two-level genetic algorithm
Machine Learning with Applications · Pub Date: 2025-07-04 · DOI: 10.1016/j.mlwa.2025.100696
Aditi Gulati, Armin Felahatpisheh, Camilo E. Valderrama

Abstract: Deep learning models are widely used for their high predictive performance but often lack interpretability. Traditional machine learning methods, such as logistic regression and ensemble models, offer greater interpretability but typically have lower predictive capacity. Feature engineering can enhance the performance of interpretable models by identifying features that optimize classification. However, existing feature engineering methods face limitations: (1) they usually do not apply non-linear transformations to features, ignoring the benefits of non-linear spaces; (2) they usually perform feature selection only once, failing to reduce uncertainty through repeated experiments; and (3) traditional methods like minimum redundancy maximum relevance (mRMR) require additional hyperparameters to define the number of selected features. To address these issues, this study proposes a hierarchical two-level feature engineering approach. In the first level, relevant features are identified using multiple bootstrapped training sets. For each training set, the features are expanded using seven non-linear transformation functions, and the minimum feature set maximizing ensemble model performance is selected using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). In the second level, candidate feature sets are aggregated using two strategies. We evaluated our approach on twelve datasets from various fields, achieving an average F1 score improvement of 1.5% while reducing the feature set size by 54.5%. Moreover, our approach outperformed or matched traditional filter-based methods. Our approach is available through a Python library (feature-gen), enabling others to benefit from this tool. This study highlights the utility of evolutionary algorithms to generate feature sets that enhance the performance of interpretable machine learning models.

Citations: 0
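NSGA-II selection rests on non-dominated sorting. For the two objectives here (maximize model performance, minimize feature-set size), the first Pareto front can be sketched as follows; the candidate sets are toy values, not the paper's results:

```python
def dominates(a, b):
    """a dominates b for objectives (maximize score, minimize n_features):
    a is at least as good on both and strictly better on at least one."""
    score_a, size_a = a
    score_b, size_b = b
    return (score_a >= score_b and size_a <= size_b) and \
           (score_a > score_b or size_a < size_b)

def pareto_front(candidates):
    """Return the non-dominated candidates (the first NSGA-II front)."""
    return [c for c in candidates
            if not any(dominates(other, c)
                       for other in candidates if other is not c)]

# (F1 score, number of features) for hypothetical candidate feature sets.
sets = [(0.80, 10), (0.82, 12), (0.82, 8), (0.75, 3), (0.70, 3)]
front = pareto_front(sets)
```

Here (0.82, 12) drops out because (0.82, 8) matches its score with fewer features, leaving a front that trades accuracy against compactness, which is exactly the "minimum feature set maximizing performance" trade-off the abstract describes.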
Explaining drivers of housing prices with nonlinear hedonic regressions
Machine Learning with Applications · Pub Date: 2025-07-04 · DOI: 10.1016/j.mlwa.2025.100707
Heng Wan, Pranab K. Roy Chowdhury, Jim Yoon, Parin Bhaduri, Vivek Srikrishnan, David Judi, Brent Daniel

Abstract: Housing markets play a critical role in shaping the spatial and demographic evolution of urban areas. Simulating housing price dynamics can enhance projections of future urban development outcomes. However, traditional hedonic regressions for housing prices, which neglect nonlinear interactions among explanatory variables, often exhibit limited predictive performance. While machine learning (ML) methods can provide a more flexible representation of the relationships between predictors, they are often regarded as "black boxes" due to their complexity and lack of transparency. Interpretable ML techniques provide a promising route by combining the flexibility of ML methods with approaches to analyze the relationships between inputs and outputs. In this study, we employ interpretable ML to analyze the patterns driving the housing market in Baltimore, Maryland, USA. We train an Artificial Neural Network (ANN) to predict Baltimore housing prices based on structural characteristics (e.g., home size, number of stories) and locational attributes (e.g., distance to the city center). We then conduct sensitivity and Partial Dependence Plot (PDP) analyses to interpret the fitted ANN model. We find that the ML model achieves higher predictive accuracy and explains 16 % more of housing price variance than a traditional linear regression model. The interpretable ML model also reveals more nuanced and realistic nonlinear relationships between housing sales price and predictors, as well as interactive effects underlying Baltimore home price dynamics. For instance, while the linear model indicates a steady housing price increase over time, our interpretable ML model detects a post-2008 decline, with smaller properties experiencing the sharpest drop.

Citations: 0
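A one-dimensional Partial Dependence Plot, as used in the interpretation step, averages the model's predictions while a single feature is swept over a grid. A minimal sketch with a hypothetical linear hedonic model standing in for the fitted ANN:

```python
def partial_dependence(model, X, feature_idx, grid):
    """1-D partial dependence: for each grid value, overwrite the chosen
    feature in every row and average the model's predictions."""
    pd_values = []
    for v in grid:
        preds = [model([x if j != feature_idx else v
                        for j, x in enumerate(row)]) for row in X]
        pd_values.append(sum(preds) / len(preds))
    return pd_values

# Hypothetical hedonic surface: price = 100 * size + 5 * age.
toy = lambda row: 100 * row[0] + 5 * row[1]
X = [[1.0, 10], [2.0, 20], [3.0, 30]]
pd_size = partial_dependence(toy, X, 0, [1.0, 2.0])
```

For a nonlinear ANN the same sweep traces out curves such as the post-2008 decline mentioned above, rather than the single slope a linear hedonic regression would report.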
Regularized regression outperforms trees for predicting cognitive function in the Health and Retirement Study
Machine Learning with Applications · Pub Date: 2025-07-03 · DOI: 10.1016/j.mlwa.2025.100694
Kyle Masato Ishikawa, Deborah Taira, Joseph Keaweʻaimoku Kaholokula, Matthew Uechi, James Davis, Eunjung Lim

Abstract:
Background: Generalized linear models have been favored in healthcare research due to their interpretability. In contrast, tree-based models, such as random forests or boosted trees, are often preferred in machine learning (ML) and commercial settings due to their strong predictive performance. For clinical applications, however, model interpretability remains essential for actionable results and patient understanding. This study used ML to detect cognitive decline for the purpose of timely screening and uncovering associations with psychosocial determinants. All models were interpreted to enhance transparency and understanding of their predictions.
Methods: Data from the 2018 to 2020 Health and Retirement Study were used to create three linear regression models and three tree-based models. Ten percent of the sample was withheld for estimating performance, and model tuning used five-fold cross-validation with two repeats. Survey frequency weights were applied during tuning, training, and final evaluation. Model performance was evaluated using RMSE and R², and interpretability was assessed via coefficients, variable importance, and decision trees.
Results: The elastic net model had the best performance (RMSE = 3.520, R² = 0.435), followed by standard linear regression, boosted trees, random forest, multivariate adaptive regression splines, and lastly, decision trees. Across all models, baseline cognitive function and frequency of computer use were the most influential predictors.
Conclusion: Elastic net regression outperformed tree-based models, suggesting that cognitive outcomes may be best modeled with additive linear relationships. Its ability to remove correlated and weak predictors contributed to its balance of interpretability and predictive performance for this particular dataset.

Citations: 0
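Elastic net combines the ridge (L2) and lasso (L1) penalties; the lasso part is what removes weak predictors outright. A compact proximal-gradient sketch (not the study's actual pipeline, which used survey weights and repeated cross-validation) shows the shrinkage on synthetic data where only the first predictor matters:

```python
import numpy as np

def elastic_net(X, y, l1=0.1, l2=0.1, lr=0.01, steps=5000):
    """Elastic net by proximal gradient descent: a squared-error plus ridge
    gradient step, then soft-thresholding for the lasso penalty."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ beta - y) / n + l2 * beta
        beta = beta - lr * grad
        # Soft-threshold: coefficients smaller than lr * l1 collapse to zero.
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * l1, 0.0)
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = 2.0 * X[:, 0]          # only feature 0 drives the outcome
beta = elastic_net(X, y)
```

The fitted coefficient on the informative feature is shrunk somewhat below its true value of 2, while the two irrelevant features are driven to (near) zero, illustrating the interpretability-performance balance noted in the conclusion.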
DCLMA: Deep correlation learning with multi-modal attention for visual-audio retrieval
Machine Learning with Applications · Pub Date: 2025-07-02 · DOI: 10.1016/j.mlwa.2025.100695
Jiwei Zhang, Hirotaka Hachiya

Abstract: The cross-modal retrieval task aims to retrieve the audio information from a database that best matches a visual query, and vice versa. One of the key challenges in this field is the inconsistency of audio and visual features, which increases the complexity of capturing cross-modal information and makes it difficult for machines to accurately understand visual content and retrieve suitable audio data. In this work, we propose deep correlation learning with multi-modal attention (DCLMA) for visual-audio retrieval, which selectively focuses on relevant information fragments through multi-modal attention and effectively integrates audio-visual information to enhance modal interaction and correlation representation learning. First, to achieve accurate retrieval of associated multi-modal data, we utilize multiple attention-composed models to interactively learn the complex correlation of audio and visual multi-scale features. Second, cross-modal attention is exploited to mine inter-modal correlations at the global level. Finally, we combine multi-scale and global-level representations to obtain modality-integrated representations, which enhance the representation capabilities of the inputs. Furthermore, our objective function supervises the model to learn discriminative and modality-invariant features between samples from different semantic categories in the shared latent space. Experimental results on two widely used cross-modal retrieval benchmark datasets demonstrate that our approach learns effective representations and significantly outperforms state-of-the-art cross-modal retrieval methods. Code is available at https://github.com/zhangjiwei-japan/cross-modal-visual-audio-retrieval.

Citations: 0
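Once both modalities are embedded in a shared latent space, retrieval typically reduces to nearest-neighbor ranking by cosine similarity. A minimal sketch with made-up 2-D embeddings (real DCLMA embeddings are learned and high-dimensional):

```python
import numpy as np

def retrieve(query_emb, gallery_embs):
    """Rank gallery items (e.g. audio clips) for one query (e.g. a video)
    by cosine similarity in the shared latent space; best match first."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    scores = g @ q                      # cosine similarity per gallery item
    return np.argsort(-scores)          # indices, descending similarity

visual = np.array([1.0, 0.0])                          # query embedding
audio_gallery = np.array([[0.0, 1.0],                  # orthogonal
                          [0.9, 0.1],                  # near-parallel
                          [-1.0, 0.0]])                # opposite
ranking = retrieve(visual, audio_gallery)
```

The modality-invariance objective in the abstract is what makes this simple geometric ranking meaningful: matching visual and audio samples are pulled close together, mismatched ones pushed apart.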
EmoFusion: An integrated machine learning model leveraging embeddings and lexicons to improve textual emotion classification
Machine Learning with Applications · Pub Date: 2025-06-30 · DOI: 10.1016/j.mlwa.2025.100693
Anjali Bhardwaj, Muhammad Abulaish

Abstract: Human emotions are complicated and intertwined with cognitive processes, influencing mental health, learning, and decision-making. The Web 2.0 era has seen a remarkable spike in the number of people sharing their experiences and emotions on online social media, mostly through posts and text messages. Due to the inherent challenges of textual data, discovering the intricate relationships between a text and its underlying emotions remains a prevalent topic in AI and NLP. This paper presents EmoFusion, an integrated machine learning model that improves emotion classification in textual data by integrating pre-trained word embeddings and emotion lexicons. Instead of relying on a single emotion lexicon, EmoFusion integrates multiple emotion lexicons, since a single lexicon might not fully cover all words or phrases linked with emotions. The proposed approach uses semantically related features to bridge the semantic gap between words and emotions, capturing a wide range of emotional nuances and resulting in better classification performance. The efficacy is further improved by employing emotion-specific pre-processing techniques. EmoFusion is evaluated on three benchmark datasets, namely Google AI GoEmotions, CBET, and TEC. The evaluation results demonstrate a significant improvement over six baselines and a state-of-the-art technique using different classifiers.

Citations: 0
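The multi-lexicon integration idea can be sketched as concatenating per-lexicon emotion scores into one feature vector, so that a word missing from one lexicon can still contribute through another. The two tiny lexicons below are invented for illustration:

```python
def lexicon_features(tokens, lexicons):
    """Score a token list against several emotion lexicons and concatenate
    the per-lexicon, per-emotion totals into a single feature vector."""
    feats = []
    for lex in lexicons:
        # Emotions covered by this lexicon, in a stable order.
        emotions = sorted({e for scores in lex.values() for e in scores})
        totals = {e: 0 for e in emotions}
        for t in tokens:
            for e, v in lex.get(t, {}).items():
                totals[e] += v
        feats.extend(totals[e] for e in emotions)
    return feats

# Hypothetical mini-lexicons mapping word -> {emotion: score}.
lex_a = {"happy": {"joy": 1}, "tears": {"sadness": 1}}
lex_b = {"tears": {"sadness": 1, "joy": 0}}
vec = lexicon_features(["happy", "tears"], [lex_a, lex_b])
```

In EmoFusion these lexicon-derived features are combined with pre-trained embeddings; the sketch only shows why multiple lexicons widen coverage.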
Forecasts and insights into Japan's fiscal future: Machine learning-based projections of city-level taxpayer numbers and total income from 2020 to 2100
Machine Learning with Applications · Pub Date: 2025-06-27 · DOI: 10.1016/j.mlwa.2025.100699
Chao Li, Alexander Ryota Keeley, Shunsuke Managi

Abstract: Japan's economic landscape is undergoing profound transformations due to shifting demographic trends, including population decline, aging, and urban-rural disparities. This study applies advanced machine learning techniques and stepwise updating methodologies to predict city-level taxpayer numbers and total income across 1896 Japanese cities from 2020 to 2100. The models achieve high accuracy, with validation R² exceeding 98 %, ensuring robust long-term predictions. The findings reveal a 14.52 % decline in total taxpayers by 2100, closely following population trends, while total income remains relatively stable, even increasing by 5.21 %. Average income, on the other hand, is projected to increase by 23.07 % by 2100. Despite an overall economic contraction, increasing labor participation helps sustain the tax base. However, spatial disparities persist: rural areas experience severe declines in taxpayers and income, while metropolitan centers maintain higher resilience but still face income stagnation. These results underscore the need for regionally tailored policy interventions to mitigate the fiscal impacts of demographic shifts. The study contributes to predictive economic modeling by integrating high-resolution spatial and demographic data with explainable machine learning, and it offers valuable insights for policymakers navigating Japan's long-term economic evolution.

Citations: 0
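The stepwise updating methodology, in which each year's projection becomes the input for the next year's prediction, can be sketched with any one-step model. The 0.2 % annual decline below is a made-up rate, chosen only to roughly echo the reported 14.52 % drop by 2100:

```python
def stepwise_project(initial, transition, years):
    """Stepwise updating: feed each year's projection back into the one-step
    model `transition` to roll the forecast forward year by year."""
    state, path = initial, [initial]
    for _ in range(years):
        state = transition(state)
        path.append(state)
    return path

# Hypothetical one-step model: taxpayer count shrinks 0.2% per year.
decline = lambda taxpayers: taxpayers * 0.998
path = stepwise_project(1_000_000, decline, 80)   # 2020 -> 2100
```

In the study the one-step model is a trained ML regressor over spatial and demographic features rather than a fixed rate, but the recursive rollout structure is the same.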