{"title":"Structure-Enhanced Graph Learning Approach for Traffic Flow and Density Forecasting","authors":"Phu Pham","doi":"10.1002/for.70012","DOIUrl":"https://doi.org/10.1002/for.70012","url":null,"abstract":"<div>\u0000 \u0000 <p>The rapid expansion of Internet infrastructure and artificial intelligence (AI) has significantly advanced intelligent transportation systems (ITS), which are essential for automating traffic monitoring and management in smart cities. Among ITS applications, traffic flow and density prediction are important problems for optimizing transportation planning and reducing congestion. In recent years, deep learning models, particularly recurrent neural networks (RNNs) and graph neural networks (GNNs), have been widely utilized for traffic forecasting. These models effectively capture temporal and spatial dependencies in traffic data, enabling more accurate forecasting. Despite these advances, recently proposed RNN-GNN-based forecasting models still struggle to preserve rich structural and topological features of traffic networks. The complex spatial dependencies inherent in road connections and vehicle movement patterns are often underrepresented, thereby limiting forecasting accuracy. To address these limitations, in this paper, we propose SGL4TF, a structure-enhanced graph learning model that integrates graph convolutional networks (GCN) with a sequence-to-sequence (seq2seq) framework. This architecture jointly models spatial relationships and long-term temporal dependencies, leading to more precise traffic predictions. Our approach introduces a deeper graph-structural learning mechanism using nonlinear transformations within GNN layers, which improves structural feature extraction while mitigating over-smoothing issues. 
The seq2seq component further refines temporal correlations, enabling long-term traffic state predictions. Extensive experiments on real-world datasets demonstrate our proposed SGL4TF model's superior performance over state-of-the-art traffic forecasting techniques.</p>\u0000 </div>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"44 7","pages":"2298-2311"},"PeriodicalIF":2.7,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145197350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring the Default Risk of Small Business Loans: Improved Credit Risk Prediction Using Deep Learning","authors":"Yiannis Dendramis, Elias Tzavalis, Aikaterini Cheimarioti","doi":"10.1002/for.70005","DOIUrl":"https://doi.org/10.1002/for.70005","url":null,"abstract":"<p>This paper proposes a multilayer artificial neural network (ANN) method to predict the probability of default (PD) within a survival analysis framework. The ANN method captures hidden interconnections among covariates that influence PD, potentially leading to improved predictive performance compared to both logit and skewed logit models. To assess the impact of covariates on PD, we introduce a generalized covariate method that accounts for compositional effects among covariates and employ stochastic dominance analysis to rank the importance of covariate effects across both the ANN and logit model approaches. Applying the ANN method to a large dataset of small business loans reveals prediction gains over the logit models. These improvements are evident for short-term prediction horizons and in reducing type I misclassification errors in the identification of loan defaults, an aspect crucial for effective credit risk management. Regarding the generalized covariate effects, our results suggest that behavior-related covariates exert the strongest influence on PD. 
Moreover, we demonstrate that the ANN structure stochastically dominates the logit models for the majority of the covariates examined.</p>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"44 7","pages":"2277-2297"},"PeriodicalIF":2.7,"publicationDate":"2025-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/for.70005","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145196677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Futures Open Interest and Speculative Pressure Dynamics via Bayesian Models of Long-Memory Count Processes","authors":"Hongxuan Yan, Gareth W. Peters, Guillaume Bagnarosa, Jennifer Chan","doi":"10.1002/for.70001","DOIUrl":"https://doi.org/10.1002/for.70001","url":null,"abstract":"<div>\u0000 \u0000 <p>In this work, we develop time series regression models for long-memory count processes based on the generalized linear Gegenbauer autoregressive moving average (GLGARMA) framework. We present a comprehensive Bayesian formulation that addresses both in-sample and out-of-sample forecasting within a broad class of generalized count time series regression models. The GLGARMA framework supports various count distributions, including Poisson, negative binomial, generalized Poisson, and double Poisson distributions, offering the flexibility to capture key empirical characteristics such as underdispersion, equidispersion, and overdispersion in the data. We connect the counting process to a time series regression framework through a link function, which is associated with a stochastic linear predictor incorporating the family of long-memory GARMA models. This linear predictor is central to the model's formulation, requiring careful specification of both the GLGARMA Bayesian likelihood and the resulting posterior distribution. To model the stochastic error terms driving the linear predictor, we explore two approaches: parameter-driven and observation-driven frameworks. For model estimation, we adopt a Bayesian approach under both frameworks, leveraging advanced sampling techniques, specifically the Riemann manifold Markov chain Monte Carlo (MCMC) methods implemented via R-Stan. To demonstrate the practical utility of our models, we conduct an empirical study of open interest dynamics in US Treasury Bond Futures. Our Bayesian models are used to forecast speculative pressure, a crucial concept for understanding market behavior and agent actions. 
The analysis includes 136 distinct time series from the US Commodity Futures Trading Commission (CFTC), encompassing futures-only and futures-and-options data across four US government-issued fixed-income securities. Our findings indicate that the proposed Bayesian GLGARMA models outperform existing state-of-the-art models in forecasting open interest and speculative pressure. These improvements in forecast accuracy directly enhance portfolio performance, underscoring the practical value of our approach for bond futures portfolio construction. This work advances both the methodology for modeling long-memory count processes and its application in financial econometrics, particularly in improving the forecasting of speculative pressure and its impact on investment strategies.</p>\u0000 </div>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"44 7","pages":"2252-2276"},"PeriodicalIF":2.7,"publicationDate":"2025-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145196484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IWSL Model: A Novel Credit Scoring Model With Interpretable Features for Consumer Credit Scenarios","authors":"Runchi Zhang, Iris Li, Zhiyuan Ding, Tianhao Zhu","doi":"10.1002/for.70004","DOIUrl":"https://doi.org/10.1002/for.70004","url":null,"abstract":"<div>\u0000 \u0000 <p>Current studies have designed many credit scoring models with high performance, but these models are often weak in interpretability, exhibiting obvious “black box” characteristics. This makes it difficult for them to meet regulators' requirements for model interpretability. This paper presents a novel credit scoring model, the IWSL model, which is data-feature driven with interpretable features. The IWSL model first calculates the representative eigenvectors of default and nondefault samples according to their spatial distribution characteristics, as well as the eigenvector located midway between these two eigenvectors in the sample space. It then calculates the weighted distance between each sample and each eigenvector to divide the training dataset into three subsets and constructs sublogistic models separately. In the absence of prior information about the optimal weight setting of each attribute, a swarm intelligence algorithm is applied to back-optimize the weights according to the model's generalization ability in the validation stage. The empirical results show that the IWSL model outperforms 12 leading credit scoring models on three public consumer credit scoring datasets with statistical significance. 
Model component validity testing confirms the effectiveness of the IWSL model's core settings, while sensitivity analysis confirms the robustness of its results.</p>\u0000 </div>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"44 7","pages":"2230-2251"},"PeriodicalIF":2.7,"publicationDate":"2025-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145197105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data Quality Improvement for Financial Distress Prediction: Feature Selection, Data Re-Sampling, and Their Combinations in Different Orders","authors":"Chih-Fong Tsai, Wei-Chao Lin, Yi-Hsien Chen","doi":"10.1002/for.70002","DOIUrl":"https://doi.org/10.1002/for.70002","url":null,"abstract":"<div>\u0000 \u0000 <p>In financial distress prediction (FDP), ensuring data quality is essential for developing effective prediction models. Related studies often apply feature selection to filter out unrepresentative features from a set of financial ratios, or data re-sampling to re-balance class-imbalanced FDP training sets. Although these two types of data pre-processing methods have demonstrated their effectiveness, they have seldom been applied together to develop FDP models. Moreover, the performances of various feature selection algorithms, which can be divided into filter, wrapper, and embedded methods, and data re-sampling algorithms, which can be divided into under-sampling, over-sampling, and hybrid sampling methods, have not been fully investigated in FDP. Therefore, in this study, several feature selection and data re-sampling methods, employed alone and in combination in different orders, are compared. The experimental results based on nine FDP datasets show that executing data re-sampling alone always outperforms executing feature selection alone for developing FDP models, with hybrid sampling being the better choice. In most cases, better prediction performance can be obtained by performing feature selection first and data re-sampling second. The best combined algorithms are based on the decision tree method for feature selection and Synthetic Minority Over-sampling Technique-Edited Nearest Neighbors (SMOTE-ENN) for hybrid sampling. This combination allows the random forest classifier to produce the highest rate of prediction accuracy. 
On the other hand, for the Type I error, where crisis cases are misclassified into the non-crisis class, the lowest error rate is produced by executing under-sampling alone using the ClusterCentroids algorithm combined with the random forest classifier.</p>\u0000 </div>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"44 7","pages":"2205-2229"},"PeriodicalIF":2.7,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145196906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Revisiting the Volatility Dynamics of REITs Amid Uncertainty and Investor Sentiment: A Predictive Approach in GARCH-MIDAS","authors":"Xu Xiangxin, Kazeem O. Isah, Yusuf Yakub, Damilola Aboluwodi","doi":"10.1002/for.70000","DOIUrl":"https://doi.org/10.1002/for.70000","url":null,"abstract":"<div>\u0000 \u0000 <p>We analyze the impact of investor sentiment on forecasting daily return volatility across various international Real Estate Investment Trust (REIT) indices. Notably, we propose that economic policy uncertainty plays a significant role in shaping investor sentiment and enhances its predictive power regarding REIT volatility. To address the mixed-frequency nature of the involved variables, we utilize the GARCH-MIDAS framework, which effectively mitigates the issues of information loss associated with data aggregation, as well as the biases resulting from data disaggregation. Our findings provide compelling evidence of improved forecasting in models that incorporate investor sentiment, demonstrating significant in-sample predictability. This suggests that heightened expressions of sentiment in investor behavior tend to amplify risks linked to international REITs. Further analysis indicates that economic policy uncertainty may enhance the forecasting capacity of investor sentiment for out-of-sample REIT volatility predictions. 
Consequently, it is crucial to monitor global economic policy uncertainty and recognize its potential effects on investor sentiment for optimal investment decision-making.</p>\u0000 </div>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"44 7","pages":"2193-2204"},"PeriodicalIF":2.7,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145196843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Forecasting Energy Efficiency in Manufacturing: Impact of Technological Progress in Productive Service and Commodity Trades","authors":"Zixiang Wei, Yongchao Zeng, Yingying Shi, Ioannis Kyriakou, Muhammad Shahbaz","doi":"10.1002/for.3289","DOIUrl":"https://doi.org/10.1002/for.3289","url":null,"abstract":"<p>This paper employs the theory of biased technological progress to assess the effects of technological advancements across diverse trades, with a particular emphasis on predicting energy efficiency. A translog cost function model is developed, integrating five critical types of energy inputs. The empirical analysis is conducted using a comprehensive panel dataset comprising 26 major sub-sectors within China's manufacturing industry. The results indicate that diesel exhibits the highest own-price elasticity, whereas electricity exhibits the lowest. Further analysis highlights the factor substitution relationships and the bias of technological progress through productive service trade and commodity trade channels, providing insights into shifts in energy consumption patterns. Changes in energy efficiency are decomposed into factor substitution effects and technological progress effects via trade channels. The findings reveal the presence of Morishima substitution among the three factors. Specifically, productive service trade and commodity imports show a bias towards combining energy with labor and energy with capital, while commodity exports are characterized by labor- and capital-biased technological progress. The contributions of factor substitution and the three trade channels have divergent impacts on energy efficiency improvements across the overall manufacturing sector, as well as within high-energy-consuming and high-tech sub-sectors. 
Overall, our study enhances the understanding of energy efficiency trends and technological progress in trade-related manufacturing activities, offering a robust foundation for future forecasting.</p>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"44 7","pages":"2170-2192"},"PeriodicalIF":2.7,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/for.3289","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145196643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GARCHX-NoVaS: A Bootstrap-Based Approach of Forecasting for GARCHX Models","authors":"Kejin Wu, Sayar Karmakar, Rangan Gupta","doi":"10.1002/for.3286","DOIUrl":"https://doi.org/10.1002/for.3286","url":null,"abstract":"<div>\u0000 \u0000 <p>In this work, we explore the forecasting ability of a recently proposed normalizing and variance-stabilizing (NoVaS) transformation with the possible inclusion of exogenous variables in the GARCH volatility specification. The NoVaS prediction method, which is inspired by a model-free prediction principle, has generally shown more accurate, stable, and robust (to misspecification) performance than classical GARCH-type methods. We derive the NoVaS transformation needed to include exogenous covariates and then construct the corresponding prediction procedure for multiple exogenous covariates. We address both point and interval forecasts using NoVaS-type methods. Extensive simulation studies bolster our claim that the NoVaS method outperforms traditional ones, especially for long-term time-aggregated predictions. We also exhibit how our method can utilize geopolitical risks in forecasting volatility in national stock market indices. 
From an applied point-of-view for practitioners and policymakers, our methodology provides a distribution-free approach to forecast volatility and sheds light on how to leverage extra knowledge such as fundamentals- and sentiments-based information to improve the prediction accuracy of market volatility.</p>\u0000 </div>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"44 7","pages":"2151-2169"},"PeriodicalIF":2.7,"publicationDate":"2025-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145196494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Turning Time Into Shapes: A Point-Cloud Framework With Chaotic Signatures for Time Series","authors":"Pradeep Singh, Balasubramanian Raman","doi":"10.1002/for.3287","DOIUrl":"https://doi.org/10.1002/for.3287","url":null,"abstract":"<div>\u0000 \u0000 <p>We propose a novel methodology for transforming financial time series into a geometric format via a sequence of point clouds, enabling richer modeling of nonstationary behavior. In this framework, volatility serves as a spatial directive to guide how overlapping temporal windows become connected in an adjacency tensor, capturing both local volatility relationships and temporal proximity. Spatial expansion then interpolates points of different connection strengths while gap filling ensures a regularized geometric structure. A subsequent relevance-weighted attention mechanism targets significant regions of each transformed window. To further illuminate underlying dynamics, we integrate the largest Lyapunov exponents directly into each point cloud, embedding a chaotic signature that quantifies local predictability. Unlike canonical CNN, RNN, or Transformer pipelines, this geometry-based representation makes it easier to detect abrupt changes, volatility clusters, and multiscale dependencies via explicit geometric and topological cues. Finally, an architecture incorporating graph-inspired components—along with point-cloud encoders and multihead attention—learns both short-term and long-term dynamics from the spatially enriched time series. 
The method's ability to harmonize volatility-driven structure, chaotic features, and temporal attention improves predictive performance in empirical testing on stock and cryptocurrency data, underscoring its potential for versatile financial analysis and risk-based applications.</p>\u0000 </div>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"44 7","pages":"2089-2105"},"PeriodicalIF":2.7,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145196363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Two-Stage Interpretable Model to Explain Classifier in Credit Risk Prediction","authors":"Lu Wang, Zecheng Yu, Jingling Ma, Xiaofang Chen, Chong Wu","doi":"10.1002/for.3288","DOIUrl":"https://doi.org/10.1002/for.3288","url":null,"abstract":"<div>\u0000 \u0000 <p>In the financial sector, credit risk represents a critical issue, and accurate prediction is essential for mitigating financial risk and ensuring economic stability. Although artificial intelligence methods can achieve satisfactory accuracy, explaining their predictive results poses a significant challenge, thereby prompting research on interpretability. Current research primarily focuses on individual interpretability methods and seldom investigates the combined application of multiple approaches. To address the limitations of existing research, this study proposes a two-stage interpretability model that integrates SHAP and counterfactual explanations. In the first stage, SHAP is employed to analyze feature importance, categorizing features into subsets according to their positive or negative impact on predicted outcomes. In the second stage, a genetic algorithm generates counterfactual explanations by considering feature importance and applying perturbations in various directions based on the predefined subsets, thereby accurately identifying counterfactual samples that can modify predicted outcomes. We conducted experiments on the German credit dataset, the HMEQ dataset, and the Taiwan Default of Credit Card Clients dataset, using SVM, XGB, MLP, and LSTM as base classifiers. The experimental results indicate that the frequency of feature changes in the generated counterfactual explanations closely aligns with the feature importance derived from the SHAP method. Under the evaluation metrics of effectiveness and sparsity, the performance demonstrates improvements over both basic counterfactual explanation methods and prototype-based counterfactuals. 
Furthermore, this study offers recommendations based on features derived from SHAP analysis results and counterfactual explanations to reduce the risk of classification as a default.</p>\u0000 </div>","PeriodicalId":47835,"journal":{"name":"Journal of Forecasting","volume":"44 7","pages":"2132-2150"},"PeriodicalIF":2.7,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145196369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}