{"title":"Revised solution technique for a bi-level location-inventory-routing problem under uncertainty of demand and perishability of products","authors":"Fezzeh Partovi, M. Seifbarghy, M. Esmaeili","doi":"10.2139/ssrn.4148557","DOIUrl":"https://doi.org/10.2139/ssrn.4148557","url":null,"abstract":"Bi-level programming is an efficient tool to tackle decentralized decision-making processes in supply chains with upper level (i.e., leader) and lower level (i.e., follower). The leader makes the first decision while the follower makes the second decision. In this research, a bi-level programming formulation for the problem of location-inventory-routing in a two-echelon supply chain, including a number of central warehouses in the first echelon and retailers in the second echelon with perishable products under uncertain demand, is proposed. The total operational costs at both levels are minimized considering capacity constraints. Due to the uncertain nature of the problem, a scenario-based programming is utilized. The economic condition or unforeseen events such as COVID-19 or Russia-Ukraine war can be good examples for uncertainty sources in today’s world. The model determines the optimal locations of warehouses, the routes between warehouses and retailers, number of received shipments and the amount of inventory held at each retailer. A revised solution method is designed by using multi-choice goal programming for solving the problem. The given revised method attempts to minimize the deviations of each decision maker’s solution from its ideal value assuming that the upper level is satisfied higher than the lower level. Base on some numerical analysis, the proposed solution technique is more sensitive to the upper bounds of the goals rather than the lower bounds.","PeriodicalId":8218,"journal":{"name":"Appl. Comput. Intell. Soft Comput.","volume":"143 1","pages":"109899"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76118998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A class of general type-2 fuzzy controller based on adaptive alpha-plane for nonlinear systems","authors":"Ahmad M. El-Nagar, M. El-Bardini, A. A. Khater","doi":"10.2139/ssrn.4129890","DOIUrl":"https://doi.org/10.2139/ssrn.4129890","url":null,"abstract":"","PeriodicalId":8218,"journal":{"name":"Appl. Comput. Intell. Soft Comput.","volume":"45 1 1","pages":"109938"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75696416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Decision Technique for the Crowd Estimation Method Using Thermal Videos","authors":"N. Negied, A. El-Sayed, Asmaa S. Hassaan","doi":"10.1155/2022/7782879","DOIUrl":"https://doi.org/10.1155/2022/7782879","url":null,"abstract":"Counting and detecting the pedestrians is an important and critical aspect for several applications such as estimation of crowd density, organization of events, individual’s flow control, and surveillance systems to prevent the difficulties and overcrowding in a huge gathering of pedestrians such as the Hajj occasion, which is the annual event for Muslims with the growing number of pilgrims every year. This paper is based on applying some enhancements to two different techniques for automatically estimating the crowd density. These two approaches are based on individual motion and the body’s thermal features. Theessential characteristic of crowd counting techniques is that they do not require a previously stored and trained data; instead they use a live video stream as input. Also, it does not require any intervention from individuals. So, this feature makes it easy to automatically estimate the crowd density. What makes this work special than other approaches in literature is the use of thermal videos, and not just relying on a way or combining several ways to get the crowd size but also analyzing the results to decide which approach is better considering different cases of scenes. This work aims at estimating the crowd density using two methods and decide which method is better and more accurate depending on the case of the scene; i.e., this work measures the crowd size from videos using the heat signature and motion analysis of the human body, plus using the results analysis of both approaches to decide which approach is better. The better approach can vary from video-to-video according to many factors such as the motion state of humans in this video, the occlusion amount, etc. Both approaches are discussed in this paper. The first one is based on capturing the thermal features of an individual and the second one is based on detecting the features of an individual motion. The result of these approaches has been discussed, and different experiments were conducted to prove and identify the most accurate approach. The experimental results prove the advancement of the approach proposed in this paper over the literature as indicated in the result section.","PeriodicalId":8218,"journal":{"name":"Appl. Comput. Intell. Soft Comput.","volume":"71 1","pages":"7782879:1-7782879:16"},"PeriodicalIF":0.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89463987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Self-adaptive Neuroevolution Approach to Constructing Deep Neural Network Architectures Across Different Types","authors":"Zhenhao Shuai, Hongbo Liu, Zhaolin Wan, Wei-jie Yu, Jinchao Zhang","doi":"10.48550/arXiv.2211.14753","DOIUrl":"https://doi.org/10.48550/arXiv.2211.14753","url":null,"abstract":"Neuroevolution has greatly promoted Deep Neural Network (DNN) architecture design and its applications, while there is a lack of methods available across different DNN types concerning both their scale and performance. In this study, we propose a self-adaptive neuroevolution (SANE) approach to automatically construct various lightweight DNN architectures for different tasks. One of the key settings in SANE is the search space defined by cells and organs self-adapted to different DNN types. Based on this search space, a constructive evolution strategy with uniform evolution settings and operations is designed to grow DNN architectures gradually. SANE is able to self-adaptively adjust evolution exploration and exploitation to improve search efficiency. Moreover, a speciation scheme is developed to protect evolution from early convergence by restricting selection competition within species. To evaluate SANE, we carry out neuroevolution experiments to generate different DNN architectures including convolutional neural network, generative adversarial network and long short-term memory. The results illustrate that the obtained DNN architectures could have smaller scale with similar performance compared to existing DNN architectures. Our proposed SANE provides an efficient approach to self-adaptively search DNN architectures across different types.","PeriodicalId":8218,"journal":{"name":"Appl. Comput. Intell. Soft Comput.","volume":"7 1","pages":"110127"},"PeriodicalIF":0.0,"publicationDate":"2022-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78061752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Performance of a New Heuristic Approach for Tracking Maximum Power of PV Systems","authors":"Aripriharta, Kusmayanto Hadi Wibowo, I. Fadlika, Muladi, N. Mufti, M. Diantoro, G. Horng","doi":"10.1155/2022/1996410","DOIUrl":"https://doi.org/10.1155/2022/1996410","url":null,"abstract":"This paper presents a new heuristic method for maximum power point tracking (MPPT) in PV systems under normal and shadowing situations. The proposed method is a modification of the original queen honey bee migration (QHBM) to shorten the computation time for the maximum power point (MPP) in PV systems. QHBM initially uses random target locations to search for targets, in this case, MPP. So, we adjusted it to be able to do MPP point quests quickly. We accelerated the mQHBM learning process from the original randomly. We had fairly compared the mQHBM with several heuristics. Simulations were carried out with 2 scenarios to test the mQHBM. Based on the simulation results, it was found that mQHBM was able to exceed the capabilities of other methods such as original QHBM, particle swarm optimization (PSO) and perturb and observe (P&O), ANN, gray wolf (GWO), and cuckoo search (CS) in terms of MPPT speed and overshoot. However, the accuracy of mQHBM cannot exceed QHBM, ANN, and GWO. But still, mQHBM is better than PSO and P&O by about 15% and 18%, respectively. This experiment resulted in a gap of about 2% faster in speed, 0.34 seconds better in convergence time, and 0.2 fewer accuracies.","PeriodicalId":8218,"journal":{"name":"Appl. Comput. Intell. Soft Comput.","volume":"20 1","pages":"1996410:1-1996410:13"},"PeriodicalIF":0.0,"publicationDate":"2022-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91135724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Handwritten Geez Digit Recognition Using Deep Learning","authors":"Mukerem Ali Nur, Mesfin Abebe, Rajesh Sharma Rajendran","doi":"10.1155/2022/8515810","DOIUrl":"https://doi.org/10.1155/2022/8515810","url":null,"abstract":"Amharic language is the second most spoken language in the Semitic family after Arabic. In Ethiopia and neighboring countries more than 100 million people speak the Amharic language. There are many historical documents that are written using the Geez script. Digitizing historical handwritten documents and recognizing handwritten characters is essential to preserving valuable documents. Handwritten digit recognition is one of the tasks of digitizing handwritten documents from different sources. Currently, handwritten Geez digit recognition researches are very few, and there is no available organized dataset for the public researchers. Convolutional neural network (CNN) is preferable for pattern recognition like in handwritten document recognition by extracting a feature from different styles of writing. In this work, the proposed model is to recognize Geez digits using CNN. Deep neural networks, which have recently shown exceptional performance in numerous pattern recognition and machine learning applications, are used to recognize handwritten Geez digits, but this has not been attempted for Ethiopic scripts. Our dataset, which contains 51,952 images of handwritten Geez digits collected from 524 individuals, is used to train and evaluate the CNN model. The application of the CNN improves the performance of several machine-learning classification methods significantly. Our proposed CNN model has an accuracy of 96.21% and a loss of 0.2013. In comparison to earlier research works on Geez handwritten digit recognition, the study was able to attain higher recognition accuracy using the developed CNN model.","PeriodicalId":8218,"journal":{"name":"Appl. Comput. Intell. Soft Comput.","volume":"502 1","pages":"8515810:1-8515810:12"},"PeriodicalIF":0.0,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76331329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Scenario Subset Selection for Worst-Case Optimization and its Application to Well Placement Optimization","authors":"Atsuhiro Miyagi, Kazuto Fukuchi, J. Sakuma, Youhei Akimoto","doi":"10.48550/arXiv.2211.16574","DOIUrl":"https://doi.org/10.48550/arXiv.2211.16574","url":null,"abstract":"In this study, we consider simulation-based worst-case optimization problems with continuous design variables and a finite scenario set. To reduce the number of simulations required and increase the number of restarts for better local optimum solutions, we propose a new approach referred to as adaptive scenario subset selection (AS3). The proposed approach subsamples a scenario subset as a support to construct the worst-case function in a given neighborhood, and we introduce such a scenario subset. Moreover, we develop a new optimization algorithm by combining AS3 and the covariance matrix adaptation evolution strategy (CMA-ES), denoted AS3-CMA-ES. At each algorithmic iteration, a subset of support scenarios is selected, and CMA-ES attempts to optimize the worst-case objective computed only through a subset of the scenarios. The proposed algorithm reduces the number of simulations required by executing simulations on only a scenario subset, rather than on all scenarios. In numerical experiments, we verified that AS3-CMA-ES is more efficient in terms of the number of simulations than the brute-force approach and a surrogate-assisted approach lq-CMA-ES when the ratio of the number of support scenarios to the total number of scenarios is relatively small. In addition, the usefulness of AS3-CMA-ES was evaluated for well placement optimization for carbon dioxide capture and storage (CCS). In comparison with the brute-force approach and lq-CMA-ES, AS3-CMA-ES was able to find better solutions because of more frequent restarts.","PeriodicalId":8218,"journal":{"name":"Appl. Comput. Intell. Soft Comput.","volume":"56 1","pages":"109842"},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84758151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}