{"title":"Computational concept for human food choice and eating behaviour","authors":"V. Istratov","doi":"10.37791/2687-0649-2023-18-3-115-131","DOIUrl":"https://doi.org/10.37791/2687-0649-2023-18-3-115-131","url":null,"abstract":"An inadequate diet can cause a number of illnesses with some of them posing major threats for humanity. Poor diet largely originates from behavioral and social issues rather than environmental factors. With simulation being a grand tool to analyze and address behavior issues, relatively few studies focus on computational modeling of nutrition at behavioural level. Furthermore, we have overviewed several popular approaches to computational modeling and simulating dietary decision-making and found no clear favorite. Further still, modelers rarely pay attention to one of the key behavioural factors – motivation. In the vast majority of models, either motivation is assumed to be exogenously given and, hence, is left out of the model, or motivation is not taken into account in any form, even though ignoring incentives significantly reduces adaptive capabilities of any human-like goal-directed model entity. We aimed to outline a modelling approach that would fit into the food choice topic and would improve on the available models. This implies offering an intelligible algorithm that would be easily applied to statistical data yet offering a depth of analysis despite its seeming simplicity. Thus, we present our view of the food choice simulation problem which employs eating incentives and an original choice mechanism that is different both from traditional maximizing approaches common to economics and artificial intelligence research and from the dominant psychological computational approaches. We outlined the programming conceptual algorithm that involves sequential incentive (which can result from the biological necessities, social, intellectual or spiritual needs alike) selection, incentive-foodstuff coupling (a relation can be either fixed or dynamic) and elimination of undesirable food options based on incentives ranking (qualitative ranking seems to be preferable over quantitative ranking, forasmuch as it resembles the way of thinking of a regular person more closely) supplemented by pseudocode segments. The algorithm suits agent-based simulation paradigm, yet it is not tied to it and can be fitted with other simulation approaches as well. The algorithm is supposed to be implemented in Java. Since the offered algorithm is conceptual it requires an implementation to bring about robust conclusions which is our goal to reach next.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":null,"pages":null},"PeriodicalIF":0.3,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84812588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Algorithms for composing efficient business models","authors":"V. Khalin, A. Yurkov, S. Kalayda, G. Chernova","doi":"10.37791/2687-0649-2023-18-3-5-21","DOIUrl":"https://doi.org/10.37791/2687-0649-2023-18-3-5-21","url":null,"abstract":"Solving the problems of effective business management is associated with a variety of current goals facing the same and, by implication, requires the construction of appropriate models of efficient business. The article presents two problems of doing business which, apart from their common target being an improvement of business efficiency, have different current goals. The creation or development of any business involves the construction of a specific business plan for it, including a list of those areas of business development, the implementation of which will increase its efficiency. The first problem considered in the article is related to the phased implementation of all areas of efficiency improvement in order to ultimately obtain the greatest efficiency of their realization. The second one solves the problem of increasing efficiency by partially implementing efficiency improvement directions from the initial list, taking into account certain limitations, for example, in conditions of limited company resources. For the construction of models which would meet the problems set, an efficiency criterion is substantiated and proposed in the article, and Algorithms 1 and 2 are developed which made it possible to build the efficient business models which take into account the difference in its current goals. The authors have developed a multi-stage Algorithm 1 for the generation of individual sets of areas for improvement of efficiency to be used to solve the tasks at hand. Algorithm 2 implemented at each stage of Algorithm 1 has been developed by the authors by using the Pareto optimality method but supplemented by taking into account the features and objectives of the current tasks set for the business. The use of such algorithms has made it possible to build efficient business models enabling not only to obtain an economic effect inherent to each efficiency improvement area, but also to ensure additional growth thereof driven by the properties of the developed algorithms.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":null,"pages":null},"PeriodicalIF":0.3,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86461085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated identification of genuine cut diamonds (brilliants) by luminescence spectrum","authors":"S. Zienko, V. Zhbanova, I. Yakimenko","doi":"10.37791/2687-0649-2023-18-3-61-71","DOIUrl":"https://doi.org/10.37791/2687-0649-2023-18-3-61-71","url":null,"abstract":"The problem of recognition of natural and synthetic diamonds (diamonds) is relevant today. A technique for computer processing of the luminescence spectra of diamonds using the Origin mathematical package is proposed. The processing technique is presented on specific examples. The spectra were measured using a RAOS-3 spectrometer-fluorimeter. A laser with a wavelength of 532 nm was used to excite diamond luminescence. A method is proposed for identifying diamonds of unknown origin by the number of bands of elementary components in the luminescence spectrum when decomposed into Gaussian curves. Luminescence spectra in faceted diamonds (brilliants) are widely used to study their physical properties. Synthetic faceted diamonds are significantly inferior to natural ones in terms of luminescence intensity. The light signal of photoluminescence in the former, in some cases, is comparable with the noise level of the measuring device. As a result, the instantaneous value of the useful signal can take both positive and negative values over the entire wavelength range of the spectrum. Therefore, the detection of a useful signal against the background of interference is of great importance. Along with this, to identify a diamond, it is necessary to solve the problem of decomposing the spectrum into elementary components in the form of Gaussian curves. Since it has been established that the spectra of natural diamonds consist of two peaks, while synthetic diamonds contain from three to eight peaks, which indicates a loose structure of the diamond crystal lattice. The efficiency of solving a number of these problems can be significantly improved by using software applications with special functionality. To demonstrate the features and advantages of the automated technique, the Origin mathematical package was taken, which, in particular, makes it possible to improve the quality of the results of processing a low luminescence spectrum and to find the number of peaks for Gaussian curves with sufficient accuracy.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":null,"pages":null},"PeriodicalIF":0.3,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89670370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User interface modeling for convolutional neural network for complex character recognition","authors":"A. E. Trubin, Filipp A. Mastyaev, A. V. Batishchev, Aleksey I. Zaytsev, S. A. Aleksakhina","doi":"10.37791/2687-0649-2023-18-3-105-114","DOIUrl":"https://doi.org/10.37791/2687-0649-2023-18-3-105-114","url":null,"abstract":"In this article, we design a user interface for a prototype desktop application using the capabilities of the author’s neural network for recognizing texts in Japanese written by one of the two Japanese alphabets – katakana or hiragana. During the design, the UML notation, a Use-Case Diagram, was used to build scenarios for using the program, and the BPMN notation was used to describe a program’s main algorithm. In the beginning of this article short versions of previous two articles were also given – the basics of proposed method for preprocessing of machine learning data and the main parameters of the proposed convolutional neural network model including its efficiency against reference model EfficientNetB0. In the work, the principles and the tool base for designing the interface of the software solution were defined, the scenarios for using the program, the algorithms of the program were designed, a prototype of the user interface was created.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":null,"pages":null},"PeriodicalIF":0.3,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73990861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The use of coevolutionary algorithms for optimizing the operating regimes of the roasting conveyor machine","authors":"V. Borisov, O. Bulygina, Elizaveta K. Vereikina","doi":"10.37791/2687-0649-2023-18-3-52-60","DOIUrl":"https://doi.org/10.37791/2687-0649-2023-18-3-52-60","url":null,"abstract":"In modern conditions of constant growth in prices for fuel and energy resources, the problem of increasing the energy and resource efficiency of technological processes of industrial enterprises has acquired particular relevance. It is especially acute for energy-intensive industries, which include high-temperature processing of mining and chemical raw materials. To reduce the energy intensity of complex chemical-technological processes, it is proposed to use the possibilities of computer simulation, for example, to optimize the operating regimes of existing equipment. The article has considered the scientific and practical problem of optimizing the charge heating regimes in various zones of the roasting conveyor machine used to produce phosphorite pellets from apatite-nepheline ore waste stored in dumps of mining and processing plants. The specifics of the optimization task (nonlinearity of the objective function, large dimension of the search space, high computational complexity) are significant limitations for the use of traditional deterministic search methods. It led to the choice of population algorithms, which are based on modeling the collective behavior and are distinguished by the possibility of simultaneous processing of several options. The cuckoo search algorithm, which is distinguished by a small number of “free” parameters that affect the convergence, was used to solve the stated optimization task. To select the optimal values of these parameters, it was proposed to use the idea of coevolution, which consists in the parallel launch of several versions of the selected algorithm with different “settings” for each subpopulation. The management of the chemical-technological system for the processing of apatite-nepheline ore waste, taking into account the basis of the results obtained, will minimize the amount of return and ensure an energy-saving operating regime of the roasting conveyor machine.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":null,"pages":null},"PeriodicalIF":0.3,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89115917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On segmentation of brain tumors by MRI images with deep learning methods","authors":"E. Shchetinin","doi":"10.37791/2687-0649-2023-18-3-40-51","DOIUrl":"https://doi.org/10.37791/2687-0649-2023-18-3-40-51","url":null,"abstract":"Segmentation of a brain tumor is one of the most difficult tasks in the analysis of medical images. The purpose of brain tumor segmentation is to create an accurate outline of brain tumor areas. Gliomas are the most common type of brain tumors. Diagnosis of patients with this disease is based on the analysis of the results of magnetic resonance imaging and segmentation of the tumor boundaries manually. However, due to the time-consuming nature of the manual segmentation process and errors, there is a need for a fast and reliable automatic segmentation algorithm. In recent years, deep learning methods have shown promising effectiveness in solving various computer vision problems, such as image classification, object detection and semantic segmentation. A number of methods based on deep learning have been applied to segmentation of brain tumors, and promising results have been achieved. The article proposes a hybrid method for solving the problem of segmentation of brain tumors based on its MRI images based on the U-Net architecture, the encoder of which uses a model of a deep convolutional neural network pre-trained on a set of ImageNet images. Among such models were used VGG16, VGG19, MobileNetV2, Inception, ResNet50, EfficientNetb7, InceptionResnetV2, DenseNet201, DenseNet121. Based on the hybrid method, the TL-U-Net model was implemented, and numerical experiments were carried out to train it with different encoder models for segmentation of brain tumors based on its MRI images. Computer experiments on a set of MRI images of the brain showed the effectiveness of the proposed approach, the best encoder model turned out to be the neural network Densenet121, which provided indicators of segmentation accuracy MeanIoU=90.34%, MeanDice=94.33%, accuracy=94.17%. The obtained estimates of segmentation accuracy are comparable or exceed similar estimates obtained by other researchers.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":null,"pages":null},"PeriodicalIF":0.3,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77452061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DVCompute++ Simulator: decomposition for discrete event simulation","authors":"D. E. Sorokin","doi":"10.37791/2687-0649-2023-18-3-72-91","DOIUrl":"https://doi.org/10.37791/2687-0649-2023-18-3-72-91","url":null,"abstract":"The decomposition method of discrete event simulation models is represented based on the author’s own work DVCompute++ Simulator, which is a collection of general-purpose programming libraries in C++ for creating and running simulation models. The aim of the research was to find an approach based on which arbitrary models could be divided into parts, then these parts of the model could be divided into less components and so on, where the result would be a hierarchy of nested sub-models that could be considered in isolation as independent entities. Now such sub-models can be created in C++ code, but, in the future, they can be created graphically as diagrams or as some text written in the specialized modeling language, where the sub-models can be used repeatedly, which makes them similar to library units from GPSS STUDIO. The mentioned ways of creating sub-models can be combined in any order on any level of nested hierarchy, where this work can be performed by different people with different skills. Moreover, it is shown in the article that the considered decomposition method can be applied to the case of distributed simulation, which is supported by DVCompute++ Simulator too. All this is possible due to the fact that the author applied functional programming techniques, where the simulation model is considered as a composition of computations. Then the model decomposition is the splitting of computations into parts, which can be connected to each other like constructor. There are two basic computations: blocks similar to the GPSS language and discrete signal computations similar to reactive programming. The diagrams of sub-models and the corresponding C++ code are provided in the article, based on which the suggested author’s method of decomposing the discrete event simulation models is illustrated.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":null,"pages":null},"PeriodicalIF":0.3,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87323018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Functional formation of a neuromorphic reservoir computational element based on a memristive metamaterial","authors":"Y. Lavrenkov","doi":"10.37791/2687-0649-2023-18-3-22-39","DOIUrl":"https://doi.org/10.37791/2687-0649-2023-18-3-22-39","url":null,"abstract":"A neural network structure is designed based on the ability of a certain class of calculators to recombine internal resources in order to produce neuromorphic elements to solve applied problems. This approach is rooted in a composite material with controlled local conductivity to form volumetric inhomogeneities capable of responding to and influencing external electrostatic effects. Such compounds aggregate into stable clusters suitable for modelling the processes that occur during information processing in natural neuronal entities. The use of conductive transitions between substrate-formed neuromorphic clusters as a learning structure makes it possible to increase the reliability of the neural network system. Long-term, non-volatile storage of information about the elements of the training sample in variable structures is possible. The basic approach to information conversion is to manage the electrostatic influence as it passes through the layered structures formed. The response to the input is not formed by propagating the signal through conductive elements with variable conductivity, but by passing the energy impact through a limited volume of metamaterial. Thus, a massively parallel processing of information can be achieved with the implementation of a mechanism for combining the opinions of independent neural network clusters that influence the final decision. Furthermore, this method of spreading effects in such an environment greatly simplifies the process of adding elements to the neural network. The lack of direct electrical interconnection facilitates the easy addition of new computational elements without significant rearrangement of the conductive media. Networks of this type are capable of significant growth without loss of experience. The input conversion process using modified delta coding prevents premature wear and tear on reconfigurable network elements. The manner in which information is presented and the manner in which neural network computing is organised enabled the creation of limited autonomous oscillations within the volume of the calculator to maintain circulating memory and the ability to gradually accumulate network experience for its subsequent recording in configurable elements. The identified features resulted in the application of this kind of calculators in the task of developing radio frequency management plans for the organisation of stable communication in a complex electromagnetic environment.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":null,"pages":null},"PeriodicalIF":0.3,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81339065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fuzzy model of a multi-stage chemical-energy-technological processing system fine ore raw materials","authors":"M. Dli, A. Puchkov, Nikolay N. Prokimnov, Boris V. Okunev","doi":"10.37791/2687-0649-2023-18-3-92-104","DOIUrl":"https://doi.org/10.37791/2687-0649-2023-18-3-92-104","url":null,"abstract":"The results of the study, the purpose of which was to build a software model of a multi-stage integrated system for processing finely dispersed ore raw materials, are presented. The role of such raw materials can be processed waste at mining and processing plants of apatite-nepheline and other types of ores, which accumulate in large volumes in tailing dumps. They create a significant environmental threat in the territories adjacent to the plants due to weathering, dust formation, penetration into the soil and aquifers of chemical compounds and substances hazardous to human health. Therefore, the improvement of existing production processes, the development of new technological systems for mining and processing plants, including the application of the principles of the circular economy, waste recycling, justifies the relevance of the chosen research area. The proposed program model is based on the use of trainable trees of systems (blocks) of fuzzy inference of the first and second types. This approach made it possible to avoid unnecessary complication of the bases of fuzzy inference rules when using only one fuzzy block when building a multi-parameter model of the entire multi-stage complex system. The use of several fuzzy inference blocks that describe the behavior of individual units of the system and their configuration in accordance with the physical structure of the system allows the use of relatively simple sets of rules for individual blocks. The joint selection of their parameters when training a tree of fuzzy blocks makes it possible to achieve high accuracy of the solutions obtained. The novelty of the research results is the proposed software fuzzy model of an integrated system for processing finely dispersed ore raw materials. The results of a simulation experiment conducted in the MatLab environment using a synthetic data set generated in Simulink are presented. The results showed that the trained fuzzy model provides good fidelity of the parameters and variables from the test part of the synthetic set.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":null,"pages":null},"PeriodicalIF":0.3,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84510408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulation modeling application in complex system management","authors":"Timur Devyatkov, V. Devyatkov, Alexey V. Gabalin","doi":"10.37791/2687-0649-2023-18-2-60-72","DOIUrl":"https://doi.org/10.37791/2687-0649-2023-18-2-60-72","url":null,"abstract":"The article analyses the issues of the simulation models application in the large systems management. The background of the issue, specifically, the gradual increase of the models application domain in the management, is described. Quite successful results of the simulation models application in the strategic management of complex systems in different branches are noticed, for example, the works described in [6, 7]. It is mentioned that quite long execution time of most simulation models make it difficult to use them in the operational management, especially for complex systems. The possible solutions of this problem related to the synergy of multiple factors, primarily the emergence of the fundamentally new computational capabilities, use of contemporary concepts of simulation investigations, a combination of the simulation modeling with the multi-factor optimization, the use of the model as a solver, are proposed. The ability to conduct optimizing experiments with the model allows to find and recommend the best ways of the system development. The effectiveness of the simulation experiments application is shown in this article for the optimal planning of the oil refining company output. The technique of the optimal solution finding in the modeling environment by means of the connection of the IOSO multiparameter optimization software to the GPSS Studio modeling environment was successfully tested. Based on the results of the work, it was concluded that in the future the technique of optimal solution finding will allow use of the simulation model as an automatic “intellectual solver” in automatic production planning processes. The analysis and estimations performed showed that the integrated use of all new possibilities ensures the synchronization of the model execution time and the required time bounds of the management solution production. The conclusion about the beginning of active use of the simulation modeling method for the complex systems operational control is made.","PeriodicalId":44195,"journal":{"name":"Journal of Applied Mathematics & Informatics","volume":null,"pages":null},"PeriodicalIF":0.3,"publicationDate":"2023-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89920679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}