Optical Memory and Neural Networks: Latest Articles

Resistor Array as a Commutator
IF 0.9
Optical Memory and Neural Networks Pub Date: 2023-11-28 DOI: 10.3103/S1060992X23060085
V. B. Kotov, Z. B. Sokhova
Abstract: Being necessary components of large smart systems (including the brain), commutators can be realized on the basis of a resistor array with variable resistors. The paper considers some switching (commutating) capabilities of the resistor array. A switching graph is used to describe the work of the resistor array; this sort of graph provides a visual representation of the generated high-conductivity current-flow channels. A two-terminal scheme is used to generate the switching graph: a voltage is supplied to a particular pair of poles (conductors), while the other poles are isolated from the power sources. Changing the pair of poles makes it possible to generate a series of switching graphs. We demonstrate the possibility of creating an interconnection between two or more blocks connected to the appropriate poles of the array. To do this, the resistor array must have a suitable signature (resistor directions), and the applied voltage must match the signature. The series we generate are defined not only by the control signals but also by the prehistory of the resistor array. Given preset resistor characteristics, the competition between graph edges plays an important role in that it contributes to the thinning of the generated switching graph.

Optical Memory and Neural Networks, vol. 32, no. 2, pp. S226–S236.
Citations: 0
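The switching graph described above can be sketched with a toy thresholding rule: treat the array as a weighted graph over its poles and keep only the high-conductivity edges. The threshold construction and the three-pole example below are illustrative assumptions, not the paper's actual generation scheme (which drives pole pairs with voltages).

```python
def switching_graph(conductance, threshold):
    """Keep only the high-conductivity edges of a resistor array.

    'conductance' maps pole pairs to conductance values; the surviving
    edges form the switching graph that visualizes current-flow channels.
    """
    return {edge for edge, g in conductance.items() if g >= threshold}

# Toy 3-pole array: poles a-b and b-c form a channel, a-c does not.
array = {("a", "b"): 0.9, ("b", "c"): 0.8, ("a", "c"): 0.05}
assert switching_graph(array, 0.5) == {("a", "b"), ("b", "c")}
```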
Low Rank Adaptation for Stable Domain Adaptation of Vision Transformers
IF 0.9
Optical Memory and Neural Networks Pub Date: 2023-11-28 DOI: 10.3103/S1060992X2306005X
N. Filatov, M. Kindulov
Abstract: Unsupervised domain adaptation plays a crucial role in semantic segmentation tasks due to the high cost of annotating data. Existing approaches often rely on large transformer models and momentum networks to stabilize and improve the self-training process. In this study, we investigate the applicability of low-rank adaptation (LoRA) to domain adaptation in computer vision. Our focus is on the unsupervised domain adaptation task of semantic segmentation, which requires adapting models from a synthetic dataset (GTA5) to a real-world dataset (Cityscapes). We employ the Swin Transformer as the feature extractor and the TransDA domain adaptation framework. Through experiments, we demonstrate that LoRA effectively stabilizes the self-training process, achieving training dynamics similar to the exponential moving average (EMA) mechanism. Moreover, LoRA provides metrics comparable to EMA under the same limited computation budget. In GTA5 → Cityscapes experiments, the adaptation pipeline with LoRA achieves a mIoU of 0.515, slightly surpassing the EMA baseline's mIoU of 0.513, while also offering an 11% speedup in training time and savings in video memory. These results highlight LoRA as a promising approach for domain adaptation in computer vision, offering a viable alternative to momentum networks that also saves computational resources.

Optical Memory and Neural Networks, vol. 32, no. 2, pp. S277–S283.
Citations: 0
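The core LoRA mechanism referenced above, a frozen weight plus a trainable low-rank update scaled by alpha/rank, can be sketched in a few lines of NumPy. Dimensions, initialization, and names are illustrative; they are not taken from the Swin/TransDA pipeline.

```python
import numpy as np

# Illustrative sizes; the real pipeline adapts Swin Transformer weights.
rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 16, 16, 4, 8

W = rng.normal(size=(d_in, d_out))        # frozen pretrained weight
A = 0.01 * rng.normal(size=(d_in, rank))  # trainable down-projection
B = np.zeros((rank, d_out))               # zero init: update starts at zero

def lora_forward(x):
    """y = x W + (alpha / rank) * x A B; only A and B receive gradients."""
    return x @ W + (alpha / rank) * (x @ A) @ B

x = rng.normal(size=(2, d_in))
# With B = 0 the adapted layer reproduces the frozen layer exactly.
assert np.allclose(lora_forward(x), x @ W)
```

Because only the small factors A and B are trained, optimizer state shrinks accordingly, which is the source of the memory savings the abstract mentions.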
Influence of Neural Network Receptive Field on Monocular Depth and Ego-Motion Estimation
IF 0.9
Optical Memory and Neural Networks Pub Date: 2023-11-28 DOI: 10.3103/S1060992X23060103
S. A. Linok, D. A. Yudin
Abstract: We present an analysis of a self-supervised learning approach for monocular depth and ego-motion estimation. This is an important problem for the computer vision systems of robots, autonomous vehicles, and other intelligent agents equipped only with a monocular camera sensor. We have explored a number of neural network architectures that perform single-frame depth and multi-frame camera pose predictions to minimize the photometric error between consecutive frames in a sequence of camera images. Unlike other existing works, our proposed approach, called ERF-SfMLearner, examines the influence of the deep neural network receptive field on the performance of depth and ego-motion estimation. To do this, we study the modification of network layers with two convolution operators with an extended receptive field: dilated and deformable convolutions. We demonstrate on the KITTI dataset that increasing the receptive field leads to better metrics and lower errors in both depth and ego-motion estimation. Code is publicly available at github.com/linukc/ERF-SfMLearner.

Optical Memory and Neural Networks, vol. 32, no. 2, pp. S206–S213. Open access PDF: https://link.springer.com/content/pdf/10.3103/S1060992X23060103.pdf
Citations: 0
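The receptive-field effect ERF-SfMLearner studies can be made concrete with the standard receptive-field recurrence for convolution stacks. The helper below is a generic sketch, not the paper's code.

```python
def receptive_field(layers):
    """Receptive field of a conv stack given (kernel, stride, dilation) per layer.

    A dilated conv has effective kernel d*(k-1)+1, so it widens the
    receptive field without extra parameters, the effect the paper exploits.
    """
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += d * (k - 1) * jump
        jump *= s
    return rf

# Three 3x3 convs, stride 1: receptive field 7.
assert receptive_field([(3, 1, 1)] * 3) == 7
# Same stack with dilations 1, 2, 4: receptive field 15 at the same cost.
assert receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)]) == 15
```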
Application of Convolutional Neural Networks for Creation of Photoluminescent Carbon Nanosensor for Heavy Metals Detection
IF 0.9
Optical Memory and Neural Networks Pub Date: 2023-11-28 DOI: 10.3103/S1060992X23060036
G. N. Chugreeva, O. E. Sarmanova, K. A. Laptinskiy, S. A. Burikov, T. A. Dolenko
Abstract: The paper presents the results of using convolutional neural networks to develop a multimodal photoluminescent nanosensor based on carbon dots (CD) for the simultaneous measurement of a number of parameters of multicomponent liquid media. It is shown that 2D convolutional neural networks make it possible to determine the concentrations of the heavy-metal cations Cu²⁺, Ni²⁺, Cr³⁺ and the NO₃⁻ anion with mean absolute errors of 0.29, 0.96, 0.22, and 1.82 mM, respectively, and the pH value of aqueous solutions with a mean absolute error of 0.05. The resulting errors satisfy the needs of monitoring the composition of technological and industrial waters.

Optical Memory and Neural Networks, vol. 32, no. 2, pp. S244–S251.
Citations: 0
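The per-analyte errors reported above are mean absolute errors of a regression model. For reference, the metric itself is simply the mean of the absolute residuals; the numbers below are illustrative, not the paper's data.

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """Mean absolute error, the per-analyte metric the abstract reports (mM)."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Toy Cu2+ concentration predictions in mM; values are illustrative only.
mae = mean_absolute_error([1.0, 2.0, 3.0], [1.2, 1.8, 3.3])
assert abs(mae - 0.7 / 3) < 1e-9
```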
Individual Tree Segmentation Quality Evaluation Using Deep Learning Models LiDAR Based
IF 0.9
Optical Memory and Neural Networks Pub Date: 2023-11-28 DOI: 10.3103/S1060992X23060061
I. A. Grishin, T. Y. Krutov, A. I. Kanev, V. I. Terekhov
Abstract: The study of forest structure makes it possible to solve many important forest inventory problems, and LiDAR scanning is today one of the most widely used methods for obtaining information about a forest area. Calculating the structural parameters of plantations requires reliable segmentation of the initial data, and segmentation quality can be difficult to assess over large volumes of forest areas. For this purpose, this work develops a system for evaluating the correctness and quality of segmentation using deep learning models. Segmentation was carried out on a forest area with a high planting density, using stage-by-stage segmentation of layers with the DBSCAN method, with preliminary detection of planting coordinates and partitioning of the plot using a Voronoi diagram. The correctness model was trained and tested on extracted data of individual trees with the PointNet++ and CurveNet neural networks, reaching accuracies of 89 and 88%, respectively. These models are proposed for assessing the quality of clustering methods, as well as for improving the quality of LiDAR data segmentation of individual point clouds of forest plantations by detecting frequently occurring segmentation defects.

Optical Memory and Neural Networks, vol. 32, no. 2, pp. S270–S276.
Citations: 0
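The plot-partitioning step described above (detect tree-top seeds, then split the plot with a Voronoi diagram before per-cell clustering) reduces, for the assignment itself, to a nearest-seed rule. The sketch below assumes 2D seed coordinates have already been detected; data are invented for illustration.

```python
import numpy as np

def voronoi_labels(points, seeds):
    """Assign each LiDAR point to its nearest tree-top seed.

    This nearest-seed rule is exactly the Voronoi partition of the plot:
    cell i contains every point closer to seed i than to any other seed.
    """
    d = np.linalg.norm(points[:, None, :] - seeds[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Two hypothetical tree tops and three points on a 2D plot.
seeds = np.array([[0.0, 0.0], [10.0, 0.0]])
pts = np.array([[1.0, 1.0], [9.0, -1.0], [4.0, 0.0]])
assert voronoi_labels(pts, seeds).tolist() == [0, 1, 0]
```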
Motion Control of Supersonic Passenger Aircraft Using Machine Learning Methods
IF 0.9
Optical Memory and Neural Networks Pub Date: 2023-11-28 DOI: 10.3103/S1060992X23060127
A. Yu. Tiumentsev, Yu. V. Tiumentsev
Abstract: Motion control of modern and advanced aircraft has to be provided under conditions of incomplete and inaccurate knowledge of their parameters and characteristics, possible flight regimes, and environmental influences. In addition, a variety of abnormal situations may arise during flight, in particular equipment failures and structural damage. The control system must be able to adapt to these changes by adjusting the control laws in use. Adaptive control tools allow us to meet this requirement. One effective approach to implementing adaptivity is based on the methods and tools of neural network modeling and control. A fairly common option in solving such problems is the use of recurrent neural networks, in particular networks of the NARX and NARMAX type. However, in a number of cases, in particular for control objects with complicated dynamic properties, this approach is ineffective. As a possible alternative, we propose to consider deep neural networks, used both for modeling dynamical systems and for controlling them. The capabilities of this approach are demonstrated on a real applied problem: synthesizing the control law for the longitudinal angular motion of a supersonic passenger airplane. The results obtained allow us to evaluate the effectiveness of the proposed approach, including in failure situations.

Optical Memory and Neural Networks, vol. 32, no. 2, pp. S195–S205.
Citations: 0
Strategy of Incremental Learning on a Compartmental Spiking Neuron Model
IF 0.9
Optical Memory and Neural Networks Pub Date: 2023-11-28 DOI: 10.3103/S1060992X23060073
A. M. Korsakov, T. T. Isakov, A. V. Bakhshiev
Abstract: The article presents a method for implementing incremental learning on a compartmental spiking neuron model. Training a single neuron with the possibility of forming new classes was chosen as the incremental learning scenario. During training, only the new sample was used, without access to the entire previous training set. Results of experiments on the Iris dataset are presented, demonstrating the applicability of the chosen strategy for incremental learning on a compartmental spiking neuron model.

Optical Memory and Neural Networks, vol. 32, no. 2, pp. S237–S243.
Citations: 0
Optimal Control Selection for Stabilizing the Inverted Pendulum Problem Using Neural Network Method
IF 0.9
Optical Memory and Neural Networks Pub Date: 2023-11-28 DOI: 10.3103/S1060992X23060115
D. A. Tarkhov, D. A. Lavygin, O. A. Skripkin, M. D. Zakirova, T. V. Lazovskaya
Abstract: Controlling unstable systems is a critically important control problem, as an unstable object can pose significant danger to humans and the environment when it fails. In this paper, a neural network was trained to determine the optimal control for an unstable system, based on a comparative analysis of two control methods: the implicit Euler method and the linearization method. This neural network identifies the optimal control based on the position of a point on the phase plane.

Optical Memory and Neural Networks, vol. 32, no. 2, pp. S214–S225.
Citations: 0
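One of the two methods compared above, the implicit Euler method, advances the linearized pendulum by solving a small linear system at every step. The sketch below uses the normalization g/l = 1 purely for illustration; it is not the paper's model.

```python
import numpy as np

# Inverted pendulum linearized about the upright position, with g/l = 1:
#   theta_dot = omega,  omega_dot = theta + u   (illustrative normalization)
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.array([0.0, 1.0])

def implicit_euler_step(x, u, h):
    """Backward Euler: solve (I - h A) x_next = x + h B u for x_next.

    The implicit step stays numerically stable even though the open-loop
    dynamics are unstable, unlike the explicit Euler step.
    """
    return np.linalg.solve(np.eye(2) - h * A, x + h * B * u)

x = np.array([0.1, 0.0])  # small tilt, at rest
x_next = implicit_euler_step(x, u=-0.5, h=0.01)
# For a small step the implicit and explicit updates nearly coincide.
assert np.allclose(x_next, [0.1, -0.004], atol=1e-3)
```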
Implementation Challenges and Strategies for Hebbian Learning in Convolutional Neural Networks
IF 0.9
Optical Memory and Neural Networks Pub Date: 2023-11-28 DOI: 10.3103/S1060992X23060048
A. V. Demidovskij, M. S. Kazyulina, I. G. Salnikov, A. M. Tugaryov, A. I. Trutnev, S. V. Pavlov
Abstract: Given the unprecedented growth of deep learning applications, training acceleration is becoming a subject of strong academic interest. Hebbian learning, as a training strategy alternative to backpropagation, presents a promising optimization approach due to its locality, lower computational complexity, and parallelization potential. Nevertheless, owing to the challenging optimization of Hebbian learning, there is no widely accepted approach to implementing such mixed strategies. The current paper overviews the four main strategies for updating weights using the Hebbian rule, including its widely used modifications, Oja's and Instar rules. Additionally, the paper analyzes 21 industrial implementations of Hebbian learning, discusses the merits and shortcomings of Hebbian rules, and presents the results of computational experiments on four convolutional networks. Experiments show that the most efficient implementation strategy of Hebbian learning allows for a 1.66× speedup and a 3.76× saving in memory consumption when updating DenseNet121 weights compared to backpropagation. Finally, a comparative analysis of the implementation strategies is carried out, and grounded recommendations for applying Hebbian learning are formulated.

Optical Memory and Neural Networks, vol. 32, no. 2, pp. S252–S264.
Citations: 0
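Oja's rule, one of the Hebbian modifications the paper reviews, is a purely local update (no backpropagated gradients) whose decay term keeps the weight norm bounded; on zero-mean data the weight converges to the first principal component. The data and learning rate below are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def oja_update(w, x, lr):
    """Oja's rule: Hebbian term y*x with a decay y^2*w that bounds ||w||.

    y = w . x; dw = lr * y * (x - y * w). Local: no backpropagation needed.
    """
    y = w @ x
    return w + lr * y * (x - y * w)

# Zero-mean toy data whose variance is dominated by the first axis.
rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])
w = np.array([1.0, 1.0]) / np.sqrt(2.0)
for x in X:
    w = oja_update(w, x, lr=0.002)

# w aligns with the dominant principal axis and keeps roughly unit norm.
assert abs(w[0]) / np.linalg.norm(w) > 0.98
```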
Attractor Properties of Spatiotemporal Memory in Effective Sequence Processing Task
IF 0.9
Optical Memory and Neural Networks Pub Date: 2023-11-28 DOI: 10.3103/S1060992X23060097
P. Kuderov, E. Dzhivelikian, A. I. Panov
Abstract: For autonomous AI systems, it is important to process spatiotemporal information in order to encode and memorize it and to extract and reuse abstractions effectively. What is natural for natural intelligence is still a challenge for AI systems. In this paper, we propose a biologically plausible model of spatiotemporal memory with an attractor module and study its ability to encode sequences and efficiently extract and reuse repetitive patterns. The results of experiments on synthetic and textual data, as well as data from DVS cameras, demonstrate a qualitative improvement in the properties of the model when the attractor module is used.

Optical Memory and Neural Networks, vol. 32, no. 2, pp. S284–S292. Open access PDF: https://link.springer.com/content/pdf/10.3103/S1060992X23060097.pdf
Citations: 0
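A classical stand-in for an attractor module of this kind is Hopfield dynamics: a Hebbian weight matrix pulls a corrupted pattern back to the stored one. This is a generic illustration of attractor recall, not the model proposed in the paper.

```python
import numpy as np

def hopfield_recall(W, s, steps=5):
    """Synchronous Hopfield updates: states flow to a stored attractor."""
    for _ in range(steps):
        s = np.sign(W @ s)
    return s

p = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = np.outer(p, p) - np.eye(len(p))  # Hebbian storage of one pattern
noisy = p.copy()
noisy[0] = -noisy[0]                 # corrupt one bit
recalled = hopfield_recall(W, noisy)
assert np.array_equal(recalled, p)   # the attractor restores the pattern
```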