arXiv - CS - Neural and Evolutionary Computing: Latest Papers

When In-memory Computing Meets Spiking Neural Networks -- A Perspective on Device-Circuit-System-and-Algorithm Co-design
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-22 DOI: arxiv-2408.12767
Abhishek Moitra, Abhiroop Bhattacharjee, Yuhang Li, Youngeun Kim, Priyadarshini Panda
Abstract: This review explores the intersection of bio-plausible artificial intelligence in the form of Spiking Neural Networks (SNNs) with the analog In-Memory Computing (IMC) domain, highlighting their collective potential for low-power edge computing environments. Through detailed investigation at the device, circuit, and system levels, we highlight the pivotal synergies between SNNs and IMC architectures. Additionally, we emphasize the critical need for comprehensive system-level analyses, considering the inter-dependencies between algorithm, device, and circuit & system parameters, which are crucial for optimal performance. An in-depth analysis leads to the identification of key system-level bottlenecks arising from device limitations, which can be addressed using SNN-specific algorithm-hardware co-design techniques. This review underscores the imperative for holistic device-to-system design space co-exploration, highlighting the critical aspects of hardware and algorithm research endeavors for low-power neuromorphic solutions.
Citations: 0
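The review is a survey and ships no code; as background for readers new to SNNs, the sketch below implements a leaky integrate-and-fire (LIF) neuron layer, the standard spiking building block that such device-circuit-algorithm co-design work targets. The time constant, threshold, and hard-reset rule are illustrative defaults, not parameters taken from the paper.

```python
import numpy as np

def lif_layer(inputs, tau=0.9, v_th=1.0):
    """Simulate a layer of leaky integrate-and-fire neurons.

    inputs: array of shape (T, N) -- input current per timestep and neuron.
    Returns a (T, N) binary spike train.
    """
    T, N = inputs.shape
    v = np.zeros(N)                  # membrane potentials
    spikes = np.zeros((T, N))
    for t in range(T):
        v = tau * v + inputs[t]      # leaky integration of the input current
        fired = v >= v_th            # spike when the threshold is crossed
        spikes[t] = fired
        v = np.where(fired, 0.0, v)  # hard reset after a spike
    return spikes

# Example: random input current for 8 timesteps and 4 neurons
rng = np.random.default_rng(0)
print(lif_layer(rng.uniform(0, 0.6, size=(8, 4))).astype(int))
```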
Towards Efficient Formal Verification of Spiking Neural Network
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-20 DOI: arxiv-2408.10900
Baekryun Seong, Jieung Kim, Sang-Ki Ko
Abstract: Recently, AI research has primarily focused on large language models (LLMs), and increasing accuracy often involves scaling up and consuming more power. The power consumption of AI has become a significant societal issue; in this context, spiking neural networks (SNNs) offer a promising solution. SNNs operate in an event-driven manner, like the human brain, and compress information temporally. These characteristics allow SNNs to significantly reduce power consumption compared to perceptron-based artificial neural networks (ANNs), highlighting them as a next-generation neural network technology. However, societal concerns regarding AI go beyond power consumption, with the reliability of AI models being a global issue. For instance, adversarial attacks on AI models are a well-studied problem in the context of traditional neural networks. Despite their importance, the stability and property verification of SNNs remains in the early stages of research. Most SNN verification methods are time-consuming and barely scalable, making practical applications challenging. In this paper, we introduce temporal encoding to achieve practical performance in verifying the adversarial robustness of SNNs. We conduct a theoretical analysis of this approach and demonstrate its success in verifying SNNs at previously unmanageable scales. Our contribution advances SNN verification to a practical level, facilitating the safer application of SNNs.
Citations: 0
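The abstract credits temporal encoding for the scalability gain but does not spell out the scheme. A common temporal code is time-to-first-spike, where larger inputs fire earlier and each neuron spikes at most once, which shrinks the space of spike patterns a verifier must reason about. The sketch below illustrates that generic idea under this assumption; it is not the paper's encoder.

```python
import numpy as np

def time_to_first_spike(x, num_steps=16):
    """Encode values in [0, 1] as single-spike trains: larger value -> earlier spike."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    spike_times = np.round((1.0 - x) * (num_steps - 1)).astype(int)
    train = np.zeros((num_steps, x.size), dtype=int)
    train[spike_times, np.arange(x.size)] = 1   # exactly one spike per input neuron
    return train

train = time_to_first_spike([0.9, 0.5, 0.1], num_steps=8)
print(train)                 # neuron 0 fires first, neuron 2 last
print(train.sum(axis=0))     # one spike per neuron -> far fewer cases to verify
```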
Physics-Driven AI Correction in Laser Absorption Sensing Quantification
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-20 DOI: arxiv-2408.10714
Ruiyuan Kang, Panos Liatsis, Meixia Geng, Qingjie Yang
Abstract: Laser absorption spectroscopy (LAS) quantification is a popular tool for measuring the temperature and concentration of gases. It has low error tolerance, whereas current ML-based solutions cannot guarantee the reliability of their measurements. In this work, we propose a new framework, SPEC, to address this issue. In addition to the conventional ML-estimator-based estimation mode, SPEC includes a Physics-driven Anomaly Detection module (PAD) to assess the error of the estimation, and a Correction mode designed to correct unreliable estimations. The Correction mode is a network-based optimization algorithm that uses error guidance to iteratively correct the estimation. A hybrid surrogate error model is proposed to estimate the error distribution; it contains an ensemble of networks to simulate the reconstruction error, together with true feasible error computation. A greedy ensemble search is proposed to find the optimal correction robustly and efficiently from the gradient guidance of the surrogate model. The proposed SPEC is validated on test scenarios outside the training distribution. The results show that SPEC can significantly improve estimation quality, and that the Correction mode outperforms current network-based optimization algorithms. In addition, SPEC is reconfigurable: it can be easily adapted to different quantification tasks by changing the PAD without retraining the ML estimator.
Citations: 0
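SPEC's Correction mode is described as iteratively adjusting an unreliable estimate using gradient guidance from a surrogate error model. The sketch below shows that loop in miniature with a hypothetical quadratic surrogate standing in for the paper's network ensemble; the variable names, reference values, and step size are illustrative assumptions.

```python
import numpy as np

def surrogate_error(theta, theta_ref):
    """Hypothetical differentiable surrogate of the estimation error.

    Stands in for SPEC's ensemble of error networks; a simple quadratic
    around a reference solution is used here purely for illustration.
    """
    return 0.5 * np.sum((theta - theta_ref) ** 2)

def surrogate_grad(theta, theta_ref):
    return theta - theta_ref

def correct_estimate(theta_init, theta_ref, lr=0.2, max_iter=50, tol=1e-4):
    """Iteratively refine an unreliable estimate using gradient guidance."""
    theta = np.array(theta_init, dtype=float)
    for _ in range(max_iter):
        if surrogate_error(theta, theta_ref) < tol:   # accept once predicted error is small
            break
        theta -= lr * surrogate_grad(theta, theta_ref)
    return theta, surrogate_error(theta, theta_ref)

# ML estimator output (temperature in K, concentration) flagged as unreliable by the PAD
theta0 = np.array([1500.0, 0.12])
theta_star = np.array([1450.0, 0.10])   # assumed physically consistent reference
print(correct_estimate(theta0, theta_star))
```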
Improved Differential Evolution based Feature Selection through Quantum, Chaos, and Lasso
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-20 DOI: arxiv-2408.10693
Yelleti Vivek, Sri Krishna Vadlamani, Vadlamani Ravi, P. Radha Krishna
Abstract: Modern deep learning continues to achieve outstanding performance on an astounding variety of high-dimensional tasks. In practice, this is obtained by fitting deep neural models to all the input data with minimal feature engineering, thus sacrificing interpretability in many cases. However, in applications such as medicine, where interpretability is crucial, feature subset selection becomes an important problem. Metaheuristics such as Binary Differential Evolution are a popular approach to feature selection, and the research literature continues to introduce novel ideas, drawn from quantum computing and chaos theory, for instance, to improve them. In this paper, we demonstrate that introducing chaos-generated variables, generated from considerations of the Lyapunov time, in place of random variables in quantum-inspired metaheuristics significantly improves their performance on high-dimensional medical classification tasks and outperforms other approaches. We show that this chaos-induced improvement is a general phenomenon by demonstrating it for multiple varieties of underlying quantum-inspired metaheuristics. Performance is further enhanced through Lasso-assisted feature pruning. At the implementation level, we vastly speed up our algorithms through a scalable island-based computing cluster parallelization technique.
Citations: 0
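The core idea is to substitute chaos-generated numbers for pseudo-random draws inside quantum-inspired binary Differential Evolution. The sketch below illustrates the substitution with a logistic map feeding a standard binomial crossover on a binary feature-selection mask; the paper's actual chaotic map, Lyapunov-time reasoning, and quantum-inspired operators are not reproduced here.

```python
import numpy as np

def logistic_stream(x0=0.7, r=4.0):
    """Chaotic number stream from the logistic map (a common choice; the
    paper's exact map is not specified here)."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

def binary_de_crossover(target, mutant, chaos, cr=0.5):
    """Binomial crossover for binary DE, with chaotic draws replacing the RNG."""
    trial = target.copy()
    for j in range(len(target)):
        if next(chaos) < cr:        # chaotic value plays the role of rand()
            trial[j] = mutant[j]
    return trial

chaos = logistic_stream()
target = np.array([0, 1, 0, 0, 1, 1, 0, 1])   # current feature-selection mask
mutant = np.array([1, 1, 1, 0, 0, 1, 1, 0])   # mutant mask from the DE mutation step
print(binary_de_crossover(target, mutant, chaos))
```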
Recurrent Neural Networks Learn to Store and Generate Sequences using Non-Linear Representations
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-20 DOI: arxiv-2408.10920
Róbert Csordás, Christopher Potts, Christopher D. Manning, Atticus Geiger
Abstract: The Linear Representation Hypothesis (LRH) states that neural networks learn to encode concepts as directions in activation space, and a strong version of the LRH states that models learn only such encodings. In this paper, we present a counterexample to this strong LRH: when trained to repeat an input token sequence, gated recurrent neural networks (RNNs) learn to represent the token at each position with a particular order of magnitude, rather than a direction. These representations have layered features that are impossible to locate in distinct linear subspaces. To show this, we train interventions to predict and manipulate tokens by learning the scaling factor corresponding to each sequence position. These interventions indicate that the smallest RNNs find only this magnitude-based solution, while larger RNNs have linear representations. These findings strongly indicate that interpretability research should not be confined by the LRH.
Citations: 0
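As a toy illustration of the magnitude-based (rather than direction-based) representation the authors report, the arithmetic below packs a token sequence into a single scalar by giving each position its own order of magnitude and reads it back by rescaling. This is only an analogy for why learning a per-position scaling factor suffices to recover tokens; it is not the paper's trained RNN or intervention method.

```python
import numpy as np

BASE = 10.0   # one order of magnitude per sequence position
VOCAB = 10    # token ids 0..9

def store(tokens):
    """Pack a token sequence into a single scalar "hidden state":
    position i is written at scale BASE**(-i), i.e. as a magnitude, not a direction."""
    return sum(tok * BASE ** (-i) for i, tok in enumerate(tokens))

def read(state, position):
    """Read the token at a given position back out by rescaling and truncating."""
    return int(np.floor(state * BASE ** position + 1e-6)) % VOCAB

seq = [3, 7, 2, 9]
h = store(seq)                                     # a single number, approx. 3.729
print(h, [read(h, i) for i in range(len(seq))])    # recovers [3, 7, 2, 9]
```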
Neural Exploratory Landscape Analysis
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-20 DOI: arxiv-2408.10672
Zeyuan Ma, Jiacheng Chen, Hongshu Guo, Yue-Jiao Gong
Abstract: Recent research in Meta-Black-Box Optimization (MetaBBO) has shown that meta-trained neural networks can effectively guide the design of black-box optimizers, significantly reducing the need for expert tuning and delivering robust performance across complex problem distributions. Despite their success, a paradox remains: MetaBBO still relies on human-crafted Exploratory Landscape Analysis features to inform the meta-level agent about the low-level optimization progress. To address this gap, this paper proposes Neural Exploratory Landscape Analysis (NeurELA), a novel framework that dynamically profiles landscape features through a two-stage, attention-based neural network, executed in an entirely end-to-end fashion. NeurELA is pre-trained over a variety of MetaBBO algorithms using a multi-task neuroevolution strategy. Extensive experiments show that NeurELA achieves consistently superior performance when integrated into different and even unseen MetaBBO tasks and can be efficiently fine-tuned for a further performance boost. This advancement marks a pivotal step in making MetaBBO algorithms more autonomous and broadly applicable.
Citations: 0
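NeurELA replaces hand-crafted ELA features with a two-stage, attention-based network over the sampled population. The exact architecture and multi-task neuroevolution pre-training are not reproduced here; the PyTorch sketch below only conveys the general shape: self-attention over (solution, fitness) samples pooled into a fixed-size landscape feature vector, with all layer sizes chosen arbitrarily.

```python
import torch
import torch.nn as nn

class TinyLandscapeEncoder(nn.Module):
    """Toy stand-in for a neural landscape analyser: attention over a set of
    (solution, fitness) samples, mean-pooled into a feature vector."""

    def __init__(self, dim_solution, d_model=32, n_features=16):
        super().__init__()
        self.embed = nn.Linear(dim_solution + 1, d_model)   # +1 for the fitness value
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, n_features)

    def forward(self, x, fitness):
        # x: (batch, population, dim), fitness: (batch, population)
        tokens = self.embed(torch.cat([x, fitness.unsqueeze(-1)], dim=-1))
        mixed, _ = self.attn(tokens, tokens, tokens)         # population members interact
        return self.head(mixed.mean(dim=1))                  # permutation-invariant pooling

# Example: 64 sampled solutions of a 10-D problem, fitness = sphere function
x = torch.randn(1, 64, 10)
f = (x ** 2).sum(dim=-1)
print(TinyLandscapeEncoder(10)(x, f).shape)   # torch.Size([1, 16])
```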
Event Stream based Sign Language Translation: A High-Definition Benchmark Dataset and A New Algorithm
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-20 DOI: arxiv-2408.10488
Xiao Wang, Yao Rong, Fuling Wang, Jianing Li, Lin Zhu, Bo Jiang, Yaowei Wang
Abstract: Sign Language Translation (SLT) is a core task in the field of AI-assisted disability. Unlike traditional SLT based on visible light videos, which is easily affected by factors such as lighting, rapid hand movements, and privacy breaches, this paper proposes the use of high-definition Event streams for SLT, effectively mitigating the aforementioned issues. This is primarily because Event streams have a high dynamic range and dense temporal signals, which can withstand low illumination and motion blur well. Additionally, due to their sparsity in space, they effectively protect the privacy of the target person. More specifically, we propose a new high-resolution Event stream sign language dataset, termed Event-CSL, which effectively fills the data gap in this area of research. It contains 14,827 videos, 14,821 glosses, and 2,544 Chinese words in the text vocabulary. These samples are collected in a variety of indoor and outdoor scenes, encompassing multiple angles, light intensities, and camera movements. We have benchmarked existing mainstream SLT works to enable fair comparison for future efforts. Based on this dataset and several other large-scale datasets, we propose a novel baseline method that fully leverages the Mamba model's ability to integrate temporal information of CNN features, resulting in improved sign language translation outcomes. Both the benchmark dataset and source code will be released at https://github.com/Event-AHU/OpenESL
Citations: 0
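The proposed baseline couples per-frame CNN features with the Mamba model for temporal integration. The sketch below follows that overall shape but swaps in a GRU as a generic temporal stand-in to avoid depending on a specific Mamba implementation; layer sizes are arbitrary, and only the output vocabulary size (2,544 words) is taken from the abstract.

```python
import torch
import torch.nn as nn

class EventSLTBackbone(nn.Module):
    """Illustrative backbone: per-frame CNN features + a temporal sequence model.
    A GRU is used here as a stand-in for the Mamba block described in the paper."""

    def __init__(self, d_feat=128, vocab_size=2544):
        super().__init__()
        self.cnn = nn.Sequential(                       # tiny per-frame encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, d_feat),
        )
        self.temporal = nn.GRU(d_feat, d_feat, batch_first=True)
        self.classifier = nn.Linear(d_feat, vocab_size)  # per-step vocabulary logits

    def forward(self, event_frames):
        # event_frames: (batch, time, 1, H, W) -- event streams binned into frames
        b, t = event_frames.shape[:2]
        feats = self.cnn(event_frames.flatten(0, 1)).view(b, t, -1)
        mixed, _ = self.temporal(feats)                  # integrate temporal information
        return self.classifier(mixed)

print(EventSLTBackbone()(torch.randn(2, 8, 1, 64, 64)).shape)  # (2, 8, 2544)
```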
Evaluation Framework for AI-driven Molecular Design of Multi-target Drugs: Brain Diseases as a Case Study
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-20 DOI: arxiv-2408.10482
Arthur Cerveira, Frederico Kremer, Darling de Andrade Lourenço, Ulisses B Corrêa
Abstract: The widespread application of Artificial Intelligence (AI) techniques has significantly influenced the development of new therapeutic agents. These computational methods can be used to design and predict the properties of generated molecules. Multi-target Drug Discovery (MTDD) is an emerging paradigm for discovering drugs against complex disorders that do not respond well to more traditional target-specific treatments, such as central nervous system, immune system, and cardiovascular diseases. Still, there is yet to be an established benchmark suite for assessing the effectiveness of AI tools for designing multi-target compounds. Standardized benchmarks allow for comparing existing techniques and promote rapid research progress. Hence, this work proposes an evaluation framework for molecule generation techniques in MTDD scenarios, considering brain diseases as a case study. Our methodology involves using large language models to select the appropriate molecular targets, gathering and preprocessing the bioassay datasets, training quantitative structure-activity relationship models to predict target modulation, and assessing other essential drug-likeness properties for implementing the benchmarks. Additionally, this work will assess the performance of four deep generative models and evolutionary algorithms over our benchmark suite. In our findings, both evolutionary algorithms and generative models can achieve competitive results across the proposed benchmarks.
Citations: 0
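One ingredient of the proposed framework is scoring generated molecules on drug-likeness properties alongside predicted target modulation. The sketch below shows what such a property check can look like using RDKit's QED and Lipinski-style descriptors; the chosen properties and the example molecule (a donepezil-like CNS scaffold) are illustrative, not the framework's actual scoring pipeline.

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, QED

def drug_likeness_report(smiles):
    """Compute a few standard drug-likeness properties for a generated molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None                      # invalid SMILES produced by the generator
    return {
        "QED": QED.qed(mol),             # quantitative estimate of drug-likeness
        "MolWt": Descriptors.MolWt(mol),
        "LogP": Crippen.MolLogP(mol),
        "HBD": Descriptors.NumHDonors(mol),
        "HBA": Descriptors.NumHAcceptors(mol),
    }

# Example: a donepezil-like CNS scaffold as a stand-in for a generated candidate
print(drug_likeness_report("O=C1c2cc(OC)c(OC)cc2CC1CC1CCN(Cc2ccccc2)CC1"))
```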
Mutation Strength Adaptation of the $(μ/μ_I, λ)$-ES for Large Population Sizes on the Sphere Function
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-19 DOI: arxiv-2408.09761
Amir Omeradzic, Hans-Georg Beyer
Abstract: The mutation strength adaptation properties of a multi-recombinative $(\mu/\mu_I, \lambda)$-ES are studied for isotropic mutations. To this end, standard implementations of cumulative step-size adaptation (CSA) and mutative self-adaptation ($\sigma$SA) are investigated experimentally and theoretically by assuming large population sizes ($\mu$) in relation to the search space dimensionality ($N$). The adaptation is characterized in terms of the scale-invariant mutation strength on the sphere in relation to its maximum achievable value for positive progress. The results show how the different $\sigma$-adaptation variants behave as $\mu$ and $N$ are varied. Standard CSA variants show notably different adaptation properties and progress rates on the sphere, becoming slower or faster as $\mu$ or $N$ are varied. This is shown by investigating common choices for the cumulation and damping parameters. Standard $\sigma$SA variants (with default learning parameter settings) can achieve faster adaptation and larger progress rates compared to the CSA. However, it is shown how self-adaptation affects the progress rate levels negatively. Furthermore, differences regarding the adaptation and stability of $\sigma$SA with log-normal and normal mutation sampling are elaborated.
Citations: 0
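For readers unfamiliar with the object of study, the sketch below is a compact NumPy implementation of a $(\mu/\mu_I, \lambda)$-ES with cumulative step-size adaptation on the sphere function. The cumulation and damping constants are common textbook-style defaults chosen for illustration; analysing how such choices behave as $\mu$ and $N$ grow is precisely what the paper does, so treat these values as assumptions.

```python
import numpy as np

def csa_es_sphere(N=20, mu=50, lam=100, sigma=1.0, iters=300, seed=0):
    """(mu/mu_I, lambda)-ES with cumulative step-size adaptation on the sphere."""
    rng = np.random.default_rng(seed)
    y = rng.normal(size=N)                       # parent (centroid) solution
    c_sigma = (mu + 2) / (N + mu + 5)            # cumulation constant (illustrative default)
    d_sigma = 1 + np.sqrt(mu / N)                # damping (illustrative default)
    chi_N = np.sqrt(N) * (1 - 1 / (4 * N) + 1 / (21 * N ** 2))  # approx. E||N(0, I)||
    p = np.zeros(N)                              # evolution path
    for _ in range(iters):
        z = rng.normal(size=(lam, N))            # isotropic mutations
        offspring = y + sigma * z
        f = np.sum(offspring ** 2, axis=1)       # sphere fitness
        best = np.argsort(f)[:mu]                # mu best of lambda
        z_mean = z[best].mean(axis=0)            # intermediate (equal-weight) recombination
        y = y + sigma * z_mean
        p = (1 - c_sigma) * p + np.sqrt(c_sigma * (2 - c_sigma) * mu) * z_mean
        sigma *= np.exp((c_sigma / d_sigma) * (np.linalg.norm(p) / chi_N - 1))
    return np.sum(y ** 2), sigma

print(csa_es_sphere())   # (residual sphere value, adapted mutation strength)
```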
Liquid Fourier Latent Dynamics Networks for fast GPU-based numerical simulations in computational cardiology
arXiv - CS - Neural and Evolutionary Computing Pub Date: 2024-08-19 DOI: arxiv-2408.09818
Matteo Salvador, Alison L. Marsden
Abstract: Scientific Machine Learning (ML) is gaining momentum as a cost-effective alternative to physics-based numerical solvers in many engineering applications. In fact, scientific ML is currently being used to build accurate and efficient surrogate models starting from high-fidelity numerical simulations, effectively encoding the parameterized temporal dynamics underlying Ordinary Differential Equations (ODEs), or even the spatio-temporal behavior underlying Partial Differential Equations (PDEs), in appropriately designed neural networks. We propose an extension of Latent Dynamics Networks (LDNets), namely Liquid Fourier LDNets (LFLDNets), to create parameterized space-time surrogate models for multiscale and multiphysics sets of highly nonlinear differential equations on complex geometries. LFLDNets employ a neurologically-inspired, sparse, liquid neural network for temporal dynamics, relaxing the requirement of a numerical solver for time advancement and leading to superior performance in terms of tunable parameters, accuracy, efficiency and learned trajectories with respect to neural ODEs based on feedforward fully-connected neural networks. Furthermore, in our implementation of LFLDNets, we use a Fourier embedding with a tunable kernel in the reconstruction network to learn high-frequency functions better and faster than using space coordinates directly as input. We challenge LFLDNets in the framework of computational cardiology and evaluate their capabilities on two 3-dimensional test cases arising from multiscale cardiac electrophysiology and cardiovascular hemodynamics. This paper illustrates the capability to run Artificial Intelligence-based numerical simulations on single or multiple GPUs in a matter of minutes and represents a significant step forward in the development of physics-informed digital twins.
Citations: 0
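One concrete component of LFLDNets is a Fourier embedding of space coordinates with a tunable kernel in the reconstruction network. The sketch below shows a random-Fourier-feature layer with a learnable bandwidth, which is one common way to realize such an embedding; it is an assumption about the layer's general form, not the paper's implementation.

```python
import torch
import torch.nn as nn

class FourierEmbedding(nn.Module):
    """Random Fourier features of spatial coordinates with a learnable bandwidth."""

    def __init__(self, in_dim=3, n_frequencies=64):
        super().__init__()
        # Fixed random projection directions; the kernel scale is tunable (learned).
        self.register_buffer("B", torch.randn(in_dim, n_frequencies))
        self.log_scale = nn.Parameter(torch.zeros(1))   # learnable bandwidth

    def forward(self, x):
        # x: (..., in_dim) spatial coordinates
        proj = 2 * torch.pi * x @ (self.B * self.log_scale.exp())
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

coords = torch.rand(1024, 3)      # query points on a 3-D geometry
emb = FourierEmbedding()(coords)
print(emb.shape)                  # torch.Size([1024, 128])
```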