Machine Learning Science and Technology: Latest Articles

Quality assurance for online adaptive radiotherapy: a secondary dose verification model with geometry-encoded U-Net
IF 6.3, Zone 2, Physics and Astronomy
Machine Learning Science and Technology Pub Date: 2024-12-01 Epub Date: 2024-10-11 DOI: 10.1088/2632-2153/ad829e
Shunyu Yan, Austen Maniscalco, Biling Wang, Dan Nguyen, Steve Jiang, Chenyang Shen
{"title":"Quality assurance for online adaptive radiotherapy: a secondary dose verification model with geometry-encoded U-Net.","authors":"Shunyu Yan, Austen Maniscalco, Biling Wang, Dan Nguyen, Steve Jiang, Chenyang Shen","doi":"10.1088/2632-2153/ad829e","DOIUrl":"https://doi.org/10.1088/2632-2153/ad829e","url":null,"abstract":"<p><p>In online adaptive radiotherapy (ART), quick computation-based secondary dose verification is crucial for ensuring the quality of ART plans while the patient is positioned on the treatment couch. However, traditional dose verification algorithms are generally time-consuming, reducing the efficiency of ART workflow. This study aims to develop an ultra-fast deep-learning (DL) based secondary dose verification algorithm to accurately estimate dose distributions using computed tomography (CT) and fluence maps (FMs). We integrated FMs into the CT image domain by explicitly resolving the geometry of treatment delivery. For each gantry angle, an FM was constructed based on the optimized multi-leaf collimator apertures and corresponding monitoring units. To effectively encode treatment beam configuration, the constructed FMs were back-projected to <math><mrow><mn>30</mn></mrow> </math> cm away from the isocenter with respect to the exact geometry of the treatment machines. Then, a 3D U-Net was utilized to take the integrated CT and FM volume as input to estimate dose. Training and validation were performed on <math><mrow><mn>381</mn></mrow> </math> prostate cancer cases, with an additional <math><mrow><mn>40</mn></mrow> </math> testing cases for independent evaluation of model performance. The proposed model can estimate dose in ∼ <math><mrow><mn>15</mn></mrow> </math> ms for each patient. The average <i>γ</i> passing rate ( <math><mrow><mn>3</mn> <mi>%</mi> <mrow><mo>/</mo></mrow> <mn>2</mn> <mstyle></mstyle> <mrow><mtext>mm</mtext></mrow> </mrow> </math> , <math><mrow><mn>10</mn> <mi>%</mi></mrow> </math> threshold) for the estimated dose was 99.9% ± 0.15% on testing patients. The mean dose differences for the planning target volume and organs at risk were <math><mrow><mn>0.07</mn> <mi>%</mi> <mo>±</mo> <mn>0.34</mn> <mi>%</mi></mrow> </math> and <math><mrow><mn>0.48</mn> <mi>%</mi> <mo>±</mo> <mn>0.72</mn> <mi>%</mi></mrow> </math> , respectively. We have developed a geometry-resolved DL framework for accurate dose estimation and demonstrated its potential in real-time online ART doses verification.</p>","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"5 4","pages":"045013"},"PeriodicalIF":6.3,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11467776/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142476443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
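For readers who want a concrete picture of the input encoding described above, the following sketch stacks a CT volume and a crudely rasterized fluence-map volume as channels for a small 3D convolutional network. It is a minimal illustration only: the channel counts, layer sizes, and the broadcast-along-depth stand-in for the paper's geometry-resolved back-projection are assumptions, not the authors' U-Net.

```python
# Minimal sketch (not the authors' model): stack a CT volume and a crude
# fluence-map volume as input channels to a small 3D conv network.
import torch
import torch.nn as nn

class TinyDoseNet(nn.Module):
    """Stand-in for the paper's 3D U-Net: plain conv blocks, no up/down sampling."""
    def __init__(self, in_channels=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),  # predicted dose, one channel
        )

    def forward(self, x):
        return self.net(x)

# Toy volumes: a random stand-in for CT, and a fluence "volume" obtained by
# broadcasting one 2D fluence map along the beam axis (a crude substitute for
# the geometry-resolved back-projection described in the abstract).
D, H, W = 32, 64, 64
ct = torch.randn(1, 1, D, H, W)
fluence_map = torch.rand(1, 1, 1, H, W)          # one gantry angle, 2D map
fluence_vol = fluence_map.expand(1, 1, D, H, W)  # broadcast along depth

x = torch.cat([ct, fluence_vol], dim=1)          # (batch, 2 channels, D, H, W)
model = TinyDoseNet(in_channels=2)
dose = model(x)
print(dose.shape)                                 # torch.Size([1, 1, 32, 64, 64])
```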
Equivariant tensor network potentials
IF 6.8, Zone 2, Physics and Astronomy
Machine Learning Science and Technology Pub Date: 2024-09-18 DOI: 10.1088/2632-2153/ad79b5
M Hodapp and A Shapeev
{"title":"Equivariant tensor network potentials","authors":"M Hodapp and A Shapeev","doi":"10.1088/2632-2153/ad79b5","DOIUrl":"https://doi.org/10.1088/2632-2153/ad79b5","url":null,"abstract":"Machine-learning interatomic potentials (MLIPs) have made a significant contribution to the recent progress in the fields of computational materials and chemistry due to the MLIPs’ ability of accurately approximating energy landscapes of quantum-mechanical models while being orders of magnitude more computationally efficient. However, the computational cost and number of parameters of many state-of-the-art MLIPs increases exponentially with the number of atomic features. Tensor (non-neural) networks, based on low-rank representations of high-dimensional tensors, have been a way to reduce the number of parameters in approximating multidimensional functions, however, it is often not easy to encode the model symmetries into them. In this work we develop a formalism for rank-efficient equivariant tensor networks (ETNs), i.e. tensor networks that remain invariant under actions of SO(3) upon contraction. All the key algorithms of tensor networks like orthogonalization of cores and DMRG-based algorithms carry over to our equivariant case. Moreover, we show that many elements of modern neural network architectures like message passing, pulling, or attention mechanisms, can in some form be implemented into the ETNs. Based on ETNs, we develop a new class of polynomial-based MLIPs that demonstrate superior performance over existing MLIPs for multicomponent systems.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"4 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142268878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
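The core idea of tensor networks as low-rank parameterizations can be illustrated independently of the SO(3)-equivariant construction developed in the paper. The sketch below evaluates a tensor-train representation of a high-dimensional coefficient tensor and compares its parameter count with the full tensor; the dimensions, ranks, and random cores are arbitrary assumptions for illustration.

```python
# Minimal sketch of the low-rank tensor-network idea behind ETNs (equivariance
# omitted): represent a d-dimensional coefficient tensor in tensor-train (TT)
# form and contract it against per-dimension feature vectors.
import numpy as np

rng = np.random.default_rng(0)

d, n, r = 6, 8, 4                       # dimensions, basis size per dim, TT rank
# TT cores G_k with shapes (r_{k-1}, n, r_k); boundary ranks are 1.
ranks = [1] + [r] * (d - 1) + [1]
cores = [rng.normal(size=(ranks[k], n, ranks[k + 1])) for k in range(d)]

def tt_eval(cores, features):
    """Contract TT cores with one feature vector per dimension -> scalar."""
    v = np.ones((1,))
    for G, f in zip(cores, features):
        # contract the physical index with the feature vector, then the rank index
        v = v @ np.einsum("inj,n->ij", G, f)
    return float(v.squeeze())

features = [rng.normal(size=n) for _ in range(d)]
print("TT value:", tt_eval(cores, features))

full_params = n ** d
tt_params = sum(G.size for G in cores)
print(f"full tensor parameters: {full_params}, TT parameters: {tt_params}")
```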
Optimizing ZX-diagrams with deep reinforcement learning
IF 6.8, Zone 2, Physics and Astronomy
Machine Learning Science and Technology Pub Date: 2024-09-18 DOI: 10.1088/2632-2153/ad76f7
Maximilian Nägele and Florian Marquardt
{"title":"Optimizing ZX-diagrams with deep reinforcement learning","authors":"Maximilian Nägele and Florian Marquardt","doi":"10.1088/2632-2153/ad76f7","DOIUrl":"https://doi.org/10.1088/2632-2153/ad76f7","url":null,"abstract":"ZX-diagrams are a powerful graphical language for the description of quantum processes with applications in fundamental quantum mechanics, quantum circuit optimization, tensor network simulation, and many more. The utility of ZX-diagrams relies on a set of local transformation rules that can be applied to them without changing the underlying quantum process they describe. These rules can be exploited to optimize the structure of ZX-diagrams for a range of applications. However, finding an optimal sequence of transformation rules is generally an open problem. In this work, we bring together ZX-diagrams with reinforcement learning, a machine learning technique designed to discover an optimal sequence of actions in a decision-making problem and show that a trained reinforcement learning agent can significantly outperform other optimization techniques like a greedy strategy, simulated annealing, and state-of-the-art hand-crafted algorithms. The use of graph neural networks to encode the policy of the agent enables generalization to diagrams much bigger than seen during the training phase.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"43 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142255165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
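The abstract mentions a greedy strategy as one of the baselines the reinforcement-learning agent is compared against. The toy sketch below shows what a greedy local-rewrite loop could look like on a ZX-like graph, using only a simplified spider-fusion rule (adjacent spiders of the same color merge and their phases add); the graph encoding and the single rule are illustrative assumptions, not the paper's environment or rule set.

```python
# Toy greedy baseline for local rewrites on a ZX-like graph (illustrative only):
# repeatedly fuse adjacent spiders of the same color, adding their phases.
import networkx as nx

def fuse_once(g):
    """Apply one spider fusion if possible; return (graph, True) if a rewrite was made."""
    for u, v in list(g.edges()):
        if g.nodes[u]["color"] == g.nodes[v]["color"]:
            phase = g.nodes[u]["phase"] + g.nodes[v]["phase"]
            g = nx.contracted_nodes(g, u, v, self_loops=False)  # merge v into u
            g.nodes[u]["phase"] = phase
            return g, True
    return g, False

def greedy_simplify(g):
    """Greedy strategy: keep rewriting until no rule applies."""
    changed = True
    while changed:
        g, changed = fuse_once(g)
    return g

# A small chain of spiders: green (Z), green, red (X), red.
g = nx.Graph()
colors = ["Z", "Z", "X", "X"]
for i, c in enumerate(colors):
    g.add_node(i, color=c, phase=0.25 * i)  # phases in units of pi
nx.add_path(g, range(4))

g = greedy_simplify(g)
print("nodes after simplification:", g.number_of_nodes())   # 2
print({n: (g.nodes[n]["color"], g.nodes[n]["phase"]) for n in g.nodes})
```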
DiffLense: a conditional diffusion model for super-resolution of gravitational lensing data
IF 6.8, Zone 2, Physics and Astronomy
Machine Learning Science and Technology Pub Date: 2024-09-18 DOI: 10.1088/2632-2153/ad76f8
Pranath Reddy, Michael W Toomey, Hanna Parul and Sergei Gleyzer
{"title":"DiffLense: a conditional diffusion model for super-resolution of gravitational lensing data","authors":"Pranath Reddy, Michael W Toomey, Hanna Parul and Sergei Gleyzer","doi":"10.1088/2632-2153/ad76f8","DOIUrl":"https://doi.org/10.1088/2632-2153/ad76f8","url":null,"abstract":"Gravitational lensing data is frequently collected at low resolution due to instrumental limitations and observing conditions. Machine learning-based super-resolution techniques offer a method to enhance the resolution of these images, enabling more precise measurements of lensing effects and a better understanding of the matter distribution in the lensing system. This enhancement can significantly improve our knowledge of the distribution of mass within the lensing galaxy and its environment, as well as the properties of the background source being lensed. Traditional super-resolution techniques typically learn a mapping function from lower-resolution to higher-resolution samples. However, these methods are often constrained by their dependence on optimizing a fixed distance function, which can result in the loss of intricate details crucial for astrophysical analysis. In this work, we introduce DiffLense, a novel super-resolution pipeline based on a conditional diffusion model specifically designed to enhance the resolution of gravitational lensing images obtained from the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP). Our approach adopts a generative model, leveraging the detailed structural information present in Hubble space telescope (HST) counterparts. The diffusion model, trained to generate HST data, is conditioned on HSC data pre-processed with denoising techniques and thresholding to significantly reduce noise and background interference. This process leads to a more distinct and less overlapping conditional distribution during the model’s training phase. We demonstrate that DiffLense outperforms existing state-of-the-art single-image super-resolution techniques, particularly in retaining the fine details necessary for astrophysical analyses.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"70 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142255166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
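As context for the conditional diffusion model described above, here is a generic sketch of a conditional denoising-diffusion training step: the network predicts the noise added to a high-resolution image while being conditioned on an upsampled low-resolution counterpart concatenated as an extra channel. The tiny network, noise schedule, and conditioning-by-concatenation are assumptions for illustration, and the timestep embedding a real model would use is omitted.

```python
# Generic conditional-diffusion training step (illustrative; not DiffLense itself):
# predict the noise added to a high-resolution image, conditioned on an upsampled
# low-resolution counterpart concatenated as an extra input channel.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)     # cumulative noise schedule

class NoisePredictor(nn.Module):
    """Tiny conv net; a real model would also embed the timestep t."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x_noisy, cond):
        return self.net(torch.cat([x_noisy, cond], dim=1))

model = NoisePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

hr = torch.randn(4, 1, 64, 64)                     # stand-in for HST cutouts
lr_img = torch.randn(4, 1, 16, 16)                 # stand-in for HSC cutouts
cond = F.interpolate(lr_img, size=(64, 64), mode="bilinear", align_corners=False)

t = torch.randint(0, T, (4,))
a_bar = alphas_bar[t].view(-1, 1, 1, 1)
noise = torch.randn_like(hr)
x_t = a_bar.sqrt() * hr + (1 - a_bar).sqrt() * noise   # forward (noising) process

loss = F.mse_loss(model(x_t, cond), noise)             # epsilon-prediction objective
loss.backward()
opt.step()
print(float(loss))
```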
Masked particle modeling on sets: towards self-supervised high energy physics foundation models
IF 6.8, Zone 2, Physics and Astronomy
Machine Learning Science and Technology Pub Date: 2024-09-16 DOI: 10.1088/2632-2153/ad64a8
Tobias Golling, Lukas Heinrich, Michael Kagan, Samuel Klein, Matthew Leigh, Margarita Osadchy and John Andrew Raine
{"title":"Masked particle modeling on sets: towards self-supervised high energy physics foundation models","authors":"Tobias Golling, Lukas Heinrich, Michael Kagan, Samuel Klein, Matthew Leigh, Margarita Osadchy and John Andrew Raine","doi":"10.1088/2632-2153/ad64a8","DOIUrl":"https://doi.org/10.1088/2632-2153/ad64a8","url":null,"abstract":"We propose masked particle modeling (MPM) as a self-supervised method for learning generic, transferable, and reusable representations on unordered sets of inputs for use in high energy physics (HEP) scientific data. This work provides a novel scheme to perform masked modeling based pre-training to learn permutation invariant functions on sets. More generally, this work provides a step towards building large foundation models for HEP that can be generically pre-trained with self-supervised learning and later fine-tuned for a variety of down-stream tasks. In MPM, particles in a set are masked and the training objective is to recover their identity, as defined by a discretized token representation of a pre-trained vector quantized variational autoencoder. We study the efficacy of the method in samples of high energy jets at collider physics experiments, including studies on the impact of discretization, permutation invariance, and ordering. We also study the fine-tuning capability of the model, showing that it can be adapted to tasks such as supervised and weakly supervised jet classification, and that the model can transfer efficiently with small fine-tuning data sets to new classes and new data domains.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"75 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142255167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
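The masked-particle-modeling objective resembles masked language modeling on an unordered set. The sketch below masks a fraction of discretized particle tokens, encodes the set with a transformer (no positional encoding, so the encoder is permutation-equivariant), and trains with cross-entropy on the masked positions only; the vocabulary size, masking rate, and model dimensions are arbitrary assumptions, not the paper's configuration.

```python
# Minimal sketch of masked-token pre-training on an unordered set of particles
# (illustrative; vocabulary size, masking rate, and model size are arbitrary here).
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, mask_id = 512, 512          # token ids 0..511, extra id for [MASK]
d_model, n_particles, batch = 64, 16, 8

embed = nn.Embedding(vocab_size + 1, d_model)
# no positional encoding: the encoder treats the particles as an unordered set
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(d_model, vocab_size)   # predict the original token id

tokens = torch.randint(0, vocab_size, (batch, n_particles))  # stand-in VQ-VAE token ids
mask = torch.rand(batch, n_particles) < 0.3                  # mask ~30% of the set
inputs = tokens.masked_fill(mask, mask_id)

logits = head(encoder(embed(inputs)))                        # (batch, set, vocab)
loss = F.cross_entropy(logits[mask], tokens[mask])           # only masked positions
print(float(loss))
```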
Transforming the bootstrap: using transformers to compute scattering amplitudes in planar N =...
IF 6.8, Zone 2, Physics and Astronomy
Machine Learning Science and Technology Pub Date: 2024-09-15 DOI: 10.1088/2632-2153/ad743e
Tianji Cai, Garrett W Merz, François Charton, Niklas Nolte, Matthias Wilhelm, Kyle Cranmer and Lance J Dixon
{"title":"Transforming the bootstrap: using transformers to compute scattering amplitudes in planar N =...","authors":"Tianji Cai, Garrett W Merz, François Charton, Niklas Nolte, Matthias Wilhelm, Kyle Cranmer and Lance J Dixon","doi":"10.1088/2632-2153/ad743e","DOIUrl":"https://doi.org/10.1088/2632-2153/ad743e","url":null,"abstract":"We pursue the use of deep learning methods to improve state-of-the-art computations in theoretical high-energy physics. Planar Super Yang–Mills theory is a close cousin to the theory that describes Higgs boson production at the Large Hadron Collider; its scattering amplitudes are large mathematical expressions containing integer coefficients. In this paper, we apply transformers to predict these coefficients. The problem can be formulated in a language-like representation amenable to standard cross-entropy training objectives. We design two related experiments and show that the model achieves high accuracy ( on both tasks. Our work shows that transformers can be applied successfully to problems in theoretical physics that require exact solutions.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"12 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142255168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
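A small sketch of the "language-like representation" mentioned in the abstract: integer coefficients can be serialized into sign and digit tokens so that a sequence model can be trained with a standard cross-entropy objective. The specific vocabulary and tokenization below are assumptions for illustration and may differ from the paper's scheme.

```python
# Illustrative sketch: serialize signed integer coefficients into digit tokens so
# a seq2seq transformer could be trained with ordinary cross-entropy.
VOCAB = {tok: i for i, tok in enumerate(
    ["<pad>", "<bos>", "<eos>", "+", "-"] + [str(d) for d in range(10)]
)}

def encode_coefficient(c: int) -> list[int]:
    """Integer -> [<bos>, sign, digit, ..., <eos>] token ids."""
    sign = "+" if c >= 0 else "-"
    digits = list(str(abs(c)))
    return [VOCAB["<bos>"], VOCAB[sign]] + [VOCAB[d] for d in digits] + [VOCAB["<eos>"]]

def decode_tokens(ids: list[int]) -> int:
    """Token ids back to the integer they encode."""
    inv = {i: t for t, i in VOCAB.items()}
    toks = [inv[i] for i in ids if inv[i] not in ("<pad>", "<bos>", "<eos>")]
    sign = -1 if toks[0] == "-" else 1
    return sign * int("".join(toks[1:]))

for c in (0, 42, -176076):
    ids = encode_coefficient(c)
    assert decode_tokens(ids) == c
    print(c, "->", ids)
```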
Learning on the correctness class for domain inverse problems of gravimetry
IF 6.8, Zone 2, Physics and Astronomy
Machine Learning Science and Technology Pub Date: 2024-09-11 DOI: 10.1088/2632-2153/ad72cc
Yihang Chen and Wenbin Li
{"title":"Learning on the correctness class for domain inverse problems of gravimetry","authors":"Yihang Chen and Wenbin Li","doi":"10.1088/2632-2153/ad72cc","DOIUrl":"https://doi.org/10.1088/2632-2153/ad72cc","url":null,"abstract":"We consider end-to-end learning approaches for inverse problems of gravimetry. Due to ill-posedness of the inverse gravimetry, the reliability of learning approaches is questionable. To deal with this problem, we propose the strategy of learning on the correctness class. The well-posedness theorems are employed when designing the neural-network architecture and constructing the training set. Given the density-contrast function as a priori information, the domain of mass can be uniquely determined under certain constrains, and the domain inverse problem is a correctness class of the inverse gravimetry. Under this correctness class, we design the neural network for learning by mimicking the level-set formulation for the inverse gravimetry. Numerical examples illustrate that the method is able to recover mass models with non-constant density contrast.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"5 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142197718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
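To make the forward side of the inverse problem concrete, the sketch below computes a vertical gravity anomaly over a buried body whose domain is defined through a level-set indicator on a density grid, with the density contrast treated as known a priori, as in the abstract. The grid, the circular level-set function, and the crude point-mass discretization are illustrative assumptions, not the paper's setup.

```python
# Illustrative forward model for gravimetry (not the paper's code): the vertical
# gravity anomaly at surface stations due to a buried body whose domain is given
# by a level-set style indicator on a density grid.
import numpy as np

G = 6.674e-11                                        # gravitational constant

# Subsurface cross-section grid (x horizontal, z depth, in metres).
xs = np.linspace(-500.0, 500.0, 101)
zs = np.linspace(10.0, 510.0, 51)
dx, dz = xs[1] - xs[0], zs[1] - zs[0]
X, Z = np.meshgrid(xs, zs, indexing="ij")

phi = 150.0**2 - ((X - 50.0)**2 + (Z - 250.0)**2)    # level-set function
density_contrast = 500.0                              # kg/m^3, assumed known a priori
rho = np.where(phi > 0.0, density_contrast, 0.0)      # mass domain = {phi > 0}

def gz_at(station_x, rho):
    """Vertical gravity at a surface station via a crude point-mass sum over cells."""
    r2 = (X - station_x)**2 + Z**2
    return np.sum(G * rho * dx * dz * Z / r2**1.5)

stations = np.linspace(-400.0, 400.0, 9)
anomaly = np.array([gz_at(sx, rho) for sx in stations])
print(np.round(anomaly / anomaly.max(), 3))           # peaks roughly above the body
```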
A combined modeling method for complex multi-fidelity data fusion
IF 6.8, Zone 2, Physics and Astronomy
Machine Learning Science and Technology Pub Date: 2024-09-10 DOI: 10.1088/2632-2153/ad718f
Lei Tang, Feng Liu, Anping Wu, Yubo Li, Wanqiu Jiang, Qingfeng Wang and Jun Huang
{"title":"A combined modeling method for complex multi-fidelity data fusion","authors":"Lei Tang, Feng Liu, Anping Wu, Yubo Li, Wanqiu Jiang, Qingfeng Wang and Jun Huang","doi":"10.1088/2632-2153/ad718f","DOIUrl":"https://doi.org/10.1088/2632-2153/ad718f","url":null,"abstract":"Currently, mainstream methods for multi-fidelity data fusion have achieved great success in many fields, but they generally suffer from poor scalability. Therefore, this paper proposes a combination modeling method for complex multi-fidelity data fusion, devoted to solving the modeling problems with three types of multi-fidelity data fusion, and explores a general solution for any n types of multi-fidelity data fusion. Different from the traditional direct modeling method—Multi-Fidelity Deep Neural Network (MFDNN)—the method is an indirect modeling method. The experimental results on three representative benchmark functions and the prediction tasks of SG6043 airfoil aerodynamic performance show that combination modeling has the following advantages: (1) It can quickly establish the mapping relationship between high, medium, and low fidelity data. (2) It can effectively solve the data imbalance problem in multi-fidelity modeling. (3) Compared with MFDNN, it has stronger noise resistance and higher prediction accuracy. Additionally, this paper discusses the scalability problem of the method when n = 4 and n = 5, providing a reference for further research on the combined modeling method.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"56 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142197714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
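One common way to chain three fidelities is to fit the low-fidelity trend first and then learn corrections toward the mid- and high-fidelity data; the sketch below shows that generic correction chain on Forrester-style test functions. It is not the paper's combined modeling method, and the test functions, sample sizes, and polynomial surrogates are assumptions for illustration.

```python
# Generic three-fidelity "correction chain" sketch (one common way to combine
# fidelities; the paper's combined modeling method is more elaborate than this).
import numpy as np

rng = np.random.default_rng(1)

def f_high(x): return (6 * x - 2) ** 2 * np.sin(12 * x - 4)   # scarce, accurate
def f_mid(x):  return 0.8 * f_high(x) + 4 * (x - 0.5)         # moderate
def f_low(x):  return 0.5 * f_high(x) + 10 * (x - 0.5) - 5.0  # plentiful, biased

x_lo  = np.linspace(0, 1, 60)              # lots of cheap low-fidelity samples
x_mid = rng.uniform(0, 1, 15)
x_hi  = rng.uniform(0, 1, 6)               # only a handful of expensive samples

# Step 1: fit the low-fidelity trend directly.
lo_model = np.polynomial.Polynomial.fit(x_lo, f_low(x_lo), deg=8)

# Step 2: fit a correction from low-fidelity predictions to mid-fidelity data.
mid_corr = np.polynomial.Polynomial.fit(x_mid, f_mid(x_mid) - lo_model(x_mid), deg=3)
mid_model = lambda x: lo_model(x) + mid_corr(x)

# Step 3: fit a correction from the mid-level surrogate to high-fidelity data.
hi_corr = np.polynomial.Polynomial.fit(x_hi, f_high(x_hi) - mid_model(x_hi), deg=2)
hi_model = lambda x: mid_model(x) + hi_corr(x)

x_test = np.linspace(0, 1, 200)
rmse = np.sqrt(np.mean((hi_model(x_test) - f_high(x_test)) ** 2))
print(f"RMSE of the fused surrogate vs. the true high-fidelity function: {rmse:.3f}")
```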
Towards a comprehensive visualisation of structure in large scale data sets
IF 6.8, Zone 2, Physics and Astronomy
Machine Learning Science and Technology Pub Date: 2024-09-09 DOI: 10.1088/2632-2153/ad6fea
Joan Garriga and Frederic Bartumeus
{"title":"Towards a comprehensive visualisation of structure in large scale data sets","authors":"Joan Garriga and Frederic Bartumeus","doi":"10.1088/2632-2153/ad6fea","DOIUrl":"https://doi.org/10.1088/2632-2153/ad6fea","url":null,"abstract":"Dimensionality reduction methods are fundamental to the exploration and visualisation of large data sets. Basic requirements for unsupervised data exploration are flexibility and scalability. However, current methods have computational limitations that restrict our ability to explore data structures to the lower range of scales. We focus on t-SNE and propose a chunk-and-mix protocol that enables the parallel implementation of this algorithm, as well as a self-adaptive parametric scheme that facilitates its parametric configuration. As a proof of concept, we present the pt-SNE algorithm, a parallel version of Barnes-Hat-SNE (an implementation of t-SNE). In pt-SNE, a single free parameter for the size of the neighbourhood, namely the perplexity, modulates the visualisation of the data structure at different scales, from local to global. Thanks to parallelisation, the runtime of the algorithm remains almost independent of the perplexity, which extends the range of scales to be analysed. The pt-SNE converges to a good global embedding comparable to current solutions, although it adds little noise at the local scale. This noise illustrates an unavoidable trade-off between computational speed and accuracy. We expect the same approach to be applicable to faster embedding algorithms than Barnes-Hat-SNE, such as Fast-Fourier Interpolation-based t-SNE or Uniform Manifold Approximation and Projection, thus extending the state of the art and allowing a more comprehensive visualisation and analysis of data structures.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"30 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142197711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
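The chunk-and-mix protocol rests on the observation that per-chunk embeddings are independent jobs. The sketch below shuffles the data, splits it into chunks, runs one t-SNE per chunk in parallel processes, and restacks the results; the real pt-SNE additionally alternates mixing rounds and aligns the partial embeddings, which is omitted here, and the chunk count and perplexity are arbitrary choices.

```python
# Sketch of the chunk-and-mix idea: shuffle, split into chunks, embed each chunk
# independently (embarrassingly parallel), then pool the chunk embeddings.
# The alignment/mixing rounds of the real pt-SNE are omitted.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

def embed_chunk(args):
    chunk, perplexity = args
    # one t-SNE run per chunk; perplexity controls the local/global scale trade-off
    return TSNE(n_components=2, perplexity=perplexity, init="pca",
                random_state=0).fit_transform(chunk)

if __name__ == "__main__":
    X = load_digits().data                       # (1797, 64)
    rng = np.random.default_rng(0)
    order = rng.permutation(len(X))              # "mix" before chunking
    chunks = np.array_split(X[order], 4)

    with ProcessPoolExecutor(max_workers=4) as pool:
        parts = list(pool.map(embed_chunk, [(c, 30.0) for c in chunks]))

    embedding = np.vstack(parts)[np.argsort(order)]   # restore original row order
    print(embedding.shape)                            # (1797, 2)
```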
Designing quantum multi-category classifier from the perspective of brain processing information
IF 6.8, Zone 2, Physics and Astronomy
Machine Learning Science and Technology Pub Date: 2024-09-06 DOI: 10.1088/2632-2153/ad7570
Xiaodong Ding, Jinchen Xu, Zhihui Song, Yifan Hou, Zheng Shan
{"title":"Designing quantum multi-category classifier from the perspective of brain processing information","authors":"Xiaodong Ding, Jinchen Xu, Zhihui Song, Yifan Hou, Zheng Shan","doi":"10.1088/2632-2153/ad7570","DOIUrl":"https://doi.org/10.1088/2632-2153/ad7570","url":null,"abstract":"In the field of machine learning, the multi-category classification problem plays a crucial role. Solving the problem has a profound impact on driving the innovation and development of machine learning techniques and addressing complex problems in the real world. In recent years, researchers have begun to focus on utilizing quantum computing to solve the multi-category classification problem. Some studies have shown that the process of processing information in the brain may be related to quantum phenomena, with different brain regions having neurons with different structures. Inspired by this, we design a quantum multi-category classifier model from this perspective for the first time. The model employs a heterogeneous population of quantum neural networks (QNNs) to simulate the cooperative work of multiple different brain regions. When processing information, these heterogeneous clusters of QNNs allow for simultaneous execution on different quantum computers, thus simulating the brain’s ability to utilize multiple brain regions working in concert to maintain the robustness of the model. By setting the number of heterogeneous QNN clusters and parameterizing the number of stacks of unit layers in the quantum circuit, the model demonstrates excellent scalability in dealing with different types of data and different numbers of classes in the classification problem. Based on the attention mechanism of the brain, we integrate the processing results of heterogeneous QNN clusters to achieve high accuracy in classification. Finally, we conducted classification simulation experiments on different datasets. The results show that our method exhibits strong robustness and scalability. Among them, on different subsets of the MNIST dataset, its classification accuracy improves by up to about 5% compared to other quantum multiclassification algorithms. This result becomes the state-of-the-art simulation result for quantum classification models and exceeds the performance of classical classifiers with a considerable number of trainable parameters on some subsets of the MNIST dataset.","PeriodicalId":33757,"journal":{"name":"Machine Learning Science and Technology","volume":"27 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142197712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
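A rough sketch of the ensemble idea in the abstract: several variational quantum circuits of different depths (a stand-in for heterogeneous QNN clusters) each produce class scores, which are fused with softmax attention-style weights. It requires PennyLane, and the circuit structure, depths, and weighting scheme are illustrative assumptions rather than the authors' design.

```python
# Illustrative ensemble of "heterogeneous" variational quantum circuits combined
# with a softmax attention-style weighting (not the authors' architecture).
import numpy as np
import pennylane as qml

n_wires, n_classes = 4, 3
dev = qml.device("default.qubit", wires=n_wires)

def make_qnn(n_layers):
    """Build one member of the heterogeneous cluster with its own depth."""
    @qml.qnode(dev)
    def circuit(x, weights):
        qml.AngleEmbedding(x, wires=range(n_wires))           # encode the features
        for layer in range(n_layers):
            for w in range(n_wires):
                qml.RY(weights[layer, w], wires=w)
            for w in range(n_wires):                           # ring of entanglers
                qml.CNOT(wires=[w, (w + 1) % n_wires])
        # one expectation value per class, interpreted as class scores
        return [qml.expval(qml.PauliZ(i)) for i in range(n_classes)]
    return circuit

depths = [1, 2, 3]                                             # heterogeneity: varied depth
rng = np.random.default_rng(0)
qnns = [make_qnn(d) for d in depths]
params = [rng.normal(scale=0.1, size=(d, n_wires)) for d in depths]
attn = rng.normal(size=len(depths))                            # attention logits (trainable)

def predict(x):
    scores = np.stack([np.array(q(x, w), dtype=float) for q, w in zip(qnns, params)])
    alpha = np.exp(attn) / np.exp(attn).sum()                  # softmax over QNN members
    fused = alpha @ scores                                     # weighted class scores
    return int(np.argmax(fused))

x = rng.uniform(0, np.pi, size=n_wires)                        # a single toy sample
print("predicted class:", predict(x))
```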