Optical Memory and Neural Networks — Latest Articles

Magnetic Field-Controlled Phase Transitions in Antiferromagnetic Structures
IF 1
Optical Memory and Neural Networks Pub Date : 2025-02-03 DOI: 10.3103/S1060992X24700486
V. I. Egorov, B. V. Kryzhanovsky
{"title":"Magnetic Field-Controlled Phase Transitions in Antiferromagnetic Structures","authors":"V. I. Egorov,&nbsp;B. V. Kryzhanovsky","doi":"10.3103/S1060992X24700486","DOIUrl":"10.3103/S1060992X24700486","url":null,"abstract":"<p>The properties of an antiferromagnetic substance are investigated in the presence of a magnetic field. Analytical expressions are obtained in terms of the mean-field approximation. An external magnetic field is shown to be non-destructive to the phase transition in the antiferromagnetic substance. It only changes critical exponents and shifts the critical point. This allows us to control the critical properties of the system. The number of critical points can vary from one (the second-order phase transition) to four (two first-order phase transitions and two second-order phase transitions). It is shown that variations in the magnetic field magnitude can raise the critical temperature by three-odd times in materials with strong antiferromagnetic interactions. A Monte Carlo simulation carried out for a three-dimensional lattice with a finite interaction radius substantiates that the action of an external field brings about a shift in the temperature of the transition. The simulation results agree well with the analytical expressions of the mean field theory.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 4","pages":"401 - 410"},"PeriodicalIF":1.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143108125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
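A minimal sketch of the generic two-sublattice mean-field approach underlying the entry above (not the paper's specific expressions): the sublattice magnetizations satisfy the coupled self-consistency equations m_A = tanh[β(h − zJ·m_B)] and m_B = tanh[β(h − zJ·m_A)], which can be solved by damped fixed-point iteration. The coupling J, coordination number z, and field values below are arbitrary illustrative choices.

```python
import numpy as np

def sublattice_magnetizations(T, h, J=1.0, z=6, tol=1e-10, max_iter=20000):
    """Damped fixed-point iteration for the standard two-sublattice mean-field
    equations of an antiferromagnet in a uniform field h (illustrative only)."""
    beta = 1.0 / T
    mA, mB = 0.9, -0.9                     # staggered initial guess
    for _ in range(max_iter):
        mA_new = np.tanh(beta * (h - z * J * mB))
        mB_new = np.tanh(beta * (h - z * J * mA))
        if abs(mA_new - mA) + abs(mB_new - mB) < tol:
            break
        mA = 0.5 * mA + 0.5 * mA_new       # damping keeps the iteration stable
        mB = 0.5 * mB + 0.5 * mB_new
    return mA, mB

# Staggered order parameter (mA - mB)/2 versus temperature for two field values
for h in (0.0, 3.0):
    for T in (2.0, 6.0, 8.0):
        mA, mB = sublattice_magnetizations(T, h)
        print(f"h={h}, T={T}: staggered order parameter = {(mA - mB) / 2:.3f}")
```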
Enhanced Personality Prediction Using Knowledge Distillation with BERT: A Focus on MBTI
IF 1
Optical Memory and Neural Networks Pub Date : 2025-02-03 DOI: 10.3103/S1060992X2470084X
Suman A. Patil, Shivleela Patil, Vijayalaxmi V. Tadkal
{"title":"Enhanced Personality Prediction Using Knowledge Distillation with BERT: A Focus on MBTI","authors":"Suman A. Patil,&nbsp;Shivleela Patil,&nbsp;Vijayalaxmi V. Tadkal","doi":"10.3103/S1060992X2470084X","DOIUrl":"10.3103/S1060992X2470084X","url":null,"abstract":"<p>A person’s personality comprises a range of behaviours, attitudes, and emotional patterns that shift throughout time due to ecological and biological influences. Personality prediction from the MBTI dataset poses computational efficiency, memory utilisation, and class imbalance challenges. This study proposes a novel approach leveraging Knowledge Distillation-based BERT to address these challenges. The process involves three stages: pre-processing, feature extraction, and classification. Initially, data is cleaned by removing irrelevant characters and URLs, followed by tokenisation and conversion to lowercase for consistency. The padding ensures uniform input size for DistilBERT, with attention masks aiding focus on relevant tokens. DistilBERT extracts contextual embeddings, enhanced by segment and positional embeddings, capturing semantic meaning via multi-head self-attention. A fully connected layer with GELU activation and batch normalisation mitigates overfitting, followed by a classification layer with Sparsemax activation, addressing the class imbalance. Fine-tuning pre-trained DistilBERT maximises detection accuracy while excluding irrelevant learning objectives. Dynamic masking during inference replaces static masking, and the Radam optimiser optimises hyperparameters for improved convergence. Our approach offers a robust solution that achieves 93% accuracy and 95% F1-score for accurate personality prediction while mitigating computational complexities and class imbalance issues.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 4","pages":"455 - 465"},"PeriodicalIF":1.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143108211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
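A minimal sketch of the kind of DistilBERT-plus-dense-head classifier described in the entry above, using the Hugging Face transformers API; the hidden size, the 16-class MBTI output, and the example sentence are assumptions for illustration, and the paper's Sparsemax layer, dynamic masking, and RAdam optimiser are not reproduced here.

```python
import torch
import torch.nn as nn
from transformers import DistilBertModel, DistilBertTokenizerFast

class MBTIClassifier(nn.Module):
    """Illustrative DistilBERT encoder + small dense head for 16 MBTI types."""
    def __init__(self, n_classes=16, hidden=256):
        super().__init__()
        self.encoder = DistilBertModel.from_pretrained("distilbert-base-uncased")
        self.head = nn.Sequential(
            nn.Linear(self.encoder.config.dim, hidden),
            nn.GELU(),
            nn.BatchNorm1d(hidden),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]        # embedding at the [CLS] position
        return self.head(cls)                    # raw logits; softmax/Sparsemax applied downstream

tok = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = MBTIClassifier().eval()
batch = tok(["I love quiet evenings with a good book."],
            padding=True, truncation=True, max_length=128, return_tensors="pt")
with torch.no_grad():
    logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)   # torch.Size([1, 16])
```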
Optimized Jordan Neural Network and Bandwidth Aware Routing Protocol for Congestion Prediction and Avoidance in IOT for Effective Communication
IF 1
Optical Memory and Neural Networks Pub Date : 2025-02-03 DOI: 10.3103/S1060992X24700838
Mallavalli Raghavendra Suma, Bhosale Rajkumar Shankarrao, Adapa Gopi, Nilesh U. Sambhe, Laxmikant Umate
{"title":"Optimized Jordan Neural Network and Bandwidth Aware Routing Protocol for Congestion Prediction and Avoidance in IOT for Effective Communication","authors":"Mallavalli Raghavendra Suma,&nbsp;Bhosale Rajkumar Shankarrao,&nbsp;Adapa Gopi,&nbsp;Nilesh U. Sambhe,&nbsp;Laxmikant Umate","doi":"10.3103/S1060992X24700838","DOIUrl":"10.3103/S1060992X24700838","url":null,"abstract":"<p>Development of 5G internet in today’s trend leads to the evaluation of many IOT devices. The information is transmitted by a network in IOT to store the data in the cloud. Due to the wide usage of IOT devices by people, congestion may occurs in IOT networks, which delays the information or sometimes resulting in data loss despite the implementation of congestion control methods. So many machine learning and congestion control protocols are used to predict and avoid congestion in IOT network. But these existing systems consist of drawbacks such as accuracy drop for prediction, packet loss and time delay. Hence, the Bandwidth Aware Routing Strategy (BARS) protocol using Jordan Neural Network (JNN) was developed to predict and avoid congestion in the network. Initially, the IOT nodes are deployed and the data are collected and preprocessed using a sigmoidal function and Extreme Learning machine to improve the quality of the original data. Then extract the features from the pre-processed data using Locality Preserving Projection (LPP). After that, Jordan Neural Network is used for congestion prediction and pine cone optimization is used to tune the hyper parameters such as learning rate and batch size which is utilized to improve the classifier performance. Then, BARS protocol is used to avoid the congestion present in the IOT network. According to the experimental approach, the proposed techniques achieves 95.45% of Accuracy, 95.71% of Precision, 95.39% of F1-Scorce and 95.02 of specificity. Thus, the congestion and avoidance of Information in the IOT network is processed in high efficiency by using this proposed approach.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 4","pages":"429 - 446"},"PeriodicalIF":1.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143108221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
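A minimal sketch of the Jordan recurrence referenced in the entry above: unlike an Elman network, the context units hold the previous output rather than the previous hidden state. Layer sizes, random weights, and the "traffic" input are illustrative, not the paper's tuned configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative dimensions: n_in features per time step, n_hid hidden units, 1 output
n_in, n_hid, n_out = 4, 8, 1
W_in  = rng.normal(scale=0.3, size=(n_hid, n_in))
W_ctx = rng.normal(scale=0.3, size=(n_hid, n_out))   # feedback weights from previous output
W_out = rng.normal(scale=0.3, size=(n_out, n_hid))

def jordan_forward(sequence):
    """Run a Jordan recurrence over a (T, n_in) sequence; the context unit
    holds the previous output (e.g. a congestion score) at each step."""
    context = np.zeros(n_out)
    outputs = []
    for x_t in sequence:
        h = np.tanh(W_in @ x_t + W_ctx @ context)
        y = sigmoid(W_out @ h)
        outputs.append(y)
        context = y          # Jordan feedback: output becomes next step's context
    return np.array(outputs)

traffic = rng.random((10, n_in))          # 10 time steps of illustrative node metrics
print(jordan_forward(traffic).ravel())    # per-step congestion scores in (0, 1)
```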
Sign Language Video Generation from Text Using Generative Adversarial Networks
IF 1
Optical Memory and Neural Networks Pub Date : 2025-02-03 DOI: 10.3103/S1060992X24700851
R. Sreemathy, Param Chordiya, Soumya Khurana, Mousami Turuk
{"title":"Sign Language Video Generation from Text Using Generative Adversarial Networks","authors":"R. Sreemathy,&nbsp;Param Chordiya,&nbsp;Soumya Khurana,&nbsp;Mousami Turuk","doi":"10.3103/S1060992X24700851","DOIUrl":"10.3103/S1060992X24700851","url":null,"abstract":"<p>This work presents a technique developed by utilizing Generative Adversarial Networks (GANs) to generate Sign Language videos. Sign Language is the main mode of communication for people in the hearing impaired community. The process of teaching sign language is difficult as there are not a lot of tools available for this purpose. Generative artificial intelligence can be very helpful for this task as it is able to learn from the limited data and is able to generate various images and videos. In this work, Conditional GANs (cGANs) were employed to generate videos for Indian Sign Language (ISL) based on a text input. It is found that the results obtained from cGANs exhibit superior quality and control based on the performance metrics such as SSIM, FID and MSE values. The effectiveness of the cGANs in generating accurate and visually appealing sign language videos highlights their potential for teaching sign language and improving sign language communication systems.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 4","pages":"466 - 476"},"PeriodicalIF":1.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143108289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
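A minimal sketch of a conditional GAN generator of the kind the entry above builds on: the generator receives a noise vector concatenated with an embedded condition (here a hypothetical sign/gloss ID) so that the output frame matches the requested sign. The layer sizes and the frame-level (rather than video-level) setup are simplifications, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Label-conditioned generator producing a 64x64 grayscale frame (illustrative)."""
    def __init__(self, n_labels=26, z_dim=100, embed_dim=32):
        super().__init__()
        self.embed = nn.Embedding(n_labels, embed_dim)   # condition: sign/gloss ID
        self.net = nn.Sequential(
            nn.Linear(z_dim + embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64), nn.Tanh(),
        )

    def forward(self, z, labels):
        x = torch.cat([z, self.embed(labels)], dim=1)    # concatenate noise and condition
        return self.net(x).view(-1, 1, 64, 64)

gen = CondGenerator()
z = torch.randn(4, 100)
labels = torch.tensor([0, 5, 12, 25])                    # four different hypothetical sign IDs
frames = gen(z, labels)
print(frames.shape)  # torch.Size([4, 1, 64, 64])
```

The discriminator receives the same label embedding alongside the real or generated frame, which is what gives the cGAN its control over which sign is produced.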
Advanced Attention-Based Pre-Trained Transfer Learning Model for Accurate Brain Tumor Detection and Classification from MRI Images
IF 1
Optical Memory and Neural Networks Pub Date : 2025-02-03 DOI: 10.3103/S1060992X24700863
A. Priya, V. Vasudevan
{"title":"Advanced Attention-Based Pre-Trained Transfer Learning Model for Accurate Brain Tumor Detection and Classification from MRI Images","authors":"A. Priya,&nbsp;V. Vasudevan","doi":"10.3103/S1060992X24700863","DOIUrl":"10.3103/S1060992X24700863","url":null,"abstract":"<p>Brain tumor identification using MRI images involves the detailed examination of brain tissues to detect and characterize tumors. Conventional ML and DL algorithms sometimes encounter difficulties due to a lack of labelled data, resulting in inferior performance and poor generalization. To address these issues, this study introduces an Advanced Attention-based Pre-trained Transfer Learning (TL) model that enhances accuracy and resilience in identifying and categorizing brain tumors using MRI images. The methodology starts with pre-processing, which includes image scaling and noise reduction with an adaptive median filter. After pre-processing, the images are fed into a CNN-based framework called Pre-trained Attention-fused Image SpectraNet. This framework comprises of five convolutional layers, after which Rectified Linear Unit (ReLU) activation and pooling layers are added to learn progressively more complex features. A novel self-attention layer is implemented to capture deep features that reveal aberrant tissue patterns, hence increasing model interpretability and accuracy. A globally average pooling layer is employed to reduce computational complexity, and it is accompanied by a fully connected layer with batch normalization to assure stability and convergence during training. The last layer uses softmax to categorize normal, pituitary, glioma, and meningioma. Utilizing the Adam optimizer, the suggested approach enhances performance, yielding excellent metrics such as 98.33% accuracy, 98.35% precision, 98.28% recall, and a 98.31% F1-score. These measures show considerable increases over existing ML and DL methods, demonstrating the system’s ability to improve brain tumor detection accuracy. The advancement of these treatments has significant implications for medical professionals who specialize in the timely identification of brain tumors.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 4","pages":"477 - 491"},"PeriodicalIF":1.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143108288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
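A minimal sketch of a self-attention layer over CNN feature maps, the kind of component the entry above inserts into its backbone; this follows the common SAGAN-style formulation with illustrative channel and spatial sizes, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSelfAttention(nn.Module):
    """Self-attention over the H*W positions of a CNN feature map (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learned weight for the residual branch

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)    # (B, HW, C//8)
        k = self.k(x).flatten(2)                    # (B, C//8, HW)
        attn = F.softmax(q @ k, dim=-1)             # (B, HW, HW) affinities between positions
        v = self.v(x).flatten(2)                    # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                 # residual connection

feat = torch.randn(2, 64, 28, 28)                   # e.g. feature maps from a conv backbone
print(SpatialSelfAttention(64)(feat).shape)         # torch.Size([2, 64, 28, 28])
```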
IFDRF: Advancing Anomaly Detection with a Hybrid Machine Learning Model
IF 1
Optical Memory and Neural Networks Pub Date : 2025-02-03 DOI: 10.3103/S1060992X24700474
Hariharan Ramesh, Faridoddin Shariaty, Sanjiban Sekhar Roy
{"title":"IFDRF: Advancing Anomaly Detection with a Hybrid Machine Learning Model","authors":"Hariharan Ramesh,&nbsp;Faridoddin Shariaty,&nbsp;Sanjiban Sekhar Roy","doi":"10.3103/S1060992X24700474","DOIUrl":"10.3103/S1060992X24700474","url":null,"abstract":"<p>Anomaly detection is the identification of aberrations in the dataset using statistical methods or machine learning algorithms. It is widely performed using unsupervised learning algorithms because labelling the data manually can be expensive. While unsupervised anomaly detection is sufficient for data cleaning, this is not the case in real-world applications, where accuracy is of the utmost importance. For example, it would be unacceptable to misdiagnose someone as not having breast cancer and not provide them with treatment because our model failed to recognize it as an anomaly. In this paper, we propose an optimized model—IFDRF (Isolation Forest, DBSCAN, and Random Forest) that has incorporated feedback (corrections) into the unsupervised detection model. IFDRF is a novel hybrid model combining an unsupervised learning model at the first layer followed by a clustering model at the second layer and a supervised learning model at the end. The proposed model tunes the unsupervised learning model followed by a model fitting with the help of the feedback mechanism. It obviates the need to label the entire dataset and thus increases the scope of anomaly detection applications. We have compared our proposed model to the existing state-of-the-art anomaly detection baseline models to show its efficacy. The proposed model performed significantly (<span>(P{text{-value}} &lt; 2.2 times {{10}^{{ - 16}}})</span>) better than the other algorithms, with an AUC score of 0.875.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 4","pages":"385 - 400"},"PeriodicalIF":1.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143108094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
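A minimal sketch of the layered idea in the entry above (unsupervised detector, then density clustering, then a supervised corrector trained on feedback), assembled from scikit-learn parts. The synthetic data and the "reviewed" subset standing in for analyst feedback are assumptions for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
X_normal, _ = make_blobs(n_samples=500, centers=1, cluster_std=1.0, random_state=42)
X_anom = rng.uniform(-8, 8, size=(25, 2))                 # scattered synthetic anomalies
X = np.vstack([X_normal, X_anom])
y_true = np.r_[np.zeros(500, dtype=int), np.ones(25, dtype=int)]

# Layer 1: unsupervised detector proposes anomaly candidates
iso = IsolationForest(contamination=0.05, random_state=42).fit(X)
iso_flag = (iso.predict(X) == -1).astype(int)

# Layer 2: density clustering; points DBSCAN labels as noise (-1) are also suspects
db_flag = (DBSCAN(eps=1.5, min_samples=5).fit_predict(X) == -1).astype(int)

# Feedback: a small "reviewed" subset gets true labels (simulated analyst corrections)
reviewed = rng.choice(len(X), size=100, replace=False)

# Layer 3: supervised model learns from raw features plus the two unsupervised flags
features = np.column_stack([X, iso_flag, db_flag])
rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(features[reviewed], y_true[reviewed])
pred = rf.predict(features)
print("flagged anomalies:", int(pred.sum()), "of", int(y_true.sum()), "true")
```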
Tracking and Computation of Characteristics of the Movement of People in Groups on Video Using Convolutional Neural Networks
IF 1
Optical Memory and Neural Networks Pub Date : 2025-02-03 DOI: 10.3103/S1060992X24700802
Huafeng Chen, A. Krytsky, Shiping Ye, Rykhard Bohush, S. Ablameyko
{"title":"Tracking and Computation of Characteristics of the Movement of People in Groups on Video Using Convolutional Neural Networks","authors":"Huafeng Chen,&nbsp;A. Krytsky,&nbsp;Shiping Ye,&nbsp;Rykhard Bohush,&nbsp;S. Ablameyko","doi":"10.3103/S1060992X24700802","DOIUrl":"10.3103/S1060992X24700802","url":null,"abstract":"<p>This paper proposes an approach for tracking the behavior of people in a group on video by using convolutional neural networks. At the beginning, definitions of group movement of people are given, and features for accompaniment are defined that can be used to analyze people’s behavior. Next, an algorithm is proposed for calculating the distance between people in video, which includes three stages: detection and tracking of objects, coordinate transformation, calculation of the distance between people and detection of distance violations. The results of experimental studies and comparison with known algorithms are presented, which confirms the effectiveness of the algorithm.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 4","pages":"373 - 384"},"PeriodicalIF":1.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143108095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
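A minimal sketch of the coordinate-transformation and distance stages described above, assuming detected foot points are mapped from image pixels to the ground plane with a homography before distances are measured; the homography matrix and the 1.5 m threshold are made-up illustrative values (a real matrix would come from calibration, e.g. with cv2.findHomography).

```python
import numpy as np

# Hypothetical image-to-ground homography; values are illustrative only.
H = np.array([[0.02,   0.001, -5.0],
              [0.0005, 0.05, -12.0],
              [0.0,    0.001,  1.0]])

def to_ground(points_px):
    """Project pixel coordinates (N, 2) to ground-plane metres via homography H."""
    pts = np.column_stack([points_px, np.ones(len(points_px))])   # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def pairwise_violations(points_px, min_dist=1.5):
    """Return index pairs whose ground-plane distance is below min_dist metres."""
    g = to_ground(points_px)
    d = np.linalg.norm(g[:, None, :] - g[None, :, :], axis=-1)
    i, j = np.triu_indices(len(g), k=1)
    return [(a, b, round(d[a, b], 2)) for a, b in zip(i, j) if d[a, b] < min_dist]

feet = np.array([[320, 700], [350, 705], [900, 690]])   # detected foot points (pixels)
print(pairwise_violations(feet))                         # first two people are too close
```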
Hybrid Network Model for Cardiac Image Segmentation Using MRI Images
IF 1
Optical Memory and Neural Networks Pub Date : 2025-02-03 DOI: 10.3103/S1060992X24700498
A. Rasmi
{"title":"Hybrid Network Model for Cardiac Image Segmentation Using MRI Images","authors":"A. Rasmi","doi":"10.3103/S1060992X24700498","DOIUrl":"10.3103/S1060992X24700498","url":null,"abstract":"<p>Cardiac magnetic resonance imaging (MRI) commonly yields numerous images per scan, and manually delineating structures from these images is a laborious and time-intensive task. The automation of this process is highly desirable as it would enable the generation of crucial clinical measurements like ejection fraction and stroke volume. However, due to variations in scanning settings and patient characteristics, automated segmentation faces several challenges that lead to a high degree of variability in picture statistics and quality. Our study presents a neural network approach that utilizes the UNet and ResNet-50 architectures to efficiently partition the left and right ventricles' endocardial and epicardial boundaries. The Dice metric is used as the loss function in our strategy to maximize the trainable parameters in the network. Additionally, in the neural network’s predicted binary picture, we employed a preprocessing step to save just the segmentation labels' most connected component. Using datasets from the Multi-Vendor &amp; Multi-Disease Cardiac Image Segmentation Challenge, the suggested method was learned. The test set of 160 that had been reserved for testing was used by the challenge organizers to evaluate the approach.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 4","pages":"447 - 454"},"PeriodicalIF":1.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143108112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
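A minimal sketch of a soft (differentiable) Dice loss of the kind the entry above uses to train its UNet/ResNet-50 segmenter; the smoothing constant and tensor shapes are illustrative choices.

```python
import torch

def soft_dice_loss(logits, target, eps=1e-6):
    """1 - soft Dice between sigmoid(logits) and a binary mask, both shaped (B, 1, H, W)."""
    probs = torch.sigmoid(logits)
    dims = (1, 2, 3)
    intersection = (probs * target).sum(dims)
    union = probs.sum(dims) + target.sum(dims)
    dice = (2 * intersection + eps) / (union + eps)
    return (1 - dice).mean()

logits = torch.randn(2, 1, 64, 64, requires_grad=True)   # raw network outputs
mask = (torch.rand(2, 1, 64, 64) > 0.7).float()          # illustrative ground-truth mask
loss = soft_dice_loss(logits, mask)
loss.backward()                                           # gradients flow back to the network
print(float(loss))
```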
Abnormal Sound Event Detection Method Based on Time-Spectrum Information Fusion
IF 1
Optical Memory and Neural Networks Pub Date : 2025-02-03 DOI: 10.3103/S1060992X24700814
Changgeng Yu, Chaowen He, Dashi Lin
{"title":"Abnormal Sound Event Detection Method Based on Time-Spectrum Information Fusion","authors":"Changgeng Yu,&nbsp;Chaowen He,&nbsp;Dashi Lin","doi":"10.3103/S1060992X24700814","DOIUrl":"10.3103/S1060992X24700814","url":null,"abstract":"<p>In this paper, we propose an abnormal sound event detection method based on Time-Frequency Spectral Information Fusion Neural Network (TFSIFNN), addressing the problem that the time structure and frequency information of sound events in real environment are widely varied, resulting in poor performance of abnormal sound event detection. First, we construct a TCN-BiLSTM network based on Temporal Convolutional Networks (TCN) and Bidirectional Long Short-Term Memory (BiLSTM) networks to extract the temporal context information from sound events. Next, we enhance the feature learning capability of the MobileNetV3 network through Efficient Channel Attention (ECA), culminating in the design of an ECA-MobileNetV3 network to capture the spectral information within sound events. Finally, a TFSIFNN model was established based on TCN-BiLSTM and ECA-MobileNetV3 to improve the performance of abnormal sound event detection. The experimental results, conducted on the Urbansound8K and TUT Rare Sound Events 2017 datasets, demonstrate that our TFSIFNN model achieved notable performance improvements. Specifically, it reached an accuracy of 93.93% and an <i>F</i>1<i>-Score</i> of 94.15% on the Urbansound8K dataset. On the TUT Rare Sound Events 2017 dataset, compared to the baseline method, the error rate on the evaluation set decreased by 0.55, and the <i>F</i>1<i>-Score</i> improved by 29.69%.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 4","pages":"411 - 421"},"PeriodicalIF":1.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143108126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
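A minimal sketch of an Efficient Channel Attention (ECA) block as used in the ECA-MobileNetV3 branch above: channel weights come from a light 1-D convolution over the pooled channel descriptor rather than a full squeeze-and-excitation MLP. The kernel size is fixed at 3 here for simplicity, whereas ECA normally derives it adaptively from the channel count.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: per-channel weights from a 1-D conv (illustrative)."""
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):                      # x: (B, C, H, W) spectrogram features
        y = self.pool(x)                       # (B, C, 1, 1) channel descriptor
        y = y.squeeze(-1).transpose(1, 2)      # (B, 1, C) so the 1-D conv runs over channels
        y = self.conv(y).transpose(1, 2).unsqueeze(-1)
        return x * torch.sigmoid(y)            # re-weight channels

feats = torch.randn(2, 40, 32, 32)             # e.g. intermediate MobileNetV3 features
print(ECA()(feats).shape)                      # torch.Size([2, 40, 32, 32])
```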
Computer Analysis of EPR Spectra of 31P Atom Quantum Pair Embedded in Spinless Isotope 28Si Substrate
IF 1
Optical Memory and Neural Networks Pub Date : 2025-02-03 DOI: 10.3103/S1060992X24700826
S. N. Dobryakov, V. V. Privezentsev
{"title":"Computer Analysis of EPR Spectra of 31P Atom Quantum Pair Embedded in Spinless Isotope 28Si Substrate","authors":"S. N. Dobryakov,&nbsp;V. V. Privezentsev","doi":"10.3103/S1060992X24700826","DOIUrl":"10.3103/S1060992X24700826","url":null,"abstract":"<p>In this paper we use EPR spectrums to explore interactions between elements of a quantum pair <sup>31</sup>P–<sup>31</sup>P embedded into <sup>28</sup>Si isotope substrate supposing that several silicon atoms separate phosphorus isotopes. The EPR method allows us to identify at a quantum level mechanisms of interaction between the phosphorus atoms and to analyze the influence of the silicon substrate on the spin-spin interaction between <sup>31</sup>P atoms in the quantum pairs. We also examined possibilities to control these interactions. When simulating, we take into account scalar and vector exchange interactions as well as a dipole interaction between unpaired electrons of <sup>31</sup>P atoms. We suppose that an indirect dipole-dipole interaction is carried out via a system of conjugated 3<i>d</i>-orbits and by means of a polarization of the medium (the <sup>28</sup>Si isotope substrate). The exchange interaction between the spins (the magnetic moments) of electrons of the two phosphorus atoms also is carried out via the polarized medium. We discuss the obtained simulated EPR spectrums.</p>","PeriodicalId":721,"journal":{"name":"Optical Memory and Neural Networks","volume":"33 4","pages":"422 - 428"},"PeriodicalIF":1.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143108220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
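A minimal sketch, loosely related to the entry above, of diagonalizing a two-electron-spin Hamiltonian with Zeeman, isotropic exchange, and a simplified axial dipolar term; all coupling values are arbitrary illustrative numbers, and the hyperfine coupling to the 31P nuclei that shapes the real EPR spectra is omitted for brevity.

```python
import numpy as np

# Spin-1/2 operators (Pauli matrices / 2), in units of hbar
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

S1 = [np.kron(s, I2) for s in (sx, sy, sz)]   # spin operators of electron 1
S2 = [np.kron(I2, s) for s in (sx, sy, sz)]   # spin operators of electron 2

# Illustrative parameters in arbitrary frequency units
omega = 1.0   # electron Zeeman frequency in the applied field
J = 0.12      # isotropic exchange coupling
D = 0.03      # simplified axial dipolar coupling

H = (omega * (S1[2] + S2[2])                                            # Zeeman term
     + J * sum(S1[i] @ S2[i] for i in range(3))                         # exchange S1.S2
     + D * (S1[2] @ S2[2] - sum(S1[i] @ S2[i] for i in range(3)) / 3))  # axial dipolar term

levels = np.linalg.eigvalsh(H)
print("energy levels:", np.round(levels.real, 4))
# EPR transition frequencies would be read off from differences between these levels.
```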