2021 International Joint Conference on Neural Networks (IJCNN): Latest Publications

Multimodal Traffic Travel Time Prediction
2021 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2021-07-18 DOI: 10.1109/IJCNN52387.2021.9533356
Shizhen Fan, Jianbo Li, Zhiqiang Lv, Aite Zhao
Abstract: With the continuous growth of the urban population, accurately planning travel time has become an urgent need, and travel time prediction for urban areas has become a key research direction in the field of smart cities. At present, most studies on travel time prediction consider only a single mode, treating a given vehicle as an isolated traffic state on its route. However, the factors affecting traffic are extremely complex, which makes comprehensive forecasting very difficult. Motivated by this, we fully consider the coexistence and mutual influence of multiple modes of transportation in the city and propose a multimodal deep learning model, MC-GRU (Multimodal Convoluted Gated Recurrent Unit Network). To handle implicit objective factors such as departure time and travel distance, we propose an attribute module. In addition, to explore the interaction between different modes of vehicles, we propose a feature fusion module that captures their interaction effects. Finally, we use a GRU to learn long-term dependence. MC-GRU achieves accurate travel time prediction in a multimodal traffic state and supports travel time prediction for three types of travel modes. The experimental results show that MC-GRU achieves higher prediction accuracy on a challenging real-world dataset in terms of MAE, MAPE and RMSE.
Citations: 2
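As an aside on the recurrent component: the GRU that the abstract says MC-GRU uses to learn long-term dependence can be sketched in a few lines. This is a generic GRU cell in plain NumPy with made-up dimensions and random weights, not the authors' model; the multimodal convolution and fusion modules are omitted.

```python
import numpy as np

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: gating lets the hidden state carry long-term
    dependence across a sequence of (fused) traffic features."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
dim_x, dim_h = 4, 8  # hypothetical feature and hidden sizes
# Weights in the order Wz, Uz, Wr, Ur, Wh, Uh.
params = [rng.normal(scale=0.1, size=(dim_h, d)) for d in (dim_x, dim_h) * 3]

h = np.zeros(dim_h)
for _ in range(5):  # run over a short random input sequence
    h = gru_step(h, rng.normal(size=dim_x), *params)
print(h.shape)
```

In MC-GRU the input at each step would be the fused multimodal feature vector rather than raw noise, and a regression head on the final state would emit the travel time estimate.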
A Lightweight Sequence-based Unsupervised Loop Closure Detection
2021 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2021-07-18 DOI: 10.1109/IJCNN52387.2021.9534180
Fangyuan Xiong, Yan Ding, Mingrui Yu, Wenzhe Zhao, Nanning Zheng, Pengju Ren
Abstract: Stable, effective and lightweight loop closure detection is a long-pursued goal for real-time SLAM systems, one that can be ported to embedded processors and deployed on autonomous robots. Deep learning methods have extended the expressive ability and adaptability of descriptors, and sequence-based methods can greatly improve matching accuracy. However, the increased computational complexity and storage bandwidth requirements of matching high-dimensional descriptors make real-time deployment infeasible, especially for robots that navigate relatively large maps. To address this challenge, we propose a lightweight sequence-based unsupervised loop closure detection scheme. Specifically, Principal Component Analysis (PCA) is applied to squeeze the descriptor dimensions while maintaining sufficient expressive ability. Additionally, taking the image sequence into account, we combine a linear query with fast approximate nearest-neighbor search to further reduce execution time and improve the efficiency of sequence matching. We implement our method on top of CALC, a state-of-the-art unsupervised solution, and conduct experiments on an NVIDIA TX2; the results demonstrate that accuracy is improved by 5% while execution is 2× faster. Source code is available at https://github.com/Mingrui-Yu/Seq-CALC.
Citations: 3
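The descriptor-compression step described above can be sketched as follows. This is an illustrative sketch, not the Seq-CALC code: the descriptor dimensions (1064-D in, 64-D out), the random data, and the brute-force matching are assumptions for the demo; the paper combines a linear query with approximate nearest-neighbor search instead of exhaustive distance computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for CALC descriptors: 200 images x 1064-D vectors.
descriptors = rng.normal(size=(200, 1064))

# PCA: squeeze descriptors to 64-D while keeping most variance,
# cutting matching compute and storage bandwidth.
mean = descriptors.mean(axis=0)
centered = descriptors - mean
# SVD of the centered data yields the principal axes as rows of vt.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:64]                  # top-64 principal directions
compressed = centered @ components.T  # (200, 64) compressed descriptors

# Match a slightly perturbed copy of descriptor 42 against the database
# (brute force here, purely for illustration).
query = compressed[42] + 0.01 * rng.normal(size=64)
dists = np.linalg.norm(compressed - query, axis=1)
best = int(np.argmin(dists))
print(best)  # 42
```

Because the perturbation is tiny relative to the spread of the compressed descriptors, the nearest neighbor of the query is the original frame, which is exactly the loop-closure match one wants.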
Evolutionary NAS in Light of Model Stability for Accurate Continual Learning
2021 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2021-07-18 DOI: 10.1109/IJCNN52387.2021.9534079
Xiaocong Du, Zheng Li, Jingbo Sun, Frank Liu, Yu Cao
Abstract: Continual learning, the capability to learn new knowledge from streaming data without forgetting previous knowledge, is a critical requirement for dynamic learning systems, especially for emerging edge devices such as self-driving cars and drones. However, continual learning still faces the catastrophic forgetting problem. Previous work illustrates that model performance on continual learning is not only related to the learning algorithm but also strongly dependent on the inherited model, i.e., the model from which continual learning starts. The better the stability of the inherited model, the less catastrophic forgetting occurs; thus, the inherited model should be carefully selected. Inspired by this finding, we develop an evolutionary neural architecture search (ENAS) algorithm that emphasizes the Stability of the inherited model, namely ENAS-S. ENAS-S aims to find optimal architectures for accurate continual learning on edge devices. On CIFAR-10 and CIFAR-100, we show that ENAS-S finds competitive architectures with less catastrophic forgetting and smaller model size when learning from a data stream, compared with handcrafted DNNs.
Citations: 3
Selective Adversarial Adaptation Learning via Exclusive Regularization for Partial Domain Adaptation
2021 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2021-07-18 DOI: 10.1109/IJCNN52387.2021.9533438
Ping Li, Linlin Shen, H. Ling, L. Wu, Qian Wang, Chuang Zhao
Abstract: In terms of suitability for application scenarios, partial domain adaptation is more significant and valuable than traditional domain adaptation. Most existing partial domain adaptation methods adopt a weighting mechanism to avoid the negative transfer caused by samples from outlier classes. However, these methods give equal consideration to every category in the source domain, determine class weights via a classifier or discriminator, and do not account for possible mispredictions of similar samples from source classes that are difficult to distinguish. This can cause misalignment between outlier source classes and target classes, and wrong alignment by the discriminators. In this work, we propose a selective adversarial adaptation learning method via exclusive regularization for partial domain adaptation (ERPDA) to solve these problems. Specifically, we utilize exclusive regularization to enlarge the distance between samples of different classes in the source domain, learning an inter-class separable discriminant representation that avoids negative transfer. Meanwhile, positive transfer is performed with the Joint Maximum Mean Discrepancy (JMMD), based on selective adversarial adaptation learning via multiple discriminators. Extensive experiments show that ERPDA achieves state-of-the-art results on several partial domain adaptation benchmark datasets.
Citations: 0
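To make the discrepancy measure above concrete: a minimal sketch of the plain (single-kernel) squared Maximum Mean Discrepancy follows. ERPDA itself uses the joint variant (JMMD) over multiple network layers; the RBF bandwidth and the toy Gaussian data here are assumptions for illustration only.

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared MMD between samples x and y under an RBF kernel.

    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]; it is near zero
    when the two sample sets come from the same distribution."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(100, 8))       # source features
tgt_near = rng.normal(0.1, 1.0, size=(100, 8))  # nearly aligned target
tgt_far = rng.normal(2.0, 1.0, size=(100, 8))   # shifted target

# Distributions further apart yield a larger discrepancy, which is what
# an adaptation loss penalizes to pull source and target together.
print(rbf_mmd2(src, tgt_near) < rbf_mmd2(src, tgt_far))  # True
```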
TW-TGNN: Two Windows Graph-Based Model for Text Classification
2021 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2021-07-18 DOI: 10.1109/IJCNN52387.2021.9534150
Xinyu Wu, Zheng Luo, Zhanwei Du, Jiaxin Wang, Chao Gao, Xianghua Li
Abstract: Text classification is the most fundamental and classical task in natural language processing (NLP). Recently, graph neural network (GNN) methods, especially graph-based models, have been applied to this task because of their superior capacity for capturing global co-occurrence information. However, some existing GNN-based methods adopt a corpus-level graph structure, which incurs high memory consumption. In addition, these methods do not take global co-occurrence information and local semantic information into account at the same time. To address these problems, we propose a new GNN-based model, the two-window text GNN model (TW-TGNN), for text classification. Specifically, we build a text-level graph for each text with a local sliding window and a dynamic global window. The local window, sliding inside the text, acquires sufficient local semantic features; the dynamic global window, sliding between texts, generates a dynamic shared weight matrix, which overcomes the limitation of fixed corpus-level co-occurrence and provides richer dynamic global information. Our experimental results on four benchmark datasets illustrate the improvement of the proposed method over state-of-the-art text classification methods. Moreover, we find that our method captures adequate global information for short texts, which helps overcome the insufficient contextual information available in short-text classification.
Citations: 3
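The local-window half of the graph construction above can be sketched as follows: every pair of tokens that co-occurs inside a sliding window gets an edge weighted by its co-occurrence count. This is a simplified illustration with an arbitrary window size; TW-TGNN additionally uses the dynamic global window, which is not shown.

```python
from collections import defaultdict

def local_cooccurrence_edges(tokens, window=3):
    """Build text-level co-occurrence edges with a local sliding window.

    Returns {(word_a, word_b): count} with each pair stored in sorted
    order so the graph is undirected."""
    edges = defaultdict(int)
    for i, w in enumerate(tokens):
        # Pair the current token with the next (window - 1) tokens.
        for u in tokens[i + 1 : i + window]:
            if u != w:  # no self-loops
                edges[tuple(sorted((w, u)))] += 1
    return dict(edges)

edges = local_cooccurrence_edges("the cat sat on the mat".split(), window=3)
print(edges[("cat", "sat")])  # 1
```

Each per-text graph built this way stays small (nodes are only that text's tokens), which is the memory advantage over a single corpus-level graph.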
Universal Transformer Hawkes Process
2021 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2021-07-18 DOI: 10.1109/IJCNN52387.2021.9533810
Lu-ning Zhang, Jian-wei Liu, Zhi-yan Song, Xin Zuo, Wei-min Li, Ze-yu Liu
Abstract: The recent increase of asynchronous event sequence data in a diversity of fields has led researchers to pay more attention to mining knowledge from it. Early research tended to use basic mathematical point process models, such as the Poisson process and the Hawkes process. In recent years, recurrent neural network (RNN) based point process models have been proposed and offer significant performance improvements, yet they still struggle to describe long-term relations between events. To address this issue, the Transformer Hawkes process was proposed. However, a Transformer with a fixed stack of distinct layers fails to support recursive learning and the abstraction of locally salient properties, though these may be very important. To make up for this shortcoming, we present the Universal Transformer Hawkes Process (UTHP), which introduces a recurrent structure into the encoding process and a convolutional neural network (CNN) into the position-wise feed-forward network. Experiments on several datasets show that the performance of our model improves on the state of the art.
Citations: 2
Let Imbalance Have Nowhere to Hide: Class-Sensitive Feature Extraction for Imbalanced Traffic Classification
2021 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2021-07-18 DOI: 10.1109/IJCNN52387.2021.9533821
Yu Guo, Gaopeng Gou, G. Xiong, Minghao Jiang, Junzheng Shi, Wei Xia
Abstract: With the full encryption of network traffic, traffic classification schemes based on machine learning emerge endlessly. Class imbalance, a widely studied challenge in machine learning, has not attracted enough attention in traffic classification research. The uneven distribution hidden in real-world traffic causes performance degradation of existing schemes. Among existing methods, data pre-sampling easily introduces noise or loses substantial information; the cost matrix of cost-sensitive methods is difficult to design; and feature selection methods filter out many "redundant" features, yielding unsatisfactory results. In this paper, we propose an effective end-to-end framework for imbalanced traffic classification, called DeepFE, which avoids these weaknesses. We adopt deep neural networks for feature extraction and model features from the perspective of channels. DeepFE learns class-sensitive feature representations, which are quite helpful for distinguishing minority traffic classes. Moreover, DeepFE can be applied to various tasks because it does not restrict the input format; both raw bytes and packet length sequences can be used. We conducted experiments on the public dataset ISCXVPN2016 and a real-world traffic dataset covering 27 applications. The results show that DeepFE achieves excellent results, significantly alleviating the performance degradation caused by imbalance, and surpasses several state-of-the-art methods.
Citations: 1
TRANSFAKE: Multi-task Transformer for Multimodal Enhanced Fake News Detection
2021 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2021-07-18 DOI: 10.1109/IJCNN52387.2021.9533433
Quanliang Jing, Di Yao, Xinxin Fan, Baoli Wang, Haining Tan, Xiangpeng Bu, Jingping Bi
Abstract: Social media has become a critical channel for people to acquire information in daily life. Despite its great convenience, fake news can spread widely through social networks, causing various adverse effects on people's lives. Detecting such fake news or misinformation has proved to be a critical task and draws attention from both governments and individuals. Recently, many methods have been proposed to solve this problem, but most of them rely on the body content of the news, ignoring social context information such as comments. We argue that the comments on a specific news item reflect the common judgements of society and can be extremely useful for detecting fake news. In this paper, we propose a new method, TRANSFAKE, which jointly and systematically models the body content and comments of news and detects fake news within a multi-task learning framework. TRANSFAKE is a Transformer-based model. It takes different modalities as input and employs multiple tasks, i.e., rumor score prediction and event classification, as intermediate tasks for extracting useful hidden relationships across modalities. These intermediate tasks promote each other and encourage TRANSFAKE to make the right decision. Extensive experiments on two standard real-life datasets demonstrate that TRANSFAKE outperforms state-of-the-art methods. It improves detection accuracy by margins as large as ~12.6% and F1 scores by as much as ~15%.
Citations: 7
Disk Failure Prediction with Multiple Channel Convolutional Neural Network
2021 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2021-07-18 DOI: 10.1109/IJCNN52387.2021.9534457
Jian Wu, Haiyang Yu, Zhen Yang, Ruiping Yin
Abstract: With the increase of data centers, the number of disks also grows rapidly; therefore, the prediction of disk failures has become an important task for both academia and industry. Existing schemes predict disk failure over a short prediction horizon or with a short time window, but they cannot achieve ideal performance for a long prediction horizon with a long time window. In this paper, we propose a deep learning method that effectively solves these problems. We refine the Self-Monitoring, Analysis and Reporting Technology (SMART) attributes by using information entropy to select the attributes most related to prediction. Moreover, we propose the Multiple Channel Convolutional Neural Network based LSTM (MCCNN-LSTM) model to predict whether a disk failure will occur in a given disk in the next few days. We further evaluate the MCCNN-LSTM model by comparing it with state-of-the-art work. Extensive experiments show that our model improves the FDR (Fault Detection Rate) to 99.8% and reduces the FAR (False Alarm Rate) to 0.2%.
Citations: 1
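The entropy-based attribute selection mentioned above can be sketched as follows. This is an illustrative sketch under assumed data, not the authors' pipeline: the synthetic "SMART" columns and the 10-bin discretization are made up, and the idea shown is simply that a near-constant attribute carries low entropy and can be dropped.

```python
import numpy as np

def entropy(column, bins=10):
    """Shannon entropy (in bits) of one discretized attribute column."""
    counts, _ = np.histogram(column, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Hypothetical SMART readings for 1000 disks: attribute 0 is constant
# (uninformative), attribute 1 varies widely across disks.
smart = np.column_stack([
    np.full(1000, 100.0),
    rng.uniform(0, 100, size=1000),
])

scores = [entropy(smart[:, j]) for j in range(smart.shape[1])]
# Keep the attributes with the highest entropy as model inputs.
print(scores[1] > scores[0])  # True
```

A constant column lands in a single histogram bin (entropy 0), while the varying column spreads over all bins, so ranking by entropy keeps the informative attributes.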
Graph Convolutional Network Based Patent Issue Discovery Model
2021 International Joint Conference on Neural Networks (IJCNN) Pub Date : 2021-07-18 DOI: 10.1109/IJCNN52387.2021.9533370
Weidong Liu, Hao-nan Zhang, Xudong Guo, Yong Han
Abstract: With increasing attention on the protection of intellectual property rights, a large number of patents need to be processed. However, since patents are complicated technical texts, they are difficult to understand, and the problem is how a computer can quickly understand a patent. To solve this problem, our method tags issue sentences, i.e., the sentences that describe the problems a patent aims to solve. Tagging issue sentences is an important research topic in patent understanding: a patent revolves around its issue sentences, so they are the key to understanding it. Our task poses two challenges: (1) how to extract issue sentences to build a corpus, and (2) which features and models work best for the task. To address them: (1) we observe that issue sentences mainly appear in the "technical background" section of a patent, so we extract issue sentences from this section to build the corpus; (2) we split the "technical background" section into sentences and obtain two sets of features from each sentence: part-of-speech features of the sentence, and association features between the sentence and the claims of the patent. We then construct a graph from these two feature sets and use a graph convolutional neural network for training and testing.
Citations: 0