2021 IEEE 13th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM): Latest Publications

Development of a One Way, Imaging Based Fish Fingerling Counter Using Raspberry Pi
M. Manuel, John Edward D. Cruz, Ronnel L. Reyes, Mark Joseph V. Macapuno, Jennifer C. Dela Cruz, Roderick C. Tud
DOI: 10.1109/HNICEM54116.2021.9732058 (published 2021-11-28)
Abstract: Aquaculture is growing much faster than capture fisheries. Automating fish counting, rather than counting by hand, can greatly benefit the country, especially fishermen and fish companies. The researchers created a Raspberry Pi system that counts fish fingerlings through a one-way, imaging-based process. The housing uses a 3-degree angle of depression so that the program can detect and count the colors within its boundary. The fish fingerling counter achieves an accuracy of at least 90% for both the running total and binary classification. (An illustrative code sketch follows this entry.)
Citations: 0
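The paper does not include source code, so the following is a minimal sketch of the kind of color-based counting the abstract describes: detect color blobs inside a fixed boundary of each camera frame and accumulate a running total. The HSV thresholds, region of interest, minimum blob area, and camera index are illustrative assumptions, not values from the study, and a real one-way counter would additionally track blobs across frames so that each fingerling is added to the total exactly once.

# Minimal sketch (not the authors' code): color-based blob counting on a
# Raspberry Pi camera stream with OpenCV. HSV thresholds, ROI, and minimum
# blob area below are illustrative assumptions, not values from the paper.
import cv2
import numpy as np

LOWER_HSV = np.array([5, 80, 80])      # assumed fingerling color range (HSV)
UPPER_HSV = np.array([25, 255, 255])
MIN_AREA = 50                          # assumed minimum blob area in pixels
ROI = (100, 0, 440, 480)               # assumed counting boundary: x0, y0, x1, y1

def count_fingerlings(frame):
    """Count color blobs inside the counting boundary of one frame."""
    x0, y0, x1, y1 = ROI
    roi = frame[y0:y1, x0:x1]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) >= MIN_AREA)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # Raspberry Pi camera exposed as /dev/video0
    running_total = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # A real one-way counter would also track blobs across frames so each
        # fingerling increments the running total only once.
        running_total += count_fingerlings(frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    print("running total:", running_total)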
Multigene Genetic Programming Model for Temperature Optimization to Improve Lettuce Quality
Jo-Ann V. Magsumbol, Maria Gemel B. Palconit, Lovelyn C. Garcia, Marife A. Rosales, A. Bandala, E. Dadios
DOI: 10.1109/HNICEM54116.2021.9731974 (published 2021-11-28)
Abstract: This paper presents a Multigene Genetic Programming (MGGP) approach to optimizing the temperature of romaine lettuce inside an artificially controlled environment (ACE). In this research, MGGP is used to find the prediction model that leads to the optimum temperature for growing lettuce. The system used a population of 1000 with tournament selection over 40 generations. A mutation probability of 0.14 was applied to help verify that the result is a global optimum. When the iterations reached the termination criteria, the system stopped, yielding the best temperature model for growing lettuce. Training and testing of the predictions were performed. The model developed in this study can be used in the control system for the temperature setting inside the ACE to provide optimal growing conditions. (A rough code sketch of the evolutionary setup follows this entry.)
Citations: 1
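The paper's MGGP model itself is not reproduced here. As a rough illustration of the evolutionary setup it describes (population of 1000, tournament selection, 40 generations, mutation probability 0.14), the sketch below uses gplearn's SymbolicRegressor. Note that gplearn evolves single-tree symbolic-regression models rather than true multigene individuals, the split among the remaining genetic operators and the tournament size are assumptions, and the environment readings and target values are invented placeholders.

# Rough sketch with gplearn (single-tree symbolic regression, not true MGGP).
# Data, feature meanings, and the operator-probability split are placeholders.
import numpy as np
from gplearn.genetic import SymbolicRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Placeholder environment readings: humidity (%), light (klux), CO2 (ppm/100)
X = rng.uniform([40, 5, 3], [90, 50, 12], size=(500, 3))
# Placeholder "observed optimal temperature" target in deg C
y = 22 + 0.05 * X[:, 0] - 0.02 * X[:, 1] + rng.normal(0, 0.3, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SymbolicRegressor(
    population_size=1000,       # population size reported in the paper
    generations=40,             # number of generations reported in the paper
    tournament_size=20,         # tournament selection (size is an assumption)
    p_crossover=0.80,           # operator split is an assumption...
    p_subtree_mutation=0.03,
    p_hoist_mutation=0.03,
    p_point_mutation=0.14,      # ...except the 0.14 mutation probability cited
    function_set=("add", "sub", "mul", "div"),
    random_state=0,
)
model.fit(X_train, y_train)
print("evolved expression:", model._program)
print("test R^2:", model.score(X_test, y_test))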
Detection of Outer Throat Infection using Deep Convolutional Neural Network
Emmanuel Coronel, Martin N. Mababangloob, Jessie R. Balbin
DOI: 10.1109/HNICEM54116.2021.9731949 (published 2021-11-28)
Abstract: It is integral for physicians to be able to assess patients through a thorough history and physical exam. However, it has become increasingly difficult to perform rigorous physical examinations because of the COVID-19 pandemic. Thus, improved assessment techniques through image classification with a deep convolutional neural network are increasingly relevant. The ResNet50 architecture is used as the classifier. This type of network learns residual functions through shortcut connections, which proved easier to train than some other types of convolutional neural networks. The features learned by ResNet50 feed the fully connected layers, which decide on a result using a softmax function. The researchers were able to train and test such a network. It is very convenient for a patient, especially in the midst of the COVID-19 pandemic, to be assessed without having to be physically examined by a physician. In the GUI, the patient registers on the web app, takes a photo of the throat, and sends it; the patient then receives a notification containing the diagnosis of the photo. The network obtained positive results, with a 92% accuracy rate in classifying throat images as healthy, inflamed throat, inflamed throat with swollen tonsils, or white spots in the throat. (A minimal Keras sketch follows this entry.)
Citations: 0
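No code accompanies the abstract; the sketch below shows the general arrangement it describes in Keras, with a frozen ResNet50 backbone feeding fully connected layers and a softmax decision layer. The four class labels, the 224x224 input size, the added dense layer and dropout rate, the number of epochs, and the training-directory layout are assumptions for illustration only.

# Sketch of a ResNet50-based classifier head with softmax output (Keras).
# Class names, image size, directory layout, and training settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4          # assumed: healthy, inflamed, inflamed + swollen tonsils, white spots
IMG_SIZE = (224, 224)

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,)
)
base.trainable = False   # freeze the pretrained residual backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),             # fully connected layer on top
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # softmax decision layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumed directory of labeled throat photos, one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "throat_images/train", image_size=IMG_SIZE, batch_size=32
)
train_ds = train_ds.map(lambda x, y: (tf.keras.applications.resnet50.preprocess_input(x), y))
model.fit(train_ds, epochs=10)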
Performance Analysis of Machine Learning Algorithms in Generating Urban Land Cover Map of Quezon City, Philippines Using Sentinel-2 Satellite Imagery
Robert Martin C. Santiago, R. Gustilo, G. Arada, E. Magsino, E. Sybingco
DOI: 10.1109/HNICEM54116.2021.9731856 (published 2021-11-28)
Abstract: As urban expansion is expected to persist and may even accelerate in the coming years, understanding and effectively managing urbanization become increasingly important in achieving long-term progress, specifically in making cities and human settlements inclusive, safe, resilient, and sustainable. One way to accomplish this is to obtain reliable and updated information about the land cover characteristics of an area in the form of a map, which can be produced using remote sensing and machine learning. However, the use of these technologies for urban land cover mapping has mostly occurred at the geographic-locality level, and in the case of the Philippines this is a domain that needs further exploration to quantitatively comprehend urban extent. In this study, a map of man-made structures or built-up areas and natural structures or non-built-up areas was generated over Quezon City and nearby surrounding areas, where a rapid rise in population occurs along with urban development. In addition, since related previous studies used various machine learning algorithms for the classification, this study compared the performance of three algorithms, namely the random forest classifier, k-nearest neighbors, and the Gaussian mixture model, to identify which performed best in this particular application. The satellite imagery of the area of interest was collected from the Sentinel-2 mission satellites. All three algorithms attained high accuracies across all measurements with small variations but differed greatly in the time consumed for classification. The highest overall accuracy of 99.32% was obtained using the random forest classifier, despite it taking the longest time to finish the classification; next is 98.95% using the k-nearest neighbors algorithm, which also ranked second in classification speed; and last is 97.17% using the Gaussian mixture model, despite it being the fastest to complete the classification. Further studies may explore other machine learning algorithms as well as deep learning techniques to harness their feature-extraction capabilities for more complex applications. Aside from Sentinel-2, other satellite missions may also be utilized as sources of imagery offering different spectral, spatial, and temporal resolutions that fit a specific application. (A classifier-comparison sketch follows this entry.)
Citations: 0
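The abstract compares three classifiers on Sentinel-2 pixels and reports accuracy and classification time for each. The sketch below shows one way such a comparison could be wired up in scikit-learn, assuming the band values and built-up/non-built-up labels have already been extracted into arrays; the file names, train/test split, hyperparameters, and the use of one Gaussian mixture per class as a maximum-likelihood classifier are assumptions.

# Sketch: comparing random forest, k-NN, and a per-class Gaussian mixture
# classifier on pixel features. Inputs are assumed to be pre-extracted
# Sentinel-2 band values (X) and built-up / non-built-up labels (y).
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = np.load("sentinel2_pixels.npy")   # assumed shape: (n_pixels, n_bands)
y = np.load("labels.npy")             # assumed: 1 = built-up, 0 = non-built-up
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def gmm_classify(X_tr, y_tr, X_te, n_components=2):
    """Fit one Gaussian mixture per class and label by highest log-likelihood."""
    classes = np.unique(y_tr)
    gmms = [GaussianMixture(n_components=n_components, random_state=0).fit(X_tr[y_tr == c])
            for c in classes]
    scores = np.column_stack([g.score_samples(X_te) for g in gmms])
    return classes[np.argmax(scores, axis=1)]

for name, clf in [("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("k-nearest neighbors", KNeighborsClassifier(n_neighbors=5))]:
    start = time.time()
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.4f}, time={time.time() - start:.1f}s")

start = time.time()
pred = gmm_classify(X_tr, y_tr, X_te)
print(f"Gaussian mixture: accuracy={accuracy_score(y_te, pred):.4f}, time={time.time() - start:.1f}s")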
Gender Identification Using Keras Model Through Detection of Face
Steven Dg. Boncolmo, Emerson V. Calaquian, M. V. Caya
DOI: 10.1109/HNICEM54116.2021.9731814 (published 2021-11-28)
Abstract: Gender identification is a critical topic in which research is still ongoing, and many gender identification systems have been developed using various designs. With the help of the Raspberry Pi 4 Model B and Raspberry Pi Camera Module V2, this paper presents a real-time system for gender identification from images. Gender identification from face images has become a significant issue in recent years, and in computer vision various practical techniques are being explored to address such a difficult challenge. The acquired facial features are fed into the neural network as input or test data. The neural network was created to extract features and to function as a classifier that detects gender. However, the majority of existing methods fall short of high precision and accuracy. With Python as the programming language, libraries such as OpenCV, Keras, and TensorFlow were utilized to assess the effectiveness of the design. A thousand samples were tested for foreign and Filipino datasets, yielding a training accuracy of nearly 90 percent and a loss of less than 1 percent. As a result, the system is a reliable device for determining a user's gender. (A face-detection-plus-classifier sketch follows this entry.)
Citations: 3
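The pipeline the abstract describes, face detection feeding a Keras gender classifier on a Raspberry Pi, could look roughly like the sketch below, which pairs an OpenCV Haar cascade with a saved Keras model. The model file name "gender_model.h5", its 96x96 input size, the label order, and the camera index are assumptions rather than details from the paper.

# Sketch: OpenCV face detection feeding a Keras gender classifier.
# The model file, its input size, and the label order are illustrative assumptions.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = ["female", "male"]            # assumed label order
model = load_model("gender_model.h5")  # assumed pre-trained classifier file
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)              # Raspberry Pi Camera Module exposed as video0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(frame[y:y + h, x:x + w], (96, 96))
        face = face.astype("float32")[np.newaxis] / 255.0
        probs = model.predict(face, verbose=0)[0]
        label = LABELS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 8), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("gender", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()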
Strengthening Module Development to Full Online Modality: Faculty and Student Adaptation in the Pandemic Era
Arlene Mae C. Valderama, Mengvi P. Gatpandan, Mary Ann B. Taduyo
DOI: 10.1109/HNICEM54116.2021.9731881 (published 2021-11-28)
Abstract: The COVID-19 pandemic has led education administrations worldwide to adopt flexible learning environments and to search for alternatives to face-to-face instruction or the already established blended learning. Universities turned to fully online teaching and learning modalities, and faculty members and students had to adjust on an exceptional scale. This paper presents how a faculty member prepared and strengthened the content of the modules, covering the course learning outcomes, key performance indicators, delivery of teaching and learning, assessment methods and tools, and the course's evaluation targets. Findings from the course Integrative Programming and Technologies, whose module contents span an 18-week duration, are presented; student performance reflected more than 60% attainment of the course learning outcome (CLO) targets.
Citations: 0
Python Based Defect Classification of Theobroma Cacao Bean using Fine-Tuned Visual Geometry Group16
Aileen F. Villamonte, Patrick John S. Silva, D. G. D. Ronquillo, Marife A. Rosales, A. Bandala, E. Dadios
DOI: 10.1109/HNICEM54116.2021.9731887 (published 2021-11-28)
Abstract: The study aims to classify cacao bean defects from captured images using VGG16. Seven classes of cacao beans were gathered: broken, cluster, flat, germinated, good, insect, and moldy. One hundred images per class were captured using an enclosed capture box with a Logitech C920 camera inside and an LED as the light source. Image augmentation was performed to enlarge the dataset. Transfer learning was implemented using the pre-trained VGG16 architecture, adding 10% dropout after the FC2 layer and keeping the default weights of several layers through fine-tuning. Three fine-tuning methods were conducted by freezing different convolutional blocks. The performance of the trained model with several optimizers (Adam, RMSprop, and SGD) and loss functions (categorical crossentropy and mean squared error) was analysed. The effect of the number of epochs and of different learning rates during training was also examined. The metrics used in choosing the model were based on the confusion matrix. The chosen model uses the VGG16 architecture with 10% dropout, the Adam optimizer, a 0.0001 learning rate, and categorical crossentropy loss, trained for 20 epochs. It achieved an average accuracy of 95.33%. The model was embedded in a processor for actual testing and attained 97.29% accuracy on the prototype with 37 test samples. (A fine-tuning sketch follows this entry.)
Citations: 0
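The abstract states the winning configuration: VGG16 with 10% dropout added after the FC2 layer, the Adam optimizer at a 0.0001 learning rate, categorical crossentropy loss, and 20 training epochs. A minimal Keras sketch of that configuration follows; the dataset path, which convolutional blocks are frozen, and the preprocessing step are assumptions.

# Sketch of the reported configuration: VGG16 with 10% dropout after FC2,
# a 7-class softmax head, Adam at lr=1e-4, categorical crossentropy, 20 epochs.
# Dataset path and which blocks are frozen are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

CLASSES = ["broken", "cluster", "flat", "germinated", "good", "insect", "moldy"]

vgg = tf.keras.applications.VGG16(include_top=True, weights="imagenet")
fc2 = vgg.get_layer("fc2").output              # reuse convolutional base + FC1/FC2
x = layers.Dropout(0.10)(fc2)                  # 10% dropout after FC2, as reported
out = layers.Dense(len(CLASSES), activation="softmax")(x)
model = models.Model(vgg.input, out)

# One fine-tuning variant: freeze the first four convolutional blocks (assumption).
for layer in model.layers:
    if layer.name.startswith(("block1", "block2", "block3", "block4")):
        layer.trainable = False

model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Assumed directory of bean images, one subfolder per class, resized to VGG16's 224x224 input.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cacao_beans/train", image_size=(224, 224), batch_size=32,
    label_mode="categorical", class_names=CLASSES,
)
train_ds = train_ds.map(lambda x, y: (tf.keras.applications.vgg16.preprocess_input(x), y))
model.fit(train_ds, epochs=20)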
Towards the Integration of Computer Vision and Applied Artificial Intelligence in Postharvest Storage Systems: Non-invasive Harvested Crop Monitoring
Ronnie S. Concepcion, Llewelyn S. Moron, I. Valenzuela, Jonnel D. Alejandrino, R. R. Vicerra, A. Bandala, E. Dadios
DOI: 10.1109/HNICEM54116.2021.9731973 (published 2021-11-28)
Abstract: The agricultural production system does not end with the actual harvesting of crops; rather, it extends to the postharvest system, which primarily consists of crop storage, marketing, and transportation. However, temperature and humidity directly affect the quality of stored agricultural products. In a tropical country like the Philippines, tomato, lettuce, and other thin-skinned and highly moist crops degrade in quality and deform over time. This study is a thematic taxonomy of intelligent postharvest storage systems, discussing techniques for phenotyping agricultural produce and emerging needs, trends in computer-vision-based postharvest systems, the integration of artificial intelligence in postharvest systems, and the current issues, challenges, and corresponding future directives in intelligent storage systems. Based on the systematic analysis, technical modeling of the storage system and postharvest crop quality grading are the emerging challenges in effectively storing crops for human consumption. It was found that non-invasive, high-throughput methods for evaluating quality and shelf life are needed. This can be done through vision-based fruit and vegetable quality grading and vision-based adaptive controls in the storage chamber. Overall, computer vision allied with artificial intelligence can make an intelligent postharvest storage system that is sustainable, profitable, and easy to implement.
Citations: 1
Classification of Otitis Media Infections using Image Processing and Convolutional Neural Network
Ahmed I. Elabbas, K. Khan, Carlos C. Hortinela
DOI: 10.1109/HNICEM54116.2021.9732013 (published 2021-11-28)
Abstract: Developing countries still suffer from misdiagnosis of otitis media infections, and various studies have attempted to solve this issue with varying success rates. This study explores a particular variation of the convolutional neural network (CNN): YOLOv3, or version 3 of You Only Look Once. This algorithm detects particular objects in various forms of media, including images. Since it is designed to detect specific objects, it was a natural candidate for detecting Acute Otitis Media (AOM) and Chronic Suppurative Otitis Media (CSOM). Each of these two conditions presents a specific visual target that a doctor looks for when diagnosing a case. Both are forms of middle-ear inflammation, or otitis media (OM), that are separate disease entities but may overlap; hence, it may be confusing for a newly trained doctor to diagnose them correctly. This study achieved an accuracy rate of 75% when 20 images of AOM, CSOM, and normal tympanic membranes were tested. This result can be improved by adding more images to the training datasets, captured with the same camera used in testing. Other appealing features of YOLOv3 are its low development cost and the availability of documentation on using and improving it. (A detection sketch follows this entry.)
Citations: 3
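YOLOv3 is a standard Darknet detector, and one minimal way to run a trained model of this kind on eardrum images is through OpenCV's DNN module, as sketched below. The .cfg, .weights, and class-name values, the 416x416 input size, and the confidence threshold are placeholders; the study's own trained weights are not available here.

# Sketch: running a trained YOLOv3 (Darknet) model with OpenCV's DNN module.
# The cfg/weights/names values and thresholds are placeholders, not the study's.
import cv2
import numpy as np

CLASSES = ["AOM", "CSOM", "normal"]            # assumed class list
net = cv2.dnn.readNetFromDarknet("otitis_yolov3.cfg", "otitis_yolov3.weights")

img = cv2.imread("eardrum.jpg")                # assumed otoscope image
h, w = img.shape[:2]
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

for output in outputs:                          # each row: box center/size + class scores
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > 0.5:                          # assumed confidence threshold
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            print(f"{CLASSES[class_id]} ({conf:.2f}) at "
                  f"x={cx - bw / 2:.0f}, y={cy - bh / 2:.0f}, w={bw:.0f}, h={bh:.0f}")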
Hardware Development of a Humanoid Robot Head: "Gabot"
Justein Alagenio, Edriane James L. Jabanes, Cresencio P. Genobiagon, N. Linsangan
DOI: 10.1109/HNICEM54116.2021.9731880 (published 2021-11-28)
Abstract: The development aims to mimic the anthropomorphic specifications of a human head using available modern equipment such as a 3D printer. The proponents developed mechanisms to meet the anthropomorphic data of a human head: the angles of actuation and the angular velocities of the mouth, eyes, and neck. The proponents also tested motor torque and stress on the parts to ensure the robustness of the machine, which yielded 520 N·mm and 27.74 MPa for neck tilting, 188 N·mm and 24.74 MPa for neck swinging, 114 N·mm and 12.09 MPa for neck panning, and 73 N·mm and 7.052 MPa for eye tilting. The maximum angular velocity of each part is 266.33 deg/sec for neck tilting, 262.33 deg/sec for neck swinging, 314 deg/sec for neck panning, 795.66 deg/sec for eye tilting, and 785.66 deg/sec for eye panning. The proponents used an MPU-6050 accelerometer to gather the measurements required by this study. The effectiveness of the machine is as follows: eyes, 92.43% for panning and 93.60% for tilting; neck, 89.20% for panning, 75.66% for tilting, and 75.52% for swinging; mouth, 81.94%. (A sensor-reading sketch follows this entry.)
Citations: 0
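The angular velocities reported above were measured with an MPU-6050. The sketch below is a minimal, illustrative way to read that sensor's gyroscope over I2C on a Raspberry Pi using smbus2, assuming the chip's default 0x68 address and the power-on ±250 deg/s full-scale range (131 LSB per deg/s); it is not the proponents' test code.

# Sketch: reading angular velocity from an MPU-6050 over I2C on a Raspberry Pi.
# Assumes the default 0x68 address and +/-250 deg/s full scale (131 LSB per deg/s).
# Illustrative only, not the proponents' measurement code.
import time
from smbus2 import SMBus

MPU_ADDR = 0x68
PWR_MGMT_1 = 0x6B
GYRO_XOUT_H = 0x43          # gyro X/Y/Z high bytes start here (6 bytes total)
GYRO_SENS = 131.0           # LSB per deg/s at +/-250 deg/s

def read_word(bus, reg):
    """Read a signed 16-bit big-endian register pair."""
    high = bus.read_byte_data(MPU_ADDR, reg)
    low = bus.read_byte_data(MPU_ADDR, reg + 1)
    value = (high << 8) | low
    return value - 65536 if value > 32767 else value

with SMBus(1) as bus:       # I2C bus 1 on most Raspberry Pi models
    bus.write_byte_data(MPU_ADDR, PWR_MGMT_1, 0)   # wake the sensor from sleep
    while True:
        gx, gy, gz = (read_word(bus, GYRO_XOUT_H + 2 * i) / GYRO_SENS for i in range(3))
        print(f"angular velocity deg/s  x={gx:7.2f}  y={gy:7.2f}  z={gz:7.2f}")
        time.sleep(0.1)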