Title: Multi Modal Deep Learning Based on Feature Attention for Prediction of Blood Clot Elasticity
Authors: Jiseon Moon, Sang-il Ahn, M. Joo, K. Park, H. Baac, Jitae Shin
Venue: 2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), 2023-06-25
DOI: https://doi.org/10.1109/ITC-CSCC58803.2023.10212605
Abstract: Blood clots form inside blood vessels for various reasons. The carotid artery is the major blood vessel in the neck that supplies blood to the brain; if a clot hardens there, it can narrow or block the vessel. It is therefore essential to predict the coagulation of blood clots in blood vessels. In this paper, we propose a method to determine the coagulation progress of a blood clot. We use two kinds of data: the waveform of the blood clot and the frequency spectra obtained by applying the Fourier transform to that waveform. Feature vectors are then extracted from both: an encoder block network processes the waveform data, and a proposed feature attention network processes the frequency spectra. The extracted feature vectors are classified into three stages of coagulation progress through multi-modal deep learning. The proposed method achieves a meaningful result, with an accuracy of 98% in determining the coagulation stage of a blood clot.

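The abstract describes a two-branch design: an encoder block for the raw waveform and a feature attention network for the Fourier spectra, fused for three-class coagulation staging. The PyTorch sketch below shows one way such a two-branch multi-modal classifier could be wired together; all layer sizes and the attention form are assumptions, since the abstract gives no architectural details.

```python
# Hypothetical two-branch multi-modal classifier; every dimension is an illustrative assumption.
import torch
import torch.nn as nn

class WaveformEncoder(nn.Module):
    """Encoder-block branch for the raw waveform (1D signal)."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, out_dim))
    def forward(self, x):            # x: (batch, 1, samples)
        return self.net(x)

class SpectrumAttention(nn.Module):
    """Feature-attention branch for the FFT magnitude spectrum."""
    def __init__(self, n_bins, out_dim=64):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(n_bins, n_bins), nn.Sigmoid())
        self.proj = nn.Linear(n_bins, out_dim)
    def forward(self, s):             # s: (batch, n_bins)
        return self.proj(s * self.attn(s))     # re-weight bins, then project

class ClotStageNet(nn.Module):
    """Fuse both branches and predict one of three coagulation stages."""
    def __init__(self, n_bins):
        super().__init__()
        self.wave, self.spec = WaveformEncoder(), SpectrumAttention(n_bins)
        self.head = nn.Linear(128, 3)
    def forward(self, wave, spec):
        return self.head(torch.cat([self.wave(wave), self.spec(spec)], dim=1))

model = ClotStageNet(n_bins=256)
logits = model(torch.randn(4, 1, 1024), torch.randn(4, 256))   # -> (4, 3)
print(logits.shape)
```
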
Title: P300-Based Partial Face Recognition With xDAWN Spatial Filter and Covariance Matrix
Authors: Ingon Chanpornpakdi, Toshihisa Tanaka
Venue: 2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), 2023-06-25
DOI: https://doi.org/10.1109/ITC-CSCC58803.2023.10212494
Abstract: Face cognition is one of the most crucial cognitive processes in social interaction. In studies of face cognition, rapid serial visual presentation (RSVP) of target and non-target images is often used to probe the underlying mechanism. When a person perceives the target image, an event-related potential (ERP) is evoked. To identify the target image or the event of interest of a person, machine-learning classification models are applied; however, which model works best on ERP data remains an open question. This study investigated which of six classification models, applied to the ERP peaks evoked during a partial face cognition task, performs best while remaining simple. The six models were linear discriminant analysis (LDA), xDAWN filter + linear support vector machine (SVM), xDAWN filter + LightGBM, xDAWN covariance matrix + tangent space + linear SVM, xDAWN covariance matrix + tangent space + LightGBM, and xDAWN covariance matrix + minimum distance to mean (MDM). We found that the xDAWN covariance matrix improved classification performance compared with combining the xDAWN filter with the same classifiers. In addition, the combination of the xDAWN covariance matrix and MDM provided the best performance in participant-dependent cross-validation, whereas the xDAWN covariance matrix, tangent space, and LightGBM were the most promising in participant-independent cross-validation.

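Two of the compared pipelines, xDAWN covariance matrices with MDM and with tangent-space mapping plus a linear SVM, can be assembled from pyriemann and scikit-learn. The sketch below uses random placeholder epochs; the actual ERP data, epoching, and xDAWN filter count are assumptions.

```python
# Minimal sketch of two of the compared ERP pipelines (pyriemann + scikit-learn).
# X is assumed to be ERP epochs of shape (n_trials, n_channels, n_samples),
# y the target/non-target labels; real EEG loading is outside this sketch.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from pyriemann.estimation import XdawnCovariances
from pyriemann.tangentspace import TangentSpace
from pyriemann.classification import MDM

X = np.random.randn(100, 8, 128)            # placeholder epochs
y = np.random.randint(0, 2, 100)            # placeholder labels

# xDAWN covariance matrices + minimum distance to mean (Riemannian classifier)
pipe_mdm = make_pipeline(XdawnCovariances(nfilter=4), MDM())

# xDAWN covariance matrices + tangent-space mapping + linear SVM
pipe_ts_svm = make_pipeline(XdawnCovariances(nfilter=4), TangentSpace(), SVC(kernel="linear"))

for name, pipe in [("MDM", pipe_mdm), ("TS+SVM", pipe_ts_svm)]:
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```
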
Title: Analysis of the Effect of Feature Denoising from the Perspective of Corruption Robustness
Authors: Hyunha Hwang, Se-Hun Kim, Mincheol Cha, Min-Ho Choi, Kyujoong Lee, Hyuk-Jae Lee
Venue: 2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), 2023-06-25
DOI: https://doi.org/10.1109/ITC-CSCC58803.2023.10212895
Abstract: An adversarial attack aims to cause incorrect predictions in a deep learning model by making slight perturbations to the input. Because of this vulnerability, various studies have sought to improve adversarial robustness. However, deep learning models are also vulnerable to distribution mismatch between training data and test data, which can arise from natural corruption of the test data. Corruption robustness has been explored less than adversarial robustness. This paper analyzes the effect of a feature denoising network, originally intended to improve adversarial robustness, from the perspective of corruption robustness. Experimental results show that the feature denoising network can also improve robustness against common corruptions.

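The abstract does not describe the feature denoising block itself; in the literature such blocks typically apply a smoothing operation to intermediate feature maps, followed by a 1x1 convolution and a residual connection. The sketch below shows a simple mean-filter variant of that idea and is only illustrative, not the specific block evaluated in the paper.

```python
# One possible form of a feature-denoising block: a 3x3 mean filter on the feature
# maps, a 1x1 conv embedding, and a residual connection. Treat this as a sketch of
# the general technique, not the exact design used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFilterDenoisingBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1x1 = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        # Denoise each feature map by local averaging, embed it, and add back.
        denoised = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        return x + self.conv1x1(denoised)

features = torch.randn(2, 64, 32, 32)
print(MeanFilterDenoisingBlock(64)(features).shape)    # torch.Size([2, 64, 32, 32])
```
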
Title: Development of Computer-Aided Diagnosis System Using Single FCN Capable for Indicating Detailed Inference Results in Colon NBI Endoscopy
Authors: Daisuke Katayama, Yongfei Wu, Tetsushi Koide, Toru Tamaki, Shigeto Yoshida, Shin Morimoto, Yuki Okamoto, S. Oka, Shinji Tanaka
Venue: 2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), 2023-06-25
DOI: https://doi.org/10.1109/ITC-CSCC58803.2023.10212877
Abstract: In this paper, we propose a single fully convolutional network (FCN) capable of indicating detailed inference results for computer-aided diagnosis (CAD) in colon narrow band imaging (NBI) endoscopy. The proposed CAD system is capable of real-time processing, with a latency of 0.05 seconds at 20 frames per second, and can detect more than 80% of lesions even in non-magnified images. Taking the classification result at the pixel with the highest confidence level yielded diagnoses that agreed with histopathologic findings in 73% of cases.

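The image-level diagnosis is read off from the pixel with the highest confidence in the FCN output. A minimal sketch of that readout, assuming per-pixel class logits from some FCN (the network itself and its class definitions are not reproduced here):

```python
# Given per-pixel class scores from an FCN, take the pixel whose top softmax
# probability is highest and use its class as the image-level diagnosis.
import torch
import torch.nn.functional as F

def diagnose_from_fcn_logits(logits):
    """logits: (num_classes, H, W) raw FCN output for one image."""
    probs = F.softmax(logits, dim=0)               # per-pixel class probabilities
    confidence, labels = probs.max(dim=0)          # top probability and class per pixel
    idx = confidence.flatten().argmax()            # most confident pixel
    return labels.flatten()[idx].item(), confidence.flatten()[idx].item()

cls, conf = diagnose_from_fcn_logits(torch.randn(3, 240, 320))
print(cls, round(conf, 3))
```
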
Title: Real-Time Inference Platform for Object Detection on Edge Device
Authors: Kwonseung Bok, Sang-Seol Lee, Aeri Kim, Sujin Han, Kyungho Kim
Venue: 2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), 2023-06-25
DOI: https://doi.org/10.1109/ITC-CSCC58803.2023.10212984
Abstract: Deep neural networks (DNNs) that perform object detection have recently received great attention in applications such as autonomous driving, facial recognition, and medical healthcare. Because object detection DNNs process large amounts of data, cloud computing systems with centralized computing power and storage capacity have typically been used. However, with the growing number of edge devices in the IoT trend and the growing volume of data, cloud-based AI faces network-latency challenges for real-time inference. In this paper, we propose a platform consisting of an edge device with a DNN inference accelerator and an optimized network to address the latency issue and achieve real-time DNN inference. The platform adopts SqueezeNet, which is suited to mobile devices because its network size is smaller than that of other DNNs. Post-training quantization compresses the pre-trained SqueezeNet model without accuracy loss. With the compressed network, an xczu3eg chip-based MPSoC board that includes an AI accelerator is used as the edge device. To further improve inference throughput, multi-threading reduces the latency between the Processing System (PS) and Programmable Logic (PL). The proposed platform achieves a throughput of 55 frames per second (fps), which is sufficient for real-time object detection inference.

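The platform compresses SqueezeNet with post-training quantization before deployment on the MPSoC accelerator. The paper's FPGA toolchain is not reproduced here; the sketch below only illustrates the underlying idea of post-training affine quantization of a weight tensor to int8, with a random tensor standing in for a SqueezeNet convolution kernel.

```python
# Illustration of post-training affine quantization: pick a scale and zero-point
# from the observed value range, map float32 values to int8, and dequantize on use.
import numpy as np

def quantize_tensor(x, num_bits=8):
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize_tensor(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.randn(64, 3, 3, 3).astype(np.float32)   # stand-in conv kernel
q, scale, zp = quantize_tensor(weights)
err = np.abs(weights - dequantize_tensor(q, scale, zp)).max()
print(f"scale={scale:.5f} zero_point={zp} max abs error={err:.5f}")
```
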
Title: Fully Integrated CMOS Wideband Power Amplifier for Fifth Generation Mobile Communications
Authors: Bonghyuk Park, Hui-Dong Lee, Seunghyun Jang, Sunwoo Kong, Seunghun Wang, Jung-hwan Hwang
Venue: 2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), 2023-06-25
DOI: https://doi.org/10.1109/ITC-CSCC58803.2023.10212682
Abstract: This paper describes a power amplifier (PA) for frequency range 2 (FR2) of fifth-generation (5G) mobile networks, implemented with 65 nm bulk CMOS devices. With a two-stage cascode architecture, the PA achieves a small-signal gain of 29.2 dB, an output 1-dB compression power (OP1dB) of 20.35 dBm, and a power-added efficiency at peak power of 27.1% at 28 GHz under a 2.2 V supply voltage. At 29 GHz, the small-signal gain is 28.8 dB, the OP1dB is 20.29 dBm, and the power-added efficiency at peak power is 26.9% under the same 2.2 V supply voltage.

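The reported figures are related by the usual definitions of gain and power-added efficiency (PAE). The helper below only shows how PAE is computed from output power, gain, and DC power; the DC power used in the example is an arbitrary placeholder, not a value from the paper.

```python
# PAE = (P_out - P_in) / P_DC, with P_in implied by P_out and the gain.
def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10.0)

def pae_percent(p_out_dbm, gain_db, p_dc_mw):
    p_out = dbm_to_mw(p_out_dbm)
    p_in = dbm_to_mw(p_out_dbm - gain_db)    # input power implied by the gain
    return 100.0 * (p_out - p_in) / p_dc_mw

# Example with the reported 28 GHz output/gain figures and a placeholder 500 mW DC power.
print(f"{pae_percent(20.35, 29.2, 500.0):.1f} %")
```
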
Title: Multipath Cluster-Based Scatterer Recognition by Object Detection Techniques Using Panoramic Images
Authors: Inocent Calist, Minseok Kim
Venue: 2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), 2023-06-25
DOI: https://doi.org/10.1109/ITC-CSCC58803.2023.10212703
Abstract: Object detection is crucial in wireless communication, as it helps in predicting the channel behavior and the channel model parameters needed for efficient communication. In recent years, various object detection techniques have been proposed for this purpose, ranging from traditional statistical methods to deep-learning-based approaches. This paper provides a comprehensive review of object detection techniques for predicting wireless channel model parameters and discusses the advantages and limitations of frameworks such as YOLO, Faster R-CNN, and Mask R-CNN. Finally, the paper puts forward a conceptual introduction of a novel approach that uses the Faster R-CNN object detection technique and computer vision to predict scatterers and eventually estimate the characteristics of a wireless channel, and it highlights current challenges and future directions in object detection for wireless channel model parameter prediction. The dataset for training the deep learning model is generated from panoramic images of an example conference room environment. The proposed approach can be applied in various wireless communication scenarios, such as 5G and beyond, to accurately predict the locations of scatterers based on multipath clusters, so as to optimize network design and improve overall system performance.

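One concrete way to set up the proposed Faster R-CNN stage is to take a pre-trained detector from torchvision and replace its box predictor with one sized to the scatterer classes. The class list and the panoramic input below are illustrative assumptions; the paper only outlines the concept.

```python
# Re-heading a pre-trained Faster R-CNN for hypothetical scatterer classes.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

SCATTERER_CLASSES = ["background", "wall", "table", "chair", "monitor"]  # illustrative

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(SCATTERER_CLASSES))

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 1024)])   # one fake panoramic frame
print(detections[0].keys())                          # boxes, labels, scores
```
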
Title: Deep Reinforcement Learning Based Bus Stop-Skipping Strategy
Authors: Mau-Luen Tham, Bee-Sim Tay, Kok-Chin Khor, S. Phon-Amnuaisuk
Venue: 2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), 2023-06-25
DOI: https://doi.org/10.1109/ITC-CSCC58803.2023.10212607
Abstract: A stop-skipping strategy can benefit both bus operators and passengers if the control is intelligent enough to adapt to changes in passenger demand and traffic conditions. This is possible via deep reinforcement learning (DRL), where an agent learns the optimal policy by continuously interacting with the dynamic bus operating environment. In this paper, one express bus run followed by one no-skip run is treated as one episode for bus route optimization. The objective is to maximize passenger satisfaction while minimizing bus operator expenditure. To this end, a reward function is formulated from passenger waiting time, passenger in-vehicle time, and total bus travel time. By training a double deep Q-network (DDQN) agent, simulation results show that the agent can intelligently skip stations and outperform the no-skip method under different passenger distribution patterns.

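A minimal sketch of the two ingredients named in the abstract: a reward built from passenger waiting time, in-vehicle time, and total bus travel time, and the double-DQN target used to train the agent. The reward weights, state size, and network sizes are assumptions, not values from the paper.

```python
# Reward trade-off and double-DQN target, sketched with placeholder numbers.
import torch
import torch.nn as nn

W_WAIT, W_RIDE, W_BUS = 1.0, 0.5, 0.3    # illustrative weights, not from the paper

def reward(wait_time, in_vehicle_time, bus_travel_time):
    # Larger times mean lower satisfaction / higher cost, so the reward is negative.
    return -(W_WAIT * wait_time + W_RIDE * in_vehicle_time + W_BUS * bus_travel_time)

def ddqn_target(q_online, q_target, next_state, r, gamma=0.99, done=False):
    # Double DQN: the online net picks the next action, the target net evaluates it.
    with torch.no_grad():
        best_action = q_online(next_state).argmax(dim=1, keepdim=True)
        next_q = q_target(next_state).gather(1, best_action).squeeze(1)
    return r + gamma * next_q * (0.0 if done else 1.0)

state_dim, n_actions = 8, 2              # e.g. skip or serve the next stop
q_online = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
q_target = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
r = torch.tensor([reward(4.0, 10.0, 25.0)])
print(ddqn_target(q_online, q_target, torch.randn(1, state_dim), r))
```
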
Title: Sensor-Based Cattle Behavior Classification Using Deep Learning Approaches
Authors: S. Mekruksavanich, Ponnipa Jantawong, D. Tancharoen, A. Jitpattanakul
Venue: 2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), 2023-06-25
DOI: https://doi.org/10.1109/ITC-CSCC58803.2023.10212958
Abstract: The use of precision livestock farming has grown in response to the need for higher efficiency and productivity driven by high food demand. To ensure sustainable development and quality control of the inputs the industry requires, it is essential to monitor and classify cattle behavior. Sensor-based monitoring systems provide accurate information by capturing raw data and identifying behavior with machine learning and deep learning algorithms, allowing farmers to better understand the individual needs of their animals. This study presents a deep residual neural network for cattle behavior classification. The performance of the ResNeXt model was evaluated on a public real-world dataset collected from sensors attached to the necks of six Japanese Black beef cows. The experimental results show that the presented ResNeXt model achieved the highest average accuracy of 94.96% and the highest average F1-score of 93.66%, outperforming other baseline deep learning models and the current state-of-the-art model for cattle behavior classification.

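The abstract does not give the ResNeXt configuration used; the sketch below shows a generic 1D ResNeXt-style residual block (grouped convolutions plus a skip connection) as one might apply it to windows of neck-mounted sensor data. All sizes are illustrative assumptions.

```python
# Generic 1D ResNeXt-style block for windows of wearable-sensor signals.
import torch
import torch.nn as nn

class ResNeXtBlock1D(nn.Module):
    def __init__(self, channels, cardinality=8, bottleneck=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, bottleneck, kernel_size=1), nn.BatchNorm1d(bottleneck), nn.ReLU(),
            nn.Conv1d(bottleneck, bottleneck, kernel_size=3, padding=1, groups=cardinality),
            nn.BatchNorm1d(bottleneck), nn.ReLU(),
            nn.Conv1d(bottleneck, channels, kernel_size=1), nn.BatchNorm1d(channels))
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.body(x))    # residual connection

# A window of sensor data: (batch, channels, time steps); 32 channels after an
# assumed input stem, 128 samples per window.
x = torch.randn(16, 32, 128)
print(ResNeXtBlock1D(32)(x).shape)            # torch.Size([16, 32, 128])
```
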
Title: Neural Networks Input Techniques to Maintain a Small Skew Angle in Bit-Patterned Magnetic Recording with a V-Shaped Read-Head Array
Authors: K. A. Fatika, S. Koonkarnkhai, P. Kovintavewat, C. Warisarn
Venue: 2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), 2023-06-25
DOI: https://doi.org/10.1109/ITC-CSCC58803.2023.10212691
Abstract: The demand for high-capacity storage devices keeps increasing, driving the development of advanced technologies with vast storage capacity. Many related studies have optimized code designs and algorithms analytically; however, applying them to practical devices has been scarce. Meeting this demand brings two obstacles: two-dimensional interference and skew angle (SA). To address them, we propose an SA detection method for bit-patterned magnetic recording systems that computes a specific target from three readback sequences before estimating the SA value, and detects the amount of SA occurring in the system using a multilayer perceptron neural network. An error-correction code, low-density parity-check (LDPC), is applied, and its decoder outputs a log-likelihood ratio whose probability density distribution is examined. The simulation results show that the sliding-window technique can significantly improve bit error rate performance.

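A hedged sketch of the detection idea: features summarizing the three readback sequences feed a multilayer perceptron that predicts a discrete skew-angle amount. The feature definition, angle bins, and data below are placeholders, not the paper's setup.

```python
# Placeholder MLP skew-angle detector over features from three readback sequences.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 2000, 9            # e.g. 3 statistics per readback sequence
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 4, size=n_samples)     # 4 illustrative skew-angle bins

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print("held-out accuracy:", mlp.score(X_te, y_te))   # near chance here: labels are random
```
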