{"title":"Air Quality Measurement Device Using Programmable Quadcopter Drone Towards Internet of Drone Things","authors":"N. Karna, Deriel Laska Lubna, S. Shin","doi":"10.1109/ICTC52510.2021.9621039","DOIUrl":"https://doi.org/10.1109/ICTC52510.2021.9621039","url":null,"abstract":"Air pollution is a condition in which air quality is damaged and contaminated either by harmful or harmless substances for living beings. That is the reason why smart cities monitor the air quality. However, the installation for air quality measurement system is mainly in an area where there is a lot of pollution from traffics. This research proposes air quality measurement system over the ground using programmable quadcopter drone towards Internet of Drone Things. The sensors used to measure the air quality are MQ-135 and DHT22. NodeMCU is used to process and send the measurement value to Firebase, which can be further monitored by smartphone in real time. Air quality measurement test was carried out in two places, location 1 is the one with quiet environment and surrounded by trees, and location 2 is a very busy place surrounded with construction sites, each with 3 different altitudes (0, 3, and 5 meters) and 4 different sampling time (09.00, 12.00, 16.00, and 21.00). The system shows that higher altitude (5 meters) gives better air quality index compared to on the ground measurement (0 meter). 
Morning time gives better air quality index compared to other sampling time, even at nighttime, especially where there are lots of tree surrounding the environment.","PeriodicalId":299175,"journal":{"name":"2021 International Conference on Information and Communication Technology Convergence (ICTC)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117134265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance Maximization of Satellite SAR image Processing using Reinforcement Learning","authors":"Kyeongrok Kim, H. Yang, Tony Q. S. Quek, Jae-Hyun Kim","doi":"10.1109/ICTC52510.2021.9620754","DOIUrl":"https://doi.org/10.1109/ICTC52510.2021.9620754","url":null,"abstract":"Synthetic aperture radar (SAR) observes a wide area during the mission in space and synthesizes the acquired data into the image of the specific area in a ground station. One scene of SAR is composed of several hundreds of kilometers for one minute observation. In a ground station, the image processing time takes few hours for one scene. Therefore, an efficient method, considering the link time of satellite SAR and ground station, is of necessity to reduce the idle computing time. In this paper, we propose a method that achieves performance maximization of SAR image processing. The proposed method considers the active resource using reinforcement learning at the separated ground stations. We analyze the predefined satellite route and select processing level according to the link time. For the performance maximization, we set a reward at the available area which can process the data, and a penalty at the idle area in our reinforcement learning model. The simulation result shows the optimal list of processing levels for avoiding idle computing. 
In addition, the proposed method guarantees 18% of performance improvements.","PeriodicalId":299175,"journal":{"name":"2021 International Conference on Information and Communication Technology Convergence (ICTC)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127260642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Blockchain-based Personal Data Trading System using Decentralized Identifiers and Verifiable Credentials","authors":"Dae-yeob Yoon, S. Moon, Kisung Park, Sungkee Noh","doi":"10.1109/ICTC52510.2021.9621153","DOIUrl":"https://doi.org/10.1109/ICTC52510.2021.9621153","url":null,"abstract":"As the needs for personal data increase due to the advent of the AI era, many companies are collecting their users' data and using it to advance the service. As the use of personal data increases, the value of personal data also increases. Although these valuable personal data are generated by individuals, only centralized service providers get profit from the data. In this paper, we propose a blockchain-based personal data trading system using DID (Decentralized Identifiers) and VC (Verifiable Credentials). Our proposed system allows users to collect personal data in their own data storage provided by the system. DID and VC are used to authenticate the user's identity and to prove ownership of the data without any centralized systems, respectively. The integrity of the traded data and the history of the transactions are ensured by Hyperledger Fabric, which is a decentralized infrastructure composed of consortium blockchain nodes. We show how our system works by implementing the monitoring system that provides the current status of the user's data and trading. 
We verify that two end entities including a seller and a buyer can complete personal data trading by using our proposed system without centralized service providers.","PeriodicalId":299175,"journal":{"name":"2021 International Conference on Information and Communication Technology Convergence (ICTC)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124844397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-feature based Object Classification using Flexible Gloves inspired by Human Grasping","authors":"Yu-Lim Min, Yun Jeong Kim, Jeong Nam Kim, Hye-jin Kim","doi":"10.1109/ICTC52510.2021.9621089","DOIUrl":"https://doi.org/10.1109/ICTC52510.2021.9621089","url":null,"abstract":"We present high accuracy object classification using flexible gloves and machine learning algorithms. The flexible gloves are designed with two flex sensors mounted on finger joints and two FSR sensors inside fingertips. When grasping an object, electrical signals are acquired from physically deformed sensors. In this paper, the key features of objects are extracted from the mean and standard deviation values of the sensing signal waveforms. We prepared four sets of blocks for classification and each of them had a different size and weight. As a result, we demonstrated the accuracy of the object classification can be achieved 100 % using the multi-featured sensing dataset acquired by the flexible glove. The multi-featured classification method which combines the flexible gloves and machine learning technology shows a great potential application such as visual impairment aid and human-machine interface.","PeriodicalId":299175,"journal":{"name":"2021 International Conference on Information and Communication Technology Convergence (ICTC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124909384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Volume Reconstruction from MRI Slices based on VTK","authors":"Jakhongir Nodirov, A. Abdusalomov, T. Whangbo","doi":"10.1109/ICTC52510.2021.9621022","DOIUrl":"https://doi.org/10.1109/ICTC52510.2021.9621022","url":null,"abstract":"In today's fast-advancing world, Deep learning brought the huge potential to the healthcare system and it still undergoes different amazing new techniques. New automatic brain tumor segmentation models have been realized. As a result, it is being much more affordable and faster to save lives. However, most of the tumor detection works are still being conducted with 2D single slices of brain image, although, there are new 3D CNN [1] models with more benefits. Those 3D models enable to scan of brain images in 3d volume. 2D models accept only single slices as input and they innately fail to use context from neighboring slices. Missed voxel data from contiguous slices might affect the detection of tumors and decrease the accuracy of the model. 3D models address this issue by utilizing 3D convolutional kernels to make predictions from volumetric inputs. The capacity to use interslice features can increase the further performance of the model. Therefore, in practice, 3D volumes enable to obtain much more efficient and clear diagnoses. 
İn this paper we purpose our new 3D MRI reconstruction algorithm based on VTK toolkit [3].","PeriodicalId":299175,"journal":{"name":"2021 International Conference on Information and Communication Technology Convergence (ICTC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125838148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved Early Exiting Activation to Accelerate Edge Inference","authors":"Junyong Park, Jong-Ryul Lee, Yong-Hyuk Moon","doi":"10.1109/ICTC52510.2021.9621109","DOIUrl":"https://doi.org/10.1109/ICTC52510.2021.9621109","url":null,"abstract":"As mobile & edge devices are getting powerful, on-device deep learning is becoming a reality. However, there are still many challenges for deep learning edge inferences, such as limited resources such as computing power, memory space, and energy. To address these challenges, model compression such as channel pruning, low rank representation, network quantization, and early exiting has been introduce to reduce the computational load of neural networks at a whole. In this paper, we propose an improved method of implementing early exiting branches on a pre-defined neural network, so that it can determine whether the input data is easy to process, therefore use less resource to execute the task. Our method starts with an entire search for activations in a given network, then inserting early exiting modules, testing those early exit branches, resulting in selecting useful branches that are both accurate and fast. Our contribution is reducing the computing time of neural networks by breaking the flow of models using execution branches. Additionally, by testing on all activations in neural network, we gain knowledge of the neural network model and insight on where to place the ideal early exit auxiliary classifier. 
We test on ResNet model and show reduction in real computation time on single input images.","PeriodicalId":299175,"journal":{"name":"2021 International Conference on Information and Communication Technology Convergence (ICTC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125857023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"History-Aware Adaptive Route Update Scheme for Low-Power and Lossy Networks","authors":"Zulqar Nain, Arslan Musaddiq, Yazdan Ahmad Qadri, S. Kim","doi":"10.1109/ICTC52510.2021.9621062","DOIUrl":"https://doi.org/10.1109/ICTC52510.2021.9621062","url":null,"abstract":"Sudden link failure degrades the network performance in the Internet of Things, which comprises energy-constrained sensors. At the network layer, the trickle timer algorithm is responsible to propagate the route updates across the network. The trickle timer manages the transmission of control messages by increasing the control message transmission frequency after detecting inconsistency in the network, and if the network is consistent, it reduces the transmission rate. The transmission rate is regulated by maintaining a redundancy coefficient parameter. Optimizing the control traffic transmission is an active research area that aims to reduce the network traffic overhead and power consumption, which directly affects the network lifetime. The control traffic can be optimized more efficiently by optimal selection of redundancy coefficient value. This article proposes a History-Aware Adaptive Trickle (HAAT) algorithm. HAAT algorithm selects the optimal redundancy coefficient value based on the history of DODAG information object transmissions and suppression corresponding to every redundancy coefficient value. 
The simulation results indicate that the proposed HAAT algorithm improves the network performance compared to other state-of-the-art mechanisms.","PeriodicalId":299175,"journal":{"name":"2021 International Conference on Information and Communication Technology Convergence (ICTC)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126123816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UAV Detection Using Split-Parallel CNN For Surveillance Systems","authors":"Ali Aouto, Jae-Min Lee, Dong-Seong Kim","doi":"10.1109/ICTC52510.2021.9620862","DOIUrl":"https://doi.org/10.1109/ICTC52510.2021.9620862","url":null,"abstract":"Commercial drones have become available to everyone with different sizes and shapes. Many are equipped with cameras and some with signal sabotage devices, the scariest scenario is that there are websites that offers weapons which can be attached to the drone. All those security threats either for privacy matters or people's safety, encouraged the researchers to find an intelligent system that can be implemented into the surveillance systems to classify unauthorized UAVs that are flying in a restricted area. This paper proposes a system that detects UAVs by acquiring RGB images via sensor then apply them to a convolutional neural network (CNN) that behave as an object classifier. Proposing Split-Parallel Cross Stage Partial DenseNet (PCSPDensenet) that is built from a modified CSPDenseNet. By splitting the feature map in two parts. Then, make each part flow in different side of the parallel network. 
The proposed network shows simulation results of an increment in the precision and showed higher $AP_{50}$ and $AP_{75}$ at higher frame rate on the UAV dataset With lower computational complexity.","PeriodicalId":299175,"journal":{"name":"2021 International Conference on Information and Communication Technology Convergence (ICTC)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123239976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nasolabial Wrinkle Segmentation Based on Nested Convolutional Neural Network","authors":"Sabina Umirzakova, T. Whangbo","doi":"10.1109/ICTC52510.2021.9620886","DOIUrl":"https://doi.org/10.1109/ICTC52510.2021.9620886","url":null,"abstract":"Wrinkles one of the common structures on human faces. Their detection is often challenging to effectively cope with skin images and can be an important step for many different applications. Skin wrinkle segmentation play an important role in face-feature analysis and assessing the beneficial effects of dermatological and cosmetic anti-aging treatments. Existing approaches of the image-based analysis of wrinkle extraction performance, which usually decreased because of weakness of wrinkle edges and similarity to the surrounding skin. In this paper, nested convolution neural network is applied to extract nasolabial wrinkles from facial images. In addition we applied a structure of deep encoder - decoder style network suitable for nasolabial wrinkle extraction. The proposed nested network, shows state-of-the-art results obtained an accuracy of 98.9%, which demonstrate novelness of this method","PeriodicalId":299175,"journal":{"name":"2021 International Conference on Information and Communication Technology Convergence (ICTC)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123472634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OSTGazeNet: One-stage Trainable 2D Gaze Estimation Network","authors":"Heeyoung Joo, Min-Soo Ko, Hyok Song","doi":"10.1109/ICTC52510.2021.9620812","DOIUrl":"https://doi.org/10.1109/ICTC52510.2021.9620812","url":null,"abstract":"Gaze estimation refers to estimating the user's gaze information, such as the gaze direction. Recently, various deep learning-based methods for gaze estimation which are robust to lighting conditions or occlusions have been introduced. Previously proposed methods for gaze estimation were mainly composed of 2 different steps, one is for localizing the eye landmarks and another is for regressing the gaze direction. In this paper, we propose a novel one-stage trainable 2D gaze estimation network, namely One-stage Trainable 2D Gaze Estimation Network(OSTGazeNet), in which the localization of eye landmarks and the regression of the 2D gaze direction vector are integrated into the one-stage trainable deep learning network. OSTGazeNet used Stacked Hourglass Network as a backbone network, and the pixel coordinates of eye landmarks in 2D image space and a normalized gaze direction vector in the spherical coordinate system are estimated simultaneously in OSTGazeNet. About the learning of the network, we used synthetic eye images dataset named UnityEyes for training and also used an unconstrained eye images dataset named MPIIGaze for the evaluation. We performed experiments to determine the hyperparameters of learning and used mean square error as a performance metric. 
The best performance was a mean square error of 0.038838, and the inference time was 42 FPS.","PeriodicalId":299175,"journal":{"name":"2021 International Conference on Information and Communication Technology Convergence (ICTC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123773100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}