Latest publications from the 2022 IEEE International Conference on Consumer Electronics (ICCE)

Prediction-Guided Performance Improvement on Compressed Memory Swap
2022 IEEE International Conference on Consumer Electronics (ICCE) | Pub Date: 2022-01-07 | DOI: 10.1109/ICCE53296.2022.9730361
Taejoon Song, Myeongseong Kim, Gunho Lee, Youngjin Kim
Abstract: Due to ever-increasing demands for memory, the compressed memory swap technique has been widely deployed in consumer electronics. Although reducing data size effectively extends the available memory, compression inevitably brings computational overhead, and its effectiveness depends heavily on the compression ratio: when a significant amount of data is incompressible, compression adds overhead without any benefit. In this paper, we address this problem by skipping the compression of incompressible pages in an efficient manner. We propose a novel compression predictor that quickly and accurately estimates whether a page is compressible. Experimental results show that our predictor improves launch time by 29.5% on average with 97.4% accuracy.
Citations: 1
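The abstract does not disclose how the predictor works internally; as an illustration only, one common lightweight heuristic is to trial-compress a small sample of each page and extrapolate. The sketch below uses zlib at its fastest level; the sample size and ratio threshold are hypothetical tuning constants, not values from the paper.

```python
import os
import zlib

SAMPLE_SIZE = 512        # probe only part of the 4 KiB page (hypothetical)
RATIO_THRESHOLD = 0.75   # predict "incompressible" above this ratio (hypothetical)

def predict_compressible(page: bytes) -> bool:
    """Cheaply predict whether a page is worth compressing by
    trial-compressing a small sample at zlib's fastest level."""
    sample = page[:SAMPLE_SIZE]
    compressed = zlib.compress(sample, level=1)
    return len(compressed) / len(sample) < RATIO_THRESHOLD

# A zero-filled page compresses well; random bytes do not.
zero_page = bytes(4096)
random_page = os.urandom(4096)
```

A swap path would call such a predictor before compressing and store the page uncompressed when it returns False, avoiding wasted CPU cycles on incompressible data.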
A Fast CU Partition Decision Strategy for AVS3 Intra Coding
2022 IEEE International Conference on Consumer Electronics (ICCE) | Pub Date: 2022-01-07 | DOI: 10.1109/ICCE53296.2022.9730138
Changxin Chen, Xinjie Luo, Yunyao Yan, Guoqing Xiang, Peng Zhang, Xiaofeng Huang, Wei Yan
Abstract: The third-generation audio and video coding standard (AVS3) is the latest video coding standard developed by China. AVS3 allows flexible coding unit (CU) partitioning by applying quad-tree (QT), binary tree (BT), and extended quad-tree (EQT) partitioning structures, but this added flexibility brings a significant coding complexity cost. This paper proposes a fast CU partition decision method for AVS3 intra coding based on the gradient and the variance. Experimental results show that the proposed fast algorithm achieves a good balance between computational complexity and coding performance: compared with the AVS3 reference software HPM4.0, coding time is reduced by 43% while the average BD-Rate increases by only 0.9% in the All-Intra (AI) configuration.
Citations: 2
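The abstract names gradient and variance as the decision features but not the exact criteria; as a hedged sketch, early-terminating the split search when a block is smooth might look like the following, with both thresholds being hypothetical constants rather than the paper's values.

```python
import numpy as np

GRAD_THRESHOLD = 5.0   # hypothetical tuning constant
VAR_THRESHOLD = 10.0   # hypothetical tuning constant

def skip_partition(block: np.ndarray) -> bool:
    """Terminate CU splitting early for smooth blocks, using the mean
    gradient magnitude and the pixel variance as texture measures."""
    gy, gx = np.gradient(block.astype(np.float64))
    mean_grad = float(np.mean(np.hypot(gx, gy)))
    return mean_grad < GRAD_THRESHOLD and float(block.var()) < VAR_THRESHOLD
```

A flat block passes both tests and skips the expensive QT/BT/EQT search; a textured block fails and proceeds to full rate-distortion evaluation.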
Automatic Segmentation of Infant Brain Ventricles with Hydrocephalus in MRI Based on Deep Multi-path Learning
2022 IEEE International Conference on Consumer Electronics (ICCE) | Pub Date: 2022-01-07 | DOI: 10.1109/ICCE53296.2022.9730469
Hikari Jinbo, Y. Iwamoto, M. Nonaka, Yen-Wei Chen
Abstract: Hydrocephalus in infants is a condition in which cerebrospinal fluid accumulates in the brain ventricles and the ventricles expand abnormally; the enlarged ventricles can cause brain damage by compressing other brain tissue. Extracting the ventricles with minimal burden is crucial for early detection and postoperative follow-up. However, automatic segmentation of infant brain ventricles with hydrocephalus is challenging, especially because the ventricles take complicated and diverse shapes, and because preparing a large amount of annotated data is difficult, so training must proceed from a small amount of data; accurate segmentation with conventional deep learning is therefore hard to achieve. In this study, we propose a deep multi-path learning approach: we develop three deep learning models for the axial, sagittal, and coronal planes, then integrate their results to obtain the final segmentation. Despite the minimal amount of data, the proposed method captures rich features, and its segmentation accuracy improves from 74.3% to 81.1% compared with the related method.
Citations: 1
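The abstract says the three plane-specific models' results are integrated but not how; one plausible late-fusion rule, sketched here purely as an assumption, averages the per-voxel foreground probabilities and thresholds the mean.

```python
import numpy as np

def fuse_multipath(p_axial, p_sagittal, p_coronal, threshold=0.5):
    """Average per-voxel foreground probabilities from three
    plane-specific models, then threshold to a binary mask."""
    mean_prob = (np.asarray(p_axial) + np.asarray(p_sagittal)
                 + np.asarray(p_coronal)) / 3.0
    return (mean_prob >= threshold).astype(np.uint8)
```

Averaging lets a confident prediction in one plane compensate for uncertainty in another, which is one motivation for training per-plane models on scarce 3D data.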
A luminance control method for OLED burn-in prevention using user information
2022 IEEE International Conference on Consumer Electronics (ICCE) | Pub Date: 2022-01-07 | DOI: 10.1109/ICCE53296.2022.9730535
Hong-Kyu Shin, Hea-Bin Yang, S. Baek, Sung-Jea Ko
Abstract: In this paper, we propose a novel luminance control method that utilizes user information (UI) to prevent burn-in in organic light-emitting diode (OLED) displays. To control the luminance of a burn-in potential region (BPR), we first calculate the visual angle of each pixel on the display using the UI and design a visual acuity model based on cortical magnification theory. Next, we generate a UI-based saliency map by combining the visual acuity model with an image saliency map. Finally, the luminance of the BPR is adjusted based on the UI-based saliency map and the local contrast of the BPR. Experimental results show that the proposed method not only controls the luminance of the BPR to a degree that users barely notice but also greatly extends the lifetime of the OLED.
Citations: 0
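The combination of the visual acuity model with the image saliency map is described only at a high level; the sketch below assumes an elementwise product for the combination and a bounded dimming factor, both of which are illustrative choices rather than the paper's formulas.

```python
import numpy as np

def ui_saliency(acuity_map, image_saliency):
    """Combine visual-acuity weights with an image saliency map
    (elementwise product, normalized to [0, 1])."""
    s = np.asarray(acuity_map) * np.asarray(image_saliency)
    peak = s.max()
    return s / peak if peak > 0 else s

def dim_bpr(luminance, bpr_mask, saliency, max_dim=0.2):
    """Dim pixels inside the burn-in potential region (BPR), reducing
    low-saliency pixels more, by at most a max_dim fraction."""
    scale = 1.0 - max_dim * (1.0 - np.asarray(saliency))
    return np.where(bpr_mask, np.asarray(luminance) * scale, luminance)
```

The intent mirrors the abstract: pixels the user is unlikely to attend to can be dimmed aggressively to slow OLED degradation, while salient pixels are left nearly untouched.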
Robust Noise Canceller Algorithm with SNR-Based Stepsize Control and Gain Adjustment
2022 IEEE International Conference on Consumer Electronics (ICCE) | Pub Date: 2022-01-07 | DOI: 10.1109/ICCE53296.2022.9730207
A. Sugiyama
Abstract: This paper proposes a robust noise canceller algorithm with SNR-based stepsize control and gain adjustment. Using estimated SNRs for stepsize control reduces interference from the target signal during adaptation. A second SNR estimate, the output over an adjusted reference input, initially controls the stepsize to promote coefficient growth; once coefficient growth saturates, control switches to a first SNR estimate, the output over the noise replica. The power gap between the reference input and the noise to be cancelled is adjusted by a factor estimated during an initial period. Evaluations with clean speech and noise recorded at a busy station demonstrate that the coefficient error of the proposed algorithm is as much as 8 dB smaller than without gain adjustment, whereas conventional algorithms exhibit an initial increase in coefficient error and never reach the switchover state at a high SNR.
Citations: 1
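The paper's exact control law is not given in the abstract; the general idea of SNR-based stepsize control can be sketched with a standard NLMS update whose stepsize shrinks as the estimated SNR rises. The monotone rule and constants below are assumptions for illustration, not the proposed algorithm.

```python
import numpy as np

def snr_scaled_stepsize(snr_db, mu_max=1.0):
    """Shrink the stepsize as the estimated SNR rises, so the target
    signal interferes less with adaptation (a simple monotone rule,
    not the paper's exact control law)."""
    return mu_max / (1.0 + 10.0 ** (snr_db / 10.0))

def nlms_step(w, x, d, snr_db, eps=1e-8):
    """One NLMS update of filter w on the reference block x toward the
    primary-input sample d; returns the updated filter and the error."""
    e = d - w @ x
    mu = snr_scaled_stepsize(snr_db)
    return w + mu * e * x / (x @ x + eps), e
```

At low SNR (noise dominates the output) the stepsize stays near mu_max and coefficients grow quickly; at high SNR (target speech present) adaptation nearly freezes, which is the behavior the abstract's stepsize control aims for.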
Frame Rate Up-Conversion for HDR Video Using Dual Exposure Camera
2022 IEEE International Conference on Consumer Electronics (ICCE) | Pub Date: 2022-01-07 | DOI: 10.1109/ICCE53296.2022.9730369
Hak-Hun Choi, Jae-Won Kim, Hojae Lee
Abstract: In this paper, we propose a new frame rate up-conversion (FRUC) method for HDR video using a dual exposure camera that captures alternating frames at two exposure levels (short and long). First, optical flows are estimated by the proposed convolutional neural network (CNN)-based motion estimation (ME), which analyzes the dual-exposed frames. Then, using the estimated true motions and motion occlusion maps, an HDR frame is interpolated while suppressing the ghost artifacts caused by HDR fusion. Finally, FRUC is performed by reusing the motions and occlusion maps. Experimental results indicate that the proposed method effectively eliminates ghost artifacts while increasing the frame rate.
Citations: 0
Zenbo on Zoom: Evaluating the Human-Robot Interaction User Experience in a Video Conferencing Session
2022 IEEE International Conference on Consumer Electronics (ICCE) | Pub Date: 2022-01-07 | DOI: 10.1109/ICCE53296.2022.9730259
Curtis L Gittens, Damian Garnes
Abstract: The COVID-19 pandemic has restricted the ability of HRI researchers to undertake face-to-face user studies while obeying social and physical distancing mandates. In this pilot study, we evaluated the quality of the user experience reported by undergraduate CS/IT students after two online interactions with a social robot over the Zoom video conferencing system. Our results showed that there was nothing inherently detrimental to performing HRI user studies online. Indeed, based on these preliminary results, researchers conducting HRI user studies online can be more confident that the online interaction modality does not negatively affect their results.
Citations: 2
Identification of Peritonitis Using Two-Stream Deep Spatial-Temporal Convolutional Networks
2022 IEEE International Conference on Consumer Electronics (ICCE) | Pub Date: 2022-01-07 | DOI: 10.1109/ICCE53296.2022.9730265
Toshiki Kawahara, Akitoshi Inoue, Y. Iwamoto, Bolorkh Batsaikhan, Svohei Chatani, Akira Furukawa, Yen-Wei Chen
Abstract: Cine magnetic resonance imaging (MRI) analysis methods are used to evaluate intestinal peristalsis. However, evaluating peristalsis from MRI is subjective, time-consuming, and not reproducible, which is recognized as an important issue to be addressed. In our previous work, we used a deep optical flow network (DOFN) to extract spatial-temporal features of intestinal movement and differentiate peritonitis from normal peristalsis. However, because the DOFN is based on the image difference between two neighboring frames, it lacks texture and spatial information of the small bowel. To solve this problem, this paper proposes a two-stream deep spatial-temporal convolutional network (two-stream DSTCN) consisting of an optical flow stream (i.e., the DOFN) and a dynamic image stream. The proposed method improves our DOFN by introducing the dynamic image stream to extract spatial-temporal features from cine MR images; the final result is obtained by average fusion of the two streams, improving accuracy by about 3%.
Citations: 0
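The dynamic image stream is not specified in detail in the abstract; a common construction, sketched here as an assumption, collapses a clip into one image via a rank-pooling-style weighted sum of frames, with the two streams' class probabilities average-fused at the end as the abstract states.

```python
import numpy as np

def approx_dynamic_image(frames):
    """Collapse a clip into a single 'dynamic image' with a linear
    weighting that emphasizes later frames (one common approximation
    of rank pooling: weight_t = 2t - T - 1 for t = 1..T)."""
    frames = np.asarray(frames, dtype=np.float64)
    T = frames.shape[0]
    weights = 2 * np.arange(1, T + 1) - T - 1
    return np.tensordot(weights, frames, axes=1)

def fuse_streams(p_flow, p_dynamic):
    """Average-fuse class probabilities from the two streams."""
    return (np.asarray(p_flow) + np.asarray(p_dynamic)) / 2.0
```

Because the weights sum to zero, a motionless clip yields a zero dynamic image, while moving structures leave a signed trace that preserves texture, complementing the frame-difference-based optical flow stream.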
Position detection for lost items finding system using LoRa devices in large building
2022 IEEE International Conference on Consumer Electronics (ICCE) | Pub Date: 2022-01-07 | DOI: 10.1109/ICCE53296.2022.9730134
Natsumi Shoji, K. Ohno
Abstract: This paper discusses an indoor location finding method using LoRa devices in a large building. To find the position of a lost item, the RSSI at the receiver is measured. Measuring the RSSI on each floor reveals the floor on which the item is located and whether the item lies outside a room, inside a room, or in a steel box. Moreover, a directional antenna is used to detect the angle to the item.
Citations: 0
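The floor and placement decisions described above can be sketched from RSSI alone: pick the floor with the strongest signal, and infer the enclosure from how much the signal is attenuated. The dBm thresholds below are hypothetical, not measurements from the paper.

```python
def detect_floor(rssi_by_floor: dict) -> int:
    """Return the floor whose receiver reports the strongest RSSI (dBm)."""
    return max(rssi_by_floor, key=rssi_by_floor.get)

def classify_placement(rssi_dbm: float,
                       open_thr: float = -80.0,
                       room_thr: float = -100.0) -> str:
    """Coarsely classify the item's enclosure from signal attenuation
    (thresholds are hypothetical illustrative values)."""
    if rssi_dbm > open_thr:
        return "outside room"
    if rssi_dbm > room_thr:
        return "inside room"
    return "steel box"
```

A steel box attenuates the LoRa signal far more than interior walls do, which is why a single RSSI reading can separate the three cases once thresholds are calibrated for the building.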
Big Data Edge on Consumer Devices for Precision Medicine
2022 IEEE International Conference on Consumer Electronics (ICCE) | Pub Date: 2022-01-07 | DOI: 10.1109/ICCE53296.2022.9730484
Jake Stauffer, Qingxue Zhang
Abstract: Consumer electronics such as smartphones and wearable computers are significantly furthering precision medicine by capturing and leveraging big data on the edge for real-time, interactive healthcare applications. Here we propose a big data edge platform that can not only capture and manage different biomedical dynamics but also enable real-time visualization of the big data, which can additionally be uploaded to the cloud for long-term management. The system has been evaluated on a real-world biomechanical data application and demonstrated its effectiveness in big data management and interactive visualization. This study is expected to greatly advance big data-driven precision medicine applications.
Citations: 0