Title: Multi Event Localization by Audio-Visual Fusion with Omnidirectional Camera and Microphone Array
Authors: Wenru Zheng, Ryota Yoshihashi, Rei Kawakami, Ikuro Sato, Asako Kanezaki
Venue: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2023
DOI: https://doi.org/10.1109/CVPRW59228.2023.00255
Abstract: Audio-visual fusion is a promising approach for identifying multiple events occurring simultaneously at different locations in the real world. Previous studies on audio-visual event localization (AVE) were built on datasets with only monaural or stereo audio, making it hard to distinguish the direction of a sound when different sounds arrive from multiple locations. In this paper, we develop a multi-event localization method using multichannel audio and omnidirectional images. To take full advantage of the spatial correlation between the features of the two modalities, our method employs early fusion, which retains audio direction and background information in the images. We also created a new dataset of multi-label events containing around 660 omnidirectional videos with multichannel audio, which we use to demonstrate the effectiveness of the proposed method.
Title: Mitigating Catastrophic Interference using Unsupervised Multi-Part Attention for RGB-IR Face Recognition
Authors: Kshitij Nikhal, Nkiruka Uzuegbunam, Bridget Kennedy, B. Riggan
Venue: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2023
DOI: https://doi.org/10.1109/CVPRW59228.2023.00039
Abstract: Modern algorithms for RGB-IR facial recognition, a challenging problem in which infrared probe images are matched with visible gallery images, leverage precise and accurate guidance from curated (i.e., labeled) data to bridge large spectral differences. However, supervised cross-spectral face recognition methods are often extremely sensitive due to over-fitting to labels, performing well in some settings but not in others. Moreover, when fine-tuned on data from additional settings, supervised cross-spectral face recognition methods are prone to catastrophic forgetting. Therefore, we propose a novel unsupervised framework for RGB-IR face recognition that minimizes the cost and time of labeling the large-scale, multispectral data required to train supervised cross-spectral recognition methods, and that alleviates forgetting by removing the over-dependence on hard labels to bridge such large spectral differences. The proposed framework integrates an efficient backbone network architecture with part-based attention models, which collectively enhance the information common to visible and infrared faces. The framework is then optimized using pseudo-labels and a new cross-spectral memory bank loss. It is evaluated on the ARL-VTF and TUFTS datasets, achieving 98.55% and 43.28% true accept rate, respectively. Additionally, we analyze the effects of forgetting and show that our framework is less prone to them.
Title: Efficient Deep Models for Real-Time 4K Image Super-Resolution. NTIRE 2023 Benchmark and Report
Authors: Marcos V. Conde, Eduard Zamfir, R. Timofte, Daniel Motilla, Cen Liu, Zexin Zhang, Yunbo Peng, Yue Lin, Jiaming Guo, X. Zou, Yu-Yi Chen, Yi Liu, Jiangnan Hao, Youliang Yan, Yuan Zhang, Gen Li, Lei Sun, Lingshun Kong, Haoran Bai, Jin-shan Pan, Jiangxin Dong, Jinhui Tang, Mustafa Ayazoglu Bahri, Batuhan Bilecen, Mingxiu Li, Yuhang Zhang, Xianjun Fan, Yan Sheng, Long Sun, Zibin Liu, Weiran Gou, Sha Li, Ziyao Yi, Yan Xiang, Dehui Kong, Ke Xu, G. Gankhuyag, Kuk-jin Yoon, Jin Zhang, G. Yu, Feng Zhang, Hongbin Wang, Zhou Zhou, Jiahao Chao, Hong-Xin Gao, Jiali Gong, Zhengfeng Yang, Zhenbing Zeng, Cheng-An Chen, Zichao Guo, Anjin Park, Yu Qi, Hongyuan Jia, Xuan Yu, K. Yin, Dongyang Zuo, Zhang Ting, Zhengxue Fu, Cheng Shiai, Dajiang Zhu, Hong Zhou, Weichen Yu, Jiahua Dong, Yajun Zou, Zhuoyuan Wu, B. Han, Xiaolin Zhang, He Zhang, X. Yin, Benke Shao, Shaolong Zheng, Daheng Yin, Baijun Chen, Mengyang Liu, Marian-Sergiu Nistor, Yi-Chung Chen, Zhi-Kai Huang, Yuan Chiang, Wei-Ting Chen, Hao Yang, Hua-En Chang, I-Hsiang
Venue: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2023
DOI: https://doi.org/10.1109/CVPRW59228.2023.00154
Abstract: This paper introduces a novel benchmark for efficient upscaling as part of the NTIRE 2023 Real-Time Image Super-Resolution (RTSR) Challenge, which aimed to upscale images from 720p and 1080p resolution to native 4K (x2 and x3 factors) in real time on commercial GPUs. For this, we use a new test set containing diverse 4K images ranging from digital art to gaming and photography. We assessed the methods devised for 4K SR by measuring their runtime, parameters, and FLOPs, while ensuring a minimum PSNR fidelity over bicubic interpolation. Out of the 170 participants, 25 teams contributed to this report, making it the most comprehensive benchmark to date and showcasing the latest advancements in real-time SR.
{"title":"PanopticVis: Integrated Panoptic Segmentation for Visibility Estimation at Twilight and Night","authors":"Hidetomo Sakaino","doi":"10.1109/CVPRW59228.2023.00341","DOIUrl":"https://doi.org/10.1109/CVPRW59228.2023.00341","url":null,"abstract":"Visibility affects traffic flow and control on city roads, highways, and runways. Visibility distance or level is an important measure for predicting the risk on the road. Particularly, it is known that traffic accidents can be raised at foggy twilight and night. Cameras monitor visual conditions like fog. However, only a few papers have tackled such nighttime vision with visibility estimation. This paper proposes a Panoptic Segmentation-based foggy night visibility estimation integrating multiple Deep Learning models: DeepReject/Depth/ Scene/Vis/Fog using single images. We call PanopticVis. DeepFog is trained for no-fog and heavy fog. DeepVis for medium fog is trained by annotated visibility physical scales in a regression manner. DeepDepth is improved to be robust to strong local illumination. DeepScene panoptic-segments scenes with stuff and things, booted by Deep-Depth. DeepReject conducts adversarial visual conditions: strong illumination and darkness. Notably, the proposed multiple Deep Learning framework provides high efficiency in memory, cost, and easy-tomaintenance. Unlike previous synthetic test images, experimental results show the effectiveness of the proposed integrated multiple Deep Learning approaches for estimating visibility distances on real foggy night roads. The superiority of PanopticVis is demonstrated over state-of-the-art panoptic-based Deep Learning models in terms of stability, robustness, and accuracy.","PeriodicalId":355438,"journal":{"name":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130917114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Beyond AUROC & co. for evaluating out-of-distribution detection performance
Authors: Galadrielle Humblot-Renaux, Sergio Escalera, T. Moeslund
Venue: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2023
DOI: https://doi.org/10.1109/CVPRW59228.2023.00402
Abstract: While there has been growing research interest in developing out-of-distribution (OOD) detection methods, there has been comparably little discussion around how these methods should be evaluated. Given their relevance for safe(r) AI, it is important to examine whether the basis for comparing OOD detection methods is consistent with practical needs. In this work, we take a closer look at the go-to metrics for evaluating OOD detection, and question the approach of exclusively reducing OOD detection to a binary classification task with little consideration for the detection threshold. We illustrate the limitations of current metrics (AUROC and its friends) and propose a new metric, the Area Under the Threshold Curve (AUTC), which explicitly penalizes poor separation between ID and OOD samples. Scripts and data are available at https://github.com/glhr/beyond-auroc.
Title: Implicit Epipolar Geometric Function based Light Field Continuous Angular Representation
Authors: Lin Zhong, Bangcheng Zong, Qiming Wang, Junle Yu, Wenhui Zhou
Venue: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2023
DOI: https://doi.org/10.1109/CVPRW59228.2023.00349
Abstract: Light fields play an important role in many applications such as virtual reality, microscopy, and computational photography. However, low angular resolution limits their further application. Existing state-of-the-art light field angular super-resolution reconstruction methods can only achieve limited, fixed-scale angular super-resolution. This paper focuses on continuous, arbitrary-scale light field angular super-resolution by introducing an implicit neural representation into the light field two-plane parametrization. Specifically, we first formulate a 4D implicit epipolar geometric function for continuous angular representation of the light field. Since it is difficult and inefficient to learn this 4D implicit function directly, a divide-and-conquer learning strategy and a spatial-information-embedded encoder are proposed to convert the 4D implicit function learning into joint learning of 2D local implicit functions. Furthermore, we design a special epipolar geometric convolution block (EPIBlock) to encode the light field epipolar constraint information. Experiments on both synthetic and real-world light field datasets demonstrate that our method not only exhibits significant superiority in fixed-scale angular super-resolution, but also achieves arbitrary high-magnification light field super-resolution while maintaining a clear light field epipolar geometric structure.
{"title":"DeepSmooth: Efficient and Smooth Depth Completion","authors":"Sriram Krishna, Basavaraja Shanthappa Vandrotti","doi":"10.1109/CVPRW59228.2023.00338","DOIUrl":"https://doi.org/10.1109/CVPRW59228.2023.00338","url":null,"abstract":"Accurate and consistent depth maps are essential for numerous applications across domains such as robotics, Augmented Reality and others. High-quality depth maps that are spatially and temporally consistent enable tasks such as Spatial Mapping, Video Portrait effects and more generally, 3D Scene Understanding. Depth data acquired from sensors is often incomplete and contains holes whereas depth estimated from RGB images can be inaccurate. This work focuses on Depth Completion, the task of filling holes in depth data using color images. Most work in depth completion formulates the task at the frame level, individually filling each frame’s depth. This results in undesirable flickering artifacts when the RGB-D video stream is viewed as a whole and has detrimental effects on downstream tasks. We propose DeepSmooth, a model that spatio-temporally propagates information to fill in depth maps. Using an EfficientNet and pseudo 3D-Conv based architecture, and a loss function which enforces consistency across space and time, the proposed solution produces smooth depth maps.","PeriodicalId":355438,"journal":{"name":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131372058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Supervised Learning for Accurate Liver View Classification in Ultrasound Images with Minimal Labeled Data","authors":"Dr. Abder-Rahman Ali, A. Samir, Peng Guo","doi":"10.1109/CVPRW59228.2023.00310","DOIUrl":"https://doi.org/10.1109/CVPRW59228.2023.00310","url":null,"abstract":"Conventional B-mode \"grey scale\" medical ultrasound and shear wave elastography (SWE) are widely used for chronic liver disease diagnosis and risk stratification. Liver disease is very common and is clinically and socially important. As a result, multiple medical device manufacturers have proposed or developed AI systems for ultrasound image analysis. However, many abdominal ultrasound images do not include views of the liver, necessitating manual data curation for model development. To optimize the efficiency of real-time processing, a pre-processing liver view detection step is necessary before feeding the image to the AI system. Deep learning techniques have shown great promise for image classification, yet labeling large datasets for training classification models is timeconsuming and expensive. In this paper, we present a selfsupervised learning method for image classification that utilizes a large set of unlabeled abdominal ultrasound images to learn image representations. These representations are then applied on the downstream task of liver view classification, resulting in efficient classification and alleviation of the labeling burden. In comparison to two state-of-the-art (SOTA) models, ResNet-18 and MLP-Mixer, when trained for 100 epochs the proposed SimCLR+LR approach demonstrated outstanding performance when only labeling \"one\" image per class, achieving an accuracy similar to MLP-Mixer (86%) and outperforming the performance of ResNet-18 (70.2%), when trained on 854 (with liver: 495, without liver: 359) B-mode images. When trained on the whole dataset for 1000 epochs, SimCLR+LR and ResNet-18 achieved an accuracy of 98.7% and 79.3%, respectively. These findings highlight the potential of the SimCLR+LR approach as a superior alternative to traditional supervised learning methods for liver view classification. Our proposed method has the ability to reduce both the time and cost associated with data labeling, as it eliminates the need for human labor (i.e., SOTA performance achieved with only a small amount of labeled data). The approach could also be advantageous in scenarios where a subset of images with a particular organ needs to be extracted from a large dataset that includes images of various organs.","PeriodicalId":355438,"journal":{"name":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128820083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PerfHD: Efficient ViT Architecture Performance Ranking using Hyperdimensional Computing","authors":"Dongning Ma, Pengfei Zhao, Xun Jiao","doi":"10.1109/CVPRW59228.2023.00217","DOIUrl":"https://doi.org/10.1109/CVPRW59228.2023.00217","url":null,"abstract":"Neural Architecture Search (NAS) aims at identifying the optimal network architecture for a specific need in an automated manner, which serves as an alternative to the manual process of model development, selection, evaluation and performance estimation. However, evaluating performance of candidate architectures in the search space during NAS, which often requires training and ranking a mass amount of architectures, is often prohibitively computation-demanding. To reduce this cost, recent works propose to estimate and rank the architecture performance with-out actual training or inference. In this paper, we present PerfHD, an efficient-while-accurate architecture performance ranking approach using hyperdimensional computing for the emerging vision transformer (ViT), which has demonstrated state-of-the-art (SOTA) performance in vision tasks. Given a set of ViT models, PerfHD can accurately and quickly rank their performance solely based on their hyper-parameters without training. We develop two encoding schemes for PerfHD, Gram-based and Record-based, to encode the features from candidate ViT architecture parameters. Using the VIMER-UFO benchmark dataset of eight tasks from a diverse range of domains, we compare PerfHD with four SOTA methods. Experimental results show that PerfHD can rank nearly 100K ViT models in about just 1 minute, which is up to 10X faster than SOTA methods, while achieving comparable or even superior ranking accuracy. We open-source PerfHD in PyTorch implementation at https://github.com/VU-DETAIL/PerfHD.","PeriodicalId":355438,"journal":{"name":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127283041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Sparse-E2VID: A Sparse Convolutional Model for Event-Based Video Reconstruction Trained with Real Event Noise
Authors: Pablo Rodrigo Gantier Cadena, Yeqiang Qian, Chunxiang Wang, Ming Yang
Venue: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2023
DOI: https://doi.org/10.1109/CVPRW59228.2023.00437
Abstract: Event cameras are image sensors inspired by biology and offer several advantages over traditional frame-based cameras. However, most algorithms for reconstructing images from event camera data do not exploit the sparsity of events, resulting in inefficient processing of zero-filled data. Given that event data typically has a sparsity of 90% or higher, this is particularly wasteful. In this work, we propose a sparse model, Sparse-E2VID, that efficiently reconstructs event-based images, reducing inference time by 30%. Our model takes advantage of the sparsity of event data, making it more computationally efficient and better able to scale to higher resolutions. Additionally, by using data augmentation and real noise from an event camera, our model reconstructs nearly noise-free images. In summary, our proposed model efficiently and accurately reconstructs images from event camera data by exploiting the sparsity of events, which has the potential to greatly improve the performance of event-based applications, particularly at higher resolutions. Some results can be seen in the following video: https://youtu.be/sFH9zp6kuWE.