Latest publications: 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)

A Tipping Point? Heightened self-disclosure during the Coronavirus pandemic
2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI) Pub Date : 2020-10-01 DOI: 10.1109/CogMI50398.2020.00040
A. Squicciarini, Sarah Rajtmaier, Prasanna Umar, Taylor Blose
{"title":"A Tipping Point? Heightened self-disclosure during the Coronavirus pandemic","authors":"A. Squicciarini, Sarah Rajtmaier, Prasanna Umar, Taylor Blose","doi":"10.1109/CogMI50398.2020.00040","DOIUrl":"https://doi.org/10.1109/CogMI50398.2020.00040","url":null,"abstract":"The COVID-19 crisis has raised numerous concerns amongst the privacy community, as research indicates emerging privacy risks. Here we frame those risks and focus, in particular, on what we have observed to be increased rate of self-disclosure on online social media. That is, individuals are sharing more personal information online in what appears to be an effort to stay connected with others during isolation and cope with additional stress. We outline a research agenda to further explore this finding and highlight the potential for this crisis to serve as a tipping point for self-disclosure norms more generally.","PeriodicalId":360326,"journal":{"name":"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129428040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
A Hybrid Text Classification and Language Generation Model for Automated Summarization of Dutch Breast Cancer Radiology Reports
2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI) Pub Date : 2020-10-01 DOI: 10.1109/CogMI50398.2020.00019
Elisa Nguyen, Daphne Theodorakopoulos, Shreyasi Pathak, Jeroen Geerdink, O. Vijlbrief, M. V. Keulen, C. Seifert
{"title":"A Hybrid Text Classification and Language Generation Model for Automated Summarization of Dutch Breast Cancer Radiology Reports","authors":"Elisa Nguyen, Daphne Theodorakopoulos, Shreyasi Pathak, Jeroen Geerdink, O. Vijlbrief, M. V. Keulen, C. Seifert","doi":"10.1109/CogMI50398.2020.00019","DOIUrl":"https://doi.org/10.1109/CogMI50398.2020.00019","url":null,"abstract":"Breast cancer diagnosis is based on radiology reports describing observations made from medical imagery, such as X-rays obtained during mammography. The reports are written by radiologists and contain a conclusion summarizing the observations. Manually summarizing the reports is time-consuming and leads to high text variability. This paper investigates the automated summarization of Dutch radiology reports. We propose a hybrid model consisting of a language model (encoder-decoder with attention) and a separate BI-RADS score classifier. The summarization model achieved a ROUGE-L F1 score of 51.5% on the Dutch reports, which is comparable to results in other languages and other domains. For the BI-RADS classification, the language model (accuracy 79.1 %) was outperformed by a separate classifier (accuracy 83.3 %), leading us to propose a hybrid approach for radiology report summarization. Our qualitative evaluation with experts found the generated conclusions to be comprehensible and to cover mostly relevant content, and the main focus for improvement should be their factual correctness. While the current model is not accurate enough to be employed in clinical practice, our results indicate that hybrid models might be a worthwhile direction for future research.","PeriodicalId":360326,"journal":{"name":"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126806735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
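The reported ROUGE-L F1 of 51.5% is an LCS-based overlap score between each generated conclusion and the radiologist's reference conclusion. As a point of reference, here is a minimal, generic implementation, assuming plain whitespace tokenization; it is not the authors' evaluation code, and the example strings are hypothetical.

```python
# Minimal ROUGE-L F1 sketch: longest-common-subsequence overlap between a
# generated conclusion and a reference conclusion (generic, not the paper's code).

def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, tok_a in enumerate(a, 1):
        for j, tok_b in enumerate(b, 1):
            if tok_a == tok_b:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate: str, reference: str) -> float:
    """ROUGE-L F1: harmonic mean of LCS-based precision and recall."""
    cand, ref = candidate.split(), reference.split()
    if not cand or not ref:
        return 0.0
    lcs = lcs_length(cand, ref)
    precision, recall = lcs / len(cand), lcs / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example with hypothetical sentences (not taken from the Dutch reports):
print(rouge_l_f1("geen aanwijzingen voor maligniteit birads 2",
                 "geen aanwijzingen voor maligniteit birads 1"))
```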
Low-shot Learning in Natural Language Processing
2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI) Pub Date : 2020-10-01 DOI: 10.1109/CogMI50398.2020.00031
Congying Xia, Chenwei Zhang, Jiawei Zhang, Tingting Liang, Hao Peng, Philip S. Yu
{"title":"Low-shot Learning in Natural Language Processing","authors":"Congying Xia, Chenwei Zhang, Jiawei Zhang, Tingting Liang, Hao Peng, Philip S. Yu","doi":"10.1109/CogMI50398.2020.00031","DOIUrl":"https://doi.org/10.1109/CogMI50398.2020.00031","url":null,"abstract":"This paper study the low-shot learning paradigm in Natural Language Processing (NLP), which aims to provide the ability that can adapt to new tasks or new domains with limited annotation data, like zero or few labeled examples. Specifically, Low-shot learning unifies the zero-shot and few-shot learning paradigm. Diverse low-shot learning approaches, including capsule-based networks, data-augmentation methods, and memory networks, are discussed for different NLP tasks, for example, intent detection and named entity typing. We also provide potential future directions for low-shot learning in NLP.","PeriodicalId":360326,"journal":{"name":"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132651470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
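To make the zero-shot end of the low-shot spectrum concrete, the sketch below classifies an utterance by embedding similarity to the intent label names themselves, with no labeled examples. This is only an illustrative baseline in the spirit of the survey, not a method from the paper; it assumes the sentence-transformers package and the all-MiniLM-L6-v2 checkpoint are available, and the intent labels are hypothetical.

```python
# Zero-shot intent detection sketch: pick the intent label whose embedding is
# closest to the utterance embedding. Illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

intent_labels = ["book a flight", "play music", "check the weather"]  # hypothetical intents
label_emb = model.encode(intent_labels, convert_to_tensor=True)

def zero_shot_intent(utterance: str) -> str:
    """Return the intent label most similar to the utterance (cosine similarity)."""
    utt_emb = model.encode(utterance, convert_to_tensor=True)
    scores = util.cos_sim(utt_emb, label_emb)[0]
    return intent_labels[int(scores.argmax())]

print(zero_shot_intent("will it rain in Chicago tomorrow?"))  # expected: "check the weather"
```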
Functional Perceptron using Multi-dimensional Activation Functions
2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI) Pub Date : 2020-10-01 DOI: 10.1109/CogMI50398.2020.00012
Chungheon Yi, Wonik Choi, Youngjun Jeon, Ling Liu
{"title":"Functional Perceptron using Multi-dimensional Activation Functions","authors":"Chungheon Yi, Wonik Choi, Youngjun Jeon, Ling Liu","doi":"10.1109/CogMI50398.2020.00012","DOIUrl":"https://doi.org/10.1109/CogMI50398.2020.00012","url":null,"abstract":"We propose a new perceptron, called functional perceptron, which consists of a multi-dimensional activation function capable of learning a specific function. The functional perceptron does not use the traditional activation functions such as Sigmoid and ReLU. Instead, the proposed perceptron trains a function in a multi-dimensional space to accomplish a specific functionality and uses it as the learning task specific activation function. To realize this perceptron, we teach a comparison functionality to a multi-dimensional function by training two comparable inputs and producing a value of similarity as output. In order to show the efficacy of the functional perceptron, we apply the proposed perceptron to the XOR problem, the IRIS classification problem and an indoor positioning problem based on multi-signal fingerprints. Extensive experiments show that the proposed perceptron achieves about 96% accuracy in the IRIS classification and shows 1.737m accuracy in indoor positioning problem.","PeriodicalId":360326,"journal":{"name":"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128977574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
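As a rough illustration of replacing a fixed scalar activation with a trainable, multi-dimensional one, the sketch below trains a unit whose activation is itself a small learnable network on the XOR task. This is an assumption-laden interpretation, not the authors' architecture; the dimensions, optimizer, and training schedule are arbitrary choices.

```python
# Hypothetical "functional perceptron" sketch: a multi-dimensional pre-activation
# followed by a trainable activation function, trained end-to-end on XOR.
import torch
import torch.nn as nn

class FunctionalPerceptron(nn.Module):
    def __init__(self, in_dim: int, act_dim: int = 8):
        super().__init__()
        self.pre = nn.Linear(in_dim, act_dim)           # multi-dimensional pre-activation
        self.learned_act = nn.Sequential(               # trainable activation function
            nn.Linear(act_dim, act_dim), nn.Tanh(), nn.Linear(act_dim, 1)
        )

    def forward(self, x):
        return torch.sigmoid(self.learned_act(self.pre(x)))

x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])              # XOR targets

model = FunctionalPerceptron(in_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(model(x).detach().round().squeeze())              # should recover XOR: [0., 1., 1., 0.]
```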
Promoting High Diversity Ensemble Learning with EnsembleBench
2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI) Pub Date : 2020-10-01 DOI: 10.1109/CogMI50398.2020.00034
Yanzhao Wu, Ling Liu, Zhongwei Xie, Juhyun Bae, Ka-Ho Chow, Wenqi Wei
{"title":"Promoting High Diversity Ensemble Learning with EnsembleBench","authors":"Yanzhao Wu, Ling Liu, Zhongwei Xie, Juhyun Bae, Ka-Ho Chow, Wenqi Wei","doi":"10.1109/CogMI50398.2020.00034","DOIUrl":"https://doi.org/10.1109/CogMI50398.2020.00034","url":null,"abstract":"Ensemble learning is gaining renewed interests in recent years. This paper presents EnsembleBench, a holistic framework for evaluating and recommending high diversity and high accuracy ensembles. The design of EnsembleBench offers three novel features: (1) EnsembleBench introduces a set of quantitative metrics for assessing the quality of ensembles and for comparing alternative ensembles constructed for the same learning tasks. (2) EnsembleBench implements a suite of baseline diversity metrics and optimized diversity metrics for identifying and selecting ensembles with high diversity and high quality, making it an effective framework for benchmarking, evaluating and recommending high diversity model ensembles. (3) Four representative ensemble consensus methods are provided in the first release of EnsembleBench, enabling empirical study on the impact of consensus methods on ensemble accuracy. A comprehensive experimental evaluation on popular benchmark datasets demonstrates the utility and effectiveness of EnsembleBench for promoting high diversity ensembles and boosting the overall performance of selected ensembles.","PeriodicalId":360326,"journal":{"name":"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125690633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
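EnsembleBench's specific metrics are not spelled out in the abstract, but the flavor of a diversity measure can be shown with the classical pairwise disagreement statistic and a brute-force search for the most diverse member subset. The sketch below is illustrative only and does not reproduce EnsembleBench's baseline or optimized metrics.

```python
# Pairwise-disagreement diversity sketch and exhaustive selection of the most
# diverse ensemble subset (illustrative; not EnsembleBench's implementation).
from itertools import combinations
import numpy as np

def pairwise_disagreement(preds: np.ndarray) -> float:
    """preds: (n_models, n_samples) array of predicted class labels.
    Returns the average fraction of samples on which a pair of models disagree."""
    pairs = list(combinations(range(preds.shape[0]), 2))
    return float(np.mean([(preds[i] != preds[j]).mean() for i, j in pairs]))

def most_diverse_team(preds: np.ndarray, team_size: int):
    """Exhaustively pick the subset of models with the highest pairwise disagreement."""
    best = max(combinations(range(preds.shape[0]), team_size),
               key=lambda team: pairwise_disagreement(preds[list(team)]))
    return best, pairwise_disagreement(preds[list(best)])

# Toy predictions from 4 models on 6 samples (hypothetical labels).
preds = np.array([[0, 1, 1, 0, 2, 2],
                  [0, 1, 1, 0, 2, 2],   # identical to model 0 -> contributes no diversity
                  [1, 1, 0, 0, 2, 1],
                  [0, 2, 1, 1, 0, 2]])
print(most_diverse_team(preds, team_size=3))
```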
Online Intelligent Music Recommendation: The Opportunity and Challenge for People Well-Being Improvement
2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI) Pub Date : 2020-10-01 DOI: 10.1109/CogMI50398.2020.00014
Jialie Shen, K. Rafferty, Jia Jia
{"title":"Online Intelligent Music Recommendation: The Opportunity and Challenge for People Well-Being Improvement","authors":"Jialie Shen, K. Rafferty, Jia Jia","doi":"10.1109/CogMI50398.2020.00014","DOIUrl":"https://doi.org/10.1109/CogMI50398.2020.00014","url":null,"abstract":"In recent decades, fast development of contemporary digital entertainment and Internet technology has dramatically changed how people produce and consume music. This demands design and development of smart online music recommendation. In this paper, four important research (information retrieval and machine learning related) directions including content descriptor generation, personalization, playlist optimization and performance evaluation are identified and discussed to achieve effective online music recommendation. Based on a comprehensive analysis of the directions, we critically review and discuss the opportunities and challenges for IR research and emerging implications for real world practice. To further demonstrate its promises, we present case study on applying VenueMusic system to support people well-being improvement.","PeriodicalId":360326,"journal":{"name":"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130568221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Unseen Filler Generalization In Attention-based Natural Language Reasoning Models
2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI) Pub Date : 2020-10-01 DOI: 10.1109/CogMI50398.2020.00016
Chin-Hui Chen, Yi-Fu Fu, Hsiao-Hua Cheng, Shou-de Lin
{"title":"Unseen Filler Generalization In Attention-based Natural Language Reasoning Models","authors":"Chin-Hui Chen, Yi-Fu Fu, Hsiao-Hua Cheng, Shou-de Lin","doi":"10.1109/CogMI50398.2020.00016","DOIUrl":"https://doi.org/10.1109/CogMI50398.2020.00016","url":null,"abstract":"Recent natural language reasoning models have achieved human-level accuracy on several benchmark datasets such as bAbI. While the results are impressive, in this paper we argue by experiment analysis that several existing attention-based models have a hard time generalizing themselves to handle name entities not seen in the training data. We thus propose Unseen Filler Generalization (UFG) as a task along with two new datasets to evaluate the filler generalization capability of a natural language reasoning model. We also propose a simple yet general strategy that can be applied to various models to handle the UFG challenge through modifying the entity occurrence distribution in the training data. Such strategy allows the model to encounter unseen entities during training, and thus not to overfit to only a few specific name entities. Our experiments show that this strategy can significantly boost the filler generalization capability of three existing models including Entity Network, Working Memory Network, and Universal Transformers.","PeriodicalId":360326,"journal":{"name":"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)","volume":"163 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127395758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
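The proposed strategy of modifying the entity occurrence distribution in the training data suggests a simple augmentation: rename the fillers in each training story with names drawn from a larger pool so that no single entity dominates. The sketch below shows that idea on a bAbI-style toy example; the name pool, the helper function, and the story are hypothetical and not taken from the paper's datasets.

```python
# Filler-reshuffling augmentation sketch: each pass over a story replaces its
# entities with fresh names, flattening the entity occurrence distribution.
import random

NAME_POOL = ["Mary", "John", "Sandra", "Daniel", "Priya", "Wei", "Amara", "Luca"]

def reshuffle_fillers(text: str, entities: list, rng: random.Random) -> str:
    """Replace each entity in `text` with a distinct name not already in `entities`."""
    pool = [n for n in NAME_POOL if n not in entities]
    replacements = dict(zip(entities, rng.sample(pool, len(entities))))
    for old, new in replacements.items():
        text = text.replace(old, new)
    return text

rng = random.Random(0)
story = "Mary went to the kitchen. John picked up the apple. Where is John?"
for _ in range(3):  # each epoch sees a differently-named copy of the same story
    print(reshuffle_fillers(story, entities=["Mary", "John"], rng=rng))
```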
Message from the IEEE CogMI 2020 General Chairs and PC Chairs
2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI) Pub Date : 2020-10-01 DOI: 10.1109/cogmi50398.2020.00005
{"title":"Message from the IEEE CogMI 2020 General Chairs and PC Chairs CogMI 2020","authors":"","doi":"10.1109/cogmi50398.2020.00005","DOIUrl":"https://doi.org/10.1109/cogmi50398.2020.00005","url":null,"abstract":"","PeriodicalId":360326,"journal":{"name":"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114710003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Deep Learning Mechanism for Pervasive Internet Addiction Prediction
2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI) Pub Date : 2020-10-01 DOI: 10.1109/CogMI50398.2020.00011
Zonyin Shae, J. Tsai
{"title":"Deep Learning Mechanism for Pervasive Internet Addiction Prediction","authors":"Zonyin Shae, J. Tsai","doi":"10.1109/CogMI50398.2020.00011","DOIUrl":"https://doi.org/10.1109/CogMI50398.2020.00011","url":null,"abstract":"This paper outlines a visionary approach for Internet addiction prediction mechanism suitable for large scale population deployment. Internet addiction detection and treatment is traditionally an area of psychology research which focus on the Internet addition symptom detection and intervention by way of self-answer questionnaire design and psychologist interview that is not suitable for large scale population. This paper proposes a mechanism from the computer science AI deep learning aspect which evaluates the efficacy of the questionnaire and then transfer the questionnaire into the label data for deep learning model. By way of collecting the users' APP and web browsing behaviors as well as the bioinformatics data sets, AI model can be built not only for the detection, but also for prediction. An extensive discussion about the issues and open questions are also provided.","PeriodicalId":360326,"journal":{"name":"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)","volume":"172 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115693105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
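One concrete reading of transforming questionnaire responses into label data is to threshold a questionnaire total score into a binary label and train a model on passively collected usage features. The sketch below illustrates that pipeline on synthetic data; the cut-off, the feature names, and the data are all hypothetical, since the paper describes the mechanism as a vision rather than an implementation.

```python
# Hypothetical pipeline: questionnaire score -> binary label, usage features -> classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users = 500

# Hypothetical per-user usage features: daily screen hours, night-time sessions, app switches.
X = rng.normal(loc=[4.0, 3.0, 60.0], scale=[2.0, 2.0, 25.0], size=(n_users, 3))

# Hypothetical questionnaire total score; higher means more reported symptoms.
score = 20 + 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 8, n_users)
y = (score >= 70).astype(int)        # assumed cut-off turning scores into binary labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```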
End-to-End Learning from Noisy Crowd to Supervised Machine Learning Models
2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI) Pub Date : 2020-10-01 DOI: 10.1109/CogMI50398.2020.00013
Taraneh Younesian, Chi Hong, Amirmasoud Ghiassi, R. Birke, L. Chen
{"title":"End-to-End Learning from Noisy Crowd to Supervised Machine Learning Models","authors":"Taraneh Younesian, Chi Hong, Amirmasoud Ghiassi, R. Birke, L. Chen","doi":"10.1109/CogMI50398.2020.00013","DOIUrl":"https://doi.org/10.1109/CogMI50398.2020.00013","url":null,"abstract":"Labeling real-world datasets is time consuming but indispensable for supervised machine learning models. A common solution is to distribute the labeling task across a large number of non-expert workers via crowd-sourcing. Due to the varying background and experience of crowd workers, the obtained labels are highly prone to errors and even detrimental to the learning models. In this paper, we advocate using hybrid intelligence, i.e., combining deep models and human experts, to design an end-to-end learning framework from noisy crowd-sourced data, especially in an on-line scenario. We first summarize the state-of-the-art solutions that address the challenges of noisy labels from non-expert crowd and learn from multiple annotators. We show how label aggregation can benefit from estimating the annotators' confusion matrices to improve the learning process. Moreover, with the help of an expert labeler as well as classifiers, we cleanse aggregated labels of highly informative samples to enhance the final classification accuracy. We demonstrate the effectiveness of our strategies on several image datasets, i.e. UCI and CIFAR-10, using SVM and deep neural networks. Our evaluation shows that our on-line label aggregation with confusion matrix estimation reduces the error rate of labels by over 30%. Furthermore, relabeling only 10% of the data using the expert's results in over 90% classification accuracy with SVM.","PeriodicalId":360326,"journal":{"name":"2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130743660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
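Estimating annotators' confusion matrices for label aggregation is classically done with Dawid-Skene-style expectation-maximization; the compact sketch below shows that standard offline formulation for illustration. It does not reproduce the paper's on-line aggregation or its expert-relabeling loop, and the toy votes matrix is hypothetical.

```python
# Dawid-Skene-style EM sketch: jointly estimate per-annotator confusion matrices
# and per-item label posteriors from redundant crowd labels.
import numpy as np

def dawid_skene(votes: np.ndarray, n_classes: int, n_iter: int = 50):
    """votes: (n_items, n_annotators) with class ids, or -1 where an annotator skipped."""
    n_items, n_annot = votes.shape
    # Initialize label posteriors T with a soft majority vote.
    T = np.zeros((n_items, n_classes))
    for i in range(n_items):
        for a in range(n_annot):
            if votes[i, a] >= 0:
                T[i, votes[i, a]] += 1
    T /= T.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class priors and confusion matrices pi[annotator, true, observed].
        prior = T.mean(axis=0)
        pi = np.full((n_annot, n_classes, n_classes), 1e-6)
        for a in range(n_annot):
            for i in range(n_items):
                if votes[i, a] >= 0:
                    pi[a, :, votes[i, a]] += T[i]
            pi[a] /= pi[a].sum(axis=1, keepdims=True)
        # E-step: posterior over true labels given all observed votes.
        logT = np.tile(np.log(prior), (n_items, 1))
        for i in range(n_items):
            for a in range(n_annot):
                if votes[i, a] >= 0:
                    logT[i] += np.log(pi[a, :, votes[i, a]])
        T = np.exp(logT - logT.max(axis=1, keepdims=True))
        T /= T.sum(axis=1, keepdims=True)
    return T.argmax(axis=1), pi

# Toy example: 3 annotators, the third is less reliable; -1 marks a missing vote.
votes = np.array([[0, 0, 1],
                  [1, 1, 0],
                  [0, 0, 0],
                  [1, -1, 0],
                  [0, 0, 1]])
labels, confusion = dawid_skene(votes, n_classes=2)
print(labels)
```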