J. Vis. Lang. Comput.: Latest Publications

Ceiling-Vision-Based Mobile Object Self-Localization: A Composite Framework
J. Vis. Lang. Comput. Pub Date : 2021-12-10 DOI: 10.18293/jvlc2021-n2-019
A. Cuzzocrea, Luca Camilotti, E. Mumolo
{"title":"Ceiling-Vision-Based Mobile Object Self-Localization: A Composite Framework","authors":"A. Cuzzocrea, Luca Camilotti, E. Mumolo","doi":"10.18293/jvlc2021-n2-019","DOIUrl":"https://doi.org/10.18293/jvlc2021-n2-019","url":null,"abstract":"Self-localization of mobile objects is a fundamental requirement for autonomy. Mobile objects can be for example a mobile service robot, a motorized wheelchair, a mobile cart for transporting tasks or similar. Self-localization represents as well a necessary feature to develop systems able to perform autonomous movements such as navigation tasks. Self-localization is based upon reliable information coming from sensor devices situated on the mobile objects. There are many sensors available for that purpose. The early devices for positioning are rotary encoders. If the encoders are connected to wheels or legs movement actuators, relative movements of the mobile object during its path [3] can be measured. Then, mobile object positioning can be obtained with dead-reckoning approaches. Dead reckoning [3] is still widely used for mobile robot positioning estimation. It is also true that dead-reckoning is quite unreliable for long navigation tasks, because of accumulated error problems.","PeriodicalId":275847,"journal":{"name":"J. Vis. Lang. Comput.","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117212770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Supporting Emotion Automatic Detection and Analysis over Real-Life Text Corpora via Deep Learning: Model, Methodology and Framework
J. Vis. Lang. Comput. Pub Date : 2021-12-10 DOI: 10.18293/jvlc2021-n2-016
A. Cuzzocrea, Giosuè Lo Bosco, Mariano Maiorana, G. Pilato, Daniele Schicchi
{"title":"Supporting Emotion Automatic Detection and Analysis over Real-Life Text Corpora via Deep Learning: Model, Methodology and Framework","authors":"A. Cuzzocrea, Giosuè Lo Bosco, Mariano Maiorana, G. Pilato, Daniele Schicchi","doi":"10.18293/jvlc2021-n2-016","DOIUrl":"https://doi.org/10.18293/jvlc2021-n2-016","url":null,"abstract":"aiDEA Lab, University of Calabria, Rende, Italy & LORIA, Nancy, France bDipartimento di Matematica e Informatica, Universitá degli Studi di Palermo, Via Archirafi 34, 90123 Palermo, Italy cDipartimento SIT, Istituto Euro-Mediterraneo di Scienza e Tecnologia, Via Michele Miraglia 20, 90139 Palermo dCluster Reply SRL, Via Robert Kock 1/4, 20152 Milano, Italy eCNR, Istituto di Calcolo e Reti ad Alte Prestazioni, Consiglio Nazionale delle Ricerche, Via Ugo La Malfa 153, 90146 Palermo, Italy fCNR, Istituto di Tecnologie Didattiche , Via Ugo La Malfa 153, 90146, Palermo, Italy","PeriodicalId":275847,"journal":{"name":"J. Vis. Lang. Comput.","volume":"100 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121014206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Case Study of Testing an Image Recognition
J. Vis. Lang. Comput. Pub Date : 2021-07-09 DOI: 10.18293/seke2021-194
Chuanqi Tao, Dongyu Cao, Hongjing Guo, J. Gao
{"title":"A Case Study of Testing an Image Recognition","authors":"Chuanqi Tao, Dongyu Cao, Hongjing Guo, J. Gao","doi":"10.18293/seke2021-194","DOIUrl":"https://doi.org/10.18293/seke2021-194","url":null,"abstract":"High-quality Artificial intelligence (AI) software in different domains, like image recognition, has been widely emerged in people’s daily life. They are built on machine learning models to implement intelligent features. However, the current research on image recognition software rarely discusses test questions, clear quality requirements, and evaluation methods. The quality of image recognition applications becomes more and more prominent. A three-dimensional(3D) classification decision table can help users to conduct classification-based test requirement analysis and modeling for any given mobile apps powered with AI functions in detection, classification, and prediction. This paper presents a case study of a realistic image recognition application called Calorie Mama using manual testing and automation testing with a 3D decision table. The study results indicate the proposed method is feasible and effective in quality evaluation.","PeriodicalId":275847,"journal":{"name":"J. Vis. Lang. Comput.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133509308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Effective and Efficient Machine-Learning-Based Framework for Supporting Event Detection and Analysis in Complex Environments
J. Vis. Lang. Comput. Pub Date : 2020-07-15 DOI: 10.18293/jvlc2020-n1-023
A. Cuzzocrea, E. Mumolo
{"title":"An Effective and Efficient Machine-Learning-Based Framework for Supporting Event Detection and Analysis in Complex Environments","authors":"A. Cuzzocrea, E. Mumolo","doi":"10.18293/jvlc2020-n1-023","DOIUrl":"https://doi.org/10.18293/jvlc2020-n1-023","url":null,"abstract":"In this paper we describe a falls detection and classification algorithm for discriminating falls from daily life activities using a MEMS accelerometer. The algorithm is based on a shallow Neural Network with three hidden layers, used as fall/non fally classifier, trained with daily life activities features and fall features. The novelty of this algorithm is that synthetic falls are generated as multivariate random Gaussian features, so only real daily life features must be collected during some day of normal living. Moreover, the features related to synthetic fall events are generated as complement of normal features. First of all, the features acquired during daily life are clustered by Principal Component Analysis and no Fall activities shall be recorded. The complement set of the normal features is found and used as a mask for Monte Carlo generation of synthetic fall. The two feature sets, namely the features recorded from daily life activities and those artificially generated are used to train the Neural Network. This approach is suitable for a practical utilization of a Neural Network based fall detection characterized by high Recall-Precision rate.","PeriodicalId":275847,"journal":{"name":"J. Vis. Lang. Comput.","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131588020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Framework for Intrusion Detection Targeted at Non-Expert Users
J. Vis. Lang. Comput. Pub Date : 2020-07-15 DOI: 10.18293/jvlc2020-n1-014
Bernardo Breve, Stefano Cirillo, V. Deufemia
{"title":"A Framework for Intrusion Detection Targeted at Non-Expert Users","authors":"Bernardo Breve, Stefano Cirillo, V. Deufemia","doi":"10.18293/jvlc2020-n1-014","DOIUrl":"https://doi.org/10.18293/jvlc2020-n1-014","url":null,"abstract":"The wide spreading of the Internet leads to the born of a whole interconnected world. Among all these devices, smart voice assistants are gaining particular attention thanks to their ease of use, allowing users to comfortably deploy commands for controlling other devices. The simplicity of use of voice assistants allowed non-expert to interact with complex systems, leading to that category of users with limited knowledge, to interact with s without being perfectly aware of the risks they are exposed to. For example, common network monitoring systems are so useful as they are complex to use for non-expert users. This paper presents a framework for intrusion detection specifically designed to be used by any category of users, using visual interfaces for simplifying the user interaction with the framework, allowing him/her to properly configure and run an Intrusion Detection System (IDS). The implementation of voice assistants as a communication channel will further improve the overall user experience.","PeriodicalId":275847,"journal":{"name":"J. Vis. Lang. Comput.","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124667516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Innovative System for Supporting Acquisition and Reproduction of Gestures in Storytelling Humanoid Robots
J. Vis. Lang. Comput. Pub Date : 2020-07-15 DOI: 10.18293/jvlc2020-n1-017
A. Augello, Angelo Ciulla, A. Cuzzocrea, S. Gaglio, G. Pilato, Filippo Vella
{"title":"An Innovative System for Supporting Acquisition and Reproduction of Gestures in Storytelling Humanoid Robots","authors":"A. Augello, Angelo Ciulla, A. Cuzzocrea, S. Gaglio, G. Pilato, Filippo Vella","doi":"10.18293/jvlc2020-n1-017","DOIUrl":"https://doi.org/10.18293/jvlc2020-n1-017","url":null,"abstract":"The work describes a module that has been implemented for being included in a social humanoid robot architecture, in particular a storyteller robot, named NarRob. This module gives a humanoid robot the capability of mimicking and acquiring the motion of a human user in real-time. This allows the robot to increase the population of his dataset of gestures. The module relies on a Kinect based acquisition setup. The gestures are acquired by observing the typical gesture displayed by humans. The movements are then annotated by several evaluators according to their particular meaning, and they are organized considering a specific typology in the knowledge base of the robot. The properly annotated gestures are then used to enrich the narration of the stories. During the narration, the robot semantically analyses the textual content of the story in order to detect meaningful terms in the sentences and emotions that can be expressed. This analysis drives the choice of the gesture that accompanies the sentences when the story is read.","PeriodicalId":275847,"journal":{"name":"J. Vis. Lang. Comput.","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115551198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Collaborative E-learning Environments with Cognitive Computing and Big Data
J. Vis. Lang. Comput. Pub Date : 2019-09-08 DOI: 10.18293/jvlc2019n1-007
M. Coccoli, P. Maresca, A. Molinari
{"title":"Collaborative E-learning Environments with Cognitive Computing and Big Data","authors":"M. Coccoli, P. Maresca, A. Molinari","doi":"10.18293/jvlc2019n1-007","DOIUrl":"https://doi.org/10.18293/jvlc2019n1-007","url":null,"abstract":"The actual scenario of e-learning environments and techniques is fast-changing from both the technology side and the users’ perspective. In this vein, applications and services as well as methodologies are evolving rapidly, running after the more recent innovations and thus adopting distributed cloud architectures to provide the most advanced solutions. In this situation, two influential technological factors emerge: the former is cognitive computing, which can provide learners and teachers with innovative services enhancing the whole learning process, also introducing improvements in human-machine interactions; the latter is a new wave of big data derived from heterogeneous sources, which impacts on educational tasks and acts as enabler for the development of new analytics-based models, for both management activities and education tasks. Concurrently, from the side of learning techniques, these phenomena are revamping collaborative models so that we should talk about communities rather than classrooms. In these circumstances, it seems that current Learning Management Systems (LMS) may need a redesign. In this respect, the paper outlines the evolutionary trends of Technology-Enhanced Learning (TEL) environments and presents the results achieved within two experiences carried on in two Italian universities. © 2019 KSI Research h","PeriodicalId":275847,"journal":{"name":"J. Vis. Lang. Comput.","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122075479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Run-Time Adaptation of Modalities of Interaction and Context of Use in Web MobileApps
J. Vis. Lang. Comput. Pub Date : 2019-09-08 DOI: 10.18293/jvlc2019-n2-011
Danilo Camargo Bueno, L. Zaina
{"title":"Run-Time Adaptation of Modalities of Interaction and Context of Use in Web MobileApps","authors":"Danilo Camargo Bueno, L. Zaina","doi":"10.18293/jvlc2019-n2-011","DOIUrl":"https://doi.org/10.18293/jvlc2019-n2-011","url":null,"abstract":"","PeriodicalId":275847,"journal":{"name":"J. Vis. Lang. Comput.","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131954344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Graphical Animations of the Suzuki-Kasami Distributed Mutual Exclusion Protocol
J. Vis. Lang. Comput. Pub Date : 2019-07-08 DOI: 10.18293/DMSVIVA2019-012
D. Bui, K. Ogata
{"title":"Graphical Animations of the Suzuki-Kasami Distributed Mutual Exclusion Protocol","authors":"D. Bui, K. Ogata","doi":"10.18293/DMSVIVA2019-012","DOIUrl":"https://doi.org/10.18293/DMSVIVA2019-012","url":null,"abstract":"","PeriodicalId":275847,"journal":{"name":"J. Vis. Lang. Comput.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121602541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Optimizing type-specific instrumentation on the JVM with reflective supertype information
J. Vis. Lang. Comput. Pub Date : 2018-12-01 DOI: 10.1016/j.jvlc.2018.10.007
Andrea Rosà, Walter Binder
{"title":"Optimizing type-specific instrumentation on the JVM with reflective supertype information","authors":"Andrea Rosà, Walter Binder","doi":"10.1016/j.jvlc.2018.10.007","DOIUrl":"https://doi.org/10.1016/j.jvlc.2018.10.007","url":null,"abstract":"","PeriodicalId":275847,"journal":{"name":"J. Vis. Lang. Comput.","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"118104247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6