{"title":"Augmented reality display of neurosurgery craniotomy lesions based on feature contour matching","authors":"Hao Zhang, Qi-Yuan Sun, Zhen-Zhong Liu","doi":"10.1049/ccs2.12021","DOIUrl":"10.1049/ccs2.12021","url":null,"abstract":"<p>Traditional neurosurgical craniotomy primarily uses two-dimensional cranial medical images to estimate the location of a patient’s intracranial lesions. Such work relies on the experience and skills of the doctor and may result in accidental injury to important intracranial physiological tissues. To help doctors more intuitively determine patient lesion information and improve the accuracy of surgical route formulation and craniotomy safety, an augmented reality method for displaying neurosurgery craniotomy lesions based on feature contour matching is proposed. This method uses threshold segmentation and region growing algorithms to reconstruct a 3-D Computed tomography image of the patient’s head. The augmented reality engine is used to adjust the reconstruction model’s relevant parameters to meet the doctor’s requirements and determine the augmented reality matching method for feature contour matching. By using the mobile terminal to align the real skull model, the virtual lesion model is displayed. Using the designed user interface, doctors can view the patient’s personal information and can zoom in, zoom out, and rotate the virtual model. Therefore, the patient’s lesions information can be visualized accurately, which provides a visual basis for preoperative preparation.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"3 3","pages":"221-228"},"PeriodicalIF":0.0,"publicationDate":"2021-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12021","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129447576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction of instantaneous likeability of advertisements using deep learning","authors":"Dipayan Saha, S.M.Mahbubur Rahman, Mohammad Tariqul Islam, M. Omair Ahmad, M.N.S. Swamy","doi":"10.1049/ccs2.12022","DOIUrl":"10.1049/ccs2.12022","url":null,"abstract":"<p>The degree to which advertisements are successful is of prime concern for vendors in highly competitive global markets. Given the astounding growth of multimedia content on the internet, online marketing has become another form of advertising. Researchers consider advertisement likeability a major predictor of effective market penetration. An algorithm is presented to predict how much an advertisement clip will be liked with the aid of an end-to-end audiovisual feature extraction process using cognitive computing technology. Specifically, the usefulness of different spatial and time-domain deep-learning architectures such as convolutional neural and long short-term memory networks is investigated to predict the frame-by-frame instantaneous and root mean square likeability of advertisement clips. A data set named the ‘BUET Advertisement Likeness Data Set’, containing annotations of frame-wise likeability scores for various categories of advertisements, is also introduced. Experiments with the developed database show that the proposed algorithm performs better than existing methods in terms of commonly used performance indices at the expense of slightly increased computational complexity.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"3 3","pages":"263-275"},"PeriodicalIF":0.0,"publicationDate":"2021-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12022","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122800371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and research of a robotic system for ultrasonic-assisted lamellar keratoplasty","authors":"Jingjing Xiao, Mengqiong Li, Chiming Wang, Jun Pi, Hui He","doi":"10.1049/ccs2.12020","DOIUrl":"10.1049/ccs2.12020","url":null,"abstract":"<p>In order to solve the problem of uncontrollable cutting depth and the rough incision edge of the cornea with manual trephine in lamellar keratoplasty, an ultrasonic-assisted corneal trephination method has been proposed for the first time in accordance with the advantage of ultrasonic vibration cutting, and the corresponding robotic system has been designed and researched. According to the traditional process of lamellar keratoplasty, the requirements of the surgical robotic system were first proposed. On this basis, the robotic system was designed and its schematic diagram was introduced. Second, the key components of the robotic body such as the eccentric adjusting mechanism and the end-effector of ultrasonic scalpel were illustrated, which can realise corneal trephination of different incision diameters without scalpel replacement. Then the operation flow chart of a robot-assisted lamellar keratoplasty was put forward. Finally, the preliminary verified experiments were performed using a grape and a porcine eyeball, respectively, in vitro with the prototype system. The results show that the robotic system can basically satisfy the operation requirements of lamellar keratoplasty. Owing to the less cutting force and smoother corneal incision edge of ultrasonic-assisted lamellar keratoplasty compared with manual trephine, it was proved to be more feasible and superior.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"3 4","pages":"297-306"},"PeriodicalIF":0.0,"publicationDate":"2021-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12020","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126896621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Questioning ‘what makes us human’: How audiences react to an artificial intelligence–driven show","authors":"Rob Eagle, Rik Lander, Phil D. Hall","doi":"10.1049/ccs2.12018","DOIUrl":"10.1049/ccs2.12018","url":null,"abstract":"<p>I am Echoborg is promoted as ‘a show created afresh each time by the audience in conversation with an artificial intelligence (AI)’. The show demonstrates how AI in a creative and performance context can raise questions about the technology’s ethical use for persuasion and compliance, and how humans can reclaim agency. This audience study focuses on a consecutive three-night run in Bristol, UK in October 2019. The different outcomes of each show illustrate the unpredictability of audience interactions with conversational AI and how the collective dynamic of audience members shapes each performance. This study analyses (1) how I am Echoborg facilitates audience cocreation in a live performance context, (2) the show’s capacity to provoke nuanced understandings of the potential for AI and (3) the ability for intelligent technology to facilitate social interaction and group collaboration. This audience study demonstrates how the show inspires debate beyond binary conclusions (i.e. AI as good or bad) and how audiences can understand potential creative uses of AI, including as a tool for cocreating entertainment <i>with</i> (not just for) them.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"3 2","pages":"91-99"},"PeriodicalIF":0.0,"publicationDate":"2021-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12018","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125864216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-attribute quantitative bearing fault diagnosis based on convolutional neural network","authors":"Shixin Zhang, Qin Lv, Shenlin Zhang, Jianhua Shan","doi":"10.1049/ccs2.12016","DOIUrl":"10.1049/ccs2.12016","url":null,"abstract":"<p>Existing bearing fault diagnosis methods have some disadvantages, one being that most methods cannot completely consider all specific fault attributes. Another disadvantage is that the qualitative diagnosis method considers different fault types as a whole, and qualitative diagnosis of a single fault attribute is complicated. A convolutional neural network is proposed for application in the multi-attribute quantitative bearing fault diagnosis. Multiple combinations of convolutional layers are adopted to directly extract features from one-dimensional vibration signals. In addition, a softmax layer is designed to realise the simultaneous recognition of different fault attributes. The advantage of this approach is that it can realise diagnostic results for any combination of fault attributes and corresponding types, which overcomes the disadvantage of single attribute recognition in the traditional method. The method is simple but has strong generalisation ability with average diagnostic accuracy of more than 95%. According to bearing data from Case Western Reserve University and laboratory experiments by the authors, the results verify that the method can accurately and quantitatively diagnose bearing faults.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"3 4","pages":"287-296"},"PeriodicalIF":0.0,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12016","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116346485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-modal broad learning for material recognition","authors":"Zhaoxin Wang, Huaping Liu, Xinying Xu, Fuchun Sun","doi":"10.1049/ccs2.12004","DOIUrl":"10.1049/ccs2.12004","url":null,"abstract":"Joint Fund of Science & Technology Department of Liaoning Province and State Key Laboratory of Robotics, China, Grant/Award Number: 2020‐KF‐ 22‐06 Abstract Material recognition plays an important role in the interaction between robots and the external environment. For example, household service robots need to replace humans in the home environment to complete housework, so they need to interact with daily necessities and obtain their material performance. Images provide rich visual information about objects; however, it is often difficult to apply when objects are not visually distinct. In addition, tactile signals can be used to capture multiple characteristics of objects, such as texture, roughness, softness, and friction, which provides another crucial way for perception. How to effectively integrate multi‐modal information is an urgent problem to be addressed. Therefore, a multi‐modal material recognition framework CFBRL‐KCCA for target recognition tasks is proposed in the paper. The preliminary features of each model are extracted by cascading broad learning, which is combined with the kernel canonical correlation learning, considering the differences among different models of heterogeneous data. Finally, the open dataset of household objects is evaluated. The results demonstrate that the proposed fusion algorithm provides an effective strategy for material recognition.","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"3 2","pages":"123-130"},"PeriodicalIF":0.0,"publicationDate":"2021-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12004","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122481833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on intelligent service of customer service system","authors":"Jinji Nie, Qi Wang, Jianbin Xiong","doi":"10.1049/ccs2.12012","DOIUrl":"10.1049/ccs2.12012","url":null,"abstract":"<p>With the development of the wireless network, from 4G network to 5G network, people's communication quality has improved significantly and the processing requirements of operators' customer service systems will ameliorate, whereas the business undertaken by the intelligent network becomes more difficult. Customer service system, which can convey files and video, has evolved from manual to intelligent. At the same time, this system establishes a knowledge base based on the process of solving problems with customers. The customer service system can also undertake the task of process control within the enterprise. The ultimate goal is to understand the needs of customers through the knowledge base and develop corporate products based on customer data. Furthermore, this study proposes a network architecture of an intelligent customer service system to provide a reference for the construction.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"3 3","pages":"197-205"},"PeriodicalIF":0.0,"publicationDate":"2021-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12012","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122539417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research and sustainable design of wearable sensor for clothing based on body area network","authors":"Ren Xiangfang, Shen Lei, Liu Miaomiao, Zhang Xiying, Chen Han","doi":"10.1049/ccs2.12014","DOIUrl":"10.1049/ccs2.12014","url":null,"abstract":"<p>The body area network (BAN) is composed of every wearable device network on the body to share information and data, which is applied in medical and health, especially in the direction of intelligent clothing. A wearable device is an integrated body of multi-sensor fusion. At the same time, the multi-dimensional needs of users and the unique problems of sensors appear. How to solve the problems of wearable sensors and sustainable design is the research focus. Based on the wearable sensor in the critical factor of wearable device fusion, this paper analyses the classification, technology, and current situation of a wearable sensor, discusses the problems of a wearable sensor for BAN from the aspects of human–computer interaction experience, data accuracy, multiple interaction modes, and battery power supply, and summarizes the direction of multi-sensor fusion, compatible biosensor materials, and low power consumption and high sensitivity. The sustainable design direction of visibility design, identification of use scenarios, short-term human–computer interaction, interaction process reduction, and integration invisibility are introduced. The integration research of wearable sensors is the future trend, and it has been widely used in medical and health, intelligent clothing, wireless communication, military, automobile, and other fields.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"3 3","pages":"206-220"},"PeriodicalIF":0.0,"publicationDate":"2021-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12014","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133933982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning techniques-based perfection of multi-sensor fusion oriented human-robot interaction system for identification of dense organisms","authors":"Haiju Li, Chuntang Zhang, Jingwen Bo, Zhongjun Ding","doi":"10.1049/ccs2.12010","DOIUrl":"10.1049/ccs2.12010","url":null,"abstract":"<p>For detection of dense small-target organisms with indistinct features in complex background, the efficiency and accuracy of traditional target detection methods are low. Multi-sensor fusion oriented human-robot interaction (HRI) system has facilitated biologists to process and analyse data. For this, several deep learning models based on convolutional neural network (CNN) are improved and compared to study the species and density of dense organisms in deep-sea hydrothermal vent, which are fused it with related environmental information given by position sensors and conductivity-temperature-depth (CTD) sensors, so as to perfect multi-sensor fusion oriented HRI system. Firstly, the authors combined different meta-architectures and different feature extractors, and obtained five object identification algorithms based on CNN. Then, they compared computational cost of feature extractors and weighed the pros and cons of each algorithm from mean detection speed, correlation coefficient and mean class-specific confidence score to confirm that Faster Region-based CNN (R-CNN)_InceptionNet is the best algorithm applicable to hydrothermal vent biological dataset. Finally, they calculated the cognitive accuracy of <i>rimicaris exoculata</i> in dense and sparse areas, which were 88.3% and 95.9% respectively, to analyse the performance of the Faster R-CNN_InceptionNet. Results show that the proposed method can be used in the multi-sensor fusion oriented HRI system for the statistics of dense organisms in complex environments.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"3 3","pages":"187-196"},"PeriodicalIF":0.0,"publicationDate":"2021-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12010","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115715360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intelligent flow control algorithm for microservice system","authors":"Yudong Li, Yuqing Zhang, Zhangbing Zhou, LinLin Shen","doi":"10.1049/ccs2.12013","DOIUrl":"10.1049/ccs2.12013","url":null,"abstract":"<p>In microservice systems, availability can be ensured through a variety of measures, such as fault tolerance and flow limiting, which are collectively called the flow control. In the current mainstream system design, the flow control rules are usually fixed and set manually, which cannot be dynamically adjusted according to the flow shape. The performance of the system is thus not fully explored. To mitigate this problem, an adaptive dynamic flow control algorithm is proposed. Based on the system's monitoring data and current flow, the algorithm calculates the flow-limiting threshold in real time, and then it implements fine-grained service adaptive flow control to improve the resource utilization. Experimental results show that the performance of the adaptive automatic flow control is better than that of the traditional static method on resource utilization.</p>","PeriodicalId":33652,"journal":{"name":"Cognitive Computation and Systems","volume":"3 3","pages":"276-285"},"PeriodicalIF":0.0,"publicationDate":"2021-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ccs2.12013","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121143009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}