{"title":"Image processing and artificial intelligence for apple detection and localization: A comprehensive review","authors":"Afshin Azizi , Zhao Zhang , Wanjia Hua , Meiwei Li , C. Igathinathane , Liling Yang , Yiannis Ampatzidis , Mahdi Ghasemi-Varnamkhasti , Radi , Man Zhang , Han Li","doi":"10.1016/j.cosrev.2024.100690","DOIUrl":"10.1016/j.cosrev.2024.100690","url":null,"abstract":"<div><div>This review provides an overview of apple detection and localization using image analysis and artificial intelligence techniques for enabling robotic fruit harvesting in orchard environments. Classic methods for detecting and localizing infield apples are discussed along with more advanced approaches using deep learning algorithms that have emerged in the past few years. Challenges faced in apple detection and localization such as occlusions, varying illumination conditions, and clustered apples are highlighted, as well as the impact of environmental factors such as light changes on the performance of these algorithms. Potential future research perspectives are identified through a comprehensive literature analysis. These include combining cutting-edge deep learning with multi-vision and multi-modal sensors for potential real-time application in apple harvesting robots. Additionally, the use of 3D vision for thorough analysis of complex and dynamic orchard environments, and the precise determination of fruit locations using point cloud data and depth information, are presented. The outcome of this review will assist researchers and engineers in the development of advanced detection and localization mechanisms for infield apples. The anticipated result is accelerated progress toward commercial apple harvesting robots.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"54 ","pages":"Article 100690"},"PeriodicalIF":13.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142637690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A systematic review on security aspects of fog computing environment: Challenges, solutions and future directions","authors":"Navjeet Kaur","doi":"10.1016/j.cosrev.2024.100688","DOIUrl":"10.1016/j.cosrev.2024.100688","url":null,"abstract":"<div><div>The dynamic and decentralized architecture of fog computing, which extends cloud computing closer to the edge of the network, offers benefits such as reduced latency and enhanced bandwidth. However, the existing fog architecture introduces unique security challenges due to the large number of distributed fog nodes, often deployed in diverse and resource-constrained environments. Further, the proximity of fog computing nodes to end-users and the open, distributed nature of the architecture make fog environments particularly vulnerable to unauthorized access and various types of cyberattacks. To address these challenges, this study presents a detailed systematic review that analyzes existing security technologies in fog computing environments, identifies current security gaps, and proposes future research directions. The comprehensive literature review draws on quality databases, focusing on articles published within the last four years, i.e. from 2020 to 2024. The review follows a systematic methodology with clear inclusion and exclusion criteria to ensure relevance and quality with respect to security in fog computing. Key research questions are also formulated and answered to address various security concerns, such as architectural security, IoT integration vulnerabilities, and dynamic security management. Finally, the review summarizes the key findings through MTGIR analysis to give valuable insights into the existing security framework of fog computing systems. The result analysis further reveals that 16% of the research focuses on blockchain and elliptic curve cryptography, while around 13.2% utilizes artificial intelligence and machine learning, specifically for dynamic threat detection. Furthermore, several technologies require greater attention, namely federated learning, secure key management, and secure communication mechanisms, as these are considered in only around 3% of the literature. Finally, the analysis underscores the necessity for real-time security monitoring and adaptive threat response to manage the dynamic nature of fog computing environments effectively.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"54 ","pages":"Article 100688"},"PeriodicalIF":13.3,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A survey of deep learning techniques for detecting and recognizing objects in complex environments","authors":"Ashish Kumar Dogra , Vipal Sharma , Harsh Sohal","doi":"10.1016/j.cosrev.2024.100686","DOIUrl":"10.1016/j.cosrev.2024.100686","url":null,"abstract":"<div><div>Object detection has been used extensively in daily life, and in computer vision, this sub-field is highly significant and challenging. The field of object detection has been transformed by deep learning. Deep learning-based methods have been shown to be remarkably effective at identifying and localizing objects in images and video streams. Deep learning algorithms can precisely detect and localize objects inside photos and videos because of their capacity to learn complex and nonlinear patterns in data. Deep learning models may also be trained on big datasets with minimal human intervention, allowing them to rapidly improve their performance. This makes deep learning models useful for applications such as self-driving cars, face recognition, and healthcare diagnosis. The purpose of this study was to gain an in-depth understanding of the current state of development of the object detection pipeline in complex environments. Initially, this study describes the benchmark datasets and analyzes typical detection models; it then systematically covers both one-stage and two-stage detectors, giving a thorough overview of object detection techniques in complex environments. We also discuss both new and traditional applications of object detection. In the end, the study reviews how well various topologies perform over a range of parameters. The study covers a total of 119 articles, of which 27% are related to one-stage detectors, 26% to two-stage detectors, 24% to supporting data related to deep learning, 14% to survey articles, 8% to the datasets covered in the study, and the remaining 1% to book chapters.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"54 ","pages":"Article 100686"},"PeriodicalIF":13.3,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intervention scenarios and robot capabilities for support, guidance and health monitoring for the elderly","authors":"Saja Aldawsari, Yi-Ping Phoebe Chen","doi":"10.1016/j.cosrev.2024.100687","DOIUrl":"10.1016/j.cosrev.2024.100687","url":null,"abstract":"<div><div>Demographic change in the world is a reality, and as a result, the number of elderly people is growing in both developed and developing countries, posing several social and economic issues. Most elderly people choose to stay alone at home rather than living with their families who can take care of them. Robots have the potential to revolutionize elderly care by providing aid, companionship, and monitoring services. The objective of this study is to present a comprehensive review which summarizes the cutting-edge works in adapting robotic applications to improve the quality of life for the elderly. We compare paradigms thoroughly and methodically in terms of support, guidance, health monitoring, and usability. We then summarize the current achievements while acknowledging their limitations before presenting perspectives on highly promising future work.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"54 ","pages":"Article 100687"},"PeriodicalIF":13.3,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resilience of deep learning applications: A systematic literature review of analysis and hardening techniques","authors":"Cristiana Bolchini, Luca Cassano, Antonio Miele","doi":"10.1016/j.cosrev.2024.100682","DOIUrl":"10.1016/j.cosrev.2024.100682","url":null,"abstract":"<div><div>Machine Learning (ML) is currently being exploited in numerous applications, being one of the most effective Artificial Intelligence (AI) technologies used in diverse fields, such as vision, autonomous systems, and the like. This trend has motivated a significant number of contributions to the analysis and design of ML applications against faults affecting the underlying hardware. The authors systematically investigate the existing body of knowledge on Deep Learning (among ML techniques) resilience against hardware faults through a thoughtful review in which the strengths and weaknesses of this literature stream are presented clearly and then future avenues of research are set out. The review reports 85 scientific articles published between January 2019 and March 2024, after carefully analysing 222 contributions (from an initial screening of 244 eligible publications). The authors adopt a classifying framework to interpret and highlight research similarities and peculiarities, based on several parameters, ranging from the main scope of the work and the adopted fault and error models to their reproducibility. This framework allows for a comparison of the different solutions and the identification of possible synergies. 
Furthermore, suggestions concerning the future direction of research are proposed in the form of open challenges to be addressed.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"54 ","pages":"Article 100682"},"PeriodicalIF":13.3,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI-driven cluster-based routing protocols in WSNs: A survey of fuzzy heuristics, metaheuristics, and machine learning models","authors":"Mohammad Shokouhifar , Fakhrosadat Fanian , Marjan Kuchaki Rafsanjani , Mehdi Hosseinzadeh , Seyedali Mirjalili","doi":"10.1016/j.cosrev.2024.100684","DOIUrl":"10.1016/j.cosrev.2024.100684","url":null,"abstract":"<div><div>Cluster-based routing techniques have become a key solution for managing data flow in Wireless Sensor Networks (WSNs), which often struggle with limited resources and dynamic network conditions. With the growing need for efficient data management in these networks, it is more important than ever to understand and enhance these techniques. This survey evaluates recent cluster-based routing protocols released from 2021 to 2024, focusing on the AI-driven approaches in WSNs including fuzzy heuristics, metaheuristics, and machine learning models, along with their combinations. Each approach is evaluated through a deep analysis of solution-based and network configuration-based factors. Solution-based parameters include performance mode, selection strategies, optimization objectives, modeling techniques, and key factors affecting the overall effectiveness of each approach. Additionally, network configuration analysis deals with the type of topology, communication architecture, network scale, performance metrics, and simulators used. This comprehensive analysis unveils valuable insights into the capabilities and limitations of each method. By identifying shortcomings and highlighting areas for improvement, this survey aims to guide future research towards the development of more efficient cluster-based routing techniques for WSNs. 
These methods, incorporating intelligent performance characteristics, will be well-equipped to address the ever-growing demands of the intelligent era.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"54 ","pages":"Article 100684"},"PeriodicalIF":13.3,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142426602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unleashing the prospective of blockchain-federated learning fusion for IoT security: A comprehensive review","authors":"Mansi Gupta , Mohit Kumar , Renu Dhir","doi":"10.1016/j.cosrev.2024.100685","DOIUrl":"10.1016/j.cosrev.2024.100685","url":null,"abstract":"<div><div>Internet-of-Things (IoT) is a revolutionary paradigm that brings automation and ease to human lives and improves user experience. Smart Homes, Healthcare, and Agriculture are some of its notable use cases. These IoT applications often employ Machine Learning (ML) techniques to strengthen their functionality. ML can be used to analyze sensor data for various purposes, including optimizing energy usage in smart homes, predicting maintenance needs in industrial equipment, personalizing user experiences in wearable devices, and detecting anomalies for security monitoring. However, implementing centralized ML techniques is not viable because of the high cost of computing power and privacy issues, since so much data is stored on a cloud server. To safeguard data privacy, Federated Learning (FL) has emerged as a new paradigm that replaces centralized ML methods: FL sends a model to user devices without requiring private data to be shared with a third party or central server, making it a promising solution to data leakage concerns. By keeping raw data on the client itself and transferring only model updates or parameters to the central server, FL helps to reduce privacy leakage. However, it is still not attack-resistant. Blockchain offers a solution to protect FL-enabled IoT networks using smart contracts and consensus mechanisms. This manuscript reviews IoT applications and challenges, discusses FL techniques that can be used to train IoT networks while ensuring privacy, and analyzes existing work. To ensure the security and privacy of IoT applications, an integrated Blockchain-powered FL-based framework is introduced, and existing research using these three powerful paradigms is studied. Finally, the research challenges faced by the integrated platform are explored for future scope, along with the potential applications of IoT in conjunction with other cutting-edge technologies.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"54 ","pages":"Article 100685"},"PeriodicalIF":13.3,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142426601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A survey of automated negotiation: Human factor, learning, and application","authors":"Xudong Luo , Yanling Li , Qiaojuan Huang , Jieyu Zhan","doi":"10.1016/j.cosrev.2024.100683","DOIUrl":"10.1016/j.cosrev.2024.100683","url":null,"abstract":"<div><div>The burgeoning field of automated negotiation systems represents a transformative approach to resolving conflicts and allocating resources with enhanced efficiency. This paper presents a thorough survey of this discipline, emphasising the implications of human factors, the application of machine learning techniques, and the real-world deployments of these systems. In traditional manual negotiation, various challenges emerge, including limited negotiation skills, power asymmetries, personality disparities, and cultural influences. Automated negotiation systems can offer solutions to these challenges through their round-the-clock availability, the ability to negotiate without emotional bias, efficient information access, and seamless integration of cultural contexts. This comprehensive survey delves into the intricacies of human–computer negotiation, shedding light on the impact of emotional cues, cultural diversity, and the subtleties of language. Furthermore, the study reviews the incorporation of machine learning models that facilitate the adaptation of negotiation strategies. The paper also discusses the application of fuzzy set theory and fuzzy constraint methods within the scope of automated negotiation, providing a valuable addition to the existing literature. Real-world deployment of these systems in domains such as e-commerce, conflict resolution, and multi-agent systems is also examined. 
By providing a broad overview of automated negotiation, this survey acknowledges the vital role of human factors in negotiation processes, underscores the value of intelligent and adaptive negotiation techniques and offers valuable insights into the practical applications of these systems in various real-world contexts.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"54 ","pages":"Article 100683"},"PeriodicalIF":13.3,"publicationDate":"2024-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142329858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep study on autonomous learning techniques for complex pattern recognition in interconnected information systems","authors":"Zahra Amiri , Arash Heidari , Nima Jafari , Mehdi Hosseinzadeh","doi":"10.1016/j.cosrev.2024.100666","DOIUrl":"10.1016/j.cosrev.2024.100666","url":null,"abstract":"<div><p>Artificial Intelligence (AI) and Machine Learning (ML) are increasingly used to handle complex tasks in many different areas. As a result, interconnected information systems are growing, which means that autonomous systems are needed to help them adapt, find complex patterns, and make better decisions in areas like cybersecurity, finance, healthcare, authentication, marketing, and supply chain optimization. Despite improvements in autonomous learning methods for complex pattern recognition in interconnected information systems, the literature still lacks a complete taxonomy that classifies these methods by application area. This gap makes it hard to fully understand the important factors and perform the comparisons needed to drive the growth and adoption of autonomous learning in interconnected systems. As these methods grow in importance, new research is exploring their use in different areas; still, the landscape of applications for autonomous learning methods is not yet fully understood, which encourages further investigation. We propose a new classification system that groups applications into six categories: cybersecurity threat detection, financial fraud detection, healthcare diagnosis and monitoring, biometric authentication, personalized marketing, and supply chain optimization in interconnected systems. The latest developments in this area are surveyed by carefully examining fundamental factors such as advantages and disadvantages, modeling settings, and datasets. In particular, the data show that Elsevier and Springer publish a substantial share of the relevant papers (26.5% and 11.8%, respectively). With rates of 12.9%, 11%, and 8%, respectively, the study shows that accuracy, mobility, and privacy are the most important factors. Tools such as Python and MATLAB are currently the most popular means of testing candidate solutions in this growing field.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"54 ","pages":"Article 100666"},"PeriodicalIF":13.3,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142272712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital to quantum watermarking: A journey from past to present and into the future","authors":"Swapnaneel Dhar, Aditya Kumar Sahu","doi":"10.1016/j.cosrev.2024.100679","DOIUrl":"10.1016/j.cosrev.2024.100679","url":null,"abstract":"<div><p>With the amplification of digitization, multimedia content such as text, video, audio, and images has surged remarkably. Concomitantly, the incidence of multimedia tampering is also increasing. Digital watermarking (DW) is the means of achieving privacy and authentication of the received content while preserving integrity and copyright. The literature has produced a plethora of state-of-the-art DW techniques that seek the right balance between performance-measuring parameters, including high imperceptibility, increased watermarking capacity, and tamper-free recovery. Meanwhile, at the peak of DW research, scientific advances in quantum computing led to the emergence of quantum-based watermarking. Though quantum watermarking (QW) is in its nascent stage, it has captivated researchers, encouraging them to explore it in depth. This study not only investigates the performance of existing DW techniques but also extensively assesses the recently devised QW techniques. It further presents how the principles of quantum entanglement and superposition can be decisive in achieving superior immunity against several watermarking attacks. To the best of our knowledge, this study is unique in presenting a comprehensive review of both DW and QW techniques. Therefore, the facts presented in this study could serve as a baseline for researchers devising novel DW or QW techniques.</p></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"54 ","pages":"Article 100679"},"PeriodicalIF":13.3,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142231726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}