PeerJ Computer Science: Latest Articles

Vision-based approach to knee osteoarthritis and Parkinson's disease detection utilizing human gait patterns
IF 3.5 | CAS Q4, Computer Science
PeerJ Computer Science. Pub Date: 2025-05-06. eCollection Date: 2025-01-01. DOI: 10.7717/peerj-cs.2857
Zeeshan Ali, Jihoon Moon, Saira Gillani, Sitara Afzal, Muazzam Maqsood, Seungmin Rho

Abstract: Recently, the number of cases of musculoskeletal and neurological disorders such as knee osteoarthritis (KOA) and Parkinson's disease (PD) has increased significantly. Numerous clinical methods have been proposed for diagnosing these disorders; a current trend, however, is diagnosis through human gait patterns. Methods proposed in this area include gait detection from sensor-based data and vision-based systems, the latter covering both marker-based and marker-free techniques. Most current studies focus on classifying Parkinson's disease, and many vision-based algorithms rely on human gait silhouettes or gait representations and employ traditional similarity-based methodologies. In this study, a novel approach is proposed in which spatiotemporal features are extracted via deep learning under a transfer learning paradigm; sequential deep learning models, including the gated recurrent unit (GRU), are then used for further analysis. Experiments on the publicly available KOA-PD-normal dataset, which comprises gait videos with various abnormalities, show that the proposed model achieves the highest accuracy, approximately 94.81%.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12192726/pdf/
Citations: 0
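The sequential stage described above (transfer-learned per-frame features fed to a GRU classifier) can be sketched as below. The feature size, hidden width, and three-way KOA/PD/normal head are illustrative assumptions, not the paper's actual configuration:

```python
import torch
import torch.nn as nn

class GaitGRUClassifier(nn.Module):
    """Per-frame CNN features -> GRU -> class logits (KOA / PD / normal)."""
    def __init__(self, feat_dim=2048, hidden=128, n_classes=3):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, frames, feat_dim)
        _, h = self.gru(x)                # h: (1, batch, hidden), last hidden state
        return self.head(h.squeeze(0))    # (batch, n_classes)

feats = torch.randn(4, 30, 2048)          # 4 clips, 30 frames of transfer-learned features
logits = GaitGRUClassifier()(feats)
print(logits.shape)                        # torch.Size([4, 3])
```

Using only the final hidden state is one common pooling choice; attention over all time steps is another.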
Technological trends in epidemic intelligence for infectious disease surveillance: a systematic literature review
IF 3.5 | CAS Q4, Computer Science
PeerJ Computer Science. Pub Date: 2025-05-06. eCollection Date: 2025-01-01. DOI: 10.7717/peerj-cs.2874
Hazeeqah Amny Kamarul Aryffin, Murtadha Arif Bin Sahbudin, Sakinah Ali Pitchay, Azni Haslizan Abhalim, Ilfita Sahbudin

Background: This research focuses on improving epidemic monitoring systems by incorporating advanced technologies to make disease surveillance more effective than before. Given the time consumption and efficiency drawbacks of existing surveillance methods, the issues highlighted in this study include the integration of artificial intelligence (AI) for early detection, decision support, and predictive modeling; big data analytics for data sharing, contact tracing, and countering misinformation; Internet of Things (IoT) devices for real-time disease monitoring; and geographic information systems (GIS) for geospatial artificial intelligence (GeoAI) applications and disease mapping. The increasing intricacy and frequency of disease outbreaks underscore the pressing need for improvements in public health monitoring systems. This research examines these developments and their use in detecting and handling infectious diseases, and explores how they contribute to decision making and policy development in public healthcare.
Methodology: This review systematically analyzes how technological tools are being used in epidemic monitoring by conducting a structured search across online literature databases and applying eligibility criteria to identify relevant studies on current technological trends in public health surveillance.
Results: The review covered 69 articles from 2019 to 2023 on emerging trends in epidemic intelligence. Most of the studies emphasized the integration of artificial intelligence with technologies such as big data analytics, geographic information systems, and the Internet of Things for monitoring infectious diseases.
Conclusions: The expansion of publicly accessible information on the internet has opened a new pathway for epidemic intelligence. This study emphasizes the importance of integrating information technology tools such as AI, big data analytics, GIS, and the IoT into epidemic intelligence surveillance to track infectious diseases effectively. Combining these technologies helps public health agencies detect and respond to health threats.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12192675/pdf/
Citations: 0
Mining autonomous student patterns score on LMS within online higher education
IF 3.5 | CAS Q4, Computer Science
PeerJ Computer Science. Pub Date: 2025-05-05. eCollection Date: 2025-01-01. DOI: 10.7717/peerj-cs.2855
Ricardo Ordoñez-Avila, Jaime Meza, Sebastian Ventura

Abstract: Higher education institutions actively integrate information and communication technologies through learning management systems (LMS), which are crucial for online education. This study used data mining techniques to predict the autonomous scores of students in the online Law and Psychology programs at the Technical University of Manabi. The process involved integration and selection of more than 16,000 records; preprocessing; transformation with RobustScaler; predictive modeling, including recursive feature elimination with cross-validation (RFEcv) for feature selection and hyperparameter tuning for the best fit; and, finally, evaluation of the models using root mean square error (RMSE), mean absolute error (MAE), and the coefficient of determination (R²). The feature selection suggested by RFEcv contributed to model performance. The variables analyzed were download rate, homework submission rate, test performance rate, median daily accesses, median days of access per month, observation of comments on teacher-reviewed assignments, length of the final exam, and not requiring the supplemental exam. Hyperparameter tuning improved model performance after applying RFEcv. The evaluated models showed minimal differences in RMSE, with values in the range [0.5411, 0.6025]. The gradient boosting model achieved the best performance on the Law program data (R² = 0.6693, MAE = 0.4041, RMSE = 0.5411) and on the Psychology program data (R² = 0.6418, MAE = 0.4232, RMSE = 0.6025), while on the combined dataset the extreme gradient boosting (XGBoost) model performed best (R² = 0.6294, MAE = 0.4295, RMSE = 0.5985). Future research and implementations could include autonomous score data through plugins and reports integrated into LMSs. This approach may provide indicators of interest for understanding and improving online learning from a personalized, real-time perspective.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12193002/pdf/
Citations: 0
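The modeling pipeline above (RFEcv feature selection, gradient boosting regression, RMSE/MAE/R² evaluation) can be sketched with scikit-learn. The synthetic dataset and hyperparameters below are illustrative stand-ins for the LMS activity features, not the study's data or settings:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import RFECV
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for LMS activity features (download rate, submission rate, ...)
X, y = make_regression(n_samples=400, n_features=12, n_informative=6,
                       noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Recursive feature elimination with cross-validation picks the feature subset
selector = RFECV(GradientBoostingRegressor(random_state=0), step=1, cv=3)
selector.fit(X_tr, y_tr)

pred = selector.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(selector.n_features_, round(rmse, 3),
      round(mean_absolute_error(y_te, pred), 3), round(r2_score(y_te, pred), 3))
```

A hyperparameter search (e.g. `GridSearchCV`) would wrap the estimator after RFEcv, as the abstract describes.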
ROM-Pose: restoring occluded mask image for 2D human pose estimation
IF 3.5 | CAS Q4, Computer Science
PeerJ Computer Science. Pub Date: 2025-05-02. eCollection Date: 2025-01-01. DOI: 10.7717/peerj-cs.2843
Yunju Lee, Jihie Kim

Abstract: Human pose estimation (HPE) is a field focused on estimating human poses by detecting key points in images. HPE methods follow top-down or bottom-up approaches. The top-down approach uses a two-stage process, first locating humans with bounding boxes and then detecting their key points, whereas the bottom-up approach directly detects individual key points and integrates them to estimate the overall pose. In this article, we address bounding box detection inaccuracies that arise in certain situations with the top-down method. The detected bounding boxes, which serve as input to the model, affect the accuracy of pose estimation. Occlusions, where part of the target's body is obscured by a person or object, hinder the model's ability to detect complete bounding boxes. Consequently, the model produces bounding boxes that miss occluded parts, excluding them from the input used by the HPE model. To mitigate this issue, we introduce Restoring Occluded Mask Image for 2D Human Pose Estimation (ROM-Pose), comprising a restoration model and an HPE model. The restoration model is designed to delineate the boundary between the target's grayscale mask (the occludee image) and the blocker's grayscale mask (the occluder image) using a specially created Whole Common Objects in Context (COCO) dataset. Upon identifying the boundary, the restoration model restores the occluded image. This restored image is then overlaid onto the RGB image for use in the HPE model. By integrating the occluded parts' information into the input, the bounding box covers these areas during detection, enhancing the HPE model's ability to recognize them. ROM-Pose achieved a 1.6% improvement in average precision (AP) compared to the baseline.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12192664/pdf/
Citations: 0
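The overlay step (blending the restored, previously occluded region back into the RGB frame before detection) might look roughly like this; the blend weight and tint color are illustrative choices, not ROM-Pose's actual compositing:

```python
import numpy as np

def overlay_restored_mask(rgb, restored_mask, color=(255, 0, 0), alpha=0.5):
    """Alpha-blend the restored (previously occluded) region into the RGB frame
    so a person detector's bounding box can cover it."""
    out = rgb.astype(np.float32).copy()
    region = restored_mask > 0
    out[region] = (1 - alpha) * out[region] + alpha * np.asarray(color, np.float32)
    return out.astype(np.uint8)

frame = np.zeros((4, 4, 3), np.uint8)     # toy black frame
mask = np.zeros((4, 4), np.uint8)
mask[1:3, 1:3] = 255                      # restored body region
blended = overlay_restored_mask(frame, mask)
print(blended[1, 1], blended[0, 0])       # tinted region vs. untouched background
```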
Autonomous vehicle surveillance through fuzzy C-means segmentation and DeepSORT on aerial images
IF 3.5 | CAS Q4, Computer Science
PeerJ Computer Science. Pub Date: 2025-05-01. eCollection Date: 2025-01-01. DOI: 10.7717/peerj-cs.2835
Asifa Mehmood Qureshi, Moneerah Alotaibi, Sultan Refa Alotaibi, Dina Abdulaziz AlHammadi, Muhammad Asif Jamal, Ahmad Jalal, Bumshik Lee

Abstract: The high mobility of uncrewed aerial vehicles (UAVs) has led to their use in various computer vision applications, notably intelligent traffic surveillance, where they enhance productivity and simplify the process. Yet several challenges must still be resolved to automate these systems; one significant challenge is the accurate extraction of vehicle foregrounds in complex traffic scenarios. This article therefore proposes a novel vehicle detection and tracking system for autonomous vehicle surveillance that employs fuzzy C-means clustering to segment the aerial images. After segmentation, we employ the YOLOv4 deep learning algorithm, which is efficient at detecting small objects, for vehicle detection. An ID assignment and recovery algorithm based on Speeded-Up Robust Features (SURF) is used for multi-vehicle tracking across image frames. Vehicles are counted in each image to estimate traffic density at different time intervals. The vehicles are then tracked using DeepSORT, which combines the Kalman filter with deep learning to produce accurate results. To understand the traffic flow direction, the path trajectory of each tracked vehicle is projected. The proposed model demonstrates noteworthy vehicle detection and tracking rates in experimental validation, attaining detection precision scores of 0.82 and 0.80 on the UAVDT and KIT-AIS datasets, respectively. For vehicle tracking, precision is 0.87 on the UAVDT dataset and 0.83 on the KIT-AIS dataset.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12190255/pdf/
Citations: 0
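Fuzzy C-means, the segmentation step named above, alternates two updates: fuzzy-weighted cluster centers, then memberships from inverse distances. A minimal 1-D sketch on toy pixel intensities (the paper segments full aerial images, presumably with richer features):

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Standard FCM on a 1-D feature vector; m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                    # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)               # fuzzy-weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9  # distances; eps avoids /0
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)                             # membership update

    return centers, u

# Toy intensities: dark background vs. bright vehicle pixels
pixels = np.concatenate([np.full(50, 0.1), np.full(50, 0.9)])
centers, u = fuzzy_c_means(pixels)
print(np.sort(centers).round(2))   # centers near the two intensity modes
```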
UMEDNet: a multimodal approach for emotion detection in the Urdu language
IF 3.5 | CAS Q4, Computer Science
PeerJ Computer Science. Pub Date: 2025-05-01. eCollection Date: 2025-01-01. DOI: 10.7717/peerj-cs.2861
Adil Majeed, Hasan Mujtaba

Abstract: Emotion detection is a critical component of human-computer interaction, particularly affective computing and health screening. Integrating video, speech, and text information provides better coverage of basic and derived affective states, with improved estimation of verbal and non-verbal behavior. However, there is a lack of systematic approaches and models for emotion detection in low-resource languages such as Urdu. To this end, we propose the Urdu Multimodal Emotion Detection Network (UMEDNet), a new emotion detection model for Urdu that works with video, speech, and text inputs for a better understanding of emotion. To support UMEDNet, we created the Urdu Multimodal Emotion Detection (UMED) corpus, a seventeen-hour corpus annotated with five basic emotions. To the best of our knowledge, this is the first corpus for multimodal emotion detection in Urdu, and it is extensible for further research. UMEDNet leverages state-of-the-art techniques for feature extraction across modalities: Multi-task Cascaded Convolutional Networks (MTCNN) and FaceNet extract facial features from video, a fine-tuned Wav2Vec2 extracts speech features, and XLM-RoBERTa extracts text features. These features are then projected into common latent spaces to enable effective fusion of the multimodal data and to enhance the accuracy of emotion prediction. The model demonstrates strong performance, achieving an overall accuracy of 85.27%, with precision, recall, and F1 scores all approximately equivalent. Finally, we analyzed the impact of UMEDNet and found that integrating data across modalities leads to better performance.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12192677/pdf/
Citations: 0
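The projection-into-a-common-latent-space fusion described above can be sketched as below; the dimensions, ReLU activation, and concatenation-based fusion are assumptions for illustration, not UMEDNet's actual design:

```python
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Project each modality's features into a shared latent space,
    concatenate, and classify into the five basic emotions."""
    def __init__(self, dims=(512, 768, 768), latent=256, n_emotions=5):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, latent) for d in dims])
        self.cls = nn.Linear(latent * len(dims), n_emotions)

    def forward(self, video, speech, text):
        z = [torch.relu(p(x)) for p, x in zip(self.proj, (video, speech, text))]
        return self.cls(torch.cat(z, dim=-1))   # (batch, n_emotions)

# Dummy FaceNet / Wav2Vec2 / XLM-RoBERTa feature vectors for a batch of 2
out = LateFusionHead()(torch.randn(2, 512), torch.randn(2, 768), torch.randn(2, 768))
print(out.shape)   # torch.Size([2, 5])
```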
Comparative evaluation of approaches & tools for effective security testing of Web applications
IF 3.5 | CAS Q4, Computer Science
PeerJ Computer Science. Pub Date: 2025-04-30. eCollection Date: 2025-01-01. DOI: 10.7717/peerj-cs.2821
Sana Qadir, Eman Waheed, Aisha Khanum, Seema Jehan

Abstract: It is generally accepted that adopting both static application security testing (SAST) and dynamic application security testing (DAST) is vital for thorough and effective security testing. However, this suggestion has not been comprehensively evaluated, especially with regard to the individual risk categories in the Open Web Application Security Project (OWASP) Top 10:2021 and Common Weakness Enumeration (CWE) Top 25:2023 lists. Evidence-based recommendations for effective tools for detecting vulnerabilities of a specific risk category or severity level are also rare. These shortcomings increase both the time and cost of systematic security testing at a moment when increasingly frequent and preventable incidents heighten its need. This study aims to fill these gaps by empirically testing seventy-five real-world Web applications using four SAST and five DAST tools. Only popular, free, and open-source tools were selected, and each Web application was scanned with all nine tools. From the generated reports, we considered two parameters to measure effectiveness: the count and the severity of the vulnerabilities found. We also mapped the vulnerabilities to the OWASP Top 10:2021 and CWE Top 25:2023 lists. Our results show that using only DAST tools is the preferred option for four OWASP Top 10:2021 risk categories, while using only SAST tools is preferred for three; either approach is effective for two of the categories. For the CWE Top 25:2023 list, all three approaches were equally effective, each finding vulnerabilities in three risk categories. We also found that none of the tools detected any vulnerability in one OWASP Top 10:2021 risk category and in eight CWE Top 25:2023 categories, which highlights a critical limitation of popular tools. The most effective DAST tool was OWASP Zed Attack Proxy (ZAP), especially for detecting vulnerabilities in the broken access control, insecure design, and security misconfiguration risk categories. Yasca was the best-performing SAST tool and outperformed all other tools at finding high-severity vulnerabilities. For medium- and low-severity levels, the DAST tools Iron Web application Advanced Security testing Platform (WASP) and Vega performed better than all other tools. These findings reveal key insights, such as the superiority of DAST tools for detecting certain types of vulnerabilities and the indispensability of SAST tools for detecting high-severity issues (owing to detailed static code analysis). The study also addresses significant limitations of previous research by testing multiple real-world Web applications across diverse domains (technology, health, and education), improving the generalizability of the findings. Unlike studies that rely primarily on proprietary tools, our use of open-source SAST and DAST tools ensures better reproducibility and accessibility for organizations with limited budgets.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12190248/pdf/
Citations: 0
Validation of automated paper screening for esophagectomy systematic review using large language models
IF 3.5 | CAS Q4, Computer Science
PeerJ Computer Science. Pub Date: 2025-04-30. eCollection Date: 2025-01-01. DOI: 10.7717/peerj-cs.2822
Rashi Ramchandani, Eddie Guo, Esra Rakab, Jharna Rathod, Jamie Strain, William Klement, Risa Shorr, Erin Williams, Daniel Jones, Sebastien Gilbert

Background: Large language models (LLMs) offer a potential solution to the labor-intensive nature of systematic reviews. This study evaluated the ability of the GPT model to identify articles that discuss perioperative risk factors for esophagectomy complications. To test the model under a narrower inclusion criterion, we also assessed GPT-4's ability to discriminate relevant articles that identified only preoperative risk factors for esophagectomy.
Methods: A literature search was run by a trained librarian to identify studies (n = 1,967) discussing risk factors for esophagectomy complications. The articles underwent title and abstract screening by three independent human reviewers and by GPT-4. The Python script used for the analysis made Application Programming Interface (API) calls to GPT-4 with screening criteria expressed in natural language. GPT-4's inclusion and exclusion decisions were compared with those of the human reviewers.
Results: Agreement between the GPT model and the human decision was 85.58% for perioperative factors and 78.75% for preoperative factors. The AUC was 0.87 for the perioperative risk factors query and 0.75 for the preoperative one. In the evaluation of perioperative risk factors, the GPT model demonstrated a high recall of 89% for included studies, a positive predictive value of 74%, and a negative predictive value of 84%, with a low false positive rate of 6% and a macro-F1 score of 0.81. For preoperative risk factors, the model showed a recall of 67% for included studies, a positive predictive value of 65%, and a negative predictive value of 85%, with a false positive rate of 15% and a macro-F1 score of 0.66. Interobserver reliability was substantial, with a kappa of 0.69 for perioperative factors and 0.61 for preoperative factors. Despite lower accuracy under the more stringent criteria, the GPT model proved valuable in streamlining the systematic review workflow. Study screeners reported that the inclusion and exclusion justifications provided by the GPT model were useful, especially for resolving discrepancies during title and abstract screening.
Conclusion: This study demonstrates a promising use of LLMs to streamline the systematic review workflow. Integrating LLMs into systematic reviews could lead to significant time and cost savings; however, caution is warranted for reviews involving narrower, more stringent inclusion and exclusion criteria. Future research should explore integrating LLMs into other steps of the systematic review, such as full-text screening or data extraction, and compare different LLMs for their effectiveness in various types of systematic reviews.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12190591/pdf/
Citations: 0
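The interobserver reliability reported above is Cohen's kappa, which is straightforward to compute from two screeners' include/exclude decisions. The decision lists below are toy data, not the study's:

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two raters' labels (e.g. include/exclude decisions)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                     # observed agreement
    labels = set(a) | set(b)
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Toy screening decisions for eight abstracts (hypothetical, for illustration)
human = ["inc", "inc", "exc", "exc", "inc", "exc", "exc", "exc"]
gpt   = ["inc", "inc", "exc", "inc", "inc", "exc", "exc", "exc"]
print(round(cohen_kappa(human, gpt), 2))   # 0.75
```

Kappa discounts the agreement two raters would reach by chance, which is why it is preferred over raw percent agreement for screening comparisons.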
Scaling laws for Haralick texture features of linear gradients
IF 3.5 | CAS Q4, Computer Science
PeerJ Computer Science. Pub Date: 2025-04-30. eCollection Date: 2025-01-01. DOI: 10.7717/peerj-cs.2856
Sorinel A Oprisan, Ana Oprisan

Abstract: This study presents a novel analytical framework for understanding the relationship between image gradients and the symmetries of the gray-level co-occurrence matrix (GLCM). Analytical expressions for four key features (sum average (SA), sum variance (SV), difference variance (DV), and entropy) were derived to capture their dependence on the image's gray-level quantization N_g, the gradient magnitude ∇, and the displacement vector d through the corresponding GLCM. The scaling laws obtained from these exact analytical dependencies show that SA and DV scale linearly with N_g, SV scales quadratically, and entropy follows a logarithmic trend. The scaling laws allow a consistent derivation of normalization factors that make Haralick features independent of the quantization scheme N_g. Numerical simulations using synthetic one-dimensional gradients validated the theoretical predictions. This theoretical framework establishes a foundation for the consistent derivation of analytic expressions and scaling laws for Haralick features. Such an approach would streamline texture analysis across datasets and imaging modalities, enhancing the portability and interpretability of Haralick features in machine learning and medical imaging applications.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12192890/pdf/
Citations: 0
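The objects above (a GLCM of a quantized 1-D gradient, plus sum average and entropy) can be reproduced in a few lines. This sketch uses 0-indexed gray levels, whereas Haralick's original convention indexes from 1; for a ramp, SA tracks twice the mean gray level and therefore grows linearly with N_g, consistent with the scaling laws:

```python
import numpy as np

def glcm_1d(signal, n_levels, d=1):
    """Normalized gray-level co-occurrence matrix of a 1-D signal at displacement d."""
    P = np.zeros((n_levels, n_levels))
    for i, j in zip(signal[:-d], signal[d:]):
        P[i, j] += 1
    return P / P.sum()

def sum_average(P):
    """SA = sum_k k * p_{x+y}(k), with 0-indexed gray levels."""
    i, j = np.indices(P.shape)
    return ((i + j) * P).sum()

def glcm_entropy(P):
    nz = P[P > 0]
    return -(nz * np.log2(nz)).sum()

# Quantized linear ramp with N_g = 8 levels (a synthetic 1-D gradient)
ramp = np.repeat(np.arange(8), 8)
P = glcm_1d(ramp, 8)
print(sum_average(P), glcm_entropy(P))   # SA = 7.0 for this ramp
```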
DMSA-Net: a deformable multiscale adaptive classroom behavior recognition network
IF 3.5 | CAS Q4, Computer Science
PeerJ Computer Science. Pub Date: 2025-04-30. eCollection Date: 2025-01-01. DOI: 10.7717/peerj-cs.2876
Chunyu Dong, Jing Liu, Shenglong Xie

Abstract: In the intelligent transformation of education, accurate recognition of students' classroom behavior has become a key technology for enhancing the quality of instruction and the efficacy of learning. However, when recognizing target behavior in real classroom scenes captured with wide-angle or panoramic images, students in the back rows are far from the monitoring devices, and subtle body movements, such as small openings and closings of the mouth (to determine whether a student is speaking) or fine finger actions (to distinguish reading a book from operating a mobile phone), are difficult to recognize. Moreover, occlusions and scale differences between the front and back rows easily cause confusion and interference among target features during detection, greatly limiting the accuracy of existing visual algorithms for classroom behavior recognition. This article proposes a deformable multiscale adaptive classroom behavior recognition network. To improve the network's capacity to model minute behaviors, the backbone introduces a deformable self-attention (DAttention) module that dynamically modifies the receptive field's geometry to sharpen the model's focus on regions of interest. To improve feature extraction and integration for occluded behavior and classroom behavior at different scales, a Multiscale Attention Feature Pyramid Structure (MSAFPS) is proposed to achieve multi-level feature aggregation after multiscale feature fusion, reducing the impact of mutual occlusion and scale differences between front and back rows. In the detection head, we adopt the Wise Intersection over Union (Wise-IoU) loss as the loss criterion, augmenting the evaluation framework with richer contextual cues to broaden its scope and improve the network's detection performance. Extensive experiments show that the proposed method outperforms rival algorithms on two benchmark datasets: SCB-Dataset3-S (the Student Classroom Behavior dataset, https://github.com/Whiffe/SCB-dataset) and our self-built object-detection dataset DataFountainSCB (https://github.com/Chunyu-Dong/DataFountainSCB1), which contains six types of behaviors.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12192764/pdf/
Citations: 0
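Wise-IoU builds on the standard intersection-over-union overlap between a predicted and a ground-truth box. A minimal sketch of that base quantity follows; the dynamic focusing weight that makes the loss "Wise" is not reproduced here:

```python
def box_iou(a, b):
    """Standard IoU of two (x1, y1, x2, y2) boxes, the overlap term that
    Wise-IoU reweights with its dynamic focusing mechanism."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # clamp to 0 if disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))   # 1/7 ≈ 0.143
```

An IoU-based loss is typically `1 - iou` (or a variant of it) summed over matched box pairs.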