Title: No-Reference Laparoscopic Video Quality Assessment for Sensor Distortions Using Optimized Long Short-Term Memory Framework
Authors: Sria Biswas; Rohini Palanisamy
DOI: 10.1109/LSENS.2025.3539186
Journal: IEEE Sensors Letters, vol. 9, no. 4, pp. 1-4
Published: 2025-02-05 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10873850/
Citations: 0
Abstract
Laparoscopic surgery relies on sensor-based video systems that are vulnerable to visual distortions, requiring rigorous quality checks to meet regulatory standards. This letter introduces a no-reference laparoscopic video quality assessment algorithm designed to replicate human perceptual judgments in the presence of sensor distortions. The method models the statistical interdependencies between luminance and motion features and combines them with texture variations to form a perceptually relevant feature vector. This vector is used to train a memory-retentive deep learning model, optimized by chaotic maps, that predicts frame-level quality scores, which are then aggregated to evaluate overall video quality. Performance comparisons with state-of-the-art methods show that the proposed model aligns closely with both expert and nonexpert subjective ratings, with closer agreement to the expert ratings. Ablation studies further confirm the effectiveness of the selected feature combinations and regression frameworks, demonstrating the capability of the model to replicate human opinions. These findings highlight the potential of the proposed method as a reliable tool for automating quality assessment in sensor-based laparoscopic systems to ensure high standards in clinical applications.
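Two pieces of the pipeline described above can be sketched compactly: the chaotic-map component (here assumed to be a logistic map, a common choice for chaos-based optimizers, though the letter does not specify which map it uses) generating well-spread candidate values for a hyperparameter search, and the aggregation of per-frame quality predictions into a single video-level score (assumed here to be a simple mean; the paper's pooling strategy may differ). All names and parameter ranges below are illustrative, not the authors' implementation.

```python
def logistic_map(x0: float, n: int, r: float = 4.0) -> list[float]:
    """Generate n iterates of the logistic map x_{k+1} = r * x_k * (1 - x_k).

    With r = 4 and x0 in (0, 1), the sequence is chaotic and stays in [0, 1],
    which makes it a cheap source of diverse search seeds for optimizers.
    """
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs


def pool_video_score(frame_scores: list[float]) -> float:
    """Aggregate per-frame quality predictions into one video-level score."""
    return sum(frame_scores) / len(frame_scores)


# Map chaotic iterates in [0, 1] onto a hypothetical learning-rate range.
seeds = logistic_map(x0=0.31, n=5)
candidate_lrs = [1e-4 + s * (1e-2 - 1e-4) for s in seeds]

# Pool illustrative frame-level quality scores into a video score.
video_quality = pool_video_score([62.1, 58.4, 60.0])
```

Each candidate learning rate would then be used to train the LSTM regressor, with the best-performing candidate retained; the chaotic sequence simply replaces uniform random sampling in that search loop.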