Conversational LLM-Based Decision Support for Defect Classification in AFM Images

Angona Biswas; Jaydeep Rade; Nabila Masud; Md Hasibul Hasan Hasib; Aditya Balu; Juntao Zhang; Soumik Sarkar; Adarsh Krishnamurthy; Juan Ren; Anwesha Sarkar

IEEE Open Journal of Instrumentation and Measurement, vol. 4, pp. 1-12 (Impact Factor: 1.5)
DOI: 10.1109/OJIM.2025.3592284 | Published: 2025-07-24
Article: https://ieeexplore.ieee.org/document/11096088/
Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11096088
Citations: 0

Abstract

Atomic force microscopy (AFM) has emerged as a powerful tool for nanoscale imaging and quantitative characterization of organic (e.g., live cells, proteins, DNA, and lipid bilayers) and inorganic (e.g., silicon wafers and polymers) specimens. However, image artifacts in AFM height and peak force error images directly affect the precision of nanomechanical measurements. Experimentalists face considerable challenges in obtaining high-quality AFM images due to the requirement of specialized expertise and constant manual monitoring. Another challenge is the lack of high-quality AFM datasets to train machine learning models for automated defect detection. In this work, we propose a two-step AI framework that combines a vision-based deep learning (DL) model for classifying AFM image defects with a large language model (LLM)-based conversational assistant that provides real-time corrective guidance in natural language, making it particularly valuable for non-AFM experts aiming to obtain high-quality images. We curated an annotated AFM defect dataset spanning organic and inorganic samples to train the defect detection model. Our defect classification model achieves 91.43% overall accuracy, with a recall of 93% for tip-contamination defects and 60% for not-tracking defects. We further develop an intuitive user interface that enables seamless interaction with the DL model and integrates an LLM-based guidance feature to support users in understanding defects and improving future experiments. We then evaluate the performance of multiple state-of-the-art LLMs on AFM-related queries, offering users flexibility in LLM selection based on their specific needs. LLM evaluations and the benchmark questions are available at: https://github.com/idealab-isu/AFM-LLM-Defect-Guidance.
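To make the two-step framework described above concrete, the following is a minimal sketch of how such a pipeline could be wired together: a vision model labels a defect in an AFM image, and that label is injected as context into a prompt for a conversational LLM. This is an illustrative assumption, not the authors' implementation; the `DefectPrediction` class, the `classify_afm_image` placeholder, and the prompt wording are all hypothetical.

```python
# Hypothetical sketch of a classifier-then-LLM guidance pipeline for AFM defects.
# Names and prompt text are placeholders and do not reflect the paper's code.

from dataclasses import dataclass


@dataclass
class DefectPrediction:
    label: str        # e.g., "tip contamination", "not tracking", "good image"
    confidence: float  # classifier confidence in [0, 1]


def classify_afm_image(image_path: str) -> DefectPrediction:
    """Step 1 (placeholder): run the vision-based defect classifier on an AFM image."""
    # In practice this would load a trained DL model and perform inference.
    raise NotImplementedError("Replace with the trained defect-classification model.")


def build_guidance_prompt(pred: DefectPrediction, user_question: str) -> str:
    """Step 2: ground the LLM in the classifier output before asking for corrective advice."""
    return (
        "You are an assistant helping an AFM operator obtain high-quality images.\n"
        f"Detected defect: {pred.label} (confidence {pred.confidence:.2f}).\n"
        f"Operator question: {user_question}\n"
        "Explain the likely cause of this defect and suggest concrete corrective actions."
    )


if __name__ == "__main__":
    # Example with a hard-coded prediction, since the classifier above is a stub.
    pred = DefectPrediction(label="tip contamination", confidence=0.93)
    prompt = build_guidance_prompt(pred, "How should I adjust the scan before the next run?")
    print(prompt)  # This prompt would then be sent to the selected LLM's chat API.
```

Keeping the classifier output and the user's question in a single structured prompt is one simple way to let different LLM backends be swapped in, which matches the paper's goal of letting users choose an LLM based on their needs.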