Débora Pina , Liliane Kunstmann , Daniel de Oliveira , Marta Mattoso
{"title":"你的深度学习模型的面包屑:用dlproof跟踪来源痕迹","authors":"Débora Pina , Liliane Kunstmann , Daniel de Oliveira , Marta Mattoso","doi":"10.1016/j.simpa.2024.100730","DOIUrl":null,"url":null,"abstract":"<div><div>To train a Deep Learning (DL) model, a workflow must be executed with four well-defined activities: (i) Acquiring data, (ii) Preprocessing, (iii) Splitting and balancing the dataset, and (iv) Building and training the model. After generating several DL models, they undergo a process called model selection. After being selected, the DL model is put into a production environment to make predictions on new data. One of the challenges in supporting these analyses is related to providing relationships between candidate models, their datasets for train, test, and validation, input data, and other derivations paths. These relationships are also essential for trust, reproducibility, and evolution of the selected model. While existing solutions allow monitoring and analyzing the artifacts generated throughout the DL workflow, they often fail to establish relationships for supporting data derivation within the DL workflow. DLProv is a provenance-centric service to support DL workflow analyses and reproducibility. DLProv captures provenance data and exports provenance graphs for DL model reproducibility. 
DLProv is W3C PROV compliant, ensuring standardized prospective and retrospective provenance, and enables provenance capture in arbitrary execution frameworks.</div></div>","PeriodicalId":29771,"journal":{"name":"Software Impacts","volume":"23 ","pages":"Article 100730"},"PeriodicalIF":1.3000,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Breadcrumbs for your Deep Learning Model: Following Provenance Traces with DLProv\",\"authors\":\"Débora Pina , Liliane Kunstmann , Daniel de Oliveira , Marta Mattoso\",\"doi\":\"10.1016/j.simpa.2024.100730\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>To train a Deep Learning (DL) model, a workflow must be executed with four well-defined activities: (i) Acquiring data, (ii) Preprocessing, (iii) Splitting and balancing the dataset, and (iv) Building and training the model. After generating several DL models, they undergo a process called model selection. After being selected, the DL model is put into a production environment to make predictions on new data. One of the challenges in supporting these analyses is related to providing relationships between candidate models, their datasets for train, test, and validation, input data, and other derivations paths. These relationships are also essential for trust, reproducibility, and evolution of the selected model. While existing solutions allow monitoring and analyzing the artifacts generated throughout the DL workflow, they often fail to establish relationships for supporting data derivation within the DL workflow. DLProv is a provenance-centric service to support DL workflow analyses and reproducibility. DLProv captures provenance data and exports provenance graphs for DL model reproducibility. 
DLProv is W3C PROV compliant, ensuring standardized prospective and retrospective provenance, and enables provenance capture in arbitrary execution frameworks.</div></div>\",\"PeriodicalId\":29771,\"journal\":{\"name\":\"Software Impacts\",\"volume\":\"23 \",\"pages\":\"Article 100730\"},\"PeriodicalIF\":1.3000,\"publicationDate\":\"2024-12-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Software Impacts\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2665963824001180\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software Impacts","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2665963824001180","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Breadcrumbs for your Deep Learning Model: Following Provenance Traces with DLProv
To train a Deep Learning (DL) model, a workflow must be executed with four well-defined activities: (i) acquiring data, (ii) preprocessing, (iii) splitting and balancing the dataset, and (iv) building and training the model. After several DL models are generated, they undergo a process called model selection. The selected DL model is then put into a production environment to make predictions on new data. One of the challenges in supporting these analyses is providing the relationships between candidate models, their training, test, and validation datasets, input data, and other derivation paths. These relationships are also essential for trust, reproducibility, and evolution of the selected model. While existing solutions allow monitoring and analyzing the artifacts generated throughout the DL workflow, they often fail to establish the relationships that support data derivation within it. DLProv is a provenance-centric service that supports DL workflow analyses and reproducibility: it captures provenance data and exports provenance graphs for DL model reproducibility. DLProv is W3C PROV compliant, ensuring standardized prospective and retrospective provenance, and enables provenance capture in arbitrary execution frameworks.
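The kind of derivation relationships the abstract describes can be pictured with a small sketch. This is not DLProv's actual API (which is not shown in the abstract); it is a minimal, hypothetical in-memory provenance graph that records the four workflow activities with their input and output entities, and walks the generation/usage edges back from a candidate model to its original input data, in the spirit of W3C PROV's entity–activity model:

```python
from dataclasses import dataclass, field

# Hypothetical sketch (not DLProv's real interface): a tiny provenance graph
# with PROV-style "used" (activity consumed entity) and "wasGeneratedBy"
# (entity produced by activity) edges.

@dataclass
class ProvGraph:
    entities: set = field(default_factory=set)
    activities: set = field(default_factory=set)
    used: list = field(default_factory=list)       # (activity, entity) pairs
    generated: list = field(default_factory=list)  # (entity, activity) pairs

    def record(self, activity, inputs, outputs):
        """Register one workflow activity with its input/output entities."""
        self.activities.add(activity)
        for e in inputs:
            self.entities.add(e)
            self.used.append((activity, e))
        for e in outputs:
            self.entities.add(e)
            self.generated.append((e, activity))

    def derivation_path(self, entity):
        """Follow generation and usage edges back to the original inputs."""
        gen = {e: a for e, a in self.generated}
        use = {}
        for a, e in self.used:
            use.setdefault(a, []).append(e)
        path = [entity]
        while path[-1] in gen:
            inputs = use.get(gen[path[-1]], [])
            if not inputs:
                break          # reached an activity with no inputs (acquisition)
            path.append(inputs[0])  # follow the first input edge
        return path

# The four activities from the abstract, with illustrative entity names.
g = ProvGraph()
g.record("acquire_data", [], ["raw_data"])
g.record("preprocess", ["raw_data"], ["clean_data"])
g.record("split_and_balance", ["clean_data"], ["train_set", "test_set"])
g.record("build_and_train", ["train_set"], ["model_v1"])

print(g.derivation_path("model_v1"))
# -> ['model_v1', 'train_set', 'clean_data', 'raw_data']
```

A service like DLProv additionally standardizes these edges against the W3C PROV vocabulary and serializes them as exportable provenance graphs, so the same backward walk can answer questions such as which input data a selected model ultimately depends on.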