AI/ML Systems Engineering Workbench Framework
K. Nyarko, Peter O. Taiwo, Chukwuemeka Duru, Emmanual Masa-Ibi
2023 57th Annual Conference on Information Sciences and Systems (CISS), March 22, 2023. DOI: 10.1109/CISS56502.2023.10089781
This paper presents the framework of a cloud-based Artificial Intelligence (AI) and Machine Learning (ML) workbench that provides service utilization and performance benchmarking. The framework promotes convenience by providing a centralized platform where software developers and data scientists can perform federated search across various dataset repositories, choose problem domains such as Natural Language Processing, Speech, and Computer Vision, and build and validate models. The benchmarking functionality of the framework helps users evaluate and compare the performance of solutions from multiple cloud service providers. The workbench framework consists of two primary layers. The Services layer is rendered as an AI as a Service (AIaaS) model, providing interfaces that connect users to vision, speech, and natural language processing (NLP) services from various AI service providers. The Platform layer is an ML as a Service (MLaaS) model providing access to ML model training, tuning, inference, and transfer learning tasks that can be fulfilled on multiple cloud ML platforms with preset cloud-based compute instances. Benchmarking is provided on the workbench by comparing accuracy metrics on prediction and detection counts, F1 scores, and ML training instance setup and completion times. By utilizing these performance benchmarking features, the framework can assist AI and ML practitioners in making informed judgments when selecting a cloud provider for specific activities. Additionally, it will increase the effectiveness and efficiency of data science training for both teachers and students.
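The abstract does not include implementation details, but as a rough illustration of the kind of provider comparison it describes, the sketch below derives an F1 score from prediction/detection counts and ranks providers by accuracy and by total setup-plus-completion time. It is a minimal sketch under stated assumptions, not the paper's implementation: all names (`ProviderResult`, `f1_from_counts`, `rank_providers`) and the example numbers are hypothetical.

```python
"""Hedged sketch of a workbench-style benchmark comparison.

The paper does not publish an API; this only mirrors the metrics it
mentions (prediction/detection counts, F1 score, training instance
setup and completion time). All names and numbers are illustrative.
"""
from dataclasses import dataclass


@dataclass
class ProviderResult:
    provider: str              # hypothetical label for a cloud AI/ML service
    true_positives: int        # correct detections/predictions
    false_positives: int       # spurious detections/predictions
    false_negatives: int       # missed detections/predictions
    setup_seconds: float       # time to provision the training instance
    completion_seconds: float  # time for the training/inference job to finish


def f1_from_counts(tp: int, fp: int, fn: int) -> float:
    """F1 = 2PR / (P + R), computed from raw counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def rank_providers(results: list[ProviderResult]) -> list[tuple[str, float, float]]:
    """Return (provider, F1, total wall-clock seconds), best F1 first."""
    table = [
        (r.provider,
         f1_from_counts(r.true_positives, r.false_positives, r.false_negatives),
         r.setup_seconds + r.completion_seconds)
        for r in results
    ]
    return sorted(table, key=lambda row: row[1], reverse=True)


if __name__ == "__main__":
    # Purely synthetic numbers for illustration.
    results = [
        ProviderResult("provider-a", 90, 10, 15, 120.0, 900.0),
        ProviderResult("provider-b", 85, 5, 20, 300.0, 600.0),
    ]
    for provider, f1, seconds in rank_providers(results):
        print(f"{provider}: F1={f1:.3f}, total time={seconds:.0f}s")
```

A real workbench of the kind the paper describes would presumably populate these counts and timings from the AIaaS and MLaaS layers automatically; the tabulation and ranking step would remain essentially this simple.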