Data-Driven Innovation for Trustworthy AI

Authors: L. Siddharth, Jianxi Luo
Journal: She Ji: The Journal of Design Economics and Innovation, Vol. 11, Issue 3, pp. 261–283
Publication date: 2025-09-01
Publication type: Journal Article
DOI: 10.1016/j.sheji.2025.06.002
URL: https://www.sciencedirect.com/science/article/pii/S2405872625000437

Abstract: Global concerns over the trustworthiness of rapidly proliferating artificial intelligence (AI)-centric artifacts have led to generic institutional recommendations for trustworthy AI, which have yet to be operationalized and integrated with design and innovation processes. We leverage the double hump model of data-driven innovation to propose and illustrate diverse data-driven approaches for identifying and evaluating opportunities, and generating and evaluating concepts for trustworthy AI. These approaches are expected to operationalize the institutional recommendations of trustworthy AI. Building on existing frameworks for classifying and managing risks associated with AI, we advocate for an ontological basis for trustworthy AI to enable fine-grained, computational assessments of AI-centric artifacts, their domains, and the organizations that develop or manage them.