Learning from Our Mistakes?
Author: Lauren Dyson
Journal: Traffic Technology International, vol. 41, no. 4
Publication date: 2024-01-01
DOI: 10.12968/s1356-9252(24)40033-6

Abstract: Decisions based on artificial intelligence could end up being flawed because AI is being trained on biased data. While machine learning algorithms have the potential to enable greater safety, better access to mobility and more effective traffic management, data bias can lead to negative consequences such as discrimination, unfairness and unreliable data. We ask the experts how we can avoid unintended consequences in smart data management.