{"title":"Quality assurance for Internet of Things and speech recognition systems","authors":"Yves Le Traon, Tao Xie","doi":"10.1002/stvr.1858","DOIUrl":null,"url":null,"abstract":"In this issue, we are pleased to present two papers: one for risk assessment for an industrial Internet of Things and the other for testing speech recognition systems. The first paper, ‘ HiRAM: A Hierarchical Risk Assessment Model and Its Implementation for an Industrial Internet of Things in the Cloud ’ by Wen-Lin Sun, Ying-Han Tang and Yu-Lun Huang, proposes Hierarchical Risk Assessment Model (HiRAM) for an IIoT cloud platform to enable self-evaluate its security status by leveraging analytic hierarchy processes (AHPs). The authors also realise HiRAM-RAS, a modular and responsive Risk Assessment System based on HiRAM, and evaluate it in a real-world IIoT cloud platform. The evaluation results show the changes in integrity and availability scores evaluated by HiRAM. (Recommended by Xiaoyin Wang). The second paper, ‘ Adversarial Example-based Test Case Generation for Black-box Speech Recognition Systems ’ by Hanbo Cai, Pengcheng Zhang, Hai Dong, Lars Grunske, Shunhui Ji and Tianhao Yuan, proposes methods for generating targeted adversarial examples for speech recognition systems, based on the firefly algorithm. These methods generate the targeted adversarial samples by continuously adding interference noise to the original speech samples. The evaluation results show that the proposed methods achieve satisfactory results on three speech datasets (Google Command, Common Voice and LibriSpeech), and compared with existing methods, these methods can effectively improve the success rate of the targeted adversarial example generation. (Recommended by Yves Le Traon). We hope that these papers will inspire further research in these directions of quality assurance.","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"23 1","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2023-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software Testing Verification & Reliability","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1002/stvr.1858","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Abstract
In this issue, we are pleased to present two papers: one on risk assessment for the industrial Internet of Things (IIoT) and the other on testing speech recognition systems.

The first paper, ‘HiRAM: A Hierarchical Risk Assessment Model and Its Implementation for an Industrial Internet of Things in the Cloud’ by Wen-Lin Sun, Ying-Han Tang and Yu-Lun Huang, proposes the Hierarchical Risk Assessment Model (HiRAM), which enables an IIoT cloud platform to self-evaluate its security status by leveraging analytic hierarchy processes (AHPs); a generic sketch of the AHP weighting step follows this abstract. The authors also realise HiRAM-RAS, a modular and responsive Risk Assessment System based on HiRAM, and evaluate it on a real-world IIoT cloud platform. The evaluation results show the changes in the integrity and availability scores evaluated by HiRAM. (Recommended by Xiaoyin Wang.)

The second paper, ‘Adversarial Example-based Test Case Generation for Black-box Speech Recognition Systems’ by Hanbo Cai, Pengcheng Zhang, Hai Dong, Lars Grunske, Shunhui Ji and Tianhao Yuan, proposes firefly-algorithm-based methods for generating targeted adversarial examples for speech recognition systems; a sketch of this search loop also follows. These methods generate the targeted adversarial samples by iteratively adding interference noise to the original speech samples. The evaluation results show that the proposed methods achieve satisfactory results on three speech datasets (Google Command, Common Voice and LibriSpeech) and, compared with existing methods, effectively improve the success rate of targeted adversarial example generation. (Recommended by Yves Le Traon.)

We hope that these papers will inspire further research in these directions of quality assurance.
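To make the AHP step concrete: AHP derives criterion weights from a pairwise-comparison matrix via its principal eigenvector and checks that the judgements are consistent. The abstract does not describe HiRAM's actual hierarchy, criteria or scoring functions, so the matrix values, the three criteria named below and the weighted roll-up are illustrative assumptions, not the paper's model; only the eigenvector weighting and the consistency ratio are standard AHP. A minimal Python sketch:

```python
import numpy as np

def ahp_weights(pairwise):
    """Standard AHP step: criterion weights are the normalised
    principal eigenvector of the pairwise-comparison matrix."""
    a = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(a)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()

def consistency_ratio(pairwise):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1);
    CR < 0.1 is the usual threshold for acceptable judgements."""
    a = np.asarray(pairwise, dtype=float)
    n = len(a)
    lam_max = np.max(np.linalg.eigvals(a).real)
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random indices
    return ci / ri

# Illustrative only: compare three security criteria and roll their
# child scores (in [0, 1]) up into one parent score, as a hierarchical
# model like HiRAM might do at each level of its hierarchy.
pairwise = [[1,   3,   5],
            [1/3, 1,   2],
            [1/5, 1/2, 1]]
w = ahp_weights(pairwise)            # roughly [0.65, 0.23, 0.12]
assert consistency_ratio(pairwise) < 0.1
scores = np.array([0.8, 0.6, 0.9])   # integrity, availability, confidentiality
parent_score = float(w @ scores)
```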
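The second paper's attack is black-box: it only needs to query the recognizer, and the firefly algorithm steers a population of candidate noise vectors toward ones whose transcription is closer to the attacker's target phrase. The paper's exact fitness function, parameter values and constraints are not given in the abstract, so everything below (the names firefly_attack, fitness, recognize and edit_distance, and all hyper-parameters) is an assumed, generic firefly search, not the authors' implementation:

```python
import numpy as np

def firefly_attack(audio, fitness, n=20, steps=100, eps=0.005,
                   beta0=1.0, gamma=1.0, alpha=0.1, seed=0):
    """Generic firefly search over additive noise vectors bounded by eps.

    fitness(noise) must return a score that is higher the closer
    recognize(audio + noise) is to the target phrase; because only
    such queries are needed, the recognizer stays a black box.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-eps, eps, size=(n, audio.shape[0]))
    light = np.array([fitness(x) for x in pop])   # "brightness" of each firefly
    for _ in range(steps):
        for i in range(n):
            for j in range(n):
                if light[j] > light[i]:           # move i towards brighter j
                    r2 = np.mean((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pop[i] += (beta * (pop[j] - pop[i])
                               + alpha * rng.uniform(-eps, eps, audio.shape[0]))
                    pop[i] = np.clip(pop[i], -eps, eps)  # keep the noise small
                    light[i] = fitness(pop[i])
    return audio + pop[np.argmax(light)]

# Hypothetical usage against a black-box recognizer `recognize`
# (edit_distance is likewise assumed, e.g. Levenshtein distance):
#   target = "open the door"
#   fitness = lambda noise: -edit_distance(recognize(audio + noise), target)
#   adversarial = firefly_attack(audio, fitness)
```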
Journal introduction:
The journal is the premier outlet for research results on the subjects of testing, verification and reliability. Readers will find useful research on issues pertaining to building better software and evaluating it.
The journal is unique in its emphasis on theoretical foundations and applications to real-world software development. The balance of theory, empirical work and practical applications provides readers with better techniques for testing, verifying and improving the reliability of software.
The journal targets researchers, practitioners, educators and students who have a vested interest in results generated by high-quality testing, verification and reliability modeling and evaluation of software. Topics of special interest include, but are not limited to:
-New criteria for software testing and verification
-Application of existing software testing and verification techniques to new types of software, including web applications, web services, embedded software, aspect-oriented software, and software architectures
-Model-based testing
-Formal verification techniques such as model-checking
-Comparison of testing and verification techniques
-Measurement of and metrics for testing, verification and reliability
-Industrial experience with cutting-edge techniques
-Descriptions and evaluations of commercial and open-source software testing tools
-Reliability modeling, measurement and application
-Testing and verification of software security
-Automated test data generation
-Process issues and methods
-Non-functional testing