Automated black-box testing of nominal and error scenarios in RESTful APIs

Impact Factor 1.5 · CAS Tier 4 (Computer Science) · JCR Q3, COMPUTER SCIENCE, SOFTWARE ENGINEERING
Davide Corradini, Amedeo Zampieri, Michele Pasqua, Emanuele Viglianisi, Michael Dallago, M. Ceccato
{"title":"自动黑盒测试的名义和错误的场景在RESTful api","authors":"Davide Corradini, Amedeo Zampieri, Michele Pasqua, Emanuele Viglianisi, Michael Dallago, M. Ceccato","doi":"10.1002/stvr.1808","DOIUrl":null,"url":null,"abstract":"RESTful APIs (or REST APIs for short) represent a mainstream approach to design and develop web APIs using the REpresentational State Transfer architectural style. Black‐box testing, which assumes only the access to the system under test with a specific interface, is the only viable option when white‐box testing is impracticable. This is the case for REST APIs: their source code is usually not (or just partially) available, or a white‐box analysis across many dynamically allocated distributed components (typical of a micro‐services architecture) is computationally challenging. This paper presents RestTestGen, a novel black‐box approach to automatically generate test cases for REST APIs, based on their interface definition (an OpenAPI specification). Input values and requests are generated for each operation of the API under test with the twofold objective of testing nominal execution scenarios and error scenarios. Two distinct oracles are deployed to detect when test cases reveal implementation defects. While this approach is mainly targeting the research community, it is also of interest to developers because, as a black‐box approach, it is universally applicable across different programming languages, or in the case external (compiled only) libraries are used in a REST API. The validation of our approach has been performed on more than 100 of real‐world REST APIs, highlighting the effectiveness of the approach in revealing actual faults in already deployed services.","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":null,"pages":null},"PeriodicalIF":1.5000,"publicationDate":"2022-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":"{\"title\":\"Automated black‐box testing of nominal and error scenarios in RESTful APIs\",\"authors\":\"Davide Corradini, Amedeo Zampieri, Michele Pasqua, Emanuele Viglianisi, Michael Dallago, M. Ceccato\",\"doi\":\"10.1002/stvr.1808\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"RESTful APIs (or REST APIs for short) represent a mainstream approach to design and develop web APIs using the REpresentational State Transfer architectural style. Black‐box testing, which assumes only the access to the system under test with a specific interface, is the only viable option when white‐box testing is impracticable. This is the case for REST APIs: their source code is usually not (or just partially) available, or a white‐box analysis across many dynamically allocated distributed components (typical of a micro‐services architecture) is computationally challenging. This paper presents RestTestGen, a novel black‐box approach to automatically generate test cases for REST APIs, based on their interface definition (an OpenAPI specification). Input values and requests are generated for each operation of the API under test with the twofold objective of testing nominal execution scenarios and error scenarios. Two distinct oracles are deployed to detect when test cases reveal implementation defects. 
While this approach is mainly targeting the research community, it is also of interest to developers because, as a black‐box approach, it is universally applicable across different programming languages, or in the case external (compiled only) libraries are used in a REST API. The validation of our approach has been performed on more than 100 of real‐world REST APIs, highlighting the effectiveness of the approach in revealing actual faults in already deployed services.\",\"PeriodicalId\":49506,\"journal\":{\"name\":\"Software Testing Verification & Reliability\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2022-01-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"22\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Software Testing Verification & Reliability\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1002/stvr.1808\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software Testing Verification & Reliability","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1002/stvr.1808","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 22

Abstract

RESTful APIs (or REST APIs for short) represent a mainstream approach to designing and developing web APIs using the REpresentational State Transfer architectural style. Black-box testing, which assumes only access to the system under test through a specific interface, is the only viable option when white-box testing is impracticable. This is the case for REST APIs: their source code is usually not (or only partially) available, or a white-box analysis across many dynamically allocated distributed components (typical of a micro-services architecture) is computationally challenging. This paper presents RestTestGen, a novel black-box approach to automatically generate test cases for REST APIs, based on their interface definition (an OpenAPI specification). Input values and requests are generated for each operation of the API under test with the twofold objective of testing nominal execution scenarios and error scenarios. Two distinct oracles are deployed to detect when test cases reveal implementation defects. While this approach mainly targets the research community, it is also of interest to developers because, as a black-box approach, it is universally applicable across different programming languages, or when external (compiled-only) libraries are used in a REST API. The validation of our approach has been performed on more than 100 real-world REST APIs, highlighting its effectiveness in revealing actual faults in already deployed services.
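To make the idea concrete, below is a minimal sketch of what specification-driven generation of nominal and error test cases can look like. It is not the RestTestGen implementation; the operation, parameter names, mutation choices and oracle thresholds are illustrative assumptions only. The sketch builds one request that respects the declared parameter schemas and one that deliberately violates them, and checks HTTP status codes with a simple oracle (2xx expected for nominal requests, 4xx for malformed ones, 5xx flagged as a defect).

```python
# Hypothetical sketch, NOT the RestTestGen code: derive one nominal and one
# error-scenario request from a single (simplified) OpenAPI operation, then
# judge responses with a status-code oracle. Endpoint and parameters are made up.

import random
import string

# A single operation as it might appear, much simplified, in an OpenAPI spec.
operation = {
    "method": "GET",
    "path": "/products/{id}",
    "parameters": [
        {"name": "id", "in": "path", "required": True,
         "schema": {"type": "integer", "minimum": 1}},
        {"name": "currency", "in": "query", "required": False,
         "schema": {"type": "string", "enum": ["EUR", "USD"]}},
    ],
}

def nominal_value(schema):
    """Pick a value that satisfies the declared schema (nominal scenario)."""
    if schema["type"] == "integer":
        return schema.get("minimum", 0) + random.randint(0, 10)
    if "enum" in schema:
        return random.choice(schema["enum"])
    return "".join(random.choices(string.ascii_lowercase, k=8))

def error_value(schema):
    """Pick a value that violates the declared schema (error scenario)."""
    if schema["type"] == "integer":
        return "not-a-number"   # type violation
    if "enum" in schema:
        return "XXX"            # value outside the allowed enum
    return ""                   # empty string where text is expected

def build_request(op, value_fn, drop_required=False):
    """Instantiate the path and query parameters of an operation with value_fn."""
    path, query = op["path"], {}
    for p in op["parameters"]:
        if drop_required and p["required"]:
            continue            # omit a mandatory parameter (error scenario)
        value = value_fn(p["schema"])
        if p["in"] == "path":
            path = path.replace("{" + p["name"] + "}", str(value))
        else:
            query[p["name"]] = value
    return {"method": op["method"], "path": path, "query": query}

def status_oracle(status, expect_success):
    """Nominal requests should get 2xx; malformed ones 4xx. 5xx is always a defect."""
    if status >= 500:
        return "FAIL: server error"
    expected = (200 <= status < 300) if expect_success else (400 <= status < 500)
    return "PASS" if expected else "FAIL: unexpected status"

nominal = build_request(operation, nominal_value)
error = build_request(operation, error_value, drop_required=True)
print("nominal test case:", nominal)
print("error test case:  ", error)
# Against a deployed service, each generated request would be sent over HTTP and
# the returned status code fed to status_oracle(status, expect_success=...).
```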
Source journal
Software Testing Verification & Reliability (Engineering & Technology - Computer Science: Software Engineering)
CiteScore: 3.70
Self-citation rate: 0.00%
Annual articles: 34
Review time: >12 weeks
Journal description
The journal is the premier outlet for research results on the subjects of testing, verification and reliability. Readers will find useful research on issues pertaining to building better software and evaluating it. The journal is unique in its emphasis on theoretical foundations and applications to real-world software development. The balance of theory, empirical work, and practical applications provides readers with better techniques for testing, verifying and improving the reliability of software. The journal targets researchers, practitioners, educators and students that have a vested interest in results generated by high-quality testing, verification and reliability modeling and evaluation of software. Topics of special interest include, but are not limited to:
- New criteria for software testing and verification
- Application of existing software testing and verification techniques to new types of software, including web applications, web services, embedded software, aspect-oriented software, and software architectures
- Model-based testing
- Formal verification techniques such as model-checking
- Comparison of testing and verification techniques
- Measurement of and metrics for testing, verification and reliability
- Industrial experience with cutting-edge techniques
- Descriptions and evaluations of commercial and open-source software testing tools
- Reliability modeling, measurement and application
- Testing and verification of software security
- Automated test data generation
- Process issues and methods
- Non-functional testing