{"title":"Benchmark for testing the evaluation tools for Web pages accessibility","authors":"Michele Kirchner","doi":"10.1109/WSE.2003.1234010","DOIUrl":"https://doi.org/10.1109/WSE.2003.1234010","url":null,"abstract":"The accessibility of Web pages still goes unaddressed by many sites, but it is expected to become more important in the near future. If the goal of publishing Web pages remains universal access for everybody, pages must be accessible. A page is accessible if it can be viewed by many different browsers, in particular by the special browsers used by disabled people. It is worth noting that accessible pages have an important added value, since they are also easily accessed through PDAs and other portable devices. A whole site comprising 28 pages has been developed, each page violating one specific WAI (Web Accessibility Initiative) directive, for a total of 40 checkpoints (or subcheckpoints). With these pages, about half of the WAI directives can be checked in the tools. The site constitutes an initial benchmark for testing validation tools. The project still needs further work, and this paper is also a call for collaboration to enlarge and refine the benchmark.","PeriodicalId":220870,"journal":{"name":"Fifth IEEE International Workshop on Web Site Evolution, 2003. Theme: Architecture. Proceedings.","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127434610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using architecture evaluation to prepare a large Web based system for evolution","authors":"Dennis B. Smith, P. Merson","doi":"10.1109/WSE.2003.1234012","DOIUrl":"https://doi.org/10.1109/WSE.2003.1234012","url":null,"abstract":"This paper describes an ongoing effort to develop architecture documentation and to evaluate the architecture using the Architecture Tradeoff Analysis Method (ATAM). The system is a large Web-based application using J2EE, portal, and Web services technology that is replacing a large number of legacy applications. It has very detailed documentation at the design level, but lacks overall architecture documentation. We describe how the architecture is being documented in an iterative manner, and how sample scenarios are being developed as input for an ATAM. Together, the documentation, scenarios, and ATAM will fill important gaps in existing knowledge and provide an important baseline for the system's evolution.","PeriodicalId":220870,"journal":{"name":"Fifth IEEE International Workshop on Web Site Evolution, 2003. Theme: Architecture. Proceedings.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129016006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resolution of static clones in dynamic Web pages","authors":"Nikita Synytskyy, J. Cordy, T. Dean","doi":"10.1109/WSE.2003.1234008","DOIUrl":"https://doi.org/10.1109/WSE.2003.1234008","url":null,"abstract":"Cloning is extremely likely to occur in Web sites, much more so than in other software. While some clones exist for valid reasons, or are too small to eliminate, cloning percentages of 30% or higher, not uncommon in Web sites, suggest that improvements can be made. Finding and resolving clones in Web documents is rather challenging, however: syntax errors and routine use of multiple languages complicate parsing the documents and finding clones, while the lack of native code reuse tools forces the analyst to rely on other technologies for resolution. Here we present a way to find clones in multi-language Web documents and resolve them using one of several code reuse techniques available in a dynamic Web site. Rather than picking a single resolution technique and relying on it exclusively, we choose the technique based on the clone in question, to minimize disruption to the structure of the original documents.","PeriodicalId":220870,"journal":{"name":"Fifth IEEE International Workshop on Web Site Evolution, 2003. Theme: Architecture. Proceedings.","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121852927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using keyword extraction for Web site clustering","authors":"P. Tonella, F. Ricca, E. Pianta, Christian Girardi","doi":"10.1109/WSE.2003.1234007","DOIUrl":"https://doi.org/10.1109/WSE.2003.1234007","url":null,"abstract":"Reverse engineering techniques have the potential to support Web site understanding, by providing views that show the organization of a site and its navigational structure. However, representing each Web page as a node in the diagrams recovered from the source code of a Web site often leads to huge, unreadable graphs. Moreover, since the level of connectivity is typically high, the edges in such graphs make the result even less usable. Clustering can be used to produce cohesive groups of pages that are displayed as a single node in reverse engineered diagrams. In this paper, we propose a clustering method based on the automatic extraction of the keywords of a Web page. The presence of common keywords is exploited to decide when it is appropriate to group pages together. A second usage of the keywords is in the automatic labeling of the recovered clusters of pages.","PeriodicalId":220870,"journal":{"name":"Fifth IEEE International Workshop on Web Site Evolution, 2003. Theme: Architecture. Proceedings.","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127352889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A tool-supported method to extract data and schema from Web sites","authors":"Fabrice Estiévenart, Aurore François, J. Henrard, Jean-Luc Hainaut","doi":"10.1109/WSE.2003.1234003","DOIUrl":"https://doi.org/10.1109/WSE.2003.1234003","url":null,"abstract":"This paper presents a tool-supported method to reengineer Web sites, that is, to extract the page contents as XML documents structured by expressive DTDs or XML Schemas. All the pages that are recognized to express the same application (sub)domain are analyzed in order to derive their common structure. This structure is formalized by an XML document, called META, which is then used to extract an XML document that contains the data of the pages and an XML Schema validating these data. The META document can describe various structures, such as alternative layouts and data structures for the same concept, structure multiplicity, and separation between layout and informational content. XML Schemas extracted from different page types are integrated and conceptualized into a unique schema describing the domain covered by the whole Web site. Finally, this conceptual schema is used to build the database of a renovated Web site. These principles are illustrated through a case study using the tools that create the META document and extract the data and the XML Schema.","PeriodicalId":220870,"journal":{"name":"Fifth IEEE International Workshop on Web Site Evolution, 2003. Theme: Architecture. Proceedings.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117099144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Considering browser interaction in Web application testing","authors":"G. D. Lucca, M. D. Penta","doi":"10.1109/WSE.2003.1234011","DOIUrl":"https://doi.org/10.1109/WSE.2003.1234011","url":null,"abstract":"As Web applications evolve, their structure may become more and more complex. Thus, systematic approaches/methods for Web application testing are needed. Existing methods take into consideration only those actions/events the user is prompted for by the application itself, such as the selection of a hypertextual link or the submission of the data contained in a form. However, these methods do not also consider actions/events provided by the browser, such as the use of the backward and forward buttons, which in some cases may produce navigation inconsistencies. This paper proposes an approach to integrate existing testing techniques with state-based testing devoted to discovering possible inconsistencies caused by interactions with Web browser buttons. A testing model, considering the role of the browser while navigating a Web application, and some coverage criteria are presented.","PeriodicalId":220870,"journal":{"name":"Fifth IEEE International Workshop on Web Site Evolution, 2003. Theme: Architecture. Proceedings.","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132794972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lessons learned in Web site architectures for public utilities","authors":"Dabo Sun, Kenny Wong, D. L. Moise","doi":"10.1109/WSE.2003.1234013","DOIUrl":"https://doi.org/10.1109/WSE.2003.1234013","url":null,"abstract":"Web sites are complex systems. They evolve continuously and are hard to understand and maintain. This paper discusses some issues and challenges related to Web site development and maintenance. An evolutionary three-tier Web site architecture is presented, based on our experiences in building and maintaining Web-based public utility management systems. In the current public utility domain, the coexistence of regulated and deregulated markets and diverse business rules brings many challenges for software engineers building extensible and maintainable Web-based systems. New technologies provide opportunities to meet these requirements. However, how to adopt new technologies appropriately becomes another issue. Some design and implementation tradeoffs, as well as some lessons learned, are considered in this paper.","PeriodicalId":220870,"journal":{"name":"Fifth IEEE International Workshop on Web Site Evolution, 2003. Theme: Architecture. Proceedings.","volume":"40 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115944076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation methods for Web application clustering","authors":"P. Tonella, F. Ricca, E. Pianta, Christian Girardi, G. D. Lucca, A. R. Fasolino, Porfirio Tramontana","doi":"10.1109/WSE.2003.1234006","DOIUrl":"https://doi.org/10.1109/WSE.2003.1234006","url":null,"abstract":"Clustering of the entities composing a Web application (static and dynamic pages) can be used to support program understanding. However, several alternative options are available when a clustering technique is designed for Web applications. The entities to be clustered can be described in different ways (e.g., by their structure, by their connectivity, or by their content), different similarity measures are possible, and alternative procedures can be used to form the clusters. The problem is how to evaluate the competing clustering techniques in order to select the best one for program understanding purposes. In this paper, two methods for clustering evaluation are considered: the gold standard and the task-oriented approach. The advantages and disadvantages of both are analyzed in detail. Definition of a gold standard (reference clustering) is difficult and prone to subjectivity. On the other hand, an evaluation based on the level of support given to task execution is expensive and requires careful experimental design. Guidelines and examples are provided for the implementation of both methods.","PeriodicalId":220870,"journal":{"name":"Fifth IEEE International Workshop on Web Site Evolution, 2003. Theme: Architecture. Proceedings.","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124218104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Abstracting business level UML diagrams from Web applications","authors":"G. D. Lucca, A. R. Fasolino, Porfirio Tramontana, U. D. Carlini","doi":"10.1109/WSE.2003.1234004","DOIUrl":"https://doi.org/10.1109/WSE.2003.1234004","url":null,"abstract":"In this paper, a reverse engineering approach for reconstructing UML diagrams at the business level of the application domain of a Web application is presented. In particular, the approach allows the reconstruction of the UML class diagram providing an object-oriented conceptual model of the application domain, sequence diagrams modeling the interactions among the identified business objects, and use case diagrams modeling the user functionalities provided by the Web application. Heuristic criteria exploiting source code analysis are used for recovering the diagrams. Tools implementing these criteria have been produced, and experiments for validating them have been carried out with the support of case studies. Experimental results showed the feasibility and effectiveness of the proposed approach.","PeriodicalId":220870,"journal":{"name":"Fifth IEEE International Workshop on Web Site Evolution, 2003. Theme: Architecture. Proceedings.","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133249423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Creating Web services from legacy host programs","authors":"H. Sneed, Stephan H. Sneed","doi":"10.1109/WSE.2003.1234009","DOIUrl":"https://doi.org/10.1109/WSE.2003.1234009","url":null,"abstract":"An important prerequisite to connecting existing systems to the Web is the ability to link client programs on the Web site with server programs on the host. The host programs were not conceived to run in an Internet mode; they are either online transactions or batch steps. This paper describes a tool-supported process to cut out selected sections of legacy code and provide them with an XML interface. The same interface is used to generate a Java class, which creates the XML messages returning from the server. This class is then built into the package managing the Web site. In this way, consistent communication between the Web site and the server components on the host is ensured.","PeriodicalId":220870,"journal":{"name":"Fifth IEEE International Workshop on Web Site Evolution, 2003. Theme: Architecture. Proceedings.","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122821266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}