{"title":"A framework for implementing micro frontend architecture","authors":"Sylvester Timona Wanjala","doi":"10.7753/ijcatr1012.1002","DOIUrl":"https://doi.org/10.7753/ijcatr1012.1002","url":null,"abstract":"Web applications are an indispensable part of any enterprise information system. In the recent past, we have seen maturity in technologies that enable the separation of frontend and backend, with the backend adopting microservices architecture style. The front end has maintained the traditional monolithic architecture. Micro frontends have come up as a solution to the conventional monolithic frontend, which has received much attention. Still, so far, there is no straightforward approach to implementation that satisfies different practical requirements of a modern web application. This paper proposes an architectural pattern for implementing micro frontends to address challenges experienced in earlier implementations, such as inconsistent user experience, managing security, and complexity. We developed two simple web applications, one using the proposed architectural pattern, and another using the monolithic architecture and compared the performance. We used Google lighthouse to measure the performance of two applications. The landing page for the application developed using micro frontend architecture showed a higher performance score of 99 against 86 for a similar page in an application developed using the monolithic architecture. The proposed framework showed outstanding performance in handling the issues of consistent layout with a Cumulative Layout shift of 0. Breaking down the frontend with lazy loading of micro frontends improved the web application's performance, while the proposed framework reduced development complexity. However, more research is needed to provide seamless integration of micro frontends into the main application with the support of loading shared libraries in the main application; this will significantly reduce the payload size.","PeriodicalId":396746,"journal":{"name":"Int. J. Web Eng. Technol.","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131325815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A knowledge governance framework for open innovation projects","authors":"R. Bernsteiner, Thomas Dilger, Christian Ploder","doi":"10.1504/IJWET.2020.109731","DOIUrl":"https://doi.org/10.1504/IJWET.2020.109731","url":null,"abstract":"Markets tend to develop always faster with ever-growing requirements on products and services. This forces enterprises to cooperate with partners like suppliers, customers, or even competitors across organisational borders to cope with these challenges. Such a collaboration leads to knowledge flows between all partners. Being too open and sharing too much information can cause knowledge leakage. The central aim of this research is to provide a framework on how to integrate knowledge governance mechanisms in open innovation projects to ensure eligibility in practice. Insights from the field have been integrated into the framework by interviewing ten experts who have practical experiences on open innovation projects. The interviews have been conducted in 2019 and analysed later on. Based on scientific literature and insights from the field, a knowledge governance framework to guide through open innovation projects has been developed.","PeriodicalId":396746,"journal":{"name":"Int. J. Web Eng. Technol.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114273758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A model-driven approach for the verification of an adaptive service composition","authors":"S. Zatout, Mahmoud Boufaïda, M. Benabdelhafid, M. Berkane","doi":"10.1504/ijwet.2020.107678","DOIUrl":"https://doi.org/10.1504/ijwet.2020.107678","url":null,"abstract":"The development of web service compositions is a complex task that needs coherent mechanisms in order to maintain the quality of the provided business process and to satisfy user needs. This paper proposes a development process of an adaptable composed web service and mainly focuses on the reliability and the performance properties. It explores the model driven architecture transformation technique in order to formally model the whole service orchestration using the timed coloured Petri net formalism. The software CPN Tools offers, among others, the ASK-computational tree logic, the model checking technique and several monitors that will be exploited to describe and verify different properties at design time. They will also be used via access/CPN library in order to reason about the reconfiguration technique at runtime. An example of an identity card management process is given to prove the feasibility of the proposed solution.","PeriodicalId":396746,"journal":{"name":"Int. J. Web Eng. Technol.","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121902824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A secure VM allocation scheme to preserve against co-resident threat","authors":"S. Chhabra, Ashutosh Kumar Singh","doi":"10.1504/ijwet.2020.107686","DOIUrl":"https://doi.org/10.1504/ijwet.2020.107686","url":null,"abstract":"Preserving the secrecy in cloud system is one of the biggest concerns for the cloud customers who faces security risks in the context of load balancing. The co-resident attacks are widely used by attackers, where malicious users build side channels and extract private information from VMs. The proposed model evaluates the possibility of VM co-residency and success rate of an attack. The emphasis of this paper is to reduce the possibility of co-resident attacks among different users. When cloud data centres receive requests for the tasks deployment, then the proposed system will find out the secure physical machine under VM allocation policies while avoiding the threats. The performance is calculated by these metrics: makespan, resource utilisation, co-residency probability and co-resident success rate. The results show that the most virtual machine allocation policy (MVMP) effectively reduces the risk under the safe states. The framework significantly improves the security by reducing the shared servers up to 32.2% and enhances the resource utilisation up to 44.14% over least VM allocation policy (LVMP), round robin VM allocation policy (RRVMP) allocation schemes.","PeriodicalId":396746,"journal":{"name":"Int. J. Web Eng. Technol.","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123625269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The research on two phase pickup vehicle routing based on the K-means++ and genetic algorithms","authors":"Huan Zhao, Yiping Yang","doi":"10.1504/ijwet.2020.10029813","DOIUrl":"https://doi.org/10.1504/ijwet.2020.10029813","url":null,"abstract":"A popular topic of interest is the development of an efficient vehicle routing plan, which needs to meet customer requirements and ensure delivery with the lowest cost. This paper established a model of the vehicle routing problem with a time window and static network considering the vehicle type, type of goods, and customer satisfaction requirements to build an optimisation model. By optimising the combination of the K-means++ and genetic algorithms, the problem is transformed into a two stage solution, supplier clustering is performed using the K-means++ algorithm, and the vehicle path is determined using the genetic algorithm in each cluster arrangement. Finally, the optimisation results are compared with the actual delivery data, which demonstrates that the optimisation results are superior to the current vehicle arrangement in terms of vehicle utilisation and cost. Finally, an example is presented to illustrate the feasibility of the proposed algorithm.","PeriodicalId":396746,"journal":{"name":"Int. J. Web Eng. Technol.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134120410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Short text classification using feature enrichment from credible texts","authors":"Issa Alsmadi, Gan Keng Hoon","doi":"10.1504/ijwet.2020.107689","DOIUrl":"https://doi.org/10.1504/ijwet.2020.107689","url":null,"abstract":"Classifying Tweet's contents can become a useful feature for other application tasks. However, such classification can be quite challenging due to the short length and sparsity of tweet contents. Although individual tweets have limited length, their contents delve into different topics. Therefore, due to such diverse contents, achieving good coverage of content features remains a challenge. We adopt the expansion of keywords technique in this research and study the enrichment of tweet contents using text from credible sources, such as news sites. For evaluation, we conduct experiments on two Twitter datasets using four standard classifiers. The proposed approach has enhanced the performance of the classification task, with improvements in accuracy ranging from +0.05% to +3.54% for both datasets. Experimental results positively demonstrate that the proposed feature enrichment method can overcome the sparseness limitation of short text with improved classification performances when running on various classifiers.","PeriodicalId":396746,"journal":{"name":"Int. J. Web Eng. Technol.","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117116669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improvement of TCP Vegas algorithm based on forward direction delay","authors":"Shijie Guan, Yueqiu Jiang, Qixue Guan","doi":"10.1504/ijwet.2020.107690","DOIUrl":"https://doi.org/10.1504/ijwet.2020.107690","url":null,"abstract":"Satellite networks transmit data through the space communications protocol specification transport protocol and uses transmission control protocol (TCP) Vegas as the congestion control algorithm. However, TCP Vegas does not have a suitable solution for the asymmetric bandwidth of satellite networks. Therefore, the reverse link of asymmetric bandwidth frequently causes congestion in satellite networks. This issue is addressed by reducing the congestion window with the occurrence of reverse link congestion, thereby simultaneously reducing the forward link throughput of the satellite network. In this study, a forward congestion control algorithm for the TCP Vegas algorithm based on time delay, which is called Vegas_FDD (forward direction delay), is proposed to mitigate congestion by dividing it into different types (forward and backward) and to improve network bandwidth utilisation. The suitability and effectiveness of the proposed algorithm are verified through simulation on Opnet software.","PeriodicalId":396746,"journal":{"name":"Int. J. Web Eng. Technol.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128470928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anomaly detection in the web logs using user-behaviour networks","authors":"J. You, Xiaojuan Wang, Lei Jin, Yong Zhang","doi":"10.1504/ijwet.2019.102871","DOIUrl":"https://doi.org/10.1504/ijwet.2019.102871","url":null,"abstract":"With the rapid growth of the web attacks, anomaly detection becomes a necessary part in the management of modern large-scale distributed web applications. As the record of the user behaviour, web logs certainly become the research object relate to anomaly detection. Many anomaly detection methods based on automated log analysis have been proposed. However, most researches focus on the content of the single logs, while ignoring the connection between the user and the path. To address this problem, we introduce the graph theory into the anomaly detection and establish a user behaviour network model. Integrating the network structure and the characteristic of anomalous users, we propose five indicators to identify the anomalous users and the anomalous logs. Results show that the method gets a better performance on four real web application log datasets, with a total of about 4 million log messages and 1 million anomalous instances. In addition, this paper integrates and improves a state-of-the-art anomaly detection method, to further analyse the composition of the anomalous logs. We believe that our work will bring a new angle to the research field of the anomaly detection.","PeriodicalId":396746,"journal":{"name":"Int. J. Web Eng. Technol.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125673621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DWSpyder: a new schema extraction method for a deep web integration system","authors":"Yasser Saissi, A. Zellou, Ali Adri","doi":"10.1504/ijwet.2019.102872","DOIUrl":"https://doi.org/10.1504/ijwet.2019.102872","url":null,"abstract":"The deep web is a huge part of the web that is not indexed by search engines. The deep web sources are accessible only through their associated access forms. We wish to use a web integration system to access the deep web sources and all of their information. To implement this web integration system, we need to know the schema description of each web source. The problem resolved in this paper is how to extract the schema describing an inaccessible deep web source. We propose our DWSpyder method as being able to extract the schema describing a deep web source despite its inaccessibility. The DWSpyder method starts with a static analysis of the deep web source access forms in order to extract the first elements of the associated schema description. The second step of our method is a dynamic analysis of these access forms using queries to enrich our schema description. Our DWSpyder method also uses a clustering algorithm to identify the possible values of deep web form fields with undefined sets of values. All of the information extracted is used by DWSpyder to generate automatically deep web source schema descriptions.","PeriodicalId":396746,"journal":{"name":"Int. J. Web Eng. Technol.","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130019190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of replica placement-based clustering on fault tolerance in grid computing","authors":"Rahma Souli-Jbali, Minyar Sassi Hidri, R. Ayed","doi":"10.1504/ijwet.2019.102873","DOIUrl":"https://doi.org/10.1504/ijwet.2019.102873","url":null,"abstract":"Due to several demands on very high computing power and storage capacity, data grids seem to be a good solution to meet these growing demands. However, the design of distributed applications for data grids remains complex, and it is necessary to take into account the dynamic nature of the grids since the nodes may disappear at any time. We focus on problems related to the impact of replica placement-based clustering on fault tolerance in grids. In inter-clusters, the message-logging protocol is used. In intra-cluster, the inter-clusters protocol is coupled with the non-blocking coordinated checkpoint of Chandy-Lamport. This ensures that in case of failure, the impact of the fault would remain confined to the nodes of the same cluster. The experiment results show the efficiency of the proposed protocol in terms of time recovery, numbers of either used processes or exchanged messages.","PeriodicalId":396746,"journal":{"name":"Int. J. Web Eng. Technol.","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116403925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}