{"title":"Semantic web and machine learning techniques addressing semantic interoperability in Industry 4.0","authors":"Mohamed Hafidi, M. Djezzar, M. Hemam, Fatima Zahra Amara, M. Maimour","doi":"10.1108/ijwis-03-2023-0046","DOIUrl":"https://doi.org/10.1108/ijwis-03-2023-0046","url":null,"abstract":"Purpose\nThis paper aims to offer a comprehensive examination of the various solutions currently available for addressing the challenge of semantic interoperability in cyber physical systems (CPS). CPS are a new generation of systems composed of physical assets with computation capabilities, connected with software systems in a network and exchanging data collected from the physical asset, models (physics-based, data-driven, . . .) and services (reconfiguration, monitoring, . . .). The physical asset and its software system are connected, and they exchange data to be interpreted in a certain context. The heterogeneous nature of the collected data, together with the different types of models, raises interoperability problems. Modeling the digital space of the CPS and integrating information models that support cyber physical interoperability are both required.\n\nDesign/methodology/approach\nThis paper aims to identify the most relevant points in the development of semantic models and machine learning solutions to the interoperability problem, and how these solutions are implemented in CPS. The research analyzes recent papers related to the topic of semantic interoperability in Industry 4.0 (I4.0) systems.\n\nFindings\nSemantic models are key enabler technologies that provide a common understanding of data, and they can be used to solve interoperability problems in industry by using a common vocabulary when defining these models.\n\nOriginality/value\nThis paper provides an overview of the different available solutions to the semantic interoperability problem in CPS.","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42798563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Infer the missing facts of D3FEND using knowledge graph representation learning","authors":"A. Khobragade, S. Ghumbre, V. Pachghare","doi":"10.1108/ijwis-03-2023-0042","DOIUrl":"https://doi.org/10.1108/ijwis-03-2023-0042","url":null,"abstract":"Purpose\nMITRE and the National Security Agency cooperatively developed and maintain the D3FEND knowledge graph (KG). It provides concepts from the cybersecurity countermeasure domain as entities, such as dynamic, emulated and file analysis. Those entities are linked by relationships such as analyze, may_contains and encrypt. A fundamental challenge for collaborative designers is to encode knowledge and efficiently interrelate the cyber-domain facts generated daily. Currently, the designers manually update the graph contents with new or missing facts to enrich the knowledge. This paper aims to propose an automated approach to predict the missing facts using the link prediction task, leveraging embeddings as representation learning.\n\nDesign/methodology/approach\nD3FEND is available in the resource description framework (RDF) format. In the preprocessing step, the facts in RDF format are converted to subject–predicate–object triples, yielding 5,967 entities and 98 relationship types. Distance-based, bilinear and convolutional embedding models are progressively applied to learn the embeddings of entities and relations. This study presents a link prediction task to infer missing facts using the learned embeddings.\n\nFindings\nExperimental results show that the translational model performs well on high-rank results, whereas the bilinear model is superior in capturing the latent semantics of complex relationship types. The convolutional model, however, correctly predicts 44% of the true facts and achieves a 3% improvement in results compared to the other models.\n\nResearch limitations/implications\nDespite the success of embedding models in enriching D3FEND using link prediction under the supervised learning setup, the approach has some limitations, such as not capturing the diversity and hierarchies of relations. The average node degree of the D3FEND KG is 16.85, with 12% of entities having a node degree of less than 2; in particular, many entities or relations have few or no observed links. This results in sparsity and data imbalance, which affect model performance even after increasing the embedding vector size. Moreover, KG embedding models consider only existing entities and relations and may not incorporate external or contextual information such as textual descriptions, temporal dynamics or domain knowledge, which can enhance link prediction performance.\n\nPractical implications\nLink prediction in the D3FEND KG can benefit cybersecurity countermeasure strategies in several ways: it can help to identify gaps or weaknesses in the existing defensive methods and suggest possible ways to improve or augment them; it can help to compare and contrast different defensive methods and understand their trade-offs and synergies; it can help to discover novel or emerging defensive methods by inferring new relations from existing data or external sources; and it can help to generate recommendations or guidance for selecting or deploying appropriate defensive methods based on the","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46125880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
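The distance-based (translational) link prediction the record above describes can be sketched in a few lines. This is a toy TransE-style scorer, not the paper's trained model: the entity and relation names and the 3-d embedding values below are illustrative stand-ins, not taken from D3FEND.

```python
import math

# Toy TransE-style link prediction: score(h, r, t) = -||h + r - t||.
# Embeddings are illustrative hand-picked 3-d vectors, not learned from D3FEND.
embeddings = {
    "DynamicAnalysis": [0.9, 0.1, 0.0],
    "FileAnalysis":    [0.1, 0.8, 0.1],
    "ProcessSpawn":    [1.0, 1.0, 0.2],
    "analyzes":        [0.1, 0.9, 0.2],  # relation vector (hypothetical)
}

def score(head, relation, tail):
    """Higher (less negative) score means a more plausible triple."""
    h, r, t = embeddings[head], embeddings[relation], embeddings[tail]
    dist = math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))
    return -dist

def predict_tail(head, relation, candidates):
    """Rank candidate tails for an incomplete triple (head, relation, ?)."""
    return sorted(candidates, key=lambda t: score(head, relation, t), reverse=True)

ranking = predict_tail("DynamicAnalysis", "analyzes", ["FileAnalysis", "ProcessSpawn"])
print(ranking)
```

In a real pipeline the embeddings are learned by gradient descent over the observed triples; missing facts are then proposed by ranking all candidate tails this way.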
{"title":"Intrinsic feature extraction for unsupervised domain adaptation","authors":"Xinzhi Cao, Yinsai Guo, Wenbin Yang, Xiangfeng Luo, Shaorong Xie","doi":"10.1108/ijwis-04-2023-0062","DOIUrl":"https://doi.org/10.1108/ijwis-04-2023-0062","url":null,"abstract":"Purpose\nUnsupervised domain adaptation object detection not only mitigates the poor model performance resulting from the domain gap but also makes it possible to apply knowledge trained on one domain to a distinct domain. However, aligning whole-image features may confuse object and background information, making it challenging to extract discriminative features. This paper aims to propose an improved approach, called intrinsic feature extraction domain adaptation (IFEDA), to extract discriminative features effectively.\n\nDesign/methodology/approach\nIFEDA consists of an intrinsic feature extraction (IFE) module and an object consistency constraint (OCC). The IFE module, designed at the instance level, mainly addresses the difficulty of extracting discriminative object features; specifically, more attention can be paid to the discriminative regions of the objects. Meanwhile, the OCC is deployed to determine whether a category prediction in the target domain corresponds with the one in the source domain.\n\nFindings\nExperimental results demonstrate the validity of our approach, which achieves good outcomes on challenging data sets.\n\nResearch limitations/implications\nA limitation of this research is that only one target domain is used; model generalization may suffer when data sets are insufficient or unseen domains appear.\n\nOriginality/value\nThis paper addresses critical information defects by tackling the difficulty of extracting discriminative features, and the categories in both domains are compelled to be consistent for better object detection.","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41913805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Object detection and activity recognition in video surveillance using neural networks","authors":"Vishva Payghode, Ayush Goyal, Anupama Bhan, S. Iyer, Ashwani Kumar Dubey","doi":"10.1108/ijwis-01-2023-0006","DOIUrl":"https://doi.org/10.1108/ijwis-01-2023-0006","url":null,"abstract":"Purpose\nThis paper aims to implement and extend the You Only Look Once (YOLO) algorithm for the detection of objects and activities. The advantage of YOLO is that it only runs a neural network once to detect the objects in an image, which is why it is powerful and fast. Cameras are found at many different crossroads and locations, but processing the feed through an object detection algorithm allows determining and tracking what is captured. Video surveillance has many applications, such as car tracking and tracking of people for crime prevention. This paper provides an exhaustive comparison between the existing methods and the proposed method, which is found to have the highest object detection accuracy.\n\nDesign/methodology/approach\nThe goal of this research is to develop a deep learning framework to automate the task of analyzing video footage through object detection in images. This framework processes video feed or image frames from CCTV, a webcam or a DroidCam, which allows the camera in a mobile phone to be used as a webcam for a laptop. The object detection algorithm, with its model trained on a large data set of images, is able to load each image given as input, process it and determine the categories of the matching objects that it finds. As a proof of concept, this research demonstrates the algorithm on images of several different objects. For video surveillance of traffic cameras, this has many applications, such as car tracking and person tracking for crime prevention. In this research, the implemented algorithm with the proposed methodology is compared against several prior existing methods in the literature. The proposed method was found to have the highest accuracy for object detection and activity recognition, better than other existing methods.\n\nFindings\nThe results indicate that the proposed deep learning–based model can be implemented in real time for object detection and activity recognition. The added features of car crash detection, fall detection and social distancing detection can be used to implement a real-time video surveillance system that can help save lives and protect people. Such a real-time video surveillance system could be installed at street and traffic cameras and in CCTV systems. When this system detects a car crash or a fatal human or pedestrian fall with injury, it can be programmed to send automatic messages to the nearest local police, emergency and fire stations. When this system detects a social distancing violation, it can be programmed ","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2023-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41971395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
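A core post-processing step in YOLO-style detectors like the one above is non-maximum suppression (NMS), which keeps one box per object using intersection-over-union (IoU). This sketch is generic, not the paper's implementation; the (x1, y1, x2, y2) box format and the 0.5 threshold are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, threshold=0.5):
    """Greedy NMS: keep boxes in descending score order, dropping any box
    that overlaps an already-kept box by more than `threshold`."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= threshold for j in kept):
            kept.append(i)
    return kept

# Two near-duplicate detections of one object, plus one distinct object.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)
print(kept)
```

The second box overlaps the first with IoU ≈ 0.68, so it is suppressed and only indices 0 and 2 survive.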
{"title":"A set of parameters for automatically annotating a Sentiment Arabic Corpus","authors":"Guellil Imane, Darwish Kareem, Azouaou Faical","doi":"10.1108/IJWIS-03-2019-0008","DOIUrl":"https://doi.org/10.1108/IJWIS-03-2019-0008","url":null,"abstract":"Purpose\nThis paper aims to propose an approach to automatically annotate a large corpus in Arabic dialect, used to analyse the sentiments of Arabic users on social media. It focuses on the Algerian dialect, a sub-dialect of Maghrebi Arabic. Although Algerian is spoken by roughly 40 million speakers, few studies address its automated processing in general and its sentiment analysis in particular.\n\nDesign/methodology/approach\nThe approach is based on the construction and use of a sentiment lexicon to automatically annotate a large corpus of Algerian text extracted from Facebook. This approach makes it possible to significantly increase the size of the training corpus without resorting to manual annotation. The annotated corpus is then vectorized using document embeddings (doc2vec), an extension of word embeddings (word2vec). For sentiment classification, the authors used different classifiers such as support vector machines (SVM), Naive Bayes (NB) and logistic regression (LR).\n\nFindings\nThe results suggest that the NB and SVM classifiers generally led to the best results and the multilayer perceptron (MLP) generally had the worst results. Further, the threshold that the authors used in selecting messages for the training set had a noticeable impact on recall and precision, with a threshold of 0.6 producing the best results. Using PV-DBOW led to slightly higher results than using PV-DM, and combining the PV-DBOW and PV-DM representations led to slightly lower results than using PV-DBOW alone. The best results were obtained by the NB classifier with an F1 of up to 86.9 per cent.\n\nOriginality/value\nThe principal originality of this paper is to determine the right parameters for automatically annotating an Algerian dialect corpus. This annotation is based on a sentiment lexicon that was also constructed automatically.","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2019-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86674990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
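The lexicon-plus-threshold annotation step that the record above tunes can be sketched as follows. The tiny lexicon and the averaging rule here are illustrative stand-ins; only the idea of skipping low-confidence messages and the 0.6 threshold come from the abstract.

```python
# Toy lexicon-based auto-annotation: average word polarity in [-1, 1];
# a message is labeled for the training set only if |score| >= threshold.
LEXICON = {"good": 1.0, "great": 1.0, "bad": -1.0, "awful": -1.0}  # illustrative

def annotate(message, threshold=0.6):
    """Return 'pos'/'neg' when the lexicon score is confident enough, else None."""
    hits = [LEXICON[w] for w in message.lower().split() if w in LEXICON]
    if not hits:
        return None  # no lexicon coverage: cannot auto-label
    score = sum(hits) / len(hits)
    if score >= threshold:
        return "pos"
    if score <= -threshold:
        return "neg"
    return None  # mixed sentiment: too ambiguous to auto-label

print(annotate("great good service"))   # clearly positive
print(annotate("good but awful wait"))  # mixed, skipped
```

Raising the threshold trades training-set size for label precision, which is why the paper reports its effect on recall and precision.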
{"title":"RunayaySoft","authors":"Juan Camilo González-Vargas, Angela Carrillo Ramos, R. Fabregat, Lizzeth Camargo, Maria Caridad García Cepero, Jaime A. Pavlich-Mariscal","doi":"10.1108/IJWIS-04-2018-0021","DOIUrl":"https://doi.org/10.1108/IJWIS-04-2018-0021","url":null,"abstract":"Purpose The purpose of this paper is to describe a support system for the selection of enrichment activities in educational environments called RunayaySoft, where Runayay comes from the Quechua word meaning 'develop' and Soft reflects that it is an informatics tool. It supports educational institutions and their students in the selection of activities that help foster some of the students' skills based on their interests, learning styles, aptitudes, multiple intelligences, preferences and so on. Moreover, it suggests to institutions the activities that they should hold in their buildings, considering students' characteristics and the agreements that they have. Design/methodology/approach The system performs a diagnostic to identify which characteristics of students and institutions are to be considered. It then generates adaptive profiles with the aim of suggesting enrichment activities that can boost some of the students' skills. For students, their preferences, learning styles, aptitudes, multiple intelligences and interests were considered; for institutions, their agreements, resources and the activities that they develop. Based on this information, the system defines the relations used to generate activity suggestions for students and prioritizes which activities should be considered. Findings To validate the system, a functional prototype was built that generates suggestions for students as well as educative institutions; through a satisfaction test, students assess whether they agree or disagree with the suggestions given. With that assessment, the relationships among student characteristics, activities and institutions used to generate activity suggestions are validated. Research limitations/implications RunayaySoft generates adaptive profiles for students, activities and institutions. Each profile has information that allows the system to adapt its advice to students and institutions. Social implications RunayaySoft considers students' characteristics, activities and educational institutions when generating suggestions for enrichment activities that can boost some of the students' skills. Often, when activities are organized in educative institutions, learners' needs and characteristics are not considered. For that reason, the system helps institutions to identify activities that should be held in their facilities, or with the institutions with which they have agreements when the institutions that students come from do not have the required resources. Originality/value RunayaySoft suggests enrichment activities to students as well as educative institutions. For students, it suggests disciplinary areas where they can boost their skills; for each disciplinary area, activities are recommended based on their preferences. Once students select the disciplinary area and activities, the system suggests to educative institutions activities that they can do. If the institutions do not have the neces","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1108/IJWIS-04-2018-0021","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62040530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a flexible framework to support a generalized extension of XACML for spatio-temporal RBAC model with reasoning ability","authors":"T. K. Dang, K. T. L. Thi, Anh Tuan Dang, H. Van","doi":"10.1108/IJWIS-12-2013-0037","DOIUrl":"https://doi.org/10.1108/IJWIS-12-2013-0037","url":null,"abstract":"XACML is an international standard used for access control in distributed systems. However, XACML and its existing extensions are not sufficient to fulfil sophisticated security requirements (e.g. access control based on users' roles, context-aware authorizations and the ability of reasoning). Remarkably, X-STROWL, a generalized extension of XACML, is a comprehensive model that overcomes these shortcomings. Among the many open-source implementations of XACML, HERAS-AF was chosen as the most suitable framework to be extended to implement the X-STROWL model. This paper mainly focuses on the architecture design of the proposed framework and its comparison with other frameworks. In addition, a case study is presented to clarify the workflow of this framework. This is the crucial contribution of our research towards providing a holistic, extensible and intelligent authorization decision engine.","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2013-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85719790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Load distribution by using web workers for a real-time web application","authors":"S. Okamoto, Masaki Kohana","doi":"10.1145/1967486.1967577","DOIUrl":"https://doi.org/10.1145/1967486.1967577","url":null,"abstract":"In this paper, we describe a load distribution technique that employs web workers. We have been implementing a web-based MORPG as an interactive, real-time web application; previously, the web server alone was responsible for manipulating the behavior of all the game characters. As more users logged in, the workload on the server was increased. Hence, we have implemented a technique whereby the CPU load of the server is distributed among the clients; a performance evaluation reveals that our technique plays a role in decreasing the CGI latency of low-end servers and can decrease the CPU load of high-end servers when many users are logged in.","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2010-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89102018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Updating multidimensional XML documents","authors":"Nikolaos Fousteris, M. Gergatsoulis, Y. Stavrakas","doi":"10.1108/17440080810882342","DOIUrl":"https://doi.org/10.1108/17440080810882342","url":null,"abstract":"Purpose – In a wide spectrum of applications, it is desirable to manipulate semistructured information that may present variations according to different circumstances. Multidimensional XML (MXML) is an extension of XML suitable for representing data that assume different facets, having different value and/or structure under different contexts. The purpose of this paper is to develop techniques for updating MXML documents.Design/methodology/approach – Updating XML has been studied in the past, however, updating MXML must take into account the additional features, which stem from incorporating context into MXML. This paper investigates the problem of updating MXML in two levels: at the graph level, i.e. in an implementation independent way; and at the relational storage level.Findings – The paper introduces six basic update operations, which are capable of any possible change. Those operations are specified in an implementation independent way, and their effect explained through examples. Algorithms are gi...","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2008-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85626656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
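The context-dependent updates that the MXML record above describes can be illustrated on a toy structure in which one logical element holds different facets under different contexts. The dict encoding and the single `update_facet` operation here are illustrative, not the paper's six update operations or its relational storage.

```python
# Toy MXML-like element: one logical node with context-qualified facets,
# i.e. different values of the same element under different contexts.
price = {
    "element": "price",
    "facets": [
        {"context": {"currency": "EUR"}, "value": 10.0},
        {"context": {"currency": "USD"}, "value": 11.5},
    ],
}

def update_facet(node, context, value):
    """Update the facet whose context matches exactly, or add a new facet."""
    for facet in node["facets"]:
        if facet["context"] == context:
            facet["value"] = value
            return
    node["facets"].append({"context": context, "value": value})

update_facet(price, {"currency": "EUR"}, 9.5)   # change an existing facet
update_facet(price, {"currency": "GBP"}, 8.75)  # add a facet for a new context
print([f["value"] for f in price["facets"]])
```

The point the paper makes is that every update must name the context it applies to; an update without a context would be ambiguous across facets.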
{"title":"Advances in agent and non-agent software engineering methodologies on the web and software systems","authors":"E. Shakshuki","doi":"10.1108/IJWIS.2007.36203DAA.001","DOIUrl":"https://doi.org/10.1108/IJWIS.2007.36203DAA.001","url":null,"abstract":"","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":null,"pages":null},"PeriodicalIF":1.6,"publicationDate":"2007-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62040480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}