Computing: Latest Articles

Mapping and just-in-time traffic congestion mitigation for emergency vehicles in smart cities
IF 3.7 · CAS Zone 3 · Computer Science
Computing Pub Date: 2024-09-15 DOI: 10.1007/s00607-024-01345-3
Syed Ali Haider, Junaid A. Zubairi, Sahar Idwan
{"title":"Mapping and just-in-time traffic congestion mitigation for emergency vehicles in smart cities","authors":"Syed Ali Haider, Junaid A. Zubairi, Sahar Idwan","doi":"10.1007/s00607-024-01345-3","DOIUrl":"https://doi.org/10.1007/s00607-024-01345-3","url":null,"abstract":"<p>Traffic congestion in urban areas poses several challenges to municipal authorities including pollution, productivity loss, reckless driving, and delays in dealing with emergencies. Smart cities can use modern IoT infrastructure to solve the congestion problem and reduce pollution and delays. In this article, we focus on congestion mapping and mitigation for emergency vehicles in smart cities. We use a novel traffic light control technique to change the flow of cars on lights of interest thereby making way for emergency vehicles. We use a simulation model for a selected area of Manhattan to implement congestion mapping and to help find the fastest path for routing emergency vehicles based on the congestion metrics. The system controls traffic lights to block off the roads feeding into congestion and allows flow away from the congested path. This helps in clearing the preferred route to help emergency vehicles reach the destination faster. We show that the proposed algorithm can map congestion on city roads with accuracy thus helping to improve the response time of the emergency services and saving precious lives.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"40 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142258095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
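The article's own algorithm is not reproduced in this listing; the sketch below only illustrates the general idea under stated assumptions: the road network as a graph whose edge weights encode a congestion metric, Dijkstra's algorithm for the fastest emergency route, and a hypothetical `block_feeders` step that models holding feeder roads at red by inflating the weight of edges entering the chosen route.

```python
import heapq

# Hypothetical road network: node -> {neighbor: congestion-weighted travel time}.
graph = {
    "A": {"B": 4, "C": 1},
    "B": {"D": 2},
    "C": {"B": 1, "D": 6},
    "D": {},
}

def fastest_path(graph, src, dst):
    """Dijkstra over congestion-weighted travel times."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node in prev:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

def block_feeders(graph, route, penalty=1e6):
    """Model red-lighting feeder roads: edges entering the emergency
    route from outside it become prohibitively expensive."""
    on_route = set(route)
    for u, nbrs in graph.items():
        if u in on_route:
            continue
        for v in nbrs:
            if v in on_route:
                nbrs[v] += penalty

route = fastest_path(graph, "A", "D")
block_feeders(graph, route)
print(route)  # ['A', 'C', 'B', 'D']
```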
Fog intelligence for energy efficient management in smart street lamps
IF 3.7 · CAS Zone 3 · Computer Science
Computing Pub Date: 2024-09-13 DOI: 10.1007/s00607-024-01348-0
J. Angela Jennifa Sujana, R. Venitta Raj, V. K. Raja Priya
{"title":"Fog intelligence for energy efficient management in smart street lamps","authors":"J. Angela Jennifa Sujana, R. Venitta Raj, V. K. Raja Priya","doi":"10.1007/s00607-024-01348-0","DOIUrl":"https://doi.org/10.1007/s00607-024-01348-0","url":null,"abstract":"<p>Street lamp is a great asset for human society with a narrow beam spread light. The extensive proliferation of solar power in street lamps causes power outages due to their variable power-generated profiles. Thus Smart Street Lamp Fog Intelligence (SSLFI) framework based on hierarchical learning was proposed for efficient energy management in solar street lamps. Smart Street Lamp (SSL) shifts its brightness at higher and lower light levels with a comforting, energy-efficient gleam of light. The fog intelligence framework forecasts the SSL output power through short-term probabilistic energy consumption forecasts using Q-NARX-BiLSTM (Quantile Regression-Nonlinear Auto-Regressive Neural Networks with exogenous input-Bidirectional Long short-term memory) model. NARX-BiLSTM of two module types: (1) NARXNN (Nonlinear Auto-Regressive Neural Networks with exogenous input) model generates SSL power consumption and (2) BiLSTM (Bidirectional Long short-term memory) model generates SSL power forecasts. The quantile regression with the NARX-BiLSTM (Nonlinear Auto-Regressive Neural Networks with exogenous input-Bidirectional Long short-term memory) model forecasts the seasonal patterns achieving non-parametric interval predictions. The probabilistic predictions of power consumption are determined based on the conditional quantile using an improved kernel density estimation approach. The fuzzy inference system adopts forecasting results to diagnose fault conditions in street lamps. The experiment results show that the proposed framework SSLFI outperformed the state-of-the-art models forecasting under different weather conditions.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"7 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
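The Q-NARX-BiLSTM implementation is not given in this listing; the sketch below illustrates only the quantile-regression ingredient: the pinball loss that, when minimized at several quantile levels, yields the non-parametric prediction intervals the abstract describes. The data and names are illustrative.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: the asymmetric penalty that makes a
    fitted predictor estimate the q-th conditional quantile."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Illustrative check: the best constant under the 0.9-pinball loss is
# (approximately) the 90th percentile of the sample.
rng = np.random.default_rng(0)
y = rng.normal(loc=50.0, scale=5.0, size=10_000)  # e.g. lamp power draw (W)
candidates = np.linspace(30, 70, 401)
losses = [pinball_loss(y, c, q=0.9) for c in candidates]
best = candidates[int(np.argmin(losses))]
print(best, np.percentile(y, 90))  # the two values should be close
```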
Contextual authentication of users and devices using machine learning
IF 3.7 · CAS Zone 3 · Computer Science
Computing Pub Date: 2024-09-13 DOI: 10.1007/s00607-024-01333-7
Divyans Mahansaria, Uttam Kumar Roy
{"title":"Contextual authentication of users and devices using machine learning","authors":"Divyans Mahansaria, Uttam Kumar Roy","doi":"10.1007/s00607-024-01333-7","DOIUrl":"https://doi.org/10.1007/s00607-024-01333-7","url":null,"abstract":"<p>At the time of authentication, confidential data are exchanged between the user/device and the authentication server to determine the legitimacy of the source requesting authentication. Safeguarding the authentication process from security attacks is of utmost importance, and various authentication methods exist depending on the system’s requirements. However, no authentication process can guarantee full-proof security. This research aimed to use the context of users and devices during authentication to detect anomalies and security-related attacks. In particular, denial-of-service (DoS)/distributed denial-of-service (DDoS) attacks and brute-force attacks have been analyzed in detail using contextual information. Extensive simulations were conducted on the benchmark CIC-IDS2017 dataset using the Weka tool. The performance metrics of recall, precision, accuracy, f-score, and model-built time were computed for the four machine-learning classifiers—J48, Random Forest, Multi-Layer Perceptron, and Bayes Net—for different combinations of data splits and groups of data features. For both DoS/DDoS and brute-force attacks, some of the experimental results show a more than 99% value for recall, precision, accuracy, and f-score. The results of the experiments, security analysis, and threat modeling show that the proposed authentication scheme effectively enhances a secure system’s security level.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"183 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
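The paper's Weka pipeline is not reproduced here; as a rough Python analogue (an assumption, not the authors' code), the sketch below trains one of the four classifier types, a random forest, on synthetic data standing in for labeled CIC-IDS2017 flow records and reports the same four metrics.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for flow records (benign vs. DoS/brute-force traffic).
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, weights=[0.8, 0.2],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("recall   ", recall_score(y_te, pred))
print("precision", precision_score(y_te, pred))
print("accuracy ", accuracy_score(y_te, pred))
print("f1       ", f1_score(y_te, pred))
```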
Multi-objective service composition optimization problem in IoT for agriculture 4.0
IF 3.7 · CAS Zone 3 · Computer Science
Computing Pub Date: 2024-09-11 DOI: 10.1007/s00607-024-01346-2
Shalini Sharma, Bhupendra Kumar Pathak, Rajiv Kumar
{"title":"Multi-objective service composition optimization problem in IoT for agriculture 4.0","authors":"Shalini Sharma, Bhupendra Kumar Pathak, Rajiv Kumar","doi":"10.1007/s00607-024-01346-2","DOIUrl":"https://doi.org/10.1007/s00607-024-01346-2","url":null,"abstract":"<p>One of the most well-known names that has recently attained new heights and set a standard is Internet of Things (IoT). IoT aims to connect all physical devices in such a way that they are subject to human control over the Internet.The emergence of IoT in almost all the industries has redesigned them including smart agriculture. In today’s world, the growth in agriculture sector is rapid, smarter and precise than ever. In case of IoT, the objects are termed as services, sometimes with similar functionalities but distinct quality of service parameters. As the user’s requirements are complex, a single service cannot fulfil them efficiently. So, service composition is the solution. These services known as atomic services, are represented as workflow, with each of them having distinct candidate composite services. Fulfilling these Quality of Service (QoS) constraints makes it a NP-hard problem which can’t be solved using traditional approaches. Hence, comes the concept of evolutionary approaches. In this paper one of the evolutionary approach- NSGA-II is used to optimize the production of apple by composing the various services, taking into account the cost and time as multi-objective problem to be solved. This is for the very first time that QoS aware service composition problem has been optimized in smart agriculture as found in the literature. Results are further compared with multi-objective genetic algorithm (MOGA) and it has been found that NSGA-II outperforms MOGA by generating well-proportioned pareto optimal solutions.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"27 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
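NSGA-II's full machinery (crowding distance, tournament selection) is beyond a short sketch; the snippet below, on illustrative data, shows only the core building block: Pareto dominance and extraction of the first non-dominated front for (cost, time) minimization.

```python
# Candidate composite services as (cost, time) pairs to minimize (illustrative).
candidates = [(10.0, 5.0), (8.0, 7.0), (12.0, 4.0), (10.5, 5.2), (11.0, 5.5)]

def dominates(a, b):
    """a Pareto-dominates b if it is no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_front(points):
    """Non-dominated solutions: NSGA-II's rank-1 front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

print(first_front(candidates))
# -> [(10.0, 5.0), (8.0, 7.0), (12.0, 4.0)]
```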
Optimization of mitigation deployment using deep reinforcement learning over an enhanced ATT&CK
IF 3.7 · CAS Zone 3 · Computer Science
Computing Pub Date: 2024-09-06 DOI: 10.1007/s00607-024-01344-4
Yingze Liu, Yuanbo Guo, Rajiv Ranjan, Dan Chen
{"title":"Optimization of mitigation deployment using deep reinforcement learning over an enhanced ATT &CK","authors":"Yingze Liu, Yuanbo Guo, Rajiv Ranjan, Dan Chen","doi":"10.1007/s00607-024-01344-4","DOIUrl":"https://doi.org/10.1007/s00607-024-01344-4","url":null,"abstract":"<p>This study introduces a Deep Reinforcement Learning approach (DRL-MD) aimed at optimizing the deployment of mitigations to minimize redundancy while ensuring effective defense against cyberattacks. DRL-MD initially enhances ATT &amp;CK (Adversarial Tactics, Techniques, and Common Knowledge) to underscore the formal relationships between attacks and defenses. Over the enhanced ATT &amp;CK, DRL-MD then operates in two phases: (1) <i>Estimating Node Importance</i>: DRL-MD proposes a model to estimate the importance of deployed nodes in the network, prioritizing mitigation deployment locations for better evaluation of mitigation effectiveness; and (2) <i>Optimizing Mitigation Deployment</i>: A Soft Actor-Critic algorithm finds the optimal mitigation deployment policy through multi-objective optimization of the importance of deployed nodes, the effectiveness of mitigations in preventing cyberattacks, vulnerability repair, and deployment cost. A case study with DRL-MD against the state-of-the-art counterparts has been performed considering the <i>WannaCry</i> threat, and results indicate that: (1) DRL-MD performs the best with 6.4–11% decrease in deployment cost; and (2) DRL-MD can significantly reduce redundancy in mitigation deployments, which partially benefits from the enhanced ATT &amp;CK model. Overall, a comprehensive solution of mitigation deployment has been fostered to significantly lower the redundancy with more effective defenses against cyberattacks sustained.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"437 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
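The abstract names four objectives the Soft Actor-Critic agent trades off. Without access to the paper's formulation, a common way to wire such objectives into a single RL reward is weighted scalarization; the sketch below shows that pattern, and all field names and weights are assumptions, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class DeploymentOutcome:
    """Per-step result of placing a mitigation (illustrative fields)."""
    node_importance: float   # importance of the node mitigated, in [0, 1]
    effectiveness: float     # attack techniques blocked, in [0, 1]
    vulns_repaired: float    # vulnerabilities repaired, normalized to [0, 1]
    cost: float              # deployment cost, normalized to [0, 1]

# Assumed trade-off weights; the paper optimizes these objectives jointly
# rather than fixing a hand-tuned scalarization.
W = {"importance": 0.3, "effectiveness": 0.4, "repair": 0.2, "cost": 0.5}

def reward(o: DeploymentOutcome) -> float:
    """Scalarized reward: deployment benefits minus a cost penalty."""
    return (W["importance"] * o.node_importance
            + W["effectiveness"] * o.effectiveness
            + W["repair"] * o.vulns_repaired
            - W["cost"] * o.cost)

print(reward(DeploymentOutcome(0.8, 0.9, 0.5, 0.3)))  # ~0.55
```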
Robust evaluation of GPU compute instances for HPC and AI in the cloud: a TOPSIS approach with sensitivity, bootstrapping, and non-parametric analysis
IF 3.7 · CAS Zone 3 · Computer Science
Computing Pub Date: 2024-09-06 DOI: 10.1007/s00607-024-01342-6
Mandeep Kumar, Gagandeep Kaur, Prashant Singh Rana
{"title":"Robust evaluation of GPU compute instances for HPC and AI in the cloud: a TOPSIS approach with sensitivity, bootstrapping, and non-parametric analysis","authors":"Mandeep Kumar, Gagandeep Kaur, Prashant Singh Rana","doi":"10.1007/s00607-024-01342-6","DOIUrl":"https://doi.org/10.1007/s00607-024-01342-6","url":null,"abstract":"<p>Evaluating GPU compute instances for High Performance Computing (HPC) and Artificial Intelligence (AI) applications in the cloud involves complex decision-making processes. This research applies the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) to rank various GPU compute instances for HPC and AI from leading cloud providers: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and Oracle Cloud Infrastructure (OCI). The analysis incorporates a sensitivity examination, bootstrapping, and non-parametric tests to ensure robust and reliable rankings. Sensitivity analysis reveals the stability of the TOPSIS framework despite variations in criteria weights, while bootstrap analysis provides confidence intervals for the rankings, highlighting their consistency. The Friedman test confirms that ranking stability persists across different scenarios, indicating minimal impact from weight adjustments. Despite these insights, limitations such as interdependencies among criteria, data accuracy, and generalizability constraints must be acknowledged. This comprehensive approach ensures informed decision-making for selecting optimal GPU instances for cloud-based HPC and AI tasks.\u0000</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"40 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
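TOPSIS itself is a standard textbook procedure; the sketch below implements its steps (vector normalization, weighting, ideal and anti-ideal points, closeness coefficient) on made-up instance data. The criteria, weights, and numbers are illustrative, not the paper's.

```python
import numpy as np

# Rows: candidate GPU instances; columns: criteria (illustrative values).
# Criteria: [TFLOPS, GPU memory (GB), hourly price ($)].
X = np.array([
    [312.0, 80.0, 32.77],
    [125.0, 16.0,  3.06],
    [989.0, 80.0, 98.32],
])
weights = np.array([0.5, 0.2, 0.3])        # assumed criterion importance
benefit = np.array([True, True, False])    # price is a cost criterion

# 1) Vector-normalize each column, then 2) apply the weights.
V = weights * X / np.linalg.norm(X, axis=0)

# 3) Ideal and anti-ideal points, respecting criterion direction.
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 4) Distances to both points and closeness coefficient in [0, 1].
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

print(np.argsort(-closeness))  # instance indices, best first
```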
Using a random forest to predict quantized reuse distance in an SSD write buffer
IF 3.7 · CAS Zone 3 · Computer Science
Computing Pub Date: 2024-09-05 DOI: 10.1007/s00607-024-01343-5
Hyejin Cha, In Kee Kim, Taeseok Kim
{"title":"Using a random forest to predict quantized reuse distance in an SSD write buffer","authors":"Hyejin Cha, In Kee Kim, Taeseok Kim","doi":"10.1007/s00607-024-01343-5","DOIUrl":"https://doi.org/10.1007/s00607-024-01343-5","url":null,"abstract":"<p>Efficient management of the write buffer in solid-state drives (SSDs) can be achieved by predicting future I/O request patterns using machine learning techniques. However, the computational demands posed by sophisticated approaches like deep learning remain significant, despite the increasing computational power of SSDs. This paper presents a novel approach to write buffer management that addresses these challenges. Our method employs a lightweight yet accurate random forest classifier to predict the forward reuse distances (FRDs) of I/O requests, indicating the likelihood of recurring identical I/O requests. Our key insight is that, rather than aiming for exact FRD predictions for future individual requests, we focus on identifying whether the predicted FRD exceeds the buffer size. With this insight, our method implements efficient buffer management operations, including bypassing the buffer storage when necessary. To achieve this, we introduce a banding method that quantizes FRDs according to the buffer size. This enables predictions at the band level, forming the foundation for a lightweight machine learning model. Subsequently, we assign high caching priority to write requests that are anticipated to have a short FRD band. Through extensive evaluations utilizing a simulator, we demonstrate that our method achieves results comparable to those of the optimal algorithm in terms of hit rate in most scenarios. Moreover, our approach outperforms state-of-the-art algorithms, which depend on past I/O reference patterns, by up to 27%.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"29 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
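The paper's feature set is not spelled out in this listing; the sketch below (trace, features, and band count are all assumptions) illustrates the banding idea: compute each write's forward reuse distance from a trace, quantize it by buffer size, and train a random forest to predict the band.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

BUF = 4  # write buffer capacity in blocks (illustrative)

def frd_bands(trace, buf=BUF, max_band=3):
    """Per-access forward reuse distance (accesses until the same block is
    written again; a simple stand-in for FRD), quantized into `buf`-wide
    bands, with never-reused blocks mapped to the last band."""
    nxt, frd = {}, [float("inf")] * len(trace)
    for i in range(len(trace) - 1, -1, -1):  # scan the trace backwards
        if trace[i] in nxt:
            frd[i] = nxt[trace[i]] - i
        nxt[trace[i]] = i
    return [min(int(d // buf), max_band) if d != float("inf") else max_band
            for d in frd]

# Illustrative write trace of logical block addresses.
trace = [1, 2, 1, 3, 4, 2, 1, 5, 3, 1, 2, 4, 1, 6, 2, 1]
bands = frd_bands(trace)

# Assumed features per request: (lba, previous band observed for this lba).
prev_band, X, y = {}, [], []
for lba, band in zip(trace, bands):
    X.append([lba, prev_band.get(lba, max(bands))])
    y.append(band)
    prev_band[lba] = band

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[1, 0]]))  # predicted FRD band for a hot block
```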
A demand forecasting system of product categories defined by their time series using a hybrid approach of ensemble learning with feature engineering
IF 3.7 · CAS Zone 3 · Computer Science
Computing Pub Date: 2024-09-02 DOI: 10.1007/s00607-024-01320-y
Santiago Mejía, Jose Aguilar
{"title":"A demand forecasting system of product categories defined by their time series using a hybrid approach of ensemble learning with feature engineering","authors":"Santiago Mejía, Jose Aguilar","doi":"10.1007/s00607-024-01320-y","DOIUrl":"https://doi.org/10.1007/s00607-024-01320-y","url":null,"abstract":"<p>Retail companies face major problems in the estimation of their product’s future demand due to the high diversity of sales behavior that each good presents. Different forecasting models are implemented to meet the demand requirements for efficient inventory management. However, in most of the proposed works, a single model approach is applied to forecast all products, ignoring that some methods are better adapted for certain features of the demand time series of each product. The proposed forecasting system addresses this problem, by implementing a two-phase methodology that initially clusters the products with the application of an unsupervised learning approach using the extracted demand features of each good, and then, implements a second phase where, after a feature engineering process, a set of different forecasting methods are evaluated to identify those with best performs for each cluster. Finally, ensemble machine learning models are implemented using the top-performing models of each cluster to carry out the demand estimation. The results indicate that the proposed forecasting system improves the demand estimation over the single forecasting approaches when evaluating the R<sup>2</sup>, MSE, and MASE quality measures.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"99 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
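As a rough sketch of the two-phase idea (the features, candidate models, and data are all illustrative assumptions, far simpler than the paper's): extract per-series demand features, cluster with k-means, then score two baseline forecasters per cluster and keep the winner.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 40 illustrative weekly demand series: half smooth/trending, half intermittent.
smooth = [50 + 0.5 * np.arange(52) + rng.normal(0, 2, 52) for _ in range(20)]
spiky = [rng.poisson(0.4, 52) * rng.integers(5, 40, 52) for _ in range(20)]
series = smooth + spiky

def features(s):
    """Per-series demand features: level, volatility, intermittency."""
    s = np.asarray(s, dtype=float)
    return [s.mean(), s.std() / (s.mean() + 1e-9), (s == 0).mean()]

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    [features(s) for s in series])

# Two baseline forecasters, scored one step ahead on each series' last point.
forecasters = {
    "naive": lambda h: h[-1],
    "mean4": lambda h: np.mean(h[-4:]),
}

for k in range(2):
    cluster = [np.asarray(s) for s, l in zip(series, labels) if l == k]
    scores = {name: np.mean([(f(s[:-1]) - s[-1]) ** 2 for s in cluster])
              for name, f in forecasters.items()}
    print(f"cluster {k}: best = {min(scores, key=scores.get)}", scores)
```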
Hybrid deep learning and evolutionary algorithms for accurate cloud workload prediction
IF 3.7 · CAS Zone 3 · Computer Science
Computing Pub Date: 2024-08-25 DOI: 10.1007/s00607-024-01340-8
Tassawar Ali, Hikmat Ullah Khan, Fawaz Khaled Alarfaj, Mohammed AlReshoodi
{"title":"Hybrid deep learning and evolutionary algorithms for accurate cloud workload prediction","authors":"Tassawar Ali, Hikmat Ullah Khan, Fawaz Khaled Alarfaj, Mohammed AlReshoodi","doi":"10.1007/s00607-024-01340-8","DOIUrl":"https://doi.org/10.1007/s00607-024-01340-8","url":null,"abstract":"<p>Cloud computing offers demand-based allocation of required resources to its clients ensuring optimal use of resources in a cost-effective manner. However, due to the massive increase in demand for physical resources by datacenters cloud management suffers from inefficient resource management. To enhance efficiency by reducing resource setup time, workload prediction has become an active research area. It helps to make management decisions proactively and enables the cloud management system to better respond to spikes in the workload. This study proposes a hybrid model combining both state-of-the-art deep learning models and evolutionary algorithms for workload prediction. The proposed cluster-based differential evolution neural network model utilizes differential evolution for the optimization of feature weights of the deep neural network to predict the future workloads of a cloud datacenter. The proposed model uses a novel mutation strategy that clusters the population based on an agglomerative technique and chooses the best gene from randomly chosen clusters. Thus, the strategy creates a balance between the exploration and exploitation of the population and enables the model to avoid local optima and converge rapidly. The datasets used for the experiments are created from Google’s real-world traces and the Alibaba platform. The model is compared with backpropagation, Adam optimizer-based LSTM, and an evolutionary neural network-based three-mutation policy. We evaluated the performance of the proposed model in terms of root mean squared error in predicting the upcoming CPU, RAM, and BW usage. The proposed model achieved an error rate as low as 0.0002 to outperform the existing studies in the relevant literature. To further authenticate the results, we performed the statistical analysis of the obtained results in terms of R-squared, mean bias deviation, 90th percentile score, and Theil’s U statistics. The high accuracy and automaticity of the proposed model have paved the way for its application in diverse areas of cloud computing, including real-time applications.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"10 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
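The paper's clustered mutation strategy is not reproduced here; as a simplified stand-in, the sketch below uses SciPy's stock differential evolution to learn per-feature weights that minimize validation RMSE of a distance-based regressor (the predictor and the synthetic data are assumptions; the paper optimizes the feature weights of a deep neural network).

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
# Synthetic workload data: 3 informative features, 3 pure-noise features.
X = rng.normal(size=(400, 6))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 400)
X_tr, X_va, y_tr, y_va = X[:300], X[300:], y[:300], y[300:]

def rmse_for(weights):
    """Validation RMSE on feature-weighted inputs; with a KNN regressor,
    the weights rescale distances, so down-weighting noise features helps."""
    model = KNeighborsRegressor(n_neighbors=5).fit(X_tr * weights, y_tr)
    err = model.predict(X_va * weights) - y_va
    return float(np.sqrt(np.mean(err ** 2)))

result = differential_evolution(rmse_for, bounds=[(0.0, 2.0)] * 6,
                                seed=0, maxiter=30, tol=1e-6)
print("best RMSE:", round(result.fun, 4))
print("learned feature weights:", np.round(result.x, 2))
```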
Large language models: a new approach for privacy policy analysis at scale
IF 3.7 · CAS Zone 3 · Computer Science
Computing Pub Date: 2024-08-22 DOI: 10.1007/s00607-024-01331-9
David Rodriguez, Ian Yang, Jose M. Del Alamo, Norman Sadeh
{"title":"Large language models: a new approach for privacy policy analysis at scale","authors":"David Rodriguez, Ian Yang, Jose M. Del Alamo, Norman Sadeh","doi":"10.1007/s00607-024-01331-9","DOIUrl":"https://doi.org/10.1007/s00607-024-01331-9","url":null,"abstract":"<p>The number and dynamic nature of web sites and mobile applications present regulators and app store operators with significant challenges when it comes to enforcing compliance with applicable privacy and data protection laws. Over the past several years, people have turned to Natural Language Processing (NLP) techniques to automate privacy compliance analysis (e.g., comparing statements in privacy policies with analysis of the code and behavior of mobile apps) and to answer people’s privacy questions. Traditionally, these NLP techniques have relied on labor-intensive and potentially error-prone manual annotation processes to build the corpora necessary to train them. This article explores and evaluates the use of Large Language Models (LLMs) as an alternative for effectively and efficiently identifying and categorizing a variety of data practice disclosures found in the text of privacy policies. Specifically, we report on the performance of ChatGPT and Llama 2, two particularly popular LLM-based tools. This includes engineering prompts and evaluating different configurations of these LLM techniques. Evaluation of the resulting techniques on well-known corpora of privacy policy annotations yields an F1 score exceeding 93%. This score is higher than scores reported earlier in the literature on these benchmarks. This performance is obtained at minimal marginal cost (excluding the cost required to train the foundational models themselves). These results, which are consistent with those reported in other domains, suggest that LLMs offer a particularly promising approach to automated privacy policy analysis at scale.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"8 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142222726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
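No prompts are given in this listing; the sketch below shows the general prompt-classification pattern the article evaluates, with `call_llm` left as a hypothetical stand-in for whichever chat-completion client (ChatGPT, Llama 2, etc.) is wired in. The label set is borrowed from common privacy-policy annotation schemes such as OPP-115 and is an assumption here, not necessarily the paper's.

```python
# Hypothetical data-practice categories (inspired by the OPP-115 annotation
# scheme; not necessarily the label set used in the paper).
LABELS = [
    "First Party Collection/Use",
    "Third Party Sharing/Collection",
    "Data Retention",
    "Data Security",
    "User Choice/Control",
]

PROMPT = """You are annotating privacy policies.
Classify the following privacy policy segment into exactly one of these
categories: {labels}.
Answer with the category name only.

Segment: \"\"\"{segment}\"\"\"
"""

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call (e.g. an OpenAI client
    or a locally hosted Llama 2). Swap in the client of your choice."""
    raise NotImplementedError("wire up an LLM client here")

def classify(segment: str) -> str:
    raw = call_llm(PROMPT.format(labels=", ".join(LABELS),
                                 segment=segment)).strip()
    # Defensive parsing: accept an exact label, else the closest known one.
    return raw if raw in LABELS else next(
        (l for l in LABELS if l.lower() in raw.lower()), "Other")

# classify("We share your location data with advertising partners.")
# -> expected: "Third Party Sharing/Collection"
```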