Latest Articles in the Journal of Cloud Computing

TCP Stratos for stratosphere based computing platforms
Journal of Cloud Computing Pub Date : 2024-03-15 DOI: 10.1186/s13677-024-00620-0
A. A. Periola
{"title":"TCP Stratos for stratosphere based computing platforms","authors":"A. A. Periola","doi":"10.1186/s13677-024-00620-0","DOIUrl":"https://doi.org/10.1186/s13677-024-00620-0","url":null,"abstract":"Stratosphere computing platforms (SCPs) benefit from free cooling but face challenges necessitating transmission control protocol (TCP) re-design. The redesign should be considered due to stratospheric gravity waves (SGWs), and sudden stratospheric warming (SSWs). SGWs, and SSWs disturb the wireless channel during SCPs packet communications. SCP packet transmission can be done using existing TCP variants at the expense of high packet loss as existing TCP variants do not consider SGWs, and SSWs. TCP variants designed for satellite links are not suitable as they do not explicitly consider the SSW, and SGW. Moreover, the use of SCPs in future internet is at a nascent stage. The presented research proposes a new TCP variant i.e., TCP Stratos. TCP Stratos incorporates a parameter transfer mechanism and comprises loss-based; and delay-based components. However, its window evolution considers the occurrence of SSWs, and SGWs. The performance benefit of the proposed approach is evaluated via MATLAB numerical simulation. MATLAB simulation has been used because of the consideration of the stratosphere. The modelling of the stratosphere in this case is challenging for conventional tools and frameworks. Performance evaluation shows that using TCP Stratos instead of existing TCP variants and improved TCP variants reduces the packet loss rate by an average of (7.1–23.1) % and (3.8–12.8) %, respectively. The throughput is enhanced by an average of (20.5–53)%, and (40.9–70)% when TCP Stratos is used instead of existing TCP variant and modified TCP variant, respectively.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140156309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
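The abstract above says TCP Stratos combines loss-based and delay-based components and adapts its window evolution to SGW/SSW events, but it does not publish the update rule. The sketch below is only a hedged illustration of that general idea: a generic AIMD-style controller with an extra, gentler backoff when a hypothetical `disturbance_detected` flag is raised. All constants and signal names are assumptions, not the paper's algorithm.

```python
# Hedged sketch: combine loss-based and delay-based congestion control with an extra
# backoff when a stratospheric disturbance (SGW/SSW) is flagged. Constants and the
# disturbance_detected signal are hypothetical, not TCP Stratos's published rule.

def update_cwnd(cwnd, rtt, base_rtt, loss_detected, disturbance_detected,
                alpha=1.0, beta=0.5, gamma=0.7, delay_threshold=1.5):
    """Return the next congestion window (in segments)."""
    if loss_detected:
        return max(1.0, cwnd * beta)          # loss-based multiplicative decrease
    if disturbance_detected:
        return max(1.0, cwnd * gamma)         # gentler backoff for a disturbed channel
    if rtt > delay_threshold * base_rtt:
        return max(1.0, cwnd - alpha)         # delay-based: queue building up, ease off
    return cwnd + alpha / cwnd                # congestion avoidance: additive increase

cwnd = 10.0
for rtt, loss, dist in [(0.10, False, False), (0.18, False, True), (0.25, True, False)]:
    cwnd = update_cwnd(cwnd, rtt, base_rtt=0.10, loss_detected=loss, disturbance_detected=dist)
    print(round(cwnd, 2))
```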
Optimizing the resource allocation in cyber physical energy systems based on cloud storage and IoT infrastructure
Journal of Cloud Computing Pub Date : 2024-03-15 DOI: 10.1186/s13677-024-00615-x
Zhiqing Bai, Caizhong Li, Javad Pourzamani, Xuan Yang, Dejuan Li
{"title":"Optimizing the resource allocation in cyber physical energy systems based on cloud storage and IoT infrastructure","authors":"Zhiqing Bai, Caizhong Li, Javad Pourzamani, Xuan Yang, Dejuan Li","doi":"10.1186/s13677-024-00615-x","DOIUrl":"https://doi.org/10.1186/s13677-024-00615-x","url":null,"abstract":"Given the prohibited operating zones, losses, and valve point effects in power systems, energy optimization analysis in such systems includes numerous non-convex and non-smooth parameters, such as economic dispatch problems. In addition, in this paper, to include all possible scenarios in economic dispatch problems, multi-fuel generators, and transmission losses are considered. However, these features make economic dispatch problems more complex from a non-convexity standpoint. In order to solve economic dispatch problems as an important consideration in power systems, this paper presents a modified robust, and effective optimization algorithm. Here, some modifications are carried out to tackle such a sophisticated problem and find the best solution, considering multiple fuels, valve point effect, large-scale systems, prohibited operating zones, and transmission losses. Moreover, a few complicated power systems including 6, 13, and 40 generators which are fed by one type of fuel, 10 generators with multiple fuels, and two large-scale cases comprised of 80 and 120 generators are analyzed by the proposed optimization algorithm. The effectiveness of the proposed method, in terms of accuracy, robustness, and convergence speed is evaluated, as well. Furthermore, this paper explores the integration of cloud storage and internet of things (IoT) to augment the adaptability of monitoring capabilities of the proposed method in handling non-convex energy resource management and allocation problems across various generator quantities and constraints. The results show the capability of the proposed algorithm for solving non-convex energy resource management and allocation problems irrespective of the number of generators and constraints. Based on the obtained results, the proposed method provides good results for both small and large systems. The proposed method, for example, always yields the best results for the system of 6 power plants with and without losses, which are $15,276.894 and $15,443.7967. Moreover, the improvements made in the proposed method have allowed the economic dispatch problem regarding multi-fuel power plants to be solved not only with optimal results ($623.83) but also in less than 35 iterations. Lastly, the difference between the best-obtained results ($121,412) and the worst-obtained results ($121,316.1992) for the system of 40 power plants is only about $4 which is quite acceptable.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140155613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
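The non-smoothness the abstract refers to comes from the standard valve-point-effect fuel-cost model, in which a rectified sinusoid is added to the quadratic cost of each unit. The sketch below evaluates that textbook cost function; the coefficients are illustrative, not the paper's test systems, and the paper's modified optimization algorithm is not reproduced here.

```python
import math

# Hedged sketch: the classic non-smooth fuel-cost model with valve-point effects that
# economic dispatch studies minimize. Coefficients are illustrative placeholders.

def unit_cost(p, a, b, c, e, f, p_min):
    # Quadratic fuel cost plus the rectified-sinusoid valve-point term
    return a * p**2 + b * p + c + abs(e * math.sin(f * (p_min - p)))

def total_cost(powers, coeffs):
    return sum(unit_cost(p, *cf) for p, cf in zip(powers, coeffs))

coeffs = [(0.007, 7.0, 240.0, 300.0, 0.035, 100.0),   # (a, b, c, e, f, p_min) per unit
          (0.0095, 10.0, 200.0, 200.0, 0.042, 50.0)]
print(total_cost([250.0, 120.0], coeffs))             # total $/h for a two-unit dispatch
```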
SRA-E-ABCO: terminal task offloading for cloud-edge-end environments
Journal of Cloud Computing Pub Date : 2024-03-14 DOI: 10.1186/s13677-024-00622-y
Shun Jiao, Haiyan Wang, Jian Luo
{"title":"SRA-E-ABCO: terminal task offloading for cloud-edge-end environments","authors":"Shun Jiao, Haiyan Wang, Jian Luo","doi":"10.1186/s13677-024-00622-y","DOIUrl":"https://doi.org/10.1186/s13677-024-00622-y","url":null,"abstract":"The rapid development of the Internet technology along with the emergence of intelligent applications has put forward higher requirements for task offloading. In Cloud-Edge-End (CEE) environments, offloading computing tasks of terminal devices to edge and cloud servers can effectively reduce system delay and alleviate network congestion. Designing a reliable task offloading strategy in CEE environments to meet users’ requirements is a challenging issue. To design an effective offloading strategy, a Service Reliability Analysis and Elite-Artificial Bee Colony Offloading model (SRA-E-ABCO) is presented for cloud-edge-end environments. Specifically, a Service Reliability Analysis (SRA) method is proposed to assist in predicting the offloading necessity of terminal tasks and analyzing the attributes of terminal devices and edge nodes. An Elite Artificial Bee Colony Offloading (E-ABCO) method is also proposed, which optimizes the offloading strategy by combining elite populations with improved fitness formulas, position update formulas, and population initialization methods. Simulation results on real datasets validate the efficient performance of the proposed scheme that not only reduces task offloading delay but also optimize system overhead in comparison to baseline schemes.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140127587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
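E-ABCO belongs to the artificial bee colony family of metaheuristics with an elite population retained across iterations. The sketch below shows that generic pattern on a toy offloading-delay objective; the delay model, elite ratio, and neighbourhood-update rule are assumptions, since the paper's improved fitness, position-update, and initialization formulas are not given in the abstract.

```python
import random

# Hedged sketch of an elite-preserving artificial bee colony loop (the family E-ABCO
# belongs to), minimizing a toy task-offloading delay. All constants are illustrative.

def offload_delay(x):
    # x[i] in [0, 1] is the fraction of task i offloaded to the edge (toy objective).
    local, edge, tx = 8.0, 2.0, 1.5          # hypothetical per-task delays
    return sum((1 - xi) * local + xi * (edge + tx) for xi in x)

def e_abco(n_tasks=5, colony=20, elites=4, iters=100):
    pop = [[random.random() for _ in range(n_tasks)] for _ in range(colony)]
    for _ in range(iters):
        pop.sort(key=offload_delay)
        elite_pool = pop[:elites]                      # elites survive unchanged
        new_pop = [list(e) for e in elite_pool]
        while len(new_pop) < colony:
            base = random.choice(elite_pool)           # search around an elite solution
            k = random.randrange(n_tasks)
            cand = list(base)
            cand[k] = min(1.0, max(0.0, cand[k] + random.uniform(-0.1, 0.1)))
            new_pop.append(cand)
        pop = new_pop
    return min(pop, key=offload_delay)

best = e_abco()
print(best, offload_delay(best))
```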
FLM-ICR: a federated learning model for classification of internet of vehicle terminals using connection records
Journal of Cloud Computing Pub Date : 2024-03-13 DOI: 10.1186/s13677-024-00623-x
Kai Yang, Jiawei Du, Jingchao Liu, Feng Xu, Ye Tang, Ming Liu, Zhibin Li
{"title":"FLM-ICR: a federated learning model for classification of internet of vehicle terminals using connection records","authors":"Kai Yang, Jiawei Du, Jingchao Liu, Feng Xu, Ye Tang, Ming Liu, Zhibin Li","doi":"10.1186/s13677-024-00623-x","DOIUrl":"https://doi.org/10.1186/s13677-024-00623-x","url":null,"abstract":"With the rapid growth of Internet of Vehicles (IoV) technology, the performance and privacy of IoV terminals (IoVT) have become increasingly important. This paper proposes a federated learning model for IoVT classification using connection records (FLM-ICR) to address privacy concerns and poor computational performance in analyzing users' private data in IoV. FLM-ICR, in the horizontally federated learning client-server architecture, utilizes an improved multi-layer perceptron and logistic regression network as the model backbone, employs the federated momentum gradient algorithm as the local model training optimizer, and uses the federated Gaussian differential privacy algorithm to protect the security of the computation process. The experiment evaluates the model's classification performance using the confusion matrix, explores the impact of client collaboration on model performance, demonstrates the model's suitability for imbalanced data distribution, and confirms the effectiveness of federated learning for model training. FLM-ICR achieves the accuracy, precision, recall, specificity, and F1 score of 0.795, 0.735, 0.835, 0.75, and 0.782, respectively, outperforming existing research methods and balancing classification performance and privacy security, making it suitable for IoV computation and analysis of private data.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140127220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
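The training pattern the abstract describes (local momentum-gradient updates, Gaussian-noised client updates, server-side averaging) can be illustrated with a minimal federated round. The sketch below assumes a plain logistic-regression client; the noise scale, clipping bound, and learning rates are illustrative, not the paper's calibrated differential-privacy parameters.

```python
import numpy as np

# Hedged sketch of a federated round: momentum-based local training, clipped and
# Gaussian-noised client updates, FedAvg-style aggregation. All hyperparameters are
# illustrative stand-ins for FLM-ICR's actual settings.

rng = np.random.default_rng(0)

def client_update(w, X, y, lr=0.1, momentum=0.9, epochs=5, clip=1.0, sigma=0.05):
    w_local, v = w.copy(), np.zeros_like(w)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w_local))          # logistic regression forward pass
        grad = X.T @ (p - y) / len(y)
        v = momentum * v + grad                          # momentum gradient step
        w_local -= lr * v
    delta = w_local - w
    delta *= min(1.0, clip / (np.linalg.norm(delta) + 1e-12))          # clip the update
    return delta + rng.normal(0.0, sigma * clip, size=delta.shape)     # Gaussian DP noise

def federated_round(w, clients):
    deltas = [client_update(w, X, y) for X, y in clients]
    return w + np.mean(deltas, axis=0)                   # server averages client updates

d = 4
clients = [(rng.normal(size=(64, d)), rng.integers(0, 2, 64).astype(float)) for _ in range(3)]
w = np.zeros(d)
for _ in range(10):
    w = federated_round(w, clients)
print(w)
```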
Multi-dimensional resource allocation strategy for LEO satellite communication uplinks based on deep reinforcement learning
Journal of Cloud Computing Pub Date : 2024-03-08 DOI: 10.1186/s13677-024-00621-z
Yu Hu, Feipeng Qiu, Fei Zheng, Jilong Zhao
{"title":"Multi-dimensional resource allocation strategy for LEO satellite communication uplinks based on deep reinforcement learning","authors":"Yu Hu, Feipeng Qiu, Fei Zheng, Jilong Zhao","doi":"10.1186/s13677-024-00621-z","DOIUrl":"https://doi.org/10.1186/s13677-024-00621-z","url":null,"abstract":"In the LEO satellite communication system, the resource utilization rate is very low due to the constrained resources on satellites and the non-uniform distribution of traffics. In addition, the rapid movement of LEO satellites leads to complicated and changeable networks, which makes it difficult for traditional resource allocation strategies to improve the resource utilization rate. To solve the above problem, this paper proposes a resource allocation strategy based on deep reinforcement learning. The strategy takes the weighted sum of spectral efficiency, energy efficiency and blocking rate as the optimization objective, and constructs a joint power and channel allocation model. The strategy allocates channels and power according to the number of channels, the number of users and the type of business. In the reward decision mechanism, the maximum reward is obtained by maximizing the increment of the optimization target. However, during the optimization process, the decision always focuses on the optimal allocation for current users, and ignores QoS for new users. To avoid the situation, current service beams are integrated with high- traffic beams, and states of beams are refactored to maximize long-term benefits to improve system performance. Simulation experiments show that in scenarios with a high number of users, the proposed resource allocation strategy reduces the blocking rate by at least 5% compared to reinforcement learning methods, effectively enhancing resource utilization.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140075383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
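The optimization objective named in the abstract is a weighted sum of spectral efficiency, energy efficiency, and blocking rate. The sketch below shows one plausible way such a reward term could be computed from a Shannon-capacity link model; the weights, scaling, and link parameters are assumptions, and the paper's DRL agent and beam-state refactoring are not reproduced.

```python
import math

# Hedged sketch of a weighted-sum reward over spectral efficiency, energy efficiency,
# and blocking rate. Weights and the link model are illustrative assumptions.

def reward(bandwidth_hz, power_w, noise_w, channel_gain, blocked, requests,
           w_se=1.0, w_ee=0.5, w_block=2.0):
    snr = power_w * channel_gain / noise_w
    spectral_eff = math.log2(1.0 + snr)                        # bit/s/Hz (Shannon capacity)
    energy_eff = bandwidth_hz * spectral_eff / power_w         # bit/s per watt
    blocking_rate = blocked / max(requests, 1)
    return w_se * spectral_eff + w_ee * energy_eff / 1e6 - w_block * blocking_rate

print(reward(bandwidth_hz=5e6, power_w=2.0, noise_w=1e-9,
             channel_gain=1e-8, blocked=3, requests=40))
```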
Edge-cloud computing oriented large-scale online music education mechanism driven by neural networks
Journal of Cloud Computing Pub Date : 2024-03-07 DOI: 10.1186/s13677-023-00555-y
Wen Xing, Adam Slowik, J. Dinesh Peter
{"title":"Edge-cloud computing oriented large-scale online music education mechanism driven by neural networks","authors":"Wen Xing, Adam Slowik, J. Dinesh Peter","doi":"10.1186/s13677-023-00555-y","DOIUrl":"https://doi.org/10.1186/s13677-023-00555-y","url":null,"abstract":"With the advent of the big data era, edge cloud computing has developed rapidly. In this era of popular digital music, various technologies have brought great convenience to online music education. But vast databases of digital music prevent educators from making specific-purpose choices. Music recommendation will be a potential development direction for online music education. In this paper, we propose a deep learning model based on multi-source information fusion for music recommendation under the scenario of edge-cloud computing. First, we use the music latent factor vector obtained by the Weighted Matrix Factorization (WMF) algorithm as the ground truth. Second, we build a neural network model to fuse multiple sources of music information, including music spectrum extracted from extra music information to predict the latent spatial features of music. Finally, we predict the user’s preference for music through the inner product of the user vector and the music vector for recommendation. Experimental results on public datasets and real music data collected by edge devices demonstrate the effectiveness of the proposed method in music recommendation.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140057209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
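The final recommendation step the abstract describes, scoring a user's preference as the inner product of a user vector and a music latent vector, is easy to illustrate once the factor matrices exist. In the sketch below the factors are random stand-ins (in the paper they come from WMF or from the network's predictions on audio spectra).

```python
import numpy as np

# Hedged sketch: score user-track preference by inner product of latent vectors and
# return top-k recommendations. Factor matrices here are random placeholders.

rng = np.random.default_rng(1)
n_users, n_tracks, k = 4, 6, 8
user_factors = rng.normal(size=(n_users, k))
track_factors = rng.normal(size=(n_tracks, k))    # in the paper, predicted from audio features

scores = user_factors @ track_factors.T           # predicted preference matrix
top2 = np.argsort(-scores, axis=1)[:, :2]         # top-2 track indices per user
print(top2)
```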
RNA-RBP interactions recognition using multi-label learning and feature attention allocation
Journal of Cloud Computing Pub Date : 2024-03-07 DOI: 10.1186/s13677-024-00612-0
Huirui Han, Bandeh Ali Talpur, Wei Liu, Limei Wang, Bilal Ahmed, Nadia Sarhan, Emad Mahrous Awwad
{"title":"RNA-RBP interactions recognition using multi-label learning and feature attention allocation","authors":"Huirui Han, Bandeh Ali Talpur, Wei Liu, Limei Wang, Bilal Ahmed, Nadia Sarhan, Emad Mahrous Awwad","doi":"10.1186/s13677-024-00612-0","DOIUrl":"https://doi.org/10.1186/s13677-024-00612-0","url":null,"abstract":"In this study, we present a sophisticated multi-label deep learning framework for the prediction of RNA-RBP (RNA-binding protein) interactions, a critical aspect in understanding RNA functionality modulation and its implications in disease pathogenesis. Our approach leverages machine learning to develop a rapid and cost-efficient predictive model for these interactions. The proposed model captures the complex characteristics of RNA and recognizes corresponding RBPs through its dual-module architecture. The first module employs convolutional neural networks (CNNs) for intricate feature extraction from RNA sequences, enabling the model to discern nuanced patterns and attributes. The second module is a multi-view multi-label classification system incorporating a feature attention mechanism. The second module is a multi-view multi-label classification system that utilizes a feature attention mechanism. This mechanism is designed to intricately analyze and distinguish between common and unique deep features derived from the diverse RNA characteristics. To evaluate the model's efficacy, extensive experiments were conducted on a comprehensive RNA-RBP interaction dataset. The results emphasize substantial improvements in the model's ability to predict RNA-RBP interactions compared to existing methodologies. This advancement emphasizes the model's potential in contributing to the understanding of RNA-mediated biological processes and disease etiology.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140057502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
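The data flow the abstract outlines, encode the RNA sequence, extract convolutional features, then score several RBP labels independently (multi-label classification), can be sketched end to end with random weights. Everything below is an illustrative stand-in: the filter count, window size, three hypothetical RBP labels, and the absence of the paper's attention mechanism.

```python
import numpy as np

# Hedged sketch: one-hot encode an RNA sequence, apply a toy 1-D convolution with
# global max pooling, and produce per-label sigmoid scores. Weights are random.

rng = np.random.default_rng(2)
ALPHABET = "ACGU"

def one_hot(seq):
    m = np.zeros((len(seq), 4))
    for i, ch in enumerate(seq):
        m[i, ALPHABET.index(ch)] = 1.0
    return m

def conv1d(x, filters):            # x: (L, 4), filters: (n_filters, width, 4)
    n_f, w, _ = filters.shape
    L = x.shape[0] - w + 1
    out = np.zeros((L, n_f))
    for i in range(L):
        out[i] = np.tensordot(filters, x[i:i + w], axes=([1, 2], [0, 1]))
    return np.maximum(out, 0.0)    # ReLU

x = one_hot("AUGGCUACGUAGC")
feats = conv1d(x, rng.normal(size=(8, 5, 4))).max(axis=0)    # global max pooling -> (8,)
logits = rng.normal(size=(3, 8)) @ feats                      # 3 hypothetical RBP labels
probs = 1.0 / (1.0 + np.exp(-logits))
print((probs > 0.5).astype(int))                              # per-label binding prediction
```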
Low-cost and high-performance abnormal trajectory detection based on the GRU model with deep spatiotemporal sequence analysis in cloud computing
Journal of Cloud Computing Pub Date : 2024-03-05 DOI: 10.1186/s13677-024-00611-1
Guohao Tang, Huaying Zhao, Baohua Yu
{"title":"Low-cost and high-performance abnormal trajectory detection based on the GRU model with deep spatiotemporal sequence analysis in cloud computing","authors":"Guohao Tang, Huaying Zhao, Baohua Yu","doi":"10.1186/s13677-024-00611-1","DOIUrl":"https://doi.org/10.1186/s13677-024-00611-1","url":null,"abstract":"Trajectory anomalies serve as early indicators of potential issues and frequently provide valuable insights into event occurrence. Existing methods for detecting abnormal trajectories primarily focus on comparing the spatial characteristics of the trajectories. However, they fail to capture the temporal dimension’s pattern and evolution within the trajectory data, thereby inadequately identifying the behavioral inertia of the target group. A few detection methods that incorporate spatiotemporal features have also failed to adequately analyze the spatiotemporal sequence evolution information; consequently, detection methods that ignore temporal and spatial correlations are too one-sided. Recurrent neural networks (RNNs), especially gate recurrent unit (GRU) that design reset and update gate control units, process nonlinear sequence processing capabilities, enabling effective extraction and analysis of both temporal and spatial characteristics. However, the basic GRU network model has limited expressive power and may not be able to adequately capture complex sequence patterns and semantic information. To address the above issues, an abnormal trajectory detection method based on the improved GRU model is proposed in cloud computing in this paper. To enhance the anomaly detection ability and training efficiency of relevant models, strictly control the input of irrelevant features and improve the model fitting effect, an improved model combining the random forest algorithm and fully connected layer network is designed. The method deconstructs spatiotemporal semantics through reset and update gated units, while effectively capturing feature evolution information and target behavioral inertia by leveraging the integration of features and nonlinear mapping capabilities of the fully connected layer network. The experimental results based on the GeoLife GPS trajectory dataset indicate that the proposed approach improves both generalization ability by 1% and reduces training cost by 31.68%. This success do provides a practical solution for the task of anomaly trajectory detection.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140035370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
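The recurrent unit the abstract builds on is the standard GRU cell with a reset gate and an update gate. The sketch below steps one such cell over a toy trajectory; the weights are random stand-ins, the four-feature input layout is an assumption, and the paper's random-forest feature selection and fully connected detection head are not reproduced.

```python
import numpy as np

# Hedged sketch of a single standard GRU cell step (reset gate r, update gate z).
# Weights and input features are illustrative placeholders.

rng = np.random.default_rng(3)
d_in, d_h = 4, 8                       # e.g. (lat, lon, speed, heading) per trajectory point
W = {k: rng.normal(scale=0.1, size=(d_h, d_in)) for k in "rzh"}
U = {k: rng.normal(scale=0.1, size=(d_h, d_h)) for k in "rzh"}
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h):
    r = sigmoid(W["r"] @ x + U["r"] @ h)                 # reset gate
    z = sigmoid(W["z"] @ x + U["z"] @ h)                 # update gate
    h_tilde = np.tanh(W["h"] @ x + U["h"] @ (r * h))     # candidate state
    return (1.0 - z) * h + z * h_tilde                   # interpolate old and candidate state

h = np.zeros(d_h)
for x in rng.normal(size=(20, d_in)):                    # a toy trajectory of 20 points
    h = gru_step(x, h)
print(h.round(3))                                        # final state fed to a classifier head
```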
AI-empowered mobile edge computing: inducing balanced federated learning strategy over edge for balanced data and optimized computation cost
Journal of Cloud Computing Pub Date : 2024-03-04 DOI: 10.1186/s13677-024-00614-y
Momina Shaheen, Muhammad S. Farooq, Tariq Umer
{"title":"AI-empowered mobile edge computing: inducing balanced federated learning strategy over edge for balanced data and optimized computation cost","authors":"Momina Shaheen, Muhammad S. Farooq, Tariq Umer","doi":"10.1186/s13677-024-00614-y","DOIUrl":"https://doi.org/10.1186/s13677-024-00614-y","url":null,"abstract":"In Mobile Edge Computing, the framework of federated learning can enable collaborative learning models across edge nodes, without necessitating the direct exchange of data from edge nodes. It addresses significant challenges encompassing access rights, privacy, security, and the utilization of heterogeneous data sources over mobile edge computing. Edge devices generate and gather data, across the network, in non-IID (independent and identically distributed) manner leading to potential variations in the number of data samples among these edge networks. A method is proposed to work in federated learning under edge computing setting, which involves AI techniques such as data augmentation and class estimation and balancing during training process with minimized computational overhead. This is accomplished through the implementation of data augmentation techniques to refine data distribution. Additionally, we leveraged class estimation and employed linear regression for client-side model training. This strategic approach yields a reduction in computational costs. To validate the effectiveness of the proposed approach, it is applied to two distinct datasets. One dataset pertains to image data (FashionMNIST), while the other comprises numerical and textual data concerning stocks for predictive analysis of stock values. This approach demonstrates commendable performance across both dataset types and approaching more than 92% of accuracy in the paradigm of federated learning.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140026487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
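The client-side balancing idea the abstract describes can be shown with a minimal routine that estimates the local class distribution and oversamples minority classes before local training. Oversampling here is a simple stand-in for the paper's augmentation; the function name and data layout are assumptions.

```python
import random
from collections import Counter

# Hedged sketch: estimate local class counts on a client and oversample minority classes
# before local training. A real pipeline would apply richer augmentation to image data.

def balance_local_data(samples):
    """samples: list of (features, label). Returns a class-balanced copy."""
    by_class = {}
    for x, y in samples:
        by_class.setdefault(y, []).append((x, y))
    target = max(len(v) for v in by_class.values())
    balanced = []
    for y, items in by_class.items():
        balanced.extend(items)
        balanced.extend(random.choices(items, k=target - len(items)))   # oversample minority
    random.shuffle(balanced)
    return balanced

local = [([random.random()], 0) for _ in range(90)] + [([random.random()], 1) for _ in range(10)]
print(Counter(y for _, y in balance_local_data(local)))   # roughly 90 samples of each class
```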
Automated visual quality assessment for virtual and augmented reality based digital twins
Journal of Cloud Computing Pub Date : 2024-02-26 DOI: 10.1186/s13677-024-00616-w
Ben Roullier, Frank McQuade, Ashiq Anjum, Craig Bower, Lu Liu
{"title":"Automated visual quality assessment for virtual and augmented reality based digital twins","authors":"Ben Roullier, Frank McQuade, Ashiq Anjum, Craig Bower, Lu Liu","doi":"10.1186/s13677-024-00616-w","DOIUrl":"https://doi.org/10.1186/s13677-024-00616-w","url":null,"abstract":"Virtual and augmented reality digital twins are becoming increasingly prevalent in a number of industries, though the production of digital-twin systems applications is still prohibitively expensive for many smaller organisations. A key step towards reducing the cost of digital twins lies in automating the production of 3D assets, however efforts are complicated by the lack of suitable automated methods for determining the visual quality of these assets. While visual quality assessment has been an active area of research for a number of years, few publications consider this process in the context of asset creation in digital twins. In this work, we introduce an automated decimation procedure using machine learning to assess the visual impact of decimation, a process commonly used in the production of 3D assets which has thus far been underrepresented in the visual assessment literature. Our model combines 108 geometric and perceptual metrics to determine if a 3D object has been unacceptably distorted during decimation. Our model is trained on almost 4, 000 distorted meshes, giving a significantly wider range of applicability than many models in the literature. Our results show a precision of over 97% against a set of test models, and performance tests show our model is capable of performing assessments within 2 minutes on models of up to 25, 000 polygons. Based on these results we believe our model presents both a significant advance in the field of visual quality assessment and an important step towards reducing the cost of virtual and augmented reality-based digital-twins.","PeriodicalId":501257,"journal":{"name":"Journal of Cloud Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139968517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
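The assessment pattern the abstract describes is: compute metrics comparing an original mesh with its decimated version, stack them into a feature vector, and let a trained classifier decide whether the distortion is acceptable. The sketch below uses just two toy geometric metrics and a placeholder threshold; the paper's 108 geometric and perceptual features and its trained model are not reproduced.

```python
import numpy as np

# Hedged sketch: compare original and decimated vertex sets with two simple geometric
# metrics and apply a placeholder acceptability rule. Everything here is illustrative.

rng = np.random.default_rng(4)

def nearest_dists(a, b):
    # For each vertex in a, distance to the closest vertex in b (brute force).
    return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)).min(axis=1)

def distortion_features(orig_verts, decim_verts):
    d = nearest_dists(decim_verts, orig_verts)
    return np.array([d.mean(), d.max()])        # mean and worst-case surface deviation

orig = rng.normal(size=(500, 3))
decim = orig[::4] + rng.normal(scale=0.01, size=(125, 3))   # toy "decimated" mesh
feats = distortion_features(orig, decim)
acceptable = feats @ np.array([5.0, 1.0]) < 0.5             # placeholder for the trained classifier
print(feats.round(4), bool(acceptable))
```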