Frontiers in Signal Processing: Latest Articles

Spread spectrum modulation recognition based on phase diagram entropy
Frontiers in Signal Processing Pub Date: 2023-07-05 DOI: 10.3389/frsip.2023.1197619
Denis Stanescu, A. Digulescu, C. Ioana, A. Serbanescu
{"title":"Spread spectrum modulation recognition based on phase diagram entropy","authors":"Denis Stanescu, A. Digulescu, C. Ioana, A. Serbanescu","doi":"10.3389/frsip.2023.1197619","DOIUrl":"https://doi.org/10.3389/frsip.2023.1197619","url":null,"abstract":"Wireless communication technologies are undergoing intensive study and are experiencing accelerated progress which leads to a large increase in the number of end-users. Because of this, the radio spectrum has become more crowded than ever. These previously mentioned aspects lead to the urgent need for more reliable and intelligent communication systems that can improve the spectrum efficiency. Specifically, modulation scheme recognition occupies a crucial position in the civil and military application, especially with the emergence of Software Defined Radio (SDR). The modulation recognition is an indispensable task while performing spectrum sensing in Cognitive Radio (CR). Spread spectrum (SS) techniques represent the foundation for the design of Cognitive Radio systems. In this work, we propose a new method of characterization of Spread spectrum modulations capable of providing relevant information for the process of recognition of this type of modulations. Using the proposed approach, results higher than 90% are obtained in the modulation classification process, thus bringing an advantage over the classical methods, whose performance is below 75%.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83485629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
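The paper does not include code, but the idea named in the title can be illustrated: build a phase diagram of the signal (a delay embedding that plots each sample against a delayed copy), histogram it on a grid, and use the Shannon entropy of that histogram as a classification feature. The sketch below is an illustrative toy, not the authors' method; the embedding delay, grid size, and the DSSS-like test signal are all assumptions.

```python
import numpy as np

def phase_diagram_entropy(x, delay=1, bins=64):
    """Shannon entropy of a 2D phase-diagram (delay-embedding) histogram.

    Illustrative sketch only: delay, bin count and normalisation are
    assumptions, not the authors' published settings.
    """
    a, b = x[:-delay], x[delay:]              # pair each sample with its delayed copy
    hist, _, _ = np.histogram2d(a, b, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                              # ignore empty cells
    return float(-np.sum(p * np.log2(p)))

# Toy comparison: a narrowband tone versus a crudely spread (DSSS-like) signal.
rng = np.random.default_rng(0)
t = np.arange(4096)
tone = np.cos(2 * np.pi * 0.01 * t)
chips = rng.choice([-1.0, 1.0], size=t.size)  # hypothetical spreading sequence
dsss = chips * np.cos(2 * np.pi * 0.01 * t)
print(phase_diagram_entropy(tone), phase_diagram_entropy(dsss))
```

Intuitively, a spread spectrum signal fills the phase plane more uniformly than a narrowband one, so its phase-diagram entropy is higher; feeding such features to a classifier is one plausible route to the recognition accuracy reported in the abstract.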
The disparity between optimal and practical Lagrangian multiplier estimation in video encoders
Frontiers in Signal Processing Pub Date: 2023-07-03 DOI: 10.3389/frsip.2023.1205104
D. Ringis, Vibhoothi, François Pitié, A. Kokaram
{"title":"The disparity between optimal and practical Lagrangian multiplier estimation in video encoders","authors":"D. Ringis, Vibhoothi, François Pitié, A. Kokaram","doi":"10.3389/frsip.2023.1205104","DOIUrl":"https://doi.org/10.3389/frsip.2023.1205104","url":null,"abstract":"With video streaming making up 80% of the global internet bandwidth, the need to deliver high-quality video at low bitrate, combined with the high complexity of modern codecs, has led to the idea of a per-clip optimisation approach in transcoding. In this paper, we revisit the Lagrangian multiplier parameter, which is at the core of rate-distortion optimisation. Currently, video encoders use prediction models to set this parameter but these models are agnostic to the video at hand. We explore the gains that could be achieved using a per-clip direct-search optimisation of the Lagrangian multiplier parameter. We evaluate this optimisation framework on a much larger corpus of videos than that has been attempted by previous research. Our results show that per-clip optimisation of the Lagrangian multiplier leads to BD-Rate average improvements of 1.87% for x265 across a 10 k clip corpus of modern videos, and up to 25% in a single clip. Average improvements of 0.69% are reported for libaom-av1 on a subset of 100 clips. However, we show that a per-clip, per-frame-type optimisation of λ for libaom-av1 can increase these average gains to 2.5% and up to 14.9% on a single clip. Our optimisation scheme requires about 50–250 additional encodes per-clip but we show that significant speed-up can be made using proxy videos in the optimisation. These computational gains (of up to ×200) incur a slight loss to BD-Rate improvement because the optimisation is conducted at lower resolutions. Overall, this paper highlights the value of re-examining the estimation of the Lagrangian multiplier in modern codecs as there are significant gains still available without changing the tools used in the standards.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86668664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
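As a reference for the metric and search loop described above, the sketch below shows the standard Bjontegaard delta-rate (BD-Rate) computation and a coarse per-clip direct search over scale factors applied to the encoder's default Lagrangian multiplier. The `encode_rd_points` hook is hypothetical; it stands in for running x265 or libaom-av1 at several rate points (possibly on a low-resolution proxy) and is not an encoder API.

```python
import numpy as np

def bd_rate(rates_ref, quality_ref, rates_test, quality_test):
    """Bjontegaard delta-rate (%) between two rate-distortion curves.

    Standard recipe: fit a cubic of log-rate versus quality for each curve
    and average the horizontal gap over the overlapping quality range.
    Negative values mean the test configuration needs fewer bits.
    """
    log_ref, log_test = np.log(rates_ref), np.log(rates_test)
    p_ref = np.polyfit(quality_ref, log_ref, 3)
    p_test = np.polyfit(quality_test, log_test, 3)
    lo = max(np.min(quality_ref), np.min(quality_test))
    hi = min(np.max(quality_ref), np.max(quality_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    return (np.exp((int_test - int_ref) / (hi - lo)) - 1) * 100

def best_lambda_scale(encode_rd_points, scales=(0.5, 0.71, 1.0, 1.41, 2.0)):
    """Coarse direct search over multipliers of the default Lagrangian multiplier.

    `encode_rd_points(scale)` is a user-supplied, hypothetical hook returning
    (bitrates, qualities) for one clip encoded at several rate points.
    """
    ref_rates, ref_quality = encode_rd_points(1.0)
    results = {}
    for s in scales:
        rates, quality = encode_rd_points(s)
        results[s] = bd_rate(ref_rates, ref_quality, rates, quality)
    return min(results, key=results.get), results
```

A finer search (e.g. golden-section around the best coarse scale) is one way the 50–250 extra encodes per clip mentioned in the abstract could arise.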
A multilevel dynamic model for documenting, reactivating and preserving interactive multimedia art
Frontiers in Signal Processing Pub Date: 2023-06-30 DOI: 10.3389/frsip.2023.1183294
Alessandro Fiordelmondo, A. Russo, Mattia Pizzato, Luca Zecchinato, S. Canazza
{"title":"A multilevel dynamic model for documenting, reactivating and preserving interactive multimedia art","authors":"Alessandro Fiordelmondo, A. Russo, Mattia Pizzato, Luca Zecchinato, S. Canazza","doi":"10.3389/frsip.2023.1183294","DOIUrl":"https://doi.org/10.3389/frsip.2023.1183294","url":null,"abstract":"Preserving interactive multimedia artworks is a challenging research field due to their complex nature and technological obsolescence. Established preservation strategies are inadequate since they do not cover the complex relations between analogue and digital components, their short life expectancies, and the experience produced when the artworks are activated. The existence of many projects in this research area highlights the urgency to create a preservation practice focused on the new multimedia art forms. The paper introduces the Multilevel Dynamic Preservation (MDP) model, developed at the Centro di Sonologia Computazionale (CSC) of the University of Padova, which aims to preserve multimedia artworks through different levels of information (about the components, their relationship and the activated experiences) through various exhibitions and thus as a process or a dynamic object. The model has been developed through several case studies. This paper reports a specific and complex one: the “hybrid reactivation” of the Il caos delle sfere, a 1999 interactive installation by Italian composer Carlo De Pirro. The entire reactivation process aims at preserving its identity, rather than simply replicating the original installation, and consists of both the replacement of old and non-functioning components components (“adaptive/update approach”) and the reactivation of original parts (“purist approach“)-hence the name “hybrid reactivation”. Through this case study, it was possible to test and optimize the model in all aspects: from collecting old documentation and using it for reactivation to creating new documentation and archiving the entire artwork. The model allows us to preserve the artwork as a process of change, minimizing the loss of information about previous versions. Most importantly, it lets us rethink the concept of the authenticity of interactive multimedia art, shifting the focus from materiality to the experience and function that artworks activate. The model avoids recording both the last reactivation and the first exhibition as authentic. It records the process of transformation between reactivations. It is through this process that the authenticity of the artwork can be inferred.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"12 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86281441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Perceptual video quality assessment: the journey continues!
Frontiers in Signal Processing Pub Date: 2023-06-27 DOI: 10.3389/frsip.2023.1193523
Avinab Saha, Sai Karthikey Pentapati, Zaixi Shang, Ramit Pahwa, Bowen Chen, Hakan Emre Gedik, Sandeep Mishra, A. Bovik
{"title":"Perceptual video quality assessment: the journey continues!","authors":"Avinab Saha, Sai Karthikey Pentapati, Zaixi Shang, Ramit Pahwa, Bowen Chen, Hakan Emre Gedik, Sandeep Mishra, A. Bovik","doi":"10.3389/frsip.2023.1193523","DOIUrl":"https://doi.org/10.3389/frsip.2023.1193523","url":null,"abstract":"Perceptual Video Quality Assessment (VQA) is one of the most fundamental and challenging problems in the field of Video Engineering. Along with video compression, it has become one of two dominant theoretical and algorithmic technologies in television streaming and social media. Over the last 2 decades, the volume of video traffic over the internet has grown exponentially, powered by rapid advancements in cloud services, faster video compression technologies, and increased access to high-speed, low-latency wireless internet connectivity. This has given rise to issues related to delivering extraordinary volumes of picture and video data to an increasingly sophisticated and demanding global audience. Consequently, developing algorithms to measure the quality of pictures and videos as perceived by humans has become increasingly critical since these algorithms can be used to perceptually optimize trade-offs between quality and bandwidth consumption. VQA models have evolved from algorithms developed for generic 2D videos to specialized algorithms explicitly designed for on-demand video streaming, user-generated content (UGC), virtual and augmented reality (VR and AR), cloud gaming, high dynamic range (HDR), and high frame rate (HFR) scenarios. Along the way, we also describe the advancement in algorithm design, beginning with traditional hand-crafted feature-based methods and finishing with current deep-learning models powering accurate VQA algorithms. We also discuss the evolution of Subjective Video Quality databases containing videos and human-annotated quality scores, which are the necessary tools to create, test, compare, and benchmark VQA algorithms. To finish, we discuss emerging trends in VQA algorithm design and general perspectives on the evolution of Video Quality Assessment in the foreseeable future.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74386511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
4DEgo: ego-velocity estimation from high-resolution radar data
Frontiers in Signal Processing Pub Date: 2023-06-27 DOI: 10.3389/frsip.2023.1198205
Prashant Rai, N. Strokina, R. Ghabcheloo
{"title":"4DEgo: ego-velocity estimation from high-resolution radar data","authors":"Prashant Rai, N. Strokina, R. Ghabcheloo","doi":"10.3389/frsip.2023.1198205","DOIUrl":"https://doi.org/10.3389/frsip.2023.1198205","url":null,"abstract":"Automotive radars allow for perception of the environment in adverse visibility and weather conditions. New high-resolution sensors have demonstrated potential for tasks beyond obstacle detection and velocity adjustment, such as mapping or target tracking. This paper proposes an end-to-end method for ego-velocity estimation based on radar scan registration. Our architecture includes a 3D convolution over all three channels of the heatmap, capturing features associated with motion, and an attention mechanism for selecting significant features for regression. To the best of our knowledge, this is the first work utilizing the full 3D radar heatmap for ego-velocity estimation. We verify the efficacy of our approach using the publicly available ColoRadar dataset and study the effect of architectural choices and distributional shifts on performance.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89775602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
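The abstract describes the architecture only at a high level (3D convolution over the radar heatmap, attention, then regression). The PyTorch sketch below is an illustrative reading of that description, not the authors' network: the layer sizes, the channel-attention gate, and the input shape are assumptions.

```python
import torch
import torch.nn as nn

class EgoVelocityNet(nn.Module):
    """Toy 3D-CNN with a channel-attention gate that regresses a 3-D
    ego-velocity vector from a radar heatmap shaped
    (batch, 1, range, azimuth, elevation). Sketch only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4),
        )
        self.attention = nn.Sequential(nn.Linear(32, 32), nn.Sigmoid())
        self.regressor = nn.Linear(32 * 4 ** 3, 3)

    def forward(self, heatmap):
        f = self.features(heatmap)                  # (B, 32, 4, 4, 4)
        w = self.attention(f.mean(dim=(2, 3, 4)))   # per-channel weights (B, 32)
        f = f * w[:, :, None, None, None]           # re-weight feature maps
        return self.regressor(f.flatten(1))         # (B, 3) velocity estimate

# Example forward pass with a random heatmap.
net = EgoVelocityNet()
velocity = net(torch.randn(2, 1, 64, 64, 32))
```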
Epileptic seizure prediction based on multiresolution convolutional neural networks
Frontiers in Signal Processing Pub Date: 2023-05-30 DOI: 10.3389/frsip.2023.1175305
Ali K. Ibrahim, H. Zhuang, E. Tognoli, Ali Muhamed Ali, N. Erdol
{"title":"Epileptic seizure prediction based on multiresolution convolutional neural networks","authors":"Ali K. Ibrahim, H. Zhuang, E. Tognoli, Ali Muhamed Ali, N. Erdol","doi":"10.3389/frsip.2023.1175305","DOIUrl":"https://doi.org/10.3389/frsip.2023.1175305","url":null,"abstract":"Epilepsy withholds patients’ control of their body or consciousness and puts them at risk in the course of their daily life. This article pursues the development of a smart neurocomputational technology to alert epileptic patients wearing EEG sensors of an impending seizure. An innovative approach for epileptic seizure prediction has been proposed to improve prediction accuracy and reduce the false alarm rate in comparison with state-of-the-art benchmarks. Maximal overlap discrete wavelet transform was used to decompose EEG signals into different frequency resolutions, and a multiresolution convolutional neural network is designed to extract discriminative features from each frequency band. The algorithm automatically generates patient-specific features to best classify preictal and interictal segments of the subject. The method can be applied to any patient case from any dataset without the need for a handcrafted feature extraction procedure. The proposed approach was tested with two popular epilepsy patient datasets. It achieved a sensitivity of 82% and a false prediction rate of 0.058 with the Children’s Hospital Boston-MIT scalp EEG dataset and a sensitivity of 85% and a false prediction rate of 0.19 with the American Epilepsy Society Seizure Prediction Challenge dataset. This technology provides a personalized solution for the patient that has improved sensitivity and specificity, yet because of the algorithm’s intrinsic ability for generalization, it emancipates from the reliance on epileptologists’ expertise to tune a wearable technological aid, which will ultimately help to deploy it broadly, including in medically underserved locations across the globe.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84629413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
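A minimal sketch of the decomposition step described above: split an EEG segment into frequency bands and stack them as channels for a multiresolution CNN. The paper uses the maximal overlap discrete wavelet transform (MODWT); the closely related stationary (undecimated) wavelet transform from PyWavelets is used here as a stand-in, and the wavelet family and number of levels are assumptions.

```python
import numpy as np
import pywt

def eeg_to_multiresolution_channels(eeg, wavelet="db4", level=4):
    """Decompose a 1-D EEG segment into per-band channels for a CNN.

    Sketch only: uses pywt.swt as a stand-in for the MODWT used in the paper.
    """
    # pywt.swt requires the length to be divisible by 2**level, so trim.
    n = (len(eeg) // 2 ** level) * 2 ** level
    data = np.asarray(eeg[:n], dtype=float)
    coeffs = pywt.swt(data, wavelet, level=level)   # [(cA_L, cD_L), ..., (cA_1, cD_1)]
    # Keep the coarsest approximation plus the detail band of every level.
    bands = [coeffs[0][0]] + [d for _, d in coeffs]
    return np.stack(bands)                          # shape: (level + 1, n)

# Each row can feed one branch of a multiresolution CNN that classifies
# preictal versus interictal segments.
channels = eeg_to_multiresolution_channels(np.random.randn(2048))
print(channels.shape)   # (5, 2048)
```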
From 2D to 3D video conferencing: modular RGB-D capture and reconstruction for interactive natural user representations in immersive extended reality (XR) communication
Frontiers in Signal Processing Pub Date: 2023-05-22 DOI: 10.3389/frsip.2023.1139897
S. Gunkel, S. Dijkstra-Soudarissanane, H. Stokking, O. Niamut
{"title":"From 2D to 3D video conferencing: modular RGB-D capture and reconstruction for interactive natural user representations in immersive extended reality (XR) communication","authors":"S. Gunkel, S. Dijkstra-Soudarissanane, H. Stokking, O. Niamut","doi":"10.3389/frsip.2023.1139897","DOIUrl":"https://doi.org/10.3389/frsip.2023.1139897","url":null,"abstract":"With recent advancements in Virtual Reality (VR) and Augmented Reality (AR) hardware, many new immersive Extended Reality (XR) applications and services arose. One challenge that remains is to solve the social isolation often felt in these extended reality experiences and to enable a natural multi-user communication with high Social Presence. While a multitude of solutions exist to address this issue with computer-generated “artificial” avatars (based on pre-rendered 3D models), this form of user representation might not be sufficient for conveying a sense of co-presence for many use cases. In particular, for personal communication (for example, with family, doctor, or sales representatives) or for applications requiring photorealistic rendering. One alternative solution is to capture users (and objects) with the help of RGBD sensors to allow real-time photorealistic representations of users. In this paper, we present a complete and modular RGBD capture application and outline the different steps needed to utilize RGBD as means of photorealistic 3D user representations. We outline different capture modalities, as well as individual functional processing blocks, with its advantages and disadvantages. We evaluate our approach in two ways, a technical evaluation of the operation of the different modules and two small-scale user evaluations within integrated applications. The integrated applications present the use of the modular RGBD capture in both augmented reality and virtual reality communication application use cases, tested in realistic real-world settings. Our examples show that the proposed modular capture and reconstruction pipeline allows for easy evaluation and extension of each step of the processing pipeline. Furthermore, it allows parallel code execution, keeping performance overhead and delay low. Finally, our proposed methods show that an integration of 3D photorealistic user representations into existing video communication transmission systems is feasible and allows for new immersive extended reality applications.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"122 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77391334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
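The paper emphasizes a modular pipeline in which individual processing blocks can be swapped and evaluated independently. The skeleton below sketches that idea generically in Python; the stage names and the `Frame` dictionary are illustrative placeholders, not the authors' module interfaces.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Frame = Dict[str, object]   # e.g. {"rgb": ..., "depth": ..., "intrinsics": ...}

@dataclass
class CapturePipeline:
    """Chain of swappable processing stages, in the spirit of the paper's
    modular RGB-D capture application (hypothetical interface)."""
    stages: List[Callable[[Frame], Frame]] = field(default_factory=list)

    def add(self, stage: Callable[[Frame], Frame]) -> "CapturePipeline":
        self.stages.append(stage)
        return self

    def run(self, frame: Frame) -> Frame:
        for stage in self.stages:
            frame = stage(frame)        # each block can be replaced or profiled
        return frame

# Placeholder stages standing in for calibration, filtering and reconstruction.
def undistort(frame: Frame) -> Frame: return frame
def depth_filter(frame: Frame) -> Frame: return frame
def to_point_cloud(frame: Frame) -> Frame: return frame

pipeline = CapturePipeline().add(undistort).add(depth_filter).add(to_point_cloud)
result = pipeline.run({"rgb": None, "depth": None})
```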
Data-driven airborne Bayesian forward-looking superresolution imaging based on generalized Gaussian distribution
Frontiers in Signal Processing Pub Date: 2023-05-11 DOI: 10.3389/frsip.2023.1093203
Hongmeng Chen, Zeyu Wang, Yingjie Zhang, X. Jin, Wenquan Gao, Jizhou Yu
{"title":"Data-driven airborne bayesian forward-looking superresolution imaging based on generalized Gaussian distribution","authors":"Hongmeng Chen, Zeyu Wang, Yingjie Zhang, X. Jin, Wenquan Gao, Jizhou Yu","doi":"10.3389/frsip.2023.1093203","DOIUrl":"https://doi.org/10.3389/frsip.2023.1093203","url":null,"abstract":"Airborne forward-looking radar (AFLR) has been more and more impoatant due to its wide application in the military and civilian fields, such as automatic driving, sea surveillance, airport surveillance and guidance. Recently, sparse deconvolution technique has been paid much attention in AFLR. However, the azimuth resolution performance gradually decreases with the complexity of the imaging scene. In this paper, a data-driven airborne Bayesian forward-looking superresolution imaging algorithm based on generalized gaussian distribution (GGD- Bayesian) for complex imaging scene is proposed. The generalized gaussian distribution is utilized to describe the sparsity information of the imaging scene, which is quite essential to adaptively fit different imaging scenes. Moreover, the mathematical model for forward-looking imaging was established under the maximum a posteriori (MAP) criterion based on the Bayesian framework. To solve the above optimization problem, quasi-Newton algorithm is derived and used. The main contribution of the paper is the automatic selection for the sparsity parameter in the process of forward-looking imaging. The performance assessment with simulated data has demonstrated the effectiveness of our proposed GGD- Bayesian algorithm under complex scenarios.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85594732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
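A toy version of the estimation problem described above: a MAP cost combining a Gaussian likelihood with a generalized Gaussian prior, minimized by a quasi-Newton method (L-BFGS via SciPy). The shape parameter `p` and weight `lam` are fixed by hand here, whereas the paper's contribution is selecting the sparsity parameter automatically, which this sketch does not reproduce.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import convolution_matrix

def ggd_map_deconvolve(y, H, lam=0.05, p=1.0):
    """MAP estimate with a generalized-Gaussian prior, solved by L-BFGS.

    Sketch under assumptions: Gaussian noise, fixed p and lam, |x|^p
    smoothed near zero so the gradient stays finite.
    """
    eps = 1e-8

    def cost(x):
        r = H @ x - y
        return 0.5 * r @ r + lam * np.sum((x * x + eps) ** (p / 2))

    def grad(x):
        r = H @ x - y
        return H.T @ r + lam * p * x * (x * x + eps) ** (p / 2 - 1)

    res = minimize(cost, H.T @ y, jac=grad, method="L-BFGS-B")
    return res.x

# Toy example: recover a sparse azimuth profile blurred by a beam pattern.
rng = np.random.default_rng(1)
n = 128
x_true = np.zeros(n)
x_true[[30, 70, 75]] = [1.0, 0.8, 0.6]
beam = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)   # assumed antenna pattern
H = convolution_matrix(beam, n, mode="same")
y = H @ x_true + 0.01 * rng.standard_normal(n)
x_hat = ggd_map_deconvolve(y, H)
```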
Apparent color picker: color prediction model to extract apparent color in photos
Frontiers in Signal Processing Pub Date: 2023-05-09 DOI: 10.3389/frsip.2023.1133210
Yuki Kubota, Shigeo Yoshida, M. Inami
{"title":"Apparent color picker: color prediction model to extract apparent color in photos","authors":"Yuki Kubota, Shigeo Yoshida, M. Inami","doi":"10.3389/frsip.2023.1133210","DOIUrl":"https://doi.org/10.3389/frsip.2023.1133210","url":null,"abstract":"A color extraction interface reflecting human color perception helps pick colors from natural images as users see. Apparent color in photos differs from pixel color due to complex factors, including color constancy and adjacent color. However, methodologies for estimating the apparent color in photos have yet to be proposed. In this paper, the authors investigate suitable model structures and features for constructing an apparent color picker, which extracts the apparent color from natural photos. Regression models were constructed based on the psychophysical dataset for given images to predict the apparent color from image features. The linear regression model incorporates features that reflect multi-scale adjacent colors. The evaluation experiments confirm that the estimated color was closer to the apparent color than the pixel color for an average of 70%–80% of the images. However, the accuracy decreased for several conditions, including low and high saturation at low luminance. The authors believe that the proposed methodology could be applied to develop user interfaces to compensate for the discrepancy between human perception and computer predictions.","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89670813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
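A minimal sketch of the modelling approach described above: per-pixel features built from the pixel colour plus mean colours of neighbourhoods at several scales, fed to a linear regression that maps them to human-reported apparent colours. The neighbourhood radii and the placeholder training data are assumptions; the paper's psychophysical dataset and exact feature set are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def multiscale_color_features(img, x, y, radii=(1, 4, 16)):
    """Pixel RGB plus mean RGB of square neighbourhoods at several scales
    (a simple stand-in for multi-scale adjacent-colour features)."""
    h, w, _ = img.shape
    feats = [img[y, x]]
    for r in radii:
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        feats.append(img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0))
    return np.concatenate(feats)

# Fit pixel+context features to human-reported apparent colours.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
pts = [(10, 20), (30, 40), (50, 12), (22, 55)]
X = np.array([multiscale_color_features(img, x, y) for x, y in pts])
Y = rng.random((len(pts), 3))          # placeholder for annotated apparent colours
model = LinearRegression().fit(X, Y)
pred = model.predict(X[:1])            # predicted apparent RGB for the first pixel
```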
Editorial: Recent trends in multimedia forensics and visual content verification
Frontiers in Signal Processing Pub Date: 2023-05-09 DOI: 10.3389/frsip.2023.1210123
R. Caldelli, Duc Tien Dang Nguyen, Cecilia Pasquini
{"title":"Editorial: Recent trends in multimedia forensics and visual content verification","authors":"R. Caldelli, Duc Tien Dang Nguyen, Cecilia Pasquini","doi":"10.3389/frsip.2023.1210123","DOIUrl":"https://doi.org/10.3389/frsip.2023.1210123","url":null,"abstract":"Huge amounts of multimedia content are in fact generated every day, pervading the web and popular sharing platforms such as social networks. Such data carry embedded traces due to the whole creation and sharing cycle, which can be recovered and exploited to assess the authenticity of a specific asset. This includes identifying the provenance of media data, the generation device or crafting method, as well as potential manipulation of the multimedia signal. Also, the massive introduction of artificial intelligence and of modern performing devices, together with new paradigms for content sharing and usage, have determined the need to research novel methodologies that can globally take into account all these important changes. This Research Topic gathers cutting-edge techniques for the forensic analysis and verification of media data, including solutions at the edge of signal processing, machine/ deep learning, and multimedia analysis. Research approaches to multimedia forensics have rapidly evolved in the last years, as a consequence of both technological advancements inmedia creation and distribution, andmethodological advancements in signal processing and learning. One evident aspect is the disruptive diffusion of deep learning models for addressing tasks related to audio-visual data. As a consequence of the impressive performance boost they brought in different areas, deep architectures nowadays dominate in multimedia forensics research as well. Then, forensic methodologies need to be updated with respect to the constant evolution of acquisition devices and data formats. Therefore, algorithms are also designed with the goal of efficiently analyzing high-resolution data, possibly subject to advanced in-camera processing. In addition, there is an increasing need for detection technologies that are able to identify synthetically generated visual data, in response to the impressive advancements of generative models based on Artificial intelligence (AI) such as Generative Adversarial Networks (GANs). We are glad to introduce the accepted manuscripts to this Research Topic, which are well aligned with these cutting-edge research trends and are authored by highly recognized OPEN ACCESS","PeriodicalId":93557,"journal":{"name":"Frontiers in signal processing","volume":"55 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74924706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0