Generative Semantic Communication via Textual Prompts: Latency Performance Tradeoffs

Mengmeng Ren, Li Qiao, Long Yang, Zhen Gao, Jian Chen, Mahdi Boloursaz Mashhadi, Pei Xiao, Rahim Tafazolli, Mehdi Bennis

arXiv:2409.09715 (arXiv - MATH - Information Theory, 2024-09-15)
Abstract
This paper develops an edge-device collaborative Generative Semantic Communications (Gen SemCom) framework that leverages pre-trained Multi-modal/Vision Language Models (M/VLMs) for ultra-low-rate semantic communication via textual prompts. The proposed framework optimizes the use of M/VLMs at the wireless edge/device to generate high-fidelity textual prompts through visual captioning/question answering, which are then transmitted over a wireless channel for SemCom. Specifically, we develop a multi-user Gen SemCom framework using pre-trained M/VLMs and formulate a joint optimization problem over prompt-generation offloading and communication and computation resource allocation, aiming to minimize latency while maximizing the resulting semantic quality. Because the problem is nonconvex, with highly coupled discrete and continuous variables, we decompose it into a two-level problem and propose a low-complexity swap/leaving/joining (SLJ)-based matching algorithm. Simulation results demonstrate significant performance improvements over conventional semantic-unaware and non-collaborative offloading benchmarks.
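The abstract does not detail the SLJ algorithm itself, so the following is only a minimal Python sketch of how a swap/leaving/joining-style matching loop for prompt-generation offloading could look. The weighted quality-minus-latency objective, the `slj_matching` and `utility` names, and all data structures are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch of a swap/leaving/joining (SLJ)-style matching heuristic
# for assigning each user's prompt generation either to its local device or to
# an edge server. The utility model below is an assumption for demonstration;
# the paper's two-level formulation is not specified in the abstract.
import random


def utility(assignment, latencies, qualities, weight=0.5):
    """Hypothetical objective: weighted semantic quality minus latency."""
    return sum(
        weight * qualities[u][s] - (1 - weight) * latencies[u][s]
        for u, s in assignment.items()
    )


def slj_matching(users, servers, latencies, qualities, max_iters=1000):
    """Greedy SLJ loop: apply join/leave/swap moves that improve the objective."""
    # Start with every user generating its prompt locally.
    assignment = {u: "local" for u in users}
    best = utility(assignment, latencies, qualities)
    for _ in range(max_iters):
        improved = False
        for u in users:
            current = assignment[u]
            # Joining/leaving: try moving u to each alternative placement.
            for s in servers + ["local"]:
                if s == current:
                    continue
                assignment[u] = s
                val = utility(assignment, latencies, qualities)
                if val > best:
                    best, current, improved = val, s, True
                else:
                    assignment[u] = current  # revert the move
            # Swapping: exchange placements with another user if utility rises.
            for v in users:
                if v == u or assignment[v] == assignment[u]:
                    continue
                assignment[u], assignment[v] = assignment[v], assignment[u]
                val = utility(assignment, latencies, qualities)
                if val > best:
                    best, improved = val, True
                else:
                    assignment[u], assignment[v] = assignment[v], assignment[u]
        if not improved:  # no single move or swap helps: swap-stable matching
            break
    return assignment, best


# Toy usage with randomly drawn latencies (all values are made up).
users, servers = ["u1", "u2"], ["edge1"]
latencies = {u: {"edge1": random.uniform(0.1, 0.5),
                 "local": random.uniform(0.5, 1.0)} for u in users}
qualities = {u: {"edge1": 0.9, "local": 0.6} for u in users}
print(slj_matching(users, servers, latencies, qualities))
```

In a heuristic of this kind, the loop terminates at a swap-stable matching, where no single user relocation and no pairwise exchange improves the objective; avoiding exhaustive search over the coupled discrete variables is what keeps the complexity low.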