{"title":"Tomographic Image Reconstruction Using an Advanced Score Function (ADSF).","authors":"Wenxiang Cong, Wenjun Xia, Ge Wang","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Computed tomography (CT) reconstructs volumetric images using X-ray projection data acquired from multiple angles around an object. For low-dose or sparse-view CT scans, the classic image reconstruction algorithms often produce severe noise and artifacts. To address this issue, we develop a novel iterative image reconstruction method based on maximum a posteriori (MAP) estimation. In the MAP framework, the score function, i.e., the gradient of the logarithmic probability density distribution, plays a crucial role as an image prior in the iterative image reconstruction process. By leveraging the Gaussian mixture model, we derive a novel score matching formula to establish an advanced score function (ADSF) through deep learning. Integrating the new ADSF into the image reconstruction process, a new ADSF iterative reconstruction method is developed to improve image reconstruction quality. The convergence of the ADSF iterative reconstruction algorithm is proven through mathematical analysis. The performance of the ADSF reconstruction method is also evaluated on both public medical image datasets and clinical raw CT datasets. 
Our results show that the ADSF reconstruction method can achieve better denoising and deblurring effects than the state-of-the-art reconstruction methods, showing excellent generalizability and stability.</p>","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10312904/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10117027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ArXiv Pub Date: 2024-02-15 DOI: 10.48550/arXiv.2402.09906
Niklas Muennighoff, Hongjin Su, Liang Wang, Nan Yang, Furu Wei, Tao Yu, Amanpreet Singh, Douwe Kiela
{"title":"Generative Representational Instruction Tuning","authors":"Niklas Muennighoff, Hongjin Su, Liang Wang, Nan Yang, Furu Wei, Tao Yu, Amanpreet Singh, Douwe Kiela","doi":"10.48550/arXiv.2402.09906","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09906","url":null,"abstract":"All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions. Compared to other open models, our resulting GritLM 7B sets a new state of the art on the Massive Text Embedding Benchmark (MTEB) and outperforms all models up to its size on a range of generative tasks. By scaling up further, GritLM 8x7B outperforms all open generative language models that we tried while still being among the best embedding models. Notably, we find that GRIT matches training on only generative or embedding data, thus we can unify both at no performance loss. Among other benefits, the unification via GRIT speeds up Retrieval-Augmented Generation (RAG) by>60% for long documents, by no longer requiring separate retrieval and generation models. Models, code, etc. are freely available at https://github.com/ContextualAI/gritlm.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"25 20","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ArXiv Pub Date: 2024-02-15 DOI: 10.48550/arXiv.2402.10018
Ayelet C. Portnoy, Alejandro Cohen
{"title":"Multi-Stage Algorithm for Group Testing with Prior Statistics","authors":"Ayelet C. Portnoy, Alejandro Cohen","doi":"10.48550/arXiv.2402.10018","DOIUrl":"https://doi.org/10.48550/arXiv.2402.10018","url":null,"abstract":"In this paper, we propose an efficient multi-stage algorithm for non-adaptive Group Testing (GT) with general correlated prior statistics. The proposed solution can be applied to any correlated statistical prior represented in trellis, e.g., finite state machines and Markov processes. We introduce a variation of List Viterbi Algorithm (LVA) to enable accurate recovery using much fewer tests than objectives, which efficiently gains from the correlated prior statistics structure. Our numerical results demonstrate that the proposed Multi-Stage GT (MSGT) algorithm can obtain the optimal Maximum A Posteriori (MAP) performance with feasible complexity in practical regimes, such as with COVID-19 and sparse signal recovery applications, and reduce in the scenarios tested the number of pooled tests by at least $25%$ compared to existing classical low complexity GT algorithms. Moreover, we analytically characterize the complexity of the proposed MSGT algorithm that guarantees its efficiency.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"22 26","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ArXiv Pub Date: 2024-02-15 DOI: 10.48550/arXiv.2402.09894
Tao Long, Katy Ilonka Gero, Lydia B. Chilton
{"title":"Not Just Novelty: A Longitudinal Study on Utility and Customization of AI Workflows","authors":"Tao Long, Katy Ilonka Gero, Lydia B. Chilton","doi":"10.48550/arXiv.2402.09894","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09894","url":null,"abstract":"Generative AI brings novel and impressive abilities to help people in everyday tasks. There are many AI workflows that solve real and complex problems by chaining AI outputs together with human interaction. Although there is an undeniable lure of AI, it's uncertain how useful generative AI workflows are after the novelty wears off. Additionally, tools built with generative AI have the potential to be personalized and adapted quickly and easily, but do users take advantage of the potential to customize? We conducted a three-week longitudinal study with 12 users to understand the familiarization and customization of generative AI tools for science communication. Our study revealed that the familiarization phase lasts for 4.3 sessions, where users explore the capabilities of the workflow and which aspects they find useful. After familiarization, the perceived utility of the system is rated higher than before, indicating that the perceived utility of AI is not just a novelty effect. The increase in benefits mainly comes from end-users' ability to customize prompts, and thus appropriate the system to their own needs. 
This points to a future where generative AI systems can allow us to design for appropriation.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"29 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ArXiv Pub Date: 2024-02-15 DOI: 10.48550/arXiv.2402.09792
S. Shrivastava, A. Biswas, S. Chakrabarty, G. Dash, V. Saraswat, U. Ganguly
{"title":"System-level Impact of Non-Ideal Program-Time of Charge Trap Flash (CTF) on Deep Neural Network","authors":"S. Shrivastava, A. Biswas, S. Chakrabarty, G. Dash, V. Saraswat, U. Ganguly","doi":"10.48550/arXiv.2402.09792","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09792","url":null,"abstract":"Learning of deep neural networks (DNN) using Resistive Processing Unit (RPU) architecture is energy-efficient as it utilizes dedicated neuromorphic hardware and stochastic computation of weight updates for in-memory computing. Charge Trap Flash (CTF) devices can implement RPU-based weight updates in DNNs. However, prior work has shown that the weight updates (V_T) in CTF-based RPU are impacted by the non-ideal program time of CTF. The non-ideal program time is affected by two factors of CTF. Firstly, the effects of the number of input pulses (N) or pulse width (pw), and secondly, the gap between successive update pulses (t_gap) used for the stochastic computation of weight updates. Therefore, the impact of this non-ideal program time must be studied for neural network training simulations. In this study, Firstly, we propose a pulse-train design compensation technique to reduce the total error caused by non-ideal program time of CTF and stochastic variance of a network. Secondly, we simulate RPU-based DNN with non-ideal program time of CTF on MNIST and Fashion-MNIST datasets. We find that for larger N (~1000), learning performance approaches the ideal (software-level) training level and, therefore, is not much impacted by the choice of t_gap used to implement RPU-based weight updates. However, for lower N (<500), learning performance depends on T_gap of the pulses. Finally, we also performed an ablation study to isolate the causal factor of the improved learning performance. We conclude that the lower noise level in the weight updates is the most likely significant factor to improve the learning performance of DNN. 
Thus, our study attempts to compensate for the error caused by non-ideal program time and standardize the pulse length (N) and pulse gap (t_gap) specifications for CTF-based RPUs for accurate system-level on-chip training.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"20 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ArXiv Pub Date: 2024-02-15 DOI: 10.48550/arXiv.2402.10137
Yinhong Liu, Yimai Fang, David Vandyke, Nigel Collier
{"title":"TOAD: Task-Oriented Automatic Dialogs with Diverse Response Styles","authors":"Yinhong Liu, Yimai Fang, David Vandyke, Nigel Collier","doi":"10.48550/arXiv.2402.10137","DOIUrl":"https://doi.org/10.48550/arXiv.2402.10137","url":null,"abstract":"In light of recent advances in large language models (LLMs), the expectations for the next generation of virtual assistants include enhanced naturalness and adaptability across diverse usage scenarios. However, the creation of high-quality annotated data for Task-Oriented Dialog (TOD) is recognized to be slow and costly. To address these challenges, we introduce Task-Oriented Automatic Dialogs (TOAD), a novel and scalable TOD dataset along with its automatic generation pipeline. The TOAD dataset simulates realistic app context interaction and provide a variety of system response style options. Two aspects of system response styles are considered, verbosity level and users' expression mirroring. We benchmark TOAD on two response generation tasks and the results show that modelling more verbose or responses without user expression mirroring is more challenging.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"16 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ArXiv Pub Date: 2024-02-15 DOI: 10.48550/arXiv.2402.09716
Ece Gumusel, Kyrie Zhixuan Zhou, M. Sanfilippo
{"title":"User Privacy Harms and Risks in Conversational AI: A Proposed Framework","authors":"Ece Gumusel, Kyrie Zhixuan Zhou, M. Sanfilippo","doi":"10.48550/arXiv.2402.09716","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09716","url":null,"abstract":"This study presents a unique framework that applies and extends Solove (2006)'s taxonomy to address privacy concerns in interactions with text-based AI chatbots. As chatbot prevalence grows, concerns about user privacy have heightened. While existing literature highlights design elements compromising privacy, a comprehensive framework is lacking. Through semi-structured interviews with 13 participants interacting with two AI chatbots, this study identifies 9 privacy harms and 9 privacy risks in text-based interactions. Using a grounded theory approach for interview and chatlog analysis, the framework examines privacy implications at various interaction stages. The aim is to offer developers, policymakers, and researchers a tool for responsible and secure implementation of conversational AI, filling the existing gap in addressing privacy issues associated with text-based AI chatbots.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"13 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ArXiv Pub Date: 2024-02-15 DOI: 10.48550/arXiv.2402.09937
C. Carlet, Marko Ðurasevic, D. Jakobović, S. Picek, L. Mariot
{"title":"A Systematic Evaluation of Evolving Highly Nonlinear Boolean Functions in Odd Sizes","authors":"C. Carlet, Marko Ðurasevic, D. Jakobović, S. Picek, L. Mariot","doi":"10.48550/arXiv.2402.09937","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09937","url":null,"abstract":"Boolean functions are mathematical objects used in diverse applications. Different applications also have different requirements, making the research on Boolean functions very active. In the last 30 years, evolutionary algorithms have been shown to be a strong option for evolving Boolean functions in different sizes and with different properties. Still, most of those works consider similar settings and provide results that are mostly interesting from the evolutionary algorithm's perspective. This work considers the problem of evolving highly nonlinear Boolean functions in odd sizes. While the problem formulation sounds simple, the problem is remarkably difficult, and the related work is extremely scarce. We consider three solutions encodings and four Boolean function sizes and run a detailed experimental analysis. Our results show that the problem is challenging, and finding optimal solutions is impossible except for the smallest tested size. 
However, once we added local search to the evolutionary algorithm, we managed to find a Boolean function in nine inputs with nonlinearity 241, which, to our knowledge, had never been accomplished before with evolutionary algorithms.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"20 17","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ArXiv Pub Date: 2024-02-15 DOI: 10.48550/arXiv.2402.09671
David A. Noever, Forrest McKee
{"title":"Exploiting Alpha Transparency In Language And Vision-Based AI Systems","authors":"David A. Noever, Forrest McKee","doi":"10.48550/arXiv.2402.09671","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09671","url":null,"abstract":"This investigation reveals a novel exploit derived from PNG image file formats, specifically their alpha transparency layer, and its potential to fool multiple AI vision systems. Our method uses this alpha layer as a clandestine channel invisible to human observers but fully actionable by AI image processors. The scope tested for the vulnerability spans representative vision systems from Apple, Microsoft, Google, Salesforce, Nvidia, and Facebook, highlighting the attack's potential breadth. This vulnerability challenges the security protocols of existing and fielded vision systems, from medical imaging to autonomous driving technologies. Our experiments demonstrate that the affected systems, which rely on convolutional neural networks or the latest multimodal language models, cannot quickly mitigate these vulnerabilities through simple patches or updates. Instead, they require retraining and architectural changes, indicating a persistent hole in multimodal technologies without some future adversarial hardening against such vision-language exploits.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"17 15","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ArXiv Pub Date: 2024-02-15 DOI: 10.48550/arXiv.2402.09660
Erasmo Purificato, Ludovico Boratto, Ernesto William De Luca
{"title":"User Modeling and User Profiling: A Comprehensive Survey","authors":"Erasmo Purificato, Ludovico Boratto, Ernesto William De Luca","doi":"10.48550/arXiv.2402.09660","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09660","url":null,"abstract":"The integration of artificial intelligence (AI) into daily life, particularly through information retrieval and recommender systems, has necessitated advanced user modeling and profiling techniques to deliver personalized experiences. These techniques aim to construct accurate user representations based on the rich amounts of data generated through interactions with these systems. This paper presents a comprehensive survey of the current state, evolution, and future directions of user modeling and profiling research. We provide a historical overview, tracing the development from early stereotype models to the latest deep learning techniques, and propose a novel taxonomy that encompasses all active topics in this research area, including recent trends. Our survey highlights the paradigm shifts towards more sophisticated user profiling methods, emphasizing implicit data collection, multi-behavior modeling, and the integration of graph data structures. We also address the critical need for privacy-preserving techniques and the push towards explainability and fairness in user modeling approaches. By examining the definitions of core terminology, we aim to clarify ambiguities and foster a clearer understanding of the field by proposing two novel encyclopedic definitions of the main terms. Furthermore, we explore the application of user modeling in various domains, such as fake news detection, cybersecurity, and personalized education. 
This survey serves as a comprehensive resource for researchers and practitioners, offering insights into the evolution of user modeling and profiling and guiding the development of more personalized, ethical, and effective AI systems.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"15 17","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}