{"title":"First steps towards Computational Polynomials in Lean","authors":"James Harold Davenport","doi":"arxiv-2408.04564","DOIUrl":"https://doi.org/arxiv-2408.04564","url":null,"abstract":"The proof assistant Lean has support for abstract polynomials, but this is\u0000not necessarily the same as support for computations with polynomials. Lean is\u0000also a functional programming language, so it should be possible to implement\u0000computational polynomials in Lean. It turns out not to be as easy as the naive\u0000author thought.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"58 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141938736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Abstraction-Preserving Block Matrix Implementation in Maple","authors":"David J. Jeffrey, Stephen M. Watt","doi":"arxiv-2408.02112","DOIUrl":"https://doi.org/arxiv-2408.02112","url":null,"abstract":"A Maple implementation of partitioned matrices is described. A recursive\u0000block data structure is used, with all operations preserving the block\u0000abstraction. These include constructor functions, ring operations such as\u0000addition and product, and inversion. The package is demonstrated by calculating\u0000the PLU factorization of a block matrix.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"93 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141938853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recent Developments in Real Quantifier Elimination and Cylindrical Algebraic Decomposition","authors":"Matthew England","doi":"arxiv-2407.19781","DOIUrl":"https://doi.org/arxiv-2407.19781","url":null,"abstract":"This extended abstract accompanies an invited talk at CASC 2024, which\u0000surveys recent developments in Real Quantifier Elimination (QE) and Cylindrical\u0000Algebraic Decomposition (CAD). After introducing these concepts we will first\u0000consider adaptations of CAD inspired by computational logic, in particular the\u0000algorithms which underpin modern SAT solvers. CAD theory has found use in\u0000collaboration with these via the Satisfiability Modulo Theory (SMT) paradigm;\u0000while the ideas behind SAT/SMT have led to new algorithms for Real QE. Second\u0000we will consider the optimisation of CAD through the use of Machine Learning\u0000(ML). The choice of CAD variable ordering has become a key case study for the\u0000use of ML to tune algorithms in computer algebra. We will also consider how\u0000explainable AI techniques might give insight for improved computer algebra\u0000software without any reliance on ML in the final code.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141863953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Equality of morphic sequences","authors":"Hans Zantema","doi":"arxiv-2407.15721","DOIUrl":"https://doi.org/arxiv-2407.15721","url":null,"abstract":"Morphic sequences form a natural class of infinite sequences, typically\u0000defined as the coding of a fixed point of a morphism. Different morphisms and\u0000codings may yield the same morphic sequence. This paper investigates how to\u0000prove that two such representations of a morphic sequence by morphisms\u0000represent the same sequence. In particular, we focus on the smallest\u0000representations of the subsequences of the binary Fibonacci sequence obtained\u0000by only taking the even or odd elements. The proofs we give are induction\u0000proofs of several properties simultaneously, and are typically found fully\u0000automatically by a tool that we developed.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Algebraic anti-unification","authors":"Christian Antić","doi":"arxiv-2407.15510","DOIUrl":"https://doi.org/arxiv-2407.15510","url":null,"abstract":"Abstraction is key to human and artificial intelligence as it allows one to\u0000see common structure in otherwise distinct objects or situations and as such it\u0000is a key element for generality in AI. Anti-unification (or generalization) is\u0000textit{the} part of theoretical computer science and AI studying abstraction.\u0000It has been successfully applied to various AI-related problems, most\u0000importantly inductive logic programming. Up to this date, anti-unification is\u0000studied only from a syntactic perspective in the literature. The purpose of\u0000this paper is to initiate an algebraic (i.e. semantic) theory of\u0000anti-unification within general algebras. This is motivated by recent\u0000applications to similarity and analogical proportions.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"38 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Error Detection and Constraint Recovery in Hierarchical Multi-Label Classification without Prior Knowledge","authors":"Joshua Shay Kricheli, Khoa Vo, Aniruddha Datta, Spencer Ozgur, Paulo Shakarian","doi":"arxiv-2407.15192","DOIUrl":"https://doi.org/arxiv-2407.15192","url":null,"abstract":"Recent advances in Hierarchical Multi-label Classification (HMC),\u0000particularly neurosymbolic-based approaches, have demonstrated improved\u0000consistency and accuracy by enforcing constraints on a neural model during\u0000training. However, such work assumes the existence of such constraints\u0000a-priori. In this paper, we relax this strong assumption and present an\u0000approach based on Error Detection Rules (EDR) that allow for learning\u0000explainable rules about the failure modes of machine learning models. We show\u0000that these rules are not only effective in detecting when a machine learning\u0000classifier has made an error but also can be leveraged as constraints for HMC,\u0000thereby allowing the recovery of explainable constraints even if they are not\u0000provided. We show that our approach is effective in detecting machine learning\u0000errors and recovering constraints, is noise tolerant, and can function as a\u0000source of knowledge for neurosymbolic models on multiple datasets, including a\u0000newly introduced military vehicle recognition dataset.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141779712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Words to Worlds: Compositionality for Cognitive Architectures","authors":"Ruchira Dhar, Anders Søgaard","doi":"arxiv-2407.13419","DOIUrl":"https://doi.org/arxiv-2407.13419","url":null,"abstract":"Large language models (LLMs) are very performant connectionist systems, but\u0000do they exhibit more compositionality? More importantly, is that part of why\u0000they perform so well? We present empirical analyses across four LLM families\u0000(12 models) and three task categories, including a novel task introduced below.\u0000Our findings reveal a nuanced relationship in learning of compositional\u0000strategies by LLMs -- while scaling enhances compositional abilities,\u0000instruction tuning often has a reverse effect. Such disparity brings forth some\u0000open issues regarding the development and improvement of large language models\u0000in alignment with human cognitive capacities.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141740780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating Task-Oriented Dialogue Consistency through Constraint Satisfaction","authors":"Tiziano Labruna, Bernardo Magnini","doi":"arxiv-2407.11857","DOIUrl":"https://doi.org/arxiv-2407.11857","url":null,"abstract":"Task-oriented dialogues must maintain consistency both within the dialogue\u0000itself, ensuring logical coherence across turns, and with the conversational\u0000domain, accurately reflecting external knowledge. We propose to conceptualize\u0000dialogue consistency as a Constraint Satisfaction Problem (CSP), wherein\u0000variables represent segments of the dialogue referencing the conversational\u0000domain, and constraints among variables reflect dialogue properties, including\u0000linguistic, conversational, and domain-based aspects. To demonstrate the\u0000feasibility of the approach, we utilize a CSP solver to detect inconsistencies\u0000in dialogues re-lexicalized by an LLM. Our findings indicate that: (i) CSP is\u0000effective to detect dialogue inconsistencies; and (ii) consistent dialogue\u0000re-lexicalization is challenging for state-of-the-art LLMs, achieving only a\u00000.15 accuracy rate when compared to a CSP solver. Furthermore, through an\u0000ablation study, we reveal that constraints derived from domain knowledge pose\u0000the greatest difficulty in being respected. We argue that CSP captures core\u0000properties of dialogue consistency that have been poorly considered by\u0000approaches based on component pipelines.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"25 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141721172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hyperion - A fast, versatile symbolic Gaussian Belief Propagation framework for Continuous-Time SLAM","authors":"David Hug, Ignacio Alzugaray, Margarita Chli","doi":"arxiv-2407.07074","DOIUrl":"https://doi.org/arxiv-2407.07074","url":null,"abstract":"Continuous-Time Simultaneous Localization And Mapping (CTSLAM) has become a\u0000promising approach for fusing asynchronous and multi-modal sensor suites.\u0000Unlike discrete-time SLAM, which estimates poses discretely, CTSLAM uses\u0000continuous-time motion parametrizations, facilitating the integration of a\u0000variety of sensors such as rolling-shutter cameras, event cameras and Inertial\u0000Measurement Units (IMUs). However, CTSLAM approaches remain computationally\u0000demanding and are conventionally posed as centralized Non-Linear Least Squares\u0000(NLLS) optimizations. Targeting these limitations, we not only present the\u0000fastest SymForce-based [Martiros et al., RSS 2022] B- and Z-Spline\u0000implementations achieving speedups between 2.43x and 110.31x over Sommer et al.\u0000[CVPR 2020] but also implement a novel continuous-time Gaussian Belief\u0000Propagation (GBP) framework, coined Hyperion, which targets decentralized\u0000probabilistic inference across agents. We demonstrate the efficacy of our\u0000method in motion tracking and localization settings, complemented by empirical\u0000ablation studies.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"28 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141576694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Neurosymbolic Approach to Adaptive Feature Extraction in SLAM","authors":"Yasra Chandio, Momin A. Khan, Khotso Selialia, Luis Garcia, Joseph DeGol, Fatima M. Anwar","doi":"arxiv-2407.06889","DOIUrl":"https://doi.org/arxiv-2407.06889","url":null,"abstract":"Autonomous robots, autonomous vehicles, and humans wearing mixed-reality\u0000headsets require accurate and reliable tracking services for safety-critical\u0000applications in dynamically changing real-world environments. However, the\u0000existing tracking approaches, such as Simultaneous Localization and Mapping\u0000(SLAM), do not adapt well to environmental changes and boundary conditions\u0000despite extensive manual tuning. On the other hand, while deep learning-based\u0000approaches can better adapt to environmental changes, they typically demand\u0000substantial data for training and often lack flexibility in adapting to new\u0000domains. To solve this problem, we propose leveraging the neurosymbolic program\u0000synthesis approach to construct adaptable SLAM pipelines that integrate the\u0000domain knowledge from traditional SLAM approaches while leveraging data to\u0000learn complex relationships. While the approach can synthesize end-to-end SLAM\u0000pipelines, we focus on synthesizing the feature extraction module. We first\u0000devise a domain-specific language (DSL) that can encapsulate domain knowledge\u0000on the important attributes for feature extraction and the real-world\u0000performance of various feature extractors. Our neurosymbolic architecture then\u0000undertakes adaptive feature extraction, optimizing parameters via learning\u0000while employing symbolic reasoning to select the most suitable feature\u0000extractor. Our evaluations demonstrate that our approach, neurosymbolic Feature\u0000EXtraction (nFEX), yields higher-quality features. It also reduces the pose\u0000error observed for the state-of-the-art baseline feature extractors ORB and\u0000SIFT by up to 90% and up to 66%, respectively, thereby enhancing the system's\u0000efficiency and adaptability to novel environments.","PeriodicalId":501033,"journal":{"name":"arXiv - CS - Symbolic Computation","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141576696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}