{"title":"What makes any agent a moral agent? Reflections on machine consciousness and moral agency","authors":"Joel Parthemore, Blay Whitby","doi":"10.1142/S1793843013500017","DOIUrl":"https://doi.org/10.1142/S1793843013500017","url":null,"abstract":"In this paper, we take moral agency to be that context in which a particular agent can, appropriately, be held responsible for her actions and their consequences. In order to understand moral agency, we will discuss what it would take for an artifact to be a moral agent. For reasons that will become clear over the course of the paper, we take the artifactual question to be a useful way into the discussion but ultimately misleading. We set out a number of conceptual pre-conditions for being a moral agent and then outline how one should — and should not — go about attributing moral agency. In place of a litmus test for such agency — such as Allen et al.'s Moral Turing Test — we suggest some tools from conceptual spaces theory and the unified conceptual space theory for mapping out the nature and extent of that agency.","PeriodicalId":418022,"journal":{"name":"International Journal of Machine Consciousness","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123354572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The expressive stance: intentionality, expression, and machine art","authors":"A. Linson","doi":"10.1142/S1793843013500066","DOIUrl":"https://doi.org/10.1142/S1793843013500066","url":null,"abstract":"This paper proposes a new stance for interpreting artistic works and performances that is relevant to artificial intelligence research but also has broader implications. Termed the expressive stance, this stance makes intelligible a critical distinction between present-day machine art and human art, but allows for the possibility that future machine art could find a place alongside our own. The expressive stance is elaborated as a response to Daniel Dennett's notion of the intentional stance, which is critically examined with respect to his specialized concept of rationality. The paper also shows that temporal scale implicitly serves to select between different modes of explanation in prominent theories of intentionality. Finally, it considers the implications of the phenomenological background for systems that produce art.","PeriodicalId":418022,"journal":{"name":"International Journal of Machine Consciousness","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129264779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the potential for using affect-inspired techniques to manage real-time systems","authors":"W. S. Reilly, Gerald Fry, Sean L. Guarino, Michael Reposa, R. West, R. Costantini, J. Johnston","doi":"10.1142/S1793843013500042","DOIUrl":"https://doi.org/10.1142/S1793843013500042","url":null,"abstract":"We describe a novel affect-inspired mechanism to improve the performance of computational systems operating in dynamic environments. In particular, we designed a mechanism, based on aspects of the fear response in humans, that dynamically reallocates operating system-level central processing unit (CPU) resources to processes as they are needed to deal with time-critical events. We evaluated this system in the MINIX® and Linux® operating systems and in three different testing environments (two simulated, one live). We found that the affect-based system not only reacted more rapidly to time-critical events, as intended, but also, because the processes handling these events did not consume significant CPU outside time-critical situations, allowed our simulated unmanned aerial vehicle (UAV) to perform even non-emergency tasks with greater efficiency and reactivity than was possible in the standard implementation.","PeriodicalId":418022,"journal":{"name":"International Journal of Machine Consciousness","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125959389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TWO KINDS OF COMMON SENSE KNOWLEDGE (AND A CONSTRAINT FOR MACHINE CONSCIOUSNESS DESIGN)","authors":"Pietro Perconti","doi":"10.1142/S1793843013400076","DOIUrl":"https://doi.org/10.1142/S1793843013400076","url":null,"abstract":"In this paper, it will be argued that common sense knowledge does not have a unitary structure. It is rather articulated at two different levels: a deep and a superficial level of common sense. The deep level is based on know-how procedures, on metaphorical frames built on imaginative bodily representations, and on a set of adaptive behaviors. The superficial level includes beliefs and judgments, which can be true or false and are culture dependent. Deep common sense is resistant to fast change, because it depends more on human biology than on cultural conventions. The deep level of common sense is characterized by a sensorimotor representational format, while the superficial level is largely made up of propositional entities. This difference can be considered a constraint for machine consciousness design, insofar as the latter should be based on a reliable model of common sense knowledge.","PeriodicalId":418022,"journal":{"name":"International Journal of Machine Consciousness","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130392760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SAFE/MORAL AUTOPOIESIS AND CONSCIOUSNESS","authors":"Mark R. Waser","doi":"10.1142/S1793843013400052","DOIUrl":"https://doi.org/10.1142/S1793843013400052","url":null,"abstract":"Artificial intelligence, the \"science and engineering of intelligent machines\", has yet to create even a simple \"Advice Taker\" [McCarthy, 1959]. We have previously argued [Waser, 2011] that this is because researchers are focused on problem-solving or the rigorous analysis of intelligence (or arguments about consciousness) rather than the creation of a \"self\" that can \"learn\" to be intelligent. Therefore, following expert advice on the nature of self [Llinas, 2001; Hofstadter, 2007; Damasio, 2010], we embarked upon an effort to design and implement a self-understanding, self-improving loop as the totality of a (seed) AI. As part of that, we decided to follow up on Richard Dawkins' [1976] speculation that \"perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself\" by defining a number of axioms and following them through to their logical conclusions. The results, combined with an enactive approach, yielded many surprising and useful implications for further understanding consciousness, self, and \"free-will\" that continue to pave the way towards the creation of safe/moral autopoiesis.","PeriodicalId":418022,"journal":{"name":"International Journal of Machine Consciousness","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126091965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CONSCIOUSNESS AND SENTIENT ROBOTS","authors":"P. Haikonen","doi":"10.1142/S1793843013400027","DOIUrl":"https://doi.org/10.1142/S1793843013400027","url":null,"abstract":"It is argued here that the phenomenon of consciousness is nothing more than a special way of a subjective internal appearance of information. To explain consciousness is to explain how this subjective internal appearance of information can arise in the brain. To create a conscious robot is to create subjective internal appearances of information inside the robot. Other features that are often attributed to the phenomenon of consciousness are related to the contents of consciousness and cognitive functions. The internal conscious appearance of these is caused by the mechanism that gives rise to the internal appearances in the first place. A useful conscious robot must have a variety of cognitive abilities, but these abilities alone, no matter how advanced, will not make the robot conscious; the phenomenal internal appearances must be present as well. The Haikonen Cognitive Architecture (HCA) tries to facilitate both internal appearances and cognitive functions. The experimental robot XCR-1 is the first implementation experiment with the HCA.","PeriodicalId":418022,"journal":{"name":"International Journal of Machine Consciousness","volume":"231 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126136254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PHENOMENAL CONSCIOUSNESS AND BIOLOGICALLY INSPIRED SYSTEMS","authors":"I. Aleksander","doi":"10.1142/S1793843013400015","DOIUrl":"https://doi.org/10.1142/S1793843013400015","url":null,"abstract":"The stated aim of adherents to the paradigm called biologically inspired cognitive architectures (BICA) is to build machines that address \"the challenge of creating a real-life computational equivalent of the human mind\". (From the mission statement of the new BICA journal.) In contrast, practitioners of machine consciousness (MC) are driven by the observation that these human minds for which one is trying to find equivalents are generally thought to be conscious. (Of course, this is controversial because there is no evidence of consciousness in behavior. But as the hypothesis of the consciousness of others is commonly used, a rejection of it has to be considered just as much as its acceptance.) In this paper, it is asked whether those who would like to build computational equivalents of the human mind can do so while ignoring the role of consciousness in what is called the mind. This is not ignored in the MC paradigm, and the consequences, particularly for phenomenological treatments of the mind, are briefly explored. A measure based on a subjective feel for how well a model matches personal experience is introduced. An example is given which illustrates how MC can clarify the double-cognition tenet of Strawson's cognitive phenomenology.","PeriodicalId":418022,"journal":{"name":"International Journal of Machine Consciousness","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124544979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WHAT IS THE USE OF THE BODY SCHEMA FOR HUMANOID ROBOTS","authors":"P. Morasso","doi":"10.1142/S1793843013400064","DOIUrl":"https://doi.org/10.1142/S1793843013400064","url":null,"abstract":"The paper explains why humanoid robots need a body schema as middleware between motor cognition and motor control. A specific model of the body schema is described, based on the Passive Motion Paradigm, which uses force fields to represent goals and internal/external constraints. The integration of the body schema with motor control is discussed in relation to whole-body movements. Finally, the integration with sensorimotor cognitive processes is addressed in the context of learning and discovering the use of tools in skilled behavior.","PeriodicalId":418022,"journal":{"name":"International Journal of Machine Consciousness","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132586206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Consciousness as a process of queries and answers in architectures based on in situ representations","authors":"F. Velde","doi":"10.1142/S1793843013400039","DOIUrl":"https://doi.org/10.1142/S1793843013400039","url":null,"abstract":"Functional or access consciousness can be described as an ongoing dynamic process of queries and answers. Whenever we have an awareness of an object or its surroundings, it consists of the dynamic process that answers (implicit) queries like \"What is the color or shape of the object?\" or \"What surrounds this object?\" The process of queries and answers is based on a computational architecture that integrates grounding of representations with cognitive productivity. The human brain may be unique in combining grounding and productivity. Because representations have to remain grounded in combinatorial structures underlying the productivity of cognition, they have to remain in situ. Hebbian neuronal assemblies are an example of in situ conceptual representations, although the latter are not just associative. To obtain productivity, in situ representations are embedded in specialized neuronal \"blackboards\" by which (temporal) combinatorial structures can be formed. In situ representations interact in these blackboards. This interaction initiates the (implicit) query and answer process underlying functional consciousness. In this process, an in situ representation, dominating one blackboard, could begin to dominate other blackboards as well. Viewed in this way, human consciousness derives from the unique ability of the human brain to combine grounding and cognitive productivity.","PeriodicalId":418022,"journal":{"name":"International Journal of Machine Consciousness","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127793020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A COGNITIVE ARCHITECTURE WITH INCREMENTAL LEVELS OF MACHINE CONSCIOUSNESS INSPIRED BY COGNITIVE NEUROSCIENCE","authors":"K. Raizer, A. Paraense, Ricardo Ribeiro Gudwin","doi":"10.1142/S1793843012400197","DOIUrl":"https://doi.org/10.1142/S1793843012400197","url":null,"abstract":"The main motivation for this work is to investigate the advantages provided by machine consciousness in the control of software agents. In order to pursue this goal, we developed a cognitive architecture, with different levels of machine consciousness, targeting the control of artificial creatures. As a standard guideline, we applied cognitive neuroscience concepts to incrementally develop the cognitive architecture, following the evolutionary steps taken by the animal brain. The triune brain theory proposed by MacLean, together with Arrabales' \"ConsScale\", serves as a roadmap to achieve each developmental stage, while iCub — a humanoid robot and its simulator — serves as a platform for the experiments. A completely codelet-based system \"Core\" has been implemented, serving the whole architecture.","PeriodicalId":418022,"journal":{"name":"International Journal of Machine Consciousness","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129790319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}