{"title":"Some Prognostications: Artificial Intelligence and Accounting","authors":"Ron Weber","doi":"10.1111/auar.12403","DOIUrl":null,"url":null,"abstract":"<p>The Colin Ferguson Oration is the address given to attendees at the annual Australia Accounting Hall of Fame dinner and presentation evening. It is an invited oration, whereby an eminent modern-day leader addresses the audience on matters at the intersection of business, government and the academe as they relate to the rich history, the current state and/or the future direction of the accounting profession. The oration is named in honour of our colleague Professor Colin Ferguson (1949–2014). Colin was the key figure driving the inception of the Australian Accounting Hall of Fame. In a decorated academic career, he worked tirelessly for many years and with great distinction at the intersection of accounting thought and practice encompassing auditing, forensic accounting and accounting information systems, so it was only fitting that this oration is named in his honour.</p><p>This year's oration was delivered by Ron Weber, Emeritus Professor at Monash University and The University of Queensland who in 2018 was inducted into the Australian Accounting Hall of Fame.</p><p>It is crucial that as a profession we continue to bring together academe, practitioners and standard setters to explore relevant challenges and issues in our field. This year's oration addresses a topical issue, which is the likely role that artificial intelligence (AI) will play as we consider the future of accounting. We are absolutely thrilled that Ron's oration is published in the <i>Australian Accounting Review</i> (AAR), a journal that for a long time has occupied a unique and valued position in our professional landscape. We thank sincerely the editors of the journal.</p><p>At the outset, I'd like to indicate that I'm going to take a tack that might surprise you. Specifically, I'm not going to try to ‘wow’ you with AI (artificial intelligence) innovations that potentially will turn the accounting field on its head. Hyperbole tends to leave me cold, especially when it clouds deep issues that need to be addressed. And hyperbole about the latest information technology becomes dated quickly and sometimes appears quite funny in hindsight. Instead, I want to examine the likely impact of AI on accounting from a more philosophical perspective.</p><p><b>Let me lay a foundation for what will follow with two anecdotes</b></p><p>Here is the first anecdote. When I was studying for my PhD at the University of Minnesota in the mid 1970s, I had some involvement with several academics and students who were trying to figure out how humans understood language. Their goal was to build software that would understand natural language input to a computer through either voice or text by emulating how humans understood natural language. Some of you will remember that the mid-1970s were the days before personal computers, the worldwide web, and graphical user interfaces. Working with computers was still difficult! Yes, at the time, we were living in the dark ages!</p><p>Here is the second anecdote. In 1982, I spent a six-month sabbatical leave at New York University (NYU). There I met a colleague who was trying to build a computer program to play the Chinese game of Go. I had never heard of Go until I went to NYU. It is the oldest board game in existence (over 2500 years old). In several ways, apparently it is a more complex and difficult game than chess. 
Anyway, the reason my colleague at NYU was interested in Go was that he already had extensive experience in building chess-playing programs. He was a graduate of the AI laboratory at Carnegie-Mellon University led by a famous scholar – Nobel Laureate in Economics, Herbert Simon. In Simon's laboratory at Carnegie, my colleague had worked on chess-playing programs that were written based on the ways that grand masters play chess. He hoped he would get additional insights about human intelligence by working with grand masters of Go.</p><p><b>What Happened Subsequently?</b></p><p>We now have natural-language understanding software (e.g., Siri) that is fairly good at ‘understanding’ spoken natural language. The way the software works, however, has only a few similarities with the ways that humans understand natural language (at least to the best of our knowledge). Rather, the software depends on the breath-taking speeds with which modern computers now operate, the availability of high-speed communications networks, and the high-speed, enormous-capacity storage devices that now exist. For instance, when you ask Siri to do something, a sound file gets transmitted via the internet to Apple computers, and the sounds are matched against a huge database of sounds and their corresponding words. Siri then uses pattern recognition with an enormous database of phrases, questions and answers to determine what most likely is being said and its meaning.</p><p>A similar situation exists with chess-playing programs. They don't work like human chess players. Instead, they use brute-force methods to determine their moves. They access a huge database of historical grandmaster games, winning endgames, strategic moves and so on, and they have sophisticated algorithms that they use to examine millions of positions in a second and to optimally evaluate their next move. Today, many different chess-playing programs exist that will beat the best human chess players every time.</p><p>Do the impressive capabilities of speech-recognition software and chess-playing software manifest they possess human-like intelligence? The answer is ‘no’. And the situation with speech-recognition software and chess-playing software typifies AI work in many other domains.</p><p>Will this situation change? At some time in the future, are we likely to see AI programs that mirror human intelligence in, for instance, the accounting domain? My view is that the answer is ‘no’, and here I want to turn to some philosophy to explain my reasons.</p><p>If computer programs are to have any chance of mirroring human intelligence, we first need to solve a deep, fundamental problem that philosophers and cognitive scientists call the ‘mind–body problem’. Basically, the mind–body problem addresses the questions of what constitutes the human mind, how the human mind and consciousness arise, and how human consciousness relates to the human body.</p><p>Almost 30 years ago, an Australian philosopher named David Chalmers called the mind–body problem the ‘hard’ problem in philosophy (Chalmers <span>1996</span>). The fact that his name for the problem is still in vogue reflects that we currently have a long way to go before we have some sense of whether the mind–body problem can ever be solved.</p><p>While a solution to the hard problem of human consciousness and intelligence remains elusive, nonetheless some philosophers have given us a theory of the way in which they believe human consciousness and intelligence have come about. 
I want to use their theory (Bunge <span>1979</span>; Mahner <span>2015</span>) to explain why I doubt AI will ever mirror human intelligence, but I also want to stress that the theory I am using is not accepted universally.</p><p>Clearly, human consciousness and intelligence didn't always exist! Specifically, the theory I'm using postulates that they arose progressively over the eons through a particular evolutionary process called ‘assemblage’. This process involves things in the world beginning to interact with other things and these interactions leading to the emergence of new, more complex things. These new things have a critical feature – namely, they have new properties not possessed by their components – their so-called <i>emergent</i> properties. These novel properties are somehow related to the properties of their components, but the critical issue is they are properties that are <i>not</i> possessed by any of their components (Bunge <span>2003</span>).</p><p>Let me illustrate the notion of emergent properties through a simple example. Consider a work team that has a number of employees who interact with one another to perform certain tasks. The <i>cohesiveness</i> of the work team is an emergent property of the team. Somehow cohesiveness is related to properties of the individuals who make up the team, but it is not a property of the individual members of the team – we don't say a person is ‘cohesive’.</p><p>Think about humans, therefore, as an extraordinarily complex level structure of things (in essence, the things are systems) that have assembled over time. Billions of years ago, the evolutionary processes that led to the emergence of humans began with particular atoms (primarily hydrogen, oxygen and nitrogen with a little carbon). These atoms eventually assembled into molecules. Some of these molecules eventually assembled into organelles. And then we see the formation of cells, tissues, organs and organisms as the assembly process that underpins evolution unfolded over time. Finally, we have a human made up of about 100 trillion cells, with each cell in turn made up of 100 trillion atoms. All the components of a human (atoms, cells, tissues and so on) are things (systems) with emergent properties.</p><p>The philosophers who developed this theory argue that only after this evolutionary process was quite advanced did consciousness and intelligence, at least as we know it, start to appear. They contend higher-level systems had to evolve in the life form that eventually became a human before we had the types of emergent properties that they believe are needed to produce human consciousness and intelligence.</p><p>What does this mean for the chances of machines ever emulating human consciousness and intelligence? If the philosophers who developed the theory I've described are right, the answer is that the chances are not good (see also Mahner <span>2015</span>).</p><p>Think about the numbers! Remember, the human body has roughly one trillion cells, each of which is composed of roughly one trillion atoms. Many of these atoms and cells are connected to other atoms and cells. Of course, not everything is connected to everything else. Nonetheless, the possible number of connections and the number that most likely exist are mind-boggling. 
What are the emergent properties that have to exist among the different components of a life form if consciousness and intelligence are to eventually appear?</p><p>To make matters even more complex, after higher-level systems have evolved, we know that they sometimes exert an influence on their lower-level components – the components that initially assembled to form the higher-level system – such that the properties of the lower-level system change. For instance, consider someone who becomes a head of department or a dean in a university. They <i>acquire</i> new properties such as (a) the authority to make certain decisions, and (b) the unbelievable frustrations arising from being a head or dean in a university. And they can <i>lose</i> certain properties – for instance, if you have been a head or a dean, you will know that the property you often lose is the will to live!!</p><p>If we are trying to mirror human intelligence, here is the catch. First, we are a long way from knowing (and perhaps we may never know) all the connections that exist between the huge number of components of the human body – the atoms, the cells, the tissues and so on. Second, even where we know some that exist, we don't always know their exact nature and thus how to replicate them. Third, how the emergent properties of higher-level systems in the human body relate to the properties of lower-level components is often unclear.</p><p>Here, then, is the important moral to my story so far. Focusing on whether computers can and eventually will have the capabilities to mirror human consciousness and human intelligence is, in my opinion, the <i>wrong</i> focus. I doubt this will ever occur. Humans are the outcome of an evolutionary process that has occurred over billions of years. After a couple of thousand years of philosophers trying to understand human consciousness and intelligence and more recently cognitive neuroscientists tackling the same task, we have barely scratched the surface.</p><p>We also have to consider the properties that continue to differentiate humans from machines – empathy, sympathy, love, self-sacrifice – and how they affect human consciousness and intelligence. Where do these properties come from? Can you envisage a machine with these properties? Can you conceive of a situation where you and a computer might fall in love with each other?</p><p>Does the moral of my story mean that as humans (as accountants) we do not have to be concerned about artificial intelligence because the likelihood of computers being able to mirror human consciousness and intelligence, at least for the foreseeable future, is very low? The answer is a resounding, an emphatic, ‘No!’. A certain type of consciousness and intelligence – let's just simply call it machine intelligence – will continue to evolve rapidly as computers become more powerful and our knowledge of how to use them increases exponentially. It is this form of artificial intelligence that has to be our focus.</p><p>The reason is that we need to understand the nature of and significant implications of a concept that philosophers interested in general system theory call <i>equifinality</i> – very simply, the idea that we can sometimes achieve the same (or almost the same) outcomes in the world using different processes (e.g., Gresov and Drazin <span>1997</span>). Language-recognition software and chess-playing software are good examples of equifinality in practice. We don't have quite the same outcomes with the software as we do with humans. 
But in one case, language-recognition software, the outcome is good enough for many purposes. And in the other case, chess-playing software, we have a superior outcome (at least if winning the game is our objective criterion).</p><p>The challenges we face because of equifinality are becoming increasingly salient. For instance, for those of us who are academics, we now have concerns about student use of so-called <i>generative</i> AI programs such as ChatGPT. The fact that a student's response to an assignment has been produced by a generative AI program can be extraordinarily difficult to detect – again, equifinality at work.</p><p>It's so hard to predict how equifinality will manifest. It's often hard for humans to ‘think’ like computers! For instance, we have difficulty comprehending how computers perform tasks in a few seconds that would take humans large amounts of time to complete. In this regard, we are at the dawn of quantum computing – currently, a field of research that promises the development of a new kind of computer that can perform certain kinds of calculations in a few seconds that would otherwise take today's supercomputers decades or millennia to complete. In a world of quantum computers, what forms of equifinality and machine intelligence will arise?</p><p>Where to from here? As accountants, what should we do in a world where machine intelligence will continue to develop rapidly. I wish I had privileged insights, but sadly I don't. For what they are worth, however, I'd like to conclude my oration with just a few thoughts that might provide some matters for reflection.</p><p>First, as accountants, we should focus on identifying those tasks where humans are likely to have a long-term comparative advantage over computers. I suspect these kinds of tasks will be those that require very human attributes – for instance, an ability to interact with others with warmth and empathy, an ability to read body language, a sense of the ephemeral and spiritual, and an ability to develop rapport and trust. We should continue to develop our capabilities in relation to these tasks.</p><p>Second, we need to think very hard about those accounting tasks where machine intelligence will have a comparative advantage over humans. We already have some pointers to the tasks that will be affected – specifically, those that are amenable to machine-learning, pattern-matching and classification techniques. But developments in generative AI and quantum computing should motivate us to think more broadly. Where equifinality is likely to arise, we should exit systematically and gracefully from the tasks that will be affected.</p><p>Third, we can look for opportunities to work synergistically with machine intelligence. As accountants, ultimately, we are seeking ways to provide information about economic phenomena. With better tools, we are progressively expanding our views about what economic phenomena can and should be our focus. In this regard, I am mindful of Bill Edge's (<span>2022</span>) excellent oration last year where he spoke about developments in sustainability reporting and the opportunities provided to accountants. With powerful tools such as networks of environmental sensors, pattern-recognition and machine-learning software, generative AI tools and creative thinking, we can expand the scope of the work we do as accountants.</p><p>Here is my closing comment. I feel some sense of irony and remorse about the topic of my oration. 
My focus has been <i>artificial</i> intelligence and its possible implications for the accounting profession. But tonight, we are commemorating someone, Professor Colin Ferguson, who had an extraordinary amount of very real <i>human</i> intelligence, personal and professional. There was nothing artificial about it! I hope Col will forgive me.</p><p>Thank you!</p>","PeriodicalId":51552,"journal":{"name":"Australian Accounting Review","volume":"33 2","pages":"110-113"},"PeriodicalIF":3.1000,"publicationDate":"2023-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/auar.12403","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Australian Accounting Review","FirstCategoryId":"91","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/auar.12403","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"BUSINESS, FINANCE","Score":null,"Total":0}
Abstract
The Colin Ferguson Oration is the address given to attendees at the annual Australian Accounting Hall of Fame dinner and presentation evening. It is an invited oration, whereby an eminent modern-day leader addresses the audience on matters at the intersection of business, government and academe as they relate to the rich history, the current state and/or the future direction of the accounting profession. The oration is named in honour of our colleague Professor Colin Ferguson (1949–2014). Colin was the key figure driving the inception of the Australian Accounting Hall of Fame. In a decorated academic career, he worked tirelessly for many years and with great distinction at the intersection of accounting thought and practice, encompassing auditing, forensic accounting and accounting information systems, so it is only fitting that this oration is named in his honour.
This year's oration was delivered by Ron Weber, Emeritus Professor at Monash University and The University of Queensland, who in 2018 was inducted into the Australian Accounting Hall of Fame.
It is crucial that as a profession we continue to bring together academe, practitioners and standard setters to explore relevant challenges and issues in our field. This year's oration addresses a topical issue: the likely role that artificial intelligence (AI) will play as we consider the future of accounting. We are absolutely thrilled that Ron's oration is published in the Australian Accounting Review (AAR), a journal that has long occupied a unique and valued position in our professional landscape. We sincerely thank the editors of the journal.
At the outset, I'd like to indicate that I'm going to take a tack that might surprise you. Specifically, I'm not going to try to ‘wow’ you with AI (artificial intelligence) innovations that potentially will turn the accounting field on its head. Hyperbole tends to leave me cold, especially when it clouds deep issues that need to be addressed. And hyperbole about the latest information technology becomes dated quickly and sometimes appears quite funny in hindsight. Instead, I want to examine the likely impact of AI on accounting from a more philosophical perspective.
Let me lay a foundation for what will follow with two anecdotes
Here is the first anecdote. When I was studying for my PhD at the University of Minnesota in the mid-1970s, I had some involvement with several academics and students who were trying to figure out how humans understood language. Their goal was to build software that would understand natural language input to a computer through either voice or text by emulating how humans understood natural language. Some of you will remember that the mid-1970s were the days before personal computers, the World Wide Web, and graphical user interfaces. Working with computers was still difficult! Yes, at the time, we were living in the dark ages!
Here is the second anecdote. In 1982, I spent a six-month sabbatical leave at New York University (NYU). There I met a colleague who was trying to build a computer program to play the Chinese game of Go. I had never heard of Go until I went to NYU. It is the oldest board game in existence (over 2500 years old). In several ways, it is apparently a more complex and difficult game than chess. Anyway, the reason my colleague at NYU was interested in Go was that he already had extensive experience in building chess-playing programs. He was a graduate of the AI laboratory at Carnegie Mellon University led by a famous scholar – Nobel Laureate in Economics, Herbert Simon. In Simon's laboratory at Carnegie, my colleague had worked on chess-playing programs that were written based on the ways that grandmasters play chess. He hoped he would get additional insights about human intelligence by working with grandmasters of Go.
What Happened Subsequently?
We now have natural-language understanding software (e.g., Siri) that is fairly good at ‘understanding’ spoken natural language. The way the software works, however, has only a few similarities with the ways that humans understand natural language (at least to the best of our knowledge). Rather, the software depends on the breathtaking speeds at which modern computers now operate, the availability of high-speed communications networks, and the high-speed, enormous-capacity storage devices that now exist. For instance, when you ask Siri to do something, a sound file is transmitted via the internet to Apple's computers, and the sounds are matched against a huge database of sounds and their corresponding words. Siri then uses pattern recognition with an enormous database of phrases, questions and answers to determine what is most likely being said and its meaning.
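To make the pattern-matching idea concrete, here is a deliberately toy sketch in Python. The phrases and intent labels are invented purely for illustration, and real speech systems work on acoustic data against databases that are orders of magnitude larger; the sketch shows only the ‘match against a database of known phrases’ principle described above.

```python
# Toy sketch of matching an utterance against a database of known phrases.
# The phrases and intent labels below are invented for illustration only.
import difflib

PHRASE_DATABASE = {
    "what is the weather today": "weather_query",
    "set a timer for ten minutes": "timer_request",
    "play some music": "music_request",
}

def interpret(utterance: str) -> str:
    """Return the intent label of the closest-matching known phrase."""
    best = difflib.get_close_matches(utterance.lower(), PHRASE_DATABASE, n=1, cutoff=0.0)
    return PHRASE_DATABASE[best[0]]

print(interpret("What's the weather today?"))  # -> weather_query
```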
A similar situation exists with chess-playing programs. They don't work like human chess players. Instead, they use brute-force methods to determine their moves. They access a huge database of historical grandmaster games, winning endgames, strategic moves and so on, and they have sophisticated algorithms that they use to examine millions of positions in a second and to optimally evaluate their next move. Today, many different chess-playing programs exist that will beat the best human chess players every time.
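For readers who want to see the brute-force principle in miniature, here is a runnable sketch of minimax search applied to the much simpler game of Nim (players alternately take one to three stones; whoever takes the last stone wins). Real chess engines layer alpha-beta pruning, opening books and endgame databases on top of this same exhaustive idea.

```python
# Minimax search on Nim: exhaustively explore every line of play and
# score it - the brute-force principle chess engines apply at vast scale.
def minimax(stones: int, maximising: bool) -> int:
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximising else 1
    scores = [
        minimax(stones - take, not maximising)
        for take in (1, 2, 3) if take <= stones
    ]
    return max(scores) if maximising else min(scores)

def best_move(stones: int) -> int:
    """Score every legal move by exhaustive search and pick the best."""
    return max(
        (take for take in (1, 2, 3) if take <= stones),
        key=lambda take: minimax(stones - take, False),
    )

print(best_move(10))  # -> 2 (leaving 8 stones, a lost position for the opponent)
```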
Do the impressive capabilities of speech-recognition software and chess-playing software demonstrate that they possess human-like intelligence? The answer is ‘no’. And the situation with speech-recognition software and chess-playing software typifies AI work in many other domains.
Will this situation change? At some time in the future, are we likely to see AI programs that mirror human intelligence in, for instance, the accounting domain? My view is that the answer is ‘no’, and here I want to turn to some philosophy to explain my reasons.
If computer programs are to have any chance of mirroring human intelligence, we first need to solve a deep, fundamental problem that philosophers and cognitive scientists call the ‘mind–body problem’. Basically, the mind–body problem addresses the questions of what constitutes the human mind, how the human mind and consciousness arise, and how human consciousness relates to the human body.
Almost 30 years ago, an Australian philosopher named David Chalmers called the mind–body problem the ‘hard’ problem in philosophy (Chalmers 1996). The fact that his name for the problem is still in vogue reflects how far we remain from knowing whether the mind–body problem can ever be solved.
While a solution to the hard problem of human consciousness and intelligence remains elusive, nonetheless some philosophers have given us a theory of the way in which they believe human consciousness and intelligence have come about. I want to use their theory (Bunge 1979; Mahner 2015) to explain why I doubt AI will ever mirror human intelligence, but I also want to stress that the theory I am using is not accepted universally.
Clearly, human consciousness and intelligence didn't always exist! Specifically, the theory I'm using postulates that they arose progressively over the eons through a particular evolutionary process called ‘assemblage’. This process involves things in the world beginning to interact with other things and these interactions leading to the emergence of new, more complex things. These new things have a critical feature – namely, they have new properties not possessed by their components – their so-called emergent properties. These novel properties are somehow related to the properties of their components, but the critical issue is they are properties that are not possessed by any of their components (Bunge 2003).
Let me illustrate the notion of emergent properties through a simple example. Consider a work team that has a number of employees who interact with one another to perform certain tasks. The cohesiveness of the work team is an emergent property of the team. Somehow cohesiveness is related to properties of the individuals who make up the team, but it is not a property of the individual members of the team – we don't say a person is ‘cohesive’.
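A small code sketch (all names and numbers invented purely for illustration) makes the point sharper: in the class below, cohesiveness is computed from the ties between members and exists only at the level of the team object; no individual member carries it.

```python
# Toy illustration of an emergent property: the team has a cohesiveness;
# its individual members do not.
from itertools import combinations

class Member:
    def __init__(self, name: str):
        self.name = name  # a member has a name, but no 'cohesiveness'

class Team:
    def __init__(self, members, tie_strength):
        self.members = members
        self.tie_strength = tie_strength  # {(name, name): strength in [0, 1]}

    @property
    def cohesiveness(self) -> float:
        """Average tie strength across all pairs of members."""
        pairs = list(combinations(sorted(m.name for m in self.members), 2))
        return sum(self.tie_strength.get(p, 0.0) for p in pairs) / len(pairs)

team = Team(
    [Member("Ana"), Member("Ben"), Member("Cho")],
    {("Ana", "Ben"): 0.9, ("Ana", "Cho"): 0.6, ("Ben", "Cho"): 0.7},
)
print(team.cohesiveness)  # -> 0.733..., a property of the team, not of any member
```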
Think about humans, therefore, as an extraordinarily complex level structure of things (in essence, the things are systems) that have assembled over time. Billions of years ago, the evolutionary processes that led to the emergence of humans began with particular atoms (primarily hydrogen, oxygen and carbon, with a little nitrogen). These atoms eventually assembled into molecules. Some of these molecules eventually assembled into organelles. And then we see the formation of cells, tissues, organs and organisms as the assembly process that underpins evolution unfolded over time. Finally, we have a human made up of about 100 trillion cells, with each cell in turn made up of 100 trillion atoms. All the components of a human (atoms, cells, tissues and so on) are things (systems) with emergent properties.
The philosophers who developed this theory argue that only after this evolutionary process was quite advanced did consciousness and intelligence, at least as we know it, start to appear. They contend higher-level systems had to evolve in the life form that eventually became a human before we had the types of emergent properties that they believe are needed to produce human consciousness and intelligence.
What does this mean for the chances of machines ever emulating human consciousness and intelligence? If the philosophers who developed the theory I've described are right, the answer is that the chances are not good (see also Mahner 2015).
Think about the numbers! Remember, the human body has roughly 100 trillion cells, each of which is composed of roughly 100 trillion atoms. Many of these atoms and cells are connected to other atoms and cells. Of course, not everything is connected to everything else. Nonetheless, the possible number of connections and the number that most likely exist are mind-boggling. What are the emergent properties that have to exist among the different components of a life form if consciousness and intelligence are to eventually appear?
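A back-of-the-envelope calculation (assuming, purely for the arithmetic, the 100 trillion figure above and counting only pairwise links between cells) shows why the numbers overwhelm us:

```python
# Among n components there are n * (n - 1) / 2 possible pairwise connections.
def possible_pairwise_connections(n: int) -> int:
    return n * (n - 1) // 2

cells = 100_000_000_000_000  # ~100 trillion cells in the human body
print(f"{possible_pairwise_connections(cells):.3e}")  # -> 5.000e+27 possible pairs
```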
To make matters even more complex, after higher-level systems have evolved, we know that they sometimes exert an influence on their lower-level components – the components that initially assembled to form the higher-level system – such that the properties of those lower-level components change. For instance, consider someone who becomes a head of department or a dean in a university. They acquire new properties such as (a) the authority to make certain decisions, and (b) the unbelievable frustrations arising from being a head or dean in a university. And they can lose certain properties – for instance, if you have been a head or a dean, you will know that the property you often lose is the will to live!!
If we are trying to mirror human intelligence, here is the catch. First, we are a long way from knowing (and perhaps we may never know) all the connections that exist between the huge number of components of the human body – the atoms, the cells, the tissues and so on. Second, even where we know some that exist, we don't always know their exact nature and thus how to replicate them. Third, how the emergent properties of higher-level systems in the human body relate to the properties of lower-level components is often unclear.
Here, then, is the important moral to my story so far. Focusing on whether computers can and eventually will have the capabilities to mirror human consciousness and human intelligence is, in my opinion, the wrong focus. I doubt this will ever occur. Humans are the outcome of an evolutionary process that has occurred over billions of years. After a couple of thousand years of philosophers trying to understand human consciousness and intelligence and more recently cognitive neuroscientists tackling the same task, we have barely scratched the surface.
We also have to consider the properties that continue to differentiate humans from machines – empathy, sympathy, love, self-sacrifice – and how they affect human consciousness and intelligence. Where do these properties come from? Can you envisage a machine with these properties? Can you conceive of a situation where you and a computer might fall in love with each other?
Does the moral of my story mean that as humans (as accountants) we do not have to be concerned about artificial intelligence because the likelihood of computers being able to mirror human consciousness and intelligence, at least for the foreseeable future, is very low? The answer is a resounding, an emphatic, ‘No!’. A certain type of consciousness and intelligence – let's just simply call it machine intelligence – will continue to evolve rapidly as computers become more powerful and our knowledge of how to use them increases exponentially. It is this form of artificial intelligence that has to be our focus.
The reason is that we need to understand the nature and significant implications of a concept that philosophers interested in general system theory call equifinality – very simply, the idea that we can sometimes achieve the same (or almost the same) outcomes in the world using different processes (e.g., Gresov and Drazin 1997). Language-recognition software and chess-playing software are good examples of equifinality in practice. We don't have quite the same outcomes with the software as we do with humans. But in one case, language-recognition software, the outcome is good enough for many purposes. And in the other case, chess-playing software, we have a superior outcome (at least if winning the game is our objective criterion).
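Here is a minimal, self-contained illustration of equifinality in code (my own toy example, not drawn from the sources above): a term-by-term ‘brute force’ process and Gauss's closed-form insight are entirely different processes, yet they reach an identical outcome.

```python
# Two different processes, one identical outcome: equifinality in miniature.
def sum_by_brute_force(n: int) -> int:
    total = 0
    for i in range(1, n + 1):  # the machine's way: grind through every term
        total += i
    return total

def sum_by_insight(n: int) -> int:
    return n * (n + 1) // 2    # the human's way: Gauss's closed-form formula

assert sum_by_brute_force(1_000_000) == sum_by_insight(1_000_000)
print(sum_by_insight(1_000_000))  # -> 500000500000
```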
The challenges we face because of equifinality are becoming increasingly salient. For instance, for those of us who are academics, we now have concerns about student use of so-called generative AI programs such as ChatGPT. The fact that a student's response to an assignment has been produced by a generative AI program can be extraordinarily difficult to detect – again, equifinality at work.
It's so hard to predict how equifinality will manifest. It's often hard for humans to ‘think’ like computers! For instance, we have difficulty comprehending how computers perform tasks in a few seconds that would take humans large amounts of time to complete. In this regard, we are at the dawn of quantum computing – currently, a field of research that promises the development of a new kind of computer that can perform certain kinds of calculations in a few seconds that would otherwise take today's supercomputers decades or millennia to complete. In a world of quantum computers, what forms of equifinality and machine intelligence will arise?
Where to from here? As accountants, what should we do in a world where machine intelligence will continue to develop rapidly? I wish I had privileged insights, but sadly I don't. For what they are worth, however, I'd like to conclude my oration with just a few thoughts that might provide some matters for reflection.
First, as accountants, we should focus on identifying those tasks where humans are likely to have a long-term comparative advantage over computers. I suspect these kinds of tasks will be those that require very human attributes – for instance, an ability to interact with others with warmth and empathy, an ability to read body language, a sense of the ephemeral and spiritual, and an ability to develop rapport and trust. We should continue to develop our capabilities in relation to these tasks.
Second, we need to think very hard about those accounting tasks where machine intelligence will have a comparative advantage over humans. We already have some pointers to the tasks that will be affected – specifically, those that are amenable to machine-learning, pattern-matching and classification techniques. But developments in generative AI and quantum computing should motivate us to think more broadly. Where equifinality is likely to arise, we should exit systematically and gracefully from the tasks that will be affected.
Third, we can look for opportunities to work synergistically with machine intelligence. As accountants, ultimately, we are seeking ways to provide information about economic phenomena. With better tools, we are progressively expanding our views about what economic phenomena can and should be our focus. In this regard, I am mindful of Bill Edge's (2022) excellent oration last year where he spoke about developments in sustainability reporting and the opportunities provided to accountants. With powerful tools such as networks of environmental sensors, pattern-recognition and machine-learning software, generative AI tools and creative thinking, we can expand the scope of the work we do as accountants.
Here is my closing comment. I feel some sense of irony and remorse about the topic of my oration. My focus has been artificial intelligence and its possible implications for the accounting profession. But tonight, we are commemorating someone, Professor Colin Ferguson, who had an extraordinary amount of very real human intelligence, personal and professional. There was nothing artificial about it! I hope Col will forgive me.

Thank you!