{"title":"传统与神经网络目标匹配方法动态:美国信息技术并购市场","authors":"Ioannis Anagnostopoulos, Anas Rizeq","doi":"10.1002/isaf.1492","DOIUrl":null,"url":null,"abstract":"<p>In an era of a continuous quest for business growth and sustainability it has been shown that synergies and growth-driven mergers and acquisitions (M&As) are an integral part of institutional strategy. In order to endure in the face of fierce competition M&As have become a very important channel of obtaining technology, increasing competitiveness and market share (Carbone & Stone, <span>2005</span>; Christensen et al., <span>2011</span>). During the post-2000 era, this is also a point where more than half of the said available growth and synergies in M&As are strongly related to information technology (IT) and its disruptive synergistic potential, as the first decade of the 2000s has shown (Sarrazin & West, <span>2011</span>). Such business growth materializes at the intersection of internalizing, integrating, and applying the latest data management technology with M&As where there are vast opportunities to develop (a) new technologies, (b) new target screening and valuation methodologies, (c) new products, (d) new services, and (e) new business models (Hacklin et al., <span>2013</span>; Lee & Lee, <span>2017</span>). However, while technology and its disruptive capabilities have received considerable attention from the business community in general, studies regarding the examination of technology convergence, integration dynamics, and success of M&As from a market screening and valuation perspective are relatively scarce (Lee & Cho, <span>2015</span>; Song et al., <span>2017</span>). Furthermore, little attention has been devoted to investigating the evolutionary path of technology-assisted, target screening methods and understanding their potential for effective target acquisition in the future (Aaldering et al., <span>2019</span>). We contribute to this by examining the application of neural network (NN) methodology in successful target screening in the US M&As IT sector.</p><p>In addition, while there are recognized idiosyncratic motivations for pursuing M&A-centered strategies for growth, there are also considerable system-wide issues that introduce waves of global M&A deals. Examples include reactions to globalization dynamics, changes in competition, tax reforms (such as the recent US tax reform indicating tax benefits for investors), deregulation, economic reforms and liberalization, block or regional economic integration (i.e., the Gulf Cooperation Council and the EU). Hence, effective target-firm identification is an important research topic to business leaders and academics from both management and economic perspectives.</p><p>Technology firms in particular often exhibit unconventional growth patterns, and this also makes firm valuation problematic as it can drive their stocks being hugely misvalued (i.e., overvalued) therefore increasing M&A activity (Rhodes-Kropf & Viswanathan, <span>2004</span>). Betton et al. (<span>2008</span>) claimed that predicting targets with any degree of accuracy has proven difficult in their general conclusion regarding M&A research aimed at target prediction. Hennessy (<span>2017</span>) argued that such firms add an extra and significant layer of complexity in terms of appropriate target identification and valuation. 
This is so because such firms either have limited/no past history or go through an invariable and dynamically high volume of transformations throughout their lives; they also enjoy unique businesses and/or products and, consequently, there are not directly visible and comparable peers or competitors. It is extremely challenging to identify and match appropriate targets that could recondition a company, evaluate how much to invest, and how to assimilate them (Christensen, <span>2016</span>).</p><p>The problem stems from the fact that upper management (with the exception of motive distortion), either through unintended hubris or myopia, erroneously match targets to the strategic intent of a deal. In this way, managers fail to discriminate between targets that can improve the company's growth prospects from those that could dramatically dampen performance.</p><p>Most of the research so far concentrates on linearly attempting to predict the probability of targets been acquired through regression analysis and the related distress signals (i.e., bankruptcy probabilities) using publicly available data sets on firms and then applying the aforementioned models. In reality, though, and in most cases, social researchers do not work with data from well-planned laboratory experiments where this is possible. One cannot safely assume that past data, interaction data, and obviously human performance-related data are characterized by an ideal distribution (Sparrowe et al., <span>2001</span>). Social science studies and the associated data have a tendency for nonlinearity, clustering around certain sections, and being skewed with respect to particular variables (Very et al., <span>2012</span>). More recently, Lipton and Lipton (<span>2013</span>)—and earlier Tam and Kiang (<span>1992</span>)—also supported this notion by arguing that the variety and multiplicity of exogenous factors is so persistent at any given time that they affect mergers and make it impossible to predict the level of future merger activity. For all the aforementioned reasons, the assumptions ingrained in such methods constitute an unrealistic prospect for many informational sets utilized by scholars.</p><p>Prediction performance can potentially be enhanced by utilizing more data in an effort to construct helpful models; yet, Cremers et al. (<span>2009</span>) argued that, equally, a predefined set of data in order to obtain a regression model is an additional issue. However, in both preceding statements, the inability to deal effectively with nonlinearity is a critical drawback of multiple linear regression (Detienne et al., <span>2003</span>). This de facto observation in the domain of management science renders linear regressions questionable in some instances. If established knowledge of nonlinearity exists, then it is much easier to treat; however, in many instances, researchers may be unaware of the nonlinearity among the chosen variables.Consequently, companies employ less well-suited methods, they pay the wrong (usually higher) price, they fail to realize the synergistic potential, and they significantly increase their costs after integrating the target in the wrong way (Ma & Liu, <span>2017</span>). Furthermore, both the choice of target screening methodology and the associated performance measures have been long-standing issues facing researchers and managers alike. 
Past research has shown that, ex ante capital market reactions to an acquisition announcement exhibit little relation to corporate managers' ex post assessment fundamentals (Schoenberg, <span>2000</span>). M&A announcements archetypally encompass a big premium over going market prices (between 30% and 40% on average) and result in a large and swift change in market prices suggesting the announcement is news to the market (Eckbo, <span>2014</span>). Approximately half of all M&As fail, with a very large proportion of those considered successful producing negligible gains for their shareholders. At the same time, the upper management of targets departs within 3–5 years following the acquisition completion (Agrawal & Jaffe, <span>2000</span>; Krug & Aguilera, <span>2005</span>). Hence, the appropriateness of target screening and valuation methodologies in important economic sectors, such the IT services sector, remain overlooked. It is thus of both business and academic importance to investigate how the underlying trajectories interact in order to seize resulting opportunities.</p><p>The evolution of artificial intelligence, big data, and modern screening methods is steadily leading managers to reject a one-size-fits-all valuation in favor of more <i>individualized</i>, reconfigurable, and understandable accounting data—data that they can customize and fit into their own structures to meet their own decision-making needs. Consequently, managers increasingly investigate methods and digital data that are less static, less summarized, more concise, and more agile. In that line of thinking, O'Leary (<span>2019</span>) argued that emerging model designs enhance database capabilities and that the parallel development of multiple and different learning models will revolutionize the use of accounting data.</p><p>In this paper, we explore, apply, and compare two types of target screening classification techniques—the NN and the traditional logistic regression (LR) M&A forecasting techniques—in terms of successful target prediction in the IT M&A market for the USA. We provide for a demonstration of the growing prospects of the use of an NN to systematize feature engineering from raw time series, in a more methodical way as a result of the strategic change in the types of digital commodities that decision-makers demand. In that respect, and within the context of M&As, predicting which companies will become takeover targets and the ability to discriminate between high‑ and low-quality targets is very important for managers and financiers, as well as for regulators and competition market committees. Our findings provide valuable insights to guide managers in financial and other organizations to improve their performance through suitable target (or nontarget) screening methods.</p><p>Having provided the introduction and our motivation for our study in Section 2 we explore the literature, revolving around three major threads: namely, first examining the relevant empirical evidence on M&A deals in the USA; second, uncovering the valuation challenges; and third, exposing the two models discussed in Section 3. We do so with the aim of aiding the understanding of the evolution of such complex dynamics. In Section 3 we discuss our methodologies, where determinant variables and NNs are also compared vis-à-vis LR. We do so with regard to aiding an understanding of the tensions in model dynamics. 
Section 4 presents the results of our analysis, and our last two sections provide the conclusions, discussion, and recommendations.</p><p>Advancements in technological disruption are fueling a winner-takes-all environment, and firms with the most effective target asset matching will create more distance and differentiation between the largest, most successful firms and the rest of the market (Hennessy & Hege, <span>2010</span>). Any breakthrough in the capacity to more reliably forecast firms and spotting which companies are likely to translate first-mover advantages into market power that will engage in a merger deal would be very lucrative for an investor in the financial markets. Over the past two decades, approximately more than one-third of worldwide deals have involved US-based companies. This percentage stood above 50% in 2000 due to the tech bubble. In the first part of this section we perform a brief empirical exploration in order to (a) demonstrate how successful M&As of tech-driven, digital business models (e.g., fintechs) have become the instrument of choice to acquire needed technologies, capabilities, and scalabilities in order to close innovation gaps, deflect competition, and (b) as a path to growth through the power of ‘ball-point statistics’, to demonstrate to business leaders and academics the importance of M&As in the IT sector as well as help our exposition of utilizing agile and reliable methods for successful target acquisition.</p><p>For the purposes of this study, three broad groups of data were required where all M&A sample data were gathered from the SDC Platinum database. Our sample is subject to the following restriction criteria: (a) our period covers the aftermath of the technology bubble, and the last 17 years of M&A transactions announced between the years 2000 and 2017 are included; (b) we exclude private acquirer firms also in circumstances where the target is a publicly traded company domiciled in the USA; and as a result, (c) only public technology firm targets are included as classified being a technology company by their Standard Industrial Classification (SIC) codes and sub-codes; (d) partial investments, divestments, joint ventures, spinoffs, and buybacks are excluded; (e) only <i>pure</i> mergers or acquisitions transactions are included where the acquirer owns more than 50% of the targets' shares.</p><p>All IT M&A transactions records, number of public firms in the technology sector from the year 2000 to 2017, and the relevant financial ratios for the same period were required. We employ the coding system and we define a technology firm as the type of institution that (a) focuses primarily on the manufacturing and development of technology (both software and hardware), (b) disseminates information via technology, and (c) is referred to as a subset of technology companies as provided by the same coding and sub-coding system—for example, auxiliary IT devices and services. We utilize four-digit numerical SIC codes, which are used to assign companies to their respective industries. M&A data can also be uniquely positioned for the analysis of market convergence, as firms challenged by technology convergence attempt to extend their capabilities from external sources through various corporate development activities as argued previously herein. 
Given that M&As are stimulated by structural and functional market changes, it seems reasonable to look at whether technology-firm convergence is also related to changing market boundaries as evident in the case of technology convergence. That is, sometimes boundaries seem to be blurred given that technology companies range from semiconductors to technology-based personal credit institutions (e.g., fintechs). More specifically, in order to ensure consistency, the relevant M&As in our sample included companies where SIC codes range from 3571 to 3578, 3674 to 3699, and 7371 to 7379. The growing proximity between formerly distinct market segments through the incidence of M&As can act as a predictor of measuring the extent and rate of the market convergence processes. Our initial sample consists of 6,392 deals where both the NN and LR method are tested on the whole sample. We then utilize two subsamples (targets vs. nontargets) as well as testing our data on two selected sample weighting scenarios (i.e., a balanced 50/50 sample and an unbalanced 70/30 sample of targets to nontargets).</p><p>The comparative method utilization and analysis discussed in this paper permits a controlled investigation of competing methodologies where, grounded on a quantitative analysis of M&A transaction deal data, we were able to demonstrate that alternative methods can be utilized in an interdisciplinary manner. Modeling the prediction of takeover targets reliably has been a challenging field in the domains of forecasting and finance. We have reviewed M&A types and their classifications, explored their history, and discussed motives and causes. For example, bankruptcy and financial distress prediction topics were discussed in the financial literature more extensively owing to their practical usage, especially by banks for credit scoring and acquirers for target matching. Then we identified our research objectives by analyzing our research problem to build a generic framework for understanding the generic aspects of the market for corporate control and its importance confined to the US IT sector. Researchers have utilized a variety of methods for predicting takeovers, and we have attempted to review these methods and evaluate them as per the financial literature. In order to demonstrate the practicality and accuracy of NNs we have applied an LR model for comparison purposes. In our study, NNs performed better than LR on both samples, indicating their superiority over LR.</p><p>Our study has important implications for both managers and policymakers. Furthermore, scholars and managers have also long been aware of both the tensions and the potential benefits emanating from interdisciplinary cooperation. A systematic target examination of the M&A environment can provide for a heightened sense of understanding of marketplace subtleties and can help moderate ambiguity in target projections. Therefore, the comparative methodology predictive results can be merged into market growth agendas in order to support managers and practitioners with effective decision-making. For example, top-level managers can utilize competing methodologies for creating appropriate variable networks for expanding and advancing firm-level target matching as well as controlling for management myopia effects, which can help (re)focus acquisition intent. 
Policy setters and competition authorities can potentially angle their scope and reach beyond the direction of simply evaluating the features (i.e., causalities) of each market by considering the outcomes of interdisciplinary and cross-examination methods for effective competition regulation.</p><p>Our aim is to compare standard predictive tools with innovative ones in the finance field, and in the M&A field in particular with the further aim of contributing and pushing the boundaries of innovative methodologies that assist academics and professionals alike. Our study, like any other study, also has restrictions that call for an expanded exploration in order to extend and enrich the current and prior research in target forecasting studies and methods. As discussed herein, the direction of the relationship between the independent and dependent variables examined (cause–effect) is not categorically clear. It is very challenging to even attempt to offer further elaboration upon the major factors that drive the volume/size and value of M&A deals based on the data set at hand. Interdisciplinary field research and combined triangulated methodologies, such as qualitative surveys, can potentially offer an avenue for further clarifications. In terms of the data utilized, it is also imperative to communicate and make clear that we have confined our sample to a single industrial setting. Arguably, the setting itself, owing to its unique characteristics, necessitates further comparative research in other settings/sectors/industries; the relevance and the consistency of this proposed research can be supplemented through future studies. A further limitation of our work is that one can argue that there is a further need for an external validation of our model and findings with a second, independent data set, and for this reason our results should be considered also as exploratorily predictive as opposed to a definitive remedy.</p><p>There is also no academic or empirical reason not to extend this study in other related research domains where there is also potentially big data availability, such as cross-border M&As, waves of consolidation, IPOs, and foreign direct investment in all its forms. 
All these areas present a rich ground for expanding on the need to develop models that predict not only which companies are likely to be targets, but also which deals will move forward, which types of expansion can actually add value over some medium-term horizon, like 3–5 years.</p><p>The author(s) declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.</p><p>The author(s) received no financial support for the research, authorship, and/or publication of this article.</p>","PeriodicalId":53473,"journal":{"name":"Intelligent Systems in Accounting, Finance and Management","volume":"28 2","pages":"97-118"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/isaf.1492","citationCount":"3","resultStr":"{\"title\":\"Conventional and neural network target-matching methods dynamics: The information technology mergers and acquisitions market in the USA\",\"authors\":\"Ioannis Anagnostopoulos, Anas Rizeq\",\"doi\":\"10.1002/isaf.1492\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In an era of a continuous quest for business growth and sustainability it has been shown that synergies and growth-driven mergers and acquisitions (M&As) are an integral part of institutional strategy. In order to endure in the face of fierce competition M&As have become a very important channel of obtaining technology, increasing competitiveness and market share (Carbone & Stone, <span>2005</span>; Christensen et al., <span>2011</span>). During the post-2000 era, this is also a point where more than half of the said available growth and synergies in M&As are strongly related to information technology (IT) and its disruptive synergistic potential, as the first decade of the 2000s has shown (Sarrazin & West, <span>2011</span>). Such business growth materializes at the intersection of internalizing, integrating, and applying the latest data management technology with M&As where there are vast opportunities to develop (a) new technologies, (b) new target screening and valuation methodologies, (c) new products, (d) new services, and (e) new business models (Hacklin et al., <span>2013</span>; Lee & Lee, <span>2017</span>). However, while technology and its disruptive capabilities have received considerable attention from the business community in general, studies regarding the examination of technology convergence, integration dynamics, and success of M&As from a market screening and valuation perspective are relatively scarce (Lee & Cho, <span>2015</span>; Song et al., <span>2017</span>). Furthermore, little attention has been devoted to investigating the evolutionary path of technology-assisted, target screening methods and understanding their potential for effective target acquisition in the future (Aaldering et al., <span>2019</span>). We contribute to this by examining the application of neural network (NN) methodology in successful target screening in the US M&As IT sector.</p><p>In addition, while there are recognized idiosyncratic motivations for pursuing M&A-centered strategies for growth, there are also considerable system-wide issues that introduce waves of global M&A deals. 
Examples include reactions to globalization dynamics, changes in competition, tax reforms (such as the recent US tax reform indicating tax benefits for investors), deregulation, economic reforms and liberalization, block or regional economic integration (i.e., the Gulf Cooperation Council and the EU). Hence, effective target-firm identification is an important research topic to business leaders and academics from both management and economic perspectives.</p><p>Technology firms in particular often exhibit unconventional growth patterns, and this also makes firm valuation problematic as it can drive their stocks being hugely misvalued (i.e., overvalued) therefore increasing M&A activity (Rhodes-Kropf & Viswanathan, <span>2004</span>). Betton et al. (<span>2008</span>) claimed that predicting targets with any degree of accuracy has proven difficult in their general conclusion regarding M&A research aimed at target prediction. Hennessy (<span>2017</span>) argued that such firms add an extra and significant layer of complexity in terms of appropriate target identification and valuation. This is so because such firms either have limited/no past history or go through an invariable and dynamically high volume of transformations throughout their lives; they also enjoy unique businesses and/or products and, consequently, there are not directly visible and comparable peers or competitors. It is extremely challenging to identify and match appropriate targets that could recondition a company, evaluate how much to invest, and how to assimilate them (Christensen, <span>2016</span>).</p><p>The problem stems from the fact that upper management (with the exception of motive distortion), either through unintended hubris or myopia, erroneously match targets to the strategic intent of a deal. In this way, managers fail to discriminate between targets that can improve the company's growth prospects from those that could dramatically dampen performance.</p><p>Most of the research so far concentrates on linearly attempting to predict the probability of targets been acquired through regression analysis and the related distress signals (i.e., bankruptcy probabilities) using publicly available data sets on firms and then applying the aforementioned models. In reality, though, and in most cases, social researchers do not work with data from well-planned laboratory experiments where this is possible. One cannot safely assume that past data, interaction data, and obviously human performance-related data are characterized by an ideal distribution (Sparrowe et al., <span>2001</span>). Social science studies and the associated data have a tendency for nonlinearity, clustering around certain sections, and being skewed with respect to particular variables (Very et al., <span>2012</span>). More recently, Lipton and Lipton (<span>2013</span>)—and earlier Tam and Kiang (<span>1992</span>)—also supported this notion by arguing that the variety and multiplicity of exogenous factors is so persistent at any given time that they affect mergers and make it impossible to predict the level of future merger activity. For all the aforementioned reasons, the assumptions ingrained in such methods constitute an unrealistic prospect for many informational sets utilized by scholars.</p><p>Prediction performance can potentially be enhanced by utilizing more data in an effort to construct helpful models; yet, Cremers et al. 
(<span>2009</span>) argued that, equally, a predefined set of data in order to obtain a regression model is an additional issue. However, in both preceding statements, the inability to deal effectively with nonlinearity is a critical drawback of multiple linear regression (Detienne et al., <span>2003</span>). This de facto observation in the domain of management science renders linear regressions questionable in some instances. If established knowledge of nonlinearity exists, then it is much easier to treat; however, in many instances, researchers may be unaware of the nonlinearity among the chosen variables.Consequently, companies employ less well-suited methods, they pay the wrong (usually higher) price, they fail to realize the synergistic potential, and they significantly increase their costs after integrating the target in the wrong way (Ma & Liu, <span>2017</span>). Furthermore, both the choice of target screening methodology and the associated performance measures have been long-standing issues facing researchers and managers alike. Past research has shown that, ex ante capital market reactions to an acquisition announcement exhibit little relation to corporate managers' ex post assessment fundamentals (Schoenberg, <span>2000</span>). M&A announcements archetypally encompass a big premium over going market prices (between 30% and 40% on average) and result in a large and swift change in market prices suggesting the announcement is news to the market (Eckbo, <span>2014</span>). Approximately half of all M&As fail, with a very large proportion of those considered successful producing negligible gains for their shareholders. At the same time, the upper management of targets departs within 3–5 years following the acquisition completion (Agrawal & Jaffe, <span>2000</span>; Krug & Aguilera, <span>2005</span>). Hence, the appropriateness of target screening and valuation methodologies in important economic sectors, such the IT services sector, remain overlooked. It is thus of both business and academic importance to investigate how the underlying trajectories interact in order to seize resulting opportunities.</p><p>The evolution of artificial intelligence, big data, and modern screening methods is steadily leading managers to reject a one-size-fits-all valuation in favor of more <i>individualized</i>, reconfigurable, and understandable accounting data—data that they can customize and fit into their own structures to meet their own decision-making needs. Consequently, managers increasingly investigate methods and digital data that are less static, less summarized, more concise, and more agile. In that line of thinking, O'Leary (<span>2019</span>) argued that emerging model designs enhance database capabilities and that the parallel development of multiple and different learning models will revolutionize the use of accounting data.</p><p>In this paper, we explore, apply, and compare two types of target screening classification techniques—the NN and the traditional logistic regression (LR) M&A forecasting techniques—in terms of successful target prediction in the IT M&A market for the USA. We provide for a demonstration of the growing prospects of the use of an NN to systematize feature engineering from raw time series, in a more methodical way as a result of the strategic change in the types of digital commodities that decision-makers demand. 
In that respect, and within the context of M&As, predicting which companies will become takeover targets and the ability to discriminate between high‑ and low-quality targets is very important for managers and financiers, as well as for regulators and competition market committees. Our findings provide valuable insights to guide managers in financial and other organizations to improve their performance through suitable target (or nontarget) screening methods.</p><p>Having provided the introduction and our motivation for our study in Section 2 we explore the literature, revolving around three major threads: namely, first examining the relevant empirical evidence on M&A deals in the USA; second, uncovering the valuation challenges; and third, exposing the two models discussed in Section 3. We do so with the aim of aiding the understanding of the evolution of such complex dynamics. In Section 3 we discuss our methodologies, where determinant variables and NNs are also compared vis-à-vis LR. We do so with regard to aiding an understanding of the tensions in model dynamics. Section 4 presents the results of our analysis, and our last two sections provide the conclusions, discussion, and recommendations.</p><p>Advancements in technological disruption are fueling a winner-takes-all environment, and firms with the most effective target asset matching will create more distance and differentiation between the largest, most successful firms and the rest of the market (Hennessy & Hege, <span>2010</span>). Any breakthrough in the capacity to more reliably forecast firms and spotting which companies are likely to translate first-mover advantages into market power that will engage in a merger deal would be very lucrative for an investor in the financial markets. Over the past two decades, approximately more than one-third of worldwide deals have involved US-based companies. This percentage stood above 50% in 2000 due to the tech bubble. In the first part of this section we perform a brief empirical exploration in order to (a) demonstrate how successful M&As of tech-driven, digital business models (e.g., fintechs) have become the instrument of choice to acquire needed technologies, capabilities, and scalabilities in order to close innovation gaps, deflect competition, and (b) as a path to growth through the power of ‘ball-point statistics’, to demonstrate to business leaders and academics the importance of M&As in the IT sector as well as help our exposition of utilizing agile and reliable methods for successful target acquisition.</p><p>For the purposes of this study, three broad groups of data were required where all M&A sample data were gathered from the SDC Platinum database. 
Our sample is subject to the following restriction criteria: (a) our period covers the aftermath of the technology bubble, and the last 17 years of M&A transactions announced between the years 2000 and 2017 are included; (b) we exclude private acquirer firms also in circumstances where the target is a publicly traded company domiciled in the USA; and as a result, (c) only public technology firm targets are included as classified being a technology company by their Standard Industrial Classification (SIC) codes and sub-codes; (d) partial investments, divestments, joint ventures, spinoffs, and buybacks are excluded; (e) only <i>pure</i> mergers or acquisitions transactions are included where the acquirer owns more than 50% of the targets' shares.</p><p>All IT M&A transactions records, number of public firms in the technology sector from the year 2000 to 2017, and the relevant financial ratios for the same period were required. We employ the coding system and we define a technology firm as the type of institution that (a) focuses primarily on the manufacturing and development of technology (both software and hardware), (b) disseminates information via technology, and (c) is referred to as a subset of technology companies as provided by the same coding and sub-coding system—for example, auxiliary IT devices and services. We utilize four-digit numerical SIC codes, which are used to assign companies to their respective industries. M&A data can also be uniquely positioned for the analysis of market convergence, as firms challenged by technology convergence attempt to extend their capabilities from external sources through various corporate development activities as argued previously herein. Given that M&As are stimulated by structural and functional market changes, it seems reasonable to look at whether technology-firm convergence is also related to changing market boundaries as evident in the case of technology convergence. That is, sometimes boundaries seem to be blurred given that technology companies range from semiconductors to technology-based personal credit institutions (e.g., fintechs). More specifically, in order to ensure consistency, the relevant M&As in our sample included companies where SIC codes range from 3571 to 3578, 3674 to 3699, and 7371 to 7379. The growing proximity between formerly distinct market segments through the incidence of M&As can act as a predictor of measuring the extent and rate of the market convergence processes. Our initial sample consists of 6,392 deals where both the NN and LR method are tested on the whole sample. We then utilize two subsamples (targets vs. nontargets) as well as testing our data on two selected sample weighting scenarios (i.e., a balanced 50/50 sample and an unbalanced 70/30 sample of targets to nontargets).</p><p>The comparative method utilization and analysis discussed in this paper permits a controlled investigation of competing methodologies where, grounded on a quantitative analysis of M&A transaction deal data, we were able to demonstrate that alternative methods can be utilized in an interdisciplinary manner. Modeling the prediction of takeover targets reliably has been a challenging field in the domains of forecasting and finance. We have reviewed M&A types and their classifications, explored their history, and discussed motives and causes. 
For example, bankruptcy and financial distress prediction topics were discussed in the financial literature more extensively owing to their practical usage, especially by banks for credit scoring and acquirers for target matching. Then we identified our research objectives by analyzing our research problem to build a generic framework for understanding the generic aspects of the market for corporate control and its importance confined to the US IT sector. Researchers have utilized a variety of methods for predicting takeovers, and we have attempted to review these methods and evaluate them as per the financial literature. In order to demonstrate the practicality and accuracy of NNs we have applied an LR model for comparison purposes. In our study, NNs performed better than LR on both samples, indicating their superiority over LR.</p><p>Our study has important implications for both managers and policymakers. Furthermore, scholars and managers have also long been aware of both the tensions and the potential benefits emanating from interdisciplinary cooperation. A systematic target examination of the M&A environment can provide for a heightened sense of understanding of marketplace subtleties and can help moderate ambiguity in target projections. Therefore, the comparative methodology predictive results can be merged into market growth agendas in order to support managers and practitioners with effective decision-making. For example, top-level managers can utilize competing methodologies for creating appropriate variable networks for expanding and advancing firm-level target matching as well as controlling for management myopia effects, which can help (re)focus acquisition intent. Policy setters and competition authorities can potentially angle their scope and reach beyond the direction of simply evaluating the features (i.e., causalities) of each market by considering the outcomes of interdisciplinary and cross-examination methods for effective competition regulation.</p><p>Our aim is to compare standard predictive tools with innovative ones in the finance field, and in the M&A field in particular with the further aim of contributing and pushing the boundaries of innovative methodologies that assist academics and professionals alike. Our study, like any other study, also has restrictions that call for an expanded exploration in order to extend and enrich the current and prior research in target forecasting studies and methods. As discussed herein, the direction of the relationship between the independent and dependent variables examined (cause–effect) is not categorically clear. It is very challenging to even attempt to offer further elaboration upon the major factors that drive the volume/size and value of M&A deals based on the data set at hand. Interdisciplinary field research and combined triangulated methodologies, such as qualitative surveys, can potentially offer an avenue for further clarifications. In terms of the data utilized, it is also imperative to communicate and make clear that we have confined our sample to a single industrial setting. Arguably, the setting itself, owing to its unique characteristics, necessitates further comparative research in other settings/sectors/industries; the relevance and the consistency of this proposed research can be supplemented through future studies. 
A further limitation of our work is that one can argue that there is a further need for an external validation of our model and findings with a second, independent data set, and for this reason our results should be considered also as exploratorily predictive as opposed to a definitive remedy.</p><p>There is also no academic or empirical reason not to extend this study in other related research domains where there is also potentially big data availability, such as cross-border M&As, waves of consolidation, IPOs, and foreign direct investment in all its forms. All these areas present a rich ground for expanding on the need to develop models that predict not only which companies are likely to be targets, but also which deals will move forward, which types of expansion can actually add value over some medium-term horizon, like 3–5 years.</p><p>The author(s) declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.</p><p>The author(s) received no financial support for the research, authorship, and/or publication of this article.</p>\",\"PeriodicalId\":53473,\"journal\":{\"name\":\"Intelligent Systems in Accounting, Finance and Management\",\"volume\":\"28 2\",\"pages\":\"97-118\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1002/isaf.1492\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Intelligent Systems in Accounting, Finance and Management\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/isaf.1492\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Economics, Econometrics and Finance\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent Systems in Accounting, Finance and Management","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/isaf.1492","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Economics, Econometrics and Finance","Score":null,"Total":0}
Conventional and neural network target-matching methods dynamics: The information technology mergers and acquisitions market in the USA
In an era of continuous pursuit of business growth and sustainability, synergies and growth-driven mergers and acquisitions (M&As) have been shown to be an integral part of institutional strategy. In order to endure in the face of fierce competition, M&As have become a very important channel for obtaining technology and increasing competitiveness and market share (Carbone & Stone, 2005; Christensen et al., 2011). In the post-2000 era, more than half of the available growth and synergies in M&As have been strongly related to information technology (IT) and its disruptive synergistic potential, as the first decade of the 2000s has shown (Sarrazin & West, 2011). Such business growth materializes at the intersection of internalizing, integrating, and applying the latest data management technology with M&As, where there are vast opportunities to develop (a) new technologies, (b) new target screening and valuation methodologies, (c) new products, (d) new services, and (e) new business models (Hacklin et al., 2013; Lee & Lee, 2017). However, while technology and its disruptive capabilities have received considerable attention from the business community in general, studies examining technology convergence, integration dynamics, and the success of M&As from a market screening and valuation perspective are relatively scarce (Lee & Cho, 2015; Song et al., 2017). Furthermore, little attention has been devoted to investigating the evolutionary path of technology-assisted target screening methods and understanding their potential for effective target acquisition in the future (Aaldering et al., 2019). We contribute to this by examining the application of neural network (NN) methodology to successful target screening in the US IT M&A sector.
In addition, while there are recognized idiosyncratic motivations for pursuing M&A-centered strategies for growth, there are also considerable system-wide forces that trigger waves of global M&A deals. Examples include reactions to globalization dynamics, changes in competition, tax reforms (such as the recent US tax reform offering tax benefits for investors), deregulation, economic reform and liberalization, and bloc or regional economic integration (e.g., the Gulf Cooperation Council and the EU). Hence, effective target-firm identification is an important research topic for business leaders and academics from both management and economic perspectives.
Technology firms in particular often exhibit unconventional growth patterns, which also makes firm valuation problematic, as it can leave their stocks hugely misvalued (i.e., overvalued) and thereby increase M&A activity (Rhodes-Kropf & Viswanathan, 2004). In their general conclusions on M&A research aimed at target prediction, Betton et al. (2008) claimed that predicting targets with any degree of accuracy has proven difficult. Hennessy (2017) argued that such firms add an extra and significant layer of complexity in terms of appropriate target identification and valuation. This is because such firms either have limited or no past history or undergo a constant and dynamically high volume of transformations throughout their lives; they also enjoy unique businesses and/or products and, consequently, have no directly visible and comparable peers or competitors. It is extremely challenging to identify and match appropriate targets that could recondition a company, to evaluate how much to invest, and to determine how to assimilate them (Christensen, 2016).
The problem stems from the fact that upper management (motive distortion aside), whether through unintended hubris or myopia, erroneously matches targets to the strategic intent of a deal. In this way, managers fail to distinguish targets that can improve the company's growth prospects from those that could dramatically dampen performance.
Most of the research so far concentrates on attempting to linearly predict the probability of targets being acquired through regression analysis and related distress signals (i.e., bankruptcy probabilities), using publicly available data sets on firms and then applying the aforementioned models. In reality, though, and in most cases, social researchers do not work with data from well-planned laboratory experiments where this would be possible. One cannot safely assume that past data, interaction data, and obviously human performance-related data are characterized by an ideal distribution (Sparrowe et al., 2001). Social science studies and the associated data have a tendency towards nonlinearity, clustering around certain sections, and skewness with respect to particular variables (Very et al., 2012). More recently, Lipton and Lipton (2013)—and earlier Tam and Kiang (1992)—also supported this notion by arguing that the variety and multiplicity of exogenous factors affecting mergers is so persistent at any given time that it becomes impossible to predict the level of future merger activity. For all the aforementioned reasons, the assumptions ingrained in such methods constitute an unrealistic prospect for many of the informational sets utilized by scholars.
Prediction performance can potentially be enhanced by utilizing more data in an effort to construct helpful models; yet Cremers et al. (2009) argued that, equally, relying on a predefined set of data in order to obtain a regression model is an additional issue. In both preceding statements, however, the inability to deal effectively with nonlinearity is a critical drawback of multiple linear regression (Detienne et al., 2003). This de facto observation in the domain of management science renders linear regressions questionable in some instances. If established knowledge of the nonlinearity exists, then it is much easier to treat; however, in many instances researchers may be unaware of the nonlinearity among the chosen variables. Consequently, companies employ less well-suited methods, pay the wrong (usually higher) price, fail to realize the synergistic potential, and significantly increase their costs after integrating the target in the wrong way (Ma & Liu, 2017). Furthermore, both the choice of target screening methodology and the associated performance measures have been long-standing issues facing researchers and managers alike. Past research has shown that ex ante capital market reactions to an acquisition announcement exhibit little relation to corporate managers' ex post assessments of fundamentals (Schoenberg, 2000). M&A announcements typically involve a large premium over prevailing market prices (between 30% and 40% on average) and result in a large and swift change in market prices, suggesting the announcement is news to the market (Eckbo, 2014). Approximately half of all M&As fail, with a very large proportion of those considered successful producing negligible gains for their shareholders. At the same time, the upper management of targets departs within 3–5 years of the acquisition's completion (Agrawal & Jaffe, 2000; Krug & Aguilera, 2005). Hence, the appropriateness of target screening and valuation methodologies in important economic sectors, such as the IT services sector, remains overlooked. It is thus of both business and academic importance to investigate how the underlying trajectories interact in order to seize the resulting opportunities.
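To make the nonlinearity point concrete, the following minimal sketch (our own illustration, not the authors' code) fits a logistic regression and a small feed-forward network to a synthetic two-class sample with a curved class boundary; the dataset, layer sizes, and other parameters are assumptions chosen purely for demonstration.

```python
# Illustrative sketch: a linear classifier struggles with a nonlinearly separable sample,
# whereas a small feed-forward neural network can capture the curved boundary.
# All data and settings here are synthetic/illustrative assumptions.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic two-class data with a nonlinear decision boundary.
X, y = make_moons(n_samples=2000, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Linear decision surface: logistic regression.
lr = LogisticRegression().fit(X_train, y_train)

# Nonlinear decision surface: a small multilayer perceptron.
nn = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X_train, y_train)

print("LR accuracy:", accuracy_score(y_test, lr.predict(X_test)))
print("NN accuracy:", accuracy_score(y_test, nn.predict(X_test)))
# Any accuracy gap here stems purely from the nonlinear class boundary, which is
# the drawback of linear models discussed above.
```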
The evolution of artificial intelligence, big data, and modern screening methods is steadily leading managers to reject a one-size-fits-all valuation in favor of more individualized, reconfigurable, and understandable accounting data—data that they can customize and fit into their own structures to meet their own decision-making needs. Consequently, managers increasingly investigate methods and digital data that are less static, less summarized, more concise, and more agile. In that line of thinking, O'Leary (2019) argued that emerging model designs enhance database capabilities and that the parallel development of multiple and different learning models will revolutionize the use of accounting data.
In this paper we explore, apply, and compare two types of target screening classification techniques—an NN and the traditional logistic regression (LR) M&A forecasting technique—in terms of successful target prediction in the US IT M&A market. We demonstrate the growing prospects of using an NN to systematize feature engineering from raw time series in a more methodical way, as a result of the strategic change in the types of digital commodities that decision-makers demand. In that respect, and within the context of M&As, predicting which companies will become takeover targets, and the ability to discriminate between high- and low-quality targets, is very important for managers and financiers, as well as for regulators and competition market committees. Our findings provide valuable insights to guide managers in financial and other organizations to improve their performance through suitable target (or nontarget) screening methods.
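As a rough illustration of what "feature engineering from raw time series" can mean in this context, the short sketch below (our own, hypothetical example; the series name and chosen summary features are not the paper's specification) condenses one firm's quarterly series into a few candidate screening features that could feed either classifier.

```python
# Illustrative sketch of turning a raw quarterly time series into firm-level features;
# the series ('revenue') and the chosen features are hypothetical examples only.
import pandas as pd

def series_features(quarterly: pd.Series) -> dict:
    """Summarize one firm's quarterly series into a few candidate screening features."""
    growth = quarterly.pct_change().dropna()
    return {
        "mean_growth": growth.mean(),                              # average quarter-on-quarter growth
        "growth_volatility": growth.std(),                         # variability of growth
        "last_to_first": quarterly.iloc[-1] / quarterly.iloc[0],   # overall trajectory
    }

# Example: eight quarters of (hypothetical) revenue for one firm
revenue = pd.Series([10.0, 11.2, 12.1, 11.8, 13.0, 14.4, 15.1, 16.3])
print(series_features(revenue))
```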
Having provided the introduction and the motivation for our study, in Section 2 we explore the literature, revolving around three major threads: first, examining the relevant empirical evidence on M&A deals in the USA; second, uncovering the valuation challenges; and third, introducing the two models discussed in Section 3. We do so with the aim of aiding the understanding of the evolution of such complex dynamics. In Section 3 we discuss our methodologies, where the determinant variables and NNs are also compared vis-à-vis LR, with a view to aiding an understanding of the tensions in model dynamics. Section 4 presents the results of our analysis, and our last two sections provide the conclusions, discussion, and recommendations.
Advancements in technological disruption are fueling a winner-takes-all environment, and firms with the most effective target-asset matching will create more distance and differentiation between the largest, most successful firms and the rest of the market (Hennessy & Hege, 2010). Any breakthrough in the capacity to forecast more reliably which firms will engage in a merger deal, and to spot which companies are likely to translate first-mover advantages into market power, would be very lucrative for an investor in the financial markets. Over the past two decades, more than one-third of worldwide deals have involved US-based companies; this percentage stood above 50% in 2000 owing to the tech bubble. In the first part of this section we perform a brief empirical exploration in order to (a) demonstrate how successful M&As of tech-driven, digital business models (e.g., fintechs) have become the instrument of choice for acquiring needed technologies, capabilities, and scalability in order to close innovation gaps and deflect competition, and (b) demonstrate to business leaders and academics, through the power of 'ball-park statistics', the importance of M&As in the IT sector as a path to growth, as well as to support our exposition of utilizing agile and reliable methods for successful target acquisition.
For the purposes of this study, three broad groups of data were required, and all M&A sample data were gathered from the SDC Platinum database. Our sample is subject to the following restriction criteria: (a) the sample period covers the aftermath of the technology bubble, comprising the 17 years of M&A transactions announced between 2000 and 2017; (b) private acquirer firms are excluded, even in circumstances where the target is a publicly traded company domiciled in the USA; (c) as a result, only public technology-firm targets are included, classified as technology companies by their Standard Industrial Classification (SIC) codes and sub-codes; (d) partial investments, divestments, joint ventures, spinoffs, and buybacks are excluded; and (e) only pure merger or acquisition transactions in which the acquirer owns more than 50% of the target's shares are included.
All IT M&A transaction records, the number of public firms in the technology sector from 2000 to 2017, and the relevant financial ratios for the same period were required. We employ the SIC coding system and define a technology firm as an institution that (a) focuses primarily on the manufacturing and development of technology (both software and hardware), (b) disseminates information via technology, and (c) is referred to as a subset of technology companies by the same coding and sub-coding system—for example, auxiliary IT devices and services. We utilize four-digit numerical SIC codes, which are used to assign companies to their respective industries. M&A data are also uniquely positioned for the analysis of market convergence, as firms challenged by technology convergence attempt to extend their capabilities from external sources through various corporate development activities, as argued previously herein. Given that M&As are stimulated by structural and functional market changes, it seems reasonable to examine whether technology-firm convergence is also related to changing market boundaries, as is evident in the case of technology convergence. That is, boundaries sometimes seem blurred, given that technology companies range from semiconductors to technology-based personal credit institutions (e.g., fintechs). More specifically, in order to ensure consistency, the relevant M&As in our sample included companies whose SIC codes range from 3571 to 3578, 3674 to 3699, and 7371 to 7379. The growing proximity between formerly distinct market segments through the incidence of M&As can act as a predictor for measuring the extent and rate of market convergence processes. Our initial sample consists of 6,392 deals, and both the NN and LR methods are tested on the whole sample. We then utilize two subsamples (targets vs. nontargets), as well as testing our data on two selected sample-weighting scenarios (i.e., a balanced 50/50 sample and an unbalanced 70/30 sample of targets to nontargets).
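As a rough illustration of the sample-construction steps just described, the following Python sketch (our own, using hypothetical column names rather than the actual SDC Platinum fields) applies the date, listing-status, SIC-range, deal-type, and ownership filters and then draws target-to-nontarget samples at a chosen weighting.

```python
# Illustrative sketch (not the authors' code) of the sample-construction logic described
# above, applied to a hypothetical SDC-style deal table. Column names such as
# 'announce_date', 'target_sic', 'target_public', 'acquirer_public', 'target_country',
# 'deal_type', 'pct_owned_after', and 'is_target' are assumptions, not actual SDC fields.
import pandas as pd

def in_it_sic_ranges(sic: int) -> bool:
    """True if a four-digit SIC code falls in the IT ranges used in the paper."""
    return (3571 <= sic <= 3578) or (3674 <= sic <= 3699) or (7371 <= sic <= 7379)

def build_sample(deals: pd.DataFrame) -> pd.DataFrame:
    """Apply restriction criteria (a)-(e) to a raw deal table (dates stored as datetimes)."""
    d = deals.copy()
    d = d[(d["announce_date"] >= "2000-01-01") & (d["announce_date"] <= "2017-12-31")]
    d = d[d["acquirer_public"] & d["target_public"] & (d["target_country"] == "US")]
    d = d[d["target_sic"].apply(in_it_sic_ranges)]
    d = d[~d["deal_type"].isin(["partial", "divestment", "JV", "spinoff", "buyback"])]
    d = d[d["pct_owned_after"] > 50]          # pure mergers/acquisitions only
    return d

def weight_sample(firms: pd.DataFrame, target_share: float, seed: int = 0) -> pd.DataFrame:
    """Draw a target/nontarget sample with a given target proportion (e.g., 0.5 or 0.7)."""
    targets = firms[firms["is_target"] == 1]
    n_nontargets = int(round(len(targets) * (1 - target_share) / target_share))
    nontargets = firms[firms["is_target"] == 0].sample(n=n_nontargets, random_state=seed)
    return pd.concat([targets, nontargets]).sample(frac=1, random_state=seed)  # shuffle rows
```

Under these assumptions, weight_sample(firms, 0.5) and weight_sample(firms, 0.7) would correspond to the balanced 50/50 and unbalanced 70/30 scenarios, respectively.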
The comparative use and analysis of methods discussed in this paper permits a controlled investigation of competing methodologies: grounded in a quantitative analysis of M&A transaction deal data, we were able to demonstrate that alternative methods can be utilized in an interdisciplinary manner. Reliably modeling the prediction of takeover targets has been a challenging field in the domains of forecasting and finance. We have reviewed M&A types and their classifications, explored their history, and discussed motives and causes. For example, bankruptcy and financial distress prediction topics have been discussed more extensively in the financial literature owing to their practical usage, especially by banks for credit scoring and by acquirers for target matching. We then identified our research objectives by analyzing our research problem to build a generic framework for understanding the general aspects of the market for corporate control and its importance within the US IT sector. Researchers have utilized a variety of methods for predicting takeovers, and we have attempted to review and evaluate these methods as per the financial literature. In order to demonstrate the practicality and accuracy of NNs, we have applied an LR model for comparison purposes. In our study, NNs performed better than LR on both samples, indicating their superiority in this setting.
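For readers who wish to see what such a comparison looks like in code, the following minimal sketch (our own illustration under assumed, hypothetical feature names; it is not the authors' implementation) trains an LR model and a small feed-forward NN on the same firm-level financial ratios and compares their hold-out performance, a procedure that could be repeated for both the 50/50 and the 70/30 samples.

```python
# Illustrative sketch of the NN-vs-LR comparison; feature names and model settings
# are assumptions for demonstration, not the paper's actual specification.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

FEATURES = ["market_to_book", "roa", "leverage", "sales_growth", "cash_ratio"]  # hypothetical ratios

def compare_models(sample: pd.DataFrame) -> dict:
    """Fit LR and a small NN on one weighting scenario and report hold-out accuracy/AUC."""
    X, y = sample[FEATURES], sample["is_target"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    lr = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
    nn = make_pipeline(StandardScaler(),
                       MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                                     random_state=0)).fit(X_tr, y_tr)

    return {
        "lr_accuracy": accuracy_score(y_te, lr.predict(X_te)),
        "nn_accuracy": accuracy_score(y_te, nn.predict(X_te)),
        "lr_auc": roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]),
        "nn_auc": roc_auc_score(y_te, nn.predict_proba(X_te)[:, 1]),
    }

# e.g. compare_models(weight_sample(firms, 0.5)) and compare_models(weight_sample(firms, 0.7)),
# using the hypothetical weight_sample helper from the earlier sketch.
```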
Our study has important implications for both managers and policymakers. Furthermore, scholars and managers have long been aware of both the tensions and the potential benefits emanating from interdisciplinary cooperation. A systematic examination of targets in the M&A environment can provide a heightened understanding of marketplace subtleties and can help moderate ambiguity in target projections. Therefore, the predictive results of the comparative methodology can be merged into market growth agendas in order to support managers and practitioners in effective decision-making. For example, top-level managers can utilize competing methodologies to create appropriate variable networks for expanding and advancing firm-level target matching, as well as for controlling for management myopia effects, which can help (re)focus acquisition intent. Policy setters and competition authorities can potentially extend their scope and reach beyond simply evaluating the features (i.e., causalities) of each market by considering the outcomes of interdisciplinary and cross-examination methods for effective competition regulation.
Our aim is to compare standard predictive tools with innovative ones in the finance field, and in the M&A field in particular, with the further aim of contributing to and pushing the boundaries of innovative methodologies that assist academics and professionals alike. Our study, like any other, also has limitations that call for expanded exploration in order to extend and enrich current and prior research on target forecasting studies and methods. As discussed herein, the direction of the relationship between the independent and dependent variables examined (cause–effect) is not categorically clear. It is very challenging to offer further elaboration on the major factors that drive the volume/size and value of M&A deals based on the data set at hand. Interdisciplinary field research and combined, triangulated methodologies, such as qualitative surveys, can potentially offer an avenue for further clarification. In terms of the data utilized, it is also imperative to make clear that we have confined our sample to a single industrial setting. Arguably, the setting itself, owing to its unique characteristics, necessitates further comparative research in other settings/sectors/industries; the relevance and consistency of this proposed research can be supplemented through future studies. A further limitation of our work is the need for external validation of our model and findings with a second, independent data set; for this reason, our results should be considered exploratorily predictive rather than a definitive remedy.
There is also no academic or empirical reason not to extend this study to other related research domains where big data are also potentially available, such as cross-border M&As, waves of consolidation, IPOs, and foreign direct investment in all its forms. All these areas present rich ground for developing models that predict not only which companies are likely to be targets, but also which deals will move forward and which types of expansion can actually add value over some medium-term horizon, such as 3–5 years.
The author(s) declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
The author(s) received no financial support for the research, authorship, and/or publication of this article.
About the journal:
Intelligent Systems in Accounting, Finance and Management is a quarterly international journal which publishes original, high-quality material dealing with all aspects of intelligent systems as they relate to the fields of accounting, economics, finance, marketing, and management. In addition, the journal is also concerned with related emerging technologies, including big data, business intelligence, social media, and other technologies. It encourages the development of novel technologies, and the embedding of new and existing technologies into applications of real, practical value; therefore, implementation issues are of as much concern as development issues. The journal is designed to appeal to academics in the intelligent systems, emerging technologies, and business fields, as well as to advanced practitioners who wish to improve the effectiveness, efficiency, or economy of their working practices. A special feature of the journal is the use of two groups of reviewers: those who specialize in intelligent systems work, and those who specialize in application areas. Reviewers are asked to address issues of originality and actual or potential impact on research, teaching, or practice in the accounting, finance, or management fields. Authors working on conceptual developments or on laboratory-based explorations of data sets therefore need to address the issue of potential impact at some level in submissions to the journal.