Artificial intelligence (AI) is the ability of a computer software program to learn and perform cognitive functions similar to human intelligence.1 The software is programmed to gather and process very large amounts of data, and to produce information, suggestions, recommendations, predictions, decisions or actions, depending on what it is designed to deliver. The data is gathered from sources that the software is programmed to look for. These sources may be public or private, depending on their relevance to the application. The data gathered and processed may include text, charts, graphics, images, voices, sounds, videos, etc.
Generative AI (GenAI) is a category of artificial intelligence that creates content in response to a prompt.2 The prompt may be a simple question or a complex set of instructions. A simple example is the production of a research paper on a given topic. A complex example is the design of a new automobile based on given criteria and parameters. GenAI applications such as ChatGPT, developed by OpenAI, are very much in the news, although many other types of AI applications are being developed by technology giants such as Microsoft, Google, Meta and Apple, along with a multitude of smaller high-tech companies. It is expected that most AI applications under development will be customizable to various individual and organizational needs.
Major Implications
AI is a disruptive technology, namely a technology that will have highly transformative effects. Every white-collar job will be impacted, from clerical to professional. Every industry will also be impacted, some perhaps a lot more than others. It is not possible to imagine the extent of the transformative effects that will take place over time, any more than it was possible to imagine how electricity would transform the world soon after it was invented. In the context of risk management, AI is what is called a “known unknown,” namely a risk that is identifiable but not well understood with respect to its eventual consequences and outcomes. AI is in its infancy. It is very much like a toddler barely able to walk on its own at this moment. This AI toddler will grow and mature, and it needs to be raised properly if it is to become a responsible adult.
Many business leaders believe that “within just a few years, powerful AI systems will perform cognitive work at the same level (or even above) their human workforce.”3 Amy Webb, a professor of strategic foresight at the Stern School of Business of New York University, and the CEO of the Future Today Institute, argues that such thinking is misguided. “First, it’s too early to predict the exact future of AI (…). Exactly which jobs AI will eliminate, and when, is guesswork. It isn’t enough for an AI system to perform a task; the output has to be proven trustworthy, integrated into existing workstreams, and managed for compliance, risk, and regulatory issues. (…) Second, leaders are focused too narrowly on immediate gains, rather than how their value network will transform in the future. As AI evolves, it will require entire segments of business to be reimagined (…). Remember the earliest days of the Internet and web browsers, which were viewed as entertainment? No one planned for the fundamental transformation both would ignite” explains Ms. Webb.4
Although no one can fully anticipate the effects that AI will have over time, everyone agrees that they will be significant. The potential benefits associated with AI are immense. We are only beginning to conceptualize the use of AI in fields such as science, engineering, medicine, education, agriculture, business, finance, public policy, national defense, and the list goes on. The key is to unlock the potential of AI, while managing the many possible downsides and threats that are major concerns at this point. Chief among them are the effects of AI on jobs, the workforce and the economy.
Mixed Enthusiasm
A global survey of chief executive officers (CEOs) conducted by Ernst & Young indicates that CEOs “recognize the potential of artificial intelligence, but most are encountering significant challenges in formulating and operationalizing related strategies.”5 While over two-thirds of the CEOs surveyed “see the need to act quickly on GenAI, a similar proportion also report being stymied by uncertainty in this space, making it challenging to respond at speed.”6 Nonetheless, the vast majority of CEOs report that their organization is making progress on important matters related to AI (Figure 1). Based on the survey, more than 80% of organizations are actively involved in developing a vision for change, and experimenting with AI applications through pilot projects and partnerships.
Despite appearances of early adoption, “many business leaders have opted to wait for the AI dust to settle before designing a formal business strategy”7 according to the Information Systems Audit and Control Association (ISACA), headquartered near Chicago, Illinois. This global association of 170,000 information technology professionals also reports that some businesses have banned or restricted the use of GenAI “because of its tendency to give incorrect answers that sound confident or legitimate, leading users to be confused or misled when looking for relevant answers to their questions.”8 Moreover, companies such as Samsung, Apple, JP Morgan Chase, and Verizon are said to have “heavily restricted the workplace use of generative AI over security fears.”9 These findings are echoed by surveys conducted by McKinsey, which suggest that “most organizations are dipping a toe into the AI pool – not cannonballing. Slow progress toward widespread adoption is likely due to cultural and organizational barriers” according to McKinsey.10
Another survey conducted by the Boston Consulting Group (BCG) suggests that employee optimism toward AI is rising compared with a similar survey completed five years ago.11 However, the latest BCG survey notes that sentiment varies considerably across countries (Figure 2). Employees in developed countries such as the United States, the Netherlands and Japan are least optimistic, while employees in developing countries or regions such as Brazil, India and the Middle East are most optimistic. These results are not surprising because AI is likely to have a much greater impact on white collar jobs, which are more prevalent in developed economies.
Erratic Fears
According to the Collins Dictionary, “erratic” means irregular in performance, behavior, or attitude; inconsistent and unpredictable; having no fixed or regular course; wandering. In simple terms, erratic means all over the map. It is a good descriptor for the fears currently associated with AI. Nonetheless, erratic fears cannot be taken lightly given the nature of AI, and all of the uncertainties and potential disruptions associated with its eventual use everywhere imaginable.
Many experts believe that AI poses an existential threat to humanity. In May 2023, more than 350 signatories (many of them from organizations such as OpenAI, Microsoft and Google) signed off on a statement by the Center for AI Safety (CAIS), which affirms that: “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”12 A rather scary admission by those leading AI efforts. The list of signatories is visible on the CAIS website. It continues to grow with signatures by academics and business leaders.
Sober Second Thoughts
Some experts see things differently. “I do find the concerns exaggerated” says Richard Sutton, a computer scientist renowned for his research in machine learning. “AI is hyped both positively and negatively. It is a shame, but maybe that is the way that our society comes to pay attention to something that is, nevertheless, very important.”13 According to Daniel Schoenberger, a lawyer who worked on Google’s list of AI principles, there is an upside for AI leaders to emphasize the risks of AI because it gives the impression that their technology is highly sophisticated, which in turn drives up share prices, enquiries, product demand and downstream revenues. “It’s obvious that these guys benefit from the hype still being fueled” says Mr. Schoenberger.14
For context, job market disruptions are nothing new, suggests David-Alexandre Brassard, Chief Economist of CPA Canada. “Sixty percent of jobs in 2018 did not exist in 1940. Will the disruption be faster this time around? I argue that the short-term impacts should be greater: displacing workers from old roles is quicker than re-employing them in new ones” explains Mr. Brassard.15 “AI stands out because it brings the disruptions to a new crowd. Automation introduced machines and new processes into factory work, either modifying blue-collar jobs or displacing them. AI, on the other hand, introduces algorithms into white-collar or office work. Whether white-collar workers will be displaced or their role changed is still unknown, but AI is prone to take over low-risk information interpretation and decision-making” notes Mr. Brassard.16
According to Amy Webb, professor of strategic foresight at the Stern School of Business of New York University, and CEO of the Future Today Institute, the AI paradox is that “we need to think of the workforce as evolving with – rather than being supplanted by – generative AI. (…) Workers will have to learn new skills, iteratively and over a period of years. Leaders must adopt a new approach to maximize the potential of AI in their organizations, which requires tracking key developments in AI differently, using an iterative process to cultivate a ready workforce, and most importantly, creating evidence-backed future scenarios that challenge conventional thinking” says Ms. Webb.17 Evidently, every new technology has implications for the workforce. Some jobs disappear while others are created. What matters most is to plan for change and manage the risks involved.
Risk Classification Model
Risk classification models, namely categories of risk used to identify, analyze and aggregate risk information, are beginning to emerge for AI. According to McKinsey, there are six overarching types of risks that organizations have to be mindful of when developing and using AI. “In our experience, most AI risks map to at least one of the overarching risk types, and they often span multiple types. (…) Therefore, organizations should ask if each category of risk could result from each AI model or tool the company is considering or already using” according to McKinsey experts.18 Table 1 briefly describes the AI risk categories identified by McKinsey.
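The screening question McKinsey poses, namely whether each category of risk could result from each AI model or tool in use, can be sketched as a simple checklist routine. This is only an illustrative sketch: the category names below are placeholders, since Table 1 is not reproduced here, and the assessment inputs would come from the organization’s own review.

```python
# Illustrative sketch: screening each AI tool against a set of risk
# categories, asking for each one whether it could apply to the tool.
# The category names are placeholders, not McKinsey's exact list.

RISK_CATEGORIES = ["privacy", "security", "fairness",
                   "transparency", "safety", "third-party"]

def screen_tool(tool_name, applicable):
    """Return the risk categories flagged for a given AI tool.

    `applicable` maps category names to True/False based on the
    organization's own assessment of the tool.
    """
    flagged = [c for c in RISK_CATEGORIES if applicable.get(c, False)]
    return {"tool": tool_name,
            "flagged": flagged,
            "needs_review": bool(flagged)}

# Example: a hypothetical chat assistant flagged on two categories.
result = screen_tool("chat-assistant", {"privacy": True, "fairness": True})
print(result["flagged"])
```

In practice such a checklist would feed a risk register rather than a print statement, but the point stands: the screening is a per-tool, per-category question, and any flagged category triggers a deeper review.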
From a societal perspective, AI is fraught with many other risks. The ones mostly in the news and perhaps of biggest concern, include the risk of dominance by those at the forefront of AI, the risk of widespread job losses and economic hardship, the risk of growing inequalities among countries and populations, the risk of disinformation and interference in electoral processes, the risk of ever more sophisticated fraud schemes and cybercrime, and the risk of copyright infringement and plagiarism.
In December 2023, the New York Times filed a lawsuit against OpenAI and Microsoft, alleging copyright infringement. The New York Times alleges that OpenAI and Microsoft exploited its content without permission to create their artificial-intelligence products, including OpenAI’s ChatGPT and Microsoft’s Copilot. “Times journalism is the work of thousands of journalists, whose employment costs hundreds of millions of dollars per year. (…) The defendants have effectively avoided spending the billions of dollars that the Times invested in creating that work, by taking it without permission or compensation,” the New York Times states in its complaint.19
According to reporters of the Wall Street Journal (WSJ) “several news organizations are exploring how the [GenAI] technology can be harnessed to their benefit – from automating publishing to writing headlines or entire articles. But media companies also see a growing threat. AI tools such as ChatGPT, Copilot and Google’s nascent search-AI tool provide detailed answers to questions that could reduce the need for users to click on links to news sources, depriving those sites of traffic and ad revenue.”20 Robert Thomson, chief executive of News Corp (parent of the WSJ) is very concerned about AI tools using publisher content without permission.21
Laws and Regulations
Bill Gates, founder of Microsoft, believes AI is the most revolutionary technology he has seen in decades. “The rise of AI will free people up to do things that software never will – teaching, caring for patients, and supporting the elderly, for example,” he wrote in a blog post.22 Mr. Gates also believes that AI needs to be regulated. “We should try to balance fears about the downsides of AI – which are understandable and valid – with its ability to improve people’s lives”23 he adds. Mr. Gates is one of the many signatories of the CAIS statement, which proclaims that “mitigating the risk of extinction from AI should be a global priority.”24 The global survey conducted by the Boston Consulting Group reveals that 79% of employees “believe that AI-specific regulations are necessary.”25 Responses are above 80% in most European countries. They are slightly lower in Canada at 77% and the United States at 74%. These responses leave no doubt about the need for strong regulations.
Paradoxically, comprehensive laws and regulations may actually speed up the adoption of AI by removing many uncertainties, and giving organizations a clear (or at least clearer) understanding of public-policy objectives. Adnan Masood, Chief AI Architect at UST, a global provider of information technology solutions, observes that “the landscape of AI-related regulations is diverse, with nations carving out their unique yet overlapping [legal and regulatory] frameworks.”26
Early in 2022, the United States adopted the Algorithmic Accountability Act of 2022. This legislation “requires certain businesses that use automated decision systems to make critical decisions to study and report about the impact of those systems on consumers. Critical decisions include those that have a significant effect on a consumer’s life such as the cost or availability of health care, housing, educational opportunities, or financial services.”27 The United States Federal Trade Commission (FTC) is mandated to issue regulations for implementing the legislation, and to lead enforcement actions in collaboration with state officials. The law also establishes a Bureau of Technology to advise the FTC about the technological aspects of its functions.28
In October 2022, the United States’ Office of Science and Technology issued the AI Bill of Rights, which outlines “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. (…) These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.”29 The AI Bill of Rights is accompanied by a handbook that explains each principle, including why it is important, what should be expected, and how it should be applied or implemented. Table 2 summarizes the AI Bill of Rights principles.
Early in 2024, the European Union (EU) adopted the AI Act, considered to be “the world’s most comprehensive legislation yet on artificial intelligence, setting out sweeping rules for developers of AI systems, and new restrictions on how the technology can be used. (…) The rules, which are set to take effect gradually over several years, ban certain AI uses, introduce new transparency rules, and require risk assessments for AI systems that are deemed high-risk. (…) The new legislation applies to AI products used in the EU market, regardless of where they were developed. It is backed by fines of up to 7% of a company’s worldwide revenue.”30 Although the legislation applies only to EU countries, “it is expected to have a global impact because large AI companies are unlikely to want to forgo access to the bloc, which has a population of about 448 million.”31
As explained by Dentons, a global law firm with offices in 80 countries, the EU legislation establishes obligations based on the risks and impacts of AI on individuals and society at large.32 AI systems are classified as limited risk, high risk, or unacceptable risk. AI systems that pose unacceptable risks are banned. These systems are described as those presenting a threat to fundamental human rights such as biometric categorization, untargeted facial recognition, emotional recognition in workplace and educational settings, manipulation of human thoughts and behavior, and the exploitation of people’s vulnerabilities. Every AI system (regardless of risk) is subject to transparency requirements. For instance, EU citizens must be aware that they are interacting with an AI system, whenever one is being used, and for whatever purpose. Table 3 outlines AI systems with unacceptable risks and those considered high risk based on the EU legislation.
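The tiered logic described above, where unacceptable-risk systems are banned, high-risk systems require risk assessments, and every permitted system carries transparency obligations, can be sketched in a few lines. This is an illustrative sketch of the structure only, not a legal determination, and the tier assigned to any real system would depend on the legislation’s detailed criteria.

```python
from enum import Enum

# Minimal sketch of the EU AI Act's tiered structure as described in
# the text: banned, high-risk, and limited-risk tiers, with
# transparency obligations applying to every permitted system.
# Tier assignments are illustrative, not a legal determination.

class RiskTier(Enum):
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

def obligations(tier: RiskTier) -> list[str]:
    """Return the obligations attached to a given risk tier."""
    if tier is RiskTier.UNACCEPTABLE:
        return ["banned"]                  # prohibited outright
    duties = ["transparency"]              # applies to every permitted system
    if tier is RiskTier.HIGH:
        duties.append("risk assessment")   # additional duty for high-risk systems
    return duties

print(obligations(RiskTier.HIGH))   # ['transparency', 'risk assessment']
```

The design point is simply that transparency is the floor for all permitted systems, with obligations layered on top as the assessed risk rises.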
Foregone Conclusion
Because AI systems are developed by humans, we can expect them to mirror the good, the bad and the ugly of human nature. The opportunities and threats of AI are compounded by the fact that it can replicate undesirable behaviors exponentially, driven by superior cognitive abilities. We cannot anticipate what the future holds when it comes to AI, any more than we could anticipate the many outcomes of other disruptive technologies when they were invented, such as electricity, automobiles, computers, the Internet, nuclear energy, etc. However, we can anticipate that the not-so-well-intended among us will be tempted to exploit AI in many undesirable ways. That is a foregone conclusion – a result obvious to everyone before it happens. Principles, policies, laws, regulations, risk assessments, oversight, whistleblowing, and sanctions are needed without delay, which is exactly what so many AI experts are asking for. Let us hope that government officials, in consultation with business leaders, will ensure that AI systems are developed and used in accordance with the public interest.
__________________________
1 McKinsey & Company, “What is AI” McKinsey Explainers (April 2023), p.2.
2 McKinsey & Company, “What is AI” McKinsey Explainers (April 2023), p.4.
3 Amy Webb, “How to Prepare for a GenAI Future You Can’t Predict” Harvard Business Review, (August 2023).
4 Amy Webb, “How to Prepare for a GenAI Future You Can’t Predict” Harvard Business Review, (August 2023).
5 Andrea Guerzoni, et al., Is the AI buzz creating too much noise for CEOs to cut through? Ernst & Young (Oct. 2023), p.1.
6 Andrea Guerzoni, et al., Is the AI buzz creating too much noise for CEOs to cut through? (…), p.1.
7 ISACA, The Promise and Peril of the AI Revolution: Managing Risk, (2023), p.5.
8 ISACA, The Promise and Peril of the AI Revolution: Managing Risk, (2023), p.6.
9 ISACA, The Promise and Peril of the AI Revolution: Managing Risk, (2023), p.6.
10 McKinsey & Company, “What is AI” McKinsey Explainers (April 2023), p.7.
11 Boston Consulting Group, AI at Work: What People Are Saying, (June 2023), p.4.
12 Center for AI Safety, Statement on AI Risk, (https://www.safe.ai/work/statement-on-ai-risk).
13 Victoria Wells, “What You Need To Know About Office Robots” National Post, (May 31, 2023).
14 S. Schechner and D. Seetharaman, “How Worried Should We Be About AI’s Threat to Humanity?” WSJ (Sept. 4, 2023).
15 David-Alexandre Brassard, “Opportunity Cost – The range of AI’s true influence will be determined by how it is eventually regulated,” Pivot Magazine (CPA Canada, March-April 2024), p.10.
16 David-Alexandre Brassard, “Opportunity Cost – The range of AI’s true influence will be determined by (…), p.10.
17 Amy Webb, “How to Prepare for a GenAI Future You Can’t Predict” Harvard Business Review, (August 2023).
18 Kevin Buehler, et al., Getting to know – and manage – your biggest AI risks (McKinsey Analytics, May 2021), p.3.
19 Alexandra Bruell, “New York Times Sues Microsoft and OpenAI, Alleging Copyright Infringement” WSJ (Dec. 27, 2023).
20 Alexandra Bruell, “New York Times Sues Microsoft and OpenAI, Alleging Copyright Infringement” (…).
21 Alexandra Bruell, “New York Times Sues Microsoft and OpenAI, Alleging Copyright Infringement” (…).
22 Alyssa Lukpat, “Bill Gates Says AI Is the Most Revolutionary Technology in Decades” WSJ (March 22, 2023).
23 Alyssa Lukpat, “Bill Gates Says AI Is the Most Revolutionary Technology in Decades” WSJ (March 22, 2023).
24 Center for AI Safety (CAIS), Statement on AI Risk, (https://www.safe.ai/work/statement-on-ai-risk).
25 Boston Consulting Group, AI at Work: What People Are Saying, (June 2023), p.11.
26 Adnan Masood, “Why companies must prepare for future AI regulation” CIODive, (October 23, 2023).
27 Congress.Gov, S.3572 – Algorithmic Accountability Act of 2022, (117th Congress, 2021-2022).
28 Congress.Gov, S.3572 – Algorithmic Accountability Act of 2022, (117th Congress, 2021-2022).
29 The White House, Blueprint for an AI Bill of Rights (October 2022; https://www.whitehouse.gov/ostp/ai-bill-of-rights/).
30 Kim Mackrael and Sam Schechner, “European Lawmakers Pass AI Act, World’s First Comprehensive AI Law” WSJ, (March 13, 2024).
31 Kim Mackrael and Sam Schechner, “European Lawmakers Pass AI Act, World’s First Comprehensive AI Law” (…).
32 Dentons, The New EU AI Act – the 10 key things you need to know now, (December 14, 2023).
Copyright © 2025 Noranda Education Inc. All rights reserved.