5 AI Issues That Can’t Be Ignored

Here’s a mindful approach for IT leaders who want to capture the opportunities offered by artificial intelligence

With the rise of platforms, products, and processes based on artificial intelligence, business and IT leaders who want to adopt AI responsibly face a steep learning curve. In this essay, adapted from Driving IT Innovation: A Roadmap for CIOs to Reinvent the Future, authors Heather Smith and James McKeen of Smith School of Business, Queen’s University, lay out five key issues that leaders will need to address as they transition to AI. Driving IT Innovation is based on insights drawn from focus groups with senior IT executives and managers.

One of the most challenging aspects of artificial intelligence is that it is leading managers and society at large to consider some of the broader impacts of technology adoption. “There are real social impacts to this technology,” said a manager, “and we need to adopt it in a way that is mindful of them.”

The focus group identified five big issues that need to be tackled when adopting AI:

1. Ask the bigger questions.

“AI raises many questions that still need to be resolved,” said a manager. Within organizations, the focus group readily listed a number of issues IT leaders are already discussing, including:

  • Should a bot self-identify as a robot?
  • Should AI be transparent about how it makes decisions?
  • Is informed consent needed to use AI?
  • Who accepts decision responsibility and accountability: The organization? The algorithm supplier? The data scientist?
  • Should waivers be required in some cases? Are they ethical?
  • What are some of the “back door” implications of AI (e.g., smart TVs that leak data)?

“We don’t understand what norms for using AI will be acceptable,” said a manager, “and these will likely vary around the world. They are likely to evolve more slowly than the technology itself.”

There are even bigger questions that must also be addressed. AI and robotics are beginning to affect labour markets, and most predictions point to mounting job losses over the next 10 years. One writer notes, “The disconnect with past work models is happening a lot faster than in the past. . . . We’ll soon see enormous waves of workers put out of work and ill prepared to take on very different jobs.”

The focus group was very aware of the potential societal dangers involved in such massive economic displacement and the fact that our institutions are ill-equipped to deal with these changes. “Our welfare, unemployment, retirement systems, and our universities all need to adapt,” said a manager. Another added, “We need to work to maximize ‘friendly’ AI to extend human intelligence and open new fields of employment.”

“There are real social impacts to AI and we all need to work together to identify the questions that need to be asked, establish norms for its use, and reform our social and educational institutions,” a third manager concluded.

2. Beware of anthropomorphism.

Anthropomorphism is the attribution of human traits, emotions, and intentions to non-human entities and is considered an innate tendency of human psychology. As computers become increasingly humanlike in their ability to interact conversationally, it is natural to ascribe human characteristics to them, and AI developers leverage this tendency to make computers easier to use. This is not necessarily unwise, but experts warn that inappropriate use of anthropomorphic metaphors creates false beliefs about how computers behave, such as overestimating their “flexibility.” For example, replacing a customer service call centre with chatbots could lead to disaster if the bots are unable to address complex human needs.

This is a real danger, said the focus group, when companies are under constant pressure to reduce costs.
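
To make the guardrail concrete, here is a minimal, hypothetical sketch of a support bot that self-identifies as automated (one answer to the focus group’s first question) and escalates requests it cannot handle to a human agent rather than overreaching. The class name and topic list below are invented for illustration and do not come from the focus group.

```python
# Hypothetical sketch: a support bot that discloses it is a bot and
# hands off complex requests to a human rather than overreaching.

# Topics assumed (for illustration only) to exceed the bot's competence.
COMPLEX_TOPICS = {"complaint", "refund", "safety", "legal"}

class SimpleSupportBot:
    def greet(self) -> str:
        # Self-identify up front rather than posing as a person.
        return "Hi, I'm an automated assistant, not a human. How can I help?"

    def handle(self, message: str) -> str:
        # Escalate when the request looks beyond the bot's competence,
        # instead of letting users overestimate its "flexibility".
        if any(topic in message.lower() for topic in COMPLEX_TOPICS):
            return "This needs a person. Transferring you to a human agent."
        return f"Here is some automated help with: {message}"

if __name__ == "__main__":
    bot = SimpleSupportBot()
    print(bot.greet())
    print(bot.handle("What are your opening hours?"))
    print(bot.handle("I want to file a complaint about my refund."))
```

Even this toy version exposes the trade-off the group worried about: the handoff to a human is exactly the cost that pressure to cut call-centre budgets tempts companies to remove.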

3. Work to develop trust.

It is critical that people and organizations be able to trust what technology can do. At these early stages of AI, this trust must be constantly tested. Organizations can also expect vendors to oversell AI’s capabilities, leaving people skeptical of what it can really do.

Furthermore, trust will vary by context. One manager noted: “The level of trust required depends on the types of decisions AI is making — less is needed when determining the best route to work and much more when there are safety implications.”

In addition, trust can be misplaced or abused and this should never be forgotten. “We had total confidence in our automated airplane tracking system until Malaysia Airlines Flight 370 completely disappeared. Such occurrences reveal inappropriate assumptions that are temporarily threatening and require a complete reassessment of how we are using technology,” stated a manager.

4. Build multiple work models.

As the world changes in response to new work practices resulting from adoption of AI, organizations will have to integrate their legacy systems and practices into this fast-paced, rapidly changing environment. No one knows what models will be effective in this new world, so the best advice is to experiment with multiple ones (e.g., crowdsourcing, distant manufacturing or transaction processing, and contract work).

Experience with different work models will help develop flexibility and agility and start to modify organizational cultures. Having an adaptive culture will give organizations much more than a one-time advantage. It could be key to their very survival. One manager quoted Charles Darwin: “It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change.”

5. Consider open AI.

OpenAI is a non-profit artificial intelligence research company, supported by Tesla’s Elon Musk, that aims to carefully promote and develop friendly AI in order to benefit, rather than harm, humanity as a whole. It is also an open-source project aimed at creating specifications for AI and associated programs and tools.

Its short-term goal is to build tools and algorithms that are shared publicly; over the longer term, it aims to develop better hardware that can perform more like a human. An open-source model is a cheaper way to address AI problems, and if it works, it could help advance AI for everyone.

Heather Smith is a senior research associate with Smith School of Business at Queen’s University and co-author (with James McKeen) of eight books. She is also a senior research associate with the American Society for Information Management’s Advanced Practices Council. James D. McKeen is Professor Emeritus at Smith School of Business and Senior Vice President and Chief Technology Officer at Empire Life Insurance. He has worked in the IT field for many years as a practitioner, researcher, and consultant.