
01/06/2020

Stories of AI’s Future

David Berglund
Head of Artificial Intelligence
FIS | Banking Solutions
E:  david.berglund@fisglobal.com

With a quick glimpse at your Netflix queue, it may seem like stories of AI and super-computing have only been around since the '80s (Blade Runner, anyone?). But long before replicants were tormenting humans, we had stories of what our future with technology might entail. Ancient civilizations didn't have the engineering skills to build AI, but their stories served as both a warning and an inspiration for what might be to come.

The earliest images of an AI are thought to come from ancient Greece at least 2,700 years ago. One notable 'bot' was named Talos, portrayed by Hesiod around 700 BCE. Talos was a giant automaton made of bronze that patrolled the island of Crete, circling it three times per day and protecting it from invaders by hurling giant rocks at their armies and ships. Talos was a very primitive form of AI, racing along a prescribed path around the island. He could be compared to the rule-based chatbots that were commonplace several years ago.

Such digital agents can solve one task well (e.g., run and look for invaders, then throw rocks) but are easily thrown off course. We know that many tasks are not so simple: they require judgment and adaptability. Even the lowly self-service checkout station at the grocery store (or smart ATMs, for that matter) can be prone to frustrating errors. For the Greeks, each time an artificial being was sent to earth, it resulted in problems for humans. The ideas are great up in the heavens, when used by gods or left in literary form, but when they interact with humans we get chaos (or a frustrated customer who can't figure out how to complete a deposit).
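
To make the analogy concrete, here is a minimal sketch of a Talos-style, rule-based agent: it handles exactly the inputs its rules anticipate and needs a fallback for everything else. The intents and replies are illustrative assumptions, not any particular product.

```python
# Hypothetical keyword rules: the "prescribed path" the bot can follow.
RULES = {
    "balance": "You can check your balance in the mobile app.",
    "hours": "Branches are open 9 a.m. to 5 p.m., Monday through Friday.",
}

def reply(message: str) -> str:
    """Answer if a rule matches; otherwise hand off gracefully."""
    for keyword, answer in RULES.items():
        if keyword in message.lower():
            return answer
    # Off the prescribed path: fall back to a human rather than guess.
    return "Sorry, I didn't understand that. Let me connect you to an agent."

print(reply("What are your hours?"))       # matches a rule
print(reply("Why was my card declined?"))  # thrown off course -> fallback
```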

While many advanced forms of machine learning make it seem like our phone, Facebook feed and Amazon recommendations "know us," it's all probabilistic, and the AI can be fooled if we ask too much of our algorithms. Understanding the frailty of your bots and algorithms is key if you want to avoid brand embarrassment, or worse. Start by understanding the limits of your use case, how your model compares to others, where it was tested, whether the data flowing into your model resembles the original training data, and what your fallback route is. A simple control framework can ensure you understand the risks and are positioned to succeed even when things go wrong (and we know they will).
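
As one hedged example of such a control, the sketch below assumes a Population Stability Index (PSI) check, a common way in banking to test whether a feature's live values still resemble the training data. The 0.25 threshold and the manual fallback route are illustrative assumptions, not a standard.

```python
import numpy as np

def psi(training_values, live_values, bins=10):
    """Population Stability Index between two samples of one feature."""
    cuts = np.percentile(training_values, np.linspace(0, 100, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # catch out-of-range live values
    t_pct = np.histogram(training_values, cuts)[0] / len(training_values) + 1e-6
    l_pct = np.histogram(live_values, cuts)[0] / len(live_values) + 1e-6
    return float(np.sum((l_pct - t_pct) * np.log(l_pct / t_pct)))

def score_or_fall_back(model, batch, training_sample, feature, threshold=0.25):
    """Use the model only while live data still resembles training data."""
    if psi(training_sample[feature], batch[feature]) > threshold:
        return "route_to_manual_process"  # hypothetical fallback route
    return model.predict(batch)
```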

The story of Talos also imagines the potential usefulness of an AI agent: his ability to run consistently without tiring is a benefit many of us look for in AI today. Consistency is useful when processing loan applications or generating reports. If we can define a task with complete precision and get a bot to complete it 24x7, we can see both quality and efficiency benefits.

Fast forward 2,500 years to the early 19th century, when Mary Shelley doubled down on many of the same themes from the Greeks and launched science fiction (with a splash of horror) as a new literary genre. There's an obvious ethical connection between Frankenstein and today's highly advanced "black box" AI applications.

First, are we creating the AI for the right reasons? Is it to pat ourselves on the back for using shiny, buzzy tech, or because it's the right tool for the job? Second, as we seek greater predictive power from our AI models, we give up some of our ability to understand how they work.

One Google researcher has said[1] that machine learning algorithms have become a form of alchemy. Even if we put a human in the loop before taking an automated action (e.g., a credit decision), we must be confident that no bias has been injected into the decision by a faulty model, bad data or a misinterpretation of results. Model risk management standards don't go away when we apply advanced technology; they become exponentially more important as automated decisions are scaled across our businesses.

"There's an anguish in the field…many of us feel like we're operating on an alien technology." – Ali Rahimi

Much like the story of Frankenstein, HBO's Westworld portrays the potential for AI to run wild. In the show, rich vacationers visit a futuristic amusement park filled with robotic "hosts" who allow visitors to live out their fantasies. What could go wrong? Well, like Frankenstein, the scientists are fixated on achieving human-level intelligence in the robots without the balance of individual responsibility. There's no consideration of what it would mean to create a new, dependent species, and no understanding of the complexities of what they've built. In Westworld, the AI unleashes a reign of terror on the humans.

"Nothing can possibly go wrong ... go wrong ... go wrong."—Quote from Westworld (1973)

Does that mean the future for AI is truly dystopian? No. Like all technologies, AI can be used to create abundance or destruction. We simply need to be thoughtful in the application of advanced technologies and ensure we have teams in place to create and maintain AI. The "care and feeding" of AI is just as important as the initial product or algorithm design. Most forms of AI use machine learning, which should improve performance over time based on new inputs (e.g., payment requests) and outputs (e.g., fraud or not). The lesson? Ensure you have a plan (people and controls) in place to monitor the results of your AI over time, and ensure that incoming data and results align with expectations. The last thing you want is for your bot to become a modern-day Frankenstein[2].
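
One way to picture that "care and feeding": the sketch below tracks how a fraud model's flags compare with the outcomes that arrive later and reports when precision drops. The window size and thresholds are assumptions for illustration, not recommended values.

```python
from collections import deque

class ModelMonitor:
    """Track recent fraud flags against later-confirmed outcomes."""

    def __init__(self, window=1000, min_precision=0.80):  # assumed settings
        self.flag_outcomes = deque(maxlen=window)
        self.min_precision = min_precision

    def record(self, predicted_fraud: bool, was_fraud: bool):
        # Only flagged transactions count toward precision.
        if predicted_fraud:
            self.flag_outcomes.append(was_fraud)

    def healthy(self) -> bool:
        if len(self.flag_outcomes) < 100:  # not enough evidence yet
            return True
        precision = sum(self.flag_outcomes) / len(self.flag_outcomes)
        return precision >= self.min_precision  # if False, alert a human
```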

Four takeaways:

  1. Don’t be blinded by self-satisfaction, buzz, the fear of missing out or the glory of achievement. Stay focused on the deeper “why.”
  2. Have the right people on board to understand how the technology works and to ask the right questions. Have controls in place to avoid unnecessary risk.
  3. Remember, the launch is just the beginning. Ensure you know how to measure success and be ready for the unexpected with solid controls and ‘real’ people as needed.
  4. This holiday season, enjoy working through the AI-themed titles in your Netflix queue for both viewing pleasure and inspiration for what the AI of the future may mean for banking and payments.

The best stories are built on conflict and a bit of self-awareness. That's why most films and science fiction literature portray a more dismal future with AI, including uncomfortable relationships with our AI assistants (e.g., falling out of touch with people and in love with a virtual assistant in the movie Her). But fiction also shows the potential for our voice-based agents to predict, act and ease our everyday lives. For instance, JARVIS from Iron Man could handle everything from simple tasks like securing building access to running businesses for Stark Industries. When things go well with technology and AI, ease and simplicity seem to fall into place; they feel like obvious conclusions to the way things "should" work.

There's a timeless link between science and imagination. We have an impulse to imagine a future that doesn't yet exist, and despite many negative images of AI agents, the movies still excite people. Stay inspired in your thinking as you work through your strategic plans for 2020 and beyond. Keep in mind that if you can imagine it, it's likely both possible and inevitable.

[1] https://www.sciencemag.org/news/2018/05/ai-researchers-allege-machine-learning-alchemy

[2] https://en.wikipedia.org/wiki/Tay_(bot)
