On October 12, 2016, the White House released a report on the future of artificial intelligence and its potentially transformative impact on the economy and society. Although the report does not focus on the application of artificial intelligence (AI) technologies in the financial services industry, it provides a helpful overview of the different categories of technology presently referred to as AI, explains machine learning systems in layman's terms, and highlights risks associated with the adoption and implementation of AI-enabled products that will be of interest to financial services companies developing or implementing such technology.

AI technology has a broad array of potential applications in the financial services space, including:

  • personalization of financial services through analytics-driven recommendation engines;
  • automation of traditional teller functions through smart assistants and voice recognition technology;
  • robo-advisory services in the wealth management space;
  • automated fraud detection, as well as anti-money laundering and anti-terrorist financing compliance monitoring; and
  • predictive cybersecurity monitoring and response systems.

The report highlights a number of risks associated with the adoption of AI technologies that are likely to be of concern to both regulators and financial services companies testing and implementing such technologies.

  • First, machine learning tools built on training datasets that are not representative of the population to which the model is later applied can produce biased and inaccurate statistical predictions (a brief illustration follows this list). The conscious or unconscious biases of the teams building AI algorithms and tools can also skew results. The opacity of AI algorithms and systems (as compared to traditional analytics scoring techniques) makes monitoring for bias in AI systems especially challenging.
  • Second, incomplete testing of AI systems before implementation could lead to unpredictable and potentially unsafe outcomes. Safety is a particular concern because AI systems are often “trained” in the laboratory on a narrower set of objects and situations than they will encounter in the real world, which makes their behavior somewhat unpredictable once deployed (see the second sketch below).
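
To make the first risk concrete, the following sketch (ours, not the report's, using entirely synthetic data) trains a simple scikit-learn classifier on a dataset dominated by one subpopulation and then measures accuracy separately for each group. The gap that emerges is the kind of bias the report warns about, and it would be easy to miss without deliberately auditing per-group performance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, w):
    """Synthetic subpopulation: labels depend on features via weights w."""
    X = rng.normal(size=(n, 2))
    y = (X @ w + rng.normal(scale=0.3, size=n) > 0).astype(int)
    return X, y

# Groups A and B have different feature-label relationships.
Xa, ya = make_group(1000, np.array([1.0, 1.0]))   # well represented
Xb, yb = make_group(1000, np.array([1.0, -1.0]))  # underrepresented

# Training set: 95% group A, 5% group B -- not representative of
# the population the model will later be applied to.
X_train = np.vstack([Xa[:950], Xb[:50]])
y_train = np.concatenate([ya[:950], yb[:50]])

model = LogisticRegression().fit(X_train, y_train)

# Held-out accuracy diverges sharply between the two groups.
print("group A accuracy:", accuracy_score(ya[950:], model.predict(Xa[950:])))
print("group B accuracy:", accuracy_score(yb[950:], model.predict(Xb[950:])))
```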

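The second risk, a mismatch between laboratory conditions and the field, can be illustrated just as simply. The sketch below (again ours, with hypothetical data) fits a flexible model to observations drawn from a narrow input range and then queries it outside that range; the extrapolated predictions swing wildly, echoing the report's point that behavior outside the training distribution is hard to predict.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Laboratory" data: a narrow slice of the input space, x in [0, 1].
x_train = rng.uniform(0.0, 1.0, size=30)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.1, size=30)

# A flexible model fit only to that slice.
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# In-range predictions track the data; out-of-range ones do not.
for x in [0.25, 0.75, 1.5, 2.0]:  # the last two fall outside [0, 1]
    print(f"x={x:4.2f}  prediction={model(x):12.2f}  truth={np.sin(2 * np.pi * x):6.2f}")
```

Real AI systems fail in subtler ways, but the mechanism is the same: the model carries no information about inputs it never encountered in training.
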
As AI technology matures and gains broader traction across industries, there is no doubt that regulators, technology companies and entrepreneurs will have a robust discussion regarding the proper role of regulation in addressing bias, predictability and safety concerns with respect to AI-enabled products. The report suggests that policy discussions regarding AI should focus on protecting public safety, and that regulatory responses should aim to lower the costs and barriers to innovation without adversely impacting safety and market fairness.