Building trust in Artificial Intelligence – AI empowered running

This is the first article in a series about building trust in AI. The authors describe personal and professional AI use case examples and reflect on factors that need to be considered in ensuring trusted AI solutions.

AI in running

The world’s leading sports watch manufacturers have been building and refining algorithms to measure everything relating to cardiovascular fitness and movement. Such data is of great value to individuals who wish to maintain and improve aspects of their fitness, such as running economy.

I am a passionate long-distance runner, and lately I have been thinking about how AI could help runners become more successful in races. During competitions it would be great to identify the moments when you can push a bit beyond your assumed limit while conserving enough energy to make it to the finish line. Ideally, such an effort would neither cause injuries nor extend recovery time.

To realize my dream of a virtual race coach, however, certain conditions would need to be fulfilled before I could fully trust the solution.

What would it take for me to trust AI?

For me to build confidence in such a solution, at least the following conditions need to be fulfilled:

  • Transparency: Is the AI solution understandable, meaning the decision framework can be explained and validated? Do I know how my data is protected and what portion of it is used by the provider for their own purposes?
  • Reliability: Will the AI solution perform as intended when needed? Does it have ongoing learning capabilities?

Today, leading sports watch providers increasingly embrace machine learning algorithms to individualize analysis and to tailor outcomes. Also, innovative sports apparel and sports shoe suppliers are experimenting with sensors and AI capabilities.

However, the challenge of AI lies in transparency and reliability – can these companies build products that customers can truly trust?

Making full use of the potential of AI

It is a true challenge for manufacturers and AI solution providers to capture their future customers’ attention and to offer reliable, individualized products. EY’s Global Trusted AI Advisory Leader Cathy Cobey described the need to achieve and sustain trust in AI as follows: “An organization must understand, govern, fine-tune and protect all of the components embedded within and around the AI system. These components can include data sources, sensors, firmware, software, hardware, user interfaces, networks as well as human operators and users.”

EY’s trusted AI framework emphasizes five attributes that the solutions must have if they are to sustain trust:

  • Performance: The AI’s outcomes are aligned with stakeholder expectations and perform at a desired level of precision and consistency.
  • Bias: Inherent biases arising from the data, development team composition and training methods are identified, and these variables are addressed through the AI design. The AI system is designed with consideration for the needs of all impacted stakeholders and to promote a positive societal impact.
  • Transparency: When interacting with an AI algorithm, an end user is given appropriate notification and an opportunity to select their level of interaction. User consent is obtained, as required, for data captured and used.
  • Resiliency: The data used by the AI system components, and by the algorithm itself, are secured from unauthorized access, corruption and adversarial attack.
  • Explainability: The AI’s training methods and decision criteria can be understood, are documented, and are readily available for human operator challenge and validation.

Running with trusted AI

Before I invest in a virtual race coach, I need to gain confidence in the solution. I expect maximum levels of transparency and reliability. Additionally, I want proof that the solution will meet my individual needs and that it will constantly learn and evolve. Furthermore, I want my personal data to be well protected and used only for the agreed purposes.

In conclusion, I wish that AI solution providers would rigorously apply trust-by-design principles throughout the product and customer life cycles.

With or without a virtual race coach, I will continue to enjoy running to stay fit and inspired.


Interested in AI and virtual trust? Read more about AI on our Global pages.


Who?

Rene Felber

Rene Felber is the Tech Risk competence leader at EY Advisory Finland. He is passionate about risks, performance and digitalization.

rene.felber@fi.ey.com