Winning trust in Artificial Intelligence
With the significant benefits that AI offers firms and customers, now is the time for business to take action to build trust, says IBM's Bill Kelleher.
AI is reshaping our world as we know it, transforming every sector from education to transportation. It’s bringing faster, smarter and more personalised services, something we can all appreciate. But, as with any new wave of technology and despite the benefits, there are questions and concerns.
A recent study by the IBM Institute for Business Value found that while 82% of organisations are considering AI adoption, over half have security and privacy concerns about the use of data.
With the significant benefits that AI offers businesses and customers, now is the time to take action to build trust. Only by embedding ethical principles into AI applications and processes can we build systems that people can trust. To achieve this, we must first identify what this entails and where to begin. As a starting point, to maximise and accelerate the business value and impact of AI systems, it is imperative to build systems whose decisions can be explained.
Companies developing or using AI should have clear principles around its development, deployment and governance. These principles need not be complex, but they should, as a minimum, be championed by a company’s board or senior management, be open to public input and be translatable into tangible actions. One essential principle, for example, is transparency. This could mean your phone service provider telling you whether your questions are being answered by a chatbot or a person.
Good AI governance
Essential to this is good AI governance. With those responsible for gathering the data, creating the algorithms and applying them to businesses often working separately, it’s important to consider the different stages involved in creating AI solutions and to monitor each stage closely. But with a number of frameworks already in development, companies do not need to create these processes from scratch.
Earlier this year, the European Commission published its Ethics Guidelines for Trustworthy AI, designed to set a global standard for advancing AI ethically and responsibly. IBM, through our AI Ethics Global Leader, Francesca Rossi, helped create the guidelines, which identify seven fundamental requirements to help businesses shape their approach to building trustworthy AI.
The guidelines also contain an assessment list that can be used as a roadmap for companies putting trustworthy AI into operation. It covers a broad range of areas, but businesses can start with two of the most crucial challenges of AI: mitigating bias and ensuring people understand the rationale behind AI decisions. The guidelines recognise that there is no “one-size-fits-all” solution to AI ethics, with different situations raising different challenges.
The assessment list can be piloted by companies for the remainder of 2019, with feedback from the trial phase shaping the final list due in 2020. The final guidelines will look to include use-cases, demonstrating how the guidelines can be applied in different AI contexts.
Mitigating bias
The data collected and fed by humans into an AI application often contains implicit racial, gender or other biases. That could result in a system that sifts through first-round job applications with a bias towards candidates’ age, education or address, for example. Tools to detect and mitigate bias like this are in development and already on the market, and companies have a responsibility to ensure they are using them, or only working with AI providers that do. Additionally, the rationale behind recommendations made by an AI system must be explainable to those directly and indirectly affected.
To this end, at IBM we have already been putting principles of trusted AI into practice in tools such as Watson OpenScale, which gives organisations greater control over their AI by detecting bias and explaining how an outcome was reached. Our AI Fairness 360 toolkit provides developers with algorithms that can compensate for bias in the training data.
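One general technique behind compensating for biased training data is reweighing: each training example is assigned a weight so that, after weighting, group membership and outcome are statistically independent. The snippet below is a minimal sketch of that idea in plain Python, not IBM’s implementation:

```python
from collections import Counter

def reweigh(groups, labels):
    """Assign each example the weight w(g, y) = P(g) * P(y) / P(g, y).
    Over-represented (group, label) pairs are down-weighted and
    under-represented ones up-weighted, removing the association
    between group and label in the weighted data."""
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical data: group A is favoured (two 1s to one 0),
# group B is disfavoured (one 1 to two 0s)
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweigh(groups, labels)
# weights: [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

The reweighted examples can then be passed to any learner that accepts per-sample weights, leaving the model itself unchanged.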
Society’s decision to trust, or not to trust, AI and the companies that deliver it will determine its success. As leaders and companies, we must earn that trust. AI ethics should not be viewed as an isolated business objective to be bolted on after deployment; it is a vital part of business performance. Only by embedding ethical principles into AI applications and processes can we build systems that people trust.