Explainable AI – why humans must always be able to understand what AI is thinking
- Summary: The ability to explain AI processes is essential for business decision-making. Christian Pedersen of IFS sets out the benefits of building in transparency from the start of AI integration.
Global research by McKinsey shows that the adoption of AI in businesses more than doubled between 2017 and 2022. In 2017, 20% of respondents reported adopting AI in at least one business area. By 2022, that figure stood at 50%.
Much of this growth is being enabled by increasing connectivity, including the ongoing roll-out of IoT. By observing how end-users interact with an application and analyzing the data that streams in from sensors across an IoT infrastructure, AI can automate business processes.
Essentially, this creates a feedback loop, allowing the automation of processes to become more streamlined and refined over time. That gives businesses the potential to achieve significant efficiencies. However, to realize this potential, AI also needs to be explainable, so that humans can interact with it and ensure it is used for the benefit of the organization.
Why direct human engagement is not necessarily required
The use of AI already delivers significant operational efficiencies for the companies that have integrated it into their systems, and it has the potential to go much further. By learning how processes work, AI gains the capability to enhance them. Part of this is likely to involve reducing the amount of user involvement. However, AI cannot, and should not, run in complete isolation from humans.
On one level, users will need to interact with the technology to better understand what is happening in the business. That is one aspect of the application of AI in businesses, but it is far from the only one.
On many occasions, when businesses are running IoT systems, for example, there will be no direct human engagement at all. The focus will be on the automated maintenance of a piece of equipment: the system feeds in data, and when something goes wrong a ticket is automatically raised and all the necessary fix information is passed on to a technician, without direct user involvement.
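As a rough sketch of that pattern, the Python below flags a sensor reading that deviates sharply from recent history and assembles a ticket payload. All names and thresholds here are illustrative assumptions rather than a description of any particular product; a production system would use a far more robust anomaly model and call a real ticketing API.

```python
from collections import deque
from statistics import mean, stdev

WINDOW = 50        # number of recent readings used as the baseline (illustrative)
Z_THRESHOLD = 3.0  # flag readings more than 3 standard deviations from the mean

recent = deque(maxlen=WINDOW)

def process_reading(asset_id, value):
    """Score one sensor reading; return a ticket payload if it looks anomalous."""
    ticket = None
    if len(recent) >= 2:
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(value - mu) / sigma > Z_THRESHOLD:
            # Hypothetical payload: a real system would call a ticketing API
            # and attach genuine fix information for the technician.
            ticket = {
                "asset": asset_id,
                "reading": value,
                "expected_range": (mu - Z_THRESHOLD * sigma,
                                   mu + Z_THRESHOLD * sigma),
                "suggested_fix": "inspect sensor and drive bearing",  # placeholder
            }
    recent.append(value)
    return ticket
```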
None of this necessarily means that a human will have to get involved in directly changing the business process itself; AI should be able to handle that. Yet, despite all this, users will still be in the loop, and at least one individual will need to approve the work done.
Why explainability matters
AI, therefore, always has to be explainable. After all, if a human has the final sign-off on a critical business process, they need to understand what they are signing. That means the results need to be presented in a way that is easily intelligible. More importantly still, every process needs to be auditable, and that will also necessitate human involvement.
While AI and automation are lightly regulated at the moment, there is every likelihood that this will change in the near future. Businesses may well need to provide some form of log or audit trail showing why decisions were made. No legislation or framework covers this yet, but it is critically important that businesses prepare themselves for what is coming.
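No standard format for such an audit trail exists yet, but a minimal sketch of the idea, assuming a simple JSON Lines file and illustrative field names, might look like this:

```python
import json
import time
import uuid

def log_decision(log_path, model_version, inputs, output, explanation):
    """Append one auditable JSON record per automated decision."""
    record = {
        "id": str(uuid.uuid4()),          # unique reference for later challenge
        "timestamp": time.time(),
        "model_version": model_version,   # which model made the call
        "inputs": inputs,                 # the data the decision was based on
        "output": output,                 # what the system decided
        "explanation": explanation,       # human-readable reasons, e.g. top factors
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```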
Explainability feeds into this, in that organizations will have to present the output of those decisions in a way that a human can interpret. For instance, IFS's new AI-based, self-learning solution continuously monitors large volumes of data from assets, machines, systems, and industrial processes to discover and analyze unusual behavior and the causes of failures. By leveraging this, our customers can democratize intelligence and empower operational users to take timely action, prevent asset downtime, quality issues, and emissions violations, and automate process and workflow improvements.
Ensuring data can be easily understood
Often, decisions are going to be based on incredibly large volumes of complex data. That, in a nutshell, is the value of data streaming. But no human would be able to make sense of this data in its raw form in real time.
The risk is that AI could be left unsupervised: it might learn from bad data, fail to account for drift, or be used to target incomplete or simply irrelevant metrics. These risks need to be addressed at the very start of AI integration. It will always be necessary to carefully consider what you want out of the system, what you are going to feed into it, and what success looks like. Anyone expecting a panacea, or expecting to simply throw AI at a problem and get good results, is likely to be disappointed.
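Drift, at least, can be checked cheaply. One common rule-of-thumb measure is the population stability index, sketched below with NumPy; the ~0.2 "significant drift" threshold mentioned in the docstring is convention rather than any formal standard.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Compare the live distribution of one input feature against the
    training baseline. By common rule of thumb, a value above ~0.2
    suggests significant drift. (Live values outside the baseline's
    range are simply ignored in this sketch.)"""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip empty bins so the log term stays defined
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))
```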
Explainable AI is, therefore, increasingly coming of age. Essentially, it is a set of processes and methods that allow human users to comprehend and trust the results and output created by machine learning algorithms. It brings a range of benefits, from meeting regulatory standards and helping developers ensure the system is working as intended, to enabling people affected by a specific decision to challenge it.
In line with this, more and more businesses are developing methods and techniques for applying AI in ways whose results can be understood by human experts.
It is important that organizations understand who trains their AI systems, what data was used and, just as importantly, what went into their algorithms’ recommendations. A high-quality explainable AI system can deliver this for them.
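As a concrete example of such a technique, here is a minimal sketch of permutation importance, a simple model-agnostic way to see which inputs a model leans on. The predict and metric arguments are placeholders for whatever fitted model and accuracy measure an organization actually uses; scikit-learn ships a more complete version as sklearn.inspection.permutation_importance.

```python
import numpy as np

def permutation_importance(predict, X, y, metric, seed=0):
    """Model-agnostic explanation: shuffle each input column and measure
    how much the model's score drops. A bigger drop means the model
    relies more heavily on that feature."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = []
    for col in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, col] = rng.permutation(X_perm[:, col])  # break this feature
        importances.append(baseline - metric(y, predict(X_perm)))
    return importances
```

An explanation like "the model's accuracy falls sharply when vibration readings are shuffled" is exactly the kind of output a sign-off owner, or a regulator, can interpret and challenge.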
Future prospects
AI is the study of computer algorithms that learn and solve problems, and these techniques are becoming ever more widely used in business. Moving forward, businesses will continue to rely on large volumes of complex data, hence the need for data streaming, but also for explainable AI, which enables humans (in this case, employees) to maintain oversight of the approach, ensure it is working well, and allow it to be regulated and challenged.
Looking ahead, AI will continue to evolve, and fast. As its capabilities grow, so too will the need for explainable AI to keep humans in the loop and put checks and balances on its growth.