Wherever AI systems and algorithms are used in the corporate world to help make decisions about an individual, and wherever data is collected and stored, there is a genuine need for transparency and AI explainability. This obligation is rooted in the EU and UK GDPR and the individual's right to be informed. We all have the right to understand why and how our data is being collected, and how it will be deleted once it is no longer pertinent to the organisation processing it.
The risk of AI discrimination and bias
One area in need of tight regulation is where AI systems could discriminate against certain types of individuals. Discrimination can be both intentional and unintentional when corporate entities use AI and algorithms to make decisions that affect private individuals. It may stem from the deliberate removal of parts of a data set in a way that disadvantages a certain profile of person, or from a machine learning model that has not yet processed enough data to make fair decisions across a wide variety of personal profiles, and therefore discriminates against profiles it has seen less often.
Does there need to be AI explainability?
The right to be informed is set out in EU and UK GDPR. This transparency and accountability for organisations that collect and process data of any kind is an important part of modern life. When you consider the impact of algorithms and AI systems in processing data in the modern workplace, any personal data that is processed requires transparency and explainability so that data subjects understand exactly what is happening with their personal, and sometimes sensitive, information.
Any data controller processing personal data must be transparent about the reasons behind the data collection and how the data is collected and processed. Wherever AI assists with this process, the transparency obligation still applies; it is no different from the data protection obligations attached to human processing.
The legal right to be informed of how your personal data is being processed is a crucial component of data privacy. When it comes to automated decision-making processes where the data subject could suffer adverse effects or legal ramifications based on the decision made, it is vital that clear and simple explanations are available about the use of AI. This meaningful explanation should set out the logic behind the use of AI, as well as any potential pitfalls associated with it. Should circumstances change, with data collected or used in a different way, this must be communicated to the data subjects in advance of the proposed changes.
What needs to be included in an explanation?
To begin with, the explanation must be clear, simple to follow, and easy to find. For the governance of AI systems it is split into two distinct categories: outcome-based explanations and process-based explanations. The latter demonstrates how the AI system has been designed and deployed, whereas the former provides the reasoning behind a specific decision made by an algorithm.
There are several ways in which AI decision-making processes can be explained to the data subject. A rationale explanation sets out the reasons behind the pathway that led to the eventual outcome; it should always be presented in clear, accessible language for the layperson, free of technical terms. Next comes the explanation of who is involved in the design and implementation of the AI system, and who the user can contact to request a human review of a decision. A data explanation walks the user through what information is collected and used to enable the decision to be made.
Within an explanation there must be an effort to demonstrate fairness, and where AI is concerned there must always be meaningful human review ensuring that all steps have been taken to treat each individual and their data fairly. Alongside this sit the safety, security, and performance measures that keep personal and sensitive data safe and ensure accuracy of decision-making. Lastly, a clear explanation should be provided of the steps taken to consider the impact an AI system has on the individual and wider society.
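For organisations documenting these explanations internally, the types described above could be bundled into a simple record. The following is a minimal illustrative sketch in Python; the class and field names are assumptions for this example, not a regulatory schema.

```python
from dataclasses import dataclass


@dataclass
class DecisionExplanation:
    """Hypothetical record bundling the explanation types discussed above.

    Field names are illustrative assumptions, not an official GDPR format.
    """
    rationale: str               # plain-language reasons behind the outcome
    responsibility: str          # who designed/deployed the system; human review contact
    data: str                    # what information was collected and used
    fairness: str                # steps taken to treat the individual and their data fairly
    safety_and_performance: str  # security and accuracy measures
    impact: str                  # effect on the individual and wider society

    def is_complete(self) -> bool:
        # Every explanation type should be filled in before the
        # explanation is shared with a data subject.
        return all(getattr(self, name) for name in self.__dataclass_fields__)
```

A record like this makes gaps easy to spot: `is_complete()` returns `False` until every explanation type has been written.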
As you can see, putting in place simple, transparent documentation outlining the benefits of explainable AI and how data is collected, stored, and analysed should be a priority in the corporate world and across every other organisation and sector. Demonstrating the benefits of AI and its use in collecting data shows true accountability, building trust with the outside world and showing that your organisation adheres to data privacy regulations and cares about the processes and the data being collected and stored. This matters in a future where the use of AI systems, and the sheer volume of data sets, will only increase.
This post has been sponsored by Brick Digital