How an AI & Data Protection Relationship Works to Guard Your Privacy

Jun 14, 2021

Artificial intelligence (AI) is one of the most interesting technologies to gain momentum in recent years. In high-tech, the AI industry builds on the sub-fields of machine learning and deep learning. One of the concerns the domain raises is its ability to conform to regulatory and privacy obligations, such as those the GDPR prescribes. With the rapid advancement of digital technology, the line between AI and data protection is blurring.

The term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with the human mind, such as “learning” and “problem solving”. The main features of AI are therefore:

  • Collecting large amounts of information – Collecting information enables autonomous decisions and actions aimed at maximizing the chances of success, and AI and data protection regulations can now work together to provide improved security of personal information. A classic example of artificial intelligence is a self-driving car, which must make autonomous decisions based on what is happening on the road. To make informed decisions, a system has to gather information about people, build profiles of them, and weight those profiles so that decisions can be based on them: for example, whether a bank should approve a mortgage or loan. Gathering information and building a profile raise new privacy issues, which have become more relevant since the adoption of the EU data protection rules, especially following the publication of the Article 29 Working Party guidelines on automated individual decision-making and profiling. The regulation stipulates that personal data may not be processed without the consent of the data subject; however, in many known cases the data subject does not even know that information about him/her has already been collected.
  • Prohibition of automated decisions – EU privacy regulations stipulate that an individual may not be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him/her or similarly significantly affects him/her. The prohibition applies only to decisions based solely on automated processing; therefore, human supervision of the conclusions reached by the machine must be meaningful to justify the AI and data protection relationship.
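The profiling and human-supervision points above can be sketched in code. This is a minimal, hypothetical illustration: the feature names, weights, and approval threshold are invented for the example, not taken from any regulation or real lending system.

```python
# Hypothetical profiling step: normalised applicant features are weighted
# and combined into a score on which a loan decision could be based.
FEATURE_WEIGHTS = {
    "income": 0.5,
    "credit_history": 0.3,
    "employment_years": 0.2,
}

def profile_score(applicant: dict) -> float:
    """Combine normalised (0..1) applicant features into one weighted score."""
    return sum(FEATURE_WEIGHTS[name] * applicant[name] for name in FEATURE_WEIGHTS)

def automated_loan_decision(applicant: dict, threshold: float = 0.6) -> dict:
    """Return a provisional outcome plus a human-review flag: because the
    prohibition covers decisions based *solely* on automated processing,
    the machine's conclusion is never treated as final here."""
    score = profile_score(applicant)
    return {
        "score": round(score, 3),
        "provisional": "approve" if score >= threshold else "decline",
        "requires_human_review": True,  # meaningful human supervision required
    }

decision = automated_loan_decision(
    {"income": 0.8, "credit_history": 0.7, "employment_years": 0.4}
)
```

The design point is the `requires_human_review` flag: the score only produces a provisional outcome, and a person must confirm it before it has any effect.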

Exceptions to the rule regarding automated decisions – An automated decision is exempt from the prohibition when it is made for fraud prevention or money-laundering checks, is necessary for entering into or performing a contract, or is based on the explicit consent of the individual. The data controller must be able to show that the profiling is necessary, taking the issue of privacy into account as much as possible.
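The three exceptions above can be expressed as a simple eligibility check. This is a sketch only; the purpose labels are invented stand-ins for the legal grounds, and a real compliance assessment is far richer than a set membership test.

```python
# Hedged sketch of the exceptions to the automated-decision prohibition:
# fraud/money-laundering checks, contractual necessity, or explicit consent.
EXEMPT_PURPOSES = {
    "fraud_prevention",
    "money_laundering_check",
    "contract_necessity",
}

def automated_decision_permitted(purpose: str, has_explicit_consent: bool) -> bool:
    """Return True if a solely automated decision falls under one of the
    recognised exceptions; otherwise human involvement is mandatory."""
    return purpose in EXEMPT_PURPOSES or has_explicit_consent
```

For instance, a fraud-prevention check passes without consent, while a marketing use passes only if the individual has explicitly consented.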

Is consent a practical option? What happens in the case of health-related data? Who would give their consent to be subject to an automated decision relating to their health? In my opinion this would rarely happen, for example where such technologies are intended for marketing purposes.

Looking at another example, problems arise when reliance is placed on the IT systems of insurance companies that automatically process health data to assess insurance risk. This problem shows that we cannot depend entirely on an AI and data protection relationship without human supervision. In such cases, obtaining people's consent to the automatic processing of their health data may be a considerable and costly process for insurance companies.

This means that no single guideline on information privacy can cover every type of artificial intelligence and guarantee the best compatibility between AI and data protection regulations. The information disclosed, subject to data protection and privacy regulation, should consist of the main factors that were considered in reaching the decision, the source of that information, and its relevance.
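The disclosure requirement above can be sketched as a small transparency payload. The field names and example values are illustrative assumptions, not mandated by the regulation; the point is only that the explanation bundles factors, their relevance, and their source together.

```python
# Hypothetical explanation payload covering the three elements named above:
# the main factors behind the decision, their relevance, and their source.
def decision_explanation(factors: list, source: str) -> dict:
    """Assemble disclosure information, listing factors by relevance."""
    return {
        "main_factors": sorted(factors, key=lambda f: f["relevance"], reverse=True),
        "source": source,
    }

explanation = decision_explanation(
    factors=[
        {"name": "credit_history", "relevance": 0.3},
        {"name": "income", "relevance": 0.5},
    ],
    source="applicant-submitted form",
)
```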


People should always be able to object to an automated decision. Even when an automated decision is necessary to perform a contract, or was made with the person's consent in the relevant case, people should still be entitled to human intervention that allows them to express their point of view and contest the automated decision.
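The objection-and-intervention flow above can be sketched as follows. The `Decision` structure and its fields are invented for the example; the sketch only shows one way to record the individual's point of view and suspend the automated outcome pending human review.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str                  # e.g. "decline"
    automated: bool = True        # was this outcome produced solely by a machine?
    objections: list = field(default_factory=list)

def file_objection(decision: Decision, statement: str) -> Decision:
    """Record the individual's point of view and route the case to a human
    reviewer, so the automated outcome no longer stands on its own."""
    decision.objections.append(statement)
    decision.automated = False            # outcome now pending human intervention
    decision.outcome = "under_human_review"
    return decision
```

A contested decline, for example, would end up with outcome `"under_human_review"` and the applicant's statement on record for the reviewer.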

And what happens in the case of a wrong decision?

The complexity of artificial intelligence is expected to escalate in the coming years, and that complexity may make it harder to determine when a cyber-attack has occurred and, consequently, when the duty to notify of a data breach is triggered. Such a notification should contain a detailed explanation from the data controller of the decisions made, the factors involved in the attack, and the adequacy of the protection of individuals' data and privacy rights.