Even though it is already so outdated

Do we need a new GDPR?

In May 2020, the European Commission must deliver its first review report on the GDPR. The preparations are well underway. The only problem with this review is that the Commission does not intend to draft proposals to reform the GDPR.

Even worse, the Commission apparently does not even intend to repair evident mistakes in the GDPR, such as the omission of article 10 from the fining clauses of article 83.

What the Commission's review will most likely also fail to address is that the GDPR builds on outdated concepts dating back to 1981 and 1995. Partly due to the lengthy legislative process, the GDPR was not designed to deal with the impact of the modern information society on people's lives, in which artificial intelligence, cloud services, sensor technology and biometrics are ubiquitous. The GDPR is not the radical or sweeping new law many have suggested; rather, it is merely a revision of the existing 1995 Directive, and most of what was 'new' was nothing more than a codification of existing case law, Article 29 Working Party opinions and best practices.

In a nutshell, the GDPR's problems can be summarized as follows:

The GDPR is not fit for purpose

The GDPR is nowhere near able to deal with the challenges of the Fourth Industrial Revolution we are now facing. For example, one of the GDPR's key principles is data minimization. This means that companies, governmental organizations and other institutions should not collect more data about you and me than they need for their legitimate purposes.

However, the GDPR does not apply to personal and household use of personal data. The GDPR is based on the outdated premise that our personal data are collected for the primary purposes of the controller. And although this is still largely true for traditional data processing situations, such as public administration, employment and commerce, it is no longer the case with many new online services. Increasingly, it is not the service providers who decide what data to collect and why … it is us, the users!

Here lies the problem. In the past, personal data were predominantly collected to digitize an organization's administrative processes (which was the focus of the former DPD and is now the focus of the GDPR); in the future, personal data will predominantly be collected as a consequence of the digitization of our personal lives. Every day, we upload vast amounts of private information: our locations, our opinions, our lives, and our desires. All the service providers do is fulfil a demand and build the functionality that allows us to process our data.

Gabriel García Márquez once said: "All human beings have three lives: public, private and secret". The internet, however, has dissolved the boundaries between those spheres. Nowadays it is very hard to keep your secret life secret and your private life private, as we are producing more personal data than ever. We upload pictures of ourselves and others that are liked or commented on by our friends or even complete strangers; we search the web for news, cures for diseases, sexual pleasure, music and the latest fashion; we express our political views on social media and in personal blogs; we communicate using online communication tools and real-time translation apps.

Soon we will be living in 'smart', ever-connected, perceiving environments, which promise to make our lives easier. Smart devices are far more attractive than traditional ones. Not only do they allow all kinds of additional features, they also strengthen the customer relationship. Or, as somebody in the hardware business once told me: "If you buy a traditional device, you are my customer once every 8 years; but if you buy a smart device, you are my customer every day". Increasingly, however, we own the hardware of smart devices but not their functionality, because much of that functionality is offered server-side, which means those devices become (almost) useless the moment we stop sharing data... And no, the GDPR's right to data portability is not going to be of much help here.

We like (no, we demand) the features that under the GDPR qualify as secondary use of such data, such as personalized news feeds and optimized search results. Ergo, profiling and the inevitable filter bubble are key features of the way we use online services, as they help us sift through those zettabytes of information.

An exacerbating factor is that most of us are not paying for any of those (value-added) services and information. Consequently, vast amounts of data are also being collected about us for online and in-app advertising, as it is the advertisers who pay for our free services and free information. The scale at which this happens has led to the term Surveillance Capitalism. It is the perverse evolution of the internet's original democratic promise: access for everybody. If you can afford a device, most services are free.

The primary purpose for which our personal data are created in those services is often covered by the GDPR's personal and household use exception. This means that the GDPR will not stop us from uploading all these data, and consequently will not stop the controllers from collecting them (except by terminating the service altogether). Everything the controllers do with our data, apart from providing the services, is therefore technically secondary use of our data. And because many of these services are free, such secondary use often violates the contextual integrity within which we shared the data with the service providers in the first place.

The GDPR will not end free online services, nor does it intend to. So, as long as free services that facilitate our personal needs exist, there will be no meaningful data minimization applying to the companies providing them. No, increasing the number of situations that require consent is not going to work, as most people are incapable of making an informed choice between their present desires and some vague risk in the distant future. Yes, we have the right to be forgotten, but that right has a limited scope and requires action on our side. Moreover, the right to be forgotten is highly ineffective against the tsunami of data that other people upload about us, as it is impossible for us to keep track of who is storing what data about us.

If we really want our personal data to be processed fairly, we should change the economic fundamentals of the Information Economy instead of trying to empty the data ocean with a bucket called "the GDPR" and the (future) "ePrivacy Regulation". We need to think through the value and costs of a data transaction, and put rules on those transactions, whether competition rules, consumer protection rules, employment rules or data protection rules. And oh yes, we desperately need criminal law to protect us and our data when things have really gone wrong: not only when people or companies abuse our data intentionally, but also when they should have taken better care of our data. After all, when houses are made of wood, being reckless with fire is a crime too.

The Law of Everything

The GDPR is slowly becoming the Law of Everything. This is the logical consequence of the fact that most of what we do in our daily lives is caught in data. As a 'horizontal law', the GDPR deals with (almost) all processing of personal data regardless of the sector or the circumstances. It impacts a wide range of topics, from (e-)government and (e-)healthcare to consumer protection and employment practices. It affects our shopping, our freedom of expression and our daily commute. It touches our attendance in the workplace and schools, scientific research, and even our prayers in church.

It is especially article 2 GDPR that creates this omnibus effect. Organizations no longer look at a case from a sector-specific perspective (e.g., fair employment practices), but immediately jump to the GDPR when personal data are involved. Many take a bottom-up approach in which art. 6 GDPR is leading, rather than a top-down approach in which the GDPR is supplementary to sector- or topic-specific laws, company ethics and best practices. We are so concerned with the question of whether the data processing meets the GDPR standards, terrified as we are of the GDPR's high fines, that we often lose sight of the question of whether the real-life process itself is lawful and fair.

To give a specific example: the Dutch Data Protection Authority recently prohibited alcohol and drug testing in the workplace because the processing of the data did not comply with data protection law. However, this is the wrong approach. Whether an employer is allowed to test an employee for alcohol and drugs is first and foremost a matter of labor law and of the horizontal effect of privacy rights as enshrined in article 8 ECHR and the laws around a person's physical integrity. If labor law, privacy law and physical integrity law allow such a test under the circumstances, the GDPR, as a secondary law applying to the case, should facilitate the processing of the personal data resulting from such a test (which is exactly what art. 9(2)(b) does, by the way).

But it is not a DPA's job to interpret labor law and conclude that alcohol and drug testing in the workplace is prohibited (the Dutch DPA said that Dutch labor law lacked the basis for such a test, which in my view is a debatable conclusion). The task of interpreting labor law and collective bargaining agreements is reserved for the labor courts. All a DPA should do in such a case is ensure that the personal data resulting from lawful tests are protected and that employee rights relating to such data are respected. If a DPA has to answer the question, as the Dutch DPA did, whether labor law allows an alcohol or drug test in the workplace in a particular case, then the DPA should be required to ask a labor court to interpret the relevant labor law rules.

Should the court conclude that the test is illegal under labor law, then the processing of the personal data would also be illegal under article 5(1)(a) GDPR and the DPA should order the employer to delete those data. However, should the court decide that the test is lawful under labor law rules, then the DPA should interpret the GDPR accordingly to facilitate the processing of the test results. The fact that data protection is a fundamental right in the EU is not relevant here; as the processing of personal data is the result of a business process, the GDPR always plays second fiddle to the laws applying to that business process.

We see this problem in other areas as well, such as consumer protection (e.g., to what extent may free services be provided in exchange for data?), healthcare (e.g., may medical data, given the medical secrecy obligation of caregivers, be used for scientific and applied clinical research?) and security (e.g., under what conditions may a surveillance system be installed and how long may the data be kept?). The GDPR is drafted in a 'purpose-neutral' way; it should be able to handle all purposes of data processing regardless of the sector or context. But it does not provide us with answers to the question whether a particular purpose is lawful or fair. The answer to that question is (and should be) found in other laws.

Ergo, although the GDPR is meant to fully harmonize data protection rules and data protection rights, it does not, and should not, fully harmonize the rules of business, government, healthcare, research, education, sports, or any other part of our society. If harmonization is necessary, the EU should draft sector-specific legislation, which ideally should include lex specialis rules on the processing of personal data in that sector (see, e.g., the Clinical Trials Regulation and the ePrivacy Directive). If not, the Member States are free to legislate, and the resulting patchwork of rules is the inevitable effect of the subsidiarity principle of EU law.

This requires that the entire Chapter II of the GDPR may be deviated from in EU or Member State law, not only certain parts of it (as is currently the case). In its supplementary role, the GDPR should only set the bar when no specific law applies to the case. It also requires that EU and Member State legislators always take data processing into consideration when legislating for a specific sector or certain behavior. I call this "Legislating for Privacy".

Politicians, DPAs and lawyers should keep in mind that the processing of personal data is an activity, not a purpose in itself. Do not leave the design of our society to the GDPR and the (undemocratic) DPAs/EDPB. The DPAs and the EDPB should be careful not to overreach their powers by prohibiting business processes or government practices via an interpretation of the GDPR. The lawfulness of a business process, or of an invasion of privacy which generates personal data, is for the courts to decide.

Even the GDPR itself in some places requires additional legislation. The most prominent example is article 22(2)(b), which implicitly requires the EU or Member State legislators to make laws ensuring that the use of artificial intelligence in a certain sector or circumstance is lawful and fair. How self-driving cars may use AI to create, process and share (personal) data should be part of specific traffic laws. To what extent employers are allowed to use AI when hiring or assessing employees should be part of labor law. And whether the government may use AI to govern our cities and search for criminals, tax evaders or fraudsters should be part of public administration, tax and criminal procedure law. It is important to realize that articles 22(2)(a) and 22(2)(c) are only 'catch-all' clauses in the absence of such legislation.

Outdated roles

With the introduction of cloud computing, the artificial distinction between data controller and data processor was already being questioned. Yet the GDPR still makes that distinction, for no specific reason. Professional service providers who now technically qualify as processors, in areas well beyond cloud services, more often than not take far-reaching decisions with respect to the personal data they process: not only how the data are processed, but also where and by whom. And they increasingly advise on, or as part of their services even determine, which personal data are processed.

Ergo, service providers often have a huge impact on the processing and protection of personal data. Business models have become very complex and often involve a number of parties operating in data processing chains. It is only logical to hold a service provider accountable for the way it processes the data beyond merely the ten GDPR obligations that now apply to data processors.

Since the introduction of the GDPR, especially its rules on engaging data processors (article 28) and its focus on data breaches (articles 33 and 34), data protection has become a major obstacle for service providers doing business. The GDPR not only creates a lot of costs when doing business with a service provider; it has even, often unnecessarily, acted as a deal breaker. In negotiations, pushing the risk of non-compliance with the GDPR to the other party is often more important than the actual data protection provisions. And because of the GDPR's high fines, clients don't trust the data protection practices of service providers, leading to endless negotiations and overly burdensome contracts.

Not only should we ditch the concept of processor, we should also ditch the concept of joint controllers. In practice, there is no such thing as joint controllers; there are only controllers controlling their own part of the GDPR's obligations in a chain of data processing operations. Which controller controls which part is determined by the nature of their relationship, their tasks in the data processing chain and the circumstances of the case. If a website decides to use the Facebook Pixel, it is utter nonsense to treat Facebook as a joint controller for the collection of the data resulting from it. There is only a website owner responsible for the decision to allow Facebook to collect data, and there is Facebook, responsible for the fair, proportionate and secure processing of the data resulting from that decision.

To accommodate a situation where there are only controllers, the GDPR should make clear that the principle of accountability does not exceed a contracting party's administrative sphere of influence. If the client, under the circumstances of the case, performed proper due diligence on the service provider, it should never be liable for the mistakes of the service provider, nor should it risk a fine for violating the principle of accountability. If every party is only responsible and liable for its own actions and its own compliance, doing business would become so much easier.

There is no such thing as adequacy

Historically, adequacy decisions were taken to facilitate international trade. But as the Advocate General rightly suggests in paragraph 238 of his opinion in the Schrems II case, there can be no certainty that a country provides an adequate level of protection. All the European Commission can do is assess the publicly available information and the information shared by the third country's government. Secret programs will never be part of the adequacy decision, so there is no way the Commission can state beyond doubt that a country provides a level of protection that is 'essentially equivalent' to that of EU law. We may safely conclude that an adequacy decision is by definition based on sub-optimal facts and possible half-truths.

There are two ways to deal with this:

  1. Assess the adequacy of GDPR-style rules in a third country, while recognizing the existence of possible government surveillance programs for national security purposes, and leave those programs and state powers entirely out of the scope of the adequacy decision (just as national security is not covered by the GDPR). However, this would be contrary to the idea of adequacy and to the Schrems I case.
  2. Take the existence of government surveillance programs and powers fully into account when assessing the adequacy of a third country. But as no third country is ever likely to be completely transparent about those programs, there would effectively be no adequacy decisions.

Supervision is not scalable

Given the complexity of the Information Society and the pervasiveness of personal data, concentrating supervision in a single data protection authority is no longer an effective way to regulate the Information Society. The numbers are simple. Here in the Netherlands alone, there are 1.8 million companies. Add the government agencies, healthcare institutions, educational institutions, non-profit organizations and sports clubs to the mix and we are easily looking at close to 3 million Dutch organizations that must comply with the GDPR. And that is not even taking into account the non-EU organizations that must comply with the GDPR ex article 3(2) because they offer goods or services to people in the Netherlands or monitor their behavior.

No data protection authority could ever effectively supervise such a broad scope. When I worked for the Dutch Data Protection Authority back in the 1990s, we already realized it would be more logical to have sectoral supervisory authorities take the lead in supervising data protection in their sectors. After all, they are more familiar with the sectoral laws and business practices. Back then, the Dutch DPA was merely a secondary advisory body in such sectors.

Not only does spreading the supervision workload over multiple supervisory bodies make sense from an effectiveness perspective, it also increases the resources available for supervision. Because let's face it: the Achilles' heel of the GDPR is the budget that governments are willing to allocate to their data protection authorities. Data protection authorities are always short on budget and staff, and if large corporations fight their decisions in court, this can seriously deplete their funds.

For the same reason, the only way Big Tech could be adequately supervised is if the EU creates an EU Data Protection Agency (e.g., the EDPS) with jurisdictional powers similar to those the European Commission has in competition law. Big Tech tends to gravitate to small Member States like Ireland and Luxembourg. An EU Data Protection Agency would take the pressure off the budgets of those DPAs. Although I agree with the idea of a One Stop Shop for multinationals, the current system is inherently unfair: Irish and Luxembourg taxpayers are paying for the supervision of US Big Tech companies that operate across the EU. With an EU Data Protection Agency, the costs of supervision could be allocated to the Member States based on fairer criteria, such as the number of citizens.

Conclusion

The GDPR, often described as the world's toughest data protection law, is not doing what it is supposed to be doing: protect people and stimulate the economy. The solution is a complete overhaul of data protection thinking:

  • Integrate data protection rules in sector-specific or topic-specific law, such as labor law, competition law, consumer protection law, energy law, traffic law, etc.
  • Make the GDPR supplementary to those other laws. This requires an overhaul of Chapters II and IX of the GDPR. Make Chapter III (rights of data subjects) and IV (accountability) the core chapters of the GDPR.
  • Abandon the distinction between controllers and processors. Think in data processing chains. Treat every organization in that chain as a controller in its own right, whose obligations depend on its role in the data processing chain. Do not hold parties at one end of the chain accountable or liable for the mistakes of parties at the other end of the chain; this will facilitate commerce.
  • Rethink the rules on international data transfers. Adequacy is an outdated concept in a world where data flow globally. And data protection arrangements put in place by the parties involved, such as contractual clauses and (my own invention) binding corporate rules, have their limits.
  • Involve sector-specific supervisory authorities in the enforcement of data protection rules.

 


Published 4 February 2020 on Netkwesties.