There are many definitions of privacy: “the right to be left alone” or “the degree to which one exposes aspects of their personal life.” In fact, privacy is a social construct. Lawmakers around the world have attempted to create legislation to manage individuals’ rights to privacy, often from different starting positions but with the same goal in mind; these include the GDPR, CCPA, PIPL, and PIPEDA, among others.
The free flow of data globally is essential for commerce, social development, scientific research, environmental sustainability, and national and international security, to name just a few areas.
The way in which data is now collected, transferred, and used is prolific; this is not news. Most people understand that they give up data, though perhaps not the full extent of its uses or indeed where their personal data will end up. This poses the question of the “privacy paradox”: if individuals are concerned about privacy, why do they act in ways that disregard it, such as not reading privacy notices or simply clicking ‘I consent’?
Answers to this question vary, and different conclusions can be drawn. One is cognitive bandwidth: “our cognitive capacity and our ability to pay attention, make good decisions, stick with our plans, and resist temptations.” The overprovision of information and the complexity of processing operations can overload this cognitive bandwidth, which leads to what is recognised as “regret.” A core principle of data privacy is choice and consent for individuals, and regulations reflect this by imposing compliance obligations on organisations that control data. We see this in the GDPR with the right to object, strict criteria for obtaining valid consent, and clear and concise information provision. When these obligations are implemented well, “regret” can be avoided. The question to ask is: can this be achieved? It is increasingly difficult to explain how data is collected and used by AI tooling and other automated systems, how profiles of individuals are created, and the effects these may have.
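To make the consent obligation concrete, here is a minimal sketch in Python of how purpose-specific, withdrawable consent might be modelled in a system. The `ConsentRecord` structure and `may_process` check are hypothetical illustrations built on assumed field names, not a reference implementation of the GDPR’s criteria.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One consent decision, tied to a single, specific purpose."""
    subject_id: str
    purpose: str                 # e.g. "marketing-email", never a vague "all uses"
    granted_at: datetime
    withdrawn_at: datetime | None = None  # withdrawal must be as easy as granting

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

def may_process(records: list[ConsentRecord], subject_id: str, purpose: str) -> bool:
    """Processing proceeds only with active consent for this exact purpose."""
    return any(
        r.subject_id == subject_id and r.purpose == purpose and r.active
        for r in records
    )

# Consent for one purpose does not bleed into another.
records = [ConsentRecord("user-42", "marketing-email", datetime.now(timezone.utc))]
assert may_process(records, "user-42", "marketing-email")
assert not may_process(records, "user-42", "profiling")
```

The point of the sketch is that consent is recorded per purpose and checked at the moment of processing, rather than captured once as a blanket agreement.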
To resolve these issues, we need to design applications, processes, and systems that remove or negate the obstacles creating opaque processing and leading to the issues described above; this is the goal of privacy by design (PbD).
PbD is an approach to processing personal data based on seven foundational principles; these form a paradigm, a way of thinking about how we process data, and allow us to develop strategies, utilise tactics, and adopt architectural approaches that meet those principles. One such tactic is sketched below. PbD is often confused with security by design or compliance by design, and it risks being hijacked by those who do not fully understand how to design and implement these considerations into systems.
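As an illustration, here is a minimal sketch in Python of one widely cited PbD tactic, data minimisation: collect only the fields a stated purpose requires, and pseudonymise the identifier before storage. The purpose allow-lists, field names, and keyed-hash choice are assumptions made for this example, not something prescribed by the seven principles themselves.

```python
import hashlib
import hmac
import os

# Hypothetical allow-lists: each purpose names the only fields it may receive.
FIELDS_BY_PURPOSE = {
    "shipping": {"name", "street", "city", "postcode"},
    "newsletter": {"email"},
}

# Pseudonymisation key; in practice this would live in a secrets manager.
PSEUDONYM_KEY = os.urandom(32)

def pseudonymise(subject_id: str) -> str:
    """Replace the raw identifier with a keyed hash before storage."""
    return hmac.new(PSEUDONYM_KEY, subject_id.encode(), hashlib.sha256).hexdigest()

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields the stated purpose needs; drop everything else."""
    allowed = FIELDS_BY_PURPOSE[purpose]
    slim = {k: v for k, v in record.items() if k in allowed}
    slim["subject"] = pseudonymise(record["subject_id"])
    return slim

raw = {"subject_id": "user-42", "email": "a@example.com", "name": "Ada",
       "street": "1 Main St", "city": "Leeds", "postcode": "LS1", "dob": "1990-01-01"}
print(minimise(raw, "newsletter"))  # only the email plus a pseudonym; dob is never stored
```

The design choice here is that minimisation happens at the point of collection, so fields a purpose does not need never enter the system at all.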
Legislation also recognises the benefits of this approach: the GDPR’s interpretation of PbD is Data Protection by Design and by Default (Article 25). Without a full understanding of how to implement its requirements, however, this can become lip service and lead us back to the original issues raised in this blog.
The answer, therefore, is training: a formal approach to understanding what PbD is, recognising the issues it resolves, and learning how it can be implemented. One must also consider how the application of PbD meets compliance considerations and organisational goals.
We can resurrect privacy, but the obligation sits with those who want to use personal data, not with the individuals whose data is used, as that control has already been lost.