Big companies dealing with clients’ or users’ data is nothing new in the corporate world. But the practice of collecting data has grown so explosively that we’ve finally started to grasp the implications of its potential, and we’ve even given it a name: Big Data. Not just data, but big data. The “big” is important here because it captures the main difference this new era brings: an increase in pure, raw size.

Data collection is now massive because digital platforms have allowed companies to automate the process. But as with every new system, new concerns arise as well. Now that we’re on the verge of really taking advantage of Big Data in sectors like IoT and Smart Cities, the question of security has appeared; even in advertising, which we can all agree needs some rethinking despite its status as an “established” sector.

Unfortunately, breaches that have compromised users’ data are numerous. But there’s also another risk: should we trust these giant entities with this much sensitive information in the first place? This seemingly simple question has given rise to a new moral and ethical discussion regarding users’ data, and a very important one we must have if we are to advance safely into the future.

There are already several ideas and projects surrounding the idea of “data transparency”, a principle that, as the name implies, proposes that entities that deal with clients’ data should be transparent about all the activities surrounding its use: from the collection process to the actual use of the data, and perhaps even making the data itself available.

“Usable is the most important word,” says Paolo Ciuccarelli, Associate Professor at Politecnico di Milano and Scientific Director at DensityDesign Research Lab. He presents a paradox: we want access to this information, but it’s often too complicated for regular users to understand in the first place. In other cases, the opposite occurs: the data is oversimplified when presented.

In other words, Ciuccarelli adds another layer to the goal of making people “aware” of how their personal data is being used: understanding it. Entities can be as transparent as they like about what they do, but making that data understandable and readable is the real challenge no one talks about, and where more resources should go.

Manon Molins from Fing, a French think tank, agrees with those assessments and adds yet another layer to the equation: “You need more than showing the data. You need to be able to use it.” Usability implies that the data is understandable, but it also implies giving users the tools they need to apply that information in their lives.

In any case, for organizations to offer these services, “tools are getting stronger,” according to Molins. She says AI is a good example of a technology that can be taken advantage of, even if the concept scared her at first. At Fing, she works on the MesInfos Project (also known as the MyData Project), which started in 2012 from the idea that if organizations are able to use your data, you should be able to as well.

Molins argues that individuals should be masters of their data. And since, as she says, people no longer trust companies and organizations, what better way to regain that trust than by making them owners of their own data? To that end, the MesInfos Project has even proposed an app with its own interface where users can access a personal cloud containing their information. Of course, people must care about their data in the first place.

Matías Ferrero works at Fjord, a design company with studios across several countries. He is based in the Berlin office, where he applies his skills as a designer. From that visual perspective, he argues that “services must be attractive to users.” And accessing them must be a choice available to everyone.

When asked about the difficulty of making data understandable for people, he is sure that everyone can understand it, but he makes one thing clear: “You have to design to who you’re talking to.” How you present the information depends on the audience, not on a general guideline. And usually, the less you complicate things by giving too much, the better. “You can explain anything to anyone. It just depends on the user.”

José Ramón Gómez from Telefónica doesn’t quite agree. For him, it’s important to accept that not everyone is going to understand what you’re giving them, and to work with that reality. Starting from there, he identifies three main types of users.

The first are the guardians, who are very private and concerned about their information, but they are a small minority. Then there are those who give away all of their data without much thought. In the middle are those who keep giving out information but are more skeptical about it. Any organization intending to be transparent about the data it uses must keep these profiles in mind.

As the person responsible for AURA, Telefónica’s AI platform, Gómez also offers a consumer-oriented approach: your priority must be to empower users with the data you have by making it useful to them through your own services. However, he also echoes the main idea that every other expert expressed:

For the users, “the data belongs to them.”