The need for Privacy and its relation to Trust in the Internet of Things

With the advent of the Internet of Things comes the generation of colossal amounts of data, especially at the device layer of the IoT stack. When such a device, sensor or other data-collecting mechanism is linked to a private individual, the privacy of the collected data and the trust in that particular system are of significant importance.

Transparency in IoT is key – not only in allowing the consumer to make an informed decision, but also in avoiding expensive data breaches. In this context, this article expands on the relation between the need for privacy and the consequential need for trust in the IoT architectures handling our private data.
Before we investigate this relation further, we start by defining the respective terms. Please note that no universally agreed definitions for these terms exist; the following definitions merely reflect our understanding in the given context.

Privacy

Our understanding of the word privacy is strongly linked to the term Personally Identifiable Information (from now on PII). The term PII describes information that could potentially be related to a person. Privacy is required as soon as PII leaves the control of the person it belongs to. Privacy can be seen as an attribute that is either fulfilled or not, but it can also be seen as a measurable attribute whose value depends on the degree to which our PII is protected against a potential attacker.

The paper [1] categorizes privacy into the following subcategories:
1. Identity Privacy: The need for privacy for information that can identify a person.
2. Location Privacy: The need for privacy for information that can identify a person's location, since the location can reveal PII, e.g. points of interest.
3. Footprint Privacy: The need for privacy for all PII leaked unintentionally, e.g. preferred language.

The actual need for privacy depends on these subcategories, because the information handled in each has a different level of sensitivity.
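
To make this categorization more concrete, the following sketch (our own illustration, not taken from [1]) shows how individual PII items could be tagged with one of the three subcategories together with an assumed sensitivity level; both the items and the sensitivity values are hypothetical.

```python
from enum import Enum
from dataclasses import dataclass

class PrivacyCategory(Enum):
    IDENTITY = "identity"    # information that can directly identify a person
    LOCATION = "location"    # information revealing a person's whereabouts
    FOOTPRINT = "footprint"  # PII leaked unintentionally, e.g. preferred language

@dataclass
class PiiItem:
    name: str
    category: PrivacyCategory
    sensitivity: float  # assumed scale: 0.0 (harmless) .. 1.0 (highly sensitive)

# Illustrative examples only; real sensitivity depends on context and regulation.
items = [
    PiiItem("phone number", PrivacyCategory.IDENTITY, 0.9),
    PiiItem("GPS trace", PrivacyCategory.LOCATION, 0.8),
    PiiItem("preferred language", PrivacyCategory.FOOTPRINT, 0.2),
]

for item in items:
    print(f"{item.name}: {item.category.value} (sensitivity {item.sensitivity})")
```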

Trust

Trust is an attribute that describes how assured we are that our PII is sufficiently protected and only used for the agreed purposes. The amount of trust required in the services and architectures we use again depends on the sensitivity of the information we are sharing. For example, when we are asked to share our preferred language, far less trust is needed than when sharing our private phone number.

Similar to privacy, trust can be further categorized according to [1]:

  1. Device Trust: Need to interact with reliable devices.
  2. Processing Trust: Need to interact with correct and meaningful data.
  3. Connection Trust: Requirement to exchange the right data with the right service providers and nobody else.
  4. System Trust: Desire to leverage a dependable overall system. This can be achieved by providing as much transparency of the system as possible.
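
As a rough illustration of how these dimensions could feed into a single value (our own sketch, not a model taken from [1]), an overall trust score might be composed as a weighted average of the four categories; the weights and example scores below are assumptions.

```python
from typing import Dict

# Weights for the four trust dimensions named above; equal weights are an assumption.
TRUST_WEIGHTS = {
    "device": 0.25,      # reliability of the devices involved
    "processing": 0.25,  # correctness and meaningfulness of the data
    "connection": 0.25,  # data reaches the right service providers and nobody else
    "system": 0.25,      # dependability and transparency of the overall system
}

def overall_trust(scores: Dict[str, float]) -> float:
    """Weighted average of per-dimension trust scores, each in [0, 1]."""
    return sum(TRUST_WEIGHTS[dim] * scores.get(dim, 0.0) for dim in TRUST_WEIGHTS)

# Hypothetical scores for a service; prints roughly 0.75.
print(overall_trust({"device": 0.9, "processing": 0.8, "connection": 0.7, "system": 0.6}))
```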

Having defined the terms privacy and trust, we are now in a position to look at how they relate to each other. The following graphs from [1] help us understand this relation:

Figure 1: Source [1]

Relation Privacy & Sensitivity

Figure 1 shows the need for privacy depending on the sensitivity of the information. It is not surprising to see that the need for privacy grows as the sensitivity of the shared information increases.

Figure 2, on the other hand, shows a more interesting correlation. The graph shows the required trust level for a given need for privacy. Here we see that even when the need for privacy is at its maximum of 1, the required trust level towards a service / architecture stays below 0.75.

Figure 2: Source [1], IoT-EPI Internal Analysis

Although this may seem surprising at first, it is less so after taking a moment to consider the context. It is impossible to trust a service / architecture 100%, since there are too many unknown factors in the current state of things. An individual sharing PII usually does not have a complete understanding of how the architecture is built, how security measures are realised, or how trustworthy potentially involved third parties are. Even if all of this is made as transparent as possible, a slight risk always remains. With all these uncertainties in mind, it seems logical that a high amount of trust, but not 100% trust, is currently required. It also implies that the user is not able to trust the service at the level their privacy needs would demand – leaving room for improvement on the side of IoT device and software vendors. We have therefore included an "Ideal Trust" line in Figure 2 to indicate the user trust levels that vendors should be striving towards.
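
The relation in Figure 2 can be sketched as a simple mapping from privacy need to trust (a minimal illustration assuming linear curves; the exact curve shapes in [1] may differ). The 0.75 ceiling corresponds to the currently achievable trust discussed above, while the ideal line matches the privacy need one-to-one.

```python
# A minimal sketch of the relation shown in Figure 2, assuming linear curves.
# The 0.75 ceiling for currently achievable trust is read off the figure;
# the exact curve shapes in [1] may differ.
def required_trust(privacy_need: float, ceiling: float = 0.75) -> float:
    """Trust a user can currently place in a service, capped below 1.0."""
    return min(1.0, max(0.0, privacy_need)) * ceiling

def ideal_trust(privacy_need: float) -> float:
    """The 'Ideal Trust' line: trust matching the privacy need one-to-one."""
    return min(1.0, max(0.0, privacy_need))

for need in (0.25, 0.5, 1.0):
    print(f"privacy need {need}: current {required_trust(need):.2f}, ideal {ideal_trust(need):.2f}")
```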

Now that we have established that trust is an extremely important attribute in getting users to share their PII with IoT services, we need to ask: how can this trust be achieved? To answer this question we take a closer look at the papers [2] and [3], which both address it.

The paper [2] aimed to create “A Conceptual Trust Model for the Internet of Things Interactions”. The model created within this work is shown in the following figure:


Figure 3: Source [2]

The model shows the attempt of a device X to evaluate a potential service provider Y. In order to deem Y trustworthy, its trust value has to be higher than a previously defined threshold TH0. If X does not have enough information to evaluate the trustworthiness of Y, it tries to gain more information from other devices which have already interacted with Y. If that is not possible, or the trustworthiness of Y is too low, X will not interact with Y.
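
This decision logic can be sketched as follows (our own simplified reading of the model in [2]; the function name, the averaging of recommendations and the threshold value are assumptions, not the paper's exact formulation):

```python
from typing import Optional

TH0 = 0.6  # assumed trust threshold; the actual value is application-specific

def evaluate_provider(own_trust: Optional[float],
                      recommendations: list) -> bool:
    """Decide whether device X should interact with provider Y.

    own_trust       -- X's trust value for Y from past interactions (None if unknown)
    recommendations -- trust values reported by other devices that have interacted with Y
    """
    if own_trust is None:
        # No direct experience: fall back to recommendations from other devices.
        if not recommendations:
            return False  # no information available, do not interact
        own_trust = sum(recommendations) / len(recommendations)
    return own_trust > TH0

# X has no own experience with Y, but two peers report their trust values.
print(evaluate_provider(None, [0.7, 0.8]))  # True: average 0.75 exceeds TH0
print(evaluate_provider(0.4, []))           # False: direct trust below TH0
```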

The paper “Ideas for a Trust Indicator in the Internet of Things” [3] suggests retrieving the trust value not only when contacting another device, but also when receiving information from another device. It establishes a distinction between a priori trust and a posteriori trust.

A priori trust can be used to decide whether or not to send a message, using a threshold level as in the model in Figure 3. This level depends on the trust in the communication channel used and the trust in the receiving object.

A posteriori trust, on the other hand, is the trust value used to decide whether a received message can be used or not. It is again composed of the trustworthiness of the channel used and the trustworthiness of the object sending the message.
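
As a simple illustration of both notions (our own sketch; [3] does not prescribe this exact combination rule), channel trust and object trust could be combined by taking their minimum, reflecting that the weakest link limits the overall trust:

```python
# A sketch of the a priori / a posteriori trust decisions described above.
# Combining channel and object trust via min() and the threshold values
# are our own assumptions, not the paper's formulation.
SEND_THRESHOLD = 0.5
USE_THRESHOLD = 0.5

def a_priori_trust(channel_trust: float, receiver_trust: float) -> float:
    """Trust before sending: limited by the weakest of channel and receiver."""
    return min(channel_trust, receiver_trust)

def a_posteriori_trust(channel_trust: float, sender_trust: float) -> float:
    """Trust in a received message: limited by the weakest of channel and sender."""
    return min(channel_trust, sender_trust)

def should_send(channel_trust: float, receiver_trust: float) -> bool:
    return a_priori_trust(channel_trust, receiver_trust) >= SEND_THRESHOLD

def should_use(channel_trust: float, sender_trust: float) -> bool:
    return a_posteriori_trust(channel_trust, sender_trust) >= USE_THRESHOLD
```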

In order to make the decision-making process easier for the user, the paper [3] suggests using a traffic-light metaphor to indicate how trustworthy a given service is. Alternatively, a more elaborate dashboard could give the user an overview of trust values and make adequate suggestions about which services to use.
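
Such a traffic-light indicator could, for example, map a trust value onto three colours; the cut-off values below are illustrative assumptions, not taken from [3]:

```python
# Mapping a trust value in [0, 1] onto a traffic-light indicator.
# The cut-off values are illustrative assumptions, not taken from [3].
def trust_indicator(trust: float) -> str:
    if trust >= 0.75:
        return "green"   # trustworthy, safe to use
    if trust >= 0.5:
        return "yellow"  # use with caution
    return "red"         # do not use / do not share PII

print(trust_indicator(0.8))  # green
print(trust_indicator(0.3))  # red
```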

Of course, there might be further ways to indicate trustworthiness to the user. All the referenced papers, however, point to the fact that for user-oriented IoT services to succeed, users will have to share personally identifiable information with the service. The user's need for privacy is therefore very high, and a certain level of trust towards the service has to be built up.

IoT service providers can help build up this trust by increasing security through the implementation of trust models.

However, we must be aware of the fact that the mere implementation of adequate security measures is not enough. To really strengthen the user's trust in the service, both the processes behind these measures and their results have to be made transparent to the user. Transparency is the key to unlocking the user's trust in services enabled by the Internet of Things.

Authors: Richa Sharma & Paul Moosmann (Fraunhofer Institute)



References

[1] Daubert, Jörg, Alexander Wiesmaier, and Panayotis Kikiras. A View on Privacy & Trust in IoT. Tech. AGT International, Germany; Telecooperation Group, Technical University of Darmstadt. Web. <https://www.informatik.tu-darmstadt.de/fileadmin/user_upload/Group_TK/filesDownload/Published_Papers/joerg15privacytrust.pdf>.

[2] Arabsorkhi, Abouzar, Mohammad Sayad Haghighi, and Roghayeh Ghorbanloo. A Conceptual Trust Model for the Internet of Things Interactions. Tech. University of Tehran, Iran Telecommunication Research Center, 2016. Web. <http://anslab.org/paper/IST2016_v7.pdf>.

[3] Leister, Wolfgang, and Trenton Schulz. Ideas for a Trust Indicator in the Internet of Things. Tech. IARIA, 27 May 2012. Web. <https://www.thinkmind.org/index.php?view=article&articleid=smart_2012_2_10_40043>.
