
A Tale of Connected Cars and Overlapping Regulations

by Tea Mustać | January 2024


IntelliCar is a Europe-based company that recently started producing smart cars for the European market. In order to get the desired answer when looking into the magic mirror and asking who has the smartest car of them all, IntelliCar thought long and hard and decided to equip their super smart cars with: facial and emotion recognition that automatically adjusts the car temperature and sends warnings when the driver dozes off, optional usage-based car insurance, their very own ChatGPT-powered virtual assistant, and a whole bunch of other safety- and driving-experience-enhancing technologies. However, the three mentioned already suffice to make my point, so I will stop myself here. Now, to be fully honest, any single one of the three listed technologies would be enough to trigger the application of the EU Data Act, the GDPR and the AI Act, but I wanted to mention a couple of interesting provisions of the EU Data Act (on which this article will focus the most), so bear with me here.

GDPR

First things first: the situation with the GDPR is pretty straightforward in the described case. We have three technologies in the car, all of which will collect (a lot of) personal data.

The car will first collect facial data in order to recognize the driver and check whether they have given consent for the subsequent processing operations. (Now, we can’t expect IntelliCar to account for this initial act of processing as well; it’s just all too complicated, and the dominant players aren’t paying much attention to it either, so surely, as a startup, they can afford to look the other way?) If consent is recorded, the car will continue to collect and process facial expressions in order to adjust the car temperature, send alerts if signs of drowsiness appear, and even ask the driver what’s wrong through its voice assistant feature. Second, if the driver also opted for usage-based insurance, the car will collect usage data that can be ascribed to that particular identified and consenting driver. That data will then be transferred to the insurance company, which will process it and adjust the insurance premiums. Finally, saying “Hey IntelliCar” (or any name the user chooses) activates the car’s voice assistant. From then on, an almost unlimited number of requests can be made to the car, including playing music, asking for directions, or even looking things up online, because, as you remember, our virtual assistant is powered by ChatGPT and hence reasonably capable of handling such requests. All of the collected and processed data is definitely personal, as the face, the voice, and the habits of a particular (already identified) driver all constitute information based on which someone (most obviously IntelliCar in this case) can identify the driver.
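To make these flows concrete, here is a minimal, purely illustrative Python sketch of the consent gates just described. Every name in it (Driver, analyze_expression, the print stand-ins for actuators and data transfers) is hypothetical and mine, not anything IntelliCar or the GDPR prescribes; the point is simply that both the biometric and the insurance pipelines hinge on a recorded lawful basis.

```python
from dataclasses import dataclass

@dataclass
class Driver:
    driver_id: str
    consented_to_face_processing: bool = False
    opted_into_usage_insurance: bool = False

def analyze_expression(frame: bytes) -> tuple[str, bool]:
    # Stand-in for the onboard emotion/drowsiness model.
    return "neutral", False

def handle_face_scan(driver: Driver, frame: bytes) -> None:
    # Note: even this identification step is itself processing of
    # biometric (personal) data and needs its own legal basis.
    if not driver.consented_to_face_processing:
        return  # no recorded consent: discard the frame, process nothing
    mood, drowsy = analyze_expression(frame)
    print(f"adjusting cabin temperature for mood: {mood}")
    if drowsy:
        print("voice alert: you seem tired, is everything alright?")

def forward_usage_data(driver: Driver, trip_stats: dict) -> None:
    # Usage data tied to an identified driver is personal data and may
    # only leave the car if the driver opted into usage-based insurance.
    if driver.opted_into_usage_insurance:
        print(f"sending {trip_stats} for {driver.driver_id} to insurer")

driver = Driver("driver-001", consented_to_face_processing=True)
handle_face_scan(driver, frame=b"...")
forward_usage_data(driver, {"km": 42, "hard_brakes": 1})  # silently skipped
```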

Well, okay, not much new there. The GDPR applies to connected cars, of course. There goes the first slice of bread in our sandwich.

AI Act

The situation with the AI Act is slightly more complicated, but, as we’ll see, the gist is that the AI Act still applies, if for nothing else then to assess whether there are any specific obligations under the Act to comply with.

So, let’s start with the most obvious one. Facial and emotion recognition systems are definitely machine-based systems that can generate outputs, such as, in this case, recommendations or decisions that influence physical environments, i.e. the car temperature (Article 3). IntelliCar is the one that developed and implemented the system and is, thus, also its provider. So now it only remains to be determined which (if any) obligations they have to comply with. To answer this question, we can start by confirming that facial and emotion recognition systems are provisionally listed in Annex III as high-risk AI systems. The only way to still potentially get out of all the obligations of the Act would be to conduct a risk assessment and demonstrate that their particular system does not actually pose a high risk to the affected persons, because sufficient data protection measures are in place and the recommendations and decisions made by the system are of minor importance. Even if the result of this assessment is positive, meaning the system is not that risky after all, it will still have to be thorough, documented, and submitted to the authorities.

The feature recording data for automated insurance adjustments is slightly more complex, as here it is not the company that actually has access to or implements the AI system. It simply provides the data (or at least it should). Data providers are (luckily) not a role under the AI Act, so with sufficient contractual and documentation safeguards in place we should be safe, but only provided that IntelliCar didn’t in some way significantly readjust the system to fit it to their cars, which wouldn’t be all that surprising. In that case, we are back to where we started: IntelliCar is again considered a provider and still has at least some risks to assess.

Finally, our virtual assistant might be the most troublesome of them all, as we first have to determine whether IntelliCar is a deployer or a provider of the technology. For the sake of simplicity, let’s say that in this case IntelliCar uses the ChatGPT Enterprise plug-in and only customizes it using internal data. So hopefully they are just deploying the system and can only be held responsible for choosing a potentially non-compliant one. But they can leave that problem for their future selves. First, it is time to conquer the market, whatever the (future) cost.

Data Act

Now we finally come to the last (well, definitely not the last, but the last we’ll consider here) secret ingredient in our connected car compliance sandwich: the Data Act. Here our IntelliCar will find itself under attack on all three fronts (pretty straightforwardly) as a manufacturer of a connected product. And just to linger on this Act, which has received undeservedly little public attention, there are multiple booby traps to be on the lookout for.

The Data Act primarily serves the purpose of empowering users by granting them various access rights, not just to the personal data collected during the use of connected products but also to non-personal data, such as data indicating hardware status and malfunctions (Recital 15). Now, when it comes to connected products, which are most often used by natural persons, it is fairly safe to say that a lot of the collected data will be personal. It is still good to keep in mind, though, that users have to be able to access ALL collected data (including the metadata necessary for interpreting it). And this has to be possible easily, securely, free of charge, and, at best, in a comprehensible, machine-readable, and directly accessible format. (Piece of cake!) Of course, the Act brings a whole bunch of other obligations, in particular regarding information sharing, depending on the role a particular company (or natural person) has under it. I won’t go into all of them, but I will mention a couple of particularly interesting ones relevant to my imaginary context.
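As a purely illustrative sketch of what such an access export could look like (the field names, schema version, and JSON layout are my assumptions, not anything the Act prescribes), a data holder might bundle the raw readings together with the metadata needed to interpret them:

```python
import json
from datetime import datetime, timezone

def export_user_data(records: list[dict], units: dict[str, str]) -> str:
    # Bundle the raw readings with the metadata needed to interpret
    # them, in a machine-readable, directly usable format.
    export = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "records": records,        # ALL collected data, personal or not
        "metadata": {
            "units": units,        # without units the numbers are noise
            "schema_version": "1.0",
        },
    }
    return json.dumps(export, indent=2)

print(export_user_data(
    [{"signal": "engine_temp", "value": 92, "ts": "2024-01-15T10:30:00Z"}],
    {"engine_temp": "degrees Celsius"},
))
```

The metadata block is the whole point: data that arrives without the information needed to interpret it is accessible in name only.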

The first one is the way the Act deals with trade secrets. Namely, in situations where the user cannot access the data directly, the data has to be provided to the user by the data holder. Now, a lot of this data is going to be very valuable to the company holding it, maybe even valuable enough to put it on the pedestal of a trade secret. Trade secrets are, in essence, technical or organizational information that has commercial value, is purposefully kept secret, and to which access is limited. And so, while individual data points might not merit this status, more complex collections built from those data points, potentially enriched with third-party data and even inferences, might very well merit trade secret protection. And while the GDPR would never even consider the idea that a user couldn’t access a profile built from their data, the Data Act does consider this possibility, primarily because it also governs the sharing of non-personal data. So, in certain cases where a risk of serious economic damage is demonstrated, the data holder may withhold the requested data on the basis of it being a trade secret. This exception might leave some wiggle room for companies to not share all of their valuable data after all.

The second peculiarity concerns our usage-based insurance premium, as the Act also regulates smart contracts, meaning contracts where “a computer program [is] used for the automated execution of an agreement … using a sequence of electronic records”. One example of such a smart contract could be automated insurance adjustments based on real-time data. And one important obligation in this regard is the smart contract kill switch that has to be implemented as “a mechanism … to terminate the continued execution of transactions and that … includes internal functions which can reset or instruct the contract to stop or interrupt the operation”. This kill switch poses important questions as to the consequences it has for the driver, IntelliCar, and the insurance company. Namely: who is entitled to use the kill switch? When can it be used (contracts are contracts for a reason, and their execution is in most cases a good, legally mandated thing)? What happens when someone uses it (does the premium fall back to a default mode)? And can triggering the kill switch be reversed (how do you account for the unrecorded driving time)? All of this will (most likely) have to be contractually regulated between the parties involved, and it is no trivial matter.
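To see why these questions are awkward, here is a toy Python model of such a contract. It is a sketch of the kill-switch idea only, not an actual on-chain implementation, and the fallback-to-base-premium behaviour is my assumption; it is precisely the kind of detail the parties would have to agree on.

```python
from enum import Enum

class State(Enum):
    RUNNING = "running"
    STOPPED = "stopped"

class UsageBasedPremiumContract:
    # Toy model of the 'safe termination' requirement: execution can
    # be stopped or reset through internal functions.

    def __init__(self, base_premium: float):
        self.base_premium = base_premium
        self.premium = base_premium
        self.state = State.RUNNING

    def record_trip(self, risk_score: float) -> None:
        # Automated execution: every trip adjusts the premium.
        if self.state is not State.RUNNING:
            raise RuntimeError("contract execution has been terminated")
        self.premium = self.base_premium * (1 + risk_score)

    def kill_switch(self) -> None:
        # The mandated mechanism to stop or interrupt operation. Who may
        # call it, and what the premium does afterwards, is exactly what
        # the parties would have to settle contractually.
        self.state = State.STOPPED
        self.premium = self.base_premium  # assumption: revert to default

    def reset(self) -> None:
        # Internal function to reset the contract, per the same provision.
        self.premium = self.base_premium
        self.state = State.RUNNING

contract = UsageBasedPremiumContract(base_premium=100.0)
contract.record_trip(risk_score=0.2)   # premium becomes 120.0
contract.kill_switch()                 # premium reverts to 100.0
```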

Finally, the last headache we’ll consider is that virtual assistants are also explicitly regulated by the Data Act (Article 31). A virtual assistant, in the context of the Act, means “software that can process demands, tasks or questions including those based on audio, written input, gestures or motions, and that, based on those demands, tasks or questions, provides access to other services or controls the functions of connected products”. Now, this basically opens a Pandora’s box, not just for our smart car producer but potentially also for the company developing the virtual assistant, possibly dragging them into yet another 70 pages of legislative text to comply with. (As if they didn’t have enough on their plate already.) And how the trade secret argument (or maybe excuse) would play out in this context is anybody’s guess.
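Note how wide that definition is. Even a minimal, entirely hypothetical dispatcher like the one below already ticks both boxes, accessing other services and controlling product functions; none of the routes or names come from the Act or from any real assistant.

```python
def virtual_assistant(utterance: str) -> str:
    # Route a spoken demand either to another service or to a
    # function of the connected product, per the Act's definition.
    text = utterance.lower()
    if "play" in text:
        return "calling the music streaming service"   # other services
    if "temperature" in text:
        return "setting the cabin temperature"         # product function
    if "directions" in text:
        return "querying the navigation service"
    return "forwarding the question to the language model"

print(virtual_assistant("Hey IntelliCar, play some jazz"))
```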
