
ITS & Ethics: yes means yes

There is an increasing wealth of information available to create personalised transport solutions – and the possibilities are exciting. But, Andrew Bunn warns, ITS companies have a duty to be explicit in explaining what people’s data is going to be used for
March 4, 2019 Read time: 7 mins
MaaS providers must be transparent about the way they use customer data, or it’ll end in tears | © Daniel Chetroni | Dreamstime.com

The advent of artificial intelligence (AI), Big Data and the Internet of Things (IoT) has brought about a new age of tech-driven mobility where users continue to trade privacy for convenience, safety and security. This trade has been going on for years, and is largely linked to the increased personalisation of data: rather than simply recording how many vehicles pass through a toll booth or how many metro tickets are purchased, transit authorities and Mobility as a Service (MaaS) companies now build profiles for users which show how many times a person uses a certain toll booth or which metro lines they use every day.

This phenomenon is quickly becoming even more pervasive, with MaaS and similar models creating user profiles which track routes across different modes. All of this data is collected and used for the benevolent purpose of improving the transportation network for everyone and providing the best user experience possible - but there are many questions to ask, and many choices for MaaS providers to make, in pursuing these abstract, data-rich service models.

Personalised mobility


Though MaaS is a multifaceted industry with many devices, models and technologies involved, all have begun moving towards technology driven by the passive collection, analysis and application of user-specific data points. This means that the ‘mass’ is being taken out of ‘mass transit’: the faceless mob is gradually becoming a collection of individual users who happen to be using a particular mode at the same time. While customising transit certainly feels like a step in the right direction, treating people as individuals in a transportation network necessitates personalised mobility profiles and carries a new set of responsibilities that MaaS providers must address.

In the case of MaaS, there are two primary individual rights being affected: privacy and choice. Passively collecting deep, personalised data profiles might be incredibly efficient, especially when the data is fed into AI for analysis, but it also presents a potentially massive infringement of the user’s right to privacy. Users may have tapped ‘agree’ when they downloaded the app, but how much do they know about what data is being collected and stored? How is that data being used, and who else is it being shared with?

For those who wish to create services which consolidate different mode providers, do all the providers within the service get access to the user’s data profile? More importantly, are local or regional governments such as departments of transportation or transit authorities sharing or receiving similar data, and can enforcement bodies like police forces appropriate the data to track suspicious or criminal individuals?

Just because providers are collecting data does not mean they are unethically infringing on their users’ privacy; it is how that data is used that matters. Security should be of the utmost concern. Not all systems lend themselves to an immensely secure platform like blockchain, but it can be easy to forget the dangers of keeping large quantities of personal data in one place, and how difficult it is to secure that data. The main conflict is that mobility data must be lightweight enough to allow nimble sharing between different platforms, but remain secure enough to protect the individual. Each provider must find the right balance between the two, knowing that the user will likely agree to nearly anything - until there is a reason for concern.
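
As a purely illustrative sketch of that balance, the snippet below shows one way a provider might pseudonymise trip records before passing them to partner operators: the real identity stays in-house, while a lightweight keyed token and coarse location zones travel with the trip. All names here (TripRecord, pseudonymise_trip, PROVIDER_KEY) are hypothetical and do not describe any real provider’s system.

```python
# Illustrative sketch only - not any real MaaS provider's API.
# Shows pseudonymisation: partners receive a keyed token, never the
# user's identity, so the shared record stays lightweight but private.
import hashlib
import hmac
import secrets
from dataclasses import dataclass

# Secret key held by the originating provider; partners never see it,
# so they cannot reverse a token back to a user identity.
PROVIDER_KEY = secrets.token_bytes(32)

@dataclass
class TripRecord:
    user_id: str       # real identity, kept in-house
    mode: str          # e.g. 'metro', 'bike-share'
    origin_zone: str   # coarse zone rather than an exact GPS point
    dest_zone: str

def pseudonymise_trip(trip: TripRecord) -> dict:
    """Return a partner-safe view of the trip: the user is replaced
    by a keyed hash, and only coarse location zones are passed on."""
    token = hmac.new(PROVIDER_KEY, trip.user_id.encode(),
                     hashlib.sha256).hexdigest()[:16]
    return {"user_token": token, "mode": trip.mode,
            "origin": trip.origin_zone, "dest": trip.dest_zone}

trip = TripRecord("alice@example.com", "metro", "zone-4", "zone-1")
print(pseudonymise_trip(trip))
```

Because the token is derived with a key the partner never holds, the same user can still be recognised across trips for network planning, without any partner being able to recover who that user actually is.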

Freedom illusion


The second individual right to keep in mind is how much choice a person has when interacting with a transportation network. Different modes already regulate choice in different ways: airlines offer flights at different times, roads offer alternative routes via highways and turnpikes, and transit is the most rigid, typically offering one route per mode per time period. Transit is also, by some metrics, the most efficient way to move people from point A to point B. Perhaps the main advantage of MaaS is that it attempts to remove the variables introduced by individual drivers, while offering users as many efficient choices as possible through access to various modes and routes to the same destination.

Unlike privacy, the regulation of choice is something that should be considered at a regional level. Some populations are comfortable with having limited but efficient options, while others, especially in less urban areas, are resistant to the idea of giving up their cars. While there are many legitimate arguments that the freedom provided by owning a car is entirely an illusion, this is still not a decision that a MaaS provider can, or should, make for its users. Just as providers must find a balance between security and fluidity for their user data, they must also find a balance between optimising the network and allowing users a choice in how they get to their destination.

One of the closest comparisons for the MaaS sector is the home automation industry. Both rely heavily on the IoT, AI and personalised data profiles, and home automation has gone through many growing pains that MaaS will have to deal with eventually.


For example, Amazon’s home security subsidiary Ring was the subject of negative media coverage early this year. Its software uses facial recognition to build visitor and user profiles - a practice from which customers can benefit, but one that must be explained to them clearly.

Social experiment


There has also been criticism over the past year of Amazon, Google and Microsoft for the way that their products might be used to infringe civil liberties. This brings up many of the same questions that the MaaS industry needs to consider. AI-driven facial recognition services have helped to find lost and abducted children, and to track suspects - but extending these services to include neighbourhood and even in-home security footage is very concerning to many people.

As with intelligent transportation systems, one of the primary objections to home automation is user privacy. Scandals resulting from a lack of transparency or unethical security practices could be very damaging.

Any engineered system in society, whether a new traffic pattern or a major modification to the transportation network, is a sort of social experiment where the subjects have very little choice in participating. Unlike a new product on the shelves - which allows people to choose how much risk they are willing to assume - the public must use roads and transit every day with few options. It is therefore the ethical responsibility of those implementing a change in this system to both uphold the rights of the participants and to approximate, as closely as possible, the consent of the participants.

It’s clear that these concerns should be addressed by MaaS providers, but how they do so is equally important. An idea common in the medical field for several decades is ‘informed consent’. In contrast with the typical definition of consent, which requires only a voluntary agreement, informed consent consists of both knowledge and well-reasoned voluntariness.

Explicit warning


Customers should be given the opportunity to make a fully informed, well-reasoned decision about where they want to draw the line between privacy and convenience. This opportunity is best provided through transparency on the provider’s end, but too much transparency can negatively affect the efficiency and seamlessness of the tool. Maintaining an ethical balance of transparency and seamlessness is critically important when bringing MaaS to the public, especially when approaching older or suburban demographics who may be uncomfortable with giving up cars and handing over their personal data.

Every social experiment contains some risk to the public, and this risk should not be suppressed in fine print or hidden deep within websites. MaaS providers should explicitly inform their users of the risks to their privacy in particular, since there is a great deal of trust implied in the ‘I agree’ tap when downloading an app and granting location data access to a new provider. Informed consent means going beyond the bare minimum, and delivering to users a clear explanation of how their data is used, in terms designed for the lay public.
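
To make that idea concrete, here is one hypothetical shape such granular, informed consent could take in software: each data category carries a plain-language explanation shown before the choice is made, and everything is off by default. The names (ConsentItem, ConsentProfile) are invented for this sketch, not drawn from any real app.

```python
# Hypothetical sketch of granular, informed consent for a MaaS app.
# Each data category is explained in lay terms and is off by default;
# nothing here reflects any real provider's system.
from dataclasses import dataclass, field

@dataclass
class ConsentItem:
    category: str     # what is collected, e.g. 'location_history'
    explanation: str  # plain-language description shown before opt-in
    granted: bool = False

@dataclass
class ConsentProfile:
    items: dict = field(default_factory=dict)

    def ask(self, item: ConsentItem) -> None:
        # In a real app this would be an interactive prompt; the point
        # is that the explanation is shown *before* the choice is made.
        print(f"{item.category}: {item.explanation}")
        self.items[item.category] = item

    def allows(self, category: str) -> bool:
        item = self.items.get(category)
        return item is not None and item.granted

profile = ConsentProfile()
profile.ask(ConsentItem(
    "location_history",
    "We store the routes you travel to suggest faster connections. "
    "This data is shared with partner operators in pseudonymised form."))
print(profile.allows("location_history"))  # False until explicitly granted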

Providing options while managing user choice, keeping data both secure and fluid, and carefully protecting users’ privacy is a delicate dance which mobility providers must take very seriously. Playing fast and loose with user data might allow a great deal of growth and profit in the beginning, but in the end the public values morality over money.
