The Surveillance State will be made possible through Surveillance Capitalism

Yesterday, Apple announced ‘HomePod’ – a virtual assistant-cum-speaker in the mould of Amazon Echo and Google Home. Suffice to say that Chez Belshaw won’t be investing in one of these soon – or ever, unless some pretty basic concerns I have about privacy are addressed.

In Rise of the machines: who is the ‘Internet of things’ good for? Adam Greenfield challenges us to question why we’re so keen to let these kinds of devices into our lives and homes:

Whenever a project has such imperial designs on our everyday lives, it is vital that we ask just what ideas underpin it and whose interests it serves. Although the internet of things retains a certain sprawling and formless quality, we can get a far more concrete sense of what it involves by looking at how it appears at each of three scales: that of our bodies (where the effort is referred to as the “quantified self”), our homes (“the smart home”) and our public spaces (“the smart city”). Each of these examples illuminates a different aspect of the challenge presented to us by the internet of things, and each has something distinct to teach us.

It’s an excellent article that neatly summarises some of the problems around so-called IoT devices. The assumption with all of these things is that they serve us. That assumption couldn’t be more wrong: it’s we who eventually bend towards their algorithms and the very particular world view they’ve been programmed with:

At first, such devices seem harmless enough. They sit patiently and quietly at the periphery of our awareness, and we only speak to them when we need them. But when we consider them more carefully, a more problematic picture emerges.

This is how Google’s assistant works: you mention to it that you’re in the mood for Italian food, and then, in the words of one New York Times article, it “will then respond with some suggestions for tables to reserve at Italian restaurants using, for example, the OpenTable app”.

This example shows that though the choices these assistants offer us are presented as neutral, they are based on numerous inbuilt assumptions that many of us would question if we were to truly scrutinise them.

While we’re (with a modicum of futility) teaching schoolchildren and those new to the web to go beyond the first page of Google, the next wave of devices does away with even that ability to question what you’re being presented with. It’s like constantly pressing the “I’m feeling lucky” button:

There are other challenges presented by this way of interacting with networked information. It’s difficult, for example, for a user to determine whether the options they are being offered by a virtual assistant result from what the industry calls an “organic” return – something that legitimately came up as the result of a search process – or from paid placement. But the main problem with the virtual assistant is that it fosters an approach to the world that is literally thoughtless, leaving users disinclined to sit out any prolonged frustration of desire, and ever less critical about the processes that result in gratification.

I’m trying to raise my children in a way that makes them thoughtful and critical users of apps and the web. They know that there are different operating systems and browsers. They’re aware that DuckDuckGo protects your privacy, as opposed to Google, Bing, and the like. But faced with a virtual assistant like Siri, all that goes out of the window. All of a sudden, they’re interacting with a ‘thing’ – and they’re even less aware of the bias and skewing that comes with something that’s been programmed to give you frictionless access to pre-programmed information and services.

We’ve messed about with Google Now and Siri before, but the main reason I don’t want something like Amazon Echo in my home is that it normalises ‘corporate surveillance’. The conversations that happen in my home are private, and I want them to stay that way:

Virtual assistants are listening to everything that transpires in their presence, and are doing so at all times. As voice-activated interfaces, they must be constantly attentive in order to detect when the “wake word” that rouses them is spoken. In this way, they are able to harvest data that might be used to refine targeted advertising, or for other commercial purposes that are only disclosed deep in the terms and conditions that govern their use. The logic operating here is that of preemptive capture: the notion that companies such as Amazon and Google might as well trawl up everything they can, because no one knows what value might be derived from it in the future.

I guess that I’m an early adopter slowly trying to reform myself. Those on the cutting edge have historically had to put up with a lot. Buggy and half-finished software coupled with clunky hardware is often forgiven because the idea, the vision, is compelling. These days, it’s less that the hardware and software of early offerings are problematic — although that of course can also be an issue — and more to do with the terms of service and the privacy policy you’re forced to sign up to:

Put aside for one moment the question of disproportionate benefit – the idea that you as the user derive a little convenience from your embrace of a virtual assistant, while its provider gets everything – all the data about your life and all its value. Let’s simply consider what gets lost in the ideology of convenience that underlies this conception of the internet of things. Are the constraints presented to us by life in the non-connected world really so onerous? Is it really so difficult to wait until you get home before you preheat the oven? And is it worth giving away so much, just to be able to do so remotely?

Greenfield moves swiftly on from discussing the home to talking about the really scary proposition: smart cities. After all, we can choose not to have the devices mentioned above in our homes, but we don’t get that choice when it comes to civic spaces:

A broad range of networked information-gathering devices are increasingly being deployed in public space, including CCTV cameras; advertisements and vending machines equipped with biometric sensors; and the indoor micropositioning systems known as “beacons” that, when combined with a smartphone app, send signals providing information about nearby products and services.

The picture we are left with is that of our surroundings furiously vacuuming up information, every square metre of seemingly banal pavement yielding so much data about its uses and its users that nobody yet knows what to do with it all. And it is at this scale of activity that the guiding ideology of the internet of things comes into clearest focus.

Quite apart from the fact that I just don’t want to be tracked, thank you very much, this is a social justice issue. While advocates of smart cities see data as neutral, we’re fully aware that it’s nothing of the sort:

There is a clear philosophical position, even a worldview, behind all of this: that the world is in principle perfectly knowable, its contents enumerable and their relations capable of being meaningfully encoded in a technical system, without bias or distortion. As applied to the affairs of cities, this is effectively an argument that there is one and only one correct solution to each identified need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something that can be encoded in public policy, without distortion. (Left unstated, but strongly implicit, is the presumption that whatever policies are arrived at in this way will be applied transparently, dispassionately and in a manner free from politics.)

As Greenfield notes, every aspect of this approach is questionable; you can deploy as many sensors as you want, but they can only capture what they were designed to capture. There’s no way they can gather enough information to serve as an adequate basis for policy. As such, all data is subject to interpretation:

Advocates of smart cities often seem to proceed as if it is self-evident that each of our acts has a single, salient meaning, which can be recognised, made sense of and acted upon remotely by an automated system, without any possibility of error. The most prominent advocates of this approach appear to believe that no particular act of interpretation is involved in making use of any data retrieved from the world in this way.

But data is never “just” data, and to assert otherwise is to lend inherently political and interested decisions an unmerited gloss of scientific objectivity. The truth is that data is easily skewed, depending on how it is collected. Different values for air pollution in a given location can be produced by varying the height at which a sensor is mounted by a few metres. Perceptions of risk in a neighbourhood can be transformed by slightly altering the taxonomy used to classify reported crimes. And anyone who has ever worked in opinion polling knows how sensitive the results are to the precise wording of a survey.

We already see this in the workplace, and in schools. Any time that complex forces and interactions are reduced to a single data point, we’re in trouble. Unsurprisingly, these reductionist approaches tend to favour those who are already privileged, and marginalise minority (or unpopular) voices.

If the formulas behind this vision of future cities turn out to be anything like the ones used in the current generation of computational models, life-altering decisions will hinge on the interaction of poorly defined and subjective values. The output generated by such a procedure may turn on half-clever abstractions, in which complex circumstances resistant to direct measurement are reduced to more easily determined proxy values: average walking speed stands in for the “pace” of urban life, while the number of patent applications constitutes an index of “innovation”, and so on.

Quite simply, we need to understand that creating an algorithm intended to guide the distribution of civic resources is itself a political act. And, at least for now, nowhere in the current smart-city literature is there any suggestion that either algorithms or their designers would be subject to the ordinary processes of democratic accountability.

The concept of ‘smart cities’ is being pushed by governments who want to know more about their citizens and by for-profit companies looking to monetise data and sell devices. We need a critical third voice in there, one representing the people. My concern is that the recent terrorist atrocities will pave the way for a surveillance state, made possible through surveillance capitalism. Right now, that sounds like a bit of a nightmare.