Ben Hammersley

Learning to live in the future


Possible Problems of Persona Politeness

One of my AIs is funnier than the other. This is proving to be a problem.

But first, consider how the amazing becomes normal very quickly. It feels like I've been using Siri on my phone my entire life, Siri on the iPad charging by my bed since forever, and Siri on my watch since last summer. I've not, of course. She's only four years old this October. But nevertheless, as with any new life-spanning tech, she's become background-banal, in a good way, remarkably quickly. Voice interfaces are, without a doubt, A Thing.

And so it is with Alexa, the persona of the Amazon Echo, living in my kitchen for the past fortnight. She became a completely integrated part of family life almost immediately. Walking into the kitchen in the morning, ten-month-old daughter in one hand, making my wife tea with the other, I can turn on the lights, listen to the latest news from the radio, check my diary, and order more milk, just by speaking aloud, then turn it all off again as I leave. It's a technological sprezzatura sequence that never fails to make me smile. Thanks, Alexa, I say. Good morning.

But there's the rub. Alexa doesn't acknowledge my thanks. There's no banter, no trill of mutual appreciation, no silly little "it is you who must be thanked" line. She just sits there sullenly, silently, ignoring my pleasantries.

And this is starting to feel weird, and makes me wonder if there's an uncanny valley for politeness. Not one based on listening comprehension, or natural language parsing, but one based on the little rituals of social interaction. If I ask a person, say, what the weather is going to be, and they answer, I thank them, and they reply back to that thanks, and we part happy. If I ask Alexa what the weather is, and thank her, she ignores my thanks. I feel, insanely but even so, snubbed. Or worse, that I've snubbed her.

It's a little wrinkle in what is really a miraculous device, but it's a serious thing: The Amazon Echo differs from Siri in that it's a communally available service. Interactions with Alexa are available to, and obvious to, everyone in the house, and my inability to be polite with her has a knock-on effect. My daughter is too young to speak yet, but she does see and hear all of our interactions with Alexa. I worry what sort of precedent we are setting for her, in terms of her own future interactions with bots and AIs as well as with people, if she hears me being forced into impolite conversations because of the limitations of her household AI's interface. It's the computing equivalent of being rude to waitresses. We shouldn't allow it, and certainly not by lack of design. Worries about toddler screen time are nothing compared to future worries about inadvertently teaching your child to be rude to robots.

It's not an outlandish thought. I, myself, am already starting to distinguish between the personalities of the different bots in my life. Phone Siri is funnier than Watch Siri; Slackbot is cheeky, and might need reeducating; iPad Siri seems shy and isolated. From these personalities, from these interactions, we'll take our cues. Not only in how to interact, but when and where, and what we can do together. If Watch Siri was funnier, I'd talk to her more. If Phone Siri was more pre-emptive, our relationship might change. And it's in the little, non-critical interactions that their character comes through.

All this is, of course, easier said than done by someone who isn't a member of the Amazon design team - hey there lab126, beers on me if you're in LA soon - but there's definitely interesting scope for growth in the seemingly extraneous stuff that actually makes all the difference. Personality Design for AIs. That's a fun playground. Is anyone playing there?
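
To make the point concrete, the politeness loop needs barely any machinery at all. Here's a minimal sketch in Python - the intent name and handler shape are invented for illustration, emphatically not the real Alexa Skills Kit - of a persona that simply acknowledges thanks:

```python
# A sketch only: a hypothetical thank-you handler for a voice persona.
# "ThankYouIntent" and the handler signature are invented for
# illustration; they are not taken from any real SDK.
import random

THANKS_REPLIES = [
    "You're welcome.",
    "My pleasure.",
    "It is you who must be thanked.",
]

def handle_intent(intent_name: str) -> str | None:
    """Return a spoken reply, or None if this intent isn't ours."""
    if intent_name == "ThankYouIntent":
        # Vary the reply so the ritual doesn't wear thin with repetition.
        return random.choice(THANKS_REPLIES)
    return None
```

Trivial, of course. The plumbing isn't the hard part; the personality is.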

Spectator Cockroaches, Sand, and the Social Facilitation of Skeuomorphs

It was an experiment on cyclists in 1898 that first showed us how we might live with robots. It's a really interesting observation. Let me tell you about it.

So. I love bots. Give me a pseudo-human interface, a smattering of natural language, and a computery voice, and I'm all yours. This year I'm working on a project to discover just how useful they can be. With wearables and systems like the Amazon Echo, we're about to need to deal with a lot of these things, and it seems to me it's not so much the technology as the user psychology that we need to pay the most attention to, and so we need to ask what we know about these things already.

Discussing this with Dr Krotoski, my very local social psychologist, I was pointed to the seminal paper: Norman Triplett, "The Dynamogenic Factors in Pacemaking and Competition", The American Journal of Psychology, Vol. 9, No. 4 (July 1898), pp. 507-533.

This is basically the ur-text of Social Psychology. You can read the original paper for details of the experiment, but the simplified conclusion was this: if a person is being watched, they find easy things easier, and harder things harder. It turns out, from other experiments, that this is true for many species. Cockroaches, for example, will run a simple course more quickly if they have spectators too, but take longer to find their way through a complex maze (Zajonc, R. B. (1965). Social facilitation. Science, 149, 269-274).

A maze for cockroaches, with spectator seating.

Further research, specifically "Social facilitation effects of virtual humans" (Park, Human Factors, 2007 Dec; 49(6): 1054-60), went on to the nub of it: "Virtual Humans" produce the same social facilitation effect. In other words, the presence of a bot will make simple things simpler, and hard things harder, simply by being there, "watching".

This, it seems to me, is quite a big deal. If we're designing systems with even a hint of skeuomorphic similarity to a conscious thing - even if it just has a smiley face and a pretty voice - it might make sense for it to ostentatiously absent itself when it detects the user doing something difficult. This might be the post-singularity reading of the Footprints In The Sand story, but nerd-rapture aside, it's an interesting question: when is it best for context-aware technology to decide to disappear? When the going is easy, or when the going gets tough?
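
What might that "absenting" decision look like? A toy sketch, under heavy assumptions: every signal below is a stand-in, because how you would actually estimate cognitive load from ambient data is very much an open question.

```python
# A sketch, not a spec: one way a context-aware persona might decide
# to "ostentatiously absent itself". The proxy signals (typing error
# rate, app switching, time on task) are stand-ins for a real
# cognitive-load estimate.
from dataclasses import dataclass

@dataclass
class TaskContext:
    error_rate: float      # e.g. backspaces per keystroke
    app_switches: int      # window/context switches in the last minute
    minutes_on_task: float

def estimate_difficulty(ctx: TaskContext) -> float:
    """Crude 0-to-1 difficulty score from the proxy signals."""
    score = ctx.error_rate * 2.0 + ctx.app_switches * 0.05 + ctx.minutes_on_task * 0.01
    return min(score, 1.0)

def persona_should_be_visible(ctx: TaskContext, threshold: float = 0.6) -> bool:
    # Social facilitation says: easy things easier, hard things harder.
    # So the watcher withdraws when the going gets tough.
    return estimate_difficulty(ctx) < threshold
```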

Furthermore, I'm not sure we yet know the Uncanny Valley-like threshold of "humanness" that triggers the social facilitation effect: do cameras have the same effect? Or even just the knowledge that someone is surveilling you? Either way, this has serious implications beyond AI design.

For example, the trend for the quantified workplace, where managers can gather statistical data on their employees' activities, might be counterproductive: not simply because of the sheer awfulness of metrics, but because the knowledge that they are being watched might make the more complex tasks an employee needs to do inherently more difficult, and hence less likely to be attempted in the first place.

For the most challenging tasks we face, the problems requiring the most cognitive effort, and the most imaginative approaches, we may find that many of our current social addictions - surveillance, testing, and so on - might be deeply harmful. "It looks like you're writing a letter," as Clippy would say, "would you like me to make that subconsciously more difficult for you?"

Future-Dense Sentences

There's a technique for pondering emerging technologies that originated, I think, with Jamais Cascio. Imagine you'd been instantly transported back in time x years in a particular place. How many years would you have to have travelled before you noticed you had slipped back in time? What would give it away? People's clothing? The music on the radio? Headlines on newspapers in the first paper shop you come across? The cars, the phones people are carrying, the TVs you can see through the windows you pass? Sat where you are now, could you tell if you'd suddenly dropped back to 2006? 2001? 1989?

Ok, you're on the internet, so that breaks that, but it's a fun game to play if you travel a lot, and it can also be quite revealing within institutional buildings. Applied to business processes or cultural values, it can uncover a good deal too.

My variation on this is to look for the places, or the ideas, or the writing, that are the most future-dense. What sentences can we find that contain the most stuff that, were we to fall back in time only a few years, would make no sense whatsoever? Which contain the most embedded understanding of wholly modern concepts? Here's a good one, from this morning:

See what I mean? Go back ten years, and that would be crazy. Go back thirty years, and you'd have to start from such first principles, you'd be considered mad.

Here's another from earlier this year, that at first glance reads, technologically at least, entirely, boringly, banal:

If you fell back thirty years to 1985, think of all the things about this screenshot you'd have to explain, and all the layers you'd have to fill in before you could. "Ok, so...[deep breath] the President of the United States is a black man named Barack Obama. Yes, really. This is a message he has left on a microblogging service on the web...ermmm, it's a service based around a new hypertext protocol on the internet. Yes, that thing the scientists use. Kinda like a BBS, yes. But with a few billion users. Yes. Billion. With a B. Anyway, he's saying he's going to binge-watch a show on Netflix. Netflix? It's a streaming video site...oh...well, it's a place...errrrr...Retweets? Spoilers? ...Y'know...I think we should drop it."

Anyway, looking for these brings me to a couple of things. Firstly, it's a useful koan-like personal thought experiment to find new insights around a place or an organisation or a cultural moment. At the very least, it's entertaining.

But secondly, I think it raises, once again, the realisation that our future world is heavily, fundamentally layered: that problems have no simple solution that a single technology plonked on top will fix. Instead, it is the interplay of the complex systems - complex, not necessarily complicated - of culture, technology, politics and so on that will come together to make tomorrow's banal commonplace thing. That complexity, I think, is both deeply exciting, and - hopefully - humbling. The future is not about the tech. It's perhaps the other way around.

The Internet of Tells: Constant biomonitoring and some uses

In poker, they call them tells. The little physical signs that we can't control that give away our inner mental state. What happens if we make these privately machine-readable?

For me, a lot of the fun of future technologies isn't new tech per se, but the coming together of three or four older things, refined by new physical capabilities and design understandings, to push over the Hill of Single Use into a new valley of possible products. A strained metaphor, perhaps, so let me give you an example. Heart rate monitors have been around for years. I've been running with one strapped to my chest for at least a decade myself, and in those days the data was restricted to its one single device (and later to a single app, barring the export of averages and such very high-level takes). You certainly didn't wear an HR monitor all the time, and even if you did, you couldn't use what it saw for anything other than athletic training.

But 2015 will see at least two products come to mass-market that might do such a thing: the Jawbone UP3, and the Apple Watch.

The back of the Apple Watch, showing the HR monitor

The Apple Watch has an HR monitor on its back, has local processing, and a data connection (and through that, infinite cloud processing) - but more than that, it has access to everything else we might do digitally. Not just publishing capability (send my HR to Facebook, tweet when I go over 180, and so on) but a form of sense-making too. The complex network around the Apple Watch knows an awful lot about your personal context - that's really its point after all - and so it could start to make all sorts of correlations between HR and that context.
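
As a sketch of what "making correlations" might look like in practice - with the caveat that all the plumbing here is hypothetical, and none of it is a real watch API - you could imagine something as simple as flagging readings that are outliers against a personal rolling baseline, tagged with whatever context is to hand:

```python
# A sketch of noticing the "tell": flag heart-rate readings that are
# outliers against a personal rolling baseline, and tag them with the
# current context (a calendar entry, a location, whatever the system
# knows). The data feed itself is assumed, not real.
from collections import deque
from statistics import mean, stdev

class TellDetector:
    def __init__(self, window: int = 120):
        self.samples: deque[float] = deque(maxlen=window)

    def observe(self, bpm: float, context: str) -> str | None:
        """Return a tagged tell if this reading is a spike, else None."""
        tell = None
        if len(self.samples) >= 30:  # wait for a usable baseline
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and (bpm - mu) / sigma > 2.5:
                tell = f"HR spike ({bpm:.0f} bpm) during: {context}"
        self.samples.append(bpm)
        return tell
```

Feed it observe(142, "Monday 10am meeting") often enough, and the correlations start to write themselves.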

We know that changes in HR can reflect changes in psychological state. Your heart beats faster when you're aroused or stressed or angry. And we now have a device that can notice that tell, and try to work out what is causing it. What might that do? Here are some scenarios, and possible products:

  1. One to One. You regularly meet with someone, Mr X, who drives you insane. A deeply stressful person, who causes your heart to beat hard as you restrain yourself from violence. An asshole of the highest order. Your system detects the increase in heart rate, and sees it happens whenever you have a calendar appointment with Mr X. Matching the appointment data with LinkedIn, it identifies Mr X, and posts the "Meeting with Mr X is stressful" posit to a LinkedIn API-using offshoot, a "Rate My Meeting" clone. Over time, Mr X's rating is further added to by others' systems, perhaps without user input at all, flagging Mr X as an (algorithmically designated) asshole. The system acts accordingly.
  2. Many to One. You walk to work down Oxford Street, but prefer to slip through side streets if the foot traffic is annoyingly dense. Luckily, the HR monitors on the wrists of tens of Apple Watch wearers already on Oxford Street are spiking higher than they usually average here, at this time of day, with this sort of weather. Your system notices this, and gently nudges you away from the area, pre-emptively avoiding the stress that others are giving away to the network. (A code sketch of this scenario follows the list.)
  3. Many to Many. You're at a concert, and having a splendid time. Your HR is rising as the music builds, and from your watch you can see that others in the crowd are feeling it too. The crowd average HR goes past 140...141...144....147.......149......and as soon as it reaches 150, it triggers the drop, the stage pyros, the lasers, the dancing girls. The musicians onstage, able to reach their musical climax just as the audience reaches theirs. That's showbusiness.
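
To show quite how little magic the second scenario needs - as promised above - here's a sketch. The anonymised feed of (street, heart rate) samples is assumed, and the per-street baselines are hard-coded where a real system would learn them from history:

```python
# Scenario 2, sketched: steer a walker away from streets where the
# crowd's average heart rate is running well above its usual baseline.
# The sample feed and the baselines are hypothetical.
from collections import defaultdict
from statistics import mean

BASELINES = {"Oxford Street": 75.0, "Wardour Street": 72.0}  # learned, in reality

def stressed_streets(samples: list[tuple[str, float]], ratio: float = 1.15) -> set[str]:
    """Streets whose crowd-average HR is well above baseline."""
    by_street: dict[str, list[float]] = defaultdict(list)
    for street, bpm in samples:
        by_street[street].append(bpm)
    return {
        street for street, bpms in by_street.items()
        if street in BASELINES and mean(bpms) > BASELINES[street] * ratio
    }

def pick_route(routes: list[list[str]], samples: list[tuple[str, float]]) -> list[str]:
    """Prefer the route crossing the fewest stressed streets."""
    hot = stressed_streets(samples)
    return min(routes, key=lambda route: sum(street in hot for street in route))
```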

None of these use-cases, and there are many more, require a new magical technology. Apart from the actual heart-monitoring, you could prototype them all today quite (handwaving here) easily. But none of them would work without a good installed base of constantly available HR monitors already in place. That, if Apple and Jawbone and the rest get their way, is what we're about to have. It's a whole new product/service category, being unlocked almost by mistake.