Jan Bidner met with Josh Clark of the American design studio Big Medium, who headlined the conference UX London this year. The topic was Designing for What’s Next: the last ten years of digital interaction have been driven by mobile; the next ten will be driven by machine learning and AI. Engineers have shown us what is possible; now it is time for designers to work out how to use it – and the presentation of data is as important as the underlying algorithm.
Machine learning services are available for free! Microsoft, Google, and Amazon offer web-friendly interfaces that anyone can try out and evaluate. UX researchers need to support the data scientists in building models that reflect the whole range of people we want to serve. Otherwise the data and the AI interfaces will be biased, developed by privileged white people in the Western world. Image analysis is a typical example of where things sometimes go wrong: a picture of a dinosaur on a measurer was described as a dinosaur on a skateboard…
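These services typically expose simple REST endpoints, which is what makes them so easy to evaluate. As an illustrative sketch only: the request shape below follows Google's Cloud Vision `images:annotate` endpoint, and the image URL is a made-up placeholder; you would need your own API key to actually send it. Asking a service to label an image can be as small as this:

```python
import json

# Endpoint of one such service (Google Cloud Vision, REST).
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def label_request(image_url: str, max_labels: int = 5) -> str:
    """Build the JSON body for a label-detection request."""
    body = {
        "requests": [
            {
                # The image URL here is a hypothetical placeholder.
                "image": {"source": {"imageUri": image_url}},
                "features": [
                    {"type": "LABEL_DETECTION", "maxResults": max_labels}
                ],
            }
        ]
    }
    return json.dumps(body)

payload = label_request("https://example.com/dinosaur.jpg")
print(payload)
```

The labels that come back (skateboard or not) are exactly the kind of output a UX researcher can evaluate against a diverse set of images.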
Bots today use natural language, even sounding human on a phone call. But they are still less capable than humans. Is that what we want? How do we create interfaces that set expectations matching what the system is actually capable of? It may be best to present the bot as a bot. (See Google Duplex.)
So what happens now? Very few interfaces go away: the keyboard is still there, touch is still there; we just keep adding more. Speech interfaces are here now, but they are not replacing the others – in an open office, for example, talking to your computer is rarely an option.
Josh Clark, Jan Bidner (23:46)