📚 Now Reading: Systems Smart Enough to Know They're Not Smart Enough 💡
I would never want to listen to children's Christmas music, especially in May. Anyone who has ever met me could guess that. You know who's too dumb to figure that out? Amazon's Alexa. She tried to play some for me the other day when I asked her to turn on my lamps.
Smart wallets, cars, teddy bears, bathroom mirrors... today it seems like nearly anything can be "smart." The problem is, so often these things are actually pretty dumb. And while the whole kids' holiday music thing is a funny anecdote, much deeper problems can ensue when our smart devices don't know enough to admit when they are just guessing.
Josh Clark (my husband, an advisor to this company, and hands down one of the smartest people I have ever met) details the nuances and implications of this problem in his piece, Systems Smart Enough to Know They're Not Smart Enough. As he succinctly puts it,
Our answer machines have an over-confidence problem.
Voice interfaces, almost by definition, have to have the confidence to give one answer, and even our most basic web searches in Google now come back with what is purported to be the most correct answer. It's the "I'm Feeling Lucky" button without the button or the luck. And if the question is controversial, or the answer is hotly contested (think of all of the fake news campaigns skewing the data), then the "one true answer" is even more likely to be wrong... or racist, or violence-inciting, or even worse. As Josh writes,
The worst of these cases tend to result in contentious areas where the algorithm has been gamed. In other cases, it’s just straight-up error; Google finds a page with good info but extracts the wrong details from it...
There are clever ways to design around this problem, though, and he does a great job of laying them out. This problem matters immensely to today's designers, but it matters even more to all of us, as day-to-day users of the web. Read it here.