The first post in this series about AI looked at how even dedicated chess engines powered by “weak” AI now routinely beat grandmasters. Though the algorithms are likely still improvable, they evaluate permutations many times faster than the human brain.

Whilst it seems uncontentious that AI should support decision making, and that its development should be informed by human constructs such as games, real-life situations are less tangible and well defined, particularly in the scope of their domains. A chess board’s 64 squares, or even a Go board’s 361 points, are hardly expansive. Our domains are far larger, and complicated by morality, accountability, value judgements and fallibility.

That said, with parameters and variables set up, and game theory and deep learning focusing calculations on just the most promising outcomes, AI is becoming useful. Beyond outstanding play and situational management, the ability to manipulate huge data sets across multiple dimensions to find new insights offers extraordinary possibilities.


Google Analytics can already show how a website performs against similar sites and criteria, e.g. bounce rate. I look forward to when it offers an opinion about why visitors leave a site too quickly, e.g. the body-copy font size is likely too small to be legible to 70% of the visiting demographic.


Game: “to use your knowledge of the rules to obtain benefits from a situation, especially in an unfair way”



DeepMind’s AlphaZero, a strong AI which, with some customisation, is now world champion at chess, shogi and the more complicated game of Go (AlphaGo Zero), is also the only entity on the planet (as AlphaFold) capable of crunching mind-boggling permutations to predict the possible structures of folding protein molecules.

Perhaps the most extraordinary aspect of AlphaZero’s achievement is how it trained itself. In December 2017, starting with just the rules, it took only a few hours to comprehensively beat Stockfish 8, the reigning AI chess champion (these days humans are “not very close”).

Although many chess grandmasters are involved with AlphaZero, it is more like an artificial general intelligence, or strong AI, with groundbreaking deep learning. Strong AI can learn to perform new and different functions, often in unfamiliar ways. Stockfish, on the other hand, is a narrow or weak AI, capable only of playing chess in a more conventional style.


The future is already here – it’s just not evenly distributed.

William Gibson 2003


AI, being logical, would appear to make interactions more straightforward, but that might not be a great experience for people. Game theory mathematics is already used to model economic and sociological situations, but what will happen when people encounter pure logic and “computer says no”? Mr. Spock didn’t always enjoy harmonious relations.
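To make the game-theoretic modelling mentioned above a little more concrete, here is a minimal sketch that finds the Nash equilibrium of a classic prisoner’s dilemma by brute force. The payoff values are invented for illustration and are not taken from any real economic model:

```python
from itertools import product

# Hypothetical prisoner's dilemma payoffs: (row player, column player).
# Strategies: 0 = cooperate, 1 = defect.
PAYOFFS = {
    (0, 0): (3, 3),  # both cooperate
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),  # row defects, column cooperates
    (1, 1): (1, 1),  # both defect
}

def is_nash(row, col):
    """A strategy pair is a Nash equilibrium if neither player can
    improve their own payoff by unilaterally switching strategy."""
    r_pay, c_pay = PAYOFFS[(row, col)]
    best_row = all(PAYOFFS[(r, col)][0] <= r_pay for r in (0, 1))
    best_col = all(PAYOFFS[(row, c)][1] <= c_pay for c in (0, 1))
    return best_row and best_col

equilibria = [s for s in product((0, 1), repeat=2) if is_nash(*s)]
print(equilibria)  # [(1, 1)] -> the only equilibrium is mutual defection
```

The logically “correct” outcome (both defect) is worse for everyone than mutual cooperation, which is exactly the kind of result people may find hard to accept from a purely logical system.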

There is no shortage of dystopian, “bad robot” sci-fi, so when strong AI starts to improve itself in ways we are unlikely to understand, it will be important for us to define the questions it answers, the tasks it undertakes, the rules it obeys and the values it upholds, especially when it runs critical services.


Image of TED talk about how AI is different from human intelligence
It often helps to know the reasons behind a decision i.e. transparency. AI will get things wrong in ways we won’t understand, so accurately defining the problems it solves will be important.


“Understand user needs” is the first point of the service standard, and arguably a good starting point when integrating AI into the world. Other considerations might be:

  • Establishing what constitutes necessary and sufficient conditions
  • Working with incomplete data
  • Managing exceptions
  • Deciding what to do when someone cannot engage for whatever reason (accessibility).
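To make “working with incomplete data” and “managing exceptions” concrete, here is a minimal sketch of a service degrading gracefully rather than failing. The record fields and brackets are invented for illustration:

```python
def age_bracket(record):
    """Classify a user record into an age bracket, degrading
    gracefully when the data is incomplete rather than guessing."""
    age = record.get("age")
    if age is None:
        return "unknown"  # incomplete data: fall back, don't invent a value
    if not isinstance(age, (int, float)) or age < 0:
        # Malformed data is an exception to surface, not to silence.
        raise ValueError(f"malformed age: {age!r}")
    return "under 18" if age < 18 else "18 or over"

print(age_bracket({"age": 42}))  # 18 or over
print(age_bracket({}))           # unknown
```

The point is not the trivial logic but the explicit, auditable policy for each failure mode, which is exactly what an AI-driven service also needs.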


Effective systems are robust in the real, unpredictable world, and can recover from errors. And whilst fuzzy logic helps to address uncertainty, enrich programmed meaning and provide situational awareness, the world can be messier than fuzzy.
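As a toy illustration of how fuzzy logic replaces crisp cut-offs with degrees of membership, here is a minimal sketch. The temperature thresholds are invented for illustration:

```python
def warmth(temp_c):
    """Degree of membership (0.0 to 1.0) in the fuzzy set 'warm'.
    Below 10C: not warm at all; above 25C: fully warm; a linear
    ramp in between, instead of a single crisp cut-off."""
    if temp_c <= 10:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 10) / 15

print(warmth(5))     # 0.0 -> definitely not warm
print(warmth(17.5))  # 0.5 -> somewhat warm
print(warmth(30))    # 1.0 -> fully warm
```

A crisp rule would call 17.4C “cold” and 17.6C “warm”; the fuzzy version captures the in-between, though, as the text notes, real-world messiness goes well beyond what such membership functions can express.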


Below are some considerations that seem relevant to developing robust AI-based systems with a user-centred approach. A subsequent post will look at how user research can address them.

  • Give actual users ways to contribute meaningfully to development (including defining areas for improvement and how interacting with the system makes them feel)
  • Involve users in assessing real-world performance
  • Have human experts and service managers define the metrics for success and failure
  • Model the domain and process
  • Plan the entire service experience for the real world
  • Define how to handle exceptions
  • Involve human experts, users and managers in training systems
  • Holistically assess the system’s usefulness
  • Guide evolution in the ecosystem


Science can only ascertain what is, but not what should be, and outside of its domain value judgements of all kinds remain necessary.

Albert Einstein 1935

Cover image by Gerd Leonhard, licensed under Creative Commons
