The first post in this series about AI looked at how even dedicated chess engines powered by “weak” AI now routinely beat grandmasters. Though the algorithms can likely still be improved, they evaluate permutations many times faster than the human brain.

Whilst it seems uncontentious that AI should support decision making and that its development should be informed by human constructs such as games, real-life situations are less tangible and well defined. A chess board’s 64 squares or a Go board’s 361 points are much simpler. Actions, too, are complicated by morality, accountability, value judgements and fallibility.

Once parameters and variables are modelled, game theory and deep learning aim to uncover the best strategies for achieving a successful outcome. When applied to analysing retinal scans or to self-driving cars, as opposed to World of Warcraft, AI suddenly becomes very useful. Beyond outstanding analysis and complex situational management, the ability to manipulate huge data sets across multiple dimensions to discover new insights offers extraordinary possibilities.

Google Analytics can already show how a website performs against similar sites on any number of metrics (e.g. time on page and bounce rate) and dimensions (e.g. location and browser). It probably won’t be long before it can infer what can be done to improve a site’s performance.


Game: “to use your knowledge of the rules to obtain benefits from a situation, especially in an unfair way”



DeepMind’s AlphaZero, a strong AI, is, with some customisation, world champion at chess, Shogi and the more complicated game of Go (as AlphaGo Zero). It is also, as AlphaFold, the only entity on the planet capable of crunching mind-boggling permutations to predict the possible structures of folding protein molecules.

Perhaps the most extraordinary aspect of AlphaZero’s achievement is how it trained itself. In December 2017, starting with just the rules, it took only a few hours to comprehensively beat Stockfish 8, the reigning AI chess champion (these days humans are “not very close”).

Although many chess grandmasters are involved with AlphaZero, it is closer to an artificial general intelligence, or strong AI, with groundbreaking deep learning. Strong AI can learn to perform new and different functions, often in unfamiliar ways. Stockfish, on the other hand, is a narrow or weak AI, capable only of playing chess in a more conventional style.


The future is already here – it’s just not evenly distributed.

William Gibson, 2003


AI, being logical, would appear to make interactions more straightforward, but that might not be a great experience for people. Game theory mathematics is already used to model economic and sociological situations, but what will happen when people encounter pure logic and “computer says no”? Mr. Spock didn’t always enjoy harmonious relations.

There is no shortage of dystopian, “bad robot” sci-fi, so when strong AI starts to improve itself in ways we are unlikely to understand, it will be important for us to define the questions it answers, the tasks it undertakes, the rules it obeys and the values it upholds, especially when it is running critical services.

“Understand user needs” is the first point of the service standard, and arguably a good starting point when integrating AI into the world. Other considerations might be:

  • What constitutes a necessary and sufficient condition
  • How to work with incomplete data
  • How to manage exceptions
  • What to do when someone cannot engage for whatever reason (accessibility)
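Two of the considerations above, working with incomplete data and managing exceptions, can be sketched in code. The claim-assessment rule, threshold and field names below are purely hypothetical; the point is the fallback path to a human.

```python
def assess_claim(record):
    """Return a decision, falling back to human review on gaps or errors."""
    try:
        if record.get("income") is None:     # incomplete data
            return "refer to human caseworker"
        return "approve" if record["income"] < 20_000 else "decline"
    except (TypeError, KeyError):            # manage the exception
        return "refer to human caseworker"

print(assess_claim({"income": 12_000}))  # approve
print(assess_claim({}))                  # refer to human caseworker
print(assess_claim({"income": "n/a"}))   # refer to human caseworker
```

The design choice worth noting is that every path the system cannot confidently handle resolves to a person, rather than to a silent failure or a “computer says no”.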


Effective systems are robust in the real, unpredictable world and can recover from errors. And whilst fuzzy logic helps to address uncertainty, enrich programmed meaning and provide situational awareness, the world can be more messy than fuzzy.
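Fuzzy logic’s core idea is that, instead of a hard true/false threshold, membership in a set is a degree between 0 and 1. A minimal sketch; the “warm” temperature band is an illustrative assumption:

```python
def warm_membership(temp_c, low=15.0, high=25.0):
    """Degree to which temp_c counts as 'warm', rising linearly."""
    if temp_c <= low:
        return 0.0
    if temp_c >= high:
        return 1.0
    return (temp_c - low) / (high - low)

print(warm_membership(10))  # 0.0 - definitely not warm
print(warm_membership(20))  # 0.5 - somewhat warm
print(warm_membership(30))  # 1.0 - fully warm
```

Graded membership like this lets a system reason smoothly near boundaries, though, as noted above, real-world messiness goes well beyond what a tidy membership function captures.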


Below are some considerations that seem relevant for developing robust AI-based systems with a user-centred approach. A subsequent post will look at how user research can address them.

  • Give actual users ways to contribute meaningfully to development, including defining areas for improvement and describing how interacting with the system makes them feel
  • Involve users in assessing real-world performance
  • Have human experts and service managers define the metrics for success and failure
  • Model the domain and process
  • Plan the entire service experience for the real world
  • Define how to handle exceptions
  • Have human experts, users and managers contribute to training systems
  • Holistically assess the system’s usefulness
  • Guide evolution in the ecosystem


Science can only ascertain what is, but not what should be, and outside of its domain value judgements of all kinds remain necessary.

Albert Einstein, 1935

Cover image by Gerd Leonhard, licensed under Creative Commons
