Alan Turing was famously one of the greatest computer scientists in the world. That was before he was convicted of "gross indecency" for being gay and chemically castrated (the 1950s were some weird times).

What was it?

He came up with his eponymous test to measure a machine’s “intelligence”. Roughly speaking:

If someone could hold a sustained text conversation with a machine and not realise they were talking to a machine, the machine would be declared to have passed the test.

But with the rise of Large Language Models (LLMs), what good is that? They’ve clearly passed, but no one is calling them “intelligent”. Cue the modern TT:

Turing Test 2.0

If you can give a machine an ambiguous, open-ended, complex goal that requires interpretation, judgement, creativity, decision making and acting across multiple domains, then it’s said to have completed the test. (M. Suleyman, The Coming Wave)

For example, if I told it to "Make me €1 million on Amazon", it would research the market, plan the best marketing campaign, use image generation to create the product visuals, and so on.
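To make that idea concrete, here's a purely hypothetical sketch of the plan-then-act loop such an agent would need. Every function name and step below is a stand-in of my own invention: a real agent would be calling an LLM, a browser, image-generation APIs and so on at each stage.

```python
# Toy sketch of an agentic loop for an open-ended goal.
# All steps are hypothetical placeholders, not a real system.

def plan(goal):
    # A real agent would ask an LLM to decompose the goal into steps.
    return [
        "research the market",
        "design a marketing campaign",
        "generate product images",
        "launch and monitor sales",
    ]

def act(step):
    # A real agent would execute the step with tools (search, APIs, etc.).
    return f"done: {step}"

def run_agent(goal):
    # Plan once, then act on each step in order.
    return [act(step) for step in plan(goal)]

if __name__ == "__main__":
    for result in run_agent("Make me €1 million on Amazon"):
        print(result)
```

The hard part, of course, is that each placeholder above hides the judgement, creativity and cross-domain decision making the test is actually about.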

Some estimates put that capability as early as 2027… WTF