The Turing test was always ridiculous. I regard anybody still talking about Turing tests as a newbie who hasn't thought much about anything in the field of AI yet.
I strongly advocate giving computers the same IQ tests that humans are given. There have been only about five attempts to do this so far (I know because I wrote a proposal for such a project around 2015; it was rejected, but it at least forced me to do background research on all such attempts mentioned online). One such attempt is described at the following link, though it didn't involve pictorial questions as the other tests did...
https://observer.com/2015/06/artificially-intelligent-computer-outperforms-humans-on-iq-test/

I have other ideas for more advanced intelligence tests for computers, but until we can get a computer to handle static, idealized, black-and-white 2D images, there is little chance it will be able to understand the full complexity of the multicolored, fuzzy, dynamic 3D world. One team's attempt to give IQ tests to computers used only Raven's Progressive Matrices, which is a specific type of IQ test question...
https://iqtestprep.com/ravens-progressive-matrices/

That's a good place to start, I believe.
----------
(p. 78)
The Turing Test Cannot Prove Artificial Intelligence
Mark Halpern
Mark Halpern is a computer software expert who has worked at IBM. In the following viewpoint, he argues that the Turing Test is fundamentally flawed. As evidence, he points to Turing Tests conducted in 1991. During those tests, Halpern says, the judging was flagrantly inadequate; computers that were generating random nonsense were judged human, while some humans were judged to be computers. Halpern concludes that even if a computer were to pass the Turing Test, it would not show that that computer had achieved artificial intelligence.
Perhaps the absurdity of trying to make computers that can "think" is best demonstrated by reviewing a series of attempts to do just that--by aiming explicitly to pass Turing's test. In 1991, a New Jersey businessman named Hugh Loebner founded and subsidized an annual competition, the Loebner Prize Competition in Artificial Intelligence, to identify and reward the computer program that best approximates artificial intelligence [AI] as Turing defined it. The first few Competitions were held in Boston under the auspices of the Cambridge Center for Behavioral Studies; since then they have been held in a variety of academic and semi-academic locations. But only the first, held in 1991, was well documented and widely reported on in the press, making that inaugural event our best case study.
Practical Problems
The officials presiding over the competition had to settle a number of details ignored in Turing's paper, such as how often the judges must guess that a computer is human before we accept their results as significant, and how long a judge may interact with a hidden entity before he has to decide. For the original competition, the host center settled such questions with arbitrary decisions--including the number of judges, the method of selecting them, and the instructions they were given.
Beyond these practical concerns, there are deeper questions about how to interpret the range of possible outcomes: What conclusions are we justified in reaching if the judges are generally successful in identifying humans as humans and computers as computers? Is there some point at which we may conclude that Turing was wrong, or do we simply keep trying until the results support his thesis? And what if judges mistake humans for computers--the very opposite of what Turing expected? (This last possibility is not merely hypothetical; three competition judges made this mistake, as discussed below.)
Berlatsky, Noah, ed. 2011. Artificial Intelligence. Farmington Hills, MI: Greenhaven Press.