I've thought about Turing's idea more critically since the public advent of GPT and have reached some contrary conclusions. First, let's assume that the notion of 'learning by observing and interacting' is understood in its technical sense as promoted by the AGI (sic) camp: a machine, like man, achieves thought & consciousness, becomes a mind, via the learning mechanism(s). So whatever it is that we humans mentally experience is engendered by a learning process fully mediated by the sensory apparatus.

Now an interesting question comes up: why are we certain that a random humanoid we meet (whose birth we did not witness, thus provenance unknown), regardless of their level of apparent intelligence, is a conscious being? The sensory apparatus in the middle of our learning regimen from infancy has only ever conveyed (superficially) measurable information. So it is purely an 'image'. And we project meaning onto images. This is what we do. The only reason one assumes that the other person is conscious is because we assume they are like us: "It's just like me. I am conscious, so they must be too." I think our friend René's formulation may see something of a philosophical resurgence. That is the -only- reason we unquestionably accept that the other humanoid is conscious as well.

If you're with me so far, then you may agree that Turing's idea is fundamentally flawed. Until and unless we can nail down consciousness definitively, we will never be able to test for it via information exchange (interaction). Because our minds, we know, have been 'trained' on only superficial evidence, we are by definition unlettered in the art of determining the existence of minds in objects.
Sherry Turkle in particular, and Kahneman and Tversky in general, determined that 80% of our communication is non-verbal (and 60% of it is unspoken). The dearth of non-vocal, non-language cues in textual communication is filled in by expectation, and that expectation is cultural. The formality of any written communication used to be inversely proportional to the familiarity of the conversants; since the advent of the Internet, it has mostly been a stand-in for how the communicants wish to be perceived. Nonetheless, 80% of our impression of any online interaction comes from our own id, nowhere else.

We give any random humanoid we meet the benefit of the doubt because of these nonverbal cues, which are entirely absent in textual communication. Provide that context and the illusion collapses: put a speaker on anything from Boston Dynamics, hook it up to the best voice synthesizer on the market, and it still won't convince a single human that ChatGPT is like them. Doing so, in fact, thrusts the speaker deep into the uncanny valley. This is pretty much the plotline of every mainstream news investigation into ChatGPT, no matter how shallow: (1) start talking to the chatbot, (2) be impressed by how lifelike it is, (3) catch it in a lie, (4) watch it double down and get weird, (5) recoil in horror. And unless you can confidently exclude (3), (4), and (5) from every interaction, the net experience of normies with AI is going to be abysmal; people hated Clippy, they didn't fear it.

I personally feel the whole "consciousness" canard is a red herring: "what tricks does it have to perform for us to give it rights?" There are billions of certified humans walking the earth who aren't guaranteed any particular rights, so it really just becomes an argument for the TESCREALists to favor their toys over actual human beings.
Even a jab in the ribs from a friend is processed in the context of the implicit: "this other is just like me". Every 'gesture', every 'smell', every ':)' is processed in that context. You have never, ever communicated with a non-conscious being in your life. Ever. All your learning of 'behavior', etc. occurs within that implicit context of "this other is just like me". So when a robot jabs you in the ribs, the projection of 'this other is conscious' is a given.