This is kind of the crux of the issue. David Levy says yes; Sherry Turkle says no. Levy's argument is "good enough is good enough," while Turkle's argument is "a human response does not imply a human motivation, yet a human motivation is exactly what we're banking on, and it's gonna fuck us; here's a giant ream of data proving it."

To your analogy: you ask me a question because you want my answer, not because you want a Google search. If I have no idea and tell you so, you have my answer ("I have no idea"), and that's a datapoint in our relationship. If I have no idea and tell you I'll look it up, you have a different answer ("I don't know, but I like you enough to put in some research"). If I have no idea, look it up surreptitiously, and then tell you, you have yet another answer ("I know everything"). It doesn't take much extrapolation to see that these three responses cover a whole range of relational nuance and fan out into very different outcomes... yet from a machine intelligence standpoint, you only ever get the second one, and you're mistaking "IF A THEN B" for "I like you enough to put in some research."