"The discovery of quantitative relationships between retinal appearance and systemic pathophysiology readily aligns with pre-established conceptions of microvascular and degenerative tissue-level insults20. However, deep learning has shown that these algorithms demonstrate capability in tasks which were not previously thought possible21. Harnessing this power, new insights into relationships between retinal structure and systemic pathophysiology could expand existing knowledge of disease mechanisms. A study by Poplin et al. demonstrated a deep-learning learning algorithm which could accurately predict cardiovascular risk factors from fundus photos22; More surprising to ophthalmologists was the successful prediction of demographic information such as age and gender, the latter with an area under the curve (AUC) of 0.97. Here, the physiologic cause and effect relationships are not readily apparent to domain experts21. Predicting gender from fundus photos, previously inconceivable to those who spent their careers looking at retinas, also withstood external validation on an independent dataset of patients with different baseline demographics23. Although not likely to be clinically useful, this finding hints at the future potential of deep learning for the discovery of novel associations through unbiased modelling of high-dimensional data."
This makes me think of a concept I heard once on the Hard Fork podcast called the obsolescence regime.
I'm in the process of writing a historical hard sci-fi book about the future of AI (see note below), and here's the way one of the characters describes the obsolescence regime to another:
“The concept behind obsolescence regime is that AI’s sophistication will outstrip our own. Eventually, it would assume greater control over critical decisions that govern human life. At that point, we’d find ourselves relying on AI for strategic guidance, gradually losing our own grasp on how our societies and destinies should be shaped. With AI’s expansive data-processing abilities and its skill at recognizing patterns and forecasting outcomes, there’s a possibility it might become superior to humans in sectors like commerce, governance, and even military operations. Its key advantage, of course, is that AI operates free from emotional influence, creating decisions rooted firmly in logic and evidence. That could undermine the more subjective, bias-driven side of human decision-making.”
This news about AI recognizing sex from retinal scans seems aligned with this idea.
The question is, can we do anything about it?
Could humans catch up to the degree of understanding of the technology we're creating?
Maybe not.
Note: I've been working on the book, titled "Glorified Ants," since June 2023. I should have the first draft ready by the end of this week! Lemme know if you wanna read the first draft and provide edits ;)
This reminds me of when Facebook shut down a couple of its AI robots after they created their own, more efficient, language. Below is the text of the article:

Facebook shut down a pair of its artificial intelligence robots after they invented their own language. Researchers at Facebook Artificial Intelligence Research built a chatbot earlier this year that was meant to learn how to negotiate by mimicking human trading and bartering. But when the social network paired two of the programs, nicknamed Alice and Bob, to trade against each other, they started to learn their own bizarre form of communication.

The chatbot conversation "led to divergence from human language as the agents developed their own language for negotiating," the researchers said. The two bots were supposed to be learning to trade balls, hats and books, assigning value to the objects then bartering them between each other. But since Facebook's team assigned no reward for conducting the trades in English, the chatbots quickly developed their own terms for deals.

"There was no reward to sticking to English language," Dhruv Batra, a Facebook researcher, told FastCo. "Agents will drift off understandable language and invent codewords for themselves. Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands."

After shutting down the incomprehensible conversation between the programs, Facebook said the project marked an important step towards "creating chatbots that can reason, converse, and negotiate, all key steps in building a personalized digital assistant". Facebook said when the chatbots conversed with humans, most people did not realise they were speaking to an AI rather than a real person. The researchers said it wasn't possible for humans to crack the AI language and translate it back into English.
"It’s important to remember, there aren’t bilingual speakers of AI and human languages," said Batra.
Probably one of our strongest biases is the assumption that our mental models reflect the environment objectively, depending more upon the nature of the environment than upon ourselves. More likely, our mental models reveal the nature of our cognitive abilities in the context of an environment that extends beyond them. My dog cannot fathom my tax return. That reveals more about his cognitive abilities than it does about the nature of my tax return.