Mind and Body and Large Language Models
by John Favaro
03 January 2026
Who hasn’t heard of GAI (Generative Artificial Intelligence) by now? Well, maybe some still don’t recognize the phrase, but I’ll bet that even those who don’t will recognize the name “ChatGPT”. The explosive entrance of ChatGPT into our lives in November 2022, and its effect on our own work in GRASP, was documented not long afterward in my post The Generative AI Revolution. In that post, I referred to our own involvement in documenting artificial intelligence for over forty years, as in the episode we hosted on German television in 1984 that was entirely dedicated to the topic. I also commented on our “hits” and “misses” regarding our predictions about where the technology of artificial intelligence was heading. One early “miss” was not realizing the role that the technical branch of machine learning was going to play in the future, rather than the “expert systems” that were prominent in the 1980s. (To be fair, though, it took a combination of the Internet, vast amounts of available storage, and significant hardware advances to enable the eventual successes of machine learning.) By the early 2010s, however, the handwriting was on the wall, and in 2018 I wrote a post documenting the situation at the time concerning Machine Learning and the Arts. Even then, however, I “missed”, not foreseeing the revolution that would happen a mere four years later, when generative AI would not just assist artists but actively create new art (albeit amidst a great deal of controversy about what it even means for a machine to create art).
But it’s turning out that, among all the predictions and prognoses being made on all sides, we might have had one more “hit” in a different area of controversy: the pursuit of Artificial General Intelligence. The spectacular feats of GAI, driven by Large Language Models (LLMs), have understandably led many to predict that we are getting closer than ever to the AI Singularity: the moment at which artificial intelligence surpasses human intelligence (leading either to utopia or dystopia, depending on your point of view).
This sensational possibility tends to obscure the fact that a vigorous discussion has been going on in the AI community for a while about whether LLMs can really lead to the AI Singularity (see, for example, the posts of Gary Marcus on Substack). What has caught my eye in particular is the discussion around world models, and around common sense and physical reasoning. This led me to think back on the post I wrote in 2018 on Mind and Body and the Future of Work. There I discussed the pioneer AI skeptic Hubert Dreyfus. As I wrote then: “The main argument of Dreyfus against A.I. was that computers would never be able to program what is called ‘common sense knowledge’, as opposed to bare facts that you just have to organize.” We have bodies, and we acquire much of our common sense through them. It’s far less clear how computers will manage to do this, and the ongoing discussion confirms that this doubt has never really gone away.
Dreyfus wrote that the advances in AI being announced with such fanfare were like climbing ever higher trees in the hopes of reaching the Moon. Is that what is happening with Generative Artificial Intelligence and its Large Language Models? Time will tell, but we in GRASP will continue to promote the strong human-centric values and connections that we believe are necessary to live in the increasingly dematerialized world that GAI is contributing to.