Anyone who thinks Auto-Regressive LLMs are getting close to human-level AI, or merely need to be scaled up to get there, *must* read this. AR-LLMs have very limited reasoning and planning abilities. This will not be fixed by making them bigger and training them on more data.
Excellent piece by @Noahpinion on techno-optimism. So many good points. Like Noah, I'm a humanist who subscribes to both Positive and Normative forms of Active Techno-Optimism. Quote: "Techno-optimism is thus much more than an argument about the institutions of today or the…
A balanced view of the issues around open vs. closed LLMs.
Quote: <<Princeton computer science professor Arvind Narayanan told Vox that “the bottleneck for bad actors isn’t generating misinformation — it’s distributing it and persuading people.” He added, “AI, whether open source or not, hasn’t made those steps any easier.”>>
ConvNets are a decent model of how the ventral pathway of the human visual cortex works. But LLMs don't seem to be a good model of how humans process language: there is longer-term prediction taking place in the brain. Awesome work by the Brain-AI group at FAIR-Paris.
Excellent paper in which the word "criti-hype" is coined. Criti-hype designates the kind of academic and non-academic work that magnifies the imagined dangers of a new technology, feeding on and mirroring the hype from the advocates of said technology.