Geoff Hinton makes some similar points in this excellent interview, "Possible End of Humanity from AI?". If you already know something about the topic (e.g., backprop), you can skip to ~10m for details on why recent LLM advances made him very worried about AGI.