Sunday, February 2, 2025
I read an insightful comment amid the current wave of LLM reasoning hype, and it has stuck with me for at least two reasons:

- It reminded me of my view that AGI is already here in the guise of companies
- It’s also a valid answer to why I meditate, and why Searle’s Chinese Room argument is mainly wrong

Back to the comment. Paraphrased, it said: “the uncomfortable truth these reasoning models show us is that a lot of activities we thought needed human reasoning simply need functional predictions.”