The challenges of being human: mistaking prediction, narratives, and rhetoric for reasoning

I read an insightful comment amid the current wave of LLM reasoning hype, and it has stuck with me.

For at least two reasons:

  • It reminded me of my view that AGI is already here in the guise of companies
  • It’s also a valid answer to why I meditate, and to why Searle’s Chinese Room argument is mainly wrong

Back to the comment. Paraphrased, it said: “the uncomfortable truth these reasoning models show us is that a lot of activities we thought needed human reasoning to complete simply need functional predictions”

There is so much to unpack around “functional predictions” including its relation to knowledge:

  • knowledge being defined as a justified true belief
  • representing beliefs as Bayesian Distributions
  • how dopamine is a temporal difference error signal
  • its connection to the updating of our view/belief/prediction of the world
  • decisions made on those predictions
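The dopamine point above can be made concrete. A minimal sketch (illustrative only, not a neuroscience model, and the numbers are made up): a temporal-difference error compares a predicted value with what actually happened, and that surprise signal drives the update of the belief/prediction, the role attributed to dopamine here.

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One TD(0) step: nudge the prediction toward reward + discounted next prediction."""
    td_error = reward + gamma * next_value - value  # the "surprise" signal
    return value + alpha * td_error, td_error

# Repeatedly receiving a reward of 1.0 pulls the prediction toward 1.0;
# as the prediction improves, the error (surprise) shrinks toward zero.
v = 0.0
for _ in range(50):
    v, err = td_update(v, reward=1.0, next_value=0.0)
print(round(v, 2))  # → 0.99
```

Nothing in the loop resembles deliberate reasoning; the belief simply converges as prediction errors are absorbed.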

All that should be served up with a healthy seasoning of Computational Reducibility and Irreducibility…

There’s a counter-argument that what’s needed isn’t “functional predictions” but explanations of events, that is, narrative or rhetoric. This rings cynically true, though it could just be the other side of the same coin.

I currently see that the “functional prediction” argument fits with what I’ve learnt about reinforcement learning (i.e. optimal actions follow from the greatest predicted future reward).
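A toy illustration of that parenthetical (the action names and Q-values are hypothetical): acting on “functional predictions” in reinforcement learning just means picking the action whose predicted future reward is greatest; the greedy policy is a lookup, not a deliberation.

```python
# Predicted future reward (Q-value) per action -- values are invented for illustration.
q_values = {"explore": 0.3, "exploit": 0.7, "wait": 0.1}

# The greedy policy simply reads off the action with the largest prediction.
best_action = max(q_values, key=q_values.get)
print(best_action)  # → exploit
```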

And with that, where do we need true reasoning?

Being Human AGI