@thompsonson “thinking” in the context of LLMs is simply the LLM creating more context from its own model.

It is not reasoning; it is linking in more information from its own internal knowledge.

Training a model on Chain-of-Thought is a way of teaching it to pull more relevant data into its own context before answering.
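
A minimal sketch of that idea (the `generate` function here is a hypothetical stand-in for any LLM call, not a real API):

```python
def generate(prompt: str) -> str:
    """Placeholder for a real LLM call (API or local model)."""
    raise NotImplementedError

def answer_with_thinking(question: str) -> str:
    # Pass 1: the model emits "thoughts" -- extra tokens drawn from
    # its own weights, conditioned on the question.
    thoughts = generate(f"Question: {question}\nThink step by step:")

    # Pass 2: the answer is conditioned on the enlarged context
    # (question + self-generated thoughts). No external reasoning
    # engine, just more of the model's own information in the window.
    return generate(f"Question: {question}\nThoughts: {thoughts}\nAnswer:")
```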

Initial thoughts from: arxiv.org/abs/2507….