Discriminative AI (predictive, classification, etc.). Generative AI (create something from a prompt). Dynamic Programming (value and policy iteration, RL, model-based, model-free). Constraint satisfaction programming (formal methods, planning). Just my 2 cents, maybe too simplistic, but it's helping me arrange my thinking.
Where does decision making fit? On top of all? Yeah, which means there’s another domain of Information…
That is a belief state stored on paper, in words, in corporate culture, etc…, and inference over it is either intractable or simply not happening.
There are 5 types of controls in InfoSec: Preventative, Detective, Corrective, Recovery, and Deterrent.

Agents are irritating if you don't give them access. Really, I don't want the agent to be able to remove files from git, but it finds a way when given the ability to add and commit.
So whilst I’m figuring out how to set up solid preventative controls AND not lose my mind with approvals, I’ve set up a recovery control.
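A minimal sketch of the kind of recovery control I mean, assuming a local git repo: snapshot every ref into a bundle before the agent runs, so anything it deletes or rewrites can be cloned back out. The backup location and the self-contained demo repo here are hypothetical, for illustration only.

```shell
#!/bin/sh
set -eu

# Work in a throwaway demo repo so this sketch is self-contained.
WORKDIR=$(mktemp -d)
cd "$WORKDIR"
git init -q demo
cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"

# Recovery control: capture all branches/tags into a bundle BEFORE the
# agent is allowed to touch the repo.
BACKUP_DIR="$WORKDIR/backups"   # hypothetical backup location
mkdir -p "$BACKUP_DIR"
BUNDLE="$BACKUP_DIR/pre-agent-$(date +%Y%m%d-%H%M%S).bundle"

git bundle create "$BUNDLE" --all       # snapshot every ref
git bundle verify "$BUNDLE" >/dev/null  # confirm it is restorable

# After a destructive agent run, recover by cloning the bundle:
git clone -q "$BUNDLE" "$WORKDIR/recovered"
echo "recovered repo at $WORKDIR/recovered"
```

The nice property of `git bundle` is that the backup is a single file git can clone from directly, so recovery doesn't depend on the agent having left the original repo in a usable state.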
Links I collected last summer on Reasoning - a large number of LLM links, but it goes past that into what reasoning is and the cognitive link (I missed Active Inference!)
Looking back through my conversation with Gemini about Dynamic Programming (DP) and Constraint Satisfaction Problems (CSP) that opened the door to Contingent Planning.
This is so damn wholesome - a 13-year-old kid talking to one of the greatest rugby players ever and getting some great life advice whilst having a chuckle!!
Never undervalue the grunt work to make strong foundations.
www.facebook.com/story.php
This is golden advice on being a good captain and the most important part of leadership!!
www.facebook.com/share/r/1…
Got a fun and interesting challenge ahead. Looking to refine my intuition/thinking/knowledge of search spaces and online search/planning.
I think the homoiconic nature of Lisp could unlock online search/planning and full autonomy (see matt.thompson.gr/2026/01/1…)
To be clear, an agent that produces valid Lisp, verified by a Lisp parser guard, is a step forward.
The context would be something like: “Here’s the macros for AtomicGuard/Dual State Action Pairs: ….”
The original specification would be the goal from a human (an action pair may produce decomposed goals, in the form of specifications, to meet the original goal).
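A minimal sketch of what I mean by a parser guard, in Python for illustration: the guard accepts agent output only if it is one well-formed s-expression, and rejects anything malformed. The function names and the example `(plan …)` form are hypothetical, not part of any real framework.

```python
def tokenize(src):
    """Split an s-expression string into tokens."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Recursively build a nested-list AST; raise on malformed input."""
    if not tokens:
        raise SyntaxError("unexpected end of input")
    tok = tokens.pop(0)
    if tok == "(":
        node = []
        while tokens and tokens[0] != ")":
            node.append(parse(tokens))
        if not tokens:
            raise SyntaxError("missing closing paren")
        tokens.pop(0)  # consume ")"
        return node
    if tok == ")":
        raise SyntaxError("unexpected closing paren")
    return tok  # atom

def guard(agent_output):
    """Parser guard: return the AST if the output is valid Lisp, else None."""
    tokens = tokenize(agent_output)
    try:
        ast = parse(tokens)
    except SyntaxError:
        return None  # reject: not a well-formed s-expression
    if tokens:
        return None  # reject: trailing tokens after one complete form
    return ast

print(guard("(plan (move a b) (move b c))"))  # well-formed → nested AST
print(guard("(plan (move a b"))               # unbalanced → None
```

Because Lisp is homoiconic, the AST the guard returns is already the data structure a planner could inspect or execute, which is exactly what makes this verification step cheap.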
Working definitions to track and build into the documentation of my research. Generally they are included in the framework or extensions, though I need to learn more about Markov Blankets, as I think that could be the boundary between the two state spaces: what the agent can sense and what it can take action on.
Otherwise this post is ordered along a trail of logic, growing from the initial agency through to planning and learning - potential full autonomy.
A well-formatted and concise overview of deep learning, from the calculus of 1676, when Gottfried Wilhelm Leibniz published the chain rule, to the RL-based NN advancements by DeepSeek in 2025.
Nice article on the bridge between set theory and computer science (which I’d always thought was there! 🙃)
A New Bridge Links the Strange Math of Infinity to Computer Science
Also helped remind me what the axiom of choice is: an arbitrary choice that acts as a junction between rule-based decisions.
I still have to understand the actual algorithm, as it seems handy to be able to label infinite nodes so that they do not locally conflict… 🤔🤓
The challenge of contradiction in logic: different starting theories that contain contradictions all collapse to the same trivial theory in classical logic.

The trivial theory is a bullshit theory that says everything must be true. "Trivial" here is not a term for saying it is easy… rather it's pejorative, implying inconsistency: the theory is a mess and useless.
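This collapse is the principle of explosion (ex falso quodlibet): once a theory proves both P and ¬P, it proves any Q whatsoever. A one-line Lean sketch of the argument:

```lean
-- Principle of explosion: a contradiction proves anything.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```

Since every proposition becomes provable, all inconsistent theories are indistinguishable: they are all the same trivial theory.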
Contradiction example: the “Penguin Problem”: