Remastered broadcast of a 1964 lecture by Richard Feynman on the Double Slit experiment. It finishes with a call to action on keeping open priors to the evidence we see from Mother Nature!
This is a bit of a rant.
I’ve memories of a senior manager shooting ideas down saying
“correlation is not causation, I’ve done Stats at Uni and can prove anything is related to baked beans”
It grated sooooo much.
Firstly, because it was thoughtless rhetoric, purposefully or accidentally steamrolling ideas and immediately dismissing any attempt at constructive, data-driven decisions.
Secondly, because I didn’t have the tools to show causation.
A day of vibe-coding, as a break from the normal routine of study, done in a new environment with a language I’ve not used: a VS Code extension in TypeScript.
I am very torn between two possibilities:
Building on my Q-Learning Maze Solving Agent I did for AI Applications (Q-Learning Maze Solving Agent) by adding a Neural Network (Sutton and Barto)
Building on the Intelligent Agents work I did in AI by applying an Agent Decision Process (Self-Consistency LLM-Agent) (the process is my interpretation of Russell and Norvig’s work)
(The Douglas Adams extra option) Adding a cached “self-awareness” layer based on a Bayesian Learning Agent that stores its certainty on the answers it gives.
I wish I had time to finish:
- my research on the Evolution of Probabilistic Reasoning in AI, particularly Dempster-Shafer and Bayesian Networks
- how LLMs and Bayesian networks can be used for Risk Management
- creating a YouTube/Insta/TikTok vid for my latest post on the LLM Agent
But I don’t!! So this is me putting it to one side…
Introduction
In the previous post I shared my view on “Why Study Logic?”; there we looked at Knowledge Representation and highlighted the importance of Logic and Reasoning in storing and accessing Knowledge.
In this post I’m going to highlight a section from the book “Introduction to Artificial Intelligence” by Wolfgang Ertel. His approach with this book was to make AI more accessible than Russell and Norvig’s 1000+ page bible. It worked for me.
Introduction
The purpose of this article is to help me answer the question “Why am I studying Logic?”. If it helps you, that’d be great, let me know!
The question comes from a nagging feeling: why don’t I see logic used more in the ‘real world’? It could be a personal bias, as I more easily see the utility of Rosenblatt’s work, where he looked at both Symbolic Logic and Probability Theory to help solve a problem and chose Probability Theory ([NN Series 1/n] From Neurons to Neural Networks: The Perceptron). With that we had the birth of the Artificial Neuron, and the rest is history!
“[IA Series 3/n] Intelligent Agents Term Sheet” breaks down essential AI terminology from Russell & Norvig’s seminal textbook. Learn what makes agents rational (or irrational), understand different agent types, and follow a structured 5-step design process from environment analysis to implementation. Perfect reference for AI practitioners and students. Coming next: how agents mirror human traits. #ArtificialIntelligence #IntelligentAgents #AIDesign
First draft in public!
What’s the best way for an agent to build a semantically sound and syntactically correct knowledge base?
Dogfooding my course material means the first step is to define the task environment.
/Checks notes
Task Environment: The description of Performance, Environment, Actuators, and Sensors (PEAS). This provides a complete specification of the problem domain.
So how can I implement this?
First I need to think about the domain, something different to the examples (e.
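To make PEAS concrete before committing to a domain, here’s a minimal sketch of how a task environment description could be captured in code. The domain and every field value are hypothetical placeholders, not a worked answer:

```python
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    """PEAS description: Performance, Environment, Actuators, Sensors."""
    performance: list[str]  # how success is measured
    environment: list[str]  # what the agent operates within
    actuators: list[str]    # how the agent acts on the environment
    sensors: list[str]      # how the agent perceives the environment

# Hypothetical example: an agent curating a knowledge base
kb_agent = TaskEnvironment(
    performance=["factual accuracy", "coverage of the domain"],
    environment=["the knowledge base", "a stream of candidate facts"],
    actuators=["add/retract statements", "query external sources"],
    sensors=["parser over incoming text", "KB consistency checker"],
)
```

Writing it as a dataclass forces every PEAS slot to be filled in before any implementation starts.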
Here’s a “standard” progression of training methodologies:
- Pre-training - This is where the model gains broad knowledge, forming the foundation necessary for reasoning.
- CPT (Continued Pre-training) - Makes the model knowledgeable about specific domains.
- SFT (Supervised Fine-Tuning) - Makes the model skilled at specific tasks by leveraging knowledge it already has.
- RL (Reinforcement Learning) - Using methods like GRPO, DPO to align model behavior.
Reasoning traces play different roles at each stage:
Source: Off Policy “zero RL” in simple terms
The results demonstrate that LUFFY encourages the model to imitate high-quality reasoning traces while maintaining exploration of its own sampling space.
The authors introduce policy shaping via regularized importance sampling, which amplifies learning signals for low-probability yet crucial actions under “off-policy” guidance.
The aspect that is still not clear to me is how there is any exploration of the solution space.
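As far as I can tell, the shaping function itself is part of the answer. A minimal sketch of my reading, assuming the shaping takes the form f(p) = p / (p + γ); the exact function and the value of γ are my assumptions, not lifted from the paper:

```python
def shaped_weight(p: float, gamma: float = 0.1) -> float:
    """Shape the policy's probability p for an off-policy token.

    f(p) = p / (p + gamma). Its derivative, gamma / (p + gamma)**2,
    is largest when p is small, so tokens the current policy finds
    unlikely (but the expert trace used) get an amplified gradient
    signal instead of being washed out by a tiny importance ratio.
    """
    return p / (p + gamma)

for p in (0.01, 0.5, 0.9):
    print(f"p={p}: shaped weight={shaped_weight(p):.3f}")
```

If that reading is right, exploration isn’t free-form search; it’s the model being pushed towards expert tokens it would otherwise almost never sample.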
Based on conventional zero-RL methods such as GRPO, LUFFY introduces off-policy reasoning traces (e.g., from DeepSeek-R1) and combines them with models' on-policy roll-outs before advantage computation.
… However, naively combining off-policy traces can lead to overly rapid convergence and entropy collapse, causing the model to latch onto superficial patterns rather than acquiring genuine reasoning capabilities.
…genuine reasoning capabilities… I am not certain if the implication is that DeepSeek-R1 can reason, or whether it is a reminder that no model can genuinely reason.
Zero-RL applies reinforcement learning (RL) directly to a base LM, eliciting its reasoning potential using the model’s own rollouts. A fundamental limitation worth highlighting: it is inherently “on-policy”, constraining learning exclusively to the model’s self-generated outputs through iterative trials and feedback cycles. Despite showing promising results, zero-RL is bounded by the base LLM itself.
A key characteristic is that it means an LLM can be trained without Supervised Fine-Tuning (SFT).
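Putting the earlier pieces together, here’s how I picture the mixing step before advantage computation. This assumes GRPO-style group-normalised advantages; the names and the exact normalisation are my reconstruction, not the paper’s code:

```python
import statistics

def group_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style advantage: normalise each reward against its group."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero spread
    return [(r - mu) / sigma for r in rewards]

# On-policy rollouts from the model under training...
on_policy_rewards = [0.0, 1.0, 0.0, 0.0]
# ...pooled with off-policy traces (e.g. from DeepSeek-R1)
off_policy_rewards = [1.0, 1.0]

advantages = group_advantages(on_policy_rewards + off_policy_rewards)
print(advantages)
```

The expert traces raise the group baseline, so the model’s own failed rollouts receive clearly negative advantages rather than being graded only against each other.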
You are doing Imitation Learning (specifically Behavioral Cloning) because the goal and mechanism involve mimicking the expert’s token sequences.
You are doing Transfer Learning (specifically Knowledge Distillation) because you are transferring reasoning knowledge from a teacher model to a student model.
You are not doing Off-Policy Reinforcement Learning because the learning process is supervised likelihood maximization, not reward maximization using RL algorithms.
Although the data itself is “off-policy” (not generated by the model being trained), the learning paradigm is supervised imitation, not RL.
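The distinction is visible directly in the loss. Training on those off-policy traces is plain likelihood maximisation over the expert’s tokens; a minimal sketch with toy tensors standing in for a real model’s output:

```python
import torch
import torch.nn.functional as F

def imitation_loss(student_logits: torch.Tensor,
                   teacher_tokens: torch.Tensor) -> torch.Tensor:
    """Behavioral cloning: maximise likelihood of the teacher's tokens.

    student_logits: (seq_len, vocab_size) next-token predictions.
    teacher_tokens: (seq_len,) the expert trace being imitated.
    No reward, no advantage, no policy ratio -- just supervised NLL.
    """
    return F.cross_entropy(student_logits, teacher_tokens)

logits = torch.randn(10, 32000)           # 10 positions, 32k vocab
targets = torch.randint(0, 32000, (10,))  # teacher's token ids
print(imitation_loss(logits, targets))
```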
Support Vector Machines (SVM) are a mathematical approach for classifying data by finding optimal separating hyperplanes, applicable even in non-linear scenarios using kernel methods.
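To get a feel for the kernel idea in practice, here’s a minimal scikit-learn sketch on a toy non-linear dataset; the dataset and hyperparameters are illustrative choices, not from the post:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two interleaving half-moons: not linearly separable in 2D
X, y = make_moons(n_samples=200, noise=0.15, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The RBF kernel implicitly maps the points into a space where a
# separating hyperplane does exist
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```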
The document discusses various search algorithms used by Intelligent Agents for navigating mazes, detailing their types, characteristics, tradeoffs, and implementations.
This text introduces key concepts and algorithms related to intelligent agents in AI, focusing on search terms, uninformed and informed search strategies, and adversarial search techniques.
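As a taste of the uninformed strategies those posts cover, here’s a minimal breadth-first search over a grid maze; the maze encoding is my own toy setup:

```python
from collections import deque

def bfs(maze: list[str], start: tuple[int, int], goal: tuple[int, int]):
    """Breadth-first search: complete, and optimal when every step costs 1."""
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                    and maze[nr][nc] != "#" and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no path exists

maze = ["S..#",
        ".#.#",
        "...G"]
print(bfs(maze, start=(0, 0), goal=(2, 3)))
```

Swapping the queue for a stack gives depth-first search; adding a heuristic-ordered priority queue gives the informed strategies.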
Regularisation is known to reduce overfitting when training a neural network. As with a lot of these techniques there is a rich background and many options available, so asking why and how opens up a lot of information. Digging through it, for me at least, it wasn’t clear why or how regularisation reduced overfitting until I reframed what it was doing.
In short, regularisation changes the sensitivity of the model to the training data.
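One way to see that reframing in code: an L2 penalty added to the cost constantly shrinks the weights, directly capping how sharply the model can respond to any individual training point. A minimal sketch with plain gradient descent; names and data are illustrative:

```python
import numpy as np

def ridge_step(w, X, y, eta=0.01, lam=0.1):
    """One gradient-descent step on MSE plus an L2 penalty.

    The lam * w term pulls every weight toward zero each step,
    limiting the model's sensitivity to the training data.
    """
    errors = X @ w - y
    grad = X.T @ errors / len(y) + lam * w  # data gradient + L2 gradient
    return w - eta * grad

rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
w = np.zeros(3)
for _ in range(200):
    w = ridge_step(w, X, y)
print(w)  # larger lam -> weights pulled closer to zero
```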
This is an interesting one as I’d thought it was quite academic, with limited utility. Then I saw these graphs
Error per epoch
This graph shows the error per epoch when training a model on the data as-is.
We can see that it takes around 180-200 epochs to train with a learning rate (eta) of 0.0002 or lower.
Now compare it to this one
Here we see the training takes around 15 epochs with a learning rate of 0.
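I believe the difference between the two graphs comes down to feature scaling; assuming that’s right, the transformation is just standardisation (my sketch, not the post’s code):

```python
import numpy as np

def standardise(X: np.ndarray) -> np.ndarray:
    """Rescale each feature to zero mean and unit variance.

    With all features on the same scale, gradient descent tolerates a
    much larger learning rate without diverging, so training converges
    in far fewer epochs.
    """
    return (X - X.mean(axis=0)) / X.std(axis=0)

X = np.array([[150.0, 0.2],
              [160.0, 0.4],
              [170.0, 0.9]])
print(standardise(X))
```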
Next I’m looking at the Adaline in Python code. This post is a mixture of what I’ve learnt in my degree, Sebastian Raschka’s book/code, and the 1960 paper that delivered the Adaline Neuron.
Difference between the Perceptron and the Adaline
In the first post we looked at the Perceptron as a flow of inputs (x), multiplied by weights (w), then summed in the Aggregation Function and finally quantised in the Threshold Function.
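That flow maps almost line-for-line to code. A minimal sketch of the forward pass and Rosenblatt’s update rule, following the post’s notation:

```python
import numpy as np

def predict(x: np.ndarray, w: np.ndarray, b: float) -> int:
    """Aggregation then threshold: quantise w . x + b to a class label."""
    z = np.dot(w, x) + b          # aggregation function
    return 1 if z >= 0.0 else -1  # threshold function

def perceptron_update(x, y, w, b, eta=0.1):
    """Nudge the weights only when the prediction is wrong."""
    error = y - predict(x, w, b)  # 0 when correct, +/-2 when wrong
    return w + eta * error * x, b + eta * error
```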
The text discusses the development and significance of the Adaline artificial neuron, highlighting its introduction of a continuous, differentiable activation function and cost minimisation, which have important implications for modern machine learning.
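The contrast with the Perceptron shows up in the update step: Adaline measures its error on the continuous activation, before any thresholding, so the cost is differentiable. A minimal sketch of the Widrow-Hoff (delta rule) batch update; my reconstruction, not the post’s code:

```python
import numpy as np

def adaline_epoch(X, y, w, b, eta=0.01):
    """One batch gradient-descent step on the sum-of-squared-errors cost."""
    output = X @ w + b           # linear activation, no threshold
    errors = y - output          # continuous error, not a class mismatch
    w = w + eta * X.T @ errors   # follow the gradient of the SSE cost
    b = b + eta * errors.sum()
    cost = (errors ** 2).sum() / 2.0
    return w, b, cost
```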
This post looks at the Perceptron, from Frank Rosenblatt’s original paper to a practical implementation classifying Iris flowers.
The Perceptron is the original Artificial Neuron and provided a way to train a model to classify linearly separable data sets.
The Perceptron itself had a short life, with the Adaline coming three years later. However, its name lives on in modern neural networks: Multilayer Perceptrons (MLPs). The naming shows the importance of this discovery.
intro
Maths, computation, the mind, and related fields are a fascination for me.
I had thought I was quite well informed, and to a large degree I did know most of the science in more traditional Computer Science (it was my undergraduate degree…). What had slipped me by was reinforcement learning, both its mathematical grounding and the value of its application. If you’ve come from the previous post ([RL Series 1/n] Defining Artificial Intelligence and Reinforcement Learning) you know I’ve said something like that already.
intro
I’m learning about Reinforcement Learning; it’s an area that holds a lot of intrigue for me. The first I recall hearing of it was when ChatGPT was released and it was said that Reinforcement Learning from Human Feedback was the key to making its responses so fluent.
Since then I’ve been studying AI and Data Science for a Masters, so I’m stepping back to understand the domain in greater detail.