Learning

Double Slit Experiment lecture from 1964 (remastered)

Remastered broadcast of a 1964 lecture by Richard Feynman on the Double Slit experiment. It finishes with a call to action: keep open priors to the evidence we see from Mother Nature!

Continue reading β†’

PCA and Entropy: The Information Connection

Linking entropy as a guide for when to use Principal Component Analysis.
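
The post's own derivation isn't in this excerpt; as a taster, here is a minimal sketch of one way to make the connection (my assumption, not necessarily the post's method): treat PCA's explained-variance ratios as a probability distribution and measure their Shannon entropy.

```python
import numpy as np
from sklearn.decomposition import PCA

def spectral_entropy(X):
    """Shannon entropy of PCA's explained-variance ratios.

    Low entropy: variance concentrates in a few components, so PCA
    compresses well. High entropy: variance is spread evenly, so PCA
    buys little."""
    p = PCA().fit(X).explained_variance_ratio_
    p = p[p > 0]                      # guard against log(0)
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
isotropic = rng.normal(size=(500, 10))   # no structure to exploit
stretched = isotropic * np.array([8, 4, 2, 1, .5, .25, .1, .1, .1, .1])
print(spectral_entropy(isotropic))   # near log2(10) ≈ 3.32
print(spectral_entropy(stretched))   # noticeably lower: PCA helps here
```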

Continue reading β†’

Statistics Term Sheet

Term sheet for key statistical ideas

Continue reading β†’

[BH 5/n] Argh... Just because we repeat "Correlation does not imply Causation" does not mean there isn't Causation!

This is a bit of a rant. I have memories of a senior manager shooting ideas down saying "correlation is not causation, I've done Stats at Uni and can prove anything is related to baked beans". It grated sooooo much. Firstly, because it was thoughtless rhetoric, purposefully or accidentally steamrolling ideas and immediately dismissing any attempt at constructive, data-driven decisions. Secondly, it grated because I didn't have the tools to show causation.

Continue reading β†’

Taming the vibes 🐍

A day vibe-coding, as a break from the normal routine of study. Done in a new environment, with a language I've not used: a VS Code extension in TypeScript.

Continue reading β†’

Decisions decisions - Final Year Project πŸ€”πŸ€”πŸ€”

I am very torn between two possibilities:

- Building on the Q-Learning Maze Solving Agent I did for AI Applications (Q-Learning Maze Solving Agent) by adding a Neural Network (Sutton and Barto)
- Building on the Intelligent Agents work I did in AI by applying an Agent Decision Process (Self-Consistency LLM-Agent) (the process is my interpretation of Russell and Norvig's work)
- (The Douglas Adams extra option 🤓) Adding a cached "self-awareness" layer based on a Bayesian Learning Agent that stores its certainty on the answers it gives.

Continue reading β†’

[Being Human 3/n]: moving on from previous unmet goals...

I wish I had time to finish:

- my research on the Evolution of Probabilistic Reasoning in AI, particularly Dempster-Shafer and Bayesian Networks
- How LLMs and Bayesian networks can be used for Risk Management
- creating a YouTube/Insta/TikTok vid for my latest post on LLM Agents

But I don't!! So this is me putting it to one side…

Continue reading β†’

[Being Human 2/n] Being scrappy shows we are Human in this Brave New World

Polish is cheap in this Brave New World of AI. Being scrappy is a way of being authentic and, most importantly, Being Human!

Continue reading β†’

[IA Series 7/n] Building a Self-Consistency LLM-Agent: From PEAS Analysis to Production Code

A guide to designing an LLM-based agent, from PEAS analysis to production code.

Continue reading β†’

[IA Series 5/n] The Evolution from Logic to Probability to Deep Learning: A course correction to Transformers

Introduction: In the previous post, where I shared my view on "Why Study Logic?", we looked at Knowledge Representation and highlighted the importance of Logic and Reasoning in storing and accessing Knowledge. In this post I'm going to highlight a section from the book "Introduction to Artificial Intelligence" by Wolfgang Ertel. His approach with this book was to make AI more accessible than Russell and Norvig's 1000+ page bible. It worked for me.

Continue reading β†’

[IA Series 4/n] A Big Question: Why Study Logic in a World of Probabilistic AI?

Introduction: The purpose of this article is to help me answer the question "Why am I studying Logic?". If it helps you too, that'd be great, let me know! The question comes from a nagging feeling: why don't I see logic used more in the 'real world'? It could be a personal bias, as I more easily see the utility of Rosenblatt's work, where he looked at both Symbolic Logic and Probability Theory to help solve a problem and chose Probability Theory ([NN Series 1/n] From Neurons to Neural Networks: The Perceptron). With that we had the birth of the Artificial Neuron, and the rest is history!

Continue reading β†’

[IA Series 3/n] Intelligent Agents Term Sheet

“[IA Series 3/n] Intelligent Agents Term Sheet” breaks down essential AI terminology from Russell & Norvig’s seminal textbook. Learn what makes agents rational (or irrational), understand different agent types, and follow a structured 5-step design process from environment analysis to implementation. Perfect reference for AI practitioners and students. Coming next: how agents mirror human traits. #ArtificialIntelligence #IntelligentAgents #AIDesign

Continue reading β†’

Building an Intelligent Agent

First draft in public 😱 😆🤓 What's the best way for an agent to build a semantically sound and syntactically correct knowledge base? Dogfooding my course material means the first step is to define the task environment. /Checks notes. Task Environment: the description of Performance, Environment, Actuators, and Sensors (PEAS). This provides a complete specification of the problem domain. So how can I implement this? 🤔 First I need to think on the domain, something different to the examples (e…
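
As a sketch of that first step, here's one way a PEAS specification could be captured in code; the fields follow Russell & Norvig's definition, while the example domain entries are my hypothetical illustration, not the post's design:

```python
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    """PEAS specification (Russell & Norvig): a complete
    description of the problem domain an agent operates in."""
    performance: list[str]  # how success is measured
    environment: list[str]  # what the agent operates within
    actuators: list[str]    # how the agent acts on the environment
    sensors: list[str]      # how the agent perceives it

# Hypothetical entries for a knowledge-base-building agent:
kb_agent_env = TaskEnvironment(
    performance=["semantic soundness", "syntactic correctness"],
    environment=["source documents", "existing knowledge base"],
    actuators=["add/revise knowledge-base entries"],
    sensors=["document parser", "KB query results"],
)
```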

Continue reading β†’

[zero-RL] Summarising what LUFFY offers

Here's a "standard" progression of training methodologies:

- Pre-training - the model gains broad knowledge, forming the foundation necessary for reasoning.
- CPT (Continued Pre-training) - makes the model knowledgeable about specific domains.
- SFT (Supervised Fine-Tuning) - makes the model skilled at specific tasks by leveraging knowledge it already has.
- RL (Reinforcement Learning) - uses methods like GRPO and DPO to align model behavior.

Reasoning traces play different roles at each stage:

Continue reading β†’

[zero-RL] where is the exploration?

Source: Off Policy "zero RL" in simple terms.

Results demonstrate that LUFFY encourages the model to imitate high-quality reasoning traces while maintaining exploration of its own sampling space. The authors introduce policy shaping via regularized importance sampling, which amplifies learning signals for low-probability yet crucial actions under "off-policy" guidance. The aspect that is still not clear to me is how there is any exploration of the solution space.
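
To make "policy shaping" concrete, here is a minimal sketch of a shaped importance weight; the shaping function f(w) = w/(w + γ) is my reading of the regularised-importance-sampling idea, not a verbatim reproduction of the paper:

```python
def shaped_weight(p_theta: float, p_off: float, gamma: float = 0.1) -> float:
    """Importance weight w = p_theta / p_off, passed through a
    regulariser f(w) = w / (w + gamma). For small w the shaped value
    (and its gradient) stays usable, so tokens the current policy
    finds unlikely, but the off-policy trace insists on, still
    contribute a learning signal."""
    w = p_theta / p_off
    return w / (w + gamma)

# A crucial token the current policy gives only 1% probability:
print(shaped_weight(0.01, 0.9))   # ~0.10, vs a raw weight of ~0.011
```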

Continue reading β†’

[zero-RL] LUFFY: Learning to reason Under oFF policY guidance

Based on conventional zero-RL methods such as GRPO, LUFFY introduces off-policy reasoning traces (e.g., from DeepSeek-R1) and combines them with the model's on-policy roll-outs before advantage computation. … However, naively combining off-policy traces can lead to overly rapid convergence and entropy collapse, causing the model to latch onto superficial patterns rather than acquiring genuine reasoning capabilities. …genuine reasoning capabilities… I am not certain if the implication is that DeepSeek-R1 can reason, or that it is a reminder that no model can genuinely reason.
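
A rough sketch of the mixing step as I read it (the reward numbers are illustrative): off-policy traces join the on-policy rollout group before the GRPO-style group-normalised advantage is computed.

```python
import numpy as np

def group_advantages(rewards):
    """GRPO-style advantages: normalise each reward against its group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Hypothetical group: six on-policy rollouts (one solved the task)
# plus two off-policy traces, all scored by the same verifier.
on_policy = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
off_policy = [1.0, 1.0]      # e.g. DeepSeek-R1 traces, usually correct
print(group_advantages(on_policy + off_policy))
# The off-policy traces land above the group mean, so the update
# pulls the policy toward them.
```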

Continue reading β†’

[zero-RL] what is it?

Zero-RL applies reinforcement learning (RL) directly to a base LM, eliciting its reasoning potential using the model's own rollouts. A fundamental limitation worth highlighting: it is inherently "on-policy", constraining learning exclusively to the model's self-generated outputs through iterative trials and feedback cycles. Despite showing promising results, zero-RL is bounded by the base LLM itself. A key characteristic is that it means an LLM can be trained without Supervised Fine-Tuning (SFT).
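
A toy illustration of that on-policy constraint (entirely my sketch, not a real training stack): the "policy" is one number, the probability of sampling a correct rollout, and every update comes from the model's own samples.

```python
import random

random.seed(0)

p_correct, lr = 0.2, 0.02   # the base model's chance of a correct rollout

for step in range(500):
    rollouts = [random.random() < p_correct for _ in range(8)]
    rewards = [1.0 if ok else 0.0 for ok in rollouts]
    baseline = sum(rewards) / len(rewards)
    for ok, r in zip(rollouts, rewards):
        # REINFORCE-style: reinforce sampled answers that beat the
        # baseline, suppress the ones that fall short.
        p_correct += lr * (r - baseline) * (1 if ok else -1)
    p_correct = min(max(p_correct, 0.0), 0.99)

# The bound the excerpt describes: if p_correct started at 0 the model
# would never sample a success, every advantage would be 0, and no
# learning could happen -- zero-RL cannot exceed what the base LM can
# already reach by sampling.
print(f"trained: {p_correct:.2f}")
```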

Continue reading β†’

[zero-RL] When you SFT a smaller LM on the reasoning traces of a larger LM

- You are doing Imitation Learning (specifically Behavioral Cloning), because the goal and mechanism involve mimicking the expert's token sequences.
- You are doing Transfer Learning (specifically Knowledge Distillation), because you are transferring reasoning knowledge from a teacher model to a student model.
- You are not doing Off-Policy Reinforcement Learning, because the learning process is supervised likelihood maximization, not reward maximization using RL algorithms.

Although the data itself is "off-policy" (not generated by the model being trained), the learning paradigm is supervised imitation, not RL.
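
The distinction is easy to see in the objective. A minimal sketch (the tensor names are illustrative): the student is trained by cross-entropy against the teacher's trace tokens, a pure likelihood objective with no reward term anywhere.

```python
import torch
import torch.nn.functional as F

# Distilling a teacher's reasoning trace is plain next-token prediction.
vocab_size, seq_len = 100, 12
student_logits = torch.randn(seq_len, vocab_size, requires_grad=True)  # from the smaller LM
teacher_token_ids = torch.randint(0, vocab_size, (seq_len,))           # the larger LM's trace

# Behavioral-cloning objective: maximise the likelihood of the
# expert's token sequence, i.e. minimise cross-entropy against it.
loss = F.cross_entropy(student_logits, teacher_token_ids)
loss.backward()   # the gradient comes from likelihood, not from a reward
```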

Continue reading β†’

Notes and links on SVMs (WIP)

Support Vector Machines (SVM) are a mathematical approach for classifying data by finding optimal separating hyperplanes, applicable even in non-linear scenarios using kernel methods.
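
A quick, standard illustration of that last point using scikit-learn: the two-moons data is not linearly separable, and swapping the linear kernel for an RBF kernel is what rescues the SVM.

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Two interleaving half-circles: no single hyperplane separates them.
X, y = make_moons(n_samples=200, noise=0.1, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)

print("linear kernel accuracy:", linear.score(X, y))  # struggles
print("RBF kernel accuracy:", rbf.score(X, y))        # near-perfect
```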

Continue reading β†’

[IA Series 2/n] Search Algorithms and Intelligent Agents

The document discusses various search algorithms used by Intelligent Agents for navigating mazes, detailing their types, characteristics, tradeoffs, and implementations.
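
The post's own implementations aren't shown in this excerpt; as a taster, here is a minimal breadth-first search over a grid maze (the maze encoding, 0 = open and 1 = wall, is my assumption):

```python
from collections import deque

def bfs(maze, start, goal):
    """Breadth-first search: an uninformed strategy. Complete, and
    optimal when every step costs the same."""
    frontier = deque([(start, [start])])
    visited = {start}
    rows, cols = len(maze), len(maze[0])
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

maze = [[0, 0, 1],
        [1, 0, 0],
        [0, 0, 0]]
print(bfs(maze, (0, 0), (2, 2)))
```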

Continue reading β†’

[IA Series 1/n] AI Search - Terms and Algorithms

This text introduces key concepts and algorithms related to intelligent agents in AI, focusing on search terms, uninformed and informed search strategies, and adversarial search techniques.

Continue reading β†’

“But who was learning, you or the machine?”

“Well, I suppose we both were”

Amazing book πŸ”₯πŸ€“

#TheAlignmentProblem #Learning #ResponsibleAI

The Alignment Problem by Brian Christian πŸ“š

[NN Series 5/n] Regularisation: reducing the complexity of a model without compromising accuracy

Regularisation is known to reduce overfitting when training a neural network. As with a lot of these techniques, there is a rich background and many options available, so asking why and how opens up a lot of information. Digging through it, for me at least, it wasn't clear why/how regularisation reduced overfitting until I reframed what it was doing. In short, regularisation changes the sensitivity of the model to the training data.
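
That reframing is easiest to see in ridge (L2) regression, where the regularised weights have a closed form; a small sketch on synthetic data (illustrative only): larger penalties shrink the weights, and smaller weights mean a small change in an input moves the output less.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=50)

def ridge_weights(X, y, lam):
    """Closed-form ridge solution: w = (X'X + lam*I)^-1 X'y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

for lam in [0.0, 1.0, 100.0]:
    w = ridge_weights(X, y, lam)
    print(f"lambda={lam:>6}: |w| = {np.linalg.norm(w):.3f}")
# Larger lambda -> smaller weights -> a model less sensitive to the
# particulars of the training data.
```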

Continue reading β†’

[NN Series 4/n] Feature Normalisation

This is an interesting one, as I'd thought it was quite academic, with limited utility. Then I saw these graphs.

[Graph: error per epoch, training on the data as-is] We can see that it takes around 180-200 epochs to train with a learning rate (eta) of 0.0002 or lower. Now compare it to this one:

[Graph: error per epoch] Here we see the training takes around 15 epochs with a learning rate of 0…
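
For reference, the standardisation itself is a one-liner; a minimal sketch (the example numbers are made up) of the zero-mean, unit-variance scaling that makes the larger learning rate usable:

```python
import numpy as np

def standardise(X):
    """Zero-mean, unit-variance scaling per feature (z-scores).

    With features on a common scale, the loss surface is better
    conditioned, so gradient descent tolerates a much larger
    learning rate -- hence the drop from ~200 epochs to ~15."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# Features on wildly different scales (e.g. height in cm, mass in tonnes):
X = np.array([[170.0, 0.07], [180.0, 0.09], [160.0, 0.05]])
print(standardise(X))
```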

Continue reading β†’

[NN Series 3/n] Calculating the error before quantisation: Gradient Descent

Next I'm looking at the Adaline in Python code. This post is a mixture of what I've learnt in my degree, Sebastian Raschka's book/code, and the 1960 paper that delivered the Adaline neuron. Difference between the Perceptron and the Adaline: in the first post we looked at the Perceptron as a flow of inputs (x), multiplied by weights (w), then summed in the Aggregation Function and finally quantised in the Threshold Function.
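
A minimal Adaline sketch in that spirit (condensed from the ideas above, not a copy of Raschka's class): the weight update uses the continuous net input, and the threshold is applied only at prediction time.

```python
import numpy as np

class Adaline:
    """Adaline (after the 1960 paper): the error is computed on the
    continuous net input *before* quantisation, which is what makes
    gradient descent possible."""

    def __init__(self, eta=0.01, epochs=50):
        self.eta, self.epochs = eta, epochs

    def fit(self, X, y):
        self.w = np.zeros(X.shape[1])
        self.b = 0.0
        for _ in range(self.epochs):
            net_input = X @ self.w + self.b      # aggregation function
            errors = y - net_input               # error BEFORE thresholding
            self.w += self.eta * X.T @ errors    # gradient-descent step on MSE
            self.b += self.eta * errors.sum()
        return self

    def predict(self, X):
        # The threshold (quantisation) only appears here, at the end.
        return np.where(X @ self.w + self.b >= 0.0, 1, -1)
```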

Continue reading β†’

[NN Series 2/n] Circuits that can be trained to match patterns: The Adaline

The text discusses the development and significance of the Adaline artificial neuron, highlighting its introduction of a continuous activation function and cost minimization, which have important implications for modern machine learning.

Continue reading β†’

#BeingHuman - look after your << self >>: love is all it needs.

The author shares personal reflections on self-kindness and positive thinking as tools for finding peace amid societal challenges.

Continue reading β†’

[NN Series 1/n] From Neurons to Neural Networks: The Perceptron

This post looks at the Perceptron, from Frank Rosenblatt's original paper to a practical implementation classifying Iris flowers. The Perceptron is the original Artificial Neuron and provided a way to train a model to classify linearly separable data sets. The Perceptron itself had a short life, with the Adaline coming 3 years later. However, its name lives on in neural networks as Multilayer Perceptrons (MLPs). The naming shows the importance of this discovery.
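
Rosenblatt's learning rule fits in a few lines; a minimal sketch on toy data (illustrative, not the post's Iris implementation): unlike the Adaline's continuous error, the update here uses the quantised prediction and fires only on mistakes.

```python
import numpy as np

def perceptron_fit(X, y, eta=0.1, epochs=10):
    """Perceptron rule: update only from the *quantised* prediction,
    and only when it is wrong. Converges whenever the classes are
    linearly separable."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b >= 0 else -1
            update = eta * (target - pred)   # zero when correct
            w += update * xi
            b += update
    return w, b

# Tiny linearly separable example (labels in {-1, +1}):
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = perceptron_fit(X, y)
print(w, b)
```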

Continue reading β†’

[RL Series 2/n] From Animals to Agents: Linking Psychology, Behaviour, Mathematics, and Decision Making

Intro: Maths, computation, the mind, and related fields are a fascination for me. I had thought I was quite well informed, and to a large degree I did know most of the science in more traditional Computer Science (it was my undergraduate degree…). What had slipped by me was reinforcement learning, both its mathematical grounding and the value of its application. If you've come from the previous post ([RL Series 1/n] Defining Artificial Intelligence and Reinforcement Learning) you know I've said something like that already.

Continue reading β†’

[RL Series 1/n] Defining Artificial Intelligence and Reinforcement Learning

Intro: I'm learning about Reinforcement Learning; it's an area that holds a lot of intrigue for me. The first I recall hearing of it was when ChatGPT was released and it was said that Reinforcement Learning from Human Feedback was the key to making its responses so fluent. Since then I've been studying AI and Data Science for a Masters, so I'm stepping back to understand the domain in greater detail.

Continue reading β†’