[NN Series 4/n] Feature Normalisation

This is an interesting one, as I’d thought it was quite academic, with limited utility. Then I saw these graphs. The first [figure: error per epoch, training a model on the data as is] shows it takes around 180-200 epochs to train with a learning rate (eta) of 0.0002 or lower. Now compare it to the second: there the training takes around 15 epochs with a learning rate of 0.…
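As a rough sketch of why (assuming the post uses standardisation, one common form of feature normalisation), putting every feature on a comparable scale lets gradient descent take much larger steps safely:

import numpy as np

# Minimal z-score standardisation sketch; X is assumed to be a
# samples-by-features NumPy array.
def standardise(X):
    # Centre each feature on 0 and scale it to unit variance, so no
    # single feature dominates the gradient and eta can be much larger.
    return (X - X.mean(axis=0)) / X.std(axis=0)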

Read more ⟶

From Green Mars by Kim Stanley Robinson.

[NN Series 3/n] Calculating the error before quantisation: Gradient Descent

Next I’m looking at the Adaline in Python code. This post is a mixture of what I’ve learnt in my degree, Sebastian Raschka’s book/code, and the 1960 paper that introduced the Adaline neuron. Difference between the Perceptron and the Adaline: in the first post we looked at the Perceptron as a flow of inputs (x), multiplied by weights (w), then summed in the aggregation function and finally quantised in the threshold function.…
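A minimal Adaline sketch in Python (my simplification, not Raschka’s exact code; labels y assumed in {-1, 1}): the error is taken from the continuous net input, before the threshold quantises it, which is what makes a differentiable cost, and hence gradient descent, possible.

import numpy as np

def train_adaline(X, y, eta=0.01, epochs=50):
    # Batch gradient descent on the Adaline's sum-of-squares cost.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        net_input = X @ w + b    # aggregation function, no threshold yet
        errors = y - net_input   # the error before quantisation
        w += eta * X.T @ errors  # step down the cost gradient
        b += eta * errors.sum()
    return w, b

def predict(X, w, b):
    # The threshold function only quantises at prediction time.
    return np.where(X @ w + b >= 0.0, 1, -1)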

Read more ⟶

[NN Series 2/n] Circuits that can be trained to match patterns: The Adaline

The text discusses the development and significance of the Adaline artificial neuron, highlighting its introduction of non-linear activation functions and cost minimization, which have important implications for modern machine learning.…

Read more ⟶

#BeingHuman - look after your << self >>: love is all it needs.

The author shares personal reflections on self-kindness and positive thinking as tools for finding peace amid societal challenges.…

Read more ⟶

#BeingHuman and a Dad.

My wife and I have three main concerns with our daughter’s use of phones and social media: what her videos and posts can be used for, by both the companies and anyone who has access (making fake videos in her likeness); loss of critical thinking; and addiction and the infinite scroll. So we’ve got these four guidelines in place: phone off at 20h; 30 minutes reading each night; a creative session per week; and meditation (2×5 minutes per week). I’ve also gone through these two posts with my 13 year old daughter.…

Read more ⟶

Pondering Agency and Consciousness #BeingHuman

Had a nice exchange about Agency with Paul Burchard on LinkedIn this morning. My thinking goes towards Agency being a secondary characteristic, definition even, of what we see as a result of senses, perception, intelligence, and consciousness. Those primary characteristics are from the Buddhist 5 Aggregates (physical form, senses, perception, mental activity, and consciousness). It was a great exchange, helped me clarify and link my thinking to yesterday’s post on the Perceptron.…

Read more ⟶

[NN Series 1/n] From Neurons to Neural Networks: The Perceptron

This post looks at the Perceptron, from Frank Rosenblatt’s original paper to a practical implementation classifying Iris flowers. The Perceptron is the original artificial neuron and provided a way to train a model to classify linearly separable data sets. The Perceptron itself had a short life, with the Adaline coming three years later. However, its name lives on in modern neural networks as the Multilayer Perceptron (MLP). The naming shows the importance of this discovery.…
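For a flavour of the learning rule (a minimal sketch in Python, not the post’s full Iris implementation; labels assumed in {-1, 1}):

import numpy as np

def train_perceptron(X, y, eta=0.1, epochs=10):
    # Quantise first, then update the weights only on a misclassification.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if (xi @ w + b) >= 0.0 else -1
            update = eta * (target - prediction)  # zero when correct
            w += update * xi
            b += update
    return w, b

On linearly separable data, like two of the Iris classes, this is guaranteed to converge; on overlapping classes it never settles.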

Read more ⟶

This is not normal nor is it ok. Meta is now the pervy old man you have to teach your kids to avoid.

transparency.meta.com/en-gb/pol…

#BeingHuman #ResponsibleAI

Nice opening. Looking forward to reading more!

Nous pouvons et devons bâtir l’intelligence artificielle au service des femmes et des hommes, compatible avec notre vision du monde, dotée d’une gouvernance large, en préservant notre souveraineté.

We can and must build artificial intelligence to serve women and men, compatible with our vision of the world, with broad governance, while preserving our sovereignty.

Macron on LinkedIn

First test with a “reasoning” model, pleasantly surprised.

Not sure how to integrate it into my workflow though; the responses are so long!!

How do humans decipher reward in an uncertain state and environment?

Imitation seems the most likely, supported by extended solitude usually leading to a depressed state.

Feels like a question to run a human Monte Carlo Tree Search on!

#BeingHuman #ReinforcementLearning #InverseReinforcementLearning

If I could answer any question in science, I’d find out what involvement the neurons in our heart and gut have in decision making and how we view ourselves.

What about you?

#BeingHuman #ThatsNotAWeekendProject 🙃

[RL Series 2/n] From Animals to Agents: Linking Psychology, Behaviour, Mathematics, and Decision Making

intro: Maths, computation, the mind, and related fields are a fascination for me. I had thought I was quite well informed, and to a large degree I did know most of the science in more traditional Computer Science (it was my undergraduate degree…). What had passed me by was reinforcement learning, both its mathematical grounding and its practical value. If you’ve come from the previous post ([RL Series 1/n] Defining Artificial Intelligence and Reinforcement Learning) you know I’ve said something like that already.…

Read more ⟶

The challenges of being human: mistaking prediction, narratives, and rhetoric for reasoning

I read an insightful comment within the current wave of LLM reasoning hype. It has stuck with me, for at least two reasons: it reminded me of my view that AGI is already here in the guise of companies, and it’s also a valid answer as to why I meditate and why Searle’s Chinese Room is mainly wrong. Back to the comment; paraphrased, it said: “the uncomfortable truth these reasoning models show us is that a lot of activities we thought needed human reasoning to complete simply need functional predictions”…

Read more ⟶

[RL Series 1/n] Defining Artificial Intelligence and Reinforcement Learning

intro: I’m learning about Reinforcement Learning; it’s an area that holds a lot of intrigue for me. The first I recall hearing of it was when ChatGPT was released and Reinforcement Learning from Human Feedback was said to be the key to making its responses so fluent. Since then I’ve been studying AI and Data Science for a Masters, so I’m stepping back to understand the domain in greater detail.…

Read more ⟶

What is Off-Policy learning?

I’ve recently dug into Temporal Difference algorithms for Reinforcement Learning. The field’s history has been a ride: from animals in the late 1890s to control theory, agents, and back to animals in the 1990s (and on). It’s culminated in me developing a Q-Learning agent, and learning about hyperparameter sweeps and statistical significance, all relevant to the efficiency of off-policy learning but topics for another day. I write this as it took a moment for me to realise what off-policy learning actually is.…
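A minimal Python sketch of where the “off-policy” split lives in Q-Learning (illustrative hyperparameters, not my actual agent): the agent acts with an epsilon-greedy behaviour policy, but the update bootstraps from the greedy target policy’s max over next actions.

import numpy as np

rng = np.random.default_rng(0)

def choose_action(Q, s, epsilon=0.1):
    # Behaviour policy: epsilon-greedy, so the agent keeps exploring.
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))
    return int(Q[s].argmax())

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Target policy: greedy. We update towards the best next action,
    # regardless of which action the behaviour policy actually takes next.
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

That gap, learning about one policy while following another, is what makes Q-Learning off-policy.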

Read more ⟶

Are LLM learning skills rather than being Stochastic Parrots?

A Theory for Emergence of Complex Skills in Language Models
Skill-Mix: a Flexible and Expandable Family of Evaluations for AI models
www.quantamagazine.org/new-theor…
youtu.be/fTMMsreAq…

Related to the authors:

Arora, S (arxiv.org/search/cs)
Was that Sarcasm?: A Literature Survey on Sarcasm Detection
Can Models Learn Skill Composition from Examples?
Instruct-SkillMix: A Powerful Pipeline for LLM Instruction Tuning

Goyal, A (arxiv.org/search/cs)
Learning Beyond Pattern Matching? Assaying Mathematical Understanding in LLMs
Metacognitive Capabilities of LLMs: An Exploration in Mathematical Problem Solving…

Read more ⟶

Domain Specific Languages

Ray Myers has started his Year of Domain Specific Languages 🎉 I listened to the first episode yesterday, on my bike because I’m getting fit again, and was reminded of when I did something similar. Got me wondering: is this a DSL? 🤔 Around 2007 I set up a CI/CD system for Pershing, using Microsoft Team Foundation Server, PowerShell, and MSI. I’m writing this to help remember the details (it probably needs a diagram), but here are the main principles:…

Read more ⟶

Finished reading: Red Mars by Kim Stanley Robinson 📚

A great book. Other than it being a highly recommended space opera, I had little prior knowledge of it.

It’s a story of building a community and industry on Mars, starting with scientists. Told from the viewpoint of multiple characters, the protagonists are fascinating and you read their stories of why they are on Mars and what they do once there!

The characters are great, each with a unique viewpoint and set of skills: the charismatic dreamer-leader John, grumpy geologist Ann, passionate leader Maya, supremely focused leader Frank, geeky terraformer Sax, enigmatic botanist Hiroko, rebellious Arkady, homesick psychologist Michel, and the pragmatic engineer Nadia. There are more as well.

The arcs made me feel for the characters, want them to succeed, and challenged my view of what was right for the group of Martians in a surprising way.

Had to stop myself immediately picking up Green Mars so I could reflect on it.

Dopamine as temporal difference errors !! 🤯

I expect I’m sharing a dopamine burst that I experienced! 🤓 I’m listening to The Alignment Problem by Brian Christian 📚 and it explains how Dayan, Montague, and Sejnowski connected Wolfram Schultz’s work to the Temporal Difference algorithm (iirc from Sutton and Barto, of course!). A quick search returns these to add to my maybe-reading list: Dopamine and Temporal Differences Learning (Montague, Dayan & Sejnowski, 1996); Dopamine and temporal difference learning: A fruitful relationship between neuroscience and AI (DeepMind, 2020)…
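For concreteness, a minimal TD(0) sketch in Python (my illustration, not from the book): the TD error delta is the quantity Schultz’s dopamine recordings were mapped onto.

# V maps states to value estimates; alpha (step size) and gamma
# (discount) are illustrative hyperparameters.
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    # TD error: how much better or worse things went than predicted.
    # This prediction error is the proposed analogue of the dopamine signal.
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta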

Read more ⟶

It is possible for dopamine to write cheques that the environment cannot cash. At which point the value function must come back down.

Nice lunch time walk into the village

Nice summary and stark reminder of what’s happening right now.

Only CEOs are making the decisions… they have a vested interest.

Worth keeping in mind it’s not just computing but robotics that are progressing.

Stuart Russell at the World Knowledge Forum 2024

Stuart Russell on Wikipedia

[video] Crew.ai experiment with Cyber Threat Intelligence

Read more ⟶

Agentic behaviours

My initial thoughts, expressed via the medium of sport, on agentic behaviours, plus a friend’s view, which I think is better (expected, as he’s the basketball player). Jackson is the ethical agent. Pippen is the organizing agent. Harper is the redundant agent. Note: since looking into this I’m not sure agentic is the right term; I now think of them as simply components of a system.…

Read more ⟶

[short] Why use tools with an LLM?

Read more ⟶

[short] AI Systems

Read more ⟶

Project Euler meets Powershell - Problem #4

<# A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99. Find the largest palindrome made from the product of two 3-digit numbers. #>

# 998001 - so what's the largest palindrome number less than this one - then check if it's a product of 3-digit numbers
function ispalindrome {
    [cmdletbinding()]
    param( [int] $number )
    process {
        $digits = @()
        $next_int = $number
        while ($next_int -gt 9) {
            $digit = $next_int % 10
            $digits = $digits + $digit
            $next_int = ($next_int / 10) - (($next_int % 10)/10) # (99099 / 10) - ((99099 % 10)/10)
        }
        $digits = $digits + $next_int
        #$digits
        #$digits.…

Read more ⟶

Project Euler meets Powershell - reworking factorial to avoid PowerShell 1000 recursions limit

Turns out it was the recursion in the factorial function; I’ve reworked it to use a for loop:

function factorial {
    [cmdletbinding()]
    param($x)
    if ($x -lt 1) { return "Has to be on a positive integer" }
    Write-Verbose "Input is $x"
    $fact = 1
    for ($i = $x; $i -igt 0; $i -= 1) {
        #Write-Verbose "i: $i"
        $fact = [System.Numerics.BigInteger]::Multiply($i, $fact)
    }
    Write-Verbose "i equals $i"
    $fact
}

it’s still running for factorial 486847……

Read more ⟶

Project Euler meets Powershell - Problem #3

amonkeyseulersolutions: I’ve read that an integer p > 1 is prime if and only if the factorial (p - 1)! + 1 is divisible by p. So I’ve written this:

Here’s the fixed version…

function isprime {
    [cmdletbinding()]
    param($x)
    if ($x -lt 1) { return "Has to be on a positive integer" }
    # An integer p > 1 is prime if and only if the factorial (p - 1)!…

Read more ⟶

Project Euler meets Powershell - largest prime factor of a value?

things to do…
No more than the square root of the value.
test each value?
start from the square root and work down; the first one is the largest……

Read more ⟶

Project Euler meets Powershell - isprime

I’ve read that an integer p > 1 is prime if and only if the factorial (p - 1)! + 1 is divisible by p. So I’ve written this:

function isprime {
    [cmdletbinding()]
    param([int] $x)
    if ($x -lt 1) { return "Has to be on a positive integer" }
    # An integer p > 1 is prime if and only if the factorial (p - 1)! + 1 is divisible by p
    [int] $factorial = factorial ($x-1)
    Write-Verbose "Factorial: $factorial"
    $remainder = ($factorial + 1) % $x
    Write-Verbose "Remainder: $remainder"
    if ($remainder -eq 0) { return $true }
    else { return $false }
}

Unfortunately it doesn’t work for some numbers known to be prime - 29 is the example I have……

Read more ⟶

Project Euler meets Powershell - factorial...

function factorial {
    [cmdletbinding()]
    param([int64] $x)
    if ($x -lt 1) { return "Has to be on a positive integer" }
    if ($x -eq 1) { [int64] $x }
    else { [int64] $x * (factorial ($x-1)) }
}
…

Read more ⟶

Project Euler - Problem 2

Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, … By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.

function fib {
    [cmdletbinding()]
    param ( [int] $x )
    process {
        if ($x -lt 2) { $x }
        else {
            $n1 = $x - 1
            $n2 = $x - 2
            $fib2 = fib $n2
            $fib1 = fib $n1
            $result = $fib1 + $fib2
            #Write-Verbose $result
            $result
        }
    }
}

#fib 5 -Verbose
Write-Host "___________-------------______________"
$ScriptStartTime = date
for ($i = 0; $fib -lt 4000000; $i += 1) {
    $fib = fib $i
    Write-Host -ForegroundColor Magenta "fib = $fib"
    If ($fib % 2 -eq 0) {
        $sum += $fib
        Write-Host -ForegroundColor Green "sum = $sum"
    }
}
$ScriptEndTime = date
$ScriptDuration = $ScriptEndTime - $ScriptStartTime
Write-Host "`n Time Taken:" + $ScriptDuration
…

Read more ⟶

Project Euler meets Powershell - Problem 1

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.

Solution to Euler’s Problem #1:

for ($i = 1; $i -lt 1000; $i += 1) { if ( ($i % 3 -eq 0) -or ($i % 5 -eq 0) ) { $count += $i } }; $count
…

Read more ⟶