[zero-RL] where is the exploration?

Source: Off Policy “zero RL” in simple terms. Results demonstrate that LUFFY encourages the model to imitate high-quality reasoning traces while maintaining exploration of its own sampling space. The authors introduce policy shaping via regularized importance sampling, which amplifies learning signals for low-probability yet crucial actions under “off-policy” guidance. The aspect that is still not clear to me is how there is any exploration of the solution space.
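
As a reader’s sketch of what that shaping might look like: if I’ve read the paper right, the regulariser has the form f(p) = p / (p + γ) applied to off-policy token probabilities; the γ value below is made up, and the snippet is mine, not LUFFY’s code.

    import torch

    def shaped_weight(p: torch.Tensor, gamma: float = 0.1) -> torch.Tensor:
        # f(p) = p / (p + gamma): close to p for likely tokens, but with a
        # much larger gradient (gamma / (p + gamma)**2) when p is small,
        # amplifying the learning signal on low-probability tokens.
        return p / (p + gamma)

    probs = torch.tensor([0.01, 0.9])   # a rare token vs a common one
    print(shaped_weight(probs))         # tensor([0.0909, 0.9000])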

Continue reading →

[zero-RL] LUFFY: Learning to reason Under oFF policY guidance

Based on conventional zero-RL methods such as GRPO, LUFFY introduces off-policy reasoning traces (e.g., from DeepSeek-R1) and combines them with the model’s on-policy roll-outs before advantage computation. … However, naively combining off-policy traces can lead to overly rapid convergence and entropy collapse, causing the model to latch onto superficial patterns rather than acquiring genuine reasoning capabilities. …genuine reasoning capabilities… I am not certain whether the implication is that DeepSeek-R1 can reason or that it is a reminder that no model can genuinely reason.
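
To make the mechanics concrete, here’s a toy sketch (my own, not from the paper) of GRPO-style group-normalised advantages computed over a mixed group; the rewards and group sizes are made up.

    import numpy as np

    def group_advantages(rewards: np.ndarray) -> np.ndarray:
        # GRPO-style normalisation: advantage = (r - mean) / std over the group.
        return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    on_policy = np.array([0.0, 1.0, 0.0, 0.0, 1.0, 0.0])   # model's own rollouts
    off_policy = np.array([1.0, 1.0])                      # e.g. DeepSeek-R1 traces
    print(group_advantages(np.concatenate([on_policy, off_policy])))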

Continue reading →

[zero-RL] what is it?

Zero-RL applies reinforcement learning (RL) to a base LM directly, eliciting its reasoning potential using the model’s own rollouts. A fundamental limitation worth highlighting: it is inherently “on-policy”, constraining learning exclusively to the model’s self-generated outputs through iterative trials and feedback cycles. Despite showing promising results, zero-RL is bounded by the capabilities of the base LLM itself. A key characteristic is that an LLM can be trained this way without Supervised Fine-Tuning (SFT).
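
For illustration, a toy sketch of that on-policy loop; generate and verify are stand-ins I’ve made up (a real setup samples from the LM and scores answers with a verifier).

    import random

    def generate(prompt):                # stand-in for LM sampling
        return random.choice(["4", "5"])

    def verify(prompt, answer):          # stand-in for a reward function
        return 1.0 if answer == "4" else 0.0

    prompt, group_size = "2 + 2 = ?", 8
    rollouts = [generate(prompt) for _ in range(group_size)]   # on-policy only
    rewards = [verify(prompt, r) for r in rollouts]
    # ...a GRPO/PPO-style update would consume these; note that no SFT data
    # and no external traces appear anywhere in the loop.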

Continue reading →

[zero-RL] When you SFT a smaller LM on the reasoning traces of a larger LM

You are doing Imitation Learning (specifically Behavioral Cloning) because the goal and mechanism involve mimicking the expert’s token sequences. You are doing Transfer Learning (specifically Knowledge Distillation) because you are transferring reasoning knowledge from a teacher model to a student model. You are not doing Off-Policy Reinforcement Learning because the learning process is supervised likelihood maximization, not reward maximization using RL algorithms. Although the data itself is “off-policy” (not generated by the model being trained), the learning paradigm is supervised imitation, not RL.
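
A minimal sketch of that distinction (my own; the tensors are random stand-ins for student logits and a tokenised teacher trace): the loss is plain token-level cross-entropy against the teacher’s sequence, with no reward term anywhere.

    import torch
    import torch.nn.functional as F

    vocab, seq_len = 32000, 12
    student_logits = torch.randn(1, seq_len, vocab, requires_grad=True)
    teacher_ids = torch.randint(0, vocab, (1, seq_len))   # tokenised teacher trace

    loss = F.cross_entropy(student_logits.flatten(0, 1), teacher_ids.flatten())
    loss.backward()   # plain supervised gradient: imitate the trace, no reward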

Continue reading →

Notes and links on SVMs (WIP)

Support Vector Machines (SVMs) are a mathematical approach to classifying data by finding optimal separating hyperplanes, applicable even in non-linear scenarios using kernel methods.
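
A small illustration with scikit-learn: the two-moons toy dataset isn’t linearly separable, yet an RBF-kernel SVM separates it by implicitly mapping the points into a higher-dimensional space.

    from sklearn.datasets import make_moons
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=200, noise=0.15, random_state=0)   # not linearly separable
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)        # kernel trick
    print(f"training accuracy: {clf.score(X, y):.2f}")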

Continue reading →

[IA Series 2/n] Search Algorithms and Intelligent Agents

The document discusses various search algorithms used by Intelligent Agents for navigating mazes, detailing their types, characteristics, tradeoffs, and implementations.

Continue reading →

[IA Series 1/n] AI Search - Terms and Algorithms

This text introduces key concepts and algorithms related to intelligent agents in AI, focusing on search terms, uninformed and informed search strategies, and adversarial search techniques.

Continue reading →

[Python Series 1/n] Modern Python Package Management: pipx and uv for Data Scientists

This post is inspired by a conversation with a fellow Data Science and AI student. It’s drawn from that conversation, co-authored with Claude. Hope it’s useful! Where to begin? When you’re starting your data science journey with Python, one of the first roadblocks you’ll encounter is package management. If you’ve tried conda and found it frustrating (as many do), you’re not alone. Let’s explore two modern tools that make Python package management simpler and more reliable: pipx and uv.

Continue reading →

Dystopia? It's already here and that's OK. Here's why.

The text reflects on the misuse of technology and ethics in Silicon Valley, highlighting the importance of awareness and compassion amidst current challenges.

Continue reading →

finally found something I wanted to use ChatGPT image generation for! On the fridge and the family loves it, going to be a busy week 👨🏻‍🌾

happiness is

    "django_cotton",
    "template_partials.apps.SimpleAppConfig",

unhappiness (for what seems like an eternity) is:

    "template_partials.apps.SimpleAppConfig",
    "django_cotton",

How did America break itself? Ideological sabotage of the scientific method and how to counter it.

Great podcast where she talks about why America is broken. Ideological sabotage (which surprised me; I’d thought the initial perpetrators would have done it for money) of the scientific method to protect the “right of freedom” has done exactly the opposite… It really feels like some Americans are stuck fighting foes that no longer exist, whether British tyranny and taxation without representation or the war between capitalism and communism.

Continue reading →

China’s first heterogeneous humanoid robot training facility

www.globaltimes.cn/page/2025…

During the first phase of the project, the robots will be trained with approximately 45 atomic skills such as grasping, picking, placing and transporting

A single action may need to be repeated up to 600 times a day by a data collector for the robots to learn from

10 key scenarios, including industrial, domestic, and tourism services

It is expected that the collection of over 10 million real-machine data entries will be achieved within the year

Is the EU AI Act Killing Startups? A Medical Device Perspective

The analysis concludes that while the EU AI Act does not obstruct startups, it presents both challenges and opportunities for innovation within a complex regulatory landscape.

Continue reading →

The cold has fully kicked in now, and has a hint of covid about it… 😵😷

Plans to wire up the shed scrapped.

Spoilt for choice between Russell’s Human Compatible, Mark Burgess’s Treatise on Systems, or Green Mars. 🤓

Given the kids are out I might just enjoy the quiet!

#ChilledSunday #BeingHuman

“But who was learning, you or the machine?”

“Well, I suppose we both were”

Amazing book 🔥🤓

#TheAlignmentProblem #Learning #ResponsibleAI

The Alignment Problem by Brian Christian 📚

Clearly there are thoughtful, well-spoken politicians in America.

youtu.be/ubBnUCXj4…

I hope people can rally around and stop the buffoons soon. 💪🏼

#BeingHuman

BBC news article is very clear…

The Russian president has given the US leader just enough to claim that he made progress towards peace in Ukraine, without making it look like he was played by the Kremlin.

Full article

New wave of Innovators: why AI won't replace software engineering

There’s a lot of change at the moment; my feed is all about foreign policy, US government cuts, AI writing all the code, and now parenting adolescents. I’ve been experiencing a high level of uncertainty about Europe’s place in the world, mainly what decisions will be made after the US made its policy clear. There is one area, though, where my uncertainty is decreasing: AI-generated code. It won’t replace software engineers.

Continue reading →

[NN Series 5/n] Regularisation: reducing the complexity of a model without compromising accuracy

Regularisation is known to reduce overfitting when training a neural network. As with a lot of these techniques, there is a rich background and many options available, so asking why and how opens up a lot of information. Digging through it, for me at least, it wasn’t clear why or how regularisation reduced overfitting until I reframed what it was doing. In short, regularisation changes the sensitivity of the model to the training data.
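
As a concrete example (my own sketch, not from the post’s later sections), the common L2 penalty, a.k.a. weight decay: penalising large weights makes the fitted function smoother, i.e. less sensitive to individual training points; lam is the regularisation strength.

    import torch

    def regularised_loss(task_loss, model, lam=1e-4):
        # add lam * ||w||^2: large weights get penalised, so the fitted
        # function is smoother and reacts less to individual examples
        l2 = sum((p ** 2).sum() for p in model.parameters())
        return task_loss + lam * l2

    # many optimisers build the same thing in as weight decay, e.g.:
    # torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)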

Continue reading →