Agentic AI

[IA 10] The case of Claude, the Irrational AI Agent, and the Formal Decomposition of Goals

The discussion explores the challenges and implications of using AI coding agents with “irrational” performance measures, emphasizing the need for explicit goal-based frameworks to improve their functionality and alignment with human intentions.
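Purely as a rough illustration of what an explicit, goal-based framing could look like, here is a minimal sketch of a decomposed goal with a checkable acceptance test. The `Goal` dataclass and its fields are my own assumption for this index page, not the framework from the post.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch: a goal with an explicit, checkable acceptance test,
# decomposed into sub-goals. Field names are illustrative assumptions.
@dataclass
class Goal:
    description: str
    accept: Callable[[dict], bool]            # True if the world state satisfies the goal
    subgoals: List["Goal"] = field(default_factory=list)

    def satisfied(self, state: dict) -> bool:
        # A goal counts as met only if its own test and all sub-goals pass.
        return self.accept(state) and all(g.satisfied(state) for g in self.subgoals)

# Example: "fix the failing build" decomposed into two checkable sub-goals.
ship_fix = Goal(
    description="Fix the failing build",
    accept=lambda s: s.get("build") == "green",
    subgoals=[
        Goal("Unit tests pass", lambda s: s.get("unit_tests") == "pass"),
        Goal("Linter is clean", lambda s: s.get("lint_errors", 1) == 0),
    ],
)

print(ship_fix.satisfied({"build": "green", "unit_tests": "pass", "lint_errors": 0}))  # True
```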

Continue reading →

The need for robots - services and soft skills?

Snippet of a short article about where we need robots and what skills they will need.

Continue reading →

Are coding agents offering value?

A positive experience with Claude Code on the web for a small change; will it offer value for bigger changes?

Continue reading →

Planning is offline search.

Planning is offline search.

Planning is…. Yup, it is offline search.
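To make the quip concrete, here is a minimal sketch of planning as offline search: the whole plan is computed against a model of the environment before a single action is executed. The toy grid world and function names are my own, not from the post.

```python
from collections import deque

# Offline search: the entire plan is computed against a model of the world
# *before* acting. Nothing is executed until the search has finished.
def plan(start, goal, passable, width, height):
    """Breadth-first search over a grid model; returns a list of moves or None."""
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path                      # full plan, ready to hand to an executor
        for name, (dx, dy) in moves.items():
            nxt = (x + dx, y + dy)
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and passable(nxt) and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# The agent only acts once the offline search has produced a plan.
plan_found = plan(start=(0, 0), goal=(2, 2), passable=lambda p: p != (1, 1), width=3, height=3)
print(plan_found)   # e.g. ['down', 'down', 'right', 'right']
```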

Discussing the state of LLM-as-a-Judge - is it good enough to use? (human edition)

This is about connection - both with a fellow human interested in and articulate about Artificial Intelligence, and the connection of the information input, processed, and produced. The information: LLM-as-a-Judge - we chat about the survey paper and how it can be applied to modern AI applications. There’s a human-written blog post, a YouTube video, and a NotebookLM to chat to. Fill your boots :)
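For readers who want the core pattern before diving into the survey, here is a minimal LLM-as-a-Judge sketch: one model grades another model's answer against a rubric and returns a structured verdict. The `call_llm` helper and the rubric wording are placeholders I have assumed, not anything from the paper or the post.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for whatever chat-completion client you use; assumed, not real."""
    raise NotImplementedError

JUDGE_PROMPT = """You are a strict evaluator.
Question: {question}
Candidate answer: {answer}
Score the answer from 1 (wrong/unhelpful) to 5 (correct/complete) and explain briefly.
Respond as JSON: {{"score": <int>, "reason": "<one sentence>"}}"""

def judge(question: str, answer: str) -> dict:
    # The judge model sees only the question and the candidate answer,
    # and returns a structured verdict that downstream code can act on.
    raw = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    return json.loads(raw)
```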

Continue reading →

Using AI in research - connecting information is what it is all about

Google Meet, YouTube, and NotebookLM make for great research utilities.

Continue reading →

[IA 9] Agent Design Process v2: Bridging the Agent Function and Acceptance Criteria

Making AI Theory Testable. There’s a gap between the Agent Function and the Agent Program - between what the Agent should do and what it actually does. ATDD can help bridge this gap. Here I detail how.
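As a flavour of what "ATDD for agents" might look like (my own sketch, not the process from the post), the acceptance criteria describe the Agent Function first, and the Agent Program is then run against them. The vacuum-world agent below is the classic Russell & Norvig example, used here only for illustration, with a pytest-style test.

```python
# A minimal sketch of an acceptance test for an agent, assuming a pytest-style
# runner. The vacuum-world agent and its expected behaviour are illustrative.
def reflex_vacuum_agent(percept):
    """Agent Program: maps a percept (location, status) to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def test_agent_function_acceptance_criteria():
    # Acceptance criteria written first, describing what the Agent *should* do
    # (the Agent Function); the Agent Program above must satisfy them.
    assert reflex_vacuum_agent(("A", "Dirty")) == "Suck"
    assert reflex_vacuum_agent(("B", "Dirty")) == "Suck"
    assert reflex_vacuum_agent(("A", "Clean")) == "Right"
    assert reflex_vacuum_agent(("B", "Clean")) == "Left"
```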

Continue reading →

[IA Series 8/n] Building a Self-Reflection LLM Agent: From Theory to Proof of Concept

An initial free dive into Agentic Meta-cognition, using an element of Self-Reflection so the agent is aware of what it knows and can apply that knowledge in a utilitarian fashion.
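For a sense of the mechanics, a minimal self-reflection loop looks something like the sketch below: draft, critique, revise. The `call_llm` helper and prompt wording are my assumptions, not the implementation from the post.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for your LLM client of choice; assumed, not real."""
    raise NotImplementedError

def self_reflective_answer(task: str, max_rounds: int = 2) -> str:
    # Draft an answer, then let the model critique its own knowledge gaps
    # and revise; stop early if the critique finds nothing to fix.
    answer = call_llm(f"Answer the task:\n{task}")
    for _ in range(max_rounds):
        critique = call_llm(
            f"Task: {task}\nDraft answer: {answer}\n"
            "List factual gaps or things you are unsure of. Reply 'OK' if none."
        )
        if critique.strip() == "OK":
            break
        answer = call_llm(
            f"Task: {task}\nDraft: {answer}\nCritique: {critique}\n"
            "Rewrite the answer addressing the critique."
        )
    return answer
```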

Continue reading →

Can LLMs do Critical Thinking? Of course not. Can an AI system think critically? Why not?

A very interesting paper on Critical Thinking in an LLM (or lack thereof): “Our study investigates how language models handle multiple-choice questions that have no correct answer among the options. Unlike traditional approaches that include escape options like None of the above (Wang et al., 2024a; Kadavath et al., 2022), we deliberately omit these choices to test the models’ critical thinking abilities. A model demonstrating good judgment should either point out that no correct answer is available or provide the actual correct answer, even when it’s not listed.”

Continue reading →

[Being Human 3/n]: moving on from previous unmet goals...

I wish I had time to finish: my research on the evolution of Probabilistic Reasoning in AI, particularly Dempster-Shafer and Bayesian Networks; how LLMs and Bayesian networks can be used for Risk Management; and a YouTube/Insta/TikTok vid for my latest post on LLM Agents. But I don’t!! So this is me putting it to one side…

Continue reading →

[IA Series 7/n] Building a Self-Consistency LLM-Agent: From PEAS Analysis to Production Code

Building a Self-Consistency LLM-Agent: From PEAS Analysis to Production Code - a guide to designing an LLM-based agent.
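The core self-consistency trick is small enough to show inline: sample several independent answers and keep the majority vote. The sketch below assumes a hypothetical `call_llm` sampling helper; it is not the production code from the post.

```python
from collections import Counter

def call_llm(prompt: str, temperature: float = 0.8) -> str:
    """Placeholder for a sampling LLM call; assumed, not real."""
    raise NotImplementedError

def self_consistent_answer(question: str, n_samples: int = 5) -> str:
    # Sample several reasoning paths at a non-zero temperature, extract each
    # final answer, and return the one that appears most often.
    finals = []
    for _ in range(n_samples):
        completion = call_llm(
            f"{question}\nThink step by step, then give a final answer after 'ANSWER:'."
        )
        finals.append(completion.rsplit("ANSWER:", 1)[-1].strip())
    return Counter(finals).most_common(1)[0][0]
```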

Continue reading →

[IA Series 3/n] Intelligent Agents Term Sheet

“[IA Series 3/n] Intelligent Agents Term Sheet” breaks down essential AI terminology from Russell & Norvig’s seminal textbook. Learn what makes agents rational (or irrational), understand different agent types, and follow a structured 5-step design process from environment analysis to implementation. Perfect reference for AI practitioners and students. Coming next: how agents mirror human traits. #ArtificialIntelligence #IntelligentAgents #AIDesign

Continue reading →

China’s first heterogeneous humanoid robot training facility

www.globaltimes.cn/page/2025…

During the first phase of the project, the robots will be trained with approximately 45 atomic skills such as grasping, picking, placing and transporting

A single action may need to be repeated up to 600 times a day by a data collector for the robots to learn from

10 key scenarios, including industrial, domestic, and tourism services

It is expected that the collection of over 10 million real-machine data entries will be achieved within the year

A speculative recipe for useful agentic behaviours

1. define actions by Promise Theory
2. train multiple neural nets to classify an action for a given input (train them differently to spice things up)
3. take an environment for the agents to operate in (e.g. a 3d maze where collaboration is needed to escape)
4. bind the agents’ interactions with a healthy dose of the wave function collapse algorithm
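Purely as a thought experiment in the same speculative spirit, steps 2 and 4 might combine like this: an ensemble of differently-trained classifiers proposes actions, and a wave-function-collapse-style rule commits whichever agent currently has the lowest-entropy (most agreed-upon) action first. Everything here (names, the entropy rule, the stand-in classifiers) is assumed, not a working system.

```python
import math
import random
from collections import Counter

ACTIONS = ["move_north", "move_south", "wait", "signal_peer"]

def ensemble_votes(observation, n_models: int = 5) -> Counter:
    # Stand-in for "multiple neural nets trained differently": here each 'model'
    # is just a differently-seeded random voter, purely for illustration.
    votes = []
    for seed in range(n_models):
        rng = random.Random(hash((observation, seed)))
        votes.append(rng.choice(ACTIONS))
    return Counter(votes)

def entropy(counts: Counter) -> float:
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def collapse_step(observations: dict) -> tuple:
    # Wave-function-collapse flavour: commit the agent whose action distribution
    # has the lowest entropy (the ensemble agrees most), leaving the rest for later steps.
    scored = {agent: ensemble_votes(obs) for agent, obs in observations.items()}
    agent = min(scored, key=lambda a: entropy(scored[a]))
    action = scored[agent].most_common(1)[0][0]
    return agent, action

print(collapse_step({"agent_1": "wall_ahead", "agent_2": "open_corridor"}))
```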

Continue reading →

Pondering Agency and Consciousness #BeingHuman

Had a nice exchange about Agency with Paul Burchard on LinkedIn this morning. My thinking goes towards Agency being a secondary characteristic, definition even, of what we see as a result of senses, perception, intelligence, and consciousness. Those primary characteristics are from the Buddhist 5 Aggregates (physical form, senses, perception, mental activity, and consciousness). It was a great exchange, helped me clarify and link my thinking to yesterday’s post on the Perceptron.

Continue reading →

[video] Crew.ai experiment with Cyber Threat Intelligence

Continue reading →

Agentic behaviours

My initial thoughts, expressed via the medium of sport, on agentic behaviours, plus a friend’s view, which I think is better (expected, as he’s the basketball player). Jackson is the ethical agent. Pippen is the organizing agent. Harper is the redundant agent. Note: since looking into this, I’m not sure “agentic” is the right term; I now think of them as simply components of a system.

Continue reading →

[short] Why use tools with an LLM?

Continue reading →

[short] AI Systems

Continue reading →