During the first phase of the project, the robots will be trained in approximately 45 atomic skills, such as grasping, picking, placing, and transporting. A single action may need to be repeated up to 600 times a day by a data collector for the robots to learn from. Training covers 10 key scenarios, including industrial, domestic, and tourism services. The goal is to collect over 10 million real-machine data entries within the year.
The analysis concludes that while the EU AI Act does not obstruct startups, it presents both challenges and opportunities for innovation within a complex regulatory landscape.
I hope people can rally around and stop the buffoons soon. 💪🏼
#BeingHuman
The BBC news article is very clear…
The Russian president has given the US leader just enough to claim that he made progress towards peace in Ukraine, without making it look like he was played by the Kremlin.
There’s a lot of change at the moment, my feed is all about foreign policies, US government cuts, AI writing all code, and now parenting adolescents.
I’ve been experiencing a high level of uncertainty about Europe’s place in the world, mainly about what decisions will be made now that the US has made its policy clear.
Though there’s one area where my uncertainty is decreasing: AI-generated code. It won’t replace software engineers.
Regularisation is known to reduce overfitting when training a neural network. As with many of these techniques, there is a rich background and many options available, so asking why and how it works opens up a lot of information. Digging through that information, it wasn’t clear to me, at least, why or how it reduced overfitting until I reframed what it was doing.
In short, regularisation changes the sensitivity of the model to the training data.
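One way to see that reframing is a toy example. The sketch below is my own illustration (not from any particular library or from the post itself): it fits a linear model by gradient descent with and without an L2 penalty, and the penalised weights come out smaller, meaning the model reacts less strongly to any individual training point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y really depends only on the first feature, plus noise.
X = rng.normal(size=(50, 2))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=50)

def fit(X, y, lam, epochs=500, eta=0.01):
    """Gradient descent on MSE plus an L2 penalty lam * ||w||^2."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = -2 * X.T @ (y - X @ w) / len(y) + 2 * lam * w
        w -= eta * grad
    return w

w_plain = fit(X, y, lam=0.0)  # unregularised fit
w_reg = fit(X, y, lam=1.0)    # L2-regularised fit

# The penalised weights are pulled toward zero: the model is made
# less sensitive to the particulars of the training data.
print(w_plain, w_reg)
```

Tuning `lam` up and down gives a feel for the trade-off: too much regularisation and the model underfits, too little and it tracks the noise.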
- define actions by Promise Theory
- train multiple neural nets to classify an action for a given input (train them differently to spice things up)
- take an environment for the agents to operate in (e.g. a 3D maze where collaboration is needed to escape)
- bind the agents’ interactions with a healthy dose of the wave collapse algorithm
(I forget exactly but I’m pretty sure this is from an Alan Watts lecture).
A farmer needs some help around his farm. He puts up a sign in town, asking for someone with general skills to help around the farm.
A gent arrives two days later with his toolkit; the farmer welcomes him and tells him there are some broken fences on the north side of his farm.
The gent heads over to the area, spends the day fixing the fences, and comes back in the evening to tell the farmer it’s done.
This is an interesting one as I’d thought it was quite academic, with limited utility. Then I saw these graphs
Error per epoch

This graph shows the error per epoch of training a model on the data as is.
We can see that it takes around 180-200 epochs to train with a learning rate (eta) of 0.0002 or lower.
Now compare it to this one
Here we see the training takes around 15 epochs with a learning rate of 0.
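A rough reconstruction of that experiment, assuming (as in Raschka’s Adaline walkthrough) that the difference between the two runs is feature standardisation, with synthetic data standing in for the real set:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two informative features on wildly different scales.
X = np.c_[rng.normal(0, 1, 100), rng.normal(0, 100, 100)]
y = np.where(X[:, 0] + X[:, 1] / 100 > 0, 1.0, -1.0)

def adaline_errors(X, y, eta, epochs):
    """Batch gradient descent on the Adaline cost; returns SSE per epoch."""
    w = np.zeros(X.shape[1] + 1)  # w[0] is the bias
    costs = []
    for _ in range(epochs):
        err = y - (X @ w[1:] + w[0])
        w[1:] += eta * X.T @ err
        w[0] += eta * err.sum()
        costs.append((err ** 2).sum() / 2)
    return costs

# Raw features force a tiny learning rate, and convergence crawls...
raw = adaline_errors(X, y, eta=2e-7, epochs=200)

# ...while standardised features tolerate a much larger rate and the
# cost drops within a handful of epochs.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
std = adaline_errors(X_std, y, eta=0.01, epochs=15)
```

The particular numbers (2e-7, 0.01) are illustrative, but the shape of the result matches the graphs: scaling the inputs conditions the cost surface, so a larger eta stays stable and far fewer epochs are needed.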
Next I’m looking at the Adaline in Python code. This post is a mixture of what I’ve learnt in my degree, Sebastian Raschka’s book/code, and the 1960 paper that delivered the Adaline neuron.
Difference between the Perceptron and the Adaline In the first post we looked at the Perceptron as a flow of inputs (x), multiplied by weights (w), then summed in the Aggregation Function and finally quantised in the Threshold Function.
The text discusses the development and significance of the Adaline artificial neuron, highlighting its introduction of non-linear activation functions and cost minimization, which have important implications for modern machine learning.
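The key difference can be shown in a few lines. This is my own sketch, not code from the book: the Perceptron learns from the quantised output of the Threshold Function, while the Adaline learns from the continuous net input, which is what makes its cost differentiable and minimisable by gradient descent.

```python
import numpy as np

def perceptron_step(w, b, x, target, eta=0.1):
    """Perceptron: update from the *thresholded* (quantised) output."""
    pred = 1 if np.dot(w, x) + b >= 0 else -1
    w = w + eta * (target - pred) * x
    b = b + eta * (target - pred)
    return w, b

def adaline_step(w, b, x, target, eta=0.01):
    """Adaline: update from the *continuous* net input, i.e. gradient
    descent on the squared error before quantisation."""
    net = np.dot(w, x) + b
    w = w + eta * (target - net) * x
    b = b + eta * (target - net)
    return w, b
```

Because the Adaline error `(target - net)` is continuous, the updates shrink smoothly as predictions improve, whereas the Perceptron’s error is always 0 or ±2.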
My wife and I have 3 main concerns with our daughter’s use of phones and social media:

- what her videos and posts can be used for, both by the companies and by anyone who has access (making fake videos in her likeness)
- loss of critical thinking
- addiction and the infinite scroll

So we’ve got these 4 guidelines in place:

- phone off at 20h
- 30 minutes reading each night
- a creative session per week
- meditation (2×5 minutes per week)

I’ve also gone through these two posts with my 13-year-old daughter.
Had a nice exchange about Agency with Paul Burchard on LinkedIn this morning.
My thinking goes towards Agency being a secondary characteristic, definition even, of what we see as a result of senses, perception, intelligence, and consciousness.
Those primary characteristics are from the Buddhist 5 Aggregates (physical form, senses, perception, mental activity, and consciousness).
It was a great exchange, helped me clarify and link my thinking to yesterday’s post on the Perceptron.
This post looks at the Perceptron, from Frank Rosenblatt’s original paper to a practical implementation classifying Iris flowers.
The Perceptron is the original Artificial Neuron and provided a way to train a model to classify linearly separable data sets.
The Perceptron itself had a short life, with the Adaline arriving 3 years later. However, its name lives on in neural networks as Multilayer Perceptrons (MLPs). The naming shows the importance of this discovery.
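For a feel of the algorithm, here is a minimal sketch of Rosenblatt’s learning rule, with synthetic clusters standing in for the Iris classes (my own toy reconstruction, not the post’s actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linearly separable clusters standing in for two Iris classes.
X = np.r_[rng.normal(-2, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))]
y = np.r_[np.full(20, -1), np.full(20, 1)]

w, b, eta = np.zeros(2), 0.0, 0.1
for _ in range(10):                      # epochs
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b >= 0 else -1
        w += eta * (target - pred) * xi  # Rosenblatt's update rule
        b += eta * (target - pred)

preds = np.where(X @ w + b >= 0, 1, -1)
print((preds == y).mean())  # prints 1.0: separable data is fully learnt
```

On data that is not linearly separable the updates never settle, which is exactly the limitation the Adaline and later gradient-based methods address.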
Nous pouvons et devons bâtir l’intelligence artificielle au service des femmes et des hommes, compatible avec notre vision du monde, dotée d’une gouvernance large, en préservant notre souveraineté.
We can and must build artificial intelligence to serve women and men, compatible with our vision of the world, with broad governance, while preserving our sovereignty.