Hmmm… I listened to it over a few months, so I’ve forgotten most of what happened in this book. Not sure if that’s a reflection of the book or of how busy I’ve been…
Still, now I have more time, I finished the last 25% in two days, and it was great to see the backstory for the Fremen of Arrakis and the root of the Atreides/Harkonnen rivalry.
Cheesy trope with the humans behaving without emotion and the robots displaying emotion at the end… Topical, but meh.
The audiobook could have been structured better: I’m pretty sure there were more than two chapters in the book, and I think that made it difficult to parse and remember what was occurring.
F##king did it.
Tomorrow I’ll see if it repeats.
3-4 months of back to basics thinking followed by some surprisingly smooth coding (only one structural update to the formal definitions and a couple of remarks).
Well happy.
Let’s see what tomorrow brings (hopefully it runs again)
Great video on how to learn: youtu.be/mOJu1I57A…
I’m thinking that I - like lots of people - marginalise conversations as well as reading :)
Interesting overlap with active listening as well?
Need to rewatch to see where this fits in: www.linkedin.com/feed/upda…
I’m trying to think of a song to remember homotopy: 🎶 travelling through the base space of a fibred space and searching for the corresponding path in the fibre! 🎶 Don’t stop me now… 🎶
If any topologists are reading this: am I correct that the base space can be discrete?
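For anyone else trying to pin the lyric down, here is my reading in standard notation (a sketch, not a textbook statement): the song is describing path lifting in a fibration. And on my own question, as far as I can tell nothing stops the base from being discrete, but paths are continuous maps out of the connected interval, so in a discrete base every path is constant and the lifts all stay in one fibre.

```latex
% Path lifting in a fibration p : E -> B (my sketch of the lyric's picture).
% Given a path \gamma in the base and a start point e_0 in the fibre over
% \gamma(0), the homotopy lifting property yields a path in E tracking \gamma:
\[
\tilde{\gamma} : [0,1] \to E, \qquad
p \circ \tilde{\gamma} = \gamma, \qquad
\tilde{\gamma}(0) = e_0 \in p^{-1}(\gamma(0)).
\]
% If B is discrete, then since [0,1] is connected, \gamma is constant,
% and \tilde{\gamma} stays inside the single fibre p^{-1}(\gamma(0)).
```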
The discussion explores the challenges and implications of using AI coding agents with “irrational” performance measures, emphasizing the need for explicit goal-based frameworks to improve their functionality and alignment with human intentions.
Wow, I think I’ve just turned into the final straight on my paper. It is both exactly what I thought it would be and completely new to me.
Writing down my intuition, with as much rigour as I can, is a great experience.
Now I find out that someone else has already done something similar and I’d just not understood or found what they said! Still, it was a great experience to work to this point and understand it.
The more ways a system can achieve a function, the more robust and adaptable it becomes.
I think it is fair to say we tend to think of “degenerate” as a pejorative: something broken, collapsing, or inferior.
But in complex systems (biological, neural, or artificial), degeneracy means something far more interesting: different structures performing similar functions.
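As a toy sketch of what degeneracy can buy an engineered system (my own illustration, not from any of the linked material): three structurally different components that compute the same function, so the system keeps working when components are knocked out.

```python
# Degeneracy, illustrated: structurally different components realising
# the same function, giving the system robustness to component failure.

def mean_loop(xs):
    # Structure 1: explicit accumulation loop.
    total = 0.0
    for x in xs:
        total += x
    return total / len(xs)

def mean_builtin(xs):
    # Structure 2: built-in sum.
    return sum(xs) / len(xs)

def mean_online(xs):
    # Structure 3: a running (online) mean, a different algorithm again.
    m = 0.0
    for i, x in enumerate(xs, start=1):
        m += (x - m) / i
    return m

COMPONENTS = [mean_loop, mean_builtin, mean_online]

def robust_mean(xs, broken=()):
    """Use the first still-working component; degeneracy buys fault tolerance."""
    for f in COMPONENTS:
        if f not in broken:
            return f(xs)
    raise RuntimeError("all components failed")

data = [1.0, 2.0, 3.0, 4.0]
print(robust_mean(data))                       # the loop component answers
print(robust_mean(data, broken=(mean_loop,)))  # a different structure, same function
```

The point of the sketch is that the fallback does not re-run the same structure: each component achieves the function by a different mechanism, which is what makes the whole more adaptable.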
Generalisation… I am comparing state spaces and solution spaces and have realised that I may be talking about generalisation.
The post dives into the definitions to prompt some thought.
This guide outlines the steps to build the offline speech-to-text application Handy from source on an Intel Mac, including installation of necessary tools and troubleshooting common issues.
“Domain Modelling is itself the process of learning, you cannot know it all at the start, and should expect to update aspects at any stage of the product development.”
This is about connection: both with a fellow human interested in, and articulate about, Artificial Intelligence, and the connection of the information input, processed, and produced.
The Information - LLM-as-a-Judge - we chat about the survey paper and how it can be applied to modern AI applications. There’s a human-written blog post, a YouTube video, and a NotebookLM to chat to. Fill your boots :)