Dystopia? It's already here and that's OK. Here's why.

I should probably make a category of “roughly organised thoughts” or “initial outputs from stated inputs”… I mean, this post is rough and unedited. 🤓 It does have a click-baity title though! 🤷🏼‍♂️

I’m reading “What Tech Calls Thinking”, I’ve been listening to Naomi Oreskes, Émile P. Torres, and Stuart Russell’s Human Compatible, and this morning I’ve read more of Joanna Bryson’s work.

I highly recommend them all.

Naomi shows how science can be misused to corrupt public opinion, drawing on historical cases where it has happened.

In What Tech Calls Thinking, Adrian Daub looks at why Silicon Valley is the way it is. I’d argue the title is misleading, though, not least because I work in “Tech” and do not think like that! That is probably why it stirs so much emotion in me. The misuse of Tech is not a future problem; it is already systemic.

Edward Snowden highlighted that a nation state will spy on its own people. In The Cuckoo’s Egg, Clifford Stoll documents the start of Russia’s epic, and continuing, state-sponsored hacking programme. There is plenty more material to read around this subject; however, for this article I highlight those as the threats citizens face from without and within.

It’s not just state-level threats that we should be aware of; The Social Dilemma highlights the threats to the independence of our thinking, habits, and time. Corporations spend a lot of money not for public benefit, but to get us to use their products more. In short, corporate surveillance has become a thing. GDPR is a valid and needed response to protect EU citizens (iirc there was an element of the EU telling the USA’s NSA to keep its hands off its citizens’ data).

So things are pretty messy. Certainly not a utopia, but I’m OK with that. Probably because I’m lucky that my family and I are OK.

I’m also OK with it because I can be aware of where it is bad, and I have the agency to take action against it. I hope that many others are in a similar position, though some lack the awareness, the agency, or both.

With that comes the question of what is the best thing to do. I’ve found Joanna’s work to be an example of science being used to inform, and it highlights that many of these discussions have been going on for a while now, this being a great example: AI Ethics: Artificial Intelligence, Robots, and Society.

Many of the quandaries I find myself meditating on, she appears to have considered and written about 10, sometimes 20, years ago.

Her notes from participating in a 2013 panel on AI ethics are still pertinent today. It is important to highlight this work and its quality, not least to avoid falling into the trap of thinking that Silicon Valley is only now solving these ethical issues, or that they are completely new.

Below is a collection of links and my initial thoughts.

She makes two key points, similar but not quite the same. We have to be careful of:

  1. Confounding intelligence and humans (can’t find where I read this).
  2. Confounding intelligence and sentience.

At the end of this article she talks about failure (I’ve not yet read Ron’s work); however, I wish to note a question: is failure any different from exploring? And are we hardwired to do that?

Can you be separate from the military complex?

What principles should be at the core of robotics?

The five principles are:

  1. Robots should not be designed as weapons, except for national security reasons.
  2. Robots should be designed and operated to comply with existing law, including privacy.
  3. Robots are products: as with other products, they should be designed to be safe and secure.
  4. Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.
  5. It should be possible to find out who is responsible for any robot.

> Notice that the first three correct Asimov’s laws – it’s not the robot that’s responsible

I have also read an article she wrote on compassion (maybe empathy; I cannot find it). The gist, as I recall it, was the intellectual aspects of compassion and their relevance to AI ethics, though I forget the actual point. 😬 What I recall is thinking there is another aspect: that compassion can be nurtured and practised.

I wish to finish this note on that tone.

Yes, the world is messy. No, don’t believe the “Tech” hype/BS about AI, for two reasons:

  1. They are reframing it for their own growth.
  2. AI is already here. (That’s a full stop, no debate about the name).

However, we can nurture and practise being open-minded, being compassionate, and being human. In my humble opinion, the authors mentioned at the top are doing just that.

AI generated image