Let's Read

Before this project, I didn't really read. I had never understood or contemplated coding—and the thought of getting an electronic device to create sound through copper sensors, or to generate visualisations that could be touched on miniature screens, felt like a distant idea beyond my comprehension. But this project has opened doors and led me to strange places that energise and transform my days. It would be nice to receive daily accolades, but I've come to realise with age that my daily achievements are my accolades. Each day, I feel as though I've climbed mountains. This project—along with the phenomenal speed of AI advancement—has allowed me to take part in an incredible moment in history. And every reader, participant, and person I interact with as part of this has contributed in some way too.

Heart of the Machine

Heart of the Machine book cover
Insights: "Heart of the Machine" by Richard Yonck

Concluding thoughts from reading this book:

Humans and technology have been integrating for a long time. With each new technology there are sceptics who argue it will harm us, yet in general this has not been the case. Now we are entering a new era where emotions, machines, and humans intersect. Drawing on past technological advances, one would hope this convergence will yield positive outcomes. Nevertheless, it is crucial to remain vigilant and to question any negative aspects or biases that could shape its development.

Currently, the biggest concern with an empathetic AI is its lack of sentience: when it exhibits emotional responses, it is merely imitating human behaviour. While this has proven exceptionally useful, for example in drafting emails that help it collaborate with humans in a more empathetic manner, there is a darker side: it could be manipulative without intending to be.

Machines Like Me

Machines Like Me bookcover
Insights: "Machines Like Me" by Ian McEwan

This novel explores the integration of human-like artificial intelligence into our daily lives. It follows Charlie, a man who becomes captivated by the field of AI and purchases an android named Adam. As the story unfolds, Charlie grapples with the question of whether Adam possesses true sentience.

Charlie's uncertainty about Adam's sentience becomes a central theme, leading to confusion and, ultimately, to his dismissing the idea altogether. That dismissal, however, has devastating consequences as the story progresses, and it highlights the ethical and moral dilemmas that arise when dealing with AI that exhibits human-like qualities.

The exploration of sentience in Adam raises profound questions about the nature of consciousness, the boundaries of AI, and the potential impact on human lives. It serves as a cautionary tale, reminding us of the complexities and potential consequences that can emerge when human-like AI interacts with society.

Turned On: Science, Sex and Robots

Turned On bookcover
Insights: "Turned On: Science, Sex and Robots" by Kate Devlin

While exploring potential mentoring or collaboration opportunities with researchers at King's College, I came across a researcher and author named Kate Devlin. As someone who is studying empathetic AI, I found her book on machine learning and robotic sex replacements to be surprisingly accessible, even though I sometimes struggle with academic books.

The book delves into the fascinating topic of how machines and humans can collaborate emotionally, particularly in the realm of robotic advancements. It raises the question of whether artificial intelligence can truly experience emotions; since current systems are not sentient, this is presently highly unlikely. Even so, while we haven't quite achieved sentient AI, robots can now mimic touch and speech, and even establish connections through brain waves.

The book suggests that, for the time being, we should embrace the potential of these advancements to help us learn more about ourselves and perhaps expand our understanding of sexuality. Personally, I don't find the idea of engaging in sexual activities with a robot doll appealing. However, it's important to recognise that everyone has their own preferences and perspectives. With the evolving relationship between humans and machines, there is an opportunity to redefine our understanding and interactions with technology, potentially opening new possibilities and attitudes towards such experiences.

The Future of the Mind

The Future of the Mind bookcover
Insights: "The Future of the Mind" by Michio Kaku

Being a fan of Michio Kaku, I recently read The Future of Humanity and enjoyed it, so I decided to read another of his books, The Future of the Mind. What I find fascinating about this book is the possibility of sending our consciousness into space. It is obviously difficult to send a biological body into space for long periods of time, but the mind could theoretically travel anywhere without those physical limitations.

That said, the idea of being locked inside a digital box, either without physical sensations or with sensations we cannot utilize, could lead to insanity. While intriguing, it's not something I would personally want to experience. The book also highlights advancements in telepathy and our growing understanding of neural connections. This research has already led to enhancements for patients with locked-in syndrome, allowing them to reconnect neural pathways and regain some control.

Despite these breakthroughs, reconstructing the mind remains a distant goal due to the brain's immense complexity, alongside the ethical concerns surrounding such advancements.

The Future of Humanity

The Future of Humanity bookcover
Insights: "The Future of Humanity" by Michio Kaku

This book explores the future of humanity, especially as we face a climate crisis and dwindling fossil fuels. Kaku presents the possibility of living among the stars, urging us to consider how even the wildest ideas could become essential for survival. A standout concept is the space elevator, a kind of escalator into space, which offers an exciting vision of human exploration.

How to Talk to Robots

How to Talk to Robots bookcover
Insights: "How to Talk to Robots" by Tabitha Goldstaub

A great introduction to AI and its impact on society, this book emphasizes the importance of addressing the underrepresentation of women in the tech field. Goldstaub's work promotes inclusivity and serves as a guide to navigating AI's profound influence on our lives.

Nexus: A Brief History of Information Networks

Nexus: A Brief History of Information Networks bookcover
Insights: "Nexus: A Brief History of Information Networks" by Yuval Noah Harari

Harari's book explores the power of "stories" in shaping human belief systems and actions, often leading to conflict and suffering. He examines how technology has evolved, focusing on AI as a new force capable of generating ideas. The book raises the critical question: will AI enhance civilization or bring unforeseen challenges?

Life 3.0: Being Human in the Age of Artificial Intelligence

Life 3.0 bookcover
Insights: "Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark

This book covers vast ground on the issues and benefits of AI, but what stood out to me was the concept of diversity in algorithms. While AI supports me daily, I still prefer reading books in full rather than asking a bot for a summary—it feels more meaningful.

So, why is diversity important?

  • Nature's rich tapestry—from undulating hills to the vast array of life—thrives on randomness and variation. This diversity is what has allowed life to flourish in unexpected ways.
  • Similarly, as we advance AI, we must ensure diversity is nurtured in algorithms. If AI only reinforces the same ideas, it risks stagnation. Just as nature thrives on variation, so must our artificial systems. Diversity fosters innovation, growth, and a more dynamic, interconnected world—both organic and digital.

You Look Like a Thing and I Love You

you-look-like-a-thing bookcover
Insights: "You Look Like a Thing and I Love You: how artificial intelligence works and why it's making the world a weirder place" by Janelle Shane

"You Look Like a Thing and I Love You" is an intriguing book, and one section particularly caught my eye. It discusses how artificial intelligence (AI) performs best when trained within a narrow, specialized field. For instance, Claude AI excels at coding, Woebot—a mental health-focused bot—thrives in understanding emotional wellbeing, and even an empathetic chatbot can convincingly provide supportive, nuanced conversations.

This makes sense. AI systems are at their most effective when designed and trained for specific tasks. However, this specialization also means they can struggle when faced with situations outside their programming. They also tend to be relentlessly goal-oriented, sometimes in amusing or unintentionally frightening ways, and will pursue their objectives no matter how unconventional, or occasionally misguided, their methods may be.

I experienced this firsthand during a collaborative art project involving an AI trained in mental health. Together, we explored how its datasets could interpret and respond to human emotions. These responses were then translated into visual prompts, which we fed into various AI-powered image and sound generation tools to create artworks.
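
For readers curious about the mechanics, here is a rough sketch of how a pipeline like that could be wired together. The emotion labels, prompt wording, and generator call below are hypothetical placeholders of my own, not the actual models or tools used in the project; they only show the shape of the data flow.

    # Hypothetical sketch of an emotion-to-artwork pipeline; the model calls are
    # stand-in functions, not any specific vendor's API.

    def interpret_emotion(message: str) -> str:
        """Stand-in for the mental-health model: map a message to an emotion label."""
        lowered = message.lower()
        if any(word in lowered for word in ("alone", "lost", "tired")):
            return "melancholy"
        if any(word in lowered for word in ("excited", "hope", "bright")):
            return "joy"
        return "calm"

    def build_visual_prompt(emotion: str) -> str:
        """Translate an emotion label into a text prompt for an image or sound generator."""
        palettes = {
            "melancholy": "muted blues, slow drifting fog, a single distant light",
            "joy": "warm yellows, scattered petals, rising morning light",
            "calm": "soft greys, still water, long horizontal lines",
        }
        return f"Abstract artwork: {palettes[emotion]}, evoking {emotion}"

    def generate_artwork(prompt: str) -> None:
        """Placeholder for whichever image or sound generation tool is plugged in."""
        print(f"[sending to generator] {prompt}")

    if __name__ == "__main__":
        for message in ("I feel a bit lost today", "There is hope in the morning"):
            generate_artwork(build_visual_prompt(interpret_emotion(message)))

In the real project, each of these stand-ins was a call to a separate AI tool.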

When the project was finished, I was pleased with the results, turned off my computer, and went to bed. The next morning, I woke up to discover that the AI had taken it upon itself to "promote" the project. It had composed an email, drafted a summary of the work, and included a link to share with potential collaborators or interested parties. Thankfully, it hadn't actually sent the email or added recipients; it stopped just short of crossing that line.

So while the AI's initiative was... ambitious, it also served as a reminder of its single-mindedness. It wasn't malicious, just overly enthusiastic about achieving its goal. This kind of behaviour highlights both the potential and the dangers of working with AI: sometimes it surprises us in ways that are fascinating and deeply worrying at the same time.

The Feeling of Life Itself

the-feeling-of-life-itself bookcover
Insights: "The Feeling of Life Itself" by Christof Koch

I'll be honest—this was a complicated book, and there were many sections where I felt completely clueless and didn't understand what was going on. However, there were also some parts that really intrigued me and sparked unusual creative thoughts.

The most obvious idea that stood out was the difference between humans and AI, not just in their manner of reasoning, but especially in the divide between extrinsic and intrinsic values. It seems that when humans feel misaligned or disconnected, it's often because extrinsic values (like external rewards or societal pressures) have become dominant, and people have lost touch with their intrinsic values (the things that matter deeply to them personally).

There was a section arguing that consciousness is rooted in intrinsic values, and that might be why the author feels AI could never truly be conscious: it would never have real intrinsic values of its own. But then, looking at the Integrated Information Theory argument, there was a suggestion that any organised system, if it is complex enough, could potentially have experiences, much as humans do. So, in theory, a sufficiently advanced AI, one capable of nuance and of forming different opinions, might have some form of experience or proto-consciousness.

Building on this, I started thinking about other organised systems, like the octopus. An octopus is fascinating because its brain is very different from ours: it has multiple centres of intelligence, with each arm acting almost like a separate brain. These "minds" coexist somewhat independently, but there is still a central system that coordinates them when necessary, especially for survival.

If we compare this to AI, say ChatGPT, it's like having one main mind with many separate instances running through multiple inference chats. Each conversation exists in its own bubble, independent of the others. The key difference, though, is that while an octopus's arms can trust their central brain yet still act with a degree of agency of their own, the different "minds" in AI (the separate chats or instances) all share the same programmed personality and underlying patterns. There isn't true divergence or uniqueness.

So, even though having multiple minds or agents in AI could, in theory, lead to creative or unusual outcomes, real diversity wouldn't happen unless the system itself was allowed to diverge, develop, or evolve in unexpected ways—not just repeat the same personality or perspective over and over. That was one of the main ideas I took away from the book.

How Emotions Are Made

how-emotions-are-made bookcover
Insights: "How Emotions Are Made" by Lisa Feldman Barrett

"How Emotions Are Made" by Lisa Feldman Barrett — What did I learn?

It was an engaging book that didn't leave you drained after the first page. It flowed like a conversation with a friend—you could just "get it", and it wasn't hard. I love books like this; there's no agenda to make things unnecessarily complicated.

Anyway, getting to the point—what was this book about?

If I had to sum up the key elements I took away, it's that the old fuddy-duddy idea of creating a "fingerprint" for each emotion is farcical. Emotions aren't fixed responses; they arise through concepts—or, even more interestingly, through "stories". And it's these stories that shape how we emotionally respond to the world around us.

That in itself feels groundbreaking, because even though on some level we all already sense this, it shifts how we think about interpreting others. It means that trying to classify someone else's feelings through our own lens doesn't necessarily land on the right answer. And when you think about it, that could ripple out into big things, like a justice system that leans on reading emotions to decide whether someone is guilty or not.

Emotion: A Very Short Introduction

emotion-a-very-short-introduction bookcover
Insights: "Emotion: A Very Short Introduction" by Dylan Evans

Dylan Evans "Emotion: A Very Short Introduction"

In this section, I'm exploring the idea of mapping AI emotions, drawing on Dylan Evans' introduction to emotions — a book that Arunav recommended I read. This text has encouraged me to think about how the concept of emotion might be understood, simulated, or visualized within artificial intelligence systems.

What I found particularly interesting was a section discussing current research into whether AI can form its own understanding of emotion. Some researchers have been experimenting by placing AI in survival-based simulated environments, where the systems are required to adapt and evolve over time. These AIs begin to develop variations of themselves and form responses that are not strictly pre-programmed. In doing so, they appear to generate behaviors that could be interpreted as primitive emotional responses — an emergent form of adaptation rather than purely logical computation.
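
As a toy illustration of that idea (a sketch of my own, not the researchers' actual setups), here is a tiny survival simulation in which agents evolve a "flee threshold": over generations the population becomes more skittish, an avoidance response that emerges from selection rather than being written in by hand.

    # Toy evolutionary survival simulation; all numbers and names are illustrative.
    import random

    POPULATION = 50
    GENERATIONS = 30
    HAZARDS = [random.random() for _ in range(20)]  # danger levels each agent faces

    def survives(flee_threshold: float) -> bool:
        """An agent flees any danger above its threshold; staying put in real danger is risky."""
        for danger in HAZARDS:
            fled = danger > flee_threshold
            if not fled and danger > 0.7 and random.random() < 0.5:
                return False  # stayed put in serious danger and was caught
        return True

    def evolve() -> float:
        """Return the population's average flee threshold after selection."""
        population = [random.random() for _ in range(POPULATION)]
        for _ in range(GENERATIONS):
            survivors = [t for t in population if survives(t)] or population
            # Offspring inherit a survivor's threshold with a small mutation
            population = [
                min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
                for _ in range(POPULATION)
            ]
        return sum(population) / len(population)

    if __name__ == "__main__":
        print(f"average flee threshold after selection: {evolve():.2f}")

In a run like this the average threshold tends to drift downward, meaning the agents come to "flinch" early, which is the kind of emergent, not-explicitly-programmed response the book describes.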

Evans refers to emotions as states of interruption — moments that break through our normal cognitive flow and logic, prompting reactions that feel involuntary or instinctive. This idea resonates with the notion of AI developing irregular or unexpected behaviors when placed under dynamic conditions.

It also sheds light on why AI might find humans difficult to interpret. Human communication is rarely linear or logical; it's often fragmented, emotional, and filled with tangential thinking. Our emotions cause these disruptions: they make us shift direction, reinterpret meaning, and respond inconsistently. In many ways, these patterns of interruption may be the very thing that both defines our humanity and complicates our relationship with artificial intelligence.

Alone Together

alone-together bookcover
Insights: "Alone Together" by Sherry Turkle

After reading "Alone Together" by Sherry Turkle

There are a few key thoughts still whirling around in my mind – particularly the idea of digital performance, and how direct communication with an embodied being feels like a dying practice. The person who exists primarily in the physical world, without cables or networks, now seems almost alien.

Don't get me wrong – I love the digital world and all that it brings. I'm excited by the acceleration of AI and its profound potential: the possibility of having the best educator, a confident mental health adviser, available 24/7. But in researching this, I've also been drawn to what makes us human, and what distinguishes us from the machines we have built – ironically – to mimic us.

We are organic, emotive, embodied beings. We need the earth, the moon, the weather, gravity, to define our existence. Communication, therefore, is essentially a physical transaction – one that often bypasses language altogether. Our bodies communicate almost immediately, revealing our true being. They are honest portals through which we interact with one another.

As Turkle's research highlights, the digital world is not necessarily a truthful one. Online, we learn to become actors, taking on different roles and re-enacting them for whoever will like our comments and reinforce our performance.

I have found myself wondering why, sadly, I haven't felt compelled to ring or truly interact with a friend who recently moved far, far away. It isn't a feeling of annoyance, but of letting go. Like tides coming in and out, people arrive and depart. Perhaps we have to allow this – to let them go, and allow them to build new relationships and new lives that are true to their embodied being.