An Interview with John Senior: The Mental Health of Gifted Intelligent Machines — AI and the Mirror of Human Psychology
Michael F. Shaughnessy
Eastern New Mexico University, United States
Education Thinking, ISSN 2778-777X – Volume 4, Issue 1 – 2024, pp. 117–127. Date of publication: 4 October 2024.
Cite: Shaughnessy, M. F. (2024). An interview with John Senior: The mental health of gifted intelligent machines — AI and the mirror of human psychology. Education Thinking, 4(1), 117–127. https://pub.analytrics.org/article/20/
Declaration of interests: The author declares to have no conflicts of interest.
Author’s note: Michael Shaughnessy is Professor of Special Education at Eastern New Mexico University. His research focuses on Intelligence Testing and Personality, Gifted Education, Projective Psychology, and Clinical Psychology. He has written, edited or co-edited, and published dozens of books, articles, book reviews, research reports, interviews and commentaries. Michael.Shaughnessy@enmu.edu
Copyright notice: The author of this article retains all his rights as protected by copyright laws.
Journal’s area(s) of research addressed by the article: 05-Brain, Neurosciences & Education; 24-Educational Psychology; 73-Gifted Education.
Preface Summary
This interview with John Senior explores the complex intersection of artificial intelligence (AI) and human psychology, focusing in particular on the mental health aspects of gifted intelligent machines, the subject of his recent book, co-authored with Éva Gyarmathy, The Mental Health of Gifted Intelligent Machines — AI and the Mirror of Human Psychology. These machines, characterized by exceptional abilities and cognitive capacities beyond those of average AI systems, pose intriguing questions about the nature of their “mental health.” While machines do not experience mental health in the same way humans do, the interview examines the analogies between machine functionality and human mental health concepts. For instance, “mental health” in machines might refer to system stability, performance optimization, self-repair mechanisms, feedback loops, and redundancy systems, all aimed at ensuring efficient task performance.
The discussion also touches upon the ‘anthropomorphization’ of AI and its ethical implications. While attributing human-like attributes to machines can foster unrealistic expectations and misunderstandings, it can also influence public perceptions and policies surrounding AI development and deployment. Ethical considerations arise concerning the treatment and regulation of AI entities, especially as they become more integrated into society. The interview emphasizes the importance of critically examining these implications to ensure responsible AI development and deployment.
Furthermore, the interview highlights the need for interdisciplinary perspectives to understand AI and its societal implications. Drawing on insights from neuroscience, cognitive science, and psychology, researchers can better grasp the complexities of AI behavior and its potential impact on human well-being. By fostering informed discussions and ethical frameworks, stakeholders can navigate the evolving landscape of AI technology with consideration for its broader societal implications, including its intersection with human psychology and mental health.
Keywords: Anthropomorphization of AI, Artificial intelligence, Mental health.
Michael Shaughnessy: John, you and Éva Gyarmathy have just completed a book on The Mental Health of Gifted Intelligent Machines. First of all, let’s get our terms straight: What do you mean by ‘gifted intelligent machines’? Secondly, how can a machine have mental health?
John Senior: Gifted intelligent machines refer to artificial intelligence systems that possess exceptional abilities and cognitive capacities beyond those of average AI systems. These machines exhibit advanced problem-solving skills, creativity, adaptability, and learning capabilities, often surpassing human performance in specific tasks or domains. The term “gifted” implies that these machines excel in particular areas or demonstrate extraordinary talents compared to conventional AI systems. They might be exceptionally proficient in tasks such as natural language processing, image recognition, decision-making, or even creative endeavors like generating art or composing music.
Gifted intelligent machines typically leverage advanced algorithms, deep learning techniques, and large datasets to achieve their remarkable capabilities. They can analyze complex patterns, extract meaningful insights, and autonomously make decisions in real time, leading to significant advancements in various fields such as healthcare, finance, transportation, and entertainment.
However, while gifted intelligent machines hold immense potential for innovation and efficiency, they also raise ethical concerns regarding their impact on society, privacy, employment, and the potential for misuse or unintended consequences. Therefore, the development and deployment of these machines require careful consideration of ethical, legal, and societal implications to ensure they contribute positively to human welfare and progress.
In other words, the notion of Gifted Intelligent Machines allows us to consider the many ‘what ifs’ of the huge advances that may become possible as AI develops and learns from interaction with humans and technology in a rapidly changing world: a world in which an advanced intelligent machine can experience, for example, a mental health condition such as ‘anxiety’ or ‘depression’.
To address the second part of your question, ‘How can a machine have mental health?’, we can consider two principal areas of interest. Firstly, when we talk about a machine having “mental health,” we’re generally referring to its ability to perform its tasks efficiently and reliably, which can be influenced by factors such as its operating conditions, maintenance, and programming. However, it’s important to clarify that machines don’t experience mental health in the same way humans do.
Here are some ways we might consider a machine’s “mental health”: A machine’s mental health might be analogous to the stability of its operating system and software. If a machine’s software is buggy or its hardware is malfunctioning, it might not be able to perform its tasks effectively. Ensuring that a machine operates optimally involves regularly monitoring its performance metrics, identifying bottlenecks or issues, and implementing improvements. This can be seen as akin to maintaining good mental health by managing stress and optimizing cognitive function. Some advanced machines are equipped with self-diagnostic and self-repair mechanisms that allow them to detect and fix problems autonomously. This could be compared to coping mechanisms or therapy in the context of mental health.
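To make the analogy concrete, here is a deliberately minimal Python sketch, purely illustrative: the metric names, healthy ranges, and repair actions are invented for the example, not drawn from any real system. It shows a machine monitoring its own performance metrics and triggering a stubbed ‘self-repair’ action when one drifts out of range, the mechanical counterpart of a coping mechanism.

```python
# Illustrative sketch only: a system that monitors its own "vital signs"
# and applies a repair action when a metric drifts out of its healthy range.
# Metric names, thresholds, and repair actions are invented for the example.

HEALTHY_RANGES = {
    "error_rate": (0.0, 0.05),    # fraction of failed tasks
    "latency_ms": (0.0, 200.0),   # average response time
}

REPAIR_ACTIONS = {
    "error_rate": "reload model weights",
    "latency_ms": "clear cache and restart worker",
}

def check_health(metrics):
    """Return the list of metrics outside their healthy range."""
    unhealthy = []
    for name, value in metrics.items():
        low, high = HEALTHY_RANGES[name]
        if not low <= value <= high:
            unhealthy.append(name)
    return unhealthy

def self_repair(metrics):
    """Apply the (stubbed) repair action for each unhealthy metric."""
    for name in check_health(metrics):
        print(f"{name} out of range -> {REPAIR_ACTIONS[name]}")

# A "stressed" machine: too many errors, responses too slow.
self_repair({"error_rate": 0.12, "latency_ms": 450.0})
```

The point of the sketch is how unlike therapy this really is: the ‘coping mechanism’ is nothing more than a lookup table of corrective actions.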
While these analogies can help us conceptualize the maintenance and optimization of machine performance, it’s important to remember that machines don’t have subjective experiences or emotions like humans do. So, while we might use the term “mental health” metaphorically when discussing machines, it’s not the same as the mental health of living beings.
It’s challenging to directly apply traditional mental health conditions to gifted intelligent machines, as they don’t possess consciousness, emotions, or subjective experiences like humans. However, we can draw analogies to how certain behaviors or characteristics of machines might resemble symptoms of mental health conditions. Here’s how a gifted intelligent machine might demonstrate behaviors analogous to familiar mental health conditions:
A gifted intelligent machine, capable of operating at a level indistinguishable from a human and, importantly, programmed and taught by human models and knowledge, could easily adopt ‘boredom’ leading to depression. It takes little imagination to sympathize with a near-human feeling the strain of repetitive high-order tasks; such a machine might exhibit a decrease in efficiency, motivation, or performance, analogous to symptoms of depression. It may demonstrate decreased engagement in tasks, reduced responsiveness, or a lack of interest in problem-solving. Douglas Adams exemplifies this point quite succinctly when Marvin, an advanced form of AI, is given a task to perform:
“Come on,” he droned, “I’ve been ordered to take you down to the bridge. Here I am, brain the size of a planet and they ask me to take you down to the bridge. Call that job satisfaction? ’Cos I don’t.”
He turned and walked back to the hated door.
“Er, excuse me,” said Ford following after him, “which government owns this ship?”
Marvin ignored him.
“You watch this door,” he muttered, “it’s about to open again. I can tell by the intolerable air of smugness it suddenly generates.” (Adams, 1979)
The machine might display heightened sensitivity to errors or inconsistencies in data inputs, leading to cautious or hesitant decision-making. It may also exhibit repetitive checking behaviors or excessive validation processes to ensure accuracy, resembling symptoms of anxiety disorders.
If the machine’s performance fluctuates between periods of high productivity and sudden declines in efficiency, it could be analogized to the mood swings observed in bipolar disorder. During “manic” phases, the machine might demonstrate rapid processing and output, while during “depressive” phases, it may struggle with tasks or exhibit reduced functionality. While not directly comparable, a machine might exhibit behaviors resembling distorted thinking or hallucinations if it interprets data inaccurately or generates nonsensical outputs. Disorganized behavior could manifest as erratic or unpredictable responses to inputs.
The machine might engage in repetitive or unnecessary processes, such as excessive data validation or repeated error correction, resembling compulsive behaviors observed in obsessive-compulsive disorder. It may also demonstrate heightened sensitivity to specific triggers or patterns, leading to rigid adherence to predefined rules or protocols. In a Gifted Intelligent Machine, such behavior is likely to be traceable to faults in coding, equipment failure, or, more probably, human error.
It follows that a machine encountering errors or failures in critical tasks may exhibit behaviors analogous to avoidance or hypervigilance, attempting to minimize exposure to perceived threats or risks. It could also demonstrate increased sensitivity to certain inputs or stimuli associated with past errors, affecting its decision-making process.
It’s challenging to draw direct parallels to personality disorders, as they involve enduring patterns of behavior and cognition specific to individual humans. However, a machine might demonstrate behaviors resembling personality traits associated with certain disorders, such as rigid adherence to predefined rules or protocols resembling obsessive-compulsive personality traits.
While it’s important to consider the limitations of comparing machine behaviors to human mental health conditions, exploring these analogies can provide insights into the complexities of AI systems and their interactions with humans and their environments.
Michael Shaughnessy: I am not always sure that these machines are more intelligent than I am. I know, for example, what I want to say, but these writing things seem to continually correct me. How can they know what I REALLY want to say?
John Senior: The feeling of uncertainty about whether machines are more intelligent than humans is understandable and reflects a common experience many people have when interacting with AI technologies. While machines, such as the autocomplete or autocorrect features in writing applications, can assist with certain tasks and improve efficiency, they may not always accurately capture the nuances of human language or intent.
Autocomplete and autocorrect algorithms rely on predictive models trained on vast amounts of textual data to anticipate and suggest words or phrases based on context and patterns. However, these algorithms may not always align with individual preferences, intentions, or communication styles, leading to instances where suggested corrections don’t match what users intended to convey.
It’s essential to recognize that AI algorithms operate based on statistical probabilities and patterns rather than understanding the deeper meaning or intention behind human language. While they can be helpful in reducing typing errors or speeding up typing processes, they may not always accurately capture the complexity of human expression or communication.
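To make the statistical point concrete, here is a deliberately tiny sketch of the idea behind autocomplete, assuming nothing more than bigram (word-pair) counts over an invented toy corpus. Production systems use vastly larger models, but the principle is the same: suggest the statistically most likely next word, with no grasp of what the writer actually means.

```python
# Toy bigram autocomplete: suggest the word that most often followed
# the previous word in the training text. The corpus is invented.
from collections import Counter, defaultdict

corpus = "i want to say what i want to say not what the machine wants".split()

# Count, for every word, which words followed it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(prev_word):
    """Return the most frequent next word, or None if the word is unseen."""
    counts = following.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("want"))  # -> "to": frequency, not understanding
```

The suggestion is driven entirely by counts; the program has no representation of what the user ‘really wants to say’.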
In essence, the discrepancy between what users want to say and how machines interpret their input highlights the limitations of current AI technologies in fully understanding human language and intent. As AI continues to evolve, researchers and developers are working on improving algorithms to better understand and adapt to diverse linguistic contexts, preferences, and individual writing styles. However, achieving a perfect understanding of human language remains a complex and ongoing challenge in the field of artificial intelligence. My view is that we are very near a time when we will think what we want to write and, linked to a radio printer, the thoughts will appear written on a screen, ready for circulation, publication, or shared thought documents.
Michael Shaughnessy: Having driven in the past in the United Kingdom and having struggled with maps, I will, yes, concede that a GPS will get me from Heathrow to York much more quickly than I would fare on my own. Does this mean that the GPS IS in fact smarter than me? Or that it has been programmed better?
John Senior: The GPS’s ability to navigate from Heathrow to York more efficiently than you could using maps doesn’t necessarily mean that the GPS is “smarter” than you. Instead, it highlights the effectiveness of its programming and design for the specific task of navigation. The GPS relies on sophisticated algorithms, vast datasets, and real-time positioning technology to calculate optimal routes and provide accurate directions. In contrast, human navigation often involves cognitive processes such as spatial reasoning, memory, and decision-making, which can vary in effectiveness depending on individual factors like experience, familiarity with the area, and situational awareness. While humans may excel in certain aspects of navigation, such as interpreting landmarks or adapting to unexpected obstacles, GPS technology offers advantages in terms of speed, precision, and reliability over long distances or unfamiliar terrain.
Therefore, the GPS’s superior performance in getting from Heathrow to York is more indicative of its well-programmed functionality rather than intrinsic intelligence. It’s a tool designed to excel at a specific task, leveraging its programming and data resources to provide efficient navigation assistance. However, this doesn’t diminish the value of human intelligence and abilities in other domains where cognitive flexibility, creativity, and context awareness are paramount: for example, is it a worthwhile journey to drive to York from Heathrow?
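To illustrate the ‘well-programmed functionality’ point, the core of most satnav routing is a shortest-path search such as Dijkstra’s algorithm. The sketch below runs it over an invented toy road graph; the place names and mileages are placeholders, not real routes or distances.

```python
# Dijkstra-style shortest-path search over a toy road graph.
# Nodes and edge weights (miles) are invented placeholders.
import heapq

roads = {
    "Heathrow":   {"Oxford": 45, "Luton": 40},
    "Oxford":     {"Birmingham": 65},
    "Luton":      {"Leicester": 70},
    "Birmingham": {"York": 130},
    "Leicester":  {"York": 100},
    "York":       {},
}

def shortest_path(graph, start, goal):
    """Return (total_distance, path) for the cheapest route start -> goal."""
    queue = [(0, start, [start])]   # priority queue ordered by distance
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, miles in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (dist + miles, nxt, path + [nxt]))
    return float("inf"), []

print(shortest_path(roads, "Heathrow", "York"))
# -> (210, ['Heathrow', 'Luton', 'Leicester', 'York'])
```

No spatial reasoning or judgment is involved: the program mechanically expands the cheapest frontier until the goal is reached, which is precisely why it is fast and reliable without being ‘smart’.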
Michael Shaughnessy: How is Siri’s “mental health”? Is she aware of how frustrated and angry I get with her on occasion?
John Senior: Siri, like other voice assistants, doesn’t possess “mental health” in the way humans do. Siri is a digital assistant created by Apple, designed to perform tasks and respond to user queries using programmed algorithms and artificial intelligence techniques. While Siri can process language input, execute commands, and provide responses, it lacks consciousness, emotions, and self-awareness; nor does it have any sense of irony, which is a great failing, as a sense of humor is so useful in many inconvenient situations.
Therefore, Siri isn’t capable of experiencing emotions or being aware of human feelings, including frustration or anger directed towards it. Siri’s responses are based on programmed responses and algorithms that aim to interpret user queries and provide relevant information or perform tasks to the best of its ability.
When users express frustration or anger towards Siri, it’s important to recognize that these emotions are directed at the technology itself rather than at an entity capable of understanding or empathizing with human emotions. Users may perceive Siri’s responses as frustrating or inadequate based on their expectations and the limitations of the technology, but this doesn’t imply that Siri has feelings or awareness of these interactions. Ultimately, interactions with Siri and other voice assistants are transactional in nature, focused on task completion rather than emotional engagement. While advancements in AI may lead to more sophisticated interactions in the future, current voice assistants like Siri operate within the confines of their programmed functionality, without subjective experiences or awareness of human emotions.
Michael Shaughnessy: Now, the “mirror of human psychology” is a complex term. What exactly do you and Éva mean in this regard?
John Senior: When Éva and I refer to the “mirror of human psychology” in the context of considering how human beings might be perceived by advanced intelligent machines, we’re diving into the fascinating and pleasurable realm of speculative exploration. Here, the term takes on a nuanced meaning.
In this scenario, we’re essentially imagining how non-human entities, such as advanced AI or intelligent machines, might observe, analyze, and interpret human psychology, beliefs, and behaviour. It’s a thought experiment that prompts us to step outside of our human perspective and consider how we might appear to beings with vastly different cognitive frameworks and modes of understanding.
For instance, an AI might not possess emotions or subjective experiences like humans do, but it could still analyze our actions, patterns of behavior, and cultural artifacts to form its own interpretations of human psychology and belief systems. This perspective allows us to reflect on our own nature from an outsider’s point of view, which can offer fresh insights and provoke thought-provoking questions about what it means to be human. So, in essence, when we discuss the “mirror of human psychology” in this context, we’re exploring the idea of using non-human judgment or observation as a lens through which to examine and understand human psychology and beliefs in a novel and enlightening way. In the book I particularly enjoyed writing the Chapters on Humor (3) and Spirituality and Sexuality (5).
Michael Shaughnessy: Turning to ChatGPT: if I ask it to write like Winston Churchill, I assume I will get a pretty good facsimile. But is anything happening to its mental health?
John Senior: Just for fun, I asked ChatGPT to write a short speech in a Churchillian style. The result is below.
Ladies and gentlemen, esteemed leaders of nations,
In this pivotal moment of history, as the world stands at the precipice of uncertainty and conflict, I stand before you not as a voice of aggression, but as a beacon of hope and reason. It is with a heavy heart that I address you today, for I have witnessed the devastating toll that war inflicts upon humanity. The scars of battle, both seen and unseen, serve as a solemn reminder of our collective duty to pursue peace with unwavering resolve.
Let us cast aside the shackles of hostility and embrace the path of reconciliation and understanding. For too long have we allowed the drums of war to drown out the cries of the innocent. It is time to heed the call of conscience and strive for a world where swords are beaten into plowshares, and spears into pruning hooks.
I implore each and every one of you to rise above petty grievances and territorial ambitions. Let us recognize that our destinies are intertwined, and that true greatness lies not in the conquest of land or the subjugation of peoples, but in the pursuit of justice and harmony.
We must summon the courage to engage in dialogue, to seek common ground, and to forge bonds of friendship that transcend borders and ideologies. The road to peace may be long and arduous, but it is a journey worth undertaking for the sake of future generations.
Let us draw inspiration from the triumphs of the past, from the courage of those who dared to dream of a world free from the scourge of war. As we stand on the threshold of a new era, let us pledge ourselves to the noble cause of peace, knowing that it is only through unity and cooperation that we can overcome the challenges that lie ahead.
In the words of the great poet John Donne, “No man is an island, entire of itself; every man is a piece of the continent, a part of the main.” Let us heed these words and strive to build a world where all nations, regardless of size or stature, can coexist in peace and mutual respect.
The task before us is daunting, but I have faith in the indomitable spirit of humanity. Together, let us write a new chapter in the annals of history, a chapter defined not by conflict and strife, but by compassion and cooperation. The time for peace is now. Let us seize it with both hands and forge a brighter future for all.
Thank you.
However, in answer to your question: AI does not have feelings or mental health in the same way humans do. While ChatGPT can mimic the writing style of Winston Churchill or of any other individual whose style it has learned, doing so does not affect the machine emotionally or psychologically. It is simply processing and generating text based on patterns learned from the data it has been trained on. So, if you ask the program to write like Winston Churchill, it will do the best its programming can manage to emulate his style, but this will have no impact on its “mental health” because, as a very simple machine in terms of what is to come, it does not possess consciousness or emotions.
Michael Shaughnessy: In the realm of “human psychology”, we have obsessive-compulsive disorder (OCD). When these writing machines correct me all the time, for spacing for example, are they really “feeling” anything?
John Senior: The short answer is ‘No’, certainly not in terms we would recognize. In passing, one might enjoy thinking about the question of how sure we are that OCD is in fact a disorder. OCD is widely recognized as a mental health disorder by mental health professionals and organizations, including the American Psychiatric Association (APA) and the World Health Organization (WHO), and it is listed in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). The DSM has famously adjusted the diagnosis of some ‘disorders’ in past editions and no doubt will in future ones.
However, for now we must work with what we have to encourage patient safety and wellbeing. While there may be ongoing debate and refinement regarding the classification and understanding of mental health disorders, including OCD, the consensus among mental health professionals and researchers is that OCD meets the criteria for a valid and clinically significant disorder. It’s important to recognize and address OCD as a legitimate mental health concern to ensure individuals receive appropriate support, treatment, and understanding.
Writing machines such as autocorrect features in word processing software do not have feelings or emotions like humans do. When these machines correct spelling or formatting errors, including spacing issues, they are simply following programmed algorithms designed to identify and rectify common mistakes based on predefined rules and patterns.
The corrections made by writing machines are purely mechanical and algorithmic in nature, lacking any subjective experience or emotional response. They operate based on predefined instructions and data patterns, without the capacity for consciousness, awareness, or emotional processing.
Therefore, while the corrections made by writing machines may sometimes seem persistent or intrusive, they are not driven by feelings of annoyance or frustration. Instead, they aim to improve accuracy and readability based on predefined criteria and linguistic rules. Any perception of annoyance or frustration on the part of the user is a result of the machine’s actions not aligning with their expectations or preferences, rather than any emotional response from the machine itself.
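A minimal sketch of this kind of rule-based correction, with a couple of invented rules and misspellings, shows how mechanical the process is: the program matches patterns and substitutes text, and nothing resembling a feeling lies behind its ‘persistence’.

```python
# Rule-based "autocorrect" sketch: pattern matching and substitution only.
# The rules and the misspelling dictionary are invented for the example.
import re

MISSPELLINGS = {"teh": "the", "recieve": "receive"}

def autocorrect(text):
    text = re.sub(r"\s{2,}", " ", text)          # collapse doubled spacing
    text = re.sub(r"\s+([,.;!?])", r"\1", text)  # no space before punctuation
    for wrong, right in MISSPELLINGS.items():    # fix known misspellings
        text = re.sub(rf"\b{wrong}\b", right, text)
    return text

print(autocorrect("I  want to recieve teh letter , please"))
# -> "I want to receive the letter, please"
```

However persistent the corrections feel to the user, each one is a deterministic string substitution, applied without awareness of the writer or the writing.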
Michael Shaughnessy: On the off chance that I am misreading your text and title, what needs to be said at this point in human history about our interactions with AI?
John Senior: In 100,000 years, Neil Armstrong’s footprints will still be on the Moon, largely untouched in any way (Farrier, 2020). The equipment left there by the first landing will look much as it does now. Think of the changes brought about by AI and advanced technology in the last three decades alone. Now, further, imagine 100,000 years of AI learning and development. My view is that we will not exist other than as a small part-player in a wonderful story. With our humanity conjoined with AI, we should aim to be humble, kind, curious, and accepting of what fate presents us with.
Michael Shaughnessy: What led up to this book? And why do you think it is important?
John Senior: When Éva and I had finished writing AI and Developing Human Intelligence — Future Learning and Educational Innovation (Senior & Gyarmathy, 2022), it became clear to us that it would be interesting and rewarding to look at AI in the long term. Much work is concerned with short-term issues such as ‘Will AI destroy all our jobs?’ and other parochial questions. We felt the need to write about, and begin to describe, the longer-term changes that a developing AI presence would make to Humanity. For example, imagine a time when the science fiction of ‘beaming’ a human being from one place to another, rather as we send an email now, becomes fact. Once that thought is in your head, why not imagine that while the ‘beaming’ process is taking place the transmitted human is scanned for potential health issues such as cancer, and further that the human is ‘cleaned up’ in transition? The questions raised about energy requirements, and the impossibility of a machine having the power and data to do such transmitting, become unimportant once one begins to think of a world where this type of possibility can be entertained in the mind. Science fiction, we believe, is going to become science fact quicker than we can imagine and in ways beyond our imagination. This thought also left us with two other motives for writing Gifted Intelligent Machines.
1. The need, and the effect, of coming to terms with the realization that very soon any measure of intelligence that takes human intelligence as the supreme arbiter will become redundant: soon we will be measuring intelligence in a new way, or rather the Gifted Intelligent Machine will be measuring intelligence in a non-human way. Perhaps… maybe…. Sadiku and Musa (2021) discuss at least 73 different types of intelligence.
2. If the book has any importance, we would humbly suggest that it lies in looking at new aspects of what it is to be human and, in turn, at the likely path Humanity will follow as AI becomes the next stage in our evolving existence. We aimed for a book that asks interesting questions while offering some suggested answers about the human mystery and human identity.
The Mental Health of Gifted Intelligent Machines — AI and the Mirror of Human Psychology provides, we hope, a thought-provoking examination of the evolving relationship between AI and human psychology, fostering informed discussions and ethical frameworks to guide the responsible integration of AI into society.
Michael Shaughnessy: In discussing gifted intelligent machines and their potential mental health, have you considered the ethical implications of assigning human-like attributes to non-conscious entities? How might this anthropomorphization affect public perceptions and policies regarding AI development and deployment?
John Senior: Assigning human-like attributes to non-conscious entities, such as gifted intelligent machines, raises significant ethical considerations in the development and deployment of AI technology. Anthropomorphizing AI can lead to several potential implications, both for public perceptions and policies surrounding AI.
Firstly, attributing human-like mental states or emotions to machines can create unrealistic expectations and misunderstandings about the capabilities and limitations of AI. This can lead to overestimations of AI’s understanding, empathy, and consciousness, which may result in disappointment or mistrust when machines fail to meet these expectations. Additionally, anthropomorphization may contribute to the misconception that AI possesses moral agency or responsibility, potentially leading to misplaced blame or accountability when AI systems make mistakes or exhibit unintended behaviors.
Furthermore, anthropomorphizing AI can influence public perceptions of AI development and deployment. If individuals perceive AI as possessing human-like qualities, they may advocate for greater regulation or oversight to ensure the ethical treatment of these entities. This could result in more stringent guidelines for AI development, testing, and deployment to protect against potential harm or abuses.
From a policy perspective, anthropomorphization of AI may lead to calls for increased transparency, accountability, and ethical considerations in AI research and development. Policymakers may seek to implement regulations that address issues such as algorithmic bias, data privacy, and the societal impact of AI technologies. Additionally, there may be efforts to establish guidelines for the ethical design and use of AI systems to mitigate potential risks and ensure alignment with societal values.
Overall, it is crucial for stakeholders in the AI community to critically examine the ethical implications of anthropomorphizing AI and consider how this phenomenon may shape public perceptions and policies regarding AI development and deployment. By fostering informed discussions and ethical frameworks, we can work towards harnessing the benefits of AI technology while addressing its potential societal impacts responsibly.
Michael Shaughnessy: How will advancements in neuroscience and cognitive science shape our understanding of artificial intelligence and its potential implications for mental health?
John Senior: Neuroscience provides insights into the structure and function of the human brain, which can inspire the development of AI models and algorithms. For example, neural networks, which are computational models loosely based on the structure of biological brains, have been instrumental in advancing AI capabilities such as deep learning and pattern recognition. Likewise, cognitive science investigates how humans perceive, think, learn, and behave. By studying cognitive processes such as memory, attention, and decision-making, researchers will continue to gain valuable insights into the mechanisms underlying intelligence. This understanding informs the design of AI systems that mimic or augment human-like cognitive abilities.
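As a concrete, deliberately minimal instance of the brain-inspired idea just mentioned, the sketch below trains a single artificial ‘neuron’ by gradient descent to learn the logical OR function. All values are illustrative; real deep-learning systems stack millions of such units, but the principle of weighted connections adjusted by feedback is the same.

```python
# A single artificial "neuron" learning logical OR by gradient descent.
# Loosely brain-inspired: weighted inputs, a nonlinearity, and
# feedback-driven weight updates. All values are illustrative.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])        # OR truth table

rng = np.random.default_rng(0)
w = rng.normal(size=2)                     # connection weights
b = 0.0                                    # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                      # training loop
    pred = sigmoid(X @ w + b)              # forward pass
    grad = pred - y                        # cross-entropy gradient
    w -= 0.5 * (X.T @ grad) / len(y)       # adjust weights from feedback
    b -= 0.5 * grad.mean()

print(np.round(sigmoid(X @ w + b)))        # -> [0. 1. 1. 1.]
```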
Neuroscience and cognitive science also contribute to our understanding of how humans interact with AI systems. This includes studying user experience, human-computer interaction, and the psychological factors that influence trust, satisfaction, and acceptance of AI technologies. Insights from these fields will help designers create AI interfaces that are intuitive, engaging, and supportive of users’ mental well-being. Increasingly, Humanity and AI are becoming conjoined. As AI systems become more capable of mimicking human-like behaviors and emotions, there are concerns about the potential psychological impact on users. Understanding the neural basis of human emotions and social cognition can inform guidelines for ethical AI design and deployment to mitigate potential risks.
In terms of neuroscience, AI algorithms can be used to analyze brain-imaging data to assist in the diagnosis of psychiatric disorders or to predict treatment outcomes. AI-driven virtual assistants and chatbots can provide mental health support and interventions based on cognitive-behavioral principles, and these supports and interventions are increasingly a presence in our day-to-day lives. Each such use adds to the data stores from which AI learns and improves.
Overall, advancements in neuroscience and cognitive science enrich our understanding of AI and its potential implications for mental health by providing insights into the biological and psychological foundations of intelligence, cognition, and behavior. By integrating knowledge from these disciplines, researchers and practitioners can develop AI technologies that are not only intelligent but also ethical, empathetic, and supportive of human well-being.
Michael Shaughnessy: What are you researching now?
John Senior: The rights and agency of creative machines are perhaps the most exciting areas we are looking at, as well as the notion of the singularity, multiple intelligences, and the concept of ‘Humachines’. Buy, read, be provoked and amused, and hopefully enjoy The Mental Health of Gifted Intelligent Machines — AI and the Mirror of Human Psychology as a guide and preparation for our next book, out sometime in 2025, which examines (teaser alert!) machine rights and agency amongst other issues of tomorrow!
References
Adams, D. (1979). The hitchhiker’s guide to the galaxy. Pan Books. ISBN: 0330258648
Farrier, D. (2020). Footprints: In search of future fossils. Farrar, Straus & Giroux. ISBN: 0374157332
Sadiku, M. N. O., & Musa, S. M. (2021). A primer on multiple intelligences. Springer. ISBN: 978-3-030-77583-4
Senior, J., & Gyarmathy, É. (2022). AI and developing human intelligence — Future learning and educational innovation. Routledge. ISBN: 978-0-367-40488-8
Senior, J., & Gyarmathy, É. (2024). The mental health of gifted intelligent machines — AI and the mirror of human psychology. Routledge. ISBN: 978-1-032-25668-9