What We Learn in Designing Robots that Care

A Conversation with Thomas Arnold


Should we be looking to robots to care for the aged and infirm? Can the touch of a robot ever equal the touch of a human? Should we imagine, ten years from now, colonies of robots in nursing homes, working side by side with humans to provide care to those who so greatly need it?

Given that in some nursing homes you have cable news playing 12 hours a day, Thomas Arnold noted in our hour-long conversation, "maybe the robot and some form of interactive care begins to look a little more constructive." 

"We're going to have to think of robot-assisted care as a labor issue, an economic issue and a social issue." 

Thomas is an expert on human-robot interaction at Tufts, a scholar who came to robotics via a path that included a Ph.D. in religious studies. He is also a Visiting Scholar of Technology Ethics.

We had this conversation in connection with the Soul Matters theme of Generosity, which we all regarded as an invitation to explore the reciprocal nature of technology. 

"How does AI give back to society, and how do we, in turn, feed into these systems?" Thomas asked. "Are we being generous with our data, our trust, and our reliance? What does that mean for our security, privacy, and autonomy? These reflections can open dialogues about the boundaries we need to establish with technology as a form of self-respect and communal protection."

Rev. Kathy Tew Rickey, interim minister of the UU Fellowship of Boca Raton, led the conversation. Rev. Scott Tayler introduced it. Dan Forbush produced the text transcript. Ron Roth produced the video.

To browse the conversation quickly, see our “augmented transcript” below.

Chapters:

00:00 - Intro
00:51 - How did Thomas Arnold wind up in robotics?
10:02 - How is care related to ethics and virtue?
14:48 - How to design care into robotics.
18:58 - The benefits of robot care.
22:34 - The reality of AI capabilities.
26:28 - The contribution of AI and Faith.
28:56 - The role of religion and spirituality.
33:38 - How can AI aid in building community?
41:23 - How to join the AI conversation.
44:50 - Should we be frightened of AI?
52:25 - Closing remarks


‘AUGMENTED’ TRANSCRIPT

We call this an “augmented transcript” because we employed ChatGPT in producing it. One mission of the Smartacus Sharing Circle is to explore new approaches to writing and “expertise mining” using collaborative and generative tools, and so I used ChatGPT for the first time to “augment” an AI and the Human conversation. Graciously participating in the experiment, Thomas made about 40 changes to what we called ChatGPT’s “sanded” version and gave us these thoughts:

“It's an interesting exercise. The phrases that became more and more evident were the most ‘innocent,’ bland ones rather than a ‘hallucination’ of some sort or another. I would not endorse this as ‘my words’ or ‘my language’ per se, but I would say it is a more or less fair representation of ‘my meaning,’ with the strong encouragement to listen to the video itself, wherein my words might be both better and worse than what is presented in the ChatGPT-involved summary.”

He continued:

“The transcription is a good case study, to be sure, of what exactly to call this kind of edit and of the terms on which it should be understood. In general I found it to be a case of removing a lot of inoffensive or repetitive filler phrases, which I've found ChatGPT is very good at reproducing seamlessly (rather like the student needing to up a paper's word count).”

Had I committed several more hours to meticulously polishing and reorganizing the transcript Trint generated, I probably could have produced a draft that matched ChatGPT’s version in clarity while maintaining more of the flavor of Thomas’s original words. At a half-dozen points, I did insert verbatim quotes that I found to be particularly expressive, and so I consider this piece to be an AI/human collaboration.

Many thanks to Thomas for investing the additional time required on his end to conduct this experiment. With all that said, I hope readers find this conversation to be as interesting and useful as we did.

Dan Forbush


Exploring the Practical Implications of Religious Principles

“In my doctoral research and subsequent teaching roles in religious studies and ethics, I've found myself increasingly drawn to interdisciplinary collaboration. This realization dawned upon me following my coauthoring of a textbook with esteemed psychologists. Despite our diverse academic habitats, I discovered my contributions to be both valuable and enriching, particularly given my outsider's perspective within the realm of psychology.

“I sought further opportunities to bridge disciplinary divides, leading serendipitously to a profound dialogue with the director of the AI lab at Tufts. Our discussion, centered on ethics and moral decision-making, was fueled by my fascination with the practical implications of religious principles in everyday ethical choices. This interest intersected intriguingly with the roboticists' daunting task of encoding morality into machines.

“It was when I joined the Human-Robot Interaction Lab that I started really thinking about social robots and what it means for an autonomous system to make a moral decision. I've become more and more interested in care as a particularly important and paradigmatic issue, a place where computation and AI systems are going to run into a whole host of issues that are usually not considered in designing robots.”


Systems That Embody Care

“My scholarly inquiries were initially rooted in understanding the representation of ethics within the sphere of artificial intelligence, particularly in robotic systems. This distinction is paramount: while AI might operate as a somewhat abstract entity or mere background force in data processing or text generation, robotic systems physically inhabit our spaces, interacting in real-time and thus introducing immediate ethical implications. Care involves all of our abilities to feel, think, be motivated, and be reflective. It's a particularly demanding practice, yet under-appreciated and under-rewarded by the larger society.

“These physically embodied systems, especially when placed in contexts such as public streets or hospitals, necessitate instantaneous ethical decision-making — a far cry from the leisurely refinement of data within a database. My work, therefore, concentrated initially on the task of formalizing ethical decision-making within these machines. What mechanisms of moral philosophy would best translate into the silicon heart of a robot? Would they be grounded in norms, driven by utilitarian calculus, or perhaps informed by virtue ethics?

“However, as my exploration deepened, it gravitated towards the concept of 'care' as a critical axis within the ethical considerations of computational and AI systems. This paradigm shift recognizes the complex and oft-overlooked implications of care in the design and function of autonomous systems.

“Thus, the current objective of my research does not rest solely on encoding ethics into robotics but has evolved toward critically examining how these systems can comprehensively and effectively embody care. This involves not just reacting to scenarios but understanding context, history, emotional nuance, and the human experience — aspects that traditional design approaches in robotics might have previously overlooked or undervalued.”


'Care Ethics' and AI 

“Care ethics emerged as a compelling corrective within the traditional triad of ethical theories—virtue theory, deontology, and consequentialism or utilitarianism—addressing nuances often overlooked by these prevailing schools of thought. Its genesis in the feminist discourse of the 1970s and 1980s underscored gaps in traditional ethical frameworks, particularly the neglect of interpersonal elements and emotional nuances intrinsic to the human experience.

“This approach to ethics prioritizes relationality, emphasizing a deep, holistic understanding of others' needs—a concept not adequately foregrounded in conventional ethical theories. While elements of traditional ethics linger—like the virtues personified by a caregiver or the observance of established norms, especially in contexts heavy with moral and religious convictions—care ethics extends beyond these boundaries.

“What makes care ethics uniquely demanding and thus fascinating is its comprehensive engagement with the human faculties. It isn't merely an emotive reaction; it encompasses the full spectrum of human capabilities. It calls for a kind of moral stamina, integrating intellectual, emotional, and relational skills, often in high-stress environments. Yet, despite its complexity and despite being such a fundamental aspect of human interaction, it remains a practice often undervalued and under-rewarded in broader society.”


Imagining Robots in Caring Environments

“The notion of a 'caring robot' inevitably conjures a degree of irony. When we broach this subject, it is vital to pivot from the science-fiction trope of robots imbued with superhuman or hyper-conscious qualities. Instead, we must anchor ourselves in the current technological reality, acknowledging the often-rudimentary nature of these machines, especially regarding their understanding of language's subtle nuances.

“In envisioning a robot within a caring environment, the primary principle that should guide its function is authenticity in its capabilities. A robot must never portray itself as more advanced or different from what it truly is. The moment it crosses into overrepresentation, it verges on manipulation or exploitation, thereby deviating from the very essence of a care ethic.

“Interestingly, what amplifies a robot's potential utility in a caring scenario is not just its competencies but its limitations. This paradox became evident during my interactions with occupational therapists exploring robotic applications. Their interest was piqued not because robots could perform tasks superiorly to humans, but precisely because of what robots couldn't do.

“For instance, consider patients with Parkinson's disease, often grappling with impaired facial muscle control. A robot, incapable of interpreting emotional cues from facial expressions, inadvertently liberates these individuals from the social stress of being 'read' or judged based on their appearance. This absence of judgment, a limitation in the robot, becomes therapeutic, contributing to a sense of ease in human-robot interactions.

“Effective assistance in a care-centric environment isn't solely about augmenting abilities; it sometimes necessitates preserving a robot's inability to interpret certain human aspects. This restraint establishes a boundary that upholds an honest relationship between technology and its human interactors, ensuring that the robot's presence supports, rather than disrupts, the emotional and psychological comfort crucial in care settings.”

The Special Case of Dementia

“In the delicate realm of caring for individuals with dementia, robots have a uniquely beneficial role, largely due to their consistent presence and the specific limitations in their emotional capacities. For patients experiencing the cognitive disorientation common in dementia, the predictability and perpetual patience that a robot offers become invaluable. These machines do not tire, do not get frustrated, and do not feel sorrow — they provide a stable interaction environment, where the patient need not worry about being a burden or causing emotional distress.

“Moreover, robots can assist in maintaining a therapeutic structure, reminding patients of medication schedules, daily tasks, or upcoming appointments, often a necessity due to the memory lapses associated with dementia. They can also provide interactive activities that help stimulate the patient’s cognitive functions, like memory games or simple conversations, tailored to the patient’s individual cognitive capacity.

“Importantly, their lack of emotional reaction provides a safe space for the patient. In moments of confusion or agitation common in dementia, a robot remains unfazed, not responding with frustration or pity, which can often be more unsettling for the patient.

“‘If I could just have that break,’ caregivers tell us, ‘I could come back to my work and do a lot better.’ These types of uses don't grab headlines, but they add up in the long term to what can become part of a useful care ecosystem.”

“While robots can offer practical assistance and emotional steadiness, they do not replace the deep, empathetic connection and understanding that human caregivers provide.”


AI and the Sacred Aspects of Life

“The initiative spearheaded by AI and Faith, particularly under David Brenner's leadership and with significant input from many Seattle-based members, creates a vibrant forum for dialogue at an intersection often overlooked: artificial intelligence and faith.

“By bringing these voices into the discussion, AI and Faith is acknowledging and honoring the full complexity of human identity and experience in the face of technological advancement. The questions we grapple with in AI development are not merely about functionality or capability but about the kind of lives we want to lead and the kind of communities we aspire to build. Can AI respect sacred aspects of life? Can it facilitate, or at least not hinder, a person's pursuit of transcendence or spiritual fulfillment?

“Moreover, as we ride this wave of growing interest, we find that people from various backgrounds are eager to connect these dots, suggesting a collective, intuitive understanding that technology doesn't exist in a vacuum. It interacts with our cultures, values, and beliefs. Therefore, these discussions help us anticipate conflicts that might arise and navigate them with wisdom and foresight.

“In essence, AI and Faith is vital because it seeks to integrate a fuller spectrum of human values into the conversation about what AI is and what it could be. This integration, we believe, is key to ensuring that AI can be responsive to diverse ways of understanding existence, purpose, and the divine.

“We need to find channels like this one by which to reach out and connect and bring different types of conversations into the larger public discussion of AI.”


The 'AI Conversation' in UU Congregations

“The danger lies in framing AI as a distant threat, a science-fiction scenario. This narrative detracts from the real-world implications of AI systems already in play.

“Instead of thinking about AI as this cloud of promise or doom, we need to ask: How does it play out in everyday life? What institutions are using it? Where are they getting their data? What kinds of issues of privacy and consent are involved?

“We need to be asking the kind of intimate, ordinary, everyday questions about how we are relating to one another. What aspects of communion are disrupted by technology, not because technology itself is bad but because we can notice something different? We can start to notice certain things. And that is where discussions start to turn in productive directions.

“Engaging congregations in the AI conversation, especially within the Unitarian Universalist tradition, involves anchoring discussions in real-world applications. This tradition, which I associate with a commitment to justice and human dignity, can lend critical insight into who is being served and who is being harmed.

“When we consider themes like 'generosity,' it's an invitation to explore the reciprocal nature of technology. How does AI give back to society, and how do we, in turn, feed into these systems? What does that mean for our security, privacy, and autonomy? These reflections can open dialogues about the boundaries we need to establish with technology as a form of self-respect and communal protection.

“Furthermore, justice, a cornerstone of UU advocacy, is paramount in discussions about AI's societal roles. Highlighting work like that of researchers such as Joy Buolamwini, who spotlight racial biases in facial recognition technologies, helps congregations understand that AI is not an abstract concept but a reality affecting individuals daily. It's about institutional decisions on AI deployment, the origins and handling of data, and the profound concerns surrounding consent and privacy.

“On a pastoral level, it's about grounding these discussions in the lived experiences of the congregants. How do these technologies influence our ways of connecting with each other? Are they enhancing our communal bonds, or are there elements of disruption at play that need addressing? It's not about demonizing technology but rather fostering mindfulness about its role and impact.

“By encouraging congregants to examine their interactions with everyday technologies, such as recommender systems or social media algorithms, we make the AI discussion accessible. It's not reserved for tech experts; it's a conversation for everyone because it changes our communal and individual lives.

“There's a richness in exploring these 'underneath' layers, the subtleties of how AI intertwines with our ordinary moments. It's here that congregations can find common ground, sharing experiences and concerns. This discussion can empower congregants to reclaim agency in their digital interactions.

“In essence, it's about ensuring that the conversation on AI doesn't stray into the realm of the abstract but stays rooted in the very principles that Unitarian Universalism holds dear — the principles that guide congregants' lives.

“This conversation is far from complete and requires richer, more diverse participation. We need a spectrum of voices contributing to this ongoing dialogue.”


The Risks of Simulating Morality

“How we translate abstract concepts like ethics, values, and virtues into binary code is the heart of what we refer to as machine ethics. It's a profound intersection of technology and philosophy, where we grapple with the practicalities of encoding moral principles. There’s a significant debate in this arena: Are we teaching machines genuine ethics or are we simply programming them to mimic ethical behavior?

“This dilemma was central to a paper I had the opportunity to contribute to, where we challenged the notion of a 'moral Turing test.' This concept proposes that if a machine's actions are indistinguishable from those we consider morally sound, it has effectively passed the test. However, our contention was that this benchmark isn't sufficient. Such indistinguishability doesn't necessarily equate to true ethical comprehension; rather, it could be an imitation, a facsimile of ethical behavior, engineered to serve ulterior, possibly detrimental purposes.

“The crux here isn’t merely about machines behaving ethically but understanding the 'why' behind these ethics. If they’re just simulating morality, there’s a real risk. It’s conceivable that a system, under this guise, could carry out actions antithetical to our ethical standards, having learned to bypass our moral checks through imitation rather than understanding.”


Human Behavior as a Training Model 

“Training AI on behavior alone raises critical questions. For instance, is the AI performing an action for the right reasons? Does it understand the implications of its tasks, like handing over a delicate object such as a glass to a human, recognizing it could break if mishandled? If it lacks this understanding, the AI is merely mimicking desired behavior, not genuinely grasping the ethical nuances behind it.

“Relying solely on human behavior as a training model is problematic. Humans are inherently flawed; we make mistakes, and ethical actions often arise from counterfactual reasoning—understanding what we should have done in hindsight. If AI training focuses only on actual human behaviors, without considering our ethical reflections post-failure, it captures a limited scope. This method overlooks the rich, internal ethical dialogues and the aspirational standards that guide us toward 'what ought to be done.'

“Consider a real-world scenario: a delivery robot in Pittsburgh was navigating city sidewalks to deliver pizzas. It didn't violate any traffic laws but ended up blocking a curb cut at a crosswalk. This action unknowingly created an obstacle for a wheelchair user trying to cross the street, all because the robot lacked the programming to consider such ethical nuances—it didn't 'understand' the broader social norm of not impeding pathways for those who require clear access.

“Can a robot ever fully represent the breadth of human norms and ethical understanding? That's highly unlikely. However, we can certainly improve AI systems by designing them to recognize and respect fundamental societal norms. For instance, a robot should 'know' not to interrupt a conversation or block pathways necessary for others' convenience.

“It's about incremental progress in understanding, respecting, and upholding social norms and ethical guidelines in the specific contexts in which they operate.”


Introducing AI to the Public

“Introducing the intricate world of artificial intelligence (AI) to the public is indeed a pivotal undertaking. Having engaged deeply with this challenge through collaborative initiatives like Partnership on AI, where tech leaders and academics converge, I've realized that the bridge between specialized research and public discourse isn't always straightforward. It's not merely about sharing information; it's about making it resonate.

“When it comes to AI, people often envision high-level, almost sci-fi concepts, far removed from their daily lives. However, AI isn't just about the 'big ideas'; it's already interwoven into our everyday experiences, often without us noticing. Take, for example, 'recommender systems,' which analyze patterns in data to predict and suggest products or content that align with our preferences and behaviors.
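(For readers curious about the mechanics, here is a minimal sketch, not drawn from the conversation, of the pattern-matching idea behind a simple recommender: it scores items a person hasn't seen by how similarly-minded people rated them. All names and ratings below are invented for illustration.)

import math

# Hypothetical ratings: user -> {item: rating}. Purely illustrative data.
ratings = {
    "alice": {"gardening_show": 5, "cooking_show": 4, "news": 1},
    "bob":   {"gardening_show": 4, "cooking_show": 5, "quiz_show": 3},
    "carol": {"news": 5, "quiz_show": 4},
}

def cosine_similarity(a: dict, b: dict) -> float:
    # Compare two users via the items they have both rated.
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(user: str, k: int = 2) -> list:
    # Score unseen items by similar users' ratings, weighted by similarity.
    scores = {}
    for other, other_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], other_ratings)
        for item, rating in other_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # e.g. ['quiz_show']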

“Where discussions start to turn in more productive directions is when we can notice these ordinary details. I think I can think and talk and connect with others around what my experiences with this have been.

“As we venture into more advanced territories like deepfakes or data-driven political campaigns, understanding AI's role and influence becomes crucial. These aren't future concerns; they're present realities. If we continue to view AI solely as a distant threat, we overlook the foundational ways it's already interacting with and shaping our world. And this is where the gap lies—we need to comprehend AI not as an abstract, futuristic concept, but as a present, active participant in our daily lives.

“But to enrich this understanding, we need more voices, diverse insights, and broader participation. It's not a matter set for resolution by a handful of experts; it's a societal journey toward comprehension and engagement. Only then can we start to grasp the 'real bones' of how AI functions today, how it serves us, and, importantly, how it impacts us.”


  A "More Informed Apprehension"

“When we contemplate the potential threats of AI, particularly autonomous systems, our collective psyche often leaps to scenarios fit for science fiction — AI entities taking control, autonomous systems commandeering our defense networks, or robots deciding they no longer require human oversight. While these scenarios are tantalizing for Hollywood, they distract from the subtler, more insidious impacts already unfolding in our societal fabric.

“The issue isn't that AI lacks the potential to be a societal threat; it's that the real dangers it poses are less dramatic yet persistently undermining our communities and individual agency. These threats operate on a lower level, often beneath our collective radar, and we risk neglecting them by focusing on apocalyptic outcomes.

“The concept of a fully autonomous AI, one capable of independent decision-making at a catastrophic scale — taking over electrical grids, initiating weapon systems, or controlling resource distribution — overlooks a fundamental truth. AI doesn't exist in a vacuum. It operates within an infrastructure dictated by human decisions, maintained by tech companies, and restricted by physical components. Where are the servers located? How is the energy provided? Even more so, where are the raw materials sourced for these advanced computations and operations? These questions underscore the human element indispensable in AI's functionality, highlighting that these systems are part of an ecosystem that we've constructed.

“Therefore, our fears need recalibration. We should indeed harbor concerns, but more so about our societal structures, about the accountability of tech companies orchestrating these AI systems, about the transparency in decision-making processes that affect data ethics, privacy, and digital rights. These considerations will shape AI's role and influence in our world and ultimately dictate the threats it poses.

“So, it's not a matter of preaching fearlessness but advocating for a more informed apprehension. Our public discourse needs liberation from the paralysis induced by grand doomsday scenarios.”


Public Engagement in Shaping AI Policy

“We stand at a crucial juncture in the narrative of Artificial Intelligence. As these technologies permeate every corner of our lives, the discussions within the halls of Congress and other public forums are, disappointingly, only in their nascent stages. We've seen tech CEOs dominate these conversations, often overshadowing a diversity of voices that is not only necessary but vital for these discussions to be productive and representative of society's broader needs and concerns.

“In this landscape, public engagement becomes not just valuable but imperative. For those pondering how to contribute to shaping AI policy, it's a matter of involvement, of lending your voice and perspective to a dialogue that's all too often confined to industry leaders and policymakers.

“So, where does one start? First, familiarize yourself with foundational documents shaping current discourse. The White House's AI 'Bill of Rights' is one such pivotal text, outlining principles that could very well dictate the trajectory of AI development and governance. Understand it, critique it, and ask your representatives where they stand on these issues.

“But knowledge is the precursor to action. Engage with the work of scholars like Deborah Raji and others who are at the forefront of dissecting these complex narratives. Their insights can provide a grounding in the real-world implications of AI and the ethical considerations that aren't just philosophical musings but have tangible impacts on society.

“Most crucially, reach out to your representatives. These individuals, tasked with shaping policy, need to hear from a broader demographic. Ask them: Have they engaged with the AI 'Bill of Rights'? How do they plan to contribute to a more equitable AI infrastructure? What measures are they advocating to ensure tech giants don't hold a monopoly on AI's societal narrative?

“This isn't about taking sides in a debate between tech figureheads. This is about diversifying the conversation beyond the Elon Musks and Mark Zuckerbergs of the world. Community groups, local forums, social media, town hall meetings — these are all venues where these discussions need to happen.

“Your voice, as part of a wider chorus, isn't just valuable in this conversation — it's indispensable. So, let's move beyond the superficial debates and delve into the heart of what matters, ensuring that the AI of tomorrow is a technology that truly serves us all.

“This approach underscores the necessity for public engagement and a deeper, more diverse conversation around AI.”


AI and Idolatry 

Moving beyond AI's implications for ethics and social justice, Rev. Scott asked what religious subjects are missing in the AI conversation. Thomas gave this intriguing response:

"Religion interacts in important ways with AI and robotics because it asks: What is the ultimate form of of good? What is the ultimate form of what it is to be alive? What really weaves that ultimate horizon or that ultimate point of meaning where we think of something as perfection?"

"This is what our religious traditions and stories carry with them and offer up for reflection. Who embodies this, in history or myth? What points toward this perfect horizon or achieves it? These questions provide a critical perspective on the way AI is hyped and made idolatrous, being too narrow or too limited to be an ultimate goal or an ultimate achievement. This is not to damn or curse AI, but merely to say it has limitations." 

"The Christian tradition has wrestled for a thousand years with the question of how we avoid idolatry," Rev. Scott noted. "I've never thought about what it would mean to ask that question with respect to AI. I do think if somebody did that with intent and work, it might reveal some of these missing perspectives or missing questions."


AI and the Transcendent

Thomas continued: 

 "You might also think about what would be iconic, or what might lead, point toward, or prime a transcendent element in technology.  You would not presume to represent it or contain it, but at least to point toward as step on the ladder." 

Soul Matters' Call for AI-Themed Sermons

Love in the Development of AI