Ethical concerns rise over use of social robots in long-term elder care
While social robots like Paro and Lovot are increasingly used to combat loneliness and depression among older adults in long-term care (LTC) facilities, a new Canadian study warns that their deployment raises urgent ethical questions - particularly around consent, access, and the risk of dehumanizing care.
The research, published in Frontiers in Robotics and AI under the title "Ethical considerations in the use of social robots for supporting mental health and wellbeing in older adults in long-term care", highlights findings from two empirical studies conducted in Canadian LTC homes using the social robots Paro, a robotic seal, and Lovot, a mobile AI-powered companion robot. Led by Lillian Hung of the University of British Columbia, the study scrutinizes both the psychological benefits and the ethical pitfalls of using such technologies in vulnerable populations.
The core of the paper presents four ethical challenges identified during robot implementation: inequitable access, barriers to informed consent, the substitution of human care with machines, and concerns over infantilization. Though both Paro and Lovot demonstrated marked benefits, including reduced loneliness, eased anxiety, and greater cognitive engagement, the researchers argue that these gains must be balanced against serious ethical shortcomings.
A central concern is inequitable access, particularly for non-English-speaking residents. In one documented instance, an elderly Mandarin-speaking woman, Mrs. Zhang, was excluded from Paro sessions due to staff language limitations. Her eventual engagement with the robot, initiated by a multilingual researcher, revealed clear emotional benefits - highlighting how implicit staff biases and language barriers can deprive marginalized residents of therapeutic tools. The study emphasizes the need for culturally inclusive practices and staff training that extends beyond English-speaking populations.
Consent, especially for residents with cognitive impairments, emerged as another critical issue. Conventional institutional ethics protocols often require written informed consent, which excludes individuals unable to sign documents and cuts them off from potentially beneficial interventions. In the case of Mr. Lee, an elderly man with advanced dementia and no living relatives, researchers employed a "relational process consent" model: his non-verbal cues, such as smiling and reaching for the Lovot robot, were interpreted by care staff as indications of assent. This approach underscores the importance of relational ethics, valuing ongoing interaction and context over rigid procedural norms.
Another ethical dilemma lies in the potential substitution of human care. While no direct evidence of this occurred during the study's facilitated sessions, researchers found that after their departure, LTC staff occasionally left robots with residents unsupervised. The absence of structured facilitation raises concerns that overburdened facilities might use robots as a replacement for human engagement, particularly with residents who are already isolated. The authors caution that robots should augment, not replace, human care, and must be integrated thoughtfully, with ongoing human presence to support meaningful interactions.
The study also grapples with perceptions of infantilization. Though most participants found comfort and joy in the robots, a few residents and family members perceived the technology as patronizing. In one case, a resident’s son described his mother’s engagement with Paro as “embarrassing” and “childish.” However, the mother herself reported the experience as socially enriching. This disconnect illustrates the importance of centering the resident’s perspective rather than defaulting to external assumptions about dignity or age-appropriateness.
Hung and her colleagues advocate for “everyday relational ethics” as a more suitable framework than traditional institutional ethics in these settings. This approach emphasizes continuous, relationship-based engagement with residents and their lived realities, rather than a one-size-fits-all ethical rubric. It places older adults, particularly those with dementia or limited communication abilities, at the heart of decision-making processes, empowering their voices in research and care design.
Paro and Lovot were each studied in different LTC contexts. Paro sessions, conducted with 10 participants aged 60 and over, primarily served individuals with varying stages of dementia. Lovot sessions involved 36 participants, most of whom were women between the ages of 80 and 90, often with mobility impairments. Both robots produced improvements in mood, engagement, and emotional expression, according to the researchers.
However, the paper insists that success in pilot studies does not justify ignoring ethical missteps. Robots that fail to account for linguistic, cultural, and cognitive diversity risk exacerbating the very problems they are designed to solve. The authors argue that equity must be at the forefront of any implementation strategy. This includes not only linguistic inclusivity but also consent mechanisms adapted for cognitive diversity and explicit safeguards against the over-mechanization of care.
Beyond individual LTC homes, the study raises broader questions for policymakers and developers. It calls for participatory design processes in which older adults help shape the development and deployment of care technologies. This co-design model, the researchers argue, would foster dignity, autonomy, and inclusivity, while reducing risks of misinterpretation, exclusion, and emotional harm.
While emphasizing the potential of social robots to promote wellbeing, the paper also critiques the limited scope of existing research. It warns that qualitative case studies, however detailed, cannot capture the full spectrum of ethical implications across diverse populations and care environments. It therefore advocates for mixed-method, culturally responsive studies that include voices from underrepresented groups, LTC professionals, and families.
Crucially, the research reinforces a growing consensus that while AI-enabled tools can enhance human care, they must never be used to replace it. Especially in a context as vulnerable as long-term care, the human touch remains irreplaceable.
The study urges health authorities, care providers, and developers to align technological innovation with ethical rigor. As social robots become more common in eldercare worldwide, ensuring that these tools serve all residents fairly, respectfully, and meaningfully is not just a clinical obligation - it is a moral one.
- FIRST PUBLISHED IN:
- Devdiscourse

