Markkula Center for Applied Ethics

Older Adults, Carebots, and Deceit: What Should We Do?

To address a caregiving crisis and fill the caregiving void, technologists and caregivers are turning to robotics and Artificial Intelligence (AI).

What's at Stake:

A “Silver Tsunami” looms in the United States: the current population of adults aged 65 and older is 48 million, and that population is expected to nearly double to 88 million by mid-century. [1] Where do most of our eldest citizens grow old? At home. A recent AARP survey found that 76% of Americans aged 50 and up would prefer to age in their current home. [2] But as of 2016, 8.3 million people, most of them older adults, were aging with the help of long-term care services such as long-term care facilities (LTCFs), home health providers, and hospice. [3] Data and statistics on older adult care are sparse, but one fact remains clear: older adults and their caregivers face a multitude of problems. Older adults routinely confront loneliness, social isolation, and a general societal apathy toward their care, while caregivers face long hours, complex problems they are not always trained for, and low or no pay.

Caregiving falls into two categories: professional (paid) caregivers and family (unpaid) caregivers. The professional caregiving workforce numbers around 4.5 million and is dwarfed by the 53 million family caregivers who work, usually for free, in the homes of their friends and relatives. [4], [5] Caregiving is expensive. In 2016, long-term care services consumed 30% of total Medicaid expenditures, and in 2017 informal caregivers in the United States worked 34 billion hours, valued at 470 billion dollars, without pay. [6] Seen from either side, the United States has a caregiving crisis: too few formal caregivers are stretched too thin, and too many informal caregivers are engaged in work for which they are untrained and unpaid.

Carebots: A Potential Solution?

To address this caregiving crisis and fill the caregiving void, technologists and caregivers are turning to robotics and Artificial Intelligence (AI). Japan, whose Gray Wave is cresting now, leads the world in the adoption of caregiving robots, typically called carebots. The most common carebots in Japan and worldwide are robotic companion “animals.” These animal-shaped companion carebots keep older adults company and engaged by interacting with them and responding to physical and audio cues with appropriate movements and sounds. Paro, a highly popular companion carebot “seal,” has been shown to decrease older adults’ loneliness and, anecdotally, to make them calmer, more talkative, and more sociable. [7] Carebots have many other uses. Robear, for example, is a large bear-like robot that can move older adults into and out of bed, the bath, and other hard-to-reach positions. [8] Further, startups like CareCoach build and sell AI-enabled virtual assistants that engage with older adults, keep tabs on their movements and medical needs, and call emergency services in times of crisis. [9]

There is no doubt that carebots will play a necessary role in any future of caregiving, yet it is important to pause here and state a few normative claims regarding their use. First, carebots are not an equivalent replacement for human care. They may perform some tasks more efficiently, or reduce the difficult burdens of informal caregivers, but ensuring that humans continue to care for humans in some capacity is paramount. Children who were raised by their parents have a moral obligation of reciprocity to care for them in turn, and there may even exist a broader intergenerational duty for the young to care for the old. Second, carebots will help reduce caregiving costs, but that does not entitle caregivers to lower the quantity or quality of their care. In fact, a decrease in cost should increase the affordability of, and access to, quality care. In short, lower costs should be passed on to older adults and their families, not retained by corporations.

Cost is just one of many ethical dilemmas surrounding carebots; another important concern is deception. Is it acceptable for a carebot to deceive an older adult if the deception ultimately works in the adult’s best interests? Is it deception if an older adult falsely believes that their companion carebot can actually talk and think? How should a carebot respond when an older adult asks when their long-deceased parents are coming to visit? (More on this below.)

It is important to pause and make several notes before proceeding. First, any agency that a carebot exhibits is derivative of the human(s) who built, developed, and coded it. As of today, the actions of robots, even those with complex motor functions and what is perceived as agency, are reliant on their human builders. Second, going forward I will use “deception” as it is defined in the Stanford Encyclopedia of Philosophy: “to intentionally cause to have a false belief that is known or believed to be false.” [10] Third, I will touch upon the deception-manipulation distinction and define manipulation as “to change by artful or unfair means so as to serve one's purpose.” [11]

Ethical Considerations:

The broad question is often asked: is deception ever morally acceptable? The German philosopher Immanuel Kant would argue that it is always wrong to be deceptive. Why? Kant believed that, as contemporary philosopher Russ Shafer-Landau puts it, we must “always treat persons with respect, as valuable in themselves.” To act deceptively toward another is to treat them as a mere means to an end, and therefore to break the obligation to see persons as valuable in themselves. [12] A utilitarian might answer differently. [13] John Stuart Mill, for example, would contend that some forms of deception are justified as long as they bring about more pleasure than pain. When a murderer knocks on your door asking if your sister is home, Mill would implore you to lie, whereas Kant would say you must answer truthfully. In the context of older adults and carebots, I believe that the correct answer lies somewhere between Kant and Mill, yet closer to Kant. Below I lay out two examples to show how and when it might be justifiable to deceive older adults through carebots, in very restricted situations.

First, it is worth detailing some basic rules to keep in mind when considering deception, carebots, and older adults.

  1. Do good and minimize harm. This is a basic moral obligation in healthcare, and it applies even more strongly in the older adult care context. Why? Because older adults face more numerous and more critical healthcare decisions than younger adults.
  2. The intention behind using a carebot should be to serve older adults’ best interests. A carebot’s primary charge is to improve care; all else should be secondary.
  3. Older adult care belongs to a special, protected class. Many older adults are more vulnerable and fragile than the general population and thus should be treated with greater respect and care.
  4. Act to preserve older adults’ autonomy and agency. As older adults age, some experience reductions in their autonomy and agency. It is crucial that carebots do not reduce them further.
  5. Manipulation is never morally justified. Using an older adult as a mere means to achieve one’s own ends is a definitive moral wrong.
  6. Deception may lead to feelings of betrayal and distrust. Deception is a risky tool; when it is used too liberally it can create compounding negative consequences.
  7. Deception should be the last resort. It is always morally preferable to rely on another form of response, for example, avoiding the question or redirecting the conversation. 

 

Example Cases:

Asking about deceased parents

•Sam, who is 85 years old and experiencing dementia, asks his AI care assistant, “When is mom coming to visit?” Sam’s mother passed away several decades ago, but because of the effects of dementia on his cognitive capacity, Sam does not recall this. Furthermore, previous experience has shown that if Sam were reminded of his mother’s death, he would become depressed, angry, and sad; he might even accuse the carebot of lying, breaking trust. How should Sam’s AI assistant respond? I would argue that in this case the assistant could be justified in acting deceptively. Why? Sam’s question has a relatively low impact on his overall care outcome, which lowers the threshold for him to be fully informed. More importantly, answering his question truthfully would create greater harm (reminding him of his mother’s death and causing him to become depressed, angry, and sad, even accusatory) than the good achieved by telling him the truth (treating him as an agent worthy of being told the truth). Even though deception could be justified here, the assistant should use it only as a last resort and should first attempt to avoid answering the question or redirect the conversation.

•Sally, who is 85 years old and sharp as a tack, asks her AI care assistant, “When is mom coming to visit?” Like Sam’s, Sally’s mother passed away several decades ago, but unlike Sam, Sally knows this, and asks her assistant as a joke or as an experiment to see how it will react. If Sally were reminded that her mother is dead, she would not react negatively; instead she would fondly recall time spent with her. How should Sally’s assistant respond? In this case the assistant is not justified in acting deceptively, because Sally’s high cognitive ability allows her to easily see through the attempted deceit and understand the scenario’s nuance. Acting deceptively toward Sally is wrong because it would undoubtedly create more harm than good and would treat her as a mere means.

Deception with respect to surgery

•Charlotte is 95 years old and has lost much of her cognitive capacity. She fails to grasp the complexities of the medical world and has little patience for complicated conversations. Charlotte has a carebot with which she converses about all of her medical questions. Charlotte’s doctors recommend a surgery that is crucial to her wellbeing. The surgery is 90% effective yet has a 0.01% chance of death. In her caregivers’ educated medical opinion, it is in her best medical interest to have the surgery. Charlotte may have misgivings or simple questions to ask her carebot about the surgery. Even if the carebot judges that Charlotte may not understand complete and complex answers, or that answering her questions may lead her to decline the surgery, the carebot should not act deceptively toward Charlotte. Why? The surgery has a large influence on her care outcome and therefore belongs in an important category where deception is never allowed.

•Charles is in much the same situation as Charlotte but has a high cognitive capacity. In the same surgery scenario, Charles’s carebot is similarly, and even more strongly, not allowed to use deception. Why? Because the surgery has a large influence on his care outcome and Charles has a high cognitive capacity. A high cognitive capacity increases the need to treat Charles as a fully autonomous agent worthy of respect and the truth.

 

Decision Matrix:

Is deception warranted?

                          Large Influence on Care Outcome    Small Influence on Care Outcome
High Cognitive Ability    Never warranted                    Never warranted
Low Cognitive Ability     Never warranted                    Rarely warranted
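For designers, the matrix can be read as a simple guardrail. Below is a minimal, illustrative Python sketch of that rule; the enum and function names are hypothetical, chosen only for this example, and not taken from any existing carebot system.

from enum import Enum

class Influence(Enum):
    LARGE = "large influence on care outcome"
    SMALL = "small influence on care outcome"

class Cognition(Enum):
    HIGH = "high cognitive ability"
    LOW = "low cognitive ability"

def deception_warranted(influence: Influence, cognition: Cognition) -> str:
    # Encode the matrix: deception is at most "rarely warranted," and only when the
    # decision has a small influence on the care outcome and the resident's cognitive
    # ability is low; every other cell is "never warranted."
    if influence is Influence.SMALL and cognition is Cognition.LOW:
        return "rarely warranted (a last resort, after avoiding or redirecting the question)"
    return "never warranted"

# Example: Sam's question about his deceased mother has a small influence on his care
# outcome, and his cognition is impaired, so deception is at most rarely warranted.
print(deception_warranted(Influence.SMALL, Cognition.LOW))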

 

Final Thoughts:

There are a few final considerations to keep in mind when building carebots for the needs and intricacies of older adults. First, older adults suffering from dementia may lack the ability to speak clearly or coherently. Many speech-enabled carebots rely on natural language processing, whose effectiveness drops when speech is unclear. Addressing unclear speech is necessary if carebots are to be widely accessible to all older adults.

Carebots also need to fit the unique needs and personality of each individual patient: they must be able to work with both “Sam” and “Sally.” This requires getting to know the patient, and that presents a danger too. Carebots must place patients’ privacy front and center. Where will older adults’ personal data be stored? For how long? Will it be sold to third parties? Is the carebot easily hackable? These questions should be answered before these technologies are widely deployed.

Carebots must also be able to replicate human caregivers’ tact. They should recognize the diversity of scenarios that call for premeditation and quickly adapt when unexpected scenarios unfold. It is important for carebots to always ensure that older adults take their medication (maintaining routine), but it is equally important that they recognize whether an older adult’s unusual sounds and movements indicate pain or happiness (adapting to the unexpected).

Designers of carebots must consider when a carebot should be program-based and when it should be allowed to learn on the job. A program-based carebot follows hardcoded rules and is most useful for low-skill, repetitive tasks (filling a water cup, for example). A learning carebot is able to adapt to a variety of situations and learn how to act in each specific scenario (Boston Dynamics’ Spot robot, for example). [14] If a carebot were used among many older adults, it would also need to understand context switching: a carebot that justifiably deceives one older adult may not be justified in deceiving the next.
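To make the distinction concrete, here is a minimal, hypothetical Python sketch of the two design approaches; the task names, feedback values, and the LearningCarebot class are illustrative assumptions, not a description of any real carebot.

from dataclasses import dataclass, field

def program_based_response(request: str) -> str:
    # Program-based carebot: a hardcoded rule table, suited to low-skill, repetitive tasks.
    rules = {
        "water": "Filling your water cup now.",
        "medication": "It is time for your morning medication.",
    }
    return rules.get(request, "I will ask a human caregiver to help with that.")

@dataclass
class LearningCarebot:
    # Learning carebot: keeps a score per action and adapts from resident feedback.
    scores: dict = field(default_factory=dict)

    def choose(self, options: list) -> str:
        # Pick the option with the highest learned score (unknown options score 0).
        return max(options, key=lambda option: self.scores.get(option, 0.0))

    def feedback(self, option: str, reward: float) -> None:
        # Move the stored score partway toward the observed reward.
        current = self.scores.get(option, 0.0)
        self.scores[option] = current + 0.5 * (reward - current)

# Example: if redirecting a painful question earns better feedback than answering it
# bluntly, the learning carebot comes to prefer redirection; the program-based carebot
# can only ever follow its fixed rules.
bot = LearningCarebot()
bot.feedback("redirect the conversation", 1.0)
bot.feedback("answer bluntly", -1.0)
print(program_based_response("water"))
print(bot.choose(["redirect the conversation", "answer bluntly"]))

Which approach is appropriate depends on the task: hardcoded rules are predictable and auditable, while learned behavior can adapt to each resident, which matters for context switching.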

Carebots will likely play a crucial role in assisting caregivers in the future, and while caregiving is a time-sensitive need, it is important to pause and consider the implications of wide-scale carebot adoption. As a society, do we want robots caring for our oldest members, and if so, how should they behave? Is a large-scale retraining of human caregivers a more suitable alternative? If you were an older adult, how much time would you like to spend with a carebot as opposed to a human caregiver?

 

References:

[1] Barbara Cire, “World’s older population grows dramatically,” National Institutes of Health, March 28, 2016. Available at: https://www.nih.gov/news-events/news-releases/worlds-older-population-grows-dramatically#:~:text=Highlights%20of%20the%20report%20include,to%2076.2%20years%20in%202050.

[2] Joanne Binette and Kerri Vasold, “2018 Home and Community Preferences: A National Survey of Adults Ages 18-Plus,” AARP Research, July 2019. Available at: https://www.aarp.org/research/topics/community/info-2018/2018-home-community-preference.html

[3] Harris-Kojetin L, et al, Long-term care providers and services users in the United States, 2015–2016. National Center for Health Statistics. Vital Health Stat 3(43). 2019. Available at: https://www.cdc.gov/nchs/data/series/sr_03/sr03_43-508.pdf

[4] Susan C. Reinhard, et al, Valuing the Invaluable: 2019 Update, AARP Public Policy Institute, November, 2019. Available at: https://www.aarp.org/content/dam/aarp/ppi/2019/11/valuing-the-invaluable-2019-update-charting-a-path-forward.doi.10.26419-2Fppi.00082.001.pdf

[5] Harris-Kojetin L, et al, Long-term care providers and services users in the United States, 2015–2016. National Center for Health Statistics. Vital Health Stat 3(43). 2019.  Available at: https://www.cdc.gov/nchs/data/series/sr_03/sr03_43-508.pdf

[6] Susan C. Reinhard, et al, Valuing the Invaluable: 2019 Update, AARP Public Policy Institute, November, 2019. Available at: https://www.aarp.org/content/dam/aarp/ppi/2019/11/valuing-the-invaluable-2019-update-charting-a-path-forward.doi.10.26419-2Fppi.00082.001.pdf

[7] Harmon, Amy. “A Soft Spot for Circuitry,” New York Times. New York, New York. July 4th, 2010. Available at: https://www.nytimes.com/2010/07/05/science/05robot.html?_r=2&pagewanted=1

[8] Dredge, Stuart. “Robear: the bear-shaped nursing robot who'll look after you when you get old.” The Guardian, February 27th, 2015. Available at: https://www.theguardian.com/technology/2015/feb/27/robear-bear-shaped-nursing-care-robot

[9] Smiley, Lauren. “What Happens When We Let Tech Care For Our Aging Parents.” Wired, December 19th, 2017. Available at: https://www.wired.com/story/digital-puppy-seniors-nursing-homes/

[10] Mahon, James Edwin, "The Definition of Lying and Deception", The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.). Available at: https://plato.stanford.edu/archives/win2016/entries/lying-definition/

[11] Merriam-Webster Dictionary, “Definition of Manipulation,” Merriam-Webster Dictionary. Available at: https://www.merriam-webster.com/dictionary/manipulation?src=search-dict-hed

[12] Shafer-Landau, Russ. The Fundamentals of Ethics. New York: Oxford University Press, 2015.

[13] Shafer-Landau, Russ. The Fundamentals of Ethics. New York: Oxford University Press, 2015.

[14] Boston Dynamics, Spot autonomous robot. Available at: https://www.bostondynamics.com/spot
