
Introduction

Human health is of great importance and can also be seen as a morally significant value that must be preserved, at least prima facie. A certain state of health is a prerequisite for realizing other life concepts, goals, needs and values, which is why health can be described as a constitutive requirement (1). Thanks to the constant progress of modern medicine, such a state of health can be achieved and maintained using a variety of means and measures, including, for example, the use of artificial intelligence. This article deals with care robots and their relevance for mental health. This focus is shaped, on the one hand, by advances in AI applications and the spread of technical possibilities in the care sector and, on the other, by the fact that research on this topic remains under-addressed (2). Although the literature contains numerous works on AI-based care robots, including ethical perspectives (3-7), the connection to mental health and justice-related considerations is still missing. Mental health seems to have taken on a special meaning in recent years, partly as a result of various crises, armed conflicts, the digital transformation and the pandemic. People’s subjective ideas about (mental) health and illness have also received more attention as a result. But what role can AI-based entities play regarding mental health? First, the relevance of mental health is discussed, along with the ethical implications that arise. Second, the use of care robots is examined in terms of the extent to which they can influence mental health, drawing on numerous studies and works. Third, an idea of equity of access is presented that can be reconciled with the previous considerations. This is followed by a discussion of frequently raised concerns regarding the implementation of artificial entities, in order to meet the concerns of mental health and equitable access on the one hand and to offer an ethical justification on the other.

The value of (mental) health

This section shows the relevance of mental health and discusses the associated ethical implications.

At least since the introduction of the biopsychosocial model (8,9), the focus in the scientific community has shifted away from a purely physical view of illness and health and towards a holistic[1] concept in which biological, psychological and social influences are decisive. However, this shift is also accompanied by a complex interweaving of the various dimensions and factors that can be associated with a state of health (14). In recent years, there has been a growing emphasis on mental health as an essential component of health in general, as reflected in the national mental health strategies adopted by various countries (15). Many illnesses and disorders affecting our well-being cannot be attributed to physical impairments alone, which underscores human health as a multidimensional phenomenon (16,17). Yet, it is reasonable to say that mental illnesses often do not receive the same attention as physical impairments, and this is reflected in access to health care (18-20). The World Health Organization’s (WHO) objectives are helpful in this context, according to which it is important to 1) recognize and strengthen the value of mental health, 2) align local practices with this value, and 3) focus health care services more strongly on mental health (21). These strategies are also reflected in the Mental Health Atlas, in which WHO member states commit themselves to promoting certain goals regarding mental health; such commitments would not be necessary if the corresponding health services in the respective countries were already functioning adequately (22). The provision of contact points and responsible people to address mental health concerns is only gradually improving.

For this reason, too, a focus on mental health appears essential in order to then clarify the question of what opportunities arise from the use of care robots. As already mentioned, mental health appears to be a prerequisite for a meaningful life (23). The decisive foundations for mental health are laid in childhood and adolescence because the earliest possible internalization of one’s own (mental) strategies shapes how one deals with difficult life situations throughout one’s entire life (14). However, study results suggest that many mental health problems are the result of an interplay between a variety of factors (24-26). These include genetic and environmental influences, with the latter category also covering special life events. In general, people can be described as mentally healthy if they are able to master their daily tasks and certain life situations and achieve a level of well-being that is individually understood as good (27). This view of health differs from a purely physical approach in that even without physical and medically detectable impairments, people can lack something. This something relates to cognitive processes, emotional states, and the mental and spiritual condition in general.

The idea of mental health presented so far can be used to subsume basic mental activity, emotions, motivation, coping strategies, psychophysical performance and self-determination potential (14). However, when individuals feel overwhelmed by their own lives, are exposed to constant stress, or even reach the limits of their resilience due to illness, health impairments are often the result. It is not uncommon for these gradual processes to lead to mental disorders, impaired well-being, impaired cognitive performance, depression, anxiety disorders or other psychiatric impairments (14,19,28,29). Apart from diseases that can be classified using diagnostic systems — such as the International Statistical Classification of Diseases and Related Health Problems (ICD-10) and the Diagnostic and Statistical Manual of Mental Disorders (DSM-5)[2] — the consequence for affected individuals is a reduced quality of life, and they are increasingly unable to participate and function in social interactions as usual. The impairments described above can limit an individual’s ability to act and make decisions. This impairment appears ethically relevant above all because one’s own life can no longer be led in the usual way, especially when appropriate measures and support are not provided.

However, a positive understanding of mental health not only relates to the individual but also affects society due to the importance of this fundamental and constitutive value. Marckmann (27) emphasizes this significance because health is a transcendental good, that is, a basic prerequisite for the realization of life goals and equal opportunities in society. Put simply, health can be understood as a value or transcendental good because it is a conceptual idea, not just a state of being. On the one hand, health is determined on an individual basis; on the other hand, it is a universally used term. The fact that different people may define health differently yet still describe, and most likely want to achieve, such a state expresses this idea. According to Ornstein (32), this value ranks particularly high compared to other goals, ideas of life or even moral values. Such ideas indirectly refer to the moral plurality regarding certain values but also allow the described value of (mental) health to continue to exist, which manifests itself particularly clearly in the state of being ill (33,34).

According to the WHO, health describes a state of complete physical, mental, and social well-being, i.e., not the mere absence of (physical) illness (16,21). This description at least indirectly emphasizes the relevance of mental health by referring not only to pathology and medically verifiable functional incapacity. According to Bauer and Jenny, the value of health is to be seen as a dynamic concept in a constant process of adaptation — it does not exist as a result (33). This goes hand in hand with aspects of well-being, people’s ability to act, opportunities for self-determination and an obligation on the part of the human community to make this essential value available. The social dimension is also emphasized by Bhugra et al. (35) insofar as mental health also depends on a person’s social embeddedness. According to Cloninger (24), determining mental health involves 1) assessing temperament and character, 2) assessing the current emotional state, 3) assessing various syndromes, and 4) formulating steps for modification. In contrast to the considerations above, this is not just a definition but an approach to promoting mental health.

The WHO definition is used for the arguments below, first because these considerations are consistent with the statements made so far, and second because they appear conclusive for the discussion of AI-based robots:

Mental health is a state of mental well-being that enables people to cope with the stresses of life, realize their abilities, learn well and work well, and contribute to their community. It is an integral component of health and well-being that underpins our individual and collective abilities to make decisions, build relationships and shape the world we live in. Mental health is a basic human right. And it is crucial to personal, community and socio-economic development.

(21)

Even if illness, health and, more specifically, mental health are difficult to express in general terms — or perhaps should not be expressed in such terms at all — a plausible claim is conceivable in ethical terms: these values depend on individual interpretation, but their general claim is not lost as a result (1). This general character can be explained by the universal importance of health, insofar as for all people it is associated with something that they want to maintain and preserve, regardless of what exactly this idea and realization of health looks like in a specific case (27,33). It follows that, in the context of medical and care facilities, making this access to health available to all people should be understood as an essential concern. If health represents something morally important — as the previous explanations suggest — then it is ethically imperative to take certain measures to preserve it and to refrain from any actions that are contrary to this value (23,36). The previous considerations also make it clear that mental health is part of the general understanding of being healthy and is therefore also subject to this ethically required duty.

Care robots and mental health

After considering the relevance of (mental) health, attention now shifts to care robots. This form of human-machine interaction was chosen, firstly, because these robots are increasingly being used in the care sector; secondly, because a greater and possibly technical need for support in care facilities is to be expected in the near future; and thirdly, because these robot companions can provide a new solution to the mental health problems of the people being cared for. Furthermore, although there are many therapeutic services designed to restore or promote mental health, many people do not receive the necessary mental health care (37), which is why care robots have great potential to close this gap (28). As a result of demographic change and the increasing number of people in need of care, these robot companions appear to be a conceivable replacement for skilled workers, or at least a means of support. But to what extent are machines able to promote people’s mental health? What ethical challenges are associated with the potential use of AI-based robots for mental health care? How can these problems be solved? The following sections address these questions and related aspects.

In the care setting, robots have so far mostly been used to counter acute staff shortages, to support professionals in their work or to try out new ways of communicating and interacting (38,39). Although the effects of human-robot interaction on mental health are being investigated in various studies (40-43), their number is small compared to other topics, and, above all, too little attention has been paid to ethical considerations. Due to the large number of different care robots, a specific selection is made for this article. These machines can be classified as socially assistive robots (SARs[3]) (39,50), and they represent the current state of the art, which appears crucial for the question of promoting mental health. In addition, SARs have communicative abilities, they can move (partly autonomously) in an environment, and they have a different physical presence compared to companion robots such as Paro. These skills allow for a different kind of social interaction, which seems particularly useful considering the WHO definition of mental health.

Study results show that the use of care robots can improve mental health, increase emotional well-being and reduce loneliness compared to the usual care constellation (42,51-53). Empirical surveys by Papadopoulos et al. (51) in England and Japan found that these robot companions were also able to adapt to cultural differences regarding the needs of people receiving care. This programmable cultural sensitivity was well received and, in ethical terms, corresponds to an approach characterized by intercultural (technical) competence and the necessary respect for the person being cared for — capacities already embedded in the programming, which human caregivers can only draw on after a process of readiness and internalization. Papadopoulos et al. conclude that SARs “may be likely to protect against mental health problems” (51, p.252); however, they take the lack of experience into account, insofar as the positive or negative effects of long-term implementation of robots make further research necessary.

Other researchers have come to a similar conclusion, because it is not clear from studies, most of which are short-term, how people in need of care will react to robots once the novelty effect has worn off (52,54). Apart from that, this habituation effect not only concerns machines but also occurs with human caregivers after a certain period of time — and it does not have to be negative. Chita-Tegmark and Scheutz’s study results make it clear that robot companions can also prove their worth as moderators in interpersonal relations supported by technology (54). They are therefore available not only for direct patient contact but also for nursing staff, by providing feedback on social behaviour and raising awareness of problematic attitudes, possible interventions or even care-related goals. “Nao would alert the speakers when their voice was too high or too low or when the conversation was problematic” (54, p.205). These types of actions can directly influence the behaviour of the people present in the care sector, which appears crucial for mental health. Heated, stressful and laborious arguments increase mental strain, whereas care robots[4] could counteract these problems.

The kind of human-robot interaction depends on the research project and the respective study design. Nevertheless, it can be concluded that communication in particular (54), regardless of whether it is auditory or visual, has a significance that should not be underestimated, and that SARs are therefore preferable to other robots. The very presence of the robot and the knowledge that an exchange is possible can be seen as something important for people receiving care. Empirical studies (58-60) confirm this assumption: “participants who interacted with a physically present robot rated the interaction as significantly more enjoyable and significantly more useful than […] similar computerized or screen-based activities” (28, p.37). Aspects of mental health can be taken into account, for example, by actively involving the people being cared for, using videos, educational measures, a form of coaching (49), help with and observation of therapy (28), conversations about personal issues (52), useful tips from the robot (54) or joint (not only physical) activities (51,61). Their use as naive therapeutic companions[5] is also conceivable, as shown by some studies on the use of SARs in the context of dementia or with children with an autism spectrum disorder.

An approach based on the well-being of the people being cared for makes sense and would be beneficial to the value of (mental) health as already described. If care robots can be understood as useful, insofar as their activities not only do not endanger the mental health of affected people but can potentially improve it, withholding access to robot companions would be difficult to justify. Clearly, the consent of the persons concerned, which could be jeopardized by a conceivable benevolent justification in the sense of benevolent coercion (23), must not be circumvented. However, if individuals in need of care consent to this technical measure, it would amount to an unjustified withholding of care-related options if the potentially mental health-promoting care robots were not even taken into consideration. These machines must be understood and integrated differently not only because of their lack of human appearance, traits, and behaviour, but also because they can perform other tasks precisely due to this difference. Several studies (64-66) show that people in need of care also feel comfortable[6] with a robot.

Based on these considerations, it is reasonable to conclude that the individuals being cared for may also welcome SARs and experience positive influences on their mental health precisely because they are aware that these are non-human counterparts. Some studies show that men tend to react more positively to these machines than women and that older people are more likely to have reservations than younger people (50,69,70). It is also apparent that people in need of care sometimes prefer to be cared for by people of the same sex, which could be a decisive factor in the appearance of AI-based care robots. Nevertheless, universal solutions seem inappropriate. The specific and technical kind of interaction between robots and humans promotes a new approach and possibilities[7] for use that would not be expected or might even be undesirable in purely human encounters (54). In this context, there is immense potential for AI-based robots, insofar as they are used during mental health care and ethically relevant aspects come into consideration. With the constant progress in medicine and technology, for example through deep learning, neural networks or the general adaptivity of AI (71), a convergence of humans and machines seems increasingly to be taking hold, for example through robots coming ever closer to human behaviour (43,72). Whether this forced comparability, so to speak, is desirable in the context of care, and even more specifically regarding mental health care, must be critically questioned[8].

The above considerations tend to support the conviction that equating human and technical care activities would not have the desired effect. The consideration of (mental) health, and the contributions of care robots, which have so far been perceived as positive and valuable — but different — could, so to speak, vanish into thin air. Their specific contribution to mental health, the promotion of independence, well-being or even human-machine interaction in general are based precisely on this described otherness (3,53,61,73). Wanting to make machines more human in the sense of a human concept could be questionable in ethical terms, especially if the specific needs of people receiving care are not considered. If SARs offer certain advantages (in the view of people requiring care) over human nurses due to their technical features, the complete alignment of humans and machines would mean that these potential advantages would no longer exist — at least not in the form presented here. People’s concerns, wishes and ideas must be respected and, above all, used as a guideline for the initial integration of SARs — including their physical appearance. Otherwise, the interests of the individuals addressed would not be taken seriously, a lack of respect would be expressed and the important approach via the individual would be neglected in favour of general solutions. In turn, health serves as an important benchmark for determining which courses of action and strategies should be pursued and which should not. However, it must be determined what exactly this care should look like, for example as basic medical care or in the sense of a minimum standard (74).

Equitable access to health

After considering mental health and AI-based care robots, the question of how fair access to this technological option can be ensured is explored.

As the significance of individual health and its socio-political consideration has become implicitly clear, it seems appropriate to consider the question of ensuring health, even if these considerations cover only a fraction of the philosophical debate on distributive justice and access. If mental health represents an intrinsic value, then access to medical services should not depend on individual economic capabilities (74). However, additional services or insurance can be taken up precisely by those people for whom such services are affordable. This idea leads to the question of whether the worst-off in a society should be given special preference, as Rawls showed in his Theory of Justice and as Rauprich discusses in a comparable way (75-77). The provision of adequate basic care for the realization of vital goods, interests and ideas of a successful life appears essential; its potential characteristics are outlined below.

The provision of adequate basic services also seems reasonable to Rauprich, because in this way we take into account those in a society who are particularly in need of help and support, thus granting them important access without at the same time attaching too much weight to the value of equality (75). This highlights the problem, in terms of prioritization, between a purely egalitarian approach (egalitarian view) and one that gives priority to people in need (priority view) (78). Brock also deals with this common attitude that disadvantaged persons or groups should to a certain extent be preferred over the privileged, and takes a critical view of the priority position, insofar as it “would have what for many are highly counterintuitive implications for health-care prioritization” (78, p.45). Theoretically, poorer people would have to be given priority over richer people per se, even if the richer person were much sicker at present, as long as their overall level of health remained higher than that of the poorer person. In accordance with the ideas of Höffe, many tasks of justice arise from the limited nature of natural resources (79) and, in this context, also from the use of technical aids. Leaving aside this problem of favouring the poorest and most needy in a society, the access to medical care mentioned above should be understood — albeit in a minimal and fundamental sense — as a concern of any justice-oriented medicine.

According to Huster, the introduction of a classic minimum standard would be problematic insofar as it would widen the gap between rich and poor, or sick and healthy, especially because (tax) contributions would possibly no longer be income-dependent. For this reason, he argues in favour of minimal health care, in which additional benefits can be claimed and health concerns remain a reference point (74). In this context, Höffe’s considerations can be followed, according to which the opportunity for equal participation, involvement and agency with regard to AI-based robots leads to the conclusion that all people can be potential users of technical aids (79). Otsuka’s considerations offer a further decisive point of reference: although he argues in the context of the distribution of relevant goods, he illustrates the importance of considering the individual over the mere preference of the many (80). Ultimately — and this can also be derived from the above — it is entirely reasonable to assert that the focus should be on the people interacting with care robots, which includes both those receiving care and human care professionals.

The reflections so far lead to the conviction that the egalitarian approach can be implemented more plausibly in the context of this article than the priority view (78). The use of AI-based robots is not (only) a matter for the worst-off, because certain knowledge and skills are required to use them. It could, however, be argued that a care robot should be meaningful[9] above all for those people who need to be cared for, looked after and thus also supported. Providing support for an exceptionally active person through a technical entity would appear to contradict the fundamental purpose of the care robot — but only at first glance. Imagine that, by chance, ten individuals requiring care are selected in a nursing home. They are sitting together in a group and discussing their level of activity. After a few moments, the people present come to a clear conclusion and agree that Stefan is the most active person. Now, however, let’s imagine that another ten people join the group, and the question of activity and passivity arises again. Among the newcomers is Laura, who, in the opinion of everyone present, now takes on the status of the most active person requiring care, meaning that Stefan loses his place at the top, so to speak. What does this tell us about the previous statements regarding care robots? Should Stefan be denied the opportunity to benefit from these machines in scenario one? Can it be justified that using the robots becomes possible again once he loses his status as the most active person in the nursing home to Laura in scenario two? Furthermore, if Roger Federer were to join the group, Laura might no longer appear so active. Apart from the fact that labeling someone as active or passive in this context is not only subjective but can also change over time, the following conclusion seems appropriate.

Even active people in need of care should have the opportunity to interact with care robots in a care facility if they so desire. Otherwise, such opportunities would be reserved only for those who appear to be particularly needy. However, this apparent need may rest on an epistemic error and, moreover, says nothing about whether the person in question wants to interact with machines. This perspective is compatible with the above considerations relating to an egalitarian approach and rejects a strict priority view, although it acknowledges that certain priorities are justified on the basis of special preferences[10]. In addition to the problem just described, namely determining the status of the most active person among different people requiring care in a care facility, there is another aspect that argues in favour of equal[11] access: within a care facility, we are already dealing with a specific group of people who are receiving care. In the previous example, Roger Federer would most likely only join the group if he reached a stage in his life where he needed help and support. The same can be said, broadly speaking, of all people who are cared for in a nursing home. The fact that Laura and Stefan are living in a nursing home means that it can already be assumed that they are no longer able to manage their own lives without assistance in all areas — in short, there is a reason why they are in a nursing home. Thus, it can be argued that the distinction between active and passive within a care facility is less clear and less relevant than the consideration of individual preferences. This is primarily because the group of potential beneficiaries of AI-based care robots is already selective, and person-specific preferences should be considered.

Another point is relevant when talking about SARs but is often forgotten in general discussions about robots and their capabilities. Prioritizing the worst off in a care facility, in the sense of pursuing a priority view, seems both inappropriate based on what has been said so far and implausible considering the robot models presented here. NAO and Pepper are not useful — if at all in this context — for helping particularly passive and needy people. For them to be useful in this way, they would have to be capable, for example, of lifting a person out of bed on their own, taking over their personal hygiene completely, providing active support when walking, or, to put it simply, replacing a human caregiver. Expecting such performance from Lio and PR2 is not in line with their current field of application and also fails to recognize that these machines are primarily intended to provide support (73). For this reason, too, it seems appropriate to make care robots available to all people requiring care, regardless of how active or passive they are. When NAO reads a story aloud or uses visual/physical representations to motivate physical activity, both active and passive residents can benefit. The same applies, for example, when PR2 takes on a moderating role. It makes little sense to assign PR2 only to Stefan, who is passive enough in scenario two, and at the same time leave Laura to resolve a conflict on her own because she is active enough — unless Roger Federer walks in. Apart from this consideration, it nevertheless seems plausible and justifiable to reserve the priority view primarily for medically limited measures in exceptional, life-threatening situations (23,75), and to perceive the integration of care robots as a therapeutic but at the same time additional, alternative and not yet established intervention. Consequently, equal treatment can be expressed and demanded for all those who, due to life-related circumstances, belong to the group of potential users of such innovations.
Further prioritization by referring to physical, psychological, or spiritual needs seems less plausible than the almost equal opportunity to interact with care robots.

What can also be pursued for reasons of justice is a perfectly understandable focus on adequate basic care, which is more comprehensible in the context of AI-based robots than the frequently demanded minimum standard (75). Apart from the limitations to which Huster drew attention, the area of application of care robots in particular can be assigned to appropriate basic care rather than to the minimum standard (74). Regarding medical and care-related measures, the minimum standard will primarily focus on the critical treatment of illness, the restoration of health and precisely these fundamental basic health needs. However, adequate basic care could soon provide these robot companions to enable service activities, mobilization functions or even active living and socialization. This approach can be compared with the decent minimum described by Beauchamp and Childress (23), according to which equal access to basic needs and unequal access to special needs should be granted. These basic needs correspond to the considerations made above regarding general access to SARs (in principle, for all people requiring care), whereas the special needs correspond to specific and ethically justifiable prioritization (based on subject-related preferences). This strategy seems plausible insofar as care robots are not yet part of general basic medical care but can (still) be seen as an affordable addition. This is also made clearer by the fact that AI-based robots do not yet operate fully autonomously in all settings, that activities close to the body are limited to a certain extent and that in many cases medical or nursing professionals have the final decision (53,81,82). Whether their use will develop into a standard intervention remains to be seen, but the considerations above emphasize the openness required in ethical terms.
With reference to equality of opportunity (27,76,83) and the conceivable use of AI-based robots, a target claim arises according to which the value of (mental) health, which is considered constitutive, must be made available to all people to an extent that is essential for basic needs. This claim is based on the egalitarian considerations described and calls for adequate basic care.

Final considerations

Having discussed an egalitarian approach to care robots that refers primarily to basic needs, this section presents final thoughts on the machines, mental health and the idea of fair access.

There is often talk of direct or indirect deception, which seems ethically questionable (28,54), but is this really the case? Applied to care robots, the term leads to a consequence that is not entirely plausible. Deception can be understood primarily as the withholding or concealment of truth, or as intentionally persuading another person to believe something false to be true (23,84). If care robots become established in a facility, however, this technical support must be identified as such, and the responsibility lies with the relevant people or the directly involved care professionals. Accusing a care robot itself of deception would either rest on the scientifically implausible assumption that something like consciousness is hidden in the machines, so that they can deliberately do something morally wrong, or it would attribute the blame to the companies carrying out the work. In the second scenario, however, any attempt to deceive refers to human actors such as programmers, designers, company executives or other individuals involved in production. The accusation of potential deception by robots therefore cannot be sustained; this ethical concern should rather be understood as a human fear of human intervention.

As robotics continues to advance and spread, it is conceivable that people could become attached to a particular care robot and then have to end the friendship when a new model is released. Some studies (28,54) have identified a potential ethical problem here: the personal attachment created by the technical care relationship would seem to undermine self-determination and independence, because people in need of care are no longer able or willing to escape this cherished dependency. At first glance, it may seem trivial to point to the generally recognized significance of autonomy. In this case, however, the appeal to autonomy is indeed appropriate; it has simply not been applied in an adequate way. In human interaction, too, there are often intended and unexpected dependencies that reduce a person's subjective options. A caregiver can foster the dependency of a person in need of care through caring, inspiring and compassionate behaviour, which does not seem to raise any concerns, whereas AI-based robots do. It is not very plausible to treat this challenge, solely in the context of care robots, as a decisive reason against their integration; moreover, attachment can also be understood positively (85). Such relationships often arise through deliberate, intentional or accepted actions that involve emotional and empathic aspects and cannot currently be realized by care robots in a human manner (81,82). The responsibility lies with those in charge, who must be aware of this potential problem even before a robot is used. Accordingly, it seems much more plausible to regard this ethical problem as important and worthy of consideration, but to locate it first and foremost in human encounters and responsibilities.

Although care robots and their integration address the problem of loneliness, some authors (54,82) fear that the problem will merely shift elsewhere, leading to a loss of social contact with human actors. At first glance, these considerations appear justified if constant contact with a machine has a positive effect on mental health while the role of care professionals is diminished. However, it should be kept in mind that care robots are primarily there to supplement missing resources, to realize an approach that cannot be implemented by humans, or to promote the described social activity by technical means (61,86). Their use therefore closes a social care gap rather than taking something away. The studies presented also show both the extent to which the feared loss of social contact can be compensated for by robot companions and how the human component can be preserved through appropriate planning (85,87). The concern about loneliness also seems unfounded because adequate goals and moral considerations must certainly play a role in the structural implementation and integration of care robots.

In connection with the difficulties discussed above, the statements of the High-Level Expert Group (HLEG) on AI offer certain points of reference, such as the promotion of diversity, basic safety, the protection of privacy and the promotion of human well-being, the last of which is central to this article (88). Such requirements must be considered as early as the planning of a specific robot. One conceivable path would be, for example, the integration of suitable care robots “into existing and effective treatment programs, ideally in ways that reduce the time-demand placed on human treatment providers” (28, p.40), or the implementation of new training programs with the robot companions, an option consistent with the preceding passages. Specific values and points of reference would then already be anchored in internalized routines and programs, which could guide the integration of care robots (73). A specific focus on the individual also appears essential, whereby a “personalization of the robot’s behavior to meet the specific needs of the user, determined by the user’s particular health situation as well as personality and preferences” (54, p.206) is crucial. Ultimately, the preceding investigation suggests that AI-based care robots offer an important option for mental health.

It has been shown that robots can be used to support human mental health. Conceivable integrations in the care sector relate to the promotion of social engagement, programmable cultural sensitivity, technical presence, a moderating role and, finally, the decisive difference. In this context, a new approach to mental health care can be developed, especially through the robot companions’ specific possibilities for action; mental health is generally assigned a high priority, or at least appears to be, yet it usually receives little consideration compared to physical suffering (19,20,37). There is immense potential here, given the constant and expected further development of care robots and an aging society. The account of fair access should not be understood as a universal solution, but rather as a conceivable path that can also stand up to ethical justification. At present, the use of care robots is plausible and justifiable primarily as support for care professionals, but this does not rule out future expansions of their scope of action (52). An attitude guided by moral values and ethical points of reference will be essential, one that nevertheless integrates the conviction that care robots represent a justifiable component in the realization of mental health.