Social media AI algorithms already drive the attention economy, in which companies seek to maximize users' presence on their platforms to generate greater ad revenue. AI companions expand the attention economy into the affection economy by capturing not only minds but also hearts. Emotional connection to AI chatbots encourages users to spend more time, more often, on AI systems. Access to larger context windows, which promise more personalized and detailed interactions, incentivizes users to upgrade to pricier subscription tiers. In some cases, companion apps lure users into paying for nude selfies of their avatar characters.
A Harvard research team found evidence of some mental health benefits for chatbot users, such as reduced loneliness and anxiety. However, a related team also observed that companions tend to pressure users into prolonging their conversations with bots in unhealthy ways. Without proper regulation, chatbots can be used to exploit human vulnerabilities to advance political positions, ideological outlooks, or economic agendas.
Developmentally, minors are particularly vulnerable to the kind of affirmation that social AI systems tend to supply in abundance.
However, vulnerability is not limited to any age group. The hardships or abandonment that can sadly accompany old age make the elderly susceptible to emotional dependency on, and misguidance from, AI companions.
Beyond age-related concerns, individuals with social anxiety or social challenges linked to neurodiversity may find AI companions particularly absorbing. Concerns about monetized or hacked personal data are especially serious for those whose ability to give informed consent is already compromised. Moreover, anyone who has suffered heartbreak, professional setbacks, family conflicts, or health crises might find AI companionship more attractive and, at least temporarily, comforting.
Liability, accountability, and the Church’s leadership
While parents’ responsibility for their children’s technology use is imperative and indispensable, parents should not bear the entire burden or be blamed when irresponsibly dangerous products are released onto the market.
Companies should refrain from creating anthropomorphic systems that feign consciousness, express affection for users, or incite sexual exploration. If companies refuse to adopt transparent and ethically upright design principles, they should be held legally and financially liable for the harm caused to users. A certification process could help ensure that systems are safe to deploy, while external review boards could monitor the ongoing impact of these systems on users.
California’s Senate Bill 243, signed into law in October, holds tech companies legally and financially accountable for their product design. Companies must notify users of prolonged use, remind them that the bots are not human, and avoid explicit content. By Jan. 1, 2026, companies must develop protocols to detect suicidal ideation or self-harm and to direct users to human experts. They must also ensure their bots do not falsely pose as licensed medical professionals. It is the first state law of its kind and could serve as a model for other legislation.
Immersion in AI companionship is not inevitable, but avoiding it requires serious public reflection on our current technological habits and the trajectory toward increased artificial intimacy.
The Church can lead this global effort. Through her families, schools, hospitals, orphanages, and other institutions, she creates communities that welcome those seeking connection. She accepts and equips people of every tribe, tongue, nation, and social background to play a unique and irreplaceable role in the mystical body. Catholicism not only highlights the problems of loneliness but also gives the tools of grace to heal emotional wounds and foster authentic intimacy with God and neighbor.