Julian De Freitas, Ph.D., and I. Glenn Cohen, J.D.
In the wake of recent advancements in generative artificial intelligence (AI), regulatory bodies are trying to keep pace. One key decision is whether to require app makers to disclose the use of generative AI-powered chatbots in their products. We suggest that some generative AI-based chatbots lead consumers to use them in unintended ways that create mental health risks, making consumers contextually vulnerable, defined as a temporary state of susceptibility to harm or other adverse mental health effects arising from the interplay between a user's interactions with a particular system and the system's responses. We argue that for health apps, including medical devices and wellness apps, disclosure should be mandated. We also show that even when chatbots are disclosed in these instances, they may still carry risks because of app makers' tendency to humanize their chatbots. The current regulatory structure does not fully address these challenges. We discuss how app makers and regulators should proactively respond by considering where apps fall along the continuum of perceived humanness. For health-related apps, this evaluation should lead to a mandate or strong recommendation that neutral (nonhumanized) chatbots be the default, with any deviations from this standard requiring clear justification. (Funded in part by a Novo Nordisk Foundation Grant; NNF23SA0087056.)