Designing Positive AI: Key Challenges
How might we design AI that supports the wellbeing of conscious creatures?
Artificial intelligence is transforming the world as we know it—for better or for worse. As AI systems become increasingly advanced and ubiquitous, it is crucial that we develop them to align with human values and priorities, especially human wellbeing. However, creating AI that actively promotes wellbeing, which we call Positive AI, faces many challenges.
In a new preprint, we identify 12 key challenges that must be addressed to develop Positive AI. These challenges fall into two main categories: a lack of knowledge and a lack of motivation.
Lack of knowledge
Choosing the right theoretical paradigm: There are many ways to conceptualize wellbeing, and the right one depends on the context. Wellbeing could focus on happiness, life satisfaction, health, relationships, purpose, and more. This makes it hard to determine which facet(s) of wellbeing an AI system should target.
Modeling wellbeing and attributing fluctuations: Wellbeing is complex, dynamic, and hard to model, making it difficult to determine what causes changes. There are many open questions about how factors like technology use, physical activity, sleep, and social interaction influence wellbeing. Without understanding these relationships, AI cannot effectively support wellbeing.
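To make the attribution problem concrete, here is a toy sketch (all data and coefficients are hypothetical, not from the paper) of the naive approach: regress a daily wellbeing score on logged factors like sleep, activity, and screen time. Even in this idealized setting with simulated data, noisy estimates hint at how much harder attribution is with real, confounded measurements.

```python
import numpy as np

# Hypothetical daily logs for one person (all values simulated).
rng = np.random.default_rng(0)
days = 60
sleep = rng.normal(7.0, 1.0, days)        # hours of sleep
activity = rng.normal(30.0, 10.0, days)   # minutes of physical activity
screen_time = rng.normal(4.0, 1.5, days)  # hours of screen time

# Simulated daily wellbeing score with noise -- in reality the true
# relationship is unknown, which is exactly the challenge.
wellbeing = (5 + 0.4 * sleep + 0.02 * activity - 0.3 * screen_time
             + rng.normal(0, 1.0, days))

# Ordinary least squares: attribute fluctuations to the three factors.
X = np.column_stack([np.ones(days), sleep, activity, screen_time])
coef, *_ = np.linalg.lstsq(X, wellbeing, rcond=None)
print(dict(zip(["intercept", "sleep", "activity", "screen_time"], coef)))
```

Even here the recovered coefficients only approximate the true ones; with unmeasured confounders and feedback loops, real-world attribution is far less tractable.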
Measuring wellbeing in context: We need ways to assess wellbeing that consider individuals and environments. Traditional wellbeing measures may not capture how technologies like AI affect people. Without context-appropriate measures, AI cannot gauge its impact on wellbeing.
Translating between human and system scales: It’s hard to apply insights from small-scale wellbeing research to large-scale AI optimization. AI requires huge datasets, but wellbeing is deeply personal. This mismatch in scale makes it difficult to design AI that benefits individuals.
Correlating self-reports and behavioral data: We lack understanding of how to relate what people say about their wellbeing to how they behave. Self-reports are the gold standard for assessing wellbeing, but they don't match the behavioral data AI systems typically collect. Without relating these data types, AI cannot gain a complete view of wellbeing.
Managing optimization tradeoffs: Algorithmic optimization involves tradeoffs that are hard to identify and balance. Optimizing for one aspect of wellbeing could compromise another. These unforeseen tradeoffs challenge AI alignment with comprehensive wellbeing.
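The tradeoff problem can be illustrated with a toy scalarization: combine two objectives into one score and watch the "best" action flip as the weights change. The candidates and scores below are invented for illustration, not taken from any real system.

```python
# Toy scalarization: each candidate recommendation has a predicted
# engagement score and a predicted wellbeing effect (both hypothetical).
candidates = {
    "outrage_clip":  {"engagement": 0.9, "wellbeing": -0.4},
    "friend_update": {"engagement": 0.6, "wellbeing": 0.3},
    "sleep_nudge":   {"engagement": 0.2, "wellbeing": 0.5},
}

def best(weight_wellbeing: float) -> str:
    """Pick the candidate maximizing a weighted sum of both objectives."""
    return max(
        candidates,
        key=lambda c: (1 - weight_wellbeing) * candidates[c]["engagement"]
        + weight_wellbeing * candidates[c]["wellbeing"],
    )

print(best(0.0))  # weight only engagement
print(best(0.8))  # weight mostly wellbeing
```

The hard part is not the arithmetic but choosing the weights, and noticing when optimizing one facet of wellbeing silently degrades another that was never scored.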
Dealing with differences in pace: AI systems optimize and adapt quickly, but effects on wellbeing emerge slowly over time. This mismatch in pace makes it hard for AI to respond appropriately to support wellbeing.
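A toy simulation of this pace mismatch (all numbers hypothetical): a design change has a fixed long-run wellbeing effect, but the observed signal drifts toward it only gradually, so an optimizer evaluating the change after one step sees almost nothing.

```python
# The "true" long-run wellbeing impact of a design change (hypothetical).
true_effect = -0.5

# Wellbeing drifts slowly toward the true effect: first-order lag dynamics.
observed = []
lagged = 0.0
for step in range(30):
    lagged += 0.1 * (true_effect - lagged)
    observed.append(lagged)

# An optimizer judging the change after 1 step vs 30 steps sees very
# different pictures of its impact.
print(f"after 1 step:   {observed[0]:+.2f}")
print(f"after 30 steps: {observed[-1]:+.2f}")
```

A fast optimizer reading the early signal would conclude the change is nearly harmless; the damage only becomes visible on a timescale it was never designed to wait for.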
Designing actions to promote wellbeing: It's unclear how to design AI systems in a way that actually improves wellbeing. There are few examples and limited methods for "Positive AI." Without proven ways to design AI for wellbeing, progress in this area will remain limited.
Lack of motivation
Institutional challenges: Companies focus on short-term metrics and growth over wellbeing; education and policy could help address this. Incentives must change to motivate building wellbeing-focused AI. Without proper incentives and metrics, companies will not prioritize wellbeing.
Economic challenges: Effects on wellbeing and revenue are unclear, and it may take too long to see how they relate. Companies may avoid wellbeing if the business case is uncertain. Unproven business cases reduce motivation to build AI for wellbeing.
Data access challenges: It’s hard for researchers to study commercial AI’s impact on wellbeing without access to proprietary data and algorithms. Transparency is needed but risks competitive advantage. Limited data access prevents understanding AI's influence on wellbeing.
Public relations challenges: Companies may avoid publishing research showing negative effects on wellbeing due to bad publicity. Addressing wellbeing issues openly could damage reputations and stock prices. The risk of bad publicity deters companies from openly examining AI's impact on wellbeing.
There are three main takeaways from our analysis:
We must improve our understanding of how AI influences wellbeing. This means creating better models of wellbeing that consider individuals, communities, and real-world dynamics; developing methods to measure wellbeing changes over time; and finding ways to apply insights from small-scale research to large-scale AI.
We need to design AI systems that actively foster wellbeing. Wellbeing cannot be an afterthought; it must be built in from the start. Approaches like Positive Design and Positive Computing can help, but designers must play a key role in developing human-centered AI.
Positive AI starts with the belief that we can use technology to better the world. We need examples and business cases showing how prioritizing wellbeing can be profitable. If companies gain a competitive advantage from it, a virtuous cycle may emerge where wellbeing-focused AI becomes the norm.
Overall, aligning AI with wellbeing is crucial but challenging. With AI set to transform our lives, prioritizing human flourishing could help ensure that its impact on society is positive. By improving knowledge, design, and motivation, we can work towards creating Positive AI systems that enhance life for individuals and communities alike. The future remains unwritten, but focusing AI on human wellbeing may help make it brighter.
Read more at: https://arxiv.org/abs/2304.12241