Imagine facing an economic downturn with unpredictable consumer behaviors and with products or services that are not keeping pace with the market. As a business leader, you're finding that the usual strategies aren’t producing predictable results, and investments in technology-driven initiatives aren’t maturing fast enough to keep up with inflation. If that wasn’t enough, work culture is resisting change and performance is declining as you see signals of quiet quitting.
In these unprecedented times of uncertainty, the natural response might be to move faster, decide quicker, and double down to transition out of this state as quickly as possible. Instead, what if you paused for a moment to reflect more systematically on the decisions being made, and dropped into System 2 thinking[1]? What led to the current scenario? What if, by taking the time to examine the quality of your decisions, you could better make sense of the patterns and barriers that impede adaptation and resiliency?
Not all leaders are equipped with the tools to create strategies and make good decisions at the level of uncertainty and complexity we currently face in the global economy. It's time to think about how leaders can externalize some of their thinking and leverage emerging technologies such as data platforms and continuously learning AI models, an approach HCI professor Yvonne French calls Integrated Thinking[2].
AI models don't just predict or find correlations; they are learning to improve themselves faster than humans can. Models can reason, represent knowledge, and exhibit human-like perception. On top of that, new marketplaces are emerging faster than ever. For example, PromptBase, a marketplace for buying and selling prompts, emerged from the need to better leverage large language models (LLMs) like GPT-3 and other generative models like DALL-E and Stable Diffusion.
For those in leadership roles, now is not the time to be defensive or make decisions from a place of fear. It's time to build organizational resiliency by improving the quality of decisions and getting innovative when it comes to business strategy.
Technology guided by Good AI can help shepherd these positive and innovative principles. Here are five Good AI principles that organizations can implement today to navigate uncertainty:
- Collect data that represents healthy financial behaviors.
It's challenging for transactional data to represent long-term benefits and perceived value. Developing healthy behavioral models that embody well-being and social fairness requires a different way of collecting and curating data, one that also protects privacy. Through user research, teams can curate a healthy dataset coupled with data evaluation processes that mitigate the risk of harmful models.
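As one illustration of what such a data evaluation process might look like, the sketch below checks a dataset for under-represented groups and strips direct identifiers before curation. The records, field names, and the 40% threshold are all hypothetical, chosen only to make the idea concrete.

```python
from collections import Counter

# Hypothetical transaction records; field names are illustrative only.
records = [
    {"user_id": 1, "region": "north", "savings_rate": 0.15},
    {"user_id": 2, "region": "north", "savings_rate": 0.22},
    {"user_id": 3, "region": "south", "savings_rate": 0.05},
]

def evaluate_representation(rows, group_key, min_share):
    """Flag groups whose share of the dataset falls below min_share,
    a coarse signal that a model trained on it may under-serve them."""
    counts = Counter(r[group_key] for r in rows)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

def drop_identifiers(rows, identifiers=("user_id",)):
    """Strip direct identifiers before curation to protect privacy."""
    return [{k: v for k, v in r.items() if k not in identifiers} for r in rows]

under = evaluate_representation(records, "region", min_share=0.4)
curated = drop_identifiers(records)
```

In practice this kind of check would sit alongside user research and domain review, not replace them; the point is that evaluation is a repeatable step in the pipeline, not an afterthought.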
- Measure to incentivize collective well-being.
Designing for clicks (i.e., "engagement") on the basis of behavioral economics fails to acknowledge or take accountability for a participant's well-being. Replace engagement measures with value-driven measures like responsibility, fairness, justice, and agency. Models that offer feedback on things like mortgage qualification should be based on future potential rather than extrapolation from historical data alone. Synthetic data and Futures Thinking activities give designers the ability to simulate plausible future scenarios and design incentives that nudge behaviors toward collective well-being.
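To make the contrast with click metrics concrete, here is a minimal sketch of scoring synthetic future scenarios with a value-driven measure. The scenario distributions and the "savings headroom" score are invented for illustration; a real well-being measure would be grounded in user research.

```python
import random

random.seed(42)  # deterministic for the example

def simulate_scenarios(n, base_income=3000.0):
    """Generate synthetic household scenarios (illustrative distributions)."""
    scenarios = []
    for _ in range(n):
        income = base_income * random.uniform(0.7, 1.3)
        essential_spend = income * random.uniform(0.4, 0.9)
        scenarios.append({"income": income, "spend": essential_spend})
    return scenarios

def well_being_score(scenario):
    """A value-driven measure instead of clicks: the fraction of income
    left after essential spending (savings headroom)."""
    return max(0.0, 1.0 - scenario["spend"] / scenario["income"])

scenarios = simulate_scenarios(100)
avg_headroom = sum(well_being_score(s) for s in scenarios) / len(scenarios)
```

An incentive could then be tuned so that nudges raise the average headroom across simulated futures, rather than raising time-on-site.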
- Diversify participation in the design and development of emerging technologies.
Develop decision-making tools that give end-users more control in creating their own experiences. Anticipatory design workshops drive collaboration by aligning diverse perspectives, from stakeholders to end-users, to create value and to anticipate outcomes and potential failures before designing the experience. For example, stakeholders and end-users can work together to design tools that help citizens spend within their means and protect them from harmful debt.
- Upskill by monitoring AI system outcomes.
Historically, regulatory or ethics offices have been the sole arbiters of oversight. However, everyone in the organization needs to be responsible for monitoring and flagging potential harm from AI systems. That means upskilling the organization for AI readiness through feedback loops. By involving non-technical teams in the decision-making and development processes of AI systems, you're building AI literacy through continuous learning activities.
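The feedback loop described above can be sketched as a simple org-wide flagging channel. Everything here is hypothetical: the model name, the flag fields, and the in-memory store stand in for whatever ticketing or observability tooling an organization already uses.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HarmFlag:
    """A report anyone in the organization can file against a model output."""
    model: str
    output_id: str
    reason: str
    reporter_team: str
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class FeedbackLoop:
    """Minimal sketch of an org-wide flagging channel (in-memory only)."""
    def __init__(self):
        self.flags = []

    def flag(self, model, output_id, reason, reporter_team):
        report = HarmFlag(model, output_id, reason, reporter_team)
        self.flags.append(report)
        return report

    def flags_for(self, model):
        return [f for f in self.flags if f.model == model]

loop = FeedbackLoop()
loop.flag("credit-scorer-v2", "out-881",
          "score penalizes thin credit files", "customer-support")
```

The design choice that matters is the `reporter_team` field: when support, sales, or operations can file a flag as easily as the ML team, monitoring becomes a literacy-building activity rather than a compliance silo.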
- Incentivize co-ownership of data assets.
Re-examine measures of productivity and shift to an innovative, collaborative value-creation model. One strategy is to reward people with co-ownership for contributing to the collection and use of their data in the development of an internal AI system.
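One way to picture such a co-ownership incentive is a contribution ledger that assigns shares in proportion to the records each person contributes. This is purely an illustrative sketch of the accounting, not a legal or compensation framework.

```python
class DataCoOwnershipLedger:
    """Illustrative ledger: contributors earn a share of an internal
    AI dataset proportional to the records they contribute."""

    def __init__(self):
        self.contributions = {}

    def contribute(self, person, n_records):
        """Record n_records contributed by person."""
        self.contributions[person] = self.contributions.get(person, 0) + n_records

    def share_of(self, person):
        """Person's fractional ownership of the dataset so far."""
        total = sum(self.contributions.values())
        return self.contributions.get(person, 0) / total if total else 0.0

ledger = DataCoOwnershipLedger()
ledger.contribute("ana", 30)
ledger.contribute("ben", 70)
```

Shares computed this way could then feed whatever reward the organization chooses, such as recognition, revenue sharing, or governance votes over how the dataset is used.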
The time for change is always. For many organizations, making the leap to emerging technologies is a daunting challenge reserved for good times. It's time to change the paradigm from "business as usual" to thinking in systems, growing collective well-being, and creating future scenarios that are built for resilience.
Reach out to The BIO Agency team at firstname.lastname@example.org to learn more about our strategic services and sense-making activities to get you started.
1. In his book "Thinking, Fast and Slow", Daniel Kahneman models behavior by contrasting System 1 and System 2 thinking. System 1 is intuitive, fast, automatic, and prone to errors in decision-making; System 2 is more like your voice of reason, more deliberate and rational. System 2 checks your biases and validates assumptions, while System 1 helps you react quickly by relying on heuristics.
2. In a video (https://www.youtube.com/watch?v=geLIz-9aa84), HCI professor Yvonne French, inspired by Kahneman's theory, defines integrated thinking as externalizing thoughts to be more systematic and reduce bias while solving problems with high uncertainty. This matters because experts carry a heavy cognitive load: making decisions from too much information, fielding demands on their attention that force constant context switching, and doing most of their thinking in their heads.