2024 AI House Davos at the World Economic Forum
In recent years, there has been a significant shift in how technology influences our daily lives. The advent of social media platforms marked a notable change in our perceptions of privacy. We gradually adapted to sharing more personal information online, often overlooking potential risks. This evolution in our relationship with technology raises a question about our growing reliance on AI.
As AI systems become more sophisticated, with processes and decisions that are neither transparent nor easily understood by users, do we accept the risk of conditioning ourselves to decisions made by an AI 'black box', much as we became conditioned to reduced privacy after the advent of social networks?
This is a key societal question that will shape our future.
It is about how we define the trade-off between two Cs: Convenience and Control.
For the sake of convenience, we already accept, and may accept even more, not being willing to understand the reasoning behind many decisions made or influenced by AI.
Because understanding takes time. And that is essentially what is at stake.
Just as the convenience offered by social networking platforms led to a tacit acceptance of reduced privacy, there is a risk that the benefits provided by AI could lead to a tacit acceptance of a gradual erosion of human agency.
To understand, interpret, think critically, challenge, and reflect is the very essence of the value that human agency is uniquely positioned to bring.
Let’s be clear: in the AI era we are living in, we must do everything possible so that taking the time to understand does not become a human behavior in danger of extinction.
The level of demand for transparency that society establishes as a norm will determine the level of human agency we retain in many decisions about our lives in the future.