
The Looming Algorithmic Divide: Navigating the Ethics of AI




In recent months, the rapid adoption of generative artificial intelligence (gen AI), exemplified by OpenAI’s ChatGPT, has propelled AI into the global spotlight. Yet amid the fascination with AI’s new superhuman capabilities, an “algorithmic divide” is emerging, fueled both by disparities in technology access and literacy and by the cognitive biases inherent in AI models trained on available data. Bringing these challenges to the forefront will allow industry, creators, and society to manage them openly.


What Is the Algorithmic Divide?

While the ubiquity of AI in our lives is evident, its impact is not uniform across the globe. Beyond the well-known “digital divide,” the development and proliferation of AI have given rise to an “algorithmic divide” separating regions where AI thrives from those where it remains largely unexplored. Brookings’ Mark Muro and Sifan Liu estimate that just 15 cities account for two-thirds of the AI assets and capabilities in the United States (San Francisco and San Jose alone account for about one-quarter). As humans increasingly interact with algorithms, we are bound to undergo adaptations that could reshape our thinking, societal norms, and rules. And while new AI technologies such as large language models are poised to disrupt white-collar jobs, perhaps even more than blue-collar jobs, professionals from underserved communities face a major gap in access to the broadband and computing technologies that are vital to upskilling ahead of this shift. The algorithmic divide must be front and center for business and political leaders as we navigate this new wave of AI-driven transformation, so that the disparity does not widen.


Acknowledging Cognitive Biases

As AI becomes an integral part of our lives, it is imperative to examine the principles of ethical and responsible AI in society. While the focus often rests on biases transmitted from humans to machines, it is essential to recognize the vast array of biases ingrained in human cognition. These biases extend far beyond our individual or collective awareness and include confirmation bias, survivorship bias, availability bias, and many others. Acknowledging them is crucial because eliminating them entirely from the intelligent systems we develop is an unattainable goal. Just as data privacy has become close to a universal right for citizens, proposed legislation such as the European Union’s AI Act and the Algorithmic Accountability Act in the U.S. attempts to add transparency and protect consumers against AI bias.


Recognizing Augmented Biases

Eliminating one bias often introduces another. The impact of AI on human existence becomes a paramount concern, surpassing the issue of biases themselves. Creators of artificially intelligent entities bear the responsibility of continuously auditing the societal changes caused by these systems and optimizing positive effects while minimizing harm. As cognitive biases can have profoundly negative consequences, their amplification through AI raises critical questions. What are the potential negative effects of artificially augmented cognitive biases when computing power acts as an amplification factor? Are companies prepared to take responsibility for the unintended consequences that AI-based agents may impose on humans as we rely more on machines to augment our decisions? Can AI aid in reducing biases in datasets, and how do we determine which biases are tolerable or dangerous?


Being Aware of ‘Alter Ego AI’

A vital concept for AI creators to grasp is that the introduction of one AI into society inevitably gives rise to another: a counterpart or alter ego. As AI advances and achieves unprecedented efficiency, a complementary AI emerges to restore equilibrium. This “Dual-Sided Artificial Intelligence” (DSAI) effect ushers in an era of machine-to-machine interaction and competition. It is crucial for AI creators to ensure that human agency remains central in this landscape. The defects and qualities of AI, which derive from their human creators, present a superhuman challenge due to the often-invisible biases inherent in these systems. OpenAI, for example, has developed its own classifier to help users determine whether a written response was generated by a human or by AI, along with mechanisms to reference where the underlying data was sourced.
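To make detection concrete, here is a minimal, hedged sketch of one generic approach, not OpenAI’s actual classifier: scoring text by its perplexity under an open language model, since machine-generated text tends to be unusually predictable. It assumes the Hugging Face transformers library and the public GPT-2 weights, and any decision threshold would need careful calibration.

```python
# A generic illustration of AI-text detection via language-model perplexity.
# This is NOT OpenAI's classifier, just one well-known heuristic: text that a
# language model finds unusually predictable (low perplexity) is more likely
# to be machine-generated. Assumes the Hugging Face `transformers` library.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2's perplexity on `text`; lower means more predictable."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing the inputs as labels makes the model return cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

sample = "Artificial intelligence is transforming how organizations operate."
print(f"perplexity = {perplexity(sample):.1f}")  # choosing a threshold is the hard part
```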


Nine Guideposts for Closing the Algorithmic Divide

As the new wave of AI technologies propels us towards a new paradigm for work and life with both promise and peril ahead, what can leaders do now to head off the looming algorithmic divide that will grow if left unchecked?


Make AI readiness one of the key priorities on the company agenda.

  • Educate the leadership team and employees by unpacking the “black box” of AI’s capabilities and applications to develop AI literacy across the company and reduce the fear factor.

  • Build trust and transparency into AI by establishing employee innovation challenges to spur AI experimentation across the business, and by building an AI innovation ecosystem of customers, technology vendors, and AI start-ups to help co-create, validate, and scale new AI solutions.

  • Apply human-centric design thinking to identify and pilot quick wins using AI to demonstrate near-term value while pursuing longer-term, more transformative opportunities, to gain buy-in across the organization, with customers, and more broadly with society.

Ensure that AI governance is foundational to AI development at scale.

  • Establish policies and frameworks that guide responsible AI development and continuous improvement in your organization, such as Rolls-Royce’s Aletheia Framework, which assesses new AI solutions on 32 dimensions across social impact, accuracy/trust, and governance, fostering a culture of responsible AI development and usage (a simplified sketch of such an assessment checklist follows this list).

  • Build responsible AI governance principles and expertise involving supporting functions such as legal, compliance, regulatory, HR, procurement, and IT to help accelerate the adoption of “responsible” AI leadership by balancing speed and guardrails to manage the inherent risks in AI solutions.

  • Participate in industry working groups along with government policymakers to ensure there is a balance of AI innovation and evolving regulations across industries while providing universal access to new AI technologies as the workforce is transformed.
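
To illustrate how a framework-driven review can be operationalized, the sketch below models an assessment checklist that every new AI use case must pass before scaling. This is a minimal sketch: the dimension names are hypothetical placeholders, not the Aletheia Framework’s actual 32 facets.

```python
# A minimal sketch of a responsible-AI assessment checklist, loosely inspired by
# framework-style reviews such as the Aletheia Framework. The dimension names
# below are hypothetical placeholders, not the framework's real 32 facets.
from dataclasses import dataclass, field

@dataclass
class AIAssessment:
    use_case: str
    # Each dimension is signed off by a reviewer: True means the requirement is met.
    dimensions: dict[str, bool] = field(default_factory=lambda: {
        "social_impact_reviewed": False,
        "training_data_provenance_documented": False,
        "accuracy_validated_against_baseline": False,
        "bias_audit_completed": False,
        "human_override_in_place": False,
        "accountable_owner_named": False,
    })

    def approved_to_scale(self) -> bool:
        """Approve deployment at scale only when every dimension passes."""
        return all(self.dimensions.values())

review = AIAssessment(use_case="customer-churn prediction model")
review.dimensions["social_impact_reviewed"] = True
print(review.approved_to_scale())  # False: remaining dimensions still need sign-off
```

Wiring such a checklist into release pipelines makes governance a gate rather than an afterthought.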

Make technology investments that serve a sustainable future.

  • Build algorithmic trust by promoting transparency in AI algorithms, using an internal AI “sandbox” to enable auditing and assessment of their environmental impact, helping identify areas for improvement and minimize unintended consequences.

  • Incorporate data protection, AI detectors, source data identification, and explainable AI approaches as fundamental quality requirements in the design of any AI-enabled application, while reducing unnecessary data storage and processing (a brief illustration of explainability as a quality gate follows this list).

  • Ensure your data and technology infrastructure is AI-ready, including the ability to open up and expand your data/content assets for AI innovations while maintaining security and privacy, and use algorithms that are optimized for energy consumption to reduce the computational resources required for AI tasks.
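
As a hedged illustration of the explainability requirement above, the sketch below uses permutation feature importance, a simple model-agnostic technique, to surface which inputs actually drive a model’s predictions. The dataset and model here are stand-ins for whatever the AI application really uses.

```python
# Illustrative sketch: treating explainability as a quality gate using permutation
# feature importance (model-agnostic). The dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the accuracy drop: large drops flag
# the inputs that drive predictions and deserve scrutiny for bias and provenance.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```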

Our Collective Responsibility for the Ethics of AI: Moving Towards a Human-Centric Future

The algorithmic era, already unfolding in various parts of the world, forces us to contemplate humanity’s role in the face of AI-driven “machine-to-machine” interactions. Developing responsible practices that put humans first is neither merely a competitive advantage nor a localized endeavor, and it should not be the exclusive property of any single company. Any other practice could not, and should not, be contemplated.

As with earlier disruptive technology waves such as the internet, it will be critical for society to guide the evolution of generative AI in a direction where the benefits are available to the full spectrum of innovators and end users who want to leverage this powerful technology, especially those with the least access today.

Elon Musk, Steve Wozniak, and a number of notable scientists have called for a pause on the development of AI systems more powerful than GPT-4. Now is the time for leaders to define the fundamental and universal principles that will guide their organizations’ use of powerful AI technologies, to ensure we shape an ethical AI landscape that serves humanity’s best interests.


This article was written by Scott A. Snyder, a senior fellow at Wharton, adjunct professor at Penn Engineering, and chief digital officer at EVERSANA, and Hamilton Mann, group vice president for digital marketing and digital transformation at Thales. It was published in Knowledge at Wharton.
