By Tolu Oyekan, Managing Director and Partner, Boston Consulting Group (BCG)

Artificial Intelligence (AI) remains a top priority for business leaders worldwide, with a strong focus on reaping tangible results from their AI initiatives. This is supported by BCG’s recent AI Radar report, From Potential to Profit: Closing the AI Impact Gap, which found that 72% of African companies rank AI and/or GenAI as a top-three strategic priority and that 86% plan to increase their tech investments, pushing for more disruptive uses of AI.

While GenAI is an intuitive and approachable tool, it is important to recognise that implementing it in the workplace requires vigilance, discipline, and continuous effort.

Vigilance is crucial. Companies often assume that simply having humans in the loop will prevent any potential problems, leading them to deploy GenAI systems with a false sense of security. However, effective human oversight is only one part of a comprehensive solution, which must be designed alongside the GenAI system.

Human oversight is most effective when integrated with system design elements that aid in identifying and addressing potential issues. It should be combined with other vigilance strategies, such as testing, evaluating GenAI outputs, clearly defining use cases, and having response plans ready. Planning for oversight should occur during the product conception stage rather than as an afterthought during implementation.

African executives see talent and AI as complementary

Our AI Radar research found that African executives see talent and AI as complementary: 19% see AI taking the lead with humans keeping oversight; 66% see AI and humans collaborating in complementary roles; and 15% prioritise human talent, using AI only when necessary.

The centrepiece of most risk-reduction approaches is to have humans review GenAI’s output. But simply asking employees to take on oversight roles – evaluating GenAI outputs for inaccuracy, bias, or other errors – can create a false sense of security for several reasons.

One of those reasons is automation bias, the tendency to overly trust automated systems. The more reliable a technology seems – and GenAI seems very reliable – the less critical human reviewers become, overlooking errors they previously might have caught.

Corporate incentives can also discourage oversight. Thoughtfully reviewing GenAI output takes time, cutting into the promised efficiency gains. Concerned about the negative repercussions of slowing things down, reviewers might perform only cursory reviews.

These are just two of the roadblocks to effective human oversight outlined in our recent report on Responsible AI, You Won’t Get GenAI Right if You Get Human Oversight Wrong.

Designing effective oversight

We often talk about the 10-20-70 rule: top-performing organisations dedicate 10% of their efforts to algorithms, 20% to data and technology, and 70% to people, processes, and cultural transformation.

In implementing AI, 70% of the effort should be directed to people and processes. In other words, effective human oversight must be thoughtfully integrated into the system’s design, with the riskiest outputs receiving the most attention and escalation paths being clear and simple.
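
To make this concrete, here is a minimal sketch in Python of what risk-adjusted routing could look like. The risk tiers, thresholds, and review paths are hypothetical illustrations, not a prescribed design: higher-risk outputs are escalated to more expert review, while low-risk ones are only spot-checked.

    # A minimal sketch of risk-adjusted oversight routing.
    # The risk tiers and reviewer roles below are hypothetical
    # illustrations, not a recommended production design.
    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        LOW = "low"        # e.g. internal draft text
        MEDIUM = "medium"  # e.g. customer-facing copy
        HIGH = "high"      # e.g. financial or medical content

    @dataclass
    class GenAIOutput:
        content: str
        use_case: str
        risk_tier: RiskTier

    def route_for_review(output: GenAIOutput) -> str:
        """Return the escalation path for a given output.
        Riskier outputs get more, and more expert, review."""
        if output.risk_tier is RiskTier.HIGH:
            return "domain-expert review, then second sign-off"
        if output.risk_tier is RiskTier.MEDIUM:
            return "single qualified-reviewer check"
        return "spot-check sampling only"

    # Example: a loan-decision summary is high risk, so it is
    # escalated rather than waved through.
    summary = GenAIOutput("Applicant approved ...", "lending", RiskTier.HIGH)
    print(route_for_review(summary))

The point of such a design is that the escalation path is decided when the system is built, not improvised by individual reviewers.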

Companies can establish guidelines so that reviewer qualifications match the complexity and technical nature of the GenAI systems they’re overseeing. None of us would want an accountant reviewing our medical test results – and vice versa. The same holds true for GenAI reviewers. They should have the expertise to evaluate accuracy and other potential risks.

Additionally, organisations can build realistic time to review GenAI’s output into their business cases. For example, a bank might occasionally want to insert an incorrect output into its system to test whether the call centre worker passes it along. If companies don’t account for these tests, they will overestimate the time savings from the GenAI system.
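
One way to operationalise such tests is a seeded-error audit: known-bad outputs are mixed into the review queue, and the share that reviewers flag is tracked over time. The sketch below, again in Python, illustrates the idea; the function names and data shapes are assumptions for illustration, not any specific bank’s process.

    import random

    # A minimal sketch of a seeded-error audit: mix known-bad outputs
    # into the review stream, then measure how many reviewers flag.
    # All names and example values here are illustrative assumptions.

    def build_review_queue(real_outputs, seeded_errors):
        """Tag each item as (content, is_seeded_error) and shuffle,
        so reviewers cannot tell audit items from ordinary outputs."""
        queue = [(o, False) for o in real_outputs]
        queue += [(e, True) for e in seeded_errors]
        random.shuffle(queue)
        return queue

    def catch_rate(queue, flagged):
        """Share of seeded errors that reviewers actually flagged."""
        seeded = [content for content, is_seed in queue if is_seed]
        return sum(s in flagged for s in seeded) / len(seeded) if seeded else 1.0

    # Example: two errors seeded, reviewers flag only one -> 0.5,
    # a signal that reviews may be too cursory.
    queue = build_review_queue(["ok A", "ok B", "ok C"], ["bad X", "bad Y"])
    print(catch_rate(queue, flagged={"bad X"}))

A falling catch rate is an early warning of automation bias or cursory reviews, and the time such audits consume should itself be counted in the business case.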

Human supervision central to GenAI systems

To ensure meaningful oversight, organisations must take a proactive approach, treating human supervision as central to their GenAI systems.

Here are a few guidelines for CEOs and senior executives steering their organisations through GenAI adoption:

  • Reinforce the value GenAI can unlock so long as risks are appropriately managed.
  • Set the tone that GenAI oversight matters and that your teams should feel empowered to thoroughly evaluate outputs.
  • Take a risk-adjusted approach to oversight, focusing on those outputs that could most significantly affect business performance and brand strength.
  • Hold users – not GenAI systems – accountable for decisions to avoid the “AI made me do it” excuse.
  • Make sure your teams integrate human oversight into system design.

Educating users

Robust human oversight is fuelled by knowledge: not only evidence for and against the output but also an understanding of the strengths and limitations of GenAI technologies and the implications of a system’s different risks.

Educating users also means sharing results from the test-and-evaluation (T&E) phase and providing insight into when the system performs well and when there tend to be gaps. And it means articulating – clearly and precisely – the purpose of a GenAI system or use case, so reviewers can identify not only inaccurate results, but also deviations from the intended function.

In conclusion, human oversight is a critical element in harnessing GenAI’s capabilities while mitigating its risks. By integrating oversight into the design process and ensuring it is robust, organisations can foster vigilance, keep their operations safe, and utilise both technology and human capabilities effectively. When handled properly, oversight not only protects against GenAI’s shortcomings but also amplifies its value, ensuring a balanced coexistence of technology and human input.
