
Five Steps to Responsible AI

Public discussions around Responsible AI (RAI) have typically popped up when something has clearly gone wrong: a self-driving vehicle causes an accident, facial recognition software misclassifies women, or a marketing algorithm allows companies to target real estate ads based on race.

But the alarm bells around AI have become louder and more urgent since late 2022 with the explosion of interest in Generative AI (GAI) and Large Language Models (LLMs). The seemingly miraculous ability of tools like DALL-E and ChatGPT to create images and essays from prompts has raised concerns about perpetuating biases baked into training data, spreading misinformation based on AI “hallucinations,” and violating privacy with deepfakes, among other potential abuses. (As a side note, I don’t like the term “hallucination,” because the model is simply relating what it learned rather than making anything up, so “delusion” would probably be a better term.)

As a result of all the hype and concerns, everyone suddenly has thoughts about bias, fairness, transparency and explainability in AI.

The truth, however, is that Responsible AI concepts have been gaining traction in the tech sector for several years. Amazon, Google, IBM, Meta and Microsoft came together in 2016 to establish the Partnership on AI to promote the responsible use of artificial intelligence. The Responsible AI Institute followed in 2017 to advance safe and trustworthy AI.

More recently, national and local governments have been proposing and enacting a growing number of laws and regulations to enforce Responsible AI principles. The proposed EU AI Act and the American Data Privacy and Protection Act (ADPPA) are just two prominent examples of legislation likely to impact companies using machine learning in the near future.

It is not surprising, then, that many companies already have taken steps toward implementing Responsible AI principles in their AI/ML projects. In fact, a 2023 study by Verta Insights, the research group in our organization, found that nearly half (46%) of companies currently have a Responsible AI team or initiative.

What are Responsible AI principles?

  • Fairness – ensuring that AI systems do not make decisions that discriminate against particular groups of people.
  • Accountability – creating mechanisms to hold AI developers and users responsible for the outcomes of the algorithms.
  • Transparency – opening up the “black box” so that it’s clear how an AI system was developed and deployed.
  • Privacy – protecting personally identifiable information (PII) and the overall privacy of those subject to the algorithms.
  • Safety – prioritizing the protection of individuals and society as a whole in the design and use of AI.

In a follow-up post, I’ll provide a deeper dive into each of the above principles, but for now let’s consider five steps that companies can take to adopt Responsible AI.

1. The first step is to establish RAI as a strategic priority that has the focus and support of senior leadership. AI/ML initiatives involve and affect stakeholders across a business, so it’s important that communication of RAI priorities comes from the top. Senior executives should take the lead in fostering a culture of responsibility and ethical decision-making throughout the organization.

2. Next, the organization should establish an AI governance structure for managing and monitoring AI models, with empowered RAI leadership and clear lines of responsibility and accountability. An RAI executive can build the team needed to implement and monitor RAI effectively, and also take the lead in communicating and enforcing the RAI principles cited above throughout the organization.

3. With the structure in place, the organization is ready to conduct a risk assessment to understand its current RAI maturity level and identify potential red flags and ethical concerns arising from its use of AI. The RAI group or executive should lead this assessment, but other stakeholders should also be involved, including legal, compliance and IT.

The result of this cross-functional assessment should be a “current state” description of the organization’s RAI standards, controls and tools, along with a roadmap for developing its RAI maturity. The roadmap could include investments in technology and staffing, but should also cover training for staff involved in the ML lifecycle on topics such as bias detection and mitigation, algorithmic transparency and ethical decision-making.

4. Based on its roadmap, the organization should work to integrate Responsible AI practices into the design and development process of AI systems. This should include techniques such as fairness testing, interpretability and explainability.
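To make “fairness testing” a little more concrete, here is a minimal sketch of one common check, the demographic parity difference, written in plain Python. The sample predictions, group labels and 0.2 threshold below are illustrative assumptions for this post, not a prescribed standard or a specific Verta API; a real project would run a check like this against held-out validation data and the protected attributes relevant to its own use case.

```python
# Minimal sketch of a fairness test: demographic parity difference.
# All data, names, and the threshold are illustrative assumptions.

from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction
    rates across the groups present in `groups`."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: model predictions (1 = approve) and a protected attribute
# for a small validation set.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
THRESHOLD = 0.2  # assumed organizational tolerance; set per policy
print(f"Demographic parity difference: {gap:.2f}")
if gap > THRESHOLD:
    print("Fails the fairness gate -- investigate before deployment.")
```

A check like this can run as a gate in the model release process, and similar gates can wrap interpretability and explainability reports, so that every model must clear the organization’s Responsible AI criteria before promotion.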

5. Moving forward, the organization should regularly review and evaluate its Responsible AI framework and governance structure to identify areas for improvement and adapt to changing ethical and regulatory landscapes.

By taking these steps, organizations can ensure that their use of AI is ethical, responsible and aligned with their values and missions. Implementing a Responsible AI initiative can also help organizations build trust with customers and stakeholders, as well as mitigate potential legal and reputational risks.

Look for an upcoming post that takes a deeper look at the principles of Responsible AI.
