
Blueprint for an AI Bill of Rights

The White House Office of Science and Technology Policy (OSTP) this week released a “Blueprint for an AI Bill of Rights,” which aims to address challenges posed by uses of “technology, data and automated systems” that could impinge on the rights of the American public.

The document lays out five principles that OSTP identified to guide the design, use, and deployment of automated systems, with the goal of protecting the public in the age of artificial intelligence. The Blueprint also provides a “technical companion” with guidance for organizations looking to put each of the five principles into practice.

The five principles include:
  1. “Safe and Effective Systems: You should be protected from unsafe or ineffective systems.”
  2. “Algorithmic Discrimination Protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.”
  3. “Data Privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how your data is used.”
  4. “Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.”
  5. “Human Alternatives, Consideration and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”

The document does not recommend any specific technologies to support the five principles (let alone require them of private companies, since the framework is positioned as voluntary), but the companion does lay out “Expectations for Automated Systems” under each principle.

The “expectations” are essentially recommendations for capabilities that would be underpinned by technology. For example, under “Safe and Effective Systems,” the document calls for:

  • Ongoing monitoring procedures
  • Recalibration procedures
  • Continuous evaluation of performance metrics and harm assessments
  • Retraining of models as necessary
  • Fallback mechanisms to allow reversion to a previously working system
  • Mechanisms for testing the accuracy of predictions or recommendations
  • Manual, human-led monitoring as a check in case of shortcomings in automated monitoring systems
  • Identification and tracking of data derived or inferred from prior model outputs
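Two of the expectations above, continuous evaluation of performance metrics and fallback to a previously working system, can be sketched in a few lines of code. This is a hypothetical illustration, not anything the Blueprint prescribes: the function names, the accuracy threshold, and the toy models are all assumptions made for the example.

```python
def accuracy(model, features, labels):
    """Fraction of correct predictions -- a stand-in for the richer
    performance metrics and harm assessments the Blueprint describes."""
    predictions = [model(x) for x in features]
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def monitored_predict(current_model, fallback_model, features, labels, threshold=0.9):
    """Serve the current model, but revert to a previously working
    fallback if accuracy on a labeled evaluation batch drops too low."""
    if accuracy(current_model, features, labels) < threshold:
        # Fallback mechanism: reversion to a previously working system.
        return fallback_model, [fallback_model(x) for x in features]
    return current_model, [current_model(x) for x in features]

# Toy demo: the retrained model has regressed, so monitoring triggers fallback.
old_model = lambda x: x > 0    # previously working system
new_model = lambda x: x <= 0   # regressed replacement
xs, ys = [-1, 1, 2], [False, True, True]
chosen, preds = monitored_predict(new_model, old_model, xs, ys)
print(chosen is old_model, preds)
```

In a production system the labeled evaluation batch would come from ongoing human review or delayed ground truth, and the threshold check would run on a schedule rather than on every prediction.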

The document also covers governance and reporting. Regarding governance, again under the “Safe and Effective Systems” principle, the companion document calls for “clear governance structures and procedures,” with responsibility for oversight resting “high enough in the organization that decisions about resources, mitigation, incident response, and potential rollback can be made promptly, with sufficient weight given to risk mitigation objectives against competing concerns.”

The companion also provides extensive recommendations around reporting. For example, it calls for documentation of the systems making predictions, the algorithms underlying the predictions, the data used in making them, the predictions themselves, and the human fallback options – in short, documentation of the full machine learning lifecycle. The text also provides, among other provisions, recommendations for an algorithmic impact assessment that would document the testing and evaluations performed and the mitigation measures taken.
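The lifecycle documentation described above can be thought of as a structured record attached to each deployed model. The sketch below is one hypothetical way to capture it; the field names and example values are assumptions for illustration, not a schema defined by the companion document.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelRecord:
    """Hypothetical record covering the documentation the companion calls for:
    the system, its algorithm, its data, its evaluation, and its human fallback."""
    system_name: str
    algorithm: str
    training_data: str
    evaluation_results: dict          # testing and harm-assessment metrics
    mitigations: list = field(default_factory=list)
    human_fallback: str = "unspecified"

# Example entry for an imaginary credit-decisioning model.
record = ModelRecord(
    system_name="loan-approval-v2",
    algorithm="gradient-boosted trees",
    training_data="applications_2020_2022.parquet",
    evaluation_results={"accuracy": 0.91, "demographic_parity_gap": 0.03},
    mitigations=["reweighted training samples"],
    human_fallback="appeals reviewed by a loan officer",
)
print(asdict(record)["human_fallback"])
```

In practice such records would be generated and versioned automatically by model-management tooling rather than written by hand, so that an algorithmic impact assessment can be assembled from them on demand.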

Again, this “AI Bill of Rights” does not mandate specific tools or technologies to meet the goals described in the text. Much like industry standards, which typically describe an end state and leave it to the adopter to figure out how to get there, the document is, in a sense, aspirational.

As Consumer Privacy World notes, “This effort is intended to further the ongoing discussion regarding privacy among federal government stakeholders and the public, but its impact on the private sector could well be limited because it assumes voluntary action rather than mandated outcomes.”

That said, if the federal government moves forward with adopting the provisions in the document for its own AI/ML efforts, the “recommendations” could later come to be applied to government contractors and subsequently be more broadly adopted as a de facto standard across the economy.

The release of the “AI Bill of Rights” joins broader efforts in the US Congress, state legislatures, the EU, and elsewhere to create guardrails around the use of AI and to prevent the misuse of data.

As we highlighted in our earlier post on the American Data Privacy and Protection Act (ADPPA), model lifecycle management tools from a solution provider like Verta can help organizations prepare for compliance with AI regulations, with capabilities for tracking and reporting on how models were created, trained, tested, deployed, monitored, and managed.

The “AI Bill of Rights” should give AI and ML practitioners further incentive to put the processes and tooling in place so that their organizations are prepared for a time when interpretable, trackable, and explainable algorithms are “must haves” to continue doing business.

Verta Insights, the research group at Verta, is launching a research study around AI Regulations, and we’re interested to hear your thoughts and concerns around regulations. Write me at andyreese@verta.ai to share your feedback, or let me know if you would like to participate in the study.
