Women in AI: Miriam Vogel stresses the need for responsible AI


To give AI-focused women academics and others their well-deserved, and overdue, time in the spotlight, TechCrunch has been publishing a series of interviews focused on remarkable women who’ve contributed to the AI revolution. We publish these pieces throughout the year as the AI boom continues, highlighting critical work that often goes unrecognized.

Miriam Vogel is the CEO of EqualAI, a nonprofit created to reduce unconscious bias in AI and promote responsible AI governance. She also serves as chair of the recently launched National AI Advisory Committee, which Congress tasked with advising President Joe Biden and the White House on AI policy. She teaches technology law and policy at Georgetown University Law Center.

Vogel previously served as associate deputy attorney general at the Justice Department, advising the attorney general and deputy attorney general on a broad range of legal, policy, and operational issues. As a board member at the Responsible AI Institute and senior advisor to the Center for Democracy and Technology, Vogel advised White House leadership on initiatives ranging from women’s economic, regulatory, and food safety policy to matters of criminal justice.

Briefly, how did you get your start in AI? What attracted you to the field?

I started my career working in government, initially as a Senate intern, the summer before 11th grade. I got the policy bug and spent the next several summers working on the Hill and then the White House. At that point, I focused on civil rights, which is different from the conventional path to artificial intelligence, but looking back, it makes perfect sense.

After law school, my career progressed from being an entertainment attorney specializing in intellectual property to engaging in civil rights and social impact work in the executive branch. I had the privilege of leading the Equal Pay Task Force while I served at the White House. While serving as associate deputy attorney general under former Deputy Attorney General Sally Yates, I led the creation and development of implicit bias training for federal law enforcement.

I was asked to lead EqualAI based on my experience as a lawyer in tech and my background in policy addressing bias and systemic harms. I was attracted to this organization because I realized AI presented the next civil rights frontier. Without vigilance, decades of progress could be undone in lines of code.

I have always been excited about the possibilities created by innovation. I still believe AI can present excellent new opportunities for more populations to thrive — but only if we are careful at this critical juncture to ensure that more people can meaningfully participate in its creation and development.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

We all have a role to play in ensuring that our AI is as effective, efficient, and beneficial as possible. That means doing more to support women’s voices in its development (women, by the way, account for more than 85% of purchases in the U.S., so ensuring their interests and safety are incorporated is an intelligent business move), as well as the voices of other underrepresented populations of various ages, regions, ethnicities, and nationalities who are not sufficiently participating.

As we work toward gender parity, we must ensure more voices and perspectives are considered to develop AI that works for all consumers—not just AI that works for developers.

What advice would you give to women seeking to enter the AI field?

First, it is never too late to start. Never. I encourage all parents to use OpenAI’s ChatGPT, Microsoft’s Copilot, or Google’s Gemini. We will all need to become AI literate to thrive in what is to become an AI-powered economy. That is exciting! We all have a role to play. Whether you are starting a career in AI or using AI to support your work, women should be trying out AI tools to see what these tools can and cannot do, see whether they work for them, and generally become AI savvy.

Second, responsible AI development requires more than just ethical computer scientists. Many people think that the AI field requires a computer science or some other STEM degree when, in reality, AI needs perspectives and expertise from women and men of all backgrounds. Jump in! Your voice and perspective are needed. Your engagement is crucial.

What are some of the most pressing issues facing AI as it evolves?

First, we need greater AI literacy. We are “AI net positive” at EqualAI, meaning we think AI will provide unprecedented opportunities for our economy and improve our daily lives — but only if these opportunities are equally available and beneficial for a greater cross-section of our population. We need our current workforce, the next generation, our grandparents — all of us — to be equipped with the knowledge and skills to benefit from AI.

Second, we must develop standardized measures and metrics to evaluate AI systems. Standardized evaluations will be crucial to building trust in our AI systems, allowing consumers, regulators, and downstream users to understand the limits of the AI systems they are engaging with and determine whether that system is worthy of our trust. Understanding how a system is built to serve the envisioned use cases will help us answer the critical question: For whom could this fail?

What are some issues AI users should be aware of?

Artificial intelligence is just that: artificial. It is built by humans to “mimic” human cognition and empower humans in their pursuits. We must maintain the proper amount of skepticism and engage in due diligence when using this technology to ensure we place our faith in systems that deserve our trust. AI can augment—but not replace—humanity.

We must remain clear that AI consists of two main ingredients: algorithms (created by humans) and data (reflecting human conversations and interactions). As a result, AI reflects and adapts to our human flaws. Bias and harms can be embedded throughout the AI lifecycle, whether through algorithms written by humans or data that provides a snapshot of human lives. However, every human touchpoint is an opportunity to identify and mitigate the potential harm.

Because one can only imagine as broadly as one’s own experience allows, AI programs are limited by the constructs under which they are built. The more people with varied perspectives and experiences on a team, the more likely they are to catch biases and other safety concerns embedded in their AI.

What is the best way to responsibly build AI?

Building AI that is worthy of our trust is our responsibility. We can’t expect someone else to do it for us. We must start by asking three basic questions: (1) For whom is this AI system built? (2) What are the envisioned use cases? (3) For whom could this fail? Even with these questions in mind, there will inevitably be pitfalls. To mitigate these risks, designers, developers, and deployers must follow best practices.

At EqualAI, we promote good “AI hygiene,” which involves planning your framework, ensuring accountability, and standardizing testing, documentation, and routine auditing. We also recently published a guide to designing and operationalizing a responsible AI governance framework, which delineates the values, principles, and framework for implementing AI responsibly at an organization. The paper serves as a resource for organizations of any size, sector, or maturity that are adopting and implementing AI systems with an internal and public commitment to do so responsibly.

How can investors better push for responsible AI?

Investors have an outsized role in ensuring our AI is safe, effective, and responsible. Investors can ensure that companies seeking funding are aware of and consider mitigating potential harms and liabilities in their AI systems. Even asking, “How have you instituted AI governance practices?” is a meaningful first step in ensuring better outcomes.

This effort is not just in the public interest; it is also in the best interest of investors, who will want to ensure the companies they are invested in or affiliated with are not associated with bad headlines or encumbered by litigation. Trust is one of the few non-negotiables for a company’s success, and a commitment to responsible AI governance is the best way to build and sustain public confidence. Robust, trustworthy AI makes good business sense.
