Industry Self-Regulation: Part of the Solution for Governing Generative AI

Jul 8, 2024 by Eric D. Reicin, President & CEO, BBB National Programs

The spotlight on generative AI remains bright. Indeed, the benefits and risks of generative AI continue to be ever-present in the minds of business and political leaders. The evidence of this corporate and governmental duality of interest abounds, from an April 26 story in The Wall Street Journal, "Investors Cheer AI Spending Boom in Big Tech - Just Not at Meta," to Bloomberg's reporting just three days later, "AI's Compliance Risks Outlined in New Labor Department Guidance."

Certainly, the U.S. government has a significant role to play in AI governance: enforcing existing laws that apply to AI, rolling out new guidance under President Biden's Executive Order on AI, and issuing stern warnings to companies against making misleading claims about their use of the technology. For instance, U.S. SEC Chair Gary Gensler recently cautioned market participants against "AI washing."

There is an important role for government to play in elevating understanding of AI, as well as in setting standards for its use. On April 29, 2024, the National Institute of Standards and Technology (NIST) "released four draft publications intended to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems in support of President Biden's Executive Order."

And, amid growing concern within the federal government about algorithmic discrimination, the National Artificial Intelligence Advisory Committee, a body of non-government experts who advise the Biden administration on AI policy, recently presented a draft report urging careful use of sensitive data.

Meanwhile, there is no shortage of proposed AI legislation in Congress, but no clear path for it to be enacted. Moving into the vacuum are the states, with California, not surprisingly, leading the way, though not without criticism of how the state is handling it.

Yet even with such a compelling need for effective AI governance to ensure responsible development and deployment, it is unreasonable to think that the federal government alone should be "in charge" of AI policy. That is why I think industry self-regulation has a role to play as both a short- and long-term governance solution.

The quick rise of generative AI and the onslaught of its applications lead to weighty questions, ones that the federal government should certainly be part of answering, but ones that also deserve contemplation beyond U.S. companies and nonprofits. Before his passing in 2023, Henry Kissinger dove headlong into AI's capabilities and potential, with a view to their global implications.

From think tanks such as the Brookings Institution to the global consulting giant McKinsey, there has been collective handwringing in search of AI governance solutions, along with earnest assessments of the challenge of keeping regulation current amid the rapid-fire advancement of the technology.

No doubt complexities come with regulating AI technology, with a particular conundrum presented by the need to enforce regulations across different jurisdictions and industries, as noted earlier this year by MIT Technology Review.

The Case for Industry Self-Regulation

So where and when does independent industry self-regulation fit in, as the soft law area between robust corporate compliance programs, such as Microsoft's, and hard law regulations and statutes? I would argue right here and right now.

I have written previously about the success of industry self-regulation over the decades and how it is now being examined for future scenarios. Today, through our organization's Center for Industry Self-Regulation, we are investing in academic research to take our views, born of practical success, and subject them to open debate in academic settings. In October 2023, together with the Arizona State University College of Law Center for Law, Science, and Innovation, we held a Soft Law Summit that examined different models of industry self-regulation, as well as the conditions conducive to its use; numerous academics presented the results of their research.

As both supporters and observers of this academic research, and as practitioners of industry self-regulation for decades, we believe the principles of industry self-regulation are particularly well suited to play a role in AI governance. We saw this last year through our work with some of the world's largest employers to create principles and protocols for the use of AI in recruiting and hiring, and as recently as this month through the release of an AI compliance warning by our Children's Advertising Review Unit.

While we take great pride in our work at BBB National Programs, we are not the only practitioners of industry self-regulation, and we appreciate seeing it put into practice by other organizations and groups of companies. It was also heartening to see major tech firms sign a "tech accord to combat deceptive use of AI in 2024 elections."

No matter the timing or the setting, the creation of transparency, accountability, and collaboration among stakeholders is key to successful industry self-regulation, as is the setting of standards and best practices.

In the case of AI, much work is already being done by industry associations, research institutions, and government agencies. That is why we are exploring the option of becoming an independent accountability mechanism for one or more of the various models being developed.

It is sometimes a cliché to say that "the stakes have never been higher," but with generative AI it is true. That is also why we are committed to learning from our history of creating industry self-regulation programs and to working with academia to develop models for the future of industry self-regulation as it applies to AI. The goal of our work is not to play a role "in lieu of government" but to enhance government practices as they develop around the world.

Originally published in Forbes.
