AI Oversight: Biden’s Executive Order and AI Expert Analysis

November 2, 2023
AI Regulations and Oversight


Get the latest updates on artificial intelligence via my weekly newsletter, The Artificially Intelligent Enterprise.


AI oversight took center stage in the news this week. In a move that could be a game-changer for the United States, the White House has issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” This isn’t just a bureaucratic exercise; it’s a comprehensive roadmap intended to put the U.S. in a leadership position in the AI arena. The stakes are high: AI is not just another technology; it’s a transformative force with the potential to redefine industries and geopolitics.

The recent executive order offers a fresh interpretation of the longstanding Defense Production Act, a law enacted in 1950 at the outset of the Korean War. The act grants the President the authority to bolster national defense and manage crises. Notably, this means the order can be executed without additional legislative approval.

The executive order is a multi-faceted strategy that touches upon critical areas such as funding, private-sector collaboration, and ethical governance. It’s a clarion call for a unified, national approach to AI, one that balances innovation with responsibility. But what does this mean for business leaders, especially those in tech?

Let’s dive into the critical components of this executive order and explore its broader implications.

Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

The White House has issued an executive order aimed at accelerating the United States’ progress in artificial intelligence (AI). This move is designed to ensure that the U.S. remains at the forefront of AI innovation, thereby securing its economic and national security interests. The executive order outlines a comprehensive strategy that includes increased funding, collaboration with the private sector, and ethical guidelines.

The Importance of AI for National Interests

The executive order underscores the critical role of AI in shaping the future of various sectors, including healthcare, transportation, and defense. By taking proactive steps, the U.S. government aims to safeguard its leadership position in AI, a technology with transformative potential.

Funding and Collaboration: The Twin Pillars

One of the critical elements of the executive order is the allocation of substantial funding for AI research and development. This financial backing is expected to catalyze innovation and attract top talent. Additionally, the order encourages partnerships between government agencies and the private sector, fostering a collaborative ecosystem for AI advancement.

Ethical Considerations: A Balanced Approach

The executive order doesn’t overlook the ethical dimensions of AI. It calls for creating guidelines that ensure responsible AI development and usage, addressing concerns such as data privacy and algorithmic bias.

The Global Context: A Competitive Landscape

The U.S. is not alone in its quest for AI supremacy. Other nations, notably China, are also investing heavily in AI. The executive order is a strategic move to maintain a competitive edge in a rapidly evolving global landscape.

Conclusions and Implications

The White House intends the executive order to be a significant step toward consolidating the United States’ position as a global leader in AI. By focusing on funding, collaboration, and ethics, the U.S. is laying a foundation for the future of AI, with both immediate benefits and long-term implications for the country’s economic and national security.

Criticisms of the AI Executive Order

The criticisms below highlight the need for a more comprehensive and detailed approach to AI governance. They underscore the importance of not just setting ambitious goals but also providing the means to achieve them.

Lack of Specificity

One of the most prominent criticisms is the vagueness of the executive order. Experts argue that it lacks concrete details, making it difficult to gauge its effectiveness in shaping the future of AI.

Insufficient Funding

Another point of contention is the lack of financial commitment. Critics question whether the allocated resources are sufficient to meet the goals outlined in the order.

Ethical Concerns Overlooked

The executive order is also criticized for not adequately addressing ethical considerations. Issues such as data privacy, bias, and the potential misuse of AI are not given the attention they deserve.

Regulatory Gaps

Experts point out that the order does not provide a comprehensive regulatory framework. This leaves room for inconsistencies and loopholes that could be exploited.

Global Collaboration

The order is seen as too inward-focused and does not emphasize the importance of international collaboration in AI development, which is crucial in a globally connected ecosystem.

Where Companies Stand on the Regulation of Open Source

One of the most critical open questions is how the industry will respond to the regulation of large language models (LLMs). I am especially concerned about any limitations placed on LLMs, including those that are free and open source.

In a recent development sending ripples through the tech world, Meta (formerly Facebook) and OpenAI are locked in a heated debate over the future of open-source AI. This clash of titans revolves around whether advanced AI models should be released as open source or controlled and monetized by tech giants. Meta’s Mark Zuckerberg emphasized the importance of open-source models for maintaining U.S. competitiveness, stating:

“It’s better that the standard is set by American companies that can work with our government to shape these models.”

He argued that the tech industry could address concerns about safety, rather than ceding leadership on open-source code to other countries.

Meta has released one of the most capable free models, Llama 2, though it comes with some licensing limitations.

OpenAI’s CEO, Sam Altman, testified before Congress about the oversight regime he would advocate:

“Number 1, I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards. Number 2, I would create a set of safety standards focused on what you said in your third hypothesis as the dangerous capability evaluations. One example that we’ve used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long other list on the things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world. And then third I would require independent audits…”
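
To make the shape of that proposal concrete, here is a minimal sketch in Python of the three gates Altman describes: licensing above a capability threshold, dangerous-capability evaluations, and an independent audit. Every name, threshold, and data structure below is a hypothetical illustration; no such agency, standard, or API exists today.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-part oversight regime Altman described:
# (1) licensing above a capability scale, (2) dangerous-capability evals,
# (3) independent audits. All names and thresholds are illustrative.

@dataclass
class Model:
    name: str
    training_flops: float  # stand-in proxy for "scale of capabilities"
    # eval name -> True if the dangerous capability was detected
    eval_results: dict = field(default_factory=dict)
    audit_passed: bool = False

LICENSE_FLOP_THRESHOLD = 1e26  # illustrative cutoff, not a real rule
REQUIRED_EVALS = ("self_replication", "self_exfiltration")

def may_deploy(model: Model, has_license: bool) -> bool:
    """Return True only if the model clears all three gates."""
    # Gate 1: models above the capability threshold need a license.
    if model.training_flops >= LICENSE_FLOP_THRESHOLD and not has_license:
        return False
    # Gate 2: every required eval must have run and found no danger.
    # A missing result defaults to True (dangerous) to fail safe.
    if any(model.eval_results.get(e, True) for e in REQUIRED_EVALS):
        return False
    # Gate 3: an independent audit must have passed.
    return model.audit_passed

m = Model("frontier-model", 3e26,
          {"self_replication": False, "self_exfiltration": False}, True)
print(may_deploy(m, has_license=True))   # True: licensed, evals clean, audited
print(may_deploy(m, has_license=False))  # False: above threshold, no license
```

The point of the sketch is structural: each of Altman’s three proposals maps to an independent check, and the open question for the executive order is who defines and enforces each one.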

The Information’s Jon Victor has a good article outlining where the big players in AI stand on open source.

AI Experts Weigh in on the Existential Threat or X-Risk of AI

Many people argue that the current emphasis on the existential risk of AI, or x-risk, is drawing attention away from the very real and measurable risks of AI, such as bias, misinformation, high-risk applications, and cybersecurity. The reality is that most AI researchers are not overly concerned with x-risk, focusing instead on these more immediate issues.
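
To ground the word “measurable”: bias, unlike extinction risk, can be reduced to a number an auditor can report. Below is a minimal sketch in Python of one standard fairness metric, demographic parity difference; the classifier outputs and group labels are invented for illustration.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0.0 means all groups are approved at equal rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        positives[grp] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Invented example: a lending model approves group A at 75% and group B
# at 50%, a 0.25 gap that an auditor can measure and a regulator can bound.
preds  = [1, 1, 1, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.25
```

Nothing comparable exists for x-risk, which is part of why researchers like Ng and LeCun want attention and regulation aimed at harms that can actually be tested.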

Four leading figures in modern AI, Geoffrey Hinton, Andrew Ng, Yann LeCun, and Yoshua Bengio, have reignited the debate on the existential risks of AI. While Hinton and Bengio express grave concerns about these risks, Ng and LeCun argue that such fears are exaggerated and possibly manipulated by Big Tech to consolidate power. Hinton, LeCun, and Bengio shared the 2018 Turing Award, often described as the Nobel Prize of computing.

Geoffrey Hinton

Hinton, who recently left Google to speak freely about AI risks, tweeted:

“Andrew Ng is claiming that the idea that AI could make us extinct is a big-tech conspiracy. A datapoint that does not fit this conspiracy theory is that I left Google so that I could speak freely about the existential threat.” – @geoffreyhinton, October 31, 2023

Yann LeCun, Chief AI Scientist at Meta

LeCun, Chief AI Scientist at Meta, responded:

“You and Yoshua are inadvertently helping those who want to put AI research and development under lock and key and protect their business by banning open research, open-source code, and open-access models. This will inevitably lead to bad outcomes in the medium term.” – @ylecun, October 31, 2023

Andrew Ng, Founder of Google Brain and DeepLearning.AI

Ng, in his newsletter, stated:

“My greatest fear for the future of AI is if overhyped risks (such as human extinction) lets tech lobbyists get enacted stifling regulations that suppress open-source and crush innovation.”

Yoshua Bengio, Founder and Scientific Director of Mila – Quebec AI Institute

Bengio, who recently co-authored a policy framework on AI risks, has not tweeted but expressed in an opinion piece that AI risks are “keeping [him] up at night.”

Broader Implications

The debate has far-reaching implications for AI policy and governance. Hinton and Bengio advocate allocating one-third of AI R&D budgets to safety research, emphasizing the urgency of the matter. Despite the heated exchanges, the pioneers maintain long-standing friendships, a reminder that intellectual disagreements need not sever personal bonds. The question of AI’s existential risks is far from settled, and as business leaders, it’s crucial to stay informed and consider how these discussions could shape future regulation and innovation in AI.

AI Oversight Going Forward

President Joe Biden’s recent executive order marks a significant advancement in the United States’ approach to artificial intelligence (AI), establishing it as the most comprehensive national policy on AI to date. This groundbreaking directive emphasizes the need for new federal standards focusing on AI’s safety, security, and trustworthiness, addressing a broad spectrum of AI-related risks and developmental aspects. It notably includes the creation of regulatory and safety boards dedicated to AI, highlighting the urgency to manage AI’s potential risks such as misinformation, encoded biases, privacy threats, and national security concerns.

The executive order is a leap toward greater transparency and accountability in AI development. It mandates that AI developers disclose crucial data to the U.S. government before launching significant AI models, and it calls for federal standards and tests to mitigate AI threats to national security. However, it faces challenges in implementation, particularly the lack of AI expertise in government positions and technical feasibility issues, such as the complexities of watermarking AI-generated content. While the order is a pivotal step forward, filling a critical policy gap and setting a precedent for future AI governance, it’s not expected to immediately change daily AI interactions, underscoring the need for further legislative action and sector-specific regulations.
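
To see why watermarking is singled out as technically fraught, consider a toy sketch of one published approach (in the spirit of Kirchenbauer et al., 2023): bias generation toward a pseudorandom “green list” of tokens that a verifier using the same scheme can recompute. Everything below, the vocabulary, the hash seeding, the detection threshold, is an illustrative assumption, not anything specified by the order.

```python
import hashlib
import random

# Toy sketch of a "green list" text watermark. A real scheme would operate
# on a model's full token vocabulary at generation time; this shows only
# the detection side, with an invented eight-word vocabulary.

VOCAB = ["the", "a", "model", "order", "safety", "data", "policy", "risk"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Derive a pseudorandom 'green' subset of the vocabulary from the
    previous token, so a verifier can recompute the same subset later."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def looks_watermarked(tokens: list, threshold: float = 0.7) -> bool:
    """Flag text if an unusually high share of tokens falls inside the
    green list derived from each preceding token."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(tok in green_list(prev) for prev, tok in pairs)
    return hits / max(len(pairs), 1) >= threshold
```

The brittleness is visible even in the toy: paraphrase or translate the text and the tokens drift off the green list, so detection fails, which is one reason experts doubt a watermarking mandate can be enforced in practice.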

While the order is a commendable effort to regulate a rapidly evolving and potentially hazardous technology, it exhibits significant limitations in scope and practical enforceability. Its focus on requiring companies to report on the risks of AI systems aiding in the creation of weapons of mass destruction, and on reducing the dangers of deepfakes, is a crucial step toward mitigating some of AI’s most alarming threats. However, the order’s effectiveness is constrained by Biden’s limited authority to regulate the private sector directly. This limitation raises questions about the order’s overall impact, particularly in a domain where rapid innovation and private-sector involvement are predominant. Biden’s acknowledgment of the need for Congressional action underscores the executive order’s inherent limitations and the necessity for more comprehensive legislative measures to effectively govern AI technologies.

The order’s emphasis on safety, security, and the encouragement of AI development within the U.S., including measures to counter China’s technological advances, reflects a strategic approach to national security and technological competitiveness. However, the reliance on a Korean War-era law, the Defense Production Act, for security mandates on companies, and the requirement that cloud service providers report foreign customers, might not be sufficient to address the nuanced and complex challenges posed by advanced AI. Directives such as watermarking AI-generated content, while innovative, may face practical challenges in implementation and effectiveness. Additionally, the order’s ambitious goals, like the rapid hiring of AI experts in government and urging privacy legislation, confront significant hurdles in a competitive job market and a politically divided Congress, potentially limiting the order’s ability to bring about meaningful change in AI governance and regulation.

In conclusion, the executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” has faced significant criticisms from experts. While it outlines a comprehensive strategy for the U.S. to remain at the forefront of AI innovation, the order’s vagueness and lack of specificity have raised concerns about its effectiveness. Critics question whether the allocated resources are sufficient to meet the order’s goals and if ethical considerations have been adequately addressed. Additionally, the order’s inward-focused approach overlooks the importance of international collaboration in AI development. While the executive order is a step in the right direction, it highlights the need for a more comprehensive and detailed approach to AI governance.