Trump’s AI Revolution: Deregulation, Dominance, and the Fight for America’s Digital Future
By David O'Neill
Founding member of GIAI & SIAI
Professor of Data Science @ SIAI
Trump's executive order aims to bring American Leadership in Artificial Intelligence / istock

A New Era of AI: Power, Not Precaution

The country entered a new era in January 2025, as President Donald J. Trump retook the reins of power in Washington. The era was marked not only by political resurgence but also by technological transformation. The document signed in the Oval Office during the first week of the new administration drew little attention outside policy circles as snow covered the National Mall. But for those working in artificial intelligence, it landed like an earthquake.

Trump's executive order, "Removing Barriers to American Leadership in Artificial Intelligence," was a declaration that the future of AI in the United States would not be shaped by caution or constraint. Rather, it would be defined by an unrelenting pursuit of dominance, ambition, and deregulation. A significant and deliberate change in the direction of national AI policy followed, steering the United States away from the ethics-focused frameworks of the Biden administration and toward innovation, speed, and global supremacy.

This pivot energized the country's most influential technology companies, from Silicon Valley to Capitol Hill. It also sparked a ferocious debate over what accountability, safety, and fairness should mean in the era of artificial intelligence.

The tone was set promptly. Within days of assuming office, Trump not only dismantled his predecessor's AI safety mandates but also initiated the AI Action Plan, a blueprint the White House Office of Science and Technology Policy (OSTP) was to develop within 180 days. The days of hand-wringing over AI's risks were over, as Trump officials made plain in public statements and at international summits. In Paris, Vice President JD Vance told an audience, "The AI future will not be achieved by obsessing over safety."

This policy transition was most evident in the Artificial Intelligence Safety Institute (AISI), which was established in 2023 during the Biden administration to investigate the hazards of robust AI models. In March, AISI's new cooperative agreements with partner scientists eliminated all references to "AI safety," "responsible AI," and "AI fairness." In their stead, new priorities were established: the reduction of "ideological bias," the enhancement of economic competitiveness, and the security of America's status as the global leader in AI.

Although this linguistic shift may appear inconsequential, it signaled a significant transformation to those privy to its inner workings. One researcher associated with AISI called the new guidance unsettling: "Unless you are a tech billionaire, this will result in a worse future for you and the people you care about. Expect AI to be deployed irresponsibly: unsafe, discriminatory, and unjust."

For many in the tech industry, however, the administration's new hands-off approach was not a cause for concern but a long-awaited invitation. Meta, Google, OpenAI (backed by Microsoft), and Anthropic (backed by Amazon) enthusiastically embraced the Trump administration's call to expedite AI development. In their formal submissions to the AI Action Plan, these companies advocated for relaxed regulations, reduced oversight, and expanded freedom to scale their models rapidly and extensively.

The clash between copyrighted content and fair use is one of the most significant issues in AI development / istock

The Conflict Over Competition, Control, and Copyright

The conflict between copyrighted content and fair use is one of the most significant issues that has arisen in this new regulatory environment.

For years, OpenAI and Meta have trained their large language models (LLMs) on vast amounts of internet data, including books, articles, images, and videos, some of it copyrighted. They are now seeking explicit government protection for the practice. They contend that this form of data use is permissible under the legal doctrine of fair use, which allows certain unlicensed uses of copyrighted works for purposes such as education, commentary, or innovation.

OpenAI framed the issue in geopolitical terms in its submission to the OSTP, cautioning that American companies could lose ground to Chinese competitors such as DeepSeek, a rising AI startup that reportedly constructed a competitive model at a fraction of the cost, if fair use is not enforced. "The race for AI is effectively over if Chinese AI developers have unfettered access to data and American companies are left without fair use access," OpenAI wrote.

Meta went a step further by advocating for the government to formalize open-source development and declare that public internet data should be permissible for the purpose of training AI models. The company contended that the objective was to "ensure American AI dominance" and to enable smaller startups to compete with tech titans.

However, not everyone agrees. A substantial coalition of artists, authors, performers, and musicians, including Hollywood celebrities such as Cynthia Erivo and Ben Stiller, vigorously opposed the proposal. They contended that eroding copyright would devastate creative industries, cutting artists' revenues and allowing AI to exploit their work without consent or compensation.

"There is no justification for weakening or eliminating the copyright protections that have facilitated the growth of America," they wrote. "Not when AI companies can simply negotiate appropriate licenses with copyright holders, as every other industry does."

This debate is playing out not only in government halls but also in courtrooms across the nation. The numerous lawsuits content creators have filed against Microsoft, Meta, and OpenAI may shape how the law balances intellectual property against innovation in the era of AI. In the meantime, the U.S. Copyright Office is compiling a formal report expected later this year.

The stakes are massive. The decision could determine whether companies are free to train AI on all previously published content, or whether artists and creators retain control over how their work is used in the digital economy.

The US remains divided on the use and regulation of AI / ChatGPT

One Nation, Divided by Algorithms

Beyond the copyright dispute, the Trump administration's AI agenda has sparked a broader philosophical debate over how AI should be governed.

A number of technology companies have advocated for federal preemption, a policy that would supersede state-level AI laws and establish a unified national framework. According to industry leaders, complying with the more than 780 AI-related bills introduced in U.S. states over the past year is becoming a logistical and legal nightmare. "Innovation may be impeded by the inconsistent state AI regulations," one company stated. "We require a unified federal strategy that guarantees the competitiveness of the United States."

To sweeten the deal, some companies suggested federal-private partnerships, offering the government data access and model transparency in exchange for regulatory relief.

However, not everyone endorses a top-down approach. A bipartisan coalition of state legislators urged the White House to acknowledge that states have been at the forefront of developing intelligent, ethical, and responsive AI policies. Citing frameworks already established in California, Illinois, and other early-adopter states, they noted that "state legislatures have established expertise in areas where there was previously little in government."

A number of companies are also pressing the Trump administration to reconsider the Biden-era export control regulations designed to keep advanced AI processors and tools out of the hands of U.S. adversaries. They contend that these regulations, though important for national security, may now be impeding the global expansion of U.S. companies. One technology company suggested broadening license exceptions to include more "democratic allies," facilitating the free movement of AI technology while safeguarding critical systems from adversarial use.

Many have also sounded warnings about energy. Training and operating sophisticated AI models demands immense quantities of electricity, raising concerns about whether the U.S. power grid can accommodate the demand. Companies are now advocating for permitting reform, government-backed infrastructure projects, and public-private energy partnerships to ensure there is enough power to support America's AI ambitions.

The administration appears sympathetic. It has so far maintained the Biden-era export restrictions, but it is reportedly reviewing the AI Diffusion Rule. Meanwhile, figures such as Elon Musk, who currently heads the Department of Government Efficiency (DOGE), have quietly played a significant role in reshaping the federal bureaucracy to align with the new agenda.

DOGE has already carried out widespread terminations in several agencies, including the National Institute of Standards and Technology (NIST), which oversees AISI. According to sources, the administration has silenced or pushed out any individuals who challenge its approach to AI or diversity, equity, and inclusion, and dozens of staff members have been terminated.

Divisions are forming among AI researchers as well. "A significant number of individuals are attempting to maintain their positions at the table by maintaining a close relationship with the administration," stated a scientist affiliated with AISI. "However, I trust that they will recognize that these individuals—and their corporate sponsors—are faceless leopards who prioritize power."

The United States finds itself at a historic juncture as the Trump administration prepares to unveil its AI Action Plan this summer. One path leads to a future defined by unwavering ambition, scale, and speed, in which innovation is the guiding principle and the guardrails of accountability, safety, and fairness are treated as obstacles to be surmounted. The other is a more deliberate and cautious approach that acknowledges AI's potential but prioritizes the protection of individuals, rights, and institutions.

The path it ultimately chooses will profoundly shape the future of America's technology and the very foundation of its democracy. For AI is not merely code: entangled in every sector of society, it is also culture, policy, labor, and law.

At present, that future is being shaped by engineers, lobbyists, executives, and a government that is resolute in its pursuit of victory, rather than by philosophers or ethicists.
