AI Security and Accountability Act Could Criminalize AI Imports from China
A controversial new bill introduced by Senator Josh Hawley could make it a federal crime to download, distribute, or use DeepSeek AI in the United States. The proposed legislation, known as the AI Security and Accountability Act, classifies AI models originating from China as a national security threat and imposes strict penalties on individuals and organizations found in violation.
Understanding the AI Security and Accountability Act
The bill seeks to ban the importation of AI technology and intellectual property from China, which explicitly includes downloading DeepSeek AI models such as DeepSeek R1 and V3. The proposed penalties are among the harshest ever seen for software-related offenses, with individuals facing fines of up to $1 million and prison sentences ranging from 5 to 20 years depending on the severity of the violation.
The bill's reach extends beyond merely downloading AI models. It also prohibits exporting AI to what it calls an "entity of concern," meaning that sharing AI with foreign nationals within the U.S. or transmitting AI technology abroad could likewise constitute a federal offense. Releasing AI models like Llama 4 to unauthorized entities, for example, could result in a 20-year prison sentence.
Broader Implications and Restrictions
The bill also introduces strict measures to prevent collaboration between U.S.-based researchers and Chinese institutions. It prohibits transferring research or engaging in partnerships with any university, laboratory, or individual working under Chinese law. Even an undergraduate research assistant co-authoring a conference paper with a Chinese academic could be considered a violation under this legislation.
This move signals a broader strategy by the U.S. government to tighten control over AI research and development, particularly in relation to China. While the bill is positioned as a safeguard against potential AI misuse by malicious actors, critics argue that it could stifle innovation, restrict academic freedom, and harm international scientific collaboration.
Enforcement and Consequences
The bill’s enforcement mechanisms remain unclear, but experts speculate that enforcement could involve cooperation among federal agencies, cybersecurity watchdogs, and AI companies themselves. Given the increasing capabilities of AI surveillance tools, monitoring downloads and tracking AI usage could become part of national security enforcement measures.
If passed, the AI Security and Accountability Act would mark one of the most stringent AI-related legal frameworks in modern history. Companies, researchers, and individuals working with AI models must remain vigilant about the potential risks associated with handling foreign-developed artificial intelligence.
With the increasing competition in AI development between the U.S. and China, this bill could set a precedent for further restrictions, raising questions about the balance between national security, technological progress, and international cooperation.