AI Regulation Policies Would Be Thwarted by Algorithm and Source Code Secrecy Guarantees
To prevent civil rights and civil liberties violations and other harms from AI systems rushed into use, legislators in statehouses nationwide are introducing bills that require impact assessments, bias audits, or pre-deployment testing to ensure that AI models are fair and accurate. The “digital trade” rule demanded by Big Tech, which bans access to source code and algorithms, would forbid such reviews from being conducted by, or made available to, the government regulators or independent bodies that many of these bills require.
If the “digital trade” rules Big Tech seeks were widely enacted, requirements that AI developers share algorithmic information with deployers, mandates for external bias audits, and regulatory disclosure requirements for high-risk AI systems could all be attacked as violations of these special secrecy guarantees, which forbid disclosure of even detailed descriptions of algorithms.
In total, 39 AI regulation policies introduced since 2021 could be threatened by source code and algorithm secrecy rules; six of these policies have already been signed into law.