
The simple fact that the AI Commission is led by former Google CEO Eric Schmidt should trouble anyone who cares about privacy, accountability, transparency, and individual liberty.
In late January, the National Security Commission on Artificial Intelligence (NSCAI), or the AI Commission, released a draft of its upcoming report to Congress, rejecting calls to ban AI-powered autonomous weapons, characterized by critics as “killer robots.” While the AI Commission did briefly address privacy and civil liberties concerns, it ultimately called on Congress to double AI research and development funding annually until it reaches $32 billion a year by 2026. The report also failed to note clear conflicts of interest involving the Commission’s chairman, former Google CEO Eric Schmidt.
Opponents of the advancing AI-powered surveillance and police state include privacy advocates concerned about a future in which law enforcement officers wear glasses equipped with facial recognition software powered by secret AI algorithms.
The draft report addresses these surveillance concerns, stating, “The stakes of the AI future are intimately connected to the enduring contest between authoritarian and democratic political systems and ideologies.” The Commission also notes that AI-enabled surveillance will “soon be in the hands of most or all governments” and that “authoritarian regimes will continue to use AI-powered face recognition, biometrics, predictive analytics, and data fusion as instruments of surveillance, influence, and political control.”
The report correctly points a finger at China’s authoritarianism and AI-driven surveillance state. However, the draft also attempts to paint the U.S. as a “liberal democracy” that uses such technologies for “legitimate public purposes … compatible with the rule of law.” The implication is that the enemies of the U.S. could use this technology for tyrannical purposes, but the U.S. and its allies would only ever use AI in the interest of preserving liberty.
“A responsible democracy must ensure that the use of AI by the government is limited by wise restraints to comport with the rights and liberties that define a free and open society,” reads the draft. “The U.S. government should develop and field AI-enabled technologies with adequate transparency, strong oversight, and accountability to protect against misuse.”
Taken at face value, these statements might offer a sense of reassurance. Unfortunately, we are speaking about the U.S. government and military, and these institutions do not have a history of transparency or accountability. Even more worrisome is the draft’s mention of the “urgent need” to use AI for national security purposes, particularly against “foreign and domestic terrorists operating within our borders.” The draft encourages the DOD to pursue its counter-terrorism goals only while ensuring that “security applications of AI conform to core values of individual liberty and equal protection under law.”
Read more: Google’s Eric Schmidt & The Artificial Intelligence Military-Industrial Complex
