The National Security Commission on Artificial Intelligence, a body created by Congress to study the impact of advances in AI on US national security, has put out a call for essays analyzing how advances in artificial intelligence could affect US government and security policy. There are five specific prompts, which I summarize as follows:

1. How will AI affect the nature of war, and more generally interstate competition at or below the level of armed conflict? What kinds of military and non-military AI capabilities should the US government invest in? What AI-related skills will the national-security workforce need in the future?

2. What infrastructure, institutions, and organizational structures will be best suited to ensuring AI development? What kinds of AI research should the US national security community engage in, and will this research require the creation of new institutions? What other infrastructure is necessary to create a sustainable advantage in artificial intelligence, and what are the ethical concerns with attempting to create one? How will government acquisition processes need to change? What kinds of data are necessary for developing AI applications and tools, and what are the ethical and security concerns of collecting, analyzing, and storing that data?

3. What should the US do to influence global norms around artificial intelligence? Given that many nations are pursuing AI, what should the US do to influence adversaries' AI development?

4. How should the government interact with the private sector? How can the private sector educate the government about the capabilities and risks of AI?

5. What can the government and the private sector do to ensure that AI systems used for national security are trusted by the public, strategic decision-makers, and allies?

I'm heartened to see the government taking the risks of AI more seriously, and I think submitting an essay here is a relatively low-effort way for a member of the public to get their opinions on the risks and opportunities of AI development in front of a body specifically tasked with making recommendations for future AI development and regulation.