TRANSLATING THE TRIAD: INSIGHTS ON ARTIFICIAL INTELLIGENCE, GEOPOLITICS, AND INTERNATIONAL POLITICAL ECONOMY

April 29, 2025

Brooks Tech Policy Institute (BTPI) Fellow Basim Ali interviewed Amelia Arsenault, also a BTPI Fellow and a doctoral candidate in Cornell University’s Department of Government specializing in International Relations, about the implications of Artificial Intelligence (AI) for geopolitics and the international political economy.


Basim: How might the incorporation of AI influence geopolitical dynamics within the framework of international political economy?

Amelia: As AI becomes increasingly sophisticated, with advancements in both military and civil applications, public and private actors worldwide have shown sustained interest in accessing this technology. The global market for AI tools is highly lucrative and competitive, with potentially significant implications for inequality, the global balance of power, and international digital governance. Countries that lack the resources and capabilities needed to develop AI domestically are likely to import these tools in an effort to remain at the forefront of innovation, or at the very least, adapt to the evolving technological landscape. As the companies that develop and sell these technologies reap substantial economic benefits, often while relying on exploitative labor from the Global South, the proliferation of AI technology could deepen global economic inequality.

Further, as global demand for AI continues to grow, major innovators like the US and China are likely to continue competing for technological dominance and a larger share of the market. States that successfully integrate AI in ways that offer economic and military advantages may see favorable shifts in their geopolitical standing, potentially altering the global balance of power.

There is also rising concern that trade relationships could shape state approaches to digital governance in the AI era. States that use AI for domestic repression and surveillance may leverage trade dependencies to spread an authoritarian model of global technological governance, potentially contributing to democratic backsliding and the erosion of civil liberties internationally. The global market for AI tools with defense, surveillance, or intelligence applications remains highly fragmented and opaque, making it difficult for watchdogs and activists to track the networks of actors involved in developing and transferring highly advanced AI tools. Ultimately, this lack of transparency may lead to the unchecked proliferation of AI tools, including those with serious implications for democratic values and human rights.

Basim: In the context of AI, what challenges does the integration of emerging technologies, such as facial recognition, pose for government capabilities, and how might these challenges impact international relations?

Amelia: While emerging technologies like AI offer opportunities across a range of sectors, the risks associated with their adoption, such as worsening inequality, the erosion of civil liberties, and environmental degradation, are well documented. The proliferation and widespread adoption of AI technologies, including facial recognition software, raise critical questions about defining responsible use and identifying contexts where the deployment of AI is inappropriate or harmful. Governments, and especially democracies, will have to grapple with balancing the competing goals of remaining at the forefront of technological innovation and building robust safeguards that regulate the development, sale, and usage of AI technology. There also need to be serious conversations about opportunities for concerned citizens to ‘opt out’ of or limit their susceptibility to highly invasive surveillance. Governments must navigate these ethical dilemmas while also confronting the significant influence that private tech companies have wielded, and will likely continue to wield, in contemporary politics. Again, the challenge lies in balancing technological advancement with the protection and enshrinement of democratic values like accountability and transparency. Like-minded states facing similar challenges may look for opportunities to negotiate and coordinate regulations that strike a balanced approach, embracing innovation while prioritizing democratic values and peace.

On the international stage, states must consider the implications of AI procurement and trade decisions. Exports of AI technologies to repressive regimes risk complicity in human rights abuses, while imports of these tools from certain countries may jeopardize data privacy and test relationships with allies. Democracies in particular must be cautious and deliberate in preventing invasive, AI-driven surveillance and ‘mission creep’ both domestically and abroad, as emerging technologies could contribute to a global erosion of democratic values.

Basim: In your opinion, what safeguards or oversight mechanisms are needed to prevent AI-powered surveillance tools from being used to suppress dissent or target marginalized groups or stifle human rights?

Amelia: States have struggled to develop comprehensive regulations on emerging technologies, in part due to the slow pace of the regulatory process and the rapid speed at which these tools are being developed, marketed, and sold. In general, states should move beyond reactive regulation in response to scandals and incidents of misuse towards the adoption of more proactive legislation designed to limit harms from the outset. Safeguards should prioritize transparency, requiring that the public be informed about the types and quantities of data being collected in order to ensure accountability and prevent overreach. Safeguards must also address the disproportionate surveillance targeting marginalized communities on the basis of race, class, and religion. Because AI-powered surveillance tools are capable of collecting and analyzing vast amounts of private information, their use should be restricted to proportionate, narrowly defined purposes subject to oversight from independent watchdog organizations.
At the international level, stronger mechanisms are needed to prevent sales of AI-powered surveillance tools to states that are likely to use them to repress dissent and infringe on citizens’ human rights. Although many states, including the U.S., have implemented stricter export controls for AI tools with military or surveillance applications, the complexity and obscurity of global AI supply chains complicate enforcement and compliance. Greater market transparency can create opportunities for accountability and multilateral coordination toward the adoption of effective regulatory safeguards.

Basim: How might the establishment of AI Safety Institutes (AISIs) in various countries, along with the Bletchley Park process, influence international cooperation on AI regulation?

Amelia: As AI technology proliferates and becomes more accessible, state-level regulation alone will struggle to address the global risks that this technology poses to human rights, the environment, and equality. The establishment of AI Safety Institutes could signal countries’ commitment to addressing these challenges, creating opportunities for like-minded states to collaborate on global guidelines and norms for the development and adoption of AI. However, these efforts will face significant hurdles. First, successful international cooperation on AI regulation must find a way to incorporate the private sector without succumbing to capture by corporate interests. While these companies undoubtedly play a major role in the current technological landscape, their pursuit of profitability often conflicts with the public interest.
Second, international coordination on AI must be flexible and adaptable in the face of rapid technological innovation, a challenge that is made all the more difficult by states’ competing interests and disagreement regarding the regulation of emerging technology. Finally, while states cannot ignore long-term risks, focusing on the hypothetical could overshadow the urgent need for regulation that addresses immediate, ongoing harms caused by AI. AI is already contributing to environmental degradation, ubiquitous surveillance, and algorithmic bias in ways that are directly affecting the public, and disproportionately those who are already most vulnerable. As such, international cooperation on AI regulation must avoid overlooking the harms of today for the potential threats of tomorrow.


Thank you so much, Amelia, for sharing your insights.

The responsible development and deployment of AI will be paramount in the years ahead. Cornell University’s Brooks Tech Policy Institute is actively engaged in this space through its AI Governance Research Hub, whose projects range from international governance regimes for civilian AI to research on bias, misinformation, and other risks, as well as the military applications of AI.