Does the potential for an AGI arms race between countries exist? Industry-led AI research, dominated by digital natives like Google, Facebook and Amazon and bellwethers like IBM and Microsoft, seems to be heading in a positive direction – improving the human-machine interface by applying a bottom-up approach to emulate the human brain – but they are not the only face of AI research. Governments across the globe have also joined the race.
OpenAI’s Sam Altman believes that governments will probably dominate AI development in the future – when [governments] get serious about [superhuman machine intelligence], they are likely to out-resource any private company, as he was quoted by the Foundational Research Institute.
Historically, tech giants and independent companies weren’t the only face of AI research. Several present-day consumer AI applications, including Siri and Google Maps, spun out of the US defense agency DARPA. Nor is it uncommon for the US Department of Defense to adopt tech projects for its own use – a recent case in point is Google’s TensorFlow being used in the DoD’s Project Maven, a pilot in which Google provides TensorFlow APIs for object recognition on unclassified data, in other words flagging images for further review.
Amid this chatter, talk of an AGI arms race between nations and of technology being used for totalitarian purposes resurfaces. Both China and the US are prime examples of nations leveraging AI for national security and global dominance. The Foundational Research Institute forecasts that weaponization of AI could lead to worse expected outcomes in general.
Which brings us to the question: does an AGI race truly exist? AIM cites a few arguments:
A natural defense against dangerous AGI is open-sourcing research: OpenAI is one initiative working towards building a safe AGI; by open-sourcing its research, the venture aims to ensure that no single entity or individual controls AI and uses it for harmful purposes. Independent companies like Numenta, led by Jeff Hawkins, are applying a bottom-up approach to emulate the human brain. Ray Kurzweil, Google’s Director of Engineering, has applied the LSTM approach to “experiment with Google’s Smart Reply machine learning email software”. By Kurzweil’s own prediction, we will achieve the singularity by 2029.
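Smart Reply’s actual architecture is far more elaborate, but the core idea of an LSTM – a recurrent cell whose gates decide what context to remember and what to forget as it reads a sequence – can be sketched in a few lines. All weights and inputs below are illustrative values, not anything from the real system:

```python
import math

def lstm_step(x, h_prev, c_prev, W):
    """One step of a single-unit LSTM cell with scalar input and state."""
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    i = sigmoid(W["wi"] * x + W["ui"] * h_prev + W["bi"])   # input gate
    f = sigmoid(W["wf"] * x + W["uf"] * h_prev + W["bf"])   # forget gate
    o = sigmoid(W["wo"] * x + W["uo"] * h_prev + W["bo"])   # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h_prev + W["bg"]) # candidate value

    c = f * c_prev + i * g    # new cell state: keep some memory, add some new
    h = o * math.tanh(c)      # new hidden state, exposed to the next layer
    return h, c

# Run the cell over a short sequence: the hidden state carries context
# forward, which is what lets an LSTM-based model condition its suggestion
# for the next word on the words that came before.
W = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:
    h, c = lstm_step(x, h, c, W)
```

In a real system this recurrence runs over word embeddings with vector-valued gates, and the final hidden state feeds a classifier over candidate replies.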
The rise of AI-powered militaries can spark an AGI arms race: Much of AI’s usage in China’s military is unknown to the world. From building a $2.1 billion AI research park to equipping nuclear submarines with AI, China is at the forefront of the AI race, pushing the technology to new levels. If any country comes close to building human-level AI first, it is likely to be China. Given its ambition to become the leading AI superpower by 2025 and the subsequent rise of its AI-powered military, the Center for a New American Security, an American think tank, notes that the Chinese PLA is fast approaching a “singularity” on the battlefield, at which human cognition can no longer keep pace with the speed of decision-making and tempo of combat in future warfare.
Drones & military robotics can spark an AGI race: The Foundational Research Institute believes military robotics is a likely domain in which AGI will first emerge. With the increased use of drones and military robotics, the next wars may well not be fought by humans.
According to Mark Gubrud, a physicist and adjunct professor in the Peace, War and Defense curriculum at the University of North Carolina, military robotics is one of the most likely drivers of a robot arms race, and to a certain extent we are already seeing an intensified race to build drones and autonomous weapons systems.
In the race to develop weapons that can’t be beaten, America’s No. 1 adversary Russia is also leveraging AI & robotics to usher in an era of hyperwar. Interestingly, the American company SparkCognition is a well-known provider of AI solutions for the defense and security sector. According to its founder Amir Husain, the company is working on a swarm-mothership concept and has filed US patents covering the design of such systems, as per a recent article. Husain also posited that the recent advances in machine-learning algorithms that make drone-launching robot submarines a reality could also spawn cost-effective data-gathering networks based on sensors.
Private tech companies Google & Amazon have more data than the military for surveillance: According to the World Economic Forum, the rapid development of AI in private tech also has dual-use potential, in other words weaponization, which is a serious cause for concern. A broad array of technology, such as facial recognition algorithms, voice cloning from less than a minute of audio data by Baidu Research, and delivery drones, is now used for commercial purposes. But it also leaves big tech companies with more surveillance capability than the military. This is a cause for concern, believes well-known robotics professor Mary Cummings, who asks what will happen when governments turn to private tech organizations to provide them with the latest defense technology.
According to the WEF, this poses a serious concern: private tech companies are providing off-the-shelf AI solutions which can be bought and weaponized with devastating results, echoing concerns raised by the late Stephen Hawking, Elon Musk, Max Tegmark and Bill Gates.
How can the current security risks be addressed?
Several developments in AI, such as an AI-augmented military, could deepen inequalities between nations on the world stage and shift the power equilibrium in favour of the most powerful. This means there is an urgent need for a multi-stakeholder platform (governments and the private sector) to monitor, govern and implement rules on developing and restricting emerging technologies.
Several think tanks cite the need for a Global Task Force on artificial general intelligence to monitor and enforce safety guidelines. Amid rising concerns, there are also calls to secure global cooperation through a Benevolent AGI Treaty developed by UN member nations. The Partnership on AI is a step in this direction, aiming to ensure the development of AI for the public good. Researchers have also suggested creating an AI Standards Developing Organization to provide guidelines for risk management and AI safety in an industrial context.