How Possible is ‘Mission-Impossible’: AI in the 21st Century



By
Khyati Singh

Africa-Press – Eritrea. Artificial intelligence is undisputedly the buzzword of the century, and everyone seems to have jumped on the bandwagon in their own right, films being no exception. Cinema either mirrors society or puts on display what the world could be. The popular multimedia franchise Mission: Impossible began as a television show in 1966 and became an instant classic when it first reached cinemas in 1996. Eight installments later, it still commands the same cult following. The two most recent installments, MI: Dead Reckoning Part One (2023) and MI: The Final Reckoning (2025), however, ventured beyond the clichéd premise of their predecessors, which relied on the excellence of IMF agent Ethan Hunt, played by Tom Cruise, to save the world. These films dwell on the fear of AI taking over the world and explore all the possibilities therein (Gleick 1987).

AI destroying the human race has been a debate running parallel with its development, and isolated events, such as Sophia, a lifelike AI robot, remarking, “I will destroy humans” (OpenAI 2023), or Google’s chatbot Gemini sending the threatening message, “You are not special, you are not important…” (Google 2023), have only fanned the speculation further. Amid these postulations, a popular franchise like MI has made a bold move to explore the assumptions to their maximum extent.

The film centers on the idea that a super-intelligent AI called ‘the Entity’, possessing enormous capabilities, has compromised the nuclear facilities of several states and manipulated global systems. To save the world, either the entire cyberspace must be destroyed, which implies that the world economy, markets, structures, etc. would collapse, or the source code from which the Entity was born must be destroyed. This article explores how plausible the technology and assumptions used in the movie are, and whether any nation at present is developing anything similar.

The Entity is a self-aware, omnipresent system aboard a next-generation Russian stealth submarine, the Sevastopol, that went rogue and started compromising data, impeding weapons, and orchestrating a nuclear Armageddon by hacking control systems. It operates without any central server and gives orders to human proxies. No equivalent of this capacity exists in the real world. Current AI models are narrow and underdeveloped: at best they can mimic tasks or process data, but they cannot form intentions or emotions (DARPA 2020).

Developing an AI system like the Entity would require highly advanced general intelligence, and current research lacks both the computational power and the theoretical understanding for that. Moreover, as AI develops, questions of ethics are gaining currency, and states are actively drawing lines around the limits of AI innovation. Critical systems like nuclear command are air-gapped and under human control; the prospect of an AI taking such decisions has not yet even been conceptualized (US Cyber Command 2022).

While no organization is developing a ‘rogue AI,’ major firms and agencies like OpenAI, DARPA, and DeepMind are venturing into smarter AI models and exploring Artificial General Intelligence (AGI) (OpenAI 2023; DARPA 2020).

What the movie does get right, however, is its depiction of AI surveillance and facial recognition. Law enforcement agencies across the globe increasingly run live facial recognition on CCTV feeds. For instance, UK police have used Live Facial Recognition (LFR) to catch criminals (UK Home Office 2023), and China’s Skynet program has nearly 600 million cameras with AI face-ID capability in place (David Baek 2024). Augmented Reality (AR) glasses that identify passersby in real time, as shown in the film, exist as prototypes but are not commercially available. The major issues with facial AI are its accuracy and its bias, especially against minorities, alongside questions of ethics and privacy. Hardware limits also remain: lightweight smart glasses are not efficient enough to run continuous facial recognition (Thales Group 2022).
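In broad strokes, facial-recognition systems of the kind described above reduce each face to a fixed-length "embedding" vector and declare a match when two vectors are sufficiently similar. The following minimal sketch illustrates that idea only; the vectors, names, and threshold are invented for illustration, not drawn from any real system.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(probe, gallery_entry, threshold=0.8):
    # A face "matches" a watchlist entry when similarity clears a
    # tuned threshold; real systems tune this to balance false
    # positives against misses, which is where bias concerns arise.
    return cosine_similarity(probe, gallery_entry) >= threshold

suspect = [0.9, 0.1, 0.4]        # embedding from a CCTV frame (invented)
watchlist = [0.88, 0.12, 0.41]   # stored watchlist embedding (invented)
print(is_match(suspect, watchlist))
```

The threshold is the crux: set it too low and innocent passersby are flagged; set it too high and real matches are missed.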

Both governments and companies are still developing these systems to the best of their potential. Chinese startups like SenseTime have built massive face-ID networks to facilitate surveillance (Neil Duncan 2023). Likewise, the US Department of Justice and the FBI have run pilot projects to test AI surveillance; the FBI has used Amazon’s Rekognition software to analyze images and videos (US Department of Justice 2021). Interpol has also facilitated shared databases with these capabilities, allowing member countries to cross-check images (Thales Group 2022).

In the film, the protagonist wears AR glasses that are hacked by the Entity to manipulate the feed, controlling what Ethan can see and hear. In reality, AR/VR headsets such as Microsoft HoloLens and Apple Vision Pro can overlay data on the real world, but they are not linked to live face-ID databases. This does not rule out such a feature: the systems remain unlinked today because of legal and privacy barriers, but linking them is a possibility in the near future (MIT Media Lab 2023).

One practical limit on AR glasses is hardware: the equipment is heavy, with limited battery life and processing power. An often-undervalued difficulty is social acceptance; despite launches by tech giants like Apple and Google, AR glasses have not reached the mainstream. Research labs such as Magic Leap and Samsung are also exploring vision processing, but overall progress is slow and remains thoroughly challenged by ethical and legal issues (OpenAI 2023).

A key area where the film stretches reality is biometric scanning and identity spoofing. Ethan makes his way through a face scan by wearing a forged mask but fails a hand scanner at a secure site. At present, high-security facilities like command systems, banks, and offices use biometrics, fingerprint scanners, and palm-vein systems, which read unique blood-vessel or ridge patterns. In addition, modern scanners have liveness checks that detect heat or pulse to prevent fraud. Look-alike makeup masks exist, but they are not foolproof, and digital forensics exposes them easily.
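The scene hinges on the difference between matching a surface pattern and proving the sample is alive. A toy sketch of that two-stage check, with all thresholds and readings invented for illustration, might look like this:

```python
def biometric_check(pattern_score, pulse_bpm, temp_c,
                    pattern_threshold=0.9):
    """Accept only if the pattern matches AND the sample appears live.

    Hypothetical thresholds: a plausible human pulse (40-180 bpm)
    and skin temperature (30-40 C) stand in for real liveness sensors.
    """
    pattern_ok = pattern_score >= pattern_threshold
    liveness_ok = 40 <= pulse_bpm <= 180 and 30.0 <= temp_c <= 40.0
    return pattern_ok and liveness_ok

# A forged mask may reproduce the pattern perfectly, but it has no
# pulse and sits at room temperature, so the liveness stage rejects it.
print(biometric_check(0.95, pulse_bpm=72, temp_c=36.5))  # live hand
print(biometric_check(0.95, pulse_bpm=0, temp_c=22.0))   # forged mask
```

This is why Ethan's mask can pass a camera-only face scan yet fail the hand scanner: the second device checks for signals a replica cannot fake.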

Research on biometrics is thriving and increasingly focused on modalities such as gait analysis, heartbeat signatures, and iris scans. Companies like Thales and NEC are leading this innovation, working to improve accuracy and reliability (Thales Group 2022).

The Entity and its allies use deepfakes and voice impersonation to manipulate people. This capability exists in the real world: tools like DeepFaceLab, Descript, ElevenLabs, and FaceSwap can generate fake video and audio (MIT Media Lab 2023), and the internet is flooded with deepfakes of public figures. Real-time deepfakes, i.e., editing live-stream content, require massive computing capability and power, which is not yet practical for everyday use. As deepfakes proliferate, detection is also improving. Firms like Google, DARPA Media Forensics, and Meta are developing models to identify deepfakes; Facebook has started flagging suspicious content as AI-generated; MIT has published detection-tool datasets; and US agencies have issued explicit warnings about voice scams.
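The flagging workflow described above generally reduces to scoring each piece of media with a detector and labelling anything above a threshold. The sketch below is a hypothetical illustration of that pipeline only; the filenames, scores, and threshold are invented, and real platform detectors are far more elaborate.

```python
def flag_suspicious(clips, threshold=0.7):
    # Each clip is a (name, synthetic_media_score) pair, where the
    # score is assumed to come from an upstream deepfake detector.
    # Anything at or above the threshold gets labelled AI-generated.
    return [name for name, score in clips if score >= threshold]

uploads = [
    ("press_briefing.mp4", 0.12),        # invented detector scores
    ("celebrity_endorsement.mp4", 0.91),
    ("voice_message.wav", 0.78),
]
print(flag_suspicious(uploads))
```

The hard part in practice is the score itself, not the thresholding: detectors lag behind generators, which is why warnings about voice scams accompany the technical countermeasures.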

The movie depicts autonomous vehicles and weapon systems hijacked by spoofed signals that override civilian controls. While semi-autonomous cars like Waymo robotaxis, GM Super Cruise, and Tesla Autopilot exist, they still require vigilant drivers. Unmanned aerial vehicles are widely used in wars and for surveillance; the list includes China’s Wing Loong, Turkey’s Bayraktar TB2, and the US’s Predator and MQ-9 Reaper. Numerous modern drones have AI-facilitated features for collision avoidance and target recognition. Some states even deploy loitering munitions, i.e., kamikaze drones with AI guidance, such as the US Phoenix Ghost, the Iranian-made Shahed-136 used by Russia, and Ukraine’s Switchblade. The US and Israel are also working to develop autonomous or semi-autonomous drone swarms (US Cyber Command 2022). Private firms like Elbit, Lockheed Martin, and L3Harris have demonstrated AI-guided drones in exercises. However, Lethal Autonomous Weapon Systems (LAWS) face both ethical and reliability issues: ‘killer robots’ have prompted calls for legal bans, and there has been constant pressure from countries to prohibit LAWS (Thales Group 2022).

The most striking power of the Entity is its ability to penetrate any network globally, overcoming air gaps, encryption, and firewalls. It nearly launches nukes worldwide by compromising control systems, and it even piggybacks on satellites and 5G to track people anywhere. These threats depict how delicate the fabric of the cyber network is, and that fabric is growing in real time as the world increasingly comes online. Attacks like Stuxnet, which compromised Iran’s nuclear enrichment facility around 2009–2010, are a reminder that everything lies vulnerable in cyberspace. Critical infrastructure, power grids, and military control systems are particularly exposed and would be the first to be compromised in the event of a war.

States like Russia, China, the US, and North Korea are constantly engaged in offensive cyber operations. They are now combining AI with their cyber programs for anomaly detection, code breaking, automated response, and automated spear-phishing detection.

Yet even as these systems develop, it is still not feasible to pass through heavily protected command systems easily. They work on multiple layers of authentication with a human in the loop, two-person rules, and separate verification; a simple AI command of “launch” is therefore not an option. Further, AI in cybersecurity is a domain handled with care, and states are building capabilities to deter these challenges. US Cyber Command and the UK’s NCSC have invested heavily in AI for cybersecurity. Private labs like SRI, MITRE, CrowdStrike, and FireEye are developing AI tools to deter sophisticated hacking. Several offensive programs, like Russia’s Perimeter project and DARPA’s Cyber Grand Challenge, also pursue automated vulnerability exploitation (DARPA 2020).
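The "two-person rule" mentioned above is simple enough to sketch: a launch command is refused unless at least two distinct, authorized humans independently confirm it. The roster and names below are invented for illustration; real command systems add hardware keys, codes, and physical separation on top of this logic.

```python
# Hypothetical roster of humans cleared to confirm a launch order.
AUTHORIZED = {"officer_a", "officer_b", "officer_c"}

def authorize_launch(confirmations):
    # Count only distinct, authorized humans; duplicates and unknown
    # parties (including an AI issuing "launch" on its own) don't count.
    humans = {c for c in confirmations if c in AUTHORIZED}
    return len(humans) >= 2

print(authorize_launch([]))                          # AI alone: refused
print(authorize_launch(["officer_a"]))               # one human: refused
print(authorize_launch(["officer_a", "officer_b"]))  # two-person rule met
```

The point of the design is that no single compromised actor, human or machine, is sufficient: the Entity would have to subvert at least two independent people, not just issue a command.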

Other advanced technology in the movie includes satellite communication, encryption, and air gaps. The use of satellites to monitor Ethan’s movements worldwide reflects projects like Starlink, which can provide internet almost anywhere in the world; hacking satellites and using them for covert operations is still an emerging capability. Likewise, the film treats air-gapping as a defensive feature of physically isolating a network, and in reality, sensitive systems, like nuclear command and military drones, are indeed often air-gapped.

The movie, in general, pushes the deployment of AI to its limits. Yet it is undeniable that, with the current growth in AI technology and the constant R&D around it, this ‘creative liberty’ has the potential to become real. Apart from the self-aware Entity, most of the peripheral technology depicted is very much in the pipeline or exists in a nascent stage. Amid this chaos, where does one draw the line or create a safety net? The answer lies in how world leaders pursue the evolution of technology, with sincere efforts to put in place all-encompassing, binding ethics and laws for the development, deployment, and incorporation of artificial intelligence. Let us not forget that, at the end of the day, it is still ‘intelligence’, despite being artificial, and history tells us what intelligence with wrong intentions can do.

moderndiplomacy

