Michael Steed is Founder and Managing Partner at Paladin Capital Group, a leading global cyber investor.
U.S. national security officials are raising concerns about the risks artificial intelligence poses to national security, with the director of the Cybersecurity and Infrastructure Security Agency, Jen Easterly, warning of a “world in the not-too-distant future where how-to guides, AI-generated imagery, auto-generated shopping lists are available for terrorists and for criminals, providing the capability to develop things like cyber weapons, chemical weapons, [and] bio weapons.”
At a recent conference, Craig Martell, the chief digital and AI officer for the Department of Defense, said he is "scared to death" of the potential for generative artificial intelligence systems like ChatGPT to deceive citizens and threaten national security.
At the April RSA Cybersecurity Conference, Rob Joyce, the National Security Agency’s cybersecurity director, noted that he expects AI to aid criminals in creating more effective phishing attacks and help attackers modify existing malware to bypass security tools and software.
Generative AI Poses Security Threats
Armed with these warnings, we at Paladin Capital Group decided to ask ChatGPT about the cybersecurity concerns of generative AI. ChatGPT came back with a litany of responses, including the model's potential to manipulate and deceive. This refers to the ability to create convincing deepfakes that could be used to spread disinformation and disrupt elections, or to impersonate someone to gain unauthorized access to a system.
Generative AI could also bypass authentication systems through contextual password guessing. By training a large language model on the personal data available on the internet, such as social media profiles and public records, attackers can make highly targeted password guesses. This method is more effective than traditional brute-force attacks because it narrows the search space to a small set of likely candidates, making a successful guess far more probable.
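To make the mechanism concrete, here is a minimal toy sketch of how scraped profile facts can shrink a password search space. The profile fields, token choices, and suffix list are all illustrative assumptions, not a real attack tool; the point is only that combining a handful of personal facts yields dozens of prioritized guesses rather than billions of random ones.

```python
from itertools import product

def candidate_passwords(profile):
    """Generate a small, prioritized list of password guesses
    from publicly scraped profile facts (toy illustration only)."""
    # Hypothetical fields an attacker might scrape from social media.
    tokens = [profile["pet"], profile["team"], profile["name"]]
    years = [profile["birth_year"], profile["grad_year"]]
    suffixes = ["", "!", "123"]

    guesses = []
    for token, year, suffix in product(tokens, years, suffixes):
        guesses.append(f"{token}{year}{suffix}")          # e.g. rex1990!
        guesses.append(f"{token.capitalize()}{suffix}")   # e.g. Rex123

    # Deduplicate while preserving priority order.
    seen, ordered = set(), []
    for g in guesses:
        if g not in seen:
            seen.add(g)
            ordered.append(g)
    return ordered

profile = {"name": "alex", "pet": "rex", "team": "giants",
           "birth_year": "1990", "grad_year": "2012"}
print(len(candidate_passwords(profile)))  # a few dozen targeted guesses
```

A language model effectively automates and scales this kind of contextual prioritization across millions of accounts, which is why rate limiting and lockout policies alone are weak defenses against it.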
These capabilities undermine the authentication paradigm and threaten U.S. national security. Easterly noted the 2024 election will likely follow the release of the more powerful GPT-5, and at a recent White House meeting, CEOs of AI innovation companies and officials discussed the fake videos and stories likely to flood social media platforms in the lead-up to the 2024 election.
Some experts have expressed concern over particularly nightmarish scenarios involving nefarious actors using generative AI to defeat biometric authentication systems, allowing them to gain access to systems and networks that support daily life.
Fighting Tech With Tech
We don’t have to wait for regulation to mitigate these risks. Through innovation and investment, technology can be part of the solution.
AI technology can also be deployed defensively to verify user identity. Passkey tools can defeat contextual password guessing by eliminating passwords altogether, moving toward a passwordless future. The private sector should double down on such efforts.
Investors hold a pivotal role in shaping the trajectory of AI development and driving the adoption of responsible practices that prioritize security and trustworthiness. With hundreds of millions of dollars being poured into new generative AI companies—some of which do not even have a proven product—there is a mad dash to be first to market with the next big project.
Historically, this “move fast” mindset often means technology is only tested for functionality, not trust or security. But investors need to understand that ensuring trust in AI is not just a moral imperative but also a business one.
Clearly, there is no shortage of existential fear around how AI will reshape our lives. If we cannot demonstrate that this technology can be trusted, that failure will impede its adoption even where society stands to benefit. Alongside big investments in companies claiming to build a better ChatGPT, there needs to be a concerted effort to invest in the solutions developers require to pressure-test, validate and secure new AI technologies.
The AI arms race requires a new toolbox that will meet privacy requirements, ensure compliance, prevent attacks on the data sets the technology relies on and defend against jailbreaks. These are digital solutions of absolute necessity that require funding to drive innovation and development.
Alongside proper regulation and standard-setting, investing in technology that ensures this life-changing innovation can be trusted to better our lives should be an absolute priority, protecting ourselves and our nation from the new and dire risks posed by AI.