Most in the AI space can see the near-limitless potential for productivity and prosperity that increasingly powerful AI innovations promise (and, to be clear, we agree that a future of abundance for all is probable). But many in the sector are also quick to brush off the potential risks, especially in the short to medium term as frontier model builders race to achieve artificial general intelligence (AGI) or superintelligence.

The practitioners willing to acknowledge the risks tend to fall into one of two camps:

a) those who believe the problem is solvable by embedding guardrails directly on top of the foundation models, and

b) those who believe that the models can be easily manipulated and their guardrails circumvented, and that, because the code and underlying data are publicly accessible, bad actors can build their own fit-for-purpose generative AI models to achieve nefarious goals, thus increasing the likelihood of a human extinction-level event.

The latter group of doomsday scenario forecasters, known as the p(doom)-ers, includes some very smart scientists, and the possibility should not be discounted.

Having worked with fraud and financial crime professionals for decades (a notoriously practical bunch) and having risk-assessed many billions of payment transactions and outcomes, we'd like to focus on neither the best case nor the worst. Instead, we want to have an urgent but pragmatic discussion about the immediate financial risks we must deal with while the foundation model creators iterate toward AGI and superintelligence.

We should be able to agree that, in the hands of bad actors, AI poses both a non-zero possibility of an existential threat to humanity and a 100% probability of financial fraud risk, both to the organizations we work for and to each of us individually.

The Spectrum of Inbound AI Threats: Known and Unknown Risks and Mitigants

We must begin by agreeing that many risks are inherently unknowable. To suggest otherwise is to ensure we get caught off guard and amplify the potential damage. One approach to navigating these risks effectively is to categorize them into “known knowns,” “known unknowns,” “unknown knowns,” and “unknown unknowns.” This framework provides a structured approach to understanding and mitigating the potential dangers posed by generative AI.

1) ‘Known Knowns’

Definition: Risks that are clearly understood and can be directly addressed.

Examples:

  • Identity Fraud: AI can simulate or assemble realistic identities to open fraudulent accounts and conduct fraudulent transactions.
  • Account Takeover: Generative AI can manipulate individuals into surrendering credentials, giving fraudsters control of legitimate accounts.
  • Deepfakes: AI-generated videos and images can be used to impersonate individuals for fraud and misinformation.

Mitigation Strategies:

  • Enhance account opening and identity verification processes with multi-factor authentication (MFA), data enrichment, and step-up authentication protocols.
  • Ensure customer and counterparty monitoring is in place with real-time anomaly detection, live reporting, and rapid human-in-the-loop problem resolution.
  • Update current point solutions. Add deepfake detection, though its useful life may be short.
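To make the real-time anomaly detection bullet concrete, here is a minimal, illustrative sketch (not a production fraud engine, and not Fraud.net's actual product logic). It flags two common account-takeover signals: a transaction amount that is a statistical outlier for that account, and a burst of transactions in a short window. The class and threshold names are our own assumptions for the example.

```python
from collections import defaultdict, deque
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class Transaction:
    account_id: str
    amount: float
    timestamp: float  # seconds since epoch


class AnomalyMonitor:
    """Illustrative monitor: flags transactions that deviate sharply from
    an account's history, or that arrive in rapid bursts (a common
    account-takeover signal). Thresholds here are arbitrary examples."""

    def __init__(self, history_size=50, z_threshold=3.0,
                 burst_window=60.0, burst_limit=5):
        self.history = defaultdict(lambda: deque(maxlen=history_size))
        self.recent = defaultdict(deque)  # recent timestamps per account
        self.z_threshold = z_threshold
        self.burst_window = burst_window
        self.burst_limit = burst_limit

    def score(self, tx: Transaction) -> list:
        flags = []
        amounts = self.history[tx.account_id]
        # Amount outlier: z-score against this account's own history.
        if len(amounts) >= 10:
            mu, sigma = mean(amounts), stdev(amounts)
            if sigma > 0 and (tx.amount - mu) / sigma > self.z_threshold:
                flags.append("amount_outlier")
        # Velocity burst: too many transactions inside the window.
        recent = self.recent[tx.account_id]
        recent.append(tx.timestamp)
        while recent and tx.timestamp - recent[0] > self.burst_window:
            recent.popleft()
        if len(recent) > self.burst_limit:
            flags.append("velocity_burst")
        amounts.append(tx.amount)
        return flags
```

In practice, flagged transactions would feed the human-in-the-loop resolution step described above rather than being auto-declined.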

2) ‘Known Unknowns’

Definition: Risks that are recognized but whose full scope or methods are not yet fully understood.

Examples:

  • New Fraud Schemes: Emergent fraud schemes leveraging generative AI in ways not yet fully identified, especially via social engineering and emotional manipulation.
  • AI-generated Malware: The potential for AI to autonomously create and deploy sophisticated malware.
  • Regulatory Challenges: Uncertainty around how existing regulations will adapt to address AI-generated content and actions.

Mitigation Strategies:

  • Invest in continuous monitoring and research & development to stay ahead of emerging threats.
  • Collaborate with security experts and regulatory bodies to anticipate and address potential risks.
  • Regularly update security protocols and conduct threat simulations.

3) ‘Unknown Knowns’ 

Definition: Risks that some individuals or entities are aware of, but that knowledge is not widely disseminated or recognized within the organization.

Examples:

  • Insider Negligence / Lack of Communications: Employees or partners who understand the vulnerabilities introduced by generative AI but do not disclose or escalate them.
  • Hidden AI Capabilities: Underutilized or misunderstood aspects of AI that could pose security risks if exploited.

Mitigation Strategies:

  • Promote transparent communication and knowledge sharing within the organization.
  • Conduct regular training and awareness programs to ensure all stakeholders understand potential risks and mitigation strategies.
  • Encourage a culture of vigilance, ethical AI use, and internal whistleblowing for reporting vulnerabilities or misuse.

4) ‘Unknown Unknowns’

Definition: Risks that are completely unforeseen and emerge unpredictably.

Examples:

  • Novel Fraud Techniques: Completely new methods of fraud that leverage AI in ways not previously considered or imagined.
  • Unexpected AI Behaviors: Generative AI systems behaving in ways their creators did not anticipate, leading to unforeseen vulnerabilities or exploits.

Mitigation Strategies:

  • Implement robust monitoring and anomaly detection systems to quickly identify and respond to unexpected behaviors.
  • Maintain flexible and adaptive security frameworks that can evolve as new threats emerge.
  • Foster a culture of vigilance and rapid response within the organization.
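One way to monitor for behavior you cannot name in advance is to watch aggregate behavioral metrics (a model's refusal rate, a system's error rate, chargeback rates) and alert on drift from an established baseline. The sketch below is a simplified illustration of that idea, with names and thresholds of our own invention; real deployments would use more robust drift statistics.

```python
from collections import deque
from statistics import mean, stdev


class DriftDetector:
    """Illustrative drift alarm: raises an alert when the rolling mean of a
    behavioral metric departs sharply from a pre-recorded baseline. This can
    surface 'unknown unknown' behavior without knowing what to look for."""

    def __init__(self, baseline, window=30, z_threshold=4.0):
        if len(baseline) < 2:
            raise ValueError("need at least two baseline observations")
        self.mu = mean(baseline)
        self.sigma = stdev(baseline) or 1e-9  # guard against zero variance
        self.recent = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value) -> bool:
        """Record one observation; True means the window has drifted."""
        self.recent.append(value)
        z = abs(mean(self.recent) - self.mu) / self.sigma
        return len(self.recent) == self.recent.maxlen and z > self.z_threshold
```

An alert here is a trigger for the rapid-response process above, not a diagnosis: it says only that the system no longer behaves like its baseline.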

Protect Against Generative AI Risk Now with Fraud.net

While we grapple with the profound implications of artificial general intelligence and beyond, it’s crucial to address the immediate, tangible risks posed by generative AI, knowing these tools will certainly be abused by amateur and professional fraudsters as well as rogue nation-states. By segmenting these risks and developing potential mitigants for each, we can build a comprehensive strategy to prevent or contain many of them. This pragmatic approach should help protect our organizations and ourselves from the financial fraud risks that generative AI inevitably brings.

Let’s remain hopeful and proactive, embracing AI’s potential while also staying vigilant against its misuse. Contact Fraud.net today to learn more.