Agentic Fundamentals: LLMs, Agents, and Agentic Systems

Technical foundation: Understanding the building blocks of autonomous AI systems from LLMs to fully agentic fraud operations

Why This Matters for Fraud Professionals

The Playing Field Has Changed

AI tools that were once available only to well-funded tech companies are now accessible to everyone—including fraudsters. But here's the key insight: these same tools are equally available to defenders.

This isn't about AI being "superhuman" or unstoppable. It's about understanding a new category of tools that both sides now have access to.

What You Need to Know

As a fraud prevention specialist, you need to understand:

  1. What these tools actually are (not the hype)
  2. What they can realistically do (and what they can't)
  3. How attackers might use them (the threat)
  4. How you can use them too (the opportunity)
  5. How to detect AI-assisted attacks (the defense)

The Democratization of AI

Before 2023: Building AI tools required:

  • PhD-level expertise
  • Massive computing resources
  • Millions in infrastructure
  • Months of development

Today: Anyone can:

  • Use ChatGPT, Claude, or similar tools for free or low cost
  • Build custom AI agents with no-code tools
  • Connect AI to databases, email, and other systems
  • Automate complex multi-step workflows

This applies equally to:

  • Fraudsters automating scams
  • Fraud analysts automating investigations
  • Security teams building detection systems
  • Anyone wanting to scale their work

A Realistic View

AI agents are powerful tools, but they're not magic. They:

  • Make mistakes (hallucinations, logic errors)
  • Lose context (memory limitations)
  • Fail frequently (API errors, brittle automation)
  • Leave detectable patterns (different from human behavior)

Understanding these limitations is just as important as understanding their capabilities.


Part 1: Large Language Models (LLMs)

What Is an LLM?

A Large Language Model is an AI system trained to predict the next word in a sequence of text. LLMs are trained on massive datasets (trillions of words) to learn patterns in human language.

Core Function:

  • Input: Text sequence
  • Output: Prediction of what comes next
  • Method: Statistical probability based on training data
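
To make the prediction step concrete, here is a minimal sketch of next-word prediction using the open-source Hugging Face transformers library with the small GPT-2 model; both the library and the model are illustrative choices, not what any particular commercial LLM uses.

# Minimal sketch of next-token prediction with a small open-source model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Dear customer, we detected unusual activity on your"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: [batch, sequence_length, vocab_size]

# Probability distribution over the *next* token, given everything seen so far
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:<12} p={prob.item():.3f}")

Everything an LLM produces comes from repeating this single step: pick a likely next token, append it, and predict again.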

How LLMs Work

Training Process:

  1. Data Ingestion: Fed billions of documents (web pages, books, articles)
  2. Pattern Learning: Learns relationships between words, concepts, and contexts
  3. Probability Modeling: Develops statistical understanding of language patterns
  4. Fine-tuning: Optimized for specific tasks and safety

Inference Process:

  1. Input Processing: Receives text prompt
  2. Context Analysis: Analyzes meaning and context
  3. Prediction: Generates most probable next words
  4. Output Generation: Produces human-like text response

LLM Capabilities

Text Generation:

  • Write in any style (formal, casual, technical)
  • Translate between languages
  • Summarize documents
  • Generate code, emails, reports

Analysis:

  • Extract information from documents
  • Answer questions about content
  • Identify patterns and relationships
  • Classify and categorize text

Reasoning:

  • Follow logical steps
  • Break down complex problems
  • Make connections between concepts
  • Apply learned knowledge to new situations

LLM Limitations

No Real-World Actions (on their own, without added tools):

  • Cannot access external systems
  • Cannot execute code or commands
  • Cannot browse the internet
  • Cannot remember between conversations

No Learning After Training:

  • Cannot update their underlying knowledge from interactions
  • Cannot permanently absorb new information (it persists only within the current conversation)
  • Cannot improve from experience on their own
  • Fixed knowledge cutoff date

Context Limitations:

  • Limited memory window (conversation length)
  • No persistent memory across sessions
  • Cannot maintain long-term state
  • Loses context when limit exceeded
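
A practical consequence of the memory window: when a conversation grows too long, older turns have to be dropped or summarized. A minimal sketch of the trimming logic, using a crude word count as a stand-in for a real tokenizer:

# Rough sketch: keep only the most recent turns that fit a fixed token budget.
# Real systems use the model's own tokenizer; len(text.split()) is a crude stand-in.
def trim_history(turns: list[str], max_tokens: int = 4000) -> list[str]:
    kept, used = [], 0
    for turn in reversed(turns):              # walk from newest to oldest
        cost = len(turn.split())
        if used + cost > max_tokens:
            break                             # everything older than this is "forgotten"
        kept.append(turn)
        used += cost
    return list(reversed(kept))               # restore chronological order

history = [f"turn {i}: ..." for i in range(10_000)]
print(len(trim_history(history, max_tokens=200)))   # only the most recent turns survive

This forced forgetting is one source of the "amnesia" artifacts discussed later as a detection signal.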

Fraud-Relevant LLM Capabilities

Content Creation:

  • Generate convincing phishing emails
  • Create realistic social media profiles
  • Write persuasive social engineering scripts
  • Produce convincing imitations of institutional communications

Analysis:

  • Process fraud investigation reports
  • Identify patterns in transaction data
  • Analyze customer communication styles
  • Extract key information from documents

Part 2: AI Agents

What Is an AI Agent?

An AI Agent is an LLM enhanced with additional capabilities that allow it to:

  • Remember previous interactions
  • Use tools to access external systems
  • Plan multi-step tasks
  • Execute actions in sequence

Agent = LLM + Memory + Tools + Planning + Execution

Agent Architecture

┌─────────────────┐
│   LLM Core      │ ← Language understanding and generation
└─────────────────┘
         │
┌─────────────────┐
│   Memory        │ ← Stores conversation history and context
└─────────────────┘
         │
┌─────────────────┐
│   Tool Access   │ ← Interfaces with external systems
└─────────────────┘
         │
┌─────────────────┐
│   Planning      │ ← Breaks down complex tasks
└─────────────────┘
         │
┌─────────────────┐
│   Execution     │ ← Carries out planned actions
└─────────────────┘
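
The stack above can be read as a loop: the LLM core proposes the next step, a tool carries it out, memory stores the observation, and the cycle repeats until the goal is met or a step limit is reached. A minimal sketch of that loop (the llm object, its propose_action method, and the tool interface are placeholders, not any specific framework's API):

# Illustrative agent loop: the LLM proposes an action, a tool executes it,
# the observation is stored in memory, and the cycle repeats until done.
def run_agent(llm, tools: dict, goal: str, max_steps: int = 10):
    memory = [f"GOAL: {goal}"]                        # working memory for this task
    for _ in range(max_steps):
        # Planning: ask the model for the next action given the goal and history
        action = llm.propose_action(context=memory)   # e.g. {"tool": "search", "input": "..."}
        if action["tool"] == "finish":
            return action["input"]                    # model signals the goal is met
        # Execution: call the chosen tool and capture the result
        result = tools[action["tool"]].run(action["input"])
        # Memory: record the observation so the next step can build on it
        memory.append(f"{action['tool']} -> {result}")
    return "stopped: step limit reached"              # safety cap, not a completion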

Memory Systems

Short-term Memory:

  • Current conversation context
  • Active task progress
  • Immediate goals and objectives

Long-term Memory:

  • Historical interactions
  • Learned user preferences
  • Accumulated knowledge and experience

Working Memory:

  • Temporary storage for multi-step tasks
  • Intermediate results and calculations
  • Planning states and progress tracking
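
A minimal sketch of how these three layers are commonly separated in code; the class and method names are illustrative, not taken from a specific framework:

from collections import deque

class AgentMemory:
    """Illustrative three-layer memory: short-term buffer, long-term store, scratch space."""
    def __init__(self, short_term_size: int = 20):
        self.short_term = deque(maxlen=short_term_size)  # recent turns only; old ones fall off
        self.long_term = {}                              # persistent facts keyed by topic
        self.working = {}                                # scratch space for the current task

    def remember_turn(self, speaker: str, text: str):
        self.short_term.append((speaker, text))

    def store_fact(self, key: str, value: str):
        self.long_term[key] = value

    def context_for_prompt(self) -> str:
        facts = "; ".join(f"{k}: {v}" for k, v in self.long_term.items())
        turns = "\n".join(f"{s}: {t}" for s, t in self.short_term)
        return f"Known facts: {facts}\nRecent conversation:\n{turns}"

memory = AgentMemory()
memory.store_fact("case_id", "FR-2291")   # hypothetical example value
memory.remember_turn("analyst", "Pull the last 30 days of transactions.")
print(memory.context_for_prompt())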

Tool Integration

Database Access:

  • Query customer records
  • Search transaction histories
  • Access fraud databases
  • Retrieve account information

Communication Tools:

  • Send emails and SMS messages
  • Make phone calls
  • Post to social media
  • Send system notifications

Analysis Tools:

  • Run fraud detection algorithms
  • Generate reports and summaries
  • Perform statistical analysis
  • Create visualizations

System Integration:

  • Update case management systems
  • Trigger security protocols
  • Modify account settings
  • Execute financial transactions

Planning Capabilities

Task Decomposition:

  • Break complex goals into smaller steps
  • Identify required resources and tools
  • Determine optimal execution sequence
  • Plan for contingencies and error handling

Goal Management:

  • Maintain focus on primary objectives
  • Balance competing priorities
  • Adapt plans based on changing conditions
  • Track progress toward completion

Resource Management:

  • Allocate available tools and systems
  • Optimize for efficiency and effectiveness
  • Handle resource constraints
  • Coordinate multiple parallel tasks

Execution Framework

Action Sequencing:

  • Execute planned steps in order
  • Handle dependencies between actions
  • Manage timing and coordination
  • Ensure proper error handling

Feedback Processing:

  • Monitor results of each action
  • Adjust subsequent steps based on outcomes
  • Learn from successes and failures
  • Update plans based on new information

Error Recovery:

  • Detect when actions fail
  • Implement backup strategies
  • Escalate to human oversight when needed
  • Maintain system integrity
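
In code, error recovery usually comes down to bounded retries with backoff and an escalation path. A minimal sketch (the escalation hook is a placeholder for whatever human-review process a team uses):

import time

def escalate_to_human(action, exc):
    # Placeholder hook: in practice this would open a ticket or page an operator.
    print(f"Escalating: {action.__name__} failed with {exc!r}")

def execute_with_recovery(action, max_retries: int = 3, base_delay: float = 1.0):
    """Retry a failing action with exponential backoff, then escalate to a human."""
    for attempt in range(1, max_retries + 1):
        try:
            return action()
        except Exception as exc:                          # real systems catch narrower errors
            if attempt == max_retries:
                escalate_to_human(action, exc)
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))   # 1s, 2s, 4s, ...

Note that this kind of fixed, exponential backoff is itself one of the automation signatures discussed in Part 4: humans rarely retry at mathematically regular intervals.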

Part 3: Agentic Systems

What Does "Agentic" Mean?

Agency refers to the capacity to act independently and make decisions to achieve goals. An agentic system demonstrates:

  1. Autonomy - Acts without constant human direction
  2. Intentionality - Has goals and works toward them
  3. Adaptability - Modifies behavior based on results
  4. Persistence - Continues working over extended periods

The Agentic Spectrum

Level 1 - Reactive: Responds to specific prompts and requests
Level 2 - Proactive: Initiates actions to achieve assigned goals
Level 3 - Adaptive: Modifies strategies based on environmental feedback
Level 4 - Learning: Improves capabilities through experience
Level 5 - Creative: Develops novel approaches to achieve objectives

Key Agentic Properties

Goal-Directed Behavior:

  • Maintains focus on specific objectives
  • Makes decisions that advance toward goals
  • Balances short-term actions with long-term objectives
  • Prioritizes tasks based on goal importance

Environmental Awareness:

  • Monitors changing conditions
  • Responds to new information
  • Adapts to unexpected situations
  • Learns from environmental feedback

Strategic Thinking:

  • Plans multiple steps ahead
  • Considers alternative approaches
  • Anticipates potential obstacles
  • Optimizes for success probability

Self-Monitoring:

  • Tracks own performance
  • Identifies areas for improvement
  • Adjusts strategies based on results
  • Recognizes when to seek help

Agentic vs. Traditional Systems

Traditional Automation:

IF condition THEN action
  • Fixed rules and responses
  • No adaptation or learning
  • Limited to programmed scenarios
  • Requires human updates

Agentic Systems:

GOAL: Achieve objective
METHOD: Adapt approach based on results
  • Dynamic strategy development
  • Continuous learning and adaptation
  • Handles novel situations
  • Self-improving capabilities
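
The same contrast in code terms, as a hedged sketch; the rule thresholds and the agent's pick_strategy and attempt methods are illustrative placeholders:

# Traditional automation: a fixed rule that only changes when a human rewrites it.
def rule_based_check(transaction: dict) -> str:
    if transaction["amount"] > 10_000 and transaction["country"] != "US":
        return "review"
    return "allow"

# Agentic pattern: pursue a goal, observe results, adjust the approach each round.
def pursue_goal(agent, goal, strategies, max_attempts: int = 5) -> bool:
    history = []
    for _ in range(max_attempts):
        strategy = agent.pick_strategy(goal, strategies, history)  # adapts to feedback
        outcome = agent.attempt(goal, strategy)
        history.append((strategy, outcome))
        if outcome.succeeded:
            return True
    return False

The rule never changes between runs; the goal-directed loop changes its behavior every round based on what it has already tried.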

Multi-Agent Coordination

Agent Communication:

  • Share information and updates
  • Coordinate parallel activities
  • Negotiate resource allocation
  • Synchronize timing and actions

Collective Intelligence:

  • Combine specialized capabilities
  • Distribute complex tasks
  • Learn from shared experiences
  • Achieve goals beyond individual capacity

Emergent Behaviors:

  • Systems exhibit capabilities not programmed explicitly
  • Agents develop novel coordination strategies
  • Collective problem-solving approaches emerge
  • Complex behaviors arise from simple rules

Part 4: Technical Implementation Frameworks

Understanding how these systems actually work in practice

Agent Framework Architecture (LangChain Pattern)

Router Agent (Decision Engine):


Core Components:

  • Task Classification: Analyzes incoming requests to determine approach
  • Agent Selection: Chooses optimal specialist agent for each task
  • Resource Management: Allocates computational resources efficiently
  • Quality Control: Monitors output quality and consistency
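
A minimal sketch of the routing step; the keyword classifier and class names are illustrative stand-ins, since production routers typically use an LLM classifier rather than keyword matching:

class RouterAgent:
    """Illustrative router: classify the task, then hand it to a specialist agent."""
    def __init__(self, specialists: dict):
        self.specialists = specialists      # e.g. {"research": ..., "general": ...}

    def classify(self, request: str) -> str:
        # Crude keyword classifier as a stand-in for an LLM-based one
        if "profile" in request or "intel" in request:
            return "research"
        if "email" in request or "message" in request:
            return "communication"
        return "general"

    def route(self, request: str):
        task_type = self.classify(request)
        agent = self.specialists.get(task_type, self.specialists["general"])  # assumes a fallback
        return agent.handle(request)        # quality checks would wrap this call in practice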

Tool Integration Patterns

Model Context Protocol (MCP): An open standard for AI tool integration (originally developed by Anthropic, now adopted industry-wide). MCP allows any AI model to connect to any tool through a standardized interface—think of it like USB for AI. This means:

  • Tools built once work with any MCP-compatible agent
  • Attackers can mix and match capabilities easily
  • Defenders can also leverage MCP for detection and response tools
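
Conceptually, an MCP-exposed tool is described by a name, a human-readable description, and a JSON Schema for its inputs; the model reads that descriptor and decides when to call the tool. A simplified illustration of such a descriptor (not the full protocol, and the tool itself is hypothetical):

# Simplified illustration of a tool descriptor in the MCP style:
# a name, a description the model can read, and a JSON Schema for arguments.
lookup_account_tool = {
    "name": "lookup_account",
    "description": "Fetch account status and recent activity for a customer ID.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "days_of_history": {"type": "integer", "default": 30},
        },
        "required": ["customer_id"],
    },
}

Because the descriptor is standardized, the same tool definition can be offered to any MCP-compatible agent, which is exactly what makes capabilities easy to mix and match on both sides.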

Agent Tool-Calling Workflow:

# Simplified example of agent tool access
class FraudAgent:
    def __init__(self):
        self.tools = {
            "web_scraper": WebScrapingTool(),
            "voice_synthesizer": VoiceTool(),
            "email_composer": EmailTool(),
            "credential_validator": ValidationTool(),
        }

    def execute_attack(self, target_profile):
        # Router decides which tools to use
        intel = self.tools["web_scraper"].gather_intel(target_profile)
        voice_script = self.tools["voice_synthesizer"].create_script(intel)
        email = self.tools["email_composer"].craft_phishing_email(intel)
        return self.coordinate_attack(voice_script, email)

Tool Categories:

  • Information Gathering: Social media scrapers, data breach analyzers, public record searchers
  • Communication: Voice synthesis, email composition, SMS messaging, web form automation
  • Infrastructure: Domain registration, proxy management, credential validation
  • Coordination: Task scheduling, progress monitoring, success tracking
  • Computer Use: Browser automation, desktop control, form filling (emerging in 2025)

Memory Management Systems

Persistent Knowledge Architecture:

Agent Memory Stack:
├── Working Memory (Current Session)
│   ├── Target conversation state
│   ├── Active tool outputs
│   └── Real-time decision context
├── Episodic Memory (Campaign History)
│   ├── Previous target interactions
│   ├── Successful attack patterns
│   └── Failed attempt analysis
└── Semantic Memory (Knowledge Base)
    ├── Institution profiles (banks, companies)
    ├── Social engineering techniques
    └── Security bypass methods

Memory Types in Practice:

  • Vector Stores: Embedding-based similarity search for target research
  • Conversation Buffers: Maintaining context across multi-hour campaigns
  • Knowledge Graphs: Mapping relationships between targets, institutions, and attack vectors
  • Success Metrics: Tracking what works for continuous improvement
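
Under the hood, a vector store reduces to storing embeddings and ranking them by similarity. A minimal sketch with random vectors standing in for real embedding output:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# In practice these vectors come from an embedding model; random data is a stand-in.
rng = np.random.default_rng(0)
documents = {f"note_{i}": rng.normal(size=384) for i in range(1000)}

def search(query_vector: np.ndarray, top_k: int = 3):
    scored = [(name, cosine_similarity(query_vector, vec)) for name, vec in documents.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

print(search(rng.normal(size=384)))   # names of the closest stored items

Real agent memories replace the random vectors with embeddings of notes, transcripts, or target research, but the ranking logic is the same.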

Chain-of-Thought Reasoning

Multi-Step Attack Planning:

Reasoning Chain Example:
1. GOAL: Compromise target's bank account
2. ANALYSIS: Target is cautious, high-value professional
3. APPROACH: Multi-channel credibility building required
4. PLAN:
   ├── Step 1: Gather intel from social media (2-3 days)
   ├── Step 2: Send initial "fraud alert" SMS (creates urgency)
   ├── Step 3: Follow with spoofed bank call (builds trust)
   ├── Step 4: Direct to phishing site (captures credentials)
   └── Step 5: Execute transfer while maintaining phone contact
5. EXECUTION: Deploy specialized agents for each step
6. MONITORING: Track success metrics and adapt as needed

Reasoning Components:

  • Situational Analysis: Understanding target psychology and security awareness
  • Strategy Selection: Choosing optimal attack vector combinations
  • Timing Optimization: Coordinating actions for maximum effectiveness
  • Risk Assessment: Evaluating detection probability and mitigation strategies

Multi-Agent Coordination Frameworks

Hierarchical Coordination Pattern:


Communication Protocols:

  • Message Bus: Instant updates between all agents (Redis/RabbitMQ pattern)
  • Event Triggers: Automated handoffs based on success criteria
  • Shared State: Real-time synchronized knowledge across all agents
  • Failover Logic: Backup agents activate when primary agents fail
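
A minimal in-process sketch of the message-bus and event-trigger pattern; a real deployment would use Redis, RabbitMQ, or similar, and the topic names here are illustrative:

from collections import defaultdict

class MessageBus:
    """Tiny publish/subscribe bus; production systems use Redis, RabbitMQ, etc."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict):
        for handler in self.subscribers[topic]:
            handler(payload)

bus = MessageBus()
# Event trigger: when research finishes, the messaging agent picks up automatically.
bus.subscribe("intel.ready", lambda msg: print("messaging agent starts with", msg["target"]))
bus.publish("intel.ready", {"target": "example-target", "summary": "..."})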

Real-World Framework Examples

LangChain-Style Social Engineering Bot:

SocialEngineeringChain:
├── RouterAgent(decides_approach)
├── ResearchAgent(gathers_target_intel) 
├── PersonalizationAgent(crafts_messages)
├── ChannelAgents(sms, voice, email, web)
├── CredibilityAgent(maintains_institutional_facade)
└── ExecutionAgent(processes_captured_credentials)

AutoGPT-Style Persistent Campaign:

CampaignManager:
├── Goal: "Compromise target banking credentials"
├── Planning: Multi-step strategy development
├── Execution: Autonomous task completion
├── Memory: Persistent learning from attempts
└── Adaptation: Strategy refinement based on results

Detection Implications of Technical Patterns

Framework Signatures:

  • Router Patterns: Systematic task delegation with characteristic retry patterns
  • Tool Chains: Structured integration with predictable error handling
  • Memory Patterns: Context window limitations cause characteristic "forgetting" behaviors
  • Coordination Timing: Faster than human but with detectable automation signatures

Technical Red Flags:

Agentic Attack Indicators:
├── Templated Inputs (structured variations, not truly random)
├── Characteristic Retry Patterns (automated error recovery)
├── Timing Anomalies (too fast OR suspiciously regular intervals)
├── Lack of Human Noise (no typos, hesitation, or behavioral drift)
├── Context Window Artifacts (repeated information, "amnesia" patterns)
└── Parallel Operations Across Channels (same identity, simultaneous actions)

Important Reality Check: Agents are NOT perfect. They make API errors, lose context, hallucinate, and fail frequently. Detection focuses on their characteristic imperfections—they fail differently than humans do.
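
One of these signals, suspiciously regular timing, is simple to quantify: compute the coefficient of variation of the gaps between events and flag streams that are too uniform to be human. A minimal sketch (the threshold is illustrative and would need tuning against real traffic):

import statistics

def timing_regularity_flag(timestamps: list[float], cv_threshold: float = 0.15) -> bool:
    """Flag event streams whose inter-arrival times are suspiciously uniform.

    Human activity tends to be bursty (high coefficient of variation);
    scripted agents often emit near-constant intervals. Threshold is illustrative.
    """
    if len(timestamps) < 5:
        return False                                    # not enough events to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return True
    cv = statistics.stdev(gaps) / mean_gap              # coefficient of variation
    return cv < cv_threshold

bot_like = [t * 2.0 for t in range(20)]                 # an event every 2.0s exactly
human_like = [0, 3, 4, 9, 15, 16, 30, 31, 33, 50]
print(timing_regularity_flag(bot_like), timing_regularity_flag(human_like))   # True False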

Current Capabilities (2025):

  • Model Context Protocol (MCP): Anthropic's open standard for connecting AI to external tools, databases, and APIs—enables plug-and-play tool integration across any MCP-compatible agent
  • Frontier Models: GPT-5, Claude Opus 4.5, and Gemini 2.0 provide advanced reasoning with native tool use
  • Agent Frameworks: LangChain, LlamaIndex, CrewAI, and AutoGen enable complex multi-agent workflows
  • Vector Databases: Pinecone, Weaviate, and Chroma provide persistent semantic memory
  • Computer Use: Agents can now directly control browsers and desktop applications

Emerging Capabilities (2025-2026):

  • Self-improving agent architectures with automated prompt optimization
  • Cross-platform agent migration and persistent identity
  • Autonomous infrastructure provisioning and management
  • Real-time adversarial adaptation based on defense detection
  • Native multimodal agents (vision + audio + text + action)

Part 5: Implications for Fraud

Traditional Fraud Assumptions

Human Limitations:

  • Limited working memory and attention
  • Fatigue and consistency issues
  • Geographic and time constraints
  • Communication delays and errors

Linear Progression:

  • Attacks follow predictable patterns
  • Step-by-step progression
  • Limited coordination between activities
  • Clear cause-and-effect relationships

Agentic Fraud Realities

Enhanced Capabilities (Not Superhuman):

  • Extended operation hours (but require monitoring and error handling)
  • Parallel processing across multiple targets (but with coordination overhead)
  • Rapid iteration and adaptation (but prone to compounding errors)
  • Consistent messaging templates (but detectable patterns emerge)

Important Nuance: Agents amplify human capabilities but don't replace human limitations entirely. They introduce new failure modes: hallucinations, context loss, API failures, and brittle automation that breaks unexpectedly.

Non-Linear Attacks:

  • Multi-channel coordination (email + SMS + voice in sequence)
  • Adaptive strategies based on victim responses
  • Complex campaigns with interdependent steps
  • Pattern variation to evade simple detection rules

What Changes

Scale:

  • From individual attacks to coordinated campaigns
  • From single targets to many simultaneous targets
  • From manual effort to automated parallel processing
  • From slow iteration to rapid testing and adaptation

Sophistication:

  • From static scripts to adaptive strategies
  • From generic templates to personalized variations
  • From fixed patterns to dynamic variation (though still detectable)
  • From reactive to more proactive approaches

Coordination:

  • From independent actors to networked systems
  • From sequential to parallel execution
  • From slow communication to faster coordination
  • From random human errors to systematic machine errors (different, not absent)

Detection Challenges

Pattern Recognition:

  • Traditional fraud patterns still apply, but agents add new ones
  • Agents leave different indicators than humans (not fewer)
  • Static rules need supplementing with behavioral analysis
  • Cross-channel correlation becomes more important

Scale Considerations:

  • Agents can attempt more attacks, but each still leaves traces
  • Alert volume may increase, requiring smarter triage
  • Automation means faster iteration—defenders need faster feedback loops
  • Parallel attacks create correlation opportunities (same templates, timing patterns)

Behavioral Analysis:

  • Human behavioral models remain useful as a baseline for comparison
  • Suspicious consistency: lack of typos, perfect timing, no hesitation
  • Agent-specific indicators: retry patterns, context resets, templated variations
  • Key insight: Agents fail differently than humans—learn their failure modes
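
Templated variations can be surfaced the same way: measure pairwise text similarity across messages from supposedly unrelated senders and flag near-duplicates. A minimal standard-library sketch (the similarity threshold is illustrative):

from difflib import SequenceMatcher
from itertools import combinations

def near_duplicate_pairs(messages: list[str], threshold: float = 0.85):
    """Return index pairs of messages that look like variations of one template."""
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(messages), 2):
        if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold:
            pairs.append((i, j))
    return pairs

inbox = [
    "Hi Dana, we noticed unusual activity on your account ending 4412. Verify now.",
    "Hi Omar, we noticed unusual activity on your account ending 9930. Verify now.",
    "Lunch on Friday still good for you?",
]
print(near_duplicate_pairs(inbox))   # [(0, 1)]: same template, different fill-ins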

The Good News: These same AI tools are available to defenders. Fraud teams can use agents to detect agents, automate investigation, and respond faster.


Key Technical Concepts

LLM Foundation

  • Transformer Architecture: Neural network design enabling language understanding
  • Training Scale: Billions to trillions of parameters for sophisticated reasoning
  • Context Windows: Memory limitations affecting conversation length
  • Prompt Engineering: Techniques for effective LLM interaction

Agent Enhancement

  • Memory Management: Systems for persistent information storage
  • Tool Integration: APIs and interfaces for external system access
  • Planning Algorithms: Methods for multi-step task decomposition
  • Execution Engines: Frameworks for reliable action performance

Agentic Properties

  • Goal Optimization: Algorithms for objective-directed behavior
  • Adaptation Mechanisms: Learning systems for strategy improvement
  • Coordination Protocols: Communication methods for multi-agent systems
  • Emergence Patterns: Behaviors arising from complex system interactions

Practical Applications

Legitimate Uses

Customer Service:

  • 24/7 support with consistent knowledge retention
  • Personalized interactions at scale
  • Multi-language support with cultural awareness
  • Complex problem resolution with escalation protocols

Fraud Detection:

  • Continuous monitoring of all transactions
  • Adaptive pattern recognition
  • Real-time risk assessment
  • Coordinated response across systems

Malicious Applications

Social Engineering:

  • Highly personalized messaging based on extensive automated research
  • Multi-channel coordination for credibility
  • Adaptive conversation strategies
  • Persistent persuasion campaigns

Financial Fraud:

  • Automated account takeover attempts
  • Coordinated transaction manipulation
  • Real-time adaptation to security measures
  • Large-scale parallel operations

Summary

Understanding the Progression

  1. LLMs: Advanced text generation and analysis
  2. Agents: LLMs enhanced with memory, tools, and planning
  3. Agentic Systems: Autonomous operation toward goals with adaptation

Key Differentiators

  • LLMs: Reactive text processing
  • Agents: Active task execution
  • Agentic: Independent goal pursuit

Fraud Professional Implications

Traditional methods assume human attackers with human limitations. Agentic systems have different constraints—they're faster at some things, worse at others, and leave different traces. Effective defense requires understanding both what agents can do and how they fail.

The key advantage for defenders: You have access to the same tools. Use AI to detect AI, automate investigations, and scale your response capabilities.

Next: Examining specific agentic fraud attack patterns and defensive strategies.


Fast Facts

  • Response Speed: Modern AI systems achieve sub-600ms response times, approaching human conversation speed (LLM Latency Research)
  • Agent Capabilities: Advanced multi-agent frameworks can coordinate 100+ tools and capabilities simultaneously (Multi-Agent Systems Overview)
  • Processing Speed: AI systems demonstrate latency-sensitive decision making with significant speed advantages over human reaction times (AI Speed Research)
  • Parallel Operations: Modern systems support 10,000+ parallel operations through advanced parallelization techniques (Parallel Computing Guide)
  • Hardware Acceleration: Cutting-edge photonic AI processors achieve near-electronic precision while processing complex models like ResNet and BERT (Photonic AI Acceleration)

Sources: Recent AI research papers, technical documentation, and peer-reviewed studies from 2024-2025
