5 Ways Security Teams Are Using AI That Most Vendors Won't Tell You
Opinionated, practical AI tips from real security practitioners: the kind of stuff that gets shared in security Slack channels.
Vendors will tell you their AI tools find threats faster, reduce analyst burnout, and close detection gaps. That’s all true in the right deployment. What they won’t tell you is how practitioners are using AI in the gaps between licensed products: the workflows that actually save hours every week and don’t show up in any product brochure.
These are five things we’ve seen work in real security environments.
1. Using LLMs to Write Incident Reports (and Cutting Report Time by 75% or More)
SOC analysts spend a disproportionate amount of time writing. Incident reports, shift handoffs, executive summaries, post-mortems. The documentation burden is real and it’s not improving with more tooling. Conservative estimates from practitioner surveys put documentation at 30-40% of analyst time.
Microsoft Security Copilot handles this natively for Microsoft-stack environments; it generates incident summaries automatically from Sentinel and Defender data. But most teams aren’t fully in the Microsoft stack, and even those who are still need to produce reports in specific formats for their organization.
The workflow we use: feed raw alert data, timeline, and observed IOCs into an LLM (with appropriate data handling) and use a structured prompt to generate a draft report.
The prompt template:
You are a senior SOC analyst writing an incident report for a Tier 2 escalation.
Incident data:
- Alert type: [paste alert title and severity]
- Timeline: [paste key events with timestamps]
- Affected assets: [list hostnames, usernames, IPs]
- IOCs observed: [list hashes, IPs, domains]
- Actions taken: [list what you've already done]
Generate an incident report with the following sections:
1. Executive Summary (3-4 sentences, non-technical)
2. Incident Timeline (bullet points with timestamps)
3. Scope of Impact (affected systems and data)
4. Root Cause Analysis (preliminary, mark as preliminary if unknown)
5. Actions Taken
6. Recommended Next Steps
7. IOC List (formatted for SIEM ingestion)
Tone: factual, past-tense, no speculation without labeling it as such.
You paste in the raw data. The LLM produces a structured draft in under 60 seconds. You spend 5-10 minutes reviewing and correcting, not 45 minutes writing from scratch.
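If you'd rather script the paste step than assemble the prompt by hand, the template fills cleanly from structured alert data. A minimal Python sketch; the field names and the build_report_prompt helper are our own invention, and the actual LLM call is left out so you can route it through whichever provider your data-handling policy allows:

```python
# Fill the incident-report prompt template from structured alert data.
# Field names here are illustrative; adapt them to your SIEM's export format.

REPORT_PROMPT = """You are a senior SOC analyst writing an incident report for a Tier 2 escalation.

Incident data:
- Alert type: {alert}
- Timeline: {timeline}
- Affected assets: {assets}
- IOCs observed: {iocs}
- Actions taken: {actions}

Generate an incident report with sections: Executive Summary, Incident Timeline,
Scope of Impact, Root Cause Analysis (mark as preliminary if unknown),
Actions Taken, Recommended Next Steps, IOC List (formatted for SIEM ingestion).
Tone: factual, past-tense, no speculation without labeling it as such."""

def build_report_prompt(incident: dict) -> str:
    """Render the template; list fields become indented newline bullets."""
    def fmt(value):
        return "\n  " + "\n  ".join(value) if isinstance(value, list) else value
    return REPORT_PROMPT.format(
        alert=incident["alert"],
        timeline=fmt(incident["timeline"]),
        assets=fmt(incident["assets"]),
        iocs=fmt(incident["iocs"]),
        actions=fmt(incident["actions"]),
    )

prompt = build_report_prompt({
    "alert": "Suspicious PowerShell execution (High)",
    "timeline": ["09:02 UTC alert fired", "09:15 UTC host isolated"],
    "assets": ["WS-0421", "jdoe"],
    "iocs": ["185.220.101.4", "evil-updates[.]com"],
    "actions": ["Isolated endpoint", "Reset user credentials"],
})
# `prompt` then goes to whichever LLM your data-handling policy permits.
```

The point of scripting this is consistency: every analyst's report draft starts from identical structure, which makes review faster.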
The time savings compound quickly. A team handling 20 incidents per week gets back 10-15 analyst hours, time that goes back into actual investigation work.
2. Converting Runbooks into Decision Trees with an LLM
Every SOC has runbooks. Most of them are Word documents that haven’t been updated since the analyst who wrote them left the company. They’re written in prose, organized inconsistently, and useless to a Tier 1 analyst at 3am dealing with an unfamiliar alert type.
The workflow: feed your existing runbooks into an LLM and ask it to generate structured decision trees. The output can be a Mermaid diagram (which renders directly in many documentation platforms and SOAR tools), a numbered decision flowchart, or a YAML/JSON structure that feeds directly into a SOAR playbook.
Prompt for converting a prose runbook to a Mermaid decision tree:
I'm giving you a SOC runbook written in prose. Convert it into a Mermaid flowchart
decision tree with yes/no branches. Each decision node should be a question an analyst
can answer from available data. Terminal nodes should specify the action to take.
[Paste runbook text here]
Output format: valid Mermaid flowchart syntax, starting with 'flowchart TD'
Example output for a basic phishing triage runbook:
flowchart TD
A[Phishing Alert Triggered] --> B{Is sender domain registered within the last 30 days?}
B -->|Yes| C[HIGH PRIORITY: Sandbox attachment, block sender domain]
B -->|No| D{Does email contain attachment or link?}
D -->|Attachment| E{Does hash match known malware?}
E -->|Yes| F[CONFIRMED MALICIOUS: Quarantine, isolate recipient device]
E -->|No| G[Submit to sandbox for detonation]
D -->|Link| H{Does link domain match threat intel feeds?}
H -->|Yes| F
H -->|No| I{Has recipient clicked link?}
I -->|Yes| J[Investigate endpoint, check proxy logs]
I -->|No| K[Quarantine email, monitor recipient]
This output can be pasted directly into Confluence, rendered in GitBook, or used as a skeleton when building a SOAR playbook. The branch logic maps naturally onto visual playbook builders like Splunk SOAR’s, though translating it is a manual step.
The practical value: new analysts can follow a decision tree without reading prose runbooks. Alert response becomes consistent across the team. And the process of converting runbooks to decision trees often surfaces outdated steps, missing escalation paths, and undefined conditions that your current runbooks gloss over.
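For teams that prefer the YAML/JSON route mentioned above, the same tree renders to Mermaid mechanically. A sketch with an invented schema (a node is either an "ask" with yes/no branches or a terminal "do" action); adapt it to whatever structure your SOAR playbook importer expects:

```python
# Render a nested decision-tree structure (the JSON/YAML option above) into
# Mermaid flowchart syntax. The tree schema is our own invention:
# a node is either {"ask": ..., "yes": ..., "no": ...} or {"do": ...}.

def to_mermaid(tree: dict) -> str:
    lines, counter = ["flowchart TD"], [0]

    def walk(node) -> str:
        name = f"N{counter[0]}"
        counter[0] += 1
        if "do" in node:  # terminal node: an action to take
            lines.append(f'{name}["{node["do"]}"]')
        else:             # decision node: yes/no branches
            lines.append(f'{name}{{"{node["ask"]}"}}')
            lines.append(f"{name} -->|Yes| {walk(node['yes'])}")
            lines.append(f"{name} -->|No| {walk(node['no'])}")
        return name

    walk(tree)
    return "\n".join(lines)

chart = to_mermaid({
    "ask": "Does hash match known malware?",
    "yes": {"do": "Quarantine and isolate recipient device"},
    "no": {"do": "Submit to sandbox for detonation"},
})
```

Keeping the tree in JSON and generating the diagram means the machine-readable version stays the source of truth, and the rendered flowchart can never drift out of sync with it.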
3. Using Snyk’s Free Tier as a Poor Man’s SCA
Most security teams know Snyk as an enterprise tool, and most developers know the free tier covers open-source dependencies. What fewer teams realize: the free tier also covers container images and infrastructure as code, which adds up to meaningful security testing at zero cost.
Free tier limits (as of early 2026):
- 200 open-source dependency tests per month
- 100 container image scans per month
- 300 infrastructure as code checks per month
For a small security team auditing side projects, internal tools, or a subset of production repositories, this is enterprise-grade software composition analysis for nothing.
The CLI workflow:
# Install the Snyk CLI
npm install -g snyk
# Authenticate (links to your free account)
snyk auth
# Test a project's open-source dependencies
snyk test
# Monitor a project (tracks vulnerabilities over time, alerts on new CVEs)
snyk monitor
# Test a specific package file
snyk test --file=requirements.txt
# Test a container image
snyk container test nginx:latest
# Test infrastructure as code
snyk iac test ./terraform/
snyk test outputs a list of vulnerabilities by severity with CVE IDs, affected package versions, and available fix versions. For a small team auditing a handful of repos, this replaces paying for a commercial SCA tool.
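snyk test also takes a --json flag, which makes the output scriptable, for example to gate a CI job on contractor repos. A Python sketch; the should_block_merge helper is our own, and the JSON field names ("vulnerabilities", "severity", "packageName") reflect recent CLI versions, so verify against yours before relying on it:

```python
from collections import Counter

# Summarize `snyk test --json` output by severity and gate a merge on it.
# Vulnerability IDs below are illustrative placeholders, not real advisories.

def summarize(report: dict) -> Counter:
    """Tally findings by severity across the report's vulnerability list."""
    return Counter(v["severity"] for v in report.get("vulnerabilities", []))

def should_block_merge(report: dict, fail_on=("critical", "high")) -> bool:
    """Fail the CI job if any finding is at one of the chosen severities."""
    counts = summarize(report)
    return any(counts[sev] > 0 for sev in fail_on)

# Trimmed example report; in CI you'd parse the file snyk wrote with --json.
report = {"vulnerabilities": [
    {"id": "SNYK-PYTHON-EXAMPLE-0001", "severity": "high",
     "packageName": "requests", "version": "2.19.0"},
    {"id": "SNYK-PYTHON-EXAMPLE-0002", "severity": "medium",
     "packageName": "urllib3", "version": "1.23"},
]}
```

With this in place, "audit contractor code before merge" stops being a manual checklist item and becomes a pipeline step that fails loudly.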
The practical use case we’ve seen work well: internal security teams use the Snyk free tier to audit contractor-written code before it’s merged into production. It takes 5 minutes per repo and catches the obvious stuff: outdated dependencies with known exploits, npm packages with known malicious versions, basic Terraform misconfigurations.
The honest ceiling: the free tier caps open-source dependency tests at 200/month. A development organization with 20 repositories running CI/CD pipelines will burn through that in days. For any meaningful coverage of an active development program, you need the paid tier ($25/developer/month for Team) or an alternative. The free tier is useful for spot-checks, audits, and small teams, not for continuous coverage of large codebases.
4. Building Custom GPTs Loaded with Your Stack Documentation
Every enterprise security team has the same new-analyst onboarding problem: it takes 6-12 months before someone is reliably effective, because they need to learn your specific tooling, your specific environment, your specific escalation procedures. Documentation is scattered across Confluence, email threads, and the brains of senior analysts.
The workflow: build a custom GPT (or Claude project) pre-loaded with your documentation, and let it serve as an always-available first stop for procedural questions.
What to load into the custom GPT knowledge base:
- Tool documentation for your specific stack (how Falcon console works in your environment, how to query your specific Splunk indexes)
- Internal playbooks and runbooks (after scrubbing sensitive data)
- Escalation procedures and contact information
- Compliance framework requirements relevant to your industry
- Your specific detection logic explanations
With this loaded, a Tier 1 analyst can ask:
- “How do I investigate a Falcon alert for suspicious PowerShell execution in our environment?”
- “What’s the escalation path for a ransomware detection outside business hours?”
- “How do I run a KQL query in Sentinel to check login history for a specific user?”
- “What does our policy say about isolating a compromised endpoint while a user is working on it?”
The custom GPT answers based on your actual documentation, not generic vendor help articles.
Building it in ChatGPT:
- Go to chat.openai.com → Explore GPTs → Create
- Configure with system instructions: “You are an internal SOC assistant for [org]. Answer questions based only on the provided documentation. If the answer isn’t in the documentation, say so and suggest who to ask.”
- Upload your documentation files (PDF, Word, text, up to 20 files)
- Restrict access to your team workspace
Security considerations: Don’t load anything into ChatGPT that you wouldn’t be comfortable with OpenAI processing. ChatGPT Enterprise and Team exclude your data from model training by default; Free and Plus use conversations for training unless you opt out. For genuinely sensitive documentation, a self-hosted LLM (for example, Ollama running Llama 3 entirely on your own hardware) is the appropriate path.
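If you do go self-hosted, the retrieval half of the assistant can start very simple. A deliberately crude sketch using keyword overlap instead of embeddings, just to show the shape of the first-stage lookup; the snippets and the top_snippets helper are illustrative, and a real deployment would put embedding search in front of the local model:

```python
import re

# Minimal keyword-overlap retriever over runbook snippets: the retrieval half
# of a self-hosted doc assistant. Snippets and scoring are illustrative; a
# production setup would use embeddings and pass top hits to a local model.

DOCS = {
    "falcon-powershell": "To investigate a Falcon alert for suspicious "
                         "PowerShell, open the detection, review the process "
                         "tree, and check command-line arguments.",
    "ransomware-escalation": "Ransomware detections outside business hours "
                             "escalate directly to the on-call incident "
                             "commander.",
    "endpoint-isolation": "Policy permits isolating a compromised endpoint "
                          "without user consent when active compromise is "
                          "suspected.",
}

def tokenize(text: str) -> set:
    return set(re.findall(r"[a-z]+", text.lower()))

def top_snippets(question: str, k: int = 2) -> list:
    """Rank snippet IDs by how many words they share with the question."""
    q = tokenize(question)
    scored = sorted(DOCS.items(),
                    key=lambda item: len(q & tokenize(item[1])),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

hits = top_snippets("What is the escalation path for a ransomware detection "
                    "outside business hours?")
```

Even this naive ranking answers the "who do I ask at 3am" class of question; swapping in embedding search later doesn't change the surrounding plumbing.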
This same approach works for compliance frameworks. Load your SOC 2 controls documentation, NIST CSF mapping, and audit evidence requirements into a custom GPT and let analysts ask compliance questions in natural language rather than hunting through 200-page control frameworks.
5. Using AI to Detect Vendor Lock-In Before It Bites You
This one gets less attention than detection workflows, but it’s saved real organizations real money. Most security teams don’t have a clear picture of how locked in they are to any given vendor until they’re mid-negotiation on renewal and realize they can’t leave.
The workflow: document your current tool inventory and integration dependencies, then use an LLM to analyze the dependency graph and identify risk.
The prompt:
I'm going to give you our security tool inventory and integration map.
Analyze it for:
1. Single points of failure (tools where losing the vendor would break multiple workflows)
2. Vendor dependencies (tools that only work well within one vendor's ecosystem)
3. Data portability risks (tools where our data would be difficult to export or migrate)
4. Pricing risk (tools with consumption-based pricing that could spike unpredictably)
5. Alternatives that would reduce dependency
Tool inventory:
[List your tools, e.g.:
- SIEM: Splunk Enterprise Security (ingestion-based pricing, SPL queries)
- EDR: CrowdStrike Falcon Enterprise + OverWatch
- SOAR: Splunk SOAR (300+ integrations, custom playbooks)
- Email Security: Microsoft Defender for Office 365
- Threat Intel: Recorded Future (integrated with Splunk via API)
- Cloud Security: Wiz (agentless, API-based)
- Identity: Okta SSO + CrowdStrike Identity Protection]
Integration dependencies:
[List what connects to what, e.g.:
- Splunk receives logs from Falcon, Okta, AWS CloudTrail, Wiz
- SOAR playbooks call Falcon for endpoint isolation, Okta for account lockout
- Recorded Future enriches Splunk alerts via API lookup]
The LLM output is usually illuminating. Common findings:
- “Your SOAR playbooks use Falcon-specific API calls; switching EDR vendors would require rebuilding all automation”
- “Splunk ingestion pricing means your security monitoring cost scales with your infrastructure growth rather than being fixed”
- “Recorded Future is only queried via Splunk; if you migrate off Splunk, you lose the threat intel enrichment until you rebuild the integration”
This isn’t analysis the LLM is inventing; it’s surfacing the implications of the dependency map you already described. The value is in having it organized and explained before you’re in a renewal conversation.
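The single-points-of-failure part can also be pre-computed deterministically before you hand the map to an LLM. A Python sketch over an edge list mirroring the example integration map above; blast_radius is our own hypothetical helper, and the environment is the sample one, not any real deployment:

```python
from collections import Counter

# Count how many integrations each tool participates in: a deterministic
# first pass at finding single points of failure in the dependency map.
# Edges mirror the hypothetical example environment described above.

INTEGRATIONS = [
    ("Splunk", "Falcon"),           # Splunk ingests Falcon telemetry
    ("Splunk", "Okta"),
    ("Splunk", "AWS CloudTrail"),
    ("Splunk", "Wiz"),
    ("Splunk SOAR", "Falcon"),      # playbooks call Falcon for isolation
    ("Splunk SOAR", "Okta"),        # playbooks call Okta for account lockout
    ("Splunk", "Recorded Future"),  # threat intel enrichment via API
]

def blast_radius(edges) -> Counter:
    """Integrations per tool; a high count means losing that vendor breaks
    many workflows at once."""
    radius = Counter()
    for consumer, provider in edges:
        radius[consumer] += 1
        radius[provider] += 1
    return radius

ranked = blast_radius(INTEGRATIONS).most_common()
```

On this sample map, Splunk tops the ranking by a wide margin, which is exactly the "single point of failure" finding you'd want quantified before a renewal conversation.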
None of the tools above will help you with this analysis because none of them benefit from you identifying their lock-in risk. That’s precisely why it doesn’t show up in vendor-produced content, and why the 20 minutes it takes to run this exercise is worth it before any major security tool renewal.