Preface
This book exists because I kept having the same conversation.
A security team would bring me in to review their "AI security posture." They'd walk me through their prompt injection defenses, their model evaluation frameworks, their responsible AI policies. They'd show me dashboards tracking hallucination rates and bias metrics. They were proud of the work—and they should have been. It was thoughtful, well-intentioned work.
Then I'd ask a simple question: "Can you trace the data that trained this model back to its sources?"
Silence.
"Do you know what permissions this AI system has to your internal systems?"
More silence.
"If this model were compromised tomorrow, what's the blast radius?"
At that point, the conversation would shift. The confidence would drain from the room. And we'd start talking about the actual architecture.
This pattern repeated across industries—financial services, healthcare, technology, manufacturing. Organizations that had invested millions in AI capabilities and significant resources in AI security tools were missing fundamental architectural controls. They had secured the model while leaving the system exposed. They had built elaborate defenses at one layer while ignoring the layers that mattered more.
I wrote this book for the practitioners who sense that something is wrong with how AI security is being approached but can't quite articulate what it is. For the security architects who are skeptical of vendor claims and framework theater. For the engineers who know that real security is about systems, not slogans.
Who This Book Is For
This book assumes you are technically sharp. You understand security fundamentals—identity, access control, network segmentation, logging, incident response. You've seen security programs succeed and fail. You know the difference between compliance and actual security.
This book does not assume you understand machine learning. You don't need to know how transformers work, what gradient descent does, or why attention mechanisms matter. Those are implementation details for ML engineers. You need to understand AI systems—how they're built, deployed, operated, and integrated. That's what this book teaches.
If you're a security engineer expanding into AI, this book will give you the architectural mental models you need. If you're a security architect responsible for AI systems, this book will sharpen your questions and focus your attention. If you're a CISO trying to understand what AI security actually requires, this book will cut through the noise.
What This Book Is Not
This book is not a threat catalog. I won't walk you through every known attack against AI systems. Threat catalogs create a whack-a-mole mentality—find threat, deploy countermeasure, repeat. Architectural thinking asks different questions: What conditions allow threats to succeed? What design decisions reduce attack surface? Where should trust boundaries exist?
This book is not a product guide. I won't tell you which vendors to use or which tools to deploy. Vendor landscapes change quarterly. Products come and go. Architecture endures. If you understand the architectural requirements, you can evaluate tools yourself.
This book is not academic. I won't explain the mathematics of machine learning or survey the research literature on adversarial examples. That work exists and has value. This book is for practitioners who need to secure systems in production, not researchers exploring theoretical boundaries.
This book is not reassuring. AI security is hard. The systems are complex, the risks are real, and the controls are imperfect. I won't pretend that following the right checklist makes you safe. I will help you think clearly about what you can and cannot control.
How to Read This Book
The book is structured around the AI security lifecycle: data, training, deployment, runtime, infrastructure, and governance. The chapters stand alone; you can read them in any order based on where your organization's gaps are most acute.
That said, I recommend reading Chapters 1 and 2 first. They establish the mental model that the rest of the book builds on. If you skip them, the later chapters will feel like disconnected advice rather than coherent architecture.
Chapter 10 is different from the others. It's a diagnostic tool—a set of questions that reveal whether your AI security architecture actually exists or merely appears to exist in documentation. You might read it last as a capstone, or first as an assessment. Either works.
Throughout, I use hypothetical scenarios rather than real-world case studies. This is intentional. Real incidents come with details that distract from architectural lessons. They invite debates about whether the organization was negligent or unlucky. Hypotheticals let us focus on the architecture without the noise.
A Note on Opinions
This book is opinionated. I believe AI security is a systems problem, not a model problem. I believe most AI security tools address the wrong layer. I believe governance without technical enforcement is theater. I believe agents are an identity and authorization problem, not an AI problem.
You may disagree. That's fine. The goal isn't to make you agree with me—it's to make you think architecturally about AI security. If you reach different conclusions through rigorous architectural reasoning, we'll both have succeeded.
What I won't tolerate is hand-waving. "It depends" is not an architecture. "We'll figure it out later" is not a security strategy. "The vendor handles it" is not risk management. AI security requires the same precision and rigor that good security requires everywhere else. This book aims to demonstrate what that looks like.
Acknowledgments
No book emerges from a single mind. This one benefited from countless conversations with security architects, ML engineers, platform teams, and incident responders who shaped my thinking. The organizations that let me see their AI architectures—and their failures—taught me more than any research paper could. I'm grateful to all of them, even though I can't name them here.
Let's begin.