Further Reading
This book intentionally avoids citations and academic references—it's written for practitioners, not researchers. But for readers who want to go deeper, these resources provide solid foundations. I've organized them by topic rather than chapter, as the best resources often span multiple concerns.
Foundational Security Architecture
Designing Secure Systems: Problem-Oriented Approaches
The principles in this book apply to any system. If you want to strengthen your architectural thinking beyond AI, study how security architecture is done in distributed systems, cloud infrastructure, and zero-trust networks. The mental models transfer directly.
Threat Modeling
Understanding how to systematically identify threats, trust boundaries, and attack surfaces is essential. The STRIDE methodology and its successors provide structured approaches. Adam Shostack's work on threat modeling remains foundational.
Zero Trust Architecture
NIST Special Publication 800-207 articulates zero trust principles that apply directly to AI systems. The core insight—never trust, always verify—is exactly what AI architectures need but rarely implement.
AI Security Research
MITRE ATLAS (Adversarial Threat Landscape for AI Systems)
The most comprehensive taxonomy of AI-specific attack techniques. ATLAS catalogs known attacks against machine learning systems and maps them to the adversarial lifecycle. Essential for understanding what attacks are possible, even if this book argues that attack catalogs are insufficient.
OWASP Machine Learning Security Top 10
A practitioner-oriented list of the most critical security risks in machine learning applications. Useful as a checklist, though it should be supplemented with architectural thinking.
NIST AI Risk Management Framework (AI RMF)
The authoritative U.S. government framework for managing AI risks. More governance-oriented than technical, but important for understanding the regulatory landscape and organizational accountability structures.
Data Security and Governance
Data Lineage and Data Observability
The emerging field of data observability addresses many of the data traceability challenges discussed in Chapter 3. Look for resources on data lineage tools, data contracts, and data quality monitoring in modern data platforms.
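At its simplest, a data contract is a machine-checkable schema that producers and consumers agree on before data enters a pipeline. A minimal sketch, with illustrative field names (any real data contract tooling will be far richer):

```python
# Minimal data-contract check: validate records against an agreed schema
# before they enter a training pipeline. Field names are illustrative.

CONTRACT = {
    "user_id": str,
    "event_type": str,
    "timestamp": float,
}

def violations(record: dict) -> list[str]:
    """Return a list of contract violations for one record."""
    problems = []
    for field, expected_type in CONTRACT.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(
                f"wrong type for {field}: {type(record[field]).__name__}"
            )
    return problems

good = {"user_id": "u1", "event_type": "click", "timestamp": 1700000000.0}
bad = {"user_id": "u1", "timestamp": "not-a-number"}
print(violations(good))  # []
print(violations(bad))   # two violations
```

The point of the sketch is the enforcement boundary: violations are detected where data crosses between teams, not discovered downstream after a model has trained on them.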
Privacy-Preserving Machine Learning
Differential privacy, federated learning, and secure multi-party computation offer technical approaches to training on sensitive data. These are advanced topics but increasingly relevant for organizations handling regulated data.
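To make differential privacy concrete, here is a minimal sketch of the classic Laplace mechanism applied to a counting query (sensitivity 1, so the noise scale is 1/epsilon). This illustrates the core idea only; production systems need careful privacy accounting:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so the Laplace noise scale is 1/epsilon.
    """
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
print(dp_count(1000, epsilon=0.5, rng=rng))  # roughly 1000, plus noise
```

Smaller epsilon means stronger privacy and noisier answers; the noisy count remains unbiased, so repeated releases average toward the truth, which is exactly why privacy budgets must be tracked across queries.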
Supply Chain Security
Software Supply Chain Security
SLSA (Supply-chain Levels for Software Artifacts), Sigstore, and software bill of materials (SBOM) standards provide frameworks that can be adapted for model supply chains. Understanding software supply chain security is a prerequisite for understanding model supply chain security.
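The simplest adaptation of these ideas to models is digest pinning: record a cryptographic hash of the artifact at publish time and refuse to load anything that doesn't match. A minimal sketch, with a throwaway file standing in for model weights:

```python
# Sketch: pin a model artifact to a recorded digest, the same way an
# SBOM pins software components. File name and contents are illustrative.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file incrementally so large model files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: Path, pinned_digest: str) -> bool:
    """Refuse to load a model whose bytes don't match the manifest."""
    return sha256_of(path) == pinned_digest

weights = Path("model.bin")
weights.write_bytes(b"pretend these are model weights")
manifest_digest = sha256_of(weights)  # recorded at publish time
print(verify_artifact(weights, manifest_digest))  # True
weights.write_bytes(b"tampered weights")
print(verify_artifact(weights, manifest_digest))  # False
```

Real supply chain tooling layers signatures and provenance attestations on top of this, but the architectural control is the same: verification happens at load time, inside your trust boundary.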
Model Cards and Datasheets for Datasets
Research on documenting model and dataset characteristics. Model cards describe model capabilities, limitations, and intended uses. Datasheets for datasets describe data provenance, collection methods, and known issues. Both are emerging practices for supply chain transparency.
Agent Security
LLM Agent Architectures
The technical architectures for agentic systems are evolving rapidly. Understanding frameworks like ReAct, tool-use patterns, and memory architectures helps you reason about where security controls must exist.
Identity and Access Management
Agent security is fundamentally an IAM problem. Deep knowledge of identity federation, OAuth flows, service account management, and fine-grained authorization provides the foundation for securing agentic systems.
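What fine-grained authorization means for an agent can be sketched in a few lines: each agent identity carries an explicit allowlist of tools, checked before every dispatch, with denial as the default. The identities and tool names below are illustrative:

```python
# Sketch of fine-grained authorization for agent tool calls. Each agent
# identity gets an explicit tool allowlist, enforced before dispatch.
# Agent IDs and tool names are illustrative.

TOOL_GRANTS = {
    "support-agent": {"search_kb", "create_ticket"},
    "billing-agent": {"lookup_invoice"},
}

class ToolDenied(Exception):
    pass

def authorize_tool_call(agent_id: str, tool: str) -> None:
    """Deny by default: unknown agents and ungranted tools both fail."""
    if tool not in TOOL_GRANTS.get(agent_id, set()):
        raise ToolDenied(f"{agent_id} may not call {tool}")

authorize_tool_call("support-agent", "create_ticket")  # permitted
try:
    authorize_tool_call("support-agent", "lookup_invoice")
except ToolDenied as e:
    print(e)
```

The sketch omits what matters most in practice, which is exactly the IAM material above: how agent identities are issued and federated, how grants are scoped to the requesting user's permissions, and how tokens are rotated.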
Incident Detection and Response
Security Observability
Modern approaches to security monitoring emphasize observability over static alerting. Understanding how to instrument systems for security-relevant telemetry, build detection pipelines, and investigate incidents in distributed systems applies directly to AI.
Incident Response Frameworks
NIST SP 800-61 (Computer Security Incident Handling Guide) and similar frameworks provide structured approaches to incident response. AI incidents require the same discipline, adapted for AI-specific forensic challenges.
Governance and Risk
AI Ethics and Responsible AI
While this book focuses on security rather than ethics, the responsible AI literature addresses adjacent concerns—fairness, transparency, accountability. Understanding these frameworks helps navigate the governance landscape.
Risk Quantification
FAIR (Factor Analysis of Information Risk) and similar frameworks provide structured approaches to quantifying security risk. Applying risk quantification to AI systems helps prioritize investments and communicate with leadership.
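The core FAIR decomposition is annualized loss as loss event frequency times loss magnitude, estimated over uncertain inputs. A minimal Monte Carlo sketch; the parameter ranges are illustrative, not calibrated estimates:

```python
# FAIR-style sketch: annualized loss = loss event frequency x magnitude,
# estimated by Monte Carlo over uncertain inputs. Ranges are illustrative.
import random
import statistics

def simulate_annual_loss(rng: random.Random, trials: int = 50_000) -> float:
    losses = []
    for _ in range(trials):
        # Loss event frequency: 0 to 4 events/year, most likely 1.
        frequency = rng.triangular(0.0, 4.0, 1.0)
        # Loss magnitude per event: $10k to $500k, most likely $50k.
        magnitude = rng.triangular(10_000, 500_000, 50_000)
        losses.append(frequency * magnitude)
    return statistics.mean(losses)

rng = random.Random(7)
print(f"Expected annual loss: ${simulate_annual_loss(rng):,.0f}")
```

Even this toy version shows why quantification helps with leadership conversations: it turns "prompt injection is a risk" into a dollar figure with stated assumptions that can be debated and refined.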
Staying Current
The AI security field evolves rapidly. Academic conferences (IEEE S&P, USENIX Security, NeurIPS workshops), industry publications, and security research blogs provide ongoing updates. Be skeptical of vendor-sponsored content, which often emphasizes product capabilities over architectural realities.
A Note on Sources
I've intentionally avoided listing specific books, papers, or URLs. Resources go out of date; URLs break; papers get superseded. The topics above give you search terms and concepts that will lead to current resources. A search for "MITRE ATLAS" or "NIST AI RMF" will find the authoritative current version, which is more valuable than a link that might be stale by the time you read this.
What matters more than any specific resource is the discipline of continuous learning. AI security is a young field. The practitioners who will secure AI effectively are those who keep learning, stay skeptical, and adapt their mental models as the technology and threat landscape evolve.