Rachel James, AbbVie: Harnessing AI for corporate cybersecurity
In the last few years the cybersecurity landscape has changed as fast as the technology powering it. Defenders and attackers alike are racing to harness artificial intelligence—and while AI multiplies the scale, speed, and subtlety of attacks, it also gives defenders a new set of tools to automate detection, triage, and response. At the intersection of this arms race sits Rachel James, Principal AI/ML Threat Intelligence Engineer at AbbVie, whose work brings together applied machine learning, threat intelligence, and cross-industry collaboration to protect one of the world’s largest biopharmaceutical companies. Her path and practical perspective offer useful lessons for any enterprise trying to put AI to work for security without creating new, emergent risks. artificialintelligence-news.comhealth-isac.org
Why AbbVie’s cybersecurity problems matter
AbbVie operates in a high-stakes sector. Pharmaceutical companies are attractive targets: intellectual property, clinical trial data, patient information, supply-chain details, and regulated systems all make life sciences firms high-value targets for espionage, ransomware, and destructive attacks. A successful cyberattack can delay drug development, expose sensitive data, disrupt manufacturing or logistics, and cause real-world patient harm. For that reason, cybersecurity at a biopharma company isn’t an IT-only concern — it is a business continuity, regulatory, and patient-safety imperative.
That context shapes the job Rachel James holds: building security capabilities that are robust, auditable, privacy-aware, and able to operate in complex environments that include legacy systems, cloud platforms, and regulated workloads. Her title—Principal AI/ML Threat Intelligence Engineer—captures a rare blend of responsibilities: lead applied machine learning to detect and contextualize threats while also fostering threat-sharing and standards work across the health sector. health-isac.org
Rachel James — a bridge between AI and threat intelligence
Rachel James is not only an enterprise practitioner; she has been active in sector-wide collaborations and standards efforts. In late 2024 she received Health-ISAC’s Steve Katz Hero Award (the organization’s renamed annual “Hero” award), recognising her leadership in threat sharing and education across the health sector. She has chaired working groups, helped author practical guidance (notably the “CTI in a Box” whitepaper for Cyber Threat Intelligence Program Development), and led workshops that translate AI concepts into operational security practices. These activities demonstrate a crucial point: deploying AI for security does not happen inside a vacuum. It requires shared playbooks, community validation, and operational governance. health-isac.org
Her expertise has also been sought on the conference circuit—she’s listed as a speaker at major industry events focused on AI, big data, and cybersecurity—signaling that her work combines technical depth with the ability to explain operational impact to industry audiences. That combination—hands-on engineering plus cross-industry education—helps make her perspective especially practical. ai-expo.nettechexevent.com
From theory to practice: how AbbVie uses AI in security
The headlines often focus on flashy research models or sensational attacks, but Rachel’s approach is applied and pragmatic. The common patterns she and her peers emphasize are:
1. Use AI to augment, not replace, human analysts.
AI scales detection and triage, surfacing the most probable incidents for human review. In regulated industries, humans remain essential for context, policy decisions, and verification. Automated scoring and natural language summaries can massively reduce analyst fatigue while keeping human judgment in the loop.
2. Focus on high-value signals and enrichment.
Instead of throwing models at every log source, the highest ROI comes from models that enrich signals (e.g., augmenting IP/domain reputation with predictive risk scoring, clustering attacker activity across telemetry, or using embeddings to surface similar tactics from previous incidents). Threat intelligence combined with ML-driven enrichment makes alerts more actionable.
3. Prioritize explainability and auditability.
Regulated environments require that decisions—especially automated ones—be traceable. Models used for detection must be instrumented so analysts can explain why an alert was raised, how a risk score was calculated, and what data influenced an outcome. This is non-negotiable for incident response playbooks, compliance, and executive reporting.
4. Build reusable, modular pipelines.
Data pipelines that feed models should be standardized and modular so that new models or feature sets can be introduced without re-engineering whole platforms. This supports experimentation and continuous improvement while reducing implementation risk.
These pragmatic patterns echo the broader industry shift: AI in cybersecurity succeeds when product, data science, and security operations teams cooperate around measurable use cases rather than chasing novelty. Rachel’s recent public comments and the interview published by AI News highlight precisely this grounded, cross-functional approach. artificialintelligence-news.com
Concrete initiatives: CTI in a Box and prompt-injection leadership
One of Rachel’s practical contributions is leadership in resources that scale capability across organizations. The “CTI in a Box” whitepaper—developed under the Cyber Threat Intelligence Program Development working group at Health-ISAC—aims to give organizations a clear blueprint to establish or mature threat intelligence capabilities. This is the sort of artifact that helps smaller teams avoid repeating mistakes and provides a tested set of operational primitives to integrate with detection and response. The whitepaper covers program structure, playbooks, data sources, and metrics—components essential to making machine learning useful and reliable in production. health-isac.org
Rachel is also documented as leading the Prompt Injection entry for the OWASP Top 10 for Large Language Model Applications and Generative AI. Prompt injection is a class of attack unique to models that accept free text prompts; it can cause models to reveal secrets, bypass filters, or take actions they shouldn’t. Her involvement here is critical because health-sector systems increasingly adopt LLMs for note summarization, triage automation, and other assistive tasks—making guidance on prompt-injection mitigations a business-critical requirement. That kind of standards work reduces systemic risk across many organizations simultaneously. health-isac.org
A layered, defense-centric architecture for AI-powered security
Rachel’s operational recommendations fall into a multi-layer strategy that any enterprise can adapt:
1) Data hygiene and provenance as the foundation
AI is only as good as the data it consumes. Ensuring timestamps, canonical identities, and provenance metadata across telemetry prevents garbage-in/garbage-out and supports reproducible investigations.
2) Feature engineering with domain context
Feature sets should represent attacker behaviors (e.g., lateral movement patterns, suspicious process ancestry, or exfiltration indicators) rather than only superficial signal statistics. Embeddings and graph-based features often capture the relational context essential for threat detection.
3) Model governance and testing
Implement model approval gates, performance baselines (precision/recall targets), drift detection, and adversarial testing (including fuzzing and red-team exercises). This ensures models don’t silently degrade or become exploitable.
4) Explainability and human-in-the-loop controls
Provide confidence scores, top contributing features, and counterfactual explanations for alerts. Where automation acts (for example, auto-quarantine), place human approval gates for high-impact actions.
5) Threat sharing and community validation
Feed anonymized telemetry and indicators into peer communities and ISACs (information-sharing organizations) to improve detection across the sector and to benefit from community-curated indicators and playbooks. Rachel’s Health-ISAC work exemplifies this model. health-isac.org
Collectively, these principles produce systems that are not only faster but safer and more interoperable across vendors and partners.
Use cases where AI delivers clear value
Rachel and other practitioners often prioritize use cases that are measurable and repeatable. Examples include:
-
Phishing triage and prioritization. ML models can rapidly score incoming messages on risk, flagging those that warrant immediate response. Augmenting with URL sandbox results and ATS (anomaly/actor) features makes triage more precise.
-
Anomaly detection across identity graphs. Models that profile normal user-to-resource access patterns can flag account compromise or insider threats earlier than static rules.
-
Malware family clustering and attribution. Unsupervised or semi-supervised models help cluster similar samples and link campaigns, reducing analyst time.
-
Automated enrichment of alerts with threat context. Using embeddings and knowledge graphs to surface related incidents, relevant TTPs (tactics, techniques, procedures), and historical IOC overlap.
-
Document and code scanning for sensitive leakage. LLMs and classifiers can help detect inadvertent sharing of credentials or PHI (protected health information) in inappropriate repositories or notes.
These are the domains where AbbVie’s approach—balancing automation with strict governance—yields rapid returns while controlling risk. The key is keeping the scope tight and measuring outcomes: time-to-detect, analyst time saved, false positive rates reduced, and mean time to remediate. artificialintelligence-news.com
Governance, privacy, and regulatory considerations
Health sector organizations must juggle security and privacy in ways that other sectors may not. AI systems that process clinical notes or patient data must be designed with privacy-preserving architectures:
-
Data minimization and tokenization before feeding models. Keep PHI out of model training unless explicitly authorized and governed.
-
Differential privacy or synthetic data when models require patterns but not real patient records.
-
Robust access controls and logging on model inference to enforce least privilege and auditability.
-
Regulatory alignment—for example, documenting model uses for auditors and maintaining traceability for decisions that impact patients or drug manufacturing.
Rachel’s emphasis on auditability and explainability ties back to these requirements. In regulated environments, security automation must itself be auditable and defensible to regulators and internal compliance teams—otherwise it becomes a liability rather than an advantage. Health-ISAC’s work and the broader sector engagement help define practical expectations and cross-organization accountability. health-isac.org
Managing the attacker’s AI — red teaming and adversarial testing
As defenders deploy AI, attackers will increasingly weaponize AI as well. Rachel advocates for adversarial testing—both traditional red-teaming and AI-centred adversarial exercises:
-
Prompt-based attack simulations for LLMs: intentionally craft prompts that try to exfiltrate secrets, bypass filters, or cause unsafe behaviour.
-
Adversarial inputs for classifiers: small perturbations to telemetry or noise injection to test model robustness.
-
Automated spear-phishing generation to test detection systems and user awareness programs.
-
Model poisoning and data integrity tests: simulate compromised training data or supply-chain tampering to validate detection and retraining workflows.
These exercises are not theoretical; Rachel’s work with industry groups and standards bodies—particularly her OWASP prompt-injection leadership—helps organizations understand the unique threat models for generative AI and create concrete mitigations. Proactive adversarial testing turns what could be a systemic vulnerability into a managed risk. health-isac.org
Cross-functional culture: the people side of AI security
Technology alone won’t secure an enterprise. Rachel’s approach highlights cultural and organizational tactics:
-
Embedded security liaisons in product and engineering teams to ensure security is shaped into designs, not retrofitted.
-
Training analysts on model mechanics so operators can interpret model outputs correctly.
-
Playbook exercises where security, legal, compliance, and business owners rehearse incident scenarios that involve AI systems.
-
Transparent communication with executives: metrics that business leaders care about (downtime risk, compliance exposure, potential patient impact) to keep security investment aligned with corporate priorities.
She has built and led workshops that translate these concepts to practitioners—bringing theory into the daily operations of security teams and emphasizing that AI maturity is as much about people and process as it is about model accuracy. artificialintelligence-news.comhealth-isac.org
Tools, vendors, and open-source: a pragmatic mix
Rachel’s public commentary suggests a pragmatic, vendor-agnostic stance: use cloud and vendor capabilities where they accelerate value, but wrap them in your own governance and telemetry. Best practices include:
-
Instrument vendor tools for telemetry export so they can feed central detection engines and analytics.
-
Leverage open-source for transparency when possible (for example, community models for certain tasks) while acknowledging tradeoffs in maintenance and security.
-
Treat third-party models as supply-chain inputs: vet their provenance, run adversarial tests, and monitor for drift or manifest changes.
This hybrid view recognizes that no single vendor will cover every use case, and that architectural guardrails—logging, access control, canary models—must sit above any third-party capability. It’s a design pattern that scales across regulated enterprises. artificialintelligence-news.com
Industry collaboration: why Health-ISAC and standards matter
Rachel’s Health-ISAC award and her working-group leadership reflect a fundamental truth: cyber defense improves when organizations share indicators, playbooks, and lessons. For the health sector this is especially true because attackers often target supply chains and sector-wide dependencies. Industry bodies and ISACs provide:
-
Validated intelligence from peers who have seen live incursions.
-
Playbooks and whitepapers that accelerate program maturity (e.g., “CTI in a Box”).
-
Standards and consensus around emerging risks like prompt injection and model poisoning.
Rachel’s contributions show that individual companies—no matter how capable—benefit from coordinated defense. Standardized guidance helps smaller organizations implement safe defaults and gives larger organizations a forum to operationalize defensive advances. health-isac.org
The future: responsible AI, continuous validation, and resilience
Looking ahead, Rachel James’ work points toward a few durable trends for enterprises that want to use AI safely in security:
-
Continuous validation over one-time certification. Models and their data evolve; validation must be continuous, automated where possible, and embedded into deployment pipelines.
-
Hybrid human-AI workflows. The best systems combine rapid machine triage with human context, especially in sensitive sectors.
-
Sector-level playbooks for emergent risks. New attack classes (prompt injection, model poisoning) require shared mitigations and rapid dissemination.
-
Privacy-first model design. More organizations will prefer synthetic or differentially private approaches when sensitive data is involved.
-
Regulatory and audit readiness. As regulators focus on AI, security teams will need standardized artifacts and traceability to demonstrate responsible deployment.
Rachel’s emphasis on community, explainability, and practical playbooks is directly aligned with these trends—her work shows how to reach for future capability while keeping current compliance and safety obligations front and centre. artificialintelligence-news.comhealth-isac.org
Takeaways for security leaders
If you lead security or are responsible for AI adoption in a regulated enterprise, the practical lessons from Rachel James’ approach are clear:
-
Start with business-impact use cases. Choose detection or automation tasks with measurable ROI and manageable risk.
-
Instrument everything. Telemetry, data lineage, and model inputs must be recorded for reproducibility and audits.
-
Govern models like code. CI/CD for models, drift monitoring, and approval gates prevent surprises in production.
-
Invest in explainability. Analysts and auditors must understand model outputs—not treat them as opaque black boxes.
-
Engage the community. Join ISACs, contribute to playbooks, and adopt proven sector guidance.
-
Practice adversarial testing. Simulate attacks on models and pipelines to find weaknesses before adversaries do.
-
Prioritize workforce readiness. Upskill analysts on ML concepts, and embed security expertise into product teams.
These are not theoretical best practices; they are the steps that have enabled Rachel and teams like hers at AbbVie to deploy AI responsibly, strengthening detection while keeping governance intact. artificialintelligence-news.comhealth-isac.org
Conclusion
Rachel James’ career at AbbVie and her sector contributions illustrate how AI can be turned from a risky novelty into a force multiplier for corporate cybersecurity—if done with discipline, transparency, and community collaboration. Her combination of hands-on engineering (building ML pipelines and enrichment), standards-level leadership (CTI in a Box, OWASP prompt injection entry), and practical education (workshops and conference talks) provides a template for other enterprises aiming to extract value from AI without compounding systemic risk.
The lesson is pragmatic: AI will reshape both offense and defense. Organizations that invest in data hygiene, model governance, explainability, and community sharing will be best positioned to benefit while keeping patients, customers, and business operations secure. Rachel James’ work at AbbVie is a timely case study of that balancing act—a reminder that technical innovation and disciplined risk management must go hand-in-hand. artificialintelligence-news.comhealth-isac.orgai-expo.nettechexevent.comcrummer.rollins.edu
Sources & further reading (selected)
-
Rachel James, AbbVie: Harnessing AI for corporate cybersecurity — AI News (interview). artificialintelligence-news.com
-
Health-ISAC announcement: Rachel James received the Steve Katz Hero Award; includes CTI in a Box and leadership roles. health-isac.org
-
Rachel James — speaker profiles, AI & Big Data Expo / TechEx Events. ai-expo.nettechexevent.com
-
AI-EDGE/AI News repost of the interview (AI News publication date: August 22, 2025). crummer.rollins.edu
https://bitsofall.com/powell-fires-up-markets-but-some-investors-see-reason-for-caution/