AI Acceptable Use Policy

Version: 0.2
Date: March 2026
Author: Nick Brueggeman (AI Operations)
Review Cycle: Quarterly
Next Review: June 2026
TL;DR

We are an AI company. Using AI tools is expected. The default answer is yes.

One hard rule: never put patient data into a tool without a signed Business Associate Agreement (BAA). Everything else is good judgment; use whatever helps you move fast.

1. Purpose and Philosophy

We build with AI and ship with AI. This policy answers one question: what data can go where?

We operate "Default Open." Any AI tool is permitted for non-PHI work. No approval process, no waiting. The one non-negotiable: PHI only goes into tools covered by a signed BAA.

Company IP flowing through AI tools is an accepted risk. Leadership has chosen speed over lockdown.

🤝
Collaborate, don't duplicate. Where possible, we unify on shared company accounts so teams work in the same projects, build on each other's progress, and maintain shared context. Individual accounts create silos. Shared accounts create leverage.

2. Data Classification

PHI (Protected Health Information)

Any data that identifies a patient and relates to their health, treatment, or payment: names, DOBs, MRNs, diagnosis codes, clinical notes, claims data, or any combination that could identify an individual.

Hard rule: PHI only goes into Tier 1 (BAA-covered) tools. If it came from a patient record, it's PHI.
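The "if it came from a patient record" test can be partially automated. Below is a minimal sketch, not an official company tool: a regex pre-flight that flags text containing obvious PHI markers before it is pasted into a non-BAA tool. The pattern names and formats are hypothetical (real MRN and identifier formats vary by system), and a regex check is a convenience guardrail, never a substitute for the hard rule.

```python
import re

# Hypothetical patterns for obvious PHI markers; real formats vary by source system.
PHI_PATTERNS = {
    "mrn": re.compile(r"\bMRN[:\s#]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\bDOB[:\s]*\d{1,2}/\d{1,2}/\d{4}\b", re.IGNORECASE),
    "icd10": re.compile(r"\b[A-TV-Z]\d{2}(?:\.\d{1,4})?\b"),  # diagnosis-code shape
}

def phi_flags(text: str) -> list[str]:
    """Return the names of PHI patterns found in text (empty list = no obvious hits)."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]
```

A non-empty result means stop: the text stays out of any tool without a BAA. An empty result is not clearance; it only means nothing obvious was spotted.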

Sensitive Business Data

Financials, M&A strategy, employee PII, salary data, legal docs, investor materials. Prefer company-managed accounts. No hard block, but think twice before putting the cap table into a free chatbot.

General Business Data

Code, docs, research, specs, ideation, marketing, presentations, emails. Fully open. Use anything.

Data Residency & Provider Awareness

Be intentional about what data goes into any external service. Not all AI providers are equal when it comes to where your data ends up and what legal protections apply.

⚠️ Non-US Jurisdiction Providers

Cloud-hosted models under non-US jurisdiction deserve extra scrutiny. If a provider operates under a foreign government's data access laws, you have limited or no legal recourse for how your data is handled.

This applies specifically to cloud-hosted models under Chinese jurisdiction (e.g., DeepSeek cloud, Qwen cloud), where data is subject to China's national security laws with no US legal recourse.

✅ Local Models Are Fine

Running open-source models locally (DeepSeek, Qwen, Llama, etc. via Ollama, LM Studio, or self-hosted) is perfectly fine. Your data never leaves your machine. No data residency concern applies.

Extend general caution to all external providers when working with sensitive internal data, client information, pricing, or deal terms. This is a data residency awareness principle, not a blacklist. Match the sensitivity of the data to the trust level of the destination.

3. The One Rule: PHI

Never input PHI into any AI tool unless a signed Business Associate Agreement (BAA) is in place between the vendor and Penguin AI.

This is a HIPAA requirement. No workaround, no exception, no "it's just a test." Violating it creates regulatory exposure for the company and privacy risk for our patients.

Note: Most employees will never need Tier 1 access because their work does not involve PHI. The list of BAA-covered tools will be published once confirmed. If your role requires handling PHI, contact [email protected]. If you're unsure whether something is PHI, default to keeping it out and ask.

4. Approved Tool Tiers

Nothing is blocked. Tiers define what data you can put into a given tool.

Tier 1 (BAA-Covered): cleared for all data, including PHI.
  • AWS (BAA on file)
If your role requires PHI access, contact [email protected].
Tier 2 (Company-Managed): preferred for sensitive business data.
  • Lovable (company team account)
  • Genspark (company account)
  • Fireflies (company account)
This is where collaboration happens. Shared accounts give teams visibility into each other's work and a single source of truth. Use them when available.
Tier 3 (Personal & Free Tools): approved for general use.
  • ChatGPT, Claude, Gemini, Perplexity, Grok, Mistral (personal accounts)
  • Any other publicly available AI tool
  • Local models (Ollama, LM Studio, etc.)
Not cleared for PHI or sensitive business data. Everything else, use freely.
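The tier rules above reduce to a small lookup table. A sketch of that mapping (the tier numbers and classification names are shorthand for illustration, not an official registry):

```python
# Data classifications from Section 2, least to most sensitive.
GENERAL, SENSITIVE, PHI = "general", "sensitive", "phi"

# What each tool tier is cleared for, per the policy above.
TIER_CLEARANCE = {
    1: {GENERAL, SENSITIVE, PHI},  # BAA-covered: all data, including PHI
    2: {GENERAL, SENSITIVE},       # company-managed: sensitive business data OK
    3: {GENERAL},                  # personal/free tools: general business data only
}

def allowed(tier: int, classification: str) -> bool:
    """True if data of this classification may go into a tool of this tier."""
    return classification in TIER_CLEARANCE[tier]
```

Note the asymmetry the policy intends: a higher tier always clears everything a lower tier does, so when in doubt, move the work to a higher tier rather than the data to a lower one.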

5. AI Tool Budget

Every employee gets a monthly budget for AI and productivity tools, charged to the company card (RAMP).

  • Amount: $200/month per employee, for AI and productivity tools. No rollover; use it or lose it.
  • Suggested split: ~$100 on Claude Pro, with the remainder on role-specific tools.
  • Shared plans: Lovable, Genspark, and Fireflies don't count against your $200.
  • Covers: AI assistants, coding tools, and productivity & sales tools (e.g., Apollo).
  • Charged to: company card (RAMP), not personal expense.
💡
Spend smart, not max. This budget exists to make you faster, not to give you toys. Spend as little as you need, not as much as you can. A $20/month tool that saves you 5 hours a week is a great investment. A $50/month tool collecting dust should be cancelled. Use shared company tools first, then fill gaps with your personal budget.

6. Account Guidelines

  • Use company-managed accounts when they exist. They provide audit trails and centralized offboarding.
  • Personal accounts are fine for non-PHI, non-sensitive work.
  • Don't bypass SSO. If a tool has company SSO, use it. Don't create separate personal accounts.
  • Don't share credentials. Provision access through IT, not by sharing logins.

7. What You Must NOT Do

  • Do not input PHI into any tool without a signed BAA.
  • Do not disable security features or audit logging on company-managed tools.
  • Do not act on AI outputs as final clinical decisions without human review.
  • Do not share company-managed account credentials.

8. Compliance Alignment

HIPAA

BAA requirement for any vendor handling PHI. Minimum necessary principle applies: only share PHI actually needed for the task, even in Tier 1 tools.

SOC 2

Company-managed accounts provide the audit trail and vendor management controls required for Type II evidence. The AI Tool Inventory serves as part of our vendor management program.

HITRUST CSF

  • AI Tool Inventory: Maintained list of tools in use, data classifications, BAA status, and risk tier.
  • Annual BIA: AI systems touching PHI or critical processes are included in the annual Business Impact Analysis.
  • Risk Documentation: This policy + tool inventory + incident logs = HITRUST AI risk management documentation.

9. Reporting and Questions

Policy or tool questions

Nick Brueggeman, AI Operations: [email protected]

Potential PHI exposure

Contact Security immediately at [email protected]. Do not wait.

This policy is a living document. If something creates unnecessary friction, say so.

10. Version and Review

  • 0.1 (March 2026, Nick Brueggeman): Initial draft for ops review.
  • 0.2 (March 2026, Nick Brueggeman): Added tool budget, data residency guidance, corrected tool tiers.

This policy is reviewed quarterly or when significant changes occur: new tools adopted company-wide, regulatory updates, or incidents that surface a gap.

Next scheduled review: June 2026