Not all AI tools carry the same risk to your data. This framework is specific to the threat model facing tribal nations: federal government access to community data. Where your query goes, and who can legally compel access to it, determines the risk tier. This is a sovereignty argument, not a capability argument.
The tiers -- highest to lowest data risk

- Tier 1: Free cloud models
- Tier 2: Paid cloud models
- Tier 3: Enterprise tools with contractual guardrails (e.g., Copilot Enterprise)
- Tier 4: Tribally-owned and tribally-controlled server infrastructure
- Tier 5: Local models on hardware you own
- Tier 6: Air-gapped local models
What each tier means
Tiers 1 and 2: Both are subject to US surveillance law including FISA, which means the federal government can compel access without your knowledge. Free models typically also use inputs for training. The distinction between free and paid matters for privacy policy, but not for legal compellability.
Tier 3: Guardrailed tools like Copilot Enterprise may not train on your data, and enterprise contracts can include data residency terms. The protection is contractual, not technical -- it depends entirely on what your agreement says and whether it is enforced.
Tier 4: Running AI on tribally-owned and tribally-controlled server infrastructure removes the commercial vendor from the chain. It still requires network security discipline and the technical capacity to manage it.
Tiers 5 and 6: Local models running on hardware you own are the clearest sovereignty boundary. Queries never leave your machine. Tier 6 (air-gapped) eliminates even OS-level outbound traffic. The tradeoff is operational friction and the absence of cloud-scale capability -- which is often not the capability your teams actually need.
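The "queries never leave your machine" boundary can be spot-checked in software. Below is a minimal sketch of one such guardrail; the helper name `is_local_endpoint` is ours, not from any library. It refuses to treat a model endpoint as local unless every address it resolves to is a loopback address.

```python
import ipaddress
import socket

def is_local_endpoint(host: str) -> bool:
    """True only if every address the host resolves to is loopback.

    A coarse Tier 5 check: a query sent to a loopback-only endpoint
    stays on this machine. It does not cover proxies or OS-level
    telemetry -- removing those is what Tier 6 (air-gapping) is for.
    """
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False  # treat unresolvable hosts as non-local
    return all(
        ipaddress.ip_address(info[4][0]).is_loopback for info in infos
    )

# One way to use it: gate every inference call on the check.
# if not is_local_endpoint(model_host):
#     raise RuntimeError("refusing to send data off-machine")
```

This is a sketch, not a guarantee: it checks where a hostname points, not what the operating system or other processes send elsewhere.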
A note on non-US models: Models hosted outside US jurisdiction (EU, Canada, etc.) avoid US surveillance law but introduce different threat models. That tradeoff is worth understanding, but it is a separate analysis from this framework.
The five questions
- Where is the query going? Local machine or a cloud server you do not control?
- Who owns the output? Check terms of service -- most cloud AI providers claim training rights on inputs.
- What data are you including? Consider sensitivity: personal information, location, legally significant details, culturally restricted material.
- Is AI the right tool? It should not replace expert review for regulatory, legal, or high-stakes decisions.
- Can you explain what it did? You are responsible for the output. "The AI told me" is not a defensible answer.
This framework is a starting point for evaluating tools, not a compliance checklist. The right tier for a given task depends on the sensitivity of the data, the nature of the decision, and the governance capacity your organization has in place.