Privacy is architecture, not policy
Most civic platforms promise privacy in their terms of service while their architecture does the opposite. We built privacy into the system itself — it's not a policy we follow, it's a constraint we can't violate.
The problem with civic technology
When you use most civic technology platforms, you're creating a detailed record of your political interests. Which legislation you research. Which representatives you contact. Which meetings you attend. Which petitions you care about.
That data — what you care about politically — is some of the most sensitive information about you. And on most platforms, it's collected, stored, and often shared with third parties. Even platforms built with good intentions typically rely on external AI services that receive every query you make.
Your civic research should be as private as your vote.
How we protect you
Privacy protections built into the architecture, not bolted on after the fact.
Local AI processing
Every AI operation — document analysis, petition scanning, search — runs on your community's own infrastructure. Your queries never leave your network.
Location privacy
Location data is deliberately fuzzed — precise coordinates are never stored. The system only resolves to jurisdiction level, ensuring your physical location remains private while connecting you with the right civic information.
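As a sketch of how jurisdiction-level resolution might work: coordinates are coarsened before anything is stored, and only the coarse cell is used for lookup. The function names and the boundary table below are illustrative, not the platform's actual API.

```python
def fuzz_coordinate(lat: float, lon: float, precision: int = 1) -> tuple[float, float]:
    """Round coordinates so only a coarse grid cell survives.

    At precision=1 (one decimal place), a cell spans roughly 11 km of
    latitude: enough to place a user in a jurisdiction without
    pinpointing them.
    """
    return (round(lat, precision), round(lon, precision))


def resolve_jurisdiction(lat: float, lon: float) -> str:
    """Map a fuzzed coordinate to a jurisdiction label.

    A real system would query a boundary dataset; this fake lookup
    table keyed on the fuzzed cell keeps the sketch self-contained.
    """
    cell = fuzz_coordinate(lat, lon)
    fake_boundaries = {(40.7, -74.0): "New York, NY"}
    return fake_boundaries.get(cell, "unknown jurisdiction")


# The precise point is discarded; only the jurisdiction label is kept.
jurisdiction = resolve_jurisdiction(40.71280, -74.00602)
```

The key property is that the precise coordinate never exists past the rounding step, so there is nothing precise to store, leak, or subpoena.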
Data minimization
We collect only what's necessary to provide civic tools. No behavioral profiles, no interest tracking, no data enrichment from third parties.
Passwordless authentication
Sign in with passkeys and biometrics. No passwords to steal, no credentials stored on our servers, no phishing vectors.
Secure sessions
Short-lived sessions with automatic expiry. No persistent tracking across visits. When you close the browser, your session ends.
PII masking in logs
Server logs automatically redact personally identifiable information. Even system administrators can't accidentally see user details in diagnostic output.
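A sketch of what log-level PII masking can look like, assuming regex-based redaction applied before a line reaches the log sink. The patterns below cover only emails, US-style phone numbers, and IPv4 addresses; a production masker would cover more PII classes and live inside the logging layer itself.

```python
import re

# Illustrative redaction patterns, not the platform's actual rule set.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.]?\d{3}[-.]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[IP]"),
]


def mask_pii(line: str) -> str:
    """Redact PII from a log line before it is written anywhere."""
    for pattern, replacement in PATTERNS:
        line = pattern.sub(replacement, line)
    return line


masked = mask_pii("login ok for jane@example.org from 203.0.113.7")
# masked == "login ok for [EMAIL] from [IP]"
```

Masking at write time, rather than scrubbing logs afterwards, is what makes the "administrators can't accidentally see user details" claim hold: the unredacted line never exists on disk.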
Consent management
Full GDPR and CCPA compliance. You control your data, you choose what to share, and you can delete everything at any time.
What we never do
Explicit commitments, not vague assurances.
- Sell or share your data with anyone
- Track which legislation you research
- Profile your political interests or leanings
- Send your queries to external AI services
- Use analytics that identify individual users
- Retain data longer than necessary to provide the service
- Allow third-party trackers or advertising scripts
- Make decisions about what civic information to show you
These aren't aspirational goals. They're architectural constraints. The system is built so that most of these things are technically impossible — not just against policy.
AI and your data
How your documents and analysis results are handled during AI processing.
Your documents stay yours
Uploaded documents are processed and stored only for your use. They are never shared, sold, or used for model training. Delete them anytime — they're gone permanently.
Analysis is ephemeral
OCR text and analysis results exist only to serve you. You control retention. When you delete a document, the analysis goes with it.
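This retention model can be sketched as a cascading delete: analysis artifacts are keyed to their source document, so removing the document removes everything derived from it. The store names and shapes here are illustrative only.

```python
# Hypothetical in-memory stores standing in for document and
# analysis storage; keyed by document id.
documents: dict[str, bytes] = {}
analyses: dict[str, str] = {}  # doc_id -> OCR / analysis text


def store_document(doc_id: str, data: bytes, analysis: str) -> None:
    """Persist a document together with its derived analysis."""
    documents[doc_id] = data
    analyses[doc_id] = analysis


def delete_document(doc_id: str) -> None:
    """Deleting a document also deletes every derived artifact.

    The analysis is ephemeral by construction: it cannot outlive
    the document it was derived from.
    """
    documents.pop(doc_id, None)
    analyses.pop(doc_id, None)


store_document("pet-42", b"...", "OCR text of petition 42")
delete_document("pet-42")
assert "pet-42" not in documents and "pet-42" not in analyses
```

Tying the analysis lifecycle to the document lifecycle means there is no separate retention policy for derived data that could silently diverge from the user's deletion request.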
No profiling
We don't build user profiles, track political preferences, or analyze usage patterns. The AI treats every request independently — it has no memory of you.
Node isolation
In the federated model, your data stays on the node you used. No cross-node data sharing. A compromise of one node cannot expose another node's data.
Full data portability
Export your documents, analysis results, and account data at any time. Your civic research belongs to you — take it wherever you go.
Common questions about AI privacy
Does the AI remember my documents?
No. We use pre-trained models with no fine-tuning on user data. The AI processes your document, returns the analysis, and retains nothing. It has no memory between sessions.
Can the AI be biased by what I upload?
No. Analysis prompts are standardized across all users and all nodes. What you upload doesn't affect how the AI analyzes the next document — yours or anyone else's.
Who sees my analysis results?
Only you, unless you choose to share them. Analysis results are tied to your account on your node. No administrator, no other user, and no other node can access them.
What happens if a node is compromised?
Node isolation means only that node's data is affected. Other nodes in the network are unaffected. The compromised node's certification can be revoked, cutting it off from the network.
How we compare
Architecture-level privacy vs. policy-level promises.
| | Opus Populi | Typical civic platforms |
|---|---|---|
| AI processing | Runs on your infrastructure | Sent to third-party APIs |
| Location data | Imprecise, jurisdiction-level only | Precise GPS coordinates stored |
| Search history | Never stored or tracked | Logged and used for profiling |
| Authentication | Passwordless (passkeys) | Email/password with tracking cookies |
| Data ownership | You own and control everything | Platform owns your data |
| Third-party sharing | Zero third parties | Shared with advertisers and partners |
| Source code | Fully auditable (AGPL-3.0) | Proprietary, closed source |
| Logging | PII automatically masked | Full user data in server logs |
Don't take our word for it
Every privacy claim on this page is verifiable. The entire codebase is open source under AGPL-3.0. Read the code. Audit the architecture. Deploy it yourself.