Claude
Non-EU AI assistant from Anthropic (US) for analysis, coding, and writing. Free tier available, Pro from $20/mo. Trains on consumer chats by default.
- Consumer conversations used for AI training by default since September 2025
- All consumer data stored and processed on US servers
- No EU data residency for Free, Pro, or Max plans
- Subject to US surveillance laws, including FISA and the CLOUD Act
- Opt-out UI criticized for making the training toggle easy to miss
- Data retained for up to 5 years when training is enabled
Claude is Anthropic's AI assistant. The company was founded in 2021 in San Francisco by former OpenAI researchers. Claude can work through long documents (context window of up to 200,000 tokens), write and debug code, read images, and generate text. Free users get Sonnet 4.5 with daily message limits. The Pro plan at $20/month unlocks all models, including Opus 4.5.
Anthropic talks a lot about AI safety, and Claude is generally good at following instructions without going off the rails. The company has SOC 2 Type II and ISO 27001 certifications. Your data is encrypted both in transit and at rest. Anthropic says it does not sell user data.
For people in the EU, things get tricky. Consumer account data sits on US servers, and there is no EU residency option for individual users. In September 2025, Anthropic changed its policy so that Free, Pro, and Max conversations feed into model training by default. You can turn this off in settings, but the opt-out toggle was tucked under a large Accept button, and privacy researchers at Stanford flagged the design as misleading. If you opt out, data retention drops from 5 years to 30 days.
Worth knowing
Team and Enterprise plans are a different story. Those accounts are never used for training, come with admin controls, and can be configured for HIPAA compliance. EU data residency is possible through AWS Bedrock and Google Vertex AI, but only for API and enterprise setups. Use Claude through the website or mobile app and your data still goes to the US, no matter what you pay. Anthropic does offer a Data Processing Agreement with Standard Contractual Clauses, though that mostly matters for business customers.
Mistral Le Chat
Get AI chat, research, and image generation from a Paris-based company that keeps your data in the EU.

by Switch-to.eu