How We Use AI
MITPO uses AI across research, strategy, competitor analysis, campaign planning, content generation, and workflow support. Different surfaces use different underlying models and routing logic, depending on the task, the quality target, and the cost profile that makes sense for that feature.
Where a task is sensitive (compliance-adjacent claims, medical or legal advice, financial recommendations), MITPO routes toward more conservative behavior, or declines outright, rather than offering confident answers that may be wrong.
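As an illustration of the idea, not MITPO's actual implementation, sensitivity-aware routing can be sketched like this. The category names, routing targets, and `route` function are hypothetical examples:

```python
# Illustrative sketch of sensitivity-aware routing.
# Categories and target names are hypothetical, not MITPO's real config.

SENSITIVE_CATEGORIES = {"medical", "legal", "financial", "compliance"}

def route(task_category: str, quality_target: str) -> str:
    """Pick a routing target for a task.

    Sensitive categories always go to a conservative profile that
    prefers hedging or declining over confident-but-unverified answers;
    everything else is routed on quality/cost trade-offs.
    """
    if task_category in SENSITIVE_CATEGORIES:
        return "conservative"   # hedged behavior, may decline
    if quality_target == "high":
        return "premium"        # stronger model, higher cost
    return "standard"           # cheaper default
```

The key property is that sensitivity wins over the quality target: a "high quality" medical question still lands on the conservative profile.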
What AI Cannot Do
AI output is probabilistic. It can be wrong, incomplete, outdated, biased, or overconfident — sometimes in ways that are hard to detect at a glance. MITPO does not present AI output as guaranteed fact. The product is designed to accelerate human judgment, not replace it.
- AI can and will make mistakes — verify anything material before acting on it.
- Training data has a cutoff. Claims about recent events (pricing, product launches, current regulations) should be confirmed against primary sources.
- AI does not always know what it does not know. Confidence and correctness are not the same thing.
Responsibility Model
Users are responsible for what they publish, send, or act on. This is not a disclaimer in small print — it shapes how MITPO is designed. The product does not auto-publish without user review by default, and surfaces that could send content to the world (social publishing, email, ads) require explicit action, not a passive timer.
- Review AI output before publishing, launching, or relying on it for material decisions.
- Check fact-sensitive claims (statistics, quotes, regulatory statements) against primary sources.
- High-stakes or regulated use cases still need human review. MITPO does not change that.
Guardrails and What We Restrict
Some outputs are either refused or filtered. The exact list changes over time as new abuse patterns emerge, but the shape is consistent: content that could harm users, targeted harassment, illegal activity, and impersonation of real individuals without consent are out of scope. The public demo has stricter limits than the authenticated product to reduce anonymous abuse.
Your Content and Model Training
MITPO does not train general-purpose foundation models on your content. Your brand documents, campaigns, competitor reports, and generated assets are used to deliver the product you asked for — not recycled into a model that other customers query.
Where a provider trains on input by default, MITPO enables the provider's no-training option when one exists. Providers that offer no such option are either excluded from sensitive surfaces or flagged in the Sub-processors list.
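The decision logic above can be sketched as a small policy check. This is an illustrative model only; the `Provider` fields and return values are assumptions, not MITPO's actual vendor tooling:

```python
# Illustrative sketch of a per-provider no-training policy check.
# Field names and return values are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    trains_on_input_by_default: bool
    has_no_training_option: bool

def configure(provider: Provider) -> dict:
    """Decide how a provider may be used under a no-training policy."""
    if not provider.trains_on_input_by_default:
        return {"usable": True, "note": "no training by default"}
    if provider.has_no_training_option:
        return {"usable": True, "note": "no-training option enabled"}
    # No opt-out available: keep it off sensitive surfaces and flag it.
    return {"usable": False, "note": "flag in sub-processors list"}
```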
Transparency About Models
The Creative Studio model picker shows the provider for every image, video, and audio model so users can choose intentionally. For chat and assistant flows, MITPO routes between multiple providers based on availability and quality. The assistant does not surface the specific underlying chat model on demand; model selection is part of the routing layer rather than user-facing configuration. When a routing fallback is taken (primary provider unavailable, input rejected), the response includes that signal so users can decide whether to retry.
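A fallback signal of this kind might look like the following. The function and field names are hypothetical, not MITPO's actual response schema:

```python
# Illustrative sketch of attaching a fallback signal to a response.
# Field names are hypothetical, not a real API schema.

from typing import Optional

def respond(primary_ok: bool, text: str,
            fallback_provider: Optional[str] = None) -> dict:
    """Wrap assistant output, flagging any routing fallback taken."""
    response = {"text": text}
    if not primary_ok:
        # Surface the fallback so the user can decide whether to retry.
        response["fallback"] = {
            "used": True,
            "provider": fallback_provider,
            "reason": "primary provider unavailable or input rejected",
        }
    return response
```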
Reporting Problems with AI Output
If an AI response produced something harmful, defamatory, or clearly wrong in a way that matters — report it. The in-app feedback surface captures the response, the model, and the prompt so we can investigate. Severe issues can be escalated to support@mitpo.com.
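The report described above bundles three things: the response, the model, and the prompt. A minimal sketch of such a payload, with entirely hypothetical field names, might be:

```python
# Illustrative sketch of a "report this response" payload.
# Field names are hypothetical, not MITPO's actual feedback format.

import json
from datetime import datetime, timezone

def build_report(prompt: str, response: str, model: str, reason: str) -> str:
    """Bundle the context needed to investigate a bad AI response."""
    return json.dumps({
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "model": model,        # which model produced the output
        "prompt": prompt,      # what the user asked
        "response": response,  # what came back
        "reason": reason,      # e.g. "harmful", "defamatory", "wrong"
    })
```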