What we use AI for
We use a large language model (LLM) for one specific job: turning the deterministic findings of our analysis pipeline into a calm, plain-English explanation that an older adult can understand.
The LLM does not:
- Compute the risk score (that's a weighted sum of deterministic signals)
- Decide whether something is a scam on its own
- Have memory of previous checks you submitted
- Train or learn from your data
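For the technically curious, the deterministic risk score works roughly like the sketch below. The signal names, weights, and threshold are illustrative placeholders, not our production values.

```python
# Simplified sketch of a deterministic weighted-sum risk score.
# Signal names and weights here are hypothetical, not production values.
SIGNAL_WEIGHTS = {
    "urgency_language": 0.25,   # e.g. "act now", "your account will be closed"
    "payment_request": 0.35,    # gift cards, wire transfer, cryptocurrency
    "suspicious_link": 0.25,    # lookalike or newly registered domain
    "impersonation": 0.15,      # claims to be a bank, a government agency, etc.
}

def risk_score(signals: dict) -> float:
    """Weighted sum of boolean findings; returns a score between 0 and 1."""
    return sum(weight for name, weight in SIGNAL_WEIGHTS.items()
               if signals.get(name))

# A message with urgency language and a payment request scores 0.25 + 0.35.
score = risk_score({"urgency_language": True, "payment_request": True})
```

The point of the design is that this number comes from fixed rules we can audit, not from the language model.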
The LLM does:
- Read the text you submitted and the deterministic findings
- Pick the appropriate verdict label from a fixed list of four labels
- Write a 2–3 sentence explanation citing the specific findings
- Write a recommended action
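Because the verdict list is fixed, we can reject any label the model invents. The check looks roughly like this; only "Likely scam" appears later on this page, so the other three labels below are hypothetical stand-ins, not our real list.

```python
# Hypothetical stand-in for the fixed verdict list. "Likely scam" is real;
# the other three labels are illustrative guesses.
ALLOWED_VERDICTS = {
    "Likely scam",
    "Suspicious",
    "Needs caution",
    "No obvious red flags found",
}

def validate_verdict(label: str) -> str:
    """Reject any verdict the LLM produces that is not on the fixed list."""
    if label not in ALLOWED_VERDICTS:
        raise ValueError(f"Verdict outside the fixed list: {label!r}")
    return label
```

If validation fails, the check errors out rather than showing you a made-up label.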
Which model
We currently use Anthropic's Claude family of models, accessed through the LiteLLM library so we can swap providers if needed. By default we use a fast model for low-risk cases and a more capable model for ambiguous or high-risk cases.
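The routing between the fast and the capable model is a simple deterministic decision, sketched below. The model identifiers and the 0.5 threshold are placeholders; the real values live in our configuration.

```python
# Sketch of two-tier model routing. Model names and the threshold are
# hypothetical placeholders, not our production configuration.
FAST_MODEL = "anthropic/fast-model-placeholder"
CAPABLE_MODEL = "anthropic/capable-model-placeholder"

def pick_model(risk_score: float, ambiguous: bool) -> str:
    """Route low-risk, unambiguous checks to the fast model;
    everything else goes to the more capable one."""
    if risk_score < 0.5 and not ambiguous:
        return FAST_MODEL
    return CAPABLE_MODEL

# The chosen model is then queried once through LiteLLM, e.g.:
# response = litellm.completion(model=pick_model(score, ambiguous),
#                               messages=[...])
```

Only the one selected model is ever called for a given check.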
We do not send your data to multiple model providers simultaneously. Only one provider is queried per check.
What the AI does not do
- It does not give legal, financial, or medical advice.
- It does not predict the future. A "Likely scam" verdict is our best assessment based on the signals available; it is not a guarantee.
- It does not contact anyone on your behalf. It cannot send messages, make calls, or transfer money.
- It does not access the internet on your behalf. It only sees the text and the findings we hand it.
What we never let the AI say
Our prompt and our output rules forbid the words "safe" and "legitimate" in any verdict. These words give a sense of certainty no AI can earn. We use phrases like "no obvious red flags found" instead.
We always include a "What we did NOT check" line so you know the limits of our assessment.
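The banned-word rule is enforced in code, not just in the prompt. A minimal sketch of that check, using word boundaries so that, for example, "unsafe" is not flagged:

```python
import re

# Words forbidden in any verdict, per our output rules. The word-boundary
# pattern means "safe" is caught but "unsafe" is not.
BANNED = re.compile(r"\b(safe|legitimate)\b", re.IGNORECASE)

def check_wording(text: str) -> str:
    """Raise if the model's output uses certainty words we never allow."""
    match = BANNED.search(text)
    if match:
        raise ValueError(f"Forbidden certainty word in output: {match.group()!r}")
    return text
```

An output that fails this check is discarded and regenerated or errored, never shown to you.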
Cost and limits
Each check has a cost. We cap our daily LLM spend with a hard limit. If we hit it, we return an error rather than silently degrade the verdict — we do not want to give you a worse answer without telling you.
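The "fail loudly instead of degrading" behavior can be sketched as a hard cap check. The dollar figures below are illustrative only:

```python
# Sketch of a hard daily spend cap. The limit and costs are illustrative,
# not our actual budget.
DAILY_LIMIT_USD = 50.00

class BudgetExceeded(Exception):
    """Raised instead of silently downgrading the verdict quality."""

def charge(spent_today: float, check_cost: float) -> float:
    """Add one check's cost to today's total, or fail loudly at the cap."""
    if spent_today + check_cost > DAILY_LIMIT_USD:
        raise BudgetExceeded("Daily LLM budget reached; please try again later.")
    return spent_today + check_cost
```

When the exception is raised, you see an explicit error page, not a quietly cheaper answer.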
Errors and feedback
The AI sometimes makes mistakes. If a verdict seems wrong to you, please use the feedback button on the verdict page (👍 / 👎) so we can review it and improve.
Updates to this statement
If we change models, providers, or how we use AI, we will update this page and announce material changes by email.