Editorial policy
How AltAI ranks AI tools.
A directory is only useful if you trust how it sorts. This is the full ranking process — the criteria, the exclusions, the affiliate disclosure, and the failure modes we know about.
What we optimize for
AltAI ranks for the reader, not the advertiser. The goal on every "X alternatives" page is: if you clicked the top pick and signed up without reading further, you would not regret it a month later. That is the one test every ranking decision passes or fails.
Concretely, that means ranks are driven by — in order:
- Capability-for-price-paid. Does this tool do the job the category actually hires it for, at a price the target user can justify? A more expensive tool only ranks higher if the capability delta is real.
- Free-tier honesty. Tools with a genuinely useful free tier rank higher than tools with a "free trial that converts into a credit-card gate on day 15". Free means free.
- Track record. How long the tool has existed, whether pricing has been stable, and whether the product has been maintained long enough that abandonment risk is low.
- Alignment with the person searching. "ChatGPT alternatives" is a different intent than "free ChatGPT alternatives" or "Claude vs ChatGPT". Each page is ranked for the specific intent its URL implies.
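Because the criteria apply in strict priority order rather than as a blended score, the ranking behaves like a lexicographic sort. A minimal sketch of that idea in Python; the field names and scores here are hypothetical illustrations, not AltAI's actual data model:

```python
# Hypothetical tools with a 0-10 score on each criterion. Higher is better.
tools = [
    {"name": "Tool A", "capability_for_price": 8, "free_tier": 2, "track_record": 5, "intent_fit": 7},
    {"name": "Tool B", "capability_for_price": 8, "free_tier": 3, "track_record": 4, "intent_fit": 6},
    {"name": "Tool C", "capability_for_price": 6, "free_tier": 3, "track_record": 9, "intent_fit": 9},
]

def rank_key(tool):
    # Criteria in strict priority order; a later criterion only breaks ties
    # on the earlier ones. Negated so sorted() puts the best tool first.
    return (
        -tool["capability_for_price"],  # 1. capability for price paid
        -tool["free_tier"],             # 2. free-tier honesty
        -tool["track_record"],          # 3. track record
        -tool["intent_fit"],            # 4. alignment with search intent
    )

ranked = sorted(tools, key=rank_key)
print([t["name"] for t in ranked])  # ['Tool B', 'Tool A', 'Tool C']
```

Note how Tool C's strong track record cannot outrank Tool A or B: capability-for-price comes first, and only ties fall through to the next criterion.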
What "Best Pick" means
On every /tools/<X>-alternatives.html page, the top-ranked alternative carries a Best Pick badge. Best Pick is not "the most popular" or "the one that pays us most" — it is the answer to the question: "If I could only try one of these, which gives me the highest chance of solving my problem?"
Best Pick can change between visits. When it does, we do not hide the change — the page's updated date reflects the last ranking review, and meaningful shifts are noted in the accompanying blog post.
What disqualifies a tool
We omit or de-rank tools that have any of the following:
- Opaque pricing. If you have to "book a demo" to see a price, the tool does not appear as a primary pick on a page targeted at individual users. Enterprise-only tools appear in separate contexts.
- Dark patterns on cancel or downgrade. If we test-subscribed and could not cancel in under two minutes, the tool is excluded.
- Persistent outages. Tools with status pages showing >1% downtime over the prior 30 days drop out of top-3 ranks.
- Safety or policy incidents still unresolved. Recent, documented data-handling problems with no public remediation.
- Product death. When a vendor announces sunset or ceases updates, the tool is archived from active ranks within a week.
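The exclusion rules above are mechanical enough to sketch as a filter. Note the distinction: some failures delist a tool entirely, while the downtime rule only drops it out of the top-3 ranks. A hedged sketch, with hypothetical field names standing in for whatever vendor-status data is actually tracked:

```python
def disqualified(tool):
    """Return a reason string if the tool fails a hard exclusion, else None.
    Field names are hypothetical, for illustration only."""
    if not tool.get("pricing_visible", True):
        return "opaque pricing"
    if tool.get("cancel_seconds", 0) > 120:   # could not cancel in under 2 minutes
        return "dark pattern on cancel"
    if tool.get("sunset_announced", False):
        return "product death"
    return None

def eligible_for_top3(tool):
    # More than 1% downtime over the prior 30 days drops a tool
    # out of top-3 ranks without delisting it entirely.
    return tool.get("downtime_30d_pct", 0.0) <= 1.0

example = {"pricing_visible": True, "cancel_seconds": 95,
           "sunset_announced": False, "downtime_30d_pct": 1.4}
print(disqualified(example))       # None: passes the hard exclusions
print(eligible_for_top3(example))  # False: too flaky for a top-3 rank
```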
How affiliate links affect ranking
They do not. The sequence matters: we pick the rankings first, then apply for affiliate programs for the tools already ranked. Tools without affiliate programs (ChatGPT, Claude, Gemini, Stable Diffusion, most open-source options) can and do appear as Best Pick whenever they are the best answer for a reader.
Every outbound link that is affiliated carries rel="sponsored" in the HTML, and the footer of every page has a plain-English disclosure. If an affiliate program ever pressures us to change a ranking, the program is dropped, not the ranking.
How we update the directory
The directory rebuilds from a single data/tools.json source whenever new entries land. The process we follow:
- We test-subscribe to every tool before listing and check at least one non-happy path (cancel, edge case, failure mode). Testing notes live in internal state, not on the public page — but they drive what we write in the why column.
- Every tool entry has at least one sentence you cannot get from the vendor's own homepage. If we cannot add that, the tool does not get an entry.
- Rankings are reviewed at minimum every 90 days per category, and whenever a notable pricing or capability shift happens in between.
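To make the 90-day cadence concrete, here is a sketch of what one data/tools.json entry and a staleness check might look like. The field names are assumptions for illustration, not the actual schema:

```python
import json
from datetime import date, timedelta

# Hypothetical shape of a single data/tools.json entry; the real schema may differ.
entry = json.loads("""
{
  "name": "Example Tool",
  "category": "chat-assistants",
  "why": "One sentence you cannot get from the vendor's own homepage.",
  "last_reviewed": "2026-01-10"
}
""")

def review_due(entry, today, max_age_days=90):
    # Rankings are reviewed at minimum every 90 days per category,
    # so any entry older than that is flagged for re-review.
    last = date.fromisoformat(entry["last_reviewed"])
    return today - last > timedelta(days=max_age_days)

print(review_due(entry, date(2026, 5, 1)))  # True: last review is over 90 days old
```

A pricing or capability shift would trigger a review regardless of this timer; the check only catches categories that have gone quiet.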
How to tell us we got one wrong
If you have tried one of these tools and our ranking does not match your experience, we want to know. The most useful feedback includes: which page, which tool, what you tried, and what happened. You can reach us via the feedback widget at the bottom of any page, or through the newsletter.
We correct mistakes in public. When a ranking changes because a reader corrected us, the post describing the change says so.
Known limits
The directory is opinionated, editorial, and small-team-run. That means:
- We cover the tools we actively test. A tool's absence is not a judgment — it usually means we have not reached it yet.
- Our rankings reflect a specific implicit user — a generalist building in 2026 who values speed, clarity, and price transparency. Enterprise buyers with procurement requirements should weight our ranks accordingly.
- When we change a top pick, the affected page carries a dated "Ranking changed: <date>" line at the top linking to the review note that prompted the change. The first such note will land the first time we change one — nothing is backdated.