ANYÉ Rating Engine
A deterministic Python engine that turns 202 data points into archetype, trust, and ERRC prescriptions per entity.
The Rating Engine is the part of an ANYÉ audit that nobody else has. It is the bridge between raw evidence and a recommendation a board would sign off on — and it is deterministic, so two analysts running the same evidence get the same answer.
What it is
A Python module of five pure functions. Same inputs, same outputs, no LLM in the loop for the math. The functions read a populated Process Map v6 dictionary and emit:
- classify_archetype() — places the entity on the Porter generic-strategies map: Cost Leader, Differentiator, Niche, or Stuck-in-Middle. Confidence is a number, not a vibe.
- compute_basic_compliance() — scores the foundational hygiene every business should have: hours posted, photos uploaded, response time, schema markup, etc. Below 60 = compliance debt.
- compute_strategic_archetype_fit() — measures the gap between the strategy the entity claims and the strategy the entity executes. A clinic positioning as “premium aesthetic” but priced like a barbershop scores low.
- compute_maister_trust() — David Maister’s trust equation, (Credibility × Reliability × Intimacy) / Self-orientation, fit to the data points the audit collects. Output is a 0–100 trust band per entity.
- generate_errc_prescription() — Blue Ocean’s Eliminate / Reduce / Raise / Create grid, calibrated per archetype so the prescription matches the strategic position.
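The trust function is the easiest to sketch. Here is a minimal illustration of how a Maister trust computation can be scored and banded in pure Python; the input scales, normalisation, and band cut-offs below are assumptions for illustration, not the shipped engine_v1.py logic:

```python
def compute_maister_trust(credibility: float, reliability: float,
                          intimacy: float, self_orientation: float) -> tuple[float, str]:
    """Score Maister's trust equation and map it to a 0-100 band.

    Inputs are assumed normalised to 1-10 scales (an illustrative choice).
    """
    # Trust = (C x R x I) / S; guard against a zero denominator.
    raw = (credibility * reliability * intimacy) / max(self_orientation, 1e-9)
    # With 1-10 inputs the maximum raw value is 10*10*10 / 1 = 1000,
    # so dividing by 10 rescales to 0-100; cap at 100.
    score = min(raw / 10.0, 100.0)
    # Illustrative cut-offs matching the AA / A / BB / B / C labels.
    for cutoff, band in [(80, "AA"), (60, "A"), (40, "BB"), (20, "B")]:
        if score >= cutoff:
            return score, band
    return score, "C"
```

The point of keeping this a pure function is the determinism claim above: the same four inputs always yield the same score and band, with no prompt in the loop.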
The engine source lives at code/data/product/runs/{run_id}/rating_engine/engine_v1.py for every audit run. Each function is unit-tested against the Beautylosophy mock data set and re-tested on every audit before output ships.
Why deterministic, not LLM-judged
LLM scoring drifts. Run the same evidence twice through Claude and you get scores that differ by 5–15 points depending on the prompt. For a competitive recommendation that drives a marketing budget, drift is unacceptable. The engine therefore handles the math (classification, weighting, ratio computation) and Claude handles the prose (translating the math into a Minto-structured executive summary). Every score has a single computable origin.
What goes in
A populated Process Map v6 dictionary for the client and three to five peer entities, plus an industry vertical tag (beauty, dental, F&B, retail, etc.) that selects the right scoring band. The vertical tag is what stops the engine from grading a beauty clinic on the rules of a dental practice.
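The shape of that input can be sketched as a plain dictionary. The key names below are illustrative assumptions, not the actual Process Map v6 schema:

```python
# Hypothetical input shape: a vertical tag, one client entity, and peers.
engine_input = {
    "vertical": "beauty",  # selects the scoring band for this industry
    "client": {
        "entity_id": "clinic-001",
        "hours_posted": True,
        "photo_count": 48,
        "response_time_hours": 2.5,
        "schema_markup": False,
        # ... remaining data points ...
    },
    "peers": [
        # three to five peer entities, same field layout as the client
        {"entity_id": "peer-001", "hours_posted": True, "photo_count": 12},
    ],
}
```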
What comes out
For each entity, a structured object with:
- archetype label + confidence (0.0–1.0)
- basic-compliance score (0–100) with the bottom three failing data points
- strategic-archetype-fit score (0–100) with the top three mismatches
- Maister trust band (AA strong, A solid, BB watch, B weak, C broken)
- ERRC prescription with the top three actions per quadrant
These outputs feed the Five Lens view and the Investment Priority Matrix downstream.
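The per-entity object described above can be pictured as a dataclass. Field names here are assumptions inferred from the list, not the shipped engine_v1.py contract:

```python
from dataclasses import dataclass, field

@dataclass
class EntityRating:
    archetype: str                     # e.g. "Differentiator"
    archetype_confidence: float        # 0.0-1.0
    basic_compliance: int              # 0-100
    failing_points: list[str] = field(default_factory=list)   # bottom three
    strategic_fit: int = 0             # 0-100
    top_mismatches: list[str] = field(default_factory=list)   # top three
    trust_band: str = "C"              # AA / A / BB / B / C
    errc: dict[str, list[str]] = field(default_factory=dict)  # quadrant -> top three actions
```

A structured object like this, rather than free text, is what lets the downstream Five Lens and Investment Priority Matrix consume engine output mechanically.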
Calibration debt we publish
Engine v1 has a known calibration issue: the Stuck-in-Middle archetype uses a neutral 0.6× tier-multiplier across all ERRC categories. Real Stuck-in-Middle entities benefit from a sharper bias toward Eliminate over Create. This is a v2 fix scheduled in the build cycle. We mention it on the public methodology page rather than hiding it because it is the kind of detail a sharp reader will spot — and we would rather the reader trust us for flagging it than catch us hiding it.
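To make the calibration issue concrete, here is a sketch of what a per-archetype tier-multiplier table could look like. Only the flat 0.6 row for Stuck-in-Middle reflects the published debt; every other value is an invented placeholder, not the engine's actual calibration:

```python
# Illustrative v1 multipliers; the uniform 0.6 Stuck-in-Middle row is the
# known calibration debt (v2 should bias Eliminate above Create there).
TIER_MULTIPLIERS_V1 = {
    "Cost Leader":     {"eliminate": 0.8, "reduce": 0.9, "raise": 0.5, "create": 0.4},
    "Differentiator":  {"eliminate": 0.4, "reduce": 0.5, "raise": 0.9, "create": 0.8},
    "Niche":           {"eliminate": 0.5, "reduce": 0.6, "raise": 0.8, "create": 0.7},
    "Stuck-in-Middle": {"eliminate": 0.6, "reduce": 0.6, "raise": 0.6, "create": 0.6},
}

def errc_weight(archetype: str, quadrant: str, base_score: float) -> float:
    """Weight a candidate ERRC action by the archetype's tier multiplier."""
    return base_score * TIER_MULTIPLIERS_V1[archetype][quadrant]
```

With the flat row, an Eliminate action and a Create action with equal base scores tie for a Stuck-in-Middle entity, which is exactly the bias v2 is scheduled to fix.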
How a reader can verify the engine
The engine is open to inspection on request. Send the keyword ENGINE by WhatsApp and we share the read-only engine_v1.py file plus the Beautylosophy mock fixture so a reader with Python skills can reproduce the math. We do not open-source the engine to prevent verbatim cloning by direct competitors, but we do make it inspectable.
Where it sits
The Rating Engine is the deterministic layer underneath the analytical layer. The analytical layer (Claude reasoning) interprets engine outputs in plain prose. Together they form the hybrid analytical stack documented in audit_pipeline_mvp/MVP_PROMPT_CHAIN.md.
Related
- Process Map v6 — the schema the engine reads from.
- Five Lens — how engine output is presented to a non-analyst.
- Investment Priority Matrix — how engine output becomes a budget allocation.