Build a P6 math self-study prototype this week. The module should do three things generic AI tools cannot: (1) pick questions aligned to HK curriculum by skill prerequisite graph, (2) track per-student mastery across sessions, (3) give the teacher a dashboard of who's struggling and where. Skip gamification, skip streak mechanics, skip leaderboards. Focus on the post-wrong experience — that's where the competitive whitespace is widest and teacher demand is strongest.
The prototype should be demoable at P6 because Essai already has 100 verified P6 questions in the DB, teachers explicitly said P6 students struggle most with self-study,1 and KooBits (the dominant HK primary math platform) stops at P6 with no AI tutoring layer — making P6 the exact transition point where Essai can differentiate.
**P0 — ship in this week's prototype**

| Feature | Why | Competitive Position | Evidence |
|---|---|---|---|
| Adaptive question selection by skill prerequisite | Core differentiation from drill apps. Math Academy's knowledge graph is the gold standard; Essai has atomic skills + competency maps already built. | DIFFERENTIATED | Math Academy 4x learning speed with knowledge-graph traversal2 |
| Post-wrong hint escalation | Students who get stuck need immediate, structured help — not just an answer. Progressive reveal: hint → worked step → full solution. Leslie already built the hint infrastructure. | CONSENSUS | IXL step-by-step walkthroughs after every wrong answer3 |
| Skill-level mastery tracking (student view) | Students need to see which atomic skills they've mastered vs. need work. Duolingo's strength isn't gamification — it's the visible progress tree. | DIFFERENTIATED for HK | IXL SmartScore drives 16-point increase on state assessments4 |
| Teacher visibility (who's struggling, which skills) | This is the purchase decision-maker's need. B2B2C: school buys, student uses. Teacher sees which students are behind — this is what i-Ready gets wrong (teachers can't act on the data).5 | DIFFERENTIATED | Mar 15 sync: teachers want to monitor AI tutoring sessions1 |
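The adaptive-selection feature above reduces to a frontier query over the prerequisite graph: eligible questions are those whose skill is not yet mastered but whose prerequisites all are. A minimal sketch, assuming a mastery map keyed by atomic-skill id; the skill names, the 0.8 threshold, and the data shapes are illustrative, not Essai's actual schema.

```typescript
type SkillId = string;

interface SkillNode {
  id: SkillId;
  prerequisites: SkillId[];
}

const MASTERY_THRESHOLD = 0.8; // assumed cutoff for "mastered"

function mastered(mastery: Map<SkillId, number>, id: SkillId): boolean {
  return (mastery.get(id) ?? 0) >= MASTERY_THRESHOLD;
}

// A skill is on the "knowledge frontier" when it is not yet mastered
// but every prerequisite is. Questions are drawn from frontier skills
// only, so the student is never asked something they lack groundwork for.
function frontierSkills(
  graph: SkillNode[],
  mastery: Map<SkillId, number>
): SkillId[] {
  return graph
    .filter(
      (s) =>
        !mastered(mastery, s.id) &&
        s.prerequisites.every((p) => mastered(mastery, p))
    )
    .map((s) => s.id);
}
```

With this shape, "pick the next question" is just sampling the question bank filtered to frontier skills — deterministic, no LLM in the loop.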
**P1 — next, each behind a gate**

| Feature | Why | Gate |
|---|---|---|
| AI chat per question (Socratic guidance) | Leslie owns Ask AI. Eric designs the prompt context — feed it the question, student's wrong answer, and the relevant atomic skill. Khanmigo charges US$4/mo for this.6 | Leslie ships the AI chat component |
| Session summary report | After each practice session: skills practiced, accuracy, areas to review. Maps to the teacher dashboard. | Depends on P0 tracking working |
| Spaced repetition on weak skills | Math Academy's FIRe algorithm shows that reviewing prerequisites implicitly during advanced practice is more efficient than standalone drill.7 Essai's competency map has prerequisite links already. | Needs enough session data to schedule reviews |
**P2 — later, gated on demand**

| Feature | Why | Gate |
|---|---|---|
| Question bank expansion (all grades) | Burst-expand to P3/S3/S6 using existing generator infra. One sprint. | School commitment |
| Tutorial resource mapping | Post-wrong: link to YouTube / worked examples per atomic skill. Cantonese content gap is real.1 | Resource curation effort |
| Difficulty calibration labels | Needs Renee input. First-pass possible without her. | Teacher feedback |
**Deferred**

| Feature | Why Defer |
|---|---|
| Streak / XP gamification | Duolingo's advantage is a 500M-user social graph. Gamification without network effects is just a progress bar. Focus on substance first. |
| 3D rotation viewer | Cool but niche. Only relevant for ~5 questions per topic set. Deferred per Mar 15 sync. |
| Camera scan / OCR grading | VLM accuracy at 71% for handwritten K-12 math (DrawEduMath Dec 2025).8 Not production-ready. |
| Leaderboards / class competitions | Adoption risk: competitive elements demotivate weak students — exactly the cohort teachers want to help.5 |
The research reveals a consistent pattern across platforms and geographies. Math self-study fails for three reasons, always in this order:
**1. No post-wrong support.** This is the #1 failure mode. Students answer incorrectly, see the right answer, and still don't understand what went wrong. Drill apps that show only correct/incorrect create a learned helplessness loop.9 i-Ready is the canonical failure: parents report children going from loving math to hating it because the app provides no meaningful post-wrong support.5
**2. Wrong difficulty.** Non-adaptive platforms serve the same difficulty level regardless of student ability. Advanced students are bored; struggling students are overwhelmed. The "goldilocks zone" — questions just beyond current ability — is where learning happens.2 Math Academy's diagnostic places students at their exact "knowledge frontier" in 30-45 minutes, avoiding both problems.10
**3. Forgetting.** Without spaced repetition, math practice has a half-life. Self-learners report a frustrating cycle of forgetting material after breaks and having to re-learn foundational concepts.9 Math Academy's FIRe algorithm addresses this by giving fractional credit for implicit prerequisite practice — when you solve a hard problem, you're also reviewing the easier skills it builds on.7
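The fractional-credit idea can be sketched as review credit that decays geometrically down the prerequisite chain. This is a loose sketch of the cited FIRe behaviour, not Math Academy's actual algorithm; the 0.5 decay factor, the cutoff, and the skill names are invented for illustration.

```typescript
type SkillId = string;

interface ReviewState {
  repetitions: number; // accumulated (possibly fractional) successful reviews
}

const IMPLICIT_CREDIT = 0.5; // assumed fraction passed down one prerequisite level

// Solving a problem on `skill` gives full credit to that skill and
// geometrically shrinking credit to each prerequisite beneath it,
// stopping once the credit becomes negligible.
function creditSkill(
  states: Map<SkillId, ReviewState>,
  prereqs: Map<SkillId, SkillId[]>,
  skill: SkillId,
  credit = 1.0
): void {
  const s = states.get(skill) ?? { repetitions: 0 };
  s.repetitions += credit;
  states.set(skill, s);
  if (credit * IMPLICIT_CREDIT >= 0.1) {
    for (const p of prereqs.get(skill) ?? []) {
      creditSkill(states, prereqs, p, credit * IMPLICIT_CREDIT);
    }
  }
}
```

The scheduler then pushes back the next review date of any skill whose accumulated repetitions crossed a threshold, so prerequisites reviewed implicitly never appear as standalone drill.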
Essai's question schema already carries skillsTested, prerequisites, errorTraps, and toAce fields per question.

| Player | Model | Self-Study Flow | What They Got Right | What's Missing |
|---|---|---|---|---|
| Math Academy GOLD STANDARD | B2C, US$49/mo | Diagnostic → knowledge frontier → adaptive questions from 3,000-topic prerequisite graph → FIRe spaced repetition | 4x learning speed. 3rd grader completing Calc BC.11 Knowledge graph visualization is viral on X (15.8K likes).12 | US$49/mo is expensive for HK parents. No Cantonese. No HK curriculum alignment. No B2B school model. |
| Khan Academy / Khanmigo | Freemium + US$4/mo AI tutor | Mastery-based progress → video explanation when stuck → Khanmigo Socratic chat | Khanmigo at US$4/mo is the price benchmark for AI tutoring.6 Socratic questioning prevents answer-giving. | No HK curriculum. Videos in English only. Khanmigo accuracy issues (GPT-4 Turbo still fails on some math).13 |
| IXL | B2B + B2C | SmartScore 0-100 per skill → adaptive difficulty → step-by-step walkthrough on wrong → Challenge Zone at 90+ | SmartScore is simple and effective. 16-point increase on state assessments when students hit mastery.4 | Repetitive. No AI chat. No HK curriculum. Criticized for being drill-focused. |
| Photomath | Freemium + subscription | Camera scan → instant solution → step-by-step explanation → 400+ animated tutorials (Plus) | Camera input is magical UX for homework help. 400+ animated whiteboard explanations.14 | Encourages answer-copying, not learning. No adaptive practice. No teacher visibility. No curriculum alignment. |
| Brilliant | B2C, subscription | Interactive visual lessons → manipulate shapes/graphs in real-time → concept-first, not drill-first | Interactive visualizations make abstract math tangible.15 10M+ learners. | Not exam-aligned. Not suitable for HK school adoption. No teacher dashboard. |
| Duolingo Math | Freemium | Gamified lessons → Cash Dash, Secret Equation, Math Paths mini-games → streak + XP + leaderboard | Engagement loop is best-in-class. 93% of US adults have math anxiety; Duolingo tackles this with fun-first design.16 | Primary-only. Shallow depth. Not exam-aligned. No teacher tools. Not suitable for school adoption. |
| Mathspace | B2B | Worked examples → adaptive hints → step-by-step feedback → question streaks + daily challenges | Step-by-step feedback on worked solutions, not just final answers. Question streaks reward effort.17 | Limited HK presence. No Cantonese. |
| Player | Level | Key Features | Essai's Edge |
|---|---|---|---|
| KooBits | P1-P6 | 100K+ questions, animated Cantonese lessons, AI adaptive, HK$116/mo, 200K users, used by elite HK schools18 | KooBits stops at P6. No AI chat. No per-question guidance. Essai can own the P6→S1 transition and AI tutoring layer. |
| SmartQuest | DSE (S4-S6) | AI auto-marking, paper generation, personalized learning paths. 80+ HK schools on free trial. EDB-approved.19 | SmartQuest is exam-focused (paper generation + marking). Essai's self-study + AI tutor layer is complementary, not competing. |
| Practifly AI | Primary + Secondary | Cognitive diagnostic covering 6 core mathematical competencies. 200+ schools in Greater Bay Area. 92% diagnostic accuracy.20 | Diagnostic-only — no adaptive practice, no AI tutor, no question generation. Essai does all three. |
| Snapask | All levels | Photo-to-tutor Q&A. Integrated ChatGPT in 2023.21 4.5M students across Asia. | Snapask is reactive (student asks, tutor answers). Essai is proactive (system picks the right question and guides the student through it). |
| Geniebook | Primary + Secondary | AI worksheets + live lessons + physical centers. 300K users. Profitable 2024-25.22 HK + SEA. | Geniebook is moving to hybrid (online + physical). Essai is pure platform — lighter, cheaper, school-integrated. |
Every school considering Essai will ask: "Can't students just use ChatGPT?" The answer is yes, they can — and it won't work for math practice. Here's why, with evidence:
**1. Curriculum alignment.** ChatGPT doesn't know what P6 students in HK should be learning this term. It can't distinguish a TSA-level question from a DSE-level question. Essai's question bank is tagged by grade, topic, difficulty, and curriculum standard — every question maps to the HK syllabus. This is table stakes for school adoption, and it's something generic AI simply cannot do.
**2. Persistent mastery tracking.** ChatGPT has no memory between sessions (unless manually configured). It can't tell you: "This student has attempted 15 trigonometry questions and consistently fails on sin/cos ratio applications." Essai's atomic-skills tracking persists across sessions, building a mastery profile per student. This is what powers adaptive question selection.
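A persistent mastery profile of the kind described here can be as simple as one record per (student, atomic skill), updated with an exponentially weighted moving average so recent attempts dominate. A minimal sketch; the 0.3 learning rate and field names are assumptions, not Essai's real model.

```typescript
interface SkillMastery {
  skillId: string;
  attempts: number;
  mastery: number; // 0..1 estimate of current ability
}

const LEARNING_RATE = 0.3; // assumed weight on the newest attempt

function recordAttempt(m: SkillMastery, correct: boolean): SkillMastery {
  const outcome = correct ? 1 : 0;
  return {
    skillId: m.skillId,
    attempts: m.attempts + 1,
    // EWMA: recent attempts count more, so the profile tracks current
    // ability rather than a lifetime average that old failures drag down.
    mastery: m.mastery + LEARNING_RATE * (outcome - m.mastery),
  };
}
```

Persisting this record per student is exactly the state a stateless chat interface cannot accumulate, and it is the input the frontier-selection logic reads.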
**3. Teacher controls and visibility.** This is the purchase-decision layer. Teachers need to see which students are struggling and on which skills. They need to assign specific practice. They need to disable AI help for exams. None of this exists in ChatGPT, Claude, NotebookLM, or Gemini. It's the institutional layer that justifies school-level procurement.
Don't fight GenAI — integrate it. The AI chat (Ask AI) that Leslie is building should use an LLM for Socratic guidance when a student is stuck on a specific question. The key design constraint: feed the LLM the question context, the student's wrong answer, and the relevant atomic skill, so it can guide without hallucinating. This is exactly what Khanmigo does — and Essai can do it better because Essai owns the question structure.
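The context-feeding constraint can be sketched as a prompt builder that packages the question, the student's wrong answer, and the skill metadata before any LLM call. The errorTraps and toAce field names come from this doc's question schema; everything else (the system-prompt wording, the function shape) is illustrative.

```typescript
interface Question {
  text: string;
  skill: string;       // the atomic skill this question exercises
  errorTraps: string[]; // known wrong-answer patterns for this question
  toAce: string;        // what a complete solution must demonstrate
}

// Assemble grounded context so the LLM guides Socratically instead of
// hallucinating: it sees the exact question, the exact mistake, and the
// exact skill, and is instructed never to hand over the answer.
function buildSocraticPrompt(q: Question, wrongAnswer: string): string {
  return [
    "You are a math tutor. Never state the final answer.",
    "Ask one guiding question at a time.",
    `Question: ${q.text}`,
    `Atomic skill being practised: ${q.skill}`,
    `Student's wrong answer: ${wrongAnswer}`,
    `Known error traps: ${q.errorTraps.join("; ")}`,
    `What a full solution demonstrates: ${q.toAce}`,
  ].join("\n");
}
```

Because Essai owns the question structure, the error-trap matching the student's answer can be detected before the LLM call — the model starts from a diagnosis, not a guess.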
| Grade | Question Bank Ready? | Competitive Gap | School Demand Signal | Demo Impact | Verdict |
|---|---|---|---|---|---|
| P3 | No questions yet | KooBits dominates P1-P6. Duolingo Math covers basics. | Lower — basic arithmetic, less teacher pain | Low — simple questions not impressive | NOT YET |
| P6 | 100 verified questions | KooBits has no AI tutor. SmartQuest is DSE-only. P6 is the gap. | High — Mar 15 sync: P6 students struggle most with self-study1 | High — geometry + diagrams + skill-based adaptive is visually compelling | START HERE |
| S3 | TSA coverage exists | Some SmartQuest overlap. Fewer HK competitors at this level. | Medium — TSA prep is a known need | Medium — TSA questions are simpler than DSE | PHASE 2 |
| S6 (DSE) | 200 questions (DSE14 + DSE17) | SmartQuest (80+ schools). More competitive. | High — exam prep is urgent. But SmartQuest is already there. | High — but already served | PHASE 2 |
Math Academy's 3,000-topic knowledge graph is their moat. A third-grader completed six years of math in one year because the system never asks a question the student isn't ready for — it traces prerequisites all the way back.12 Essai's competency maps (workdir/math/knowledge/competency-map/) have prerequisite links. Use them.
IXL's SmartScore (0-100) is brilliantly simple. Wrong answers make it harder to advance; correct answers on harder questions earn more. Students always know where they stand. The "Challenge Zone" at 90+ gives advanced students something to reach for.3
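The described behaviour can be sketched as a score update where gains scale with difficulty and taper near 100, while wrong answers cost a flat penalty. IXL's real algorithm is proprietary; every constant and the difficulty scale here are guesses for illustration only.

```typescript
// SmartScore-style skill score on a 0-100 scale (sketch, not IXL's formula).
function updateSmartScore(
  score: number,
  correct: boolean,
  difficulty: number // assumed scale: 1 (easy) .. 3 (hard)
): number {
  if (correct) {
    // Harder questions earn more, and gains shrink as the score
    // approaches 100, so the last few points are the hardest to win.
    const gain = difficulty * 4 * (1 - score / 100);
    return Math.min(100, score + gain);
  }
  // A flat penalty on wrong answers makes 100 unreachable by guessing.
  return Math.max(0, score - 8);
}
```

A "Challenge Zone" is then just a UI state triggered at score >= 90, serving only high-difficulty questions.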
Khanmigo never gives the answer. It asks: "What do you think the first step would be?" This is a design constraint, not a feature — and it's critical. The Ask AI chat must be Socratic, not solution-providing. Feed it the question's toAce and errorTraps fields so it can guide precisely.
Mathspace rewards question streaks and effort, not just accuracy. Students earn more points for sustained focus. Daily and weekly challenges create low-barrier return triggers.17 This is the minimal-viable engagement loop — simpler than Duolingo's full gamification stack, but effective.
Essai's Chinese/English oral module has school adoption. The pattern: student enters → picks level → immediate practice → immediate feedback. No complex setup. The math self-study module should follow this exact entry flow, adapted for adaptive question selection instead of fixed prompts.
The EDB grant programme launched in December 2025: HK$500 million total, HK$500,000 per school.26 Application deadline: February 28, 2026. Schools must implement AI-assisted teaching in at least three subjects and conduct public demo lessons. This is the single biggest GTM lever for Essai right now.
Geniebook achieved profitability in 2024-25 after pivoting to hybrid (online + physical centers).22 This suggests pure-platform may not be enough for scale in HK — but for Essai's current phase (trial schools, prove the product), pure platform is correct. Hybrid comes later if at all.
**Adaptive question selection**

Source test: PASS — Math Academy's knowledge-graph approach is backed by research papers and a demonstrated 4x learning-speed improvement.2
Feasibility test: PASS — Essai's competency maps already have prerequisite links. The algorithm is deterministic (JS), not AI-dependent. No accuracy gate.
Adoption test: CONDITIONAL — Adaptive selection only works if students actually use the module repeatedly. Essai's B2B2C model helps: teachers assign practice, students have to do it.
Counter-example: i-Ready is adaptive but universally hated.5 The difference: i-Ready makes students listen passively. Essai's approach is active problem-solving. The key is what happens after the question, not the question selection itself.
Reformed position: HOLD — Ship it. The existing competency maps make this low-effort, high-impact.
**Post-wrong hint escalation**

Source test: PASS — Every successful math platform (IXL, Khan, Mathspace) has some form of post-wrong support. Absence of it is the i-Ready failure mode.
Feasibility test: PASS — Leslie already built hint infrastructure. Eric's questions have errorTraps and toAce fields. Progressive reveal is straightforward.
Counter-example: Photomath gives full solutions immediately — and is criticized for enabling copying, not learning.14 The progressive reveal pattern (hint → step → solution) prevents this.
Reformed position: HOLD
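The progressive-reveal pattern defended here can be sketched as a three-rung ladder unlocked one step per failed attempt, so the full solution appears only after the student has actually tried. The stage names and function shape are illustrative.

```typescript
type RevealStage = "hint" | "workedStep" | "fullSolution";

const LADDER: RevealStage[] = ["hint", "workedStep", "fullSolution"];

// Each additional wrong attempt after the first unlocks one more rung.
// This blocks the Photomath failure mode: there is no path to the full
// solution without engaging with the hint and worked step first.
function nextReveal(attemptsAfterFirstWrong: number): RevealStage {
  const i = Math.min(attemptsAfterFirstWrong, LADDER.length - 1);
  return LADDER[i];
}
```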
**Teacher visibility**

Source test: PASS — Mar 15 sync: a teacher explicitly asked to monitor AI tutoring sessions.1
Adoption test: PASS — This is the B2B purchase decision feature. Schools buy tools that give teachers visibility. Without this, Essai is just another practice app.
Reformed position: HOLD — Even a minimal version (list of students + skills attempted + accuracy %) is valuable for the prototype demo.
| Signal | Source | Implication |
|---|---|---|
| "Math Academy is the most complete and effective edtech resource I know... completely knowledge-graph traversal based and uses spaced repetition, interleaving etc." | @BVSrinivasan, X, Mar 15 202627 | Knowledge-graph approach is gaining mainstream credibility. Essai's competency maps are a lightweight version of this. |
| A 3rd grader completed Calc BC in one year on Math Academy. The tweet got 15,877 likes. | @ninja_maths, X, Mar 1 202612 | Accelerated learning stories are the most viral edtech content. Essai should track and surface similar stories as they emerge. |
| "Use Gemini Canvas to generate standard-aligned problems, then upload to Snorkl for instant student feedback." | @techcoachjuarez, X, Feb 11 202628 | Teachers are already cobbling together GenAI + feedback tools. Essai can be the integrated solution. |
| Tsinghua open-sourced OpenMAIC: AI multi-agent interactive classroom with AI teachers and AI students. | @aigclink, X, Mar 16 202625 | AI-native education infrastructure is the direction of travel. Purpose-built beats generic. |
| Grok summarizes Math Academy's 400+ page working draft: "4x faster math mastery via 3000+ topic knowledge graph." | @grok, X, Mar 6 202629 | Math Academy is publishing their methodology openly. Essai can learn from it without copying the scale. |
Build a P6 self-study prototype this week with three things: adaptive question selection using the existing prerequisite graph, post-wrong hint escalation, and a minimal teacher visibility dashboard.
Skip gamification, skip streaks, skip 3D rotation. The competitive whitespace in HK is not "more drill" — it's what happens after the student gets it wrong. KooBits drills without tutoring. SmartQuest assesses without practice. ChatGPT tutors without curriculum. Essai can be the first platform in HK that does all three: curriculum-aligned adaptive practice + post-wrong AI guidance + teacher visibility.
The EDB HK$500K grant programme is the GTM accelerant. Schools need AI tools for 3+ subjects by August 2028. Math self-study with AI tutor checks this box. Get the prototype demoable, then pursue EDB approval alongside SmartQuest.
Start at P6 because nobody owns it, the questions are ready, and the P6→S1 transition is where HK students fall behind permanently.