Static roster vs SchoolIntel
A feature-by-feature comparison of the two artefacts
| Dimension | Static roster vs SchoolIntel | Why it matters | Source |
|---|---|---|---|
| Data freshness | Static roster: snapshot at export. SchoolIntel: weekly re-read. | Static rosters age from the moment they're delivered — every leadership change, hire, and inspection is invisible until the next paid refresh. SchoolIntel re-reads sources weekly so your queue reorders without you rebuilding it. | Vendor refresh cadence vs SchoolIntel signal model (verified) |
| Source citation per row | Static roster: rarely cited; sometimes 'compiled from public sources.' SchoolIntel: per-row source URLs and dates. | Reps and agencies cannot defend an outreach list to a sceptical client without provenance. SchoolIntel pins each fact to a KHDA report, IBO listing, BSO inspection, COBIS membership, school site, or hiring post — with the URL on the row. | ISC Research, EducationDataLists, scraped CSV vendors (verified) |
| Role coverage | Static roster: usually one generic 'info@' or 'principal@' alias. SchoolIntel: T1/T2/T3 buyer-role taxonomy. | Generic info inboxes route to admissions, not buyers. SchoolIntel maps staff to a role taxonomy — head of school, IB coordinator, head of digital learning, EAL/ELL lead, head of inclusion — so messaging matches the actual champion. | SchoolIntel role model + school staff pages (verified) |
| Source consensus / confidence score | Static roster: implicit single-source trust. SchoolIntel: explicit confidence per school across 8+ origins. | If only one source confirms a school, that's a reliability problem. SchoolIntel scores schools by how many independent sources agree (KHDA + IBO + BSO + COBIS + school site + hiring board), so reps know which rows to trust. | SchoolIntel source consensus engine (verified) |
| Signal stamps | Static roster: none. SchoolIntel: leadership change, hiring, expansion, accreditation, group announcement. | A roster cannot tell you a school just lost an Outstanding rating, hired a new head of digital learning, or opened a campus. SchoolIntel attaches dated signal stamps so reps work the schools where something actually changed. | SchoolIntel signal model (verified) |
| Email verification + 90-day re-check | Static roster: one-time SMTP at export, often stale. SchoolIntel: SMTP + 90-day re-verification cycle. | An unverified contact list will burn sender reputation inside two campaigns. SchoolIntel re-verifies inside the product on a 90-day cadence and surfaces stale rows for review before they ship. | SchoolIntel verification pipeline (verified) |
| Curriculum + group structure | Static roster: school name + city. SchoolIntel: curriculum stripe, school group, KHDA tier, accreditation. | Selling MIS, AI, or curriculum platforms requires curriculum and group context. SchoolIntel pre-tags every school with British/IB/American/Indian/MOE stripes plus group ownership (GEMS, Taaleem, Nord Anglia, etc.) so reps don't re-research. | KHDA + IBO + group sites + SchoolIntel structural map (verified) |
| Inspection / accreditation context | Static roster: not included. SchoolIntel: KHDA/DSIB rating, BSO inspection, IB authorization, NEASC. | Inspection cycles are the single most reliable buying-signal calendar in international education. SchoolIntel attaches the latest rating, the named improvement areas, and the timing window to each row. | KHDA DSIB + BSO + IBO + NEASC sources (verified) |
| Cited outreach reason per account | Static roster: blank — reps invent the reason. SchoolIntel: paragraph-long 'why now' with source URLs. | Reply rates collapse without a real reason in the first sentence. SchoolIntel writes the cited reason — 'Just dropped from Outstanding to Very Good in the Feb 2026 DSIB cycle; named improvement areas include teaching and learning' — so reps don't fabricate context. | SchoolIntel signal narrative engine (verified) |
| Pricing model | Static roster: one-time fee per export ($2k–$10k typical) or annual seat. SchoolIntel: live SaaS subscription. | A static export is a sunk cost that decays daily. A live subscription pays for the week-on-week refresh, the role coverage, and the citation trail. The two are not the same product. | ISC Research / EducationDataLists pricing benchmarks (verified) |
| Defensibility to clients | Static roster: 'we bought a list.' SchoolIntel: 'here is the source, the date, and the change that made this account relevant.' | Agencies lose accounts when they can't show the work. SchoolIntel's source trail and signal stamps double as the audit trail an agency hands its client. | Agency client-brief patterns (verified) |
| Privacy + removal request handling | Static roster: rarely handled; CSV is in 100 inboxes. SchoolIntel: authenticated product + removal workflow. | GDPR, the UAE PDPL, Singapore's PDPA, and the UK DPA all require named-individual handling pathways. A loose CSV in a Slack channel is a regulatory exposure. SchoolIntel handles personal data inside the authenticated product with a documented removal process. | SchoolIntel privacy controls (verified) |
| Coverage breadth | Static roster: usually one region or one curriculum. SchoolIntel: ~3,500 international schools across UAE, GCC, Asia, Europe. | Region-locked rosters force buyers to stitch together three vendors. SchoolIntel ships one cross-region account model and lets teams filter to UAE, Qatar, Saudi, Singapore, Hong Kong, or Europe with the same role and signal layers. | SchoolIntel coverage model (verified) |
| Updates / changelog | Static roster: no record of change. SchoolIntel: per-school change log with diffs and source URLs. | When a school's curriculum or leadership changes, SchoolIntel keeps the previous state with a date stamp. A static roster simply overwrites, which means trends and turnover are invisible. | SchoolIntel change log (verified) |
| Workflow integration | Static roster: CSV → CRM, manual mapping. SchoolIntel: CRM-shaped account queue, role tags, signal triggers. | A roster lands as a flat blob an SDR has to clean. SchoolIntel exports CRM-ready records with curriculum, group, role, signal, and citation already structured — so the SDR opens a cited account, not a spreadsheet. | SchoolIntel CRM integration patterns (verified) |
What people actually mean by 'static school roster'
When EdTech teams say they 'bought a list of international schools,' they usually mean one of four things. The first is a one-time CSV export from a public directory like the International Schools Database, Teach Away, or International School Search. The second is a paid market dump from a research vendor like ISC Research — typically a sized PDF plus a contact spreadsheet, sold annually. The third is a vendor-sold email list (EducationDataLists is the most common example) where the buyer pays a flat fee for a CSV of generic school inboxes. The fourth is a scraped artefact — usually one engineer's weekend project against the IBO directory, BSO list, or KHDA portal, dropped into a Google Sheet and shared in Slack.
All four share the same shape: rows of school names, city, sometimes curriculum, sometimes a generic email, sometimes a phone number. The selling point is breadth. The hidden cost is that none of them tell you what changed since the file was generated. By the time an SDR opens the spreadsheet on Monday, the head of school at four of those rows has already left, two campuses have opened that aren't on the list, and the IB authorization the roster claims is current has actually expired.
This page is the honest comparison between those four artefacts and a live alternative. We are not arguing that a roster is useless — it is a fine first universe. We are arguing that it is the wrong unit of work. The right unit of work is a sourced, ranked account queue with role coverage, signal stamps, and a cited reason per row. That is the gap SchoolIntel exists to close.
Key figures:
- Roster archetypes covered: 4 (directory CSV, paid dump, email list, scrape). Source: SchoolIntel buyer interviews (2025–26).
- Average roster age at delivery: 3–9 months. Source: vendor publication-cycle benchmarks.
- Refresh cost per cycle: $2k–$10k+. Source: ISC Research / list-vendor pricing benchmarks.
Why the roster shape persists
Spreadsheets are the lingua franca of B2B sales tooling. Every CRM imports CSVs. Every SDR knows VLOOKUP. Every agency client has Excel. The roster shape persists because it is the lowest common denominator format — not because it is the best representation of a school market. SchoolIntel still exports CSV when teams ask; we just refuse to ship a CSV without the source URL, the freshness date, and the role tag attached to every row.
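To make that export policy concrete, here is a minimal sketch of what a citation-carrying CSV row could look like. The field names and values are illustrative assumptions, not SchoolIntel's actual export schema:

```python
import csv
import io

# Hypothetical column set: every row carries its provenance alongside
# the contact data (field names are illustrative, not the real schema).
FIELDS = ["school", "curriculum", "group", "role", "contact_email",
          "source_url", "last_verified", "signal"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({
    "school": "Example International School",   # placeholder account
    "curriculum": "IB",
    "group": "Example Group",
    "role": "head_of_digital_learning",
    "contact_email": "redacted@example.com",    # personal data stays in-product
    "source_url": "https://www.khda.gov.ae/",   # provenance on the row
    "last_verified": "2026-02-14",              # freshness date
    "signal": "leadership_change",              # dated signal type
})
```

The point of the shape is that a reviewer can audit any single row without opening the product: the source URL, the verification date, and the role tag travel with the contact.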
Why teams buy static rosters in the first place
Static rosters look like the right answer in the first hour of a project. They are cheap relative to building a research function, they ship fast, they include thousands of rows, and they let a team check the 'we have a target market' box before the next planning meeting. Most EdTech founders, agency leads, and head-of-marketing hires have bought one in the past 24 months. The instinct is correct: a list of names is the precondition for outreach. The mistake is treating the list as the workflow.
There are four real reasons teams reach for a static roster:
- Speed: a CSV arrives in a week. Building a sourced account queue from scratch takes one engineer 6–8 weeks. The procurement timeline often forces the wrong choice.
- Procurement shape: a one-time invoice is easier to approve than a SaaS subscription with annual renewal. Many EdTech CFOs prefer the capex shape even when it costs more over 18 months.
- Optimism about decay: first-time buyers underestimate how fast school rosters age. The internal pitch sounds reasonable: 'we'll buy the list, run the campaign, refresh next year.' By month four the bounce rate is unmanageable.
- Brand familiarity: names like ISC Research carry weight in EdTech procurement because they've sold lists for two decades. The brand makes the cheque easier; it does not make the data fresher.
What the roster is genuinely good for
We don't believe rosters are useless — we believe they're misused. A static roster is a fine artefact for three things:
- Market sizing: if you need a slide that says 'there are ~14,000 English-medium international schools globally,' a paid dump is the fastest way there. SchoolIntel publishes a similar count, but the directory or ISC report is the conventional citation.
- Initial universe definition: the roster gives you the set of schools to enrich, not the set of schools to email. Use it as the input to a research pass, not the output.
- Coverage validation: comparing your live data against a paid roster catches gaps. SchoolIntel uses ISDB, Teach Away, and International School Search among its ~8 inputs precisely because the directory shape is useful for cross-checking — but never as the primary truth source.
Why static rosters fail in practice — the decay arithmetic
International schools are unusually high-turnover institutions. Heads of school rotate roughly every three to four years. IB coordinators and heads of digital learning rotate faster — closer to two-and-a-half. New campuses open in growth markets like the UAE, Vietnam, and Saudi Arabia at a rate of 30–60 per year. KHDA inspection ratings shift annually. BSO inspection cycles run every three years. IB programme authorizations are reviewed on a five-year cadence. Each of these is a buying-trigger event the roster cannot see.
The arithmetic of decay is unforgiving. SchoolIntel's tracking across ~3,500 international schools shows roughly 18% annual head-of-school turnover and ~30% contact decay across vendor-sold lists at the 12-month mark. By month 18 the figure approaches 45%. A roster bought in January is materially worse by July, and structurally broken by the time the renewal invoice arrives.
Key figures:
- Head-of-school turnover (annual): ~18%. Source: SchoolIntel leadership tracking 2025–26.
- IB coordinator tenure (median): ~2.5 years. Source: IB job-board cohort analysis.
- New campus openings per year: 30–60. Source: KHDA + Saudi MoE + Vietnam EdTech market reports (cross-referenced).
Static roster contact decay over 18 months
Approximate share of contacts on a vendor-sold international-school list that are stale (left role, bounced email, or closed campus) at each interval. Based on SchoolIntel re-verification of sampled lists, 2025–26.
- Month 0 (delivery state): 5% stale contacts
- Month 3 (first cycle): 12% stale contacts
- Month 6 (post-summer turnover): 19% stale contacts
- Month 12 (annual renewal trigger): 30% stale contacts
- Month 18 (structurally broken): 45% stale contacts
The five failure modes that catch buyers
Roster failure usually shows up as one of five patterns. Each is recoverable in isolation; together they are why teams stop trusting the spreadsheet:
- Bounce-rate creep: by month six the bounce rate on the 'verified' contacts has tripled. Sender reputation degrades and the next legitimate campaign underperforms even on fresh contacts. See the international school email list alternative page for the verification-cycle detail.
- Wrong-role routing: the only contact on the row is 'info@' or 'admin@.' Messages route to admissions or office managers, not buyers. Reply rates plateau at low single digits.
- Missing campuses: school groups like GEMS Education, Taaleem, and Nord Anglia open new campuses faster than any annual roster can update. Reps miss the highest-leverage accounts because they aren't on the list.
- Curriculum mislabelling: a school adds IB DP at sixth form or drops Cambridge IGCSE in favour of Edexcel. The roster still labels it the old way, the campaign messaging is wrong, and the rep loses credibility in the first reply.
- No 'why now' on any row: the roster cannot tell a SDR which schools are actively buying. Without a freshness signal the SDR works the list alphabetically, which guarantees random-quality outreach.
What a live alternative actually needs to ship
Replacing a static roster is not just buying fresher data. It is replacing the artefact with a workflow. The minimum viable shape of a live alternative — built or bought — has six pieces. Each maps to a failure mode above.
- Multi-source consensus: at least 6–8 independent inputs per school. SchoolIntel cross-references KHDA, IBO, BSO, COBIS, Cambridge International, school websites, hiring boards, association calendars, and group press pages. Each row gets a confidence score, not a single-source assertion.
- Per-row freshness stamps: every fact (curriculum, head, IB authorization, BSO date) has a 'last verified' date. Reps see freshness inline and can sort the queue by it.
- Buyer-role taxonomy: a fixed schema for who actually buys: head of school, deputy head, IB coordinator, head of digital learning, EAL coordinator, ELL coordinator, head of inclusion, group CIO. Without this taxonomy, role coverage is a marketing claim, not a queryable field.
- Signal-event ingestion: weekly reads of TES international job listings, TIE Online appointments, school news pages, KHDA inspection portal, and association announcements like BSME, EARCOS, and COBIS conference. A signal that's six months old is decoration, not workflow.
- Cited reason per account: a paragraph-long, source-linked 'why now' on every recommended target. Reps stop fabricating context and clients stop asking why each account is on the list.
- Privacy-aware data handling: personal contact data lives inside an authenticated product with documented removal pathways. A loose CSV in Slack does not pass any 2026 procurement review.
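The multi-source consensus piece reduces to a simple idea: a school's confidence is the share of consulted sources that agree. A hypothetical sketch, assuming equal source weights (a real system would weight regulators like KHDA above hiring boards, and this is not SchoolIntel's actual scoring code):

```python
# Illustrative source list; each entry is an independent origin that may
# or may not have been consulted for a given school.
SOURCES = ["khda", "ibo", "bso", "cobis", "school_site", "hiring_board"]

def confidence(confirmations: dict[str, bool]) -> float:
    """Share of consulted sources that confirm the school's record."""
    checked = [s for s in SOURCES if s in confirmations]
    if not checked:
        return 0.0          # nothing consulted yet: no basis for trust
    agreeing = sum(confirmations[s] for s in checked)
    return agreeing / len(checked)

# A school confirmed by four of the five sources consulted:
score = confidence({"khda": True, "ibo": True, "bso": False,
                    "school_site": True, "hiring_board": True})
```

The useful property is asymmetry: a single-source school can score 1.0 on paper, which is exactly why the number of sources consulted should be surfaced next to the score, not hidden inside it.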
Where SchoolIntel already covers this — and where it doesn't
SchoolIntel ships all six pieces today across UAE, Qatar, Saudi Arabia, Singapore, Hong Kong, Vietnam, and a tier of European international schools. Coverage is deepest where regulator data is rich (KHDA, BSO) and thinnest where regulators publish less (Latin America, sub-Saharan Africa). Where SchoolIntel doesn't yet cover a region, we'd rather say so than ship a thin row.
- Strongest coverage: UAE, Qatar, Saudi Arabia (deep KHDA + IBO + BSO + COBIS triangulation). See the UAE international schools page and the Qatar international schools page.
- Mature coverage: Singapore, Hong Kong, Vietnam, Indonesia, Thailand (EARCOS + IBO + school sites).
- Building coverage: European international schools (BSO + IBO base; group expansion underway).
- Lighter coverage: Latin America, sub-Saharan Africa — deliberately conservative until source consensus matches our other regions.
Side-by-side workflow — static roster vs SchoolIntel
The fairest comparison is not the data — it is the workflow each enables. Below is the same six-step EdTech outreach motion run against a static roster and against SchoolIntel. The inputs are identical: a vendor selling a digital-learning platform to international schools in the UAE, with a Q2 quarter target.
Step 1: Define the target market
Static roster: filter the spreadsheet by country = UAE. Get ~700 rows. Try to filter by curriculum and discover the column is freeform text. Settle for region.
SchoolIntel: filter to UAE, curriculum = British or IB, group = GEMS or Taaleem, KHDA tier = Outstanding or Very Good. Get ~60 sourced accounts with citations. See the Dubai international schools page for the structural map this filter runs against.
Step 2: Identify the buyer per account
Static roster: the row has 'info@school.ae' and a generic phone. The SDR opens LinkedIn for each row, manually finds the head of digital learning, and hopes the title pattern is consistent. Six hours of research per ten accounts.
SchoolIntel: the account already has a head of digital learning, an IB coordinator, and a deputy head tagged with verified email and freshness date. See the head of digital learning role page for the role-targeting model.
Step 3: Find a 'why now' for each account
Static roster: no signal layer. The SDR writes a generic 'I noticed your school is in the UAE' opener that gets ignored.
SchoolIntel: each account has a dated signal: 'New head of digital learning appointed Feb 2026 — TES listing,' 'Dropped from Outstanding to Very Good in DSIB Q1 cycle,' 'Group announced fourth campus opening Sep 2026.' SDR opens with the signal.
Step 4: Verify before sending
Static roster: the file shipped with 'verified' contacts, but the SMTP check ran once at export. Bounce rate sits at 18%. Sender reputation tanks during the campaign.
SchoolIntel: every contact has a 90-day re-verification stamp and stale rows are flagged before send. Bounce rate stays under 4%.
Step 5: Defend the list to the team or the client
Static roster: the SDR or agency lead can't answer 'where did this account come from?' beyond 'we bought a list from X.' Trust degrades in the first review.
SchoolIntel: every account opens with a citation panel: KHDA report URL, BSO inspection date, hiring post link, group announcement URL. The audit trail is the brief.
Step 6: Refresh next quarter
Static roster: buy the new annual file, $2k–$10k, hope the new export reflects the changes. Repeat the de-duplication work against the prior file.
SchoolIntel: the queue has already re-scored weekly. The new quarter's filter inherits the prior filter's source trail and signal layer. No re-import.
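The 90-day gate in step 4 above reduces to a date comparison per contact. A minimal sketch, assuming a flat 90-day cadence and illustrative contact records (the field names and addresses are placeholders, not real data):

```python
from datetime import date, timedelta

RECHECK_AFTER = timedelta(days=90)  # assumed cadence, per the policy above

def needs_recheck(last_verified: date, today: date) -> bool:
    """True if the contact's verification stamp has aged past the cadence."""
    return today - last_verified > RECHECK_AFTER

# Hypothetical queue: one fresh stamp, one that predates the cadence.
contacts = [
    {"email": "head@example-school.ae", "last_verified": date(2026, 1, 10)},
    {"email": "ibcoord@example-school.ae", "last_verified": date(2025, 9, 1)},
]
today = date(2026, 3, 1)
stale = [c["email"] for c in contacts
         if needs_recheck(c["last_verified"], today)]
```

The gate runs before any send, so stale rows are routed to re-verification rather than into the campaign, which is what keeps the bounce rate inside the low single digits.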
Build it yourself or use SchoolIntel — the honest cost
Everything on this page is technically buildable from public sources. KHDA, KHDA inspections, IBO, BSO, COBIS, Cambridge, WhichSchoolAdvisor, school websites, TES jobs, TIE appointments, GESS Dubai, and EARCOS are all reachable. The honest question is whether your team should spend the time. Most don't — not because they can't, but because the integration, normalization, freshness, and privacy work is more expensive than the data itself.
Two paths, fully costed:
Build it yourself
Realistic effort to assemble a UAE-only target market that's defensible to a sales team or a client:
- Source inventory: 1–2 days to map ~8 sources, decide which to scrape vs API, set up rate-limiting, document refresh cadence, and build error handling.
- Normalization: 1–2 weeks to dedupe schools across spelling variants ('GEMS Wellington Intl' vs 'GEMS Wellington International School'), multiple campuses, and group naming. This is the biggest hidden cost.
- Role coverage: 1 week to scrape staff lists, infer titles to a buyer-role taxonomy (a real ontology, not a freeform text field), and verify emails (SMTP plus 90-day re-check pipeline).
- Signal layer: ongoing — weekly cron jobs against KHDA, TES Dubai, TIE, group press pages, and association calendars. Engineering owns this in perpetuity.
- Privacy + removal: build authenticated access, audit logs, named-individual removal request handling, and regional data-residency where required.
- Honest timeline: 1 FTE for ~6–8 weeks to build, then 0.25 FTE forever to maintain. Stops working the day that engineer leaves. Total 18-month cost for a single region usually exceeds three years of SchoolIntel subscription.
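The normalization step above is where most homegrown efforts stall, so it is worth seeing how small the first pass looks and how far it still is from done. One plausible sketch, assuming a hand-built abbreviation map and a list of generic suffix words (real dedup also needs fuzzy matching across campuses and transliterations):

```python
import re

# Assumed lookup tables; a production map would be far larger.
ABBREVIATIONS = {"intl": "international", "sch": "school"}
GENERIC = {"school"}  # suffix words that carry no distinguishing information

def school_key(name: str) -> str:
    """Canonical dedup key: lowercase, strip punctuation, expand
    abbreviations, drop generic terms."""
    tokens = re.sub(r"[^a-z0-9 ]", " ", name.lower()).split()
    tokens = [ABBREVIATIONS.get(t, t) for t in tokens]
    return " ".join(t for t in tokens if t not in GENERIC)

# The spelling variants from the bullet above collapse to one key:
a = school_key("GEMS Wellington Intl")
b = school_key("GEMS Wellington International School")
```

Even this toy version shows the trap: dropping 'school' merges the variants above, but a rule that naive will eventually conflate two genuinely different campuses, which is why the normalization work is measured in weeks, not days.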
Use SchoolIntel
What you get without building any of the above:
- Same-day target market: filter by curriculum, group, KHDA tier, region, and signal — get a sourced list with cited reasons in one session.
- Live source consensus: every school carries a confidence score across the 8+ sources we read. You see which schools we trust and why, with the URL on the row.
- Role coverage built in: staff lists are pre-mapped to a buying-role taxonomy across EAL, ELL, IB, and head of digital learning — with SMTP-verified contact data inside the product.
- Weekly re-scored queue: we re-read sources weekly. Your account list reorders itself; you don't rebuild it.
- Cited reasons per account: every recommended target has a paragraph explaining why now — backed by source URL, date, and signal type.
- Privacy-by-default: personal data lives inside the authenticated product, not a CSV in Slack. Removal requests are handled inside a documented workflow.
- Region breadth: the same model spans the UAE, Dubai, Abu Dhabi, Qatar, and the GCC + East Asia tiers — without re-buying a separate roster per region.
- Comparable to other static alternatives: see the ISC Research alternative comparison and the international school email list alternative for vendor-specific side-by-sides.
Frequently asked questions
Questions this page answers
What exactly counts as a 'static school roster' in this comparison?
Anything that ships as a one-time export and isn't refreshed at the source. The four most common archetypes are: directory CSVs from sites like International Schools Database or International School Search, paid market dumps from ISC Research, vendor-sold email lists like EducationDataLists, and ad-hoc scrapes from BSO/IBO/KHDA dropped into a Google Sheet. All four share the same shape — rows that age from the moment they're delivered.
How fast does a static international-school list actually decay?
SchoolIntel re-verification across sampled vendor lists shows roughly 12% staleness by month three, 19% by month six, 30% by month twelve, and 45% by month eighteen. The single biggest driver is head-of-school turnover (~18% annually); the second is IB coordinator and head of digital learning rotation (~2.5-year median tenure). New campus openings (30–60 per year across our coverage regions) and curriculum changes compound the decay.
Is SchoolIntel just an ISC Research alternative, or is it different in shape?
Different in shape. ISC Research sells annual market reports plus a contact spreadsheet — a research-publisher motion. SchoolIntel is a live SaaS product: weekly re-read of public sources, per-row source citation, role taxonomy, signal stamps, and a CRM-shaped account queue rather than a PDF plus CSV. Use the ISC Research alternative comparison for the head-to-head.
Can I use SchoolIntel and a static roster together — or do I have to choose?
Use them together. The static roster is a fine universe-definition input. SchoolIntel ingests the roster as one of its ~8 sources, dedupes it against the live model, attaches role coverage and signals, and produces the working queue. The mistake is using the roster as the working queue itself. See the international school market intelligence hub for the source-stack pattern.
What sources does SchoolIntel actually cross-reference per school?
Eight or more, depending on region. The standard stack is KHDA and DSIB inspections for Dubai, IBO for IB authorization, BSO and COBIS for British schools, Cambridge for curriculum context, school websites for staffing, and hiring boards like TES plus TIE Online for live signal. Association calendars from BSME, EARCOS, and COBIS conference round out the signal layer.
Why don't static rosters include role coverage if it's so important?
Two reasons. First, scraping role coverage at scale requires a stable taxonomy and an ongoing verification cycle — that's an engineering function, not a publishing function. Second, vendors selling annual rosters optimise for row count, not row depth. A list with 14,000 rows and one generic email per row sells better than a list with 3,500 rows and four named buyers per row, even though the second list has more usable contacts. SchoolIntel ships the role-deep model because that's the unit reps actually work — see the EAL coordinator role page, IB coordinator role page, and the head of digital learning role page for the taxonomy.
Does SchoolIntel publish personal contact details on this comparison page?
No. Public pages explain methodology, sources, decay benchmarks, and the workflow comparison. Personal contact data — names, emails, phone numbers — lives inside the authenticated SchoolIntel product, governed by SchoolIntel's privacy controls and a documented removal-request process. A loose CSV is not the answer to a 2026 procurement review.
If I'm an EdTech agency with multiple clients, why does this matter more for me?
Because the audit trail is the deliverable. When an agency hands a client a target-account map, the client wants to know where each school came from, why it's relevant, and what changed last week. A static roster cannot produce that brief. SchoolIntel's per-row source URL, freshness stamp, and signal narrative double as the agency's client report. See the school intelligence for EdTech agencies hub and the education marketing agency data page for agency-specific patterns.
Next step
Want this as a live ranked list?
SchoolIntel can turn this page into a sourced target market with account reasons, role coverage, and outreach angles your team can use this week.