Let me describe a system you’ll recognise. It has an attendance module, a behaviour module, an academic tracking module, a SEN module, and a safeguarding module. (Or worse yet, this data is in separate systems!) Each has its own screens, its own reports, its own logic. To find out what’s going on with one child, you need to visit five different places and hold the picture together in your own head.
This is the Management Information System. The MIS. The backbone of every school in the country. And I think it’s already obsolete. Not because the technology is bad, but because the design philosophy is wrong.
The traditional MIS is, at its core, a bad search engine for structured data. It takes information that staff put in (painstakingly, manually, screen by screen) and makes it searchable through pre-built reports and filtered views. But the interface is the bottleneck. The reports are rigid. And the modules create walls between information that should flow together.
Worse than that: by organising itself around attendance, behaviour, and academics as separate categories, the MIS has accidentally taught an entire profession to think about children in fragments.
The Accidental Fragmentation of the Child
Think about what all those modules are actually trying to do. Attendance tracking wants to know: is this child present, engaged, and safe? Behaviour monitoring wants to know: is this child regulated, supported, and coping? Academic tracking wants to know: is this child making progress? SEND provision tracking wants to know: is this child getting what they need?
These aren’t separate questions. They’re all asking the same thing: what is this child’s experience of school?
A child whose attendance drops, whose behaviour incidents increase, and whose academic progress stalls: that's one story, not three. A child who's been re-referred to SALT, whose key worker has flagged emotional dysregulation, and who hasn't been in on Mondays for three weeks, is a pattern that needs to be seen whole.
But the MIS doesn’t show you the whole picture. It shows you attendance here, behaviour there, and SEN provision somewhere else entirely. The integration exists only inside the heads of the staff who care enough to piece it together.
We built software around database tables, and then expected children to fit into the structure. They don’t. They never did.
This happened because the MIS was designed in an era when software architecture dictated user experience. You had database tables, so you had modules. You had modules, so you had screens. You had screens, so you had navigation menus. Over time, everyone just accepted that “attendance” and “behaviour” were separate things to look at separately. The software became the mental model.
In mainstream settings with 1,200 students, there’s an argument for structured navigation. But in specialist SEN schools and alternative provision, settings with 40 to 80 pupils where staff know every child by name, the modular MIS is solving the wrong problem entirely.
What If It Doesn’t Have to Be This Way?
Something is shifting in how we think about software. A new generation of tools, from open-source AI agents like OpenClaw to conversational interfaces and agent-to-user protocols, is challenging the assumption that users need to navigate complex screens to interact with their data.
The core idea is simple: instead of the interface being the primary way people interact with a system, the data layer becomes the product, and the primary interaction is conversational. You don’t navigate to a screen. You ask a question.
This isn’t a chatbot bolted onto an existing system. It’s a fundamental rethink of what the system is. The data sits in a unified layer, organised around the child, not around modules. Intelligent agents have access to that data and can read, write, query, and reason about it. Staff interact primarily through conversation, and the system generates visual interfaces only when they genuinely add value.
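A minimal sketch of what "organised around the child, not around modules" can mean in data terms. The domain tags and event shapes here are hypothetical, not a real MIS schema; the point is that one timeline per child replaces one table per module.

```python
# A minimal sketch of a child-centred data layer. Domain names and event
# shapes are illustrative assumptions, not a real MIS schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Event:
    child: str
    day: date
    domain: str   # e.g. "attendance", "behaviour", "academic", "send"
    detail: str

@dataclass
class ChildRecord:
    """One unified timeline per child, instead of one table per module."""
    name: str
    events: list = field(default_factory=list)

    def timeline(self, *domains):
        """Return events across any mix of domains, in date order.
        With no arguments, the whole picture comes back."""
        wanted = set(domains) or {e.domain for e in self.events}
        return sorted((e for e in self.events if e.domain in wanted),
                      key=lambda e: e.day)

jayden = ChildRecord("Jayden")
jayden.events += [
    Event("Jayden", date(2024, 3, 4), "attendance", "late 20 min"),
    Event("Jayden", date(2024, 3, 4), "behaviour", "dysregulated after break"),
    Event("Jayden", date(2024, 3, 5), "attendance", "late 25 min"),
]

# One query crosses what used to be separate modules.
for e in jayden.timeline("attendance", "behaviour"):
    print(e.day, e.domain, e.detail)
```

The agent layer then reasons over this single record; there is no "attendance module" for it to be walled inside.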
The question isn’t “how do we make a better attendance screen?” It’s “why does an attendance screen exist at all?”
What This Looks Like in Practice
Imagine a SENCO arriving at school on Monday morning. Instead of logging into an MIS, navigating to three different modules, and running filtered reports, they send a message:
“Anything I need to know this morning?”
Response:
“Jayden has been late four days running. That’s unusual for him. Mia’s annual review is in three weeks and no professional reports have come in yet. Year 10 attendance is down 8% since half-term. And there’s a safeguarding flag from Friday that Sarah hasn’t acknowledged yet.”
That single response drew from attendance data, EHCP timelines, provision tracking, and safeguarding logs. The SENCO didn’t have to know where to look. The system understood the question and assembled the answer from across the whole picture.
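The fan-out behind that single answer can be sketched as a set of checks against different sources, folded into one reply. The check functions here are hypothetical stand-ins for real queries against the unified data layer.

```python
# A sketch of one conversational question fanning out across sources.
# The check functions are hypothetical stand-ins for real queries.
def morning_briefing(checks):
    """Run every registered check and fold the findings into one reply."""
    findings = [msg for check in checks if (msg := check())]
    return " ".join(findings) if findings else "Nothing urgent this morning."

def check_lateness():      # would query attendance data
    return "Jayden has been late four days running."

def check_reviews():       # would query EHCP timelines
    return "Mia's annual review is in three weeks and no reports are in yet."

def check_safeguarding():  # would query the safeguarding log
    return None  # nothing outstanding in this example

print(morning_briefing([check_lateness, check_reviews, check_safeguarding]))
```

The SENCO sees one message; the routing across attendance, EHCP, and safeguarding data happens underneath.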
Or imagine a headteacher preparing for a governors’ meeting:
“Show me which provisions are correlating most strongly with progress this term.”
The system analyses the relationship between provision delivery and outcome data, surfaces the patterns, and offers to generate a summary for the board pack.
No module. No screen. No report builder. Just a question and an answer that treats the child’s experience as a single, connected reality.
The Light Interface: What Still Needs a Screen
This isn’t an argument for abolishing visual interfaces entirely. Some things genuinely need persistent, glanceable visibility. Safeguarding boards need to be always-on and always visible, because you can’t rely on someone remembering to ask about a concern. A “today” view showing who’s in, who’s not, and what’s happening gives staff situational awareness. Statutory compliance dashboards showing EHCP deadlines, Ofsted-ready data, and LA reporting need to be scannable at a glance. Approval workflows, things that need a human to sign off, need a clear, deliberate interface.
But that’s perhaps 20% of what a traditional MIS does. The other 80%, the data entry, the report running, the filtering, the cross-referencing, is better served by conversation.
The new model is a thin, purposeful visual layer for the things that need persistent visibility, and a conversational layer for everything else. The interface becomes light, not because it’s less capable, but because most of the capability has moved into the intelligence of the system itself.
The Proactive School
Perhaps the most transformative shift is from reactive to proactive. Current MIS platforms are fundamentally passive. They store what you put in and show you what you ask to see. They require someone to remember to check.
But what if the system didn’t wait to be asked?
An intelligent system monitoring the unified data layer could send a message to a key worker on Tuesday afternoon:
“Jayden has been late four days this week. That’s unusual, because he’s normally 95% punctual. You might want to check in with him.”
It could alert a SENCO three weeks before an annual review that no professional reports have been received, and offer to draft the chase emails. It could notice that a particular therapeutic intervention is consistently correlating with improved outcomes across a cohort and surface that insight to the head.
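The simplest version of that kind of alert is a baseline comparison: flag a run of lateness only when it is out of character for this child. The thresholds and data shapes below are illustrative assumptions.

```python
# A sketch of a proactive check: flag a lateness run that is out of
# character. Thresholds and data shapes are illustrative assumptions.
def punctuality_alert(name, history, recent, threshold=0.9, min_run=4):
    """history: list of bools (True = on time) for past attendance.
    recent:  list of bools for the most recent days, newest last.
    Returns a nudge message, or None if nothing looks unusual."""
    baseline = sum(history) / len(history)
    late_run = 0
    for on_time in reversed(recent):
        if on_time:
            break
        late_run += 1
    if baseline >= threshold and late_run >= min_run:
        return (f"{name} has been late {late_run} days running. "
                f"That's unusual: baseline punctuality is {baseline:.0%}. "
                "You might want to check in.")
    return None

history = [True] * 95 + [False] * 5          # 95% punctual overall
recent = [True, False, False, False, False]  # four late days in a row
print(punctuality_alert("Jayden", history, recent))
```

The same lateness run for a child whose baseline is already low produces no alert, which is the point: the system reacts to change, not to absolute numbers.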
The most dangerous failure mode in a school isn’t bad data. It’s data that’s there but never gets looked at. A proactive system makes forgetting impossible.
In SEN and AP settings, where the stakes around individual children are extraordinarily high and the margin for missed deadlines or overlooked patterns is vanishingly small, this shift from “you have to remember to look” to “the system tells you what matters” isn’t a nice-to-have. It’s a safeguarding imperative.
Rethinking Data Entry
The input side of the equation matters just as much. Traditional MIS platforms require staff to navigate to the right screen, find the right form, and fill in the right fields. It’s slow, it’s friction-heavy, and it means data often doesn’t get recorded at all, or gets recorded hours later from memory.
A conversational interface changes this entirely. A teaching assistant can record a behaviour incident by saying: “Log a behaviour incident for Jayden. Lunchtime, dysregulated after unstructured break. De-escalated by Sarah within ten minutes.” Natural language in, structured data out. No forms, no screens, no navigation.
This isn’t theoretical. We’ve already built and tested chat-based data entry, and the reduction in time and friction is significant. This is now core functionality in our systems and a key differentiator. When data entry becomes as easy as sending a message, data quality goes up, because people actually do it, in the moment, instead of retrospectively.
The Child at the Centre
The real argument here isn’t about technology. It’s about what the technology enables you to see.
When you organise a system around the child rather than around modules, something fundamental changes. Every interaction, every observation, every data point becomes part of a single narrative. Attendance isn’t a number in a module. It’s a signal within a story. Behaviour isn’t an incident log. It’s a thread in a pattern. Academic progress isn’t a spreadsheet. It’s evidence of whether the provision is working.
The question stops being “what does the data say?” and starts being “what is this child’s experience?” And when you ask that question of an intelligent system that holds the whole picture, you get answers that no modular MIS could ever surface.
What about Security and Safeguarding?
Any conversation about AI and children’s data must start with security and safeguarding. This is non-negotiable, and it needs to be baked into the architecture from day one, not bolted on as an afterthought. We are talking about some of the most sensitive data that exists: EHCP details, safeguarding records, medical information, behavioural observations about vulnerable children. Much of this is special category data under UK GDPR, and rightly so.
But here is the uncomfortable truth: the risk isn’t AI. The risk is ungoverned AI. Right now, staff across the country are pasting sections of EHCPs into ChatGPT to help draft annual reviews. They’re copying sensitive assessment reports into consumer AI tools with no data processing agreement, no audit trail, and no institutional oversight. The “no AI” position doesn’t stop AI use. It pushes it underground.
With the right architecture, a conversational system can actually be more secure than the status quo. Data stays on your infrastructure, and the AI reasons about it without retaining it. Role-based access can be enforced at the query level, not just the module level, meaning a teaching assistant gets what they need to know about a child’s morning without inadvertently accessing their safeguarding file. Every interaction is logged and auditable. And critically, it brings AI usage inside the governance framework rather than leaving it to shadow IT on personal devices.
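"Enforced at the query level" can be sketched as a filter that runs before the agent ever sees a row, with every access logged. The role names, domain tags, and policy below are illustrative assumptions, not a real policy.

```python
# A sketch of role-based access enforced at the query level: the agent only
# ever sees rows the asking user's role is cleared for. Roles, domains, and
# the policy itself are illustrative assumptions.
ROLE_DOMAINS = {
    "teaching_assistant": {"attendance", "behaviour"},
    "senco": {"attendance", "behaviour", "academic", "send"},
    "dsl": {"attendance", "behaviour", "academic", "send", "safeguarding"},
}

AUDIT_LOG = []

def query(events, role, child):
    """Return only the events this role may see, logging every access."""
    allowed = ROLE_DOMAINS[role]
    visible = [e for e in events
               if e["child"] == child and e["domain"] in allowed]
    AUDIT_LOG.append({"role": role, "child": child, "rows": len(visible)})
    return visible

events = [
    {"child": "Jayden", "domain": "attendance", "detail": "late 20 min"},
    {"child": "Jayden", "domain": "safeguarding", "detail": "flag from Friday"},
]
ta_view = query(events, "teaching_assistant", "Jayden")  # no safeguarding row
dsl_view = query(events, "dsl", "Jayden")                # the full picture
```

Because the filter sits in front of the data rather than in front of a screen, the same conversational question yields different answers for different roles, and the audit trail records both.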
Security isn’t a reason to avoid this model. Done properly, it’s a reason to adopt it.
The Elephant in the Room
I’m aware this sounds like it’s coming from a technology company. It is. But the argument isn’t “buy our product.” The argument is that an entire sector is locked into a way of working that was designed around the limitations of 1990s database software, and those limitations no longer exist.
The large MIS providers have enormous installed bases, long contracts, and deep integrations with school workflows. They’re not going to disappear overnight. But I believe they’re operating on borrowed time, because the fundamental assumption they’re built on (that humans need to navigate screens to interact with structured data) is no longer true.
Schools, particularly SEN schools and alternative provision where every child’s journey is complex, individual, and high-stakes, deserve systems that work the way their staff actually think: holistically, responsively, and with the child at the centre.
The tools to build this exist now. The question is who’s brave enough to use them.