Can AI Reliably Predict Emergency Room Volumes?
- Andrei Soran
- Apr 15
- 3 min read
Years ago, when I was running a large hospital, I posed a straightforward but crucial question to a group of complexity theory experts: Can you predict my ER volumes?
Back then, their answer was frustratingly inconclusive. They had the theoretical framework (with models steeped in concepts like the butterfly effect and black swan events), but they lacked the granular data needed to make accurate predictions. Their approach simply wasn’t practical for the day-to-day grind of hospital operations.
At the time, it felt like a dead end. But fast-forward to today, and AI has shifted the landscape entirely. So, I decided to revisit that same question.
To understand how AI might tackle this challenge now, it’s worth looking at the different types of AI available. A helpful framework from a Harvard Business Review article breaks AI into three tiers: off-the-shelf tools (widely accessible, fast, and affordable, though often too generic for specific needs), custom-built systems (tailored to particular workflows), and fully customized models trained on proprietary data. That last category, while the toughest to implement, offers the deepest insights and the biggest competitive edge.
Recent studies, particularly in the UK, have started digging into this area. Operational headaches like bed planning, workflow disruptions, longer patient stays, and staffing shortages have made prediction algorithms more relevant than ever. Most hospitals still lean on rolling volume averages. Some have made small strides by factoring in public health and weather data. In a fascinating piece in npj Digital Medicine, Zella King explains how AI can boost predictive power by pulling together even faint signals into stronger, more reliable models.
I wanted to see what today’s tech could do, so I tested a public AI model with a challenge: If I give you historical ER volumes, weather data, local events, and other variables, can you predict patient flow accurately?
The response was detailed, almost impressively so. It outlined the kinds of data I’d need, suggested modeling techniques like XGBoost, LSTM, or Prophet, and even described the software setup required to create a working forecast.
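To make that concrete, here is a minimal sketch of the kind of Prophet setup it described, run on entirely synthetic weekly volumes. Every number below is fabricated for illustration, and the temperature column is a crude stand-in for real weather data:

```python
# Minimal sketch: weekly ER volume forecast with Prophet, using a
# weather signal as an external regressor. All data is synthetic.
import numpy as np
import pandas as pd
from prophet import Prophet

# Fabricated history: ~650 visits/week with a yearly seasonal swing.
rng = np.random.default_rng(42)
weeks = pd.date_range("2021-01-03", periods=156, freq="W")
seasonal = 40 * np.sin(2 * np.pi * weeks.dayofyear / 365.25)
temp = 15 - 12 * np.sin(2 * np.pi * weeks.dayofyear / 365.25)  # crude temperature proxy
volumes = 650 + seasonal - 0.8 * temp + rng.normal(0, 15, len(weeks))

df = pd.DataFrame({"ds": weeks, "y": volumes, "temp": temp})

model = Prophet(yearly_seasonality=True, weekly_seasonality=False)
model.add_regressor("temp")  # fold the weather signal into the forecast
model.fit(df)

# Forecast the next four weeks; future regressor values must be supplied.
future = model.make_future_dataframe(periods=4, freq="W")
future["temp"] = 15 - 12 * np.sin(2 * np.pi * future["ds"].dt.dayofyear / 365.25)
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail(4))
```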
But then I threw it a simpler, real-world test: “My ER sees 650 patients a week. How many will I see in the first week of April?” The answer was a safe, uninspired “650,” which basically just echoed the average without accounting for seasonal trends, weather shifts, or other factors that could drive changes.
That moment drove home a key point: powerful tools are useless without the right data. Without detailed, contextual, real-time inputs (and the ability to handle gaps in that data), AI often falls back on broad statistical averages. It gives you the most likely answer, not the most useful one.
I still remember a winter years back when we were wrestling with ER forecasting. A week later, a school bus rolled over in a snowstorm. Forty patients flooded in overnight. Most injuries, thankfully, weren’t life-threatening, but it was a stark reminder of how fast volumes can spike.
No model could have predicted that specific crash, but could a fully customized model pick up on the underlying risks (like bad weather, road congestion, or flu season) and raise a warning before the surge hits?
The real opportunity now isn’t just in the promise of AI but in its practical application. With the right inputs (historical trends, diagnostic codes, weather, traffic, community health indicators), can we build fully customized models that are truly responsive? Something that helps with better staffing, bed availability, and smarter resource allocation?
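If it helps to picture what that might look like, here is a rough sketch of those inputs joined into one weekly feature table and fed to a gradient-boosted model (XGBoost, one of the techniques suggested above). The feature names are hypothetical placeholders and the numbers are synthetic, not a real hospital feed:

```python
# Hedged sketch of a "fully customized" model: weekly features from
# several sources, fed to an XGBoost regressor. All data is synthetic.
import numpy as np
import pandas as pd
import xgboost as xgb

rng = np.random.default_rng(7)
n_weeks = 156
features = pd.DataFrame({
    "visits_lag_1w":     650 + rng.normal(0, 20, n_weeks),  # historical trend
    "flu_like_rate":     rng.uniform(0, 5, n_weeks),        # diagnostic-code signal
    "mean_temp_c":       rng.uniform(-5, 30, n_weeks),      # weather
    "traffic_index":     rng.uniform(0.8, 1.3, n_weeks),    # road congestion
    "school_in_session": rng.integers(0, 2, n_weeks),       # community indicator
})
# Synthetic target: volume responds to the signals above, plus noise.
y = (0.9 * features["visits_lag_1w"]
     + 12 * features["flu_like_rate"]
     - 1.5 * features["mean_temp_c"]
     + rng.normal(0, 10, n_weeks))

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(features, y)

# Predict next week's volume from this week's (hypothetical) signals.
next_week = pd.DataFrame([{
    "visits_lag_1w": 655, "flu_like_rate": 3.2,
    "mean_temp_c": -2.0, "traffic_index": 1.1, "school_in_session": 1,
}])
print(f"Forecast: {model.predict(next_week)[0]:.0f} visits")
```

The modeling step here is the easy part; the real work is the data plumbing that keeps those feature columns fresh and complete.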
The tools exist, and the data is there.
Now the question is: can we pull it all together in a way that delivers real operational impact, and, more importantly, would this work for your hospital?