Hemostasis Today

Armghan Ans: The 2026 Stroke Guidelines and the Clinical Logic Gap in AI Tools
Apr 7, 2026, 15:58

Armghan Ans, Assistant Professor and Director of Stroke at UPMC Washington and founder of MDAdopt, shared on LinkedIn:

“The 2026 AHA/ASA stroke guidelines were updated in January.

Most stroke AI products were built before that.

Nobody is talking about what that gap actually means for the tools hospitals are relying on right now.

This edition of TAIN breaks it down:

  • Which AI stroke tools may now be generating the wrong clinical signal
  • What the 2026 guidelines taught me about the gap between reading a guideline and encoding one
  • Three questions every stroke AI founder should be asking right now

In stroke AI, the risk isn’t that your model is wrong.

It’s that it’s quietly outdated.

In the last edition of TAIN, I broke down what the 2026 AHA/ASA stroke guidelines actually demand from the hospitals where most stroke patients present first.

Seven major changes. Seven implementation problems.

The conclusion: guidelines don’t implement themselves, and the hospitals that feel that gap most acutely are the ones furthest from the academic centers where the guidelines were written.

That was the human side of the gap.

This edition is the technology side.

Because the 2026 guidelines didn’t just update clinical practice.

They quietly made some AI stroke tools less accurate than they were the day before publication.

And most of those tools have not been updated.

The Guidelines Created Real Opportunity for AI

Start with what’s true and fair.

The 2026 guidelines expanded the role of imaging-based patient selection in ways that directly increase the value of several AI tool categories.

ASPECTS scoring for large core infarcts is now more consequential than ever.

EVT eligibility just expanded to ASPECTS 3-5, the most complex scoring zone, the one with the highest inter-rater variability, the one where automated scoring tools add the most value.

Perfusion imaging for extended thrombolysis windows is now guideline-supported, and AI-assisted perfusion analysis is exactly the technology that makes those decisions faster and more consistent.

Telestroke platforms fill the mobile stroke unit (MSU) gap for community hospitals that will never own one.

AI-assisted triage and notification tools are the scalable answer to the transfer coordination problem.

The guidelines created genuine new demand for these tools. That is real, and the companies building in these spaces are right to position around it.

But there is another side to this story that almost nobody in the industry is talking about.

The Tools That May Now Be Wrong

The same guideline update that created a new opportunity also quietly invalidated assumptions baked into several existing AI stroke tools.

Not because the tools were built badly. Because the evidence moved. And the tools didn’t.

Medium vessel occlusion alerts

Most AI large vessel occlusion detection tools are trained to identify occlusions and fire a notification, alerting the clinical team, initiating the evaluation pathway, creating momentum toward treatment.

This works well for M1 occlusions. It works for ICA occlusions.

The alert fires, the neurologist reviews, and the EVT pathway activates.

The 2026 guidelines changed the picture for medium vessel occlusions, specifically M2 non-dominant segment, M3, ACA, and PCA.

Two recent randomized trials, ESCAPE-MeVO and DISTAL, were neutral on their primary functional outcome: neither showed a statistically significant improvement in 90-day disability with EVT over medical therapy alone.

The 2026 guidelines reflect this evidence with a more cautious approach, moving away from routine EVT with stent retrievers in these vessel territories.

Here is the problem. Many AI LVO detection tools may not consistently distinguish between an M1 occlusion and a distal M2.

They identify an occlusion and fire the same notification.
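
As a sketch of the alternative, here is what segment-aware alert logic could look like. Everything in it is illustrative: the segment taxonomy, the `alert_for` function, and the payload fields are hypothetical assumptions, not any vendor’s actual API.

```python
from enum import Enum

class Segment(Enum):
    ICA = "ICA"
    M1 = "M1"
    M2_DOMINANT = "M2 dominant"
    M2_NONDOMINANT = "M2 non-dominant"
    M3 = "M3"
    ACA = "ACA"
    PCA = "PCA"

# Territories where routine EVT activation remains supported
# (an assumption for illustration, following the distinction above).
ROUTINE_EVT = {Segment.ICA, Segment.M1, Segment.M2_DOMINANT}

def alert_for(segment: Segment) -> dict:
    """Return an alert payload that carries guideline context forward."""
    if segment in ROUTINE_EVT:
        return {"level": "ACTIVATE_EVT_PATHWAY", "segment": segment.value}
    # Medium/distal occlusions: still surface the finding, but flag that
    # ESCAPE-MeVO and DISTAL were neutral and routine EVT is not supported.
    return {
        "level": "OCCLUSION_DETECTED_REVIEW_REQUIRED",
        "segment": segment.value,
        "note": "2026 AHA/ASA: no routine EVT in this territory; "
                "individualized neurology review recommended",
    }
```

The design point is not the thresholds; it is that the notification itself carries the vessel-level distinction instead of relying on whoever receives it to supply the nuance.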

At a comprehensive stroke center, a vascular neurologist intercepts that alert, applies the 2026 guideline nuance, and makes an informed decision.

The chain works because the human at the decision point has current clinical knowledge.

At a community hospital with patchy or delayed telestroke coverage, that same alert creates a different dynamic.

The ED physician sees a notification flagging an occlusion. The clinical pressure to activate is real.

The fact that this vessel location now carries different clinical evidence may not be surfaced clearly or quickly enough.

The tool hasn’t been updated to reflect that distinction. The alert looks the same as it did before January 26th.

This is not a technical failure. It is a clinical update that the algorithm hasn’t received.

And the consequences play out differently depending on where the patient presents.

ASPECTS scoring at the eligibility boundary

The guidelines expanded EVT to selected patients with ASPECTS 3-5, based on SELECT2 and ANGEL-ASPECT. This is a genuine advance.

It means more patients are candidates for a potentially life-changing treatment.

It also means AI ASPECTS scoring tools are now being asked to inform consequential decisions in exactly the zone with the highest known inter-rater variability, where the clinical stakes just went up significantly.

Research shows that inter-rater agreement on ASPECTS is weakest in the mid-range, particularly 4-6.

Even expert neurologists and neuroradiologists show meaningful variability in this range. The decision boundary, to offer EVT or withhold it, now sits precisely there.

AI ASPECTS tools were trained across the full scoring range, but the distribution matters.

High-scoring cases are far more common in training datasets than low-scoring cases with large established infarcts.

The cases that are now most clinically consequential (ASPECTS 3, 4, 5) may be the cases where these tools have the least robust training signal.

Getting it right at ASPECTS 4 versus ASPECTS 6 is the difference between offering EVT and withholding it.

The guidelines just made that decision more important. The training data didn’t update with them.

Post-EVT blood pressure monitoring tools

In the last edition, I described the clinical moment the post-EVT blood pressure reversal creates: a patient returns from the interventional suite with an SBP of 155, and the nurse reaches for the antihypertensive.

What I didn’t address is that in many stroke programs, the prompt comes not just from a paper protocol but also from a clinical decision support alert embedded in the EMR, a tool that was correctly calibrated, at the time it was built, to flag elevated BP post-EVT and recommend treatment.

The 2026 guideline now classifies that behavior as COR 3: Harm, specifically after successful recanalization in anterior circulation LVO.

The tool wasn’t built wrong. The evidence changed after it was deployed.

And most of those alerts haven’t been recalibrated since January 26th.
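
A minimal sketch of what recalibration could mean in code. The 180 and 160 mmHg values are illustrative placeholders, not guideline text; any real thresholds need clinical review.

```python
def bp_alert(sbp: int, recanalized: bool, anterior_lvo: bool) -> str | None:
    """Hypothetical post-EVT BP alert recalibrated to the 2026 guidance."""
    if recanalized and anterior_lvo:
        # 2026: aggressive lowering after successful recanalization in
        # anterior-circulation LVO is COR 3: Harm. An SBP of 155 should
        # no longer generate a treatment prompt here.
        return "SBP above ceiling: review per guideline" if sbp > 180 else None
    # All other scenarios keep their existing, separately reviewed logic.
    return "Review per local post-EVT protocol" if sbp > 160 else None

# The exact scenario from the last edition: no prompt should fire.
assert bp_alert(155, recanalized=True, anterior_lvo=True) is None
```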

What Building on These Guidelines Actually Taught Me

I spent several months building an EHR-integrated clinical decision support tool directly on the 2026 AHA/ASA stroke guidelines, tested against real clinical scenarios.

This is worth distinguishing from imaging AI platforms. Companies like RapidAI, Viz.ai, and Brainomix receive DICOM images directly from the PACS and run their algorithms on raw imaging data, independently of what the EHR has documented.

They generate ASPECTS scores, perfusion maps, and LVO detection from the images themselves.

That architectural choice solves many of the data quality problems I’m about to describe.

EHR-integrated clinical decision support tools have a fundamentally different challenge.

They depend on structured and unstructured data inside the medical record: labs, medications, vitals, diagnoses, and radiology report text.

And that data environment is far messier than any guideline assumes.

The false positive problem is not theoretical.

The tool needed to detect intracranial hemorrhage as a contraindication to thrombolysis.

The obvious approach: search radiology reports for ‘acute blood.’

The problem: ‘acute blood’ appears constantly in normal CT reports, as a negative finding. ‘No acute blood identified.’

‘No acute blood products seen.’

‘Evaluation for acute blood is negative.’ Broad keyword matching would have flagged every one of those as a hemorrhage contraindication and incorrectly denied thrombolysis consideration.
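
A toy version of the fix, for illustration only: `flags_hemorrhage` and its negation cues are a stand-in for what a production system would do with a vetted clinical NLP pipeline, such as a NegEx-style negation detector.

```python
import re

# Naive approach: flags every report that merely mentions the phrase.
NAIVE = re.compile(r"acute blood", re.IGNORECASE)

# Minimal negation awareness: skip matches preceded by a negation cue
# within the same sentence.
NEGATION_CUES = re.compile(
    r"\b(no|without|negative for|evaluation for)\b[^.]*?acute blood",
    re.IGNORECASE,
)

def flags_hemorrhage(report: str) -> bool:
    for m in NAIVE.finditer(report):
        sentence_start = report.rfind(".", 0, m.start()) + 1
        sentence = report[sentence_start : m.end()]
        if not NEGATION_CUES.search(sentence):
            return True
    return False

# The negative findings quoted above must not trigger the exclusion.
assert not flags_hemorrhage("No acute blood identified.")
assert not flags_hemorrhage("Evaluation for acute blood is negative.")
assert flags_hemorrhage("Acute blood products in the left basal ganglia.")
```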

The guideline says exclude patients with intracranial hemorrhage.

That is clear. What is not clear from reading the guideline is how radiology reports actually describe the absence of hemorrhage in clinical practice.

That knowledge lives in clinical experience, not in any document.

ASPECT score is not a structured field in the EHR.

For EHR-integrated tools, ASPECTS does not exist as a discrete data element.

It lives in the free text body of radiology reports, written in whatever format each radiologist prefers.

‘ASPECTS score: 8.’ ‘ASPECTS of 8.’ ‘8/10 on ASPECTS.’ Or not mentioned at all.

Any EHR-based tool using ASPECTS is doing natural language extraction on unstructured text, with all the failure modes that implies.
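
For illustration, a minimal extraction sketch covering just the three phrasings quoted above. Any real deployment would need a far larger, locally validated pattern set.

```python
import re

# Patterns for the report phrasings above; illustrative, not exhaustive.
ASPECTS_PATTERNS = [
    re.compile(r"ASPECTS?\s*(?:score)?\s*[:=]?\s*(?:of\s*)?(\d{1,2})", re.I),
    re.compile(r"(\d{1,2})\s*/\s*10\s*on\s*ASPECTS", re.I),
]

def extract_aspects(report_text: str) -> int | None:
    for pattern in ASPECTS_PATTERNS:
        m = pattern.search(report_text)
        if m:
            score = int(m.group(1))
            if 0 <= score <= 10:  # reject implausible captures
                return score
    return None  # not mentioned at all: a real and common failure mode

assert extract_aspects("ASPECTS score: 8.") == 8
assert extract_aspects("ASPECTS of 8.") == 8
assert extract_aspects("8/10 on ASPECTS") == 8
assert extract_aspects("No early ischemic change.") is None
```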

Imaging AI vendors solved this by going directly to the source. EHR-integrated tools haven’t had that option.

Symptom onset time has no standard FHIR data element.

The single most time-critical variable in acute stroke care, the time of symptom onset, is not a standardized field anywhere in the electronic health record.

It is documented in triage notes, nursing flowsheets, attending notes, EMS records, and family interviews, different every time and different at every hospital.

Any EHR-integrated tool that needs onset time to calculate treatment windows is either asking the clinician to enter it manually or doing text extraction on unstructured notes, both of which have failure modes that are invisible until they matter.
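
Here is a sketch of what that text extraction looks like in practice. The `extract_onset_time` helper is hypothetical, and the phrasings it recognizes are assumptions, not a standard; the `None` branch is the silent failure mode that matters most.

```python
import re
from datetime import datetime

# Illustrative phrasings only; every hospital's notes differ, which is
# exactly the problem described above.
LKW_PATTERN = re.compile(
    r"(?:last known well|LKW|symptom onset)\s*(?:at|:)?\s*(\d{1,2}:\d{2})",
    re.IGNORECASE,
)

def extract_onset_time(note: str, note_date: str) -> datetime | None:
    """Pull a candidate onset time from free text; None when absent."""
    m = LKW_PATTERN.search(note)
    if m is None:
        return None
    return datetime.strptime(f"{note_date} {m.group(1)}", "%Y-%m-%d %H:%M")

assert extract_onset_time("Last known well at 08:30 per spouse.", "2026-04-07")
assert extract_onset_time("Patient found down, time unclear.", "2026-04-07") is None
```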

The guideline’s philosophy shift creates a design problem with no clean solution.

The 2026 guidelines moved deliberately away from binary exclusion lists toward individualized risk-benefit assessment.

Take DOAC exposure, previously a hard stop, now a nuanced relative contraindication that depends on agent, dose, timing, and renal function.

Or an unruptured intracranial aneurysm, previously a near-absolute contraindication, now a condition the guideline says to ‘consider individually.’

A traffic-light system has to assign one of three colors to every condition.

When the guideline says ‘consider individually,’ there is no color for that. RED denies potentially eligible patients.

GREEN understates the risk.

YELLOW is technically accurate but clinically meaningless without context.

Every ambiguous condition required an explicit design decision, and each one carried patient-level consequences that no algorithm could resolve on its own.
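
One way to make that design decision explicit in code, sketched here with hypothetical thresholds for the DOAC case: let YELLOW carry its reasoning with it and route to a human, rather than stand alone as a bare color.

```python
from dataclasses import dataclass
from enum import Enum

class Light(Enum):
    RED = "red"
    YELLOW = "yellow"
    GREEN = "green"

@dataclass
class Assessment:
    light: Light
    rationale: str            # why this color, in guideline terms
    requires_clinician: bool  # 'consider individually' must reach a human

def assess_doac(hours_since_last_dose: float, crcl_ml_min: float) -> Assessment:
    # Illustrative thresholds only, not guideline text.
    if hours_since_last_dose >= 48 and crcl_ml_min >= 50:
        return Assessment(Light.GREEN,
                          "Remote DOAC exposure, normal clearance", False)
    return Assessment(
        Light.YELLOW,
        "DOAC exposure: individualized assessment per 2026 guideline "
        "(agent, dose, timing, renal function)",
        requires_clinician=True,
    )
```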

Eleven false positives. One audit session.

Superficial head injury codes triggering the ‘major trauma’ exclusion.

Historical resolved diagnoses triggering the ‘active bleeding’ exclusion.

Seizure disorder on the problem list triggering the ‘seizure at onset’ relative contraindication, even with no seizure occurring at stroke onset.

Eleven systematic logic errors, each of which would have denied thrombolysis consideration to eligible patients.
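
For illustration, a sketch of the kind of pre-filtering that would have caught those three error classes. The data shape and field names are assumptions, not any EHR’s actual schema.

```python
from datetime import date, timedelta

def active_exclusions(conditions: list[dict], today: date) -> list[str]:
    """Filter raw problem-list entries before they can trigger a hard stop.

    Each entry is assumed to carry 'code_text', 'clinical_status'
    (active/resolved), and an 'onset' date.
    """
    flagged = []
    for c in conditions:
        # Error class 2: resolved historical diagnoses must not fire.
        if c["clinical_status"] != "active":
            continue
        # Error class 1: superficial injuries are not 'major trauma'.
        if "superficial" in c["code_text"].lower():
            continue
        # Error class 3: a chronic seizure disorder on the problem list
        # is not a seizure at stroke onset; require recency.
        if "seizure" in c["code_text"].lower():
            if c["onset"] < today - timedelta(days=1):
                continue
        flagged.append(c["code_text"])
    return flagged
```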

Three Questions Every Stroke AI Founder Should Be Asking Right Now

The 2026 guidelines were updated in January.

Here is the honest question for every company with a stroke AI product in the market:

When did you last audit your clinical logic against the current guideline?

Not when the product was built. Not when the last version shipped.

Since January 26th. Medium vessel alert logic.

ASPECTS cutoffs for EVT recommendation. Post-EVT BP thresholds.

These are specific places where the 2026 guideline moved. Most products have not moved with it yet.

Was that audit done at the clinical logic level, not just the product level?

Most serious AI companies have clinical advisors involved in product design.

That is the right approach. But clinical advisors who reviewed logic against earlier guidelines reviewed a different document than the one currently governing care.

The 2026 guidelines changed the clinical evidence base for several key decisions.

The question is not whether a clinician was ever in the room.

The question is whether clinical review has happened since January 26th, at the level of specific alert thresholds and decision boundaries, not just general product direction.

Do you have a process for updating clinical logic when guidelines change, or does it require a full product cycle?

The writing group chairs at ISC 2026 described the AHA’s intention to move toward a living guideline model with more frequent iterative updates.

If your clinical logic update requires a full engineering sprint, a QA cycle, a regulatory review, and a deployment, you will always be behind.

The companies that build a structured process for rapid clinical logic review will have a durable advantage over the ones that treat guidelines as a one-time input at product build.
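
One concrete shape for that process, sketched here with illustrative field names and values: clinical logic as versioned, reviewable data rather than code buried in the product, so a guideline update becomes a reviewed config change instead of an engineering sprint.

```python
# Illustrative assumptions throughout; thresholds are not guideline text.
CLINICAL_LOGIC = {
    "guideline": "AHA/ASA Acute Ischemic Stroke",
    "guideline_version": "2026-01-26",
    "last_clinical_review": "2026-02-10",
    "evt_aspects_floor": 3,          # expanded per SELECT2 / ANGEL-ASPECT
    "routine_evt_segments": ["ICA", "M1", "M2 dominant"],
    "post_evt_bp_treat_above": 180,  # assumed ceiling; COR 3: Harm below
}

def logic_is_stale(logic: dict, current_guideline_version: str) -> bool:
    """Flag when deployed logic predates the governing guideline."""
    # ISO date strings compare correctly as plain strings.
    return logic["guideline_version"] < current_guideline_version
```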

The Founders Who Will Win the Next Three Years

The opportunity in stroke AI is real. Automated ASPECTS scoring in the 3-5 range.

Perfusion-based extended window support.

Telestroke and AI triage for community hospitals that cannot afford mobile stroke units.

These are the clinical gaps the 2026 guidelines just made visible and urgent.

But the competitive advantage in this space is shifting.

The algorithm is table stakes.

What separates durable companies from ones that quietly lose hospital trust is the process built around the algorithm, the one that ensures clinical logic stays current as evidence evolves, that catches the failure modes no documentation anticipates, that treats every guideline update as a product update.

The deeper question the industry is beginning to grapple with is who owns clinical logic when guidelines change.

The answer will vary by company and product, but the companies that have a clear, structured answer will have a meaningful advantage over those that don’t.

The 2026 guidelines were updated in January.

The clinical logic in most stroke AI products was written before that.

In stroke AI, the risk isn’t that your model is wrong. It’s that it is quietly outdated.

Last edition: The 2026 Stroke Guidelines Are Excellent. Now the Hard Part – what changed clinically and what it costs hospitals on the floor.

Sources

  • 2026 AHA/ASA Guideline for the Early Management of Patients With Acute Ischemic Stroke. Prabhakaran S et al. Stroke. January 26, 2026. doi:10.1161/STR.0000000000000513
  • Goyal M et al. Endovascular Treatment of Stroke Due to Medium-Vessel Occlusion (ESCAPE-MeVO). NEJM. February 5, 2025. doi:10.1056/NEJMoa2411668
  • Psychogios M et al. Endovascular Treatment for Stroke Due to Occlusion of Medium or Distal Vessels (DISTAL). NEJM. February 5, 2025. doi:10.1056/NEJMoa2408954
  • Puetz V et al. Interobserver Reliability of Baseline Noncontrast CT ASPECTS for Intra-Arterial Stroke Treatment Selection. AJNR. 2012;33(6):1046.
  • Kallmes DF et al. Impact of e-ASPECTS software on the performance of physicians compared to a consensus ground truth: a multi-reader, multi-case study. Frontiers in Neurology. 2023. doi:10.3389/fneur.2023.1221255
  • Sarraj A et al. Trial of Endovascular Thrombectomy for Large Ischemic Strokes (SELECT2). NEJM. 2023.
  • Huo X et al. Trial of Endovascular Therapy for Acute Ischemic Stroke with Large Infarct (ANGEL-ASPECT). NEJM. 2023.
  • Zachrison KS. ISC 2026 Daily Coverage, American Heart Association. February 2026.”
