
Diagnostic accuracy of clinical tools for assessment of acute stroke: a systematic review

Abstract

Introduction

Recanalisation therapy in acute ischaemic stroke is highly time-sensitive and requires early identification of eligible patients to ensure better outcomes. A number of clinical assessment tools have therefore been developed, and this review examines their diagnostic performance.

Methods

The diagnostic performance of currently available clinical tools for identification of acute ischaemic stroke, haemorrhagic stroke and stroke-mimicking conditions was reviewed. A systematic search of the literature published in 2015–2018 was conducted using PubMed, EMBASE, Scopus and The Cochrane Library. Prehospital and in-hospital studies with a minimum sample size of 300 patients reporting diagnostic accuracy were selected.

Results

Twenty-five articles were included. Cortical signs (gaze deviation, aphasia and neglect) were shown to be significant indicators of large vessel occlusion (LVO). Sensitivity for selecting subjects with LVO ranged from 23 to 99%, and specificity from 24 to 97%. Clinical tools incorporating cortical signs as well as motor dysfunction, such as FAST-ED, NIHSS and RACE, demonstrated the best diagnostic accuracy. Tools for identification of stroke mimics showed sensitivity varying from 44 to 91% and specificity from 27 to 98%, with the best diagnostic performance demonstrated by FABS (90% sensitivity, 91% specificity). Hypertension and younger age predicted intracerebral haemorrhage, whereas a history of atrial fibrillation and diabetes was associated with ischaemia. There was variation in the approach used to establish the definitive diagnosis. Blinding of the index test assessment was not specified in about 50% of included studies.

Conclusions

A wide range of clinical assessment tools for selecting subjects with acute stroke has been developed in recent years. Assessment of both cortical and motor function using RACE, FAST-ED and NIHSS showed the best diagnostic accuracy values for selecting subjects with LVO. There were limited data on clinical tools that can be used to differentiate between acute ischaemia and haemorrhage. Diagnostic accuracy appeared to be modest for distinguishing between acute stroke and stroke mimics with optimal diagnostic performance demonstrated by the FABS tool. Further prehospital research is required to improve the diagnostic utility of clinical assessments with possible application of a two-step clinical assessment or involvement of simple brain imaging, such as transcranial ultrasonography.

Rationale

Patients with acute stroke should have access to rapid assessment and early intervention with specialist care for optimal outcomes. Acute ischaemic stroke caused by a large vessel occlusion (LVO) is associated with a high mortality rate of 80% [1] and can be optimally managed with intravenous (IV) thrombolysis followed by mechanical thrombectomy (MT). While IV thrombolysis can currently be provided in many general hospitals, MT can only be performed in specialised centres with neurointerventional facilities.

Recanalisation therapy must be delivered within the first hours after symptom onset to improve functional outcome [2, 3]. This requires a reliable triage system for early identification of subjects eligible for reperfusion therapy. It is also crucial to exclude intracranial haemorrhage and stroke-mimicking conditions before initiating therapy to avoid giving inappropriate or potentially life-threatening IV thrombolysis. An ideal triage system could potentially be used in prehospital settings to determine both immediate care (particularly in remote areas) and transfer arrangements to appropriate hospital facilities.

The number of studies assessing the diagnostic performance of clinical assessment tools has increased in recent years. A systematic review of stroke recognition instruments in suspected stroke patients was performed by Rudd et al. (2016) [4] and included studies published before 10 August 2015. The current review follows directly on from this date and was designed to answer the following questions:

  1. What is the sensitivity and specificity of currently available clinical assessment tools for detecting subjects with ischaemic stroke due to LVO?

  2. What is the sensitivity and specificity of currently available clinical assessment tools for diagnosing acute haemorrhagic stroke?

  3. What is the sensitivity and specificity of currently available clinical assessment tools for differentiating between acute stroke and stroke-mimicking conditions?

Methods

Protocol and registration

The registered protocol can be accessed on PROSPERO, the international prospective register of systematic reviews:

https://www.crd.york.ac.uk/PROSPERO/display_record.php?RecordID=112492

Eligibility criteria

Inclusion and exclusion criteria are presented in Table 1.

Table 1 Review inclusion and exclusion criteria

Information sources

A systematic search of the literature was conducted in October 2018, using a database-specific search strategy for each of the following electronic databases: PubMed, EMBASE, Scopus and The Cochrane Library.

Search strategy

The search strategy included the following combination of multiple iterations of MeSH and keyword terms relating to each component of the research questions: intracranial hemorrhages, cerebral intraparenchymal hematoma, cerebrovascular apoplexy, brain infarction, acute stroke, brain ischemia, cerebrovascular occlusion, cerebral infarction, transient ischemic attack, “stroke mimic*”, prehospital emergency care, emergency care, scoring methods, neurologic signs and symptoms, differential diagnosis, neurologic examination, predictive value of tests, sensitivity and specificity, logistic models.

The search was restricted to human studies, the English language, adult participants, and publication years 2015–2018. This restricted publication date range was chosen to provide an updated analysis of the available data. The systematic review by Rudd et al. (2016) included only prospective studies and excluded retrospective studies, research within a known stroke population, and tools used exclusively by ambulance dispatchers or with telecommunication systems [4]; all of these were, however, included in our systematic analysis.

Study selection

Titles of studies retrieved using the search strategy were screened by one of the review authors to identify studies that potentially met the inclusion criteria outlined in Table 1. The abstracts of those potentially eligible studies were independently assessed for eligibility by three review team members. Any disagreements between them over the eligibility of particular studies were resolved through discussion with a fourth reviewer.

Eligible papers were tabulated and used in the qualitative synthesis. Studies which reported diagnostic accuracy values such as sensitivity, specificity, and positive and negative predictive values were included in the quantitative meta-analysis.

Data collection process

A dedicated data extraction form was developed and used to collect relevant information from the included studies. The inclusion of information fields in the data collection form was guided by the review questions. The following components were assessed:

  • study identification: name of the first author and publication year;

  • setting for the application of the studied clinical tool: prehospital or in-hospital;

  • inclusion/exclusion criteria for participants;

  • sample size;

  • name of the clinical assessment tool studied (where applicable);

  • clinical information collected;

  • background of personnel collecting and interpreting clinical information;

  • diagnostic approach used to establish a final diagnosis;

  • diagnostic accuracy values: true positive, true negative, false positive and false negative counts; positive and negative predictive values; positive and negative likelihood ratios; and/or sensitivity and specificity.
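For reference, these accuracy measures are derived from the extracted 2 × 2 counts in the standard way. The following minimal sketch illustrates the relationships; it is not analysis code from the review itself.

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Derive standard accuracy metrics from a 2x2 table comparing the
    index test (clinical tool) against the reference (final) diagnosis."""
    sensitivity = tp / (tp + fn)               # proportion of true cases detected
    specificity = tn / (tn + fp)               # proportion of non-cases correctly excluded
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    lr_positive = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_negative = (1 - sensitivity) / specificity   # negative likelihood ratio
    return {"Se": sensitivity, "Sp": specificity, "PPV": ppv,
            "NPV": npv, "LR+": lr_positive, "LR-": lr_negative}
```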

As our analysis concerned only published data, no further data were sought from investigators.

Risk of bias assessment in individual studies

Two review authors qualitatively assessed included studies for a risk of bias and concerns regarding their applicability for each of three domains: patient selection, index test, and flow and timing, in accordance with the QUADAS-2 Tool quality assessment system [5]. A table summarising risk of bias and applicability concerns was constructed.

Data synthesis

The synthesis was performed in accordance with the Cochrane guidelines for diagnostic test accuracy reviews. The diagnostic accuracy data from each study were presented graphically by plotting sensitivities and specificities on a coupled column chart.
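To illustrate this form of presentation, the sketch below draws a paired column chart for four of the stroke-mimic tools reported in the Results, using the sensitivity and specificity values quoted later in the text; the plotting code is our own illustration rather than the code used to produce the published figures.

```python
import matplotlib.pyplot as plt
import numpy as np

# Sensitivity/specificity values (per cent) quoted in the Results section
tools = ["FABS", "sNIHSS-EMS", "LAPSS 1998", "MPDS"]
sensitivity = [90, 91, 44, 86]
specificity = [91, 52, 98, 27]

x = np.arange(len(tools))
width = 0.35

fig, ax = plt.subplots()
ax.bar(x - width / 2, sensitivity, width, label="Sensitivity")
ax.bar(x + width / 2, specificity, width, label="Specificity")
ax.set_xticks(x)
ax.set_xticklabels(tools)
ax.set_ylabel("Per cent")
ax.set_title("Diagnostic accuracy of stroke recognition tools (illustrative)")
ax.legend()
plt.show()
```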

Results

Study selection

The results of the study selection process are illustrated in Fig. 1.

Fig. 1 PRISMA flowchart. Outline of the study selection process using inclusion and exclusion criteria

Study characteristics

The current review includes 25 new studies, adding to Rudd et al.'s (2016) review [4], which included 18 primary studies from a total of 5622 references identified. The main characteristics of the studies included in the current review are presented in Additional file 1.

In total, 25,642 cases were assessed across the included studies published between 2015 and 2018. Participants were recruited in the prehospital setting, or upon presentation to the hospital, or both. Of the included studies, 16/25 (64%) were retrospective.

Risk of bias assessment in individual studies

A summary of bias and applicability concerns is presented in Additional file 2.

All included studies recruited consecutive patients, and case-control methodology was avoided in all cases. Only studies with a large sample size of more than 300 participants, according to Meader et al.'s (2014) classification [6], were considered for inclusion to ensure greater reliability of the results. All included studies were assessed against the adequate blinding criterion.

In 13/25 (52%) papers it was not specifically mentioned or was judged to be unclear whether the results of clinical assessment (the index test) were interpreted independently from those tests that were used to make a final diagnosis (the reference test) [7,8,9,10,11,12,13,14,15,16,17,18,19].

The approach for establishing the final diagnosis was described in all included studies. Hospital discharge diagnosis served as the gold standard in six papers, brain or cerebral vessel imaging alone in 14, and clinical assessment combined with imaging in five.

Diagnostic accuracy of clinical tools in selecting subjects with LVO

More than 20 different clinical assessment tools with optimal cut-offs for selecting subjects with ischaemic stroke due to LVO were analysed in this review (Table 2).

Table 2 Diagnostic accuracy values of clinical tools for selecting subjects with large vessel occlusion

Sensitivity values ranged from 23% (NIHSS subitem LoC 1a) to 99% (NIHSS≥4, NIHSS≥6, a combination of reduced level of consciousness with inability to answer questions, facial weakness, arm weakness, sensation loss, and aphasia). Specificity ranged from 24% (OoH-NIHSS≥1, CPSS≥1) to 97% (G-FAST = 4). For simplicity, only those tools showing both sensitivity and specificity values ≥80% (an arbitrarily chosen threshold) were selected to be plotted (Fig. 2).

Fig. 2 Bar chart. Sensitivity and specificity values across the clinical tools for selecting subjects with large vessel occlusion

It was suggested by Beume et al. (2018) [19] that cortical signs such as gaze deviation, aphasia or agnosia, and/or neglect were more accurate predictors of LVO than motor deficit alone (PPV 60%, NPV 94%). However, as demonstrated in Fig. 2, FAST-ED≥4 (PPV 80%, NPV 100%), NIHSS≥10 (PPV 78%, NPV 99%), and RACE≥5 (PPV 81%, NPV 99%) had the best diagnostic accuracy for selecting subjects with LVO. All three clinical assessment tools incorporate cortical signs as well as motor dysfunction.

The Finnish Prehospital Stroke Scale combines motor deficit and cortical signs (face drooping, limb weakness, speech difficulty, visual disturbance, and conjugate eye deviation). Its sensitivity was highest for detection of proximal M1 occlusions (100%) and lowest for M2 and basilar artery occlusions (13 and 22%, respectively) [13].

Moore et al. (2016) demonstrated that the presence of all four components of a combination of reduced consciousness level, lower limb weakness, dysarthria, and gaze deviation had a sensitivity of 96% and specificity of 39% for LVO when compared with computed tomography angiography (CTA) [20]. Patients who do not have all four clinical features are therefore less likely to have LVO and would not require CTA, decreasing the need for this test by about 32%. This approach might also contribute to decisions about immediate transfer to an endovascular centre for MT.
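To show how a reduction of roughly 32% can arise from these accuracy figures, the short calculation below estimates the proportion of screened patients who would test negative on the four-feature rule and so avoid CTA. The assumed LVO prevalence of 20% is purely illustrative and is not a value reported in this review or by Moore et al.

```python
# Four-feature rule for LVO (Moore et al. 2016): sensitivity 96%, specificity 39%
sensitivity = 0.96
specificity = 0.39
prevalence = 0.20  # assumed LVO prevalence among screened patients (illustrative only)

# Proportion of all screened patients who are rule-negative and would not proceed to CTA
negative_rate = (1 - sensitivity) * prevalence + specificity * (1 - prevalence)
print(f"Estimated reduction in CTA use: {negative_rate:.0%}")  # prints ~32%
```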

Diagnostic accuracy of clinical tools in detecting acute haemorrhagic stroke

The paper by Jin et al. (2016) [21] was the only eligible study in the present review that aimed to distinguish between the two main subtypes of stroke, ischaemic stroke and haemorrhage. A total of 1989 cases from the Chinese population with suspected first-ever acute stroke were analysed. The authors proposed a discriminant function model based on the following clinical assessment findings: age above 65 years, past medical history of diabetes (DM), atrial fibrillation (AF), systolic blood pressure (SBP) above 180 mmHg, and vomiting at onset. The model showed higher sensitivity but lower specificity for selecting subjects with ischaemic stroke (42–75.7% and 63.3–93.6%, respectively). Diagnostic accuracy values for haemorrhage showed the opposite pattern, with lower sensitivity and higher specificity (58.5–93.6% and 42–79.2%, respectively). It was also suggested that a history of AF and DM was more likely to be associated with ischaemic stroke, whereas high SBP and younger age were associated with haemorrhage.

Diagnostic accuracy of clinical tools for differentiating between acute stroke and stroke-mimicking conditions

There was a significant variation in diagnostic accuracy of tools designed for distinguishing between acute stroke and stroke mimics as shown in Fig. 3.

Fig. 3 Bar chart. Sensitivity and specificity values of clinical tools for differentiating between acute stroke and stroke-mimicking conditions

Sensitivity values varied from 44% (LAPSS 1998) to 91% (sNIHSS-EMS). Specificity ranged from 27% (MPDS) to 98% (LAPSS 1998) (Table 3). FABS showed the best diagnostic accuracy values with 90% sensitivity and 91% specificity [22].

Table 3 Diagnostic accuracy values of clinical tools for selecting subjects with acute stroke and stroke-mimicking conditions

The MPDS tool was developed to facilitate early identification of stroke or transient ischaemic attack by emergency medical dispatchers and to enable early notification of receiving hospitals; it demonstrated satisfactory sensitivity of 86% but low specificity of 27% (PPV 20%, NPV 90%) [23]. Similarly, sNIHSS-EMS, which consists of six NIHSS items selected as “suitable for prehospital use” [24] (level of consciousness, facial palsy, motor arm/leg, sensory, language and dysarthria), had the highest sensitivity (91%) of the clinical assessment tools compared but fairly low specificity (52%) (PPV 43%, NPV 93%). In contrast, LAPSS 1998 and LAPSS 2000 had the highest specificity (98 and 97%, respectively) but the lowest sensitivity (44 and 49%, respectively) among these tools [12, 15].

The FABS tool was developed for identification of subjects with stroke-mimicking conditions and negative brain CT findings in the emergency department. The total score is calculated from the absence of risk factors for stroke (AF, hypertension, advanced age) and the presence of sensory disturbance without motor deficit. FABS showed the best overall diagnostic accuracy, with 90% sensitivity and 91% specificity (PPV 87%, NPV 93%) [22].
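The general structure of such a score can be sketched as below. This is only an illustration of the scoring principle described above (points for absent stroke risk factors and for isolated sensory symptoms); the item definitions and the decision threshold are hypothetical, and the published FABS items and cut-off should be taken from Goyal et al. [22].

```python
def mimic_style_score(atrial_fibrillation: bool, hypertension: bool,
                      advanced_age: bool, isolated_sensory_symptoms: bool) -> int:
    """Illustrative FABS-like score: higher values favour a stroke mimic."""
    score = 0
    score += int(not atrial_fibrillation)     # no history of AF
    score += int(not hypertension)            # no hypertension
    score += int(not advanced_age)            # younger patient
    score += int(isolated_sensory_symptoms)   # sensory disturbance without motor deficit
    return score

# Hypothetical decision rule (threshold chosen for illustration only)
if mimic_style_score(atrial_fibrillation=False, hypertension=False,
                     advanced_age=False, isolated_sensory_symptoms=True) >= 3:
    print("Findings suggest a stroke mimic")
```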

Discussion

A reliable triage system that could allow emergency transfer of patients eligible for MT directly to a regional centre with neurointerventional facilities following early IV thrombolysis (“drip and ship”) [25] could transform stroke care. The present systematic review attempted to evaluate the diagnostic accuracy of clinical assessment tools for (1) selecting subjects with ischaemic stroke due to LVO; (2) differentiating between two main subtypes of stroke – ischaemic stroke and haemorrhage, and (3) distinguishing between acute stroke cases and stroke mimics.

All reviewed studies had a minimum sample size of 300 consecutive participants [6], which supports the reliability of the reported findings. There were, however, some limitations in the included studies, mainly relating to unclear blinding of the researchers interpreting the results of the index and reference tests, and to discrepancies in the approach used to establish the gold standard.

As proposed by Beume et al. (2018) [19], cortical signs such as aphasia or neglect are more accurate predictors of LVO than motor deficit alone. However, a combination of signs suggestive of cortical involvement and motor deficit, as assessed for example by the FAST-ED, RACE or NIHSS scales, provided better diagnostic accuracy than cortical signs alone as evaluated by the Pomona scale (Table 2).

Modest diagnostic accuracy was seen in clinical assessment tools aiming to distinguish between acute stroke and stroke mimics. The FABS tool, which was designed specifically for detecting stroke mimics and incorporates additional clinical information (such as atrial fibrillation) compared with other well-established tools such as ROSIER, demonstrated high sensitivity and specificity of about 90% (PPV 87%, NPV 93%) [22]. Clinical assessment findings such as hypertension and younger age were indicative of haemorrhage, whereas a history of AF and DM was more likely to be associated with ischaemic stroke [21].

There are several limitations of currently available tools that may prevent them from being widely adopted. First, their specificity for LVO remains quite low, which could lead to inappropriate patient transfers at high cost [27]. Second, many studies were designed in such a way that patients with haemorrhage and/or stroke-mimicking conditions were excluded, which precludes these clinical tools from being applied directly in prehospital settings [19].

An ideal clinical assessment tool would be a simple method with high predictive values that could be used equally in prehospital settings and in the emergency department. A two-step approach using two different clinical assessment tools at the prehospital stage could be considered as an alternative. The first step would be to select subjects with acute stroke who would benefit from reperfusion therapy and to exclude stroke-mimicking conditions and acute intracranial haemorrhage; for this purpose, a tool with higher specificity, such as G-FAST, should be considered [13]. This might allow prehospital thrombolysis to be offered to selected patients in remote areas, in line with the management of patients with ST-elevation myocardial infarction [29]. Thereafter, a decision on transferring subjects with suspected LVO to a specialised centre would be made on the basis of a clinical assessment score with a high sensitivity, such as the NIHSS, or a combination of clinical assessment findings as suggested by Moore et al. [14, 20, 27]. However, this approach requires further validation.
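A schematic version of this two-step logic is sketched below. The tool names and the NIHSS cut-off of 10 are drawn from the scales discussed in this review, but the decision flow itself is our illustration and has not been validated.

```python
def two_step_prehospital_triage(recognition_tool_positive: bool, nihss: int,
                                lvo_cutoff: int = 10) -> str:
    """Illustrative two-step prehospital triage.

    Step 1: a specific recognition tool (e.g. G-FAST) identifies acute stroke and
            excludes mimics, selecting candidates for reperfusion therapy.
    Step 2: a sensitive severity score (e.g. NIHSS) flags suspected LVO for direct
            transfer to a centre with neurointerventional facilities.
    """
    if not recognition_tool_positive:
        return "Stroke unlikely: consider alternative diagnoses / local management"
    if nihss >= lvo_cutoff:
        return "Suspected LVO: transfer to endovascular centre (consider thrombolysis en route)"
    return "Likely stroke without LVO: transport to nearest thrombolysis-capable hospital"
```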

It might be beneficial to use an additional diagnostic tool in combination with clinical assessment to provide further information and increase the accuracy of such a triage system. Transcranial Doppler ultrasound has been shown to detect occlusions in the major cerebral arteries with 68–100% sensitivity and 78–99% specificity [30,31,32]. It is a relatively inexpensive and readily portable diagnostic tool that takes, on average, no more than 15 min to complete an examination of the cerebral vessels [33], and can be used in prehospital settings, potentially with remote diagnostic support [34,35,36]. However, further assessment and validation of this proposed system is required.

Conclusion

A wide range of clinical assessment tools for selecting subjects with acute stroke has been developed in recent years. Assessment of both cortical and motor function using RACE, FAST-ED or NIHSS demonstrated the best diagnostic accuracy values for selecting subjects with LVO. There were limited data on clinical tools that can be used to differentiate between acute ischaemia and haemorrhage. Diagnostic accuracy appeared to be modest for distinguishing between acute stroke and stroke mimics with optimal diagnostic performance demonstrated by the FABS tool. Further research is required to establish a novel prehospital triage system with possible application of a two-step clinical assessment or involvement of simple brain imaging, such as transcranial ultrasonography.

Availability of data and materials

Materials are available from the corresponding author upon request.

Abbreviations

AF: atrial fibrillation
DM: diabetes mellitus
IV: intravenous
LVO: large vessel occlusion
MT: mechanical thrombectomy
NPV: negative predictive value
PPV: positive predictive value
SBP: systolic blood pressure
Se: sensitivity
Sp: specificity

References

  1. Hacke W, Schwab S, Horn M, Spranger M, De Georgia M, von Kummer R. “Malignant” middle cerebral artery territory infarction: clinical course and prognostic signs. Arch Neurol. 1996;53:309–15.

  2. Goyal M, Menon BK, van Zwam WH, Dippel DWJ, Mitchell PJ, Demchuk AM, et al. Endovascular thrombectomy after large-vessel ischaemic stroke: a meta-analysis of individual patient data from five randomised trials. Lancet (London, England). 2016;387:1723–31.

  3. Saver JL, Fonarow GC, Smith EE, Reeves MJ, Grau-Sepulveda MV, Pan W, et al. Time to treatment with intravenous tissue plasminogen activator and outcome from acute ischemic stroke. JAMA. 2013;309:2480–8.

  4. Rudd M, Buck D, Ford GA, Price CI. A systematic review of stroke recognition instruments in hospital and prehospital settings. Emerg Med J. 2016;33:818–22.

  5. Whiting PF, Rutjes AWS, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155:529–36.

  6. Meader N, King K, Llewellyn A, Norman G, Brown J, Rodgers M, et al. A checklist designed to aid consistency and reproducibility of GRADE assessments: development and pilot validation. Syst Rev. 2014;3:82.

  7. Demeestere J, Garcia-Esperon C, Lin L, Bivard A, Ang T, Smoll NR, et al. Validation of the National Institutes of Health stroke Scale-8 to detect large vessel occlusion in ischemic stroke. J Stroke Cerebrovasc Dis. 2017;26:1419–26.

  8. Gropen TI, Boehme A, Martin-Schild S, Albright K, Samai A, Pishanidar S, et al. Derivation and validation of the emergency medical stroke assessment and comparison of large vessel occlusion scales. J Stroke Cerebrovasc Dis. 2018;27:806–15.

  9. Hastrup S, Damgaard D, Johnsen SP, Andersen G. Prehospital acute stroke severity scale to predict large artery occlusion: design and comparison with other scales. Stroke. 2016;47:1772–6.

  10. Katz BS, McMullan JT, Sucharew H, Adeoye O, Broderick JP. Design and validation of a prehospital scale to predict stroke severity: Cincinnati Prehospital Stroke Severity Scale. Stroke. 2015;46:1508–12.

  11. Kuroda R, Nakada T, Ojima T, Serizawa M, Imai N, Yagi N, et al. The TriAGe+ score for Vertigo or dizziness: a diagnostic model for stroke in the emergency department. J Stroke Cerebrovasc Dis. 2017;26:1144–53.

  12. Mao H, Lin P, Mo J, Li Y, Chen X, Rainer TH, et al. Development of a new stroke scale in an emergency setting. BMC Neurol. 2016;16:168.

  13. Ollikainen JP, Janhunen HV, Tynkkynen JA, Mattila KM, Halinen MM, Oksala NK, et al. The Finnish prehospital stroke scale detects thrombectomy and thrombolysis candidates: a propensity score-matched study. J Stroke Cerebrovasc Dis. 2018;27:771–7.

  14. Panichpisal K, Nugent K, Singh M, Rovin R, Babygirija R, Moradiya Y, et al. Pomona large vessel occlusion screening tool for prehospital and emergency room settings. Interv Neurol. 2018;7:196–203.

  15. Purrucker JC, Hametner C, Engelbrecht A, Bruckner T, Popp E, Poli S. Comparison of stroke recognition and stroke severity scores for stroke detection in a single cohort. J Neurol Neurosurg Psychiatry. 2015;86:1021–8.

  16. Rodríguez-Pardo J, Fuentes B, Alonso de Leciñana M, Ximénez-Carrillo Á, Zapata-Wainberg G, Álvarez-Fraga J, et al. The direct referral to endovascular center criteria: a proposal for pre-hospital evaluation of acute stroke in the Madrid stroke network. Eur J Neurol. 2017;24:509–15.

  17. Scheitz JF, Abdul-Rahim AH, Macisaac RL, Cooray C, Sucharew H, Kleindorfer D, et al. Clinical selection strategies to identify ischemic stroke patients with large anterior vessel occlusion: results from SITS-ISTR (safe implementation of thrombolysis in stroke international stroke thrombolysis registry). Stroke. 2017;48:290–7.

  18. Zhao H, Pesavento L, Coote S, Rodrigues E, Salvaris P, Smith K, et al. Ambulance clinical triage for acute stroke treatment paramedic triage algorithm for large vessel occlusion. Stroke. 2018;49.

  19. Beume L-A, Hieber M, Kaller CP, Nitschke K, Bardutzky J, Urbach H, et al. Large vessel occlusion in acute stroke. Stroke. 2018;49:2323–9.

  20. Moore RD, Jackson JC, Venkatesh SL, Quarfordt SD, Baxter BW. Revisiting the NIH stroke scale as a screening tool for proximal vessel occlusion: can advanced imaging be targeted in acute stroke? J Neurointerv Surg. 2016;8:1208–10.

  21. Jin HQ, Wang JC, Sun YA, Lyu P, Cui W, Liu YY, et al. Prehospital identification of stroke subtypes in Chinese rural areas. Chin Med J (Engl). 2016;129:1041–6.

  22. Goyal N, Tsivgoulis G, Male S, Metter EJ, Iftikhar S, Kerro A, et al. FABS: an intuitive tool for screening of stroke mimics in the emergency department. Stroke. 2016;47:2216–20.

  23. Clawson JJ, Scott G, Gardett I, Youngquist S, Taillac P, Fivaz C, et al. Predictive ability of an emergency medical dispatch stroke diagnostic tool in identifying hospital-confirmed strokes. J Stroke Cerebrovasc Dis. 2016;25:2031–42.

  24. Purrucker JC, Härtig F, Richter H, Engelbrecht A, Hartmann J, Auer J, et al. Design and validation of a clinical scale for prehospital stroke recognition, severity grading and prediction of large vessel occlusion: the shortened NIH stroke scale for emergency medical services. BMJ Open. 2017;7.

  25. Holodinsky JK, Williamson TS, Demchuk AM, Zhao H, Zhu L, Francis MJ, et al. Modeling stroke patient transport for all patients with suspected large-vessel occlusion. JAMA Neurol. 2018;75:1477–86.

  26. Lima FO, Silva GS, Furie KL, Frankel MR, Lev MH, Camargo ECS, et al. Field assessment stroke triage for emergency destination: a simple and accurate prehospital scale to detect large vessel occlusion strokes. Stroke. 2016;47:1997–2002.

  27. Turc G, Maier B, Naggara O, Seners P, Isabel C, Tisserand M, et al. Clinical scales do not reliably identify acute ischemic stroke patients with large-artery occlusion. Stroke. 2016;47:1466–72.

  28. Carrera D, Campbell BCV, Cortes J, Gorchs M, Querol M, Jimenez X, et al. Predictive value of modifications of the prehospital rapid arterial occlusion evaluation scale for large vessel occlusion in patients with acute stroke. J Stroke Cerebrovasc Dis. 2017;26:74–7.

  29. Pedley DK, Beedie S, Ferguson J. Mobile telemetry for pre-hospital thrombolysis: problems and solutions. J Telemed Telecare. 2005;11(Suppl 1):78–80.

  30. Tsivgoulis G, Sharma VK, Lao AY, Malkoff MD, Alexandrov AV. Validation of transcranial Doppler with computed tomography angiography in acute cerebral ischemia. Stroke. 2007;38:1245–9.

  31. Wada K, Kimura K, Minematsu K, Yasaka M, Uchino M, Yamaguchi T. Combined carotid and transcranial color-coded sonography in acute ischemic stroke. Eur J Ultrasound. 2002;15:101–8.

  32. Tsivgoulis G, Sharma VK, Hoover SL, Lao AY, Ardelt AA, Malkoff MD, et al. Applications and advantages of power motion-mode Doppler in acute posterior circulation cerebral ischemia. Stroke. 2008;39:1197–204.

  33. Gerriets T, Goertler M, Stolz E, Postert T, Sliwka U, Schlachetzki F, et al. Feasibility and validity of transcranial duplex sonography in patients with acute stroke. J Neurol Neurosurg Psychiatry. 2002;73:17–20.

  34. Eadie L, Regan L, Mort A, Shannon H, Walker J, MacAden A, et al. Telestroke assessment on the move: prehospital streamlining of patient pathways. Stroke. 2015;46:e38–40.

  35. Eadie L, Mulhern J, Regan L, Mort A, Shannon H, Macaden A, et al. Remotely supported prehospital ultrasound: a feasibility study of real-time image transmission and expert guidance to aid diagnosis in remote and rural communities. J Telemed Telecare. 2018;24:616–22.

  36. Mort A, Eadie L, Regan L, Macaden A, Heaney D, Bouamrane M-M, et al. Combining transcranial ultrasound with intelligent communication methods to enhance the remote assessment and management of stroke patients: framework for a technology demonstrator. Health Informatics J. 2016;22:691–701.

  37. Heldner MR, Hsieh K, Broeg-Morvay A, Mordasini P, Bühlmann M, Jung S, et al. Clinical prediction of large vessel occlusion in anterior circulation stroke: mission impossible? J Neurol. 2016;263:1633–40.

  38. Kummer BR, Gialdini G, Sevush JL, Kamel H, Patsalides A, Navi BB. External validation of the Cincinnati prehospital stroke severity scale. J Stroke Cerebrovasc Dis. 2016;25:1270–4.

  39. Vanacker P, Heldner MR, Amiguet M, Faouzi M, Cras P, Ntaios G, et al. Prediction of large vessel occlusions in acute stroke: National Institute of health stroke scale is hard to beat. Crit Care Med. 2016;44:e336–43.

  40. Chen K, Schneider ALC, Llinas RH, Marsh EB. Keep it simple: vascular risk factors and focal exam findings correctly identify posterior circulation ischemia in “dizzy” patients. BMC Emerg Med. 2016;16:37.

Acknowledgements

This work is part of a PhD project supported by the University of Aberdeen’s Elphinstone Scholarship Programme. LE is funded by the European Space Agency SatCare grant.

The authors would like to thank Hannah Thomas and Emma Foster for the help with assessing full text of articles for eligibility for the current review.

Funding

No specific funding was received for this work.

Author information

Authors and Affiliations

Authors

Contributions

All four authors were involved in the conceptualisation and development of the methodology. DA performed a complete literature search, assessment of full text of articles for eligibility and data extraction. Risk of bias assessment in individual studies was performed by DA and LE. DA performed a formal analysis of the collected data and writing of the original draft (including all tables and figures). DA finalised all drafts in consultation with LE, AM and PW. LE, AM and PW reviewed the manuscript and approved its final version.

Corresponding author

Correspondence to Daria Antipova.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional files

Additional file 1:

Characteristics of included studies [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24, 26,27,28, 37,38,39,40]. (DOCX 37 kb)

Additional file 2:

Risk of bias and applicability concerns summary [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24, 26,27,28, 37,38,39,40]. (DOCX 34 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Antipova, D., Eadie, L., Macaden, A. et al. Diagnostic accuracy of clinical tools for assessment of acute stroke: a systematic review. BMC Emerg Med 19, 49 (2019). https://doi.org/10.1186/s12873-019-0262-1
