Thursday, 1 September 2016

SMART Simulation

SMART Simulation
Welcome back!
In the immortal words of, among others, Slayer.

Apologies for the delay (as usual), there’s always a delay, and I’m always sorry. Mea culpa, mea culpa, mea maxima culpa.

That being said, I’ve received enough positive feedback, comments, mentions, likes, retweets, pokes, shrugs, swipes right, whatever one does on LinkedIn, and even a good old-fashioned e-mail to allow myself to entertain the idea that there is more to this blog than me simply howling into the void.

Many, many, many heartfelt thanks for the motivation and encouragement. Let’s get on with it, shall we? (Yes!) I promise to stop blogging when y’all eventually carry me around shoulder high singing hosannas and chanting my name as the saviour of modern simulation, or when I get bored; either feels like a natural end.

In this blarticle (blog/article) I’m going to be presenting a simulation concept I have been instrumental in co-creating with my excellent work colleague and future rock star John Karlsson of Akademiska University Hospital’s Clinical Training Centre.
It’s called SMART simulation.

The mechanics and content of SMART are currently undergoing some form of copyright protection process, so I can’t share the material or programming but I can describe the concept.

S.M.A.R.T simulation - Full Metal Packet

“This is our SMART concept, there are many like it but this one is ours…”

SMART is an acronym for Simulation Made Accessible via Resource-Saving Training in-situ.
I’m aware that we’re not the first people to use SMART as an acronym, and that there even exists at least one other SMARTSim.

The great thing about acronyms is that you can change them. Let’s refer back to popular heavy metal combo Slayer, who were so kind as to welcome us back at the beginning of this article. Slayer’s name *CAN* be an acronym for Satan Laughs As You Eternally Rot, *OR* it *CAN* be an acronym for Sings Lovely And Yes, Even Rhymes. You can adapt the acronym to fit the content (or the recipients of the content).

Originally SMART stood for Simpad, Manikin And Relevant Training, which sounds maybe a little more elegant and easier to understand, but is then constrained to the use of Laerdal’s SimPad (more of which later). We found, however, that one of SMART’s strongest features was its modular nature: we could use the SMART concept as a framework for any event that would benefit from a briefing and a debriefing.

As expressed in a previous blarticle, we run a lot of simulation. Meaningless figures abound that I am unwilling to pluck out of the air, but it’s a *loooooooooot*. We run so much simulation that the resources of our centre are stretched trying to keep up with demand. Running in situ (point of care) simulations in the relevant hospital departments helps take the weight off the physical resources of the centre and conveys with it the intrinsic benefits that in situ simulation delivers (more on this here, and everywhere else, another time). However, in situ simulation also demands human resources in the form of trained simulator instructors and operators, of whom we have a finite number.

So, saturation point for simulation? No more resources to commit without a drop in quality, and so we go back to being that place in the cellar that ward staff visit once a year (at best!) to play with scary dolls?

Well, we could loan out a simulator to the staff responsible for teaching on the ward. But then we have *no* control over what is delivered during the teaching session. This isn’t something that is attractive for us as a centre charged with delivering learning that has patient safety as its core value.

In short, we don’t want to say no to people who want to engage with simulation, so we need to find a way to make simulation accessible without it losing its meaning.

As we see it, the barriers to simulation include the following preconceptions:
  1. A technologically advanced simulator is not something one can just pick up and use as a teaching resource without prior experience.
  2. Debriefing a group of potential strangers according to Crew Resource Management (CRM) guidelines is not a job for the faint-hearted, and if mishandled can lead to as much harm as good.
  3. Simulation is HEAVY on resources; sending 10 members of staff down to the clinical training centre for a half day has the potential effect of removing a week’s worth of ward work.

Being a successful simulation instructor or simulator operator could be said to require a set of skills that are, if not unique, then *highly* specialised and reliant on experience. So how to translate that?

The vast majority of requests we receive to deliver simulation come with specific CRM-based goals as the preferred learning points (rather than practical/psychomotor skills). Starting with the preferred learning goals of the target group sounds pedagogically healthy and worthwhile, so let’s do that.

For entry-level instructors, there exists a preconception that CRM as a concept can be clumsy and hard to get to grips with. One reason is that the central principles of CRM can mean different things to different people; another is that there are so damn many of them. Cognitive theory suggests we humans can only effectively think about seven things at once, so how are we to consider the impact of a dozen or more CRM principles on an active group of people with a view to debriefing them? Is it even possible?

The Ace of Spades

We have broken down the introduction of CRM principles and the corresponding debriefing into a card game. *gasps from the audience, someone in Denmark just fainted, cries of “No! You can’t DO that!”*

“Oh but,” puts on sunglasses, turns to camera, “we did…”

Each card set has a theme (Communication, Effective Teamwork etc.), and there are maybe 4-6 cards in each set. Upon each card is the title of one of the CRM principles (closed-loop communication, for example), and on the back of each card is a description of that principle and how it might translate into actual work duties.

The card set forms the basis of an introduction to the simulation session. Once the principles and descriptions have been discussed and agreed upon, it’s time to meet our patient.

All our SMART patients are gender neutral; there’s even a word for a gender-neutral person in the Swedish language. Hooray for a progressive society.

(Nearly) all our SMART patient cases can happen whenever, to whomever and wherever. It’s up to the educator delivering SMART to add some finesse to the case if it needs to be made more relevant to a specialist ward or an unusual healthcare environment.

The cases are medically straightforward, on the principle that time spent in the debrief discussing the medical aspects of the case is time spent NOT discussing or reaching the CRM-based learning goals.

Drive, she said

Driving a simulator is easy: it’s simply a matter of having 100% focus on the patient’s vital signs (SpO2, heart rate, blood pressure etc.) with respect to both the patient’s underlying sickness (of which you have expert knowledge) and the actions and interventions carried out by the team of participants, and translating these actions and interventions into realistic physiological responses as displayed on the patient monitor and via the simulator. Simultaneously, you are voice acting the patient’s symptoms and emotions.

Did I say it was easy? Well, it is easy to do a bad job, far too easy. So how do we make it easy to do a better job?

As outlined in the previous blarticle “Programming People - Part 1”, there are, broadly speaking, two approaches to driving a patient case on the simulator: “On the Fly”, making things up as you go along, an approach that does not suit an inexperienced simulator operator; and “Hard Wiring”, an approach whose rigidity doesn’t suit a flexible simulation package like SMART.

With SMART, we have programmed physiological responses linked to on screen button presses on a mobile device, with the buttons marked according to known likely medical interventions.

It works like this: the participants administer oxygen, the operator presses a button named “Administer Oxygen” or something similar, and the physiological response to oxygen administration happens in the background over time. The participants raise the rate of oxygen delivery? There is a button for that, with the physiology programmed in. Fluids? Drugs? Tipping the bed? For each likely intervention there is an appropriate button linked to a physiological response.

The patient’s condition is preprogrammed to deteriorate over time, with selectable levels of severity of disease state.

For each case, one of the “Participant Action” buttons is red, and is considered the key treatment to allow the case to progress positively and the patient to begin to normalise. For example, the programmed allergic shock case requires the intervention of an EpiPen or similar. Once this intervention has taken place, the red button is pressed and the patient normalises over time.

Simple, elegant, SMART. A physiologically correct patient case that is easy to drive and focuses on the actions of the participants.
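To make the mechanics concrete, here is a toy sketch of the button-driven idea in Python. It is emphatically NOT the protected SMART programming; the button names, vital signs and all the numbers are invented purely for illustration.

```python
# Toy sketch of button-driven simulator physiology. Names and numbers
# are invented for illustration; nothing here is clinical guidance.

BASELINE = {"spo2": 97, "hr": 70}

# Each button applies a change to the vitals; one button is the "red"
# key treatment that flips the patient into a normalising trend.
BUTTONS = {
    "give_oxygen":   {"spo2": +4},
    "raise_o2_rate": {"spo2": +3},
    "give_fluids":   {"hr": -10},
    "epipen":        {"spo2": +8, "hr": -25},  # the red button for this case
}
KEY_TREATMENT = "epipen"

class ToyPatient:
    def __init__(self, severity=1.0):
        self.vitals = dict(BASELINE)
        self.severity = severity      # selectable severity of disease state
        self.normalising = False

    def press(self, button):
        """Operator presses an on-screen intervention button."""
        for vital, delta in BUTTONS[button].items():
            self.vitals[vital] += delta
        self.vitals["spo2"] = min(self.vitals["spo2"], 100)
        if button == KEY_TREATMENT:
            self.normalising = True   # patient now trends back to baseline

    def tick(self):
        """One time step: deteriorate unless the key treatment was given."""
        if self.normalising:
            for v, base in BASELINE.items():
                self.vitals[v] += (base - self.vitals[v]) * 0.5
        else:
            self.vitals["spo2"] -= 2 * self.severity
            self.vitals["hr"] += 5 * self.severity

# Allergic shock case: patient worsens until the EpiPen button is pressed.
p = ToyPatient()
p.tick()
p.tick()
p.press("epipen")
p.tick()
print(p.vitals)
```

The operator never touches raw physiology; they just report what the participants did, and the programming does the rest, which is the whole point.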

“But how does the patient feel?” I hear you ask

As outlined in another previous blarticle (see how it’s all coming together, long-term readers?), patients are, surprisingly, people as well as fleshy objects, and how they feel has an effect on how they react to treatment.

Scroll a little to the right on the SMART-programmed mobile device to find the Symptoms button. Pressing it displays different things according to which phase of treatment/sickness the patient is in. The information is provided in a pop-up on the mobile device and includes key phrases the patient might say at this point, and the answers to questions the simulator itself can’t deliver (capillary refill, patient temperature, how warm the skin feels, etc.).

After running the scenario, the participants are invited to take part in a debriefing. The debrief itself is based on the previously discussed CRM card set and introduces a specially prepared “board game” in order to define, structure and focus the debriefing towards discussion of the chosen goals, rather than a general roundtable agreement that “good things are better than bad things”.

Participants later receive a web based survey in which they may submit further reflections and feedback.

How is this solving the resource problem?
The ward personnel in charge of education come to us for a half-day “SMART Instructor” course, after which they receive a crown and sceptre and are allowed to book the SMART package (course material, card game, SMART-programmed simulator).

We have enjoyed huge success delivering SMART using Laerdal’s Resusci Anne Simulator (RA Sim) and Laerdal’s SimPad mobile device. The RA Sim is a medium-fidelity simulator; that is to say, it has pulses, it breathes, you can take its blood pressure. Its ability to be driven by the SimPad mobile device also makes it very attractive as part of a physical, collectable/deliverable simulation package.

The (newly crowned) SMART instructor collects the package themselves and then delivers the session (often it needs two instructors) back at their own department in the hour-long gap between two work shifts. Scheduling the education within this hour means that NO patient hours are lost, and as a SMART simulation session lasts 30 minutes (10 min intro, 10 min medical case, 10 min SMART debrief) we can deliver two sessions.

This is the end, beautiful friend
Simplified CRM structured and focussed to deliver specific teaching goals.
Absurdly easy to drive simulation.
Flexible and light on resources.

Sure, it’s simulation LITE, in a way, but it punches way above its weight by directly addressing preconceptions about simulation.

We don’t see SMART as a way out of full-scale simulation, but rather as a way into it. We see it as one of the ways in which we don’t have to say no anymore.

After a successful pilot scheme, we launched our own SMART instructors course and are delighted to say we now have more than 40 SMART instructors across our local authority. Which, in context, is 40-plus more people engaged in delivering simulation in Uppsala Län than there were six months ago. Boom!

Feedback has been overwhelmingly positive, both first hand and via the web survey. One overheard comment at the last course was how this was potentially going to make a MAJOR difference to patient safety, and how EVERYONE should become a SMART instructor.

I was struck by a certain feeling at that point which I can only describe like this...

Remember the film JAWS, when Roy Scheider’s character is tossing rubbydubby/chum/fish heads off the back of the vessel and The Shark appears for the first time, revealing its true size?

The look on his face, and the line “You’re gonna need a bigger boat”.


Exciting times at Simulated Towers!

Till next time, whenever that is, and whatever that is about, stay simulated!


Thursday, 5 May 2016

Programming People


Apologies for the delay in the (planned) fortnightly article; pesky minor inconveniences such as parenthood and holding down a job in a foreign land have been tickling my procrastination bone.

And apologies in advance to all Charlies and Kims out there…

In this article we are going to begin to look at ways that simulators can be programmed.
Programmed, you say? Well, yes. I’m going to be using the word programming a *lot* in this article, and I make no apologies for doing so.
I am NOT a programmer. Real programmers use real programming languages; I’m just using the software as provided by the simulator manufacturer.
However, I’m using the software to generate a nested series of IF>THEN>ELSE loops and triggers in order to change the simulated patient’s condition.
The simplest example I can give is this, in the case of a patient struggling with an unspecified breathing problem:
IF (Oxygen is applied) THEN (Raise Saturation) ELSE (Do nothing)
We can make this more sophisticated by asking how much oxygen is applied. By which method? And when in the scenario? But the principle remains the same.
IF (Intervention=True)
THEN (Alter the patient’s condition),
ELSE (Do nothing, or further deteriorate the patient)
Well, it looks like programming and smells like programming and…oh, delicious, this programming is beautifully cooked! It also works like programming, which is a bonus.
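For the curious, the IF/THEN/ELSE idea above can be written out as genuinely runnable (toy) Python. The function name and all the saturation numbers are invented for illustration, not clinical values.

```python
# Toy version of: IF (Oxygen is applied) THEN (Raise Saturation)
# ELSE (Do nothing, or further deteriorate the patient).
# All numbers are invented for illustration.

def update_saturation(spo2, oxygen_applied, litres_per_min=0):
    """Return the new SpO2 after one time step of the scenario."""
    if oxygen_applied:
        spo2 += 1 + litres_per_min // 5   # crude dose effect: more flow, faster rise
    else:
        spo2 -= 1                          # untreated: keep deteriorating
    return max(0, min(spo2, 100))          # keep the value physiologically plausible

print(update_saturation(88, False))        # no oxygen: saturation falls
print(update_saturation(88, True, 10))     # 10 l/min: saturation climbs
```

The “how much, by which method, and when” sophistication the text mentions is just more parameters and more branches on the same skeleton.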
In my own simulation environment, where we don’t have the time, space or resources available to moulage or make up the patients and rooms, we need to try and introduce quality into our simulations in other ways. Programming the physiological responses of the simulator is one of these ways.
Much more on this later. Woo!

Traditionally, and sadly currently, programming is seen as a substitute for medical knowledge and experience. I believe that, when used effectively, it can only augment and strengthen the delivery of medical simulation.
Operators, those who “drive” the simulators, respond verbally as the patient, and manage the physiological parameters of the patient via the software, fall veeeery broadly into two distinct camps.

  1. Experienced healthcare practitioners (HCP) that have “seen everything”
  2. Non-medically qualified technicians/engineers/IT nerds/weirdos
And programming the simulator has itself traditionally taken one of the following forms:
  1. Start parameters, lung and heart sounds etc. are set up for the beginning of the scenario and any changes or interventions are handled live “On the Fly”
  2. The patient moves from one hard wired state/set of parameters to another after a specific time, specific intervention or at the direction of the session leader. We’ll call this “Storyboarding”
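Approach 2, Storyboarding, can be sketched as a simple state table: the patient sits in a hard-wired state until time passes, the right intervention is logged, or the session leader pushes things along. A minimal Python sketch follows; the state names, vitals and transitions are my invented examples, not any vendor's format.

```python
# A minimal "Storyboard" sketch: hard-wired patient states with
# transitions on time, intervention, or the session leader's say-so.
# All names and numbers are invented for illustration.

STATES = {
    "sick":          {"hr": 110, "bp": "90/60",  "spo2": 90},
    "deteriorating": {"hr": 130, "bp": "80/50",  "spo2": 85},
    "recovering":    {"hr": 95,  "bp": "110/70", "spo2": 96},
}

# (current_state, event) -> next_state
TRANSITIONS = {
    ("sick", "timeout"):                       "deteriorating",
    ("sick", "correct_intervention"):          "recovering",
    ("deteriorating", "correct_intervention"): "recovering",
    ("deteriorating", "leader_override"):      "recovering",
}

def advance(state, event):
    """Move to the next hard-wired state, or stay put if nothing matches."""
    return TRANSITIONS.get((state, event), state)

state = "sick"
state = advance(state, "timeout")               # time passes, patient worsens
state = advance(state, "correct_intervention")  # the team does the right thing
print(state, STATES[state])
```

Notice the rigidity: whatever the team actually does, the patient only ever visits these three states, which is exactly the limitation discussed below.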
Let’s call our experienced HCP Kim, for gender neutrality’s sake, and because I don’t know anyone called Kim. Kim has worked in emergency medicine and on the intensive care wards for many years; they have seen just about every simulatable situation there is. Kim trusts in their knowledge and experience, we trust in Kim’s knowledge and experience, and many patients have trusted in Kim’s knowledge and experience. Kim prefers to drive the simulator “On the Fly”, because of the wealth of anecdotal knowledge available to them. Kim doesn’t like to Storyboard their simulations: they know what septic shock looks like, they have seen septic shock many times, and they are happy to trust themselves to manage the numbers and patient responses without relying on programming. After all, it’s received knowledge that we don’t know which interventions the participants are going to perform, or in which order.

However, from day to day and from session to session, Kim remembers a different septic shock on a different patient, in a different place, with a different level of care, and with different vocal responses and demeanor on behalf of the patient. These inconsistencies play a part in making the rest of the team unsure as to how the session is going to go, especially when it is a session that is likely to be repeated or even run in parallel, or when the other instructors aren’t familiar with Kim. It’s pretty much all down to Kim (but Kim likes this!). They may even feel threatened or slighted by the idea that their expertise isn’t being called upon if the simulation relies on programming (but they shouldn’t).
When the goal of the simulation part of the teaching session is to elicit some sort of Crew Resource Management based teamwork that can be discussed during a structured debrief, it can be an advantage to have some idea of how sick the patient is going to be that’s not based on the whim of the operator. Additionally, despite Kim’s wealth of experience, they are likely at some point to make a mistake. CRM debriefing a group of participants is challenging enough without them folding their arms and thinking, “Well, medically, that was bullshit, so why are we taking it seriously?”
Let’s introduce our Technical Wonk, and let’s call them something gender neutral and non-familiar like Charlie.
Charlie doesn’t trust in their medical knowledge and feels sure they are going to miss something if they drive “On the Fly”. Charlie, however, has consulted patient cases and textbooks and has constructed a Storyboard in the programming that matches exactly that patient on that day in that textbook, as treated at that time by that team.
Patient presents as sick, patient gets worse, the team works, patient gets better. That’s the plan and generally speaking it’s a sound one. The physiological parameters fit the disease state, the simulator can be programmed to have a deteriorating condition over time and at the click of a button, the patient improves to a point where we are satisfied that the learning goals of the session are met and nothing is missed. After all, it is documented medical fact which interventions should take place for a particular disease state, and in which order.
This case can be repeated with the same results across multiple dates, across parallel sessions with predictable consequences.
However, this approach assumes a lot, not least that the participants come to a specific diagnosis and the correct treatment plan. As the physiology is “hard-wired” in obvious stages (sick, deteriorating, recovering), it can leave participants feeling that they didn’t really participate in a way that affected the patient’s recovery directly; rather, the patient became better in some semi-arbitrary manner after an amount of time had passed, or once enough treatment boxes were ticked. It’s challenging enough to CRM debrief a team without participants crossing their arms and saying, “That may as well have been numbers on a screen, and I don’t feel part of a team”.
Clearly, I’m oversimplifying things to emphasize my own point of view (it’s my blog and I’m allowed to!). But we’ve all worked with individuals who remind us of Charlie or Kim, and we recognize and appreciate the strengths and weaknesses of their approach to simulation even if they don’t.
And again, there are many more ways to program simulators and simulation, I’ve just taken two of the most recognizable examples which, handily, happen to be polar opposites of each other.
But they don’t need to be.
As outlined in a previous article, we run an absurd amount of simulation, across multiple and parallel sessions with huge cohorts of students. What we know is that the participants crave consistency; they talk to each other before and after and compare their experiences, even when we ask them not to, y’know?
It is UNFAIR for one group of participants to suffer a poorly executed simulation session when the poor execution is entirely avoidable and down to inbuilt inconsistencies or inflexible programming.
Our simulated patients can’t be based on Kim’s anecdotes, as there is only one Kim.
Our simulated patients can’t be That patient in That case study treated by That team as it’s not That team.
In each of our sessions it is always This patient in This room treated by This team (But also, This patient in That room treated by That team)
So how do we do that?
I’m splitting this article in two; it is long enough already.
So, in the next exciting installment of The Simulated Man, I’ll be introducing the approach to programming that I’ve developed here at Akademiska University Hospital.
In the meantime,
Stay simulated!

Friday, 15 April 2016

Split simulated personality: Parallel simulations

Simulated People are People too!
Here at Akademiska KTC, we deliver upwards of 400 hours of simulation every term. To put that in layman’s terms, that’s a HELL OF A LOT! It’s MADNESS!
We like to keep the group size as small as manageable, and find that maybe 6 or 8 is the maximum we can take and still deliver a debriefing that facilitates an advanced depth of reflection and a focus on Crew Resource Management.
“Thanks” to the enormous and increasing demand for Medical Simulation, we find ourselves running often two, sometimes three and on occasion FOUR simulation sessions simultaneously.
So that’s more repeat sessions, repeated more frequently.
In an ideal world we would like all our students/customers/stakeholders (insert your own description here) to have had a similar learning experience, and have reached their teaching goals regardless of which session they have attended.
How do we go about attempting that?
Four simultaneous sessions, in four different rooms. Importantly, it is THE SAME PATIENT experiencing the SAME SCENARIO in EVERY ROOM.
Much of what the students perceive during a simulation comes from the operator/voice of the patient. Therefore, there is a responsibility for the simulation team to make sure as much as possible that the broadcast information is correct, or at least not misleading.
Operating the simulator, and interacting with the students as the patient is a criminally underappreciated art form. Yes, I absolutely would say that, and I absolutely believe it to be true. So important that it merits its own article, so stay tuned for some serious soapboxing in the coming weeks.
Repeating the same case across four sim sessions is simply a matter of programming/storyboarding the process, right? As long as the numbers show the same thing at the same time then the students should be led along the right lines, and everyone kind of stumbles over the finish line together.
Well, that is assuming that each group of students does the same thing at the same time in every room, that the patient’s disease state is non-dynamic, and that everything follows a linear progression.
In the immortal words of trained simulator instructor ICE-T, “Shit ain’t like dat”
The students should be free to deliver whichever sort of intervention they feel to be relevant to the patient’s disease state as they themselves interpret it. However, and this is a BIG however…
A liter of fluid delivered to the patient in Room A should have roundabout the SAME effect as a liter of fluid delivered to the same patient in Room B, and 5 l/min of administered oxygen should have the same effect on Patient A as it does on Patients B, C and D, and NOT just what the operator *thinks* is right based on their clinical experience. Yes, the operator may have seen patients in this disease state, and yes, their experience is in no way to be discounted, but to ensure parity of experience for our students, anecdotal evidence is not enough. *Much* more on numbers, trends and science on another occasion.
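One way to guarantee that sort of parity is to make every response a pure function of the intervention and the patient's current state, so the same input produces the same output in every room, regardless of who is driving. A toy sketch, with invented (non-clinical) numbers:

```python
# Parity across rooms: make the response a pure function of
# (current state, intervention dose), never of the operator's mood.
# The figure of 8 bpm per litre is invented for illustration only.

def fluid_response(hr, litres):
    """Same litre of fluid -> same heart-rate change, in every room."""
    return hr - int(8 * litres)

# Rooms A and B, identical patients, identical litre of fluid:
room_a = fluid_response(125, 1.0)
room_b = fluid_response(125, 1.0)
print(room_a == room_b)   # no operator whim involved
```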
Feelings, nothing more than feelings
Patients are, of course, more than a set of standard responses coupled to a set of trends in their physiology.
When we run standardized patient cases (using real people made up as patients) we often employ professionally trained actors to convey the emotion of being a patient in a disease state. So why are we neglecting this important aspect of patient care when we run patient cases using a patient simulator? Patient simulators can be hard to communicate and empathize with, due to how they look and their lack of mobility. It is often left to the simulator operator to interpret and convey the patient’s mental state how they see fit. Had a bad day? Grumpy patient, maybe.
Fine, I guess, for one-off simulations, but let’s remember: THE SAME PATIENT in the SAME SCENARIO in EVERY ROOM should mean the SAME EMOTIONAL STATE for every instance.
Realism is as important as repeatability, in every sim session we run.
A recent post from the excellent SimGhosts blog (Hail! My brothers and sisters in arms!) shared a feature not commonly found on sim scenario templates. It’s taken from a journal article regarding integrating actors into a simulation program and is mainly aimed at the SP world.
The original looks like this
It can be agreed in advance with the Sim Team what sort of state the patient is in when encountered, and the appropriate numbers ringed; the sim operator then has a better idea of how to deliver their lines and frame their responses.
So far so good.
I’ve developed this strategy further and would like to share my suggestions.
Firstly, nine categories of patient feelings are maybe too many when considering all the other info the simulator operator is dealing with. I’ve gone for six.
Secondly, the info is presented in the form of a simple table; I’ve gone for something a little more intuitive.
Thirdly, what happens as the scenario develops? Does the patient become calmer? Angrier?
My version looks like this
It’s a simple spider diagram easily created using Microsoft Excel (I say easily, we work with Office Online and it’s as *easy* as pushing water uphill with a fork)
I actually took the idea from marriage-destroying football management simulation Football Manager, where it is used as a way of comparing players’ attributes.
Firstly, six categories of patient feelings. Simpler, broader, less open to interpretation.
Secondly, a spider diagram allows for an at-a-glance overview of a patient’s emotional state. More coloured area equals higher emotions, more drama and overacting! (Hopefully not.)
Thirdly, two shades of colour. Why? The darker shape represents the patient’s INITIAL emotional state, and the lighter shape represents the patient’s potential emotional state as the case develops. With some clever formatting of lines and transparency, overlapping trends don’t need to be hidden under each other.
In this medical case, the patient exhibits the symptoms of ongoing sepsis/septic shock.
Emotionally, we can see the patient is mostly stressed and increasingly confused as the scenario develops.
A patient in a diabetic coma, or heavily intoxicated might present with a 0 or a 1 in all the categories, but become extremely confused or even angry when awake.
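For anyone who would rather script the spider diagram than wrestle with Office Online, here is a small Python sketch that generates the underlying data: six categories, an initial (darker) shape and a developed (lighter) shape, as polygon vertices you could feed to Excel or any charting tool. The category names and scores are my invented examples for a septic patient, not the real course material.

```python
# Spider-diagram data for a patient's emotional state: six categories,
# initial and developed scores (0-5). Names and scores are invented
# examples, not taken from the actual scenario templates.
import math

CATEGORIES = ["Stressed", "Confused", "Angry", "Scared", "Withdrawn", "Calm"]
initial   = [4, 2, 1, 3, 1, 1]   # darker shape: state at the start
developed = [5, 4, 2, 3, 2, 0]   # lighter shape: state as the case develops

def spider_points(scores):
    """Map scores to (x, y) vertices spaced evenly around a circle."""
    n = len(scores)
    return [
        (s * math.cos(2 * math.pi * i / n), s * math.sin(2 * math.pi * i / n))
        for i, s in enumerate(scores)
    ]

# At-a-glance trend per category for the operator's briefing sheet:
for name, before, after in zip(CATEGORIES, initial, developed):
    trend = "rising" if after > before else "steady/falling"
    print(f"{name:>10}: {before} -> {after} ({trend})")
```

Plot the two point lists as filled polygons (one semi-transparent over the other) and you have the two-shade diagram described above.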

We have started using this approach with some success during the running of multiple sessions. The success is purely anecdotal, as I can’t think of a way to measure parity of teaching across multiple scenario sessions without using the time travel machine that I am going to have invented last week tomorrow.
That being said, using this device could be argued to contribute towards parity of experience for all our students, and that feels important to us.
Coupled to this, we are looking to program the parallel patients’ physiological responses to interventions in a novel way.
We are not looking to standardize the sessions, but we are looking to ensure that we can standardize our delivery without sacrificing the creativity and spontaneous nature of Medical Simulation.

Next up on thesimulatedman
Programming People : Pros, Cons and tricks
See you then, stay simulated!

Friday, 1 April 2016

*Published* Advances in Physiology Education *In Press*

Simulation as Science 

In my simulation career, I've been lucky enough to work with some very talented and knowledgeable people, and in work environments where our combined talents and knowledge were given the opportunity to flourish and grow.

During my time at Bristol University I was part of a team developing a novel approach to medical science based simulation, and we continue to be proud of the steps that we made.

I'm not going to go too much into the details now, but I'm sure to bang on about it at length in good time.

However, I'm very proud to say that Dr Richard Helyer and my humble self have had a short communication on the subject accepted for publication in a forthcoming issue of Advances in Physiology Education.

I am happy to present a draft, gratefully received.

Progress in the utilisation of high-fidelity simulation in basic-science education

Richard J Helyer* PhD & Peter J Dickens**

*School of Physiology, Pharmacology & Neuroscience, University of Bristol. Bristol, BS8 1TD, UK
**Akademiska University Hospital, Uppsala, Sweden

High-fidelity simulators with responsive, functional physiological models are typically used to teach clinical skills and clinical emergencies. This teaching is usually delivered to cohorts other than those in early years of undergraduate courses or those studying the basic-sciences that underpin medicine such as physiology and pharmacology.  It is now some 15 years since seminal papers by Euliano and others (Euliano, 2000, 2001; Zvara et al., 2001) first described the utility of using human patient simulators (HPS) to teach key physiological principles, 10 years since adoption of HPS (CAE, Canada) at the University of Bristol into basic-science teaching to early-years undergraduates, and some 5 years since the Bristol approach was summarised by Harris et al. (2011). 

In Bristol over the past 10 years we have continued to develop simulation as a core part of the curriculum embedded alongside traditional lecture, tutorial and practical class teaching (Harris et al., 2011). We currently use HPS to teach seven separate scenarios in physiology and pharmacology across three basic-science and three professional programmes including medicine. Over 1000 students per year receive some form of simulation teaching in their first two undergraduate years. Final year basic-science students are also able to select ‘laboratory’ projects using simulators to explore in-depth aspects of integrated human physiology that would otherwise be impossible eg. altitude and descent to depth as similarly reported by Hyatt & Hurst (2010).

Despite these exciting innovations, high-fidelity simulators with a functional physiology are still under-utilised in basic-science teaching, with only a few reports in the literature (Hyatt & Hurst, 2010; Gabi et al, 2013). In fact, the converse is probably true in that they are more typically utilised in teaching basic skills that do not require high-fidelity models – the ‘fidelity trap’ (Lampotang, 2008). Further, there may be a misconception as to what is actually being taught with simulation. Teaching that demonstrates changes in heart rate and blood pressure during bleeding to nursing students, although clearly valuable, is far removed from using simulated physiological data to effectively demonstrate the action of Starling’s law during haemorrhage. The latter is an example of high-fidelity teaching aimed at complex principles that students may find difficult. The potential for using simulators in this type of teaching was first shown by Euliano et al. (1997) & Euliano (2000, 2001) and further developed at Bristol (reviewed by Harris et al. 2011) and a small number of locations elsewhere (including Waite et al., 2011).

The question remains as to why high-fidelity simulation is still under-utilised in teaching the basic sciences despite this potential and its increasing adoption among teaching hospitals and university departments. A number of factors may be involved. First, developing physiologically accurate scenarios can be difficult and time-consuming. Scenarios should be validated against published human data (Lloyd et al., 2008; Harris et al., 2011), which itself may be scarce, and the model modified in order to improve fidelity. Second, there are few simulators with an effective, integrated physiological model that produces the data required for full exploration of physiological principles, and these are expensive in terms of both basic cost and servicing. Other, less expensive, commonly adopted simulators may fall short in terms of integration of even the most basic cardio-respiratory responses. Third, faculty may be wary of using simulator models versus traditional teaching or non-integrated computer simulations, which may produce accurate but limited data in terms of homeostatic integration with other systems, e.g. an isolated heart model. In Bristol, concerns among faculty about the fidelity of the pharmacological models of the HPS versus stand-alone computer simulations for calculating dose-responses and drug interactions have hampered wider adoption, despite the attraction of being able to demonstrate effects across systems. Finally, the complexity of scenario creation may dissuade even the keenest developer. It is very easy to produce a simple model of, say, blood loss that can be demonstrated at a superficial level. It is very hard to develop one where all the relevant physiological variables closely match published human data.

Matching the data produced against the literature is an example of the highly accurate, validated approach taken in Bristol. To add a further level of fidelity in terms of simulating homeostatic interactions, we adopted a 'dogma' that our scenarios should be exclusively 'model-driven'. In theory, this means that layers of changes and perturbations can be applied over the primary scenario. For example, in demonstrating the response to low inspired O2, rather than simply setting controllable variables to simulate the response, data were based entirely on the actual, real-time response of the simulator via its 'lung'. To do this, the basis must be a reasonably accurate model with responses that can be fine-tuned by applying gains and factors to variables rather than overrides, and certainly without simply presenting static data to students, for example when blood gases are requested. This, though, produces a further level of complexity.

The question remains: even for teaching basic science in some detail, is this level of model-driven fidelity required? Are even physiologists using simulation effectively caught in the 'fidelity trap'? Has the 'fidelity trap' hampered wider utilisation of simulation in basic-science teaching? It is evidently far more practical to apply overrides and 'fixes' to models to produce data at values that are valid in terms of the literature and, as importantly, in terms of what students might expect to see in a textbook. This approach is also repeatable, as the data will be identical for each session – in the model-driven HPS equipped with a lung, respiratory data in particular vary from run to run and drift over time. Further, setting variables to fixed values avoids having to work within a complex model with feedback loops, where changing one parameter will have knock-on consequences for another. In other words, inconvenient homeostatic algorithms can be circumvented. Finally, we could ask why use a simulator at all? This question is beyond the scope of the current discussion!

The future for high-fidelity simulation in basic-science education may lie in finding a middle way. Some lower-cost simulators without the ability of the HPS to effectively exchange gases or operate with a ventilator utilise similar physiological models (it should be noted that not all do, e.g. in the presentation of blood-gas data, so careful choice of a mid-range platform is required). In fact, a mixed approach to producing teaching scenarios, with some data produced by model-driven aspects of the scenario and others determined by overrides, can produce data where a dogmatic, purely model-driven approach fails. An example is demonstration of the classic alveolar gas equation derived by Fenn, Otis and Rahn, which shows the relationship between O2 and CO2 (learning opportunities described by Curran-Everett, 2006). An accurate demonstration of this equation is not possible using a CAE HPS with a lung. However, using the HPS software model alone, or with a manikin that does not have a lung, extremely accurate results can be obtained compared to published human data (Helyer & Lloyd, 2009).
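For readers who have not met it, the alveolar gas equation in question can be sketched in its standard simplified textbook form. This is not the HPS implementation, just the usual approximation with textbook sea-level values:

```python
def alveolar_po2(fio2, paco2, pb=760.0, ph2o=47.0, rq=0.8):
    """Simplified alveolar gas equation:
        PAO2 = FiO2 * (Pb - PH2O) - PaCO2 / RQ
    Pressures in mmHg; FiO2 as a fraction; RQ = respiratory quotient.
    Standard textbook values: Pb 760 mmHg, PH2O 47 mmHg, RQ 0.8."""
    return fio2 * (pb - ph2o) - paco2 / rq

# Breathing room air at sea level (FiO2 0.21, PaCO2 40 mmHg):
print(round(alveolar_po2(0.21, 40.0), 1))  # ~99.7 mmHg
```

Lowering fio2 (altitude) or raising paco2 (hypoventilation) immediately shows the O2–CO2 trade-off the students are meant to explore.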

A final consideration remains for even the keenest adopter, and it is a prevailing question: does using simulation improve learning outcomes? Here there is no clear evidence in the basic sciences. There is little doubt that simulation in the broadest sense is an effective tool for improving learning and outcomes in medical education (McGaghie et al., 2011). This is probably most apparent in disciplines assessed via achievement of skills and day-one competencies. In other areas, the relatively scarce evidence centres on improving the confidence of students or on preferred learning methods (e.g. Harris et al., 2011), rather than on measurable improvements in examination results. This is not limited to simulation, as assessing impact on learning in terms of measurable outcomes is notoriously difficult. We may take some solace by considering whether this is really an issue in a climate where student satisfaction and learning-method preference seem to be becoming the prevailing drivers.

We conclude that high-fidelity simulation in basic-science education remains an under-developed resource with considerable potential. With careful matching of hardware and software to teaching and learning objectives, it remains a potentially highly effective tool.


Curran-Everett, D (2006). A classic learning opportunity from Fenn, Rahn, and Otis (1946): the alveolar gas equation. Adv Physiol Educ. 30: 58-62.

Euliano, TY, Caton, D, van Meurs, W & Good, ML (1997). Modeling obstetric cardiovascular physiology on a full-scale patient simulator. J Clin Monit. 13: 293-297.

Euliano, TY (2000). Teaching respiratory physiology: clinical correlation with a human patient simulator. J Clin Monit Comput. 16: 465-470.

Euliano, TY (2001). Small group teaching: clinical correlation with a human patient simulator. Adv Physiol Educ. 25: 36-43.

Harris, J, Helyer, R & Lloyd, E (2011). Using high-fidelity human patient simulators to teach physiology. Med Educ. 45: 1131-1162.

Helyer, R & Lloyd, E (2009). The response to hypoxia: a refined Human Patient Simulator (HPS) model to demonstrate high altitude physiology. Proc Physiol Soc 15.

Hyatt, JP & Hurst, SA (2010). Novel undergraduate physiology laboratory using a human patient simulator. Med Educ. 44: 523.

Lampotang, S (2008). In: Manual of Simulation in Healthcare, ed. Riley, R. Oxford University Press.

Lloyd, E, Helyer, R, Dickens, P & Harris, JR (2008). Use of a high fidelity Human Patient Simulator to demonstrate the control of ventilation. Proc Physiol Soc 11.

McGaghie, WC, Issenberg, SB, Cohen, ER, Barsuk, JH & Wayne, DB (2011). Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med. 86: 706-711.

Waite, GN, Hughes, EF, Geig, RW & Duong, T (2013). Human patient simulation to teach medical physiology concepts: a model evolved during eight years. J Teaching & Learning with Tech 2: 79-89.

Zvara, DA, Olympio, MA & MacGregor, DA (2001). Teaching cardiovascular physiology using patient simulation. Acad Med. 76: 534.


Richard Helyer, PhD. School of Physiology, Pharmacology & Neuroscience, University of Bristol, Bristol BS8 1TD, UK;