Happy Hour with Bundle Birth Nurses
#83 Turning Off Pitocin in Active Labor?! What’s the Evidence with Jen Atkisson
In this episode of Happy Hour with Bundle Birth Nurses, Sarah and Justine are joined by Bundle Birth Educator and expert witness Jen Atkisson for a practical conversation about how nurses can confidently read, interpret, and apply research in clinical practice. They explore common biases in studies, the difference between statistical and clinical significance, and how research can be both empowering and misused at the bedside. Jen breaks down a recent meta-analysis on discontinuing Pitocin in active labor, offering insight into what the evidence actually shows and how we can use our brilliant nurse brains to elevate care. Thanks for listening and subscribing!
Helpful Links!
- Reduced Risk of Cesarean Delivery with Oxytocin Discontinuation in Active Labor: A Systematic Review and Meta-Analysis
- Shifting the Pitocin Paradigm Class
- RNC-OB Exam Prep Class
- Subscribe to the Bundle Birth Buzz, our monthly newsletter
Justine Arechiga: Hi, I'm Justine.
Sarah Lavonne: I'm Sarah Lavonne.
Justine: We are so glad you're here.
Sarah: We believe that your life has the potential to make a deep, meaningful impact on the world around you. You, as a nurse, have the ability to add value to every single person and patient you touch.
Justine: We want to inspire you with resources, education, and stories to support you to live your absolute best life, both in and outside of work.
Sarah: Don't expect perfection over here. We're just here to have some conversations about anything birth, work, and life, trying to add some happy to your hour as we all grow together.
Justine: By nurses, for nurses, this is Happy Hour with Bundle Birth Nurses.
Sarah: This is an exciting episode for me, because many of us know that an expectation of our profession is that we are up to date, that we are reading the medical research, and that we know what's going on in our field. That's one reason to subscribe to our newsletter, because we actually give you the OB updates of what's in the news and what's the new research coming out, et cetera. We're trying to help you with that. As a responsibility of our profession, we have some brilliant brains in our heads that we can apply to our clinical practice, to how we function as nurses, and to how we continue to level up our practice.
We have Jen back, one of our educators. If you haven't been to her Shifting the Pitocin Paradigm class, that is a must-go-to. She also teaches our RNC-OB exam prep. Many of you have taken classes from her. You know that she is our research guru. She also has a background in expert witnessing. She is an expert in our profession. Today, we are going to talk about research. I know that sounds really sexy at face value.
Really, what I want to do is talk about the practical ways that we can use our brilliant brains to assess research, to not gatekeep what's out there, but be able to read a research article and know what's going on, and know how to look at a study and say, is this valid or is it not and really take that then, therefore, into our practice. Thanks for coming back. I guess, it's now a season-wide thing, that every season, we have you. We're excited to talk about this today.
Jen Atkisson: I've got a good article, too, today to talk about. I am a big, passionate person about research. I think it all started around the time I started doing expert witness work. Also, I was a hospital educator, and part of that is writing policies and things like that. I think a really common experience nurses have is sometimes we're going to more conferences, we're getting more education, then we have more of those opportunities to hear the updates than some of our other clinicians that we work with. It's not talking down, but it's a very busy life being an OB or being a midwife.
They're in clinic, and they're in the OR, and they're in all these places. Anytime they want to leave for an educational opportunity, that's cutting into their practice time. A common experience is that nurses can hear about things first, especially with the advent of a lot of education being on social media. That's creating some new, interesting conversations for us with our clinician colleagues. Research is really done mostly through large medical schools and large teaching centers; that's where a lot of stuff comes out. As bedside nurses, this stuff can trickle down to us, and we want to maybe implement it.
Some of the things that we hear, that are new and that we want to try, might be in conflict with old behaviors, old knowledge, or old ways of being. We learn how to read research at some point in nursing school. You have to read articles and talk about them, or write a paper on one. I don't know that we keep up that skill. We can feel really intimidated when we bring something to a provider, and they're like, "What? I don't want to do that practice. Show me the research telling me why I shouldn't do this thing." I think the common one is why we don't do scalp stim during a prolonged decel.
It's like, well, we can't do that research because it would be unethical. We know physiologically that's a dangerous thing to do. No one could design that study and get it through. Since the '70s, all research has to be approved by something called the IRB, the Institutional Review Board. They say whether you can do your study or not. It has to be ethical, and it would be unethical to do a study on something that we know is actually dangerous.
Let's see what happens if we do this dangerous thing. You could never do a study that's like, "Let's see what happens if we put a pillow on someone's face and then sit on it." They're going to die. [laughs] We don't need a study for that. As nurses have now gotten predominantly bachelor's degrees, we're coming out with these college educations, we have a high level of knowledge, and it's really helped democratize the relationships that we have in the hospitals. In order to really capitalize on that, we have to be good at reading the research and understanding, I think, all of the different terms that come up and the types of research that are out there.
Sarah: I think another thing we see, though, is research being weaponized at the bedside, or at the nurse's station, really, like, "Well, the research says this." It's a very quick shutdown response, at least I feel, like, "Oh shoot. Do I know that research? What is that research?" It can feel like, "Okay, well, the research says," and then we repeat that. "Somebody said the research says that blah, blah, blah, blah," whatever it is. Then that gets perpetuated.
Jen: Oh, 100%. Yes, and then you're like, "Who, where, what?" It puts us on our heels. I think that's really the purpose when people do that, to put the nurse on their heels. To shut off the questions, like you said. It already triggers that not-good-enough or not-smart-enough feeling, and it really tries to reestablish that hierarchy that maybe is starting to go away a little bit. Then, automatically, the onus falls on the nurse to refute it. You can lay it down, like, "Well, the research says this." What nurse goes, "Oh, really? Show me the research"?
Sarah: The research also says this, which is even more of like a, "Damn."
Jen: Or just being like, "No, it doesn't." [laughs] In my work as an expert witness, that happens pretty frequently: counsel for the opposing side, whether I'm on defense or plaintiff, will be like, "Well, are you familiar with an article that says da, da, da?" I have to be like, "I don't think that exists. You need to show me that article if I'm going to have an opinion on it." We do see it also happening at the nurse's station. Often, what's happening is people are maybe reading just the abstract. They're not reading the whole thing, which is very tempting.
You're just like, "Oh, it's the CliffsNotes. It's the TLDR. It's the quick down and dirty." It's a totally fine place to start. It's just, it's probably not enough.
Sarah: Can you explain why it's not enough?
Jen: Yes, so an abstract is always going to have certain parts to it. It's going to have an objective. It's going to have the design. It's going to have the results, and then it's going to have the conclusion. After the abstract, we all jump to the conclusion. The conclusion is the author's interpretation of their results. Then there's what's called secondary analysis, which is where other people come and look at those results. They're like, "Would I have come to the same conclusion?" I think as nurses, we default like, "Well, these are these big, smart people. They got money to do this study. Their conclusion must be right."
We see some very famous secondary analyses that show, "Actually, that's not what we think that study says at all. If I looked at this data, I would take away very different things from it." That brings us to something called bias. We have to be really aware of bias in research: aware of our own biases and the biases that just exist out in the world.
We're not talking so much about racial bias or socioeconomic biases or something like that; research bias is a whole different thing. One of the main kinds is something called selection bias. That means we're picking certain groups over others, and the sample is just not going to be representative of the population. There's also something called publication bias: we know that studies with positive or significant results are going to get published more than those with negative or no findings.
If you study something and you're like, "That doesn't make a difference," that's less likely to get published than something really big and exciting. Because of publication bias, if you want to get your paper published, there's a motivation to oversell the findings rather than being humble or realistic about them. There's definitely a pull to say, "I did this. These are my results, and this is what they mean," and make a much bigger deal of them than maybe they really are. That's why it's important to read the whole thing.
Justine: I wanted to say, this is Justine here. I went to an ADN program, and we didn't do much research until I went to my bachelor's program. Then the bachelor's program, I zoomed through it online. I feel like I don't read, and I'm in an uphill battle of learning how to read research. One of the things that you have taught me, I guess through AWHONN, but through you, was about that huge, I feel like, unethical research study that came out about laboring down and hemorrhage. I feel like what they did was unethical. If you want to talk about it a little bit, what bias did they have? Was it just publication bias? They wanted to show it increases?
Jen: No, the type of bias that we really see there is called observer bias. That is where you do a study and you think-- We all have observer bias. This is probably one of the biggest ones to be aware of, also when we're selecting research to highlight. I teach the Pitocin Paradigm class, and I have to be very mindful not to just pick all the stuff that shows Pitocin is really bad. That would be a selection bias or an observer bias.
In this study about laboring down, they claimed they ended the study early because all these women were hemorrhaging. The real reason, if you read the paper, that they ended the study was because continuing it probably wasn't going to show any-- They got the results that they were going to get, and continuing the study wasn't going to change those results. The data were that clear that they weren't going to change. What was really clear, actually, was that it's individual. People can push however they want to push. They can labor down or not labor down. Even their groups weren't really assigned super correctly.
You could be assigned to the laboring down group, but if you got the urge to push in 20 minutes, you started pushing. You never actually labored down the full hour. Then, some of the people who were assigned to push immediately didn't start pushing for up to 40 minutes. The numbers, I think it was 43 minutes, are different. There were people who ended up in the labor down group who pushed earlier than people in the push-immediately group, because that group just didn't get around to pushing. I don't know that it was a particularly well-done study, but it hit the news waves like wildfire, and it's changed practice.
So you have non-statistically significant findings in a lot of ways, especially around hemorrhage. They had three different measures for hemorrhage. In two of the three, there was no difference in people's hemorrhage. In one of the measures, there was a non-statistically significant change in hemorrhage rate. You have these non-statistically significant findings that have now become very clinically significant. That's another thing to be aware of: is this statistically significant, and is it clinically significant? Those are two different things. You can run a study that shows statistical significance, but it's like, do people do better with orange juice or grape juice?
Clinically, it doesn't really matter. You're just picking grape juice. It's not going to really change anything for us. Other things can not quite reach the level of statistical significance but still be very clinically significant. That's like this article that I wanted to talk about today, because I think it highlights a lot of these principles. The study is called Reduced Risk of Cesarean Delivery with Oxytocin Discontinuation in Active Labor: A Systematic Review and Meta-Analysis. Those are two big terms, and I'll tell you what they mean. A systematic review is a rigorous, structured way of gathering and synthesizing the available research on a specific question.
They say, "Does turning off Pit reduce C-sections?" Then they go out and look through all the different databases. They find every study. This is like the research papers we did in nursing school. We were doing systematic reviews. Nurses are very good at this. When you write a policy, you're doing a systematic review. You're like, "I'm going to write a Pit policy, so I've got to go find everything on Pit."
Sarah: Physiologic birth.
Jen: Physiologic birth, systematic review. The AWHONN practice guidelines, systematic review. They're pulling all of this stuff together. I would say this is your highest level of evidence. It's also the one that nurses are actually super good at. These are the rabbit holes [chuckles] they go down. You define a question, you set predefined criteria, you search all the databases, and you summarize all of the findings that you get, but you don't necessarily combine them statistically. A meta-analysis goes one step further: that's where they statistically combine the data from all of these studies.
They weight them depending on how big they were, and they give you a pooled result. They did both in this paper. They had 16 studies from all over the world. They did find, when you combined all of the studies in the meta-analysis, a statistically significant difference: turning off Pitocin in active labor reduced cesarean rates by 20%.
Justine: Whoa.
Jen: Here's where it gets juicy, though. This is the juicy part. This is why you got to go read the whole thing, because they say in their conclusion, yes. Now, when they looked at all of these different studies, there was a couple that they were like, "These studies might've had some different biases in them." One of the big things that they looked at was most of these studies were not blinded because it's Pitocin. How do you blind a study on Pitocin? You want to know if you're beep-boop-booping saline or oxytocin. [laughs] It would be really hard, I think, ethically. The ones that were blinded were not from the United States and maybe have different ethical standards.
It wouldn't be possible to do a blinded Pitocin study in the United States. You just can't do that. It's too dangerous of a medication. It's labor. The implications are just too dangerous. You couldn't do it. When they got rid of those that weren't blinded appropriately, they found that there was still a positive effect. It just didn't reach that clinical significance. We're not left with, "Oh, it doesn't matter." We're left with, "At best, it's statistically significant. At worst, it's beneficial, but we can't say if it's going to work for everybody."
They also looked across all of these studies, and they found that in less than 28% of the patients who had their Pitocin turned off, there were criteria for restarting Pitocin. They turned it off, and in less than 20% of people, I think it was 15%, the Pitocin did get restarted. Across everybody, it does lengthen labor by about 30 minutes, but that includes people who needed it restarted and people who didn't. There's going to be some teasing out of data. What this will breed is another study, because we're not done here. We're like, "Okay, what did that 20% who needed their Pitocin restarted look like? Were they on mag? Were they GDMs? Were they all 37--
Sarah: Inductions?
Jen: Yes, well, these were all inductions.
Sarah: Oh, they were?
Jen: Yes.
Sarah: Okay. Not augmentations.
Jen: There's no augmentations.
Sarah: Oh, okay, perfect.
Jen: Post-dates and PROM were the two main reasons people were being induced. There's going to be a different study now, because the clinical takeaway is that it probably is a good idea to start thinking about turning off Pitocin. If we're going to counsel people this way, we want to know: what is their likelihood of having their Pitocin restarted? Is that a problem?
Sarah: What are the criteria for restarting it, that it's consistent?
Jen: It was two hours. They restarted it if more than two hours had gone by and they hadn't delivered.
Justine: Was there anything about how long they pushed, or were they pushing the whole-- Did they do delayed pushing, like passive descent?
Jen: Different studies did different things, and that wasn't something they were looking at. The main thing they were looking at was reducing C-sections, and their countermeasure was neonatal morbidity. It was like, did we get more vaginal births, but now we have all these babies getting shipped to the NICU? Again, you can do this systematically. People did these studies, then they combined all these studies, and now this meta-analysis is going to breed new studies. The idea that we're ever done is a fallacy. I do--
Sarah: Or that we know everything already.
Jen: Right. We know the physiology of physiologic birth. We know the Ferguson reflex. We know that labor should be self-sustaining. Really, the story of the study, when they talk about why they decided to do it, is basically that nurses were saying, "We want to turn off Pit." There was one small study that was done. It made it into the AWHONN induction and augmentation guideline, which we talk a lot about in Pitocin Paradigm. Nurses read that, and then they were like, "Well, it says here we can turn it off in the active phase." This makes its way to these researchers, who are like, "They want to turn it off. Is this a good idea or not? Let's study this."
You've got some doctors, all female based on their first names, who were like, "Oh my God, these nurses are getting up our asses. Maybe we should look at this." They did. I just think that their review is super fair. Obviously, it aligns with my belief system. Then they published it in AJOG. This is really, I think, an example of the best of how we can work together, and the best of how we can use physiology, physiologic knowledge, and scientific rigor to get to a truth. I'm a big fan. I really liked it. You can tell I like it. For a lot of people, it was no surprise, but it is helpful to have these things out there and published.
Justine: What I like about when you talk about research, too, Jen, is I feel like there is always an, "And the tea of the study was."
Jen: It's so true.
Justine: To the people listening, how would you recommend they find the tea? How do they make it interesting? Because it is not interesting at face value.
Jen: Another area that I think is really interesting, and again, this is a nurse-initiated research question, is Tums in labor.
Justine: You're all about Tums in labor.
Jen: I'm all about Tums in labor. I'm all about it. I'm all about the fact that it's not evidence-based. Evidence-based is another term that gets thrown around, and we have to be clear on what evidence means. There are three parts to evidence-based practice. One of those is research. Another is our experience. As labor nurses, I've seen some things. I remember a patient; it's the Spidey sense. That is evidence. My 18 years of nursing is evidence. Sarah and Justine's hundreds of births that they've attended, that's evidence. We put that away.
We've also got our research studies, like I said, that contribute to evidence. This is like a three-legged stool. The third thing is the patient: the applicability to an individual patient. We might say, "Oh, you're going to need Pit in the end," or something, and the patient will be like, "Oh no, as soon as they break my bag, I deliver. I don't want that. I just want you to break my bag, and I'm going to deliver." Would we say, "Well, no, the research says we have to do X, Y, Z"? No, we're going to say, "I believe you." [laughs]
The research says that people are going to make one centimeter of change per hour. Then this patient will come in and be like, "No, once I hit four, I'm done in an hour." We'd be like, "No, no, no. The evidence doesn't show that." [laughs] Yes, but it clearly does, because we're talking about this individual person. Right now, we don't have research evidence for Tums in labor the way we've been using it, which is: they get stalled out, and then we have them chew down a bunch of Tums.
Nurses at at least two different hospitals, one in Wisconsin and one in New York, have designed large studies where they're starting people on calcium, on Tums, at the beginning of labor, to see if they need less Pitocin, or if their labors go shorter, or what. Right now, the way we're using it is like, "Oh, oxytocin sits on calcium channels. Maybe if we give them more calcium, it'll help," but they're already 12 hours into the labor. These are two hospitals whose nurses were like, "Tums, what's up with that?"
Now they've designed studies, and we're going to have results within the next 6 to 12 months. We're going to have an answer. I'm going to have to change that slide, or not change that slide, in Pitocin Paradigm. [laughs] Either way, I'm very happy; I'm so glad. Isn't this just the best part of nurses getting curious?
Sarah: Yes, it's a scavenger hunt.
Jen: Yes, I love it. Their hospitals and their providers probably were like, "Sure, why not? Let's study this." We had a lot of nurse research, I would say, in the 2000s and 2010s. It went down a little bit, and we're seeing it starting to come back. There are some really amazing, fabulous nurse researchers out there doing cool stuff. As far as the ins and outs of labor-y type things, we're starting to see these happen more at the unit level, as opposed to being designed by PhD students who are like, "Oh, I need a PhD project. Let's study calcium in labor." No, these are on-the-floor nurses wanting to find these answers out. I just think that's super cool.
Justine: I'm a nurse, and I'm looking at this, let's say the one that you're just mentioning about Pitocin and discontinuing Pitocin in active labor. I get handed a study or maybe somebody's like, "Well, the evidence says," and you're like, "Well, what evidence?" They hand you a research study. What's your quick and dirty process for looking at a research article and going, "Yes, valid, ooh, likey." Versus, "Oh, no, no, no."
Jen: Oh yes, my red flags and green flags, as I call them.
Justine: Yes, please.
Jen: The first one is you always want to look at who wrote it, and I do look at where they're at. Are they at a hospital? What have they published previously? That's going to give me a little bit of an idea of whether this person has an angle. Is there an agenda here? Unless you're reading a lot of research, you're not going to recognize some of these names, but I start to be like, "Oh, the same person who did that is now on this." You're like, "What are you doing here, buddy? Pal."
For this oxytocin one, I didn't actually recognize any of these people, but they do come from Washington University in St. Louis. Kathleen Rice Simpson comes out of there. They're talking about Pitocin, and there's the Pitocin queen, our grand leader, Kathleen Rice Simpson. You're like, "Okay, they're going to give this a fair shake," is my impression. I don't know if that's accurate, or if they even know each other, but that's what went through my mind.
The second thing is, I look at the study design and see, does the study design make sense for the question they're trying to answer? Or is this even a question we need an answer to? Is this a "so what"? Does it have no clinical significance? Has this already been answered? Because again, do we need to study everything? If we know the physiology, do we need the study? Then I look at their results. I do look at how big the sample size is; I think that's important. You look at their actual results and then see if those match their conclusions.
Again, you'll start to see some of these issues in the study design. A very famous one, and a big issue you can run into, is something called the Hawthorne effect. That's another type of bias to look for: just by participating in the study, just by being part of it, people get better outcomes. There are two famous examples. One is "six is the new four."
In that study, we saw that by giving people a little bit longer, we had higher vaginal birth rates. But everyone knew they were in a study designed to improve vaginal birth rates. Was it really not calling active labor until six centimeters instead of four, or was it the fact that these people were just on their best behavior? Because since "six is the new four," everyone's adopted it; nobody's calling active labor before six centimeters, really, especially in an induction. Has it actually improved our vaginal birth rate? No. Can we pretty well assume there was just a massive Hawthorne effect?
Another one happened when we were trying to study something called the STAN monitor in the United States; they used it in Europe. To get FDA approval, we had to show, is this better than standard fetal monitoring? To make sure you were really comparing the STAN monitor versus EFM, you had to make sure all of the nurses knew EFM. They all got the AWHONN fetal monitoring class. They were all tested. It was very rigorous training on fetal monitoring that they maybe had not had before. What happened was both groups improved in outcomes, but they couldn't show a difference.
That's the Hawthorne effect: there was a benefit just from being part of the study, and you couldn't really compare it to typical use at another hospital. Really, the story is: train everybody in fetal monitoring better. Those are some of the red flags: the "so what" of it, do your results match your conclusions, is there some sort of bias at play? When they talk about their conclusions, like in this study about stopping Pitocin, they're very clear that when we include all of our studies, we reach statistical significance.
They were kind of on the fence about three of them, so if we just go ahead and throw those out, yes, it does impact statistical significance, but it doesn't change the direction of the findings. That's the forthrightness you can find in people's discussions. When they talk about controlling for biases, that's always a green flag; they bring a level of awareness to it. Versus our delayed pushing versus immediate pushing friends: Kathleen Rice Simpson did the secondary analysis and was like, "Hey, I don't really think that's what this is saying," and showed all this stuff. They fully doubled down, did another type of study, and did not include her secondary analysis; they purposefully left it out.
Then they were like, "No, see, we're still right." There's just really no self-awareness, or they clearly have an agenda. If you're tracking the research, it's almost like soap operas, a little bit.
Sarah: Truly.
Jen: I sound like a dork. Those are my red flags, green flags for looking at this stuff.
Sarah: You made the comment of, has this research or this study been done before? I find that there is a pretty strong bias against older studies, and we hear it because it's what we learned in nursing school. It's what we say: "Well, it's an older study, so therefore it's invalid." What would you say to that?
Jen: Oh my gosh. I'm working on my pre-conference workshop on fetal monitoring. I have a lot of old research, because once we do the research, we're not going to repeat it.
Sarah: You've said that the phrase is it's proven science.
Jen: Settled science? Settled science. I love that.
Sarah: Oh, I need to start using that for physiologic birth stuff, because they're like, "Well, what's the recent evidence?" I'm like, "The pelvis has existed for all of time." That's where physiology is another area of evidence. Are we going to sit and prove that the ligaments are innervated with smooth muscle fibers? Like, "What?"
Jen: Yes. We're not going to just repeat the same study every five years to keep it current; that expectation of constant updates is one of the things that just drives me up the wall. With my fetal monitoring stuff, yes, we haven't studied some things since the '60s, because we figured out what was happening. No one is saying, "Well, show me the evidence that variable decels are from cord compression." We're like, "We're not going to redo it. It didn't change. We know it." Yes, it's from the '60s, but there's not suddenly some different mechanism of action.
Every five years? I know, it's crazy. It makes sense for some stuff, but not all the things. For instance, in this meta-analysis, the studies they looked at ranged from 2004 to 2023. They had studies from 2004, '07, '11, '12, '13, '15, '15, '16, '16, '19, '19, and '21. These studies were spread out over the years; they weren't all just recent. If it's good enough for this meta-analysis, it's good enough for me.
Sarah: What would you suggest saying, or what would you say, if you're sharing about a research study and somebody's like, "Well, that study was done a long time ago"?
Jen: I guess it would depend on what it is, but some of our best science, unfortunately, is quite old, because we could do crazy stuff then that we can't do now. An example: in 2000, the fetal pulse ox got FDA approval. It was fetal oximetry. It almost looked like a Cervidil, and it slid up against the baby's head. We could see the SpO2 of a fetus during labor. It was actually really useful, but it fell out of favor. We don't use it anymore. All of the stuff we know about what a normal SpO2 is for a fetus in labor comes from that technology.
There's no other way for us to get that now. It all comes from the late '90s and early 2000s, when we used this technology, and it's just not in existence anymore. Things change, research funding is very limited, and these studies often take a long time to conduct, because you have to recruit pregnant people. Fetuses are all considered a vulnerable population; they're extra sensitive to do research on. Something could already be five years old by the time it hits publication.
Sarah: Jen, as we wrap it up, what are your tips and words of wisdom for nurses when it comes to research, critical thinking, and analyzing what we're looking at?
Jen: Yes, I think just start reading. Make it a habit of reading a certain journal when it comes out. Most of our journals come out every other month or quarterly, so it's not a ton. Just start to build this skillset. We actually know that the more we read, the more open-minded we become and the more able we are to see a variety of perspectives. It's a skill; nobody's innately good at it. It's just practice. Read an article a month. I think that's a pretty reasonable thing to do. Like you said, we give them to you in the Bundle Birth newsletter, so you can start flexing that muscle a little bit, knowing that it's going to raise your emotional intelligence.
It's going to help you have better conversations with all of your colleagues. It helps you become more open-minded. It helps you stay flexible in your practice. I think that those are all really important attributes of a good labor and delivery nurse.
[music]
Justine: Thanks for spending your time with us during this episode of Happy Hour with Bundle Birth Nurses. If you like what you heard, it helps us both if you subscribe, rate, leave a raving review, and share this episode with a friend. If you want more from us, head to bundlebirthnurses.com or follow us on Instagram.
Sarah: Now it's your turn to go and find yourself a research article and tear it apart. Use your brilliant brains to critically think your way through and initiate these conversations with your other coworkers. We'll see you next time.