Running head: QUALITY IMPROVEMENT IN DIRECT PRIMARY CARE 1
Quality Improvement Project within a Direct Primary Care Clinic
A Scholarly Project Presented to
The Faculty of the Maryville University
Catherine McAuley School of Nursing
In Fulfillment of the Requirements
For the Degree of Doctor of Nursing Practice
STEVIE CAMP, BSN, RN
APRIL 18, 2019
TABLE OF CONTENTS
Title Page
Table of Contents
Abstract
Chapter I: Introduction
Chapter II: Review of Related Literature
Chapter III: Methods
Chapter IV: Findings
Chapter V: Discussion
References
Appendices
ABSTRACT
Quality Improvement Project within a Direct Primary Care Clinic
Background: Patient satisfaction is becoming a crucial component of healthcare delivery systems nationwide. In part, this is due to mandates set forth by the Centers for Medicare & Medicaid Services (CMS) and the overarching goal to transition healthcare from a quantity-based system to a quality-based system (Centers for Medicare & Medicaid Services, 2017). An innovative healthcare model, direct primary care (DPC), is at the forefront of this movement. Direct primary care boasts the ability to provide patients with more time and access to providers at a lower cost compared to traditional models (American Academy of Family Physicians, 2019).
Objective: The purpose of this doctor of nursing practice (DNP) project was to evaluate the results of patient satisfaction surveys within a DPC clinic located in the Southeast region of the United States to assess for strengths, weaknesses, and associations related to the perception of the patients who received care at the clinic.
Design: The study design was an exploratory quantitative design. A retrospective review was performed using patient satisfaction surveys collected over three months. Nine target variables were assessed based on the survey questions. Twenty-one patient satisfaction surveys were utilized for the study.
Results: Descriptive statistics were utilized to examine survey responses. Overall, the findings reflected that respondents were satisfied with the care they received at the DPC clinic. All respondents indicated that their provider listened to them, knew pertinent information regarding their health, respected them, and that their visit was of adequate length. The provider of the DPC clinic was rated 9.9 on a 10-point scale, with 10 representing the best possible provider.
Conclusions: Measuring patient satisfaction is becoming commonplace within healthcare delivery systems. Patient satisfaction surveys are an effective way to solicit patient feedback to gauge satisfaction. This project demonstrated the successful implementation of a patient satisfaction evaluation program and determined that patients were satisfied with the care they received at a DPC clinic located in the Southeast region of the United States.
Key words: Direct primary care, patient satisfaction, patient experience
Quality Improvement Project within a Direct Primary Care Clinic
Chapter I: Introduction
Introduction to the Problem
Patient satisfaction surveys are an integral part of healthcare delivery systems. This can
be partially attributed to the 2007 mandates set forth by the CMS (CMS, 2017). Furthermore,
eliciting patient feedback through satisfaction surveys has been shown to improve
communication between patients and providers, improve patient outcomes by preventing
complications and mortality, and increase revenue from CMS (Bogner et al., 2016; CMS, 2018a;
Farrington, Burt, Boiko, Campbell, & Roland, 2016; Overveld et al., 2017). Patient satisfaction
surveys should be implemented in all areas of practice to ensure the highest level of care is being
rendered.
Interest and Level of Expertise
The author of this DNP project has no prior experience with patient satisfaction surveys.
However, working in the medical field as a registered nurse provides exposure to prompting
patient feedback and providing appropriate follow up interventions similar to the patient
satisfaction survey process. Additionally, Maryville University’s DNP program has prepared
this author to evaluate and implement evidence-based research.
Purpose and Aims
The purpose of this DNP project was to evaluate the results of patient satisfaction surveys
within a DPC clinic to assess for strengths, weaknesses, and associations related to the
perception of the patients who received care at the clinic. The overall aim was to improve the
care provided by the clinic and to comply with regulations set forth by the CMS to evaluate
patients’ satisfaction with their healthcare experience at the clinic (CMS, 2012). Furthermore,
examining patient satisfaction surveys in the DPC healthcare model will highlight the uniqueness
of this model and may support the claim that this model yields a more positive patient experience
along with better health outcomes.
Patterns of Knowing
In 1978, Barbara Carper divided nursing knowledge into four essential patterns of
knowing. Carper argued that one must be able to examine each of these patterns of
knowing to be able to teach and learn different concepts in nursing (Carper, 1978).
Implementing a doctoral project regarding the concept of patient satisfaction can help inform
each pattern of knowing in the following ways.
The first pattern of knowing is empirics, the science of nursing. Carper described this
pattern as developing explanations behind theories (Carper, 1978). This project evaluated patient
satisfaction in a unique healthcare setting that is not supported by a strong body of evidence.
The results of the project were descriptive, scientifically analyzed, and publicly verifiable, which
are components of Carper’s first pattern of knowing (Carper, 1978). This doctoral project has the
ability to add to the science of nursing by contributing to the body of literature supporting patient
satisfaction within the DPC model.
Carper noted that the pattern of personal knowledge, although one of the most important,
is often the most difficult to master and teach. A nurse must have a therapeutic use of self to be
able to view a patient as something more than an object (Carper, 1978). This doctoral project
strived to evaluate the patients’ perception and experience to ensure that all aspects of care and
wellness were met. By honoring the patient’s personal experience, the project incorporates
the pattern of personal knowledge.
Carper’s ethical pattern of knowing focuses on morality, obligation, and doing what is
best for the patient regardless of the provider’s personal beliefs (Carper, 1978). This project
eliminated all patient identifiers to ensure that the confidentiality of all patients was maintained.
Additionally, the project was a retrospective review so participants were not engaged or subject
to coercion. The purpose of this project was to highlight negative and positive findings within
the healthcare model. This ties into Carper’s ethical pattern of knowing as it focuses on ensuring
that the best level of care is rendered.
The final pattern of knowing is the esthetic pattern or the “art of nursing.” This pattern
can be difficult to define as the concept of art cannot be summarized in a few words (Carper,
1978). This doctoral project ties into the art of nursing by looking beyond the scientific results
that were yielded from this project. For example, the overall aim of this project was to improve
the quality of care provided within a clinic. The scientific results will not improve the care
rendered, rather, it will initiate follow-up interventions that will be address the experience of the
patients. The interventions that will be developed incorporate the art of nursing in that they will
not be solely scientific or rigid.
Background
Patient satisfaction is a standard of care within the medical field. This can be partially
credited to the mandates and evolution of patient satisfaction surveys set forth by the CMS.
Dating back to 2002, the CMS partnered with the Agency for Healthcare Research and Quality
(AHRQ) to develop the first national, standardized patient satisfaction survey called Hospital
Consumer Assessment of Healthcare Providers and Systems (HCAHPS). After rigorous testing,
the survey was first available for public use in October of 2006. To encourage participation, the
Deficit Reduction Act of 2005 created financial incentives for acute care hospitals that
participated in the HCAHPS. Hospitals that received Inpatient Prospective Payment System
provisions were required to participate in the HCAHPS in order to receive full annual payments.
Lastly, in 2010, the Patient Protection and Affordable Care Act of 2010 indicated that HCAHPS
results would be used to determine value-based incentive payments for those participants in the
Hospital Value Based Purchasing program (CMS, 2018a).
Since the implementation of patient satisfaction surveys, research has been conducted to
determine how satisfaction impacts the care that is rendered. Several outcomes of eliciting
patients’ feedback about their healthcare experience have been identified.
These include improved patient and provider communication, improved patient outcomes
through prevention of complications and mortality, and increased revenue from CMS
reimbursement and incentives (Bogner et al., 2016; CMS, 2018a; Farrington et al., 2016;
Overveld et al., 2017). Additionally, research has found that patient satisfaction surveys serve as
evidence to support change in the management and delivery of healthcare services for patients
(CMS, 2018a; CMS, 2018b; Farrington et al., 2016).
Significance
Nursing. Nurses are the primary caregivers in all settings of healthcare. Therefore,
nurses have a significant impact on a patient’s experience. Not only do nurses directly impact
patient satisfaction through their intimate contact with patients, but factors such as their work
environment have also proven to impact patient satisfaction. Research has shown that higher
patient satisfaction scores aligned with better working environments (Berkowitz, 2016). Patient
satisfaction surveys have significance within the nursing field as surveys can potentially
highlight areas of concern within nursing environments.
Healthcare. The CMS has three main goals for its patient satisfaction surveys: to
provide data that help consumers make meaningful, objective comparisons of healthcare
delivery systems; to improve quality of care by encouraging hospitals to publicly report
findings; and to increase organizational transparency about the quality of care that is
delivered (CMS, 2018a). Patient satisfaction surveys are becoming a
mandatory component of healthcare through these goals set forth by the CMS. Overall, patient
satisfaction surveys are shifting healthcare from a quantity approach to the overarching goal of a
value-based delivery system (CMS, 2018b).
Advanced Practice Nursing. Patient satisfaction surveys are intended to aid consumers,
encourage organizational transparency, and improve quality of care. Surveys provide a platform
for patients to express their perceptions and experiences. As primary care providers, advanced
practice nurses (APNs) are on the frontline of patient care and are a vital component of the
patient’s healthcare experience in all settings. APNs have the opportunity to directly impact
patient satisfaction through frequent patient communication and interaction. Working to foster
positive change that translates into positive outcomes for patients is a component of the APNs’
scope of practice (American Association of Colleges of Nursing, 2006; Gerrish et al., 2011).
Knowledge gained from this scholarly project will be translated into practice to improve and
provide quality care to the patients at the project’s clinic.
Support for Project
The DPC clinic used for this DNP project pledged its support by
providing a written letter of approval to conduct a retrospective chart review of patient
satisfaction surveys. The clinic’s founder previously had requested that this student create and
implement a process for evaluating patient satisfaction. As requested, a pilot patient satisfaction
survey was created and implemented into the practice for approximately three months. Prior to
this project, no formal process to measure patients’ satisfaction scores and patient experience
existed. The clinic’s founder supported this student in analyzing the results of the patient
satisfaction surveys that were collected during the pilot program. Once approval was granted by
Maryville University’s Institutional Review Board (IRB), a retrospective review of the patient
satisfaction surveys was conducted.
Benefit of Project to Practice
To advance and highlight the positive attributes of the DPC model, the clinic’s founder
asked this DNP student to create and implement a process for evaluating patient satisfaction.
Since the DPC model is a relatively new form of delivering healthcare, data regarding patient
experience and patient satisfaction is limited. Implementing and evaluating a patient satisfaction
survey will advance the body of supportive literature for the DPC model. Additionally, results of
the survey highlighted areas of improvement for this specific DPC clinic. From this information,
appropriate follow-up interventions will be formulated to improve the quality of care provided
within this clinic.
Conclusion
Patient satisfaction is becoming a crucial component of healthcare delivery systems as the
shift from quantity-based care to quality-based care occurs. This DNP project has contributed to
this shift through the retrospective review of patient satisfaction surveys that were collected in a
DPC clinic. The results of the review will be used to develop interventions that can improve the
quality of care that is rendered. Prior to this project, the DPC clinic did not have any formal
method of measuring patient satisfaction. The clinic requested that this project be performed to
expose strengths and weaknesses of the care rendered. In addition to improving care, this project
has the ability to highlight positive attributes of the DPC model while simultaneously
contributing to the limited body of literature. Maryville University’s DNP program has prepared
this author to perform and implement an evidence-based research project.
Chapter II: Review of Related Literature
Critical Analysis of Conceptual and Theoretical Literature
Patient satisfaction is commonly used as an indicator to evaluate the successes and
shortcomings of healthcare delivery systems (Al-Abri & Al-Balushi, 2014). Historically, patient
satisfaction surveys have been used to measure patients’ perspective of the healthcare they
received. The CMS has defined essential goals for such satisfaction tools (CMS, 2018a; CMS,
2018b). Several outcomes from eliciting patients’ feedback of their healthcare experience have
been identified such as improved patient and provider communication, improved patient
outcomes through prevention of complications and mortality, and increased revenue from CMS
reimbursement and incentives (Bogner et al., 2016; CMS, 2018a; Farrington et al., 2016;
Overveld et al., 2017). Patient satisfaction surveys serve as evidence to support change in the
management and delivery of healthcare services for patients (CMS, 2018a; CMS, 2018b;
Farrington et al., 2016).
Patients’ experiences include aspects of care such as improved patient and provider
communication (CMS, 2018b). A patient’s perspective of being heard is rated as one of the
strongest drivers of patient satisfaction (Al-Abri & Al-Balushi, 2014). Patient feedback
facilitates providers to listen to the concerns of patients (Farrington et al., 2016; Overveld et al.,
2017). Through participation in surveys, patients can engage in reflection, criticism, and
therapeutic communication on topics that may have been less than adequate (Al-Abri & Al-
Balushi, 2014; Farrington et al., 2016).
In addition to improved communication, patient satisfaction can also improve the quality
of care that is rendered. This concept is illustrated by Bogner et al. (2016). Bogner et al. (2016)
performed a quantitative study using a multinomial logistic regression model to assess the
relationship between perceived patient satisfaction and functional decline, institutionalization,
and death. Interviews from 23,470 Medicare beneficiaries 65 years of age or older were
reviewed to establish baseline data. Two years after the survey, the researchers categorized
participants’ functional status and assessed for institutionalization and death. The participants in
the top quartile of patient satisfaction (those with the highest patient satisfaction scores) were
less likely to be institutionalized compared to those in the lower three quartiles “(adjusted RRR =
0.68, 95% CI: 0.54-0.86)” (Bogner et al., 2016, p. 11). Additionally, the participants in the top
quartile were less likely to have experienced functional decline “(adjusted RRR = 0.87, 95% CI: 0.79-0.97)”
(Bogner et al., 2016, p. 11) or to have died “(adjusted RRR = 0.86, 95% CI: 0.75-0.98)” (Bogner et al.,
2016, p. 11). In conclusion, the researchers found that higher perceived satisfaction with
access to medical care and care coordination was associated with a lower risk of functional
decline, death, and institutionalization (Bogner et al., 2016).
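To make these effect sizes concrete: an adjusted relative risk ratio (RRR) below 1.0 indicates a lower relative risk for the top satisfaction quartile, and a 95% confidence interval that excludes 1.0 suggests statistical significance at the 0.05 level. The following sketch is an illustration added here, not part of Bogner et al.’s analysis; the helper function is hypothetical, and only the quoted values are taken from the study.

```python
# Illustrative reading of the adjusted RRRs reported by Bogner et al. (2016).
# An RRR below 1.0 indicates lower relative risk for the top satisfaction
# quartile; a 95% CI excluding 1.0 suggests significance at the 0.05 level.

def interpret_rrr(rrr, ci_low, ci_high):
    """Return the approximate percent risk reduction and whether the CI excludes 1.0."""
    pct_reduction = round((1 - rrr) * 100)
    significant = ci_high < 1.0 or ci_low > 1.0
    return pct_reduction, significant

# Values quoted from Bogner et al. (2016, p. 11).
outcomes = {
    "institutionalization": (0.68, 0.54, 0.86),
    "functional decline": (0.87, 0.79, 0.97),
    "death": (0.86, 0.75, 0.98),
}

for outcome, (rrr, lo, hi) in outcomes.items():
    pct, sig = interpret_rrr(rrr, lo, hi)
    print(f"{outcome}: ~{pct}% lower relative risk, "
          f"{'significant' if sig else 'not significant'} at the 0.05 level")
```

Read this way, the top satisfaction quartile had roughly a 32% lower relative risk of institutionalization, and each interval excludes 1.0, which is why the findings are treated as statistically significant.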
Additional literature that supports the theme that there is a relationship between patient
satisfaction and the quality of care that is provided is the work performed by Boiko et al. (2014).
Boiko et al. (2014) performed a qualitative study using 14 different focus groups. The
researchers mailed surveys to patients who recently saw a provider. Once responses were
received, focus groups were created with a variety of staff members to include nurses, providers,
and front-line staff. The researchers held discussions that were aimed to uncover attitudes
toward patient surveys to include past experience with surveys and how survey results are
generally handled. Results indicated that most staff members valued the patient satisfaction
surveys that allowed for feedback regarding their performance and areas of improvement. Boiko
et al. (2014) found that feedback surveys were the first crucial step to making changes within
general practice and gauging the performance of providers.
The work by Farrington, Burt, Boiko, Campbell, and Roland (2016) also supports the
relationship between patient satisfaction and the quality of care that is rendered. Farrington et al.
(2016) performed a qualitative study to explore the perceptions physicians have regarding patient
experience surveys. Forty-one participants from both inpatient and outpatient settings were
utilized for face-to-face semi-structured interviews. The participants were selected as they
recently had received individual feedback from patient satisfaction surveys regarding visits they
had with patients. Farrington et al. (2016) found that physicians from both settings expressed
concerns about the credibility of patient surveys. Despite these credibility concerns,
participants articulated strong commitments to utilizing patient feedback for quality
improvement efforts. The researchers concluded that physicians’ views should be considered
when developing quality improvement initiatives in response to patient feedback (Farrington et
al., 2016).
Another incentive for healthcare agencies to administer patient satisfaction surveys is the
financial reward associated with reporting survey results (CMS, 2018a). If financial incentives
promote improvements in quality of care, one could expect the opposite effect when those
incentives are removed or absent. Minchin, Roland,
Richardson, Rowark, and Guthrie (2018) performed a retrospective review of United Kingdom
medical records from 2010 to 2017 to assess for the presence of 12 quality-of-care indicators. In
2014, half of the quality-of-care indicators were no longer associated with financial incentives.
Minchin et al. aimed to assess if removal of the financial incentives for the removed six quality-
of-care indicators decreased the presence of these indicators within medical records. Minchin et
al. examined data from 2,819 English practices and found that the presence of four of the 12
quality-of-care indicators within medical charts was improving prior to the removal of
incentives. Additionally, Minchin et al. (2018) found that there were significant reductions in
the documentation of quality-of-care indicators within one year after the removal of half of the
quality-of-care indicators. In contrast, after three years, there were no changes in documentation
of the quality-of-care indicators that were still subject to financial incentive. Minchin et al.
(2018) noted that the decline could simply reflect decreased documentation; however,
the decline of documented items such as laboratory testing is suggestive of changes in the
delivery of care.
Studies have also found that financial and reputational rewards can improve practice
performance. Allen, Whittaker, Kontopantelis, and Sutton (2018) performed an observational study
aimed at quantifying this notion. The researchers sought to determine the effects financial and
reputational rewards had on the performance of general practices that were part of a pay-
for-performance program. Examining nine years of data, the researchers found that
financial and reputational awards were statistically significant indicators of practice performance
within general practices enrolled in the United Kingdom’s (UK) pay-for-performance program.
Additionally, the researchers found that over time, financial rewards became less significant
while reputational rewards became more important. The results of this study highlight that
incentives, whether financial or reputational, can impact the performance of healthcare practice.
Allen et al. (2018) concluded that general practices in the UK might have shifted their efforts
from generating revenue after they achieved their baseline benchmarks to increasing their
reputation.
Wolk et al. (2013) performed a six-month study to evaluate the effects of implementing a
financial incentive for medical residents who have the lowest average time in dictating discharge
notes. The researchers had four research groups; at the end of a monthlong rotation, the medical
resident who had the lowest discharge-to-dictation time was awarded a monetary reward. In
addition to monetary rewards, other interventions such as verbal reminders and a discharge
orientation were performed. At the end of the study, the researchers found that the average
discharge-to-dictation time was 1.84 days, with approximately 90% of dictations completed on
the day of discharge. Prior to the study, the average discharge-to-dictation time for medical
residents was 7.44 days. Wolk et al. (2013) concluded that a modest financial
incentive resulted in a significant reduction in the average discharge-to-dictation time. This
study highlights the relationship between financial incentives and improved quality of care.
Overall Gaps, Limitations, Strengths, and Weaknesses
In general, the body of evidence supports that patient satisfaction can impact the quality
of care that is rendered and financial incentives are associated with performance. Soliciting
patient feedback, regardless of the method, can ultimately improve the quality of care that is
rendered. Overall strengths of the aforementioned literature include the design and analysis
methods. Bogner et al. (2016) performed a retrospective review of data using a multinomial
logistic regression model. This method of analysis was appropriate to examine the association
between numerous independent variables and dependent variables (Polit & Beck, 2017). Bogner
et al. (2016) adequately described the sampling strategies and provided detailed characteristics of
participants.
The qualitative works by Boiko et al. and Farrington et al. provide strength to the
available patient satisfaction literature in that the perspective of providers and practice staff were
considered. One weakness of the study by Boiko et al. is the vague explanation of the data
collection and data analysis processes. According to Polit and Beck (2017), qualitative studies
are difficult to describe and explain. In contrast, Farrington et al. thoroughly explained how their
sample was obtained and attempted to eliminate bias by randomly selecting physicians from
practice settings.
A strength of Minchin et al.’s (2018) study was the use of longitudinal data and a time
series analysis. The use of longitudinal data is appropriate if researchers need to assess changes
over time (Polit & Beck, 2017), which aligns with Minchin et al.’s aim to determine if the
removal of financial incentives decreased the presence of quality-of-care indicators.
Additionally, time series designs are ideal if a control group is present, but randomization is
lacking (Polit & Beck, 2017). Polit and Beck (2017) noted that time series designs allow for data
to be collected over an extended timeframe during which some form of intervention is
introduced. In this case, the researchers examined available data prior to and after the removal of
financial incentives which aligns with the time series design. Minchin et al. (2018) provided
95% confidence intervals to describe the significance of their findings. All in all, the researchers
provided adequate detail to describe their study design, data collection, sample, and data analysis
processes.
A significant strength of the research presented by Allen et al. (2018) is the level of detail
they provided when describing their methods and data collection process. However, the
researchers vaguely described their exclusion criteria. To strengthen the understanding and
findings of their research, the researchers should aim to provide a detailed list of inclusion and
exclusion criteria that were used to determine which practices would be included in the sample.
The findings of this study provided literature regarding performance and reputational rewards,
which are not well studied.
The research presented by Wolk et al. (2013) is a study that highlights how financial
incentives impact performance in healthcare. Unfortunately, the researchers failed to illustrate
how data analysis was performed. Although the results of the study show a significant decrease
in the discharge-to-dictation time, there are no statistics to support that the findings were
statistically significant. The lack of statistical analysis is a weakness of this research. A strength
of the study is its readability, simplicity, and replicability. Additionally, the findings highlight how
simple monetary incentives can significantly impact performance.
Concepts and Definitions
Numerous concepts and definitions have been used throughout this project to depict the
importance of soliciting patient feedback through means of surveying. Patient satisfaction can be
defined as the degree to which a patient’s expectations were met (Agency for Healthcare
Research and Quality, 2017b). Patient satisfaction is also a performance measure for evaluating
the healthcare quality (New England Journal of Medicine Catalyst, 2018). Similar to patient
satisfaction is the concept of patient experience. In regards to healthcare, many components of
healthcare delivery can encompass patient experience. This may be clear communication with
healthcare staff and the amount of respect and responsiveness a patient receives (AHRQ, 2017b).
Many different methods of soliciting patient satisfaction and experience exist including
surveys. A survey can be defined as a method for gathering information from a sample of people
(Qualtrics, 2018). Patient satisfaction surveys are commonly used as a measuring device for
patient satisfaction as satisfaction is not something that can be directly observed. Satisfaction
surveys attempt to translate subjective data regarding satisfaction and experience into
measurable, quantifiable data (NEJM Catalyst, 2018).
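The translation from categorical responses into numeric data can be sketched as follows. This is a hedged illustration only: the response labels, coding scheme, and function names below are hypothetical and are not drawn from the survey used in this project.

```python
# Hypothetical example of coding Likert-style survey responses as numbers.
# The labels and coding scheme are illustrative only and are not taken
# from the patient satisfaction survey used in this project.

LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def encode_responses(responses):
    """Map text responses to numeric codes, skipping unrecognized entries."""
    return [LIKERT_CODES[r.lower()] for r in responses if r.lower() in LIKERT_CODES]

sample = ["Agree", "Strongly agree", "Agree", "Neutral"]
codes = encode_responses(sample)          # [4, 5, 4, 3]
mean_score = sum(codes) / len(codes)      # 4.0 for this sample
```

Once responses are coded this way, subjective impressions become measurable quantities that can be summarized with descriptive statistics.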
Many different patient satisfaction surveys exist, but the most commonly known surveys
are from the CMS. The CMS is a division of the Department of Health and Human Services that
oversees numerous federal healthcare programs aimed at eliminating health disparities and
achieving the highest level of care for all (CMS, 2018b). The CMS has developed surveys, in
conjunction with the AHRQ, to include the survey used for this project, the Visit Survey 2.0.
The Visit Survey 2.0 is a publicly available survey that elicits patient feedback (AHRQ, 2017a).
Another commonly used term throughout this project is direct primary care. Direct
primary care is an alternative healthcare payment model where patients pay a membership either
monthly, quarterly, or annually for service; insurance is not billed. Benefits of this type of
practice include reduced patient volume, increased visit time with patients, and fewer medical
errors (American Academy of Family Physicians, 2019).
Theoretical Framework
Maslow’s Theory of Needs is applicable to the concept of patient satisfaction and
experience and the effect these notions have on quality of care. This theory was developed in
1943 by Abraham Maslow and is a pyramid model depicting five levels of needs. Maslow
theorized that one must satisfy each level of need before proceeding to the next; a person
cannot achieve higher levels until the current level has been satisfied. In order of priority,
the levels of need are: physiological, safety,
love/belonging, esteem, and self-actualization (Maslow, 1943). The concept of patient
satisfaction is simply to meet one’s expectations and needs. If a patient’s needs, regardless of the
level, are not met, they are going to display dissatisfaction and potentially have poorer health
outcomes.
The first level, physiological needs, applies to this project. Maslow (1943) stated that
above anything else, human beings would be motivated by physiological needs. In regard to
patient satisfaction, if patients are not having their healthcare needs met, which can be
considered a physiological need, it is inferred that they would be unsatisfied with their care and
have poorer health outcomes. If satisfied, one would progress to the second level, safety
(Maslow, 1943). Within healthcare, if patients do not trust their healthcare provider's judgement
and plan of care, they will not be satisfied with the care they receive.
The next level, love/belonging, can be described as the need to give and receive affection
or belongingness. In regard to healthcare, the patient and provider engage in a health-focused
relationship. If patients do not feel a sense of belongingness or affection from their healthcare
provider, their healthcare needs may not be met, and they may be dissatisfied with their care.
The fourth level, esteem, was described by Maslow (1943) as the need to have self-esteem or
have esteem from others. Maslow (1943) noted that a lack of esteem could lead to feelings of
helplessness or inferiority. If patients’ esteem is altered due to their health or healthcare
experience, their satisfaction will be impaired. The final stage, self-actualization, will be
fulfilled when people are functioning in a capacity in which they are fit (Maslow, 1943). It is
concluded that a patient’s final level, self-actualization, is supported by ensuring all four prior
levels of need are met.
Chapter III: Methods
Methodology & Design
Direct primary care is an innovative, alternative payment model for accessing healthcare.
Members of the DPC clinic pay an affordable monthly fee to gain access to a primary care
practice in which there is no third-party, insurance, or fee-for-service billing. This innovative
model provides a unique setting for both patients and providers (American Academy of Family
Physicians, 2019). According to Moran, Burson, and Conrad (2014), non-experimental designs
are valuable in assessing characteristics and needs of a distinct population. Therefore, an
exploratory, quantitative design was used to evaluate patient satisfaction in a DPC clinic. A
retrospective review of patient satisfaction surveys that were implemented in the clinic was
performed (see Appendix A). Questions that directly correlate to patient satisfaction were
converted to nominal data and were reviewed and analyzed using descriptive statistics (Moran,
Burson & Conrad, 2014).
This DNP project took place in a small DPC clinic located in the Southeast region of the
United States. A retrospective review of patient satisfaction surveys was performed after gaining
approval from Maryville University’s IRB. Twenty-one patient satisfaction surveys met
inclusion criteria. Participants were 18 years of age or older; all genders were included, and the
sample comprised approximately 25% Caucasian, 25% African American, and 50% other ethnic
groups.
Inclusion criteria for the results of selected questions pertaining to patients’ satisfaction
of their healthcare experience were the following: (a) 18 years of age or older; (b) seen at the
clinic from July 9, 2018 through September 9, 2018; and (c) providing a response to questions
five: “In the last 12 months, when you phoned Your Family MD’s office to get an appointment
for care you needed right away, how often did you get an appointment as soon as you needed?”;
seven: “In the last 12 months, when you made an appointment for a check-up or routine care,
how often did you get an appointment as soon as you needed?”; nine: “In the last 12 months,
when you phoned Your Family MD’s office during regular office hours, how often did you get an
answer to your medical question that same day?”; 11: “In the last 12 months, when you phoned
Your Family MD’s office after regular office hours, how often did you get an answer to your
medical question as soon as you needed?”; 14: “Wait time includes time spent in the waiting
room and exam room. During your most recent visit, did you see your provider within 15
minutes of your appointment time?”; 15: “During your most recent visit, did the provider listen
carefully to you?”; 16: “During your most recent visit, did the provider explain things in a way
that was easy to understand?”; 17: “During your most recent visit, did you talk with the provider
about any health questions or concerns?”; 18: “During your most recent visit, did your provider
give you easy to understand information about these health questions or concerns?”; 19:
“During your most recent visit, did the provider seem to know the important information about
your medical history?”; 20: “During your most recent visit, did the provider show respect for
what you had to say?”; 21: “During your most recent visit, did the provider spend enough time
with you?”; and 24: “Using any number from 0 to 10, where 0 is the worst provider possible and
10 is the best provider possible, what number would you use to rate Your Family MD?”.
The exclusion of participants’ answers was carefully examined in the following order: (a) if the
participant was not 18 years of age or older, data was not extracted; (b) if services were initiated
at the clinic before July 9, 2018 or after September 9, 2018, data was not extracted; and (c) no
data was extracted if no response was provided to any of the questions listed above (questions
five, seven, nine, 11, 14 through 21, and 24).
Needs Assessment
The DPC model for accessing healthcare is still a novel concept but is gradually
expanding across the United States. In 2015, it was estimated that the model existed in 39 states
(Eskew & Klink, 2015), whereas now, it is estimated that there are roughly 950 DPC clinics
throughout 48 states (DPC Frontier, 2018). Eskew and Klink (2015) noted that 93.2% of the
practices identified in 2015 consisted of four or fewer providers.
Therefore, most DPC practices are “young and small thus lack sufficient quality and cost data to
assess outcomes” (Eskew & Klink, 2015, p. 795). Eskew and Klink (2015) noted that future
studies regarding the DPC model should include evaluating the claims that quality of care is
improved with this model. If the DPC model is to continue to grow and be widely adopted, more
literature is needed to document potential improvements (Eskew & Klink, 2015).
The DPC clinic that was used for this DNP project is a small practice located in the
Southeast region of the United States consisting of one provider. To advance and highlight the
positive attributes of the DPC model, the clinic’s founder asked this DNP student to create and
implement a process for evaluating patient satisfaction. Prior to this project, no formal process
to measure patients’ satisfaction scores or patient experience existed. The author of this project
was asked by the clinic’s founder to collaborate with providers and supportive personnel to
develop, implement, and evaluate a patient satisfaction survey.
Furthermore, the clinic’s founder wanted to implement a process that would align with
the guidelines set from the CMS. In 2010, the Affordable Care Act established the Hospital
Value-Based Purchasing Program. This program is a CMS initiative that rewards hospitals for
providing high quality care to Medicare patients (CMS, 2012). Although not an acute care
facility, the DPC clinic wanted to meet the expectations of the CMS by providing quality
healthcare. A CMS based patient satisfaction survey was implemented to demonstrate that
quality healthcare is delivered within the DPC model. Additionally, the clinic’s founder hoped
to identify any areas of improvement to which follow up interventions would be developed to
ensure the highest level of care is being rendered.
Data Collection Instrument
Initially, the author of this project was asked by the clinic’s founder to develop a patient
satisfaction survey that would meet the requirements of the CMS. The patient satisfaction
survey that was chosen is a modified version of the Visit Survey 2.0 developed for the Consumer
Assessment of Healthcare Providers and Systems (CAHPS) (AHRQ, 2017a). The Visit Survey
2.0 is publicly available through the AHRQ and complies with the CMS guidelines. The AHRQ
provides access to the CAHPS surveys and tools for public use at no cost.
The Visit Survey 2.0 was developed to elicit patient feedback on a specific visit as
opposed to over a period of time. Visit-specific surveys can elucidate quality information for
improvement (AHRQ, 2017a). The patient satisfaction survey was developed within
SurveyMonkey® due to its ease of dissemination. Once the clinic’s founder approved the content,
implementation was underway.
Specific Steps
After creating the patient satisfaction survey, the clinic disseminated surveys to all adult
patients actively enrolled in the DPC clinic who had a linked email. The patient satisfaction
survey was piloted at the clinic from July 9, 2018 through September 9, 2018. Emails with a link
to the patient satisfaction survey were sent to participants by supportive personnel on July 21st,
2018, August 15th, 2018, and September 4th, 2018. The emails encouraged participation in
completing the patient satisfaction survey. The patient satisfaction survey was to be a review of
the patient’s last visit in the DPC clinic.
After IRB approval had been obtained from Maryville University, the data collection
process began. The clinic’s founder provided the researcher with a copy of the results of the
selected questions pertaining to patient satisfaction of the surveys that were completed between
July 9, 2018 through September 9, 2018. The clinic’s founder removed demographic information
and other potential identifiers of the participants prior to providing the researcher with the results
of the selected questions. The abstracted data constituted the minimum necessary data to
accomplish the goals of the project.
The researcher sequentially reviewed the results of the selected questions to determine if
the inclusion criteria had been met. A sequential inclusion and exclusion process continued until
the written list provided by the clinic’s founder had been reviewed. Data collection stopped after
the results of the selected questions pertaining to patient satisfaction provided by the clinic’s
founder that met the inclusion criteria had been identified and recorded on the data collection
form.
A data collection document was created specifically for this project within Microsoft
Word®. Each patient satisfaction survey was assigned a numerical identifier on the data
collection form, and each question response was coded numerically. For example, 1=never;
2=sometimes; 3=usually; and 4=always (see Appendix B).
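As an illustrative sketch (the actual coding was performed by hand on the data collection form), the response coding described above might look like the following in Python; the function and variable names here are hypothetical:

```python
# Hypothetical sketch of the response coding scheme described above.
# Frequency-scale responses (e.g., questions 5, 7, 9, and 11) were coded 1-4.
FREQUENCY_CODES = {"never": 1, "sometimes": 2, "usually": 3, "always": 4}

def code_response(response):
    """Convert one survey response to its numerical code."""
    return FREQUENCY_CODES[response.strip().lower()]

# Example: coding one survey's answers to the four access-to-care questions.
answers = ["Always", "Usually", "Always", "Always"]
codes = [code_response(a) for a in answers]
print(codes)  # [4, 3, 4, 4]
```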
After the data collection process was completed, the copy of the results of the selected
questions was returned to the clinic’s founder, and the collected data will be destroyed after
three years. Microsoft Excel® was then used for data analysis.
Analysis Plan
Descriptive analysis was performed to describe and provide an understanding of the data.
Categorical, nominal data were recorded in Microsoft Word® and then transferred into
Microsoft Excel®, where the data were descriptively analyzed.
Descriptive statistics allow researchers to describe or summarize their data. In addition,
descriptive statistics assist researchers in presenting their data in a meaningful way, especially
for large amounts of data (Lund Research Ltd, 2018). For this project, responses to multiple
questions per survey yielded a sizable set of data points. Therefore, mode and average were used
to summarize the survey responses for each target variable.
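The mode and average described above can be computed with Python’s standard statistics module; the coded responses below are invented for illustration and are not the project’s data:

```python
from statistics import mean, mode, pstdev

# Invented coded responses for one target variable (1=never ... 4=always);
# these values are not the project's actual data.
responses = [4, 4, 3, 4, 2, 4, 3, 4]

avg = mean(responses)       # average response
typical = mode(responses)   # most frequent response
spread = pstdev(responses)  # population standard deviation (one possible choice)

print(f"mean={avg:.2f}, mode={typical}, sd={spread:.2f}")
```

Whether the standard deviations reported in the findings used the population or sample formula is not stated in this report; the population version is shown here as one possibility.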
Resources and Budget
An array of resources was needed to complete this project. First, a subscription to
Microsoft Excel® software was required for data analysis. It costs approximately $70 to procure
a yearly subscription for Microsoft Office® (Microsoft, 2019). Additionally, printing materials
such as paper and toner were needed throughout this entire project. It is estimated that one toner
and one ream of paper were used which cost approximately $105.
Other resources that were needed include electricity and internet usage. It is estimated
that electricity requirements cost approximately $142 based on local rates and 160 hours of
usage (Electricity Local, 2018). Internet usage was estimated to cost approximately $50 due to
average monthly internet costs (BroadBandNow, 2018). Lastly, resources included the time it
took the physician to remove all the patient identifiers and traveling via car to and from the DPC
to complete the project. At a rate of $75 per hour, $150 was needed for a physician fee to
download patient satisfaction surveys and remove all identifiers. Traveling fees were estimated
at $308 based on $0.55/mile (Internal Revenue Service, 2018). An estimated budget of $825 was
needed to complete this project.
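The itemized costs above can be checked with simple arithmetic; the hours and mileage noted in the comments below are back-calculated from the stated totals, not given in the source:

```python
# Budget items taken from the paragraph above (amounts in US dollars).
budget = {
    "Microsoft Office subscription (yearly)": 70,
    "toner and paper": 105,
    "electricity (160 hours, local rates)": 142,
    "internet (monthly average)": 50,
    "physician fee ($75/hour for de-identification)": 150,
    "travel ($0.55/mile, IRS 2018 rate)": 308,
}
total = sum(budget.values())
print(total)  # 825
```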
Timeline
The timeline for this project was approximately nine months. One month was dedicated
to creating the patient satisfaction survey. The next four months were dedicated to composing
and submitting the IRB proposal. The last four months of the project involved data collection
and analysis as well as compiling the full project into this final report. The data collection took
approximately one month, while analysis and the compilation of the final report both took an
additional one-and-a-half months.
Protection of Human Subjects
There were no foreseeable physical, psychological, social, economic, or legal risks
associated with this project. One possible risk was a potential breach of confidentiality. However,
this risk was minimal as the patient satisfaction surveys returned to the clinic are anonymous per
the protocol set forth by CMS. In addition, the clinic’s founder provided the researcher with a
copy of the results of only the selected questions pertaining to patient satisfaction. The clinic’s
founder removed demographic information and other potential identifiers of the participants
prior to providing the researcher with the results of the selected questions. The abstracted data
constituted the minimum necessary data to accomplish the goals of this project.
The data collected on a data collection sheet was stored on a password protected
computer located in the researcher’s locked office. The author of the project was the only person
with access to the password and office. To minimize the risk of breach of confidentiality related
to the list provided by the clinic’s founder, the researcher returned the list to the clinic’s founder
who destroyed the list. The author will destroy the data collected from this project after three
years.
This was a quality improvement project with confidentiality protections in place. There
were no direct benefits to the participants as this was a retrospective review. The results of this
project could potentially benefit future patients at the clinic. The overall aim was to improve the
care provided by the clinic, as well as comply with regulations set forth by CMS to evaluate
patients’ satisfaction of their healthcare experience at the clinic. The results of the retrospective
review provided information on the quality of care provided by the clinic in order to develop and
implement interventions to improve and deliver high quality care and improve patient outcomes.
The burden of the potential breach of confidentiality of the participants was minimal and
did not exceed the benefit of knowledge gained regarding the patients’ perception of their
healthcare service at the clinic. The benefits outweigh the risk associated with this research.
The results of this project were printed in a doctoral project and shared with the
researcher’s Maryville University faculty chair and community members. The results may also
be shared at a poster session at a medical conference or submitted for publication in a reputable
journal. The data was shared with the other healthcare providers and founder of the clinic. All
information was presented without identifiers and in aggregate form.
Chapter IV: Findings
Congruency of the Data Collection Method and Target Variables
This quantitative retrospective review of surveys was performed in the intended DPC
setting and population in the Southeast region of the United States. The data collection process
involved retrieving responses to selected questions aimed at measuring the target variables. The
data collection methods were designed to specifically measure the target variables prior to
conducting the review of surveys. The data collected measured the intended variables that
highlighted patients’ experiences. Implementing a web-based survey and performing a
retrospective review of surveys was the most appropriate method for this study due to the
potential bias and confidentiality risks associated with an in-person survey.
Congruency is also present within the data analysis methods. Due to a small sample size,
descriptive statistics were used to evaluate the survey responses. The data analysis plan was in
congruence with the study’s purpose statement and IRB proposal, as the descriptive statistics met
the project goal of evaluating patient satisfaction surveys for strengths, weaknesses, and
associations related to the perception of the patients who received care at the clinic.
The purpose of this DNP project was to evaluate patient satisfaction surveys within a
DPC clinic to unveil patients’ perceptions of the care they received. The proposed target
variables were accessibility to care, wait time, perceived responsiveness from provider,
comprehension of provided information, opportunity to express concerns, confidence in
provider's knowledge, perceived respect from provider, adequate visit time, and overall
impression of provider. These variables were created based on specific questions from the
patient satisfaction survey that aligned with patient experience. Therefore, there is congruency
within the project’s purpose and the intended areas of measurement.
Study Replication
This DNP project involved implementing a patient satisfaction survey and then
performing a retrospective review of the surveys. This study could be replicated by following
the proposed methodology within the IRB proposal. Depending on the yielded sample size,
descriptive statistics could then be utilized to analyze the data collected in the chart review.
Additionally, if a relationship between two target variables were hypothesized, chi-square
testing could be performed to evaluate it. For example, if patients reported dissatisfaction with
the care they received at
the clinic, and it was hypothesized this was due to high wait times, chi-square testing could be
performed to assess the strength of association between these two variables. The results from
this study as well as a replicated study can underscore the positive attributes of the DPC
healthcare model as well as identify areas of improvement.
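As a sketch of the chi-square approach suggested above, the following computes the Pearson chi-square statistic for a hypothetical 2x2 table; the counts are invented for illustration and are not study data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = (a + b, c + d)
    col_totals = (a + c, b + d)
    observed = ((a, b), (c, d))
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence: row total * column total / n
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed[i][j] - expected) ** 2 / expected
    return chi2

# Hypothetical counts: rows = seen within 15 minutes (yes/no),
# columns = satisfied overall (yes/no).
table = ((18, 2), (1, 3))
print(round(chi_square_2x2(table), 2))  # 8.54
```

The statistic would then be compared against a chi-square distribution with one degree of freedom; with cell counts as small as those in a pilot sample, an exact test might be more appropriate.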
Validity and Reliability
The data collection tool for this DNP project, the Visit Survey 2.0, was valid and reliable.
Polit and Beck (2017) noted that face validity refers to whether an instrument appears to measure
the intended targets. The Visit Survey 2.0 has face validity as the tool measures specific patient
experience variables. Each variable has corresponding questions within the patient experience
survey. Face validity is present as the correlating questions appear to be specifically related to
the corresponding quality measures.
Additionally, the tool has content validity. To ensure content validity, Polit and Beck
(2017) noted that three core components, relevance, comprehensiveness, and balance are
necessary for content validation. When designing the Visit Survey 2.0, the AHRQ solicited
feedback from numerous groups including stakeholders, accrediting bodies, and consumers to
ensure the tool was valid. According to the AHRQ (2016), major steps were taken to ensure
soundness including performing literature reviews, focus groups, and field testing. Lastly, the
Visit Survey 2.0 has construct validity as CAHPS surveys are used as the gold standard for
measuring patient experience (Ginsberg, n.d.).
Polit and Beck (2017) defined reliability as something being consistent, stable, and
replicable. The Visit Survey 2.0 is a CAHPS survey. CAHPS surveys are consistent tools that
are used across a broad spectrum of healthcare delivery systems due to their reliability. For
example, the surveys are used by the CMS to determine compensation while the National
Committee for Quality Assurance uses the surveys for performance evaluation. Additionally, the
Veterans Health Administration and the Department of Defense utilize the Visit Survey 2.0 to
assess patient experience within military and contracted healthcare settings (Agency for
Healthcare Research and Quality, 2016). The use of these surveys across numerous agencies
highlights the tool’s replicability and reliability.
Quality of Data
The quality of the data was maintained throughout this project by collecting data from
the intended setting and population. The projected sample size was estimated to
be 25 surveys, but only 22 surveys were completed, and only 21 surveys met the inclusion
criteria. A larger sample size would be ideal, but the DPC clinic is a small practice with limited
patients, so a larger sample size was not feasible at the time of the project.
After the IRB granted approval, data collection occurred through a retrospective review
of the patient satisfaction surveys. Each survey was compared against inclusion and exclusion
criteria and numerically transcribed into a Word® document. The data was collected
consistently and was completed within two weeks of obtaining IRB approval. The quality of the
data was maintained by adhering to the intended population and sample size as strictly as
possible and by ensuring the data collection process was consistent and timely.
Critique of Quality of Data
The purpose of this project was to analyze patient satisfaction surveys. Questions that
were directly correlated to patient satisfaction were extracted from the surveys and transcribed
into a Word® document during the data collection process. The intended concept of patient
satisfaction was measured by carefully extracting appropriate questions that matched the target
variables.
To evaluate the intended concept of patient satisfaction, electronic patient satisfaction
surveys were utilized. Utilizing a patient satisfaction survey was an ideal method of data
collection as self-reports, or patient-reported outcomes, are a versatile and efficient means of
gathering data. Self-reports can help capture outcomes and personal opinions that may not
otherwise be known (Polit & Beck, 2017).
The patient satisfaction survey was used as a data collection tool. The Visit Survey 2.0 is
a vetted, valid, and publicly available survey that complies with the CMS guidelines. The Visit
Survey 2.0 was developed to elicit patient feedback on a specific visit as opposed to over a
period of time (AHRQ, 2017a).
Descriptive Statistics
Specific demographics including age, gender, and race were eliminated from the patient
satisfaction surveys to reduce potential breaches of confidentiality. However, participants had to
be 18 years of age or older in order to receive the survey. Nine
different target variables were identified within the survey. Descriptive statistics were used to
analyze the corresponding questions for these target variables. These results are discussed
below.
Results
The first target variable was accessibility to care. The results to questions five, seven,
nine, and 11 were assessed to identify this target variable. Eighty-four responses were included
in the descriptive analysis of this target variable. On average, participants answered that they
usually had access to care. The mode of the responses was always. The standard deviation of
these findings was 0.9.
The second target variable was wait time. The results of question 14 were utilized to
assess this target variable. Twenty-one responses were included in the descriptive analysis of
this variable. Twenty participants indicated that they were seen by their provider within 15
minutes of their scheduled appointment time. The standard deviation of these findings was 0.2.
The third target variable was perceived responsiveness from provider. The results from
question 15 were utilized to assess this target variable. All participants answered yes, definitely
when asked if their provider listened carefully to them during their visit. The standard deviation
of these findings was zero.
The fourth target variable was comprehension of provided information. The results from
questions 16 and 18 were utilized to assess this variable. Forty-two responses were included in
the descriptive analysis of this variable. On average, participants answered yes, definitely when
asked if their provider explained things in a way that was easily understood. The standard
deviation of these findings was 0.1.
The fifth target variable was opportunity to express concern. Question 17 was utilized to
assess the results of this variable. Twenty-one responses were available for descriptive analysis
of this variable. On average, participants answered yes when asked whether they had the
opportunity to address questions and concerns during their visit. The standard deviation of these
findings was 0.4.
The sixth target variable was confidence in the provider’s knowledge. Responses to
question 19 were utilized to assess this variable. All participants reported yes, definitely when
asked if they felt as though their provider knew important information regarding their medical
history. The standard deviation of these findings was zero.
The seventh target variable was perceived respect from the provider. Responses to
question 20 were utilized to assess this variable. All participants reported yes, definitely when
asked if the provider showed respect for what the participant had to say. The standard deviation
of these findings was zero.
The eighth target variable was adequate visit time. Responses to question 21 were
utilized to assess this variable. All participants reported yes, definitely when asked if the
provider spent enough time with the participants during their visit. The standard deviation of
these findings was zero.
The ninth target variable was overall impression of the provider, rated on a scale from
zero (worst provider possible) to 10 (best provider possible). Twenty-one responses to question
24 were utilized to assess this variable. The average rating of the provider was 9.9. The
standard deviation of these findings was 0.4.
Chapter V: Discussion
Summary of Findings
Patient satisfaction is becoming a core component of healthcare delivery systems (CMS,
2017). Patient satisfaction has been found to improve communication between patients and
providers, improve health outcomes, and increase revenue (Bogner et al., 2016; CMS, 2018a;
Farrington, Burt, Boiko, Campbell, & Roland, 2016; Overveld et al., 2017). Healthcare delivery
systems must recognize that measurement of patient satisfaction is becoming a standard of
practice that benefits themselves and their patients. Additionally, measuring patient satisfaction
may very well be mandated for all areas of practice in the future. Considering this, it is
necessary for healthcare practices to ensure there are proper policies and procedures in place to
measure patient experience. One form of eliciting patient feedback is through implementation of
patient satisfaction surveys. Surveys can assess patient experience which then can elucidate
areas of improvement from which follow-up interventions can be developed. This DNP project
demonstrated the successful development and implementation of a patient satisfaction program
through means of surveying in a practice that had not otherwise examined patient experience.
Interpretation of Findings
Eliciting patient feedback through means of surveying has been found to be a useful and
versatile tool. Polit and Beck (2017) noted that self-reports allow researchers to gather data on
topics that may otherwise be difficult to measure. For example, capturing personal opinion
would be impossible without a self-report tool. The findings of this study are congruent with
Polit and Beck’s statements. The study’s patient satisfaction surveys gathered data specific to
patients' personal opinion on the care they received within the DPC clinic.
Additionally, Al-Abri and Al-Balushi (2014) noted that patient satisfaction can be used to
identify successes and inadequacies within healthcare delivery systems. The study findings are
congruent with the literature as strengths within the DPC practice were identified. Some of these
strengths were perceived responsiveness from provider, confidence in provider’s knowledge,
perceived respect from provider, and adequate visit time. Contrarily, the study highlighted
accessibility to care as a shortcoming of the DPC practice.
Analysis
Polit and Beck (2017) noted that descriptive statistics help provide an understanding of
quantitative research. In this study, descriptive statistics were utilized to provide an
understanding of the results for each of the nine target variables. This report indeed provided the
descriptive statistical findings of each target variable with sufficient detail ensuring
understanding of the results. Additionally, this report justifies forgoing other statistical analyses,
such as inferential statistics, by explaining the pilot nature of the study and by noting that no
points of comparison exist at this level of study.
Furthermore, the results of this study were consistently and clearly reported for each
target variable in this report. The results were succinctly described by using mean, mode, and
standard deviation. Also, response rates were listed with each target variable to provide further
understanding of the methodologic features of the study.
Strengths and Limitations
Strength was achieved within this study through the consistency and reliability of the data
collection and data analysis processes. The data collection for this research adhered to the IRB
proposal and occurred within two weeks of being granted approval. The collected data was
extracted from patient satisfaction surveys that had all patient identifiers eliminated prior to
dissemination. This helped eliminate any potential bias or confidentiality breaches. Descriptive
statistics were utilized to analyze each target variable. The results were analyzed and reported
consistently for each variable.
The research for this study was conducted by performing a retrospective chart review of
21 patient satisfaction surveys. The surveys were obtained from a DPC clinic in the Southeast
region of the United States. Those completing the survey had to be at least 18 years old. The
retrospective chart review included all surveys that met inclusion criteria. One of the 22 surveys
collected did not meet inclusion criteria, so it was excluded from the analysis. One limitation of
the study was the small sample size. A larger sample size
would be ideal but was not feasible as the DPC practice has a limited number of patients.
Additionally, this was a pilot study, so a small sample size was acceptable at this stage of
research.
Another limitation of the study was the unknown diversity of the sample, as all potential
identifiers, including race and gender, were eliminated from the surveys before implementation.
Lastly, potential bias could exist within the survey responses, as participants could have felt
compelled to provide positive ratings of their healthcare provider. Steps were taken to mitigate
these potential biases by using a web-based survey rather than an in-person survey, not
providing compensation for participation, and not actively recruiting participants.
Implications for Research and Practice
The purpose of this scholarly project was to evaluate the results of the patient satisfaction
surveys that were implemented via a pilot study within a DPC clinic. The project aimed to
identify strengths and weaknesses within the clinic from which follow-up interventions could be
derived, to implement a process for measuring patient satisfaction in compliance with healthcare
standards of practice, and to highlight the strengths and uniqueness of the DPC healthcare
model.
This project met its aims and purpose. The study implemented a process for measuring
patient satisfaction, demonstrating that other healthcare practices and providers are capable of
implementing such a program. Additionally, the results of this study showed that patients are
satisfied with the care they received within the DPC healthcare model. These findings add to the
literature on the positive attributes of the DPC model of care. The overall findings have positive
implications for nurse practitioners, other healthcare providers and practices, and patients.
Nursing and Healthcare
Patient satisfaction surveys help quantify a patient’s experience with the care they
received, and quantifying patient satisfaction makes it possible to improve the quality of care
rendered. APNs can directly influence patient satisfaction, as they work closely with patients in
a variety of settings across all avenues of healthcare. It is within an APN’s scope of practice to
promote positive health outcomes for patients; a component of this is eliciting patient
satisfaction and implementing interventions in response to the feedback received (American
Association of Colleges of Nursing, 2006; Gerrish et al., 2011). This scholarly project
demonstrates how nurse practitioners and other providers can improve the care
provided within their area of practice.
Benefit of Project to Practice
The DPC healthcare model is a relatively new way of delivering care, and limited
literature supports its use. The results of this project add to the existing literature indicating that
patients are satisfied with the care they receive within DPC clinics. Future research could
compare the results of this study with satisfaction results from a traditional healthcare model.
In addition to expanding the existing body of DPC literature, this DNP project directly
benefits the DPC clinic where the study took place. First, a patient satisfaction tool has been
created for the clinic’s use. Second, this project developed a method for polling patient
satisfaction within the clinic, which previously had no way of measuring a patient’s experience.
Lastly, now that the data analysis has concluded, the clinic can use the results to develop
appropriate follow-up interventions to improve patient satisfaction within the clinic.
Recommendations
Recommendations regarding future use of this study are as follows. First, further
analysis can be performed on the current data set as only descriptive statistics have been
performed at this time. For example, chi-square testing could be utilized to assess if there is a
relationship between any of the target variables. Secondly, the DPC clinic should make
appropriate follow-up interventions in response to the data analysis. For example, for the first
target variable, accessibility to care, some patients reported something other than always having
access to care. An appropriate follow-up intervention could be a callback protocol; for instance,
the clinic could mandate that every call or email be returned within 24 hours to ensure
accessibility to care. Each target variable should be examined closely, and appropriate
interventions should be developed. Now that there is baseline data regarding patient satisfaction
within the DPC clinic, the survey that was developed can be used consistently to track and
monitor patient experience.
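The chi-square analysis recommended above could be carried out as sketched below, assuming SciPy is available. The contingency table is hypothetical, invented purely for illustration, pairing two of the study's yes/no-style variables; it is not the study's data. With only 21 surveys, expected cell counts may fall below five, so Fisher's exact test is shown as a small-sample alternative for a 2x2 table.

```python
# Hypothetical sketch: testing for a relationship between two target variables.
# Counts below are illustrative only, not the study's data.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: seen within 15 minutes of appointment time (yes / no)
# Columns: provider spent enough time (yes, definitely / other)
table = [[12, 2],
         [3, 4]]

# Chi-square test of independence (Yates continuity correction applied for 2x2)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")

# With n=21, expected counts are likely small; Fisher's exact test avoids
# the chi-square expected-count assumption for 2x2 tables.
odds_ratio, p_exact = fisher_exact(table)
print(f"Fisher exact p={p_exact:.3f}")
```

A statistically significant result would suggest the two variables are related (for instance, that short wait times are associated with perceiving adequate visit time), which could help the clinic prioritize interventions.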
Conclusion
Measuring patient satisfaction is becoming a crucial component of healthcare delivery
systems. Through eliciting patient feedback, healthcare delivery systems can improve the level
of communication between patients and providers, decrease health complications and mortality,
and increase revenue. This DNP scholarly project served to evaluate the results of patient
satisfaction surveys from a pilot study. This study retrospectively reviewed 21 surveys from a
DPC practice in the Southeast region of the United States. The results were congruent with the
literature in that self-reporting via surveys can elucidate personal opinions that may be difficult
to measure otherwise. The results of this study identified numerous areas of strength and minor
areas of weakness within the DPC practice. From the results of this study, the DPC clinic can
now formulate corrective interventions which will ultimately improve the quality of care that is
rendered within the practice. Additionally, this report has modeled how to implement a patient
satisfaction program. Lastly, this study has expanded the body of evidence that supports the
uniqueness of the DPC healthcare model. It is anticipated that the results of this study will
improve the care rendered at the DPC clinic, encourage other providers to adopt the DPC
healthcare model, and inspire other healthcare delivery systems to measure patient satisfaction.
References
Agency for Healthcare Research and Quality. (2016). CAHPS: Assessing health care quality
from the patient's perspective. Retrieved from
https://www.ahrq.gov/cahps/about-cahps/cahps-program/cahps_brief.html
Agency for Healthcare Research and Quality. (2017a). CAHPS clinician & group Visit Survey
2.0. Retrieved from https://www.ahrq.gov/cahps/surveys-guidance/cg/visit/index.html
Agency for Healthcare Research and Quality. (2017b). What is patient experience? Retrieved
from https://www.ahrq.gov/cahps/about-cahps/patient-experience/index.html
Al-Abri, R., & Al-Balushi, A. (2014). Patient satisfaction survey as a tool towards quality
improvement. Oman Medical Journal, 29(1). doi: 10.5001/omj.2014.02
Allen, T., Whittaker, W., Kontopantelis, E., & Sutton, M. (2018). Influence of financial and
reputational incentives on primary care performance: A longitudinal study. British
Journal of General Practice, 68(677). doi: 10.3399/bjgp18X699797
American Academy of Family Physicians. (2019). Direct primary care. Retrieved from
https://www.aafp.org/practice-management/payment/dpc.html
American Association of Colleges of Nursing. (2006). The essentials of doctoral education for
advanced nursing practice. Retrieved from http://www.aacnnursing.org/DNP/DNP-
Essentials
Berkowitz, B. (2016). The patient experience and patient satisfaction: Measurement of a
complex dynamic. The Online Journal of Issues in Nursing, 21(1). doi:
10.3912/OJIN.Vol21No01Man01
Bogner, H., McClintock, H., Kurichi, J., Kwong, P., Xie, D., Hennessy, S., Streim, J., &
Stineman, J. (2016). Patient satisfaction and prognosis for functional improvement and
deterioration, institutionalization, and death among Medicare beneficiaries over two
years. Archives of Physical Medicine and Rehabilitation, 98(1). doi:
10.1016/j.apmr.2016.07.028
Boiko, O., Campbell, J., Elmore, N., Davey, A., Roland, M., & Burt, J. (2014). The role of
patient experience surveys in quality assurance and improvement: A focus group study in
English general practice. Health Expectations, 18(6). doi: 10.1111/hex.12298
BroadBandNow. (2018). Internet service providers in Miami, Florida. Retrieved from
https://broadbandnow.com/Florida/Miami
Carper, B. (1978). Fundamental patterns of knowing in nursing. ANS, 1(1). Retrieved from
http://samples.jbpub.com/9780763765705/65705_CH03_V1xx.pdf
Centers for Medicare and Medicaid Services. (2012). Frequently asked questions hospital value-
based purchasing program. Retrieved from https://www.cms.gov/Medicare/Quality-
Initiatives-Patient-Assessment-Instruments/hospital-value-based-purchasing/
downloads/HVBPFAQ022812.pdf
Centers for Medicare and Medicaid Services. (2017). HCAHPS fact sheet. Retrieved from
https://www.hcahpsonline.org/globalassets/hcahps/facts/hcahps_fact_sheet_november_20
17.pdf
Centers for Medicare and Medicaid Services. (2018a). HCAHPS: Patients' perspectives of care
survey. Retrieved from https://www.cms.gov/Medicare/Quality-Initiatives-Patient-
Assessment-Instruments/HospitalQualityInits/HospitalHCAHPS.html
Centers for Medicare and Medicaid Services. (2018b). Consumer assessment of healthcare
providers & systems (CAHPS). Retrieved from https://www.cms.gov/Research-Statistics-
Data-and-Systems/Research/CAHPS/index.html
DPC Frontier. (2018). DPC mapper. Retrieved from https://www.dpcfrontier.com/mapper
Electricity Local. (2018). Florida electricity rates & consumption. Retrieved from
https://www.electricitylocal.com/states/florida/
Eskew, P., & Klink, K. (2015). Direct primary care: Practice distribution and cost across the
nation. Journal of the American Board of Family Medicine, 28(6). doi:
10.3122/jabfm.2015.06.140337
Farrington, C., Burt, J., Boiko, O., Campbell, J., & Roland, M. (2016). Doctors’ engagements
with patient experience surveys in primary and secondary care: A qualitative study.
Health Expectations, 20(3). doi: 10.1111/hex.12465
Gerrish, K., McDonnell, A., Nolan, M., Guillaume, L., Kirshbaum, M., & Tod, A. (2011). The
role of advanced practice nurses in knowledge brokering as a means of promoting
evidence-based practice among clinical nurses. Journal of Advanced Nursing, 67(9),
2004-2014. doi:10.1111/j.1365-2648.2011.05642
Ginsberg, C. (n.d.). AHRQ and AHRQ’s CAHPS program. Retrieved from
https://www.ahrq.gov/sites/default/files/wysiwyg/cahps/news-and-events/events/
webinars/nyp-webinar-ginsberg.pdf
Internal Revenue Service. (2018). Standard mileage rates for 2018 up from rates for 2017.
Retrieved from https://www.irs.gov/newsroom/standard-mileage-rates-for-2018-up-from-
rates-for-2017
Lund Research Ltd. (2018). Descriptive and inferential statistics. Retrieved from
https://statistics.laerd.com/statistical-guides/descriptive-inferential-statistics.php
Maslow, A. (1943). A theory of human motivation. Psychological Review, 50. Retrieved from
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.318.2317&rep=rep1&type=pdf
Microsoft. (2019). Office 365 personal. Retrieved from
https://www.microsoft.com/en-us/p/office-365-personal/
Minchin, M., Roland, M., Richardson, J., Rowark, S., & Guthrie, B. (2018). Quality of care in
the United Kingdom after removal of financial incentives. New England Journal of
Medicine, 379(10). doi: 10.1056/NEJMsa1801495
Moran, K., Burson, R., & Conrad, D. (2014). The Doctor of Nursing Practice scholarly project:
A framework for success. Burlington: Jones & Bartlett Learning.
New England Journal of Medicine Catalyst. (2018). Patient satisfaction surveys. Retrieved from
https://catalyst.nejm.org/patient-satisfaction-surveys/
Overveld, L., Takes, R., Vijn, T., Braspenning, J., Boer, J., Brouns, J., … Merkx, M. (2017).
Feedback preferences of patients, professionals and health insurers in integrated head and
neck cancer care. Health Expectations, 20(6), 1275–1288. https://doi-
org.proxy.library.maryville.edu/10.1111/hex.12567
Polit, D. F., & Beck, C. T. (2017). Nursing research: Generating and assessing evidence for
nursing practice. Philadelphia: Wolters Kluwer.
Qualtrics. (2018). What is a survey? Retrieved from https://www.qualtrics.com/experience-
management/research/survey-basics/
Wolk, A., Wang, E., Horak, B., Cloonan, P., Adams, M., Moore, E., … Grossman, M. (2013).
Effect of modest pay-for-performance financial incentive on time-to-discharge summary
dictation among medical residents. Q Manage Health Care, 22(4). doi:
10.1097/QMH.0000000000000008
Appendix A
Patient Satisfaction Survey Questions Used for this Project
NOTE: Below are the questions that were abstracted from the survey.
CAHPS® Visit Survey 2.0 (English)
Your Privacy is Protected. All information that would let someone identify you or your family will be kept private. Your Family MD will not share your personal information with anyone without your OK. Your responses to this survey are also completely confidential.
Your Participation is Voluntary. You may choose to answer this survey or not. If you choose not to, this will not affect the health care you get.
5. In the last 12 months, when you phoned Your Family MD’s office to get an appointment for care you needed right away, how often did you get an appointment as soon as you needed?
Never
Sometimes
Usually
Always
7. In the last 12 months, when you made an appointment for a check-up or routine care, how often did you get an appointment as soon as you needed?
Never
Sometimes
Usually
Always
9. In the last 12 months, when you phoned Your Family MD’s office during regular office hours, how often did you get an answer to your medical question that same day?
Never
Sometimes
Usually
Always
11. In the last 12 months, when you phoned Your Family MD’s office after regular office hours, how often did you get an answer to your medical question as soon as you needed?
Never
Sometimes
Usually
Always
14. Wait time includes time spent in the waiting room and exam room. During your most recent visit, did you see your provider within 15 minutes of your appointment time?
Yes
No
15. During your most recent visit, did the provider listen carefully to you?
Yes, definitely
Yes, somewhat
No
16. During your most recent visit, did the provider explain things in a way that was easy to understand?
Yes, definitely
Yes, somewhat
No
17. During your most recent visit, did you talk with provider about any health questions or concerns?
Yes
No
18. During your most recent visit, did your provider give you easy to understand information about these health questions or concerns?
Yes, definitely
Yes, somewhat
No
19. During your most recent visit, did the provider seem to know the important information about your medical history?
Yes, definitely
Yes, somewhat
No
20. During your most recent visit, did the provider show respect for what you had to say?
Yes, definitely
Yes, somewhat
No
21. During your most recent visit, did the provider spend enough time with you?
Yes, definitely
Yes, somewhat
No
24. Using any number from 0 to 10, where 0 is the worst provider possible and 10 is the best provider possible, what number would you use to rate Your Family MD?
Response scale: 10 (best provider possible), 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 (worst provider possible)
Appendix B
Data Collection Form
Participants were 18 years or older; all genders were included and were seen at the clinic during
July 9, 2018 through September 9, 2018.
Survey # (rows numbered 1 through 25)
Q. 5: “In the last 12 months, when you phoned Your Family MD’s office to get an appointment for care you needed right away, how often did you get an appointment as soon as you needed?” 1 = never, 2 = sometimes, 3 = usually, 4 = always
Q. 7: “In the last 12 months, when you made an appointment for a check-up or routine care, how often did you get an appointment as soon as you needed?” 1 = never, 2 = sometimes, 3 = usually, 4 = always
Q. 9: “In the last 12 months, when you phoned Your Family MD’s office during regular office hours, how often did you get an answer to your medical question that same day?” 1 = never, 2 = sometimes, 3 = usually, 4 = always
Q. 11: “In the last 12 months, when you phoned Your Family MD’s office after regular office hours, how often did you get an answer to your medical question as soon as you needed?” 1 = never, 2 = sometimes, 3 = usually, 4 = always
Q. 14: “Wait time includes time spent in the waiting room and exam room. During your most recent visit, did you see your provider within 15 minutes of your appointment time?” 1 = yes, 2 = no
Q. 15: “During your most recent visit, did the provider listen carefully to you?” 1 = yes, definitely; 2 = yes, somewhat; 3 = no
Q. 16: “During your most recent visit, did the provider explain things in a way that was easy to understand?” 1 = yes, definitely; 2 = yes, somewhat; 3 = no
Survey # (rows numbered 1 through 25)
Q. 17: “During your most recent visit, did you talk with provider about any health questions or concerns?” 1 = yes, 2 = no
Q. 18: “During your most recent visit, did your provider give you easy to understand information about these health questions or concerns?” 1 = yes, definitely; 2 = yes, somewhat; 3 = no
Q. 19: “During your most recent visit, did the provider seem to know the important information about your medical history?” 1 = yes, definitely; 2 = yes, somewhat; 3 = no
Q. 20: “During your most recent visit, did the provider show respect for what you had to say?” 1 = yes, definitely; 2 = yes, somewhat; 3 = no
Q. 21: “During your most recent visit, did the provider spend enough time with you?” 1 = yes, definitely; 2 = yes, somewhat; 3 = no
Q. 24: “Using any number from 0 to 10, where 0 is the worst provider possible and 10 is the best provider possible, what number would you use to rate Your Family MD?” 0 = worst provider through 10 = best provider