Frequently Asked Questions

The information shared here is unique to this page and not available anywhere else on the website. You may want to skim the questions for each phase before your team begins working.

Phase 1: Analyzing a Problem

The questions below represent common issues that come up during the theory of improvement phase.
Q1 What is the first step in analyzing a problem? Are there certain essential starting points?

Before the full group begins to analyze a problem, leadership will have identified a problem focus area, an urgent and important area of need confronting the school. These are the kinds of issues that fall in the zone of being “big enough to matter, but small enough to win,” in Jonathan Kozol’s phrase. Once leadership has zeroed in on this balance, the next step for the whole group is to identify a specific problem within the focus area to target. This should emerge from the research and analysis, and will eventually be crafted into a much more tightly defined problem statement. See the accompanying examples of good problems to analyze.
Q2 People on the team disagree on what the problem really is. What should we do?

There should be little to no disagreement on what the problem focus area is—that should be decided by leadership before the group has fully assembled. However, expect plenty of disagreement about the factors affecting the specific problem, the wording of your problem statement, and the specific action steps the group will pursue. These are natural points for discussion, often arising from the complexity of the entire system the problem is embedded within, and a lack of debate would be more worrisome than occasional disagreement. Rather than seeing disagreement as an obstacle, the group should dive into these areas of disagreement, using them as launching points for more research and analysis, and thinking about the people and places within the system that should be consulted to resolve them.

Q3 How do I know when I've arrived at a problem statement that is specific enough?

Once you have a firm grasp on the system and the key features affecting the problem focus area, you will be able to identify the specific manifestation of the problem that you can influence. Begin by exploring a range of different factors that influence your problem focus area, eventually homing in on one that both has measurable outcomes and is a key obstacle to your school’s ability to boost student achievement. Keep in mind that your ultimate goal is to identify high-leverage changes that will have a large impact. You can feel confident that you are prepared to move on when you have identified between two and five causal factors that are both within your control to change and highly influential on the problem itself. However, if you experience any of the following concerns, consider doing more research before proceeding:

  • You don’t feel confident the factors you have identified are highly influential on the problem itself. 
  • You have trouble imagining a specific picture of what success could look like, in terms of student behaviors or skills. If the image in your head is vague, you may still need to clarify your understanding of the problem. 
  • There are parts of your problem focus area that still feel mysterious or unclear. 
  • There are important stakeholders whose perspectives you have yet to consider. 
Q4 What do I do if I can't decide on which tool(s) to use?

Determining the right tool(s) to use at the right time depends on a few key factors: What are your time constraints? Whose perspectives are important to gather at this stage? What experience or expertise can you draw on from your team? The six questions in the Analysis Tool Selector can help you weigh these factors. For each question, consider where your team’s needs fall along the spectrum and circle the tool that best answers the question for you. Once you have answered all six questions, tally your marks and select the tool you have circled the most.

Q5 We arrived at explanations quickly. Is it possible we did something wrong or that our findings are too simple?

This is a common concern, and it typically stems from rushing through some of the research, data gathering, and investigative steps. Make sure that you:

  • considered all angles and stakeholders within the system where your problem is located. Have you drilled down deep and asked the “5 Whys” to get past areas of comfort into less familiar areas? 
  • grasped the experience of “ground floor” users affected by the problem, be they students or teachers. Consider getting out of your typical routine and seeing things from new perspectives, whether through surveys, interviews, or shadowing. 
  • investigated specific cases rather than averages. Zoom in on outliers and anomalies—that is, experiences that look surprising or unusual—trying to figure out why they occur. 
  • approached the problem with fresh eyes. Sometimes we attack a problem with a solution already in mind, and when we do, we are unlikely to truly explore the problem and consider all root causes, no matter how well grounded that solution seems.
Q6 How long does this stage—analyzing a problem—typically take?

While there is no firm timeline for analyzing a problem, the work is fairly linear. It begins once you have a problem focus area, peaks when you have a tightly defined problem statement that provides clear avenues for research and investigation, and ends when you have started to identify actionable steps you think will effect change that you can measure. In the end, the exact timing will depend on the individual tool you use, but don’t spend too long on this step—more than a month is excessive. One common mistake is aiming for the perfect analysis. Be ready to move on even if you are not certain your action will lead to success—taking small, initial actions almost always means leaving the comfort zone of research. Your team won’t stop learning about the problem when you move on to action; some of the best learning occurs when we try to effect change. Your team will continue analyzing and rethinking as you move on to the next steps.

Phase 2: Creating a Theory of Improvement

The questions below represent common issues that come up during the theory of improvement phase.
Q1 Where should I start when creating a driver diagram?

In general, there are two approaches to creating a driver diagram: working from your aim and working from potential solutions. In practice you’ll probably flip back and forth between the two rather than strictly following one approach or the other. We’ll examine both of these side by side below, but in either case the absolute first step begins by gathering and organizing your research and analysis from Phase 1: Analyzing a Problem. 

Working from your aim is an intuitive next step from your research. Your team will take what you have learned about causes and sub-causes from your fishbone, empathy mapping, and/or community pulse, and organize these insights into drivers. For example, you may have discovered that students give up when they encounter difficult math problems on exams because they: 

  • don’t believe they can pass. 
  • feel like they’ve tried before and never make any progress. 
  • think that they just aren’t “math people.” 

These issues could all be addressed by a common primary driver: “Students believe they can improve their math ability with effort.” Once you have a primary driver, you can drill into secondary drivers capturing the specific areas in which you might engage this driver (e.g., method of feedback, allowing opportunities for revision); you can map underlying change ideas on the diagram (e.g., a mini-lesson on growth mindset every Monday, or using words like “yet” in written feedback). In this way you are building your driver diagram from a broad vision, starting from the problem focus area and then thinking about causes with a progressively finer focus.

Working from potential solutions takes the opposite approach. It means starting from possible change ideas and building outward. Begin by thinking about the potential solutions that have come up during your problem analysis and that might currently reside in your Solutions Parking Lot. Use those ideas to build outward, thinking about how each connects to drivers and your aim. This approach is often unavoidable during the brainstorm process—and there is nothing wrong with it. But beware of “solutionitis,” the tendency to latch onto easy, quick-fix solutions, by taking the time to articulate a clear theory of what you expect to accomplish and how, specifically, it will move you toward your aim.

For example, consider how we might build a driver diagram working from a potential solution for the problem focus area “Increasing student persistence in math classes.” If, during a survey or interview in our research, we noticed that an individual teacher had great success building lessons around a set of techniques called “the math habits of mind,” we might start from there. We then might ask what moments, structures, or routines in our classroom could benefit from building these habits, thus finding secondary drivers. This would be an example of working upwards from a solution. Remember, in practice, you will probably find yourself using both approaches simultaneously when building your driver diagram—and there is nothing wrong with that! 

Q2 Who should be involved in creating a driver diagram? Should we solicit outside expertise beyond our team?

Including multiple stakeholders and perspectives is essential in the overall improvement science process, but seeking outside expertise need not be a primary focus at this point. By this stage, your own team has gathered insights, completed an analysis, and is a more important resource than outsiders who won’t be involved in implementing changes. That said, outside expertise can be a great complement to your team’s knowledge of the local context; try to strike the right balance. 

If you find yourself stalled out—struggling to identify drivers or unable to drill down into specific changes in practice—remember that this is a nonlinear process and there is always the chance to clarify your understanding of the problem by returning to and deepening the analysis in Phase 1: Analyzing a Problem. In these cases, soliciting input from research or other outside sources of expertise can be extremely helpful in propelling you forward. 

Q3 I'm having trouble identifying drivers. What are some suggestions for identifying them?

The first step when getting stuck on drivers is to return to your research with new eyes and get specific. Look at findings from individual students. Think about earlier results from your surveys. Make use of the “user perspective” as much as possible, thinking about who specifically the changes will impact and what kind of improvement you expect to come from those changes. Consult research articles and outside experts. If you are taking a bottom-up approach (i.e., working from potential solutions) and drivers are hard to identify, there is a chance the solutions you are examining are not specific enough. Again, ask yourself what changes you would expect to see in an individual student if your change idea were implemented. The resulting changes should be examples of your drivers.

Q4 Can driver diagrams be updated? Or are they unchanging laminated reference tools?

Yes, driver diagrams can and should be updated! Laminate only if you are looking for a placemat! Your driver diagram is a working document not intended to be comprehensive or permanent. It is a snapshot of a working theory reflecting your knowledge and understanding at a given moment in time. It will evolve as your interaction with the problem continues. Incorporate your best thinking into the drafting process and then move on, knowing that the insights you gain from testing will be more valuable than any attempt at making your diagram perfect right now could be. Keep in mind that the purpose of having a documented working theory of improvement is so that you can revisit and revise it as you learn more about your improvement project. In particular, you will be revisiting and enhancing the driver diagram after you run your first tests, and again after those tests yield concrete insights you attempt to scale. Remember, if you don’t revise your driver diagram, you aren’t learning! 

Q5 How long does creating a driver diagram typically take? How do we know when it is time to move on?

You will know that the research and analysis you have completed are sufficient when you can create a draft of a driver diagram in a single 60-to-90-minute session. The surest sign it is time to move on is when you have a set of change ideas—specific, measurable, achievable, and within your control to influence—that you are excited to test and believe will have an impact on your problem. If you are unable to come up with these change ideas, or are banging your head against the table trying to come up with a single driver, rather than belabor the driver diagram, consider moving back to the research and analysis stage described in Phase 1: Analyzing a Problem. However, in most cases, it is desirable to embrace a change idea, begin implementation, and let experience develop your theory for a while. Then you can come back to your driver diagram with a new perspective.

Q6 How do I know if my driver diagram reaches the correct level of detail?

Finding the right level of detail in your driver diagram may take a little discussion. Generally you may be too “zoomed out” if there are areas that feel fuzzy or unclear, gaps between causes and effects, or statements that are vague, unmeasurable, or hard to define. On the other hand, you might be too “zoomed in” if your drivers overlap so much they are hard to distinguish, or if your aim feels small, inconsequential, or directly connected to your change ideas with no drivers in between them. If you suspect any of these might be plaguing your driver diagram, use the Revising Your Driver Diagram tool to deepen and clarify the components of your driver diagram. 

Q7 When selecting the change ideas to focus on, how do I know which to prioritize? I know I should focus on those that are "high leverage", but how can I identify them?

On a fundamental level, trust your gut when thinking about high-leverage change. Ask yourself which of the change ideas you are most excited about trying out based on your own experience. Which change ideas do you think will have the greatest impact? Which change ideas can you implement immediately? These are your high-leverage changes. 

If you get stuck, high-leverage change ideas typically meet the following three criteria: 

  • The change idea addresses a problem that affects a significant number of students or a few students in a significant way once implemented at scale. 
  • The change idea does not require a large investment of time, money, or other resources to test initially. 
  • The change idea is connected to other problems facing students such that solving this one problem may make solving subsequent problems easier. 
ADVANCED QUESTION: My team is working on a particularly thorny problem that feels too complex for a driver diagram. What should we do?

If you are working on a particularly large and complex problem, it is possible to create a driver diagram within a driver diagram to explore a subsystem. These “nested” driver diagrams can be helpful if you are struggling to include both specificity and a larger long-term aim in one diagram. If you do decide to use a nested driver diagram, make sure the subsystem is independent enough from the rest of the improvement project to stand on its own.

In a similar fashion, if your aim statement is focused on a very distant “big dot” goal—perhaps only achievable in a 3-to-5-year time period—teams often find it helpful to create a “small dot” aim that articulates a goal that can be accomplished in a year or less. This creates a target that is both more attainable and gives teams a chance to narrow their focus so that drivers and change ideas might flow more freely. For more on “big dot” goals and “small dot” aims, see the Crafting an Aim Statement tool. 

Phase 3: Testing Changes

The questions below represent common issues that come up when learning how to test changes and use the process to drive progress towards your aim.
Q1 How do changes that are this small get us anywhere?

Improvement science follows on the back of years of more traditional approaches in educational reform—large initiatives, often backed by compelling research and big-money donors, involving years of planning before implementation—that have had a decidedly mixed impact on the lives of actual students. A cornerstone belief of improvement science is that sustainable improvement is never made all at once, but rather occurs in small, refined steps that build on each other over time. Starting small allows you to focus on the details of a new practice in your school, for your students, and optimize it so that as it expands it continues to work effectively for as many students as possible. Small changes also provide a safe space for people to acknowledge what they don’t know, to admit failure, and to learn. It is much better to do these things in a small way and scale deliberately as skill and confidence grow.

Q2 I've bought into the idea of starting small. But what exactly am I testing? How do I know the difference between a small change and a meaningless one?

You are testing small changes to the way you usually work, to your normal classroom routine, or to your students’ typical approach to their work—and you are doing so in bite-size portions connected to your theory of improvement (as outlined in your driver diagram). We call these change ideas to distinguish them from larger initiatives or vague concepts. They should involve a single step, occur over a short period of time, and be tried with only a small sample. It is also critical that they produce a measurable result, which you can collect as evidence. If your change idea is the thoughtful result of your problem analysis and produces a measurable effect, it is unlikely to be too small to be meaningful. Try it out. Learn from it. Refine and expand it. For ideas on choosing a change idea, see the Change Idea Checklist tool. For ideas on how to measure a change idea, see Phase 4: Measuring Your Progress.

Q3 When does the testing happen?

As soon as possible! PDSA cycles are designed to be easy to implement, low stakes, and practical. We want to be thoughtful and deliberate with our actions, but we are not trying to build the perfect solution before testing, but through testing. Take one meeting to plan the test, fill out the PDSA form, and make predictions based on your hypothesis. Then begin implementation. This “do” portion of the PDSA—when you try out a change idea—should occur during the course of normal work. For instance, if you are testing a new feedback tool, test it during your normal classroom routine and collect data using existing “embedded” measures—such as counting during class how many students use the feedback to revise their work. The study and act steps take place in later meetings. The reflection on a previous test can happen in the same meeting as the planning for the next one; it just depends on the specifics of the test, the gaps in knowledge revealed, and the group’s intuition as to whether they have fruitful change ideas to test next.

Q4 How long does a test take?

A single test should take no more than two weeks from beginning to end. For your first several attempts, you should aim for even quicker than that: to plan, test, and review in one week or less. Remember you want rapid, low-stakes tests, especially in the beginning, to build knowledge and experience through action. As you gain expertise in running PDSAs, this timing might shift depending on your needs, and it is certainly possible for an experienced team that has built considerable expertise from the ground up to find itself running longer tests. But again, that is not a starting place; it is the result of hard-fought progress. 

Q5 Who should be involved?

Initially, you will want the whole team involved in planning and reviewing PDSA tests so that everyone learns the process by engaging in it. The person(s) carrying out the test should be someone whose daily work is being changed. As your team becomes more experienced with the PDSA process, you can reduce the number of people involved in testing individual change ideas, but there should always be a minimum of two people involved and frequent check-ins with team leadership. 

Having multiple people involved in the process is helpful because their experience and perspective can provide a useful check on the thinking of the person carrying out the test. Having fewer people makes it possible to carry out more tests at once or to free up team members to focus on other aspects of the work. Try to strike a balance that works for your team and the changes they are working on.

Q6 What is the point of the PDSA form?

The PDSA form is your workspace: a place for planning, recording, allowing ideas to shift, and ultimately refining improvements. The PDSA form simplifies the execution of the Plan-Do-Study-Act cycle by allowing each of the steps to be recorded in a single document. It operates as a guide, teaching you how to test a change idea within the PDSA cycle by clearly marking the steps to completion. Finally, the form (alongside any other evidence collected) can support group reflection and the sharing of lessons beyond your team. You might also think of it in the same way you would a scientist’s log or an anthropologist’s field notes; it exists to help connect the discrete pieces of evidence you collect and ultimately provide rigor to your testing.

Q7 Can we run multiple tests at the same time?

Absolutely! Eventually! Multiple tests are a great idea for advanced teams, but probably not appropriate for beginners. Wait until the team is experienced with the process and operating smoothly enough to allow work to be delegated. More tests mean faster progress and more opportunities for team members to work on change ideas that motivate them, but they also require more work, more coordination, and more meeting times to reflect on the results. When a team is ready to begin running multiple tests at the same time, the team lead should continue to oversee tests and give guidance when planning appropriate questions and data-collection strategies for each test. However, the team will function better when planning in small groups or pods. 

Q8 What kinds of questions should we be trying to answer through our test?

There are many types of questions you can choose to try to answer in a PDSA test. At first you will be most concerned with whether or not a change idea is even feasible—can it be implemented as part of your usual routine? Once implemented, you may focus on students’ behaviors or learning—for which students is the practice working and under what conditions? The important thing is to let yourself be guided by your doubts and your curiosity, and try to focus more on how a change is working rather than if it is working. 

If you are in need of tips: 

  • Ask yourself what you hope to accomplish with the test. Feel free to look back at your driver diagram and use the driver(s) this change idea attaches to. 
  • Predict what might go wrong or who might struggle. Paying attention to obstacles is another good approach to asking fruitful questions. 
  • Pay attention to what you need to learn in order to make progress towards your aim. Take an active role in investigating the problem by asking the kinds of hard questions that you don’t know the answers to. 
Q9 What is the point of making predictions?

Predictions may seem unimportant, but all human beings suffer from something called confirmation bias, whereby we interpret information so that it suits our preexisting beliefs. This can distort how we interpret data and cause us to miss opportunities for learning. By making explicit predictions, we state a hypothesis about what we expect to happen, so that if we are wrong we are forced to deal with the question “Why didn’t this go as planned?” Those moments when our predictions fail are some of the most valuable opportunities for learning. Take advantage of these moments to make sense of what went wrong and how your previous thinking contributed. Remember that nobody has a perfect understanding of anything, especially the thorny problems improvement science is designed for. Uncovering the flawed assumptions that underlie how we work will lead to more effective practice and even breakthroughs that transform our work.

Q10 How do we know what kind of data to collect?

PDSA cycles are supposed to be quick and informative. The data you choose to collect should support that end. Make sure the evidence you collect is well suited to answering the questions you care about, easy to find and gather, and embedded in your normal classroom routines. For instance, in the example of testing out a new feedback protocol to see if it increases the number of students who complete revisions on their work, the evidence that the protocol is working might include the number of pieces of feedback addressed in the revisions or the amount of time students spend revising. If data collection is too onerous, it can get in the way of the rest of your work and prevent you from noticing the many intangible effects of the changes you are making. 

Don’t fret if your data collection isn’t perfect; at this stage, it only has to be “good enough.” Once your tests start showing consistent positive impact, then it makes sense to start asking harder questions with more rigorous data. Until then, prioritize data that supports your learning over the desire for definitive proof of success. For more information on data collection, see Phase 4: Measuring Your Progress.

Q11 Should data be quantitative or qualitative?

In recent years, the emphasis within education has been on hard, quantitative data. Quantitative measures should certainly anchor how you track your long-term progress, but you should not feel limited to numeric indicators while testing small changes. In fact, most early tests use qualitative data, such as open-ended questions to students or teachers. Simply asking them how the change went is a great place to start. The anecdotal evidence you gather by simply paying attention is often the key to deciphering what “hard” data tell you. Later, when you have a more solid understanding of what tends to happen as a result of your change, it will be easier to know which numeric indicators are appropriate. Remember, the data should be good enough to help you decide what you can do better next week.

Q12 When is it time to abandon a change idea that isn't working?

You shouldn’t persist with a change idea that isn’t getting you anywhere. As a general rule, you should feel free to move on to something more promising any time you aren’t seeing results. However, you should keep a couple of caveats in mind. First, there is a lesson in every failure. You should try to make sure you reflect on what happened to ensure that any potentially valuable insights aren’t missed. This is especially important when a setback is unexpected, since these are often opportunities to deepen your understanding of the problem you are trying to solve. 

Second, abandoning a change idea outright can sometimes be an overreaction. Many changes to practice that are insufficiently effective by themselves can work when supplemented by other change ideas. If you have reason to believe that your change idea remains promising and your team feels good about continuing, then adapting it to work better next time may be a wiser course of action. Pay attention to how the change is working, rather than if it is working, and it will become easier to distinguish between the aspects of the change that are worth keeping and the areas that need work. 

Q13 When should we look back at the driver diagram?

The driver diagram is your guiding document, and it should be revisited anytime you want to take stock of progress towards your larger aim. Spending time working on small changes can sometimes leave you feeling lost in the weeds. When this occurs, it is probably a good time to take out your driver diagram and update it based on what you have learned. Testing leads to all kinds of insights into the nature of the problem and which strategies are likely to be effective. Make sure you dedicate a few minutes to capturing these every once in a while, ideally after a “PDSA burst.” A driver diagram is only as useful as you make it. If you invest the time and refine it, it will become a living record of what you have tried, what works, and what still needs to be done.

Phase 4: Measuring Your Progress

The questions below represent common issues that come up when selecting types of measurement for an improvement project.
Q1 I am unsure of where to begin. Where might we look for measures?

Choosing measures for improvement should always start with concrete predictions about what you expect to happen based on the changes you are making. Once you have a general idea of those predictions, working backwards from a list of common data-collection instruments can help you narrow towards a decision. For instance, suppose you are working to increase student persistence on challenging math problems by incorporating instruction on persistence and motivation into your lessons, and you predict you will start to see improvement on challenging word problems. In that case, you may choose to look at the amount of time those students spend on a challenging problem during an in-class quiz. The following list contains just some of the places from which you might draw a practical measure:

Types of Measures chart

Q2 What if a part of our driver diagram is hard to measure?

It is quite common for improvement projects to contain hard-to-measure drivers of change such as “students feel a sense of belonging” or “students feel comfortable asking probing questions.” In cases like these, it may feel intimidating to try to decide how to measure improvement, but you shouldn’t let that stop you. Remember that it is all right to learn how to measure as you go. 

Start by visualizing what it would look like if your changes are successful, then list potential “look fors” that can serve as early indicators of progress. You can use these “look fors” in a quick observational checklist, or simply talk to your students about how they have experienced the changes you are trying. These are both great places to start. 

Quick, embedded measures like these are usually sufficient when you are trying new ideas on a small scale. They can tell you how a change is working and for whom. Later, when you have a bit more experience and know what it looks like to be successful, you can use that insight to measure for that success. For instance, starting with observations or conversations with students may reveal that some students benefit from jotting question ideas on a graphic organizer before they ask the questions out loud. These graphic organizers can be collected, and targeted students’ jotted questions can be compared to the questions they ask during classroom discussions. This approach might evolve into a quick tally sheet to track a handful of students over time, revealing the gaps between thinking of a good, probing question and having the confidence to ask it. 

Q3 Who should be included in decisions about measures? Whose responsibility is it to collect data?

Determining what should be measured—the indicators that make the most sense for your particular improvement aim, drivers, and action steps—is a whole-team decision. If you have a data specialist, this is a meeting they can lead or help organize, but the whole team should weigh in on both the types of measures appropriate for the project and the places and events that can be used to collect evidence of these measures. Ultimately, measures should align closely with your shared goals for what you want to accomplish, so it is important to be in agreement.  

However, when it comes time to record the evidence and implement the procedures and tools needed for collection, it can be very helpful to delegate responsibilities to individual members. Just like any other task, collecting evidence requires effort outside of standard meeting times, and delineating roles is critical to ensure the work actually gets done. Once this evidence has been collected, a specialist who feels comfortable with spreadsheets, numbers, and basic statistical principles can be a big asset in analyzing and visualizing it. 


Q4 How do I decide when to start collecting each of the measures?

A good rule of thumb is to start collecting data for a measure when you expect to see impact. In some cases, for example, small changes can have an immediate impact on the outcome measure; there it is helpful to start collecting data right away. If the desired impact is further down the road or requires several steps to achieve, you can prioritize the shorter-term measures for a while. Just make sure you revisit the decision when you start to see impact in the shorter-term measures. 

Ideally, we would always collect baseline data before making changes so that we can more easily measure impact. Where possible you should do so, but be aware of the time and energy required to collect and sort data. Educators’ time is among the most precious resources in schools, and collecting low-priority data is not the best way to spend it. On the other hand, you don’t want to make major decisions about the progress of a semester’s work without good data on hand. Try to balance the value of the data you are collecting with the time commitment and decide accordingly. 

Q5 Do I need to collect all of the different types of measures for my improvement project?

In an ideal world with unlimited time and resources, you might collect data on every type of measure; in the actual world of education, that is neither desirable, required, nor recommended. Phase 4 lays out all the different types of measurement that may be useful to you as a step toward understanding your options and making the best possible choice. However, the exact types you choose to rely on will depend on the specifics of your improvement project. 

Generally speaking, for all improvement projects you will want to collect data on your overall aim, i.e., on your outcome measures. This is important because it will eventually allow you to know whether or not your improvement effort has had the desired broad impact. Similarly, but at the opposite end of the measurement spectrum, your improvement team will collect evidence around your short-term PDSA measures as well. This will let you know whether the tests you are implementing are meaningful. However, the decision as to which driver measures to collect will depend on the specifics of your project, such as the change idea and drivers on which you are focusing and how far along you are in the work. At any given point in time, at least one form of evidence will stand out as both fairly easy to collect and helpful to know—this is the measure you want to focus on. 


Q6 Does this quick and informal approach to collecting evidence meet standards for good scientific research?

While the practical approach to data collection can appear to stand apart from what we think of as good research, it is actually based on several core principles of scientific practice. When we think of scientific data, we often think of rigorously designed studies with complex statistical analyses. What we often leave out is that this degree of rigor is only applied after the messy and open-ended work of developing a strong, testable hypothesis. Randomized controlled trials, complex regression models, and the like are almost never used in situations where prior, less quantitative methods have not already established some evidence that the hypothesis is correct. 

Another pillar of scientific methods that is often overlooked is the importance of replication of results. No study or data are ever considered definitive on their own, but are instead replicated multiple times across multiple contexts. The findings that are robust enough to stand up to multiple experiments are accepted; the ones that aren’t are reconsidered or refined. 

What this means for your work is that working from a clear theory and accumulating imperfect evidence on a small scale, then replicating your results with more students and across classrooms, is very scientific indeed! 

Q7 I want to make sure my data are accurate. Any tips for gathering data to ensure they accurately inform our project?

Data gathered in improvement science have a different purpose from data gathered in an academic research context. Scientific researchers have developed complex, detailed, and rigorous schemes for measurement with the purpose of developing robust and universally applicable theories. By and large, those are not necessary here. Improvement science is focused on practice. Instead of worrying about comprehensive data collection or statistical rigor, concentrate on collecting evidence that will help you compare changes against your questions and predictions. The more closely your data are tied to your theory of improvement, represented by your driver diagram, the more informative they will be. 

Additionally, it can help to look at specific student populations within your class. Minimize the effort required by focusing on changes in groups as small as two or three students. Target evidence that appears in regular classwork or teacher preparation and can be repurposed. Always err on the side of data and measurement that is natural, comprehensible, and in context. 


Q8 When should the data be shared with the team and analyzed?

Data help guide and clarify, but they can easily overwhelm if not shared thoughtfully. Accordingly, you want to share each type of measurement at a time that corresponds to its purpose. For instance, outcome measures, reflecting big-picture progress, are not relevant on a day-to-day or even a month-to-month basis and will only be a distraction if shared on that timescale. Outcome measures should be shared at the beginning of an improvement effort as a baseline—and again at major milestones like the end of the school year—to determine the effect. Likewise, long-term and medium-term driver measures should be shared on the timescale at which they are collected—quarterly to monthly and monthly to weekly, respectively. By combining the short-term PDSA measures shared in weekly or biweekly meetings with less frequent looks at measures of longer-term progress, you will create a balanced view of where you are making progress and where you aren’t, and support the kind of nuanced decisions required to overcome the complex challenges our students face. 

Phase 5: Scaling and Sharing

The questions below represent common issues that come up when scaling a test during an improvement project.
Q1 How do I know if I am "moving up the ramp" at the right pace? What are some signs that I am moving too quickly? What are signs I am moving too slowly?

You should always refine a change idea before scaling; it is much easier to optimize while working at a small scale. When in doubt, follow this rule of thumb: do two more tests to establish stability and reliability; you do not want to miss the opportunity to become more confident in your change. Signs you are moving too quickly up the ramp include inconsistent results, internal doubts about the effectiveness of the change idea in new contexts, vague or incomplete tools that are part of your change idea, confusion or difficulty surrounding the roles involved in implementing the change idea, and incomplete measurement of the change idea’s effectiveness. 

On the flip side, improvement science thrives on feelings of insight and progress, and you do not want to allow your project to stall out by dwelling in the relatively safe zone of a well-established PDSA cycle. A change idea is ready to move up the ramp when your confidence (based on evidence collected, not enthusiasm!) that it will work in new contexts is strong. Signs you might be moving too slowly include three or more repeated tests returning nearly identical data or evidence, gaps of 2–3 weeks between PDSA tests, or finding that you use the tool as part of your regular routine but have not yet tried it with other subgroups. Additionally, if you feel you are spending more time than the benefits of a change idea can justify, that is a sign that you may need to speed up testing. 

In general, the speed of expansion should match your evidence-based confidence in the effectiveness of the change. If you have doubts, expand slowly; if you find yourself with a bevy of evidence that your change idea is effective, move forward.

Q2 How do I determine what to test next? What "direction" should I point my ramp?

You should let your experience and the data guide you. You should move in the direction that has the greatest likelihood to either benefit your students or teach your team how to do so. It may help to glance back at your driver diagram to consider your long-term goals. Several critical characteristics to think about when evaluating different areas are summarized in the chart below.

[Chart: characteristics to consider when choosing what to test next]

Q3 What do I do if my change idea does not work outside of this initial context?

There is no single magic answer to educational problems, which makes it highly likely that at some point in the scaling process your change idea will encounter a context where it is no longer effective. This is not a sign that your improvement project is a failure, nor that you should abandon your change idea. Instead, take it as an opportunity to learn more about your problem. Setbacks like these allow your team to refine your understanding of the problem, creating opportunities to either adapt your original idea to fit the nuances of the new setting or develop a parallel change idea that answers the new needs. In particular, reflect on whether the failure resulted from an optimization you made along the way or from an intrinsic part of the core idea underpinning your improvement. If an optimization made the change idea less successful in the new context, strip it away so that you can start anew from the core idea. Your improvement project will grow more robust from thoughtfully and methodically grappling with these setbacks. Ultimately, your long-term goal should be to use these moments of learning to develop multiple change ideas into a collection of practices that can help you support different students in different situations in different ways. 


Q4 As I move up the ramp, can I test more than one change idea at a time?

The first time you begin to scale a change idea, you should focus on one change idea at a time. At this initial stage, you do not want your team or the work to get pulled in too many directions at once. However, as your improvement project continues to expand into new settings—and your change idea is forced to adapt to the new challenges you find there—it is almost inevitable that multiple critical questions will arise. In these cases, each new change idea should have its own “ramp.” Testing multiple change ideas at one time is fine—often necessary—but it is important to test them independently, allowing each to establish its own efficacy. In these cases, split the work so you can maintain focus on one issue at a time. Remember to move slowly: if you take on too much at once, you lose the ability to manage the changes, collect evidence, and track your progress all at the same time. 

Q5 What types of changes will I need to make to my measurement strategies as I scale?

A larger-scale test often requires tweaks to evidence-collection strategies. Make sure you have prepared to collect evidence at the larger scale before moving up the “ramp.” Generally speaking, small-scale tests can use more informal, qualitative data because the data you collect should answer your questions about how the change works. You should feel free to let your curiosity lead you at this stage and not focus exclusively on effectiveness. 

Medium-scale tests should pay more attention to differences between students and contexts. We know that not all changes will impact all students equally, but if you understand for whom and under what conditions a change works, you will be better equipped to differentiate support or decide which additional changes to try. 

By the time you are testing large-scale changes, you will have already collected some evidence of effectiveness. At this point you want to make sure you are being rigorous enough to demonstrate improvement by measuring baseline data and change over time. You don’t want to spend 3–4 weeks testing something only to find that your measurement strategy produced unhelpful data. 
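As a minimal sketch of what “measuring baseline data and change over time” can look like once the numbers are in a spreadsheet or list (all scores below are invented for illustration; substitute your own outcome measure):

```python
# Hypothetical baseline-vs-follow-up comparison for a larger-scale test.
# Each value might be, e.g., probing questions asked per class period.
baseline = [2, 3, 2, 4, 3, 2]    # before the change was introduced
follow_up = [4, 5, 3, 6, 4, 5]   # the same measure after several weeks

def mean(xs):
    """Average of a list of scores."""
    return sum(xs) / len(xs)

change = mean(follow_up) - mean(baseline)
print(f"baseline mean:  {mean(baseline):.2f}")
print(f"follow-up mean: {mean(follow_up):.2f}")
print(f"average change: {change:+.2f}")
```

A simple before-and-after mean is not rigorous research, but it is exactly the kind of evidence that lets a team see whether weeks of large-scale testing actually moved the measure they care about.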


Q6 It sounds like ramping gives the team a lot of freedom to determine the direction of the work. How involved should school leadership be in this process?

It’s true that this process gives teachers on your improvement team a lot of agency. This is by design, since the people closest to the changes being made are the best placed to turn the insights that come out of the PDSA cycles into further improvements. 

That said, involving school leadership is important and can happen in a number of ways. The initial goal-setting and periodic reflection are important points for leadership to plug in, as are any major changes in direction of the work. The work of your improvement team should always align with the school’s larger goals, and leadership should be kept abreast of the work as it evolves to ensure that this is the case. 

Additionally, if your team’s learning leads you toward potential tweaks to existing school practices and structures, leadership should certainly share in that decision and help you decide what evidence should be collected to determine if these small-scale tweaks are successful before they are scaled. 

Finally, as you scale up to larger and more complex changes, or spread to more contexts, the practices themselves and the data you will have to collect will have to become more formalized. Here, leadership is important for providing support and making decisions about centralizing data-collection efforts. 

Q7 Do you have any advice for handing off a change idea to a new teacher or staff member? What about to a new building or context?

When handing off a change idea to a new teacher or staff member, it will often take more than one PDSA cycle for them to learn the change and tweak it to fit their context. Do not get discouraged if adoption isn’t immediate or perfect; adjustments by these new members are valuable for your team’s learning and can become PDSA ramps of their own. To smooth the handoff, ensure that potential adopters understand the core elements and nuances of the improvement as they build their “how-to” knowledge around the specific steps required to implement the change idea. It is equally important for them to understand the background, rationale, and logic behind the improvement, which can help them feel the sense of urgency and need. This can often be done less by recounting the pilot team’s immediate experience (as exciting as that may be) and more by providing stories that help them feel the impact of the practice on users’ routines and the school’s capacity. As always, leave plenty of room for questions and reflections, as this will both encourage participation and uncover new areas of insight.