A common myth in the industry is that if you go fast and focus on speed you will sacrifice quality. But I'm here to tell you, speed does not mean low quality. Most traditional project management methods talk about three components of delivery: good, fast, and cheap, and say that you can only pick two. The idea is that there will always be a trade-off. For example, a project that needs to be developed very rapidly and retain a high level of quality will inevitably cost more. The logic goes that it is impossible to do all three. This rule is taught not only to students at the university level but is also consistently repeated throughout the corporate world. Unfortunately, this myth is untrue and results in an incorrect mindset towards product development. This is also why a lot of organizations think they are doing agile when, in reality, they are still thinking about software delivery along those three dimensions: good, fast, and cheap.

It can be challenging to help people see that there is a better way to think about software delivery, but facts and data showing that there is no trade-off between speed and quality are starting to emerge. From the book Accelerate, we learn that high performers understand that they don't have to trade speed for stability or vice versa, because by building quality in, they get both. In my experience as well, if you have low quality you tend to move slower, because you have a lot of rework and defects that you're finding in production. Plus, the default response to low quality is usually to add more gates and roadblocks instead of going faster. This often seems counter-intuitive to people who have been focused on traditional processes for managing change and releases. Another challenge I've seen has to do with lengthy QA cycles. In these scenarios, quality is often still low and change fail percentage is high. So, while in software development we tend to think that having multiple test cycles in multiple environments is the correct approach, the evidence indicates that this does not actually improve quality. Rather, if you build quality in and have continuous delivery practices, speed and quality go hand in hand.

To share a bit more detail about this, I'm going to talk about a section from Accelerate, because getting both speed and quality doesn't happen just because you start deploying more frequently. Naturally, there are strategies at play. It is important to look at multiple metrics to understand whether you are really making progress with both speed and stability. The metrics to track are: deploy frequency, lead time, mean time to restore, and change fail percentage. In the analysis from the State of DevOps reports, there is evidence that there are no trade-offs between improving performance and achieving higher levels of tempo and stability; they move in tandem. Some additional findings from this analysis are really interesting: deploy frequency is highly correlated with continuous delivery and version control, lead time is highly correlated with version control and automated testing, and mean time to restore is highly correlated with version control and monitoring. What I'm trying to say here is that investment in the underlying capabilities is required in order to maintain speed and stability, and you can use this guidance to decide which capabilities to enable first.
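To make those four measures concrete, here is a minimal sketch of how a team might compute them from its own deployment and incident records. This is my own illustration rather than anything prescribed by Accelerate; the record fields, the sample data, and the 30-day window are assumptions for the example.

```python
from datetime import datetime, timedelta

# Hypothetical records a team might already have in its deploy tooling.
# Each deployment notes when the change was committed, when it shipped,
# and whether it caused a failure in production.
deployments = [
    {"committed": datetime(2023, 5, 1, 9),  "deployed": datetime(2023, 5, 1, 15), "failed": False},
    {"committed": datetime(2023, 5, 2, 10), "deployed": datetime(2023, 5, 3, 11), "failed": True},
    {"committed": datetime(2023, 5, 4, 8),  "deployed": datetime(2023, 5, 4, 9),  "failed": False},
]
# Each incident notes when service degraded and when it was restored.
incidents = [
    {"started": datetime(2023, 5, 3, 12), "restored": datetime(2023, 5, 3, 13, 30)},
]

window_days = 30  # assumed reporting window

# Deploy frequency: deployments per day over the window.
deploy_frequency = len(deployments) / window_days

# Lead time: average time from commit to running in production.
lead_time = sum((d["deployed"] - d["committed"] for d in deployments), timedelta()) / len(deployments)

# Mean time to restore: average time from incident start to restoration.
mttr = sum((i["restored"] - i["started"] for i in incidents), timedelta()) / len(incidents)

# Change fail percentage: share of deployments that caused a failure in production.
change_fail_pct = 100 * sum(d["failed"] for d in deployments) / len(deployments)

print(f"Deploys per day:   {deploy_frequency:.2f}")
print(f"Average lead time: {lead_time}")
print(f"MTTR:              {mttr}")
print(f"Change fail %:     {change_fail_pct:.0f}%")
```

Even a rough calculation like this, run over your own deployment history, gives you a baseline you can revisit as you invest in the underlying capabilities.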
Version control, for example, is a great place to start. Making sure all of your code, including infrastructure and configuration, is version controlled should help immensely. When we first started practicing these techniques with the customer mobile team at Nordstrom, a lot of leaders thought we'd sacrifice quality by focusing on speed. It took persistence, transparency, and measuring results to demonstrate that there was no trade-off between speed and quality. In fact, our product's resilience improved the faster we went. By focusing on speed, we were able to show that we could deliver value faster and have a higher-quality product at the same time.

At Starbucks, we had a similar situation. When we started moving faster, quality went up, because the only way we could move fast was by building quality in and automating our testing and deployments. We also clearly understood our lead time and could see where the challenges were in the value stream. This enabled us to quantify all of the speed and quality metrics in a way that demonstrated to our stakeholders that we were actually going faster and improving our quality. We could also restore service faster because we could quickly deploy changes; if we had an incident, we could quickly test and deploy a fix. It is also widely known that complex systems will fail, so focusing on restoring service quickly, rather than planning for and attempting to mitigate every possible failure, is a much better investment.

As you start to practice these techniques in your organization, I would encourage you to capture your current quality metrics in addition to lead time, deploy frequency, MTTR, and change fail percentage. As you continue to focus on speed, you will see your quality metrics improve. This will be important to track, because you may encounter skeptics, and having data makes your case really hard to dispute. I would also strongly encourage you to share the data and research from the State of DevOps reports with your stakeholders and team. This industry publication is free, and it has facts and data to show that there is no trade-off. As I mentioned, achieving both speed and quality has been hard in all three organizations I've been in, and when I've talked with industry peers of mine at other large organizations, they have all had the same experience. Ultimately, the main takeaway is that you don't need to make a trade-off, as long as you are leveraging the capabilities we've discussed and tracking the correct metrics to help manage the conversation with facts and data.
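If you want a concrete starting point for collecting that data, here is a minimal, hypothetical sketch of a deploy step that only ships when the automated tests pass and then appends a record you could later feed into a metrics calculation like the one above. The pytest command, the log file name, and the fields are assumptions on my part, not a prescribed tool.

```python
import csv
import subprocess
from datetime import datetime, timezone

DEPLOY_LOG = "deployments.csv"  # assumed location of the team's deployment log


def tests_pass() -> bool:
    # Assumes the test suite runs under pytest; swap in whatever runner you use.
    return subprocess.run(["pytest", "-q"]).returncode == 0


def deploy() -> None:
    # Placeholder for your real deployment step (pipeline call, script, etc.).
    print("deploying...")


def record_deployment(committed: str, deployed: str, failed: bool) -> None:
    # Append one row per deployment so deploy frequency, lead time,
    # and change fail percentage can be computed from the log later.
    with open(DEPLOY_LOG, "a", newline="") as f:
        csv.writer(f).writerow([committed, deployed, failed])


if __name__ == "__main__":
    # Stand-in for the real commit timestamp from your version control system.
    committed = datetime.now(timezone.utc).isoformat()
    if tests_pass():
        deploy()
        record_deployment(committed, datetime.now(timezone.utc).isoformat(), failed=False)
```

The point of the sketch is simply that the quality gate and the measurement live in the same automated path, which is what lets speed and quality reinforce each other rather than compete.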