I'd like to talk a little bit more about the email verifier codebase, and specifically the custom benchmarking tool that's built into it. Before I do, let's talk maybe first about why we should even use a custom benchmark. After all, there are lots of great off-the-shelf benchmarking tools out there, like K6, which you got to use in a previous exercise, or JMeter. K6 and JMeter are great for bombarding endpoints with lots and lots of traffic. They're really configurable, and K6 is even pretty scriptable. Both have cloud services that let you distribute load. They're really great tools. For the email verifier, however, we've got a more complicated flow that isn't going to work quite as well with tools like JMeter and K6. If you think about our flow here, there are a couple of things going on. We're making two requests to the registration service. That by itself is no problem; you actually benchmarked one of those requests earlier in the course. The thing that's going to be hard, though, is that we have to make that second request based on a code that we get in an email. We register, we get a confirmation code, and then we hit the confirmation endpoint with that code. Something like this is going to be really hard to do with K6 or JMeter, so we built our own benchmarking code to do it. Our benchmarking code will hit both of these endpoints really quickly. It also acts as a fake SendGrid server: it receives the requests for notification emails and takes action based on them. Let's dive into the code a little bit. You'll see this benchmark is up in the applications folder because it's something that we run. If we dive into the benchmark, we can look at the App.kt file, where we can see some configuration options.
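To make that register-then-confirm flow concrete, here's a minimal sketch of the piece a generic load tool can't easily do: pulling the confirmation code out of an intercepted notification email and turning it into the second request. All the names here (`extractConfirmationCode`, `confirmUrl`, the `code:` email format) are illustrative assumptions, not the actual email verifier code.

```kotlin
// Hypothetical sketch of the fake-SendGrid side of the benchmark: given the
// body of a notification email, recover the confirmation code and build the
// URL for the second request. The email format shown is an assumption.
fun extractConfirmationCode(emailBody: String): String? =
    Regex("""code:\s*([A-Za-z0-9-]+)""").find(emailBody)?.groupValues?.get(1)

fun confirmUrl(baseUrl: String, email: String, code: String): String =
    "$baseUrl/confirm?email=$email&code=$code"

fun main() {
    val body = "Welcome! Please confirm with code: abc-123"
    val code = extractConfirmationCode(body)
    println(code)
    println(confirmUrl("http://localhost:8080", "user@example.com", code!!))
}
```

The point is that the "email" never leaves the benchmark process: the same program that sent the registration request receives the notification and fires the confirmation, which is exactly the loop that's awkward to script in K6 or JMeter.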
As you're running your benchmarks, definitely poke around in these configuration options and see if you can change them to make things go a little bit faster. By default we're running four workers to hit each endpoint, and we're going to run a benchmark that hits 5,000 registrations. This benchmark will tell us how fast our system can register 5,000 users. We have a fake email server that will listen for those notification emails and then take action on them on behalf of the benchmark. This fake email server, if we look down at the bottom of this file, will process our confirmations using our benchmark. We new up our benchmark with all the information it needs, and then we hit this start method. That start method is where things really get interesting, and I encourage you to dive a little more deeply into the code, but we'll go over it briefly here. It launches our registration workers; those are the workers that are going to hit the endpoints. It launches a reporter, which, as you'll see later on, is what prints output to the console, so we get to see the progress as we go. Then it measures the time that our benchmark takes. At the end, it stops and does a little bit of logging. Let's see the benchmark in action. This screen here should look roughly familiar to you by now. We have Docker Compose running, which you've done before, and we have our registration server and our notification server running. You'll notice, though, that I don't have the fake SendGrid server running. It's really important that before you run any benchmarks, you stop the fake SendGrid server, because the benchmark itself will act as a fake SendGrid server. Let's run our benchmark. We can say applications benchmark run; remember, that's our Gradle shorthand. It kicks off our benchmark, and we get a little bit of logging along the way. This one is always fun to watch.
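The shape of that start method (workers, a shared target, and timing the run) can be sketched roughly like this. To stay self-contained it uses plain threads and a counter instead of real HTTP calls, and the names and defaults (`runWorkers`, 4 workers, 5,000 registrations) are illustrative, not the actual App.kt code.

```kotlin
import java.util.concurrent.atomic.AtomicInteger
import kotlin.concurrent.thread
import kotlin.system.measureTimeMillis

// Rough sketch of the start method's structure: launch a handful of
// registration workers against a shared target, join them, and time the run.
fun runWorkers(workerCount: Int, total: Int): Int {
    val completed = AtomicInteger(0)
    val workers = (1..workerCount).map {
        thread {
            // Each worker claims registrations until the shared target is met.
            while (true) {
                val n = completed.incrementAndGet()
                if (n > total) {
                    completed.decrementAndGet()
                    break
                }
                // The real code would POST the registration here, then wait
                // for the fake SendGrid server to hand back a code to confirm.
            }
        }
    }
    workers.forEach { it.join() }
    return completed.get()
}

fun main() {
    var done = 0
    val elapsed = measureTimeMillis { done = runWorkers(4, 5_000) }
    println("Processed $done registrations in ${elapsed}ms")
}
```

The real benchmark also launches a reporter alongside the workers to print progress as it goes; this sketch only shows the worker-and-timing skeleton.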
We can see our registration server processing these registrations, both the requests and the registrations themselves. Up in the upper right, we can see our notification server running just as fast as it can, going through the notifications. If we stop that, we can see that it's sending a bunch of confirmation codes. There's lots of stuff going on in Docker Compose; if things were going wrong, if things were falling down, we'd be able to see it here. Then below we can see a real-time picture of what's going on. We can see we've actually already sent all 5,000 requests at this point; we're just waiting for the registrations themselves to catch up. Here we can see how many successful registrations we've had, and this registration total is how many users we have. Right now those two numbers are the same. Once that number gets up to 5,000, we'll know that we're in good shape, the benchmark is complete, and it will print out a nice time for us. There will definitely be some labs and other exercises where you'll tweak this, maybe horizontally scale, maybe edit the benchmark test a little bit. But before you get into any of those, I'd really take some time to run this benchmark on your own. Make sure you know how to run it and can tweak those different parameters. Maybe take a little bit of time to see how fast you can get this going on your own machine; I always have a lot of fun just trying to eke out as much performance as I can. Cool, it looks like we're done. If we look at our terminal output here, we've processed 5,000 registrations in just under two minutes. That's about 2,500 registrations a minute, which is pretty good for just running off a little laptop. I'll leave it here. Definitely play around in here, and if you have questions, toss them in the forums. Good luck, and I'll see you in the next video lecture.
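If you want a quick way to compare your own runs, the throughput math from above is just total registrations divided by elapsed minutes; a run of 5,000 registrations in roughly 120 seconds works out to about 2,500 per minute. The helper name here is illustrative, not part of the codebase.

```kotlin
// Sanity-check the throughput figure: registrations per minute for a run.
fun registrationsPerMinute(total: Int, elapsedSeconds: Double): Double =
    total / (elapsedSeconds / 60.0)

fun main() {
    println(registrationsPerMinute(5_000, 120.0))  // 2500.0
}
```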