Welcome back. We spent several lessons talking about user interfaces and what the user experiences while using software. Nevertheless, there is software that runs with no user interaction at all. Without a user, you might say, what's there to design? Well, let's talk about that.

When the user interacts with a web page, the web page serves simply to gather user input and send it to the server, or to display what the server has to say in response. The processing that happens on the server side can be considered the back-end. In designing web services, it is common to have simple user interface programs on the internet-facing web server, but to have a lot of the heavy lifting done behind the firewall on different machines. Back-end software doesn't have to be user friendly. It doesn't really have a user. It might have a client, but not a user. The back-end simply needs to collect the input data, do the processing, and return the result to the user. Now, the word "simply" doesn't mean it's easy. It means that, conceptually, it does not have a subjective user element. The processing can, in fact, be pretty complex and can have nuances that extend certainly into security, but also into performance, accuracy, and fault tolerance.

In the previous lesson, we talked about how some shopping website users abandoned their carts because the website crashed. Quite likely, the website didn't actually crash. The server didn't have to be rebuilt or, probably, even rebooted. Most likely, what happened is that an unhandled processing error caused the thread that was handling the user's requests to halt, rendering the attached browser unresponsive. This could happen for any number of reasons. Badly formatted and unchecked free-form user input might have caused a database query to abort. Null data might have been returned from the database where non-null data was expected. Any number of processing violations might have occurred: division by zero, making a credit card purchase with a negative amount, you name it.

As a rule, back-end software shouldn't fail. And if it does have to take an anomalous exit, it should fail gracefully with an informative error message: "I'm sorry. Your last name may not have two dashes in a row." Even the apologetic first part isn't necessary, just the informative, non-insulting message.

More than anything, this requires testing. It requires envisioning everything that can go wrong and handling it. It may require extensive test cases to check all the statements and logic branches in the code. And the number of times these tests need to be repeated may argue strongly for automated testing. I, personally, am very fond of writing testing interfaces: essentially, a scripting language that can be used to exercise back-end functionality. What I like about this kind of system is the ability to gather input data from users as they use the system and, when a processing anomaly occurs, to save however many of the most recent interactions are needed to recreate the problem. This is amazingly effective, because the users in the field, not being developers, can come up with the most creative ways of making the software fail. By recording their actions, it's possible to leverage their "goofs" into test cases. We had a script generator which would take that recorded user input and generate a script that could be played back to actually recreate the problem. In one interesting situation, this ability won us a lot of points with the customer.
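Before the story, let me make the "fail gracefully" advice concrete. The sketch below, in Python, is only an illustration under my own assumptions, not code from any system mentioned in this lesson: the function name process_order, the form fields, and the customers table are all hypothetical. It rejects bad free-form input and processing violations, such as a negative purchase amount, with informative and non-insulting messages, notes a tolerable anomaly (a lookup that returns an unexpected number of rows) in a log, and keeps the thread alive even when something unexpected blows up.

```python
import logging
import re
import sqlite3

log = logging.getLogger("backend")


def process_order(conn: sqlite3.Connection, form: dict) -> tuple[int, str]:
    """Hypothetical back-end entry point: returns an (http_status, message) pair."""
    last_name = form.get("last_name", "").strip()
    if "--" in last_name:
        # The lecture's example: reject the input, don't crash on it.
        return 400, "Your last name may not have two dashes in a row."
    if not re.fullmatch(r"[A-Za-z' -]+", last_name):
        return 400, "Your last name may contain only letters, spaces, apostrophes, and dashes."

    try:
        amount = float(form.get("amount", ""))
    except ValueError:
        return 400, "The purchase amount must be a number, for example 19.95."
    if amount <= 0:
        # A negative (or zero) charge is a processing violation, not a crash.
        return 400, "The purchase amount must be greater than zero."

    try:
        rows = conn.execute(
            "SELECT id FROM customers WHERE last_name = ?", (last_name,)
        ).fetchall()
        if len(rows) != 1:
            # Diagnostics: this query is expected to return exactly one row,
            # so anything else gets noted in the log for later investigation.
            log.warning("expected 1 customer for %r, got %d", last_name, len(rows))
        if not rows:
            return 404, "We couldn't find an account with that last name."
        # ... charge the card, update inventory, and so on ...
        return 200, "Thank you. Your order has been placed."
    except Exception:
        # Last-resort guard: record the details, keep the thread alive,
        # and return something informative instead of an unresponsive page.
        log.exception("order processing failed for input %r", form)
        return 500, "Something went wrong on our end and your card has not been charged."
```

The point isn't the particular checks; it's that every path out of the function ends in a deliberate, informative response rather than an unhandled exception.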
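And here is the flavor of the record-and-replay idea, again only a sketch under my own assumptions rather than the actual system I'm describing: the file name, the JSON-lines format, and the function names are invented for illustration. Each interaction is appended to a transaction log, the last however-many entries can be turned into a script, and the script can be fed back through whatever function normally dispatches requests to the back-end.

```python
import json
import time
from pathlib import Path
from typing import Callable

TRANSACTION_LOG = Path("transactions.jsonl")  # hypothetical location


def record_interaction(action: str, payload: dict) -> None:
    """Append one user interaction to the log, one JSON object per line."""
    entry = {"ts": time.time(), "action": action, "payload": payload}
    with TRANSACTION_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


def generate_script(last_n: int) -> list[dict]:
    """Turn the most recent N recorded interactions into a replayable script."""
    lines = TRANSACTION_LOG.read_text(encoding="utf-8").splitlines()
    return [json.loads(line) for line in lines[-last_n:]]


def replay(script: list[dict], dispatch: Callable[[str, dict], object]) -> None:
    """Feed recorded interactions back through the back-end's normal entry point."""
    for step in script:
        dispatch(step["action"], step["payload"])
```

With something like this in place, a field user's "goof" becomes a test case you can rerun on demand, and, as the story that follows shows, the same machinery can double as a crude recovery tool.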
A user, in a test scenario, had been interacting with a web client for 15 or 20 minutes in a complex problem-solving scenario. A person walking through the testing room, in full view of the customers monitoring the test, accidentally kicked out the power cord of the computer on which the web browser was running and, effectively, destroyed the session. Of course, there was a collective moment when folks realized that a bunch of input, 20 minutes' worth, had been lost. Since I had written the data collection and scripting system, even though it had not been designed for this purpose, I logged into the server, recovered the recorded transaction file, recreated the user's interactions in script form, went back to the user's machine, which had by then been rebooted, and played the script back through a testing interface we'd built. In just a few minutes, the user was back to where he had been, and the rest of the task could go on.

There's just no substitute for gaming the system and thinking hard about what could go wrong. Having diagnostics in the back-end software is useful too. If a database query is expected to return only one row, some sort of notation should be made in a log if something else happens. How thoroughly the code is engineered to detect problems and record their occurrence is partly a matter of design philosophy and partly a matter of code inspection. It's possible to build in automatic logging features, but it may also be necessary to have the code inspected by a coding and logic expert who can determine whether potential logic gaps are left unhandled.

Back-end design work is harder than front-end work because back-end programs embody the functionality used by the front-end. As such, there can be bigger challenges: How do the algorithms work? Where do we get the data? How do we ensure fault tolerance? These are some of the issues that can come up. In my design classes, I emphasize that just because your job is design, you're not excused from knowing how to code this kind of functionality. How the pieces you design fit together, or whether they can, has a lot to do with their coding.

When first beginning a project, it's necessary to survey the project landscape for technical challenges. At the abstract level you're working at, it's easy to overlook that something is difficult or risky to implement. You have to know your own capabilities and match them up with what's required on the project. Pay attention to things you don't know how to do: research them, build prototypes. Make sure you understand how the technology works. This is not the same as building the code and deriving the design from it later. This is making sure that the design will work.

When Frank Lloyd Wright, the architect I referred to before, built the Johnson Wax building in Racine, Wisconsin, in 1936, the glass roof was supported by what are called lily-pad columns, which you'll see in the photo. The vertical supports were a challenge; nothing like this had ever been done before. So Wright and his colleagues created prototypes of the columns and demonstrated that they could hold six times the weight they would be required to support in the new building. Another interesting lesson in software design can be derived from Wright's Fallingwater house in Pennsylvania. Appropriate to the discussion of engineering challenges, the cantilevered structures, which sit on top of the waterfall, began to fail about 50 years after the building was constructed.
Research revealed that calculations involving the strength of materials, made by Wright's chief engineer, were wrong. No prototype was built, and the resulting miscalculation led to an expensive overhaul of the structure. Another interesting point was that the owner who commissioned the design and construction of the house, Edgar Kaufmann, was surprised that he couldn't see the waterfall from the house. Wright said that he thought it would be better if the family could live with the falls, so the house was built over the falls rather than next to them. So there was some misunderstanding about even the requirements, which would have been a costly fix if a fix had been necessary.

In this brief discussion about back-end software, we've seen that although there are no users per se, there are design challenges. Robustness, diagnostics, and recovery are a few of the ones we've talked about. Every design project is different, but I hope that the ideas presented here will be useful to you in future work.