The goal of cryptanalysis is to compromise a cryptosystem and prevent it from accomplishing one or more of its security goals. The aim might be to defeat the confidentiality of the system and read its messages. It might be to violate the integrity of the messages and alter them, or it might be to forge messages by circumventing its authentication measures. In systems that adhere to Kerckhoffs's principle, which says that the only portion of a cryptosystem that should have to remain secret is the key, attacks are usually aimed at recovering the key from whatever information is available. The techniques used are strongly influenced by the nature of that information, and hence many attacks are categorized accordingly. Other attacks don't go after the algorithms at all, but rather after weaknesses in either the implementation or the users. This is the realm of side channel attacks and social engineering. In this lesson, we will look briefly at many of these attacks, including a few important ones that are really beyond the scope of our course of study.

A cryptosystem doesn't have to be cracked in order to leak useful information. The mere knowledge that two parties are communicating may be extremely valuable. This is the realm of traffic analysis, in which things like who is talking to whom, how often, and in what order become the basis for making inferences about the enemy's plans and intentions. This is why military and other classified satellite dishes are often enclosed in golf-ball-shaped structures: it keeps the enemy from knowing what direction the dish is pointed, since that information alone gives clues as to what might or might not be happening. While very interesting and important, traffic analysis is not within the scope of our discussion, so we'll leave it at that. What we want to focus on are mathematical attacks against cryptosystems, usually with the goal of recovering the key.

When the analyst only has access to intercepted ciphertext and little or no other information, they must perform a ciphertext-only attack. The most basic such attack is brute force, in which the key space is systematically searched until the key is discovered. But unless the key space can be significantly pruned, such attacks are usually computationally infeasible. Attacks against most of the classical cipher systems could be carried out this way because of the inherent weaknesses in those systems; modern cipher systems are all but impervious to this type of attack.

The next level up is when the analyst not only has ciphertext but also some of the plaintext. This is referred to as a known-plaintext attack. It could be the plaintext of a different message that used the same key, or it might be portions of the plaintext within the target ciphertext. Classical cipher systems were generally wide open to this kind of attack because of the very simple relationship between plaintext, key, and ciphertext: in most cases, if you had access to any two of them, you could easily find the third. This was the most common attack against Enigma intercepts. There were certain words and phrases that appeared in a large fraction of messages, often at the very beginning, and hence the analysts worked from the assumption that they were present and looked for the telltale indications of one. These were known as cribs. Another source of cribs was messages, such as weather reports, that had been broadcast in the clear, or perhaps in a different, lower-level code or cipher that had already been compromised, and that were known to frequently be enciphered with the Enigma for relay to the U-boats.
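To make that concrete, here is a minimal sketch, in Python, of a brute-force search over the key space of a toy Caesar (shift) cipher, with an optional crib check standing in for the known-plaintext idea. The sample intercept, the crib word SECRET, and the letter-frequency scoring are all illustrative assumptions, not details of any real system or historical attack.

```python
# A minimal sketch of a ciphertext-only / known-plaintext attack on a toy
# Caesar (shift) cipher.  The intercept, the crib, and the frequency scoring
# are illustrative assumptions, not drawn from any real system.
import string

ALPHABET = string.ascii_uppercase
COMMON = set("ETAOINSHRDLU")          # the most frequent English letters


def decrypt(ciphertext: str, shift: int) -> str:
    """Undo a Caesar shift of `shift` positions."""
    return "".join(
        ALPHABET[(ALPHABET.index(c) - shift) % 26] if c in ALPHABET else c
        for c in ciphertext
    )


def brute_force(ciphertext: str, crib: str = "") -> list[tuple[int, str, int]]:
    """Try every key in the (tiny) key space and rank the candidate plaintexts.

    With no crib, candidates are scored by how English-like they look
    (ciphertext-only).  If a crib is supplied, any candidate containing it
    is pushed to the top (known-plaintext)."""
    candidates = []
    for shift in range(26):                       # the entire key space
        guess = decrypt(ciphertext, shift)
        score = sum(1 for c in guess if c in COMMON)
        if crib and crib in guess:
            score += 1000                         # a crib hit trumps everything
        candidates.append((shift, guess, score))
    return sorted(candidates, key=lambda c: c[2], reverse=True)


if __name__ == "__main__":
    intercept = "WKLV LV D VHFUHW PHVVDJH"
    for shift, guess, score in brute_force(intercept, crib="SECRET")[:3]:
        print(f"key={shift:2d}  score={score:4d}  {guess}")
```

Against a 26-key cipher this loop finishes instantly; against a modern cipher with a 128-bit key, the same exhaustive search would never terminate, which is why brute force alone is computationally infeasible unless the key space can be pruned.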
While knowing what some of the plaintext is can provide a significant advantage to the attacker, imagine the increased effectiveness if you can trick the enemy into encrypting and transmitting a plaintext message of your choosing. Known as a chosen-plaintext attack, this is seldom easy to arrange, but some of the most valuable breaks have occurred this way. Again using the Enigma as an example, the British would sometimes place mines at specific locations and then take steps to ensure that the Germans would discover them. When the Germans then transmitted the locations of these mines to headquarters or to other boats, the Allies had their crib. Another example comes from the Pacific War, when American cryptanalysts who had partially broken the JN-25 code used by the Japanese Navy determined that the Japanese were planning a major attack on an objective coded only as AF. Based on traffic analysis, the Americans suspected that AF might be Midway Island. To confirm this, the installation on Midway was instructed to broadcast a message in the clear that its water purification system was broken. The analysts later intercepted a message saying that AF was short on water and that the assault force should load additional desalination equipment, confirming the target objective.

An even more powerful version of a chosen-plaintext attack, but one that is naturally more difficult to carry out, is the adaptive chosen-plaintext attack, in which, after tricking the enemy into encrypting a message of your choice, you are able to craft additional messages designed to leverage what your prior messages have disclosed. It is also possible to go the other way and trick the enemy into decrypting ciphertext of your choosing. These chosen-ciphertext attacks are most commonly carried out against widely used encryption protocols, such as the attacks mounted against the early use of RSA in the Secure Sockets Layer (SSL) protocol that protected web sites.

Yet another form of attack is the related-key attack, in which encryptions or decryptions are carried out using keys that are known or believed to have some relation to the real key. Perhaps one of the most successful such attacks was against the original encryption protocol for Wi-Fi systems, known as WEP, Wired Equivalent Privacy. The 64-bit encryption key used for the RC4 stream cipher consisted of the 40-bit WEP key combined with a 24-bit initialization vector randomly chosen for each packet of data. Because the WEP key was manually configured, it almost never changed on a given network; thus, all of the keys used were highly related. Furthermore, because RC4 is a stream cipher that mimics a one-time pad, it is critical that the same key never be reused. But because the keys varied only in the initialization vector, the effective key space was limited to just 24 bits. While 24 bits allows over 16 million different keys, a phenomenon known as the birthday paradox means that in any group of a few thousand packets, it is likely that at least one key will be used twice. The end result is that WEP keys were shown to be recoverable in as little as three minutes using off-the-shelf hardware and software, just by eavesdropping on the wireless traffic.
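As a quick sanity check on that birthday-paradox claim, here is a small Python sketch using the standard birthday-bound approximation for a 24-bit IV space. It is a generic probability calculation, not an implementation of WEP or RC4, and the packet counts are just sample values.

```python
# Birthday-bound estimate: probability that at least two of n packets share
# an initialization vector drawn at random from a 24-bit space.
import math


def iv_collision_probability(n_packets: int, space: int = 2**24) -> float:
    # P(no collision) = prod_{i=0}^{n-1} (1 - i/space) ~ exp(-n(n-1) / (2*space))
    return 1.0 - math.exp(-n_packets * (n_packets - 1) / (2.0 * space))


if __name__ == "__main__":
    for n in (1_000, 2_000, 5_000, 10_000, 20_000):
        print(f"{n:6d} packets -> P(repeated IV) = {iv_collision_probability(n):.2f}")
    # Prints roughly 0.03, 0.11, 0.53, 0.95, 1.00: by about 5,000 packets a
    # repeat is more likely than not, despite the 16.7 million possible IVs.
```

Once two packets share an IV, they are encrypted under the identical RC4 keystream, which is exactly the kind of key reuse that a stream cipher mimicking a one-time pad cannot tolerate.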
Another class of attacks is known as side channel attacks. There are many types of such attacks; what they have in common is that they don't attack the algorithm directly but instead exploit weaknesses in the implementation. Unless great care is taken in the software and hardware design, it is possible to monitor the power drawn by the device, measure the time it takes to perform an operation, examine the residual contents of memory it has accessed, or observe some other quantity, sometimes even the sound the device makes, and from that deduce information about the keys involved.

One example of a side channel attack, which exploited the way a system behaved as it failed in order to carry out a related-key attack, involves smart cards that had the key burned into memory elements that, for our purposes, can be thought of as essentially fuses. If a fuse was intact, it read as a one, while if the fuse had been blown, it read as a zero. These fuses were radiation sensitive, and if the card was irradiated at a certain level, the unblown fuses would blow in a random order until eventually all of them were blown. The attack proceeded by having the card encrypt a message and recording the resulting ciphertext; then the card was irradiated for a while and the process repeated using the same message. Occasionally the ciphertext would change, indicating that one, or possibly a few, of the fuses had blown. Eventually the ciphertext would match what was expected for a key consisting of all zeroes. The cryptanalyst would then try all of the keys that had a single one bit until he found the one whose ciphertext matched the previously recorded ciphertext; he then knew the location of one of the one bits in the original key. He then proceeded to test all of the keys that differed from this key by a single additional one bit, and so on, until the entire key was recovered.
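Here is a small Python simulation of that recovery procedure, just to make the backwards, one-bit-at-a-time search concrete. The 16-bit key, the XOR stand-in cipher, and the assumption that exactly one fuse blows per recorded ciphertext are simplifications of my own; the real attack worked against an actual smart-card cipher and was messier.

```python
# A toy simulation of the fuse-blowing attack: watch the ciphertexts as the
# key decays to all zeroes, then rebuild the key one bit at a time.
# Assumptions: a 16-bit key, XOR as a stand-in cipher, and exactly one fuse
# observed blowing per recorded ciphertext.
import random

KEY_BITS = 16
MESSAGE = 0xBEEF                      # the fixed plaintext we keep re-encrypting


def encrypt(message: int, key: int) -> int:
    """Stand-in for the smart card's black-box cipher."""
    return message ^ key


def record_fuse_sequence(true_key: int) -> list[int]:
    """Irradiate the card, recording a ciphertext each time another intact
    fuse (a one bit) blows to zero, ending with the all-zero key."""
    ciphertexts = [encrypt(MESSAGE, true_key)]
    one_bits = [b for b in range(KEY_BITS) if true_key >> b & 1]
    random.shuffle(one_bits)          # fuses blow in an unknown random order
    key = true_key
    for bit in one_bits:
        key &= ~(1 << bit)            # another fuse blows
        ciphertexts.append(encrypt(MESSAGE, key))
    return ciphertexts


def recover_key(ciphertexts: list[int]) -> int:
    """Start from the all-zero key and work backwards through the recorded
    ciphertexts, re-adding whichever single bit reproduces each one."""
    key = 0
    for target in reversed(ciphertexts[:-1]):     # last entry is the all-zero key
        for bit in range(KEY_BITS):
            guess = key | (1 << bit)
            if guess != key and encrypt(MESSAGE, guess) == target:
                key = guess
                break
    return key


if __name__ == "__main__":
    true_key = random.getrandbits(KEY_BITS)
    recovered = recover_key(record_fuse_sequence(true_key))
    print(f"true key      = {true_key:016b}")
    print(f"recovered key = {recovered:016b}")
```

The payoff is the cost: rather than the 2^16 (65,536) trial encryptions an exhaustive search would need, the attacker here never tries more than 16 guesses per recorded ciphertext, so the work grows roughly with the square of the key length instead of exponentially.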
The final class of attacks we're going to consider includes some of the most common and most effective attacks today, in part because we've gotten pretty good at designing and implementing algorithms that resist the attacks mentioned thus far. These attacks are grouped together under the heading of social engineering. The variations on these attacks are far, far too numerous to even summarize, but what they have in common is exploiting human weaknesses such as our laziness, our fears, or our greed.

The counterpart to the brute-force attack is rubber-hose cryptanalysis. It's a catch-all term for the use of threats or violence to coerce a user into giving up the desired information. The allusion is to the repeated application of a rubber hose to the bottom of the target's feet until the key is recovered, but the term also covers much more subtle forms of coercion, such as threatening lawsuits or criminal prosecution if the information isn't divulged.

This class also includes simply understanding human nature and how we offer up compromises even without needing to be tricked. For example, even in fairly small groups of users, at least a few people will use common passwords, and wireless networks are penetrated all the time because the person who set them up never changed the default password. Another example has an attacker calling an employee, claiming to be from the IT department, and requesting the person's username and password so they can perform some critical piece of maintenance in order to prevent all of the person's data from being permanently deleted. We all know that we should never just click a link in an email, even if it appears to come from someone we know. Yet countless people do that every single day, because the email appeared to be from someone they knew and therefore trusted not to send them anything evil. Attackers can always count on enough people looking for something free to give them the in that they need. It might be an email informing them that they've just won an all-expenses-paid cruise: just click here to claim your prize. In one experiment, thumb drives loaded with malicious software were scattered in the parking lot of a company that did highly sensitive work. Enough users couldn't resist the free drive that some poor soul had dropped that nearly all of the company's networks were thoroughly compromised in less than a day. Finally, there may be people with legitimate access whose intent is to compromise the system, for a variety of purposes, high and low. And let's not forget the adage that everyone has a price: if you try hard enough and offer a large enough bribe, you can count on eventually finding a taker.

To wrap things up, we need to be aware of the vast number and types of attacks that our enemies can bring to bear against the cryptosystems we design and build. Not only are there a wide variety of technical attacks that our algorithms must withstand, but our adversaries can go after amazingly subtle artifacts of how our algorithms are implemented to leak information. More difficult still is overcoming the unintentional lapses of human nature by designing our systems so that users have no alternative but to use them properly, and accepting that users will scream in protest about the inconveniences that result. Most insidious are the social engineering attacks that can render even the most astute and conscientious users vulnerable. About the only thing we can do here is design systems in which no one person has the knowledge or access to do unacceptable levels of damage, even if that is their objective. But such steps are expensive and cumbersome, and so are seldom used except in the most extreme situations. In the end, our quest for security often comes into conflict with our need for utility. A perfectly secure cryptosystem that no one can or will use serves no purpose.