Cryptography recently joined forces with neuroscience to propose a groundbreaking innovation in authentication. Hristo Bojinov of Stanford University along with Daniel Sanchez and Paul Reber of Northwestern, Dan Boneh of Stanford, and Patrick Lincoln of SRI published a paper on “Designing Crypto Primitives Secure Against Rubber Hose Attacks.”
Passwords, encryption keys, and other methods of verification are critical to keeping secrets in the digital era, but they have major weaknesses. Simple passwords are vulnerable to guessing and brute-force attacks, while long, complicated passwords are hard to remember, leading to password reuse or to passwords written down where attackers can find them. Tokens can be stolen, and research has shown that even expensive biometric systems can often be fooled. Coercion, or “rubber hose,” attacks are often the easiest way to defeat cryptography: even the most secure password can be extracted through torture or other means of coercion. This remained a fundamental problem of cryptography until researchers recently found a way to use implicit learning to teach users passwords they cannot consciously recall but can reliably reproduce.
Implicit learning is knowledge you can replicate but cannot describe, like how to ride a bike. Bojinov et al. applied this neuroscience concept to cryptography by creating a system that teaches users a 30-character password, far stronger and more secure than most regular passwords, which they do not consciously know. It does so through a computer game similar to “Guitar Hero,” in which players are prompted to press the S, D, F, J, K, and L keys in time with a sequence of cues. The game speeds up or slows down based on the player’s ability, and 80% of the characters presented come from a randomly generated 30-character code while the remaining 20% are random. After about 30 to 45 minutes of play, the code is embedded in users’ minds, yet they cannot recall or share it even in part.
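The training setup described above can be sketched in a few lines of Python. This is only an illustration of the 80/20 interleaving idea, not the paper’s actual trial structure; the function names, trial counts, and the per-pass interleaving scheme are all assumptions made for clarity.

```python
import random

KEYS = "SDFJKL"  # the six keys used in the game

def make_secret(length=30, seed=None):
    """Generate a random 30-character secret over the six game keys."""
    rng = random.Random(seed)
    return "".join(rng.choice(KEYS) for _ in range(length))

def training_stream(secret, n_trials=600, trained_fraction=0.8, seed=None):
    """Build a stream of key cues for the player: full passes of the
    secret sequence interleaved with equal-length blocks of purely
    random keys, so roughly 80% of cues come from the secret."""
    rng = random.Random(seed)
    stream = []
    while len(stream) < n_trials:
        if rng.random() < trained_fraction:
            stream.extend(secret)  # one full pass of the secret
        else:
            stream.extend(rng.choice(KEYS) for _ in range(len(secret)))
    return stream[:n_trials]
```

Over a 30-to-45-minute session a player sees thousands of such cues, which is what drives the sequence into implicit memory without the player ever seeing the code written out.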
To authenticate users, the game has them play a shortened, 5-to-6-minute round in which they enter their code interspersed with two other, random 30-character codes. Performance on all three codes is then analyzed, and users reliably score better on their own code than on the other two. The difference is slight but extremely unlikely to occur by chance. It persists two weeks after the initial training and, as users have been shown to be unable to recall their passwords, they cannot give them away, even under coercion.
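The authentication decision can be sketched as a simple comparison of per-sequence hit rates. This is a minimal sketch under assumed names and a made-up threshold; the paper’s actual system uses a calibrated statistical test, not a fixed margin.

```python
def advantage(hits_trained, hits_decoy_a, hits_decoy_b):
    """How much better the user performed on the trained sequence
    than on the better of the two decoy sequences (hit rates in [0, 1])."""
    return hits_trained - max(hits_decoy_a, hits_decoy_b)

def authenticate(hits_trained, hits_decoy_a, hits_decoy_b, threshold=0.05):
    """Accept only if the trained-sequence hit rate exceeds both decoys
    by some margin. The threshold here is illustrative; the real system
    calibrates it so a chance pass is extremely unlikely."""
    return advantage(hits_trained, hits_decoy_a, hits_decoy_b) > threshold
```

A legitimate user might score, say, 79% on their trained sequence versus roughly 70% on the decoys: a small gap, but one that is statistically reliable across a full test round.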
This approach to authentication is still early in the research phase, though you can try the training and authentication games online, and it is not yet perfect. The most obvious problem is that, even with training sessions only 30 minutes long and tests only 5, the whole process takes much longer than regular passwords. If the threat of coercion is high enough to warrant this method, however, it’s safe to assume that the password or key you are trying to protect is important enough that time is hardly an issue.
There are also a number of attacks that remain possible against the technique. The most serious is for an expert player to intentionally perform poorly on two of the three test sequences, giving him or her a 1/3 chance of guessing the right sequence to do well on. This is, however, very difficult to do. The research assumes that the authentication game would be played in person rather than remotely, so it cannot be done by a machine. The pace of the test would be based on the speed the player reached in training, which would be the fastest the player could handle, too fast for methodical planning. Human players would have trouble counting out 30-character sequences and keeping different sequences straight while adjusting their performance accordingly, and any slowdown to do so would be recognized by the game as an attack.
Still, with trained and gifted players, the risk may remain too high, so the researchers suggested making the right sequences harder to guess by using 4 correct and 12 incorrect sequences in the test. Experimentally, separately learned implicit codes did not interfere with one another, so users could plausibly be trained on four codes instead of one. The key drawback is that this would greatly lengthen the training and testing processes, and hence would only make sense for the most sensitive secrets.
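A back-of-envelope calculation shows why the 4-of-16 design helps. Assuming the forger must pick, at random, exactly the subset of sequences to perform well on, the guessing probability drops from 1 in 3 to 1 in C(16, 4) = 1,820:

```python
from math import comb

def guess_chance(correct, total):
    """Probability that a forger, choosing a subset of sequences at
    random to perform well on, picks exactly the trained ones."""
    return 1 / comb(total, correct)

# Single-code test: 1 trained sequence among 3 -> 1/3 chance.
# Four-code test: 4 trained sequences among 16 -> 1/1820 chance.
```

This is an illustrative model only; a real attacker with partial information would face different odds, but the combinatorial growth is the point of the design.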
Another attack that this method cannot consistently counter is eavesdropping: literally or metaphorically standing over a user’s shoulder as they authenticate and watching the process. While it would be difficult to learn the sequence that way, this method, like most others, is not designed to counter such attacks, for which the paper recommends further research.
Lastly, the paper does not address the possibility of threatening users so that they take the authentication test themselves. Compromising the user, however, is equally a problem for all forms of authentication: if the attacker has a man on the inside, passwords, two-factor authentication, and biometrics won’t help either. Still, it’s possible that a user under enough stress and threat could not perform well and fast enough to pass anyway, given the test’s sensitivity.
But while not yet perfect, the method of using implicit learning for authentication solves classic cryptography problems in an innovative way. By opening up a new field, this method will inspire many more improvements and further innovations. If we hope to solve the daunting problems of cybersecurity, we need more novel approaches like this to advance our thinking.