A well-designed application has to strike a good balance among various components: aesthetics, usability, security, and so on. I used to think that if a system works well on the surface, it's okay even if it's flawed behind the scenes; as long as users can't tell, it's not a big issue. That logic, however, only works in the short term. It fails miserably in the long term. There are many examples I could cover under this topic, but for now I want to focus on one of our team's specialties: web application security.









To use an analogy, the level of security applied in an application is like the amount of salt we pour onto a plate of our favorite dish: it has to be just enough to make the food flavorful. Too much or too little, and our taste buds are unhappy.





Most web apps employ measures to protect sensitive data, and a majority of those measures involve form submission and account credentials. That said, this is one area where security and design decisions can easily clash. Just try to recall: how many times has your registration for an account been rejected because there wasn't a clear password requirement? Or a form submission failed because you entered something in the wrong format? While it's important to validate form fields and submitted data, if validation is implemented without careful research and testing, the overall cost can outweigh the benefit. That's not to say we should weaken security to favor ease of use: vulnerable systems invite unauthorized access to users' data and can cause both users and system owners a far greater headache.





Let’s proceed to some examples:





1) Password Strength Requirement

If you haven’t clicked on that link above (www.badpasswords.org), here is one of the images in the collection:

We, of course, want to advocate for users to have a stronger password than just “password.” However, let's consider Macy's customers: would you expect everyone in this group to know what “alphanumeric” means? (I admit I had to look it up the first time.)





Now, let's put that in contrast to MailChimp's sign-up page:




The result is self-explanatory: security and usability can coexist when there is a clear set of requirements and live validation. (For users who have JavaScript turned off, we'd need to make sure an HTML fallback is available.)
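One way to keep the stated requirements and the actual validation from drifting apart is to drive both from a single list of rules, so the live hints and the final server-side check never disagree. Here is a minimal Python sketch; the specific rules and messages are illustrative assumptions, not a standard:

```python
import re

# Each requirement pairs a check with the exact message shown to the user,
# so the UI copy and the validation logic cannot fall out of sync.
# These particular rules are examples only.
REQUIREMENTS = [
    (lambda p: len(p) >= 8, "at least 8 characters"),
    (lambda p: re.search(r"[A-Za-z]", p), "at least one letter"),
    (lambda p: re.search(r"\d", p), "at least one number"),
]

def password_problems(password: str) -> list[str]:
    """Return the requirements the candidate password does not meet."""
    return [msg for check, msg in REQUIREMENTS if not check(password)]
```

The same function can back the live (AJAX) feedback, the HTML fallback path, and the final server-side validation.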





2) Email/Username Enumeration

This phrase is app-security jargon; in fact, I didn't know it was considered a vulnerability until I joined the team. Basically, it means you can get an application to tell you whether an email address is registered with it, either by intentionally entering a wrong password at login or by submitting the address to a form that checks whether it exists in the app's database. This technique can lead to a combination of different exploits, from social engineering to network interception. Even if it goes no further, someone could at least find out whether a particular email is in the database… Think about how others could discover you're registered on a site you don't want them to know about. *wink wink*





As seen here, we can test whether rwiguna@gmail.com is registered with Instagram by also submitting another email that's unlikely to exist. Sorry, Riandi, false alarm; please ignore that email:





An alternative solution is to always respond with the same “Email is sent” message even when the account doesn't exist. However, this can be a little problematic if the user owns multiple addresses and can't recall which one they used to register for this particular app. The trade-off is more debatable here, and ultimately the decision should depend on the specifics of the app. A general-purpose site with other protections in place might not need to worry as much about this risk; but if the site involves sensitive personal information of any kind, it wouldn't hurt to add an extra lock on the door.
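The safer route boils down to one rule: the handler must return an identical response whether or not the address exists. A small sketch, assuming a hypothetical user store and mailer:

```python
# Enumeration-safe "forgot password" handler. The REGISTERED set and
# send_reset_email are stand-ins for a real database and mailer.
REGISTERED = {"alice@example.com"}

def send_reset_email(address: str) -> None:
    pass  # placeholder for the real mail delivery

def request_password_reset(address: str) -> str:
    if address in REGISTERED:
        send_reset_email(address)
    # Identical response either way, so the reply leaks nothing about
    # whether the address is in the database.
    return ("If an account exists for that address, a reset email is on its "
            "way. If you don't see one, you may have registered with a "
            "different address.")
```

Note that timing differences between the two branches can also leak information, so in practice the mail send should be queued rather than done inline.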





If we were to take the safer route, I would recommend adding another line to our message to clarify our intentions, along the lines of “If you don’t see an email in your inbox, you might have used a different email to register.”





3) Information Recovery via Email/Text Message

Even before I acquired my infosec paranoia, I already knew that sending unencrypted sensitive data via email is a bad idea, which is exactly why I cringe every time I receive a password recovery email like these:





The idea behind the “click this link to change your password” method is to reduce the attack surface. If an attacker intercepts an insecure connection, a password transmitted in plain text is exposed outright, whereas a time-limited password-reset link prevents such an issue to a certain degree. While sending the password directly spares the user the hassle of setting up a new one, the downside is far more significant.





On top of this, we have the option of multi-factor authentication to provide another layer of safety, as it's less likely that an attacker has access to multiple factors at once. As this research shows, multi-factor authentication is considered quite user-friendly when implemented correctly. The important thing is to let users choose whether or not to use it. If they do, also give them the option to mark a device as trusted, so they don't have to repeat the process every time they sign in. Lastly, give them a way back into the system if they lose access to their verification device, either through a set of recovery codes or a backup email.
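The second factor most apps offer is a time-based one-time password (TOTP, RFC 6238), the code an authenticator app shows. The verification side is small enough to sketch with the standard library, using the common defaults of a 30-second step and 6 digits:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 6) -> str:
    """Compute the RFC 6238 TOTP code for a shared secret at a given time."""
    counter = int(timestamp) // step          # number of elapsed time steps
    msg = struct.pack(">Q", counter)          # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

A server verifies by computing the code for the current time (typically also the adjacent steps, to tolerate clock skew) and comparing with a constant-time check. With the RFC test secret `12345678901234567890`, the code at timestamp 59 is `287082`.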





4) Client-side Code Implementations

Now, we are getting to the nitty-gritty stuff.





As I wrote above, I used to think that perceived reliability matters more than actual reliability (working on the surface but broken underneath). The reason this falls apart in the long term is that it only takes some basic knowledge of web technology to crack such a system. Take the password strength requirement above, for example. Even if the implementation is easy to use and functional, if the protection depends solely on client-side scripts, then in a hacker's eyes it's just decoration.





This section deserves a post of its own, considering how quickly the topic of web hacking becomes complicated. The rule of thumb is that if there is a way to tamper with your application's code, it will be tampered with sooner rather than later.





Also, designers, unless there are already security measures built into the system, please don’t just use JavaScript or CSS to hide/disable elements that are potentially vulnerable. Links, form fields, buttons—those are usually the first things that hackers look at.





All of this goes back to the central concept of balance: the stronger the security being employed, the more consideration has to be given to ensure it won’t impact the experience, and vice versa.




In essence, offering a good UX really means keeping your users happy when they use the application, and the reason to have a solid foundation of security is simply to maintain that level of satisfaction in the long term.