Thoughts on User Safety

2019-09-18

We need to move beyond mere security and towards safety for our customers and our users. This is how we can do that.

Over time, platforms have generally focused on security – the protection of customer data from those who are not authorised to access it. The focus has been on infrastructure and product security: preventing the exploitation of known and unknown vulnerabilities. Standards bodies and regulators across the world have worked to define standards for confidentiality, privacy and security (e.g. SOC 2, ISO 27001, ISO 27018), and platforms have rushed to implement them and be certified in order to attract business that requires these certifications.

I feel that we, as an industry, have lost sight of what is important – the safety of our users. As we improve platform security, we sometimes leave users out in the cold, chastising them for “holding it wrong”. We can do better. What follows is a brief overview of the “fronts” where user safety is at risk, what the table stakes are on each front, and what we can do to improve user safety on each.

Safety vs. Security

First, we must define the distinction between security and safety as it pertains to users of a platform. Much of this revolves around mutual trust – the user trusts the platform to keep their data safe from misuse, and the platform trusts the user not to misuse the platform. As you might imagine, this is not a balanced relationship – the user has a much higher standard of trust in the platform than the platform has in the user. The central thesis is that platform security does not equal user safety.

We now operate in a (groans) zero-trust model when it comes to users, where users are fundamentally untrusted until proven otherwise. This is problematic primarily because it compromises on safety for the sake of security. By treating all users as potential bad actors, we aren’t improving their safety, only the security of the platform. Users remain vulnerable to attacks that take advantage of their inclination to trust platforms – a good example is the campaigns conducted using compromised business email addresses. This leads to threats that are much harder for a platform to mitigate, such as user account takeover and fraud.

A platform that increases user safety by encouraging better practices – sensible defaults rather than “you are holding it wrong” – is a win for both the platform and the user. The platform is more secure and the user is safer as a result.

The Fronts

As I see it, there are three main “fronts” for protecting user safety. I am not calling this a threat model, as I feel that is the wrong terminology to apply here.

The three fronts are:

  1. External
  2. Internal
  3. Insider

These represent the interfaces where a user might be exposed to the wider internet, to other users, and to your employees. Each front requires a different mode of thinking about how user safety might be compromised.

We’ll start with the External front here, and cover the other two in another post.

External

This is the classic positioning for a prospective attacker who wishes to compromise the security of a platform. When it comes to user safety, we are primarily concerned with account takeover and impersonation. Protecting users against these threats is challenging.

Proactive

First, we must have an excellent understanding of the ways users authenticate with the platform.

If your platform only supports username/password or long-lived tokens, stop here. Table stakes are multi-factor authentication and time-limited, revocable tokens. My personal favourite combination is what the European Union requires for Strong Customer Authentication 1. This is a combination of knowledge (e.g. a passphrase/PIN), possession (e.g. a hardware token, or a TOTP app on a mobile phone) and inherence (e.g. fingerprint or facial recognition). These factors must be independent of each other, so that a compromise of one does not compromise the others.

To increase user safety, the platform must offer these and ideally enforce their usage. SMS as a second factor is insufficient – phone numbers are not nearly as secure and unique as we would like to believe.
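
To make the possession factor concrete, here is a minimal sketch of TOTP enrolment and verification using the otplib package – the library choice and the in-memory Map are assumptions for illustration, not a prescription; use whatever secret storage and RFC 6238 implementation your stack already has:

```typescript
// Minimal TOTP enrolment/verification sketch using the `otplib` package (an
// assumption; any RFC 6238 implementation will do). The in-memory Map stands
// in for whatever secret storage your platform actually uses.
import { authenticator } from "otplib";

const totpSecrets = new Map<string, string>(); // userId -> TOTP secret (illustrative only)

// Enrolment: generate a per-user secret and return an otpauth:// URI that the
// user loads into their authenticator app (typically rendered as a QR code).
export function enrolTotp(userId: string): string {
  const secret = authenticator.generateSecret();
  totpSecrets.set(userId, secret);
  return authenticator.keyuri(userId, "example-platform", secret);
}

// Verification: check the code the user submitted against their stored secret.
export function verifyTotp(userId: string, token: string): boolean {
  const secret = totpSecrets.get(userId);
  if (!secret) return false;
  return authenticator.verify({ token, secret });
}
```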

I generally recommend supporting WebAuthn 2 as a user-friendly way to handle a mix of knowledge and possession. Bonus points if you ship reputable security keys (such as the YubiKey 5) to your customers as part of their signup process. In the grand scheme of things, $50 per seat should be small potatoes compared to the contract value of your customer, and the additional safety it grants your user is very large.

There are a couple of vendors that provide integration with existing biometrics, such as Touch ID on macOS and iOS, and these can be used with WebAuthn.
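
To give a flavour of how little browser-side code a WebAuthn registration ceremony needs, here is a rough sketch of credential creation. In a real flow the challenge and user handle come from your server, and the resulting credential goes back to it for attestation verification; both halves are elided here, and the relying party details are placeholders:

```typescript
// Browser-side sketch of a WebAuthn registration ceremony. The challenge shown
// here is generated locally purely for illustration – in practice it must be a
// fresh, server-issued random value that the server later verifies.
async function registerSecurityKey(userId: string, userName: string) {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { name: "Example Platform" }, // placeholder relying party
      user: {
        id: new TextEncoder().encode(userId),
        name: userName,
        displayName: userName,
      },
      // -7 is ES256, the most widely supported algorithm for security keys.
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],
      authenticatorSelection: { userVerification: "preferred" },
      timeout: 60_000,
    },
  });
  // Send `credential` to the server for attestation verification and storage.
  return credential;
}
```

The same call works for both platform authenticators (such as Touch ID) and roaming security keys, which is part of what makes WebAuthn such a convenient baseline.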

On the subject of tokens: if you allow a token in lieu of a username/password combination for programmatic access to your platform, you must be able to revoke these tokens (as mentioned earlier), and ideally the relevant team(s) should be scanning the common places where tokens are inadvertently exposed, such as source control systems, unsecured pastes and so on.
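
As a sketch of what time-limited, revocable tokens can look like: store only a hash of each token alongside an expiry and a revoked flag, and check all three on every request. The in-memory Map and the 30-day lifetime below are illustrative stand-ins for your own storage and policy:

```typescript
// Sketch of revocable, time-limited API tokens. Only a hash of each token is
// retained, so tokens can be revoked individually and a leaked token table
// does not expose usable secrets.
import { createHash, randomBytes } from "crypto";

interface TokenRecord {
  userId: string;
  expiresAt: number; // epoch milliseconds
  revoked: boolean;
}

const tokensByHash = new Map<string, TokenRecord>(); // stand-in for real storage

const hashToken = (token: string) =>
  createHash("sha256").update(token).digest("hex");

// Issue a token valid for a bounded period (30 days here, purely illustrative).
export function issueToken(userId: string): string {
  const token = randomBytes(32).toString("hex");
  tokensByHash.set(hashToken(token), {
    userId,
    expiresAt: Date.now() + 30 * 24 * 60 * 60 * 1000,
    revoked: false,
  });
  return token; // shown to the user once; only the hash is kept
}

export function revokeToken(token: string): void {
  const record = tokensByHash.get(hashToken(token));
  if (record) record.revoked = true;
}

// Called on every API request: reject anything revoked or past its expiry.
export function authenticateToken(token: string): string | null {
  const record = tokensByHash.get(hashToken(token));
  if (!record || record.revoked || Date.now() > record.expiresAt) return null;
  return record.userId;
}
```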

It is also important to ingest publicly available sources of compromised passwords and check whether a password appears on known breach lists, as per NIST guidance. Troy Hunt currently provides such a collection through Have I Been Pwned’s Pwned Passwords service.
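
Checking a candidate password at signup or password-change time can be done against the Pwned Passwords range API without the password (or even its full hash) ever leaving your infrastructure, thanks to the API’s k-anonymity design – only the first five characters of the SHA-1 hash are sent. A rough sketch:

```typescript
// Sketch of a breached-password check against the Pwned Passwords range API.
// Requires a runtime with a global fetch (any browser, or Node 18+).
import { createHash } from "crypto";

export async function isPasswordBreached(password: string): Promise<boolean> {
  const sha1 = createHash("sha1").update(password).digest("hex").toUpperCase();
  const prefix = sha1.slice(0, 5); // the only part that leaves your systems
  const suffix = sha1.slice(5);

  const response = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
  const body = await response.text();

  // Each response line is "HASH_SUFFIX:COUNT"; a matching suffix means the
  // password appears in a known breach and should be rejected.
  return body.split("\n").some((line) => line.split(":")[0].trim() === suffix);
}
```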

Reactive

Given we have improved user safety in a proactive sense, we must turn our attention to reactive action. This is where strong logging and a good understanding of user behaviour come in.

Much has been said about what we can learn from Very Attacked People 3 – both inside your organisation and among your users. Users of high-value systems, such as financial and marketing systems, will be constantly probed for weaknesses. This may be phishing (both generic and spearphishing), as well as other social engineering tactics. Your organisation may have a team dedicated to protecting the sensitive accounts of executives: point them at your most valued customers too.

In the event of an account breach, the first thing you need is to know about it. Humans are ultimately creatures of habit – a user will log in the same way and go about their business in relatively stable patterns. Their typing cadence and mouse movements are likely to be relatively distinctive. This is all data you can leverage to alert on deviations from normal behaviour for that user.

The full collection and analysis of this data is sometimes called User and Entity Behaviour Analytics. There is a large cost to performing this analysis, but the increased safety of your users is worth it.
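
You don’t need a full UEBA product to start benefiting from the idea. The sketch below flags a login that deviates from the countries, devices and hours a user has historically used; it is deliberately naive, and the fields and threshold are illustrative only – real systems model far richer signals, including the typing and mouse behaviour mentioned above:

```typescript
// Deliberately simple illustration of behavioural alerting: compare a new
// login against the devices, networks and hours a user has historically used.
interface LoginEvent {
  userId: string;
  country: string;
  deviceId: string;
  hourOfDay: number; // 0-23, in the user's usual timezone
}

interface UserProfile {
  countries: Set<string>;
  devices: Set<string>;
  activeHours: Set<number>;
}

export function isAnomalous(event: LoginEvent, profile: UserProfile): boolean {
  const newCountry = !profile.countries.has(event.country);
  const newDevice = !profile.devices.has(event.deviceId);
  const unusualHour = !profile.activeHours.has(event.hourOfDay);

  // Any single deviation is common; several at once is worth an alert or a
  // step-up authentication challenge. The threshold here is illustrative.
  return [newCountry, newDevice, unusualHour].filter(Boolean).length >= 2;
}
```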

Whenever abnormal behaviour is detected, ensure automated processes are in place to first challenge the user to confirm their identity, and potentially to lock the account pending further validation. This is insurance in case, for whatever reason, your multi-factor authentication approach has been bypassed.
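
The automated response can start simple: challenge first, and lock only on repeated failure, so a legitimate user who happens to be travelling or on a new device isn’t punished. A hypothetical sketch of that policy:

```typescript
// Hypothetical policy for responding to a detected anomaly: step-up
// authentication first, account lock (pending out-of-band validation) only if
// the challenge keeps failing. The threshold is illustrative.
type AnomalyAction = "step_up_challenge" | "lock_account";

export function respondToAnomaly(failedChallenges: number): AnomalyAction {
  // First deviations: ask for a fresh second factor rather than locking outright.
  if (failedChallenges < 2) return "step_up_challenge";
  // Repeated failures: lock the account and require further validation.
  return "lock_account";
}
```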

In cases where there is monetary impact to your user (e.g. your platform offers pay-what-you-use services), the timestamp at which anomalous behaviour was detected should be used to prevent erroneous billing, as seen in many 4 cases 5. It is not the user’s fault. Don’t make them pay for their possible mistakes when you can shoulder the cost.
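
Operationally, that can be as simple as fencing usage records against the detection timestamp when the bill is computed – anything after the suspected compromise is excluded (or held for review) rather than charged. A sketch with illustrative types:

```typescript
// Sketch of using the anomaly-detection timestamp to fence off billing: usage
// recorded after the suspected compromise is not charged to the user. Types
// and field names are illustrative, not a real billing schema.
interface UsageRecord {
  timestampMs: number; // when the usage occurred, epoch milliseconds
  cost: number;        // cost of this usage in your billing currency
}

export function billableCost(
  usage: UsageRecord[],
  anomalyDetectedAtMs: number | null,
): number {
  return usage
    .filter((u) => anomalyDetectedAtMs === null || u.timestampMs < anomalyDetectedAtMs)
    .reduce((total, u) => total + u.cost, 0);
}
```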