These are not edge cases…
Listening to a FIDO UX podcast, I wanted to get this down while I’m thinking about it.
Here is a list of sign on experiences that definitely are _not_ edge cases. Some are literally the first thing users do, others are the first thing customers ask about (in my experience). In no particular order:
New phone – pairing* / first use
Additional phone – pairing / first use
New computer/laptop – pairing / first use
Temporary device – such as when phones are not allowed or a phone has been forgotten
No access to phone / no phone at all
* RE pairing – this can refer to a few different things depending on the key / credential model, including pairing of a phone with a laptop for future use as a remote authenticator, or enrolling a platform authenticator (such as on a laptop) based on possession of the remote authenticator (such as on the phone)
A bazillion keys
This is my favorite and most interesting “new” problem that is not new 😊.
One of the defining characteristics of Webauthn is that it requires a distinct key pair for each resource and each client device. This provides the advantage of privacy that comes from non-correlation across sites, but it also multiplies the number of new keys we’ll be dealing with, exceeding even the number of passwords each of us has today.
A world of widespread Webauthn/FIDO2 deployment will be one of approximately “one bazillion” keys across platform and roaming authenticators, users, devices, and relying parties. This creates the potential for a key management nightmare.
And unlike the mixed bag of API keys, tokens, confidential client secrets, passwords, and you-name-it that can be managed by a Secrets Management solution, these new keys will be user keys rather than machine-to-machine keys. So these keys will require user experiences not only for usage, but for the other common tasks we’ll need to perform to use our keys easily. The problems go a bit beyond simple create, get, list, and delete on a given authenticator.
We will need solutions to common situations like:
Oh shoot, which key did I use for this site and which authenticator device does it live on?
Do I have a backup key? Is it up to date?
I have keys across authenticator devices, computers and phones. How do I keep track of them all?
What if I need to revoke or invalidate a key?
I have a new phone and want to provision keys on the platform authenticator for a large number of accounts all at once. How can I do this easily?
What if I lose my phone, which has a lot of keys on it?
Before leaping to “the user shouldn’t have to know” and “we’ll do it all behind the scenes silently”, please recall from Principles of Good Auth, where I said that access control decisions need to be explicit because, amongst other things, the “friction” or trouble taken to interact with [our keys] has a good result – it trains us to make better choices.
So let’s talk about some concepts and patterns for implementers and enterprises to help us have the right credential and key management solutions for the world of a bazillion keys.
Optimizing Key Issuance
The first way to mitigate this proliferation of keys is to ensure keys are enrolled in an efficient way so that we don’t have more keys than we need, but we have sufficient keys to cover the access scenarios. This is of course in the control of the websites / relying parties who support key-based sign in. Here are a few things they should think about:
Platform vs roaming authenticators. A platform authenticator – on the user’s phone or laptop – is going to be the most convenient to use for day to day sign on. A user will likely enroll more than one of these. A roaming / “cross platform” authenticator (a.k.a. a security key) can connect to a laptop, phone or other device, providing multi device support as well as a backup key.
Use of identity providers as aggregators. There are two ways for a relying party to support Webauthn sign in: either natively, consuming and validating Webauthn assertions, or via federated single sign on from an identity provider (IDP) that supports Webauthn. In the latter case, the website simply needs to support the federated authentication method – SAML, OAuth, etc – that the identity provider supports. How should websites and apps decide which approach to use? Weigh how much your app or service values the privacy / anonymity of users against the efficiency of simply pointing to an IDP. Also, what is the cost of implementing Webauthn natively and taking on the management of all of your users’ credentials, vs outsourcing that task to an IDP?
Use of resident/discoverable keys. While the choice of resident or non-resident keys doesn’t affect key proliferation (the keys still exist no matter where they are stored), this implementation choice does affect key management scenarios, mostly because taking on storage/protection of a private key shifts a management burden from the user to the relying party, both in terms of allocating capacity to manage the keys and in terms of protecting private information. (Both this choice and the platform-vs-roaming choice show up in the registration sketch after this list.)
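To make the first and third points concrete, here is a minimal front-end sketch of how a relying party can steer both choices at registration time using the standard WebAuthn API. The RP name/domain and demo values are placeholders; in a real flow the challenge and user handle are issued by the server.

```typescript
// Sketch only: steering authenticator attachment and key residency at
// registration time. Placeholder values throughout; not a complete ceremony.
async function registerCredential(
  attachment: AuthenticatorAttachment,  // "platform" (built in) or "cross-platform" (security key)
  residency: ResidentKeyRequirement     // "required" | "preferred" | "discouraged"
): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      // Demo only: a real challenge must be generated server-side per ceremony.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { name: "Example RP", id: "example.com" }, // hypothetical relying party
      user: {
        // Demo only: in practice, a stable user handle stored by the server.
        id: crypto.getRandomValues(new Uint8Array(16)),
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        authenticatorAttachment: attachment, // platform keys for daily use; roaming keys for backup / multi-device
        residentKey: residency,              // discoverable keys change who carries the management burden
        userVerification: "preferred",
      },
    },
  });
}
```

An RP that wants users to end up with both a convenient daily key and a backup could run this twice: once with “platform” and once with “cross-platform”.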
Implementation: Key and Credential Management Experiences
So, which key management scenarios and capabilities are most important? This is of course up to the implementers, but here are some thoughts from me about which are the most imperative:
For Webauthn supporting websites / relying parties
Enable enrollment of multiple factors, including multiple Webauthn factors, to ensure a backup device can always be enrolled in advance of more seamless backup capabilities emerging in the industry.
Consider collecting a “hint” at Webauthn registration time to help the user recall which device (phone, security key, or laptop, for example) to use at authentication time (see the sketch after this list).
Provide easy, user-driven key management via self-service.
Provide flexible recovery experiences in case of loss of a key, and surface meaningful information when there are multiple available authentication methods or credentials for a sign in.
Consider providing user-elected sign in method preferences, such as per-device default methods or last-successfully-used method as default for next sign in from a device.
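As a hypothetical illustration of the hint and self-service ideas above, here is the kind of per-credential record a relying party might keep; all field names are invented for this sketch.

```typescript
// Illustrative only: per-credential metadata supporting hints, self-service
// management, and recovery. Field names are hypothetical.
interface StoredCredential {
  credentialId: string;    // base64url credential ID returned at registration
  publicKey: string;       // the registered public key (e.g. COSE, base64url)
  nickname: string;        // user-supplied hint, e.g. "work laptop" or "blue security key"
  transports: string[];    // e.g. ["internal"] or ["usb", "nfc"]
  createdAt: Date;
  lastUsedAt: Date | null; // helps users and support staff spot stale keys
}

// At sign-in time, surface the nicknames so the user can recall which
// authenticator to reach for, plus enough context to spot stale entries.
function describeEnrolledFactors(creds: StoredCredential[]): string[] {
  return creds.map(
    (c) => `${c.nickname} (last used: ${c.lastUsedAt?.toDateString() ?? "never"})`
  );
}
```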
For authenticators (platform or cross-platform)
Ensure authentication devices and platforms provide basic enumeration of which relying parties / users have been enrolled on the device (gated behind PIN/biometric as appropriate for spec compliance). For example, YubiKeys with firmware version 5.2.3 or higher offer this via the YubiKey Manager command line: “ykman fido credentials list”. This applies only to resident credentials in the YubiKey case.
For non-resident credentials, the above is not feasible today because no information is stored on the authenticator device. However, if we expect the non-resident pattern to be used widely going forward, the industry should consider paving new ground by enabling the same enumeration for non-resident credentials, based on a mapping of which relying parties the device can decrypt keys for (again, respecting and mitigating security/disclosure concerns by requiring PIN/biometric).
A note on backups of Webauthn credentials/keys
While manually enrolled hardware backup keys, and new cryptographic schemes to make maintaining those keys easier, have been proposed, my gut is that a backup solution requiring incremental effort on the user’s part is not going to be adopted widely enough to be valuable, and that cloud-based backups will be a more successful path. While it pains me to defeat the purpose of the private key by sending it across the internet for storage at a cloud provider, the existence of non-resident keys in the Webauthn world means that we’ve crossed that bridge already, so we may as well enjoy some further benefits from it.
So now the only remaining question becomes: who should take on the key backup? It will be fragmented and difficult for each service / RP to take on backups for its own credentials, and it won’t solve the problem of “have I backed up all my creds” for the user. IDPs could do it, but not every credential is an IDP credential. This leaves technology and security companies, who have both the expertise to back up keys safely and the relationship with the authentication device(s). Microsoft, Yubico, Google, Apple, Okta, etc. all have the security depth and technology breadth to execute successfully on an offering to back up (and manage) users’ keys easily. And any company with a credible authentication app would have a great entry point to introduce such a service. A key backup service would then naturally lead to a new-device-provisioning experience, as well as other key management experiences (what keys do I have where, etc.).
Consent culture
“I feel like maybe we all unknowingly agreed to something in Apple’s Terms of Service”
– Corinne Fisher (@philanthropygal)
Background
A few years ago I was inspired to write the following after traveling to Europe:
Dear Europe Internet,
I know that you use cookies.
I promise to always know that you use cookies.
I will not forget.
Please stop telling me that you use cookies.
I know.
Thank you.
Later, I was able to update it as follows:
Dear Europe Internet,
I know that you use cookies.
I promise to always know that you use cookies.
I will not forget.
Please stop telling me that you use cookies.
I know.
Thank you.
More recently, I was part of a team that was building tons of apps and APIs and looking, as are we all, for a way to govern who could call whom and for what and according to whom. One way to do it is to give every app a certificate, but that’s pretty high overhead and plants a bunch of time bombs in your infrastructure. Also, it doesn’t solve the problem. It gives every app an identity (a cert) but it doesn’t solve who can call whom. Similarly, neither does OAuth, which provides clients, apps, and APIs/resources with identifiers and even credentials, but which lacks a specification for how the actual authorization model mapping should be built and maintained.
Industry specs and working groups are constantly developing, and I’m always looking into various drafts and working groups, but as of now I haven’t seen an industry standard for mapping user permissions to resources. The closest I can find is User-Managed Access (UMA), which has product implementations including ForgeRock’s.
The team I was on came up with a concept that was kind of cute: they called it a “dating app”. It was a way for apps/APIs to pair with each other and establish mutual permissions (not sure if it included users or not).
My boss was a bit nervous about introducing the concept of swipe-left / swipe-right into the workplace, for fear of its effect on the women-folk and our fainting couches. I refrained from telling him my previous product had solved this problem via a concept called “application groups” so …. I guess you could say we were more liberal.
The Problem
From “Principles”
It’s common for sign on to be followed by a “consent” prompt. This means a downstream resource (another website or app) will gain privileges of its own if the user clicks “Accept”. In these cases it is crucial that the auth experience enables the user to know what access is being requested and for how long. Good auth provides this, as well as an easily discoverable way to revoke or manage access after the initial consent prompt.
The pattern in effect in most sign on flows today, which primarily use OAuth and OpenID Connect (OIDC), is that an application can be granted permission to a user’s resources by means of a “consent” screen on which the user approves the app’s access by choosing Accept or similar.
This is by design. The entire purpose of OAuth/OIDC was so that sharing amongst an ecosystem of apps could happen without users giving their passwords to multiple apps or people. The problem is that a couple of issues with these consent patterns (in their current implementations) have been exploited by malicious actors. For one, it is hard for a user to determine whether a consent request is appropriate or coming from a trustworthy app. For another, consent does not tend to expire. Once a user, prompted by their OAuth authorization server for consent to allow an app to access certain resources, clicks “accept”, the authorization server tends to persist this consent such that the app has access indefinitely.
This un-ending access comes by virtue of the (potentially malicious) app being granted not only access tokens (which tend to expire within hours) but refresh tokens that enable the app to keep obtaining new access tokens for weeks or months.
An aside
Not the focus of this article, but OAuth/OIDC do a great job of specifying interoperable token mechanics, good key hygiene, and access flows that acknowledge the proliferation of mobile apps, APIs and the diversity of browsers and devices. It’s just that we’re still struggling to provide the “who can access what and for how long” part of authorization. As of this point in time, we’ve done an insufficient job of enabling the entity we call “resource owner” to actually control who (whether app or person) has access to which of their resources and for how long.
Concepts of Consent
The relevance of the term ‘consent’ in a sign on context has expanded over the years as auth patterns and flows have evolved.
What it was historically
Five to ten years ago, most customer requests I heard regarding consent were simply to let the user know what profile information (for example, email address) was included in the sign on token sent to the app they were signing on to. These capabilities were of particular importance to European customers.
What it has become
Increasing privacy regulations and vastly expanded sign on patterns have extended consent’s charter to something more like let the user know and control which apps can access their info and what those apps can access. This goes way beyond sign on token contents to include apps allowed to reach back into your information while you are offline. And it’s not just profile information but all information: imagine an app that not only signs you in with your Google or Microsoft account, but can access your Google Drive or Microsoft OneDrive as well, for example.
Admin vs User vs App
Consent scenarios happen at several different layers, and it may be helpful to spell them out.
First, we have “admin” consent, in which an administrator configures a permission mapping to allow a client (a mobile, browser-based, or web app) to call a resource (app or API). This permission mapping may also include which users and/or which scopes (subsets of resources) are allowed within each mapping.
Next, we have “user to app” consent, in which a user provides consent to authorize app A to access her own information at app B. This requires an app A to app B permission mapping, as well as an authorization model by which app B knows which of its users have authorized app A to have access to what.
Going one step further, we have “user to user” consent, which can simply be called “sharing”, in which a user authorizes not only app A but user X of app A to access their information at app B. This requires app to app as well as user to user permissions. The User Managed Access (UMA) standard, which has several implementations including ForgeRock, is an example of a model for user to user app consent.
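To make these three layers concrete, here is an illustrative data model; every name in it is hypothetical, and real products model these grants in many different ways.

```typescript
// Admin consent: an administrator pre-authorizes app-to-app access.
interface AdminGrant {
  clientAppId: string;       // the calling app
  resourceAppId: string;     // the app or API being called
  allowedScopes: string[];   // scopes (subsets of the resource) permitted
  allowedUserIds?: string[]; // optionally, which users the mapping covers
}

// User-to-app consent: a user authorizes app A to access her data at app B.
interface UserToAppGrant {
  userId: string;
  clientAppId: string;   // app A
  resourceAppId: string; // app B
  scopes: string[];
  grantedAt: Date;
}

// User-to-user consent ("sharing"): user X of app A is authorized to
// access this user's resources at app B, as in the UMA model.
interface SharingGrant extends UserToAppGrant {
  authorizedUserId: string; // user X
}
```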
Consent Abuse
Consent abuse attacks exploit the current state of the industry, in which an app that wants access to a user’s resources only presents the user (via their auth server) with a one-time form that is often confusing or ambiguous.
Consent abuse happens when:
A nefarious app tricks a user into releasing that user’s information for the app to access (phishing)
A nefarious app tricks a user into releasing other users’ information for the app to access (phishing + failure of the authorization model)
As mentioned above, the access is often un-expiring because of a persistent authorization policy (aka “grant”) at the auth server. The nefarious app will tend to have ongoing access, unbeknownst to the user, who is never troubled again.
Finally, it is too often unclear where the user should go to manage which apps and/or users have what access.
Another aside
It wasn’t just consent abuse that caused the widely publicized Facebook / Cambridge Analytica scandal’s mass disclosure of user profile information; a sloppy, porous permissions model (which has since been changed) also played a part. But failure to provide a truly useful consent experience, resulting in users unknowingly consenting to release other users’ data to a third-party app, certainly didn’t help.
Implementation
Here are some things we in the industry can do to mitigate the tricky problem of consent and consent abuse:
Provide better consent UX
“Consent” prompts need to do a much better job of informing the user of what they are allowing and the impact of their choice. However, because we live in a world where now every website is required to present every user with a “Do you want cookies” prompt which precisely no one reads, this can’t be the only answer.
Provide easy management of app consent state
What is really required is that every identity provider and resource that persists OAuth consent grants or other similar permissions should provide an easily discoverable way for users to manage which apps (and users) have been given access to what, and to revoke and manage those privileges as needed. Again, this needs to be easily discoverable – not buried in the user profile’s advanced security settings.
Consider a time bound default for consent grants
Provided we ensure a simple workflow that allows the user to re-authorize their consent upon expiration, we should consider a validity period for consent grants (with user testing to ensure we keep down the noise factor).
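A minimal sketch of the idea, assuming a hypothetical grant store and a 90-day default chosen purely for illustration:

```typescript
// Sketch: a consent grant that stops meaning "forever". Names and the
// default validity period are illustrative.
interface ConsentGrant {
  userId: string;
  clientAppId: string;
  scopes: string[];
  grantedAt: Date;
  expiresAt: Date;
}

const DEFAULT_VALIDITY_DAYS = 90; // tune with user testing to keep the noise down

function newGrant(userId: string, clientAppId: string, scopes: string[]): ConsentGrant {
  const grantedAt = new Date();
  const expiresAt = new Date(grantedAt.getTime() + DEFAULT_VALIDITY_DAYS * 86_400_000);
  return { userId, clientAppId, scopes, grantedAt, expiresAt };
}

// At token issuance: a lapsed grant routes the user through a lightweight
// re-consent prompt instead of silently refreshing the app's access.
function isGrantValid(grant: ConsentGrant, now: Date = new Date()): boolean {
  return now < grant.expiresAt;
}
```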
Make consent more dynamic
Better than the above: as with MFA, the “static” consent model should evolve to provide risk- and context-based consent, in which previously consented access is subjected to an additional consent prompt only if the behavior looks suspicious. This could be because of the frequency of access, the apparent location of clients requesting access, or anything else. This experience could be tuned much further to keep the noise factor down and address real risks.
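Here is a sketch of the decision logic this implies; the signals and the threshold are illustrative, not recommendations.

```typescript
// Sketch: re-prompt for consent only when a previously consented app's
// behavior looks unusual. All signals and thresholds are illustrative.
interface AccessContext {
  requestsInLastHour: number;     // frequency of access
  countryCode: string;            // apparent location of the requesting client
  usualCountryCode: string;       // where this app normally calls from
  scopesRequested: string[];
  scopesPreviouslyGranted: string[];
}

function shouldRepromptConsent(ctx: AccessContext): boolean {
  const askingForMore = ctx.scopesRequested.some(
    (s) => !ctx.scopesPreviouslyGranted.includes(s)
  );
  const unusualLocation = ctx.countryCode !== ctx.usualCountryCode;
  const abnormalFrequency = ctx.requestsInLastHour > 100; // illustrative threshold
  return askingForMore || unusualLocation || abnormalFrequency;
}
```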
Extend model from human-to-app access to human-to-human access
As I mention above, the concept of “sharing” can and should be overlaid on an OAuth infrastructure to provide experiences for users to control their consent grants, application and user permissions explicitly. The industry would benefit from an improvement in the level of control users have over which users and apps have access to their data, so I hope to see some success stories here.
For organizations
In the meantime, to the extent that they can, organizations should evaluate admin-based consent flows and configurable consent policies rather than relying solely on individual, user-based consent. Also, ensure your periodic access reviews include consent grants in scope, so that unnecessary consent is purged.
The factors we choose
As most of us know by now, there is always more than one way in: to an app, to a website, to your house.
That sign in page with the username and password prompt? It’s just the digital equivalent of the front door. But guess what? There’s also a back entrance, a garage door, and, when all else fails, a shady locksmith who only wants your mother’s maiden name.
Today’s sign in and recovery methods
Most apps and websites today provide sign in by prompting for a username and password. Many sites including most major email, social media and personal financial sites also offer one or more additional multi-factor authentication (MFA) methods based on a phone call, a text to a mobile phone, or a mobile authenticator app, for example.
If you have forgotten your username and/or password, usually you can click a link and get back into your account based on an email, “security questions” you configured when you set up the account, or recovery codes. MFA methods such as text or mobile app can be used for this as well.
Fortunately and unfortunately, these recovery mechanisms provide an alternative path to sign in. Malicious hackers can choose which access path is easier: sign in or account recovery. Instead of trying to figure out your password, they may figure out answers to your security questions, hack into your email accounts, or socially engineer cell phone service providers into redirecting service for your phone number to a phone they control. They then use these methods to access and take over your account, resetting your password.
The good news is that the set of sign on and recovery mechanisms available is evolving from predominantly password and text, to include much more secure factors such as security keys, phones and computers using WebAuthn/FIDO2 credentials.
| Pre-shared symmetric secrets (least secure) | Enrolled factors, non-key-based (more secure) | Public/private key-based methods (most secure) |
| --- | --- | --- |
| Passwords | SMS text to phone | FIDO2/Webauthn platform credential |
| Security questions | Email code or temp link | FIDO2/Webauthn security key |
| Recovery codes | Phone call | Phone app with key pair |
| | Mobile app OTP code | PKI Smart Card |
In order to gain the most value from this evolution, we need to update both sign in and account recovery experiences to use multiple, secure factors.
What websites should do in the near future
In the near term, the number of sites that support MFA will continue to increase, and some will adopt non-password factors as primary authentication, as medium.com has done with its email-based sign in. Because of this, the “I forgot my password” experience will need to broaden to encompass loss of other factors: for example, “I lost access to my email”, “I lost my phone and can’t use a text or phone app”, “I lost my security key”, or simply “I’m having trouble getting into my account”. Authenticator apps themselves may help by providing backup capabilities; however, resources need to account for the possibility that no backup exists, either because the user did not create one or because the authentication method itself does not lend itself easily to backups (such as private key based methods). In short, “sign in” and “recovery” experiences should be offered for each factor. If sign in requires two factors, recovery should have an equivalent or higher bar.
From a security perspective, account recovery should be seen as just another sign in method, as this is the way hackers see it. Strong auth doesn’t mean much if account recovery is weak.
What sites should be thinking about for the future
In the future, the user experience for signing in will converge with the experience for account “recovery” in that the user will be asked to provide whatever factors are most optimal given their device and scenario. There will be a graded scale of factors, where “not all factors are equal”. For example, proof of possession of an asymmetric private key will be considered more secure than providing a symmetric key or password. Multiple credentials / keys from different sources will be considered more secure than a single factor, and request context (risk level based on IP address, request info, device info, etc) can be incorporated into an overall risk score that serves as a factor.
Importantly, a single factor that is stronger, such as a security key, should not be recoverable based on a single factor that is weaker, such as a password or security questions. More generally, if a user needs n factors to sign in, resources should ensure at minimum that users have at least n + 1 factors registered, and ideally should ensure that users have registered additional factors of equivalent or greater strength, to enable recovery without lessening security.
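One way to picture this graded scale and the recovery rule is a simple scoring model. The scores below are purely illustrative; no standard assigns these numbers.

```typescript
// Illustrative factor-strength scores; higher is stronger. Not a standard.
const FACTOR_STRENGTH: Record<string, number> = {
  securityQuestions: 1,
  password: 1,
  smsCode: 2,
  emailLink: 2,
  totpApp: 3,
  fido2Key: 5, // proof of possession of an asymmetric private key
};

function strengthOf(factors: string[]): number {
  return factors.reduce((sum, f) => sum + (FACTOR_STRENGTH[f] ?? 0), 0);
}

// The rule above: recovery must clear a bar at least as high as sign-in,
// so a weaker factor can never reset a stronger one on its own.
function canRecover(signInFactors: string[], recoveryFactors: string[]): boolean {
  return strengthOf(recoveryFactors) >= strengthOf(signInFactors);
}
```

Under this toy model, a password plus an SMS code (score 3) could not recover an account normally protected by a FIDO2 key (score 5), but a second enrolled FIDO2 key could.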
And lastly a reminder from “Principles”:
Finally, new users or users who do not succeed in the account recovery experience need a way to create a new account or to appeal access to their existing account. Good auth anticipates these needs and provides entry points into these experiences, including an experience for when all credentials are lost.
Implementation ideas
In order to make this new world a reality, resources and identity providers should build features and experiences such as the following:
Basics
Provide support for additional factors for sign in, not just passwords
Provide a recovery experience for each sign-in factor
Help users enroll for additional factors to enable recovery
Expand options and policies for non-password sign in / account recovery
Provide sign in with a single non-password factor such as a phone app or security key
Provide recovery via security keys in addition to phone apps, email, and other less secure factors such as knowledge-based questions. For less secure factors, require multiple factors for account recovery.
For enterprise identity providers
Provide configurable policy to determine how many and which factors are sufficient to reset each factor, while enforcing some baseline criteria
Provide configurable policy to govern how often factors are re-verified
Provide policies to allow configuration of which factors users can use for what (for example, accessing resources vs managing auth factors)
Provide configurable policy rules for enrolling new factor(s) (see the sketch below for one way such policies might be expressed)
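To make the above concrete, here is a hedged sketch of what such configurable policy could look like; every field name and value is hypothetical, not any product’s actual schema.

```typescript
// Hypothetical policy shape for an enterprise identity provider.
interface FactorPolicy {
  factor: string;               // e.g. "password", "totpApp", "fido2Key"
  canReset: string[];           // factors this one is allowed to reset
  reverifyEveryDays: number;    // 0 = no forced re-verification
  usableFor: ("resourceAccess" | "factorManagement")[];
  enrollmentRequires: string[]; // factors required before enrolling this one
}

const examplePolicy: FactorPolicy[] = [
  {
    factor: "password",
    canReset: [], // a weaker factor shouldn't reset stronger ones
    reverifyEveryDays: 365,
    usableFor: ["resourceAccess"],
    enrollmentRequires: [],
  },
  {
    factor: "fido2Key",
    canReset: ["password", "totpApp"], // a stronger factor may reset weaker ones
    reverifyEveryDays: 0,
    usableFor: ["resourceAccess", "factorManagement"],
    enrollmentRequires: ["password", "totpApp"], // require MFA to enroll a new key
  },
];
```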
The principles of good auth
The challenge of auth is the challenge of security in general: our goals are opposite right out of the gate. Let them in, keep them out. At the same time, efficiently, at scale.
According to unsolicited feedback from the guests at a recent (but pre-pandemic, so I guess it wasn’t that recent) barbeque at my house, repeated auth and MFA prompts are some of the most maddening experiences people have with technology.
Why good auth?
Sign in is something that IT professionals, administrators, CISOs, VPs, and developers all experience daily, just like the rest of the user community. No one likes inconvenient and repeated auth prompts, especially when they don’t do a great job of protecting resources and providing security. We get that things need to be protected, and we want to be part of the solution, just not if the experience sucks. And we don’t want to find out that we’ve unwittingly given all of our Facebook friends’ Equifax information to Russian bots (again).
This is why I decided to write out a set of “principles of good auth” (sorry for the grandiose naming). It’s my attempt to define some tenets we can hopefully all agree on across protocols and implementations, so that we’re cognizant of the trade-offs, and instead of getting mired in the ongoing argument of “security wants more prompts and users want less,” we can work toward some common goals.
So here we go: the Principles of Good Auth …
Good auth has a user experience (UX)
Over the past 20 years I’ve heard software engineers, architects, and executives espouse the gospel that “the user should not have to know,” and that “the best UX is no UX”. This is true for many if not most technology scenarios, but good auth cannot be 100% “transparent” or invisible. Not when a person is making access control decisions and choosing when and where to provide their credential information. There must be a clear experience that lets them know what is being accessed, by whom, and for how long. In short, access control decisions need to be explicit.
And there’s another reason that having a user experience is crucial, specifically for accessing and using sign in credentials. Consider your home or car keys. Those keys are obviously very important in your life, and they are not “transparent” or invisible. We can see and touch our keys. We know what they are for. And so we develop good habits around them. We most likely have muscle memory built around checking for our keys before we leave the house, work, gym, etc. Most of us probably know where a set of spare keys is stashed as well. This relationship we have with our keys enables us to keep track of them, to keep them protected and available. If our keys were somehow invisible, or if we didn’t interact with them very often or understand how they worked, this would not be the case. Basically, keys and credentials are different from other technical artifacts in that the “friction” or trouble taken to interact with them has a good result: it trains us to make better choices.
Good auth employs intelligent risk/threat mitigation
The internet of 2022 is one of extremes: for example, most websites interrupt each new user with a popup to explain that “we use cookies”, which is not news because every website does. Meanwhile, authenticated sites and services routinely trade user information with each other without the user knowing at all (see Consent Culture). And for workers, MFA prompts for each desktop, network and app lead to frustration and a tendency to “just approve everything”.
This one may be obvious, but good auth respects the context of a sign-in: for example, what’s being accessed, from where, how the sign-in was authenticated, the privilege level of the user, and environmental risk information; and it uses this information rather than resorting to simplistic rules such as prompts for every new resource, session time limits, or at the other extreme, troubling users just once and then never again. If something new is being requested from a previously consented website, the user should have the chance to provide new, informed consent. But if a new website is requesting a low level of access that multiple other websites likely already use, then maybe the user does not need to be bothered. For workers or for higher risk consumer scenarios such as shopping or financial transactions, the overall risk of the scenario should inform the frequency and experience of auth prompts. As the industry progresses, these experiences should be refined continuously, based on finer grained information, and evolve to trouble users only when the level of security or privacy risk merits it.
Good auth avoids methods that are known to be subject to common attacks
The security landscape is fast-moving enough without starting out in a deficit. Phishing, disclosure, and man-in-the-middle attacks are common ways to breach passwords, OTP codes, and other common auth methods. Good auth should embrace methods sufficiently resistant to the most common attacks.
Today this means avoiding factors that can be intercepted, disclosed, and used remotely without detection. Specifically, this means reducing the use of persisted symmetric shared secrets and bearer tokens, passwords, one time codes (OTP), API tokens and other secrets not bound to a proof of possession of a private key.
Good auth respects the practical realities of people’s lives
Good auth not only has an intelligent user experience, it has one that considers human factors such as people’s relationships with their phones, their ability or inability to carry phones or other items into work or school, their tolerance for wearables, their sense of privacy and/or aversion to biometrics, and their level of ability or disability.
Corporate security organizations will have their recommendations about what is the most secure authentication factor, but getting the broader population to adopt more secure practices necessitates meeting people where they are. A less secure factor that will be adopted by more of the population is better than nothing and is better than a highly secure factor which people will not use, will use incorrectly, or will attempt to find ways around.
Good auth helps the user to recognize it
The auth experience is one of the most sensitive in technology. Good auth respects the contract between the user and the “user agent” (usually an app or browser) and helps the user by giving them clear visual cues to help them determine if the interaction is safe. Good auth provides a consistent experience that helps the user to spot anomalies and/or warning signs of things like phishing or fraud.
Good auth guides the user to choose the authentication method most suitable for their device and scenario
Whether creating, accessing, or recovering access to an account, whether from a phone or laptop, app or browser, whether the account is for banking, social media, work, entertainment or shopping, the auth experience should guide the user to the best authentication method for their scenario. This could be a built-in authenticator, a security key, a phone app, a password, or something else. This applies not only to sign in but also to enrollment and recovery scenarios. For example, when signing on from a phone to an account that was enrolled from a computer, it makes sense to guide the user to “enroll their phone” for sign on based on the on-board PIN or biometric.
Learning from historically bad user experiences (for example, the PKI “cert picker”), a good auth experience should surface meaningful information when there are multiple available authentication methods or credentials that are similarly suitable for a scenario. “Meaningful information” is of course subjective, but in general the experience should ensure that the information displayed for each method or credential is distinct and that the easiest to use is prioritized. Providing hints based on information the user provided at enrollment time, “my security key” or similar, can help if done carefully. Hints for users are also hints for attackers, but with a secure factor such as a FIDO2 authenticator, the minimal information in the hint does not offer much help to an attacker who is not in possession of the device. (See Webauthn for PKI professionals.)
Good auth treats credential enrollment and recovery, not just sign-in, as a first class experience
There isn’t just one way to sign in. For most apps and websites, you can either sign in or click something like an “I forgot my password” link, also known as account recovery, to get back into the account based on other information. Good auth makes sure the security of these experiences is consistent with the security of sign in, and that users can easily enter the experience they need with credentials they actually have.
The auth experience should also account for common variants such as “I’m on a new computer or phone”, for example by enabling a fallback method (relative to the ideal method for the device). If I’m on a new phone, I may not have initialized and registered the phone app or platform authenticator. Similarly, if I’m on a new computer I may not have registered my device. There should be an easy entry point to enrollment of a suggested auth method for the new device, for example by identity verification, use of backup methods, or use of a previously enrolled device.
Finally, new users or users who do not succeed in the account recovery experience need a way to create a new account or to appeal access to their existing account. Good auth anticipates these needs and provides entry points into these experiences, including an experience for when all credentials are lost. (see The Factors We Choose).
Good auth recognizes that initial sign in is just the “front door” and that auth must be examined on an ongoing basis
Good auth recognizes that after a user signs in, a lot of things can change. The user’s location or authorization level could change easily, as could the security status of their device. Within an app, API or resource there are usually multiple sub-resources and authorization levels. Access authorized on initial sign in should be specific and minimal, and once signed in, if access to an additional or higher privileged resource is requested, or if environmental risk factors change, auth should be re-examined. (see Consent Culture, The Factors We Choose).
Good auth has clear, configurable, time-bound consent
It’s common for sign on to be followed by a “consent” prompt. This means a downstream resource (another website or app) will gain privileges of its own if the user clicks “Accept”. In these cases it is crucial that the auth experience enables the user to know what access is being requested and for how long. Good auth provides this, as well as an easily discoverable way to revoke or manage access after the initial consent prompt. (see Consent Culture).