Hardening OpenID Connect/OAuth Authorize Requests (and Responses)

One of the biggest strengths of OIDC and OAuth is their use of the browser front-channel. The browser can show a UI and follow redirects, which makes it very powerful and flexible.

Guess what – the biggest weakness of OIDC and OAuth is the browser front-channel. The browser can show a UI and follow redirects, which also makes it very vulnerable.

Upon closer inspection, the weakest link (no pun intended) is really the authorize request and response. This is where very important parameters and data are sent and received. The most common attacks are mounted by adversaries who can “read but not modify” authorize requests and responses. If they can also modify data and stand up additional endpoints, it’s deadly.

In other words, the real question is, how can we retain the enormous advantages of the browser, while making authorize requests/responses more secure?

Plain OAuth 2.0 has nothing to offer here – so let’s start with OpenID Connect.

The “default” mode in OIDC is just like OAuth: the authorize request parameters are transmitted as unprotected query string parameters – which means the data can leak and is not tamper-proof.

On the other hand, the authorize response in OIDC is always wrapped inside a signed JWT (called the identity token) – either on the browser front-channel in implicit/hybrid scenarios, or on the back-channel for code flow.

OIDC also supports the concept of request objects, where the authorize request parameters are wrapped in a JWT instead of being sent as plain query string parameters. Request objects can be transmitted either by value (on the request query string parameter or body field) or by reference using the request_uri parameter. This makes the parameters tamper-proof and adds all the features we like about security tokens, such as authentication, replay protection and expiration.
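For illustration, here is a minimal sketch of creating a signed request object with the System.IdentityModel.Tokens.Jwt library – the key material, claim values and URLs are placeholders, not a prescribed setup:

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Security.Cryptography;
using Microsoft.IdentityModel.Tokens;

// in practice, load the client's registered signing key instead
var rsa = RSA.Create(2048);
var credentials = new SigningCredentials(new RsaSecurityKey(rsa), SecurityAlgorithms.RsaSha256);

// wrap the authorize request parameters in a signed JWT
var requestObject = new JwtSecurityTokenHandler().WriteToken(new JwtSecurityToken(
    issuer: "client_id",            // per OIDC, iss is the client_id
    audience: "https://as",         // the authorization server
    claims: new[]
    {
        new Claim("client_id", "client_id"),
        new Claim("response_type", "code"),
        new Claim("redirect_uri", "https://client/callback"),
        new Claim("scope", "openid api1"),
    },
    expires: DateTime.UtcNow.AddMinutes(5),
    signingCredentials: credentials));

// transmitted by value: https://as/authorize?client_id=client_id&request=<requestObject>
```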

What about OAuth?
The above features do not exist in plain OAuth 2.0 – but they can be added by the “aftermarket”. Specs are currently in the making to add JWT-secured authorization requests (JAR) and responses (JARM). While some details differ slightly from the OIDC spec, the intentions are exactly the same – make the requests and responses tamper-proof (and sometimes also confidential).

Of course, this has some consequences. The client must be able to deal with the crypto necessary to create those JWTs, and URLs definitely get longer. Mix in more advanced scenarios that put additional data into the JWTs (e.g. the new Rich Authorization Requests, aka RAR), and URLs are no longer the optimal transmission vehicle.

Enter by-reference request objects
With this technique the client creates the request object, stores it somewhere the AS can “download” it from, and then simply passes the URL via the request_uri parameter. Sounds easy, but it has some real-world challenges: the AS needs access to the location, somebody has to do the housekeeping of creating and deleting those files, and SSRF (server-side request forgery) is definitely something you have to think about.

Which brings us to the (so far) last step in the evolution of “authorize request hardening”, which is called Pushed Authorization Requests (PAR). The idea is simple (and great) – basically the AS also acts as the “drop location” for by-reference request objects.

With PAR, the client first pushes all the authorize request parameters via a back-channel connection to the authorization server. This call can utilize all the benefits of back-channels (e.g. strong client authentication, or simply not being a front-channel). The AS then responds with a URI that represents the request parameters (and a time-out).

The front-channel authorize request would then be reduced to:

https://as/authorize?request_uri=urn:par:1234
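A hedged sketch of that round-trip with HttpClient – the PAR endpoint path, credentials and response handling are illustrative and depend on your AS:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;

var http = new HttpClient();

// back-channel: push the authorize request parameters to the AS
var response = await http.PostAsync("https://as/par", new FormUrlEncodedContent(
    new Dictionary<string, string>
    {
        ["client_id"] = "client",
        ["client_secret"] = "secret",   // or stronger client authentication
        ["response_type"] = "code",
        ["redirect_uri"] = "https://client/callback",
        ["scope"] = "openid api1"
    }));

// the AS responds with a reference and its lifetime,
// e.g. { "request_uri": "urn:par:1234", "expires_in": 60 }
using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
var requestUri = json.RootElement.GetProperty("request_uri").GetString();

// front-channel: the authorize request only carries the reference
var authorizeUrl = "https://as/authorize?client_id=client&request_uri=" +
                   Uri.EscapeDataString(requestUri);
```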

Thorsten wrote really good blog posts that go into more detail – check them out here and here.

What does that mean for IdentityServer users?
OIDC request objects have been supported since v3. JARs have slightly different validation rules and will be supported in v4. For PAR, we are looking for a sponsor right now – if your company wants that feature, let us know!


Hardening Refresh Tokens

Refresh tokens provide a UX-friendly way to give a client long-lived access to resources without having to involve the user after the initial authentication & token request. This also makes them a high-value target for attackers, because they typically have a much longer lifetime than access tokens.

For confidential clients, refresh tokens are automatically bound to the client via the client secret; this is not the case for public clients. While refresh tokens were historically not allowed for JavaScript-based clients, with the deprecation of the implicit flow it’s now technically possible to use them in SPA-style scenarios as well (see the BCP). I’ve already seen a number of popular sites that use them for exactly that purpose.

This post is not about whether you should store access or refresh tokens in a browser (you probably know my opinion about that already), but rather about how to reduce the attack surface of refresh tokens – SPA or not.

Consent

It’s a good idea to ask for consent when a client requests a refresh token. This way you at least try to make the user aware of what’s happening, and maybe you also give them a chance to opt out. While this comes at the expense of the user experience, maybe that is a trade-off the user is OK with.

In IdentityServer we always ask for consent (if enabled) when the client asks for the offline_access scope, which is in line with the OpenID Connect spec.

Sliding expiration

Refresh tokens usually have a (much) longer lifetime than access tokens. You can reduce the exposure though by adding a sliding lifetime on top of the absolute lifetime. This allows for scenarios where a refresh token can be silently used while the user is regularly using the client, but a fresh authorize request is needed if the client has not been used for a certain time. In other words, they auto-expire much quicker without interfering with the typical usage pattern.

In IdentityServer we support sliding expiration via the AbsoluteRefreshTokenLifetime and SlidingRefreshTokenLifetime client settings.
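As a sketch, a client configuration combining both lifetimes could look like this – the values are illustrative, and AllowOfflineAccess is what enables refresh tokens for the client:

```csharp
using IdentityServer4.Models;

var client = new Client
{
    ClientId = "mobile_app",
    AllowedGrantTypes = GrantTypes.Code,
    AllowedScopes = { "openid", "api1" },
    AllowOfflineAccess = true,               // allow requesting refresh tokens

    // token expires if unused for 15 days...
    RefreshTokenExpiration = TokenExpiration.Sliding,
    SlidingRefreshTokenLifetime = 1296000,   // 15 days (in seconds)

    // ...and after 30 days no matter what
    AbsoluteRefreshTokenLifetime = 2592000   // 30 days (in seconds)
};
```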

One-time Refresh Tokens

Another thing you can do is rotate your refresh tokens on every use. This also reduces the exposure, and increases the chance that older refresh tokens (e.g. exfiltrated from some storage mechanism or a network trace/log file) are unusable.

The downside of this approach is that you might have more scenarios where a legitimate refresh token becomes unusable – e.g. due to network problems while refreshing it.

In IdentityServer this is supported via the RefreshTokenUsage client setting, and we default to one-time-only usage.
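One practical consequence of rotation is that the client must persist the new refresh token after every refresh. A sketch using the IdentityModel client library – the tokenStore helper is hypothetical:

```csharp
using System.Net.Http;
using IdentityModel.Client;

var http = new HttpClient();

var response = await http.RequestRefreshTokenAsync(new RefreshTokenRequest
{
    Address = "https://as/connect/token",
    ClientId = "mobile_app",
    RefreshToken = await tokenStore.GetRefreshTokenAsync()  // hypothetical storage helper
});

if (!response.IsError)
{
    // persist both tokens - losing the rotated refresh token here
    // (e.g. due to a crash) locks the client out until the next login
    await tokenStore.SaveAsync(response.AccessToken, response.RefreshToken);
}
```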

Replay detection

On top of one-time-only semantics, you could also layer replay detection. This means that if you ever see the same refresh token used more than once, you revoke all access for that client/user combination. Again, the same caveat applies – while this increases security, it might result in false positives.
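Conceptually, the check could look something like this – a sketch against a hypothetical grant store, not actual IdentityServer code:

```csharp
using System.Threading.Tasks;

// a fragment of a token-endpoint validator; _store is a hypothetical grant store
async Task<bool> ValidateRefreshTokenAsync(string handle, string clientId)
{
    // with one-time semantics, a token leaves the active set once used
    var active = await _store.FindActiveAsync(handle);
    if (active != null) return true;

    // not active - was it a valid token that has already been consumed?
    var consumed = await _store.FindConsumedAsync(handle);
    if (consumed != null)
    {
        // replay detected: revoke all grants for this client/user combination
        await _store.RevokeAllGrantsAsync(clientId, consumed.SubjectId);
    }

    return false;
}
```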

We will add this feature in version 4 of IdentityServer.

Additional Token binding

Another technique to consider is binding the refresh token to the client using key material. For access tokens we have the cnf claim for this purpose.

You could apply the same technique to refresh tokens: if the client presented some proof key at token request time, refreshing the token would require the same key again. This would go well with the ephemeral MTLS client certificates I wrote about in this blog post. I could see this as a nice additional tool in your belt, especially for public mobile or desktop clients.
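As an illustration of the check: the MTLS spec expresses the binding as a certificate thumbprint in the cnf claim ("x5t#S256"). The surrounding code here is a sketch, not an IdentityServer API:

```csharp
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using Microsoft.IdentityModel.Tokens;

static bool ConfirmProofKey(X509Certificate2 presented, string boundThumbprint)
{
    // compute the base64url-encoded SHA-256 thumbprint of the presented client certificate
    using var sha = SHA256.Create();
    var thumbprint = Base64UrlEncoder.Encode(sha.ComputeHash(presented.RawData));

    // must match the x5t#S256 value stored alongside the refresh token
    return thumbprint == boundThumbprint;
}
```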

There is currently an open issue to define the exact feature set around refresh tokens for vNext of IdentityServer – feel free to chime in if you have additional ideas or input.


2020: IdentityServer4 Roadmap

It’s that time of the year – we are working on IdentityServer and locking down the features we want to implement for the next version(s).

Initially we planned to make our 3.0 release the big one – but then .NET Core 3 happened and we had to synchronise our release with Microsoft. That meant we really didn’t have the time to make the changes we wanted to.

This is different for v4, where we have much more control over the schedule. You can imagine that a lot of backlog has accrued over the last year, and that many people have many opinions on what we should add, change or fix. Instead of trying to address all of that at once, we picked five main themes that we think are important for improving IdentityServer as a whole.

Currently we plan to release 4.0 around June. After that we will work on 5.0 which probably will be released with .NET 5 (December-ish).

So here are the five big things we plan to do:

Make it easier to migrate to different signing algorithms
This is already done – and a more thorough description can be found here.

Make Proof-of-Possession tokens easier to use
Again, this work is already done, see the blog post here.

Update scope and resource design
Our scope validation system is a bit dated and not extensible at all. Our scopes were designed to be static, but newer scenarios like dynamic and structured scopes came up after that. In addition, we are adding support for resource indicators, which will give you more control over the shape of access tokens. This allows for isolating resources (meaning you can prohibit the issuance of an access token that is valid for more than one resource) and for resource-specific token content/processing.

Harden Refresh Tokens and make them more secure for SPAs
Refresh tokens in SPAs are becoming a thing (and we can’t stop that). We already have a good feature set around refresh tokens to make them more secure, e.g. sliding expiration and one-time tokens. One missing feature though is detecting multiple refreshes so we can automatically revoke access. This requires some internal changes, but it’s worthwhile.

Re-design Logging and Eventing
This is a huge one and might have to be moved to v5. But in essence we want to go fully structured/semantic and also give more control to the implementer – commonly requested features are full control over log levels, redaction of content, transformation of data, splitting of log streams etc. I am strongly leaning towards using the System.Diagnostics.DiagnosticListener API for that (see the sketch below). Feedback welcome.
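To give an idea of the direction, this is what emitting a semantic event via DiagnosticListener looks like – the event name and payload are purely illustrative, not a planned IdentityServer API:

```csharp
using System.Diagnostics;

var listener = new DiagnosticListener("IdentityServer");

// only pay for the payload allocation if somebody is listening
if (listener.IsEnabled("TokenIssued"))
{
    // structured payload - subscribers decide how to log, redact or transform it
    listener.Write("TokenIssued", new { ClientId = "mobile_app", GrantType = "authorization_code" });
}
```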

Of course there are some smaller issues that might get picked up along the way and you can see a full list here.

As always we encourage you to give feedback and try the new features – we will keep you posted as soon as something is ready.


New in IdentityServer4 v4: Multiple Signing Keys

So far IdentityServer4 only supported a single signing key at a time. There are historic reasons for that.

When we started with .NET Core, the only cross-platform algorithm that really worked (without #ifdef hell) was RSA with SHA-256 (RS256), so we went with that. In light of newer security requirements (e.g. FAPI), RS256 became discouraged, and newer algorithms like PS (RSA-PSS) or ES (ECDSA) will take its place sooner or later.

A quick fix (one we could do without breaking changes) was to allow using a different signing algorithm than RS256. We added that in IdentityServer4 v3. But we still allowed only a single algorithm at a time, which made migrating from RS to something else not really feasible.

Starting with v4, you can register multiple signing keys – or to be more precise, one for each algorithm you want to support – and clients and resources can express their preference if they want to. It’s purely opt-in behaviour.

This way you can smoothly phase-out RS256 in favour of something else by switching your clients and APIs one by one when they are ready.
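As a sketch, the registration could look like this – the exact extension method overloads may differ slightly, so check the updated docs referenced below:

```csharp
using System.Security.Cryptography;
using IdentityServer4;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.IdentityModel.Tokens;

// key loading is illustrative - in production, load persisted keys instead
var rsaKey = new RsaSecurityKey(RSA.Create(2048)) { KeyId = "rsa1" };
var ecKey = new ECDsaSecurityKey(ECDsa.Create(ECCurve.NamedCurves.nistP256)) { KeyId = "ec1" };

// inside ConfigureServices(IServiceCollection services)
services.AddIdentityServer()
    // keep RS256 around for clients and APIs that are not ready yet
    .AddSigningCredential(rsaKey, IdentityServerConstants.RsaSigningAlgorithm.RS256)
    // and offer ES256 for those that are
    .AddSigningCredential(ecKey, IdentityServerConstants.ECDsaSigningAlgorithm.ES256);
```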

Check the updated docs – it’s really a small change, but pretty important going forward.


OAuth 2.0: The long Road to Proof-of-Possession Access Tokens

I did a lot of WS-Security in my (distant) past – and whenever we looked into migrating to OAuth 2.0, there was this one item on the security checklist that was missing in the OAuth world: proof-of-possession tokens (PoP for short).

Now might finally be the time when they become real. But some history first.

OAuth 1.x had PoP tokens, but they were complicated to use and involved some non-trivial crypto. As part of the OAuth 2.0 “simplification”, proof-of-possession became optional and bearer tokens became the standard choice. This was actually one of the reasons why the main OAuth spec author left just a couple of months before the document was finished. See this video for background/entertainment.

One of his legacies though was that the OAuth 2.0 spec got split into two documents – OAuth 2.0 itself (RFC 6749) and token usage (RFC 6750). The only token usage spec written at that time was “bearer token usage”, with “proof-of-possession” to follow at some point. That never happened.

PoP tokens have some nice additional security properties, because they are bound to the client that requested the token in the first place. This means that a leaked/stolen token cannot be used by an attacker without knowing the additional key material that defines that binding. That key material is typically never sent over the wire, which makes network-based attacks much harder. This is especially nice for mobile applications, where the client frequently connects to untrusted networks, and for non-repudiation scenarios.

In WS-Security, the token binding was done purely at the application level (or message level as they called it). A couple of attempts have been made to replicate that idea in OAuth as well (see here, here and here).

All of these approaches were doable (though not straightforward) but suffered from problems like not being able to deal with streamed data (btw – WS-Security had the exact same problem). In the end, the extra complexity was questionable, and all that was left from that effort was the standardization of a claim to express the binding. It’s called cnf – for confirmation. But nothing beyond that.

Having a PoP mechanism at the transport layer seemed to be the better alternative, as it is content-agnostic and requires less code at the application level. Enter token binding – well, don’t, it’s a waste of time. It looked promising, but Google backed out at some point, which made it infeasible as an internet-scale standard.

Mutual TLS
As kind of a “last resort”, the OAuth MTLS spec has been cooking for a while. It describes both using X.509 client certificates for OAuth 2.0 client authentication, and using the certificate for PoP (or sender-constraining, as they call it). Both parts can be used independently, as we’ll see later.

When most people hear X.509, CAs and “Mutual TLS”, they shy away because this is very complicated technology – and yes, that is true. But on the other hand, MTLS is having kind of a renaissance and is very often used these days in “microservice” scenarios, where it’s less about the PKI aspect and more about providing key material.

The Mutual TLS spec describes two authentication modes for clients – one is a full-blown CA/PKI style where you authenticate based on a trust root and distinguished names; the other one is for self-signed certificates, where you get similar benefits without running a complicated CA infrastructure.

In both cases, the client certificate can be used to bind the issued token to the client using the cnf claim.

Another interesting idea is using “ephemeral client certificates” – the client does not necessarily have to use the client certificate for authentication; that can be done in any way you like, e.g. private key JWTs or even shared secrets (though not recommended). The client cert’s only purpose is to provide some key material to the STS for the token binding. This might be a gentle way to introduce PoP into your architecture.
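To illustrate, here is a minimal sketch of creating such an ephemeral self-signed certificate with the .NET CertificateRequest API – subject name and lifetime are illustrative:

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

using var ecdsa = ECDsa.Create(ECCurve.NamedCurves.nistP256);

var request = new CertificateRequest("CN=ephemeral-client", ecdsa, HashAlgorithmName.SHA256);

// short-lived on purpose - it only exists to bind tokens to this client instance
var certificate = request.CreateSelfSigned(
    DateTimeOffset.UtcNow,
    DateTimeOffset.UtcNow.AddDays(1));

// attach it to the handler used for token requests and API calls, e.g.:
// var handler = new HttpClientHandler();
// handler.ClientCertificates.Add(certificate);
```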

To summarize: deploying client certificates is not necessarily as hard anymore as the last time you had a look at it, and may be worth revisiting.

What about SPAs?
Well – nothing, really. MTLS is not really applicable, and token binding is not happening. Other approaches have been tested, and while they work on a crypto level, they still can’t protect against the main attack vector in browsers, namely content injection (aka XSS). So again – the extra effort is questionable.

In the SPA world, the most secure option is still to not store any tokens in the browser at all and only use them server-side. Feel free to layer PoP on top of that using the above techniques.

What does this mean for .NET and IdentityServer Users?
IdentityServer4 has had support for the MTLS spec since March 2019. For the upcoming version 4 we recently extended that support to allow more flexible hosting of the MTLS endpoints, e.g. on a sub-path, on sub-domains, or on completely different domains. We also added support for emitting the cnf claim for ephemeral client certificates (.NET Core has an API to create X.509 certificates on the fly, which works really nicely with that approach – see the sketch above).

See here how to get the IdentityServer4 v4 preview build – and this is the updated documentation.


IdentityServer3 and upcoming SameSite Cookie changes in Browsers

You have probably heard that starting with Chrome 80 in February, the behavior of cookies will change. This is a breaking change, and it affects every single web application on the internet.

Microsoft has patched their supported platforms (ASP.NET, Katana 4 and ASP.NET Core) and provides instructions on how to deal with the changes until the web has stabilized again.

IdentityServer3 runs on Katana 3, which is no longer supported by Microsoft. We announced the end of free maintenance for IdentityServer3 back at the end of 2017 and started offering a security maintenance program in mid-2019.

We are aware that many companies out there still use IdentityServer3 – which means their applications will break in the coming months because of the changes mentioned above.

There is no easy fix for this, since the underlying platform itself does not support the new cookie semantics. We put in the engineering effort to update the old IdentityServer3 code-base to support the 2020 SameSite behavior, and are making this available to our IdentityServer3 security maintenance customers. If you are not already in that program, please contact us immediately.


Use explicit typing for your JWTs

JWTs are being used in many places these days – identity tokens, access tokens, security events, logout tokens…

You have to be careful when validating a JWT that you don’t mistakenly accept a JWT that was issued for a different purpose but “looks” very similar to what you expect.

One example is from the OpenID Connect back-channel logout spec. Besides the “normal” validation steps like signature, issuer, audience and expiration, you also need to check for the existence of a specific claim (the events claim) – and the token MUST NOT contain a nonce claim. Why? Well, because otherwise you might confuse it with an identity token.
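A sketch of those two extra checks on top of standard JWT validation – the helper and its shape are illustrative:

```csharp
using System.IdentityModel.Tokens.Jwt;

static bool PassesLogoutTokenChecks(JwtSecurityToken token)
{
    // must contain the back-channel logout event...
    var hasLogoutEvent = token.Payload.TryGetValue("events", out var events) &&
        events.ToString().Contains("http://schemas.openid.net/event/backchannel-logout");

    // ...and must not contain a nonce (that would look like an identity token)
    var hasNonce = token.Payload.ContainsKey("nonce");

    return hasLogoutEvent && !hasNonce;
}
```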

Explicit typing
The new JWT BCP document recommends introducing explicit types for JWTs.

“Confusion of one kind of JWT for another can be prevented by having all the kinds of JWTs that could otherwise potentially be confused include an explicit JWT type value and include checking the type value in their validation rules. Explicit JWT typing is accomplished by using the "typ" header parameter.”

And the JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens draft proposes using at+jwt for the value of the typ header in OAuth 2.0 access tokens.

We agree that this is an issue and added an explicit typ header in IdentityServer4 v3 – here’s an example access token:

[Screenshot: a decoded example access token with the “typ” header set to “at+jwt”]

The value is configurable via our options – and if you don’t like it, you can go back to the standard value as well.
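A sketch of changing it via the options – AccessTokenJwtType is the relevant setting here (check the docs for your version):

```csharp
using Microsoft.Extensions.DependencyInjection;

// inside ConfigureServices(IServiceCollection services)
services.AddIdentityServer(options =>
{
    // revert to the plain "JWT" value if at+jwt breaks existing token consumers
    options.AccessTokenJwtType = "JWT";
});
```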

