NDC London 2016 Wrap-up

NDC has been fantastic again! Good fun, good talks and good company!

Brock and I did the usual 2-day version of our Identity & Access Control workshop at the pre-con. This was (probably) the last time we ran the 2-day version on Katana. At NDC in Oslo it will be all new material based on ASP.NET Core 1.0 (fingers crossed ;))

The main conference had dozens of interesting sessions and – as always – pretty strong security content. On Wednesday I did a talk on (mostly) the new identity & authentication features of ASP.NET Core 1.0 [1]. This was also the perfect occasion to world-premiere IdentityServer4 – the preview of the new version of IdentityServer for ASP.NET and .NET Core [2].

Right after my session, Barry focused on the new data protection and authorization APIs [3] and Brock did an introduction to IdentityServer (which is now finally on video [4]).

We also did a .NET Rocks [5] and Channel9 [6] interview – and our usual “user group meeting” at Brewdogs [7] ;)

All in all a really busy week – but well worth it!

[1] What’s new in Security in ASP.NET Core 1.0
[2] Announcing IdentityServer4
[3] A run around the new Data Protection and Authorization Stacks
[4] Introduction to IdentityServer
[5] .NET Rocks
[6] Not yet published – will update the post
[7] Brewdog Shepherd’s Bush

Posted in .NET Security, ASP.NET, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI | 2 Comments

PKCE Support in IdentityServer and IdentityModel

PKCE stands for “Proof Key for Code Exchange” and is a way to make OAuth 2.0 and OpenID Connect operations using an authorization code more secure. It is specified in RFC 7636.

PKCE applies to authorization/token requests whenever the code grant type is involved – e.g. plain OAuth 2.0 authorization code flow as well as (the superior) OpenID Connect hybrid flow (e.g. code id_token).

It mitigates an attack where the authorization response is intercepted and the "stolen" code is used to request access tokens. It introduces a per-request secret between the legitimate client and the authorization server that is unknown to an attacker who can only see authorization responses. This is mainly useful for mobile/native clients.
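
For illustration, here is a minimal sketch of the S256 transformation described in RFC 7636 – a SHA-256 hash of the ASCII-encoded verifier, base64url-encoded without padding. The helper name and using directives are mine; IdentityModel does this for you (see below):

using System;
using System.Security.Cryptography;
using System.Text;
 
static string ToS256CodeChallenge(string codeVerifier)
{
    using (var sha256 = SHA256.Create())
    {
        // hash the ASCII representation of the verifier
        var hash = sha256.ComputeHash(Encoding.ASCII.GetBytes(codeVerifier));
 
        // base64url encode without padding
        return Convert.ToBase64String(hash)
            .TrimEnd('=')
            .Replace('+', '-')
            .Replace('/', '_');
    }
}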

IdentityServer3 fully supports PKCE as of v2.4, and the authorization and token endpoint documentation describes the new parameters. The discovery document now also includes the code_challenge_methods_supported entry.
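
As a quick sanity check you can look at the discovery document to see whether PKCE is advertised – a rough sketch using HttpClient and JSON.NET (the URL and property access are assumptions based on the standard discovery format, not IdentityModel APIs):

// requires System.Net.Http and Newtonsoft.Json.Linq
var http = new HttpClient();
var json = await http.GetStringAsync(
    "https://my.server/.well-known/openid-configuration");
 
var disco = JObject.Parse(json);
 
// lists the supported transformation methods, e.g. "plain" and "S256"
Console.WriteLine(disco["code_challenge_methods_supported"]);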

IdentityModel v1.5 includes the client pieces to interact with PKCE. You can e.g. use the following code to construct an authorization request:

// nonce for the identity token, verifier/challenge pair for PKCE
var nonce = CryptoRandom.CreateRandomKeyString(64);
var verifier = CryptoRandom.CreateRandomKeyString(64);
var challenge = verifier.ToCodeChallenge();
 
var request = new AuthorizeRequest("https://my.server/authorization");
 
// hybrid flow request carrying the S256 code challenge
var url = request.CreateAuthorizeUrl(
    clientId: "myclient",
    responseType: "code id_token",
    scope: "openid myapi",
    redirectUri: "https://my.client/cb",
    nonce: nonce,
    codeChallenge: challenge,
    codeChallengeMethod: OidcConstants.CodeChallengeMethods.Sha256);

and the TokenClient to exchange the code for a token:

var tokenClient = new TokenClient(
    "https://my.server/token",
    "client",
    "secret");
 
// redeem the code - sending the original (un-hashed) verifier proves possession
var response = await tokenClient.RequestAuthorizationCodeAsync(
    code: code,
    redirectUri: "https://my.client/cb",
    codeVerifier: verifier);
Posted in IdentityServer, OAuth, OpenID Connect, Uncategorized | Leave a comment

Which OpenID Connect/OAuth 2.0 Flow is the right One?

That is probably the most common question we get – and the answer is of course: it depends!

Machine to Machine Communication
This one is easy – since there is no human directly involved, client credentials are used to request tokens.
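
A minimal sketch of such a request using IdentityModel's TokenClient (endpoint, client id, secret and scope are placeholders):

var client = new TokenClient(
    "https://my.server/token",
    "machine.client",
    "secret");
 
// no user involved - the client authenticates with its own credentials
var response = await client.RequestClientCredentialsAsync("myapi");
var accessToken = response.AccessToken;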

Browser-based Applications
This might be a JavaScript-based application or a “traditional” server-rendered web application. For those scenarios, you typically want to use the implicit flow (OpenID Connect / OAuth 2.0).

A side effect of the implicit flow is that all tokens (identity and access tokens) are delivered through the browser front-channel. If you only want to use the access token on the server side, this would result in an unnecessary exposure of the token to the client. In that case I would prefer the authorization code flow – or hybrid flow.

Native Applications
Strictly speaking, a native application has very similar security properties compared to a JavaScript application. Still, they are generally considered a bit easier to secure because you often have stronger platform support for protecting data and for isolation.

That's the reason why the current consensus is that an authorization code-based flow gives you "a bit more" security than implicit. The much more important reason, IMO, is that there are a couple of (upcoming) protocols that are optimized for native clients, and they use code exchange and the token endpoint as a foundation – e.g. PKCE, Proof of Possession and AC/DC.

Remark 1: By native applications I mean applications that have access to platform-native APIs like data protection or maybe the system browser. Cordova applications, for example, are written in JavaScript, but I would not consider them to be "browser-based applications".

Remark 2: For code-based flows, you need to embed the client secret in the client application. Of course you can't treat that as a secret anymore – no matter how well you protect it, a motivated attacker will be able to reverse engineer it. It is still a bit better than having no secret at all, and specs like PKCE make the situation a bit better as well.

Remark 3: I often hear the argument that the client application does not care who the user is, it just needs an access token – thus we'd rather do OAuth 2.0 than OpenID Connect. While this might be true strictly speaking, OIDC is the superior protocol as it includes a couple of extra security features like nonces for replay protection, or c_hash and at_hash to link the (verifiable) identity token to the (unverifiable) access token.

Remark 4: As an extension to remark 3 – always use OpenID Connect, and not OAuth 2.0 on its own. There should be client libraries for every platform of interest by now. ASP.NET has middleware, we have a library for JavaScript. Other platforms should be fine as well.

Remark 5: Whenever you think about using the authorization code flow, rather use the hybrid flow. This gives you a verifiable token first, before you make additional roundtrips (another extension of remarks 3 and 4).
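
To make remarks 4 and 5 concrete, here's a rough sketch of a hybrid flow configuration using the Katana OpenID Connect middleware (all values are placeholders, and the exact option set depends on your scenario):

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = "Cookies"
});
 
app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
    Authority = "https://my.server",
    ClientId = "myclient",
    RedirectUri = "https://my.client/cb",
 
    // hybrid flow: the id_token comes back on the front-channel,
    // the code gets redeemed at the token endpoint
    ResponseType = "code id_token",
    Scope = "openid profile myapi",
 
    SignInAsAuthenticationType = "Cookies"
});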

HTH

Posted in .NET Security, IdentityServer, OAuth, OpenID Connect, WebAPI | 14 Comments

Announcing IdentityServer for ASP.NET 5 and .NET Core

Over the last couple of years, we’ve been working with the ASP.NET team on the authentication and authorization story for Web API, Katana and ASP.NET 5. This included the design around claims-based identity, authorization and token-based authentication.

In the Katana timeframe we also reviewed the OAuth 2.0 authorization server middleware (and the templates around it) and weren’t very happy with it. But as usual, there were deadlines and Web API needed a token-based security story, so it shipped the way it was.

One year ago the ASP.NET team decided to discontinue that middleware and rather focus on consuming tokens instead. They also asked us if IdentityServer could be the replacement going forward.

At that time there were many unknowns – ASP.NET was still in early betas and literally changing every day. Important features like x-plat crypto (and thus support for JWT) didn't even exist yet. Nevertheless, we agreed that we would port IdentityServer to ASP.NET 5 and .NET Core once the builds had stabilized a bit.

With RC1 (and soon RC2), we decided that now would be the right moment to start porting IdentityServer – and here it is: IdentityServer4 (github / nuget / samples)

What’s new
When we designed IdentityServer3, one of our main goals was to be able to run self-hosted. At that time MVC was tied to IIS so using it for our default views was not an option. We weren’t particularly keen on creating our own view engine/abstraction, but that’s what needed to be done. This is not an issue anymore in ASP.NET 5, and as a result we removed the view service from IdentityServer4.

In IdentityServer4 you have full control over all UI aspects – login, consent, logoff and any additional UI you want to show to your users. You also have full control over the technology you want to use to implement that UI – it will even be possible to implement the UI in a completely different web application. This would allow adding OAuth 2.0 / OpenID Connect capabilities to an existing or legacy login "application".

There will also be a standard UI that you can simply add as a package, as well as templates to get you started.

Furthermore, IdentityServer4 is a "real" ASP.NET 5 application using all the standard platform facilities like DI, logging, configuration, data protection etc., which means you have to learn fewer IdentityServer specifics.
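
As a rough sketch of what the Startup wiring looks like (the exact extension method names might differ slightly in beta1, and Clients.Get() / Scopes.Get() are hypothetical helpers returning in-memory configuration):

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // IdentityServer4 registers itself with the standard DI container
        services.AddIdentityServer()
            .AddInMemoryClients(Clients.Get())     // hypothetical helper
            .AddInMemoryScopes(Scopes.Get());      // hypothetical helper
    }
 
    public void Configure(IApplicationBuilder app)
    {
        app.UseIdentityServer();
    }
}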

What’s not new
Everything else really – IdentityServer4 has (or will have) all the features of IdentityServer3. You can still connect to arbitrary user management back-ends, and there will be out-of-the-box support for ASP.NET Identity 3.

We still provide the same architecture and focused modelling around users, clients and scopes, and we still shield you from the low-level details to make sure no security holes are introduced.

Database artifacts like reference or refresh tokens are compatible which gives you a nice upgrade/migration story.

Next steps
We will not abandon IdentityServer3 – many people are successfully using it and are happy with it (so are we). We are also aware that not everybody wants to switch their identity platform to "the latest thing" but would rather wait a little longer.

But we should also not forget that IdentityServer3 is built on a platform (Katana) which Microsoft is not investing in anymore – and that also applies to the authentication middleware we use to connect to external providers. ASP.NET 5 is the way forward.

We just published beta1 to nuget. There are still many things missing, and what’s there might change. We also started publishing samples (link) to showcase the various features. Please try them out, give us feedback, open issues.

Around the RC2 timeframe, more documentation will also show up in our docs as well as on the ASP.NET documentation site. At some point there will also be templates for Visual Studio which will provide a starting point for common security scenarios.

IdentityServer3 was such a great success because of all the good community feedback and contributions. Let’s take this to the next level!

Posted in ASP.NET, IdentityServer, OAuth, OpenID Connect, Uncategorized, WebAPI | 43 Comments

Validating Scopes in ASP.NET 4 and 5

OAuth 2.0 scopes are a way to model (API) resources. This allows you to give APIs logical "names" that clients can then request tokens for.

You might have very granular scopes like e.g. api1 & api2, or very coarse-grained ones like application.backend. Some people use functional names, e.g. contacts.api and customers.api (which might or might not span multiple physical APIs) – some group by criteria like public or internal only. Some even sub-divide a single API – e.g. calendar.read and calendar.readwrite. It is totally up to you (this is how Google uses scopes).

At the end of the day, the access token (be it self-contained or a reference token) will be associated with the scopes the client was authorized for (and, optionally, that the user consented to).

IdentityServer does that by including claims of type scope in the access token – so really any technique that allows checking the claims of the current user will do.

As a side note – there is also a spec that deals with return codes for failed scope validation. In short – this should return a 403 instead of a 401.

We ourselves went through some iterations in our thinking about how to deal with scopes – here's a summary and some options.

ASP.NET 4.x

The most common way we do scope checking is via our token validation middleware (source/nuget), which combines token and scope validation into a single step:

app.UseIdentityServerBearerTokenAuthentication(new IdentityServerBearerTokenAuthenticationOptions
    {
        Authority = "https://localhost:44333/core",
        RequiredScopes = new[] { "calendar.read", "calendar.readwrite" },
    });

This would validate the token and require that either the calendar.read or the calendar.readwrite scope claim is present.

This middleware also emits the right response status code and WWW-Authenticate header, and respects CORS pre-flight requests.

For finer granularity we also once wrote a Web API authorization attribute – [ScopeAuthorize] that you can put on top of controllers and actions (source).
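
Usage looks roughly like this (a sketch – check the linked source for the exact attribute semantics):

[ScopeAuthorize("calendar.read", "calendar.readwrite")]
public class CalendarController : ApiController
{
    public IHttpActionResult Get() { ... }
}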

As mentioned before – you can always check inside your code for scope claims yourself using the claims collection.
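
For example, a minimal sketch inside a Web API action (OR semantics across the two calendar scopes; names are placeholders):

// inside an ApiController action
// requires System.Linq, System.Net and System.Security.Claims
var principal = User as ClaimsPrincipal;
 
// any one of the calendar scopes is enough
var hasCalendarScope = principal.Claims
    .Where(c => c.Type == "scope")
    .Any(c => c.Value == "calendar.read" || c.Value == "calendar.readwrite");
 
if (!hasCalendarScope)
{
    return StatusCode(HttpStatusCode.Forbidden);
}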

ASP.NET 5

We will have the same "all in one" IdentityServer token validation middleware for ASP.NET 5 – but this time split up into separate middleware that can also be used stand-alone (I wrote about the introspection aspect of it in my last post).

The scope validation part (source/nuget) of it looks like this:

app.AllowScopes("calendar.read", "calendar.readwrite");

This has the same OR semantics as described above.

You can also use the new ASP.NET 5 authorization API to do scope checks – e.g. as a global policy:

public void ConfigureServices(IServiceCollection services)
{
    var scopePolicy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .RequireClaim("scope", "calendar.read", "calendar.readwrite")
        .Build();
 
    services.AddMvc(options =>
    {
        options.Filters.Add(new AuthorizeFilter(scopePolicy));
    });
}

..or as a named policy to decorate individual controllers and actions:

services.AddAuthorization(options =>
{
    options.AddPolicy("read",
        policy => policy.RequireClaim("scope", "calendar.read"));
    options.AddPolicy("readwrite",
        policy => policy.RequireClaim("scope", "calendar.readwrite"));
});

and use it e.g. like this:

public class CalendarController
{
    [Authorize("read")]
    public IActionResult Get() { ... }
 
    [Authorize("readwrite")]
    public IActionResult Put() { ... }
}

One last remark, since we get this question a lot: scopes are not used for authorizing users. They are used for modeling resources (and optionally to compose the consent screen, as well as to specify which clients may have access to these resources).

HTH

Posted in ASP.NET, IdentityModel, IdentityServer, Katana, OAuth, Uncategorized, WebAPI | 9 Comments

OAuth 2.0 Token Introspection Middleware for ASP.NET 5

In my last post I described the value of reference tokens and how the OAuth 2.0 token introspection spec (aka RFC 7662) gives us a standard way of using them.

Over the Christmas break I worked on an ASP.NET 5-based middleware for token introspection – it is pretty simple to use:

app.UseOAuth2IntrospectionAuthentication(options =>
{
    options.AutomaticAuthenticate = true;
    options.ScopeName = "api1";
    options.ScopeSecret = "secret";
    options.Authority = "https://identityserver.io";
});

If your token issuer supports discovery, all you need to do is pass in the base URL – the token introspection endpoint is then found via metadata (there is also a way to explicitly pass in the endpoint address).
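
If you'd rather skip discovery, the configuration looks roughly like this (the option name is an assumption on my part – check the source for the exact property):

app.UseOAuth2IntrospectionAuthentication(options =>
{
    options.AutomaticAuthenticate = true;
    options.ScopeName = "api1";
    options.ScopeSecret = "secret";
 
    // point directly at the introspection endpoint instead of using discovery
    // (option name is an assumption)
    options.IntrospectionEndpoint =
        "https://identityserver.io/connect/introspect";
});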

ScopeName and ScopeSecret are used to authenticate against the introspection endpoint (you can also use an HTTP message handler if you need more control over the wire format).

The result will – as usual – get turned into a claims principal and your pipeline/business logic will have access to the claims.

The token introspection spec is quite new – but needless to say, this works with IdentityServer.

source code / nuget package

PS: This is the first beta version – there is definitely room for improvement. One thing that is missing right now is caching – but I will get to that soon. Please use GitHub to give feedback. Thanks!

Posted in ASP.NET, IdentityServer, OAuth, WebAPI | Leave a comment

Reference Tokens and Introspection

Access tokens can come in two shapes: self-contained and reference.

Self-contained tokens use a protected, time-limited data structure that contains metadata and claims to communicate the identity of the user or client over the wire. A popular format is JSON Web Tokens (JWT). The recipient of a self-contained token can validate it locally by checking the signature, the expected issuer name and the expected audience or scope.
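
A rough sketch of that local validation using the JWT handler (accessToken, issuer, audience and signingKey are placeholders):

// requires System.IdentityModel.Tokens.Jwt
var handler = new JwtSecurityTokenHandler();
 
var parameters = new TokenValidationParameters
{
    ValidIssuer = "https://my.server",
    ValidAudience = "myapi",
    IssuerSigningKey = signingKey   // the issuer's public key material
};
 
SecurityToken validatedToken;
var principal = handler.ValidateToken(accessToken, parameters, out validatedToken);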

Reference tokens (sometimes also called opaque tokens) on the other hand are just identifiers for a token stored at the token service. The token service stores the contents of the token in some data store, associates it with an infeasible-to-guess id and passes the id back to the client. The recipient then needs to open a back-channel to the token service and send the token to a validation endpoint; if the token is valid, the contents are returned as the response.

A nice feature of reference tokens is that you have much more control over their lifetime. Whereas a self-contained token is hard to revoke before its expiration time, a reference token only lives as long as it exists in the STS data store. This allows for scenarios like:

  • revoking the token in an “emergency” case (lost phone, phishing attack etc.)
  • invalidating tokens at user logout time or app uninstall

The downside of reference tokens is the back-channel communication that is needed from resource server to STS.

This might not be possible from a network point of view, and some people also have concerns about the extra round-trips and the load that gets put on the STS. The last two issues can be easily fixed using caching.

I have presented this concept to many of my customers over the last years, and the preferred architecture increasingly looks like this:

If the token leaves the company infrastructure (e.g. to a browser or a mobile device), use reference tokens to be in complete control over their lifetime. If the token is used internally only, self-contained tokens are fine.

I am also mentioning (and demoing) reference tokens here, starting at minute 36.

IdentityServer3 has supported the reference token concept since day one. You can set the access token type to either JWT or Reference per client, and the ITokenHandleStore interface takes care of persistence and revocation of reference tokens.
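
A sketch of a client definition using reference tokens (client id, flow and scopes are placeholders):

var client = new Client
{
    ClientId = "mobile.client",
    Flow = Flows.Hybrid,
 
    // issue a reference token instead of a self-contained JWT
    AccessTokenType = AccessTokenType.Reference,
 
    AllowedScopes = new List<string> { "openid", "api1" }
};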

For validating reference tokens we provide a simple endpoint called the access token validation endpoint. This endpoint is e.g. used by our access token validation middleware, which is clever enough to distinguish between self-contained and reference tokens and does the validation either locally or using the endpoint. All of this is completely transparent to the API.

You simply specify the Authority (the base URL of IdentityServer) and the middleware will use that to pull the configuration (keys, issuer name etc) and construct the URL to the validation endpoint:

app.UseIdentityServerBearerTokenAuthentication(new IdentityServerBearerTokenAuthenticationOptions
    {
        Authority = "https://localhost:44333/core",
        RequiredScopes = new[] { "api1" }
    });

The middleware also supports caching and scope validation – check the docs here.

There are also multiple ways to revoke a token – e.g. through the application permission self-service page, the token revocation endpoint, by writing code against the ITokenHandleStore interface (e.g. from your user service to clean up tokens during logout), or by simply deleting the token from your data store.

One thing our validation endpoint does not support is authentication – this is a non-issue as long as you don't want to use the reference token mechanism for confidentiality.

Token Introspection
Many token services have a reference token feature, and all of them, like us, invented their own proprietary validation endpoint. A couple of weeks ago RFC 7662 – "OAuth 2.0 Token Introspection" – was published, which defines a standard protocol for this.

IdentityServer3 v2.2 as well as the token validation middleware starting with v2.3 have support for it.

The most important difference is that authentication is now required to access the introspection endpoint. Since this endpoint is not accessed by clients, but by resource servers, we hang the credential (aka secret) off the scope definition, e.g.:

var api1Scope = new Scope
{
    Name = "api1",
    Type = ScopeType.Resource,
 
    ScopeSecrets = new List<Secret>
    {
        new Secret("secret".Sha256())
    }
};

For secret parsing and validation we use the same extensible mechanism that we use for client secrets. That means you can use shared secrets, client certificates or anything custom.

This also means that only scopes that are included in the access token can introspect the token. For any other scope, the token will simply appear to be invalid.

IdentityModel has a client library for the token introspection endpoint which is pretty much self-explanatory:

var client = new IntrospectionClient(
    "https://localhost:44333/core/connect/introspect",
    "api1",
    "secret");
 
var request = new IntrospectionRequest
{
    Token = accessToken
};
 
var result = client.SendAsync(request).Result;
 
if (result.IsError)
{
    Console.WriteLine(result.Error);
}
else
{
    if (result.IsActive)
    {
        result.Claims.ToList().ForEach(c => Console.WriteLine("{0}: {1}",
            c.Item1, c.Item2));
    }
    else
    {
        Console.WriteLine("token is not active");
    }
}

This client is also used in the validation middleware. Once the middleware sees the additional secret configuration, it switches from the old validation endpoint to the new introspection endpoint:

app.UseIdentityServerBearerTokenAuthentication(new IdentityServerBearerTokenAuthenticationOptions
    {
        Authority = "https://localhost:44333/core",
        RequiredScopes = new[] { "api1" },
 
        ClientId = "api1",
        ClientSecret = "secret"
    });

Once you have switched to introspection, you can disable the old validation endpoint on the IdentityServerOptions:

var idsrvOptions = new IdentityServerOptions
{
    Factory = factory,
    SigningCertificate = Cert.Load(),
 
    Endpoints = new EndpointOptions
    {
        EnableAccessTokenValidationEndpoint = false
    }
};

Reference tokens are a real problem solver for many situations and the inclusion of the introspection spec and authentication makes this mechanism even more robust and a good basis for future features around access token lifetime management (spoiler alert).

HTH

Posted in .NET Security, ASP.NET, IdentityServer, Katana, OAuth, OWIN, Uncategorized, WebAPI | 16 Comments