An alternative way to secure SPAs (with ASP.NET Core, OpenID Connect, OAuth 2.0 and ProxyKit)

You might have noticed the recent public discussions around how to securely build SPAs – and especially about the “weak security properties” of the OAuth 2.0 Implicit Flow. Brock has written up a good summary here.

The whole implicit vs code flow discussion isn’t particularly new – and my stance was always that, yes – getting rid of the tokens on the URL is nice – but the main problem isn’t how the tokens are transported to the browser, but rather how they are stored in the browser afterwards.

Actually, we had this discussion already back in May 2015 on the OAuth 2.0 email list, e.g.:

My conjecture is that it does not matter >>at all<< where you store tokens in relation to XSS. There is no secure place to store data in a browser in ways that cannot be abused by XSS. One XSS is complete compromise of the client.

And XSS resistant apps are illusive.

Jim Manico – here

What about CSP?

Content Security Policy was created to mitigate XSS attacks in the browser. But to be honest, I see it rarely being used because it is hard to retro-fit into an existing application and interferes with some of the libraries that are being used. Even in brand new applications it is often an afterthought, and the longer you wait, the harder it becomes to enable it.

And btw – getting CSP right might be harder than you think – check out this video about bypassing CSP.


In addition, JS front-end developers typically use highly complex frameworks these days, where they not only need to know the basics of JavaScript/browser security, but also the framework-specific features, quirks and vulnerabilities. Let alone the quality of known and unknown dependencies pulled into those applications.

Oh and btw – given that the new guidance around SPAs and the Authorization Code Flow potentially allows refresh tokens in the browser, the token storage problem becomes even more interesting. Back then, the conclusion turned out to be:

SPA may turn out to be impossible to completely secure.

John Bradley – here

Just like there is IETF guidance for native apps, we always thought there should be a similar document that talks about SPAs. But it seemed everyone was trying to avoid this.

Finally, the work on such a document has started now, and you can track it here.

What about Token Binding?

For me, the most promising technology to protect tokens stored in the browser was token binding (and especially the OAuth 2.0/OpenID Connect extensions for it). This would have allowed tying the tokens to a browser/TLS connection – even if an attacker were able to exfiltrate the tokens, they could not be used outside the context of the original connection.

Unfortunately, Google decided against implementing token binding, and if the standard is not implemented in all browsers, it’s pretty much useless.

What other options do we have?

The above-mentioned IETF document describes two alternative architecture styles that result in no access tokens in the browser at all. Let’s have a look.


Apps Served from the Same Domain as the API

Quoting section 6.1:

For simple system architectures, such as when the JavaScript application is served from the same domain as the API (resource server) being accessed, it is likely a better decision to avoid using OAuth entirely, and just use session authentication to communicate with the API.

Some notes about this:

  • This is indeed a very simple scenario. Most applications I review also use APIs from other domains, or need to share their APIs between multiple clients (see next section).
  • This is also not new. Especially “legacy” applications often had local “API endpoints” to support the AJAX calls they sprinkled over their “multi-page” applications over the years.
  • When traditional session authentication (aka cookies) is used, you need to protect against CSRF. Implementing anti-forgery for APIs is extra work, and while it is well understood, I often found it missing when doing code reviews.

The new kid on the block: SameSite cookies

SameSite cookies are a relatively new (but standardised) feature that prohibits cross-origin usage of cookies – and thus effectively stops CSRF attacks (well, at least for cookies – but that’s what we care about here). It is implemented in all current major browsers.


ASP.NET Core by default sets the SameSite mode to Lax – which means that cross-origin POSTs don’t send the cookie anymore, but GETs still do (which pretty much resembles the standard anti-forgery approach in MVC). You can also set the mode to Strict – which prohibits GETs as well.

This can act as a replacement for anti-forgery protection, but is relatively new. So you decide.
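As a sketch, this is roughly how such a cookie could be configured in ASP.NET Core – the scheme name and policy values are just examples, not prescriptions:

```csharp
// Session cookie that is not reachable from JavaScript (HttpOnly)
// and not sent on cross-origin requests (SameSite).
services.AddAuthentication("cookies")
    .AddCookie("cookies", options =>
    {
        options.Cookie.HttpOnly = true;
        options.Cookie.SameSite = SameSiteMode.Strict; // or SameSiteMode.Lax
        options.Cookie.SecurePolicy = CookieSecurePolicy.Always;
    });
```

With Strict, even top-level cross-origin GET navigations won’t carry the cookie – test that this fits your login flows before committing to it.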

Bringing it together

OK – so let’s create a POC for this scenario, the building blocks are:

  • ASP.NET Core on the server side for authentication and session management, as well as serving our static content
  • Local or OpenID Connect authentication handled on the server-side
  • Cookies with HttpOnly and Lax or Strict SameSite mode for session management (see Brock’s blog post on how to enable Strict for remote authentication)
  • ASP.NET Core Web APIs as a private back-end for the SPA front-end

That’s it. This way all authentication happens on the server, session management is done via a SameSite cookie that is not reachable from JavaScript, and since there are no tokens stored in the browser, the client-side API calls don’t need to deal with tokens or token lifetime management.
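For illustration, the building blocks could be wired up roughly like this – the authority, client id and scheme names below are placeholders, not the actual sample values:

```csharp
// Cookie-based session management plus server-side OpenID Connect authentication.
services.AddAuthentication(options =>
    {
        options.DefaultScheme = "cookies";
        options.DefaultChallengeScheme = "oidc";
    })
    .AddCookie("cookies", options =>
    {
        options.Cookie.HttpOnly = true;
        options.Cookie.SameSite = SameSiteMode.Lax;
    })
    .AddOpenIdConnect("oidc", options =>
    {
        options.SignInScheme = "cookies";
        options.Authority = "https://your-identityserver"; // placeholder
        options.ClientId = "spa-backend";                  // placeholder
        options.ResponseType = "code id_token";
    });
```

The challenge goes to the OpenID Connect handler, but the resulting session lives entirely in the cookie – nothing token-shaped ever reaches the SPA.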

The full solution can be found here.

Browser-Based App with a Backend Component

Quoting section 6.2:

To avoid the risks inherent in handling OAuth access tokens from a purely browser-based application, implementations may wish to move the authorization code exchange and handling of access and refresh tokens into a backend component.

The backend component essentially becomes a new authorization server for the code running in the browser, issuing its own tokens (e.g. a session cookie).  Security of the connection between code running in the browser and this backend component is assumed to utilize browser-level protection mechanisms.

In this scenario, the backend component may be a confidential client which is issued its own client secret.

This is a much more common scenario and some people call that a BFF architecture (back-end for front-end). In this scenario, there is a dedicated back-end for the SPA that provides the necessary API endpoints. These endpoints might have a local implementation or in turn contact other APIs to get the job done. These other APIs might be shared with other front-ends and might also require a user-context – IOW the BFF must delegate the user’s identity.

The good news is that, technically, this is a really easy extension of the previous scenario. Since we already use OpenID Connect to authenticate the user, we simply ask for an additional access token to be able to communicate with the shared APIs from the back-end.

The ASP.NET Core authentication session management will store the access token in an encrypted and signed cookie, and all token lifetime management can be automated by plugging in the component I described in my last blog post. This allows the BFF to use the access token to call back-end APIs on behalf of the logged-on user.

One thing I noticed is that you often end up duplicating the back-end API endpoints in the BFF to make them available to the front-end, which is a bit tedious. If all you want is to pass the API calls through from the BFF to the back-end while attaching that precious access token on the way, you might want to use a light-weight reverse proxy: enter ProxyKit.

A toolkit to create HTTP proxies hosted in ASP.NET Core as middleware. This allows focused code-first proxies that can be embedded in existing ASP.NET Core applications or deployed as a standalone server.

While ProxyKit is very flexible and has plenty of powerful features (e.g. load balancing), I use it for a very simple case: if a request comes in via a certain route (e.g. /api), proxy that request to a back-end API while attaching the current access token. Job done.

app.Map("/api", api => api.RunProxy(async context =>
{
    var forwardContext = context.ForwardTo("http://localhost:5001");

    var token = await context.GetTokenAsync("access_token");
    forwardContext.UpstreamRequest.Headers.Add("Authorization", "Bearer " + token);

    return await forwardContext.Execute();
}));

I think it is compelling that, by combining server-side OpenID Connect, SameSite cookies, automatic token management and ProxyKit, your SPA can focus on the actual functionality and is not cluttered with login logic, session and token management. And since no access tokens are stored in the browser itself, we mitigated at least this specific XSS problem.

Again, the full sample can be found here.

Some closing thoughts

Of course, this is not a silver bullet. XSS can still be used to attack your front-end and back-end.


But it is a different threat-model, and this might be easier for you to handle.

You are also doubling the number of round-trips, which you might not find very efficient. And keep in mind that if you are using the reverse-proxy mechanism, you are not really lowering the attack surface of your back-end APIs.

But regardless of whether you are using OAuth 2.0 in the browser directly or the BFF approach, XSS is still the main problem.


This article was quoted or questioned as saying “this approach is better than tokens”. That was not the point, and I apologize if I wasn’t clear enough. It is an alternative to the “pure” OAuth 2.0 approach – and I am not saying it is better or worse. Both approaches have a different threat model, and you might be more comfortable with one or the other.

Maybe you also need to evaluate your architecture based on where the APIs you want to call live. Would you bother with OAuth if all APIs are same-domain? Where is the tipping point? If all APIs are cross-domain, it certainly is questionable whether they should be proxied.

Anyways – time will tell. SameSite cookies are still very new, but they certainly give you an interesting new option.

Posted in ASP.NET Core, OAuth, OpenID Connect, WebAPI | 15 Comments

Automatic OAuth 2.0 Token Management in ASP.NET Core

As part of the recent discussions around how to build clients for OpenID Connect and OAuth 2.0 based systems (see e.g. Brock’s post here), we substantially updated our workshop and supporting libraries. The updated material (both workshop and break-out sessions) will be part of the upcoming conferences (Oslo, London, Porto, Copenhagen…).

One piece of the puzzle is how to manage OAuth 2.0 access tokens in ASP.NET Core server-side (e.g. MVC) web applications (another piece can be found here – more details soon).

In a nutshell, a client for a token-based system has these fundamental jobs:

  1. Request the token
  2. Use the token

Easy right? Well – there is one step missing (which is actually the most complicated): manage the token.

Looking a bit closer, the actual tasks are:

  1. Use OpenID Connect to authenticate a user and start/join a (single sign-on) session. This will result in an identity token that is used to validate the authentication process
  2. This identity token must be stored somewhere to do a clean sign-out at some later point
  3. As part of that you also request an access and refresh token to be able to communicate with APIs on behalf of the logged-on user
  4. These tokens must be also stored somewhere secure so that the application code can use them when doing API calls
  5. When the access token expires, use the refresh token to request a new access token and make this new token available to application code
  6. At sign-out time, use the identity token to authenticate the sign-out request, and revoke the tokens that you don’t need anymore (e.g. the refresh token)
  7. Make it work in a web farm

While you can build all of that from scratch – let’s have a look at what ASP.NET Core has built-in to assist you.

OpenID Connect Authentication Handler

The OIDC support in ASP.NET Core is pretty full-featured. It supports the Implicit Flow for authentication-only scenarios and the more secure Hybrid Flow (aka code id_token) for authentication plus access/refresh token requests. It would be nice if it also supported “Code + PKCE” out of the box, but that’s a topic for another post.

Most importantly, the handler has the concept of persisting the tokens into the authentication session. IOW – it makes the OIDC artefacts like identity, access and refresh tokens available to application or plumbing code.

If the identity token is available in the authentication session, it will be automatically used as a hint at sign-out time to authenticate the sign-out request to the identity provider.

The Authentication Session

ASP.NET Core has the concept of an authentication session. Even though it is put behind a whole lot of abstraction APIs, for all practical purposes this is implemented by the cookie authentication handler – and, well, a cookie.

Don’t confuse that with the more general session concept – the authentication session is solely for identity/security related information like claims, tokens, expiration times etc.

If you set the SaveTokens option on the OpenID Connect handler to true, the handler will persist the tokens and corresponding metadata in the session. You can then later access them via the GetTokens extension method on HttpContext, or by directly calling AuthenticateAsync with the name of the sign-in scheme. This returns an AuthenticateResult, which in turn has a properties dictionary containing the tokens.
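In code, that looks roughly like this – the scheme name “oidc” is just an example:

```csharp
// enable token persistence on the OpenID Connect handler
services.AddAuthentication()
    .AddOpenIdConnect("oidc", options =>
    {
        options.SaveTokens = true;
    });

// later, e.g. in a controller action, read a token from the authentication session:
// var accessToken = await HttpContext.GetTokenAsync("access_token");
```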

Technically speaking the tokens are serialized into the authentication cookie, at least by default. You might not like that, because it potentially increases the size of your cookie substantially. Another good reason to keep those tokens small.

Another option is to plug in a server-side storage mechanism, e.g. Redis. This way all the data is stored server-side, associated with a GUID – and only the GUID is emitted into the cookie. For that you need an ITicketStore implementation, which you set via the SessionStore property on the cookie handler options.
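A minimal (in-memory, decidedly not production-ready) ITicketStore sketch just to show the shape of the interface – a real implementation would use a distributed store like Redis:

```csharp
// In-memory ITicketStore sketch – swap the dictionary for Redis/SQL in real code.
public class InMemoryTicketStore : ITicketStore
{
    private readonly ConcurrentDictionary<string, AuthenticationTicket> _tickets =
        new ConcurrentDictionary<string, AuthenticationTicket>();

    public Task<string> StoreAsync(AuthenticationTicket ticket)
    {
        var key = Guid.NewGuid().ToString();
        _tickets[key] = ticket;
        return Task.FromResult(key);
    }

    public Task RenewAsync(string key, AuthenticationTicket ticket)
    {
        _tickets[key] = ticket;
        return Task.CompletedTask;
    }

    public Task<AuthenticationTicket> RetrieveAsync(string key)
    {
        _tickets.TryGetValue(key, out var ticket);
        return Task.FromResult(ticket);
    }

    public Task RemoveAsync(string key)
    {
        _tickets.TryRemove(key, out _);
        return Task.CompletedTask;
    }
}

// wire it up on the cookie handler:
// .AddCookie(options => options.SessionStore = new InMemoryTicketStore());
```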

The cookie handler also has support for events, e.g. whenever a cookie is received, or when sign-out is happening. This is a convenient place to wire up automatic token management, e.g.

  • On every incoming request, check the expiration time of the current access token, and if a certain threshold is reached, use the refresh token to get a new access token
  • At sign-out time, call the revocation endpoint at the token service to revoke the refresh token

With that in place you can implement all necessary token management features at the runtime level, and your application code is completely unaware of these details. It simply uses the current access token from the authentication session. If an API call returns a 401, this means that the token management layer was not able to keep the token “fresh” and manual steps (e.g. a new authentication request) are necessary.
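A rough sketch of that refresh logic inside the cookie handler’s ValidatePrincipal event – the five-minute threshold, the token endpoint address and the client credentials are all assumptions for illustration, not the sample’s actual values:

```csharp
// Sketch: refresh the access token when it is close to expiring.
// Uses IdentityModel's RequestRefreshTokenAsync extension on HttpClient.
public override async Task ValidatePrincipal(CookieValidatePrincipalContext context)
{
    var expiresAt = context.Properties.GetTokenValue("expires_at");
    if (expiresAt == null) return;

    var expiration = DateTimeOffset.Parse(expiresAt, CultureInfo.InvariantCulture);
    if (expiration > DateTimeOffset.UtcNow.AddMinutes(5)) return; // still fresh enough

    var refreshToken = context.Properties.GetTokenValue("refresh_token");

    var client = new HttpClient();
    var response = await client.RequestRefreshTokenAsync(new RefreshTokenRequest
    {
        Address = "https://your-identityserver/connect/token", // placeholder
        ClientId = "client",                                   // placeholder
        ClientSecret = "secret",                               // placeholder
        RefreshToken = refreshToken
    });

    if (response.IsError)
    {
        // could not refresh – reject the principal to force a new login
        context.RejectPrincipal();
        return;
    }

    context.Properties.UpdateTokenValue("access_token", response.AccessToken);
    context.Properties.UpdateTokenValue("refresh_token", response.RefreshToken);
    context.Properties.UpdateTokenValue("expires_at",
        DateTimeOffset.UtcNow.AddSeconds(response.ExpiresIn).ToString("o"));

    // re-issue the cookie with the updated tokens
    context.ShouldRenew = true;
}
```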

Sounds a bit abstract? Find a working sample here.

Some key points about the sample:

  • All the cookie events are handled in a dedicated class which gets instantiated using the DI system
    • the ValidatePrincipal method is called on every request and contains the refresh logic
    • the SigningOut method is called at sign-out time to do the token revocation (for both self-initiated sign-outs as well as sign-out notifications).
  • All the OAuth protocol work is done by IdentityModel and is just a matter of calling some extension methods on an HttpClient retrieved from the HttpClientFactory
  • Using some ASP.NET Core config black magic, you can condense the whole thing to one line of code: .AddAutomaticTokenManagement (yay)



Posted in ASP.NET Core, Uncategorized | 12 Comments

What happened in 2018?

2018 has been really busy. We worked on a lot of different things, and I just realized that I only wrote eight blog posts in total.

I decided to block December to catch up on many work and non-work related things, work on a couple of “hobby” projects – and last but not least prepare for the holiday season (where I try to force myself to stay away from the computer – this time for real ;)

I thought I should write a final blog post summarizing what we’ve been working on and what kept us busy over the last 12 months.



PolicyServer is our commercial “authorization for modern applications” product that we announced at NDC in January this year. As you can imagine, this announcement required a lot of preparation, and many things were very “last minute” – so much so that the URL to the product was wrong on the slides when we made the announcement (because we moved things around “last second”). Embarrassing.

But anyways – the feedback we got after the announcement was fantastic, and in the following weeks we got way more enquiries than we could handle. Ever since, this has been our primary focus, and we did a lot of customer work over the last 12 months to make sure that PolicyServer really meets real-world needs.

We have many plans for upcoming versions, and I must say it was a refreshing change to do some real product work as opposed to short- to mid-term consulting and contracts. I also became “Mr. DevOps” on our team (at least that’s what Brock likes to call me). Anyways – I figured out that automating “stuff” is actually fun.



This was also a big year for IdentityServer. We did tons of customer work around identity & access control, and it was very satisfying to see how good IdentityServer is at solving other people’s problems. While you can obviously always find things you would do differently today or would like to improve, the general design of IdentityServer has proven to be the right one. Some more facts:

  • we shipped three feature versions this year (2.1, 2.2 and 2.3) – and a couple of bug fix releases
  • 2.3 was a really big release and most notably – for the very first time – a complete spec implementation was done as a contribution (meaning not from Brock or me). This spec is the so-called “device flow”, which allows devices without a browser or with constrained input capabilities to connect to APIs using OAuth. Thanks Scott Brady!
  • As of today, IdentityServer has 153 contributors (thank you!), 3653 stars on Github and over 2.3 million downloads on Nuget.

We also started a Patreon page to allow companies to support IdentityServer, which in turn allows us to set more time aside from paid work.

As of today, we have 49 patrons – thank you all!! It is a bit surprising that most supporters are individual developers who use IdentityServer at work. I wish more companies realized that it is important to back the OSS projects they rely on – let’s see what 2019 will bring.

Last but not least, the big news is that the ASP.NET team decided to ship IdentityServer in their new templates, which will be released shortly after v2.2. The integration comes with a simplified configuration system to target the specific template scenarios, but allows you to change over to the native configuration at any time. I had to check my email archives – this concludes a discussion we started with the ASP.NET team in 2012 (!)…

Btw – in case you wondered why we decided to strong-name IdentityServer (and IdentityModel) – that’s ultimately the reason. It is required when you want to be part of an ASP.NET release. As part of that work, we also now Authenticode-sign our binaries and sign the Nuget packages.



The IdentityModel organization on Github is the home for our client libraries. The most popular one is IdentityModel itself, with over 9.3 million downloads on Nuget.

IdentityModel has recently joined the .NET Foundation and has undergone the same treatment as IdentityServer (strong naming, Authenticode- and Nuget-signing). We also have proper docs now. I am currently working on a v4 which will have some breaking changes, but is a necessary clean-up for going forward.

Based on IdentityModel, there’s also OidcClient (a certified OpenID Connect client library for native clients) and AspNetCore.OAuth2Introspection (OAuth 2 introspection authentication handler for ASP.NET Core). Both get minor updates right now, and I am planning to release them all together beginning next year.

Brock is currently working on his JavaScript library, oidc-client.js, to incorporate some of the latest security recommendations from the IETF. More on that in a separate post.

OK – that’s it. That’s pretty much how I split my work time. Of course there is also consulting and training and conferences – and it doesn’t really look like 2019 will be much quieter – and that’s a good thing!

See you next year!

Posted in .NET Security, ASP.NET Core, IdentityModel, IdentityServer, PolicyServer, Uncategorized | Leave a comment

Beware the combined authorize filter mechanics in ASP.NET Core 2.1


In ASP.NET Core 2.1 one of the security changes was related to how authorization filters work. In essence, the filters are now combined, whereas previously they were not. This change in behavior is controlled via AllowCombiningAuthorizeFilters on the MvcOptions, and also set with the new SetCompatibilityVersion API that you frequently see in the new templates.

Prior to 2.1, each authorization filter would run independently, and all the authorization filters would need to succeed to allow the user access to the action method. For example:

[Authorize(Roles = "role1", AuthenticationSchemes = "Cookie1")]
public class SecureController : Controller
{
    [Authorize(Roles = "role2", AuthenticationSchemes = "Cookie2")]
    public IActionResult Index()
    {
        return View();
    }
}

The above code would trigger the first authorization filter and run “Cookie1” authentication, set the HttpContext’s User property with the resultant ClaimsPrincipal, and then check the claims for a role called “role1”…


Posted in Uncategorized | Leave a comment



In 2014 I developed and released the first version of IdentityManager. The intent was to provide a simple, self-contained administrative tool for managing users in your ASP.NET Identity or MembershipReboot identity databases. It targeted the Katana framework, and it served its purpose.

But now that we’re in the era of ASP.NET Core, and ASP.NET Identity 3 has come to supplant the prior identity frameworks, it’s time to update IdentityManager for ASP.NET Core. Unfortunately I’ve been so busy with other projects that I have not had time. Luckily Scott Brady, of Rock Solid Knowledge, has had the time! They have taken on stewardship of this project so it can continue to live on.

I’m happy to see they have released the first version of IdentityManager2. Here’s a post on getting started. Congrats!


Posted in Uncategorized | Leave a comment

Making the IdentityModel Client Libraries HttpClientFactory friendly

IdentityModel has a number of protocol client libraries, e.g. for requesting, refreshing, revoking and introspecting OAuth 2 tokens as well as a client and cache for the OpenID Connect discovery endpoint.

While they work fine, the style around libraries that use HTTP has changed a bit recently, e.g.:

  • the lifetime of the HttpClient is currently managed internally (including IDisposable). In the light of modern APIs like HttpClientFactory, this is an anti-pattern.
  • the main extensibility point is HttpMessageHandler – again the HttpClientFactory promotes a more composable way via DelegatingHandler.

While I could just add more constructor overloads that take an HttpClient, I decided to explore another route (all credit for this idea goes to @randompunter).

I reworked all the clients to be simple extension methods for HttpClient. This allows you to new up your own client or get one from a factory, which gives you complete control over the lifetime and configuration of the client, including handlers, default headers, base address, proxy settings etc. – e.g.:

public async Task<string> NoFactory()
{
    var client = new HttpClient();

    var response = await client.RequestClientCredentialsTokenAsync(new ClientCredentialsTokenRequest
    {
        Address = "",
        ClientId = "client",
        ClientSecret = "secret"
    });

    return response.AccessToken ?? response.Error;
}

If you want to throw in the client factory, you can register the client like this:

services.AddHttpClient();
..and use it like this:

public async Task<string> Simple()
{
    var client = HttpClientFactory.CreateClient();

    var response = await client.RequestClientCredentialsTokenAsync(new ClientCredentialsTokenRequest
    {
        Address = "",
        ClientId = "client",
        ClientSecret = "secret"
    });

    return response.AccessToken ?? response.Error;
}

HttpClientFactory also supports named clients, which allows configuring certain things upfront, e.g. the base address:

services.AddHttpClient("token_client",
    client => client.BaseAddress = new Uri(""));

Which means you don’t need to supply the address per request:

public async Task<string> WithAddress()
{
    var client = HttpClientFactory.CreateClient("token_client");

    var response = await client.RequestClientCredentialsTokenAsync(new ClientCredentialsTokenRequest
    {
        ClientId = "client",
        ClientSecret = "secret"
    });

    return response.AccessToken ?? response.Error;
}

You can also go one step further by creating a typed client, which exactly models the type of OAuth 2 requests you need to make in your application. You can mix that with the ASP.NET Core configuration model as well:

public class TokenClient
{
    public TokenClient(HttpClient client, IOptions<TokenClientOptions> options)
    {
        Client = client;
        Options = options.Value;
    }

    public HttpClient Client { get; }
    public TokenClientOptions Options { get; }

    public async Task<string> GetToken()
    {
        var response = await Client.RequestClientCredentialsTokenAsync(new ClientCredentialsTokenRequest
        {
            Address = Options.Address,
            ClientId = Options.ClientId,
            ClientSecret = Options.ClientSecret
        });

        return response.AccessToken ?? response.Error;
    }
}

..and register it like this:

services.Configure<TokenClientOptions>(options =>
{
    options.Address = "";
    options.ClientId = "client";
    options.ClientSecret = "secret";
});

…and use it e.g. like this:

public async Task<string> Typed([FromServices] TokenClient tokenClient)
{
    return await tokenClient.GetToken();
}

And one of my favourite features is the nice integration of the Polly library (and handlers in general) to give you extra features like retry logic:

services.AddHttpClient<TokenClient>()
    .AddTransientHttpErrorPolicy(builder => builder.WaitAndRetryAsync(new[]
    {
        // retry delays – adjust to your needs
        TimeSpan.FromSeconds(1),
        TimeSpan.FromSeconds(2),
        TimeSpan.FromSeconds(3)
    }));

This is work in progress right now, but it feels like this is a better abstraction level than the current client implementations. I am planning to release that soon – if you have any feedback, please leave a comment here or open an issue on github. Thanks!

Posted in ASP.NET Core, IdentityModel, Uncategorized, WebAPI | 6 Comments

Mixing UI and API Endpoints in ASP.NET Core 2.1 (aka Dynamic Scheme Selection)

Some people like to co-locate UI and API endpoints in the same application. I generally prefer to keep them separate, but I acknowledge that certain architecture styles make this a conscious decision.

Server-side UIs typically use cookies for authentication (or a combination of cookies and OpenID Connect) and APIs should use access tokens – and you want to make sure that you are not accepting cookies in the API by accident.

Since authentication of incoming calls in ASP.NET Core is abstracted by so-called authentication handlers, and you can register as many of them as you want, you can support both authentication scenarios. That’s by design.

One way you could implement that is by explicitly decorating every controller with an [Authorize] attribute and specifying the name of the authentication scheme you want to use. That’s a bit tedious and also error prone. I prefer to have a global authorization policy that denies anonymous access, and then opt out of it with [AllowAnonymous] where needed.
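Such a global policy could look roughly like this – a sketch of the usual pattern, not the sample’s exact code:

```csharp
// Deny anonymous access globally; individual endpoints opt out via [AllowAnonymous].
services.AddMvc(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();

    options.Filters.Add(new AuthorizeFilter(policy));
});
```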

This did not work prior to ASP.NET Core 2.1, because global policies rely on the default scheme configuration – but since we have two different schemes, there is no default. The effect would be, e.g., that you would get a redirect to a login page for an anonymous API call where you would have expected a 401.

In ASP.NET Core 2.1 there is a new feature that lets you dynamically select the authentication scheme based on the incoming HTTP request. Let’s say all your API endpoints are below /api – you could define a rule that requests to that path use JWT tokens, and all others use OpenID Connect with cookies. You do that by adding a forward selector to the authentication handler like this:

options.ForwardDefaultSelector = ctx =>
{
    if (ctx.Request.Path.StartsWithSegments("/api"))
    {
        return "jwt";
    }

    return "cookies";
};

For a full sample – see here.

Posted in ASP.NET Core, OpenID Connect, Uncategorized, WebAPI | 14 Comments

Improvements in Claim Mapping in the ASP.NET Core 2.1 OpenID Connect Handler

Here I described the various layers of claim mappings going on when doing OpenID Connect with ASP.NET Core.

Based on our feedback, the ASP.NET team added another mapping option to reduce the amount of “magic” going on, and thus makes it less confusing to get the expected claims in your client applications.

The new mapping is called MapAllExcept, which does exactly what you think it does – it maps all the claims except the ones you don’t care about, e.g.:

options.ClaimActions.MapAllExcept("iss", "nbf", "exp", "aud", "nonce", "iat", "c_hash");
This strips the protocol claims that you are not interested in, and all other claims get mapped forward (sample here). You still have to opt-out from the mapping to Microsoft proprietary claims – but well, we’ll get there eventually…

Posted in ASP.NET Core, OpenID Connect, Uncategorized | 2 Comments

The State of HttpClient and .NET Multi-Targeting

IdentityModel is a library that uses HttpClient internally – it should also run on all recent versions of the .NET Framework and .NET Core.

HttpClient is sometimes “built-in”, e.g. in the .NET Framework, and sometimes not, e.g. in .NET Core 1.x. So fundamentally there is a “GAC version” and a “Nuget version” of the same type.

We had lots of issues with this, because it seemed that regardless of which combination of HttpClient flavours you were using, it would lead to a problem one way or another (github issues). Additional confusion was added by the fact that the .NET tooling had certain bugs in the past that needed workarounds, which in turn led to other problems when those bugs were fixed in later tooling.

Long story short – every time I had to change the csproj file, it broke someone. The latest issue was related to Powershell and .NET 4.7.x (see here).

I once and for all wanted an official statement, how to deal with HttpClient – so I reached out to Immo (@terrajobst) over various channels. Turns out I was not alone with this problem.


Despite him being on holidays during that time, he gave a really elaborate answer that contains both excellent background information and guidance.

I thought I should copy it here, so it becomes more search engine friendly and hopefully helps out other people that are in the same situation (original thread here).

“Alright, let me try to answer your question. It will probably have more detail than you need/asked for, but it might be helpful to start with intention/goals and then the status quo. HttpClient started out as a NuGet package (out-of-band) and was added to the .NET Framework in 4.5 as well (in-box).

With .NET Core/.NET Standard we originally tried to model the .NET platform as a set of packages where being in-box vs. out-of-band no longer mattered. However, this was messier and more complicated than we anticipated.

As a result, we largely abandoned the idea of modeling the .NET platform as a NuGet graph with Core/Standard 2.0.

With .NET Core 2.0 and .NET Standard 2.0 you shouldn’t need to reference the System.Net.Http NuGet package at all. It might get pulled in from 1.x dependencies though.

Same goes for .NET Framework: if you target 4.5 and up, you should generally use the in-box version instead of the NuGet package. Again, you might end up pulling it in for .NET Standard 1.x and PCL dependencies, but code written directly against .NET Framework shouldn’t use it.

So why does the package still exist/why do we still update it? Simply because we want to make existing code work that took a dependency on it. However, as you discovered that isn’t smooth sailing on .NET Framework.

The intended model for the legacy package is: if you consume the package from .NET Framework 4.5+, .NET Core 2+, or .NET Standard 2+, the package only forwards to the platform-provided implementation as opposed to bringing its own version.

That’s not what actually happens in all cases though: the HTTP Client package will (partially) replace in-box components on .NET Framework, which happens to work for some customers and fails for others. Thus, we cannot easily fix the issue now.

On top of that we have the usual binding issues with the .NET Framework so this only really works well if you add binding redirects. Yay!

So, as a library author my recommendation is to avoid taking a dependency on this package and prefer the in-box versions in .NET Framework 4.5, .NET Core 2.0 and .NET Standard 2.0.

Thanks Immo!

Posted in IdentityModel, Uncategorized, WebAPI | 1 Comment

NDC London 2018 Artefacts

“IdentityServer v2 on ASP.NET Core v2: An update” video

“Authorization is hard! (aka the PolicyServer announcement)” video

DotNetRocks interview audio


Posted in ASP.NET Core, IdentityServer, PolicyServer, Uncategorized, WebAPI | Leave a comment