Full-Scratch Implementor of OAuth and OpenID Connect Talks About Findings

Takahiko Kawasaki
Oct 16, 2016


1. Introduction

In this post, a developer who has implemented an OAuth 2.0 and OpenID Connect server from scratch (me) talks about his findings. It is essentially a discursive collection of points to consider during implementation, so it is not a document for those who are looking for information like “How to set up an OAuth 2.0 and OpenID Connect server promptly”. If that is what you need, please visit java-oauth-server and java-resource-server on GitHub. Using these, you can start an authorization server and a resource server, issue an access token, and call a Web API with that access token, all within 10 minutes and with no need to set up a DB server.

Bias

I’m a co-founder of Authlete, Inc., a company providing implementations of OAuth 2.0 and OpenID Connect in the cloud, so this document may be affected by that biased standpoint; please keep it in mind as you read. That said, I basically intend to write this post from the viewpoint of a pure engineer.

2. Is OAuth Necessary?

“We want to do this and that on our corporate website. Should we implement OAuth?” I am often asked questions like this. In essence, such a question is asking what OAuth is.

A one-sentence answer I often use to explain OAuth is as follows.

OAuth 2.0 is a framework where a user of a service can allow a third-party application to access his/her data hosted in the service without revealing his/her credentials to the application.

The important point is “not revealing credentials to a third-party application”. OAuth exists for this purpose. Once you understand this, you can judge whether you need to prepare an OAuth server for your company’s service by checking whether the following conditions are satisfied.

  1. Your service manages users’ data.
  2. You want third parties to develop applications for users of your service.
  3. You don’t want to reveal users’ credentials to applications developed by third parties.

Even if the conditions above are not satisfied and the only applications for your company’s service are self-made ones, it is still recommended to implement an OAuth server if there is a possibility that you may want third parties to develop applications in the future, and/or if you want to follow the best practices of Web API development.

Still, confusion may remain in one case: when you want to enable users to log in to your website using their accounts at external services such as Facebook and Twitter. Because the term “OAuth authentication” is often used in this context, you may think you have to implement OAuth for your service. However, in this case, your service is a client which uses the OAuth implemented by the external services, so your service itself does not have to implement an OAuth server. To be exact, your service has to write code to use other companies’ OAuth; in other words, from the viewpoint of the external services, your service has to behave as an OAuth client. But it does not have to behave as an OAuth server. That is, you don’t have to implement an OAuth server.

3. Authentication and Authorization

Let me explain the term that confuses people: “OAuth authentication”.

Every explanation says “OAuth is a specification for authorization and not for authentication.” It is because RFC 6749 (The OAuth 2.0 Authorization Framework) explicitly states that authentication is “beyond the scope of this specification.” The following paragraph is an excerpt from 3.1. Authorization Endpoint in RFC 6749.

The authorization endpoint is used to interact with the resource owner and obtain an authorization grant. The authorization server MUST first verify the identity of the resource owner. The way in which the authorization server authenticates the resource owner (e.g., username and password login, session cookies) is beyond the scope of this specification.

Nevertheless, the term “OAuth authentication” is everywhere and keeps confusing people. This confusion is observed not only among business-side people but also among engineers. For example, questions like “OAuth Authorization vs Authentication” are sometimes posted to Stack Overflow (my answer to that question is this).

The information handled by the terms authentication and authorization (in the context of OAuth) can be described as shown below.

  • Authentication — Who one is.
  • Authorization — Who grants what permissions to whom.

Authentication is a simple concept; in other words, it is confirmation of identity. The most prevalent way to identify a person at a website is to request the person to present a pair of ID and password, but there are other ways, such as biometric authentication using a fingerprint or iris, one-time passwords, random number tables and so on. In any case, whatever way is used, authentication is a process to identify who one is. In developer terms, it can be expressed as “Authentication is a process to identify the unique identifier of a user.”

On the other hand, authorization is complicated because three elements, namely “who”, “what permissions” and “to whom”, are involved. In addition, what makes it confusing is that, among the three elements, the process to identify “who” is authentication. In other words, the fact that the authorization process includes an authentication process as a part of it is what makes things confusing.

If the three elements are replaced with words used by developers, “who” becomes “user” and “to whom” becomes “client application”. As a result, authorization in the context of OAuth can be described as the process where a user grants permissions to a client application.

The figure below depicts the concept explained so far.

[Figure: an authorization page, i.e. a page where a user grants permissions to a client application, with the parts used for authentication and for authorization marked separately]

This figure illustrates which parts of an authorization page are for authentication and which are for authorization. The difference between authentication and authorization is clear.

Now, it’s time to talk about “OAuth authentication”.

Because the authorization process includes an authentication process as a part of it, being authorized implies being authenticated. So, some people began to use OAuth for authentication. This is “OAuth authentication”, and it has spread rapidly thanks to merits such as the ability to delegate the task of managing user credentials to external services, and a lower hurdle for new users to start using the service because the user registration process can be omitted.

OpenID guys held a grudge against this situation. (Sorry, I don’t know whether they actually felt that way, but at least I can imagine that they felt OAuth authentication was far below the level of the specifications they had defined by then, such as OpenID 2.0 and SAML.) However, it was an undeniable fact that their specifications had not prevailed very much and that developers around the world had chosen the easiness of OAuth authentication. Therefore, they defined a new specification for authentication, OpenID Connect, on top of OAuth. The OpenID Connect FAQ depicts the relationship as an equation like below.

(Identity, Authentication) + OAuth 2.0 = OpenID Connect

Thanks to this, authentication by OpenID Connect can be executed at the same time as the authorization process by OAuth.

As major players in the industry have been working on the specifications and proactively implementing them (FAQ), OpenID Connect will surely prevail. As a result, OAuth authentication libraries such as OmniAuth will gradually finish their roles.

However, people will surely be confused even more, because OpenID Connect, which is for authentication, has been built on top of OAuth, which is for authorization. It is especially difficult to explain in my case, because Authlete supports OpenID Connect but focuses on authorization and does not do anything for authentication. I always have to explain the difference between authentication and authorization before I can start explaining the product itself to customers.

Regarding the problem with OAuth authentication, please read the article The problem with OAuth for Authentication by Mr. John Bradley. In it he says, “This is a security hole that you can drive a house through.”

“Say OAuth is an Authentication standard again.” by Mr. Nat Sakimura and Mr. John Bradley. (from https://twitter.com/ve7jtb/status/740650395735871488)

4. Relationship between OAuth 2.0 and OpenID Connect

All the content so far is, however, just a preamble to this post. Technical content for developers starts here. The first topic is the relationship between OAuth 2.0 and OpenID Connect.

It was after I had finished implementing RFC 6749 (The OAuth 2.0 Authorization Framework) that I noticed the existence of OpenID Connect. As I gathered information about OpenID Connect, I thought I should implement it too, and so I read OpenID Connect Core 1.0 and other related specifications. After reading them, the conclusion I reached was “everything should be rewritten from scratch.”

The OpenID Connect website says “OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol.” and this gives an impression that OpenID Connect can be implemented easily and seamlessly on top of an existing OAuth 2.0 implementation. However, the truth is utterly different. IMHO, OpenID Connect is virtually OAuth 3.0.

There are many specifications related to OpenID Connect, and they are so puzzling that they are hard to decipher. I almost went crazy and had to read them three times before I could grasp the entire picture. Compared to the OpenID Connect specifications, RFC 6749 can be called easy.

5. Response Type

In particular, what conflicts with an existing implementation is the way the response_type request parameter must be processed. It is true that RFC 6749 states the parameter may take multiple values, but that reads as a mere possibility for the future. If we read RFC 6749 straightforwardly, response_type is either code or token. It is almost impossible to imagine the two being set at the same time, because the parameter is used to determine the flow that processes a request from a client application. To be concrete, the authorization code flow is used when the value of response_type is code, and the implicit flow is used when the value is token. Who could imagine these flows being mixed? Even if one could, how should conflicts between the flows be resolved? For example, the authorization code flow requires that response parameters be embedded in the query part of the redirect URI (4.1.2. Authorization Response), while the implicit flow requires that response parameters be embedded in the fragment part (4.2.2. Access Token Response), and these requirements cannot both be satisfied simultaneously.

However, OpenID Connect has added id_token as a new value for response_type and explicitly allows any combination of code, token and id_token as the value of response_type. Furthermore, none has been added, too. Details are described in “3. Authentication” of OpenID Connect Core 1.0 and in OAuth 2.0 Multiple Response Type Encoding Practices.

It requires major changes to modify existing code written on the assumption of an either-or choice so that it can handle any combination of the possible values and the resulting mixed flows. Therefore, implementors of an OAuth library should write it with OpenID Connect in mind from the beginning if there is any possibility of supporting OpenID Connect in the future. To put it another way, existing OAuth libraries cannot support OpenID Connect without major modifications.
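
To give a feel for the change, below is a minimal sketch, not Authlete’s actual code, of parsing response_type as a set of values rather than an either-or choice. The type names are made up for illustration; the response mode rule follows OAuth 2.0 Multiple Response Type Encoding Practices.

import java.util.EnumSet;
import java.util.Set;

// Hypothetical types, introduced here only for illustration.
enum ResponseTypeValue
{
    CODE, TOKEN, ID_TOKEN, NONE
}

class ResponseTypeParser
{
    // Parse the space-separated 'response_type' parameter
    // (e.g. "code id_token") into a set of values.
    static Set<ResponseTypeValue> parse(String responseType)
    {
        Set<ResponseTypeValue> values =
            EnumSet.noneOf(ResponseTypeValue.class);

        for (String value : responseType.trim().split("\\s+"))
        {
            values.add(ResponseTypeValue.valueOf(value.toUpperCase()));
        }

        return values;
    }

    // Per OAuth 2.0 Multiple Response Type Encoding Practices,
    // the default response mode is 'fragment' whenever the set
    // contains 'token' and/or 'id_token'; it is 'query' for
    // 'code' alone and for 'none'.
    static boolean requiresFragment(Set<ResponseTypeValue> values)
    {
        return values.contains(ResponseTypeValue.TOKEN)
            || values.contains(ResponseTypeValue.ID_TOKEN);
    }
}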

Take Spring Security OAuth, for example. This library has not supported OpenID Connect yet (as of June 2016). For the library to support OpenID Connect, to begin with, the response_type request parameter has to be able to take values other than code and token. A request for this is listed as Issue #619 Handling additional response_types, but it has not been resolved yet, and the last comment in the thread is “Any comments are more than welcome as this turned out to be (as I predicted) a large refactor exercise.” I read some relevant source files and found it true that supporting OpenID Connect would require big changes. Unless some companies back the project financially, I’m afraid it will take a long time for the project to support OpenID Connect.

In passing, I’d like to mention Apache Oltu, too. The project claims that it supports OpenID Connect, but my guess is that the initial implementation supported OAuth 2.0 only and OpenID Connect support was added in a later phase. The reason I think so is that the package for OAuth 2.0 (org.apache.oltu.oauth2) and that for OpenID Connect (org.apache.oltu.openidconnect) are isolated. This kind of approach distorts the architecture. For example, it is inappropriate that the OpenIdConnectResponse class is a descendant of OAuthAccessTokenResponse, because a response containing an ID token does not necessarily contain an access token. Other examples are the existence of a class named OAuthClientRequest.AuthenticationRequestBuilder (not “Authorization” but “Authentication” for some reason) and the existence of a GitHub-specific class, GitHubTokenResponse. The architecture of Apache Oltu raises questions, at least for me. I don’t know the details of the project, but in my personal opinion, it is destined to shrink.

6. Metadata of Client Application

As written explicitly in 2. Client Registration of RFC 6749, a client application has to be registered with the target authorization server before it makes an authorization request. Therefore, in a typical case, an implementor of an authorization server defines a database table to store information about client applications.

To decide what columns the table should have, an implementor lists the required items by reading the specification. For example, reading RFC 6749 will make you realize that at least the following items are necessary.

  1. Client ID
  2. Client Secret
  3. Client Type
  4. Redirect URIs

In addition to these, an implementor may add more attributes. For instance, “application name”.

There are not so many attributes of a client application, even if you rifle through RFC 6749, so the number of columns in the database table storing them won’t become big. Such good old days have ended with the emergence of OpenID Connect. The many attributes that a client application should have are listed in 2. Client Metadata of OpenID Connect Dynamic Client Registration 1.0. Below is the list.

  1. redirect_uris — Redirect URI values used by the Client.
  2. response_types — response_type values that the Client is declaring that it will restrict itself to using.
  3. grant_types — Grant Types that the Client is declaring that it will restrict itself to using.
  4. application_type — Kind of the application.
  5. contacts — e-mail addresses of people responsible for this Client.
  6. client_name — Name of the Client to be presented to the End-User.
  7. logo_uri — URL that references a logo for the Client application.
  8. client_uri — URL of the home page of the Client.
  9. policy_uri — URL that the Relying Party Client provides to the End-User to read about how the profile data will be used.
  10. tos_uri — URL that the Relying Party Client provides to the End-User to read about the Relying Party’s terms of service.
  11. jwks_uri — URL for the Client’s JSON Web Key Set document.
  12. jwks — Client’s JSON Web Key Set document, passed by value.
  13. sector_identifier_uri — URL using the https scheme to be used in calculating Pseudonymous Identifiers by the OP.
  14. subject_type — subject_type requested for responses to this Client.
  15. id_token_signed_response_alg — JWS alg algorithm required for signing the ID Token issued to this Client.
  16. id_token_encrypted_response_alg — JWE alg algorithm required for encrypting the ID Token issued to this Client.
  17. id_token_encrypted_response_enc — JWE enc algorithm required for encrypting the ID Token issued to this Client.
  18. userinfo_signed_response_alg — JWS alg algorithm required for signing UserInfo Responses.
  19. userinfo_encrypted_response_alg — JWE alg algorithm required for encrypting UserInfo Responses.
  20. userinfo_encrypted_response_enc — JWE enc algorithm required for encrypting UserInfo Responses.
  21. request_object_signing_alg — JWS alg algorithm that must be used for signing Request Objects sent to the OP.
  22. request_object_encryption_alg — JWE alg algorithm that RP is declaring that it may use for encrypting Request Object sent to the OP.
  23. request_object_encryption_enc — JWE enc algorithm the RP is declaring that it may use for encrypting Request Objects sent to the OP.
  24. token_endpoint_auth_method — Requested Client Authentication method for the Token Endpoint.
  25. token_endpoint_auth_signing_alg — JWS alg algorithm that must be used for signing the JWT used to authenticate the Client at the Token Endpoint for the private_key_jwt and client_secret_jwt authentication methods.
  26. default_max_age — Default Maximum Authentication Age.
  27. require_auth_time — Boolean value specifying whether the auth_time Claim in the ID Token is required.
  28. default_acr_values — Default requested Authentication Context Class Reference values.
  29. initiate_login_uri — URI using the https scheme that a third party can use to initiate a login by the RP.
  30. request_uris — request_uri values that are pre-registered by the RP for use at the OP.

Therefore, a database table for client applications should be able to store these pieces of information. In addition, it should be noted that some attributes (such as client_name, tos_uri, policy_uri, logo_uri and client_uri) are allowed to be localized (2.1. Metadata Languages and Scripts). Additional consideration for database table design will be required to store localized attribute values.

Subsections hereafter are my personal opinions about client application attributes.

6.1. Client Type

I’m afraid it is something of a mistake in the specification that 2. Client Metadata of OpenID Connect Dynamic Client Registration 1.0 does not contain “client type”. The reason I think so is that the difference between the two client types, “confidential” and “public” (which are defined in 2.1. Client Types of RFC 6749), must be taken into consideration when implementing an authorization server. As a matter of fact, “client type” is listed as an example of client properties to be registered in 2. Client Registration of RFC 6749, as follows.

…registration can rely on other means for establishing trust and obtaining the required client properties (e.g., redirection URI, client type).

If this is not a mistake, there must be a consensus on the client type of client applications that are registered via Dynamic Client Registration. However, I could not find such information in the relevant specifications.

In any case, I think that a column for client type should exist when defining a database table for client applications.

You can find some discussion about this in Issue 991.

6.2. Application Type

According to the specification, application_type is an optional attribute. Pre-defined values for application_type are native and web. If omitted, web is used as the default value.

If the default value is used when omitted, the natural consequence is that the application type of a client application is always either native or web. So, you may feel like adding NOT NULL to the column for application_type. However, the implementation of Authlete deliberately refrains from adding NOT NULL and allows NULL.

The reason is that I’m not sure that the restrictions on redirect URI values imposed by application_type, which are defined in OpenID Connect Dynamic Client Registration 1.0 as follows, should be applied to every OAuth 2.0 client.

Web Clients using the OAuth Implicit Grant Type MUST only register URLs using the https scheme as redirect_uris; they MUST NOT use localhost as the hostname. Native Clients MUST only register redirect_uris using custom URI schemes or URLs using the http: scheme with localhost as the hostname.

Two years ago, I posted the question “Does Application Type (OpenID Connect) correspond to Client Type (OAuth 2.0)?” to Stack Overflow, but I could not get any answer. So I investigated and answered it myself. Please see it if you are interested.

6.3. Client Secret

How long should a client secret be?

For example, OpenAM Administration Guide uses password as an example of a client secret value. Below is a screenshot of 12.4.1. Configuring OpenAM as Authorization Server & Client.

[Screenshot: the OpenAM client configuration page, with “password” entered as the client secret]

It seems OpenAM allows users to use a short string as a client secret.

On the other hand, in the implementation of Authlete, client secrets are generated automatically and are long, like the following.

GBAyfVL7YWtP6gudLIjbRZV_N0dW4f3xETiIxqtokEAZ6FAsBtgyIq0MpU1uQ7J08xOTO2zwP0OuO3pMVAUTid

The reason for this length is that I wanted to support 512-bit symmetric signature and encryption algorithms. For instance, I wanted to support HS512 as a signature algorithm for JWS. Because a client secret must have entropy of 512 bits or more to support HS512, the length of the example above is 86, which is the result of encoding 512-bit data using base64url.

Regarding entropy for symmetric signature and encryption algorithms, 16.19 Symmetric Key Entropy in OpenID Connect Core 1.0 states as follows.

In Section 10.1 and Section 10.2, keys are derived from the client_secret value. Thus, when used with symmetric signing or encryption operations, client_secret values MUST contain sufficient entropy to generate cryptographically strong keys. Also, client_secret values MUST also contain at least the minimum of number of octets required for MAC keys for the particular algorithm used. So for instance, for HS256, the client_secret value MUST contain at least 32 octets (and almost certainly SHOULD contain more, since client_secret values are likely to use a restricted alphabet).

And, 3.1. “alg” (Algorithm) Header Parameter Values for JWS in RFC 7518 (JSON Web Algorithms) states that HS256 (HMAC using SHA-256) must be supported as a signature algorithm for JWS. As a logical consequence, any implementation claiming compliance with OpenID Connect must generate client secrets with entropy of 256 bits or more.
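
For reference, here is a minimal sketch of how such a value can be generated with the standard Java library. 64 random octets (512 bits) encoded with base64url without padding yield an 86-character string like the one above.

import java.security.SecureRandom;
import java.util.Base64;

public class ClientSecretGenerator
{
    public static String generate()
    {
        // 64 octets = 512 bits of entropy, enough for HS512.
        byte[] secret = new byte[64];
        new SecureRandom().nextBytes(secret);

        // base64url without padding turns 64 octets into
        // an 86-character string.
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(secret);
    }
}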

6.4. Signature Algorithm

id_token_signed_response_alg is listed in “2. Client Metadata” of OpenID Connect Dynamic Client Registration 1.0. It denotes the algorithm that a client application requires the authorization server to use as a signature algorithm for ID tokens. Valid values are listed in RFC 7518 as mentioned above, and it should be noted that none is not allowed. If the value of id_token_signed_response_alg is omitted on registration, RS256 is used.

userinfo_signed_response_alg is also a signature algorithm that a client application requires the authorization server to use. This algorithm is used to sign the information returned from the UserInfo endpoint (OpenID Connect Core 1.0, 5.3. UserInfo Endpoint). Here, none is allowed, and in that case, the endpoint returns an Unsecured JWS (a JWS without a signature). In addition, userinfo_signed_response_alg can remain unspecified, and in that case, the endpoint returns information in plain JSON format.

Based on the information above, it is okay to add NOT NULL to the column for id_token_signed_response_alg, but it is better not to add NOT NULL to the column for userinfo_signed_response_alg, so that NULL can express the meaning “unspecified”. Of course, an implementor can instead define a special value which means “unspecified” and store that value instead of NULL; if this design is adopted, NOT NULL can be added to the column for userinfo_signed_response_alg. It’s up to implementors.

From among the existing JWT libraries, I chose Nimbus JOSE+JWT. However, while using it, I noticed that the library did not allow signing with the none algorithm. In other words, the library cannot generate an Unsecured JWS. With this restriction, userinfo_signed_response_alg=none cannot be supported. So, I sent a pull request to enable signing with none. However, it was rejected; the reason was that it is the library’s policy not to allow signing with none. According to the comment, “There is significant security risk of confusing developers that alg=none objects can be signed or verified in the same way as truly protected JWS objects”, and I was advised to treat the case of alg=none as a special one.

Well, there are various opinions, but mine is that it is not a good approach to save naive developers by restricting features. As a matter of fact, the Nimbus JOSE+JWT library cannot generate an Unsecured JWS, which is listed in RFC 7515 as an example, even though there was at least one developer (me) who understood what an Unsecured JWS is and still wanted to generate one. Therefore, I had to generate Unsecured JWS manually. (But, to be fair, I think Nimbus JOSE+JWT is a good library because it supports all the algorithms listed in RFC 7518.)
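
If you hit the same restriction, generating an Unsecured JWS by hand is straightforward. Per RFC 7515, it is just the base64url-encoded header {"alg":"none"}, a period, the base64url-encoded payload, and a trailing period with an empty signature part. A minimal sketch:

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class UnsecuredJws
{
    public static String serialize(String payloadJson)
    {
        Base64.Encoder encoder =
            Base64.getUrlEncoder().withoutPadding();

        // The header declaring the 'none' algorithm.
        String header = encoder.encodeToString(
            "{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));

        // The payload, e.g. the UserInfo response in JSON.
        String payload = encoder.encodeToString(
            payloadJson.getBytes(StandardCharsets.UTF_8));

        // An Unsecured JWS has an empty signature part.
        return header + "." + payload + ".";
    }
}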

This is off topic, but a certain issue was once created for nv-websocket-client (info in Japanese), a WebSocket client library for Java that I publish on GitHub. The issue proposed a feature improvement: the library should warn when a developer calls both the setSSLContext() method and the setSSLSocketFactory() method. The proposal was made because the reporter had been troubled by unintended behavior after calling both methods improperly. My answer was that the JavaDoc explicitly states which setting takes precedence when both methods are called, and that such a meddling check would make the WebSocketFactory class awkward to use. The reaction was, “It was my fault for not reading the documentation in detail before calling both methods. But how many other developers do you think will read the documentation in detail before making the same mistake?”

Oh, it’s just well-deserved punishment if a developer wastes time on self-made bugs because he/she has not read the documentation…

Attempts to help those who don’t read documents would be endless. Even if a library prevents signing with alg=none, such engineers would not hesitate to include private keys in a JWK Set published through the JWK Set endpoint of an authorization server. Why? Do you think that those who don’t read documents would notice the existence of the toPublicJWKSet() method of the JWKSet class (in the Nimbus JOSE+JWT library) and understand its meaning? Probably, they would naively say, “Yes, I could create an instance of the JWKSet class. Let’s publish it! I’ve finished implementing the JWK Set endpoint!”

Engineers who don’t refer to primary sources such as RFCs cannot notice errors in the answers they find, and believe those answers without doubt.

To become a true engineer, don’t avoid reading RFCs. Only searching technical blogs and Stack Overflow for answers will never bring you to the right place.

6.5. Client Application Developer

Some open-source authorization servers provide a mechanism for dynamic registration of client applications, such as an HTML form (OpenAM by ForgeRock) or Web APIs (MITREid Connect by MITRE), but it seems that only administrators of the authorization servers can register client applications. An ideal approach would be to create something similar to Twitter’s Application Management console, let developers log in there, and provide an environment in which each developer can register and manage his/her own client applications. To achieve this, the database table for client applications should have a column which holds developers’ unique identifiers.

It is often forgotten, because implementing an authorization server is cumbersome in itself, but a mechanism to manage client applications also needs to be provided in order to open Web APIs to the public. If the intended users of your Web APIs are limited to closed groups, the administrator of your authorization server may be able to register a client application every time he/she is asked to. As a matter of fact, there is a company whose administrator types SQL statements manually for each registration request. However, if you want to open Web APIs to the general public, such operations won’t scale, and you will realize that you have to provide a decent management console for client applications. If you secure a budget for the development of an authorization server and Web APIs but forget to secure one for a management console for client applications, you will end up in the state of “has implemented Web APIs but cannot open them to the public”.

As an example of such a management console, Authlete provides its Developer Console for the use cases described above. Authlete itself does NOT manage developer accounts, but through a mechanism named “developer authentication callback”, developers whose accounts are managed by Authlete’s customers can use the developer console. Therefore, Authlete customers don’t have to develop a management console for client applications.

7. Access Token

7.1. Access Token Representation

How should an access token be represented? There are two major ways.

  1. As a meaningless random string. Information associated with an access token is stored in a database table behind an authorization server.
  2. As a self-contained string which is a result of encoding access token information by base64url or something similar.

The choice between these two styles leads to the differences described below.

If access tokens are random strings, you have to query the authorization server every time you need information about an access token. By contrast, if access tokens themselves contain the information, there is no need to ask the authorization server. This makes the self-contained style sound better, but because the authorization server still has to be asked whether an access token has been revoked, network communication is required every time an access token is presented by a client application, even if the self-contained style is adopted.

What is cumbersome about the self-contained style is that a record denoting “revoked” has to be added every time access token revocation is requested, and such a record must be kept until the access token expires. Otherwise, if the record were deleted, the revoked access token would be resurrected and become valid again (if its original expiration date had not been reached yet).

By contrast, in the case of the random-string style, access token revocation can be achieved simply by deleting the access token record itself. Therefore, there is no way for a revoked access token to be resurrected due to any accident. In addition, the negative effect “revocation increases records” observed in the self-contained style won’t happen.

To enable access token revocation, unique identifiers must be assigned to access tokens even in the self-contained style; otherwise, it is impossible to tell which access token has been revoked. To put it the other way around, an authorization server which adopts the self-contained style but does not assign unique identifiers to access tokens is an authorization server which cannot revoke access tokens. That could be one implementation policy, but such an authorization server should not issue long-lived access tokens and should not issue refresh tokens.

“An authorization server that cannot revoke access tokens?!”, you might wonder. But such an authorization server actually exists. A certain global big system integrator bought a company and was developing an authorization server using the product of the acquired company, but in a later phase, the system integrator and its customer noticed that the authorization server could not revoke access tokens. When I heard the story, I guessed that the authorization server issued self-contained access tokens without unique identifiers.

The self-contained style seems good because of merits such as “no need to query the authorization server to extract information from access tokens” and “no need to maintain access token records on the authorization server side”, but once you take access token revocation into account, there is room for discussion.
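
If you adopt the self-contained style anyway, assign each token a unique identifier. Below is a minimal sketch, not Authlete’s implementation, using the Nimbus JOSE+JWT library mentioned earlier; the jti claim is what makes revocation possible, and the subject claim and the one-day lifetime are arbitrary choices for illustration.

import java.util.Date;
import java.util.UUID;
import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.JWSHeader;
import com.nimbusds.jose.crypto.MACSigner;
import com.nimbusds.jwt.JWTClaimsSet;
import com.nimbusds.jwt.SignedJWT;

public class SelfContainedTokenIssuer
{
    public static String issue(byte[] secret, String subject)
        throws Exception
    {
        JWTClaimsSet claims = new JWTClaimsSet.Builder()
            // The 'jti' claim is the unique identifier that
            // makes revocation possible: store it on issue,
            // and check it against a revocation list every
            // time the token is presented.
            .jwtID(UUID.randomUUID().toString())
            .subject(subject)
            .expirationTime(new Date(
                System.currentTimeMillis() + 86400 * 1000L))
            .build();

        SignedJWT jwt = new SignedJWT(
            new JWSHeader(JWSAlgorithm.HS256), claims);
        jwt.sign(new MACSigner(secret));

        return jwt.serialize();
    }
}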

7.2. Access Token Deletion

To prevent a database from growing infinitely, expired access tokens should be deleted from the database periodically.

Client applications which request an authorization server to issue access tokens unnecessarily are troublemakers. Although they already have an access token which has not expired yet, they repeatedly discard such valid access tokens and request new ones. If this happens, access tokens which are not used but cannot be deleted (because they have not expired yet) accumulate in the database.

To prevent this situation, save the timestamp at which an access token was last used into the database, in addition to the timestamp at which the access token will expire, and periodically run a program which deletes access tokens that have been unused for a long time. Of course, whether it is acceptable to delete unused access tokens before they expire depends on the characteristics of the service.
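
As a sketch of such a periodic deletion job (the table and column names, the 30-day threshold, and the MySQL-flavored date arithmetic are all assumptions for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;

public class AccessTokenCleaner
{
    // Delete tokens that have expired, and tokens that have
    // not been used for 30 days even if they have not expired.
    private static final String SQL =
        "DELETE FROM access_token"
      + " WHERE expires_at < NOW()"
      + " OR last_used_at < NOW() - INTERVAL 30 DAY";

    public static void schedule(DataSource dataSource)
    {
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

        // Run once a day, starting one hour from now.
        scheduler.scheduleAtFixedRate(() -> {
            try (Connection con = dataSource.getConnection();
                 PreparedStatement stmt = con.prepareStatement(SQL))
            {
                stmt.executeUpdate();
            }
            catch (Exception e)
            {
                // Report the error and try again on the next run.
                e.printStackTrace();
            }
        }, 1, 24, TimeUnit.HOURS);
    }
}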

I once met an engineer who had worked on an OAuth implementation project at a certain big company. He told me that the system had been built without any consideration of access token deletion, so its database was probably holding hundreds of millions of access tokens by then. Scary, scary. When a system which generates something is developed, the timing to delete the generated things should be considered at the same time.

8. Redirect URI

8.1. Redirect URI Validation

In May 2014, a Ph.D. student in Singapore posted an article, and it made people buzz about a “vulnerability in OAuth”. The issue is the so-called Covert Redirect. Those who understand OAuth 2.0 correctly soon realized that it was not a vulnerability in the specification but merely a matter of improper implementations. However, the topic made so many people upset that experts in the OAuth area could not help writing explanatory documents. Covert Redirect and its real impact on OAuth and OpenID Connect by Mr. John Bradley is one of those documents.

If redirect URIs are not handled properly, security problems occur. How redirect URIs should be handled is described in the related specifications, but it is difficult to implement correctly because there are many things to care about; for example, (a) the requirements of RFC 6749 and those of OpenID Connect are different, and (b) the value of the application_type attribute of the client application must be taken into consideration.

How correctly the redirect URI handling is implemented depends on how carefully and exhaustively the implementor has perused the related specifications. Therefore, reading that part of the code gives a good indication of the implementation quality of the entire authorization server. So, everyone, make your best efforts to implement it!

…… I would feel sorry if I coldly deserted you like this after you have read my long post this far, so let me show you some of Authlete’s implementation know-how. The following is pseudo code to handle the redirect_uri parameter contained in an authorization request. Note that the pseudo code is deliberately not split into methods, for browsability; in the actual Authlete implementation, the code flow is finely split into methods, and, for performance reasons as well, differs from the pseudo code. (It would be a shame if the actual implementation contained as many nested ifs and fors as the pseudo code.)

// Extract the value of the 'redirect_uri' parameter from
// the authorization request.
redirectUri = ...

// Remember whether a redirect URI was explicitly given.
// It must be checked later in the implementation of the
// token endpoint because RFC 6749 states as follows.
//
//     redirect_uri
//         REQUIRED, if the "redirect_uri" parameter was
//         included in the authorization request as described
//         in Section 4.1.1, and their values MUST be identical.
//
explicit = (redirectUri != null);

// Extract registered redirect URIs from the database.
registeredRedirectUris = ...

// Whether the presented redirect URI is registered or not.
registered = false;

// Requirements by RFC 6749 (OAuth 2.0) and those by
// OpenID Connect are different. Therefore, the code flow
// branches according to whether the request is an OpenID
// Connect request or not. This is judged by whether the
// 'scope' request parameter contains 'openid' as a value.
if ( 'openid' is included in 'scope' )
{
    // Check requirements by OpenID Connect.

    // If the 'redirect_uri' parameter is not contained in the request.
    if ( redirectUri == null )
    {
        // The 'redirect_uri' parameter is mandatory in
        // OpenID Connect. It's optional in RFC 6749.
        throw new Exception(
            "The 'redirect_uri' parameter is missing.");
    }

    // For each registered redirect URI.
    for ( registeredRedirectUri : registeredRedirectUris )
    {
        // 'Simple String Comparison' is required by the
        // specification.
        if ( registeredRedirectUri.equals( redirectUri ) )
        {
            // OK. The redirect URI specified by the
            // authorization request is registered.
            registered = true;
            break;
        }
    }

    // If the redirect URI specified by the authorization
    // request matches none of the registered redirect URIs.
    if ( registered == false )
    {
        throw new Exception(
            "The redirect URI is not registered.");
    }
}
else
{
    // Check requirements by RFC 6749.

    // If redirect URIs are not registered at all.
    if ( registeredRedirectUris.size() == 0 )
    {
        // RFC 6749, 3.1.2.2. Registration Requirements says
        // as follows:
        //
        //     The authorization server MUST require the
        //     following clients to register their
        //     redirection endpoint:
        //
        //       o  Public clients.
        //       o  Confidential clients utilizing the
        //          implicit grant type.

        // If the type of the client application which made
        // the authorization request is 'public'.
        if ( client.getClientType() == PUBLIC )
        {
            throw new Exception(
                "A redirect URI must be registered.");
        }
        // If the client type is 'confidential' and if the
        // authorization flow is 'Implicit Flow'. If the
        // 'response_type' request parameter contains either
        // or both of 'token' and 'id_token', the flow should
        // be treated as a kind of 'Implicit Flow'.
        else if ( responseType.requiresImplicitFlow() )
        {
            throw new Exception(
                "A redirect URI must be registered.");
        }
    }

    // If the authorization request does not contain the
    // 'redirect_uri' request parameter.
    if ( redirectUri == null )
    {
        // If redirect URIs are not registered at all,
        // or if multiple redirect URIs are registered.
        if ( registeredRedirectUris.size() != 1 )
        {
            // A redirect URI must be explicitly specified
            // by the 'redirect_uri' parameter.
            throw new Exception(
                "The 'redirect_uri' parameter is missing.");
        }

        // Exactly one redirect URI is registered. Use it
        // as the default redirect URI.
        redirectUri = registeredRedirectUris[0];
    }
    // The authorization request contains the 'redirect_uri'
    // parameter, but redirect URIs are not registered.
    else if ( registeredRedirectUris.size() == 0 )
    {
        // The code flow reaches here if and only if the
        // client type is 'confidential' and the authorization
        // flow is not 'Implicit Flow'. In this case, the
        // redirect URI specified by the 'redirect_uri'
        // parameter of the authorization request is used
        // although it is not registered. However, the
        // requirements written in RFC 6749, 3.1.2.
        // Redirection Endpoint are checked.

        // If the specified redirect URI is not an absolute one.
        if ( redirectUri.isAbsolute() == false )
        {
            throw new Exception(
                "The 'redirect_uri' is not an absolute URI.");
        }

        // If the specified redirect URI has a fragment part.
        if ( redirectUri.getFragment() != null )
        {
            throw new Exception(
                "The 'redirect_uri' has a fragment part.");
        }
    }
    else
    {
        // If the specified redirect URI is not an absolute one.
        if ( redirectUri.isAbsolute() == false )
        {
            throw new Exception(
                "The 'redirect_uri' is not an absolute URI.");
        }

        // If the specified redirect URI has a fragment part.
        if ( redirectUri.getFragment() != null )
        {
            throw new Exception(
                "The 'redirect_uri' has a fragment part.");
        }

        // For each registered redirect URI.
        for ( registeredRedirectUri : registeredRedirectUris )
        {
            // If the registered redirect URI is a full URI.
            if ( registeredRedirectUri.getQuery() != null )
            {
                // 'Simple String Comparison'
                if ( registeredRedirectUri.equals( redirectUri ) )
                {
                    // The specified redirect URI is registered.
                    registered = true;
                    break;
                }

                // This registered redirect URI does not match.
                continue;
            }

            // Compare the scheme parts.
            if ( registeredRedirectUri.getScheme().equals(
                     redirectUri.getScheme() ) == false )
            {
                // This registered redirect URI does not match.
                continue;
            }

            // Compare the user information parts. Here I use
            // an imaginary method 'equalsSafely()' because
            // the code would become too long if I inlined it.
            // The method compares arguments without throwing
            // any exception even if either or both of the
            // arguments are null.
            if ( equalsSafely(
                     registeredRedirectUri.getUserInfo(),
                     redirectUri.getUserInfo() ) == false )
            {
                // This registered redirect URI does not match.
                continue;
            }

            // Compare the host parts, ignoring case.
            if ( registeredRedirectUri.getHost().equalsIgnoreCase(
                     redirectUri.getHost() ) == false )
            {
                // This registered redirect URI does not match.
                continue;
            }

            // Compare the port parts. Here I use an imaginary
            // method 'getPortOrDefaultPort()' because the
            // code would become too long if I inlined it. The
            // method returns the default port number of the
            // scheme when 'getPort()' returns -1. The last
            // resort is 'URI.toURL().getDefaultPort()'. -1 is
            // returned if 'getDefaultPort()' throws an exception.
            if ( getPortOrDefaultPort( registeredRedirectUri ) !=
                 getPortOrDefaultPort( redirectUri ) )
            {
                // This registered redirect URI does not match.
                continue;
            }

            // Compare the path parts. Here I use the imaginary
            // method 'equalsSafely()' again.
            if ( equalsSafely( registeredRedirectUri.getPath(),
                     redirectUri.getPath() ) == false )
            {
                // This registered redirect URI does not match.
                continue;
            }

            // The specified redirect URI is registered.
            registered = true;
            break;
        }

        // If none of the registered redirect URIs match.
        if ( registered == false )
        {
            throw new Exception(
                "The redirect URI is not registered.");
        }
    }
}

// Check requirements by the 'application_type' of the client.

// If the value of the 'application_type' attribute is 'web'.
if ( client.getApplicationType() == WEB )
{
    // If the authorization flow is 'Implicit Flow'. When the
    // 'response_type' request parameter of the authorization
    // request contains either or both of 'token' and 'id_token',
    // it should be treated as a kind of 'Implicit Flow'.
    if ( responseType.requiresImplicitFlow() )
    {
        // If the scheme of the redirect URI is not 'https'.
        if ( "https".equals( redirectUri.getScheme() ) == false )
        {
            // The scheme part of the redirect URI must be
            // 'https' when a client application whose
            // 'application_type' is 'web' uses 'Implicit Flow'.
            throw new Exception(
                "The scheme of the redirect URI is not 'https'.");
        }

        // If the host of the redirect URI is 'localhost'.
        if ( "localhost".equals( redirectUri.getHost() ) )
        {
            // The host of the redirect URI must not be
            // 'localhost' when a client application whose
            // 'application_type' is 'web' uses 'Implicit Flow'.
            throw new Exception(
                "The host of the redirect URI is 'localhost'.");
        }
    }
}
// If the value of the 'application_type' attribute is 'native'.
else if ( client.getApplicationType() == NATIVE )
{
    // If the scheme of the redirect URI is 'https'.
    if ( "https".equals( redirectUri.getScheme() ) )
    {
        // The scheme of the redirect URI must not be 'https'
        // when the 'application_type' of the client is 'native'.
        throw new Exception(
            "The scheme of the redirect URI is 'https'.");
    }

    // If the scheme of the redirect URI is 'http'.
    if ( "http".equals( redirectUri.getScheme() ) )
    {
        // If the host of the redirect URI is not 'localhost'.
        if ( "localhost".equals( redirectUri.getHost() ) == false )
        {
            // When a client application whose 'application_type'
            // is 'native' uses a redirect URI whose scheme is
            // 'http', the host of the URI must be 'localhost'.
            throw new Exception(
                "The host of the redirect URI is not 'localhost'.");
        }
    }
}
// If the value of the 'application_type' attribute is neither
// 'web' nor 'native'.
else
{
    // As mentioned above, Authlete allows 'unspecified' as a
    // value of the 'application_type' attribute. Therefore,
    // no exception is thrown here.
}

8.2. Others’ Implementations

In OpenID Connect, the redirect_uri parameter is mandatory, and the requirement about how to check whether a presented redirect URI is registered is just ‘Simple String Comparison’. Therefore, if all you have to care about is OpenID Connect, the implementation can be simple. For example, in IdentityServer3, which has won about 1,700 stars on GitHub as of October 2016 and has been certified by the OpenID Certification program, redirect URI checking is implemented as follows (excerpt from DefaultRedirectUriValidator.cs with additional newlines for formatting).

public virtual Task<bool> IsRedirectUriValidAsync(
    string requestedUri, Client client)
{
    return Task.FromResult(
        StringCollectionContainsString(
            client.RedirectUris, requestedUri));
}

Caring about OpenID Connect only means, to put it the other way around, that the traditional authorization code flow and implicit flow, which do not contain openid in the scope request parameter, are NOT accepted by the authorization server. That is, such an authorization server cannot serve existing OAuth 2.0 client applications.

So, does IdentityServer3 reject traditional authorization requests? Take a look at AuthorizeRequestValidator.cs and you will find this (formatting is adjusted):

if (request.RequestedScopes.Contains(
    Constants.StandardScopes.OpenId))
{
    request.IsOpenIdRequest = true;
}

//////////////////////////////////////////////////////////
// check scope vs response_type plausability
//////////////////////////////////////////////////////////
var requirement =
    Constants.ResponseTypeToScopeRequirement[request.ResponseType];

if (requirement == Constants.ScopeRequirement.Identity ||
    requirement == Constants.ScopeRequirement.IdentityOnly)
{
    if (request.IsOpenIdRequest == false)
    {
        LogError("response_type requires the openid scope", request);
        return Invalid(request, ErrorTypes.Client);
    }
}

You don’t have to understand the details of this code. The point is that there are paths which allow cases where openid is not contained in the scope parameter. That is, traditional authorization requests are accepted. However, at another place in AuthorizeRequestValidator.cs, the implementation rejects all authorization requests that do not contain the redirect_uri parameter, like below (formatting is adjusted).

//////////////////////////////////////////////////////////
// redirect_uri must be present, and a valid uri
//////////////////////////////////////////////////////////
var redirectUri = request.Raw.Get(Constants.AuthorizeRequest.RedirectUri);

if (redirectUri.IsMissingOrTooLong(
    _options.InputLengthRestrictions.RedirectUri))
{
    LogError("redirect_uri is missing or too long", request);
    return Invalid(request);
}

Thanks to this check, the rest of the implementation does not have to care about the case where the redirect_uri parameter is omitted. But because the redirect_uri parameter is optional in RFC 6749, this behavior (unconditionally rejecting authorization requests without the redirect_uri parameter while still accepting traditional authorization requests) is a violation of the specification. In addition, IdentityServer3 does not validate the application_type attribute. To implement that validation, as a first step, a property for the application_type attribute would have to be added to the model class that represents a client application (Client.cs), which currently lacks it.

9. Violations of Specifications

Subtle violations of the specifications are sometimes called “dialects”. The word “dialect” may give an impression of acceptability, but violations are violations. If there were no dialects, one generic OAuth 2.0 / OpenID Connect library per programming language would be enough. But in the real world, custom client libraries are needed for authorization servers which violate the specifications.

The reason Facebook’s OAuth flow requires its own client library is that there are many violations of the specifications in Facebook’s OAuth implementation. For example, (1) commas are used as the delimiter of the scope list (it should be spaces), (2) the format of the response from the token endpoint is application/x-www-form-urlencoded (it should be JSON), and (3) the name of the parameter for the access token’s expiration date is expires (it should be expires_in).

Not only Facebook but also other big names have violations of the specifications. The following are other examples.

9.1. Delimiter of Scope List

Scope names are listed in the scope parameter of requests to an authorization endpoint and a token endpoint. RFC 6749, 3.3. Access Token Scope requires that spaces be used as delimiters, but the following OAuth implementations use commas:

  • Facebook
  • GitHub
  • Spotify
  • Disqus
  • Todoist
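
If you write client-side code that must talk to both compliant and non-compliant servers, one pragmatic workaround, shown in the minimal sketch below, is to accept both delimiters when parsing a scope string.

import java.util.Arrays;
import java.util.List;

public class ScopeParser
{
    // Accept both the standard space delimiter (RFC 6749,
    // Section 3.3) and the comma delimiter used by Facebook,
    // GitHub and others.
    public static List<String> parse(String scope)
    {
        return Arrays.asList(scope.trim().split("[\\s,]+"));
    }
}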

9.2. Response Format of Token Endpoint

RFC 6749, 5.1. Successful Response requires that the format of a successful response from a token endpoint be JSON, but the following OAuth implementations use application/x-www-form-urlencoded:

  • Facebook
  • Bitly
  • GitHub

The default format is application/x-www-form-urlencoded, but GitHub provides a means to request JSON (by sending Accept: application/json).

9.3. token_type in Response from Token Endpoint

RFC 6749, 5.1. Successful Response requires that the token_type parameter be included in a successful response from a token endpoint, but the following OAuth implementation does not include it:

  • Slack

Salesforce once had this issue, too (OAuth Access Token Response Missing token_type), but it has been fixed.

9.4. token_type Inconsistency

The following OAuth implementation claims that the token type is “Bearer”, but its resource endpoints do not accept an access token by the means defined in RFC 6750 (The OAuth 2.0 Authorization Framework: Bearer Token Usage):

  • GitHub (it accepts an access token via the format of Authorization: token OAUTH-TOKEN)

9.5. grant_type Is Not Required

The grant_type parameter is mandatory at a token endpoint, but the following OAuth implementations don’t require it:

  • GitHub
  • Slack
  • Todoist

9.6. Unofficial Values for The error Parameter

The specifications have defined some values for the error parameter which is included in an error response from an authorization server, but the following OAuth implementations define their own:

  • GitHub (e.g. application_suspended)
  • Todoist (e.g. bad_authorization_code)

9.7. Bad Parameter Name on Error

The following OAuth implementation uses errorCode instead of error when it returns an error code:

  • LINE

10. Proof Key for Code Exchange

10.1. PKCE Is A MUST

Do you know PKCE? It is a specification defined as RFC 7636 (Proof Key for Code Exchange by OAuth Public Clients) and published in September 2015. It is a countermeasure against the authorization code interception attack.

Some conditions are required for the attack to succeed, but if you are thinking of releasing a smartphone application, it is strongly recommended that PKCE be supported by both the client application and the authorization server. Otherwise, a malicious application may intercept an authorization code issued by the authorization server and exchange it for a valid access token at the token endpoint of the authorization server.

RFC 6749 (The OAuth 2.0 Authorization Framework) was released in October 2012, so even developers who are familiar with OAuth 2.0 may not know RFC 7636, which was released relatively recently, in September 2015. However, it should be noted that the draft of “OAuth 2.0 for Native Apps” states that PKCE support is a MUST under some conditions.

Both the client and the Authorization Server MUST support PKCE [RFC7636] to use custom URI schemes, or loopback IP redirects. Authorization Servers SHOULD reject authorization requests using a custom scheme, or loopback IP as part of the redirection URI if the required PKCE parameters are not present, returning the error message as defined in Section 4.4.1 of PKCE [RFC7636]. It is RECOMMENDED to use PKCE [RFC7636] for app-claimed HTTPS redirect URIs, even though these are not generally subject to interception, to protect against attacks on inter-app communication.

The authorization endpoint of an authorization server that supports RFC 7636 accepts two request parameters, code_challenge and code_challenge_method, and the token endpoint accepts code_verifier. In the implementation of the token endpoint, the authorization server computes the code challenge using (a) the code verifier presented by the client application and (b) the code challenge method that the client application specified at the authorization endpoint. If the computed code challenge equals the value of the code_challenge parameter presented at the authorization endpoint, it can be said that the entity that made the authorization request and the entity that made the token request are identical. Thus, an authorization server can avoid issuing an access token to a malicious application that is different from the entity that made the authorization request.

The entire flow of RFC 7636 is explained with illustration at Authlete’s website: Proof Key for Code Exchange (RFC 7636). If you are interested, please read it.

10.2. Server-Side Implementation

In the implementation of an authorization endpoint, all an authorization server has to do is save the values of the code_challenge and code_challenge_method parameters contained in an authorization request into the database, so there is nothing interesting in the implementation code. The only thing to note is that an authorization server which wants to support PKCE has to add columns for code_challenge and code_challenge_method to the database table storing authorization codes.

The entire source code of Authlete is confidential, but for your interest, here I show you the actual Authlete’s implementation that validates the value of the code_verifier parameter at the token endpoint.

private void validatePKCE(AuthorizationCodeEntity acEntity)
{
    // See RFC 7636 (Proof Key for Code Exchange) for details.

    // Get the value of 'code_challenge' which was contained in
    // the authorization request.
    String challenge = acEntity.getCodeChallenge();

    if (challenge == null)
    {
        // The authorization request did not contain
        // 'code_challenge'.
        return;
    }

    // If the authorization request contained 'code_challenge',
    // the token request must contain 'code_verifier'. Extract
    // the value of 'code_verifier' from the token request.
    String verifier = extractFromParameters(
        "code_verifier", invalid_grant, A050312, A050313, A050314);

    // Compute the challenge using the verifier.
    String computedChallenge = computeChallenge(acEntity, verifier);

    if (challenge.equals(computedChallenge))
    {
        // OK. The presented code_verifier is valid.
        return;
    }

    // The code challenge value computed with 'code_verifier'
    // is different from 'code_challenge' contained in the
    // authorization request.
    throw toException(invalid_grant, A050315);
}


private String computeChallenge(
        AuthorizationCodeEntity acEntity, String verifier)
{
    CodeChallengeMethod method = acEntity.getCodeChallengeMethod();

    // This should not happen, but just in case.
    if (method == null)
    {
        // Use 'plain' as the default value required by RFC 7636.
        method = CodeChallengeMethod.PLAIN;
    }

    switch (method)
    {
        case PLAIN:
            // code_verifier
            return verifier;

        case S256:
            // BASE64URL-ENCODE(SHA256(ASCII(code_verifier)))
            return computeChallengeS256(verifier);

        default:
            // The value of code_challenge_method extracted
            // from the database is not supported.
            throw toException(server_error, A050102);
    }
}


private String computeChallengeS256(String verifier)
{
    // BASE64URL-ENCODE(SHA256(ASCII(code_verifier)))

    // SHA256
    byte[] hash =
        Digest.getInstanceSHA256().update(verifier).digest();

    // BASE64URL
    return SecurityUtils.encode(hash);
}

The Digest class used in the implementation of the computeChallengeS256(String) method is included in my open-source library, nv-digest, a utility library to make digest computation easy. With this library, an SHA-256 digest value can be computed in one line, as shown below.

byte[] hash = Digest.getInstanceSHA256().update(verifier).digest();

10.3. Client-Side Implementation

What a client application has to do for PKCE is twofold. One is to generate a random code verifier consisting of 43 to 128 characters, compute the code challenge from the code verifier and the code challenge method (plain or S256), and include the computed code challenge and the code challenge method as the values of the code_challenge and code_challenge_method parameters in an authorization request. The other is to include the code verifier in the token request.

As examples of client-side implementations, I introduce the following two.

  1. AppAuth for Android
  2. AppAuth for iOS

They are SDKs for communicating with an OAuth 2.0 and OpenID Connect server. They claim to embody best practices, and they support PKCE.

If you implement the computation logic for code_challenge_method=S256, you can test it by checking that the code challenge becomes E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM when the code verifier is dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk. These values are found as example values in “Appendix B. Example for the S256 code_challenge_method” of RFC 7636.
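
Below is a minimal sketch of the client-side computation. Run as-is, the main method reproduces the RFC 7636 test vector above.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class Pkce
{
    // Generate a random code verifier. 32 random octets
    // become 43 base64url characters, the minimum length
    // allowed by RFC 7636.
    public static String generateVerifier()
    {
        byte[] bytes = new byte[32];
        new SecureRandom().nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(bytes);
    }

    // code_challenge = BASE64URL-ENCODE(SHA256(ASCII(code_verifier)))
    public static String computeChallengeS256(String verifier)
        throws Exception
    {
        byte[] hash = MessageDigest.getInstance("SHA-256")
            .digest(verifier.getBytes(StandardCharsets.US_ASCII));
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(hash);
    }

    public static void main(String[] args) throws Exception
    {
        // Test vector from RFC 7636, Appendix B. Prints
        // E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM.
        System.out.println(computeChallengeS256(
            "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"));
    }
}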

11. Finally

Some may say it is easy to implement OAuth and OpenID Connect, and others may say it’s not. In either case, as a matter of fact, even big tech companies such as Facebook and GitHub, which have sufficient budget and human resources, have failed to implement OAuth and OpenID Connect correctly. Famous open-source projects such as Apache Oltu and Spring Security have problems, too. Therefore, if you implement OAuth and OpenID Connect by yourself, take it seriously and prepare a decent development team. Otherwise, security risks will increase.

Implementing RFC 6749 alone is not hard, but implementing OpenID Connect from scratch would drive you crazy. So, it is recommended to use an existing implementation as a starting point. The first step will be to search the “Libraries, Products, and Tools” page of the OpenID Connect website for software related to OAuth and OpenID Connect (although Authlete is not listed there). Of course, as a co-founder of Authlete, Inc., I will be glad if you choose Authlete.

Thank you for reading this long post.

Written by Takahiko Kawasaki

Co-founder and representative director of Authlete, Inc., working as a software engineer since 1997. https://www.authlete.com/
