Posts Tagged 'oAuth'

OAuth or bust

Hot off the press! (Is that still an expression, given the apparent demise of newspapers?) Mashups: Google’s Adoption Makes oAuth a Must Have for All Apps. This comes right after MySpace announced support for OAuth with their Data Availability initiative the day before.

IMHO, this is huge for data portability: in this case, OAuth support for all Google Data APIs, everything from Gmail contacts to Google Calendar to Docs to YouTube. Bottom line, users no longer need to give their confidential Google account username and password to 3rd party services in order for those services to access their data on Google. I suspect that Google is doing this because it helps them become the service provider of choice via an open and standard means of authorization, channeling even more traffic through Google servers that they can figure out how to monetize later, much like their Friend Connect effort.

This is a major win for OAuth; in fact, I would say that OAuth has now become a bigger player than OpenID in the space of data portability technologies. Given the recent history of big players announcing back-to-back support for similar features, I predict (ok, I hope) that Facebook and Microsoft will follow suit.

OAuth authentication flow

I came across this thread by Jeff Hodges on the Google OAuth group and thought it worth sharing.

OAuth authentication flow

In the wild snapshot #2:

An excellent in the wild post written by Josh Patterson about his project, with some feedback from me, so the credit really goes to him. For those of you unfamiliar with this series of posts, the idea is to create blog-length interviews with various in the wild apps describing their processes and the technologies they use with regard to data portability. The goal is to profile real use cases, solutions, and lessons learned in the current state of affairs for data portability technology. Note that these posts aren’t meant to recommend for or against particular technologies; I leave that up to the developers/architects to decide based on their needs. If you have such an app and are interested in being interviewed, please leave a comment on one of my posts and I will get in touch with you.

Application Overview
Basically, we started out just wanting to try out some video ideas we had. We began with a simple playlist of videos on the internet linked together at a site, which evolved into a full-blown video editor. After a number of months of development we hit a point where the team sat down with some users and did a testing session, asking questions and gauging responses to see how well we were hitting our marks. We came to the subject of data storage, local hard drives, and getting media online, and just as a thought exercise I asked, “well, what if the app just knew about all your online media by your login name, and referenced it automatically in your libraries the first time you logged in — just as if it were an app installed locally on your hard drive?” Immediately both of them became excited, and one asked, “can I do that right now? When can I use that?” I knew from experience that the market was speaking very loudly and clearly in my direction, and that I had better listen very closely.

The very next meeting I posed this question to our team:

What if our app was “inherently installed” in the internet? What if someone logged in, and the app just acted like a desktop app that “knew” about your Flickr images and your YouTube videos, that knew about your MySpace friends and Facebook friends, and automatically treated them as one logical database, one logical social graph? And what if someone could start right into an app tutorial right off the bat with their contacts, files, and assets already referenced (but with privacy, control, etc. fully respected)?

So the next question naturally becomes “that all sounds really great, but … how do we get there?”

From there we really began to push a “what if this/that” scenario, drew up our ideas into a document entitled WRFS, and from that we began to re-engineer the app towards a truly linked data experience.

We are currently using FLEX/as3 for the editor and player, with ASP.NET for the server technology. Discovery is a big deal in how I view next-gen web apps — dynamically finding data at runtime without having to go through “Big Data’s” walls to get to it. How I see this happening is:

  1. The user logs in with an identity URL, be it an OpenID or a XRDS file location. The app is an OpenID-only application, but users can map multiple OpenIDs to one account with it.
  2. The app authenticates the user, and then uses multiple fallback methods to find the XRDS-S file if one is not specified.
  3. The XRDS file is parsed, and the location URIs for the relevant data types (here, images and video) are pulled out.
  4. Each URI points to a data container, which is then queried via its API for a list of resources the user has stored there.
  5. These results are then aggregated back together into a single “recordset” to be returned to the application layer.
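
Steps 3 and 4 above could be sketched roughly as follows in Python. Note that the type URIs and service endpoints here are invented for illustration; the real XRDS service-type vocabulary is still being worked out, as discussed later.

```python
import xml.etree.ElementTree as ET

# Namespace used by XRD elements inside an XRDS document
XRD_NS = "xri://$xrd*($v*2.0)"

def service_uris(xrds_xml, wanted_types):
    """Parse an XRDS document and pull out the URIs of every
    <Service> whose <Type> matches one of the wanted data types."""
    root = ET.fromstring(xrds_xml)
    uris = []
    for svc in root.iter("{%s}Service" % XRD_NS):
        types = {t.text for t in svc.findall("{%s}Type" % XRD_NS)}
        if types & set(wanted_types):
            uris.extend(u.text for u in svc.findall("{%s}URI" % XRD_NS))
    return uris

# A made-up XRDS file advertising an image container
sample_xrds = """<?xml version="1.0"?>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <Service>
      <Type>http://example.org/types/images</Type>
      <URI>http://photos.example.org/api</URI>
    </Service>
  </XRD>
</xrds:XRDS>"""
```

Step 5 would then merge the per-container query results into one recordset for the application layer.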

The fallback methods for XRDS discovery (done in the FLEX client) are ordered as:

  1. First, do basic Yadis discovery to see if the identity delegate is the OpenID provider or a blog with some sort of head-link delegation setup. Either the OpenID provider or the XRDS location may be here, or both. In some cases, such as the DiSo XRDS plugin, the XRDS file located via the head link tag will have the OpenID provider location.
  2. The secondary method we have kicked around is to query the OpenID provider for an Attribute Exchange key that points to the XRDS file. This is not well defined yet but has been discussed amongst various groups.
  3. Lastly, we fall back to having our Flex app prompt the user for an XRDS URL so that we can “enhance their user experience”.

So although I think our secondary option with the AX key is a little shaky right now, overall we degrade gracefully.

To show how some of the data query and aggregation mechanics might work, I’ve built a short demo illustrating them step by step.

Lessons learned
We aren’t done with this application, obviously, and a lot of work remains to be done – I should note that I am currently at Step 2 of 5 as stated above. However, the application is evolving into the embryo of what I think a linked data application can and will be. What I can share are the places where we are actively looking for simple, decentralized solutions to these issues.

One thing that is slowly changing is the perception of cross-domain scripting on the internet. As services get more complex and require more aggregation of data from multiple sources, we are going to push more data-handling duties to the client, or scalability will suffer. For Flash cross-domain scripting to work, a crossdomain.xml file must be present on the server of the API we would like to call. This is a trivial thing to set up, as it consists of a simple XML file placed at the directory level you wish to grant access to.
Once this file is exposed, the Flash runtime will allow calls from the client to those servers.
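
A minimal sketch of such a policy file (the domain name here is hypothetical):

```xml
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
  "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- Allow the Flash client served from editor.example.com to call
       this API; domain="*" would open it up to any domain. -->
  <allow-access-from domain="editor.example.com" />
</cross-domain-policy>
```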

For cross-domain client-side JavaScript, things get a little trickier. A lot of cross-domain tricks, such as widget embedding, are done via iframe embeds. This type of embedding significantly restricts access to the rest of the page, so the widget is effectively isolated from the rest of the page DOM. Firefox 3 will allow cross-domain client-side scripting when certain HTTP headers are present on the server response from the remote cross-domain endpoint. I’m not sure how future versions of Internet Explorer will address this issue, but I think evolutionary pressure from both Firefox and Flash will have some effect there. A new development that is supposed to make JavaScript more secure is Google Caja. I’ve begun to follow Caja but have yet to jump deeply into that project.
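
To sketch what the remote endpoint has to do: under the emerging W3C access-control approach that Firefox is implementing, the server opts in by sending an Access-Control-Allow-Origin response header. A minimal, hypothetical WSGI endpoint doing so might look like this:

```python
def application(environ, start_response):
    """Minimal WSGI sketch: the endpoint opts in to cross-domain
    XMLHttpRequest by sending an Access-Control-Allow-Origin header;
    without it the browser refuses to hand the response to the page."""
    headers = [
        ("Content-Type", "application/json"),
        # "*" opts in for any origin; a real service would usually
        # allow only specific trusted origins.
        ("Access-Control-Allow-Origin", "*"),
    ]
    start_response("200 OK", headers)
    return [b'{"ok": true}']
```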

We’ve been waiting to see how the discovery wars pan out; as it stands now, XRDS-S is looking like the service index of choice amongst the big players (presently Yahoo is endorsing it, having hired Eran Hammer-Lahav as their Open Standards Evangelist). How the XRDS resource is discovered automatically, without a tremendous amount of user interaction, is something we are taking many approaches towards, as discussed above. For now we’re going to focus on finding XRDS files as our catalog of service endpoints for a user. The DiSo project is going to be publishing its users’ service endpoints in the XRDS format and already has a plugin for it, so I think in the short term we’ll focus on consuming that data in an early conceptual demo of runtime linked data.

Once we find the XRDS file, we aren’t out of the woods. How do I set my XRDS file up so that I can tell the application that I have “images in flickr”? This is a fundamental question being worked on by many different groups and people, and it has yet to be fully fleshed out.

There’s also the issue of, once we find a data endpoint, how do we talk to it? Atom? Some sort of standard data interop API that spits out RDF? In a perfect world I’d love to see a self-organizing web, a linked data web that can find a data endpoint at runtime, find its semantic schema, and wire itself up so that it can talk to that API without ever needing user intervention — it simply will understand how to talk to it.
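
If an endpoint answered in Atom, consuming it could look something like this minimal sketch (the feed contents are invented; real services would carry richer media metadata):

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"

def list_entries(atom_xml):
    """Pull (title, link) pairs out of an Atom feed: one minimal
    answer to the question of how to talk to a data endpoint."""
    feed = ET.fromstring(atom_xml)
    entries = []
    for entry in feed.findall("{%s}entry" % ATOM_NS):
        title = entry.findtext("{%s}title" % ATOM_NS)
        link = entry.find("{%s}link" % ATOM_NS)
        entries.append((title, link.get("href") if link is not None else None))
    return entries

# A hypothetical feed a video endpoint might return
sample_feed = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>my videos</title>
  <entry>
    <title>vacation clip</title>
    <link href="http://videos.example.org/clips/1"/>
  </entry>
</feed>"""
```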

Another key development will be the permission system, possibly using OAuth+Discovery to automate the updating of an XRDS-S file for someone when they add data to a service. I need to learn a lot more about OAuth and the direction it’s heading. I’d prefer a world where the user doesn’t need to manually go to a site to allow their resources to be used by a third party, but for now, this is how we have to operate.

So really, I think I have more “lessons underway” than “lessons learned”, but sharing this information is key since it sparks interest among other like-minded developers who may know a lot more about some of these areas than I do. There are going to be some places where we punt and just hard-code some scaffolding to get going, but over time I’d like to evolve towards a linked data web that auto-discovers new connections at runtime and self-organizes to give a smarter and far more intuitive user experience than we’ve seen so far.

If you are interested in linking data or trying some data interop experiments, please feel free to email me (jpatterson at or check out the WRFS workgroup or my blog.

Feedback and suggestions are welcomed.

OAuth Explained

Ok, I admit this post is for geeks, but even geeks can’t keep up with all the latest technology all the time, so I guess I am un-geeking OAuth (pronounced “Oh Auth” and short for Open Authorization) for geeks. Wait, is that an oxymoron?

Problem Domain
If you have accounts on multiple social sites like YouTube, Facebook, MySpace, Flickr, etc., you have probably been asked by at least one, if not all, of the sites to invite your friends during signup, and probably repeatedly afterwards. Usually this involves handing over the private username and password for your favorite email accounts like Yahoo, Gmail, etc. By handing over your private information, you allow the site(s) to scrape your email contacts for their addresses so the site(s) can spam them with invites. Lovely, huh? In the back of my mind, I always have this discomfort about what the site(s) might do with my private login information; it’s like giving someone the keys to your house and hoping that they don’t make a copy and raid your house later on.

Solution: OAuth
I extracted most of the two paragraphs below from the OAuth About page.
Obviously sharing the same discomfort as me, a few open source developers got together and studied several existing proprietary authentication implementations (Google AuthSub, AOL OpenAuth, Yahoo BBAuth, Upcoming API, Flickr API, Amazon Web Services API, etc.). Each protocol provides a proprietary method for exchanging user credentials for an access token or ticket. Out came OAuth, based on the best practices and common functionality of these proprietary implementations.

So what is OAuth? OAuth allows you, the User, to grant access to your private resources on one site (called the Service Provider) to another site (called the Consumer, not to be confused with you, the User). This isn’t the same as OpenID. While OpenID is all about using a single identity to sign into many sites, OAuth is about giving access to your stuff without sharing your identity at all (or its secret parts).

OAuth Process Flow
I extracted most of the following from an excellent post Developing OAuth clients in Ruby.

To better understand things, let’s look at the process flow – you probably need to be a developer to make sense of it.

  1. Register your consumer application with the OAuth-compliant service provider to receive your Consumer Credentials (this is only done once)
  2. You initiate the OAuth token exchange process for a user by requesting a RequestToken from the Service Provider
  3. You store the RequestToken in your database or in the user’s session object
  4. You redirect your user to the service provider’s authorize_url with the RequestToken’s key appended
  5. Your user is asked by the service provider to authorize your RequestToken
  6. Your user clicks yes and is redirected to your callback URL
  7. Your callback action exchanges the RequestToken for an AccessToken
  8. Now you can access your user’s data by performing HTTP requests signed by your consumer credentials and the AccessToken
If you want more details (especially if you are a Ruby on Rails guy), check out the post Developing OAuth clients in Ruby.

DaPo Acronym Soup

In my first post, I said that DaPo’s list of existing open source technology reads like an acronym soup; here’s why (the list below is extracted from the DaPo site).

Authentication Standards

  • User authentication – OpenID
  • API authentication – OAuth

Data Transfer, Interchange, and Exchange Standards

These standards enable data dumps via import and export across data spaces on the Web. They also aid publish and subscribe methods of data exchange. The list includes:

  • Messaging – XMPP
  • Syndication – Atom and RSS
  • Attention – APML
  • Services and Service Discovery – YADIS and XRDS
  • Subscriptions – OPML
  • Personal details – hCard
  • Relationships – XFN
  • Personal Profile Data & Social Networks – FOAF
  • Online Communities – SIOC (discussed at SIOC-DEV)
  • Publishing data – AtomPub

Linked Data or Data Referencing Standards

These standards enable access to data by reference via HTTP based data object identifiers called URIs. The list includes:

  • HTTP based URIs (for Location, Value, and Structure independent Object / Resource Identifiers)
  • Personal Profile Data & Social Networks – FOAF
  • Online Communities – SIOC (discussed at SIOC-DEV)
  • Other Schemas and Vocabularies in the Semantic Web realm

Other standards

Other standards groups and initiatives that may not yet have a place in the Blueprint, but still deserve help and support!

  • Content Identity Validation – MicroID
  • REST

Despite the acronym soup, I found it to be the most useful page on the site because it lists all the open source technology promoted by DaPo. Now I just need to research each acronym to understand the technology; I will write subsequent posts on each technology as I get to it.

Note that there are also policy and legal aspects to DaPo, which I am ignoring, at least for now.