Posts Tagged 'openID'

In the wild snapshot #2:

An excellent in-the-wild post written by Josh Patterson about his project, with some feedback from me, so the credit really goes to him. For those of you unfamiliar with this series of posts, the idea is to create blog-length interviews with various in-the-wild apps, describing their processes and the technologies they use with regard to data portability. The goal is to profile real use cases, solutions, and lessons learned when it comes to the current state of data portability technology. Note that these posts aren’t meant to recommend or discourage particular technologies; I leave that up to the developers/architects to decide based on their needs. If you have such an app and are interested in being interviewed, please leave a comment on one of my posts and I will get in touch with you.

Application Overview
Basically, we started out just wanting to try some video ideas we had. We began with a simple playlist of videos on the internet linked together at a site, which evolved into a full-blown video editor. After a number of months of development we hit a point where the team sat down with some users and did a testing session, asking questions and gauging responses to see how well we were hitting our marks. We came to the subject of data storage, local hard drives, and getting media online, and just as a thought exercise I asked “well, what if the app just knew about all your online media by your login name, and referenced it automatically in your libraries the first time you logged in, just as if it were an app installed locally on your hard drive?” Immediately both of them became excited, and one asked “can I do that right now? when can I use that?” I knew from experience that the market was speaking very loudly and clearly in my direction, and that I had better listen very closely.

The very next meeting I posed this question to our team:

What if our app was “inherently installed” in the internet? What if someone logged in, and the app just acted like a desktop app that “knew” about your flickr images and youtube videos, knew about your myspace friends and facebook friends, and automatically treated them as one logical database, one logical social graph? What if someone could start right into an app tutorial right off the bat with their contacts, files, and assets already referenced (while fully respecting privacy, control, etc.)?

So the next question naturally becomes “that all sounds really great, but … how do we get there?”

From there we really began to push “what if this/that” scenarios, drew up our ideas into a document entitled WRFS, and from that began to re-engineer the app towards a truly linked data experience.

We are currently using FLEX/as3 for the editor and player, with ASP.NET for the server technology. Discovery is a big deal in how I view next-gen web apps: dynamically finding data at runtime without having to go through “Big Data’s” walls to get to it. How I see this happening is:

  1. The user logs in with an identity URL, be it an openID or an XRDS file location. The app is openID-only, but users can map multiple openIDs to one account with it.
  2. The app authenticates the user, and then uses multiple fallback methods to find the XRDS-S file if it is not specified.
  3. The XRDS file is parsed, and the location URIs for the relevant data types (here image and video) are pulled out.
  4. Each URI points to a data container, which is then queried via its API for a list of resources the user has stored there.
  5. These results are then aggregated back together into a single “recordset” to be returned to the application layer.
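Steps 3–5 above can be sketched in Python. This is only an illustration: the service Type URIs and the `query_container` function are hypothetical placeholders, not real endpoints.

```python
# Hypothetical sketch of steps 3-5: parse an XRDS document, pull out the
# image/video service URIs, query each data container, and merge the
# results into one "recordset".  The Type URIs and query_container are
# placeholders, not real endpoints.
import xml.etree.ElementTree as ET

XRD_NS = "xri://$xrd*($v*2.0)"

# Hypothetical service-type URIs, for illustration only
MEDIA_TYPES = {
    "http://example.org/types/images",
    "http://example.org/types/videos",
}

def media_service_uris(xrds_xml):
    """Step 3: pull the URIs of image/video services out of an XRDS doc."""
    root = ET.fromstring(xrds_xml)
    uris = []
    for service in root.iter("{%s}Service" % XRD_NS):
        types = {t.text for t in service.findall("{%s}Type" % XRD_NS)}
        if types & MEDIA_TYPES:
            uris += [u.text for u in service.findall("{%s}URI" % XRD_NS)]
    return uris

def aggregate(xrds_xml, query_container):
    """Steps 4-5: query each container and merge into one recordset."""
    recordset = []
    for uri in media_service_uris(xrds_xml):
        recordset.extend(query_container(uri))  # container API call
    return recordset
```

In practice each container (flickr, youtube, etc.) would need its own adapter behind `query_container`, since every API speaks a different dialect.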

The fallback methods for XRDS discovery (done in the FLEX client) are ordered as:

  1. First do basic Yadis discovery to see if the identity delegate is the openID provider or a blog with some sort of head-link delegation set up. Either the openID provider or the XRDS location may be found here, or both. In some cases, such as the DiSo XRDS plugin, the XRDS file located in the head link tag will have the openID provider location.
  2. The secondary method we have kicked around is to query the openID provider for an Attribute Exchange key that points to the XRDS file. This is not well defined yet but has been discussed amongst various groups.
  3. Lastly, we fall back to having our flex app prompt the user for a XRDS url so that we can “enhance their user experience”.

So although I think our secondary option with the AX key is a little shaky right now, overall we degrade gracefully.
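The first fallback, basic Yadis discovery plus head-link delegation, can be sketched roughly as follows. The HTTP fetch is stubbed out; real code would also need to follow redirects and treat header names case-insensitively.

```python
# Rough sketch of the Yadis / head-link discovery fallback chain.
# fetch() is a stub standing in for an HTTP GET that returns
# (headers_dict, body_text); headers are assumed lowercase here.
import re

def discover(identity_url, fetch):
    """Return ('xrds', url), ('openid_server', url), or (None, None)."""
    headers, body = fetch(identity_url)

    # 1. Yadis: an X-XRDS-Location response header wins outright.
    loc = headers.get("x-xrds-location")
    if loc:
        return ("xrds", loc)

    # 2. Yadis: the document itself may already be an XRDS file.
    if "application/xrds+xml" in headers.get("content-type", ""):
        return ("xrds", identity_url)

    # 3. Yadis: <meta http-equiv="X-XRDS-Location" content="..."> in <head>.
    m = re.search(
        r'<meta[^>]+http-equiv=["\']x-xrds-location["\'][^>]+content=["\']([^"\']+)',
        body, re.I)
    if m:
        return ("xrds", m.group(1))

    # 4. Head-link delegation: <link rel="openid.server" href="...">.
    m = re.search(
        r'<link[^>]+rel=["\']openid\.server["\'][^>]+href=["\']([^"\']+)',
        body, re.I)
    if m:
        return ("openid_server", m.group(1))

    return (None, None)  # fall through to the AX key or prompting the user
```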

To show how some of the data query and aggregation mechanics might work, I’ve built a short demo illustrating the step-by-step mechanics.

Lessons learned
We aren’t done with this application, obviously, and a lot of work remains to be done – I should note that I am currently at Step 2 of 5 as stated above. However, the application is evolving into the embryo of what I think a linked data application can and will be. What I can share are the places that we are actively looking for solutions, simple decentralized solutions, that solve these issues.

One thing that is slowly changing is the perception of cross-domain scripting on the internet. As services get more complex and require more aggregation of data from multiple sources, we are going to push more data-handling duties to the client; otherwise scalability will suffer. For Flash cross-domain scripting to work, a crossdomain.xml file must be present on the server of the API we would like to call. This is a trivial thing to set up, as it consists of a simple XML file located at the directory level you wish to grant access to.
Once this file is exposed, the Flash runtime will allow calls from the client to those servers.
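For illustration, a minimal crossdomain.xml might look like this. The wildcard is for demonstration only; a production policy should whitelist specific domains.

```xml
<?xml version="1.0"?>
<cross-domain-policy>
  <!-- Allow Flash clients served from any domain to call this API.
       Replace "*" with explicit domains in production. -->
  <allow-access-from domain="*" />
</cross-domain-policy>
```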

For cross-domain client-side javascript, things get a little bit trickier. A lot of cross-domain tricks, such as widget embedding, are done via iframe embeds. This type of embedding significantly restricts access to the rest of the page, so the widget is effectively isolated from the rest of the page DOM. Firefox 3 will allow cross-domain client-side scripting when certain HTTP headers are present on the server response from the remote cross-domain endpoint. I’m not sure how future versions of Internet Explorer will address this issue, but I think evolutionary pressure from both Firefox and Flash will have some effect there. A new development that is supposed to make javascript more secure is Google Caja. I’ve begun to follow Caja but have yet to dive deeply into that project.
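For illustration, the server-side opt-in works by the remote endpoint echoing a response header roughly like the following. The header name comes from the W3C cross-site access-control drafts and may still change.

```
Access-Control-Allow-Origin: http://editor.example.com
```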

We’ve been waiting to see how the discovery wars pan out; as it stands now, XRDS-S is looking like the service index of choice amongst the big players (presently Yahoo is endorsing it, having hired Eran Hammer-Lahav as their Open Standards Evangelist). How the XRDS resource is discovered automatically, without a tremendous amount of user interaction, is something that we are taking many approaches towards, as discussed above. For now we’re going to focus on finding XRDS files as our catalog of service endpoints for a user. The DiSo project is going to be publishing its users’ service endpoints in the XRDS format and already has a plugin for it, so I think in the short term we’ll be focusing on consuming that data for an early conceptual demo of runtime linked data.

Once we find the XRDS file, we aren’t out of the woods. How do I set up my XRDS file so that I can tell the application that I have “images in flickr”? This is a fundamental question being worked on by many different groups and people, and it has yet to be fully fleshed out.
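As a sketch of what such an entry might eventually look like, here is a hypothetical XRDS Service block advertising flickr as an image container. The Type URI is invented; no such vocabulary has been standardized yet.

```xml
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <!-- Hypothetical service entry: "my images live in flickr" -->
    <Service>
      <Type>http://example.org/media/images</Type>
      <URI>http://api.flickr.com/services/rest/</URI>
    </Service>
  </XRD>
</xrds:XRDS>
```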

There’s also the issue of, once we find a data endpoint, how do we talk to it? Atom? Some sort of standard data-interop API that spits out RDF? In a perfect world I’d love to see a self-organizing web, a linked data web that can find a data endpoint at runtime, find its semantic schema, and wire itself up so that it can talk to that API without ever needing user intervention — it simply will understand how to talk to it.

Another key development will be the permission system, possibly using OAuth+Discovery to automate the updating of a user’s XRDS-S file when they add data to a service. I need to learn a lot more about OAuth and the direction it’s heading. I’d prefer a world where the user doesn’t need to manually go to a site to allow their resources to be used by a third party, but for now, this is how we have to operate.

So really, I think I have more “lessons underway” than “lessons learned”, but sharing this information is key, since it sparks interest in other like-minded developers who may know a lot more about some of these areas than I do. There are going to be some places where we punt and just hard-code some scaffolding to get going, but over time I’d like to evolve towards a linked data web that auto-discovers new connections at runtime and self-organizes to give a smarter and far more intuitive user experience than we’ve seen so far.

If you are interested in linking data or trying some data interop experiments, please feel free to email me (jpatterson at) or check out the WRFS workgroup or my blog.

Feedback and suggestions are welcomed.

Why openID matters to you

Ok, so I have mentioned openID several times now, but what does it mean to you and me exactly? If you are like me, you probably have different accounts and usernames / loginids on sites like Facebook, MySpace, Yahoo, Youtube, etc. Unfortunately, this is still the norm for the most part; practically every site requires you to create an account and username / loginid on their site.
NOTE: A username / loginid is not the same as an account; an account usually includes information like your name, address, etc.

openID was created to address this problem, especially the username / loginid part. The technology is available to any site and user that wants to use it. With a non-openID site, you are typically asked to create a unique username / loginid; for me it is usually bobngu or bngu, but you are free to choose whatever your heart desires as long as it is available on that site. Instead of a unique username / loginid, it is also common for sites to ask for your email address as the username / loginid.

However, openID does not use either of those mechanisms to create your loginid. Instead, openID uses a URL as your loginid. For example, one of my openIDs is (issued by clickpass),

another is (issued by Yahoo).

IMO, this is counter-intuitive for an end user, because a URL typically means a site of some kind. When most users see a URL, they usually enter it in the browser to bring up the site. Not so with openID: you use the URL as a loginid. Technically you can also enter your openID url into the browser, except that sometimes you will see a blank page (for clickpass) or a simple page saying this is your openID (Yahoo).
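Because users rarely type a full URL, relying parties normalize whatever is entered before using it. Here is a simplified sketch of the OpenID 2.0 normalization rules (XRI identifiers and several edge cases are omitted):

```python
# Simplified OpenID identifier normalization, a subset of what the
# OpenID 2.0 spec requires: trim whitespace, drop any fragment,
# and default to http:// when no scheme is given.
def normalize_identifier(user_input):
    ident = user_input.strip()
    if not ident.startswith(("http://", "https://")):
        ident = "http://" + ident        # no scheme given: assume http
    ident = ident.split("#", 1)[0]       # fragments are never part of the id
    if ident.count("/") == 2:            # bare domain: add the root path
        ident += "/"
    return ident
```

So typing me.yahoo.com/bobngu and http://me.yahoo.com/bobngu resolve to the same identifier.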

Here’s an example of an openID-enabled site, Plaxo.
Plaxo Login Page

When I click on “Sign in with Yahoo ID”, I get this page showing my Yahoo openID (I had previously activated my Yahoo id for use with openID here).
Plaxo Yahoo OpenID

Note that Plaxo redirected me to the Yahoo openID login page. Once I log in on Yahoo, I will be redirected back to Plaxo.
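That redirect is just a URL the relying party builds and sends the browser to. A minimal sketch using the OpenID 2.0 parameter names (the endpoint and URLs here are illustrative, not Plaxo’s or Yahoo’s actual values):

```python
# Minimal sketch of the OpenID 2.0 "checkid_setup" redirect a relying
# party sends the browser through.  Parameter names come from the
# OpenID 2.0 spec; the URLs are illustrative.
from urllib.parse import urlencode

def auth_redirect(op_endpoint, claimed_id, return_to, realm):
    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",   # interactive login at the provider
        "openid.claimed_id": claimed_id,
        "openid.identity": claimed_id,
        "openid.return_to": return_to,    # where the provider sends us back
        "openid.realm": realm,
    }
    return op_endpoint + "?" + urlencode(params)
```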

I think a good analogy for openID is credit cards. Examples of popular credit card issuing parties are Visa, Mastercard, and American Express; examples of openID issuing parties are Yahoo and AOL. You can have more than one credit card, just like you can have more than one openID. You can also use a credit card at multiple stores, just like you can use an openID at multiple sites. And a store usually accepts more than one type of credit card (Visa, Amex, MC), just like a site usually accepts openIDs from different issuers, even for the same person.

I hope this helps explain why sites that support openID are good for you, the end user: they promote data portability, i.e., using one openID and one password across multiple sites.

A problem for DataPortability to tackle

Today, Michael Arrington from Techcrunch posted an article, “Is OpenID Being Exploited By The Big Internet Companies?” According to the article, four big companies (Google, Yahoo, Microsoft, AOL) claim to support openID, but upon closer inspection all of them have implemented only partial support for it (technically, Google implemented the full openID framework, but only for Blogger). Quoting from the post,

Microsoft has done absolutely nothing, even though Bill Gates announced their support over a year ago. Google has limited its support to Blogger, where it is both an Issuing and Relying party. Yahoo and AOL are Issuing parties only.

To put the openID framework into context, again quoting from the post,

There are two ways companies/websites can participate in the OpenID framework – as “issuing parties” or as “relying parties.” Issuing parties make their user accounts OpenID compatible. Relying parties are websites that allow users to sign into their sites with credentials from Issuing parties. Of course, sites can also be both. In fact, if they aren’t both it can be confusing and isn’t a good user experience.

So what can DataPortability do in this case? IMO (and strictly mine), DataPortability could publish a technical and policy blueprint stating that, to claim support for a particular standard like openID, a vendor needs to at least implement the relying-party feature. From the user’s perspective, that’s the key value of openID, i.e., get an openID and use it everywhere. Vendors don’t seem to need much incentive to be an openID issuer; there is inherent value to a vendor in providing that feature, as Yahoo and AOL have already shown.

If I had my druthers, these are the types of issues that DataPortability would swiftly address and publish standards for. Just my $0.02.

DaPo Acronym Soup

In my first post, I said that the DaPo list of existing open source technology reads like an acronym soup. Here’s why (the list below is extracted from the DaPo site).

Authentication Standards

  • User authentication – openID
  • API authentication – oAuth

Data Transfer, Interchange, and Exchange Standards

These standards enable data dumps via import and export across data spaces on the Web. They also aid publish and subscribe methods of data exchange. The list includes:

  • Messaging – XMPP
  • Syndication – Atom and RSS
  • Attention – APML
  • Services and Service Discovery – YADIS and XRDS
  • Subscriptions – OPML
  • Personal details – hCard
  • Relationships – XFN
  • Personal Profile Data & Social Networks – FOAF
  • Online Communities – SIOC (discussed at SIOC-DEV)
  • Publishing data – AtomPub

Linked Data or Data Referencing Standards

These standards enable access to data by reference via HTTP based data object identifiers called URIs. The list includes:

  • HTTP based URIs (for Location, Value, and Structure independent Object / Resource Identifiers)
  • Personal Profile Data & Social Networks – FOAF
  • Online Communities – SIOC (discussed at SIOC-DEV)
  • Other Schemas and Vocabularies in the Semantic Web realm

Other standards

Other standards groups and initiatives that may not yet have a place in the Blueprint, but still deserve help and support!

  • Content Identity Validation – MicroID
  • REST

Despite the acronym soup, I found this to be the most useful page on the site, because it lists all the open source technology promoted by DaPo. Now I just need to research each acronym to understand the technology; I will write subsequent posts on each one as I get to it.

Note that there are also policy and legal aspects to DaPo, which I am ignoring, at least for now.