There’s little doubt that Google’s Project Glass is going to be a disruptive technology, although whether that comes from revolutionizing the way we interface with technology or from its social implications remains to be seen. Considering that the device has been limited to the technically elite and the few who got in on the #ifihadglass competition (disappointingly restricted to the US only), we still don’t have much to go on as to how Glass will function as an everyday technology. Sure, we’ve got lots of impressions of it, but the device is still very much in the nascent stages of adoption and third party development on the platform is only just starting to occur.
We do have a much better idea of what’s actually inside Google Glass, though, thanks to the device reaching more people outside the walls of the Googleplex. From what I’ve read it’s comparable to a mid-range smartphone in terms of specifications: 16GB of storage, a 5MP camera capable of shooting 720p video and a big enough battery to get you through a day of typical usage. This was pretty much expected given Glass’s size and recent development schedule, but what’s really interesting isn’t so much the hardware that’s powering everything, it’s the terms on which Google is letting you interface with it.
Third party applications, which make use of the Mirror API, are forbidden from inserting ads into their applications. Not only that, they are also forbidden from sending API data, which can be anything from feature usage to device information like location, to third party advertisers. This does not preclude Google from doing so, indeed the language hinges on the term third party, but it does firmly put the kibosh on any application that attempts to recoup development costs through ads or by on-selling user data. Whether or not you’ll be able to recoup costs by using Google’s AdSense platform remains to be seen, but it does seem that Google wants total control of the platform, and of any revenue generated on it, from day one, which may or may not be a bad thing depending on how you view Google.
What got me, though, was the strict limitation of Glass only talking to web applications. Whilst this still allows Glass to be extended in many ways that we’re only really beginning to think of, it drastically limits the potential of the platform. For instance my idea of pairing it with a MYO to create a gesture interface (for us anti-social types who’d rather not speak at it constantly) is essentially impossible thanks to this limitation, even though the hardware is perfectly capable of syncing with Bluetooth devices. Theoretically it’d still be possible to accomplish some of that whilst using a web app, but it’d be very cumbersome and not at all what I had envisioned when I first thought of pairing the two together.
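To make the web-only model concrete, here’s a rough Python sketch of what talking to Glass through the Mirror API looks like: your Glassware runs on a server and pushes “timeline cards” to the device over REST, rather than running any code on Glass itself. The timeline endpoint is the documented one, but `build_timeline_card` and `insert_card` are my own illustrative helpers, and you’d need a real OAuth2 access token for the POST to actually succeed.

```python
import json

# Documented Mirror API endpoint for inserting items into the wearer's timeline.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"


def build_timeline_card(text, speakable=False):
    """Build the JSON body for a simple text card.

    speakable=True adds speakableText, which lets Glass read the
    card aloud when the wearer asks it to.
    """
    card = {"text": text}
    if speakable:
        card["speakableText"] = text
    return card


def insert_card(http_session, access_token, card):
    """POST a card to the wearer's timeline (needs a valid OAuth2 token)."""
    return http_session.post(
        MIRROR_TIMELINE_URL,
        headers={
            "Authorization": "Bearer %s" % access_token,
            "Content-Type": "application/json",
        },
        data=json.dumps(card),
    )


card = build_timeline_card("Hello from a web service", speakable=True)
```

Everything here happens server-side over HTTPS, which is exactly why a low-latency, device-to-device pairing like the MYO idea doesn’t fit the model.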
Of course that’s just a current limitation set by Google, and with exploits already winding their way around the Internet it’s not unreasonable to expect that such functionality could be unlocked should you want it. There’s also the real possibility that this limitation is only temporary and that, once Glass hits general availability later this year, it’ll become a much more open platform. Honestly I hope Google does open up Glass to native applications as, whilst Glass has enormous potential in its current form, the limitations put a hard ceiling on what can be accomplished, something competitors could rapidly capitalize on.
Google aren’t a company to ignore the demands of developers and consumers at large, though, so should native apps become the missing “killer app” for the platform I can’t imagine they’d hold off on enabling them for long. Still, the current limitations are a little worrying and I hope they’re only an artefact of Glass being in its nascent form. Time will tell if this is the case, however, and the day of reckoning will come later this year when Glass finally becomes generally available.
I’ll probably still pick one up regardless, however.