Monthly Archives

August 2010

Some emails are more equal than others

By | Content, New Media, Social networks | No Comments

You might have seen the news today that Google launched a priority email feature within Gmail.  I am still using Microsoft Exchange for the vast bulk of my emails so I haven’t been able to experience the power of Google’s offering myself, but Jason Kincaid on Techcrunch thinks ‘it’s fantastic’.

This is a feature I would love to have in my email client, and I hope that it comes to Outlook soon, maybe via a plugin.  I have to process a couple of hundred emails per day (excluding spam) and whilst most days I’m successful in getting to Inbox zero, every week there is a day or two when I fail in that ambition.  When that happens I spend increasing amounts of time searching through my Inbox for important mails – which is a real drag, particularly given that the reason I haven’t got to Inbox zero is that I’m going through a busy patch.  So a priority email feature in Outlook would really help me.
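Google hasn’t said exactly how its importance ranking works, but purely as an illustration of the kind of heuristic involved, here is a minimal sketch (my own invented signals, addresses and thresholds, not Gmail’s algorithm) that scores a message on things like whether you normally reply to the sender:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    addressed_directly: bool        # you are in To:, not just Cc:
    on_thread_you_replied_to: bool

# Hypothetical per-sender reply rates, gathered from your own mail history
reply_rate = {"partner@fund.example": 0.9, "newsletter@shop.example": 0.01}

def priority_score(msg: Message) -> float:
    """Naive importance score: higher means more likely to matter."""
    score = reply_rate.get(msg.sender, 0.1)     # do I usually reply to this sender?
    if msg.addressed_directly:
        score += 0.3
    if msg.on_thread_you_replied_to:
        score += 0.4
    return score

def is_important(msg: Message, threshold: float = 0.5) -> bool:
    return priority_score(msg) >= threshold

print(is_important(Message("partner@fund.example", True, False)))      # True
print(is_important(Message("newsletter@shop.example", False, False)))  # False
```

In practice a service like Gmail presumably learns the weights from each user’s behaviour rather than hard-coding them, but the shape of the problem is the same.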

The thing that interests me more about this development is the way it will change email usage.  If our email clients start telling us that certain emails are important, and therefore others are not, then we are going to spend less time reading the unimportant ones.  For many ‘Inbox zero’ will become ‘Inbox zero for the important emails and check the others once per week’.  Then ‘check the rest once per week’ will become ‘once per month’, then ‘once every three months’ and so on.  Ask yourself when you last checked your spam folder.

Some people have already declared 'email bankruptcy' and given up on getting back to everyone who has emailed them.  As we all get more and more email, it is inevitable that more and more people will take this step.  Gmail's Priority Inbox will accelerate that trend.

People already treat their Facebook and Twitter feeds like this, so it is a small step to become comfortable with emails flowing through our systems without being seen.

I don’t think this is a bad development.  I buy into the ‘river of news’ concept first explained to me by Stowe Boyd.  The ‘river of news’ idea is that in the age of information overflow we can’t hope to stay on top of everything that happens but we shouldn’t worry about that, because if something is important multiple sources will bring it to our attention. Therefore, if it is important we will see it at some point, so long as we keep a reasonably close eye on our news and message feeds.

As an example – I try to keep an eye on Techcrunch and Techmeme and catch most deals when they are announced via those websites.  If I miss something there, I often learn about it via email from a friend or someone related to the deal.  If I don’t hear about it via email, sooner or later someone will tell me face to face, or I will read an article about something else which mentions it.  I can’t think of an occasion recently when a deal of even minor importance has happened and I haven’t heard about it somewhere within a day or two.

As I mentioned at the beginning of this post I am still using Exchange.  In fact our whole partnership is using Exchange and it wouldn’t be straightforward for me to switch to Gmail even if I wanted to, and much as this might not be the fashionable thing to say, I’m not sure I want to switch.  I have a Gmail account I use for some personal mail, and in my experience, for the volumes of email I read and write Outlook is still quicker – I have also read blog posts from Fred Wilson and others complaining about slow response times from Gmail.

I take a lot of care to keep my laptop running fast, including regular rebuilds of the entire machine, and I do that because performance is incredibly important to me.  I do a lot of stuff (mostly blogs, web apps and email) and it drives me nuts when my machine and/or services run slowly, because it means I don’t get through my work.  Worse still, poor response times can sap my motivation to get stuck into tasks like clearing my email backlog.

Twitter Weekly Updates for 2010-08-29

By | Weekly Twitter digest | No Comments


Establishing authority on the web

By | Advertising, Facebook, Google, Social networks, Startup general interest, Yahoo! | 4 Comments

Companies have been building ways to establish authority and credibility online since the beginning of the web.  Yahoo started life as ‘Jerry’s guide to the world wide web’, a site where Yahoo founders David Filo and Jerry Yang lent their authority to what they viewed as the best sites.  Google’s link based algorithm did the same thing, but scaled better because it was automated.  Turning from search to commerce, one of eBay’s great innovations was the supplier ratings system, which put enough trust into the system for people to buy from other people they didn’t know.

More recently a lot of the innovation seems to be around looking for ways to lend authority to entities rather than content – entities can be people or businesses (often small businesses).

On the small business side reviews and recommendations are the most obvious way to extend the ebay concept to entities like restaurants, clubs, bars and other local businesses, and for the last couple of years a number of companies have been doing that with varying degrees of success – e.g. Yelp and Qype.

Where things are starting to get exciting now is a focus on people.  A couple of weeks back Fred Wilson wrote about a UK company called PeerIndex which is getting started in this space with an algorithm that establishes authority by analysing social media activity to determine the extent to which people are listened to (Azeem Azhar, founder of PeerIndex, is a friend of mine).

The underlying activity being measured here is the sharing of links, and I’m writing this post this morning after reading on Techcrunch that sharing widget company ShareThis have started measuring social reach by tracking the number of times a link is shared AND the number of times it is clicked.  Incorporating data on clicks is a good step as, to my mind, a link that is shared and clicked has a lot more significance than a link that is merely shared.  (ShareThis is a DFJ Mercury company.)
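To make that concrete, here is a toy sketch (my own illustration, not ShareThis’s actual methodology) of why weighting shares by whether they were clicked changes the ranking:

```python
# share_events: (sharer, clicks the share generated), e.g. pulled from a sharing widget's logs
share_events = [
    ("alice", 12), ("alice", 7),
    ("bob", 0), ("bob", 1), ("bob", 0),
]

def social_reach(events, click_bonus=3.0):
    """Score each sharer: one point per share, plus a bonus when the share got clicked."""
    scores = {}
    for sharer, clicks in events:
        scores[sharer] = scores.get(sharer, 0.0) + 1.0 + (click_bonus if clicks > 0 else 0.0)
    return scores

print(social_reach(share_events))  # alice outranks bob despite sharing fewer links
```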

Facebook’s Like button is also all about establishing authority – in this case via crowd sourced data.

So where is all this headed?

It seems to me there are parallels with the online advertising world.  There is an increasing amount of authority related data swilling around and companies that have the traffic to help create more data will be in a good place (e.g. Facebook and ShareThis), but perhaps the most value will accrue to those that build services off the back of algorithms that crunch the data to tell us something useful (e.g. Facebook and startups like PeerIndex).  I also think that this data could be profitably incorporated into ad targeting services.

To close I’m going to pull out a couple of interesting facts that ShareThis found.  Firstly, more links are now shared through Facebook than over email (45% of the total compared with 34%, with Twitter in third place on 12%), but secondly, email links are much more likely to be clicked on than Facebook links (a 91% chance of click through on email compared with 80% on Facebook).  This suggests a difference in the way people view content shared via the two services, which is relevant for authority calculations.  This data is from the Techcrunch post I linked to above and is uncorroborated.  The click through rates seem a little high to me.
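Taking the quoted figures at face value, a quick back-of-the-envelope calculation (my arithmetic, not ShareThis’s) shows how share volume and click-through combine per channel:

```python
# Figures as quoted above: each channel's share of links shared, and the
# chance that a link shared via that channel gets clicked
channels = {
    "Facebook": (0.45, 0.80),
    "Email":    (0.34, 0.91),
    "Twitter":  (0.12, None),   # no click-through figure quoted for Twitter
}

for name, (share_of_links, ctr) in channels.items():
    if ctr is None:
        print(f"{name:8s}: {share_of_links:.0%} of shared links, click-through not quoted")
    else:
        clicked = share_of_links * ctr
        print(f"{name:8s}: {share_of_links:.0%} of shared links x {ctr:.0%} click-through "
              f"= roughly {clicked:.0%} of all shared links get clicked via {name}")
```

On those numbers Facebook still generates more clicks in absolute terms (around 36% of shared links versus 31% for email), but the gap is narrower than the share-volume figures alone suggest.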


Google’s acquisitions – a European take

By | Exits, Google | 14 Comments

The graphic towards the bottom of this post was on Techcrunch this morning.  It shows Google’s acquisitions over the last nine years.  It is interesting for a number of reasons (not least size, (sub) sector, accelerating rate, and reason for acquiring) but I’m going to focus on geography.

I did a bit of analysis on the location of the last thirty acquisitions, which takes us back to July 2007, and that yielded the pie chart below.

[Pie chart: location of Google’s last 30 acquisitions]

The US bias is stark.  I am sure Google is aware of all the great innovation going on outside the US and will be looking to change this balance in favour of the rest of the world (indeed I know some of the people inside Google who are trying to make that happen), but this shows how much there is to do.  I’ve listed the companies by geography at the end of the post.

If anybody has any pointers I’d be interested to see comparable data for the other big tech acquirers, particularly in the internet space, and also for small internet acquisitions in the UK/Europe generally.

UPDATE: Ian Darroch just sent me some Cisco data he pulled together a few years ago.  Of the 106 acquisitions they made from 1993 to 2005, 85% were in the US (90 out of 106).

[Graphic: Google’s acquisitions over the last nine years, via Techcrunch]

Location of the last 30 deals

US – Jambool, Slide, Instantiations, Metaweb, ITA, Invite Media, Simplify Media, Global IP solutions, Bump Technologies, Agnilux, Episodic, Docverse, Picnik, Aardvark, Appjet, Teracent, Gizmo5, Admob, reCAPTCHA, On2, Omnisio, Postini, Image America

Israel – LabPixies

UK – PlinkArt

Finland – Jaiku

South Korea – TNC

Unknown – reMail (Y-Combinator company though, so probably US), Zingku
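For anyone who wants to reproduce the split in the pie chart from the lists above, a quick tally (my own sketch; the lists as written add up to 29 names):

```python
# Tally of the acquisitions listed above, grouped by the locations given in the post
deals = {
    "US": ["Jambool", "Slide", "Instantiations", "Metaweb", "ITA",
           "Invite Media", "Simplify Media", "Global IP solutions",
           "Bump Technologies", "Agnilux", "Episodic", "Docverse",
           "Picnik", "Aardvark", "Appjet", "Teracent", "Gizmo5",
           "Admob", "reCAPTCHA", "On2", "Omnisio", "Postini", "Image America"],
    "Israel": ["LabPixies"],
    "UK": ["PlinkArt"],
    "Finland": ["Jaiku"],
    "South Korea": ["TNC"],
    "Unknown": ["reMail", "Zingku"],
}

total = sum(len(companies) for companies in deals.values())
for region, companies in sorted(deals.items(), key=lambda kv: -len(kv[1])):
    share = 100 * len(companies) / total
    print(f"{region:12s} {len(companies):2d}  ({share:.0f}%)")
```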


Profitable streaming services – will movies get there before music?

By | Apple, Google, Microsoft, TV | 12 Comments

Netflix, the US DVD rental cum video streaming business, is out cutting $1bn deals with movie studios for streaming rights, and Hulu is contemplating an IPO – both developments which suggest the premium video streaming business is starting to reach maturity.  The music streaming business, by contrast, is still finding its way, and is characterised by conflict between the record labels and streaming service providers, none of whom are cutting $1bn deals or preparing to IPO.

The interesting thing here is that even though the traditional music business model is collapsing, the music industry is more loath to embrace streaming than the movie/TV industry, whose legacy model is merely starting to decline (the number of US pay TV subscribers suffered a quarter on quarter decline for the first time ever in Q2 this year – detail here).  Paradoxically, it might be precisely because the music industry is in freefall that its executives are unable to countenance the short term sacrifices that moving to internet distribution might entail.

The FT has a very good piece of analysis on developments in video streaming today which makes a number of noteworthy points:

  • Analysts are divided on whether pay TV has a future [the first time I’ve seen this].  Some argue that Pay-TV’s advantages in live sport and prime time shows like American Idol will sustain it going forward, whilst others argue that “cable will go the way of the landline phone industry.  It is nothing more than an empty pipe which the internet will replace”.  [I’m in the second camp.]
  • HBO plans to launch its own streaming service and won’t make its content available via any other sites.  [I think this is the way forward for large content companies.]
  • Netflix is in the first skirmishes of what could turn into a full-blown bidding war with the cable companies for movie rights.  The Girl with the Dragon Tattoo was available on Netflix before pay TV.  [A bidding war seems likely to me.  Exclusivity drives subs like nothing else (look at the way Sky’s UK business was built on its Premiership football rights) and we may be on the cusp of a general land grab for streaming customers.]
  • To enjoy streamed video services you need a 2Mbps internet connection to your home.

The other interesting piece of the movie/TV streaming conundrum is the device on which the streams are watched, and if it is to be the TV, how the streams will get from the PC to the TV.  Anecdotally, increasing numbers are watching direct on their laptop screens, and there are a number of companies looking to sell set top boxes and/or technology embedded into TVs which will help bridge the living room – not least Google, Apple and Microsoft.  And, of course, people can simply hook their PC up to the TV, either via cable or wireless.  That’s what we’ve done in our house, and it seems to me like it could turn out to be the simplest solution for many others.


Musings on app stores, portals and search

By | Apple, Google, Mobile | 5 Comments

Reading this Techcrunch piece on Chomp, a provider of search and discovery for iPhone apps, prompted me to explore a question that has been circling in my mind in a somewhat unstructured fashion for a couple of weeks now – namely what is the difference between an app store and a portal?

It seems to me that from a consumer perspective an App Store is really a portal dedicated to selling software written for a particular platform.  It provides search and discovery, payment services (app purchases and virtual goods) and reviews, and it provides a stamp of approval guaranteeing certain minimum standards, for example no malware – let’s call this last piece ‘validation’.

Search and discovery is typically done very badly.  This is the reason for Chomp’s existence and I’m sure many of you will have suffered as I have trying to search for apps.  The weekend before last I was looking for an app to play BBC Radio One on our iPad and couldn’t find anything in the app store (I bought two, but neither delivered).  I also searched on Google, but to no avail.  When I tweeted my frustration I got two recommendations, one of which we’ve been using happily ever since (TuneIn Radio).  I’m sure this story is pretty typical, and there really ought to be a technology solution which saves us from having to resort to social media.

Of course, App stores (or at least the Apple App Store) perform one other function – they are the only way to legitimately load a new application onto your phone.  This is great for the phone manufacturer who can take 30% of all purchases, but it is hard to see how it helps anyone else. 

It is of course possible to separate search, discovery, reviews, and validation from the payment and download function.  This will become important once we have the same apps running on different platforms.  Users of any given platform will benefit from reviews and validation of apps that run on other platforms and will most likely want a view of what is most popular across all platforms, not just their own.  This implies they will most likely look to third party portals for search and discovery, rather than the one provided by the manufacturer of their phone.
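To make that separation concrete, here is a rough sketch (my own illustration, with hypothetical names) of how the platform-neutral pieces (catalogue, reviews, validation) could sit apart from the platform-specific payment and download step:

```python
from dataclasses import dataclass, field

@dataclass
class AppListing:
    name: str
    platforms: list                                  # e.g. ["iOS", "Android", "web"]
    reviews: list = field(default_factory=list)      # platform-neutral review scores
    validated: bool = False                          # passed a third party's malware/quality checks

@dataclass
class StoreFront:
    """Platform-specific piece: only payment and download remain here."""
    platform: str

    def purchase(self, listing: AppListing) -> str:
        return f"Charging via the {self.platform} store for {listing.name}"

# A third-party portal could rank listings across platforms by review quality,
# then hand off to the relevant StoreFront only at the point of purchase.
catalogue = [AppListing("TuneIn Radio", ["iOS", "Android"], reviews=[5, 4, 5], validated=True)]
best = max(catalogue, key=lambda a: sum(a.reviews) / max(len(a.reviews), 1))
print(StoreFront("iOS").purchase(best))
```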

If this scenario comes to pass (and I hope it does) Apple’s App Store will be reduced to providing payment and download services, and their 30% rake will look pretty steep, so they, along with other app store owners, will fight it.  Their control of downloads and payments are formidable weapons.

HTML 5, of course, offers the prospect of getting (web) apps on mobile devices outside of the app store process, which is why it is so important.  Or rather, might be so important: as I wrote in a comment on Fred Wilson’s blog last week, my recent experience with Google’s HTML 5 Maps and Google apps on my iPhone leads me to believe that there is a little way to go before the app paradigm faces a serious challenge.

I’m also starting to think that the longevity of the app paradigm versus open web standards will depend in large measure on who wins the hardware battle.  Open standards at the software level will probably only prevail if hardware manufacturers with a PC mindset prevail over those with a preference for closed ecosystems, if you get my drift.


The counter arguments to Kurzweil’s Singularity thesis

By | Ray Kurzweil | 6 Comments

This is the sixth and final post in a series summarising the key arguments of Ray Kurzweil’s The Singularity is near: When humans transcend biology.  The previous posts were:

--------------------------------------

Most of the disagreement with Kurzweil’s thesis stems from what he describes as ‘incredulity’ or ‘simple disbelief that such profound changes could possibly occur’.  Many people think he is simply talking about too much change too quickly.  The key counter arguments are variations of ‘the world is more complex than Kurzweil allows for and these changes are far more difficult/will take longer/may never happen’.

These arguments are applied in the specific to the different strands of Kurzweil’s Singularity thesis – e.g. hardware development, re-engineering the human brain in software and the nanotech revolution.  For examples look in the comments on the posts in this series, or check out this recent conversation on Hacker News where people debate the depth of Kurzweil’s understanding of the brain and one contributor suggests it will take 75 years for us to reverse engineer the brain’s functionality, rather than the 20 years Kurzweil is predicting.

Kurzweil dedicates a chapter of The Singularity to countering the arguments of his detractors.  His broad response to the ‘incredulity’ criticism is to point out the long history of exponential rates of increases in all key technologies (covered in the first post in this series) and to repeat the point that our brains aren’t wired to notice when we are on exponential curves, but instead interpret the progress as linear (i.e. the tangent of the exponential) and hence routinely underestimate the pace of change.  He also notes (perhaps displaying a little hubris…) that humans have a long history of resisting notions that threaten the accepted view that our species is special – be it Copernicus’s insight that the earth was not at the centre of the universe or Darwin’s idea that we are only slightly evolved from other primates.
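To make the ‘tangent of the exponential’ point concrete, here is a tiny illustration (my own numbers, not Kurzweil’s): take something that doubles every two years and compare the true curve with a straight-line projection based only on today’s growth rate.

```python
import math

doubling_time = 2.0                  # years; illustrative only
rate = math.log(2) / doubling_time   # instantaneous growth rate at t = 0

def exponential(t, start=1.0):
    return start * math.exp(rate * t)

def tangent_at_zero(t, start=1.0):
    # Linear extrapolation using only the growth rate observed today
    return start * (1 + rate * t)

for years in (2, 5, 10, 20):
    exp_val = exponential(years)
    lin_val = tangent_at_zero(years)
    print(f"{years:2d} yrs: exponential {exp_val:8.1f}x, linear guess {lin_val:5.1f}x "
          f"(underestimate by {exp_val / lin_val:.0f}x)")
```

Within twenty years the linear guess is out by roughly two orders of magnitude, which is the kind of mistake Kurzweil accuses our intuition of making.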

My view, having soaked up a lot of Kurzweil related material over the last few weeks, is that (rather boringly) the truth lies somewhere in the middle.  Kurzweil has done a lot of good thinking and makes a lot of predictions across a lot of areas.  Inevitably some of them will be right, and some will be wrong, particularly when it comes to timing.  For me the important thing is not exactly when the brain will be reverse engineered or we will have nano-bots in our blood stream, but rather that we are headed in that direction.  This last point is one that not too many people seem to take issue with.  I also think that Kurzweil suffers from being a generalist, and as such he attracts criticism from specialists wanting to defend their turf.

Beyond incredulity there is one other objection to Kurzweil’s theories that I’m going to cover off in a little detail here, which is the argument that exponential trends don’t last forever.  Proponents of this criticism point out that most exponential trends we observe hit a wall, usually driven by the environment.  E.g. human population growth tails off when population density approaches critical levels.  Kurzweil’s response is to me convincing.  He recognises that the tailing off in computing power will come, but argues it is a long, long way off.  To prove the point he describes at some length how we will be able to use more and more of the matter in the universe for computational purposes, and shows how this will provide all the computing power we need to get us a long way past the singularity.
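And for the critics’ side of the argument, a similarly toy sketch: an exponential and a logistic (saturating) curve look identical early on and only diverge once the limit starts to bite, which is why ‘it has been exponential so far’ does not by itself settle where the wall is.

```python
import math

def exponential(t, r=0.5):
    return math.exp(r * t)

def logistic(t, r=0.5, ceiling=1000.0):
    # Same early growth rate, but saturates at `ceiling`
    return ceiling / (1 + (ceiling - 1) * math.exp(-r * t))

for t in (0, 5, 10, 15, 20, 30):
    print(f"t={t:2d}  exponential {exponential(t):10.1f}   logistic {logistic(t):8.1f}")
```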

For completeness I’m going to close by noting that there are a number of other criticisms that are beyond the scope of this post and which in my opinion Kurzweil adequately covers off in his book – e.g. brains are too complex to model, quantum computing will be required and is impossible, lock in to legacy technologies will prevent progress, and various philosophical and religious arguments which hold machines can’t be conscious.  In most of these cases my (relatively uneducated) view is that Kurzweil’s arguments are much the stronger, although none of these issues are black and white.

--------------------------------------

So that is it for the ‘Kurzweil series’ – I haven’t enjoyed writing it as much as I expected I would, largely because the subject is much heavier than my normal posts.  That said, I have learned a lot :-).  Thanks to everyone who commented and retweeted.


On Marks and Spencer’s ecommerce ops and legal disclaimers

By | Startup general interest | No Comments

A couple of weeks back I ordered two jumpers from Marks & Spencer’s website. I have to say that it has been a pretty frustrating process from start to finish, or rather almost finish, because I haven’t got them yet.

The annoyances:

  • the jumpers weren’t in stock 3-4 months ago when I first tried to order them – disappointing as they are a bog standard item
  • I could only ship to my billing address – usually I bill to home and ship to the office
  • delivery took 2-3 weeks
  • I only got notified of delivery the day before and was given a twelve hour window

All of this is way off market in my opinion.  I post it here to give hope to pure play ecommerce businesses who are competing with traditional retailers who have recently opened up a web operation.  It is surprising how poor many of these companies are when it comes to ecommerce.

There was one final annoyance, which is what resulted in me writing this blog post.  The email they sent announcing the delivery contained the following sentence:

Please note, this email indicates our acceptance of your offer to purchase the above.

Very kind of them to ‘accept my offer to purchase’, I’m sure you would agree.

I’m guessing a lawyer told them to put that in the email. I can’t believe anyone else would think it was a good idea. If I’m right and it was a lawyer then this is an example of a decision to mitigate an unquantified legal risk at the expense of the customer experience.

I make this point because decisions like this should be made following a cost benefit analysis, rather than simply because a risk is there (unquantified).  When ecommerce companies decide how big to make their delivery windows they typically undertake a cost benefit analysis (sketched roughly after the list below), weighing up the extra cost of offering a smaller window against:

  • the loss of business they would suffer from people deciding not to shop because the window is too big,
  • the cost of the increased number of failed deliveries that comes with a big window, and
  • the impact of increasing the delivery charge.
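Purely to illustrate the shape of that calculation, here is a rough sketch with invented numbers (not M&S figures or anyone else’s):

```python
# Illustrative only: invented numbers to show the shape of the trade-off
ORDERS_PER_DAY = 1000
MARGIN_PER_ORDER = 15.0        # GBP contribution per order
FAILED_DELIVERY_COST = 10.0    # GBP cost of a re-delivery

def expected_daily_cost(extra_delivery_cost, lost_custom_rate, failed_delivery_rate):
    lost_margin = ORDERS_PER_DAY * lost_custom_rate * MARGIN_PER_ORDER
    failed = ORDERS_PER_DAY * failed_delivery_rate * FAILED_DELIVERY_COST
    extra = ORDERS_PER_DAY * extra_delivery_cost
    return lost_margin + failed + extra

# Wide 12-hour window: nothing extra per drop, but more shoppers walk away and more deliveries fail
wide = expected_daily_cost(extra_delivery_cost=0.0, lost_custom_rate=0.05, failed_delivery_rate=0.08)
# Tight 2-hour window: costs more per drop, fewer lost sales and failed deliveries
tight = expected_daily_cost(extra_delivery_cost=1.50, lost_custom_rate=0.01, failed_delivery_rate=0.02)

print(f"12-hour window: roughly GBP {wide:,.0f} per day in lost margin and costs")
print(f" 2-hour window: roughly GBP {tight:,.0f} per day")
```

The point is not the answer, which depends entirely on the inputs, but that the same expected-cost framing could be applied to the legal risk above.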

In my experience legal risks are rarely subjected to this sort of analysis.  Rather, it is common for a legal risk to be postulated and a potentially massive loss or impact discussed, but then little attempt is made to assess the likelihood of the risk materialising. It is then a brave executive who sticks their neck out and decides to take that risk when the alternative is to incur a small ongoing cost – be it real expense or a general worsening of the customer experience.

I think the world would be a better place if we all made more effort to quantify legal risks and were more supportive of executives who advocate accepting some legal exposure in order to advance their other business goals.

Twitter Weekly Updates for 2010-08-22

By | Weekly Twitter digest | No Comments


Kurzweil part 5 – the impact

By | Ray Kurzweil | 2 Comments

This is the fifth post in a series summarising the key arguments of Ray Kurzweil’s The Singularity is near: When humans transcend biology.  The previous posts were:

--------------------------------------

Kurzweil begins his chapter titled ‘The Impact…’ with the phrase “a panoply of impacts”, which makes the point that the range of impacts is extremely far and wide, with the potential to change almost every part of life as we know it.  I will give a small number of examples of positive impacts in this post, before turning to a brief discussion of the downside.

I have held off much discussion of the impact until now because many of the potential scenarios seem fantastical, and might serve to reduce confidence in the other arguments and predictions.  Most of the things I have discussed in this series up to now have had a 20-30 year time horizon.  These scenarios go beyond that.

A complete understanding of our genetic make-up and a command of nanotech will enable us to build devices that plug directly into and interact with our conscious minds.  This is the obvious next step from the rudimentary brain implants in use today for purposes including treating Parkinson’s disease.

One application that is much talked about is to integrate with the optic nerves to introduce computer generated images into our visual field – augmented reality inside the eyeball, if you will – this would replace in-visor displays for fighter pilots and other military applications and could also be used to augment our memories – e.g. to remind us of people’s names as we see them across the room.

Perhaps more exciting is the possibility of full immersion virtual reality.  A computer could take over your entire visual field and by integrating with your other senses make it seem as if you were in another place, in every sense.  Applications could include games and virtual meetings (which would become indistinguishable from real meetings).

Similarly, our memories could become downloadable and storable elsewhere for back up purposes – making every memory retrievable (should we want it).  Taking it a step further our entire mind could be downloaded, and therefore uploaded again somewhere else – in a biological or non-biological body.  The essence of what it is to be ‘me’ in this scenario shifts from the heap of flesh and bones it is today to pure information.  I could then travel at the speed of light and exist (live) indefinitely.

On the more immediate horizon (20-30 years, at least to get started) is what Kurzweil terms ‘programmable blood’.  Nanotechnologist Rob Freitas has already produced designs to replace our red blood cells, platelets and white blood cells with nano devices.  Programmable blood has the potential to circulate itself around the body, removing the need for a heart; to transport oxygen much more efficiently, enabling us to hold our breath for vastly extended periods of time (and ultimately to dispense with breathing altogether); and to download information on new diseases and pathogens via wireless connections to the outside world, targeting and removing dangerous foreign bodies before the patient experiences any symptoms.

There are, as far as I can see, two types of downside to these developments, specifically to the genetics, nanotech and robotics (strong AI) revolutions.  The first is that they come with increased risks to life and society.  Advances in genetics raise the possibility of terrorists releasing fatal designer diseases into society, nanotech advances raise the real risk of self-replicating nano-scale devices enveloping the whole world (the grey goo problem), and strong AI carries the risk of the machines running rampant and using their vastly superior capabilities to destroy humankind.  I think Kurzweil is spot on in his response, which is to say that the benefits outweigh the risks and we can’t stop it anyway – if we were to ban development in the UK it would continue in the US, or in China, or Iran or Pakistan.  I think everyone would agree it is far better for leading scientists to be operating in countries where the risk of irresponsible development is lower.  There are also activities we could, should, and are undertaking to minimise each of the specific risks, and any other risks that exist or emerge – these activities are centred around policies, regulation and self-regulation.

The second fear that these discussions bring is that in this vision, where we all become cyborgs, we will lose what it means to be human.  We will lose touch with our bodies and our minds, and if we are reduced to information then what are we?  This is a philosophical and possibly religious question and one where views are formed as much in the gut as in the mind.  I will say only two things.  Firstly, for me this view of the future is much more exciting than frightening – the possibilities for increased exploration and (self) understanding are endless.  Secondly, our sense of what it is to be ourselves has grown as we have evolved from no consciousness, to barely conscious, to a more refined consciousness today – so there is nothing new about changes in what it is to be human.

On Monday I will close out this series with a post examining some of the critiques of Kurzweil’s position.
