Using software to analyse the content of pages and then serve ads based on the results does two things for you:
- Allows targeting of ads based on the content of the page – e.g. read that a blog is about cars and serve a car ad, or, getting more sophisticated, read that it is about football and serve a beer ad
- Allows owners of user-generated content sites to avoid serving ads for brands next to types of content they don't want to be associated with – e.g. topless teenagers on Myspace.
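Both uses above boil down to the same mechanic: classify the page, then either pick a matching ad or suppress ads altogether. A minimal sketch of that idea, assuming a hypothetical keyword-based classifier (the topic lexicons, ad mappings and blocklist are all invented for illustration, not any real product's data):

```python
# Toy contextual classifier: match page text against small topic lexicons,
# then pick an ad category or suppress ads entirely.
# All lexicons, mappings and the blocklist are illustrative only.

TOPIC_LEXICONS = {
    "cars": {"car", "engine", "horsepower", "sedan", "mpg"},
    "football": {"football", "goal", "striker", "league", "match"},
    "adult": {"topless", "nude", "explicit"},
}

AD_FOR_TOPIC = {"cars": "car ad", "football": "beer ad"}
BLOCKED_TOPICS = {"adult"}  # brands won't appear next to this content

def classify(page_text):
    """Return the best-matching topic, or None if nothing matches."""
    words = set(page_text.lower().split())
    scores = {t: len(words & lex) for t, lex in TOPIC_LEXICONS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def choose_ad(page_text):
    """Pick an ad for the page, or None for blocked/unknown content."""
    topic = classify(page_text)
    if topic is None or topic in BLOCKED_TOPICS:
        return None  # serve no targeted ad at all
    return AD_FOR_TOPIC.get(topic)

print(choose_ad("new sedan with a 300 horsepower engine"))  # car ad
```

A real system would of course use statistical classification rather than hand-built word lists, but the decision flow – classify, block, then target – is the same.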
This is a big deal: at a stroke, site owners can increase CPMs and serve ads on inventory that they previously couldn't use. This could be great for blog aggregation plays like Feedburner (which I love) and Federated Media, and more importantly for social networks. As a byproduct, social networks could use the same software to uncover unsuitable behaviour and identify paedophiles.
There is, of course, some devil in the detail. The analysis of the page will not be perfect, so advertisers will need to be educated to expect some mistakes (that is also the situation today – for example, on football sites it is commonplace for ads to be served for brands that compete with the club's sponsor). The analysis will also have to extend to photos and videos to do this properly – whilst I have seen plenty of text analysis, I haven't seen much on the picture side.
I'm thinking the technology need only be a fairly simple reworking of classic search/categorisation technology – distilling the contents of a page down to a few key words and phrases. All of which makes it surprising to me that we haven't seen more progress in this area. I feel a bit out on a limb here, though, and would be interested in other people's experience.
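The distillation step could be as simple as term-frequency scoring with a stopword filter, as sketched below; the stopword list, minimum word length and cut-off are arbitrary choices for illustration, not a claim about how any real ad system works:

```python
# Sketch: distil a page down to its most frequent meaningful terms.
from collections import Counter
import re

# Tiny illustrative stopword list; real systems use much larger ones.
STOPWORDS = {"the", "a", "an", "is", "it", "and", "of", "to", "in",
             "on", "for", "with", "that", "this", "was", "are", "be"}

def key_terms(text, n=5):
    """Return the n most frequent non-stopword terms in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [term for term, _ in counts.most_common(n)]

page = ("Arsenal beat Chelsea in the league match. The match turned on a "
        "late goal, and the league title race is now wide open.")
print(key_terms(page, 3))  # 'league' and 'match' rank first
```

In practice you would weight terms by rarity across the whole web (TF-IDF style) rather than raw frequency, so that common-but-unremarkable words don't crowd out the genuinely topical ones.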