New Firefox Feature Blocks Behavioral Ads

Mozilla, the developer of the Firefox browser, is working on a feature that will allow users to opt out of online behavioral advertising.

The goal is to give users "a deeper understanding of and control over personal information online," Mozilla's head of privacy said in a blog post published on Sunday (see the diagram below).

The feature will allow users to configure their Firefox browser to tell websites and advertisers that they would like to opt out of any advertising based on their behavior, Alex Fowler wrote in his blog post. The user's preference is communicated to websites and third-party ad servers via a new "Do Not Track" HTTP header, which is sent with every click or page view in Firefox.

http://firstpersoncookie.files.wordpress.com/2011/01/mozilla-dnt-diagram3.png
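
For the technically inclined, here's a minimal sketch of what sending that preference looks like from the client side. This is not Mozilla's code: the header name "DNT" with value "1" follows Mozilla's proposal, the URL is a placeholder, and Python is used purely for illustration.

```python
# Minimal sketch: attaching the proposed "Do Not Track" header to a request.
# Assumptions: header name "DNT", value "1" = opt out of behavioral ads;
# http://example.com/ is a placeholder, not a real ad server.
import urllib.request

req = urllib.request.Request(
    "http://example.com/",
    headers={"DNT": "1"},  # the user's opt-out preference
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```

The important design point is that nothing in the header enforces anything: a cooperating site or ad server reads the incoming request, sees DNT: 1, and voluntarily skips behavioral targeting. That is why this is an opt-out signal rather than a technical block.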


Qwiki Alpha Launch - Tigho

I was just invited to the Qwiki Alpha, the interactive "information experience" platform that creates multimedia-rich wikis algorithmically out of data sets, instead of by user input and peer review!

Beyond simply being fascinating and amazingly cool, Qwiki has profound implications for the future of search and data organization.

Please note that according to Qwiki:

1. This experience was not generated by humans. It was generated by machines.

2. This experience is completely curated.

3. The experience is completely interactive.

I wrote a few weeks ago about how excited I was about the possibilities of Qwiki, and now they have launched the Alpha version to select users. On Friday, Qwiki also announced a round of funding from tech-celebs Eduardo Saverin (Facebook co-founder) and Jawed Karim (YouTube co-founder).

I cannot wait to test this out more, but see below for an entirely computer-generated entry about the word "Wiki."

Differentiation Through Great UX - Tigho

This is a great slide show about the future of real estate tech that I had the privilege of seeing Brian Boero and Joel Burslem of 1000Watt Consulting present at Classified AdVentures’ Property Portal Watch Workshop last week.

They are insistent upon UX and design becoming more integral to the future of the web, and I agree that search will (and needs to) become more intuitive, taking "the search out of search," as they say.

I might be a bit of a techno-utopian marketer for expecting the web to eventually serve me everything I want immediately, rather than making me slog through pages of bundled media and ads, but I don't see the days of the vertical content provider (like AOL or Yahoo) being reincarnated as a cultural norm. Everything is trending in the opposite direction.

Ads are now targeted on ever-more granular (albeit sometimes invasive) criteria. Social and hyperlocal filtering lend additional credibility to information. And industries will need to build smarter platforms that deliver relevant results to consumers, relieving us of the clutter and confusion that typifies a web search, from an MSN homepage to an apartment search on Craigslist.

We have moved from the age of uniformity to the moment of the meta-niche, and it's about finding yours, individuating your content, and doing it for the customer.

For All Its Flaws, Wikipedia Is the Way Information Works Now

Wikipedia, which turns 10 years old this weekend, has taken a lot of heat over the years. There has been repeated criticism of the site’s accuracy, of the so-called “cabal” of editors who decide which changes are accepted and which are not, and of founder Jimmy Wales and various aspects of his personal life and how he manages the non-profit service. But as a Pew Research report released today confirms, Wikipedia has become a crucial aspect of our online lives, and in many ways it has shown us — for better or worse — what all information online is in the process of becoming: social, distributed, interactive and (at times) chaotic.


According to Pew’s research, 53 percent of American Internet users said they regularly look for information on Wikipedia, up from 36 percent of the same group the first time the research center asked the question in February 2007. Usage by those under the age of 30 is even higher — more than 60 percent of that age group uses the site regularly, compared with just 33 percent of users 65 and older. Based on Pew’s other research, using Wikipedia is more popular than sending instant messages (which less than half of Internet users do), and is only a little less popular than using social networking services, which 61 percent of users do regularly.

The term “wiki” — just like the word “blog,” or the name “Google” for that matter — is one of those words that sounds so ridiculous it was hard to imagine anyone using it with a straight face when Wikipedia first emerged in the early 2000s. But despite a weird name and a confusing interface (which the site has been trying to improve to make editing easier), Wikipedia took off and became a powerhouse of “crowdsourcing” before most people had even heard that word. In fact, the idea of a wiki has become so powerful that the document-leaking organization WikiLeaks adopted the term even though (as many critics like to point out) it doesn’t really function as a wiki at all.

Most people will never edit a Wikipedia page — like most social media or interactive services, it follows the 90-9-1 rule, which holds that 90 percent of users simply consume the content, 9 percent or so contribute occasionally, and only about 1 percent become dedicated contributors who do most of the work. But even with those kinds of numbers, the site has still seen more than 4 billion individual edits in its lifetime, and has more than 127,000 active users. Those include people like Simon Pulsifer, once known as “the king of Wikipedia” because he edited over 100,000 articles. Why? Because that was his idea of fun, as he explained to me at a web conference.
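
As a quick back-of-the-envelope illustration of the 90-9-1 rule (the community size below is made up for the example, not Wikipedia's real numbers):

```python
# Illustrative arithmetic only: the 90-9-1 rule applied to a
# hypothetical community of one million readers.
readers = 1_000_000
lurkers    = int(readers * 0.90)  # only consume content
occasional = int(readers * 0.09)  # contribute now and then
dedicated  = int(readers * 0.01)  # do the bulk of the editing
print(lurkers, occasional, dedicated)  # -> 900000 90000 10000
```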

Yes, there will always be people who decide to edit the Natalie Portman page so that it says she is going to marry them, or create fictional pages about people they dislike. But the surprising thing isn’t that this happens — it’s how rarely it happens, and how quickly those errors are found and corrected.

With Twitter, we are starting to see how a Wikipedia-like approach to information scales even further. As events like the Giffords shooting take hold of the national consciousness, Twitter becomes a real-time news service that anyone can contribute to, and it gradually builds a picture of what has happened and what it means. Along the way, there are errors and all kinds of other noise — but over time, it produces a very real and human view of the news. Is it going to replace newspapers and television and other media? No, just as Wikipedia hasn’t replaced encyclopedias (although it has made them less relevant).

That is the way information works now, and for all their flaws, Wikipedia and Jimmy Wales were among the first to recognize that.

-via gigaom.com

The Decade's 10 Best Digital Ad Campaigns

The future of advertising may be unclear, but these 10 campaigns have definitely helped shape it. Here's a breakdown of the decade's top digital promotions (we're looking at you, Subservient Chicken), as selected by industry leaders at the One Club, which recognizes excellence in advertising.


Google's New Honor System for Highlighting Original Journalism on the Web

A lot of content on the Web today is syndicated across multiple sites. For Google News, that's a problem: the service has to determine which of these sources to pick as a headline. Today, Google introduced two new metatags that allow publishers to give "credit where credit is due," as the company puts it, by highlighting original sources and indicating when something is a syndicated copy. Google will use this information to rank stories on Google News.

The two new tags that Google introduced today are syndication-source and original-source. The syndication-source tag can be used on a syndicated copy to indicate the location of the original story. The original-source tag should be used to highlight the URL of "the first article to report a story." A story that draws on material from a variety of original sources can include more than one original-source tag to point to each of them. Both tags can also point to the current page's own URL, so publishers can call attention to their own original reporting. Google has published details on how to implement these tags on your site in its help documentation.
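
As a hedged illustration (this is not Google's official tooling; the helper function and URLs are hypothetical, while the tag names come from Google's announcement), a publisher's templating code could emit the two metatags along these lines:

```python
# Sketch: rendering the syndication-source and original-source metatags
# described above. The tag names follow Google's announcement; this helper
# and the example URLs are hypothetical.
def source_metatags(syndication_source=None, original_sources=()):
    """Build <meta> tags for a page's <head> crediting original reporting."""
    tags = []
    if syndication_source:
        # On a syndicated copy, point back at the original version.
        tags.append(
            f'<meta name="syndication-source" content="{syndication_source}">'
        )
    for url in original_sources:
        # May appear more than once; may also point at this page's own URL
        # to claim original reporting.
        tags.append(f'<meta name="original-source" content="{url}">')
    return "\n".join(tags)

print(source_metatags(
    syndication_source="http://example.com/wire-story.html",
    original_sources=("http://example.com/first-report.html",),
))
```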

For now, Google still calls this an experiment and is only using the syndication-source tag in its rankings, to distinguish among groups of duplicate articles. The original-source tag is "only being studied" and doesn't factor into Google's rankings yet.

It is worth noting that the hNews microformat, which was developed by the Associated Press and the Media Standards Trust, already offers similar functionality, including a tag for identifying the originating organization for a news story. According to Google, though, "the options currently in existence addressed different use cases or were insufficient to achieve our goals."

Can You Trust the Internet?

The problem with this system is that it is based on trust, as Search Engine Land's Matt McGee rightly notes. Nobody can stop a spammer from marking unlicensed copies of a story as original sources, for example. In its FAQ for these tags, Google says that it will look out for potential abuse and either ignore the source tags from offending sites or completely remove those sites from Google News.