As online content absorbs and adjusts to the new characteristics of Web 2.0 publishing, the quality of online content has increasingly taken a backseat to the context within which that content is published.
Is context becoming more important than quality in the online content publishing equation?
Photo credit: Oz Photo
While traditional business analysts pick holes in social media and new media points fingers at the flaws in mainstream online publishing, standards for content are dropping across the board.
In this great, forward-looking article, which I strongly recommend to all independent online publishers, John Blossom looks at how to strategically address the need for high-quality content in an online universe increasingly driven by context and aggregation forces.
Intro by Robin Good
"Quality is as quality does"may not be a saying that came out of Forrest Gump's mouth but it's a simple formula that seems to be proving itself on the Web as traditional sources of quality content lose audience share to search engines and social media sites.
At the same time, though, the ever-increasing popularity of social media sites does not always seem to be balanced by mature quality control. But don't mistake immature techniques for inadequate potential: the techniques used to generate social media are carving out a new path to content quality that's here to stay.
Professional publishers oftentimes find themselves railing against the Web as a devil's den of half-baked information while hailing the quality of their own content. And, as underscored by ongoing issues with Wikipedia content quality and conflicts of interest, there are some real concerns out there in the world of user-generated content.
Yet online content quality is suffering on the traditional side of the equation these days as much as on the "new" media side. For every time that a weblog like Engadget manages to be taken in by a fake memo on a major product announcement, there are instances such as paidContent.org correcting a major newspaper's claim, based on an unattributed source, of a deal between Google and major U.K. news outlets. And then there's venerable Nature magazine's claim (premium) of measurably higher content quality in Wikipedia compared with the long-established Britannica - a claim debunked when an online journal published Britannica's response exposing it as a highly questionable piece of research.
In this crazy mix of pots and kettles everyone seems to have a bit of soot on their face. Social media has a lot going for it but the economics of today's social media scene have a very dark side as well.
Weblogs are bulking up with more professional editorial staffs to take on traditional publishers, but their quality is oftentimes compromised by the rush to cover too many stories too thinly.
At the same time traditional news outlets continue to cut editorial staffs back to the bone - and then some - in a race to build online revenues in time to offset sinking print revenues and soaring print costs.
The net result is an explosion of half-baked online content from all corners striving to attract online audiences whose perceptions of the news cycle have shrunk down to the attention spans of teenagers and foreign currency traders.
A recent New York Observer article points to a major culprit in the flight from quality in online content: search engines. Forbes.com, one of the more successful and growing online business media portals, is noted by the article as having a notably high level of editorial staff turnover, linked by one disgruntled ex-employee to its being a "page-view sweatshop." In the era of auditable circulation numbers, concerns about advertising revenues could be separated fairly cleanly from the editorial process.
Today each and every article produced by a publisher winds up being its own independent publication in the eyes of search engines trying to perceive its relative value - and hence determining its ability to draw in ad revenues.
In essence, each article becomes a brand of one.
Add in the increasingly standard technique of creating links to news content in social media services and the editorial merits of a given story are fast outweighed by other techniques that are likely to drive its publisher's revenues.
How does one address the need for high-quality content in a context-driven publishing environment?
Here are a few thoughts as to how and where quality content will survive and thrive:
While forming well-researched articles and information sources does require a great deal of quality control, the experience of the Web points towards evolutionary quality control as the most promising route for publishing.
Having every fact and figure exactly right at a fixed point in time was a "must" in the era of print-oriented publishing. Content management systems and simpler tools such as weblogs and wikis make it far simpler to publish revisions online, but editorially we're still caught oftentimes in the print-oriented quality cycle.
The experiences offered by search engines and social bookmarking services suggest that people perceive quality on a given topic as a highly movable feast.
Being able to evolve content quality on a continuous basis therefore becomes at least as important as any initial efforts.
Although Wikipedia's editorial processes are far from perfect and worthy of some skepticism, Wikipedia has served as a critical proving ground to demonstrate that open social editing processes can scale effectively.
The PLoS ONE experiment with online collaboration is developing peer-reviewed scientific research articles successfully through an open comment and review process that supplements traditional peer reviewing.
Not every peer review process need be as open as PLoS ONE or Wikipedia. But, as the Web offers the broadest opportunity for peer input, it would appear that the quality of audience engagement in developing materials is perhaps as good a measure of quality as the engagement of audiences in a finished product.
As much as tools such as wikis, weblogs and social bookmarking are about what people write, they're also important for what they bring together as reference content through links, comments and embedded content.
Social media is challenging search engines as a starting point for finding answers to questions, in part because people come to trust the insights and expertise of specific communities to provide both their own insights and insights from their own research.
Answer-oriented communities such as Yahoo! Answers, WikiAnswers and LinkedIn Answers provide audiences with the ability to vote on answers to specific questions - a competitive aspect to publishing that helps to both aggregate potential high-quality content and to rank its value.
If you need some help accepting these suggestions take a close look at traffic statistics for major Web sites and note a disturbing pattern: sites focused on traditionally authored content are dropping in ratings across the board as content created and aggregated by social media and search engines continues to rise.
"Quality is as quality does" is a formula that challenges both traditional publishers and new media sources to consider the evolving nature of content quality.
In this in-between period in which social media techniques are still very young but very popular we'll continue to have confusion about what's quality content.
But don't mistake growing pains for permanent awkwardness.
Quality is moving quickly - and permanently - towards the heuristics that drive social media.
Original article by John Blossom published on May 30th, 2007 as "The Quality Gap: The Race for Context Pushes Content Quality to the Sidelines" on Shore.com.
Find out more about John Blossom and the management consulting services of Shore Communications Inc., covering the business of enterprise, media and personal publishing, at Shore.com.