How Should We Improve Seeking Alpha's Comment Rating System?
There are a handful of very interesting themes in this discussion (each deserving of its own thread), but I wanted to focus on just one: the "rating" of commentators. I have tried to group my thoughts around a couple of themes. If I could boil my (unfortunately) long comments below down to one question, it would be:
Where have you seen other rating/ranking systems and what can we learn from them?
My (random) thoughts:
While there are lots of good ideas about how to build a better self-generating, data-driven rating system, there is an acceptance that any transparent system can be gamed. It's no surprise that Google's relevance rankings are fairly opaque and constantly evolving. While that minimizes (eliminates?) gaming, it requires a not insignificant, ongoing investment in managing the algorithm (just look at the whole SEO industry). Which is fine ... if that is your main business, but a big challenge if it is not. (Is this a space SA should be playing in? Is there a third-party rating system that could be licensed?)
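As a concrete illustration of a simple, self-generating approach (nothing in this thread specifies an algorithm, so this is just one well-known technique, not SA's method): ranking commenters by the lower bound of a Wilson confidence interval on their up-vote fraction, rather than by raw vote counts or a naive average. This rewards a consistent track record over a lucky small sample, though like any transparent formula it could still be gamed with coordinated voting.

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the true up-vote fraction.

    A commenter with 9 up / 1 down ranks above one with 1 up / 0 down,
    because the larger sample carries less uncertainty.  z=1.96 gives a
    95% confidence level.  (Illustrative sketch only.)
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0  # no votes yet: no evidence of quality either way
    p = upvotes / n                       # observed up-vote fraction
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / denom
```

For example, `wilson_lower_bound(9, 1)` comes out well above `wilson_lower_bound(1, 0)`, which is the behavior a naive average (90% vs. 100%) gets backwards.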
There is an interesting piece comparing and contrasting the different food rating systems (e.g., Zagat, Michelin, Yelp, AAA). There are a lot of pros and cons to each system but, IMHO, one of the interesting insights is that "reliability" increases the more there is independent human intervention. While Yelp is "based on the opinions of real people," there are frequent allegations of manipulation. As we move toward the Mobil and AAA models, manipulation is less of an issue, but it is a bigger business-model challenge for Mobil/AAA to create and maintain those ratings. ("Restaurants pay a fee to be considered by AAA.")
It makes me wonder if this is a natural extension of the SA business. The SA contributor status and SA certification are examples of credibility and relevance filters on the information overload of financial commentary. Taking it a step further to provide a credibility/relevance rating for commentators would be another useful exercise. But how much effort would it require in human resources? How frequently would it be updated? Would it be accepted? Who would pay for the ratings - the rated or the users of the ratings (where does that sound like a familiar issue?)? Would people pay to be rated (even if the rating was bad)? Would (some) readers pay extra for a better rating system if it saved them time and increased the efficiency of their research process?
Epinions had an interesting hybrid model, with Category Leads, Top Reviewers, and Advisors. While they use some algorithm and some behind-the-scenes intervention, it does seem a complicated process. Having too many tiers only substitutes one information-overload problem for another.
So ... I was wondering: where have you seen other rating/ranking systems, and what do you think we can learn from them?
Dec 23, 2009. 01:09 AM