Coming change to the scoring system

After following many discussions on the forum about the scoring system, it became evident that serial stories tend to score higher than single-shot stories for various reasons. The most obvious reason for this trend is that readers who follow serial stories tend to be those who like them to begin with, so they will naturally score them higher.

This fundamental difference in reader behaviour makes it unfair to compare single-shot stories’ scores to serial stories’ scores when weighting each story’s score.

The change that I’ve decided to make to the weighting formula is to make it aware of the story’s serial status.

So the system will have two separate medians for each scoring period: one for single-shot stories and one for serial stories last updated in that period.
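The two-median idea can be sketched roughly like this. This is only a hypothetical illustration: the site’s actual weighting formula isn’t published, so subtracting the group median, the function name and the data layout are all my assumptions.

```python
from statistics import median

def group_adjusted_scores(stories):
    """stories: list of dicts with 'title', 'serial' (bool) and 'score'.

    Illustrative only: each story is compared to the median of its own
    group (serials against serials, single shots against single shots)
    instead of one site-wide median.
    """
    groups = {True: [], False: []}
    for story in stories:
        groups[story["serial"]].append(story["score"])
    medians = {serial: median(scores)
               for serial, scores in groups.items() if scores}
    return {story["title"]: round(story["score"] - medians[story["serial"]], 2)
            for story in stories}
```

With separate medians, a serial sitting 0.2 above its group’s median is treated the same as a single-shot story sitting 0.2 above its own group’s median, even though the serial’s raw score is higher.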

I’ve already tested this change in the development environment, and one thing became clear: serial stories’ scores will drop and single-shot stories’ scores will rise.

I believe this change will make for a weighting formula that is fairer to single-shot stories, which don’t benefit from the serials’ inherent reader bias.

Although my mind is kind of set on making this change, I thought I would get your opinion about it first.

Please use the form below to voice your opinion.

Update: Before you post telling me that longer stories are better, this algorithm change isn’t about Long vs. Short. It’s about serially posted stories vs. stories (long and short) posted in a single shot. This isn’t about giving an advantage to short stories; it’s about removing the formula’s bias against stories (long and short) posted in one shot. The change in the algorithm does not look at a story’s length, only at whether the story is posted in one shot or over time.

Update 2: Since most replies were negative, I cancelled the change.

Scoring System

I knew that as soon as I created this blog, the issue of scoring on the site would be brought up in the discussion of every article I ever post.

So, I thought I would address the issue with my first actual article.

I hope, whether you’re an author or a reader, you’ll read this article with an open mind and a bit of understanding. I’ve dealt with the subject for the longest time, since the scoring system started on the site in 1999. And rest assured, ever since the start, the subject has been brought to my attention by an author or a reader at least once a week, if not more often. Needless to say, I’ve put quite a bit of thought into it over the years, and in this article, I’ll try to share the results of those thoughts.

The concept of having people score an artistic work like a story is a difficult one to tackle.

Ideally, scores on the site would enable any reader to take a story’s score as a measure of what to expect from the story in every facet. The score should tell how well the story is written, how well its tale is told, how extensive the plot is, etc. So theoretically, the score should help you pick out the best stories to read, and theoretically, it should be infallible.

From a reader’s point of view: the scores should tell him/her how much they’re GOING to enjoy the story if they decide to read it.

From the author’s point of view: the scoring mechanism should give him/her 10s and everybody else a 6 or lower. Of course, unless the author is trying to find a story to read, then the score should reflect this need, temporarily at least.

But the reality is, there could never be a way for a single score to reflect every facet of a story. There could never be a single way to account for different people’s tastes and education levels, let alone their personal fetishes. For example, some people like cheating-wives stories and score them higher than anything else; others think those stories are an abomination that shouldn’t be discussed without mentioning the stoning of the wife at the hands of the raging masses, so they always score any story that fails to do that with a 5 or less. Does that mean the story is not good? Does it mean it’s not written well? Does the score take into account your personal stance on the genre?


I wanted to build that ideal system mentioned above. So the first iteration of the scoring mechanism, whose existence is probably not remembered by anybody, was a reflection of the ‘Celestial Reviews’ scoring method. The reader got to select three separate scores, one for each of the criteria: ‘Technical Quality’, ‘Plot & Character’ and ‘Appeal to Reviewer’; the points ranged from 1 to 10.

This mechanism stayed on the site for a whole week, and at the time, the site had about a thousand stories. In that whole week, the system gathered exactly 5 votes. Yes, you’ve read that correctly: ‘FIVE’ votes in total.

I can’t really tell why it was such a failure. Was it because it required too much work from the reader? Was it because most readers didn’t think they were qualified to pass judgment on subjects like ‘Plot & Character’ and ‘Technical Quality’?

I don’t know, but I know it was a miserable failure, and it needed rethinking on my part to make it work; it required me to lower my expectations of the readers.

The result of the rework is the current system. As soon as it was online, scores came pouring in. So the participation problem was resolved.

Soon after, like within a week, I came to the realization that there was something extremely wrong with the scoring mechanism. Scores were fluctuating wildly, and it was evident that what I had built was a weak system, and it was being abused. The system that I used to run the site at the time was very easy to use and implement, but very limited in what it allowed me to do. The only thing I could do was track the last IP address that cast a vote for a story and stop it from voting again; but since it tracked only the last vote, once somebody else voted, the first voter could vote again. Not perfect, but better than the initial system. That system stayed online from the fall of 1999 to June of 2001. At the end of that period, only 4 stories had a score higher than 9.

It took me that long to bring my skill level up to using a more powerful system. In June of 2001 I finished the re-implementation of the site using PHP and MySQL, and I instituted user registration. Now the site could tell who was voting, keep track of who voted for which story, and stop them from voting again. So while most people think that I use the site to harvest email addresses to sell to spammers (which I don’t), I had to require registration so that people couldn’t abuse the voting system easily; it’s not perfect, but it works well enough to be viable.
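The registration-based tracking amounts to one rule: one vote per user per story. A minimal sketch of the idea, assuming an in-memory dict standing in for the site’s actual PHP/MySQL table (names and layout are mine, not the site’s):

```python
# (user_id, story_id) -> score; a stand-in for a database table with a
# unique key on the user/story pair.
votes = {}

def cast_vote(user_id, story_id, score):
    """Record a vote; reject it if this user already voted on this story."""
    key = (user_id, story_id)
    if key in votes:
        return False  # duplicate vote, rejected
    votes[key] = score
    return True
```

Unlike the old last-IP check, this blocks a repeat vote no matter how many other people have voted in between.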

The most recent change since then came about a month ago, when I moved the voting mechanism to above the ‘To be Continued…’ or ‘The End’ line and changed the wording of the form itself. This had a tremendous effect: the rate of participation almost doubled.

Remarks about the system

Inflation of scores

Many people are pissed about the seemingly artificial high scores, and have commented on how unrealistic they think they are.

Unfortunately, there is no easy way around this one. It has to do with human psychology. People tend to be nice and illogical simultaneously.

First, they don’t like hurting authors on purpose, so if they like a story, they give it a high score, and if they don’t like it, many abstain from voting altogether. I’ve been told by many people that they won’t vote at all unless they think the story is good and deserves a 9 or 10. So the tendency for votes to land on the higher side can’t be avoided, because of this peculiar phenomenon.

Second, they tend to be affected by the existing scores of the story in question and the scores of other stories. For example, say John Doe is reading the story ‘Mary Had a Little Caboose’, which has an existing score of 9.2, and he has already read ‘House in the Mountains’, which has a score of 9.5. If John Doe liked ‘Mary Had a Little Caboose’ more than ‘House in the Mountains’, then his vote will automatically be a 10, regardless of what he really thinks of ‘Mary Had a Little Caboose’ on its own; maybe a 9. He’s now looking at it in the context of the other story’s score.

That effect causes an ever-rising top score on the top list, and it cannot be broken unless new readers are introduced to the site, bringing with them a fresh perspective.

Score Manipulation

I’ve often received messages from authors convinced that somebody must be tampering with the scores, or that somebody with a grudge against them is casting multiple low votes to lower their stories’ scores. While this is possible, it’s not possible at the scale that some people think.

When story scores change, they change because of valid votes 99% of the time. So if you’re an author and your story’s score drops, take it like a man and accept the fact that somebody doesn’t like your story. No story can please every person on every level.

Author and Reader Suggestions for Scoring Enhancements

Over the years, I’ve received many suggestions as to what I could do to make the scoring system ‘better’.

  1. Forcing readers to vote: Does not work and would make the site less appealing. Nobody likes to be forced to do anything.
  2. Formulas that involve download counts, story size, number of chapters, or consistency of downloads from one chapter to the next: none of them work. Most authors post their stories on other sites, so a reader can read one chapter here, one on ASSTR, one on EWP and another on ASSM. The counts would be skewed and unreliable at best. Number of chapters? How would that treat a long and repetitive story compared with a shorter story that is built tighter? How would chapter length be counted? I can’t force a format on the authors, so that’s out the window. Everybody knows that length and chapter count don’t correspond in any way, shape or form to the quality of a story. Every formula has an optimum variable combination that would boost a story’s score to the max while reflecting nothing about how good the story is. So we’re back to square one.
    Anything that somehow messes with the readers’ evaluation of the story is not acceptable; after all, the score should reflect the readers’ opinions and not a morphed version of them.
  3. Allowing people to change their scores: it’s feasible, but very resource intensive, and it messes with the site’s ability to thwart manipulation. However, I implemented the system to allow a reader to vote again after 125 days, so you can cast a different vote if a serial degenerates, or improves, over time.
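The 125-day re-vote rule from suggestion 3 could look something like this. A minimal sketch under stated assumptions: the in-memory dict, function names and date handling are illustrative, not the site’s actual PHP/MySQL implementation; only the 125-day window comes from the post.

```python
from datetime import date, timedelta

REVOTE_WINDOW = timedelta(days=125)  # the waiting period from the post

# (user_id, story_id) -> (score, date the vote was cast)
last_vote = {}

def vote(user_id, story_id, score, today):
    """Accept a vote, or a replacement vote once the window has passed."""
    key = (user_id, story_id)
    if key in last_vote:
        _, cast_on = last_vote[key]
        if today - cast_on < REVOTE_WINDOW:
            return False  # too soon to change the earlier vote
    last_vote[key] = (score, today)  # replaces the old vote outright
    return True
```

Note that the replacement overwrites the earlier vote rather than adding a second one, so each reader still counts only once in a story’s score.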

For now, the scoring mechanism is staying the way it is. There will be no change, unless someone can come up with a scheme that can somehow read the minds of all those who’ve read a story and reflect their thoughts and opinions of it, from the perspective of the reader who’s checking the score and trying to decide whether to read it or not, in one easy-to-understand decimal number.

It doesn’t simply end there, though.

I need to change the reviewing system on the site to enable any reader to submit an extensive review of any story. My thoughts so far have explored re-implementing the reviewing system alongside a second score for stories, derived from those reviews. So each story would have two scores: the existing score, let’s call it the popular score, and a new one, let’s call it the reviewers’ score.

So far this idea has shown merit, and it beats all the other ideas that I’ve come up with myself or that were suggested to me.

However, such a system presents its own challenges and limitations, so for now it’s in the analysis phase. It may or may not proceed into the planning and then implementation phases.

Update: 2004-11-01, 11:25 am: The future reviewing system will be accessible to anybody (whenever it’s done). At the end of a story, the reader will have the current voting form plus a link to submit a complete review. Reviews will be moderated, as in somebody has to approve a review before it shows up on the site and affects a story’s score; we don’t need reviews like ‘complete crap 1, 1, 1’ or ‘I liked it 10, 10, 10’, there is no point in those. The only way to really avoid silly or stupid reviews is to moderate them. Reviews should contain valid constructive criticism or a developed opinion.

No anonymous reviews, the reviewer’s user ID will show up on top of each review.

While the idea of restricting reviews to premier members does have merit, it would be too restrictive: many qualified and willing reviewers would be shut out, and the feature would be practically crippled. So there will be no restriction on who can and cannot review.