Scoring Changes Implementation Follow Up

It’s been ten days since the implementation of the new voting system, so I thought I would keep everybody up-to-date with how things are shaping up.

First thing to report is that voting is up. The number of basic votes cast per day, added to the number of TPA votes cast per day, is about 30% higher than the previous week’s count of basic votes. So it seems that many people were not using the old system and were encouraged by the new one. And for those who were afraid that the new voting system would lead to people moving away from voting, rest assured, it didn’t happen.

Second, the voting form wording change is affecting scores mildly. The median of votes cast after the change is lower; it currently stands at 8.00, compared to 8.82 prior to the change. Of course, the Q Score compensates for this difference, so Q Scores remain more consistent.

The number of TPA votes is about 30% of the number of basic votes. That’s quite high; I didn’t expect it to be. So, I’m not going to keep individual TPA votes indefinitely. I can’t. I haven’t decided how long to keep them; I’ll wait for future developments to guide this decision.

Implementation changes:

The Appeal component of the TPA is now a part of the Q Score. It’s processed as a basic score.

During the implementation, I changed the minimum number of votes needed to show the score on the site from 10 to 15.

On the expected-developments front, some authors and a few readers are still trying to find a concrete relationship between the Q Score and the regular score. Since they don’t have any hard numbers, many just end up confused and are already declaring the new system a failure.

My advice to those who are trying to correlate the scores is: Don’t. You don’t have the numbers, and even if I gave you the exact formula, you would still be missing the data that goes into it.

Yes, relative positions of stories may change between the old score and the Q Score, and that’s because the different posting and updating dates of stories affect the Q Score a lot. So unless you can dig the data out of the database, the exact formula won’t be much help.

The unexpected development is that so far, I’m getting a lot more support for the Q Score from readers and a few authors. I’ve received many requests to remove the old scores completely and just leave the Q Score and the TPA (many of those requests came from premier members). The reasons given are that the old score next to the Q Score confuses things, and that with all the data presented in the tables, there is too much info to scan through, making the site harder to use.

I’m starting to lean in that direction already. If I receive more such requests, I will start the process of removing the old average scores and leaving just the Q Scores. When/if I start that change, I will make it so that the TPA score is also hidden if the number of votes is below 15.

Scoring System Changes Implemented

Well, it’s done.

Today, I implemented the display changes to show the new system in action.

You can read the previous entries in this blog to see how these changes came about.

The first change is the voting forms. The old form has been redesigned with the new look and the wording has changed. It’s very close to the temporary change that I implemented at the beginning of December, but with one major difference: the 10 is no longer labeled like before; the current version says ‘Most Amazing Story’, which makes it more attainable.

Another change is the reversal of the order of the grades. It used to be that the 10 was the first item under the mouse, now it’s the furthest one. This is a subtle change, but one I implemented to encourage readers to think before they cast their vote.

The second change is the availability of the Expanded voting form. I designed the system to switch easily between the two forms and to let you set the default from within the form itself. This functionality relies on JavaScript. If you browse without JavaScript, or your browser doesn’t support the advanced functionality, and you want to use the Expanded voting form, you need to change the preference in the ‘My Account’ page. There is a new entry titled ‘Edit Voting Form Preference’.

The third change is the display in the listings. I added a new column to most listings pages titled ‘QScor’ (the ‘e’ is missing from the word score to keep the column as narrow as possible).

For those of you who didn’t read the previous blog entries, this column contains a new score calculated by comparing the story’s usual score with the median of the scores of all the stories posted in the same period as the story in question.

For example, if the story you’re looking at was originally posted in 2000, the Qscore is calculated based on the median (the midway point) of the scores of all the stories posted between 1998 and 2001. After 2001, it goes by year, up until today.

This calculation is designed to take into account the general voting patterns of the period to compensate for the constant upwards score creep. Stories posted in earlier years have lower scores generally, regardless of the quality of the story.

Both scores will be displayed side by side for the next year or so. After that, the average story scores (the old ones) will be removed and only the Qscore will stay.

Due to the change in wording of the voting form, newer votes will tend to be lower generally, resulting in lower average scores than what you’re used to. The Qscore is designed to compensate for that, so for newer stories, the Qscore will be more consistent. So you should start getting used to seeing and relying on the Qscore instead of the average score.

Theoretically, a Qscore of 6 and over means a good story. A Qscore of 6 is the equivalent of an old score of 8.8.

The last change is the addition of the TPA score. It stands for Technical/Plot/Appeal score. It’s a composite score derived from the new (optional) Expanded voting form. It works almost like the old form: it’s an average of all the votes cast, but each category is averaged on its own, and fractions are dropped. So if somebody gives a story a 9 for Technical Merit, an 8 for Plot and a 5 for Appeal, the story will have a TPA score of 9.8.5. If another person gives it 4, 5 and 10, its TPA score becomes 6.6.7. The count of the TPA votes is shown to the left of the TPA score.
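The per-category arithmetic described above can be sketched in a few lines. This is only a sketch; the site’s actual code isn’t published, and the fraction-dropping is inferred from the 6.6.7 example (the raw averages are 6.5, 6.5 and 7.5, which implies truncation rather than rounding):

```python
def tpa_score(votes):
    """Build a TPA display string from a list of (technical, plot, appeal)
    votes. Each category is averaged separately, and fractions are dropped
    (floor division), matching the 6.6.7 example above."""
    n = len(votes)
    return ".".join(str(sum(vote[i] for vote in votes) // n) for i in range(3))

# A lone 9/8/5 vote displays as 9.8.5; adding a 4/5/10 vote shifts it to 6.6.7.
```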

The TPA score is designed for authors and those readers who prefer to give detailed feedback to the authors.

The regular voting system still allows 1 vote per reader per story, and you can’t change it. The new Expanded voting form allows you to change your vote. However, if you vote with the expanded form, you can’t switch to an overall vote, and if you cast an overall vote, you can’t change it to an expanded one.

It’s a lot to digest, and the additional info displayed definitely makes the site’s pages more crowded, which I don’t like. But for a while, we’ll all have to live with it the way it is now, until most of you have gotten used to the Qscore and the TPA scores.

Six months from now (maybe earlier, we’ll see how things go), the top listings pages will get sorted by the Qscore.

A year from now the ‘Score’ column will go away.

90 days from now, the overall voting form will lose the numbers and will have labels only.

Update:

I’ve posted a follow up to this article:
Scoring Changes Implementation Follow up

Expanded Voting Form: Wording and Value Distribution

I’m working on implementing the new optional expanded voting form referenced in my previous blog entry Final Decisions. I need some feedback about the wording of the voting form.

The form will have three separate criteria for the reader to select values for, and I want it to be clear about the meaning of each. So for each criterion, I need a short, concise sentence that goes under each option to explain what the reader is selecting. It has to be as clear as possible, so as not to leave the reader confused about what they’re selecting, and short enough to be simple. So far I’ve come up with:

Quality:
Spelling and Grammar

Plot:
Thoroughness of the storyline

Appeal:
Appeal to your personal taste.

Also, I can’t display all the numbers, so I must combine the three criteria into a single value to display.

So far, the plan is for the relative score to be in its own column and the expanded score in another column.

So it’s going to be:

Size | Dnlds | Votes | Score | E Score | Q Score

The E Score is the expanded one and the Q Score is going to be the weighted score. (It’s going to be really confusing for a while.)

Authors can already see the Q Score in the stats page.

Now, to calculate the E Score value, I’m thinking:

Quality: 20%
Plot: 50%
Appeal: 30%

Quality can be easily fixed with a proofreader’s help. Appeal is a subjective value that varies by the reader’s personal taste. So the emphasis is on the Plot.
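With that 20/50/30 split, combining the three criteria into a single E Score value would look something like this. A sketch only; the two-decimal display rounding is my assumption, not a published detail:

```python
def e_score(quality, plot, appeal, weights=(0.20, 0.50, 0.30)):
    """Combine the three expanded-form criteria into one E Score value
    using the proposed 20% / 50% / 30% weighting."""
    wq, wp, wa = weights
    return round(wq * quality + wp * plot + wa * appeal, 2)

# A story voted 8 for Quality, 6 for Plot and 9 for Appeal would show 7.3.
```

Note how the Plot weight dominates: a strong plot lifts the E Score far more than flawless spelling would.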

I would like to hear as many opinions as possible about the wording of the form as well as the distribution of the values to be calculated.

Weekly Download Counter Change

I’ve made an important change to the site today. Tonight will be the last night that weekly counters are all reset to zero. From now on the system will keep track of the last seven days individually for each story.

So, the weekly counters will now reflect a story’s downloads in the last seven days regardless of which day you view the listings.

Previously, the site kept track of weekly downloads starting from the weekly reset time of Sunday night. So, if you view the download listings on Tuesday, you get the tally of two days’ worth of downloads. With the new system, viewing the top downloads list on Tuesday will give you the cumulative downloads since last Wednesday.
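The difference between the two schemes boils down to summing per-day counters over a sliding seven-day window instead of resetting a single counter on Sunday night. A minimal sketch of the idea (the real system presumably does this inside the database):

```python
from datetime import date, timedelta

def weekly_downloads(daily_counts, today):
    """Rolling weekly total: sum a story's per-day download counters over
    the last seven days, regardless of which weekday 'today' is."""
    return sum(daily_counts.get(today - timedelta(days=d), 0) for d in range(7))

# Viewing on a Tuesday now covers back through the previous Wednesday,
# instead of only the days since the Sunday-night reset.
```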

The old system had the effect that any story posted around midnight on Sunday had the biggest chance of staying on top of the download list for the rest of the week. Some authors took advantage of this by posting their updates and new stories on Sunday after 9pm EST, which of course is completely understandable; who wouldn’t want the biggest possible advantage?

With the new system, that changes completely and closes a long-standing loophole. Posting time has no advantage at all anymore, and posting in the middle of the week no longer carries a disadvantage.

The full effect of this change will only be felt next Monday, since the system doesn’t yet have daily stats for the last 7 days; it starts counting tonight. For this week, the weekly downloads will behave exactly like they did under the previous system.

Final Decisions

I want to thank everybody who commented on and contributed to the discussions in the three previous blog entries, and who made suggestions. It has been educational.

I’ve made up my mind on how to proceed next with regards to the voting system.

While I don’t like to lose any authors or readers, the current state of the scoring system is pitiful, and if left as it currently is, then it will only get worse over time, which, to me, is unacceptable.

The suggestions made by many were interesting to say the least, but a lot of them, while potentially useful and helpful, are not feasible to implement. Everybody has to remember that the site is a busy one and I don’t have unlimited resources. The site serves millions of pages per day, and any change in the page, in the processing, or in the data stored can have a huge effect on the site’s performance and its ability to cope.

For example, keeping individual votes indefinitely, in order to allow for dropping a certain percentage of votes or comparing each reader’s previous voting patterns, is not possible. While each piece of data could be relatively small, the cumulative numbers are huge. And it’s not just about storing the info; the processing power needed to handle all of it grows rapidly. Searching a database of millions upon millions of data rows is expensive in processing power. Also, backing up the data and shuttling it around the net for off-site backup gets really expensive, really fast.

Also, anything that requires processing each bit of info on the fly is a no-no. For example, the suggestion to change a story’s highlight color with its score requires evaluating every story’s score against the median for each page displayed. That requires multiple IFs for each row of each table. The site serves over a million listings pages per day, each with 10 or 20 rows, so every additional IF I add to the process has to be executed 10 or 20 million times per day by the server farm. Each listings page would need more than double the processing it already needs, which means I would need at least double the processing power that the site uses now, and that’s without taking future growth into consideration.

So, any solution would need to be simple, fast to process, easy on storage and easy on processing repetitions to be acceptable.

I understand what the system needs to do, and I understand what it takes to do the things that need to be done. And I’m the only one who knows what it costs to do each little additional thing.

So, in order to balance the needs of the authors, with the needs of the readers and the resources available, I’ve decided to implement the following changes:

1 – A rewording of the current voting form. Not as drastically different as the previous rewording, but something more sensible. The change will be implemented in two stages: in the first stage, the wording will be changed while keeping the number values associated with each description; 90 days later, the numbers will be removed and the voting form will have descriptions only.

2 – The previously noted vote weighting system will be implemented (it is very necessary); however, both scores will be displayed side by side for a whole year. In the first six months, the listings pages ordered by score will use the old score, and in the next six months they will be sorted by the weighted score. After a year, the average score will not be shown anymore (remember, displaying both scores requires more bandwidth).

3 – The voting form will have two optional variations. The current reworded form will be the default one. Readers will have the option of using a more elaborate form that has three separate criteria to judge: Plot, Quality and Appeal.
Which form you see on the story page by default will be an option in each user’s preferences, and the form itself will have a switch.

I haven’t decided how I’m going to display the scores from the elaborate forms in the listings pages yet, or how it should affect the sorting in the listings pages ordered by score.

There are two options in that regard: display the average of each value, or have a single combined representation. We’ll see.

In the beginning, I will keep all the scores from the elaborate voting forms. Depending on how many people use them, I will decide later if I can afford to keep that data indefinitely or not. Users who choose to use the elaborate voting forms will be able to change their votes as long as the votes are kept.

Authors will be able to view how many of each vote their stories received.

Chapter voting: I’m not sure I’ll implement it, because of abuse possibilities. To stop abuse, I must track some data from each vote for each chapter. With over a hundred thousand possible users, combined with over 50,000 individual chapters currently on the site, the size of the data can be huge.

On the other hand, individual chapter voting can be done for authors’ eyes only, with each reader voting as many times as they like for each chapter, which in turn makes chapter voting almost useless. But if authors really want it, then it can be done. And please don’t suggest showing the vote value distribution; that would require keeping a lot more data.

Eventually, I will implement the capability to allow paying members to vote after the fact. They can download stories to read offline, so they should be able to vote on their next visit to the site for the stories that they downloaded in their previous session.

No mandatory voting. No mandatory comments while voting. No default voting for non-voters. Non-voters outnumber voters 20 to 1; no matter how wild the actual voting is, it wouldn’t make a blip if non-voters were counted as average. Say the default vote is a 6: then no story could score higher than 6.19 (with all 10s) or lower than 5.76 (with all 1s).
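The 6.19/5.76 bounds above are easy to verify. A hypothetical helper, just to check the arithmetic:

```python
def blended_score(real_vote, default=6, ratio=20):
    """Score a story would show if every non-voter were counted as `default`,
    with `ratio` non-voters for every real vote cast."""
    return round((ratio * default + real_vote) / (ratio + 1), 2)

# (20*6 + 10) / 21 = 6.19 with all real votes at 10;
# (20*6 + 1)  / 21 = 5.76 with all real votes at 1.
```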

As they say, hindsight is 20/20.

I now realize that the changes proposed previously were too much, too fast. Hopefully, with the new gradual phasing-in approach, people, both readers and authors, can acclimate to the coming changes.

Update: Just to give you an idea about the timeline of the changes:

Unlike the first attempt, the first thing to be implemented is the score weighting system.

Then the form wording will change. This way, the vote weighting system can compensate for any change in voting patterns created by the wording change.

After that, I’ll implement the three criteria voting system.

It will take a while to get done. Implementing these changes will take a lot of updates to the various pages on the site, and that takes time.

We’re talking about a month at least. So, patience is required.

Derailed

Well, it has been an interesting experience to say the least.

If you haven’t read my two most recent entries about changes to the scoring system on the site, you should read them.

The gist of this blog entry is to tell you that ALL changes to the scoring system have been canceled and reverted.

A recap of what happened.

For the last couple of years, story scores on the site have been edging higher and higher due to various psychological reasons. The median of all scores cast last year is about 8.87, which means that half of the stories are scoring above 8.87 and the other half below. The result is that story scores are, in general, extremely high.

Over the last year, I kept hearing from many authors how ridiculous the scores are becoming, and from readers how they can’t trust the system to guide them to the better stories. The phrase ‘I don’t read anything that is not scoring over 9.25’ came up very frequently.

So, because of these things, I set out to correct what’s causing these issues. I planned to do two things:

The first was to change the wording of the voting form to be more consistent and more realistic about what those votes reflect, which is the reader’s feeling about the stories they read. The current wording is somewhat confusing: some grades relate to feelings, others to the quality of the work. I seem to have made the grave mistake of wording the top score (10) entry in a way that few readers could justify giving a 10 to anything but the very best of stories (which, logically, is the way it should be).

The second was to create a different score evaluation system that took into consideration the date the story was posted and the average of all the stories posted in the same period of time. The new system would have taken care of the over-time creep of scores and made the playing field between old and new more level. It would also have leveled the playing field between older stories scored using the old wording and newer stories scored using the new wording, so there wouldn’t have been a large discrepancy between the meaning of scores on older stories and newer ones.

This new valuation system would have shifted the median of scores down to a more logical median. My goal was to have a median of 6.

I guess I aimed too far down.

On December 1st, 2006, I announced the upcoming changes, and implemented the vote form wording change.

The wording change had its intended effect, and stories posted after the change had a more moderate, more reasonable score average. Many seemed pleased with the change, although I only received two approvals, one from an author and the other from a reader.

However, the opposition was far more vocal and came from authors. I received many worried notes, and nearly ten authors asked for their stories to be removed if the score weighting were to take place.

Since I’m not in the business of pissing everybody off – and to me, ten vocal opposing people represent a hundred silent ones – I canceled the score weighting system implementation.

I left the new wording in the forms, and the newer stories continued to be scored lower on average. I received a few notes from authors angry about the unfairness of the lower scores, and one demanded that I remove his stories from the site.

I agree that having lower average scores is not fair, considering that readers won’t really know or care why the older stories have a higher score average than the newer ones. They’ll read the drop as a drop in story quality. I even received one note from a reader asking why all the newer stories were crap (based on scores).

So, last night, I reversed the change in the voting form, and in an ironic twist, had to use the same code that I created for the weighted scoring system to raise all the scores of the stories posted after the wording change to a level more in line with the scores of stories posted before it.

And now, a week after the start of the changes, we’re back to the same old system that many mocked, distrusted and complained about. We’re back to a kindergarten-like system where everybody is a winner and everybody gets an A for effort.

I don’t know what I’m going to do next, but I’ll think of something to keep the scores under control. I really don’t want to see the scores on the site compress any further, with the lowest scoring stories reaching 9+ and the high scoring ones reaching 9.9+.

If anybody has any suggestions to make the system work better, without confusing the hell out of authors, I’m all ears. And please, no suggestions of a complicated score weighting system. I had the perfect one and it was rejected. If there were supporters for the new system that I planned and announced, they mostly kept it to themselves.

Update: Please see my following entry about the subject.

Scoring Changes Follow Up

I canceled the planned changes. Too many authors asked for the removal of their stories from the site. Sorry for all the trouble.

This is a follow up to my previous blog entry about the changes to the scoring system. If you haven’t read that one, please check it to see what this whole thing is about.

This follow up is to address as many as possible of the comments received so far.

A simple clarification: the voting system itself is not really changing. It still works the same way. It’s the representation of the results that’s changing, to allow for a clearer distinction between tiny variations in the scores. I’m just shifting the median of all scores from whatever it is now to an artificial one of 6. For example, the current list of top scoring serials on the site contains eight stories with a score of 9.77. The new representation would simply magnify the smaller variations within that .77 bit.

As for suggestions offered, there were plenty, and that’s good.

A few things to clarify with regards to the nature of the site, to shed light on why some things are the way they are and why I can’t/won’t change some related things.

The site gets accessed from all over the world. In most places, internet access is not unlimited; many, many readers pay for every minute that they spend on the site. So a large chunk of the story accesses are quick downloads to read offline. I can’t force those readers to vote; voting only works when reading a story online.

Many of those worldwide readers don’t have English as their first language; hell, I don’t have English as a first language. More than half of the readers don’t feel qualified, and definitely aren’t qualified, to judge the grammar and sentence composition of the text they’re reading. I can’t force them to cast a grammar vote. However, they can tell whether they like a story or not.

Things not doable:

* Forcing readers to vote: Not good.
Readers should never feel that they must vote. This action would cause a lot of junk voting. It would be worse than not voting.

* Forcing readers to comment: Not good.
5% of all readers vote and less than 1% actually comment; trying to force those numbers up will drive people away from the site. Not good.

* Dropping a certain percentage of votes from the top end and the bottom end: That would require keeping individual votes indefinitely. Not doable; it requires too many resources.
Unless everybody is willing to chip in for a larger disk array for the site ($15,000+) and for the cost of hosting it ($600 per month), it is not possible to keep votes indefinitely.

* Allowing readers to change their vote later and allowing voting for stories previously downloaded: Not doable.
The site has a system in place for blocking score manipulation. Those changes would break it completely and make scores open to easy manipulation. That’s a bad thing.

* Changing the voting method to add an additional criterion like grammar: Not exactly fair.
Older stories that already had their votes cast would be at a severe disadvantage. Plus, it would require readers to vote on multiple criteria.

* Disallowing votes for serials until they’re completed: Not fair.
Many authors rely on votes to give them motivation to write. No votes means way less feedback.
Plus, doing that would create even more bias towards serials. If scoring were only allowed after a story is completed, then only those who stuck with the story till the end would vote, which by default means they liked it, and their votes would automatically be very high.

* Automatic vote casting for non-voters: Not good.
Since there is such a large difference between the number of downloads and the number of votes, casting a 6 for each non-voter means that the scores will never go above 6.5 or below 5.5; that’s even worse than it is now.

Things Doable:

Adding an additional voting panel for individual chapters. The results of this panel would be simply sent to the author, but not displayed on the site.

Adding an additional voting panel for grammar and stuff. The results of this panel would be simply sent to the author, but not displayed on the site.

—-

Thanks go to Aleph Null for his suggestion. He provided the solution that I needed to make the new system fairer for older stories. It’s so simple, I can’t believe that I didn’t think of it first. The new system will calculate the median for each year and then calculate each story’s weighted score depending on when it was posted or last updated.

—-

Everybody seems to think that I’m doing this as a spur-of-the-moment thing. I’m not. I created the initial code more than a year ago. I knew it would piss a lot of authors off. After all, having your scores go down from 9+ to 7+ is a bummer.

I’ve been thinking about the issue and monitoring the median for the last year and a half. And now, the median has reached a ridiculous level. The effect of the extremely high average of the scores was evident in the comments posted. Many said that they don’t read anything that has a score below 9. Why is that?

From the authors’ comments, it was clear again that authors’ expectations of the system are misplaced. Every author wants the system to be the equivalent of the film critics. Unfortunately, it’s not and it never can be. It’s more like an exit poll at the theatre door.

Just look at the reviews. There are 30 people on the site able to post a review for a story. I would invite everybody to count how many reviews are submitted per month.

I tried a multi-criteria voting system in 1999; it asked readers to rate three things: story line, quality and appeal. In its first week, 8 votes were cast. Just eight in a whole WEEK!

It was an abject failure.

People don’t want to think about it. It has to be a single easy choice. Anything other than that would be used by a slim minority of those already voting.

The new display method would be closer to showing what people are thinking instead of showing what they’re doing.

As for the ‘Impossible to improve’ option: I know it sounds ridiculous, but it’s on purpose.

The reason for it is best illustrated in Stormy Weather’s response:

Under the old system I rated stories 9s or 10s … and sometimes 8s. With the wording of the new system, the stories I read will be getting 9s and 8s and 7s. With the way 10 reads now, I can’t see myself giving it anymore… unless there’s something out there that really knocks me out of my chair.

The new wording is meant to keep the 10 for those who knock your socks off with their work. How do you really reward those authors who put so much work into their stories and whose great creativity results in a truly great story? Do you give them the same as you’re giving everybody else?

Is a 9.5 really meaningful when almost everybody is getting over 9.2?

I want people to stop and think for a bit before casting that vote.

And it seems that the new wording is being fairly effective. From a sampling of the most recently posted stories, the scores seem to be a bit more realistic.

The first two weeks after the new score display is implemented will be very rough, especially on me. I know I will hear about some extreme displeasure with what’s happening, and I’m definitely NOT looking forward to it.

We’ll all just need to get used to the new numbers. Lower our mental line in the sand for the new scores from 9 to 7 and everything will be fine soon.

Update:

I’ve been refining the system before full deployment and got some interesting numbers.

I’ve defined the set of periods that the system will use to decide which median value applies to a story, and got the following:


+------------+------------+--------+
| From Date | To Date | Median |
+------------+------------+--------+
| 1998-01-01 | 2001-07-01 | 8.25 |
| 2001-06-30 | 2003-01-01 | 8.44 |
| 2002-12-31 | 2004-01-01 | 8.60 |
| 2003-12-31 | 2005-01-01 | 8.65 |
| 2004-12-31 | 2006-01-01 | 8.93 |
| 2005-12-31 | 2006-12-01 | 8.87 |
| 2006-12-01 | 2008-01-01 | 8.33 |
+------------+------------+--------+

The 2001-07 date corresponds with when the system went from no login, where anybody could vote as many times as they wished for any story, to the login system, where nobody could normally vote more than once per story.

The 2006-12-01 date corresponds with the wording change in the forms.

As you can see, there is a definite rise in the general voting over the years. The only anomaly is the difference between 2005 and 2006, where the median dropped from 8.93 to 8.87. The explanation may not be obvious: the drop corresponds with moving the form from below the ‘the end’/‘to be continued’ line to above it, and with hiding the story’s score in the form, so readers couldn’t readily compare against the story’s existing score and didn’t have as much of an incentive to go higher.
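Given the table above, the period lookup amounts to finding which date range a story’s posting (or last update) date falls into. A sketch, treating the periods as half-open ranges keyed by their start dates (the exact boundary handling is my assumption; the table’s from/to dates overlap by a day):

```python
from bisect import bisect_right
from datetime import date

# (period start, median) pairs taken from the table above.
PERIODS = [
    (date(1998, 1, 1), 8.25),
    (date(2001, 7, 1), 8.44),
    (date(2003, 1, 1), 8.60),
    (date(2004, 1, 1), 8.65),
    (date(2005, 1, 1), 8.93),
    (date(2006, 1, 1), 8.87),
    (date(2006, 12, 1), 8.33),
]

def period_median(posted):
    """Return the median a story is weighed against, given its posting date."""
    starts = [start for start, _ in PERIODS]
    i = bisect_right(starts, posted) - 1
    return PERIODS[max(i, 0)][1]

# A story posted in mid-2005 is compared against the 8.93 median.
```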

The change is needed to future-proof scores. Say an author posts a story this year that scores 9.7, which is pretty good now. If nothing is done and the scores keep creeping up, that same score would look lame in two years compared to the stories posted then.

At the current rate of score creep, in a couple of years, stories that score below 9.25 would be considered crap and the top end would reach 9.9+.

Left unchecked, the scores will reach a level where everybody would be forced to vote a 10 for anything; otherwise it would fall below everything else at the time.

And for those worrying about the old scores: not to worry. Internally, the system will still work as it does now. The same scores are kept internally, but they’ll be displayed in an interpreted way. I’m not sure yet, but I may show both scores in the authors’ stats pages along with the median that each story is compared against.

Scoring System Changes

Since I implemented the scoring system on the site in 1999, it has been the only controversial part of the whole site. Many users found it useful, some didn’t trust it and some ignored it.

To authors, it’s even more troublesome. Many authors expect the score to tell them how they did in their writing; they want it to reflect the effort that they put into the story, regardless of the story’s content and its subject’s appeal to the readers.

Those concerns and expectations are not something that I can really address. Authors need to realize that the score simply reflects how much a reader liked the story and whether they recommend that others read it too. It’s like a thumbs-up signal.

However, there is a problem with the scoring system that I can address: Score compression.

Score compression is when votes, like they are now on the site, cluster mostly at one end of the scale. The last check revealed that the median of all scores on the site is 8.62!

A median of 8.62 means that half the stories on the site have a score of 8.62 or more. That means about 8000 stories are squeezed into a spread of less than 1.4 points, and anything below 9 reads as a bad score. 8.62 is so close to the top, it’s making scores meaningless.

The reasons for this compression are multiple.

  • Some readers never vote anything but 10; they’re nice people, they don’t want to hurt the author’s feelings.

  • Some readers vote only for stories they like. For stories they don’t like, they abstain from voting.

  • The psychological effect of high scores. The higher the scores, the higher the readers will tend to vote.

So, I’m introducing two changes to the system to be rolled out gradually.

The first change is to the wording accompanying the number scores in the vote form, and eventually I’m removing the numbers themselves. I’m proposing the following as the new list:

Amazing; Impossible to Improve
Excellent Story
Great Story
Good Story
Not Bad
Some Good, Some Bad
Not Good
Pretty Bad
Hated it
You Call this a Story!?

This way, there are no mixed signals. The old list was a bit misleading to authors, as it implied that the score might represent the readers’ judgement of how well the story is written. Words like ‘Needs Work’ imply that the reader noticed the errors in the story and was commenting on them.

This list is not final. I’m open to suggestions of a better wording that improves the distinction in your minds about the meaning of the score you’re casting.

The second change is the more drastic one. I’m replacing the current scores with weighted scores.

The new scores shown on the site will reflect the story’s score relative to the median of all scores on the site. This will have the effect of lowering all scores. I’ve implemented the formulas that calculate the weighted scores, and here is a sample of scores and their new values:

Old Score -> New Score
(average) -> (weighted)

10 -> 10
9.85 -> 9.56
9.5 -> 8.55
9 -> 7.10
8.62 -> 6
7 -> 4.93
6 -> 4.28
5 -> 3.62
4 -> 2.96
3 -> 2.31
1 -> 1

One thing to remember: the weighted score is relative to the current median, so a story’s score may change even if it received no new votes. If the median changes, then the story’s score will change.
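The exact formula isn’t published, but the sample table above is consistent with a simple piecewise-linear mapping that pins 1 to 1, the median (8.62) to 6, and 10 to 10, with the result truncated to two decimals. A sketch of that interpretation (an inference from the sample values, not the site’s actual code):

```python
import math

def weighted_score(old, median=8.62, mid=6.0, lo=1.0, hi=10.0):
    """Map an average score onto a scale whose median sits at `mid`,
    using two linear segments anchored at lo -> lo and hi -> hi."""
    if old >= median:
        new = mid + (old - median) * (hi - mid) / (hi - median)
    else:
        new = lo + (old - lo) * (mid - lo) / (median - lo)
    return math.floor(new * 100) / 100  # truncate to two decimals

# Reproduces the sample table: 9.5 -> 8.55, 8.62 -> 6.0, 7 -> 4.93, and so on.
```

Because the top segment is much steeper than the bottom one, tiny differences near the top of the old scale (9.5 versus 9.85) spread out into visibly different weighted scores.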

Hopefully, the wording change will make the votes that readers cast more reasonable, so that automatic 10s change to something more meaningful.

One problem I don’t know how to address is the fact that the more recent the story is on the site, the higher the average score is; this is related to the psychological effect of higher and higher scores. So if you have a reasonable solution, I’m all ears.

I know that scoring is a controversial subject but let’s all try to be as objective and reasonable as possible. I’m trying to make the system work for everybody the best possible way. I appreciate everybody’s contributions.

And before you fire off your reply: one thing I will not do is scrap the voting system, ever. So don’t even suggest it.