Recent Site Changes


I made a few changes to the site in the past few days. I thought I’d talk about them a bit.

The first two changes are to the way the voting mechanism and the feedback mechanism work at the end of stories. I’ve added some JavaScript to make the forms submit to the site and return feedback without having to reload the whole page.

The reason for the change is a better user interface. Previously (and currently, if you don’t have JavaScript active in your browser or don’t have a supported browser), when a reader clicked the vote button in the voting form or the send button in the mail form, the whole page would change, so to do both, one usually had to hit the back button once.

With the change, the form simply goes away and the text changes to reflect the result.
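
For the curious, the technique looks roughly like the sketch below. This is only an illustration, not the site’s actual code: the form id, the posting endpoint and the response format are all made up, and it uses the modern fetch API rather than whatever the site really does.

```ts
// Minimal sketch of an in-page form submission (illustrative, not SOL's code).
// Assumes a vote form with id "vote-form" whose action answers a POST with a
// short plain-text confirmation message.
function hookVoteForm(): void {
  const form = document.getElementById("vote-form") as HTMLFormElement | null;
  if (!form) return; // without JavaScript the form still posts the old way

  form.addEventListener("submit", async (event) => {
    event.preventDefault(); // stop the full-page reload
    try {
      const response = await fetch(form.action, {
        method: "POST",
        body: new FormData(form),
      });
      const message = await response.text();
      // The form goes away and is replaced by the result text.
      const note = document.createElement("p");
      note.textContent = message;
      form.replaceWith(note);
    } catch {
      form.submit(); // on any failure, fall back to a normal submission
    }
  });
}

hookVoteForm();
```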

Overall, it works very well. Most people have recent browsers and most people have JavaScript active.

The transition for the voting form was almost trouble-free, but the one for the mail form was quite bumpy. The bumpiness was mostly due to my inexperience with the new technology I’m using in the new forms; I’m learning it as I implement it. For many readers the mail form failed to send or to return any result, and it took me a couple of days to sort out why it was happening and how to solve it. It also took longer to sort out because very few people reported the problem; only two did.

To help me fix any current or future problems with the site, I would appreciate it if you reported them to me. The webmaster link on the site provides an easy way to do that.

So, if the new forms are not working for you, let me know. On the Webmaster’s contact page, select Bug Report as the message type.

» Webmaster’s Contact Page


19 responses to “Recent Site Changes”

  1. Speaking of changes, just curious: any idea why story chapters seem to have a new heading format? Previously, when I’d look under the SOL pages tracked in my IE6 History window, there’d be the story’s index as title-author & each chapter as title: chapter nn. Now, the chapters are headed by title author: chapter nn.

    Not that this is a problem, by any means. Just wondered.

    KB

  2. The change is to harmonize the file names used when saving pages with the HTML files downloadable as archives.

    When one downloads archives from the site, the files and folders are named Author_name@story_name.

    When you choose to save a page from the site, the browser picks the title of the page as the name of the file. With this change, both will have the author’s name first.

    So basically, I made the change for the sake of consistency.

    Lazeez

  3. Love the changes, Lazeez. I read the entry and thought “What the heck. So what.” But the scoring and comment section have a better feel.
    AFW

  4. I would like to make a suggestion about the voting. Would it be possible for a person who thinks a story is very good but not excellent to give it a 9.5?

    I have read many stories that I would not want to give a 10, but that were better than a 9.

  5. If you don’t want to give something a top score, then you shouldn’t give it a top score. Given the way I think SOL averages scores, and the way a lot of people seem to vote, I think you should always try to err on the lower end of your scoring anyway, just to have some lasting effect on the scores.

    I mean, most everything should be pegged as a 6 or so because that’s near the average. Anything that really pleases you? Somewhere around a 7-8. Best story you’ve read ever? THEN you roll out the 10…

    People don’t use the middle much, so to make scoring meaningful everyone should do their part to draw scores closer to the middle, except in those cases where something is actually extraordinary. And no disrespect to the authors at SOL, but I think everyone realizes that there’s a difference between “gives me a hardon” and “this is a great story.”

    Voting doesn’t always reflect that though, and it certainly wouldn’t be helped by pushing readers to make more choices on the top end. A “9.5” is probably a 9, or maybe even a 7, if you’re reflecting an interest in the averages.

    It would be interesting if Lazeez could find someone to help him write out a script which took all the scores at SOL and bell-curved them automatically. So a perfect 10 would stay a perfect 10, but by the time you got to 9.25 you might be pulling those scores down to 8.xx and such to reflect the story’s actual position within the ranks of the submissions to Storiesonline.
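
    Purely as an illustration of that bell-curving idea, the rescaling step could look something like the sketch below. The function name, the target numbers and the example scores are all made up, and a plain rescaling like this would pull a perfect 10 down too; keeping 10s untouched would need a slightly fancier mapping.

    ```ts
    // Illustrative only: spread raw story averages out around a chosen midpoint
    // instead of letting them bunch up near the top of the scale.
    function rescaleScores(raw: number[], targetMean = 6, targetSd = 1.5): number[] {
      const mean = raw.reduce((a, b) => a + b, 0) / raw.length;
      const variance = raw.reduce((a, b) => a + (b - mean) ** 2, 0) / raw.length;
      const sd = Math.sqrt(variance) || 1; // guard against zero spread
      return raw.map((score) => {
        const z = (score - mean) / sd;            // how far above/below the pack
        const curved = targetMean + targetSd * z; // place it on the new curve
        return Math.min(10, Math.max(1, curved)); // keep it on the 1-10 scale
      });
    }

    // Example: [10, 9.5, 9.25, 9.0, 8.5] spreads out to 8.25, 6.75, 6.00, 5.25, 3.75,
    // with the 10 still on top but the 9.25 pulled well below 9.
    console.log(rescaleScores([10, 9.5, 9.25, 9.0, 8.5]).map((s) => s.toFixed(2)));
    ```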

  6. Re the scoring, I typically score stories an 8 or a 9 based on the descriptions of “Good” and “Very good”. If I feel a story is very good, then I give it a 9. I give a 10 only to stories that I feel cannot be improved at all (with the exception of very minor formatting or proofreading errors).
    Part of the reason I don’t give out many lower scores is that I don’t vote on a story that I don’t finish (so I typically don’t vote on the first few chapters of a serial), and almost the only poor stories I finish are very short ones (and I generally don’t read the shorter stories).
    Most importantly, however, I try to give constructive feedback to the author, so they know how I feel they can improve – as most are interested in doing.

  7. Scoring a story of more than a few chapters is a problem.
    Since the expected number of chapters is not given, I do not know when the author has gotten into the swing of things. Some start well and then go bad. I think I would like a voting system that lets me vote early, then change my mind if needed.
    If I don’t vote early then the author feels that we are not interested.

  8. Voting before a story ends just means you have to wait for about 4 months to vote on how the story finally ended. Give that story a 10 after 1 chapter, and the author can post 10 more chapters saying “poop poop poop” and you gave it a 10.

    On the flip side, if you didn’t allow votes before a story is done, then many authors will feel like they are getting no feedback. Forget the fact that most readers seem to only know 3 values: 10, 9, and 1.

    There isn’t an easy way to fix the voting, because it would invalidate the previous voting. Older stories would have votes under one system, and newer stories would get the benefit/pain of the new system. We are basically stuck with an undesirable voting system (in 20/20 hindsight, of course) that has good intentions but that readers don’t use with any discretion.

    There was a time (way back in the day… lol) when a story with a score over 9 was really good. Now, unless it is a scat story, getting under a 9 means the author almost had to work at writing crap.

    They either vote with one hand, or they vote for an author who wrote a decent story before and feel he deserves a 10 no matter what.

    It’s one of the reasons I write reviews for SOL: to give readers some viable feedback, since the scores mean nothing.

    Andrew Johns

  9. A possible way to solve the “many chapters” problem is to allow everyone who has already voted to vote again when the last chapter is posted (maybe only for stories with more than, say, 4 chapters, or where the time between the first and last chapters is a week or longer).

    The only fair way to introduce a new scoring system would be to launch it in parallel and not convert the existing scores (maybe renaming those as a ‘traditional style’ score). A good model for the new style would be the three-aspect style used by reviewers. How you would stop people from giving a thin stroke story a 10 is difficult, though. Perhaps you have to ask permission to use it, which isn’t granted until you’ve cast, say, 30 votes whose mean score falls between certain values (maybe 4 and 8 – although I suspect my average would be higher than that, my modal average is almost certainly 9). This would eliminate all those who only vote high or low, although it may still allow people who only vote 10 or 1. Perhaps more complicated maths would solve that, or you could just disallow anyone whose modal score is either of those values (a rough sketch of such an eligibility check appears at the end of this comment).

    The point of this would be to allow everyone to vote as they do now, but also to give users who put thought into their votes a secondary ranking that is statistically significant and reliable over a range of scores (the current range of 9.5-9.8 would spread out over about 6-9.5).
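
    By way of illustration, the eligibility rule sketched above (30 votes, a mean between 4 and 8, and a modal score that isn’t 1 or 10) could be checked with something like this; the function name and data shape are hypothetical:

    ```ts
    // Sketch only: decide whether a voter qualifies for the hypothetical
    // secondary "new style" score, using the thresholds suggested above.
    function eligibleForNewScore(votes: number[]): boolean {
      if (votes.length < 30) return false; // not enough voting history yet

      const mean = votes.reduce((a, b) => a + b, 0) / votes.length;
      if (mean < 4 || mean > 8) return false; // screens out always-high/always-low voters

      // Modal score: the value this voter hands out most often.
      const counts = new Map<number, number>();
      for (const v of votes) counts.set(v, (counts.get(v) ?? 0) + 1);
      let mode = votes[0];
      for (const [value, count] of counts) {
        if (count > (counts.get(mode) ?? 0)) mode = value;
      }
      return mode !== 1 && mode !== 10; // screens out the 10-or-1-only crowd
    }
    ```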

  10. For the changes to the forms: Cool. A big improvement to the way things were.

    For the voting system: I have to agree that the current system is probably broken. The way things are now I usually don’t bother to even check out a story that has a rating of 9.0 or less. A year or so ago I would sometimes read a story with a score as low as maybe 7.0. It’s my suspicion that some jackass has written a script that is being used to heavily weight certain stories higher than what they deserve. Just look at any story that mentions scifi, esp or fantasy in the keywords. Most have at least a 9.5 rating and some really don’t deserve better than a 6.

    The trouble is that there are some stories that really should be ranked a lot closer to the top, but they have a lower score, probably because that same jackass didn’t like them. So unless I check every story on the site, I miss some good ones. I really wish things were more realistic.

  11. Well, I must agree with the previous anonymous poster about only reading stories rated 9 or above. Under the current system, if your story is below 9, it’s probably complete utter crap – if it was submitted in the last 12 months. I’m not really sure why that is, but it is what it is.

    As far as voting multiple times… the easy way to fix it is to allow voting on each chapter and aggregate that into a final story score. Easier said than done.

    I definitely miss Celeste… from the old ASSM days, but honestly, I’m here for a reason… and not other places… the layout/format/features/interface is what keeps me here. I know I miss stories because they aren’t cross-posted to this site, but oh well. If it’s good… it’ll end up here eventually and I’ll read it then.

    My thanks to Lazeez for this labor of love of his… I’m grateful. This is a BIG FAR CRY from the days when I would read on Usenet over 12 or 13 years ago… you were lucky to read CRAP! 🙂

  12. It seems that most everyone is overlooking the likeliest reason for the current state of the scores – although a42, I believe, partially explained it. Most people are reluctant to vote on a story they didn’t finish, and those are the very stories they’d vote low on. But even more so, many people are reluctant to provide the negative feedback implied by a low score. Whether this is from fear of discouraging developing authors, or from some innate politeness (which is hard to imagine in the online community), is uncertain.

    Typically, I won’t vote on a story if I found it readable but lacking any real merit. Dishonestly coded stories, or stories obviously lacking any proofreading/editing – those I do tend to vote on, and vote low on, now. Average stories I suppose I could give a 6-7 score, but I usually don’t bother. If the author accepts anonymous feedback, I might send email on the subject. (I’ve had authors respond pretty damn indignantly to even mild criticism, so I don’t feel like giving them an email address any longer. Not that all authors are like that, but it’s sometimes difficult to guess who will or won’t be reasonable.)

    Granted, there are those authors (and readers) who feel that any story deserves a 10, and who will vote accordingly. And then there are those who will try to skew scores – or “correct” them, relative to favored stories. Let’s face it, any system is going to be open to abuse. I doubt anyone could come up with a system which would produce accurate results without an incredible waste of resources. (I suppose that if readers had to rank stories from best to worst, and had to place every story read somewhere on that list, then a 10 could be granted to the story ranked highest overall, and all other stories rated relative to it. But the sheer amount of memory used by a database storing such info would render that infeasible, even if it’s the least “breakable” voting I can imagine. And considering there are several ways to scam that system, that’s not good.)

    Really, the system is about as good as can be expected. And btw… there used to be some pretty good stuff on USENET. It was rare, and you had to wade through a lot of crap that wouldn’t score better than a 7 here these days, but it was there.

  13. I think that if the system just defaulted a 5 or 6 for people who chose not to vote it would make a great difference in pounding home the bell curve properly. If you choose to not vote on a story then you obviously either disliked it enough to not finish it or it didn’t make an enormous impression on you so it was average.

    Of course, if there were anywhere near the room and bandwidth for such a thing I think allowing readers to look back on and change their voting history might be nice.

  14. That last suggestion, while a possible ‘fix’, has some serious drawbacks. In the case of continuing serials, if someone chooses not to vote until the story is completed, the story will receive a large number of ‘default’ votes. And unless those people who’ve held off on voting can later change the ‘default’ vote… the scores for long stories will – relative to the shorter stories – suffer dramatically.

    There are also scenarios in which someone starts reading a story but for some reason doesn’t finish and vote – perhaps a computer crash, or a loss of connection, or whatever – and then comes back to the story at a later date and decides it is worthy of a higher score. Again, unless the ‘default’ vote which was issued can be overridden at a later time, the approach of using a ‘default’ vote is flawed.

    Perhaps this approach would result in scores more along the lines of a normal distribution, but given the low percentage of votes compared to many stories’ download counts, the range of scores would become far too small to be useful, unless these newly-generated scores were somehow normalized. Either way, it’s likely far too much work for too little reward, from Lazeez’s pov.

  15. On the scoring side, I’m not always wanting to post just a number.

    I think it would be nice to have your profile list your interests, and then you could put in a vote for how the story appealed to those interests.

    I must admit that I don’t come to read the erotic stories; I come because I often find some very good and enjoyable reads, some of them erotic, some not so. It’s the story that pulls me in and keeps me coming back day after day. This normally means I am unlikely to read short stories (<30 or 50K) unless the description especially attracts me.

    I choose stories based on the author’s past work, the codes and the size [I want a story!]. This generally means that I won’t choose many stories I don’t enjoy, and when I haven’t enjoyed one it’s often because I misunderstood the description, etc. So I am unlikely to vote many stories low.

    I do wonder if authors’ average scores and number of stories could be included on the recently updated/new pages, etc.

    Personally I would find it interesting to see how the votes change as new chapters of serials get posted, but I suspect this is much too complicated to keep and show.

    But to finish, I say thanks to Lazeez for all the hard work and dedication in providing this excellent resource for authors and readers.

    Simon

  16. Sorry the changes didn’t work out for you. Thank you for your work on the site, and your efforts to continually improve it. Keep the faith.

  17. I’m sorry that you backed off so soon. It seems to me that the rating system needs some augmentation, especially to distinguish between the casual reader and those who have saved the story to their libraries. I am guessing that those who have saved a serial story to their library probably like it more than those who have not. Perhaps a number indicating “saves” or some such would augment the system, and give a more concrete indication of the value of the story, at least for serial stories.

  18. I’m glad Lazeez committed to making changes in the manner in which SOL stories are scored. I hope he hasn’t given up on that altogether. What has been in place is quite flawed, as I among others have been saying with rancor for quite some time. From what I can tell, what Lazeez currently outlines as a replacement is but another considerably flawed system. But I can’t be certain without knowing the mechanics of the unexplained weighting mechanism he wants to employ. Of course, there is no such thing as a perfect rating system for anything, which naturally includes stories on what mostly amounts to an erotica website, but so what? Both readers and authors profit immeasurably by having the rating system, so not to have one because it is destined not to be perfect is tantamount to tossing out the baby with the bath water – Lazeez is right to keep a rating system on SOL. The idea is just to fix it as best as can be done and to avoid substandard fixes which only marginally improve current performance. Lazeez claims to be open to a better idea. OK! I firmly believe what follows is that better idea.

    Unfortunately, fixing SOL’s rating system properly, so that it can adequately perform its many necessary functions and address all of its simultaneous task requirements, is difficult to discuss in an open forum due to complexity. What I am about to say sounds much like a discussion in a graduate statistics class. I’m sorry there’s no way to avoid sounding so technical, because the SOL rating problem actually IS an advanced stat problem, albeit one with a rather trivial solution that may be the best way to design a new system. I’ll minimize the jargon and add illustrative examples for some clarity… which will make matters drag out even longer. Sorry. The problem can be viewed and solved much as the problem of employment discrimination in the American workplace was once successfully tackled (until the US Gov’t fouled everything up again for purely political reasons). My proposed system draws heavily on that anti-discrimination work. It’s not some concoction made up from daydreams.

    How any body of stories is viewed can be mathematically modeled along pretty much any dimension anyone would care to look at, including comparative merit. Given enough stories – and SOL now has 15,685 and counting – there will be some stories that are nearly perfect. There will be some that are almost literal crap. But most will gather around some midpoint in quality. Note that it makes no difference that I am describing the relative qualities of pure strokers, SciFi, historical fiction, fantasy, D/s, BDSM, romance, etc.

    You see, it’s no different than judging relative pools of job applicants coming from a majority ethnic group plus any number of different ethnic minorities, when all must be cross-compared to see which have the highest relative merit within a single hiring pool. That is, provided we have large enough samples in each population segment to be able to plot every individual/story simultaneously on a frequency graph with like individuals/stories and get a large dispersion of scores. That’s because it’s reasonable to assume that each subpopulation of individuals seeking a job, or stories seeking readers’ approval, conforms to a normal (bell) curve when plotted as test scores on aptitude, achievement, and intelligence tests for people, or as readers’ ratings for the various genres of stories. Long experience has proved this sort of thing out.

    Let’s talk fairness in hiring majority and minority job applicants first. I chose to use this example because it makes my point with gusto in a way that talking about mere stories can’t, as well as making a great model to work from. Who among you reading my words would like to stand up and say that all or most white applicants for a job are logically to be preferred over all or most of the black applicants based on merit? If you honestly believe this to be true, then you believe that all or most whites among all living white people are superior to all or most living blacks along whatever relevant skill dimensions the job to be filled requires of a job holder. Thankfully, there aren’t very many people who would assert that whites are somehow inherently superior to blacks in North America in the 21st Century. Those of you who do, please quit reading now and go directly to some skinhead website where you will be far happier as I continue.

    The vast majority of readers who are left are now willing to accept that whites and blacks are equals in abstract principle. But there’s an apparent problem of score disparities between the races as we sift everyone in search of a relatively fairly chosen hiring pool that is just to both races. For whatever reasons, completely across the board, whites outscore blacks on every known comparator score. Whites as a whole score higher on intelligence tests, various aptitude batteries, various achievement tests, and even on grade point averages made in school. Those are the facts! What now? So, how do we fairly make comparisons between job candidates coming from different races based on those obviously biased scores – which are all we’ve got? The key is in the long-winded discussion above.

    Plot the aggregate performance of each job candidate on the same plot with others in his/her same ethnic group (notice how I even finessed the previously unmentioned issue of comparing men vs women? My suggested system works that part in too.). Establish a separate plot for each classification: one plot for white men, another for white women, another for black men, then black women, hispanic men, hispanic women, asian men, asian women, etc. So long as you have a minimum of one hundred in each category, and therefore on each plot, you’re probably OK to make a valid comparison. Step back and look at all the plots you made of ascending scores left to right on the X-axis vs the corresponding %age of individuals who made each score on the Y-axis. It’s extremely highly probable that you are looking at a collection of similar-looking bell curves. That fit can be further improved with what are known as mathematical transforms to make the fit almost exact. That curve fitting is all right because any departures are almost assuredly due to biases and error. One uses a Z-transform formula in this case to smooth out results. That transformation makes all data points conform completely to a bell curve. It makes data from real live people look and act like some perfect data distribution and curve that a computer generated. It also stretches out tightly packed data points.

    Now, as long as you are not some bigot with an agenda, you are logically compelled to swallow what I’m about to say. Let’s suppose that we want to take everyone that’s top 20%. Fine. No problem. Take the top 20% from each separate plot and hire all of them secure in the knowledge that you did, in fact, get the best 20% of all applicants regardless of racial characteristics or sex–just so long as your tests were fair to all within each subcategory and also were fair tests of what it takes to do the jobs you need filled. All will be well. Nobody got screwed over or short shrifted. Every separate category is proportionally represented. Ideally the top 1/5 of each group was exactly equal in merit to the top 1/5 of the other groupings.

    That has to be so, because you bought into the fact that the entire population of whites is no better or worse than the entire population of blacks, same for males vs females, and so on for comparing all ethnic groups with each other. The only differences had to be due to the unfairness in society’s making for differences in educational quality, socio-economic background, reading level of parents, access to cultural benefits, level of academic competition, average test taking expertise, culture fairness of the tests, fairness in giving out grades, etc.

    OK, so now think of the various genres on SOL as evidenced by the codings in the ticklers as analogous to various ethnic groupings. The composite averaged ratings that are given are analogous to the composite scores for each job applicant above. You will end up with separate bell curves for strokers, romances, teen/coming of age, fant, D/s, BDSM, etc. It all works because no one can say for sure whether strokers are better than romances or fantasies or D/s stories. Each self-chosen audience rates what it chooses to read based upon the ticklers posted.

    Long ago, statisticians noticed what I said above regarding the curve plots of large sample populations. They set up a convention whereby the %age/%ile points on a bell curve were put into a table of relative frequencies called a “Z-Score” table. (http://www.epatric.com/documentation/statistics/z-score_table.html) The average or mean Z-Score of 0.00 was placed at 50.00 %ile, which is the peak in the middle of the bell curve. It so happens that that peak is also the middle-most score among all scores on the curve (median), and the most frequent score (mode). How the scores distribute out from there to both sides of the bell curve is well known. For a picture and more details go to http://en.wikipedia.org/wiki/Normal_distribution and see the graphs approximately half way down the web page.

    I therefore propose that a running tally of readers’ ratings be kept for each separate genre that Lazeez wishes to sanction (only one genre per story please – whatever is deemed most applicable between Lazeez and the author), such that each genre has no fewer than 100 recent stories (or more if Lazeez desires) within its category. Mathematically set up a bell curve and compute Z-scores from a plot of the readers’ ratings that came in for each individual story, ultimately placing all stories from the same genre on the appropriate genre curve according to their mean/average reader ratings. The Z-score that results for each story translates into a rating from 0.00 to 11.00. (Don’t worry. No one will ever get an 11.00. That point will be called “Off the charts” and exists only to keep current and future top scores from suddenly depreciating so severely when the rating system gets changed, thus disappointing a good many authors and readers alike. Likewise, due to the “nice factor”, no one should ever get a 0.00, which will be a point called “Absolute Zero”. That point balances the 11.00 mathematically.) What the component scores were is irrelevant once that story’s votes are fitted to that story’s bell curve to arrive at a Z-score and thereby a rating between 0.00 and 11.00. Post that story on the summary pages as is done now, with the rating that fits the obtained Z-score. Lazeez needs to make 5.00 the mandated mean/average rating because it makes a better midpoint than 6.00 (see note at bottom); that 5.00 score will correspond to a Z-score from readers at the 50.00 %ile (a Z-score of 0.00). A rating of 4.00 will be minus one half standard deviation (a technical stat term explained on Wikipedia, also depicted on the normal curve diagram). A 3.00 will be minus 1 standard deviation, a 2.00 minus 1.5 SD, a 1.00 minus 2 SD, and a 0.00 minus 2.5 SD. Yes, I know it is statistically very, very unlikely anyone would ever get down to a minus 2.5 SD score, which equals a 0.00 rating. Up top, ratings go plus 0.5 SD equals 6.00, plus 1 SD equals 7.00, plus 1.5 SD equals 8.00, plus 2 SD equals 9.00, plus 2.5 SD equals 10.00, and plus 3 SD equals 11.00. (Yeah, and an “11” will be virtually unattainable – just like true perfection is virtually unattainable and off the charts. Only one single reader needs to vote less than 11.00 to make reaching 11.00 totally impossible. BTW, only about 1 story in 160 should reach a 10.00.)
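
    In code, that mapping might look something like the sketch below. It is illustrative only – the function name is made up and this is not anything SOL runs – but it captures the stated rule of half a standard deviation per rating point, centred on 5.00 and clamped to 0.00-11.00.

    ```ts
    // Sketch of the proposed Z-score-to-rating mapping: 5.00 sits at Z = 0 and
    // each rating point is half a standard deviation, clamped to 0.00-11.00.
    function zToRating(z: number): number {
      const rating = 5 + 2 * z; // +0.5 SD -> 6.00, +2 SD -> 9.00, +3 SD -> 11.00
      return Math.min(11, Math.max(0, rating));
    }

    // A story sitting 2 SD above its genre's mean would post as a 9.00,
    // and one half an SD below the mean as a 4.00.
    console.log(zToRating(2).toFixed(2));    // "9.00"
    console.log(zToRating(-0.5).toFixed(2)); // "4.00"
    ```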

    Finally, go back and convert all historical story scores to the new system after assigning primary genres and constructing appropriate historical curves, laying out for each older story the Z-score, and therefore the rating, it should ideally have had based on merit. (The concept of stratifying the stats by calendar-year segments could be incorporated here to allow for chronologically shifting means/medians, if desired.) Those scores can likewise be curve-fitted even though they skew greatly toward 11.00. That will make all previously adjusted and newer ratings change a small amount as the work continues through the entire story archive, working from newer to older stories in order, but there should be no drastic shifts in ratings except for some older stories that weren’t all that great anyway. Ultimately all stories will be part of some genre subpopulation and will be comparable by composite mean Z-scores and the resulting ratings, which in turn fit into the Z-score curve for the genre as a whole. The public will only see the tickler descriptions with the genres and codings and a rating number, just as they do now: 0.00 through 11.00. Only a very few will give a damn how the scores came about – so long as they are fair, don’t depreciate older top scores markedly, and work well for readers. Also, why not use Lazeez’s proposed new descriptors pegged to ratings 1.00 through 10.00, plus my “Absolute Zero” and “Off the Charts” on either end? That seems better on the whole than the old descriptors to me.

    All newly submitted stories go by assigned primary genre into their appropriate populations and curves as they begin to accrue more than 10 votes, just as they do now. The pools used to compute Z-scores would be, by assigned genre, the total accumulation of all stories on the website in that genre, to allow for direct quality comparisons. The resulting ratings will allow for quality comparisons across the various genres that tend to have real validity. What worked to iron out the thorny issues of race and sex discrimination will work to rate stories with no problem. You see, you really can compare apples with oranges and kumquats, so long as you stick scrupulously to making quality comparisons. In that sense, a “10.00” SciFi story actually is of higher quality than a “9.00” Fantasy story.

    That’s the ideal situation, as best as I can describe it, but that’s only part of the solution, because there are other problems requiring attention that affect the basic model I described. Readers vary in how they assign ratings, or in whether they vote at all. It’s also painfully obvious that there is some rating rigging and padding going on by certain nefarious individuals who shall remain nameless, whether Lazeez is willing to admit it or not. Gaming the system so certain stories come out on top isn’t all that hard. If I wanted (and I don’t, so relax), I could ruin the scores of any author I chose. That problem can be minimized by taking each website reader account and mathematically shaping all responses made on that account (as I alluded to above) into an individual bell curve of responses after the first 10 votes are cast on that account, with no votes at all locked into the formal website ratings until those first 10 votes have been cast for at least 5 different authors’ stories, to set a valid baseline for that voter. Should that voter fail to establish the baseline, then none of his/her votes will ever be formally tabulated. Every time the reader votes while short of the established threshold of 10 votes for at least 5 different writers, he or she will get referred to a FAQ that outlines how to become voter eligible.

    This works because an honest reader who isn’t into some game-playing agenda is going to choose a selection of stories that will tend to cluster around the adjusted 5.00 rating of average. The truly honest reader can’t know in advance what the 6.00’s and 9.00’s will be any more than what the 3.00’s and 4.00’s are, though he or she may succeed in using posted ratings from others to read more of the better stories than the lesser ones; but that marginal bias can’t be avoided short of forcing readers to read stories at random – which no one would put up with anyway. The mathematics take automatic care of “low raters”, “high raters” and “cheaters”. The low raters have their scores boosted to be on a par with everyone else, with a mean rating at 5.00; the high raters get adjusted down to a mean of 5.00; and the cheaters end up lowering the relative scores of the stories they are cheating to boost, directly or indirectly, even if they never vote an “11” for their favorite. Think of this like the reverse of pari-mutuel betting, where the betting affects the odds quoted and the payoffs on a horse at the racetrack. The reasons for that assertion would require several chapters of text, so I won’t go into that part. The system in place now can’t do that because it’s not sophisticated enough.
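
    Again purely as an illustration (the thresholds come from the description above; the names and data shape are hypothetical, and nothing here is SOL’s code), the per-account adjustment could look something like this:

    ```ts
    // Sketch: normalize one reader's votes so that, whatever their personal
    // habits, their votes centre on the mandated 5.00 mean.
    function normalizeReaderVotes(votes: number[], authors: string[]): number[] | null {
      // Baseline rule from the comment: at least 10 votes across 5 different authors.
      if (votes.length < 10 || new Set(authors).size < 5) return null;

      const mean = votes.reduce((a, b) => a + b, 0) / votes.length;
      const sd =
        Math.sqrt(votes.reduce((a, b) => a + (b - mean) ** 2, 0) / votes.length) || 1;

      // Re-express each vote as "how unusual it is for this reader", then place
      // it on the shared scale centred on 5.00 (same mapping as zToRating above).
      return votes.map((v) => {
        const z = (v - mean) / sd;
        return Math.min(11, Math.max(0, 5 + 2 * z));
      });
    }
    ```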

    Then, next to last, there’s the problem of those who choose not to vote. Handling that group is a toughie. Rather than using arbitrary measures to force voting, or sanctions for those who fail to vote, and pissing people off while biasing scores tremendously, I suggest that incentives to vote be put in place. What if those who maintain at least a 90% voting record on whatever they are recorded as downloading get to download 30 stories per day and get complete use of the alphabetical story title index if they are non-members? Maybe members who vote 90% or more could get a 10% discount on future subscription rates? The impact of happier writers, more involved readers, and an enhanced rep for the website should do something to offset any losses to Lazeez.

    Moreover, there’s the issue of when to allow voting. The answer seems fairly clear if you want to encourage honest and fair voting: don’t allow any votes until a story is completely posted, or at the very least has 1 MB worth of text in place. No one can know much about how a story will turn out until it is completed or well advanced. Instead, as a story is beginning to be uploaded, for all but first-time authors a specially labeled rating could be posted which would be the mean/average of all previous stories by that author. The idea is that, all things being equal, the current story should be roughly equal in quality to what has gone before by the same writer. That average rating would be an adequate guide for those searching for a story to read and wanting some idea of what kind of quality to expect from a particular author. New authors would be unknown quantities – which they certainly are anyway.

    Finally, it looks as though many writers and some readers really want to get/give feedback on stories. That can be arranged too, with some ingenuity. Lazeez would need to make up a SHORT checklist of multiple-choice alternatives to mechanize feedback for the inarticulate, since only a small fraction of readers feel comfortable extemporizing to an author. At the bottom would be a large blank for optional spontaneous comments. The form would be inserted as a pop-up screen that had to be completed and cleared before either the balance of a story exceeding 1 MB or the final chapter of a story could be viewed. It would also appear again at the very end of the story as a totally optional item, along with the voting blank, to allow for amended feedback on a purely volunteer basis. Therefore, no one could ever read a complete multi-chapter story without providing feedback virtually at the end, and anyone could still make additional comments and give feedback after the very end of the story. As now, there would be a limit of one vote per story per account. Yes, I do realize that this system won’t help short-story writers who post in one segment that can’t easily be subdivided into at least two artificial chapters. But that can’t easily be helped without a system that definitely coerces readers.

    Naturally, there are mathematical and operational details I left out to manage whatever brevity was possible in this type of discussion. Since what I’m advocating is complex, I’ll be happy to answer anyone’s politely asked, legitimate questions – time permitting – most especially any from Lazeez (who has my e-mail address in my account record to allow for forwarding) as to how to make all this work. The actual specifics should be rather straightforward for anyone with average computer programming ability and web design acumen who reads what I wrote above. It’s far from rocket science. Failing that, I charge by the hour for writing code/macros/etc., and I’m not cheap and don’t do unrealistic deadlines. The described system wouldn’t be perfect, but not too awfully bad, if I do say so myself.

    NOTE: Don’t get confused by this note. It makes a technical point for compulsive personalities only. There is no practical way to make the bell curves mentioned above truly perfect, since humans and their ratings are fallible. The “nice factor” will make for a small amount of positive skew (a tendency for more higher and fewer lower scores). Not to worry. That sort of effect is good for motivational purposes, for writers to keep writing and readers to keep reading. Nothing I wrote is invalidated to the point that there would be a SIGNIFICANT computational problem or inaccuracy. Besides, even with the unavoidable positive skew, there would never again be a problem with scores bunching toward the top of the rating distribution, because the mean/average will henceforth be mathematically held at 5.00 regardless of what readers vote… unless absolutely everyone votes, say, an 11.00 on every single story, giving no raw-score dispersion – which simply won’t happen.

    Red

  19. Hi Lazeez,

    Not so much a comment as a question, really. I recently posted the first chapter of a new work but, and this is very unusual, didn’t get one SOL feedback comment. Of course it could be that the story sucks and didn’t generate any interest, but I have a feeling that some messages didn’t go through. Is there any way to check this?

    Cheers and thanks for your hard work.

    Robin