I did consider that ... I actually wanted to have a selection for "sort results by", but in my imagination the user interface got messy doing that, so I just return results alphabetically and the user can then play with the columns.
At the moment I'm just doing a select and printing the rows as I pull them, after massaging the numbers to look pretty and adding links (well, 1 at the moment) to the layouts. To pre-sort I would need to populate a table with the results, determine some 'average' (see below) for each row, and then sort the table on that average.
The question is, how is the average calculated?
For simple cases, eg Test1, Test2, Test3, returning Patrick's "Score" only, yes, that is reasonably straightforward... we want the high score. But different tests have different 'ranges' of scores... eg a good score on Alice is over 70, while a good score on Putin is only over 45 or so. So a straight 'average' won't work.
I actually have already built a spreadsheet (attached, if I can figure out how) which attempted this exercise for a few leading layouts and selected tests and results.
The idea was basically this (column headings):
layoutname [test score Rank] (with multiple sets of these [ ] blocks, one per test)
I looked at: Score, Distance, SameFinger, HandBalance (in finger usage).
So I would have a rank in each category, as well as a winner and loser.
If you add up all the Ranks, the lowest overall Rank total is the winner and the highest is the loser.
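That rank-sum idea can be sketched like this (a minimal Python sketch; the layout names, scores, and want-high flags here are made up purely for illustration):

```python
# Rank-sum scoring: for each test, rank the layouts (1 = best),
# then sum each layout's ranks; the lowest total wins.
# Layout names and scores are invented for this example.
scores = {
    "LayoutA": {"Score": 75, "Distance": 98},
    "LayoutB": {"Score": 73, "Distance": 102},
    "LayoutC": {"Score": 62, "Distance": 110},
}
# For each test, say whether a high or a low raw score is better.
want_high = {"Score": True, "Distance": False}

totals = {name: 0 for name in scores}
for test, high_is_good in want_high.items():
    ordered = sorted(scores, key=lambda n: scores[n][test], reverse=high_is_good)
    for rank, name in enumerate(ordered, start=1):
        totals[name] += rank

winner = min(totals, key=totals.get)
```

With these made-up numbers, LayoutA ranks first on both tests, so it wins on rank total.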
The overall best in that exercise was BEAKL 4 Mod Ian 71.34 74.65 74.19, with IHEAUDS 71.48 74.47 73.37 second.
But at the same time, IHEAUDS had 11 Bests and 1 Worst, and BEAKL had 10 Bests and 1 Worst. So by that measure, the results should be flipped.
Which is why your suggestion is not so trivial... :-)
Then I got to thinking about "standardizing" each set of scores. The problem is that for example we have scores like this:
75 74 73 65 62 58 55
The difference between the top two is 1, while between the bottom two it is 3, but in Ranking each pair will only be 1 rank apart, giving a misleading idea of their relative strength.
However if for example, we put the top score at 100%, and then converted each score to a percentage of that, then we would have a consistent way of comparing things.
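The percentage-of-top idea, applied to the example scores, as a quick Python sketch:

```python
# Normalize Want-High scores: the top score becomes 100%,
# everything else a percentage of it (numbers from the example above).
scores = [75, 74, 73, 65, 62, 58, 55]
top = max(scores)
pct = [round(s / top * 100, 2) for s in scores]
# pct -> [100.0, 98.67, 97.33, 86.67, 82.67, 77.33, 73.33]
```

Note how the top two end up 1.33 points apart while the bottom two are 4 points apart, preserving the magnitude information that plain ranking throws away.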
However, how do we handle the cases where a LOW score is desired? I suppose one way would be to do the same exercise and then subtract everything from 100.... but then "best" does not come out at 100, but at something below it.
Eg doing the exercise above for the numbers
75 74 73 65 62 58 55
for Want High, we get 100 98.67 97.33 86.67 82.67 77.33 73.33
for Want Low, we get 0 1.33 2.67 (etc)
and for any given score those two values always add to 100, so averaging them will just get you 50 each.
I suppose what MIGHT work, to standardise a desired-LOW score onto a percentage scale, is this:
Take max score.
Find the next power of 10 above it (eg 100, 1000, 10000, etc).
Subtract each score from this.
Now take resulting high score as 100%, and calculate the rest as a percentage.
so for above we would have
75 74 73 65 62 58 55, giving
25 26 27 35 38 42 45, and taking 45 as 100 (we want LOW SCORE to win), we get
55.56 57.78 60 77.78 84.44 93.33 100
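That Want-Low standardisation, as a Python sketch over the same example numbers:

```python
import math

# Want-Low standardisation: subtract each score from the next power
# of 10 above the maximum, then scale so the best (lowest) raw score
# gets 100%. Numbers are the running example from above.
scores = [75, 74, 73, 65, 62, 58, 55]

ceiling = 10 ** math.ceil(math.log10(max(scores) + 1))  # 75 -> 100
flipped = [ceiling - s for s in scores]                 # 25, 26, ..., 45
best = max(flipped)                                     # 45, from the lowest raw score
pct = [round(f / best * 100, 2) for f in flipped]
# pct -> [55.56, 57.78, 60.0, 77.78, 84.44, 93.33, 100.0]
```

The `+ 1` in the ceiling calculation is just so a maximum that is exactly a power of 10 (eg 100) gets pushed up to the next one (1000) rather than subtracting to zero.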
and then I think we can work out an overall average....
I'll have to think about how to do this.... at the moment the program printing the table is doing it 'blind': it does not know which direction is good or bad for any given score, it's just data to be printed. I'll have to
1. store in table.
2. add bunch of columns to table for the standardised %
3. do the sums, which will need to know which way around to calculate the %
4. sort on total column.
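Steps 2-4 could be sketched in Python like so (the table layout, layout names, and per-test direction flags are all hypothetical, just to show the shape of the calculation):

```python
import math

# Sketch of steps 2-4: standardise each test column as a %, using a
# per-test flag to say which direction is 'good', then sort layouts by
# the average of their standardised columns. All names are invented.
rows = {
    "LayoutA": {"Score": 75, "Distance": 102},
    "LayoutB": {"Score": 73, "Distance": 98},
    "LayoutC": {"Score": 62, "Distance": 110},
}
want_high = {"Score": True, "Distance": False}

def standardise(values, high_is_good):
    if high_is_good:
        top = max(values)
        return [v / top * 100 for v in values]
    # Want-Low: subtract from the next power of 10, then scale.
    ceiling = 10 ** math.ceil(math.log10(max(values) + 1))
    flipped = [ceiling - v for v in values]
    best = max(flipped)
    return [f / best * 100 for f in flipped]

names = list(rows)
pct = {name: [] for name in names}
for test, high in want_high.items():
    col = standardise([rows[n][test] for n in names], high)
    for name, p in zip(names, col):
        pct[name].append(p)

totals = {n: sum(ps) / len(ps) for n, ps in pct.items()}
ranking = sorted(names, key=totals.get, reverse=True)
```

One thing this sketch makes visible: for the Want-Low column the ceiling jumps to 1000 (next power of 10 above 110), which compresses all the percentages towards 100, so the choice of ceiling matters quite a bit to the final averages.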
Issues:
1. layouts require a score for every test.
2. results of doing this are RELATIVE to the other layouts compared ... if I add a new layout tomorrow then all the scores may change if it is good, since all scores are relative to the 'best' in any given test.
Going for brunch, will ponder this some more.
Typing helps me think (because it forces both sides of your brain to work together :-) ).
Spreadsheet attached, .ods in .zip. Done with LibreOffice. Not a spreadsheet guru.