Thanks for the comments, Janette.
Janette has raised some issues around assessing
reliability so I might clarify my use of the term. I am using it as an objective
measure of the quality of the calculation, not assessing the report or the
skills of the people making the reports. I can best explain this by example. If
only one datasheet has been submitted for a gridcell, the only reporting rates
possible are 0% (species not seen) and 100% (species seen). Either way, we
cannot be sure what the figure would be if we had, say, 100 datasheets - it could
be anywhere in the range 0% to 100%. If we did have 100 sheets, the
calculation could give us a number anywhere from 0% to 100% in 1%
increments. If the result were, say, 40%, then because of the number of
datasheets submitted we can be reasonably sure that the true reporting rate is
close to that figure and not, say, 80-100%. If we still got 0% with 100
datasheets, then we can be more confident that the species does not occur in the
area, though still not completely sure. While there are statistical ways of
expressing these concepts, I have used the number of sheets submitted as the
measure of quality because it is more easily understood.
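For anyone curious about the statistical version of this idea, one standard way to express it is a confidence interval on the reporting rate, which narrows as the number of datasheets grows. The sketch below is my own illustration, not part of the atlas calculation; the function names are invented, and I have used the common Wilson score interval as the example statistic.

```python
import math

def reporting_rate(seen: int, sheets: int) -> float:
    """Reporting rate as a percentage: share of datasheets recording the species."""
    return 100.0 * seen / sheets

def wilson_interval(seen: int, sheets: int, z: float = 1.96):
    """Approximate 95% Wilson score interval (as fractions) for the true rate."""
    p = seen / sheets
    denom = 1 + z * z / sheets
    centre = (p + z * z / (2 * sheets)) / denom
    half = z * math.sqrt(p * (1 - p) / sheets + z * z / (4 * sheets * sheets)) / denom
    return centre - half, centre + half

# One datasheet, species seen: the rate is 100%, but the interval is very wide,
# so the single-sheet figure tells us little about the true rate.
print(reporting_rate(1, 1), wilson_interval(1, 1))

# 100 datasheets, species on 40 of them: the rate is 40% and the interval is
# much tighter, so we can be reasonably sure the true rate is near 40%.
print(reporting_rate(40, 100), wilson_interval(40, 100))
```

With one sheet the interval spans most of the 0-100% range; with 100 sheets it shrinks to roughly 31-50%, which is the intuition behind using the sheet count as a reliability measure.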
The reason I reversed the sizing of the circles
(the fewer the sheets, the larger the circle) was to
emphasise where the most care was required in using the data, but I recognise
that this approach has its problems.
I have received a number of comments with
suggestions for making the map less complex, and have had a few
more thoughts of my own. I will produce another, less complex version to see
whether it can be made more usable.
Thanks to everyone who