Date: 10/9/19 1:50 pm
From: Brodie Cass Talbott <brodiecasstalbott...>
Subject: [obol] Re: Limits on the interpretation of detail in bird records
Some seemed to bristle at the idea of eBird determining who is a "good"
birder, but it is actually a rather fascinating process. I had dinner with
Chris Wood and Colby Neuman last April (caveat: the entire following
message is based on my memory of the conversation) and Chris was
describing how they arrive at that determination. After crunching a lot of
data, they were able to identify species that are more likely to be
reported by expert birders than by new birders.

So, in a hypothetical scenario, you have a birder who has never gone
birding, and they report a robin and a Spotted Towhee on their first
checklist. After fifty checklists, that same birder is reporting 15
species, including a Song Sparrow. After 1000 checklists, that birder is
reporting 45 species, including heard-only birds like Lesser Goldfinch and
Virginia Rail.

eBird scientists, analyzing the checklists of that birder and thousands
more, have identified patterns in how birders develop, such that they can
tell how advanced a birder is by which species they are or are not
reporting (ostensibly in a way that also helps them identify errors, e.g.
someone who, at a certain stage in the learning process, begins
misreporting a certain species and then later stops). As a fun bit of
trivia, the number one bird reported more often by advanced birders than
by beginning birders, according to their data, is none other than the
Northern Rough-winged Swallow.
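The idea above can be sketched in a few lines of code. This is purely my own illustration with made-up toy checklists, not eBird's actual methodology or data: for each species, compare how often it appears on expert checklists versus beginner checklists, and treat the species with the biggest gap as an "expertise indicator."

```python
# Toy sketch of the "expertise indicator" idea -- NOT eBird's real method.
# Hypothetical checklists: each is the set of species one observer reported.
from collections import Counter

expert_lists = [
    {"Song Sparrow", "N. Rough-winged Swallow", "Virginia Rail"},
    {"Song Sparrow", "N. Rough-winged Swallow", "Lesser Goldfinch"},
]
beginner_lists = [
    {"American Robin", "Spotted Towhee"},
    {"American Robin", "Song Sparrow"},
]

def report_rate(lists):
    """Fraction of checklists on which each species appears."""
    counts = Counter(sp for lst in lists for sp in lst)
    return {sp: n / len(lists) for sp, n in counts.items()}

expert_rate = report_rate(expert_lists)
beginner_rate = report_rate(beginner_lists)

# Rank species by the expert-to-beginner reporting ratio; a small constant
# keeps species unseen by beginners from dividing by zero.
ratio = {
    sp: expert_rate.get(sp, 0.0) / (beginner_rate.get(sp, 0.0) + 0.01)
    for sp in set(expert_rate) | set(beginner_rate)
}
top_indicator = max(ratio, key=ratio.get)
print(top_indicator)
```

With these toy lists the top indicator comes out as the Rough-winged Swallow, matching the trivia answer, though of course the real analysis involves far more data and statistical care than this.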

So, even if you are an advanced birder who has just started using eBird,
eBird "can tell if you are a good birder" just by which birds were included
on even your first checklist, compared with others birding the same area at
the same time. Phil Pickering's Boiler Bay checklists come to mind.

Some disclaimers: My understanding is that all of this is done on a grand
scale, with plenty of allowance for errors (e.g. an expert birder sharing a
checklist with a new birder, or a new birder making mistakes, like calling
a young Tree Swallow a NRWS). Also, of course, the last thing eBird wants
is for people to report birds differently because they want to be thought
of as advanced birders, which I'm guessing is why they don't publicize this
or give people badges for reporting NRWS. They would much prefer everyone
report birds with an eye toward what makes the data most valuable (no "X"
counts, shorter and more specific lists, only reporting those birds the
observer is sure about, etc.), and meanwhile they can figure out in the
background how to get the most information out of those data.

I would also add that this isn't trivial. The study that everyone has been
sharing lately about the loss of 2.9 billion birds was largely based on
community-science data, from my understanding. CBCs and BBSs are great, but
eBird is filling in the gaps in between with a tool that gives us a great
way to keep track of what we see while also sharing that with the
ornithologists who are doing incredibly important work monitoring bird
populations. Win/win.


On Tue, Oct 8, 2019 at 5:21 PM <whoffman...> wrote:

> If the third point in Fun fact #1 is true, then Fun fact #3 is a
> mathematical necessity.
> Wayne
> ------------------------------
> *From: *"Mike Patterson" <celata...>
> *To: *"obol" <obol...>
> *Sent: *Tuesday, October 8, 2019 4:44:16 PM
> *Subject: *[obol] Re: Limits on the interpretation of detail in bird
> records
> Fun fact#1
> 18.5% of all Oregon eBird checklists are produced by the top 10 users
> 37.6% by the top 50 users
> 50% by the top 100 users
> Fun fact#2
> I'm in the top 20% of eBird users
> Fun fact#3
> There are way more than 100 birders in Oregon...
> --
> Mike Patterson
> Astoria, OR
> Lies, Damned Lies and Statistical Significance
