New Preprint: Information in Meta-Ecosystems

I’m very excited to share a new preprint: “Filling the Information Gap in Meta-Ecosystem Ecology.” This paper is the result of an ad hoc group of ecologists who met at the 2018 Gordon Research Conference on Unifying Ecology Across Scales. The theme that year was “information”, something that most of us (and probably many ecologists) hadn’t considered in the way that was presented by Mary O’Connor (the conference lead organizer) and the other speakers. We decided to try to think through what all these new-to-us ideas would mean for spatial ecology, and over the next two years did a lot of learning! I read so many interesting papers and our brainstorming, writing, and revising sessions were both fun and really pushed my understanding of ecology. We came up with some ideas I’m really proud of. Anyway, we’re excited to share the fruits of our labors, which I co-led with Matteo Rizzuto (who is looking for a postdoc, hire him!).

ECR Paper Award for My Recent AmNat Paper

I’m extremely honored that the Ecological Society of America (ESA) Early Career Section chose my paper, Nonlinear Effects of Intraspecific Competition Alter Landscape-Wide Scaling-Up of Ecosystem Function, for their Outstanding Paper Award this year. The paper was published earlier this year in The American Naturalist and it’s work that I’m especially proud of. I realized I never did a Twitter thread or blog post about this paper even though it’s the one I love the most out of everything I’ve published, so here I’ll talk about it a little bit!

The idea that eventually became this paper arose out of a discussion I had with Emanuel Fronhofer when we were planning out my part of the Dispersal Network experiment. I wanted to look at whether dispersing and resident (non-dispersing) amphipods consumed detritus at different rates, because if they did, then dispersers might have different impacts on ecosystem function (spoiler: we found that this is sometimes the case!). At some point in this conversation, I realized that there was a confounding factor here: density. Dispersers are typically at lower intraspecific densities because they are by definition the first arrivals in a new patch. So if density had any effect on their feeding behavior, it would be hard to tell whether the differences between dispersers and residents were due to traits or to density. To address this, as we ran our experiment comparing residents and dispersers, we *at the same time* (yes, this made a lot of work…) ran an experiment looking at detritus consumption across a density gradient.

What we found was exciting: indeed, in both amphipod species, per-capita consumption was much higher when individuals were alone or with only a few neighbors, and was quite low when they were in crowded mesocosms. Importantly, this relationship was highly nonlinear. We attributed the results to interference competition, since food resources were not limiting in our experiments.

This already might have made this work my favorite project, because the density aspect was a question that I felt I really came up with independently and pushed to test as part of our other experiment. It is kind of the inverse of how we typically think of a “functional response”: instead of feeding rate being a function of prey/resource density, it is also a function of consumer density. But we went even further! Out in nature, there is immense variation in population size between different ecosystems. So, I used geostatistical modeling to map the density-dependent per-capita consumption rates onto stream networks where we had done extensive surveys of population sizes. This generated maps that showed hotspots of amphipod density but fairly homogenous patterns in detrital consumption due to the nonlinear relationship between the two. I could then integrate over the entire networks of each of the ten small catchments (watersheds) I had surveyed, to get estimates of the total amount of detritus being processed in each one. This mode of scaling up estimates of ecosystem function is novel and was really fun to work out.
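The homogenization logic here can be sketched with a toy model. To be clear, the functional form and parameters below are hypothetical, chosen only to illustrate the idea – they are not the fitted model from the paper:

```python
import numpy as np

# Hypothetical interference-competition model (illustrative only):
# per-capita consumption declines nonlinearly with amphipod density.
def per_capita_consumption(density, c_max=1.0, k=0.5):
    """Per-capita detritus consumption rate at a given density."""
    return c_max / (1.0 + k * density)

# A mock set of patches ranging from sparse to a density "hotspot".
densities = np.array([1, 2, 5, 10, 50, 100])

per_capita = per_capita_consumption(densities)
total = densities * per_capita  # total consumption per patch

# Despite 100-fold variation in density, total consumption varies far
# less, because crowded patches have much lower per-capita rates.
print(densities.max() / densities.min())  # density varies 100-fold
print(total.max() / total.min())          # total varies only ~3-fold
```

With any saturating form like this, summing (or integrating) total consumption over a mapped network is then just a weighted sum of patch-level totals, which is why the hotspots in density barely show up in the consumption maps.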

I find this work so exciting, and it drew from nearly every aspect of my PhD: lab experiments, field experiments, spatial modeling, and meta-ecosystem theory. Please go have a read!

New Paper: Interference Competition, Ecosystem Function, and Scaling-Up

My newest paper, “Nonlinear Effects of Intraspecific Competition Alter Landscape-Wide Scaling-Up of Ecosystem Function,” is in press at The American Naturalist. This work paired a lab experiment in which I found nonlinear density-dependence of detritivore feeding rates with geostatistical modeling to explore the consequences for organic matter processing in whole stream catchments. You can read it here (accepted version without final formatting) or here (published version, paywall). This is the work I’m most proud of and excited about in my career so far, and I’d love to discuss it if you have comments or questions! Big thank you to my PhD advisor Florian Altermatt and my co-author Emanuel Fronhofer, as well as friends, colleagues, technicians, and students who helped with various parts of the field and lab work.

Settling In To Vancouver & UBC

I took two months off of science after finishing up work at Eawag, and spent the time mostly hiking and running in different parts of the Alps, as well as having a few adventures and catching up with friends and family in northern New England. In September, I started a Killam Postdoctoral Research Fellowship at the University of British Columbia, in Rachel Germain’s lab in the Department of Zoology and the Biodiversity Research Center. I’m already thrilled by the atmosphere here, my great colleagues, and all the amazing science that is going on. I’m still a little burnt out post-PhD but this is a really invigorating place to be working!

Women Get Cited Less. What Can We Do About It?

A few weeks ago, a paper came out about the fate of research papers in ecology and evolution (my field!) pre- and post-publication, comparing outcomes between male and female authors.

I want to focus on just one aspect of their nuanced analysis (you can read the whole paper here for free). In this section, the authors gathered citation data on over 100,000 papers published in 142 journals in our field.

They found a slight, but significant, difference in how often papers by men and by women got cited. On average, controlling for impact factor (a proxy for quality[1]), papers by women accrued 2% fewer citations.

A hundred thousand papers! That’s a lot, and it’s why the authors could detect this small difference. The sample size allowed them to do a quite powerful analysis.

It also revealed some interesting interactions. For example, papers with female last authors (which often indicates seniority or leadership of the research group) were cited less than those with male last authors at high impact journals, but the effect was reversed at low-impact journals.

Because more people cite papers in high-impact journals, this meant that overall, women were cited less often. [2]

I guess this shouldn’t have been a surprise, but it’s not something I had seen data about before, and consequently something I hadn’t really thought about.

But it is something that matters. Like it or not, citations are a metric of success that is easy to measure, and therefore whether others cite your work is a good piece of evidence that you're a valuable scientist when you're up for a job, tenure, or an award. It's not great for women if there is bias preventing them from doing well on this metric. [3]

Sounds About Right

Like a lot of research and headlines about challenges facing women in science, this got me mad. I’m a feminist, and this stuff pisses me off.

I see the experiences of my female friends and colleagues, and see when they are treated differently than their male peers. Not always, of course, but enough to make a pattern of our own anecdotal experience.

Then there’s the fact that in my study topics, almost every giant in the field is a man. The ones that defined the discipline and published the equations in a top journal? Men. There are women there, doing great work, but they are less famous.

When data on various aspects of academic life backs up our experience it’s like, “yup, sounds about right.”

Academia is harsh on most people; it’s a place with a lot of rejection, high standards, low pay, job insecurity, and so many power dynamics. The fact that academic science is even harder on women is just not fair.

And that’s not getting into the more complex and devastating nature of the structural problems in academia, which are even worse for other minorities, especially minority women. This paper found that on average women are cited 2% less than men; how much less often are minority women cited?

Anyway, I read this paper, and I was mad.

What I Did Next

Being mad doesn’t accomplish all that much. I tried to think about what we, in our daily lives as scientists, could do to work against this problem.

As an individual early-career scientist, there is very little I can do about peer review outcomes or paper acceptances.

But citations? That’s different. I am always writing papers, and I am always citing others’ work. I realized that this was a small step I could take to try to contribute to supporting female scientists. It sounds trivial, but I can cite their work. Could we all do this?

I brought this idea to our lab group, and was curious what they would think.

Some colleagues immediately recognized that this was a problem, and something we don’t think about enough. There are a handful of big names in our subfield, and we mostly cite them over and over. But do we need to do that? Are their papers that we cite every time actually the most relevant? Maybe not. There is probably a lot of work being done around the world we could cite that instead, or at least in addition to the now-traditional canon.

Another common reaction was for someone to say that they don’t think about the gender of authors when they search for papers, read them, or cite them. Here it diverged: a few people said, “I don’t think about it and now I realize that I should.” Others said, “I don’t think about the gender of authors when I’m citing them, therefore I’m not part of this problem.”

I challenged this statement. Does that really mean you’re not part of the problem? Maybe not, I said – and if not, that’s great for you, good work! But instead of assuming that not consciously citing more papers by men means no harm is done, check your reference list on the paper you’re writing. What’s the author breakdown? Do you think this strategy is really working?

I don’t think I was popular for making this callout.

Rubber Meets the Road

As I mentioned in a tweet a few weeks ago, “caring means walking the walk.”

In my paper-reading project, I track the gender of the authors I read, and I found that I am reading fewer papers by women than by men. I was curious about what the ratio might be of the papers I end up citing.

I went through the references section of my current manuscript with a blue and a pink highlighter (I know, supporting stereotypes, etc., it was lazy). The results were not pretty.

I was frankly surprised at how few of the papers I had cited were by women. I knew it wouldn’t be 50/50 for a lot of reasons (discussed below), but I didn’t think it would be that bad.

I basically proved to myself that in order to cite women, you have to do it on purpose. Just not intentionally excluding them isn’t enough.

It’s like how colorblind policies don’t work. Not intentionally doing harm is not the same thing as not doing harm. As Evelyn Carter writes about claiming to be colorblind with respect to race, “if you ‘don’t see’ race, but you say you care about inclusion, how can you advance inclusion efforts that will effectively target communities of color?”

There are many unconscious biases and systemic barriers that lead us to contribute to inequality. Working for equality, and to recognize the contributions made by women and minorities, means actively working to overcome those biases and barriers.

Shifting the Balance

After my discouraging experiment with the highlighters, I went through the paper and looked at the places I had cited different work. In some places, I was able to find a paper by a woman that I could cite instead – and often even one that supported my point even better than the paper I had cited originally.

That was the most delightful aspect of this task I had set out for myself: I discovered papers in my core area of study that were by women and that I had never read. And they were really good and very interesting!

The point of reading and citing work by women isn’t just to check boxes and give women a fair shot at career metrics. The main reason is to do better science. Science is creative; it involves having ideas, being exposed to new things, thinking outside whatever box you’re in. Reading work by more different people will necessarily help that process.

I’m really glad that I found these new-to-me papers.

As I discussed with a different colleague later, a lot of this issue comes down in part to poor citation practice, which is endemic across academia. We cite something because everyone else cites it, even though maybe we haven’t read the whole paper. Or we cite something because it’s already in our reference library and we are in a hurry. I’m completely guilty of this, and I’m quite sure everyone else I work with is, too.

If we spent more time reading, and more time looking for and getting familiar with other work that’s related to what we’re doing, I think all of our papers would have much more diverse author lists – at least in terms of evenness and not being dominated by the famous people in our field.

Our papers might also just simply be better, because of all the ideas we would be having.

Don’t Worry, I’m Still Citing Men

After this citation overhaul, my reference list was still majority male first authors. You need to cite what is relevant, and in a lot of cases this work is by men. One reason is that over the history of ecology, there is a lot more work published by men than by women, especially the farther back you go. [4]

Plus, if you need to cite a classic paper, it doesn't matter who it's by; that's the one you need to cite. That's a second reason. [5]

And likewise, if you need to cite something about your specific study organism or system, there might be only a handful (or a few handfuls) of people in the world who publish on this very specialized niche. They are who they are. If you need to cite peer-reviewed literature, you may have limited choices, and you need to cite the best and/or most relevant work out of that array. [6]

So there are a lot of reasons you can’t just take your reference list and manipulate it towards 50/50 gender equality. I want to make it completely clear: I am not advocating for a departure from citing good and relevant science!

When I mentioned this idea to colleagues – that citing women was a seemingly small, basic thing we could do in our everyday lives as scientists to push back against structural biases – some were deeply uncomfortable with the idea that seeking out work by women would mean leaving out other papers.

Listen: there is so much research out there, it boggles the mind. It’s growing exponentially. It’s insane. And in no paper do we cite all the possibly important research on a topic. We’re going to leave things out anyway. It’s just that now, we seem to be structurally leaving out work by women. Again, to overcome this bias is going to require intentionality. This is not a problem that we can just hope will go away because we are good people and don’t mean to cause harm.

No matter what we do, we’re going to keep citing a lot of great research by men.

Tokenism?

Among women, there was another layer. Many of the women I talked to did not want to be cited just because they were women and someone needed to move their reference list towards gender parity. They wanted to be cited because someone genuinely thought their research was the best and most relevant.

And I get that. I was invited to give a talk once because the organizer was looking for a replacement female speaker after the originally-invited woman couldn’t participate. I was so excited for this opportunity, but at the same time it felt weird to get it explicitly because they were looking for a woman.

I’m not sure what to do with this concern, but I think it comes back to proper citation practice. Is your work relevant to the topic, and being cited correctly? Then you deserve to be cited. If papers by women are accepted less often and cited less often, then part of the reason you are not currently being cited might be simply because someone isn’t familiar with your work.

And the Inevitable Question: Does Quality Limit Equality?

Finally, we had some conversations about whether this might be because women’s work just wasn’t as good as men’s.

It’s pretty hard to assess that (although the paper I started this blog post with tried to, by using journal impact factor as a covariate).

I have two thoughts. One is that I doubt it’s true. There was an interesting graphic and paper that went around Twitter – about economics, not ecology – showing that papers by women are better written than those by men, that women incorporate reviewer comments more often, and that they improve at presenting information over the course of their careers.

It’s provocative and I’m not sure if it’s also true in my field or not, but I believe it. I think we are culturally trained from a young age to try to please, and so we might be likely to try to pacify reviewers, and to make a revision extra perfect if it was rejected the first time around.

Also, in the sciences, women have to be better to be evaluated as being as qualified as men.

Second, about the content. If it is possible that the data and experiments presented by women are less strong, why might that be?

It is interesting to think about how structural problems could lead to this. Could it be because women get less funding to do their research, and thus have fewer resources and less support? Could it be that women are asked to do more departmental service and teaching, and have less time to do research?

If the research really is worse – which I’m not convinced it is, but there’s little way to objectively assess, at least not for a dataset of any size – this very well might be a result of the same structural issues that cause the citation patterns.

What Next

What do you think about this? Are there any other ways to work on this problem?

Notes

[1] Impact factor is not a perfect measure of journal quality. It is an estimate of how often work there gets cited, which is a traditional metric for how important it is. For any individual paper, the impact factor of the journal it is published in doesn't say much about the quality of that paper. However, I think it was the best covariate the authors could find to control for the differences in quality between papers when comparing men's and women's outcomes. Also, for better or for worse, scientists pay attention to impact factors, so it may affect citation practice even if it's not an actual metric of "quality" per se.

[2] I have an interesting, harebrained idea about this: if papers by women are less likely to be accepted, do good papers with women as last authors end up in lower-impact journals? That could explain why the better-cited papers from those journals are by women. Totally spitballing here.

[3] Citations shouldn’t define one’s value as a scientist, colleague, or employee anyway, but… that’s a discussion for another day.

[4] Is it really though? Women probably contributed to many important ideas. They typed the manuscripts. Maybe they did some of the work. There are tons of cases in science where people had the same idea independently, and one person got famous, and the other didn’t. Sometimes both these people were white men. I’m guessing there’s a lot of times when some of these people were women, minorities, scientists from outside Europe and North America, and people who were/are otherwise excluded from the elite science community.

[5] Given what I just wrote, I think I – and we all – need to put more effort into finding papers that were written around the same time as the famous ones that “came up with” ideas, and represent contributions of the community-building of those ideas by other people.

[6] When you’re looking for very specific things, there might be a lot of important and relevant data in theses and lower-impact papers by students who did not continue in science. This is valuable literature to search for, and might expand what you think the contributions of women and minorities are, given that these people are less likely to continue in academic careers either by choice or by exclusion.

From #FieldworkFail to Published Paper

Amphipods are, unfortunately, not very photogenic. But here you can see some of my study organisms swimming around in a mesocosm in the laboratory, shredding some leaf litter like it’s their business (because it is).

It can be intimidating to try to turn your research into an academic paper. I think that sometimes we have the idea that a project has to go perfectly, or reveal some really fascinating new information, in order to be worth spending the time and effort to publish.

This is the story of not that kind of project.

One of my dissertation chapters was just published in the journal Aquatic Ecology. You can read it here.

The project originated from a need to show that the results of my lab experiments were relevant to real-world situations. To start out my PhD, I had done several experiments with amphipods – small crustacean invertebrates common to central European streams – in containers, which we call mesocosms. I filled the mesocosms with water and different kinds of leaves, then added different species and combinations of amphipods. After a few weeks, I saw how much leaf litter the amphipods had eaten.

We found that there were some differences between amphipod species in how much they ate, and their preferences for different kinds of leaves based on nutrient content or toughness (that work is here). But the lab setting was quite different from real streams.

So I worked with two students from our limnoecology course (which includes both bachelors and masters students) to develop a field experiment that would test the same types of amphipod-leaf combinations in streams.

We built “cages” out of PVC pipe with 1-mm mesh over the ends. We would put amphipods and leaf litter inside the cages, zip tie them to a cement block, and place the cement block in a stream. We did this in two places in Eastern Switzerland, and with two different species of amphipod.

After two weeks, we pulled half the cement blocks and cages out. After four weeks, we pulled the other half out. Moving all those cement blocks around was pretty tough. I think of myself as strong and the two students were burly Swiss guys, but by the time we pulled the last cement block up a muddy stream bank I was ready to never do this type of experiment again.

Elvira and our two students, Marcel and Denis, with an experimental block in the stream. This was the stream with easy access; the other had a tall, steep bank that was a real haul to get in and out of.

Unfortunately, when I analyzed the data, it was clear that something had gone wrong. The data made no sense.

The control cages, with no amphipods in them, had lost more leaf litter than the ones with amphipods – which shouldn’t be the case since they only had bacteria and fungi decomposing them, whereas the amphipod cages had shredding invertebrates. And the cages we had removed after two weeks had lost more leaf litter than the ones we left in the stream for four weeks.

These are not the “results” you want to see.

We must have somewhere along the way made a mistake in labeling or putting material into cages, though I couldn’t see how. I tried to reconstruct what could have gone wrong, if labels could have gotten swapped or material misplaced. I don’t have an answer, but the data weren’t reliable. I couldn’t be sure that there was some ecological meaning behind the strange pattern. It could have just been human error.

I felt bad for the students I was working with, because it can be discouraging to do your first research project and not find any interesting results. It wasn’t the experience I wanted to have given them.

My supervisor and I agreed, with regret, that we had to redo the experiment. I was NOT HAPPY. I wasn’t mad at him, because I knew he was right, but I really didn’t want to do it. I’ve never been less excited to go do fieldwork.

But back out into the field I went with my cages and concrete blocks (and no students this time). In case we made more mistakes, we designed the experiment a bit differently: we had one really well-replicated timepoint instead of two timepoints with fewer replicates, and worked in one stream instead of two.

Begrudgingly, we hauled the blocks to the stream and then hauled them back out again.

Cages zip-tied to cement blocks and deployed in the stream. You can see the brown leaf litter inside the enclosure.

And then for 2 ½ years I ignored the data, until my dissertation was due, at which point I frantically analyzed it and turned it into a chapter.

The draft that I initially submitted (to the journal and in my dissertation) was not what I would call my best work. My FasterSkier colleague Gavin generously offered to do some copy-editing, and I was ashamed at how many mistakes he found. I hope he doesn't think less of me. A fellow PhD student, Moritz, also read it for me, and had a lot of very perceptive criticisms.

But through all of that and peer review, the paper improved. Even though it is not going to change the course of history, I'm glad that I put together the analyses and published it, because we found two kind-of-interesting things.

The first was about species differences. I had used two amphipod species in the experiment (separately, not mixed together). Per capita, one species ate a lot more/faster than the other… but that species was also twice as big as the other! So per biomass, the species had nearly identical consumption rates.

The metabolic theory of ecology is a powerful framework that explains a lot of patterns we see in the world. One of its rules is that metabolism does not scale linearly with body size (here’s a good blog post explainer of the theory and data and here’s the Wikipedia article). That is, an organism twice as big shouldn’t have twice the metabolic needs of a smaller organism. It should need some more energy, but not double.

This relates to my results because the consumption of leaf litter was directly fueling the amphipods’ metabolism. They may have gotten some energy and resources from elsewhere in the cages, but we didn’t put any plant material or other food sources in there. So we could expect to roughly substitute “consumption” for “metabolism” in this body size-metabolism relationship.

Metabolic theory was originally developed looking across all of life, from tiny organisms to elephants, so our twofold size difference among the two amphipod species isn’t that big. That makes it less surprising that the two species have the same per-biomass food consumption rates. But it’s still interesting.
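The arithmetic behind "less surprising" can be checked directly. Assuming the canonical 3/4-power scaling exponent from metabolic theory (an assumption for illustration, not a value estimated in my study):

```python
# Back-of-envelope check of the metabolic-theory expectation,
# assuming the canonical 3/4-power scaling of metabolic rate with mass.
mass_ratio = 2.0  # the larger species is roughly twice the mass

# Per-capita metabolic rate scales ~ M^0.75:
per_capita_ratio = mass_ratio ** 0.75   # ~1.68x, not 2x

# Per unit biomass, rate scales ~ M^(-0.25):
per_biomass_ratio = mass_ratio ** -0.25  # ~0.84x

print(per_capita_ratio, per_biomass_ratio)
```

So under this scaling, a twofold size difference predicts per-biomass rates only about 16% apart – close enough that roughly equal per-biomass consumption rates are not a shocking result.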

The second interesting result had to do with how the two species fed when they were offered mixtures of different kinds of leaves. Some leaves are “better”, with higher nutrient contents, for example. Both species had consumed these leaves at high rates when they were offered those leaves alone, and had comparatively lower consumption rates when offered only poor-quality leaves.

In the mixtures, one species ate the "better" leaves even faster than would be expected based on the rates when those leaves were offered alone. That is, when offered better and worse food sources, they preferentially ate the better ones. The other species did not exhibit this preferential feeding behavior.

I thought this was mildly interesting, but I realized it was even cooler based on a comment from one of our peer reviewers. They pointed out that this meant that streams inhabited by one species or the other might have different nutrient cycling patterns, depending on whether it was the species that preferentially ate all of the high-nutrient leaves, or not. We could link this to neat research by some other scientists. It was a truly helpful nudge in the peer review process.

So, while I had hated this project at one point, it’s finally published. And I think it was worth pushing through.

It was not a perfect project, but projects don’t have to be perfect for it to be worth telling their stories and sharing their data.

My #365papers Experiment in 2018

This year, based on initiatives by some other ecologists in the past, I embarked on the #365papers challenge. The idea of the challenge is that in academia, we end up skimming a lot of material in papers: we jump to the figures, or look for a specific part of the methods or one line of the results we need. Instead, this challenge urged people to read deeper. Every day, they should read a whole paper.

(Jacquelyn Gill and Meghan Duffy launched the initiative and wrote about their first years of it. But #365papers is now not just in ecology, but in other academic fields. Some of the past recaps I read were by Anne Jefferson, Joshua Drew, Elina Mäntylä, and Caitlin MacKenzie. Caitlin's was probably the post that catalyzed me to do the challenge.)

I knew that 365 papers was too ambitious for me, and that I wouldn’t (and didn’t want to!) read on the weekends, for example. I decided to try nevertheless to read a paper every weekday in 2018, which would be 261 days total.
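(For the curious, the weekday count checks out: a quick script confirms 2018 had 261 weekdays.)

```python
from datetime import date, timedelta

# Count the Monday-through-Friday days in 2018.
start, end = date(2018, 1, 1), date(2019, 1, 1)
weekdays = sum(
    1
    for i in range((end - start).days)
    if (start + timedelta(days=i)).weekday() < 5  # 0=Mon ... 4=Fri
)
print(weekdays)  # 261
```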

In the end, I clocked in at 217 papers (I read more than that, but see below for what I counted as a “paper” for this challenge) – not bad! I tweeted links to all the papers, so you can see my list via this Twitter search. I can confidently say that I have never read so many papers in a year.

In fact, I am guessing that this is more papers than I have read in their entirety (not skimming or extracting, as mentioned above), in my total career before 2018. That’s embarrassing to admit but I am guessing it’s not that unusual. (What do you think, if we’re all being honest here?)

This was a great exercise. I learned so much about writing, for one thing – there’s no better way to learn to write than to read a lot.

But the thing that was most exciting was that I read a lot more, and a lot of fun pieces. I had gotten to a place where there were so many papers that I felt I had to read for my own work, that I would just look at the pile, blanch, and put it off for later. Reading had become a chore, not something fun.


A Wordle of the paper titles. On my website it says I am a community and ecosystem ecologist, and I guess my reading choices reflect that! (I’d be interested to make a Wordle based on the abstracts, to see if there are more diverse words than the ones we choose for titles – but I didn’t make time to extract all the abstracts for that.)

That’s not a great way to do research, and luckily the challenge changed my reading status quo. If I was reading every day, I reasoned, then not every paper had to be directly related to my work as a community ecologist. There would be ample time for that, but I could also read things that simply looked interesting. And I did! I devoured Table of Contents emails from journals with glee and read about all sorts of things – evolution, the physical science of climate change, remote sensing.

These papers, despite seeming like frivolous choices, taught me a lot about science. Just because they were not about exactly what I was researching does not mean they did not inform how I think about things. This was incredibly valuable. We get stuck in our subfields, on our PhD projects, in our own little bubbles. Seeing things from a different angle is great and can catalyze new ideas or different framing of results. Things that didn’t make sense might make sense in a different light.

But I also did read lots of papers directly related to what I was working on. I think I could only do that because it no longer felt like a chore, like a big stack of paper sitting on the corner of my desk glaring at me. This challenge freed me, as strange as that sounds given the time commitment!

And finally, I tweeted each paper, and tagged the authors if possible. This helped me make some new connections and, often, learn about even more cool research. It helped me put faces to names at conferences and gave me the courage to strike up conversations. The social aspect of this challenge was fun and also probably pretty useful in the long run.

For all of the reasons I just mentioned, I would highly recommend this challenge to other academics. (It’s not just in ecology – if you look at the #365papers hashtag on Twitter, there are a lot of different people in different fields taking up the challenge.) Does 365 or 261 papers sound like too many? Set a different goal.  But make it ambitious enough that you are challenging yourself. For me, I found that making it a daily habit was key, because then it doesn’t feel like something you have to schedule (or something you can put off) – you just do it. Then sit down and read a whole paper, with focus and attention to detail. If you like it, why is that? Is the topic of interest to you? The writing good? The analyses particularly appropriate and well-explained? Is it that the visuals add a lot to the paper? Are the hypotheses (and alternative hypotheses) identified clearly, making it easier to follow? Or, if you don’t like it, why is that? Is it the science, or the presentation? What would you do differently?

One thing I didn’t nail down was how to keep notes. I read on paper, so I would highlight important or relevant bits or references to look up. But I don’t have a great system for transferring this to Evernote (where I keep papers’ abstracts linked to their online versions, each tagged in topic categories). In the beginning I was adding photos of each part of the paper I had highlighted to its note, but this was too time-consuming and I gave up. In the end it was like, if I had time, I would manually re-type my reading notes into Evernote, and if not, I wouldn’t. I do think the notes are valuable and important to have searchable, so this probably limits the utility of all that reading a little bit. It’s something I will think about how to improve for next year. The biggest challenge is time.

In addition to reading a lot, I kept track of some minimal data about each paper I read. I’ll present that below, in a few sections:

  • Where (journals) and when the papers were published
  • Who wrote them – first authorship (gender, nationality, location)
  • A few thoughts about last authorship
  • Grades I assigned the papers when reading – potential biases (had I eaten lunch yet!?) and the three papers I thought were best at the time I read them
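The minimal data above fits a very simple per-paper record. As a sketch (the field names and CSV layout are my own invention, not the actual spreadsheet I used), one could keep it analyzable like this:

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class PaperRecord:
    # Hypothetical per-paper metadata fields, mirroring the list above
    title: str
    journal: str
    year: int
    first_author_gender: str   # "F", "M", "joint", or "unknown"
    first_author_country: str  # nationality, best guess
    institution_country: str   # country of the listed affiliation
    grade: str                 # letter grade assigned right after reading

def save_records(records, path):
    """Write records to CSV so a year of reading stays easy to summarize."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(PaperRecord)])
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)
```

Keeping the record this small is what makes the habit sustainable; anything that takes more than a minute per paper to log tends not to get logged.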

I plan to try this challenge again next year, and the data that I summarize will probably inform how I go about it. I’ll discuss that a little at the very end.

What Did I Count as One of My 365 → 261 → 217 Papers?

First, some methodological details. For this effort, I didn’t count drafts of papers that I was a co-author on, although that would have upped the number of papers quite a bit because I have been working on a lot of collaborative projects this year. I also didn’t count reading chapters of colleagues’ theses, or a chapter of a book draft. And I didn’t count book chapters, although I did read a few academic books, among them Dennis Chitty’s Do Lemmings Commit Suicide?, Mathew Leibold & Jonathan Chase’s Metacommunity Ecology, Andy Friedland and Carol Folt’s Writing Successful Science Proposals, and a book about R Markdown. I started but haven’t finished Mark McPeek’s Evolutionary Community Ecology.

I did count manuscripts that I read for peer review.

Where The Papers Were Published

I didn’t go into this challenge with a specific idea of what I wanted to read. I find papers primarily through Table of Contents alerts, but also through Twitter, references in other papers, and searches for specific topics while working on my dissertation or on research proposals. This biases the papers I read to be more likely to be by people I’m already aware of or in journals I already read. Not entirely, but substantially.

We also have a “journal club” in our Altermatt group lab meeting, though it doesn’t function like a standard one. Instead, each person is assigned one or two journals to “follow,” and we rotate through each person summarizing the interesting papers in “their” journals once every few months (the cycle length depends on the number of people in the lab at a given time). That’s a good way to notice papers that might be good to read, and since we are a pretty diverse lab in terms of research topics, it introduces some novelty. I think it’s a clever idea by my supervisor, Florian.

Given that I wasn’t seeking out papers in a very systematic way, I wasn’t really sure what the final balance between different journals and types of journals would be at the end of the year. The table below shows the number of papers for each of the 63 (!) journals that I read from. That’s more journals than I was expecting! (Alphabetical within each count category)

In addition, I read one preprint on BioRxiv.

I don’t necessarily think that Nature papers are the best ecology out there; that’s not why it tops the list. Seeing Ecology, Oikos, and Ecology Letters as the next best-represented journals is probably a better representation of my interests.

But, I do think that Nature (and Science, which had just a few fewer papers) papers get a lot of attention and must have been chosen for a reason (am I naive there?). There are not so many of them in my field and I do try to read them to gauge what other people seem to see as the most important topics. I also read them because it exposes me to research tangential to my field or even entirely in other fields – which I wouldn’t find in ecology journals, but which is important to my overall understanding of my science.

I’m pleased that Ecology & Evolution is one of my top-read journals, because it indicates (along with the rest of the list) that I’m not only reading things for novelty/high-profile science, but also more mechanistic papers that are important to my work even if they aren’t so sexy per se. A lot of the journals pretty high up the list are just good ecology journals with a wide range of content.

There are a lot of aquatic-specific journals on the list, which reflects me trying to get background on my specific research. But there are also some plant journals on the list, either because I’m still interested in plant community ecology despite being in freshwater for the duration of my PhD, or because they are about community ecology topics that are useful to all ecology. It will be interesting to see if the aquatic journals stay well-represented when I shift to my next research project in a postdoc.

Society journals (from the Ecological Society of America, Nordic Society Oikos, British Ecological Society, and American Society of Naturalists, among others) are well represented. Thanks, scientific societies!

When The Papers Were Published

The vast, vast majority of papers I read were published very recently. Or, well, let’s be honest, because this is academic publishing: who knows when they were written? I didn’t systematically track this, but definitely noticed some were accepted months or maybe even a year before final paginated publication. And they were likely written long before that. But you get the point. As for the publication year, that’s below.

[Figure: year_published — papers by publication year]

This data was not a surprise to me, as a fair number of my paper choices come from journal table of contents alerts. I probably should read more older papers, though.

Who Wrote the Papers: First Authors

Okay, on to the authors. Who were they? As I mentioned for journals, I didn’t systematically choose what I was reading, so I was curious what the gender and geographic breakdown of the authors would be. Since I didn’t consciously try to read lots of papers authored by women, people of color, or people outside of North America and Europe, I guess I expected the first authors to skew male, white, and from those continents. I wasn’t actively trying to counteract bias in this part of the process, so I expected to see evidence of it.

I did my best to find the gender of all first authors. Of those for whom this was deducible based on homepages, Twitter profiles listing pronouns, in-person meetings at conferences, etc.:

  • 59 first authors were women
  • 155 first authors were men
  • 2 papers had joint first authors
  • 1 paper I peer-reviewed was double-blind (authorship unknown to me)

I’m fairly troubled by this. I certainly wasn’t going out of my way to read papers by men, and I didn’t think it would be this skewed when I did a final tally. If I want to support women scientists by reading their work – and then citing it, sharing it with a colleague, contacting them about it, starting discussions, etc. – I am going to have to be a lot more deliberate. I want to learn about other women scientists’ ideas! They have a lot of great ones. I’m going to try harder in the future. Or, really, I’m going to try in the future – as mentioned, I was not intentionally reading or not reading women this past year.

I initially tried to track whether authors were people of color, but it’s just too fraught for me to infer from Googling. I don’t want to misrepresent people. But I can say that the number of authors who were POC was certainly quite low.

I did, however, take some geographic stats: where (to the best of my Googling abilities) authors originally came from, and where their primary affiliation listed on the paper was located.

For the authors for whom I could identify nationality based on websites, CVs, etc., 31 countries were represented.

[Figure: FA nationality — first authors by nationality]

The authors were numerically dominated by native English speakers, though those showed some geographic diversity, coming from the US, Canada, the UK, Ireland, Australia, and New Zealand (I’m not sure if English is the first language of the South African author). Fifteen different European nationalities were represented. There were a number of authors from Brazil, one each from Chile, Colombia, and Ecuador, and Central America was represented by a Guatemalan author. Perhaps a surprise was that Chinese authors were underrepresented, whether at Chinese institutions (see below) or outside China; there were just five. Many countries with great scientists are not represented in this dataset at all.

When it came to institutional country, the field narrowed to 24 countries plus the Isle of Man.

[Figure: FA_inst — first authors by institution country]

While there were 78 American first authors, 90 first authors came from American universities/institutions. In Europe, Denmark, Sweden and Switzerland gobbled up some of the institutional affiliations despite having low numbers of first authors originally from those countries (this is very consistent with my experience in those places).

(Note: it would have been really nice to make a riverplot showing how authors moved between countries, but I was too lazy to build a transition matrix. Sorry.)
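For what it’s worth, the transition matrix a riverplot needs is not much work in itself: it’s just a count over (origin country, institution country) pairs per first author. A minimal sketch, with entirely made-up author movements rather than my real data:

```python
from collections import Counter

def transition_matrix(pairs):
    """Count moves from nationality to institutional country.

    `pairs` is an iterable of (origin_country, institution_country)
    tuples, one per first author; the result maps each pair to a count,
    which is the input a riverplot/Sankey library would consume.
    """
    return Counter(pairs)

# Hypothetical movements, purely for illustration
moves = [("Portugal", "Switzerland"), ("USA", "USA"), ("Portugal", "Switzerland")]
matrix = transition_matrix(moves)
print(matrix[("Portugal", "Switzerland")])  # 2
```

The tedious part is assembling the pairs in the first place, which is presumably where the laziness kicked in.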

This consolidation into fewer countries isn’t really surprising. It reflects that while small countries have great scientists, they often don’t have the resources for strong research funding or many universities. Some places, even those with traditionally strong academic institutions, are simply going through austerity measures. I think of many Europeans I know who decided that leaving their countries – Portugal, Spain, the Baltics and Balkans, and other places – was their best bet to be able to do the research they wanted to do, and have a job. I think of others, notably a friend in Slovenia who is staying there because he loves it, but whose opportunities are probably curtailed because of that.

I’d like to read more widely in terms of institutional location and author nationality, but it’s a bit overwhelming to make a solid plan. Reading more women is fairly straightforward. But when I think of all the places with good science but where I didn’t read a single paper, there are a lot of them. I can only read so many papers! So part of it will be recognizing that I can make an effort to read more diversely but I’m not going to solve bias in science just with my reading project. I need to make an effort that is meaningful, and then be okay with what it doesn’t accomplish.

Also, I don’t always know the gender, race, or nationality of an author before I Google them – this past year, I only did that after I read the papers. I might need to sometimes reverse that process, perhaps?

Do you have other ideas of how to tackle this? I’d love suggestions if anyone has them.

One thought is to more deliberately read from the non-North American, non-European authors in the journals I already read from. I already know I like the papers those editorial teams select. This would probably be the least amount of extra work required to diversify my reading, because I could stick to the same method of choosing papers (table of contents alerts), but execute differently on those tables of contents.

And a Bit About the Last Authors

I did not collect as detailed information about the last authors of each paper, but I did collect some. A big topic in academia is that women become fewer and fewer the higher you go in the academic hierarchy. I wondered if that was true in the papers I was reading.

There were fewer last authors because some papers were single-author. Of those that were multi-author, I filtered the dataset to only those where last authorship seemed to denote seniority (based on author contribution statements, lab composition, and relationships between authors) rather than being alphabetical or based on something else (on some papers with very many authors, all the senior authors were listed at the front of the author list). Of these,

  • 19 senior last authors were women
  • 105 senior last authors were men

Yikes!

That’s all one can say! Yikes!

Like the first authors, the last authors came from 31 different countries… but some different ones were represented (Venezuela, Serbia, India). They represented institutions in a few more places than the first authors, 28 different countries vs. 24 for first/single authors. I’m not sure what to make of that, especially since this is from a smaller subset of papers (since the single-author papers were removed), but obviously collaborative research and writing is alive and well.

Ratings and Favorite Papers

Right after I read each paper, I assigned it a letter grade. Looking back through my record keeping, I am less and less convinced that this is really meaningful. I think it had to do a lot with my mindset at the time, among other things. Did I just have a stressful meeting? Was I impatient to finish my reading and go home? Was I tired? Maybe I was less receptive to what I was reading. Or conversely, maybe if I was tired and a little distracted I was less likely to notice flaws in the paper. Who knows. Anyway, “B” was the grade I most frequently assigned.

[Figure: grades — distribution of assigned grades]

I didn’t keep detailed notes on why I felt different grades were merited, but I can make a few generalizations. Quite a number of the papers I gave poor grades to suffered from methods that weren’t well enough explained. Either I couldn’t follow what the authors did, or important statistical information wasn’t included (or appeared only in the supplementary information when it was so essential to understanding the work that it really needed to be in the main text). In particular this included some papers using fancy, cutting-edge methods… just using those statistics or analysis techniques doesn’t make your paper magic. You still need to say what those analyses show and what they mean ecologically, and convince me that the fancy stats actually lead to a better understanding of what’s going on!

In some ways this is not the authors’ fault – journals often press for shorter word counts, and some don’t even publish methods in the main text, which is a total pain if you’re a reader. It’s also one of the biggest things I struggle with when writing – you know perfectly well what you did, and it can be hard to see that, to an outsider, your methods description seems incomplete. I get it! Reading papers where you don’t understand the methods is always a good cue to think about how you present your own work.

I assigned three papers grades of “A+”. Were they better than the ones I deemed “A”s? I’m not sure, but at the time, whether because of my general mood or their true brilliance, I sure thought they were great. They were:

I read a lot of other great papers too! But looking back, I can say that these were among my favorites, all for different reasons. I could go and add more papers to a “best-of” list but I’ll just leave it at that.

Recap!

Besides all the great reasons to do this challenge that I mentioned in the opening, this was pretty interesting data to delve into. I think I will try to keep doing the challenge in 2019, and I am currently thinking about how I choose which papers to read and if there are good strategies to read more diverse authors. I’m happy with the diversity of research that I read, but I would be happier if the voices describing that research were more diverse, to reflect the diversity of scientists in our world.

Do you have ideas about that? Comment below.

This was the final year of my PhD, and so in some ways a great time to do a reading challenge. It probably would have been more helpful if I had done this in the first year of my PhD, but hey, too late now. This year I wasn’t doing lab work, just writing and analyzing, so it was easy to fit in a lot of reading. It’s not good to stare at a screen writing all day, and I prefer to read on paper, so it was often a welcome break.

I don’t know what my work life will be like next year, so I will see how many papers I end up reading. It could be more, as I start a new project and need to get up to speed on a new subfield. Or it could be less as my working habits change. I’ll just do my best and adapt.

Finally, I’m thinking about whether there’s additional data I should track for next year’s challenge. Whether there is a difference between first and corresponding authors might be interesting. I’d welcome other suggestions too, but only if they don’t take much work to extract!

Science Fundraising and Day-In-The-Life

I recently took part in the Earth Science Women’s Network “science-a-thon” fundraiser, where over 150 scientists all over the world gave play-by-play snapshots of their days over social media. As I shared what it’s like to be a research scientist, I also asked for donations to support the ESWN, a peer-mentoring group for women in all earth-related fields of science (from ecology, my field, to geology, atmospheric sciences, you name it). Their goals go from career development and networking to teaching scientists to better engage with the public and with policymakers.

I used Storify to make a recap of my day, drawing from my own tweets as well as posts from many of the other 150 scientists, and adding some commentary about why I thought this fundraiser was so important. I hate asking people for money or anything else, so it was quite the experience.

What’s it like to be a research scientist? Storify and WordPress don’t play well together so I can’t embed the resulting post, but please click over to here to see it: https://storify.com/chelsl/my-scienceathon-day-and-what-i-learned-raising-mon