Picking what to study: postmodern approach

Science, in general, consists of two steps performed in a loop: pick something to study; study it; then pick the next thing to study, and so on.

The blog Extreme Sciencing is focused on the second step of this process: how exactly do we perform the study? Are there better approaches to the scientific process? Can we borrow techniques from other fields of human experience?

The first step, picking what to study, is very important as well. There are roughly two approaches to finding something to investigate, especially at the beginning of a career:

  1. Postmodern approach: “Listen to the conversation”, decide what is missing or wrong, and fix it, thus moving the conversation forward
  2. Artistic approach: wait for enlightenment from a higher power to show you the path, and follow it, staying untethered from a mainstream that can be wrong or merely a fashion

This is a philosophical distinction, and thus there cannot be “the best” approach. But the distinction is critical to understanding the modern problems of science.

The first approach (where we read the current literature, listen to scientists, and figure out what is missing) has at least two problems. First, the “conversation” can be extremely noisy, or just plain wrong. Papers still often fail to explain exactly how the experiments were performed or to provide enough data for replication! So when you read a paper claiming that something is correct or wrong, that paper often cannot be trusted. Accumulating noisy information can produce a more reliable picture, but with the ultra-specialization of science there might not be enough samples to average over. Secondly, the conversation can be about something shitty, like Deep Fakes or other bad applications of AI. As Harvard CS professor James Mickens says: “If you are a PhD student studying Deep Fakes – drop out now. Your work will not make society better, it will only be used for revenge porn and disinformation“. And AI today has seemingly infinite applications, including some that perhaps shouldn’t exist, such as its use in the US justice system.

The problem with the second approach to finding a scientific question – believing in a higher Truth and being guided by some external power – is that it is too easy to become untethered from reality. Not only does it create an opportunity for pseudo-science and general crankery, but it also creates an unhealthy balance of power. How many times have you heard “You are working in science, you can’t do it for money!” or some other appeal to passion for the question? Once we accept the existence of a higher power, it becomes easy to forget about human dignity and the fact that we have to serve ourselves first and foremost.

In conclusion, a modern approach to picking scientific questions should combine reliance on the existing literature and discussions of what is important with some filter that highlights what has the potential to benefit society and what can be harmful.

Locating the problem: External and Internal responsibility

In graduate school we are often told to be self-sufficient, motivated, and responsible for our own success. That mindset emphasizes that the source of our problems (and of their solutions) is internal.

Meanwhile, as we’ve discussed in Solving the problem at the correct level, you can’t solve external problems by working on the internal side of the equation.

It is your job to work hard, try to be smart, and put good effort into your work. But there are many people around you – mainly mentors – whose job is to help you out. If they don’t, then you can’t solve that problem by working harder. You have to fix the External: by changing mentors, or by managing expectations in a way that minimizes their impact.

The same approach applies to assigning blame and finding the source of a problem. Some people, whether in grad school or in the real world, will lay the blame for their mishaps onto others, making it external: “If only X and Y did better! If only my mentor had told me to apply for Z!”

Other people “take responsibility” and carry the weight alone, internalizing everything that happens: “I should’ve known better than to trust them!”

The truth, as often happens, lies between the two approaches. I have gradually moved from internalizing to externalizing the issues of my PhD experience, and am now getting closer to a more balanced view.

After we figure out who is responsible for something, we need to understand how to control it and introduce change, which can also be external or internal.

Adapted from Derald Wing Sue, I guess

Management training in academia as incident response training

Academics, including PIs, receive very limited management training. The common understanding is that a new PI will pick up these skills from their previous advisers, but academia should stop being apprenticeship-based.

The IT world provides us with an example of how to do such training: tabletop scenarios, such as @badthingsdaily.

This Twitter account provides examples of IT incidents that can and do occur in practice, from very specific cases like “your network has been compromised” to “your CEO has been arrested in a foreign country famous for kidnapping”, and many more.

The goal of these exercises is not to come up with a perfect “playbook” for when something bad happens; academia is too heterogeneous for that. But they should start the conversation and provide material for figuring out where the weaknesses in the process are. For the academic world, specifically running a lab, “bad things” include the following (one way to turn this list into a recurring exercise is sketched after it):

  • an international PhD student can’t get their visa renewed and has been deported
  • a PhD student hits their 7th year without a single first-author paper out
  • a global pandemic hits, and the lab has to shut down for 2 months
  • your paper has been found to contain duplicated images in its figures
  • an experiment performed by your lab cannot be reproduced by a trusted collaborator
  • you were not able to secure funding for next year; you have budget for 6 months
  • a project developed by a PhD student has just been scooped and published by another group
  • you (the PI) have been diagnosed with clinical depression
  • your lab members want to know what you have done to advance under-represented minorities (URM) in science and decrease systemic bias
  • your lab tech, who places all the orders and prepares reagents, just quit with two weeks’ notice
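As a minimal sketch of how a lab could run such drills (a hypothetical Python snippet, not an existing tool; the scenario wording is shortened from the list above), one could simply draw a random “bad thing” to walk through at a lab meeting:

```python
# Hypothetical sketch: pick a random tabletop scenario for a lab-meeting drill,
# in the spirit of @badthingsdaily. Extend SCENARIOS with the full list above.
import random

SCENARIOS = [
    "an international PhD student can't get their visa renewed",
    "a trusted collaborator cannot reproduce one of your lab's experiments",
    "funding for next year fell through; the budget covers 6 more months",
    "the lab tech who places all orders and prepares reagents quits with two weeks' notice",
]

def draw_scenario(rng: random.Random) -> str:
    """Return one scenario to discuss: who acts first, what resources exist,
    and where the current process would break down."""
    return rng.choice(SCENARIOS)

if __name__ == "__main__":
    print("This week's tabletop exercise:", draw_scenario(random.Random()))
```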

Academic science is not an art, and not a craft, and not apprenticeship-based

Recently I asked whether academia has conceptual frameworks for project (and general sciencing) management like software development does. One comment was that “science is more like a craft” and that extra bureaucracy is unnecessary. Some people brought up that science is an apprenticeship-based activity, where the next generations learn from their elders.

The academic scientific process would greatly benefit from being treated like business projects. Yes, we face a lot of uncertainty. Yes, we need to be free to explore. But even such an art as cooking has come up with concepts, such as Salt, Fat, Acid, Heat, or the idea that “baking is a precise science“. There are a lot of concepts that cooks have adopted universally, without trying to pass off cooking a steak as some sort of magick. It is not easy and it requires practice, but it is still doable.

Similarly, in science there is of course a huge component of luck, skill, experience, and serendipity. Meanwhile, there are good practices that have to be openly adopted and discussed as a “standard of practice”. However, the discussion of these practices should have only one goal: making communication easier, not standardizing the science. Similar to A Pattern Language, we need “A Science Language” that will bridge the gap between new scientists and those who have worked in the field for years.

If you wish to contribute to the creation of that language, try to answer, in written form: what does “PhD student” mean to you, what is a “PhD thesis” supposed to look like, how can we manage a lab or its finances, and what are the potential roles people can hold inside academic research (e.g. consultant, technician, research professor, etc.)? Academic researchers need to discuss these terms and agree on some framework to think and communicate about them.

Authorship ordering: “marketize” academic currency?

Authorship, together with citations, works as academic currency. This is how we know something is valuable: people in the community discuss it.

When it comes to authorship, however, things get trickier, as we have only three categories for a standard scientific paper: first authors, last authors, and “middle” authors. Mapping the order of names onto contributions is not trivial.

PhD Comics issue #562

We can imagine treating a paper as a company, and authorship as its ownership structure. Each person would then own part of the company (the paper), and that share should be made visible.

We all know papers where the last author in that scheme would “own” 1% or even less. And we know papers where people who should have gotten 30% of the ownership are merely “acknowledged” at the end.

But academic papers are not companies or products on a free market. We don’t have a Securities and Exchange Commission to hold people accountable; it has to start within the community. Accountability could be established by secret pre-registration of the paper. We often know that our work will result in a paper and a pre-print. Why not tell bioRxiv early on: “Hey, we are writing this. Authorship is split four ways, 25/25/30/20%, between these authors”?

In the case where author A wants to bring in a collaborator, they can negotiate with the other stakeholders about the fraction of the paper that will be given to the collaborator. If somebody decides to quit the project, their shares can be redistributed among the other authors. Splitting “shares” of the paper also reminds us that inviting more people to a project comes with a cost, but can be greatly beneficial in increasing its value (just like any investment).
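To make these mechanics concrete, here is a minimal sketch (a toy Python “cap table”; the class, author names, and numbers are hypothetical, not an existing tool) of how shares could be diluted when a collaborator joins and redistributed when an author quits:

```python
# Illustrative sketch only: a toy "cap table" for paper authorship shares.
# Class name, author names, and numbers are hypothetical.

class PaperShares:
    def __init__(self, shares):
        # shares maps author name -> fraction of the paper; fractions must sum to 1
        assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 100%"
        self.shares = dict(shares)

    def add_collaborator(self, name, fraction):
        """Give a new collaborator `fraction` of the paper,
        diluting existing authors proportionally."""
        scale = 1.0 - fraction
        self.shares = {a: s * scale for a, s in self.shares.items()}
        self.shares[name] = fraction

    def remove_author(self, name):
        """An author quits; redistribute their share among the rest,
        proportionally to current holdings."""
        gone = self.shares.pop(name)
        remaining = sum(self.shares.values())
        self.shares = {a: s + gone * (s / remaining) for a, s in self.shares.items()}

# Example: the hypothetical 25/25/30/20 split mentioned above
paper = PaperShares({"A": 0.25, "B": 0.25, "C": 0.30, "D": 0.20})
paper.add_collaborator("E", 0.10)   # existing authors are each diluted by 10%
paper.remove_author("D")            # D quits; their share is spread proportionally
print({a: round(s, 3) for a, s in paper.shares.items()})
```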

While far from perfect (and probably impossible to implement), that scheme already offers something of value: a language to discuss paper authorship situations. For example, a PI can state from the beginning: “This paper is not my responsibility, so postdoc X will have 51% of the shares.” It makes it clear from the start who is really in charge.

There are a lot of problems with trying to treat papers as products or commodities. While knowledge is a commodity today, it is very hard to measure, break into pieces, and evaluate. Using monetary language, however, can be useful in managing the writing and publication process.

Solving the problem at the correct level

From the twitter thread

Problems can be roughly organized into a hierarchy of complexity. The insight here is that you can’t solve lower-tier problems using solutions at a higher-tier level. For example, if your university doesn’t support you because of bias and xenophobia, you will not be able to solve that problem by “sciencing” harder or being smarter.


The bottom line is not that we can’t improve things top-down, but that it is much harder to do. A problem needs to be addressed at the appropriate level.