
On the one hand, I'd like to produce a few graphs every week so my project can move forward. Taking a month off to learn a new programming language, do a literature review, or work through a relevant textbook would hurt the pace of my research, and my advisor would wonder why I haven't done anything for a month.

On the other hand, if I take time off to learn a new skill, it may make my research faster and more efficient in the future.

What is the best way to balance these competing demands?

ff524
Ben Bitdiddle
  • Why do you need to take time off to learn a new skill? Can't you do both at once (i.e., N% of working time on primary research and 100-N% of time on learning new skills)? – ff524 Dec 07 '14 at 20:39
  • Read on the bus, in the bathroom, or during lunch. Learn a new language on Fridays (aka, what ff524 said). – Dave Clarke Dec 07 '14 at 20:48
  • @ff524 Sometimes easier said than done. The question gives examples in a computational context, but as an experimentalist, I often find that doing a new kind of experiment takes me 5-10 times as much time as doing a familiar one (that is inferior but within my comfort zone). A major factor when experiments can take hours, or even days, to complete! – Superbest Dec 08 '14 at 04:33
  • Your research progress is measured by the number of graphs produced? Curious. – Raphael Dec 09 '14 at 17:36
  • Teach it. Arrange a seminar on the method; this will force you to learn it and to organise it in your mind. Practice will come by using it, but an overview is what you need initially, to make the most of it. The good thing is that, if making a seminar out of it doesn't seem worthwhile to you, you know automagically that you don't want to spend time learning it systematically. – Captain Emacs Apr 14 '16 at 08:49

4 Answers


A common fallacy I observe in students I supervise is that they think they need to spend some time "learning X" before they can use X productively in their research.

If you are doing research, the most efficient way to "learn X" (where X is a programming language, methodology, or subject area that may be of use in your research) is almost always to learn it by immediately applying it directly to your research.

In other words, I tell my students that if they are "taking time off" to learn something before starting to use it in their work, they are doing it wrong.

I would give the same advice to you: instead of taking time off to learn a new skill, start applying it to your research right now. You might be a little slower than usual for a couple of weeks (because you aren't comfortable with the new skill yet), but you'll still be making forward progress on your research, while learning the new thing.

Edit: This applies even more if the thing you are learning is a fundamental skill, and not an "extra" technique. Fundamental skills include things like writing readable code, scientific writing, keeping good notes, etc. The best way to learn these things is to actively and consciously work on them as you do research. It's not generally effective to take "time off" to read some books, then go back to doing research and start practicing the things you read about.

If it's a new skill that can't be directly applied to your research, then you definitely shouldn't take time off to learn it. But you might consider spending time on it during intervals of downtime. You can't spend 100% of your working time on your primary research anyway (mental fatigue sets in at some point), so spend time learning the new skill when you need a break.

ff524
  • I disagree with this advice for the specific case of learning programming languages. The problem is that each language comes with a mindset and some idioms. Without acquiring those, code will read like a mechanical translation from some language the programmer already knows. I always learn a programming language by first working through a beginner book or tutorial, before trying to write any code I intend to keep. – Patricia Shanahan Dec 07 '14 at 21:55
  • @PatriciaShanahan I agree with you for someone who is a software engineer or a professional programmer, but not for someone who is a researcher. The purpose of code written in a research setting is generally very different from code written in another professional setting (also see Why do many talented scientists write horrible software) – ff524 Dec 07 '14 at 21:57
  • I agree with @PatriciaShanahan here. The mentality of "just do what works right now, forget about doing it right" is why astronomy is plagued by IDL and unusably terrible code. Research code doesn't have to be as shiny as enterprise software, but in the long run the field is harmed by having people learn bad techniques, which is what the mentality you suggest leads to. Researchers do not work in a vacuum, and creating research code that even your own collaborators can't understand is less than worthless. – Dec 07 '14 at 22:05
  • @ChrisWhite I don't think "writing code that others can understand" requires taking a month or two off from doing research to learn how. Writing readable code is different from writing idiomatic code for a particular language, which is what Patricia is referring to. Skills related to writing decent, readable, re-usable code are not usually language-specific. – ff524 Dec 07 '14 at 22:07
  • I cannot stress the first paragraph enough. The most striking case I have seen is "I cannot use the printf function in MATLAB to round the numbers until I know C, because it comes from C". – Davidmh Dec 07 '14 at 23:31
  • @ChrisWhite part of the idea is to never stop improving. Maybe your first few weeks in one language are producing horrible IDLesque monsters, but once you have picked up the basics, you can keep reading up, get better, and produce nicer code. And in the process, go back and improve parts of the old code you still use. – Davidmh Dec 07 '14 at 23:34
  • Also known as "on-the-job training." Honestly, about 75% or more of writing software is self-training, in my experience. And generally speaking, having a grasp of the basic principles of software is all you need to teach yourself a new language. @ChrisWhite Sure, your code won't be pretty when you start a new language, but the only way to really be proficient in a language is to use it a long time. Tutorials, training, and whatever other sandboxes just won't cut it for that. Also, if you think "enterprise code" is "shiny," you're deluding yourself. Most code is bad, in my experience. – jpmc26 Dec 08 '14 at 04:56
  • An example where @ff524's advice would not work is learning functional programming when you have been using imperative languages for the most part. This is precisely the reason why so many people struggle with Haskell, i.e. they jump into it thinking they can make it work without understanding the whole shift in paradigm. – RJ- Dec 08 '14 at 05:31
  • @b70568b5 True! It probably also wouldn't work for, say, a history major learning calculus for the first time, or something else similarly paradigm shifting. – ff524 Dec 08 '14 at 07:07
  • I think this approach should be avoided, for when we try to apply the new skill immediately, we try to make things work rather than understand what we are doing. That is, we acquire approximate knowledge, just enough to get the task done, but what we'll miss is the exact knowledge. Approximate knowledge will get the current work done, but will prove costlier when we need the exact knowledge at some point in the future. I'd follow the approach suggested by @nirum while gathering the exact knowledge I'll need for the job. The key will be keeping a constant check on the new learning, which needs to be mastered. – Sundeep Dec 08 '14 at 14:11
  • In my school and work experience I agree. You actually need both approaches: learn-before and learn-as-you-go. However, I have found Western cultures tend to over-value learn-before and miss out on learn-as-you-go. I think it's because we go to school so long before we can earn a living. – Cort Ammon Dec 08 '14 at 15:55
  • This is really great advice. It depends on the context, but in biochemistry you have a ton of downtime between experiments. It's hard to sit there and read research, but it needs to be done. The other thing is side projects: can you do some cloning while your SDS-PAGE is running (which takes us 2 days including immunoblot)? Looking back, I would look for new skills that help your research project as a whole, and if a new side project helps the main project, then do it. Time management is a huge part of the doctorate degree. – Rob Dec 08 '14 at 20:12
  • A sufficiently determined programmer can write (and screw up) a Java program in any language. – zxq9 Dec 09 '14 at 02:26
  • -1, this is really not the way to learn to do something the right way. Of course people should apply techniques, programming languages, etc. as they can. Learning something correctly can take a while now, but often has a huge pay-off in the future. For example, learning how to properly use Green's functions and some of the related mathematics isn't necessary in much of optics, but can be essential when faced with new problems requiring derivation directly from Maxwell's equations. – daaxix Dec 09 '14 at 18:41

I think this is an instance of a more general problem, of trading off short-term efforts to hit particular milestones vs. longer-term investments. Those longer-term investments might be learning a new skill, but might as easily be organizing your thoughts, refactoring a code base, improving your work environment, hunting through the literature, etc.

When you can do both at once, it's ideal, but often that's not the case. If you focus on the short term, you end up in danger of neglecting the forest for the trees. If you go for the long term, you might end up engaged in some serious yak shaving.

I personally struggle with this quite a bit, especially when you also consider the additional responsibilities of writing papers and pursuing grants. The best solution I have found so far is essentially duty cycling. On any given day, I will decide which task is my primary goal for the day, and just keep switching to make sure that neither short-term nor long-term work is getting unduly neglected.

jakebeal

This is something I think about a lot as well. Even if you learn new things while doing research, you will still be slower and there are always things you want to learn that aren't directly connected to your ongoing projects.

I asked my advisor about this trade-off once, and his recommendation was to do enough work to get to the next stage (i.e., do enough as a grad student to get a good post-doc, enough as a post-doc to get a faculty job) and then spend the rest of your time learning and thinking about new things. I'm actually quite fond of that answer, but the key is in knowing how much is enough!

Niru
  • Excellent advice! I think the key is to keep a check on the learning direction for the new skill, and to ask time and again whether I have already learned what I need for the current job. – Sundeep Dec 08 '14 at 14:02

Have other people review what you produce.

Learning by doing is wonderful when it works, but if you don't show your work to others, you're trusting the judgment of someone who doesn't actually know what they're doing, aren't you? :)

In the agile world of software development, the typical time box is two weeks. Produce something (in your case, a graph) in two weeks, then subject it to review by other people. If it fails, go back and fix it. Otherwise, move on to the next graph. Learn what you need to know to make each thing as you go. Sure, your first few graphs will suck compared to your later ones, but worry about that when you're sure you can do better AND you have time.

You can fiddle with the two-week time box, but keep in mind that the more time you have between reviews, the more rope you've got to hang yourself with. It really stinks to spend two months making something only to be told it's worthless or already exists. You can try to make the time box smaller; the risk there is that your reviewer will get sick of helping you if you ask for reviews too often.

A neat trick here is that almost everyone can be effective as a reviewer, even when they aren't an expert in what you are doing. So you can take your work before many people to get feedback, so long as you can get them interested.

This way you get results and learn as you go. You will learn mostly what you need to know to finish that project. If you are feeling the need to take time off to learn new skills, then what you're really asking for isn't time off. It's another project, one that needs those skills.

Sometimes a project runs you into an area where your skill set is weak. That's bound to happen eventually. You can respond by panicking and putting everything on hold while you fill in your skill set, or you can get some help from someone and create a plan to learn and build just what you need to get back to your project.

If, say, you want to learn something (that helps you make graphs), then great: what are you going to make while you learn it? If you can't produce a graph in two weeks, what can you produce? Break the problem down until the first chunk is something you're confident you can do in two weeks. Whatever it is, you should also find a way to test it. If it's a language, and you've made something that works, you can get it peer reviewed at https://codereview.stackexchange.com/. So yeah, you're right back into that two-week time box. When time's up, you darn well should have made something to show someone.

Sometimes you just need some more freedom to explore. A long, demanding project can become a tyrant in your life. It will force you to learn what it needs, not necessarily what you find interesting. Taking time away from it and working on something related can be good to help you refresh, but don't fool yourself into thinking you're accomplishing something while doing this. At most, you're just learning something.

I've been programming professionally in Java for about 4 years now. You might think I'd be done learning it by now, but no, I haven't. I've been programming in some language or another for decades. You might think there is some language that I'm done learning, but no, there isn't.

I'd hate to think what would have happened if I waited to be done learning a language before producing something in it. Probably not a career.

candied_orange