I like and upvoted mcottle's answer, but I want to cover some other dynamics and arguments that the other answers haven't yet brought up.
First, implicit in mcottle's answer is the fact that below a certain skill level, some problems are simply impossible. Unfortunately, the way you find this out is by your team trying and failing, which is very expensive. After failing, there are two lessons you can draw. One is that you need more competent developers, so you hire them and complete the project significantly over budget and behind schedule, but at least you're prepared for the future. The other is that such a project is "too hard" for your team and shouldn't be attempted again, i.e. you give up on the project and, effectively, on any similar ones. Of course, it will rarely be phrased as "we're too dumb to do this"; instead it will be rationalized as "our systems are very complex" or "we have a lot of legacy code" or some other excuse. This latter view can significantly warp a company's sense of what's possible and of how long and expensive development should be. "If it takes a year to fail at X, maybe six months for the much simpler Y is reasonable."
One question is: what exactly is your company's plan? Okay, they hire cheap junior programmers. Three years pass; now what? What do they do with a developer who's been with them throughout those three years? Did they just never give him/her a raise? The options here are the following:
- They give competitive raises to retain employees, in which case why would they have a problem paying senior-developer rates now? (I'll return to this.)
- They let pay stagnate, which means they will eventually "boil down" to employees who lack drive and/or skill.
- They actively push out the more senior employees.
The latter two cases imply a lot of employee turnover, which means loss of institutional knowledge and continuously paying to ramp up new employees. In the second case, you are essentially selecting for bad developers, so costs will rise in the form of lengthening schedules. The way this plays out: everything is going fine on project X until Jim, one of the better developers, suddenly leaves because he hasn't gotten a raise in two years; now the project will "understandably" take significantly longer while you hire and train new junior developers who (presumably) won't be as good as Jim was. This is how expectations get recalibrated.
Even in the case that competitive raises are being provided, if all you have are junior developers, where and how are they supposed to learn? You're basically hoping that one of them will learn good practices on their own in spite of their work environment, and will eventually mentor the others (as opposed to leaving for greener pastures). It would make a lot more sense to "prime the pump" with some good developers. More likely, you'll develop a culture of Expert Beginners, and you'll end up paying senior-developer rates to people who are only slightly better than junior developers and are culturally toxic.
One benefit of good developers, and especially of very good developers, that I'm surprised no one else has mentioned: they can readily be a multiplicative factor. It may well be the case that a junior developer and a senior developer take the same amount of time to make a table. However, a good developer won't do that; they'll make a table generator that reduces the time for everyone to make a table. Alternatively or additionally, they'll raise the ceiling of what's possible for everyone. For example, the developers who implemented Google's MapReduce framework were likely extremely qualified, but even users of MapReduce who are utterly unable to build a massively distributed version of their algorithm on their own can now do so easily with MapReduce. Often this dynamic is less blatant: better source control, testing, and deployment practices make everyone better, but the effect is harder to trace to a specific person.
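To make the "multiplier" idea concrete, here's a minimal sketch of what that hypothetical table generator might look like (the `render_table` helper and the field names are purely illustrative, not from any particular codebase): one good developer writes it once, and every table anyone makes afterwards becomes a one-liner instead of a page of hand-written markup.

```python
from html import escape

def render_table(rows, columns):
    """Render a list of dicts as an HTML table.

    rows    -- list of dicts, e.g. [{"name": "Ada", "role": "Engineer"}]
    columns -- ordered list of keys to show as columns
    """
    header = "".join(f"<th>{escape(str(c))}</th>" for c in columns)
    body = "".join(
        "<tr>"
        + "".join(f"<td>{escape(str(r.get(c, '')))}</td>" for c in columns)
        + "</tr>"
        for r in rows
    )
    return f"<table><tr>{header}</tr>{body}</table>"

# Every subsequent table is now a one-liner for the whole team:
employees = [{"name": "Ada", "role": "Engineer"}, {"name": "Jim", "role": "Senior Dev"}]
print(render_table(employees, ["name", "role"]))
```

The junior developer's hand-written table and the senior developer's generated one look the same to the user; the difference is that the second one made every future table cheaper for everyone.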
To argue the other side for a bit: maybe the higher-ups are right. Maybe more experienced developers aren't necessary. If that's the case, though, it suggests that development isn't a significant part of the company, and I would eliminate in-house developers entirely and use off-the-shelf software or hire contractors on demand. It might be worth exploring why they don't just use contractors rather than an in-house team; if you're going to have a lot of employee churn anyway, then ramping up contractors shouldn't be a problem.
> Let's say the experienced programmers costs us 4x as much as the beginners.

That is unlikely. The ratio is more like 2x or 3x. If you're paying programmers that poorly, what you're really doing is hiring amateurs and training them to do the job you need, only to have them leave your company for greener pastures once they get a minimal amount of experience under their belt. – Robert Harvey Mar 16 '17 at 00:25

> Both are basically able to complete the seemingly simple things in the same amount of time.

Well, the experienced programmer saves substantial time in the long run because you didn't have to give him more specific instructions on exactly what to do. – Robert Harvey Mar 16 '17 at 00:25