12

I was recently on a project where, during the release, we realized the application didn't work in Production. It works in all the other environments, but because we have a separate release team and we cannot set up the servers and environments ourselves, we have no visibility into how they are configured.

We suspect that Prod has some user permissions in its account or IIS settings that are different, so we are working through it now.

This whole thing has been a learning experience for me, and I don't want it to happen again. How different should these environments be? I always thought that PreProd should be an identical copy of the Prod environment: using a copy of the same database, using a copy of the same user account, installed on the same servers, etc.

But how far should I take it? If the website is externally facing, should PreProd be externally facing? What if the website has components that don't require a user account or password to navigate to? Is it still okay to expose it to the outside world?
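
For reference, one low-tech way to surface differences like this - given that we can't inspect the servers ourselves - would be to ask the release team to export each environment's settings and diff the exports. A minimal sketch, assuming flat JSON dumps of key/value settings (the file names and format are hypothetical):

    import json

    # Hypothetical exports: each team dumps its environment's settings (app settings,
    # IIS bindings, the identity the app pool runs as, ...) to a flat JSON file.
    with open("preprod_settings.json") as f:
        preprod = json.load(f)
    with open("prod_settings.json") as f:
        prod = json.load(f)

    # Print every setting that is missing or different between the two environments.
    for key in sorted(set(preprod) | set(prod)):
        before = preprod.get(key, "<missing>")
        after = prod.get(key, "<missing>")
        if before != after:
            print(key + ": preprod=" + repr(before) + " prod=" + repr(after))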

RoboShop
  • 2,780
  • Everywhere I've worked, Pre-Prod was a direct copy of Production, with the exception that the database(s) would be a week old. – Nickz Jun 30 '11 at 00:16
  • @Nick: I don't mean just the code base, I mean the entire setup of the whole environment. – RoboShop Jun 30 '11 at 00:17

5 Answers

11

I think the best practice for this is the blue-green deployment approach, described by Jez Humble and David Farley in their book Continuous Delivery and by Martin Fowler in his blog post Blue Green Deployment.

The premise is very simple. From Martin Fowler's post:

Blue Green Deployment

The blue-green deployment approach ... [ensures] you have two production environments, as identical as possible. At any time one of them, let's say blue for the example, is live. As you prepare a new release of your software you do your final stage of testing in the green environment. Once the software is working in the green environment, you switch the router so that all incoming requests go to the green environment - the blue one is now idle.

Blue-green deployment also gives you a rapid way to rollback - if anything goes wrong you switch the router back to your blue environment.

This approach would solve your problem of not having identical pre-production and production environments, and it would streamline your deployment strategy as well.
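
To make the "switch the router" step concrete, here's a rough sketch in Python of one way the flip could work: health-check the idle environment, then repoint the reverse proxy at it. The addresses, the include-file layout and the use of nginx are assumptions for illustration, not something prescribed by the book or the blog post.

    import subprocess
    import urllib.request

    # Hypothetical upstream addresses for the two identical production environments.
    ENVIRONMENTS = {
        "blue": "http://10.0.0.10:80",
        "green": "http://10.0.0.20:80",
    }

    # File the reverse proxy includes inside its location block to decide where
    # live traffic goes (an assumed nginx layout, not the only way to do this).
    ACTIVE_UPSTREAM_FILE = "/etc/nginx/conf.d/active_upstream.conf"


    def healthy(base_url):
        """Return True if the environment answers its health-check endpoint."""
        try:
            with urllib.request.urlopen(base_url + "/health", timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False


    def switch_to(target):
        """Point live traffic at 'blue' or 'green', but only if it is healthy."""
        address = ENVIRONMENTS[target]
        if not healthy(address):
            raise RuntimeError(target + " failed its health check; leaving traffic where it is")

        # Rewrite the include file and reload the proxy so the change takes effect.
        with open(ACTIVE_UPSTREAM_FILE, "w") as f:
            f.write("proxy_pass " + address + ";\n")
        subprocess.run(["nginx", "-s", "reload"], check=True)


    if __name__ == "__main__":
        # Final-stage testing happened on green; flip live traffic to it.
        # Rolling back is just switch_to("blue").
        switch_to("green")

The important property is that the flip (and the rollback) is a single, fast operation, while both environments stay as identical as you can make them.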

Paddyslacker
  • 11,080
  • +1 for the cool diagram – Nickz Jun 30 '11 at 00:19
  • mmm not sure about keeping the database in sync. It'd be difficult. What if the transaction came via your preprod server? Would that be reflected in the production db? – RoboShop Jun 30 '11 at 05:54
  • As written, that's very expensive. You have to duplicate all the hardware necessary for live production just for testing. But yes, cool diagram. – Michael Lorton Jun 30 '11 at 06:10
  • @Malvolio: there's nothing there which says you need to duplicate the hardware. – FinnNk Jun 30 '11 at 07:34
  • 1
    TECHNICALITY, n. In an English court a man named Home was tried for slander in having accused his neighbor of murder. His exact words were: "Sir Thomas Holt hath taken a cleaver and stricken his cook upon the head, so that one side of the head fell upon one shoulder and the other side upon the other shoulder." The defendant was acquitted by instruction of the court, the learned judges holding that the words did not charge murder, for they did not affirm the death of the cook, that being only an inference. -- Ambrose Bierce – Michael Lorton Jun 30 '11 at 08:24
  • 1
    Yes, technically, I don't need to duplicate the hardware but even if you dodge that requirement by fooling around with virtualization and such, you either (a) hard allocate resources, such as bandwidth and CPU, to each environment, which would have the same cost as duplicating hardware or (b) share resources, which means your test issues could bring down your production system. – Michael Lorton Jun 30 '11 at 08:29
  • Or just configure another instance of each server on the same physical server without any virtualization – Pierre de LESPINAY Mar 29 '12 at 06:46
  • And the award for "worst idea ever" goes to... @PierredeLESPINAY ! Congratulations! – Parthian Shot Jul 15 '15 at 16:36
  • This is a pretty simple cost-benefit thing. Keep increasing the amount spent on partitioning pre-prod and prod systems until either you already have separate hardware and a separate network partition behind the router, or the monetary cost of the time and effort it would take to bring a broken server back up, multiplied by the estimated probability that a given pre-prod deployment breaks something, is less than or equal to the cost of the current level of separation plus the marginal cost of increasing partitioning. – Parthian Shot Jul 15 '15 at 16:39
  • So, if it doesn't matter if the service goes down for the 3 hours it might take to get the service back up and running, then sure, run separate instances on the same hardware and O/S. But if it matters, and you're worried about losing a client, expend effort commensurate with the value of that client, accounting for the fact that it also saves money + effort to use the same partitioning system across all clients. – Parthian Shot Jul 15 '15 at 16:41
6

You should certainly be testing on an environment that's as identical to your production servers as is practicable. If you don't, then you're not testing what your customers will be using. If nothing else, you need such an environment to reproduce any bug reports.

Obviously there will be things that you won't want to be identical - links to payment systems spring to mind - but these should be mocked so that they behave like the real thing. There are also things you can't replicate, such as the scale of the system.
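
As a sketch of what "mocked so that they behave like the real thing" can mean in practice, here's a minimal, hypothetical stand-in payment endpoint in Python that the pre-prod configuration could point at instead of the real provider. The route and response fields are invented, not any particular gateway's API:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer


    class FakePaymentGateway(BaseHTTPRequestHandler):
        """Accepts charge requests the way the real gateway would, but never moves money."""

        def do_POST(self):
            if self.path != "/charge":  # hypothetical endpoint name
                self.send_error(404)
                return

            length = int(self.headers.get("Content-Length", 0))
            request = json.loads(self.rfile.read(length) or b"{}")

            # Reply in the shape the application expects from the real provider,
            # so pre-prod exercises the full code path end to end.
            body = json.dumps({
                "status": "approved",
                "amount": request.get("amount"),
                "transaction_id": "preprod-test-0001",
            }).encode()

            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)


    if __name__ == "__main__":
        # Pre-prod's "payment gateway URL" setting would point at this address.
        HTTPServer(("0.0.0.0", 8099), FakePaymentGateway).serve_forever()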

You might want to test via an external URL - again, you're testing what your users will be seeing. Testing via an external URL will also use the network in a different way to internal use of the system: permissions will play a role, as will available bandwidth, firewalls, etc. - all things your users will face but that you'll skip if you access the system directly.

I don't see an issue with components that don't require an account and password, though. If it doesn't need a password then it's not vital or sensitive; and if it is sensitive, why doesn't it have a password?

ChrisF
  • 38,938
  • Wow, that's a silly answer. So in your test environment, if you make a purchase, it should charge the credit card and ship what you bought? If the prod environment consists of 150 servers, the test env should too? I would have said "obviously" there must be differences between prod and test, but it wasn't obvious to ChrisF. – Michael Lorton Jun 30 '11 at 06:05
  • @Malvolio - no. I didn't mean that at all. I was thinking more of the points raised in the question with permissions, connections etc. – ChrisF Jun 30 '11 at 07:39
3

Our final pre-production environment is simply one of the live servers taken out of the load balancer. We deploy our pre-production build (which is basically identical to the live build apart from database connection strings and a couple of other config changes) and test that. If that goes OK, we deploy the live build and, finally, if that proves to be OK, we return the server to the load balancer and deploy the production build to the remaining servers one by one.
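
Sketched out, the rotation looks something like this - the load-balancer, deploy and smoke-test helpers are placeholders, since the real calls depend entirely on your tooling:

    # Placeholder helpers: the real implementations depend entirely on your load
    # balancer and deployment tooling. Only the order of operations matters here.

    SERVERS = ["web01", "web02", "web03", "web04"]  # hypothetical pool


    def remove_from_pool(server):
        print("[lb] draining " + server)


    def return_to_pool(server):
        print("[lb] re-enabling " + server)


    def deploy(server, build):
        print("[deploy] " + build + " -> " + server)


    def smoke_test(server):
        print("[test] smoke-testing " + server)
        return True  # the real check would hit the site and inspect responses


    def rolling_release(preprod_build, production_build):
        # 1. Take one live server out of the load balancer; it becomes pre-prod.
        staging = SERVERS[0]
        remove_from_pool(staging)

        # 2. Deploy the pre-production build (live build apart from connection
        #    strings and a couple of config changes) and test it.
        deploy(staging, preprod_build)
        if not smoke_test(staging):
            raise RuntimeError("pre-production build failed; aborting the release")

        # 3. Deploy the real production build to the same box and test again.
        deploy(staging, production_build)
        if not smoke_test(staging):
            raise RuntimeError("production build failed on the staging box; aborting")

        # 4. Return it to the pool, then roll out to the rest one server at a time.
        return_to_pool(staging)
        for server in SERVERS[1:]:
            remove_from_pool(server)
            deploy(server, production_build)
            if not smoke_test(server):
                raise RuntimeError(server + " failed after deployment; stop and investigate")
            return_to_pool(server)


    if __name__ == "__main__":
        rolling_release("site-2.4.1-preprod", "site-2.4.1")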

FinnNk
  • 5,809
1

They should be as similar as possible, so that you can identify problems at any point within the system, with the possible exception of scale. If at all possible, the only difference between your production environment and your pre-production/staging/testing environment would be size - I would expect a production environment to consist of many more machines in a large-scale deployment. You should even mirror the way machines are dedicated to particular roles, such as database servers, web servers, and so on.

However, an exact replica might not be possible under your current budget. The closer it is, the more effective testing will be and the less likely it is that problems will crop up during the push to production.

I take a different stance than ChrisF on whether this environment should be public-facing. I say it shouldn't be. I would opt for running on a copy of the actual databases, or at least a copy of a subset of the actual live databases, in an inward-facing environment. That way you can test against actual, realistic data without worrying about security holes leading to a leak. You can, of course, sanitize the data, but that might remove some of the "dirty data" from the environment that could lead to the discovery of a defect in the live system.
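
If you do decide to sanitize, a minimal sketch of that kind of masking pass might look like the following. The table, columns and rules are invented for illustration (SQLite only to keep it self-contained), and note how it deliberately leaves null or malformed values alone so you don't lose all of the "dirty data":

    import hashlib
    import sqlite3

    # Pre-prod copy of the database. SQLite is used here only to keep the sketch
    # self-contained; the real copy would be whatever RDBMS production runs.
    conn = sqlite3.connect("preprod_copy.db")


    def mask_email(email):
        """Replace the address but keep its rough shape; leave dirty values untouched."""
        if email is None or "@" not in email:
            return email  # nulls and malformed values are exactly the dirty data worth keeping
        digest = hashlib.sha256(email.encode()).hexdigest()[:12]
        return "user_" + digest + "@example.invalid"


    # "customers" and "email" are invented names for illustration.
    rows = conn.execute("SELECT id, email FROM customers").fetchall()
    conn.executemany(
        "UPDATE customers SET email = ? WHERE id = ?",
        [(mask_email(email), row_id) for row_id, email in rows],
    )
    conn.commit()
    conn.close()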

Thomas Owens
  • 82,739
  • 1
    If you're doing security testing then I agree it shouldn't be public facing, but you might want it to be for final acceptance testing (for example). – ChrisF Jun 29 '11 at 23:45
  • That is a valid point, as well. I'm typically more security-focused than usability-focused, but if you did want to expose a new version of your system for acceptance testing (perhaps by clients, or as part of a public beta), then yes, a public-facing environment would be required. – Thomas Owens Jun 29 '11 at 23:48
  • Yeah, we used to have a competitor that would test all their stuff on a public-facing computer for a week or so before going live. They never figured out how we always got features out right before they did... – Michael Lorton Jun 30 '11 at 06:12
1

Everywhere I've worked - banks, telecommunications, and so on - pre-prod was a direct copy of production, except that the database would be a week old or so. Maintaining the data across pre-prod was a massive process, but it was regarded as essential by the companies I worked for that implemented pre-prod.

In the AU banking sector, the government fines banks for every minute of a failure of service, e.g. when the website or ATMs are down. It isn't uncommon to hear of a development/testing team being fired over an incident. Pre-prod isn't for every company or development process, but it is essential for some.
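
For a flavour of what that weekly refresh can involve at its simplest, here's a sketch that restores the newest production dump into pre-prod. It assumes PostgreSQL and its standard pg_restore tool purely for illustration; the real processes at those companies were far heavier, with masking and sign-off on top:

    import glob
    import os
    import subprocess

    # Hypothetical locations: weekly production dumps (pg_dump -Fc output) land in
    # BACKUP_DIR, and pre-prod runs its own database we are allowed to overwrite.
    BACKUP_DIR = "/backups/prod"
    PREPROD_DB = "appdb_preprod"

    # Pick the newest dump, so pre-prod data is at most about a week old.
    latest_dump = max(glob.glob(os.path.join(BACKUP_DIR, "*.dump")), key=os.path.getmtime)

    # Drop and recreate the pre-prod objects from the production dump.
    subprocess.run(
        ["pg_restore", "--clean", "--if-exists", "--no-owner", "-d", PREPROD_DB, latest_dump],
        check=True,
    )
    print("pre-prod refreshed from " + latest_dump)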

Nickz
  • 1,430
  • 3
    "It isn't uncommon to hear of a development/testing team fired over an incident" -- yeah, that'll help. The beatings will continue until morale improves. – Michael Lorton Jun 30 '11 at 06:13