No. At no time should you ever have an explicit world matrix in your shader.
A detailed explanation of why can be found here, but the short version is simple: you never need it, and it can kill your floating-point precision.
If your world space is too big, then a camera far from the origin can run into floating-point precision problems.
World space is nothing more than an intermediary between model space and camera space. It's a place where you can express the camera and every other object in the same space. But all you actually use it for is to generate a world-to-camera matrix, which you then apply to each model-to-world matrix to create model-to-camera matrices.
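As a rough sketch of what that folding looks like in C++ (GLM is used here purely for illustration; the function and variable names are mine, not anything standard):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Hypothetical helper: the shader only ever receives the combined
    // model-to-camera matrix; world space never appears as a uniform.
    glm::mat4 computeModelToCamera(const glm::mat4& modelToWorld,
                                   const glm::vec3& cameraPosWorld,
                                   const glm::vec3& cameraTargetWorld)
    {
        // World-to-camera, built once per frame from the camera's world-space state.
        glm::mat4 worldToCamera = glm::lookAt(cameraPosWorld,
                                              cameraTargetWorld,
                                              glm::vec3(0.0f, 1.0f, 0.0f));

        // Fold world space away: it exists only as this intermediate product.
        return worldToCamera * modelToWorld;
    }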
You can deal with precision problems in your C++ code by using doubles instead of floats for the matrix computations, then converting the results back to floats before uploading them to the shader.
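Something along these lines, again using GLM for illustration and assuming an OpenGL-style uniform upload; the uniform location and names are hypothetical:

    // Assumes an OpenGL loader header (e.g. glad or GLEW) has already been included.
    #include <glm/glm.hpp>
    #include <glm/gtc/type_ptr.hpp>

    void uploadModelToCamera(GLint uniformLocation,           // hypothetical uniform location
                             const glm::dmat4& worldToCamera, // double-precision CPU-side matrices
                             const glm::dmat4& modelToWorld)
    {
        // Do the concatenation in double precision, where the large world-space
        // coordinates mostly cancel out, then truncate the camera-relative
        // result to single precision for the GPU.
        glm::mat4 modelToCamera = glm::mat4(worldToCamera * modelToWorld);

        glUniformMatrix4fv(uniformLocation, 1, GL_FALSE, glm::value_ptr(modelToCamera));
    }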
So why would you ever need an explicit world-space transform in your shader? You need world space in your source code, yes. But in your shader? What would you do with it there that you can't do with camera space?
Lighting can be done in camera space just as easily as world space; all you have to do is transform your light positions/directions into camera space. After all, camera space has the same scale as world space. You do this transformation once per frame per light; hardly a performance burden even on the CPU.
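A minimal sketch of that per-frame light transform (the `Light` struct and its fields are just for illustration):

    #include <vector>
    #include <glm/glm.hpp>

    struct Light {
        glm::vec4 positionWorld;   // w = 1 for point lights, w = 0 for directional lights
        glm::vec4 positionCamera;  // this is what actually gets sent to the shader
    };

    void transformLightsToCameraSpace(std::vector<Light>& lights,
                                      const glm::mat4& worldToCamera)
    {
        // Once per frame, per light: a handful of matrix-vector multiplies on the CPU.
        for (Light& light : lights)
            light.positionCamera = worldToCamera * light.positionWorld;
    }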
So there is absolutely no point in ever exposing your shaders to an explicit world-space transform. World space is just an intermediate step that you fold into your matrices.