As described in my comment above, there's actually a good reason Unity's editor doesn't like to zoom out to these scales, and forcing it to do so is likely to just reveal more problems.
Like many engines, Unity uses single-precision floating point numbers to represent object and vertex positions. Floating point representations can handle an immense range of values, but the gap between adjacent representable values grows as the numbers get larger, so absolute precision falls off.
At 1 Earth radius (~6,371 km) from the origin, adjacent representable positions are about half a metre apart, so any stored position can be off by up to a quarter of a metre, which is likely to be too coarse for almost all gameplay. (Imagine if your camera were jumping a quarter metre every time you nudged it, or if objects were hovering a quarter metre above the ground because that's the closest non-intersecting point the physics simulation could find.)
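You can see this snapping for yourself with a quick Python check, using `struct` to round-trip a value through a 32-bit float the way Unity stores positions:

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python double to the nearest 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

earth_radius = 6_371_000.0  # metres from the origin

# Nearby points collapse onto the same representable value:
print(to_f32(earth_radius + 0.1))  # snaps back down to 6371000.0
print(to_f32(earth_radius + 0.3))  # snaps up to 6371000.5

# Adjacent representable floats here are half a metre apart, so a
# stored position can be off by up to a quarter of a metre.
```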
This doesn't get substantially better if you change units or work in miniature. Changing to 1 unit = 1 km makes the values 1000 times smaller, but an error in the last decimal place matters 1000 times as much, so the real-world precision basically washes out.
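A small calculation makes the wash-out concrete. The gap between adjacent 32-bit floats near a positive value x is 2^(e−23), where 2^e ≤ x < 2^(e+1); comparing metres against kilometres at Earth radius shows nearly identical real-world coarseness:

```python
import math

def f32_spacing(x: float) -> float:
    """Gap between adjacent 32-bit floats near x (x > 0)."""
    exponent = math.floor(math.log2(x))
    return 2.0 ** (exponent - 23)  # float32 has a 23-bit fraction

# Metres: positions near one Earth radius are 0.5 m apart.
print(f32_spacing(6_371_000.0))       # 0.5

# Kilometres: the numbers are 1000x smaller, but each step is
# 2**-11 km, about 0.49 m -- the same real-world coarseness.
print(f32_spacing(6_371.0) * 1000.0)
```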
The way this is normally dealt with is to keep the action of the game centred near the origin of the coordinate system (0, 0, 0), shifting everything back there whenever the player strays too far away. As long as everything in the world shifts together, the player won't perceive the movement.
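A minimal sketch of this "floating origin" rebasing, in plain Python with hypothetical names (in Unity you'd subtract the same offset from every root `Transform` in a single frame):

```python
THRESHOLD = 1000.0  # rebase once the player is this far out (arbitrary choice)

def rebase_if_needed(player_pos, world_positions):
    """Shift the whole world so the player returns near the origin.

    player_pos: [x, y, z]. world_positions: list of [x, y, z] lists,
    mutated in place. Every object moves by the same offset, so all
    relative positions (and hence everything on screen) are unchanged.
    Returns the player's new position.
    """
    dist = sum(c * c for c in player_pos) ** 0.5
    if dist < THRESHOLD:
        return player_pos
    offset = list(player_pos)
    for pos in world_positions:
        for i in range(3):
            pos[i] -= offset[i]
    return [0.0, 0.0, 0.0]
```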
This will probably require spawning just the visible parts of the Earth's surface on demand when your player is close to them. (And falling back on a scaled-down sphere proxy when the player is far away.)
Since you can't use the editor to assemble the whole planet in your scene view, you'll need to look at other ways of describing what you want on your surface, like...
- storing it in chunks where each item is close to its local chunk origin
- generating surface content procedurally on demand
- storing your level data in a custom file format using higher-precision numbers, so you retain precision at planetary scales
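The chunked approach in the first bullet can be sketched as splitting each high-precision global coordinate into a chunk index plus a small local offset that survives storage as a 32-bit float. The chunk size and function names below are illustrative assumptions, not an established format:

```python
import struct

CHUNK_SIZE = 1000.0  # metres per chunk (hypothetical choice)

def to_f32(x: float) -> float:
    """Round a Python double to the nearest 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

def to_chunk_local(global_pos: float):
    """Split a global coordinate into (chunk index, local float32 offset)."""
    chunk = int(global_pos // CHUNK_SIZE)
    local = to_f32(global_pos - chunk * CHUNK_SIZE)
    return chunk, local

def to_global(chunk: int, local: float) -> float:
    """Recombine in double precision when loading."""
    return chunk * CHUNK_SIZE + local

# A point ~6371 km out keeps sub-millimetre precision in its local offset,
# because the stored float is small even though the global position is huge.
chunk, local = to_chunk_local(6_371_000.123)
print(chunk, local)
```

The same idea applies per-axis for full 3D positions; the key point is that the float32 only ever has to represent values within one chunk.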