I'm not entirely sure I understood the question, but it seems like you want to implement environment map illumination in your ray tracer (you seem to be thinking of real-time, though). I will describe the general solution for offline renderers, assuming nothing about the scene except the environment sphere.
This is generally done by treating your environment as a light source. Say you use a sphere with a radius big enough to contain your scene; you then sample it exactly as you would sample an area light source. In other words, you can think of this sphere as a textured area light.
The contribution of the direct lighting on the radiance leaving a point $x$ on the direction $\Theta$ would be:
$$
L_{\text{direct}}(x \rightarrow \Theta) = L_{\text{lights}}(x \rightarrow \Theta) + L_{\text{env}}(x \rightarrow \Theta)
$$
Where the environment map contribution is:
$$
L_{\text{env}}(x \rightarrow \Theta) = \int_{\Omega_x} L_{\text{tex}}(x \leftarrow \Psi)\, f_r(x, \Theta \leftrightarrow \Psi)\, V(x, \Psi) \cos(\Psi, N_x)\, d\omega_{\Psi} \qquad (1)
$$
Note that, here, $\Psi$ ranges over directions toward the sphere, $V(x, \Psi)$ is 1 if the sphere is visible from $x$ in direction $\Psi$ and 0 otherwise, $f_r$ is the BRDF, and $L_{\text{tex}}$ is the color fetched from the texture.
Directions $\Psi$ can be generated with a parameterization of the sphere. If you want something simple, this is enough: treat it as a textured area light and emit whatever color the texture has at the sampled point (a minimal sketch of this follows below). If you want something more elaborate, keep reading.
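To make that concrete, here is a minimal sketch of the simple approach in C++, assuming uniform sampling over the full sphere of directions; `lookupEnvTexture`, `brdf` and `visible` are placeholder stand-ins for your own renderer's texture lookup, BRDF evaluation and occlusion test:

```cpp
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

const double kPi = 3.14159265358979323846;

// Placeholder hooks -- replace with your own scene/texture/BRDF code.
double lookupEnvTexture(const Vec3&)               { return 1.0; }       // L_tex: constant white sky stand-in
double brdf(const Vec3&, const Vec3&, const Vec3&) { return 1.0 / kPi; } // f_r: white Lambertian stand-in
bool   visible(const Vec3&, const Vec3&)           { return true; }      // V: nothing occludes the sphere

// Estimate L_env(x -> Theta) from (1) by sampling directions uniformly on the sphere.
double estimateEnvRadiance(const Vec3& x, const Vec3& n, const Vec3& theta, int numSamples)
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u01(0.0, 1.0);

    const double pdf = 1.0 / (4.0 * kPi);  // uniform over the full sphere of directions
    double sum = 0.0;

    for (int i = 0; i < numSamples; ++i) {
        // Uniformly sample a direction Psi on the unit sphere.
        double z   = 1.0 - 2.0 * u01(rng);
        double r   = std::sqrt(std::max(0.0, 1.0 - z * z));
        double phi = 2.0 * kPi * u01(rng);
        Vec3 psi   = { r * std::cos(phi), r * std::sin(phi), z };

        double cosTerm = dot(psi, n);
        if (cosTerm <= 0.0 || !visible(x, psi))
            continue;  // direction below the surface, or blocked before reaching the sphere

        sum += lookupEnvTexture(psi) * brdf(x, theta, psi) * cosTerm / pdf;
    }
    return sum / numSamples;
}
```

With the stand-in values above (white environment, white Lambertian BRDF, no occlusion), the estimate converges to 1, which makes a quick sanity check.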
That said, environment mapping can be inefficient if not implemented with care.
Also, I am assuming you have only implemented area lights that emit uniformly, so it was fine to sample their area uniformly. The sphere, however, is a light source where some parts contribute much more than others, since the colors vary across the texture. This, combined with the fact that the sphere occupies the entire solid angle around a point (instead of the hemisphere you usually sample), skyrockets the variance of the estimator of $(1)$. The visibility term $V(x, \Psi)$ complicates things further, depending on how exposed your scene is.
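To make the role of the sampling density explicit, the standard Monte Carlo estimator of $(1)$, drawing $N$ directions $\Psi_i$ from a PDF $p(\Psi)$, is:
$$
\langle L_{\text{env}}(x \rightarrow \Theta) \rangle = \frac{1}{N} \sum_{i=1}^{N} \frac{L_{\text{tex}}(x \leftarrow \Psi_i)\, f_r(x, \Theta \leftrightarrow \Psi_i)\, V(x, \Psi_i) \cos(\Psi_i, N_x)}{p(\Psi_i)}
$$
The closer $p$ follows the shape of the numerator, the lower the variance, which is exactly what the two points below are about.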
In order to sample and estimate $(1)$, you have to consider:
Parameterization of the sphere: You need to decide how to parameterize your sphere, i.e. a space where you can generate random points and map each one to a point on the sphere, giving both a direction $\Psi$ and the color from the texture. You can just use the typical latitude-longitude parameterization (a sketch is given below).
For more elaborate, better-distributed parameterizations, consider the paraboloid parameterization [page 4] or the concentric-map parameterization.
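As a reference for the lat-long option, here is a minimal sketch of the mapping in both directions; the conventions (polar angle $\theta$ measured from $+z$, $v$ along latitude, $u$ along longitude) are assumptions you may need to adapt:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };
const double kPi = 3.14159265358979323846;

// (u, v) in [0,1]^2  ->  direction Psi on the unit sphere (polar angle from +z).
Vec3 latLongToDirection(double u, double v)
{
    double phi   = 2.0 * kPi * u;   // azimuth
    double theta = kPi * v;         // polar angle
    double sinT  = std::sin(theta);
    return { sinT * std::cos(phi), sinT * std::sin(phi), std::cos(theta) };
}

// Direction Psi  ->  (u, v), used to look up L_tex in the environment texture.
void directionToLatLong(const Vec3& d, double& u, double& v)
{
    double phi = std::atan2(d.y, d.x);
    if (phi < 0.0) phi += 2.0 * kPi;
    u = phi / (2.0 * kPi);
    v = std::acos(std::min(1.0, std::max(-1.0, d.z))) / kPi;
}
```

Note that this mapping is not area-preserving: if you draw $(u, v)$ uniformly on $[0,1]^2$, the resulting solid-angle density is $p(\Psi) = 1 / (2\pi^2 \sin\theta)$, and that is what must appear in the estimator's denominator.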
PDF $p(\Psi)$ used for sampling the sphere: It is possible to reduce the variance mentioned above. You can importance sample using only the term $\cos(\Psi, N_x)$ (i.e. cosine lobe sampling), but it is possible to do better. If the BRDF $f_r$ is simple, you can sample the product of the BRDF with $\cos(\Psi, N_x)$.
There are other methods. In [Kollig, Keller; 03], a PDF is built from the 2D array of the texture itself, so more samples are drawn in directions where the map contributes more (a sketch of this idea is given at the end of this answer). A PDF built from the product of this with the BRDF would yield the best results as a compromise between sampling cost and actual variance reduction.
For a more elaborate and adaptive solution, check [Agarwal et al., 03].
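To give an idea of how such a texture-driven PDF can be set up, here is a simplified sketch in the spirit of [Kollig, Keller; 03] (not the paper's exact algorithm), assuming a lat-long map whose per-texel luminances are stored in a `width * height` array: rows are picked from a marginal CDF and columns from the chosen row's conditional CDF, with a $\sin\theta$ weight compensating for the stretching near the poles.

```cpp
#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// Discrete 2D distribution over the texels of a lat-long environment map,
// proportional to luminance * sin(theta). Sketch only: linear CDF scans are
// used for clarity; a real implementation would use binary search.
struct EnvTexelSampler {
    int width, height;
    std::vector<double> condCdf;    // per-row cumulative sums (width entries per row)
    std::vector<double> rowWeight;  // total weight of each row
    std::vector<double> margCdf;    // cumulative sums over rows
    double total = 0.0;

    EnvTexelSampler(const std::vector<double>& luminance, int w, int h)
        : width(w), height(h), condCdf(w * h), rowWeight(h), margCdf(h)
    {
        for (int y = 0; y < h; ++y) {
            double sinTheta = std::sin(kPi * (y + 0.5) / h);  // pole-stretch compensation
            double rowSum = 0.0;
            for (int x = 0; x < w; ++x) {
                rowSum += luminance[y * w + x] * sinTheta;
                condCdf[y * w + x] = rowSum;
            }
            rowWeight[y] = rowSum;
            total += rowSum;
            margCdf[y] = total;
        }
    }

    // Pick a texel (x, y) with probability proportional to its weight, using two
    // uniform random numbers u1, u2 in [0,1). Returns the solid-angle PDF p(Psi)
    // of the direction through that texel's center.
    double sampleTexel(double u1, double u2, int& x, int& y) const
    {
        // Row from the marginal CDF, then column from that row's conditional CDF.
        double rowTarget = u1 * total;
        y = 0;
        while (y + 1 < height && margCdf[y] < rowTarget) ++y;

        double colTarget = u2 * rowWeight[y];
        x = 0;
        while (x + 1 < width && condCdf[y * width + x] < colTarget) ++x;

        double texelWeight = condCdf[y * width + x] - (x > 0 ? condCdf[y * width + x - 1] : 0.0);
        double texelProb   = texelWeight / total;                             // discrete probability
        double sinTheta    = std::sin(kPi * (y + 0.5) / height);
        double solidAngle  = 2.0 * kPi * kPi * sinTheta / (width * height);   // texel's solid angle
        return texelProb / solidAngle;                                        // p(Psi) in solid-angle measure
    }
};
```

The returned $p(\Psi)$ is what goes in the estimator's denominator; in practice the linear CDF scans would be replaced with binary searches.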