Programming in GLESSL is a bit different from programming for a normal CPU.
Since you're using OpenGL ES 2, I will assume you are using GLESSL 1.0, so I will be pulling quotes from its documentation.
In general, GLESSL offers you a variety of types as programming aids, with some guarantees regarding their behavior, but leaves the actual low level implementation details to the manufacturer of a specific GPU.
GLESSL is not a low level programming language (like C), and as such, talking about bit representations in GLESSL is mostly meaningless.
According to section 4.5.1 of the documentation:

> The vertex language must provide an integer precision of at least 16 bits, plus a sign bit.

That means that for the vertex shader, you can safely use values from -65536 to 65536 with a `highp int`, something which is repeated in table 2 of section 4.5.2.
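As a sketch of what relying on that guarantee looks like (the uniform and attribute names here are my own, purely illustrative):

```glsl
// Hypothetical GLSL ES 1.0 vertex shader.
// Integer arithmetic is exact as long as values stay in [-65536, 65536].
uniform highp int u_tileIndex;
attribute vec4 a_position;

void main() {
    // Decompose the index into a row/column pair with exact integer math.
    highp int row = u_tileIndex / 256;
    highp int col = u_tileIndex - row * 256;
    gl_Position = a_position + vec4(float(col), float(row), 0.0, 0.0);
}
```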
This may sound strange if you're accustomed to thinking of 16-bit `int`s as being able to represent values between -32768 and 32767. GLESSL does not specify how `int`s are to be represented internally, just that they must provide enough precision for numbers between -65536 and 65536, which should be enough for your purposes.
In fact, as section 4.1.3 implies, `int`s are meant to be programming aids, and there is no guarantee that the underlying representation is a hardware integer. In most cases, all operations will be performed on hardware floats (probably 32-bit, even for `lowp`), but GPUs are free to implement them any way they like, even as textual representations in Swedish using EBCDIC.
In practical terms, in the vertex shader you can use `int` (`highp` is implied, as per section 10.3), and you are guaranteed to be able to store values from -65536 to 65536. You can also use `float` and have at least 16 bits of precision (plus a sign bit) for any given exponent within range, which means that you can also use `float` for your purposes. This is also clarified in section 4.5.2: a `highp int` can be represented by a `highp float`.
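To make that concrete, here is a sketch (the uniform name is my own) of integer-valued data carried in a `highp float`; the arithmetic stays exact as long as the values fit in the guaranteed precision:

```glsl
// Hypothetical GLSL ES 1.0 vertex shader.
// An integer-valued index stored in a float: exact within [-65536.0, 65536.0].
uniform highp float u_index;

void main() {
    // floor() and subtraction on integer-valued floats are exact here,
    // so this decomposition behaves like integer division and modulo.
    highp float row = floor(u_index / 256.0);
    highp float col = u_index - row * 256.0;
    gl_Position = vec4(col, row, 0.0, 1.0);
}
```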
I would recommend you take `float`s, and pass `float`s from your program, since it is very likely that any given GPU uses `float`s to represent `int`s. If you pass `int`s, it is very likely that a costly conversion will take place.
Also, there is `glGetShaderPrecisionFormat` if you want to know the exact precision of a given type on a given GPU, though I would rather rely on the guarantees given by the standard.
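For reference, querying that from the host side looks roughly like this C sketch (it assumes a current OpenGL ES 2 context, so it is not runnable on its own):

```c
#include <GLES2/gl2.h>
#include <stdio.h>

/* Must be called with a current OpenGL ES 2 context. */
void print_vertex_highp_int_precision(void)
{
    GLint range[2];   /* log2 of the min/max representable magnitudes */
    GLint precision;  /* log2 of the precision; 0 for integer formats */

    glGetShaderPrecisionFormat(GL_VERTEX_SHADER, GL_HIGH_INT,
                               range, &precision);
    printf("highp int: range [-2^%d, 2^%d], precision 2^-%d\n",
           range[0], range[1], precision);
}
```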
As far as I know, precision qualifiers are more about bandwidth than actual calculations. Mobile GPUs are very limited in terms of bandwidth, and specifying that you can live with `lowp` on a particular `varying` variable may allow the GPU to perform better packing strategies and give you better performance than if you specify you require `highp`.
0-255 can be exactly represented by a shader variable. – sam hocevar Jan 13 '13 at 00:00