With the glm library, we can do matrix computations on the CPU. However, the GPU is more suitable for this kind of work, so what if I put the matrix computations in a compute shader? Would it be faster? If it works, it would be really nice for C, since doing matrix computations by hand in C is a real chore.
I may have to implement glm::perspective(), glm::lookAt(), glm::rotate(), etc. in the compute shader myself, though.
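
For illustration, here is a minimal sketch of what porting glm::perspective() to a GLSL compute shader might look like. The SSBO layout, binding point, and field names are just assumptions for the example; the matrix formula itself is the same one glm::perspective() uses (right-handed, clip-space depth in [-1, 1]):

```glsl
#version 430
layout(local_size_x = 1) in;

// Hypothetical buffer layout: the scalar inputs plus the output matrix.
layout(std430, binding = 0) buffer PerspectiveParams {
    mat4  result;  // written by the shader
    float fovy;    // vertical field of view, in radians
    float aspect;  // width / height
    float zNear;
    float zFar;
};

void main() {
    // Same math as glm::perspective() (GLM_DEPTH_NEGATIVE_ONE_TO_ONE).
    float f = 1.0 / tan(fovy / 2.0);

    result = mat4(0.0);
    result[0][0] = f / aspect;                               // column 0, row 0
    result[1][1] = f;                                        // column 1, row 1
    result[2][2] = (zFar + zNear) / (zNear - zFar);          // column 2, row 2
    result[2][3] = -1.0;                                     // column 2, row 3
    result[3][2] = (2.0 * zFar * zNear) / (zNear - zFar);    // column 3, row 2
}
```

One caveat with this approach: for a single matrix, the cost of dispatching the shader and reading the SSBO back likely dwarfs the computation itself, so the GPU route only pays off if the matrices stay on the GPU or are built in large batches.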