I would like to keep only the color and x, y coordinates of the pixels that touch a pixel of a different color than themselves (basically the edges) and discard the rest, so that the GPU can compare two edge pixels without reading through all the pixels that fill the shape. I want to process this further afterwards, but for now I am trying to limit how many GPU cores the edge detection needs, so that I can use those cores for other processing at the same time. I am not planning this as a shader or anything graphical; I'm using it as AI vision for a game character. I know there is machine learning for this, but I want to avoid that if possible, for no reason except that I want to know what is going on in there. How can I do this without using the CPU?
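To make the intent concrete, here is a minimal NumPy sketch of the filtering step described above. It runs on the CPU purely to illustrate the per-pixel test a GPU compute pass (plus stream compaction) would apply; the function name and array layout are illustrative, not from any specific GPU API.

```python
import numpy as np

def extract_edge_pixels(img):
    """Keep only (x, y, color) for pixels that touch a
    differently-colored neighbor (4-connectivity).

    img: (H, W) array of color values.
    Returns an (N, 3) array of [x, y, color] rows.
    """
    edge = np.zeros(img.shape, dtype=bool)
    # Mark a pixel as an edge if it differs from any of its
    # 4 direct neighbors (right, left, down, up).
    edge[:, :-1] |= img[:, :-1] != img[:, 1:]   # vs. right neighbor
    edge[:, 1:]  |= img[:, 1:]  != img[:, :-1]  # vs. left neighbor
    edge[:-1, :] |= img[:-1, :] != img[1:, :]   # vs. neighbor below
    edge[1:, :]  |= img[1:, :]  != img[:-1, :]  # vs. neighbor above
    ys, xs = np.nonzero(edge)
    # Storing x and y explicitly preserves the spatial mapping
    # that was implicit in the pixel's position in the buffer.
    return np.stack([xs, ys, img[ys, xs]], axis=1)
```

On the GPU, the same test could run one thread per pixel in a compute shader, with surviving pixels appended to an output buffer via an atomic counter, so later passes only launch threads over the compacted edge list.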
- In future, please edit your question rather than posting a new one. I don't expect your previous question will get you the answers you need, so it's not worth keeping around as a separate post. As I mentioned before, you should include in this question an example of the kind of image you need to process (details of the image can suggest more efficient solutions for your use case). – DMGregory Aug 23 '20 at 17:23
- Sorry. I didn't know you could edit a question. – user11937382 Aug 23 '20 at 17:25
- You should also explain what kind of operations you need to do on these edges. Usually we need to examine the edge in order to draw something different on our frame buffer. But if we delete the non-edge pixels, we change the spatial layout, so drawing our results back to the frame buffer becomes more challenging - we need to store mapping information that used to be implicit in the position of the pixel within the buffer. Do you perhaps not want to remove the non-edge pixels, but skip expensive processing on them, using for instance a stencil buffer? – DMGregory Aug 23 '20 at 17:26
- I am trying to get the GPU to compare edges for further processing without wasting cores on non-edge pixels, to get the general shapes in the image. – user11937382 Aug 23 '20 at 17:32
- Again, please edit your question to explain this in detail. The more you can show us about your real use case, the better we can help you find efficient solutions and avoid the X/Y Problem. For instance, you might want to tell us whether this is happening in a compute shader or fragment shader, or if you have flexibility to use either, and what kinds of information we need to preserve about these pixels (eg. location in original image, adjacencies, pixel colours, depth, etc...) – DMGregory Aug 23 '20 at 17:35