My goal is to implement an edge detection algorithm that can find the edges of arbitrary 3D meshes. I want to find the edges by detecting normal discontinuities, and I want the edges to be one pixel wide.

I render the color values of my scene and the corresponding normal vectors to two different render targets, i.e. to two different textures (screenTexture and normalTexture). I use the Sobel edge detection algorithm in my fragment shader:

#version 330 core

out vec4 color;

uniform sampler2D screenTexture;
uniform sampler2D normalTexture;
uniform sampler2D depthTexture;

// Sobel kernels (note that GLSL mat3 constructors are column-major)
mat3 sx = mat3(
    1.0,  2.0,  1.0,
    0.0,  0.0,  0.0,
   -1.0, -2.0, -1.0
);
mat3 sy = mat3(
    1.0,  0.0, -1.0,
    2.0,  0.0, -2.0,
    1.0,  0.0, -1.0
);

void main()
{
    // Color and normal of the current pixel
    vec4 diffuse = texelFetch(screenTexture, ivec2(gl_FragCoord.xy), 0);
    vec3 normal  = texelFetch(normalTexture, ivec2(gl_FragCoord.xy), 0).xyz;

    // Build the 3x3 neighbourhood: each entry is the dot product of the
    // centre normal with a neighbouring normal (close to 1 on flat areas)
    vec3 I[3];
    for (int i = 0; i < 3; i++) {
        float sampleValLeft   = dot(normal, texelFetch(normalTexture, ivec2(gl_FragCoord.xy) + ivec2(i - 1, -1), 0).rgb);
        float sampleValMiddle = dot(normal, texelFetch(normalTexture, ivec2(gl_FragCoord.xy) + ivec2(i - 1,  0), 0).rgb);
        float sampleValRight  = dot(normal, texelFetch(normalTexture, ivec2(gl_FragCoord.xy) + ivec2(i - 1,  1), 0).rgb);
        I[i] = vec3(sampleValLeft, sampleValMiddle, sampleValRight);
    }

    // Horizontal and vertical gradient responses
    float gx = dot(sx[0], I[0]) + dot(sx[1], I[1]) + dot(sx[2], I[2]);
    float gy = dot(sy[0], I[0]) + dot(sy[1], I[1]) + dot(sy[2], I[2]);

    // Gradient magnitude, remapped to soften the threshold
    float g = sqrt(gx * gx + gy * gy);
    g = smoothstep(0.4, 0.8, g);

    // Paint edge pixels black, keep the scene color elsewhere
    if (g > 0.2) {
        color = vec4(0.0, 0.0, 0.0, 1.0);
    } else {
        color = diffuse;
    }
}

The shader produces the following result:

[screenshot of the rendered result showing the detected edges]

As you can see, the line thickness on the solid is not consistent:

  1. Edges between the turquoise background and the solid are 1 px wide, as intended.
  2. Edges within the object are 2 px wide, which is too wide.

The reason 1. works is that I initialized normalTexture with vec3(0,0,0) normal vectors, which produces a dot product of 0 as soon as the current pixel belongs to the background and the adjacent right or left pixel is an edge pixel of the solid.

But edges within the object are detected twice by the Sobel operator, because gx or gy is positive on one side of the edge and negative on the other. In other words, gx or gy is non-zero on both sides, and therefore g is greater than 0.2 on both sides.

I'm totally stuck on what I can do to produce lines of consistent width.

enne87

1 Answer

The edge between the sphere and the background is actually the one that is incorrect; you need to initialize your normal texture with a unit normal to get correct results.
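If clearing the normal attachment to a unit normal is awkward in your setup, a shader-side guard has a similar effect. This is only a minimal sketch of that alternative, not code from the answer; the helper name fetchNormal and the fallback vec3(0.0, 0.0, 1.0) are illustrative choices:

// Hypothetical helper: substitute a fixed unit normal wherever the
// normal target still contains the zero vector it was initialized with.
vec3 fetchNormal(ivec2 p)
{
    vec3 n = texelFetch(normalTexture, p, 0).rgb;
    return (dot(n, n) == 0.0) ? vec3(0.0, 0.0, 1.0) : n;
}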

The two-pixel thickness is a limitation of Sobel-based edge detection and other 3×3 convolution filters: you can only detect edges twice as big as your pixels. The Roberts operator uses a 2×2 convolution filter with weights forming a cross, so it yields thinner edges, but they are still thicker than one pixel.
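As a point of comparison, here is a rough GLSL sketch of the Roberts cross applied to the normal buffer, intended as a replacement for the 3×3 neighbourhood loop in the question's main(); it reuses normal and normalTexture from the question and uses the standard Roberts weights (this wiring is an assumption, not code from the answer):

// Roberts cross on the dot products with the centre normal:
// 2x2 neighbourhood, differences along the two diagonals.
ivec2 p = ivec2(gl_FragCoord.xy);
float s00 = dot(normal, texelFetch(normalTexture, p,               0).rgb);
float s10 = dot(normal, texelFetch(normalTexture, p + ivec2(1, 0), 0).rgb);
float s01 = dot(normal, texelFetch(normalTexture, p + ivec2(0, 1), 0).rgb);
float s11 = dot(normal, texelFetch(normalTexture, p + ivec2(1, 1), 0).rgb);
float gx = s00 - s11;   // first diagonal
float gy = s10 - s01;   // second diagonal
float g  = sqrt(gx * gx + gy * gy);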

If you absolutely want one-pixel-wide edges, you could use filtered texture sampling and sample every half pixel when applying the Sobel operator, but the detection will probably miss some features.
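A rough sketch of that idea, again as a drop-in replacement for the neighbourhood loop in the question's main(): it assumes normalTexture is bound with GL_LINEAR filtering so that texture() interpolates between texels, and the half-texel spacing is an assumption, not something prescribed by the answer:

// Sobel with half-texel sample spacing, relying on bilinear filtering.
vec2 texel = 1.0 / vec2(textureSize(normalTexture, 0));
vec2 uv    = gl_FragCoord.xy * texel;          // centre of the current texel
vec3 I[3];
for (int i = 0; i < 3; i++) {
    float s0 = dot(normal, texture(normalTexture, uv + vec2(i - 1, -1) * 0.5 * texel).rgb);
    float s1 = dot(normal, texture(normalTexture, uv + vec2(i - 1,  0) * 0.5 * texel).rgb);
    float s2 = dot(normal, texture(normalTexture, uv + vec2(i - 1,  1) * 0.5 * texel).rgb);
    I[i] = vec3(s0, s1, s2);
}
float gx = dot(sx[0], I[0]) + dot(sx[1], I[1]) + dot(sx[2], I[2]);
float gy = dot(sy[0], I[0]) + dot(sy[1], I[1]) + dot(sy[2], I[2]);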

Julien Guertault
  • Would sampling half pixels give equivalent results to generating an image at double resolution, producing lines 2 pixels wide, and then scaling the image down to the intended size? – trichoplax is on Codidact now Aug 01 '16 at 16:01
  • Couldn't I just use the sign of the dot product (i.e. of gx or gy) in some way? For example, when sampling a horizontal edge, I get -4 for gx on one side of the edge and 4 on the other side. – enne87 Aug 01 '16 at 18:49
  • Just one more question: would morphological erosion (http://homepages.inf.ed.ac.uk/rbf/HIPR2/erode.htm) or thinning (http://homepages.inf.ed.ac.uk/rbf/HIPR2/thin.htm) be an alternative? – enne87 Aug 02 '16 at 22:46
  • @enne87 why not skip edge detection altogether and use edge geometry to draw the lines? – joojaa Aug 03 '16 at 01:35
  • @enne87: yes I think erosion could be a possibility, but I didn't suggest it because I'm not familiar enough with it so I don't know how well suited it is. – Julien Guertault Aug 03 '16 at 06:14
  • Thank you both! @joojaa: I thought about it but how would I draw the outline of a sphere with geometrical data? – enne87 Aug 03 '16 at 08:55
  • @enne87 you draw the edges on the culling threshold, and hard edges slightly offset towards the camera. A sphere is in fact one of the easier shapes. – joojaa Aug 03 '16 at 09:10
  • @joojaa: Thanks for your suggestion. Just to make sure I understand your idea:
    1. For the silhouette, I render the edges of the back faces and move the faces forward in screen Z, so that only the edges of the back-facing triangles are visible, right?
    2. Sorry, but I don't understand how I would draw the inner edges (creases and ridges)?
    – enne87 Aug 03 '16 at 10:30
  • @enne87 edges on the front/back-face boundary, preferably. You draw them as GL lines. – joojaa Aug 03 '16 at 11:01