The problem statement in the linked post says that only the vertices and edges are input to the algorithm, but I don't think it's possible to do this unambiguously in 3D without additional input about the faces of the mesh. In the 2D case, since the input is specified to be a planar graph, the faces are unambiguous: any region of the plane bounded by an edge loop and containing no edges in its interior is a face. In 3D, however, you don't know which edge loops should be faces and which shouldn't.
Consider a cube, represented as just vertices and edges: you would want the "usual" 6 sides of the cube to be treated as faces, but you wouldn't want the algorithm to create additional faces cutting diagonally through the cube's interior. Yet there is no way for the algorithm to know that. Moreover, it might not even be possible to assign faces to a given vertex/edge mesh in a sensible way: the candidate face loops might be non-planar, or the resulting surface might be non-orientable (e.g. a Möbius strip) or non-manifold.
Typically in 3D applications we would have the mesh faces already defined, and we can assume the mesh is manifold and orientable. With the face data as input to the algorithm, such as a list of vertices in counterclockwise order around each face, it becomes straightforward (if tedious) to figure out the half-edge relationships: each consecutive pair of vertices around a face defines one half-edge, and two half-edges with opposite endpoints are twins.
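Here is a minimal sketch (not from the original post) of that construction, assuming faces are given as lists of vertex indices in CCW order; the names `HalfEdge` and `build_half_edges` are illustrative, not a particular library's API:

```python
from dataclasses import dataclass

@dataclass
class HalfEdge:
    origin: int                 # index of the vertex this half-edge starts at
    face: int                   # index of the face this half-edge borders
    next: "HalfEdge" = None     # next half-edge around the same face (CCW)
    twin: "HalfEdge" = None     # opposite half-edge on the neighboring face

def build_half_edges(faces):
    """faces: list of faces, each a list of vertex indices in CCW order."""
    half_edges = []
    by_endpoints = {}           # (origin, dest) -> HalfEdge, used to pair twins

    for f_idx, face in enumerate(faces):
        n = len(face)
        face_hes = [HalfEdge(origin=v, face=f_idx) for v in face]
        for i, he in enumerate(face_hes):
            he.next = face_hes[(i + 1) % n]
            dest = face[(i + 1) % n]
            if (he.origin, dest) in by_endpoints:
                # The same directed edge appearing twice means the input
                # is not a consistently oriented manifold mesh.
                raise ValueError("duplicate directed edge (non-manifold or inconsistent orientation)")
            by_endpoints[(he.origin, dest)] = he
        half_edges.extend(face_hes)

    # Pair twins: half-edge (a, b) on one face matches (b, a) on the adjacent face.
    for (a, b), he in by_endpoints.items():
        he.twin = by_endpoints.get((b, a))   # stays None on a boundary edge

    return half_edges

# Example: two triangles sharing the edge (1, 2).
hes = build_half_edges([[0, 1, 2], [2, 1, 3]])
print(hes[1].origin, hes[1].twin.origin)   # 1 2 -- the shared edge and its twin
```

The dictionary keyed by directed vertex pairs is what makes twin lookup cheap; it also doubles as a sanity check, since a repeated directed edge signals a non-manifold or inconsistently oriented input.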