Several other rendering paradigms exist aside from conventional rasterization of triangles using vertex and fragment shaders.
- Ray tracing works by intersecting light rays with surfaces (triangle meshes, or more generally any surface that can be tested for intersection) and then firing additional rays from the intersection point to gather information about lighting, shadows, reflections, etc. Path tracing and photon mapping are refinements of this concept. (A minimal sketch of the core intersection step appears after this list.)
- REYES rendering subdivides all primitives into micropolygons the size of a single pixel or smaller, calculates lighting and shading per micropolygon vertex, then uses a specialized rasterizing algorithm to render the micropolygons in screen space with motion blur and depth of field.
- Distance field ray-marching (also called sphere tracing) represents all primitives in terms of distance fields: 3D functions and/or volumetric textures that give the distance from any given point to the nearest surface. The distance field is used to step adaptively along a ray until it hits a surface, and can also be used for lighting effects such as ambient occlusion. (See the ray-marching sketch after this list.)
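
To make the first bullet concrete, here is a minimal sketch of the core ray-tracing step: intersect a ray with one surface (a sphere here), then use the hit point to spawn a secondary ray toward a light. The scene, light position, and helper names are illustrative assumptions, not any particular renderer's API.

```python
import math

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def normalize(v):
    l = math.sqrt(dot(v, v))
    return (v[0]/l, v[1]/l, v[2]/l)

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None if the ray misses."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is unit length, so the quadratic's a == 1
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# Primary ray from a camera at the origin looking down -z (illustrative values).
origin    = (0.0, 0.0, 0.0)
direction = normalize((0.0, 0.0, -1.0))
center, radius = (0.0, 0.0, -5.0), 1.0

t = intersect_sphere(origin, direction, center, radius)
if t is not None:
    hit    = tuple(origin[i] + t * direction[i] for i in range(3))
    normal = normalize(sub(hit, center))
    # Secondary ray toward the light gives simple diffuse shading; a full tracer
    # would also fire shadow, reflection, and refraction rays from `hit`.
    light_dir  = normalize(sub((5.0, 5.0, 0.0), hit))
    brightness = max(dot(normal, light_dir), 0.0)
    print(f"hit at t={t:.3f}, diffuse brightness={brightness:.3f}")
else:
    print("ray missed the sphere")
```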
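
And a correspondingly minimal sketch of distance-field ray-marching (sphere tracing). The signed-distance function, step count, and tolerance below are illustrative assumptions; a real scene combines many SDF primitives, but the marching loop is the essential idea.

```python
import math

def length3(v):
    return math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])

def scene_sdf(p):
    """Signed distance from point p to the scene: here a single sphere of
    radius 1 centred at (0, 0, -5). Negative inside, positive outside."""
    return length3((p[0], p[1], p[2] + 5.0)) - 1.0

def sphere_trace(origin, direction, max_steps=128, max_dist=100.0, eps=1e-4):
    """March along the ray, stepping by the distance to the nearest surface.
    Because the SDF is a true lower bound, a step of that size can never
    overshoot any geometry."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + t*direction[0],
             origin[1] + t*direction[1],
             origin[2] + t*direction[2])
        d = scene_sdf(p)
        if d < eps:          # close enough to the surface: call it a hit
            return t
        t += d               # adaptive step, safe by construction
        if t > max_dist:
            break
    return None              # ray left the scene without hitting anything

t = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0))
print("hit" if t is not None else "miss", t)
```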
However, all of these rendering paradigms use linear transformation matrices. That's not part of the "paradigm" so much as it is the basic machinery for manipulating objects, the camera, and so on in 3D space. It would be hard to do anything interesting without it.
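
As an illustration of that shared machinery, here is a sketch of the usual 4×4 homogeneous-coordinate transforms: build a rotation and a translation, compose them, and apply the result to a point. The particular angle and offset are just illustrative values.

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 matrices (lists of rows)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform_point(m, p):
    """Apply a 4x4 matrix to a 3D point in homogeneous coordinates (w = 1)."""
    v = [p[0], p[1], p[2], 1.0]
    out = [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]
    return (out[0] / out[3], out[1] / out[3], out[2] / out[3])

def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def rotation_y(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[ c, 0, s, 0],
            [ 0, 1, 0, 0],
            [-s, 0, c, 0],
            [ 0, 0, 0, 1]]

# Model matrix: rotate the object 90 degrees about Y, then move it 5 units along -z.
model = mat_mul(translation(0, 0, -5), rotation_y(math.pi / 2))

# The same matrix transforms every vertex of the object, regardless of whether a
# rasterizer, a ray tracer, or a ray marcher ends up consuming the result.
print(transform_point(model, (1.0, 0.0, 0.0)))   # -> roughly (0.0, 0.0, -6.0)
```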