The goal of any parser is to verify that a sequence of characters is a string in a specific language [C, Python, whatever]. In principle, the parser could be written to inspect each character in the sequence directly, but it will be more comprehensible and efficient if built to inspect higher-level lexical tokens - names, numbers, quoted strings, operator symbols, parentheses, etc. This is where the scanner comes in. The scanner is essentially a subroutine which the parser calls to read the raw character stream and extract and return the next lexical token.
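To make the division of labor concrete, here is a minimal sketch of that interface in Python; Token and next_token are illustrative names, not taken from any particular compiler:

    from dataclasses import dataclass

    @dataclass
    class Token:
        type: str    # e.g. 'NAME', 'NUMBER', 'STRING', 'LPAREN'
        value: str   # the raw characters the token was assembled from

    def parse(next_token):
        # The parser sees only Token objects, never raw characters.
        tok = next_token()
        while tok.type != 'EOF':
            # ... grammar rules branch on tok.type here ...
            tok = next_token()

The point of the interface is that the grammar logic deals in a small alphabet of token types rather than the full character set.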
The scanner is implemented as a DFA that recognizes every kind of token. Each token type has its own structure as a character sub-sequence, representable as a regular expression which in turn is translated to a simple DFA; for example, a name is (in most languages) a sequence of characters consisting of [letter] followed by zero or more [letter or digit]. The scanner combines the simple DFAs for these expressions into a single DFA for their union. This is (conceptually) implemented as a simple two-dimensional table, where rows are indexed by character values and columns are indexed by DFA state numbers (actually the rows are usually indexed by character classes - e.g. 'letter' instead of 'a', 'b', ...).
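As a hedged illustration of such a table, here is one possible encoding in Python for just two token types, NAME ([letter][letter|digit]*) and NUMBER ([digit]+); the state numbers and dictionary layout are invented for the example, with rows (outer keys) indexed by character class and columns (inner keys) by state, as described above:

    # States: 0 = start, 1 = inside a NAME, 2 = inside a NUMBER.
    # Missing entries would be lexical errors; those are omitted here.
    NEXT = {
        'letter': {0: 1, 1: 1},                            # letter starts or extends a NAME
        'digit':  {0: 2, 1: 1, 2: 2},                      # digit starts a NUMBER, extends either
        'space':  {1: 'accept:NAME', 2: 'accept:NUMBER'},  # a delimiter reaches a final state
    }

    NEXT['letter'][0]   # -> 1: from the start state, a letter begins a NAME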
Each call to the scanner follows this process: begin in a common start state (e.g. 0), looking at the next character [in the I/O buffer]. Using the lookup table, calculate the next state and 'consume' the character [typically this means copying it into the next slot in a buffer]. Repeat until a final state is reached, then return an object identifying the type of lexical token and the value accumulated in the buffer. Example: suppose the scanner is called when the next few characters are 'foo '. The machine will move from state 0 to state (say) 11 and copy 'f' to the buffer; then move from state 11 back to state 11 and copy 'o' to the buffer - twice (two 'o'-s); then, seeing the space, it will move to some final state (say) 12 without consuming the space, and return from the call, returning the object [type = NAME, value = 'foo']. The next call to the scanner will begin where the last ended, looking at that space.
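Here is a self-contained sketch of that loop in Python, reproducing the 'foo ' walkthrough with the states named above (0, 11, 12); it assumes the input always ends in a delimiter and omits error and end-of-file handling:

    def char_class(c):
        if c.isalpha(): return 'letter'
        if c.isdigit(): return 'digit'
        if c.isspace(): return 'space'
        return 'other'

    # (state, class) -> next state; 0 = start, 11 = in a name, 12 = NAME accepted
    TABLE = {(0, 'letter'): 11, (11, 'letter'): 11, (11, 'digit'): 11,
             (11, 'space'): 12, (11, 'other'): 12}
    FINAL = {12: 'NAME'}

    def scan(text, pos):
        state, buf = 0, []
        while state not in FINAL:
            nxt = TABLE[(state, char_class(text[pos]))]
            if nxt not in FINAL:       # ordinary move: consume the character
                buf.append(text[pos])
                pos += 1
            state = nxt                # on acceptance the delimiter is NOT consumed
        return (FINAL[state], ''.join(buf)), pos

    token, pos = scan('foo ', 0)
    print(token, pos)   # ('NAME', 'foo') 3 -- the next call starts at the space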
Most scanners do a bit more than this, of course. In languages with reserved words, when the scanner finishes reading the name, it will typically look up the name in a table of reserved words; if found, it will return an object representing that specific reserved word instead of a generic 'name' object. The scanner will usually 'eat' all contiguous whitespace without returning any token; similarly for comment text. And so on and so forth....
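Both refinements are small post-processing steps around the DFA. A hedged sketch, using an invented RESERVED table and '#'-to-end-of-line comments as a stand-in for whatever the language actually uses:

    RESERVED = {'if', 'else', 'while', 'return'}

    def classify(kind, text):
        # A name found in the reserved-word table becomes its own token type.
        if kind == 'NAME' and text in RESERVED:
            return (text.upper(), text)   # e.g. ('IF', 'if') instead of ('NAME', 'if')
        return (kind, text)

    def skip_layout(src, pos):
        # Eat whitespace and comments without returning any token.
        while pos < len(src):
            if src[pos].isspace():
                pos += 1
            elif src[pos] == '#':
                while pos < len(src) and src[pos] != '\n':
                    pos += 1
            else:
                break
        return pos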