TL;DR version:
Your initial statement was accurate - I'll add some more clarity (hopefully) for people who come across this post later on:
The formal definition (from Daniel I.A. Cohen):
A transition graph, abbreviated TG, is a collection of three things:
- A finite set of states, at least one of which is designated as the start state (-) and some (maybe none) of which are designated as final states (+).
- An alphabet Σ of possible input letters from which input strings are formed.
- A finite set of transitions (edge labels) that show how to go from some states to others, based on reading specified substrings of input letters (possibly even the null string lambda).
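To make the three parts of the definition concrete, here is a minimal sketch of one TG as a plain Python data structure (the state names, labels, and field names here are my own illustrative choices, not notation from Cohen's book; the empty string "" plays the role of lambda):

```python
# One small transition graph, encoded directly from Cohen's three-part definition.
tg = {
    "states": {"q0", "q1", "q2"},
    "start": {"q0"},            # a TG may designate several start states
    "final": {"q2"},            # final states marked (+) in diagrams
    "alphabet": {"a", "b"},     # the alphabet of input letters
    # Transitions are (from_state, label, to_state).
    # Labels may be whole substrings, not just single letters.
    "transitions": [
        ("q0", "ab", "q1"),     # read the substring "ab" in a single step
        ("q1", "", "q2"),       # lambda edge: change state without reading input
        ("q2", "b", "q2"),
    ],
}
```

Note how the edge labels are strings of any length, including the empty string; this is exactly where the definition is looser than that of an FA, whose edges each carry a single letter.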
I'll use TG to refer to transition graphs and use FA to refer to finite automata.
My notes on features of TGs:
- A TG can have multiple start states, while an FA cannot
- A TG can have lambda on an edge, while an FA cannot
- A TG can have an entire string on an edge, while an FA cannot
- A TG is nondeterministic: it can follow multiple possible paths while reading the input string. If even ONE of these paths leads to a final state, the string is accepted.
- A TG can crash, because its transitions need not be total functions (functions that provide an output for the entire domain of inputs). In other words, in an FA you NEED to account for every possible input letter at each state; in a TG you do not.
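The nondeterministic acceptance rule above (one successful path is enough) can be sketched as a simple search. This is an illustrative implementation of the idea, not a standard algorithm from the book; the function and state names are my own, "" stands for lambda, and a depth limit guards against lambda-edge cycles:

```python
# Accept a string iff at least ONE path through the TG consumes the whole
# input and ends in a final state. Paths that get stuck simply "crash".
def accepts(transitions, starts, finals, s):
    """transitions: list of (state, label, state); a label of '' is lambda."""
    def search(state, rest, depth):
        if depth == 0:                      # cut off runaway lambda loops
            return False
        if rest == "" and state in finals:  # whole input consumed, final state
            return True
        for src, label, dst in transitions:
            if src == state and rest.startswith(label):
                # Lambda edges consume nothing; other edges consume their label.
                if search(dst, rest[len(label):], depth - 1):
                    return True
        return False                        # every branch from here crashes
    limit = 2 * (len(s) + len(transitions)) + 4
    return any(search(q, s, limit) for q in starts)

transitions = [
    ("q0", "ab", "q1"),  # a whole substring on one edge
    ("q1", "", "q2"),    # a lambda edge
    ("q2", "b", "q2"),
]
print(accepts(transitions, {"q0"}, {"q2"}, "abb"))  # one path accepts -> True
print(accepts(transitions, {"q0"}, {"q2"}, "aa"))   # every path crashes -> False
```

Notice that no state needs an outgoing edge for every letter: on input "aa" the single start state has nowhere to go, which is exactly the "crash" behavior an FA cannot exhibit.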
It is also important to note that any TG can be converted to an equivalent FA. This is part of Kleene's theorem: if a language is accepted by a TG, accepted by an FA, or defined by a regular expression, then all three representations exist for it, because the three formalisms describe exactly the same class of languages (the regular languages).
Additional information:
Transition Graphs are a relaxation of the notion of a finite automaton. It's a bit confusing, because the diagrams of machines in general are often called state transition diagrams, and other authors (e.g., Lenz) refer to the diagrams (visuals) themselves as transition graphs.
However, the notion of a Transition Graph (TG) isn't "non-standard nomenclature" as has been suggested. It is probably less popular now, but this type of machine was originally discussed in Finite Automata and the Representation of Events by J. Myhill in 1957. It was made very popular in semi-modern times by Daniel I.A. Cohen's excellent (classic) book, Introduction to Computer Theory, specifically in chapter 6 (in the 2nd edition, 1997). Cohen was one of Alonzo Church's Ph.D. candidates, so I don't think this was just a matter of some no-name researcher trying to popularize a certain term or concept.