In my view, it's most useful to think of a function as a procedure that takes an input value of some "domain" type and outputs a value of some "codomain" type.
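For a concrete rendering of that view, here is a minimal sketch in Haskell (my choice of language; the function `square` is just a made-up example), where the type signature states the domain and codomain explicitly:

```haskell
-- A procedure whose input has type Int (the domain) and whose output
-- has type Int (the codomain); the signature records both types.
square :: Int -> Int
square n = n * n
```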
Now, in practice, particularly in applied fields, you'll often see functions given in table format. For example, a table like the one below could be used to record the number of births per year in some hypothetical city.
Year   Number of Births
------------------------
1998   20145
1999   21350
2000   21048
2001   22484
2002   22537
From such a table, you can build a function. Namely, the procedure associated to the table is: search for the input value in the first column, move to the second column in the same row, and give that entry as the output value. Of course, for such a table to define a function, there has to be exactly one row for each possible input value of the domain type. So in the above example, the table corresponds to a function $f$ where, for example, $f(1999) = 21350$ and $f(2002) = 22537$.
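Connecting this back to the typed-procedure view, here is a sketch (again in Haskell; the `Year` type and constructor names are hypothetical) in which the "exactly one row per input" requirement is enforced by the domain type itself:

```haskell
-- The domain type lists exactly the years appearing in the table, so a
-- definition with one equation per constructor has exactly one "row"
-- per possible input. With warnings enabled (-Wall), GHC flags any
-- missing or overlapping rows.
data Year = Y1998 | Y1999 | Y2000 | Y2001 | Y2002

births :: Year -> Int
births Y1998 = 20145
births Y1999 = 21350
births Y2000 = 21048
births Y2001 = 22484
births Y2002 = 22537
```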
In mathematical contexts, however, we run into problems with the table representation when we want to consider functions with an infinite domain: we would always run out of paper before listing all possible rows of the table. That can be worked around by taking some abstract set of ordered pairs as a generalization of the table idea. However, since we've now moved from something you can survey in its entirety to an abstract set, my opinion is that this becomes less useful as a mental model. (So, on this point of view, the set-of-ordered-pairs definition is mainly useful in formal contexts, for example if you're starting with just the language and axioms of ZFC and want to define functions in those terms. That seems best left for more advanced classes.)
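For reference, the standard set-theoretic formulation alluded to here (this is the usual textbook definition, not anything novel): a function from $A$ to $B$ is a set $f$ of ordered pairs with
$$ f \subseteq A \times B \qquad \text{and} \qquad \forall x \in A \ \exists!\, y \in B : (x, y) \in f, $$
which is precisely the "exactly one row per input" condition from the table picture.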
(Another thing possibly worth mentioning, more relevant in the past than now, is the tables of logarithms and of trigonometric functions that used to be published. The idea there is to approximate the logarithm, or one of the trigonometric functions, by the function whose rule is: look up the two samples nearest to the input in a large table of values, then linearly interpolate between them. Technically, we might say this function also has a codomain type of some notion of "fixed-precision" numbers, since the tables can only give a finite number of decimal digits of the output.)
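As a sketch of that interpolation rule (the sample spacing of 0.01 and the choice of $\log_{10}$ on $[1, 10]$ are illustrative, not taken from any actual published table):

```haskell
-- Samples of log10 at spacing 0.01 on [1, 10]. A printed table would
-- round these to a few decimal places, which is why the codomain is
-- really a fixed-precision type rather than all of the reals.
step :: Double
step = 0.01

samples :: [Double]
samples = [logBase 10 (1 + step * fromIntegral i) | i <- [0 .. 900 :: Int]]

-- Approximate log10 x (for 1 <= x <= 10) by linear interpolation
-- between the two samples nearest to x.
tableLog :: Double -> Double
tableLog x = y0 + (y1 - y0) * (x - x0) / step
  where
    i  = floor ((x - 1) / step) :: Int
    x0 = 1 + step * fromIntegral i
    y0 = samples !! i
    y1 = samples !! min 900 (i + 1)
```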
(As I hinted above, I also tend to like the idea of using informal type theory as a basis for teaching mathematics, as opposed to the informal set theory that seems to dominate currently. In my opinion, type theory more closely matches the way mathematicians actually think about things on a day-to-day basis, with the possible exception of axiomatic set theorists. I do admit, however, that I don't know whether the idea has been formally studied, or whether it would actually result in improved educational outcomes.)