I'll try to answer your question with an extended analogy.
Here is a simplified picture of how programming a computer works. At a basic level, a computer runs machine code: a simple set of instructions that say things like "add the bits in these two positions in memory" or "move this bit from this position to that position". It's extremely laborious to write programs in machine code, so people develop higher-level programming languages with more complicated instructions, data types, etc. When I write a program in a higher-level language, another program called a compiler takes my program as input, reads it, and translates it into machine code, which can then be run on my computer.
In mathematics, first-order ZFC is like machine code (or maybe assembly code? but that's splitting hairs). Essentially no one actually writes proofs or definitions in the first-order language of set theory. Instead, they communicate using the whole apparatus of ordinary / natural language mathematics, which is like a higher-level programming language. The role of first-order ZFC as a foundation for mathematics is that concepts and proofs in ordinary mathematics can in principle be translated (compiled) into formal proofs from ZFC.
Asides: (1) Different computers with different hardware have different machine languages. But that's no problem (and this is one of the advantages of having a higher-level programming language): the same program can be run on different computers, as long as it is compiled to the appropriate machine code for each computer it might run on. The same is true in mathematics: we can swap out first-order ZFC for a different foundational system, e.g. another set theory or homotopy type theory or whatever, and at the expense of picking a new "compiler", we will still be able to translate our higher-level proofs to this new foundation - as long as they didn't use any features not present in the new foundation (e.g. the axiom of choice is not implemented in ZF).
(2) We can also swap out the higher-level language. If my use of the phrase "natural language mathematics" bothers you, feel free to instead think about some formal language used by an automated proof assistant like Coq or Agda or Lean. These languages are more explicitly like higher-level programming languages that can be compiled to formal proofs. A growing number of people think that mathematicians should "program" in these languages instead of (or rather, in addition to) natural language.
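To make this concrete, here is a minimal sketch of what "programming" a piece of mathematics looks like in Lean 4 (the theorem name is my own invention, and I'm assuming plain Lean 4 with no extra libraries):

```lean
-- The informal claim "n + 0 = n for every natural number n",
-- written in Lean's high-level surface language. Lean elaborates
-- this into a low-level proof term and checks it with a small
-- trusted kernel: its analogue of compiling to machine code.
theorem add_zero_example (n : Nat) : n + 0 = n := rfl
-- `rfl` suffices because `n + 0` reduces to `n` by the very
-- definition of addition on `Nat`.
```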
Ok, now one feature of higher-level programming languages and their compilers is that they catch bugs automatically: they detect syntax errors and type errors when they read your code, before any machine code is ever produced or run. For example, if you try to write something like $5+\texttt{"banana"}$, most programming languages will complain that it doesn't make sense to add a number to a string. Such code will never make it to the stage of running on a computer, because it won't get compiled: the compiler doesn't know what to do with it. Of course, the same programming language can have different compilers, and some compiler might make a weird choice like treating every string as $0$ when it is added to a number, so that $5+\texttt{"banana"} = 0$. But in general, it's helpful if your compiler has strong type-checking and error-checking capabilities, so that bugs are caught before buggy code ever runs on the computer.
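To illustrate in Lean 4 (keeping one language throughout, though any statically typed language behaves similarly), the banana example really is stopped at type-checking time; the exact wording of the error message varies by version:

```lean
-- Lean refuses to elaborate this: it has no rule for adding a
-- natural number to a string, so the "program" is rejected
-- before it could ever run.
#check 5 + "banana"
-- error (roughly): failed to synthesize instance HAdd Nat String
```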
Similarly, there are lots of ways to decide on a translation procedure from higher-level mathematics to first-order ZFC, and it's helpful if our translation procedure refuses to translate statements that are obviously buggy (meaningless). Some simple translation procedure might happily accept the "sentence" $\frac{1}{0} = \frac{1}{0}$ and translate it into a sentence of ZFC that can be proved or disproved - but since this sentence is actually meaningless, whether it comes out provable or disprovable will depend heavily on the implementation details of the translation procedure! It's more reasonable to expect the notation $\frac{a}{b}$ to require $a$ to be a number and $b$ to be a non-zero number; if $b = 0$, we should get a "type error". That is, our translation procedure should demand, before it translates a statement involving the notation $\frac{a}{b}$, proofs that $a$ is a number and that $b$ is a non-zero number. (And this is something mathematicians are used to doing: when I write $\frac{a}{b}$ on the board in the course of a proof or calculation in class, I always justify to my students why $b\neq 0$, if it's not obvious.) In this case, it doesn't make sense to ask about the truth value of $\frac{1}{0} = \frac{1}{0}$, because the statement is rejected out of hand by our "mathematics compiler".
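Here is one way the "proof obligation before division" idea can be mimicked in Lean 4. The function `checkedDiv` is my own hypothetical definition, not a standard library one; note that Lean's built-in `/` on `Int` is actually total, returning $0$ on division by zero, which is precisely the kind of "weird compiler choice" described above:

```lean
-- A division that demands, at type-checking time, a proof that the
-- denominator is non-zero. Without such a proof the term cannot
-- even be written down.
def checkedDiv (a b : Int) (h : b ≠ 0) : Int := a / b

-- Accepted: `decide` discharges the proof obligation (2 : Int) ≠ 0.
#eval checkedDiv 10 2 (by decide)   -- 5

-- Rejected: there is no proof of (0 : Int) ≠ 0, so 1/0 never
-- acquires a value or a truth value; it simply fails to "compile".
-- #eval checkedDiv 1 0 (by decide)
```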
Let me now respond to your points 1-4.
- Mathematics takes place within the context of first-(or second)-order logic ("formalizable in ZFC" means "within first-order logic"), and the distinction between "syntax" and "semantics" is meaningful.
I believe that mathematics is not solely first-order ZFC. Rather, it is a complex, strongly typed, natural-language system (one that includes things like partial functions) which can, in principle, be "compiled" down to first-order ZFC. This is part of my personal philosophy of mathematics.
But the discussion above is consistent with the worldview that mathematics is solely first-order ZFC. Under such a view, we should call all the higher-level mathematics that people actually do "pre-mathematics"; it only becomes "mathematics" once it is translated into ZFC. This view does not require the translation procedure to be total (defined on all possible inputs). It's perfectly consistent to view the sentence $\frac{1}{0} = \frac{1}{0}$ as "pre-mathematics" that has a type error and hence cannot be turned into "mathematics".
- "$\forall x(x=x)$" is a theorem of first-(and second)-order logic.
Clearly true.
- The theory of real closed fields is complete, and "$\frac11=1$" is provable in the theory of real closed fields.
I'm not sure of the relevance of this point. The first-order theory of real closed fields is indeed complete. But $\frac{1}{1} = 1$ is not, strictly speaking, provable in the theory of real closed fields, because the notation $\frac{1}{1}$ is not part of the first-order language of real closed fields! We would usually take $\frac{a}{b} = c$ as shorthand for $a = c \cdot b$ (together with the side condition $b \neq 0$), under which $\frac{1}{1} = 1$ unfolds to $1 = 1 \cdot 1$, which is provable.
- The correctness of a mathematical proof is not a matter of opinion, and $\frac10\ne\frac10$
In an ideal world, the correctness of a mathematical proof would not be a matter of opinion. In the real world, of course, reasonable people disagree all the time about the correctness of proofs! But yes, I believe that in such a situation, either the two parties disagree about the standards of proof, or at least one of them is wrong.
$\frac{1}{0}\neq\frac{1}{0}$ is meaningless and has no truth value: like $5+\texttt{"banana"}$, it is rejected by the "mathematics compiler" before the question of its provability can even be posed.