Throughout this answer I'm ignoring the idea of semantics, since you seem to be focusing on the purely formalist aspects of logic for now; while semantics can indeed be implemented in a formalist way, doing so adds some serious complexity to the situation.
The stuff about Curry-Howard at the beginning really seems like a red herring; in light of your comment "I found formal logic confusing, too," I think it's a good idea to look at the following more general version of your question:
Is there a way to view proofs of theorems from axiom systems as being generated by some simple set of "string-manipulation rules"?
(Until later on I'm ignoring the problem that precedes even this one - that is, fixing a syntax for our formulas to live in. Before we can talk about proofs, we need to know what a formula is in the first place.)
Production systems are a particular example of such a set of rules, but they're quite limited. Formulas are in general a lot more complicated than just equations (think about quantifiers), and moreover proof steps that involve combining two hypotheses are hard to model naturally in this way. To get a satisfying positive answer, we really need to look at things more flexible than production systems - but which are still just sets of easy-to-use string manipulation rules.
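For concreteness, here's a minimal sketch (in Python; the rule format and helper names are my own, purely illustrative) of a production system as pure string rewriting: each rule replaces one occurrence of a substring, and the "derivable" strings are whatever we can reach from a start string by applying rules.

```python
def apply_rule(s, lhs, rhs):
    """Return all strings obtained by rewriting one occurrence of lhs to rhs in s."""
    results = []
    start = 0
    while (i := s.find(lhs, start)) != -1:
        results.append(s[:i] + rhs + s[i + len(lhs):])
        start = i + 1
    return results

def derivations(start, rules, steps):
    """All strings reachable from `start` in at most `steps` rule applications."""
    reached = {start}
    frontier = {start}
    for _ in range(steps):
        frontier = {t for s in frontier
                      for lhs, rhs in rules
                      for t in apply_rule(s, lhs, rhs)}
        reached |= frontier
    return reached

# Toy rules over unary numerals: "s(0)" is 1, "s(s(0))" is 2, and so on.
rules = [("0", "s(0)")]   # from any numeral, derive its successor
print(sorted(derivations("0", rules, 2), key=len))   # ['0', 's(0)', 's(s(0))']
```

Notice how awkward this already is: everything must be phrased as "replace this substring with that one," which is exactly why rules that combine two separate hypotheses don't fit naturally.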
These systems also address another of your concerns: "I found ZFC confusing [...] it felt like other logical axioms were "implied" in addition to the ZFC axioms that were spelled out." What's missing is exactly the relevant rule system, which comes before ZFC. (Actually, they're really independent of each other: we can consider different axiom systems over the same rule set, or the same axiom system over different rule sets.)
It's probably worth considering a concrete example at this point.
A Hilbert-style system is a family of rules which generate a set of formulas which we call "tautologies." For example, one such rule is: "For all formulas $\varphi,\psi$, the formula $\varphi\rightarrow(\psi\rightarrow\varphi)$ is a tautology." Another is: "If $\varphi$ and $\varphi\rightarrow\psi$ are tautologies, then $\psi$ is a tautology." Yet a third is: "For any term $t$ and any formula $\varphi$, if $\forall x(\varphi(x))$ is a tautology then $\varphi(t)$ is a tautology."
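To emphasize that these really are mechanical symbol-manipulation rules, here's a sketch of a proof checker for a tiny fragment of such a system: just the first axiom schema and modus ponens from above. The encoding of formulas as nested tuples is my own choice, purely for illustration.

```python
def is_K_instance(f):
    """Check whether f matches the schema phi -> (psi -> phi) for some phi, psi."""
    return (isinstance(f, tuple) and f[0] == "->"
            and isinstance(f[2], tuple) and f[2][0] == "->"
            and f[1] == f[2][2])

def check_proof(steps):
    """Each step must be a schema instance, or follow by modus ponens
    from two earlier steps.  Returns True iff every step is justified."""
    proved = []
    for f in steps:
        ok = is_K_instance(f) or any(
            g == ("->", h, f) for g in proved for h in proved)
        if not ok:
            return False
        proved.append(f)
    return True

p, q, r = "p", "q", "r"
A1 = ("->", p, ("->", q, p))    # instance of phi -> (psi -> phi)
A2 = ("->", A1, ("->", r, A1))  # another instance (phi = A1, psi = r)
A3 = ("->", r, A1)              # modus ponens from A2 and A1
print(check_proof([A1, A2, A3]))   # True
```

The checker never asks what any formula *means*; it only compares shapes of strings (here, tuples), which is the formalist point.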
A given axiom set is then "slapped on top of" a Hilbert-style system. When we say "$T$ proves $\varphi$," what we really mean is that there are some sentences $\alpha_1,\alpha_2,...,\alpha_n\in T$ such that $$\alpha_1\rightarrow(\alpha_2\rightarrow(\dots\rightarrow(\alpha_n\rightarrow\varphi)\dots))$$ is a tautology. (Remember that "$A$ implies ($B$ implies $C$)" is equivalent to "($A$ and $B$) implies $C$;" for technical reasons, it's often useful to phrase everything in terms of $\rightarrow$ whenever possible.)
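The equivalence in that parenthetical can be checked mechanically by brute force over truth assignments; a quick sketch:

```python
# Check that "A implies (B implies C)" and "(A and B) implies C"
# agree on every truth assignment.
from itertools import product

for a, b, c in product([False, True], repeat=3):
    curried = (not a) or ((not b) or c)      # A -> (B -> C)
    uncurried = (not (a and b)) or c         # (A and B) -> C
    assert curried == uncurried
print("equivalent on all 8 assignments")
```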
Another example, which is often confusing at first but is ultimately super important and valuable, is sequent calculus.
A sequent is an expression of the form "$\Gamma\vdash\varphi$" for $\Gamma$ a set of formulas and $\varphi$ a single formula; such a sequent intuitively means "$\Gamma$ proves $\varphi$." A sequent calculus is a system for building up a collection of "valid sequents" - and in this system, saying "$\Gamma$ proves $\varphi$" is shorthand for "'$\Gamma\vdash\varphi$' is in the set of sequents that the rules generate."
(Preempting a common question at this point, you should resist the urge to conflate "$\vdash$" and "$\rightarrow$". It's totally understandable, but will ultimately steer you wrong.)
Here are a couple examples of sequent rules:
"If $\varphi\in \Gamma$ then $\Gamma\vdash\varphi$ is a valid sequent."
- This is (basically) reflexivity: statements entail themselves (and adding "superfluous hypotheses," namely the other formulas in $\Gamma$, doesn't change that).
"If $\Gamma\vdash\forall x\varphi(x)$ is a valid sequent, then so is $\Gamma\vdash\varphi(t)$ for any term $t$."
- This is universal instantiation: it says that if we can deduce the formula $\forall x\varphi(x)$ from the set of formulas $\Gamma$, then - for any term $t$ - we can also deduce the formula $\varphi(t)$ from $\Gamma$.
"If $\Gamma\vdash\varphi$ and $\Gamma\vdash\psi$ are each valid sequents, so is $\Gamma\vdash\varphi\wedge\psi$."
- This is "$\wedge$-introduction" - it tells us how to show that a conjunction $(\varphi\wedge\psi)$ is provable from a set of hypotheses $(\Gamma)$.
"If $\Gamma\vdash\varphi\wedge\psi$ is a valid sequent, then so are each of $\Gamma\vdash\varphi$ and $\Gamma\vdash\psi$."
- These are the left and right "$\wedge$-elimination" rules, respectively. (It may be tempting at this point to think of the introduction and elimination rules as defining $\wedge$ in the first place; this idea turns out to be surprisingly subtle, and an important search term here is "logical harmony.")
For example, applying reflexivity twice and $\wedge$-introduction once we get (for any choice of formulas $\varphi,\psi$) that $$\{\varphi,\psi\}\vdash \varphi\wedge\psi$$ is a valid sequent. The construction of this sequent from the rules involved looks like a tree with a "root" and two "leaves:" the first leaf is $\{\varphi,\psi\}\vdash\varphi$ (which we get for free from reflexivity), the second leaf is $\{\varphi,\psi\}\vdash\psi$ (ditto), and the root is $\{\varphi,\psi\}\vdash\varphi\wedge\psi$ (which we get from the two leaves via $\wedge$-introduction).
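The derivation tree just described can be sketched directly in code: each rule above becomes a function on sequents, with a sequent encoded as a pair (set of formulas, formula). The encoding is again my own, for illustration only.

```python
def reflexivity(gamma, phi):
    """From phi in Gamma, conclude the sequent Gamma |- phi."""
    assert phi in gamma
    return (frozenset(gamma), phi)

def and_introduction(seq1, seq2):
    """From Gamma |- phi and Gamma |- psi, conclude Gamma |- phi AND psi."""
    (g1, phi), (g2, psi) = seq1, seq2
    assert g1 == g2
    return (g1, ("and", phi, psi))

def and_elimination(seq):
    """Left and right elimination: from Gamma |- phi AND psi, get both conjuncts."""
    g, f = seq
    assert isinstance(f, tuple) and f[0] == "and"
    return (g, f[1]), (g, f[2])

# The tree from the text: two reflexivity leaves, one and-introduction root.
gamma = frozenset({"phi", "psi"})
leaf1 = reflexivity(gamma, "phi")
leaf2 = reflexivity(gamma, "psi")
root = and_introduction(leaf1, leaf2)
assert root == (gamma, ("and", "phi", "psi"))
```

Each function only fires when its premises have actually been built, so a successful run *is* the derivation tree, evaluated from leaves to root.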
To sum up, when we really go fully formal we wind up looking at a combination of three things:
- A syntax, telling us what a formula is.
- A proof system, which is some collection of rules for generating "valid" formulas, sequents, or similar, together with an interpretation of "$\Gamma$ proves $\varphi$."
- An axiom set, which is just a set of formulas.
These three pieces are, to a surprisingly large extent, independent; really the only constraint is that the syntax limits what the proof system and axiom set can involve in the first place. The proof system is the part of this apparatus which is most relevant to your question, and in setting one up we have various choices to make. One particularly important choice is what sort of "simplicity" we're going for. Production rules are quite simple, but are also very limited and result in lots of tedium. Sequent rules (for example) are much more flexible and lead to fairly natural proofs, but are more complicated objects in the first place.
One final thing that needs to be mentioned is the notion of comparing proof systems. Fixing a common syntax for simplicity (we can talk about different syntaxes, but that quickly gets weird), we say:
Proof system $\mathbb{P}_1$ is at least as strong as proof system $\mathbb{P}_2$ if whenever $\Gamma$ proves $\varphi$ in the sense of $\mathbb{P}_2$, we also have that $\Gamma$ proves $\varphi$ in the sense of $\mathbb{P}_1$.
Two proof systems which are each at least as strong as the other are equivalent.
We can then prove, for example, that such-and-such Hilbert-style system is equivalent to such-and-such sequent calculus. Broad equivalence phenomena (e.g. from a semantic perspective, see here), similar to the broad equivalences between different models of computation, then ultimately lead us to - in most cases - suppress the choice of particular proof system.
> `prime(n)` means "for all m, 1<=m<n, notDivides(m,n)" - you could define `notDividesAll(n, m)` recursively: `notDividesAll(n, m) <=> ((notDivides((n), (m))) and (notDividesAll((n), ((m)-(1)))))` (you'd need to define `notDivides` and a base case when `(m)` = `(1)`). – Jesus is Lord Apr 17 '20 at 20:25

> `(y)` = `(0)` and `(x)` = `(1)` is a possible solution to the equation set: add the production rules `(y) | (0)` and `(x) | (1)`, then derive `(1)+(0)=(1)` and `((1)-(0))=(1)` from your addition/subtraction production rules, and substitute `(y)` and `(x)` to get the equations you asked for. – Jesus is Lord Apr 17 '20 at 20:29
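The substitution idea in that second comment can be sketched as string rewriting (the rule syntax and the helper below are my own, not from the question):

```python
# Productions mapping each variable-nonterminal to a numeral, as in the comment.
rules = {"(y)": "(0)", "(x)": "(1)"}

def substitute(equation, rules):
    """Rewrite each nonterminal to its production's right-hand side."""
    for lhs, rhs in rules.items():
        equation = equation.replace(lhs, rhs)
    return equation

for goal in ["(x)+(y)=(1)", "((x)-(y))=(1)"]:
    concrete = substitute(goal, rules)
    left, right = concrete.split("=")
    assert eval(left) == eval(right)   # the rewritten equation is a true arithmetic fact
    print(concrete)   # (1)+(0)=(1)  then  ((1)-(0))=(1)
```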