The key term here is proof theory - this is, in my opinion, one of the more inaccessible areas of logic, but it's quite interesting. The Stanford Encyclopedia of Philosophy has great articles on it and its subtopics, and I recommend this one as a starting point.
There's a big problem with this definition: how deep do we have to go? That is, what can or cannot be taken for granted?
For the questions you're asking to make sense, we need to get more precise. In general we need to fix a logical language and a formal notion of proof; and in any specific situation we also need a fixed set of axioms.
As far as the first point goes, the standard choice is first-order logic (often we begin by studying propositional logic, but for the most part that's best thought of as a "toy system"). I'll skip the argument for this choice here, since it gets rather technical; for now, let's just take for granted that this is the "right" context to work in.
As to the second, there are many formal notions of proof, which yield somewhat different notions of length and other complexity measures. The one I personally found simplest to understand was the sequent calculus - whose Wikipedia article I find a bit hard to read; I might recommend instead something like this - but in each case we have a set of "basic logical rules." These rules can be applied to any given set of sentences $\Gamma$, and - thinking of the elements of $\Gamma$ as our axioms - the sentences we can get from $\Gamma$ by applying these rules are the theorems of $\Gamma$. The main difference among these systems that we'll care about here is how they represent proofs: basically, the options are as sequences or as trees. Each style has its own advantages and disadvantages; for example, sequences have a snappier notion of "length" (should the "length" of a tree be its height, its width, the number of nodes, or ...?) and better match how we write natural-language proofs, but trees are often easier to analyze (and, in my opinion, to think about).
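To make the sequences-vs-trees distinction concrete, here's a minimal sketch in Python of proofs-as-trees, together with the measures just mentioned. Everything in it - the `Proof` type, the rule names, the sentences - is a hypothetical illustration, not tied to any particular proof system:

```python
from dataclasses import dataclass

@dataclass
class Proof:
    conclusion: str   # the sentence this step establishes
    rule: str         # the basic logical rule applied
    premises: list    # subproofs establishing the rule's premises

def height(p: Proof) -> int:
    """Length in the proofs-as-trees sense: the longest branch."""
    return 1 + max((height(q) for q in p.premises), default=0)

def size(p: Proof) -> int:
    """Another tree measure: the total number of rule applications."""
    return 1 + sum(size(q) for q in p.premises)

def as_sequence(p: Proof) -> list:
    """Flatten to a proof-as-sequence, listing each premise before it
    is used; len() of the result is the sequence-style length."""
    return [s for q in p.premises for s in as_sequence(q)] + [p.conclusion]

# Example: one two-premise rule applied to two axioms.
ax1 = Proof("A", "axiom", [])
ax2 = Proof("A -> B", "axiom", [])
mp = Proof("B", "modus ponens", [ax1, ax2])
print(height(mp), size(mp), len(as_sequence(mp)))  # 2 3 3
```

Note that height and size can come apart dramatically: a very wide, shallow tree has small height but enormous size, which is exactly why "length" for trees needs pinning down.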
- I said earlier that I was going to ignore the details of first-order logic, but I should say a bit at this point. The key properties all these proof notions share are that they are computable (= there is a clear list of what the basic logical rules are) and sound and complete with respect to first-order logic; these latter two properties together mean that the theorems of $\Gamma$ are exactly the sentences true in every model of $\Gamma$. (As a warning, the word "complete" here is being used in a different sense than in Gödel's incompleteness theorem.) The major argument in favor of first-order logic is the lack of such systems for most stronger logics (and even those stronger logics which do have them have other drawbacks).
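In symbols, writing $\Gamma \vdash \varphi$ for "$\varphi$ is provable from $\Gamma$" and $\Gamma \models \varphi$ for "$\varphi$ is true in every model of $\Gamma$," soundness and completeness together say that $$\Gamma \vdash \varphi \iff \Gamma \models \varphi.$$ The left-to-right direction is soundness, and the right-to-left direction is completeness (the latter being Gödel's completeness theorem - not to be confused with the incompleteness theorems, per the warning above).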
Finally, our choice of axioms will of course be dependent on our context. Sometimes we work in (first-order) Peano arithmetic, other times in ZFC, other times in more limited theories like the theory of real closed fields, and other times in yet other theories. It's a big universe out there.
Now within a given proof system, there are various notions of length and more general complexity we may care about. Here are a few:
- In my opinion, the simplest useful notion of length is length in the usual sense for proofs-as-sequences and height for proofs-as-trees (exactly the measures computed in the sketch above). Especially in the trees case, this is very well suited to proof by structural induction, which is one general advantage of trees over sequences.
- Another option is much finer-grained: the number of symbols involved. This one looks arbitrary at first, but it has its uses, the main one being the fact that (as long as our language is finite, which for now let's just assume) there are only finitely many proofs with a given number of symbols, even if our set of axioms is infinite - a useful feature in some technical contexts. (See the first sketch after this list.)
- Then there's what I'll call rule complexity: which logical rules does a proof actually use? For example, does a proof work in intuitionistic logic? How about in some substructural logic? Issues like this are extremely important in proof theory. Note that this notion of complexity isn't a number, but rather a set of rules used. (See the second sketch after this list.)
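Here's the first sketch promised above, reusing the hypothetical `Proof` type and `mp` example from the earlier snippet. Splitting on whitespace is a crude stand-in for a real notion of symbol; the point is just that the measure is a straightforward sum over the tree:

```python
def symbol_count(p: Proof) -> int:
    """Total symbols appearing in a proof: the symbols of each line's
    conclusion, summed over every rule application in the tree."""
    return len(p.conclusion.split()) + sum(symbol_count(q) for q in p.premises)

print(symbol_count(mp))  # 1 + (1 + 3) = 5
```

The finiteness observation is then immediate: over a finite alphabet there are only finitely many strings with at most $n$ symbols, hence only finitely many proofs of symbol-count at most $n$, no matter how many axioms we started with.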
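And the second sketch: rule complexity as the set of rules a proof actually uses. The recursion is also a tiny illustration of structural induction - handle the node, then recurse into the subproofs:

```python
def rules_used(p: Proof) -> set:
    """The set of basic logical rules appearing anywhere in a proof."""
    return {p.rule}.union(*(rules_used(q) for q in p.premises))

print(sorted(rules_used(mp)))  # ['axiom', 'modus ponens']
```

One could then ask, for instance, whether the resulting set avoids some classically-valid-but-intuitionistically-rejected rule (double negation elimination, say) as a crude syntactic test of whether the proof is intuitionistically acceptable.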
In general, there is no a priori "best" notion of proof complexity - rather, there are various "measurements" we can take which are interesting and useful depending on what context we're in. Indeed there are lots of "foundational" questions about proofs which don't (yet) have satisfying answers, my favorite being: when are two different proofs "essentially the same"?