"Formal" power series seems very niche.
To me, "formal" is related to "formal logic", which is the most foundational mathematic and philosophical basis for essentially all propositional logic and "formal" proofs in algebra and number theory. Others have already described what "formal" means in "formal" proofs: that they need to be rigorous and adhere to systematized rules that build from each other into broader and broader hierarchies.
It starts from philosophical ideas about reality, existence, and truth, and crystallizes into a "formal," systematized expression in logic. These expressions take the form of equivalence relations and various identities that we use in algebra and boolean algebra, which are, in turn, often required to prove things at a more abstract level with confidence that the results are consistent with everything established at lower rungs of the ladder.
These can further be expressed as code to create formal equivalence checking and tools of that nature, which use compute resources to prove things instead of relying on the poor accuracy of hand-written detail.
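As a minimal sketch of that idea (my own toy example; production equivalence checkers use BDDs or SAT solvers rather than brute force, but the principle is the same), here is an exhaustive truth-table check in Python:

    from itertools import product

    def equivalent(f, g, num_vars):
        """Check two boolean functions for equivalence by trying
        every possible assignment of their inputs."""
        return all(f(*bits) == g(*bits)
                   for bits in product([False, True], repeat=num_vars))

    # De Morgan's law: not (a and b) is equivalent to (not a) or (not b)
    lhs = lambda a, b: not (a and b)
    rhs = lambda a, b: (not a) or (not b)
    print(equivalent(lhs, rhs, 2))  # True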
That's what I think when it comes to "formal" in math.
Building on propositional logic, strong induction is an important technique for proving that a statement holds across a sequence of cases, and it has strong ties to trees and to recursion in functional programming. It can also be used to assert that a function in code behaves a certain way across a sequence of cases.
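As a small illustration (the function name and the particular claim are just my example): every integer n >= 12 can be written as 4a + 5b, which is classically proven by strong induction with four base cases, and the recursion below mirrors that proof exactly:

    def as_fours_and_fives(n):
        """Return (a, b) with 4*a + 5*b == n, for any n >= 12.
        Mirrors a strong-induction proof: four base cases, plus an
        inductive step that appeals to the result for n - 4."""
        base = {12: (3, 0), 13: (2, 1), 14: (1, 2), 15: (0, 3)}
        if n in base:
            return base[n]
        a, b = as_fours_and_fives(n - 4)  # inductive hypothesis
        return (a + 1, b)

    # Assert the claimed behavior across a whole sequence of cases.
    for n in range(12, 500):
        a, b = as_fours_and_fives(n)
        assert 4 * a + 5 * b == n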
Counting in two ways (double counting) is a formal proof technique that's perhaps the most important tool in combinatorics. It involves proving that two combinatorial expressions are equivalent by demonstrating that they count the size of the same set. Proving that a function is a bijection is a form of the same idea, and the tool can also be used to formally prove modular arithmetic results like Fermat's little theorem.
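Here's a toy check of the classic double-counting identity: summing C(n, k) over k and enumerating all subsets directly both count the subsets of an n-element set, so both must equal 2^n:

    from itertools import combinations
    from math import comb

    def count_by_size(n):
        """Count the subsets of an n-element set grouped by size:
        the sum over k of C(n, k)."""
        return sum(comb(n, k) for k in range(n + 1))

    def count_directly(n):
        """Count the same subsets by enumerating every one of them;
        each element is either in or out, so this equals 2**n."""
        elements = range(n)
        return sum(1 for k in range(n + 1)
                     for _ in combinations(elements, k))

    for n in range(10):
        assert count_by_size(n) == count_directly(n) == 2 ** n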
Biases and logical fallacies
It's an underappreciated topic, but defining and identifying fallacies and biases is crucial, and doing so is contingent on the formal propositional logic we've already mentioned.
This first became a topic of great note in the classical period, particularly in Greece from the 5th through 3rd centuries B.C.E. Interest peaked again during the European Enlightenment and the Scientific Revolution. It has since waned and hasn't picked up again in some time. After all, there's social media instead.
The Scientific Method
The entire thing is a formal mathematical exercise. You establish a hypothesis, test it fairly while maintaining a control group to compare against the test group, and draw a conclusion from the results. The work must be rigorous enough to be reproducible. To get published, the paper must pass a formal review by expert peers in the relevant field, and others will attempt to recreate the experiment.
A great deal of formal mathematics, including algebra, calculus, statistics, and probability theory, is also communicated in the process.
While the method is applied to interesting but amusingly niche corners of science, like fish mating habits, it's also used to demonstrate a pharmaceutical drug's safety and other seriously important things, as well as findings in emerging fields like robotics, ML, AI, autonomous systems, networked sensor arrays, and gene editing. These are all fascinating, but equally scary if they go wrong, which justifies rigorous scientific technique grounded in the formal use of math.
Statistical studies inform us about a whole host of things, but the numbers are meaningless unless they've been vetted through formal tests and shown to be logically sound, free from bias and fallacies (see the earlier section). The problem with bias in statistical studies is that it's hard to recognize in oneself, which is why the work also needs to be vetted by peers.
The tests used to analyze the acquired data, such as p-values, t-tests, and chi-squared tests, all have a basis in probability and are, like almost all mathematics, specified in excruciating detail.
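For instance, here is a minimal sketch of a two-sample t-test comparing a control group against a test group, using synthetic data (the group sizes, means, and threshold are made up for illustration) and assuming NumPy and SciPy are available:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Synthetic measurements; in a real study these would come
    # from the control and treatment groups of the experiment.
    control   = rng.normal(loc=10.0, scale=2.0, size=50)
    treatment = rng.normal(loc=11.0, scale=2.0, size=50)

    # Two-sample t-test: is the difference in sample means larger
    # than chance alone would plausibly produce?
    t_stat, p_value = stats.ttest_ind(control, treatment)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
    # A small p-value (conventionally < 0.05) is evidence against
    # the null hypothesis that the two groups share the same mean.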
If the data doesn't pass through the requisite formal mathematical steps, it can be malformed, or the interpretation of it can be completely wrong.
When money, equipment, machinery, property, and lives are at risk, it's critical to go through formal procedures in order to reduce the likelihood of system faults or compromise. That's why standards (such as IEEE, ISO, and MISRA) exist and are produced and followed by the relevant industries. Fault tree analyses (FTAs), HAZOP examinations, sequence diagrams, use case scenarios, and test plans are all drawn up. Source code and design documents are peer-reviewed. Unit and system test results are deeply scrutinized. And test fleets are burned in, stressed to the brink of failure, and refined toward six-sigma reliability.
Everyone from Otis (elevators) to Lockheed Martin (aircraft) commits to these industry-standard development practices because a formally designed and formally tested product is far more likely to function as intended than one manufactured with shoddy practices.
You might ask what this has to do with math. It has EVERYTHING to do with math! It's all probability theory: What's an acceptable probability that a car fails to decelerate? What's an acceptable probability that a spacecraft's stabilization systems fail? What's an acceptable probability of someone gaining access to a bank account or committing identity theft? All of these are defined by formal procedures, calculated with formal, rigorous math, and mitigated with techniques that have a basis in probability theory and number theory, such as majority voting across containment regions, physical redundancy, and combinatorial intractability. We can't forget the applied side of things: it saves lives and extends a product's useful lifetime.
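As a back-of-the-envelope sketch of why the voting-plus-redundancy approach works, assume each unit fails independently with probability p (a strong assumption that real safety analyses have to justify); the system then fails only when a majority of units fail:

    from math import comb

    def majority_vote_failure(p, n):
        """Failure probability of an n-unit majority-voting system
        where each unit fails independently with probability p:
        the system fails when more than n // 2 units fail."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    p = 1e-3  # per-unit failure probability (illustrative)
    print(f"single unit:   {p:.2e}")
    print(f"2-of-3 voting: {majority_vote_failure(p, 3):.2e}")
    print(f"3-of-5 voting: {majority_vote_failure(p, 5):.2e}")

With a per-unit failure probability of one in a thousand, 2-of-3 voting drops the system failure probability to roughly three in a million, which is exactly the kind of trade the standards above formalize.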