I saw a mathematician explain how the number 1 is not considered a prime number despite its fitting the traditional definition of a prime number: a natural number that can be divided only by 1 and by itself, yielding a natural number as a result. The explanation apparently relied on the Fundamental Theorem of Arithmetic, which states that every positive whole number greater than 1 can be written as a unique product of primes.
When we break a number into its prime factors, at first glance we have two options: we can consider 1 to be a prime number, and hence a "candidate" prime factor, or we can exclude it from being a prime factor. In the first case, 1 will always be a factor, because every whole number can be divided by 1, yielding a whole number: itself. But since multiplying by 1 does not change the number being multiplied, we can include it as a factor as many times as we like, yielding infinitely many expressions, each of which evaluates to the original number. This would contradict the Fundamental Theorem of Arithmetic, because any positive whole number could then be written as infinitely many "different" products of primes. But if we exclude 1 from being a prime factor, then every positive whole number can indeed be expressed as a unique product of primes, satisfying the theorem. Hence, 1 is not considered a prime number.
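To make that concrete with a small example: excluding 1, the number 12 has exactly one factorisation into primes, namely 12 = 2 · 2 · 3 (up to the order of the factors). If 1 were allowed as a prime factor, we could also write 12 = 1 · 2 · 2 · 3, 12 = 1 · 1 · 2 · 2 · 3, and so on without end, so the factorisation would no longer be unique.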
If that is more or less the reasoning, I think I can understand it. However, I am not sure (a · b · 1) can be considered a different expression from (a · b · 1 · 1).
When I think of mathematical operations, I think of change. Every operation I can think of implies the transformation of one fragment of information into another. In that sense, the simplest operation would be a logical inversion, taking a 1-bit value and transforming it into its alternative 1-bit value. Other simple transformations would be binary logical operations, each taking two 1-bit values and transforming them into a single bit encoding a different meaning. For example, if a and b are 1-bit values, each conveying its own meaning, the operation (a AND b) yields a 1-bit value indicating whether or not both inputs have a specific value I will call "high", whereas the operation (a XOR b) yields a 1-bit value indicating whether or not the two inputs differ. More complex transformations can be achieved by combining simple ones. An AND and a XOR performed in parallel on the same two inputs yield two bits containing the arithmetic sum of the inputs; if a third 1-bit input joins in, and further AND, XOR and OR operations are performed in a specific pattern, a full 1-bit addition with carry is achieved. One-bit addition can be extended to n-bit addition, which can be chained into multiplication, which can in turn be chained into powers and factorials, and so on. In general, mathematical operations seem to be methods for the transformation of one or more inputs into a meaningful output.
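In case it helps to see that construction spelled out, here is a minimal Python sketch of the half and full adders described above (the function names and the choice to represent bits as the integers 0 and 1 are my own, not anything standard):

```python
# A 1-bit full adder built only from AND, XOR and OR, as described above.
# Bits are the integers 0 and 1; the "gates" are just Python's bitwise
# operators given descriptive names.

def AND(a, b): return a & b
def XOR(a, b): return a ^ b
def OR(a, b):  return a | b

def half_adder(a, b):
    """AND and XOR on the same two inputs give a 2-bit result: (carry, sum)."""
    return AND(a, b), XOR(a, b)

def full_adder(a, b, carry_in):
    """Two half adders plus an OR combine three 1-bit inputs into carry and sum."""
    carry1, partial = half_adder(a, b)
    carry2, total = half_adder(partial, carry_in)
    return OR(carry1, carry2), total

def add_nbit(x, y, n_bits):
    """Chain full adders bit by bit to get n-bit addition (final carry dropped)."""
    carry, result = 0, 0
    for i in range(n_bits):
        carry, bit = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

assert add_nbit(13, 29, 8) == 42  # sanity check: the chained gates really add
```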
A computer program could thus be seen as a highly complex mathematical operation, since it too is a process that takes one or more inputs and transforms them into one or more different, meaningful outputs. The idea can be generalised to non-computer programs such as recipes, which are also sequences of operations that enable the transformation of some inputs (the raw ingredients) into some output (the hopefully delicious meal).
But that makes me wonder. If you give me some raw ingredients and I spend one hour in the kitchen making all sorts of noises, only to come out with the same ingredients in the same condition, to what extent can I say I have been cooking? I may have wasted an hour doing who knows what, but in terms of cooking the result is the same as if I had done nothing at all. Likewise, if a mathematical "operation" does not produce any change in its input(s), to what extent can it be said to be an operation? We may call it "adding zero" or "multiplying by one", but those are simply fancy ways of saying "doing nothing". The instruction "take this number and multiply it by one" is equivalent to the instruction "take this number and don't do anything with it".
Of course, in practice it may take time to do nothing, just as I can spend one hour in the kitchen without cooking, and that can itself be used as a feature for synchronisation purposes: a computer program might "waste" a few cycles adding zero to its accumulator in order to adjust the time it takes to perform a broader task. But in the abstract and apparently timeless universe of mathematical theory, what difference would it make whether I do nothing for a longer or a shorter time?
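As a toy illustration of that kind of "useful nothing" (the function below is made up for this post, not taken from any real program), repeatedly adding zero changes no value but still consumes measurable time:

```python
import time

def delay_by_doing_nothing(iterations, value=0):
    """Repeatedly add zero to an accumulator: mathematically a no-op,
    but it still takes real time on a real machine."""
    accumulator = value
    for _ in range(iterations):
        accumulator += 0  # the value never changes
    return accumulator

start = time.perf_counter()
result = delay_by_doing_nothing(10_000_000, value=7)
elapsed = time.perf_counter() - start
print(result, elapsed)  # prints 7 and a small but nonzero number of seconds
```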
To me, the expression (a · b) means "take number a and multiply it by number b". The expression (a · b · 1) means "take number a, multiply it by number b and don't do anything to the result". And the expression (a · b · 1 · 1) means "take number a, multiply it by number b, don't do anything to the result, and then don't do anything to the result again". I could go on forever not doing anything to the result, but that wouldn't mean I'd be doing something to it, so I am not sure to what extent I can consider the latter two expressions as being different.
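Put in programming terms (a purely illustrative check, with a and b chosen arbitrarily): as pieces of written-out text the two expressions are different, but as values they are indistinguishable.

```python
a, b = 3, 5

expr1 = "a * b * 1"
expr2 = "a * b * 1 * 1"

print(expr1 == expr2)              # False: as written-out expressions they differ
print(eval(expr1) == eval(expr2))  # True: both evaluate to the same number, 15
```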
But that is the perspective of someone who is highly ignorant about mathematics. I'd like to know what mathematicians think about this. Thanks!