So currently I am making a system for a game that works something like this.
You have Value A/Turquoise, which is 1
You have Value B/Orange, which is 0.5
I first move Value A's decimal point two places to the right, making Value A 100. I then divide Value A by 1.25, giving 80, and then move the decimal point two places back to the left, making the final value 0.8.
I then do the same thing for Value B: I move the decimal point to make it 50, divide by 2.5 to get 20, then move the decimal point back so it becomes 0.2.
We now have Value C/Red, which is the sum of the calculated values of A and B and should be 1. Value A affects Value C by 80%, while Value B affects it by 20%. I think it's working as intended.
$$\color{turquoise}{0.8}\\ \color{orange}{0.2}\\ \color{red}{1.0}$$
If I change the value of Value B to 0.3 instead of 0.5, we get this.
$$\color{turquoise}{0.80}\\ \color{orange}{0.12}\\ \color{red}{0.92}$$
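For reference, here is roughly what that calculation looks like as code. I haven't settled on a language, so this is just a Python sketch, and the variable and function names are placeholders rather than anything from my actual project:

```python
# Sketch of the calculation described above. The "shift the decimal point,
# divide, shift it back" steps are written out literally to match the text.

def contribution_a(value_a):
    # e.g. 1 -> 100 -> /1.25 -> 80 -> 0.8
    return (value_a * 100) / 1.25 / 100

def contribution_b(value_b):
    # e.g. 0.5 -> 50 -> /2.5 -> 20 -> 0.2
    return (value_b * 100) / 2.5 / 100

def value_c(value_a, value_b):
    # Value C is the sum of the two calculated contributions.
    return contribution_a(value_a) + contribution_b(value_b)

# round() is only there to keep the printout tidy despite float rounding.
print(round(value_c(1.0, 0.5), 2))  # 1.0  (0.8 + 0.2)
print(round(value_c(1.0, 0.3), 2))  # 0.92 (0.8 + 0.12)
```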
So the calculation is working as intended (I'm pretty sure), but the problem is that if Value B goes higher than 0.5, the result keeps going up, and I don't want that. If the real float of Value B is 0.7, I want the calculated value to be 0.12, the same as if the real float of Value B were 0.3. Now, with the way programming works, I know I can say "if the float of Value B is greater than 0.5, then use the new calculation." The problem is I don't know what that new calculation should be, so what would I need to do to make that work?
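To make that concrete, here is my best guess at what that branch could look like, again as a Python sketch with placeholder names. The mirroring line (1.0 minus the value) is just an assumption based on the "0.7 should act like 0.3" idea, and that's exactly the part I'm not sure about:

```python
def contribution_b(value_b):
    # Assumption: anything above 0.5 gets mirrored back down, so 0.7 is
    # treated like 0.3, 0.8 like 0.2, and so on. This is the "new
    # calculation" branch described above.
    if value_b > 0.5:
        value_b = 1.0 - value_b
    # Same calculation as before: shift, divide by 2.5, shift back.
    return (value_b * 100) / 2.5 / 100

print(round(contribution_b(0.3), 2))  # 0.12
print(round(contribution_b(0.7), 2))  # also 0.12
print(round(contribution_b(0.5), 2))  # 0.2 (the peak)
```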
Now, I got a solid C in high school math classes, so I'm certainly not the brightest (and didn't really pay attention), and I have absolutely no clue what to do or even what to look up, because I don't know the names, terms, or concepts associated with the problem I'm in right now.
What is the solution here? Thanks in advance for the help! If you need more clarification on what exactly I need, don't be afraid to ask!