I want to know how to calculate the value of sin without using table values or a calculator.
I found this $\frac{(e^{ix})^2-1}{2ie^{ix}}$, but how do I deal with the number $i$, if it's $\sqrt{-1}$?
In a comment to MrFatzo's answer you very quickly changed the question to "how do computers calculate $\sin(x)$?", so I'm going to infer that what you're actually trying to ask is:
How does one calculate sines from scratch, without taking anyone's word for the correctness of tables or other magic values that go into the calculation?
I'm aware of two methods:
The ancients reckoned sines in degrees rather than radians. They created tables of sine values (actually chord values, in really ancient times, but that more or less amounts to the same problem) by starting with $\sin(0^\circ)=0$, $\sin(90^\circ)=1$ and then using known formulas for $\sin(v/2)$ to find sines of progressively smaller angles than $90^\circ$, and then formulas for $\sin(v+u)$ to find sines of sums of these smaller angles. That way they could eventually fill out their entire table.
With this method, calculating a single sine from scratch is not really something you do -- it's not very much less work than creating the entire table, which is to say: years and years of painstaking manual calculations.
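The bootstrap idea above can be sketched in a few lines of Python. This is hedged: I lean on the machine's square root (which the ancients of course had to compute by hand), and the particular angles and the `table` layout are just my choices for illustration.

```python
import math

# Build a small table starting from sin(90 deg) = 1, cos(90 deg) = 0,
# using the half-angle identities
#   sin(v/2) = sqrt((1 - cos v)/2),  cos(v/2) = sqrt((1 + cos v)/2),
# then combining entries with sin(u+v) = sin u cos v + cos u sin v.
s, c = 1.0, 0.0                      # sine and cosine of 90 degrees
table = {90.0: (s, c)}
v = 90.0
for _ in range(3):                   # halve the angle a few times
    s, c = math.sqrt((1 - c) / 2), math.sqrt((1 + c) / 2)
    v /= 2
    table[v] = (s, c)                # now holds 45, 22.5, 11.25 degrees

# The addition formula fills in sums, e.g. sin(67.5 deg) from 45 + 22.5:
s45, c45 = table[45.0]
s225, c225 = table[22.5]
sin_67_5 = s45 * c225 + c45 * s225
```

Repeating the halving and the additions densely enough is exactly how an entire table gets filled out.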
See How to evaluate trigonometric functions by pen and paper? for a bit more detail.
In more modern times -- that is, roughly after the development of calculus -- we prefer our sines in radians. Then the gold standard for what the value of a sine should be is the power series: $$ \sin x = x - \frac16 x^3 + \frac1{120} x^5 - \cdots + \frac{(-1)^n}{(2n+1)!} x^{2n+1} + \cdots $$ This series converges quite fast when $x$ is not larger than a handful of radians, and it is simple to estimate the convergence as you go along (once $2n>x$, the limit will be strictly between any two successive partial sums), so that lets you compute single sines from scratch to any precision you desire.
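A minimal sketch of evaluating this series, using a simple fixed-tolerance stopping rule for brevity rather than the partial-sum bracketing described above (the tolerance value is an arbitrary choice):

```python
import math

def sin_series(x, tol=1e-12):
    # Evaluate sin(x) by the power series; each term is obtained from
    # the previous one via the ratio -x^2 / ((2n)(2n+1)).
    term = x          # current term x^(2n+1)/(2n+1)!, starting at n = 0
    total = x
    n = 0
    while abs(term) > tol:
        n += 1
        term *= -x * x / ((2 * n) * (2 * n + 1))
        total += term
    return total
```

For $x$ around 1 radian only a handful of terms are needed, which matches the claim that the series converges quite fast for small arguments.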
The power series is still kind of slow even for computers, if you want to compute millions of sines. So in practice computers and calculators use various combinations of clever interpolation methods and tables that are built into the hardware. The tables themselves were ultimately constructed using the power series methods.
I'm not sure what you can do "manually", but maybe try using a Taylor approximation?
For example, you can calculate $x-\frac{x^3}{6}$
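For instance, checked against the library value at $x = 0.5$ (the evaluation point is just my choice for illustration):

```python
import math

# The cubic Taylor truncation x - x^3/6 at x = 0.5 radians
x = 0.5
approx = x - x**3 / 6
# The error here is on the order of 1e-4; it grows quickly for larger x.
```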
Use the old-fashioned method: draw a really big circle, draw in the angle you wish to calculate, and measure.
For a remarkably good approximation with relatively little effort, consider Bhaskara I's sine approximation formula. For angles in degrees (between 0 and 180), this takes the form $$ \sin x^\circ \approx \frac{4x(180-x)}{40500-x(180-x)} $$ while in radians, it's $$ \sin x \approx \frac{16x(\pi-x)}{5\pi^2-4x(\pi-x)}=\frac{16\frac{x}\pi(1-\frac{x}\pi)}{5-4\frac{x}\pi(1-\frac{x}\pi)} $$ This expression has a relative error of less than 2%, with the worst case occurring for very small angles (and for angles very close to $180^\circ$). The absolute error is never greater than 0.00165.
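A quick numerical check of the degree form and its stated absolute-error bound (the sampling grid of tenth-degree steps is my choice):

```python
import math

def bhaskara_sin_deg(x):
    # Bhaskara I's rational approximation, valid for 0 <= x <= 180 degrees
    return 4 * x * (180 - x) / (40500 - x * (180 - x))

# Scan [0, 180] in steps of 0.1 degrees and record the worst absolute error
max_abs_err = max(
    abs(bhaskara_sin_deg(k / 10) - math.sin(math.radians(k / 10)))
    for k in range(1801)
)
```

Note that the formula is exact at 0, 90, and 180 degrees, which is part of why it does so well across the whole range.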
I would suggest a less well-known method that generalises nicely to many other functions and can be quite efficient even when you need to do all the calculations by hand:
The pair $(c,s) = (\cos x, \sin x)$ (in radians, of course!) can be interpreted as the unique solution to the ordinary differential equation $$ \frac{\mathrm{d}}{\mathrm{d}x} \begin{pmatrix}c\\s\end{pmatrix} = \begin{pmatrix}-s\\c\end{pmatrix} $$ with starting condition $c(0) = 1$, $s(0) = 0$. As such, it can be solved approximately by Runge-Kutta solvers. The idea is to start from zero and then approach the target value step by step, effectively using a Taylor expansion around each point. Because the steps are small, Taylor expansion converges much faster here than if you directly put in the target value.
Why this is particularly convenient for manual calculation: you can choose the step-lengths in a way so the numbers will stay reasonably nice in decimal, as long as you make sure the steps are small and add up to the point where you want to go.
I'll use the Heun scheme, which is the simplest of these iterative solvers that actually gives usable precision. It's based on a 2nd order Taylor expansion.
So let's say you want to compute $\sin (45^\circ) = \sin (\tfrac\pi4)$. We know this should come out as $\sqrt2/2$, but let's see. I'll pick 0.2 as the default step-size. Let's go:
$x_0=0$, $c_0=1$, $s_0=0$
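The iteration can be sketched in Python. Since my hand-picked convenient step sizes aren't reproduced here, the sketch simply assumes five equal steps of $h = \frac{\pi/4}{5}$, which is slightly different from the hand calculation:

```python
import math

# Heun's method for the system (c, s)' = (-s, c), c(0) = 1, s(0) = 0,
# integrated from x = 0 to x = pi/4 in five equal steps.
c, s = 1.0, 0.0
h = (math.pi / 4) / 5
for _ in range(5):
    # Euler predictor: one plain forward step
    cp, sp = c - h * s, s + h * c
    # Heun corrector: average the slopes at both ends of the step
    c, s = c + h * (-s - sp) / 2, s + h * (c + cp) / 2
# c and s now both approximate sqrt(2)/2, to within a few thousandths
```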
Ok, we're now pretty close to $\tfrac\pi4$, and it's already good to see the values for $\cos$ and $\sin$ becoming very similar, as theory says $\sin(\tfrac\pi4) = \cos(\tfrac\pi4) = \tfrac{\sqrt2}2$. We can further confirm this by quickly squaring: $0.70364^2 \approx 0.495$, reasonably close to $0.5$. Not great, but then I only took five not-that-small steps (since the Heun scheme is 2nd-order accurate, making the steps a bit smaller would improve the accuracy quite noticeably), and – that's the main advantage – each of them involved only very simple multiplications, because I could choose the step sizes for convenience, unlike in a direct Taylor-series evaluation.
That is also the basic idea behind the CORDIC method, which was already mentioned in the comments. CORDIC uses extra properties of sine and cosine to achieve better efficiency, but this is not really needed: many functions that are defined by an ODE can be efficiently evaluated by a Runge-Kutta solver, and often the fourth-order version is preferred, which converges even faster.
In practice, this method is still only superior to direct Taylor if you need multiple function values. Then it's very good because a) you make a table of values as you go along b) you compute both sine and cosine, which can then be combined by exploiting the symmetry/periodicity properties.
There are many ways in which trig functions are calculated by computers, including rather inaccurate ones (for example, the fsin instruction of Intel processors is notorious). A nice overview of implementations of many functions can be found here, as part of the GNU MPFR library.
As for how computers actually evaluate sin(x) and other trig / transcendental functions: rather than using the Taylor series, which can converge rather slowly at times, the method usually used is a Chebyshev polynomial approximation. It should be noted that the whooshing sound you can hear is the mathematics on that page going clean over my head. ;)
That said, you normally extract a relatively small number of coefficients and use them in a polynomial expansion that gets reasonable accuracy, albeit with a non-zero error term. This page shows the numbers involved in evaluating sin(x).
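To give a flavor, here is a hedged sketch of a Chebyshev interpolation of sin on $[0, \pi/2]$; the degree, interval, and node choice are mine for illustration, not what any particular math library actually uses:

```python
import math

def cheb_coeffs(f, n):
    # Coefficients of the degree-n Chebyshev interpolant of f on [-1, 1],
    # sampled at the Chebyshev nodes cos(pi*(j + 1/2)/(n + 1)).
    N = n + 1
    coeffs = []
    for k in range(N):
        s = sum(f(math.cos(math.pi * (j + 0.5) / N))
                * math.cos(math.pi * k * (j + 0.5) / N)
                for j in range(N))
        coeffs.append(2.0 * s / N)
    coeffs[0] /= 2.0
    return coeffs

def cheb_eval(coeffs, t):
    # Clenshaw recurrence for sum_k coeffs[k] * T_k(t)
    b1 = b2 = 0.0
    for ck in reversed(coeffs[1:]):
        b1, b2 = ck + 2 * t * b1 - b2, b1
    return coeffs[0] + t * b1 - b2

# Degree-7 fit of sin on [0, pi/2], mapped onto [-1, 1]
coeffs = cheb_coeffs(lambda t: math.sin(math.pi / 4 * (t + 1)), 7)

def sin_cheb(x):
    # Valid for 0 <= x <= pi/2; other arguments would first be range-reduced
    return cheb_eval(coeffs, 4 * x / math.pi - 1)
```

Eight coefficients already give errors far below what's visible at single precision, which is why a short polynomial like this beats summing the Taylor series term by term.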