
Is there any way to extend the elementary definition of powers to the case of complex numbers?

By "elementary" I am referring to the definition based on $$a^n=\underbrace{a\cdot a\cdots a}_{n\;\text{factors}}.$$ (Meaning I am not interested in the power series or "compound interest" definitions.) This is extended to negative numbers, fractions, and finally irrationals by letting $$a^r=\lim_{n\to\infty} a^{r_n}$$ where $r_n$ is rational and approaches $r$.
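To make the limit definition concrete numerically, here is a small sketch (my own illustration, not part of the question) that approximates $2^{\sqrt 2}$ through rational exponents $r_n$, each evaluated using only an integer root followed by an integer power:

```python
import math
from fractions import Fraction

# A numeric sketch of the limit definition: approximate the irrational
# exponent sqrt(2) by rational convergents r_n = p/q, and evaluate 2^(p/q)
# as a q-th root followed by an integer power.
def pow_rational(a, r):
    """a^(p/q) for a > 0, via a q-th root and then an integer power."""
    p, q = r.numerator, r.denominator
    root = a ** (1.0 / q)    # the elementary q-th root
    return root ** p

r_ns = [Fraction(math.sqrt(2)).limit_denominator(10 ** k) for k in range(1, 7)]
vals = [pow_rational(2.0, r_n) for r_n in r_ns]
# vals approaches 2^sqrt(2) as the rational approximations r_n improve
```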

For a concrete example, how would we interpret $e^i$ in terms of these ideas?

  • The extensions from $\mathbb{N}$ to $\mathbb{Z}$ and then to $\mathbb{Q}$ are forced by the condition that the law $a^{m+n}=a^ma^n$ should hold on the larger domain; the extension to $\mathbb{R}$ is for continuity. So one natural interpretation of the question is, is the extension to $\mathbb{C}$ forced by the condition that $z\mapsto a^z$ should be a continuous group homomorphism from $(\mathbb{C},+)$ to $(\mathbb{C}^\times,\cdot)$? The answer is no (e.g., $x+iy\mapsto a^xa^y$ is another such), but maybe there's some additional mild condition that picks out the extension we want. –  Jun 14 '14 at 01:52

3 Answers


Short version: If we try to extend the functions $x\mapsto a^x$ to complex exponents in the same way we extended them to reals (essentially, by requiring that $a^{m+n}=a^ma^n$ should still hold), we are left with a lot of ambiguity: there is a large family of plausible extensions. The problem is to pick out the particular extension we want. I'll show how we can narrow down the options considerably using reasonable criteria, but the final step is exactly the normalization of a logarithm (i.e., the choice of base) which, I believe, cannot be justified without appeal to calculus (which would open the door to power series definitions, etc., which you forbade). This perhaps explains why all proofs of Euler's formula make crucial use of calculus in one way or another.

Long version:

As I said in my comment, the way we extend the definition from $\mathbb N$ to $\mathbb Z$, and then to $\mathbb Q$, is by asking that the law $a^{m+n}=a^ma^n$ continue to hold on the larger domains; we extend to $\mathbb R$ by asking that the function be continuous. So in my view, the "elementary" definition of exponentiation for reals is this:

Elementary definition of real exponentiation. For any $a\in\mathbb{R}^+$, there is exactly one function $\varphi\colon\mathbb R\to\mathbb R$ satisfying these conditions:

  1. $\varphi(1) = a$;
  2. $\varphi(x+y)=\varphi(x)\varphi(y)$ for all $x,y\in\mathbb{R}$; and
  3. $\varphi$ is continuous.

We write $\varphi(x) = a^x$.

(I consider only positive bases, for simplicity.)

Let's see what functions $\varphi\colon\mathbb C\to\mathbb C$ satisfy these conditions, with the addition law now imposed for all complex arguments. First note that $$ \varphi(x+iy) = \varphi(x)\varphi(iy) = a^x \varphi(iy) \text{ ,} $$ so really it's a matter of choosing the function $y\mapsto\varphi(iy)$. To describe the candidates for this function, I need the following functional characterization of the trigonometric functions.

Proposition 1. Suppose $C,S\colon\mathbb R\to\mathbb R$ satisfy these conditions:

  1. $C$ and $S$ are continuous;
  2. $C(u-v) = C(u)C(v)+S(u)S(v)$ for all $u,v\in\mathbb R$;
  3. $S(u-v) = S(u)C(v)-C(u)S(v)$ for all $u,v\in\mathbb R$;
  4. $C$ and $S$ are not both identically zero.

Then there exists $\lambda\in\mathbb R$ such that

$$ C(u) = \cos(\lambda u) \quad\text{and}\quad S(u) = \sin(\lambda u) \text{ .} $$

The proof is long, so I defer it to an appendix below. Let's continue with identifying the candidate functions $y\mapsto\varphi(iy)$.

Proposition 2. Suppose $\psi\colon\mathbb R\to\mathbb C$ satisfies these conditions:

  1. $\psi(0)=1$;
  2. $\psi(u+v) = \psi(u)\psi(v)$ for all $u,v\in\mathbb R$; and
  3. $\psi$ is continuous.

Then there exist $\lambda\in\mathbb R$ and $\mu\in\mathbb R^+$ such that

$$ \psi(u) = \mu^u(\cos(\lambda u) + i\sin(\lambda u)) \text{ .} $$

Proof. Note that $|\psi(u+v)| = |\psi(u)\psi(v)| = |\psi(u)|\,|\psi(v)|$. By the definition of real exponentiation, $$ |\psi(u)| = |\psi(1)|^u $$ for all $u\in\mathbb R$; let $\mu=|\psi(1)|$. In particular, $|\psi(u)|$ is never zero, so we can define $$ C(u) = \text{Re}\ \frac{\psi(u)}{|\psi(u)|} \quad\text{and}\quad S(u) = \text{Im}\ \frac{\psi(u)}{|\psi(u)|} \text{ .} $$ These functions are continuous, because $\psi$ is, and they are not both identically zero, because $C(u)^2+S(u)^2=1$ for every $u$. Furthermore, \begin{align*} C(u-v) &= \text{Re}\ \frac{\psi(u-v)}{|\psi(u-v)|} \\ &= \text{Re}\ \left(\frac{\psi(u)}{|\psi(u)|}\right) \left(\frac{\psi(v)}{|\psi(v)|}\right)^{-1} \\ &= \text{Re}\ (C(u)+iS(u))(C(v)-iS(v)) \\ &= C(u)C(v)+S(u)S(v) \end{align*} and similarly for $S(u-v)$. By Proposition 1, there exists $\lambda\in\mathbb R$ such that $$ \psi(u) = |\psi(u)|(C(u)+iS(u)) = \mu^u(\cos(\lambda u)+i\sin(\lambda u)) $$ as desired.
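As a quick numeric sanity check (mine, not part of the proof), every member of the family in Proposition 2 really does satisfy conditions 1–3:

```python
import math

# Verify numerically that psi(u) = mu^u (cos(lambda u) + i sin(lambda u))
# satisfies psi(0) = 1 and the addition law psi(u+v) = psi(u) psi(v).
def make_psi(mu, lam):
    return lambda u: (mu ** u) * complex(math.cos(lam * u), math.sin(lam * u))

psi = make_psi(1.5, 2.0)      # an arbitrary mu > 0 and real lambda
assert abs(psi(0) - 1) < 1e-12
for u, v in [(0.3, -1.2), (2.0, 0.7), (-0.5, -0.5)]:
    assert abs(psi(u + v) - psi(u) * psi(v)) < 1e-9
```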

Thus we can describe all the $\mathbb C\to\mathbb C$ extensions:

Corollary. Let $a\in\mathbb{R}^+$. Suppose $\varphi\colon\mathbb C\to\mathbb C$ satisfies these conditions:

  1. $\varphi(1) = a$;
  2. $\varphi(x+y)=\varphi(x)\varphi(y)$ for all $x,y\in\mathbb{C}$; and
  3. $\varphi$ is continuous.

Then there exist $\lambda\in\mathbb R$ and $\mu\in\mathbb R^+$ such that

$$ \varphi(x+iy) = a^x \mu^y (\cos(\lambda y) + i\sin(\lambda y)) $$

for all $x,y\in\mathbb R$.
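The ambiguity in the corollary can be seen numerically; here is an illustration of my own, showing two different $(\mu,\lambda)$ pairs giving genuinely different continuous extensions of the same base:

```python
import math

# Two extensions of the base-e exponential from the corollary's family.
def make_phi(a, mu, lam):
    def phi(z):
        x, y = z.real, z.imag
        return (a ** x) * (mu ** y) * complex(math.cos(lam * y), math.sin(lam * y))
    return phi

phi1 = make_phi(math.e, 1.0, 1.0)   # the extension we ultimately want
phi2 = make_phi(math.e, 2.0, 3.0)   # another extension meeting the conditions
# Both restrict to e^x on the real axis...
assert abs(phi1(complex(1.5, 0)) - math.e ** 1.5) < 1e-12
assert abs(phi2(complex(1.5, 0)) - math.e ** 1.5) < 1e-12
# ...and both satisfy the addition law...
z, w = complex(0.5, 1.0), complex(-0.2, 0.4)
assert abs(phi2(z + w) - phi2(z) * phi2(w)) < 1e-9
# ...yet they disagree at z = i.
assert abs(phi1(1j) - phi2(1j)) > 0.1
```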

At this point I think it's natural to pick out the case $\mu=1$. For one thing, the curves $\psi(\mathbb R)$ are logarithmic spirals, except in the case $\mu=1$, which is the unit circle, which is qualitatively distinct. For another thing, the exponentiation part of $\psi$ (the $\mu^u$) is behaviour we already have; it's the rotation part (the $\cos$ and $\sin$) which is new. It seems most tidy to have $\varphi(x)$ do the exponentiation and $\varphi(iy)$ do the rotation.

One elegant algebraic criterion which picks out the $\mu=1$ case is to ask that $\varphi$ respect conjugation, i.e., $\varphi(\overline z)=\overline{\varphi(z)}$. This requires the curve $\psi(\mathbb R)$ to be symmetrical under reflection in the real axis, which excludes all the logarithmic spirals. Alternatively, we could require that $\psi(\mathbb R)$ be bounded, or that it be bounded away from zero. I'll state the conjugation version:

Corollary. Let $a\in\mathbb R^+$. Suppose $\varphi\colon\mathbb C\to\mathbb C$ satisfies these conditions:

  1. $\varphi(1) = a$;
  2. $\varphi(w+z)=\varphi(w)\varphi(z)$ for all $w,z\in\mathbb C$;
  3. $\varphi$ is continuous; and
  4. $\varphi(\overline z) = \overline{\varphi(z)}$ for all $z\in\mathbb C$.

Then there exists $\lambda\in\mathbb R$ such that, for all $x,y\in\mathbb R$,

$$ \varphi(x+iy) = a^x (\cos(\lambda y) + i\sin(\lambda y)) \text{ .} $$

(Some treatments of complex exponentiation present this conjugation condition as if it were a provable conclusion and not an imposed condition; see this question.)
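One can check numerically (my code, not part of the argument) that the conjugation condition really does single out $\mu=1$: for a spiral, $\varphi(\overline z)$ and $\overline{\varphi(z)}$ have different moduli.

```python
import math

# phi(conj z) = conj(phi(z)) forces mu^{-y} = mu^{y} for all y, i.e. mu = 1.
def make_phi(a, mu, lam):
    def phi(z):
        x, y = z.real, z.imag
        return (a ** x) * (mu ** y) * complex(math.cos(lam * y), math.sin(lam * y))
    return phi

z = complex(0.7, 1.3)
spiral = make_phi(2.0, 3.0, 1.0)   # mu != 1: psi(R) is a logarithmic spiral
circle = make_phi(2.0, 1.0, 1.0)   # mu == 1: psi(R) is the unit circle
assert abs(spiral(z.conjugate()) - spiral(z).conjugate()) > 0.1
assert abs(circle(z.conjugate()) - circle(z).conjugate()) < 1e-12
```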

We still have the ambiguity of $\lambda$, and indeed, nothing said so far gives any clue about what $\lambda$ we should take for a given $a$. We don't even have any connection between the choice of $\lambda$ for different bases. The latter issue can be partly addressed by also assuming the rule $(ab)^z=a^zb^z$. Thus:

Proposition 3. Let $\lambda\colon\mathbb R^+\to\mathbb R$. For $a\in\mathbb R^+$, define

$$ \varphi_a(x+iy) = a^x (\cos(\lambda(a)y) + i\sin(\lambda(a)y)) $$

If these functions satisfy the condition $\varphi_{ab}(z) = \varphi_a(z)\varphi_b(z)$, then $\lambda$ satisfies $\lambda(ab)=\lambda(a)+\lambda(b)$.

The proof is direct, so I omit it.

At this point it's natural to take $\lambda(a) = \log_c(a)$ for some $c$; then we get Euler's formula $c^{i\pi} = -1$. To justify the normalization $c=e$ requires, I believe, ideas from calculus, which pretty much opens the door to the series definition of the exponential, and the continuously compounded interest definition, which you forbade.
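Numerically (my illustration): with $\lambda(a)=\log_c(a)$, the formula $c^{i\pi}=-1$ holds for *any* base $c$; nothing so far prefers $c=e$.

```python
import math

# With lambda(a) = log_c(a), the extension gives c^{i pi} = -1 for every c.
def phi(a, z, c):
    lam = math.log(a, c)            # lambda(a) = log_c(a)
    x, y = z.real, z.imag
    return (a ** x) * complex(math.cos(lam * y), math.sin(lam * y))

for c in (2.0, math.e, 10.0):
    assert abs(phi(c, complex(0, math.pi), c) - (-1)) < 1e-12
```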

Another perhaps interesting issue is that the functional equation $\lambda(ab)=\lambda(a)+\lambda(b)$ apparently doesn't quite characterize the logarithms; on Wikipedia, at least, there is only mention of a characterization which also assumes $\lambda$ is increasing, which seems quite artificial in our context. So there is something more to pin down here, I think.

Appendix: Proof of Proposition 1

(My proof here draws heavily on Robison, "A New Approach to Circular Functions, $\pi$, and $\lim (\sin x)/x$", Math. Mag. 41 (1968), 66–70, jstor, which gives a different functional characterization which is very nice but not quite adapted to my needs here.)

Lemma. $S(0)=0$. Proof: Take $u=v$ in condition 3.

Lemma. $C(u)^2+S(u)^2=C(0)$. Proof: Take $u=v$ in condition 2.

Lemma. $C(0)\ne 0$. Proof: Otherwise $C(u)^2+S(u)^2=0$ for all $u$, so $C$ and $S$ would be identically zero, contrary to condition 4.

Lemma. $C(0)=1$. Proof: $C(0)=C(0)^2+S(0)^2=C(0)^2$, which implies $C(0)=1$ because $C(0)\ne 0$.

Corollary. $C(u)^2+S(u)^2=1$.

Lemma. $C(-u)=C(u)$. Proof. $C(-u) = C(0-u) = C(0)C(u)+S(0)S(u) = C(u)$.

Lemma. $S(-u)=-S(u)$. Proof. Similar.

Corollary. $S(u+v) = S(u)C(v) + C(u)S(v)$.

Corollary. $C(u+v) = C(u)C(v) - S(u)S(v)$.

Corollary. $C(2u) = C(u)^2 - S(u)^2$.

Corollary. $C(2u) = 2C(u)^2 - 1$.

Lemma. If $C$ is not constant then $C$ has a root. Proof. Let $y_0\in\mathbb R$ be such that $C(y_0)\ne 1$; since $C(y_0)^2 = 1 - S(y_0)^2 \le 1$, we have $C(y_0) < 1$. Define $a_n = C(2^ny_0)$ and $L=\inf_n a_n$. Then $$ L = \inf_{n\ge 0} a_n \le \inf_{n\ge 0} a_{n+1} = \inf_{n\ge 0} (2a_n^2-1) = 2\inf_{n\ge 0} a_n^2 - 1 \le 2L^2-1 $$ and so either $L\ge 1$ or $L\le-\frac12$. If $L\ge 1$ then $C(y_0)=a_0\ge 1$, contrary to our choice of $y_0$; so $L\le-\frac12$. In particular, $C$ takes negative values (as well as the positive value $C(0)=1$), so by IVT, $C$ has a root.
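The doubling trick in this lemma can be checked numerically with $C=\cos$ (my own sanity check, not part of the proof): the recursion $a_{n+1}=2a_n^2-1$ is exactly the double-angle formula, and the sequence dips to $-\tfrac12$ or below whenever $a_0=C(y_0)<1$.

```python
import math

y0 = 0.3                      # cos(0.3) is close to, but less than, 1
a = math.cos(y0)
seq = [a]
for _ in range(40):
    a = 2 * a * a - 1         # = C(2^{n+1} y_0) by the double-angle formula
    seq.append(a)
assert min(seq) <= -0.5       # the infimum L satisfies L <= -1/2
# the recursion agrees with cos(2^n y_0) while rounding error is still small:
assert all(abs(seq[n] - math.cos(2 ** n * y0)) < 1e-6 for n in range(10))
```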

In view of this last lemma, we break into cases.

Case 1. $C$ is constant. Then $C$ is constantly $1$, and $S=\pm\sqrt{1-C^2}$ is constantly $0$, so we can take $\lambda=0$ in the proposition.

Case 2. $C$ is not constant. Let $p$ be the smallest positive root of $C$. (Roots exist as just shown; positive roots exist because $C$ is even, as shown above; and a smallest such root exists because $C$ is continuous.) We continue as follows:

Lemma. If $u\in[0,p)$ then $C(u)>0$. Proof. $C(0)>0$, $C$ is continuous, and $p$ is its smallest positive root.

Corollary. If $u\in[0,p)$ then $C(u) = \sqrt{\frac12(1+C(2u))}$.

Corollary. $C(p/2^n) = \cos(\pi/2^{n+1})$ for all $n\in\mathbb N$. Proof. By induction.

Lemma. $S(p)=\pm 1$. Proof. $S(p)^2=1-C(p)^2=1$.

Lemma. $S(u)=S(p)C(p-u)$. Proof. $S(u)=S(p-(p-u))$ and $C(p)=0$.

Corollary. If $u\in(0,p]$ then $S(u)$ has the same sign as $S(p)$.

Corollary. If $u\in(0,p]$ then $S(u) = S(p)\sqrt{1-C(u)^2}$.

Corollary. $S(p/2^n) = S(p)\sin(\pi/2^{n+1})$ for all $n\in\mathbb N$.

Now to complete the proof of the proposition. Take $\lambda = \frac{S(p)\,\pi}{2p}$. We have already shown that $C(u) = \cos(\lambda u)$ and $S(u) = \sin(\lambda u)$ for $u = p/2^n$ with $n\in\mathbb N$. (Note that $\cos(S(p)v) = \cos v$ and $\sin(S(p)v) = S(p)\sin v$, since $S(p)=\pm 1$.) Using the addition formulas we can extend these identities to $u = mp/2^n$ with $m,n\in\mathbb N$; continuity then extends them to all $u\ge 0$; the evenness of $C$ and $\cos$ and the oddness of $S$ and $\sin$ then finish the job.
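As a final sanity check on this recipe (my code, not the answer's): starting from $C(u)=\cos(3u)$ and $S(u)=\sin(3u)$, locating the smallest positive root $p$ of $C$ and forming $S(p)\pi/(2p)$ recovers $\lambda=3$.

```python
import math

lam_true = 3.0
C = lambda u: math.cos(lam_true * u)
S = lambda u: math.sin(lam_true * u)

lo, hi = 0.0, 1.0             # C(0) = 1 > 0 and C(1) = cos(3) < 0
for _ in range(80):           # bisect down to the first sign change
    mid = (lo + hi) / 2
    if C(mid) > 0:
        lo = mid
    else:
        hi = mid
p = (lo + hi) / 2             # smallest positive root of C, here pi/6
lam = S(p) * math.pi / (2 * p)
assert abs(p - math.pi / 6) < 1e-12
assert abs(lam - lam_true) < 1e-9
```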

  • Thank you for this superb answer. I have accepted it, but please feel free to add any thoughts that may come up later. –  Jun 15 '14 at 15:35
  • Thanks. I actually just had to edit the proof of Proposition 1, as it had an error. I'm pretty sure it's correct now. –  Jun 15 '14 at 15:40
  • I've been editing to improve, I hope, but I now see a few typos and whatnot... I'd edit now but I don't want to bump it so frequently, so I'll fix those few things up in a day or two. –  Jun 16 '14 at 04:06
  • This is great! I'm still exploring this topic (along similar lines), but it seems one need not assume anything about the trigonometric functions (just the real $\exp$ and $\log$); indeed, one can base trigonometry on the study of complex exponentiation (without any appeal to geometry), or so it seems to me for now. – Allawonder Jun 06 '19 at 18:45

I know this question is somewhat old, but I would like to add on to the wonderful accepted answer from the point where the problem is reduced to finding a continuous function $\psi : \mathbb{R} \to \mathbb{C}$ such that

  1. $\psi(0) = 1$
  2. $\psi(x + y) = \psi(x) \cdot \psi(y)$ for all $x, y \in \mathbb{R}$

Here, one can use the following lemma:

Lemma : Let $I \subseteq \mathbb{R}$ be an interval and $f : I \to \mathbb{R}^2 \backslash \{0\}$ be a continuous function. Then, there exist a unique continuous function $r : I \to (0, \infty)$ and a continuous function $\theta : I \to \mathbb{R}$, unique up to addition of an integral multiple of $2 \pi$, such that $f(x) = r(x) \cdot (\cos(\theta(x)), \sin( \theta(x)))$ for all $x \in I$

Proof : Let $r_1, r_2 : I \to (0, \infty)$ and $\theta_1, \theta_2 : I \to \mathbb{R}$ be continuous functions such that

\begin{align} r_1(x) \cdot (\cos(\theta_1(x)), \sin( \theta_1(x))) = r_2(x) \cdot (\cos(\theta_2(x)), \sin( \theta_2(x))) \end{align}

for all $x \in I$. By taking the norm on both sides, we get that $r_1(x) = r_2(x)$. Dividing through by $r_1(x)$, elementary trigonometric identities give $\theta_1(x) = \theta_2(x) + 2n(x)\pi$ where $n(x) \in \mathbb{Z}$. Since $I$ is connected and $\dfrac{\theta_1 - \theta_2}{2\pi}$ is continuous and integer-valued, it follows that $n(x)$ is a constant. We have proven the uniqueness of $(r, \theta)$.

For the existence, one can produce $r$ easily by setting $r = \|f\|$. Hence, the problem is now reduced to finding a function $\theta$ for the normalized map $f / \|f\| : I \to S^1$; so assume from now on that $f$ maps into $S^1$. We first tackle the cases where the range of $f$ is nice and construct $\theta$ explicitly.

Case 1 : $f(I) \subseteq S^1 \backslash \{(-1, 0)\}$

Define $\theta_1$ to be

  1. $\sin^{-1} \circ \pi_2 \circ f$, on $f^{-1}\left(S^1 \cap (0, \infty) \times \mathbb{R}\right)$
  2. $\cos^{-1} \circ \pi_1 \circ f$, on $f^{-1}\left(S^1 \cap \mathbb{R} \times (0, \infty)\right)$
  3. $- \cos^{-1} \circ \pi_1 \circ f$, on $f^{-1}\left(S^1 \cap \mathbb{R} \times (-\infty, 0)\right)$

Case 2 : $f(I) \subseteq S^1 \backslash \{(1, 0)\}$

Define $\theta_2$ to be

  1. $\cos^{-1} \circ \pi_1 \circ f$, on $f^{-1}\left(S^1 \cap \mathbb{R} \times (0, \infty)\right)$
  2. $\pi - \sin^{-1} \circ \pi_2 \circ f$, on $f^{-1}\left(S^1 \cap (-\infty, 0) \times \mathbb{R}\right)$
  3. $2\pi - \cos^{-1} \circ \pi_1 \circ f$, on $f^{-1}\left(S^1 \cap \mathbb{R} \times (-\infty, 0)\right)$

where $\pi_i$ is the projection to the $i$th coordinate.

By the gluing lemma, the above functions are well defined and continuous.

For the general case, fix $x \in I$ (one can produce $x$ explicitly to avoid the axiom of choice, for instance $x = \dfrac{\sup (I) + \inf (I)}{2}$ for bounded $I$). WLOG assume $f(x) = (0, 1)$ (this can be achieved via rotation). Let $E$ be the set of open connected components of the inverse images of $S^1 \backslash \{ (-1, 0) \}$ and $S^1 \backslash \{ (1, 0) \}$ under $f$. This is an open covering of $I$. For $U \in E$, let $\theta_U : U \to \mathbb{R}$ be defined appropriately as per the above cases.

(Note : For a less direct but more elementary route, one can carry out a similar variant of the following arguments by first proving the theorem for the case where $I$ is compact by obtaining a minimal finite covering via the Heine-Borel theorem, and then by extending to the general case via taking unions of compact intervals)

Definition : A chain of length $n$ is a subset $\{U_1, U_2, \ldots, U_n\}$ of $E$ such that $U_i \cap U_j \neq \varnothing$ iff $|i - j| \leq 1$.

Claim : For every chain $\{U_1, \ldots U_n\} \subseteq E$, there exists exactly one appropriate $\theta : \bigcup_{i = 1}^n U_i \to \mathbb{R}$ extending $\theta|_{U_1}$.

Let $U = \bigcup_{i = 1}^n U_i$. Since $U$ is connected, it follows from the uniqueness that any two $\theta$ must differ by an integral multiple of $2\pi$, and this difference must be $0$ since they must agree on $U_1$.

The existence is proven by induction on the length of the chain. For $n = 1$, this is trivially true. Assume that the claim is true for chains of length $n$. Let $\{U_1, U_2, \ldots U_{n + 1}\} \subseteq E$ be a chain. Let $U' = \bigcup_{i = 1}^n U_i$. By induction, there exists $\theta' : U' \to \mathbb{R}$ extending $\theta|_{U_1}$. Since the intersection of connected sets in $\mathbb{R}$ is connected, $\theta'$ and $\theta_{U_{n + 1}}$ differ on $U' \cap U_{n + 1}$ by $2k\pi$ for some $k \in \mathbb{Z}$. Hence, by the gluing lemma, $\theta : \bigcup_{i = 1}^{n + 1} U_i \to \mathbb{R}$ defined to be $\theta'$ on $U'$ and $\theta_{U_{n + 1}} + 2k\pi$ on $U_{n + 1}$ is an appropriate function defined on the union of the chain $\{U_1, U_2, \ldots U_{n + 1}\}$.

Let $\{U_1, \ldots, U_n\}$ and $\{V_1, \ldots V_m\}$ be two chains such that $x \in U_1$ and $x \in V_1$. Let $U = \bigcup_{i = 1}^n U_i$ and $V = \bigcup_{i = 1}^m V_i$, and let $\theta_U : U \to \mathbb{R}$ and $\theta_V : V \to \mathbb{R}$ be the appropriate functions extending $\theta|_{U_1}$ and $\theta|_{V_1}$. Since $U \cap V$ is open, connected and non-empty, it follows that $\theta_U$ and $\theta_V$ differ by an integral multiple of $2 \pi$ on $U \cap V$. Since $f(x) = (0, 1)$, the case constructions give $\theta_{U_1}(x) = \theta_{V_1}(x) = \pi/2$, so this multiple is zero and $\theta_U$ and $\theta_V$ agree on $U \cap V$. Let $\theta$ be the function obtained by gluing the $\theta_U$ over all chains $\{U_1, \ldots, U_n\} \subseteq E$ such that $x \in U_1$. By the gluing lemma, this is a well defined continuous function. Since $I$ is connected, it follows that for all $y \in I$, there exists a chain $\{U_1, \ldots, U_n\} \subseteq E$ such that $x \in U_1$ and $y \in U_n$ (proof here). By construction, $\theta$ satisfies $f(y) = (\cos(\theta(y)), \sin( \theta(y)))$ for all $y \in I$. $$\tag*{$\blacksquare$}$$
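The lemma can also be seen in computational form (my sketch, via sampling rather than the chain argument): one lifts a nonvanishing curve to continuous polar coordinates by accumulating angle increments between consecutive sample points.

```python
import cmath, math

def polar_lift(samples):
    """Return (r, theta) samples with theta continuous (no 2*pi jumps),
    assuming consecutive samples subtend angles smaller than pi."""
    r = [abs(z) for z in samples]
    theta = [cmath.phase(samples[0])]
    for prev, cur in zip(samples, samples[1:]):
        # angle increment in (-pi, pi] between consecutive points
        theta.append(theta[-1] + cmath.phase(cur / prev))
    return r, theta

# A curve winding twice around the origin:
samples = [cmath.exp(1j * 4 * math.pi * k / 1000) for k in range(1001)]
r, theta = polar_lift(samples)
assert abs(theta[-1] - 4 * math.pi) < 1e-9   # total angle 4*pi, not 0 mod 2*pi
```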

A quick observation shows that $\psi$ never vanishes: $\psi(x) \cdot \psi(-x) = \psi(0) = 1$. From the above lemma, it follows that there exist continuous $r : \mathbb{R} \to (0, \infty)$ and $\theta : \mathbb{R} \to \mathbb{R}$ such that $\psi(x) = r(x) \cdot (\cos(\theta(x)), \sin( \theta(x)))$ and $\theta(0) = 0$. The multiplicative property implies that $r(x + y) \cdot (\cos(\theta(x + y)), \sin( \theta(x + y))) = r(x) \cdot r(y) \cdot (\cos(\theta(x) + \theta(y)), \sin( \theta(x) + \theta(y)))$ for all $x, y \in \mathbb{R}$. By taking the modulus on both sides, it follows that $r(x + y) = r(x) \cdot r(y)$. Since $r$ is continuous, one observes that $r(x) = r(1)^x$.

On the other hand, for all $x, y \in \mathbb{R}$ we have

\begin{align} (\cos(\theta(x + y)), \sin( \theta(x + y))) = (\cos(\theta(x) + \theta(y)), \sin( \theta(x) + \theta(y))) \end{align}

Fix $y \in \mathbb{R}$. Consider the functions $x \mapsto \theta(x) + \theta(y)$ and $x \mapsto \theta(x + y)$. From the uniqueness part of the lemma, it follows that there exists $n \in \mathbb{Z}$ such that $\theta(x) + \theta(y) = \theta(x + y) + 2n\pi$. Setting $x = 0$, we get that $n = 0$. Hence, $\theta(x + y) = \theta(x) + \theta(y)$ for all $x, y \in \mathbb{R}$. Since $\theta$ is additive and continuous, it has to be linear, implying that there exists $\lambda \in \mathbb{R}$ such that $\theta(x) = \lambda x$.

Therefore, we get that $\psi(y) = \mu^y (\cos(\lambda y), \sin( \lambda y))$ for some $\mu \in (0, \infty)$ and $\lambda \in \mathbb{R}$. Hence, $e^{x + iy} = e^x \mu^y (\cos(\lambda y), \sin( \lambda y))$. If one further demands differentiability at $0$, then applying the Cauchy-Riemann equations at $0$ (which are a necessary condition for differentiability), we get that $\mu = 1$ and $\lambda = 1$. Therefore, $e^{x + iy}$ must necessarily be defined as $e^x(\cos y + i \sin y)$.
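The role of differentiability at $0$ can be seen with finite differences (my check, not the answer's exact Cauchy-Riemann computation): a complex derivative requires the difference quotients along the real and imaginary axes to agree, which fails for $\mu \neq 1$.

```python
import math

def f(z, mu, lam):
    x, y = z.real, z.imag
    return math.exp(x) * (mu ** y) * complex(math.cos(lam * y), math.sin(lam * y))

h = 1e-6
def axis_quotients(mu, lam):
    q_real = (f(complex(h, 0), mu, lam) - 1) / h          # along the real axis
    q_imag = (f(complex(0, h), mu, lam) - 1) / (1j * h)   # along the imaginary axis
    return q_real, q_imag

q1, q2 = axis_quotients(1.0, 1.0)   # mu = lambda = 1: the quotients agree
assert abs(q1 - q2) < 1e-4
q1, q2 = axis_quotients(2.0, 1.0)   # mu != 1: they differ by roughly i*log(2)
assert abs(q1 - q2) > 0.1
```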

Going by this method, it also follows that $a^z$ has to be defined as $e^{(\log a) z}$ for $a \in \mathbb{R}^+$ if it has to satisfy the basic properties of an exponential function and differentiability at $0$.

QED

So here's a good place to start

$$e^{i\theta}$$

is interpreted as the complex number formed as follows: draw the circle of radius 1 in the complex plane, and, starting from the point $1 + 0i$, move along the circle through an angle $\theta$ to a new number in the complex plane:

$$\cos(\theta) + \sin(\theta)i$$

Now note that ANY complex number is of the form

$$r e^{i\theta}$$

Where $r$ is the absolute value of the complex number (that is, its distance from the point $0$), and the exponent in the complex exponential simply indicates the angle. In other words, we have polar coordinates here.
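In code (my illustration, not part of the answer), the polar factorization looks like this:

```python
import cmath, math

# Any nonzero complex number factors as r * e^{i theta}
# with r = |z| and theta its angle.
z = complex(-3.0, 4.0)
r, theta = abs(z), cmath.phase(z)   # the polar coordinates of z
assert abs(r - 5.0) < 1e-12
back = r * complex(math.cos(theta), math.sin(theta))
assert abs(back - z) < 1e-12        # r and theta reconstruct z

# Products are transparent in this form: moduli multiply, angles add.
w = complex(1.0, 1.0)
assert abs(abs(z * w) - abs(z) * abs(w)) < 1e-12
```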

Taking exponents then becomes quite transparent: the exponent simply distributes over both factors, $(r e^{i\theta})^n = r^n e^{in\theta}$, and each factor can itself be written as a complex exponential.

If you 'factor' your numbers into products of this form, it becomes intuitive what is physically occurring.

Hope that helps :)

  • The equality $e^{ix}=\cos x+i\sin x$ comes from the power series definition - I am looking for a definition similar to that given for real numbers. Thanks though. –  Jun 14 '14 at 00:44
  • Based on what you have offered, I don't believe it is possible, since you would essentially be asking for a sequence of real numbers that converges to a non-real complex number. Additionally, the way rational exponents are defined simply uses the laws of nested exponents and nested roots for integers, which also cannot let you break away into the complex realm. – Sidharth Ghoshal Jun 14 '14 at 01:00
  • @NotNotLogical: how do you define the symbol $e$ without using series? – Martin Argerami Jun 14 '14 at 01:27
  • You can use the limit definition: $e = \lim_{n\rightarrow\infty} \left(1 + \frac{1}{n}\right)^n$ – Sidharth Ghoshal Jun 14 '14 at 01:30
  • @MartinArgerami $e$ can be defined with series if you like. It's just a number - I am more interested in defining $a^z$ for numbers $a$ (regardless of how they are defined). –  Jun 14 '14 at 17:24