
Let $L_1, L_2$ be regular languages. We form a new language $L_{12}$ as follows: $$L_{12}=\left\{ w_1\cdot w_2 \mid w_1\in L_1 \land w_2\in L_2 \land |w_1|=|w_2| \right\}$$

In this exercise I am not given any alphabet, and I'm required to build a PDA for $L_{12}$. But by definition $M=\left(Q,\Sigma,\Gamma,\delta,q_0,\dashv,F\right)$, and I don't have any alphabet to work with. Intuitively, whether the two languages use the same alphabet or different alphabets could affect the solution.

John L.
user6394019
  • $L_1$ and $L_2$ have their own alphabets $\Sigma_1$ and $\Sigma_2$. So it suffices to take $\Sigma = \Sigma_1 \cup \Sigma_2$ (every word of $L_{12}$ is a word over this alphabet). Is this what you are asking? – Dmitry Jul 14 '20 at 20:27
  • @Dmitry If I don't have any info on the alphabet, then how can I handle a specific situation in the PDA? Perhaps the model should be somewhat more general, because I need to make distinctions in the model that show whether the input belongs to the language or not, but how can I do that without a specific alphabet? – user6394019 Jul 14 '20 at 20:39
  • The same language was considered earlier: For any two regular languages A, B, show that {xy | x ∈ A, y ∈ B, |x| = |y|} is context-free. There the alphabet was left unknown, or abstract; it was simply named $\Sigma$ in the answer. In problems like this the actual alphabet does not really matter. In constructions we usually write things like "for every letter $\sigma\in \Sigma$ we ...". – Hendrik Jan Sep 08 '21 at 18:12

2 Answers


You might reason like this: if $L_1, L_2$ are regular, then so is $L_2^R$ ($L^R$ denotes the language of reversed words). You can build regular grammars $G_1 = (N_1, \Sigma_1, P_1, S_1)$ and $G_2 = (N_2, \Sigma_2, P_2, S_2)$ that generate $L_1$ and $L_2^R$, respectively. The crucial point is that in a regular grammar the single non-terminal always sits at the end (or the beginning!) of the sentential form. Build a CFG with non-terminals $N_1 \times N_2$ and productions that grow a word of $L_1$ on the left (driven by the first component of the non-terminal) and a word of $L_2^R$ on the right (driven by the second component, building from the end). From the resulting grammar you can build a PDA.
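For concreteness, the productions of this product grammar could look as follows (a sketch, assuming both $G_1$ and $G_2$ are in the normal form where every production has the shape $A \to aA'$ or $A \to \varepsilon$):

$$
\begin{aligned}
[A,B] &\to a\,[A',B']\,b && \text{whenever } A \to aA' \in P_1 \text{ and } B \to bB' \in P_2,\\
{[A,B]} &\to \varepsilon && \text{whenever } A \to \varepsilon \in P_1 \text{ and } B \to \varepsilon \in P_2,
\end{aligned}
$$

with start symbol $[S_1,S_2]$. A derivation then has the shape $[S_1,S_2]\Rightarrow^* a_1\cdots a_n\,b_n\cdots b_1$, where $a_1\cdots a_n\in L_1$ and $b_1\cdots b_n\in L_2^R$, i.e. $b_n\cdots b_1\in L_2$; since each step adds one letter on each side, the two halves automatically have equal length.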

The basic idea is similar to the construction for $\{ w w^R \colon w \in \Sigma^*\}$ with $S \to x S x$ for all $x \in \Sigma$ and $S \to \varepsilon$ (or $S \to xx$ for purists).
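If you prefer to think at the machine level rather than via the grammar, here is a small Python sketch of the equivalent direct PDA idea (not derived from the grammar above; the DFAs and names are made up for illustration): simulate a DFA for $L_1$ while pushing one marker per letter, nondeterministically guess the split point, then simulate a DFA for $L_2$ while popping one marker per letter, and accept when the input is consumed, the second DFA accepts, and the stack is empty.

```python
from typing import Dict, Set, Tuple

# A DFA is (transition function, start state, accepting states).
DFA = Tuple[Dict[Tuple[str, str], str], str, Set[str]]

def accepts_L12(word: str, dfa1: DFA, dfa2: DFA) -> bool:
    """Decide membership in L12 = { w1 w2 : w1 in L(dfa1), w2 in L(dfa2), |w1| = |w2| }
    by simulating the direct PDA construction."""
    d1, q1, f1 = dfa1
    d2, q2, f2 = dfa2
    # The PDA guesses the split point nondeterministically; here we try them all.
    for mid in range(len(word) + 1):
        stack = []                      # used purely as a counter
        state, ok = q1, True
        for c in word[:mid]:            # phase 1: run dfa1, push one marker per letter
            if (state, c) not in d1:
                ok = False
                break
            state = d1[(state, c)]
            stack.append('#')
        if not ok or state not in f1:
            continue
        state = q2
        for c in word[mid:]:            # phase 2: run dfa2, pop one marker per letter
            if not stack or (state, c) not in d2:
                ok = False
                break
            state = d2[(state, c)]
            stack.pop()
        if ok and state in f2 and not stack:
            return True                 # input consumed, dfa2 accepts, stack empty
    return False

# Toy example (chosen here, not from the question): L1 = a*, L2 = b*,
# so L12 = { a^n b^n : n >= 0 }, the textbook context-free, non-regular language.
dfa_a_star: DFA = ({('p', 'a'): 'p'}, 'p', {'p'})
dfa_b_star: DFA = ({('q', 'b'): 'q'}, 'q', {'q'})
print(accepts_L12("aaabbb", dfa_a_star, dfa_b_star))   # True
print(accepts_L12("aabbb", dfa_a_star, dfa_b_star))    # False
```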

vonbrand

Let $\Sigma_1$ be the alphabet for $L_1$ and $\Sigma_2$ be the alphabet for $L_2$. By definition, $\Sigma_1,\Sigma_2$ are both finite.

Then write $\Sigma_1=\{\sigma_1,\sigma_2,\dots,\sigma_n\}$ and $\Sigma_2=\{\mu_1,\mu_2,\dots,\mu_k\}$.

Now, define $\Sigma=\Sigma_1\cup\Sigma_2$ and let it be the alphabet for $L_{12}$.
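For example (the concrete alphabets here are only an illustration, not taken from the question):

$$\Sigma_1=\{a,b\},\qquad \Sigma_2=\{b,c\}\quad\Longrightarrow\quad \Sigma=\Sigma_1\cup\Sigma_2=\{a,b,c\},$$

and every word of $L_{12}$ is a word over this $\Sigma$.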

Notice: In general, when talking about a language we must specify the alphabet it is defined over (that is part of the definition of a language). Usually we assume the binary alphabet ($\Sigma_{\text{bin}}=\{0,1\}$) or any other alphabet with two or more letters (more than two letters is just for convenience), since every letter can be encoded as a string of 0's and 1's. In rare cases we might assume the unary alphabet ($\Sigma_{\text{unary}}=\{1\}$), but that is more useful in complexity theory (which you will probably see later) than for PDAs and DFAs.
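For instance (an illustrative encoding, not fixed by the question), a three-letter alphabet $\{a,b,c\}$ can be encoded over $\Sigma_{\text{bin}}$ with two bits per letter,

$$a \mapsto 00,\qquad b \mapsto 01,\qquad c \mapsto 10,$$

so the word $abc$ becomes $000110$.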

nir shahar