A matrix is usually understood as a rectangular array of objects (most often numbers) arranged in rows and columns. If the objects belong to a set $\mathcal S$, then one can write $M(m,n;\mathcal S)$ for the set of $(m \times n)$-matrices $A$ with entries in $\mathcal S$. Such a matrix $A$ has $m$ rows and $n$ columns. The entry of $A$ occurring in row $i$ and column $j$ is often denoted by $a_{ij}$, and one writes $A = (a_{ij})$.
In linear algebra, matrices are used to represent linear maps $f : V \to W$ between finite-dimensional vector spaces $V,W$ with respect to bases $\mathfrak V =\{v_1,\ldots, v_n\}$ of $V$ and $\mathfrak W =\{w_1,\ldots, w_m\}$ of $W$. We have $f(v_j) = \sum_{i=1}^m a_{ij}w_i$ with unique scalars $a_{ij}$, and therefore $f(\sum_{j=1}^n\lambda_jv_j) = \sum_{j=1}^n \lambda_jf(v_j) = \sum_{i,j} \lambda_j a_{ij}w_i$. The $(m\times n)$-matrix $(a_{ij})$ is the matrix representation of $f$ with respect to $\mathfrak V, \mathfrak W$.
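A standard example: let $D : V \to W$ be the differentiation map from the polynomials of degree $\le 2$ to those of degree $\le 1$, with bases $\mathfrak V = \{1, x, x^2\}$ and $\mathfrak W = \{1, x\}$. From $D(1) = 0$, $D(x) = 1$ and $D(x^2) = 2x$ we read off the columns and obtain the $(2 \times 3)$-matrix representation
$$(a_{ij}) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}.$$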
For such matrix representations we work only with bases indexed by sets of the special form $I_k = \{1,\ldots,k\}$. Here is my question:
Wouldn't it be more flexible to allow arbitrary finite index sets? That is, to consider matrix sets of the form $M(I, J; \mathcal S) = \mathcal S^{I \times J}$ with arbitrary finite sets $I, J$. A matrix $A \in M(I, J; \mathcal S)$ is then an indexed collection $A = (a_{(i,j)}) \in \mathcal S^{I \times J}$. We can still regard it as a rectangular array of objects of $\mathcal S$ arranged in rows and columns, although its rows and columns no longer carry integer labels.
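As a quick sanity check that nothing in this definition depends on integer indices, here is a minimal Python sketch (the helpers `matrix` and `matmul` are ad hoc, not from any library): a matrix in $M(I, J; \mathcal S)$ is simply a dictionary keyed by $I \times J$, and matrix multiplication only requires the inner index sets to agree.

```python
def matrix(I, J, entry):
    """Build A in M(I, J; S) from a function entry(i, j)."""
    return {(i, j): entry(i, j) for i in I for j in J}

def matmul(A, B, I, J, K):
    """Product M(I, J) x M(J, K) -> M(I, K): (AB)_(i,k) = sum over j in J of a_(i,j) b_(j,k)."""
    return {(i, k): sum(A[(i, j)] * B[(j, k)] for j in J)
            for i in I for k in K}

# The index sets need not be {1, ..., n}: here the rows are labeled
# by strings and the inner index set is a set of pairs.
I = {"u", "v"}
J = {(1, 1), (1, 2)}
K = {0}

A = matrix(I, J, lambda i, j: 1.0)   # all-ones matrix in M(I, J; R)
B = matrix(J, K, lambda j, k: 2.0)
C = matmul(A, B, I, J, K)
assert C[("u", 0)] == 4.0            # sum over the 2 elements of J
```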
Of course this concept is not a big innovation. But does it occur somewhere in the literature?
This question was motivated by the answers to "Chain rule for differentiation yields conflicting dimensions". That question deals with a matrix-valued function $f : \mathbb R \to M(n,n;\mathbb R)$. Clearly $M(n,n;\mathbb R)$ is isomorphic to $\mathbb R^{n^2}$, but it does not have a canonical basis indexed by $\{1,\ldots,n^2\}$. Instead it has a natural basis consisting of the matrices $E_{ij} \in M(n,n;\mathbb R)$, where $E_{ij}$ has entry $1$ in row $i$ and column $j$ and all other entries $0$. The index set of this natural basis is $I_{n,n} = \{1,\ldots,n\} \times \{1,\ldots,n\}$.

When considering the derivative $Df \mid_x$ at $x \in \mathbb R$, which is naturally a linear map $\mathbb R \to M(n,n;\mathbb R)$, the mentioned answers resort to saying that we have to "flatten matrices" or to "identify $M(n,n;\mathbb R)$ with $\mathbb R^{n^2}$" in order to get a matrix in $M(n^2,1;\mathbb R)$. One can do this, but I think it is unnecessary and may even cause confusion (the OP of the above question seems to mix up $M(n^2,1;\mathbb R)$ with $M(n,n;\mathbb R)$ when flattening $Df\mid_x$). In my opinion it is much more transparent to say that the Jacobian of $f$ at $x$ is a matrix in $M(I_{n,n},I_1;\mathbb R)$.
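For instance, take $n = 2$ and
$$f(x) = \begin{pmatrix} x & x^2 \\ e^x & 1 \end{pmatrix}, \qquad \text{with entrywise derivative} \qquad \begin{pmatrix} 1 & 2x \\ e^x & 0 \end{pmatrix}.$$
In the proposed notation the Jacobian of $f$ at $x$ is the "column" $\big(f_{ij}'(x)\big)_{(i,j)\in I_{2,2}} \in M(I_{2,2}, I_1; \mathbb R)$ with entries $a_{((1,1),1)} = 1$, $a_{((1,2),1)} = 2x$, $a_{((2,1),1)} = e^x$, $a_{((2,2),1)} = 0$. Nothing is flattened, and no ordering of the pairs $(i,j)$ has to be chosen.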