### Applications of Binary Matrices

Note: if you cannot remember what a binary matrix is, revise notes for 1.1 Matrices: Definition and Unique Cases.

- **Binary matrices** are used to represent **correlation** between datasets. If two categories are **correlated**, a **1** is placed in the element corresponding to the two; if they are **not**, a **0** is placed instead.
- They are also used to represent **dominance**. If one category is "greater than" another, a 1 is placed in the corresponding element; otherwise, a 0 is placed instead.

Note: we will further explore dominance matrices in topic 5 notes.

Example

The matrix below represents the road network connecting 3 towns (towns A, B and C). A 1 means there is a road allowing direct travel from the town listed along the row to the town listed along the column.

\begin{array}{cccc} & A & B & C \\ A & 0 & 1 & 1 \\ B & 0 & 0 & 0 \\ C & 1 & 0 & 0 \end{array}

From the matrix, we can see that you cannot travel directly from town B to town C, meaning there is no road connecting them, and that you can travel directly from town A to town C and vice versa. It also shows that you can travel from town A to town B, but not the other way around. In the context of what this matrix represents, we can interpret this as a one-way street between the two towns.
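The road network above can be sketched in code. This is a minimal illustration (assuming Python with NumPy installed, with a hypothetical helper `direct_road`): entry [row, column] of the matrix is 1 exactly when there is a direct road from the row town to the column town.

```python
import numpy as np

towns = ["A", "B", "C"]
M = np.array([
    [0, 1, 1],   # from A: direct roads to B and to C
    [0, 0, 0],   # from B: no direct roads out
    [1, 0, 0],   # from C: a road back to A
])

def direct_road(src, dst):
    """True if you can travel directly from town src to town dst."""
    return M[towns.index(src), towns.index(dst)] == 1

print(direct_road("A", "B"))  # True: the one-way street A -> B
print(direct_road("B", "A"))  # False: no road back from B to A
```

Reading off the matrix this way reproduces each observation in the paragraph above: A to C works both ways, B has no outgoing roads at all.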

### Permutation Matrices

- A **permutation matrix** is a **square binary matrix** which contains a **single** "1" element in each row and column, with all other elements being 0. **Identity matrices** are permutation matrices.
- Permutation matrices can be used to **rearrange the elements** of another matrix through matrix multiplication.
- If the permutation matrix is placed to the **left** of the matrix being rearranged, it will rearrange the **rows**.
- If the permutation matrix is placed to the **right** of the matrix being rearranged, it will rearrange the **columns**.
- Multiplying by the identity matrix will not rearrange a matrix.
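The left-versus-right rule above can be checked directly. A minimal sketch (assuming Python with NumPy; the matrices here are made-up examples): multiplying on the left rearranges rows, multiplying on the right rearranges columns.

```python
import numpy as np

# Permutation matrix that swaps the first and last of three positions.
P = np.array([
    [0, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
])

A = np.array([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
])

print(P @ A)   # P on the left: rows reversed
print(A @ P)   # P on the right: columns reversed
```

Note that the same P reverses rows in one position and columns in the other; only its placement changes the effect.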

Example

We wish to reverse the letters in the word “TAR”. We can do this by setting it up as a column matrix and multiplying it by a permutation matrix, P.

P\left[\begin{array}{l} T \\ A \\ R \end{array}\right]=\left[\begin{array}{l} R \\ A \\ T \end{array}\right]

In order to figure out what the permutation matrix will be, we must remember how matrix multiplication works. The first row of the permutation matrix will multiply on an element-by-element basis with the column matrix to give the first element of the resultant matrix, R. Thus, the “1” in this row must be in the last column.

Applying this logic to the subsequent rows, we get the permutation matrix:

P=\left[\begin{array}{lll} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array}\right]

Thus giving us:

\left[\begin{array}{lll} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array}\right]\left[\begin{array}{l} T \\ A \\ R \end{array}\right]=\left[\begin{array}{l} R \\ A \\ T \end{array}\right]
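This row rearrangement can be verified in code. A minimal sketch (assuming Python with NumPy): since matrices of letters cannot be multiplied numerically, we use the fact that row i of P has its single "1" in the column of the letter that moves to position i, and read that position off with `np.argmax`.

```python
import numpy as np

P = np.array([
    [0, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
])
word = ["T", "A", "R"]

# Row i of P picks out the original letter whose column holds the 1,
# which is exactly what the matrix product does element by element.
reversed_word = [word[np.argmax(row)] for row in P]
print(reversed_word)  # ['R', 'A', 'T']
```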

Equivalently, we could have instead set this scenario up as follows:

\left[\begin{array}{lll} T & A & R \end{array}\right] P=\left[\begin{array}{lll} R & A & T \end{array}\right]

Here we are rearranging the columns instead of the rows. Again we can find P using our understanding of matrix multiplication. The first element, “R”, is found by multiplying the first (and only) row of the TAR matrix with the first column of the permutation matrix, P, on an element-by-element basis. “R” is initially positioned in the last column, so the “1” element in the first column of P must be in the last row.

Applying this logic to the subsequent columns, we find the permutation matrix is in fact identical to the previous setup:

P=\left[\begin{array}{lll} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array}\right]

Thus giving us:

\left[\begin{array}{lll} T & A & R \end{array}\right]\left[\begin{array}{lll} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array}\right]=\left[\begin{array}{lll} R & A & T \end{array}\right]
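The column version can be verified the same way. A minimal sketch (assuming Python with NumPy): for a row matrix on the left, column j of P has its single "1" in the row of the letter that lands in position j of the result.

```python
import numpy as np

P = np.array([
    [0, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
])
word = ["T", "A", "R"]

# Column j of P picks out the original letter whose row holds the 1.
rearranged = [word[np.argmax(P[:, j])] for j in range(3)]
print(rearranged)  # ['R', 'A', 'T']
```

The same P works in both setups, consistent with the result above that the two arrangements give identical permutation matrices.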
