Matrix Multiplication vs Dot Product


In this article at OpenGenus, we have explored the importance and link between Matrix Multiplication and Dot Product both in general and in the field of Deep Learning (DL).

Table of contents:

  1. What is Matrix Multiplication and Dot Product?
  2. Matrix Multiplication == Dot Product
  3. Matrix Multiplication and Dot Product in Deep Learning (DL)

What is Matrix Multiplication and Dot Product?

The dot product of 2 matrices of the same size is obtained by multiplying each element of the first matrix by the element at the same index in the second matrix and adding up all the products. The answer is a single value.

For dot product:

  • 2 Inputs of size NxN
  • Output is a single element
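
For example, for A = [[1, 2], [3, 4]] and B = [[5, 6], [7, 8]], the dot product is 1*5 + 2*6 + 3*7 + 4*8 = 70.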

Matrix Multiplication takes the dot product of each row of the first matrix with each column of the second matrix. The output is a matrix.

For matrix multiplication:

  • 1 input of size NxM
  • 1 input of size MxK
  • Output is of size NxK
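
For example, with N = 2, M = 3, K = 2: [[1, 2, 3], [4, 5, 6]] x [[7, 8], [9, 10], [11, 12]] = [[58, 64], [139, 154]], where 58 = 1*7 + 2*9 + 3*11 is the dot product of the first row and the first column.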

Matrix Multiplication has dot product as a sub-operation.

This is how dot product is used for matrix multiplication:

  • For each i-th row of matrix A
    • For each j-th column of matrix B
      • Compute the dot product of that row and column
      • Save the answer at index [i][j] of the result matrix

Go through the next two implementations to understand how Dot Product and Matrix Multiplication are performed.

Following is a naive implementation of dot product of 2 matrices in C++:

#include <cstdlib>
#include <iostream>
#include <stdexcept>
#include <vector>

// Fill a matrix of size row x col with random values between 0 and 9
void init_matrix(std::vector<std::vector<int>>& mat, int row, int col) {
    for (int i = 0; i < row; i++) {
        for (int j = 0; j < col; j++) {
            mat[i][j] = rand() % 10;
        }
    }
}

// Calculate the dot product of two matrices of the same size:
// the sum of the products of elements at the same index
int dot_product(const std::vector<std::vector<int>>& A,
                const std::vector<std::vector<int>>& B) {
    int rows_a = A.size();
    int cols_a = A[0].size();
    int rows_b = B.size();
    int cols_b = B[0].size();

    // Check that both matrices have the same size
    if (rows_a != rows_b || cols_a != cols_b) {
        throw std::runtime_error("Matrices must have the same size!");
    }

    int result = 0;
    for (int i = 0; i < rows_a; i++) {
        for (int j = 0; j < cols_a; j++) {
            result += A[i][j] * B[i][j];
        }
    }
    return result;
}

int main() {
    // Initialize matrices A and B, both of size 3 x 4
    int row = 3, col = 4;
    std::vector<std::vector<int>> A(row, std::vector<int>(col));
    init_matrix(A, row, col);

    std::vector<std::vector<int>> B(row, std::vector<int>(col));
    init_matrix(B, row, col);

    // The dot product of A and B is a single value
    std::cout << "Dot product of A and B: " << dot_product(A, B) << std::endl;

    return 0;
}

Following is a naive implementation of matrix multiplication of 2 matrices in C++:

void matrix_multiplication(int** A, int** B, int** C, int m, int n, int p) {
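    // A is of size m x n, B is of size n x p, and the result C is of size m x p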
    // initialize result matrix to 0
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < p; j++) {
            C[i][j] = 0;
        }
    }
    
    // compute matrix multiplication
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < p; j++) {
            for (int k = 0; k < n; k++) {
                C[i][j] += A[i][k] * B[k][j];
            }
        }
    }
}
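
To make the link explicit, here is a minimal sketch (not part of the original article) of the same computation written with std::vector, where every entry of the result is produced by a small helper that takes the dot product of one row of A with one column of B; the names row_col_dot and matrix_multiplication_via_dot are purely illustrative:

#include <vector>

// Dot product of the i-th row of A with the j-th column of B
int row_col_dot(const std::vector<std::vector<int>>& A,
                const std::vector<std::vector<int>>& B,
                std::size_t i, std::size_t j) {
    int sum = 0;
    for (std::size_t k = 0; k < B.size(); k++) {
        sum += A[i][k] * B[k][j];
    }
    return sum;
}

// C[i][j] is the dot product of row i of A and column j of B
std::vector<std::vector<int>> matrix_multiplication_via_dot(
        const std::vector<std::vector<int>>& A,
        const std::vector<std::vector<int>>& B) {
    std::vector<std::vector<int>> C(A.size(), std::vector<int>(B[0].size(), 0));
    for (std::size_t i = 0; i < A.size(); i++) {
        for (std::size_t j = 0; j < B[0].size(); j++) {
            C[i][j] = row_col_dot(A, B, i, j);
        }
    }
    return C;
}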

Matrix Multiplication == Dot Product

Dot product is a matrix multiplication between two matrices of size (1, n) and (n, 1). The output is a single element.
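
For example, treating a = [1, 2, 3] as a matrix of size (1, 3) and b = [4, 5, 6] as a matrix of size (3, 1), their matrix product is the 1x1 matrix [1*4 + 2*5 + 3*6] = [32], which is exactly the dot product of a and b.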

Matrix Multiplication and Dot Product in Deep Learning (DL)

In Machine Learning or Deep Learning, Convolution is a major operation.

Convolution involves 2 inputs:

  • Image of size NxN
  • Filter of size KxK

The output is formed by taking the dot product of the filter with every sub-matrix of size KxK of the image. Hence, Convolution is an application of the dot product.
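
As a rough illustration (a minimal sketch in the style of the naive matrix multiplication above, not the article's own code), the following assumes no padding and a stride of 1, so the output has size (N-K+1) x (N-K+1):

// Naive 2D convolution: every output element is the dot product of the
// K x K filter with one K x K sub-matrix of the N x N image (stride 1, no padding)
void convolution(int** image, int** filter, int** output, int N, int K) {
    int out_size = N - K + 1;
    for (int i = 0; i < out_size; i++) {
        for (int j = 0; j < out_size; j++) {
            int sum = 0;
            // Dot product of the filter with the sub-matrix whose top-left corner is (i, j)
            for (int a = 0; a < K; a++) {
                for (int b = 0; b < K; b++) {
                    sum += image[i + a][j + b] * filter[a][b];
                }
            }
            output[i][j] = sum;
        }
    }
}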

In implementation, Convolution is carried out through GEMM (General Matrix Multiplication) calls, which are matrix multiplications.

When the filter size is 1x1, a single matrix multiplication is the same as Convolution.

For larger filter sizes, the input is first transformed into a special layout (commonly known as im2col), in which every KxK patch of the image is flattened into one row of a new matrix, and matrix multiplication is then applied to that matrix to compute Convolution.
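
A minimal sketch of that transformation, assuming stride 1 and no padding (the function name im2col_patches and this exact layout are illustrative, not taken from any specific library):

#include <vector>

// Lay out every K x K patch of an N x N image as one row of a new matrix.
// Multiplying the result ((N-K+1)^2 x K*K) by the flattened filter (K*K x 1)
// then produces every Convolution output in a single matrix multiplication (GEMM).
std::vector<std::vector<int>> im2col_patches(const std::vector<std::vector<int>>& image,
                                             int N, int K) {
    int out_size = N - K + 1;
    std::vector<std::vector<int>> patches;
    for (int i = 0; i < out_size; i++) {
        for (int j = 0; j < out_size; j++) {
            std::vector<int> row;
            for (int a = 0; a < K; a++) {
                for (int b = 0; b < K; b++) {
                    row.push_back(image[i + a][j + b]);
                }
            }
            patches.push_back(row);
        }
    }
    return patches;
}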

With this article at OpenGenus, you must have the complete idea of the importance and relationship between Matrix Multiplication and Dot product.
