Questions on Convolution operation in ML


In this article, we have presented several Interview Questions (MCQs) with detailed answers on Convolution in Machine Learning models. Convolution is the most important and most compute-intensive operation in ML models.

Start practicing the following questions:

What is the time complexity of the brute force approach for 2D Convolution?

O(N^2 K^2)
O(N^2)
O(N * K)
O(N^3)
2D Convolution computes the dot product of the kernel with every sub-matrix of size K^2 in the input. There are O(N^2) such sub-matrices and each dot product costs O(K^2), so the brute force approach for computing 2D Convolution is O(N^2 K^2).
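A minimal NumPy sketch of this brute force approach (a single-channel N x N input, a K x K kernel, 'valid' padding and stride 1 are assumed only for illustration):

```python
import numpy as np

def conv2d_brute_force(image, kernel):
    """Brute force 2D convolution (cross-correlation, 'valid' padding, stride 1)."""
    N, K = image.shape[0], kernel.shape[0]
    out_size = N - K + 1
    output = np.zeros((out_size, out_size))
    for i in range(out_size):            # O(N^2) output positions ...
        for j in range(out_size):
            sub_matrix = image[i:i + K, j:j + K]
            output[i, j] = np.sum(sub_matrix * kernel)   # ... each an O(K^2) dot product
    return output

image = np.arange(16, dtype=float).reshape(4, 4)   # N = 4
kernel = np.ones((3, 3)) / 9.0                     # K = 3 (box filter)
print(conv2d_brute_force(image, kernel))           # 2 x 2 output
```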

What is the fundamental operation in Convolution?

Dot product
Matrix multiplication
Finding maximum element
Transposition
Dot product is the fundamental operation in Convolution. The dot product is computed between a sub-matrix of the input data and a kernel matrix of the same size.

What is the time complexity of Winograd Algorithm for 2D Convolution?

O(N^2 K^2)
O(N^2)
O(N * K)
O(N^3)
The Winograd algorithm is an efficient algorithm for Convolution, but its time complexity is the same as the brute force approach. It reduces the total number of multiplications, yet the asymptotic order of operations stays the same.
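As a concrete illustration of how the operation count drops, here is a sketch of the 1D minimal filtering algorithm F(2,3): two outputs of a 3-tap filter are computed with 4 multiplications instead of 6, but each output is still linear in the kernel size:

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap filter using only 4 multiplications."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d = np.array([1.0, 2.0, 3.0, 4.0])   # 4 input elements
g = np.array([0.5, 1.0, -0.5])       # 3-tap filter
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],   # direct form: 6 multiplications
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(winograd_f23(d, g))   # [1. 2.]
print(direct)               # [1. 2.] -- same result, fewer multiplications
```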

On average, how much time is consumed by Convolution in an ML model?

70%
50%
90%
10%
On average, Convolution takes up to 70% of the inference time of Machine Learning models. It is the most compute-intensive operation in ML.

1x1 Convolution is widely used in GoogLeNet. Which one of the following is not an advantage of 1x1 Convolution?

Improved Performance
Apply nonlinearity
Feature Pooling
Reduce or increase dimensionality
1x1 Convolution has 3 major advantages: the ability to reduce or increase dimensionality (the number of channels), act as feature pooling across channels, and apply nonlinearity. Improved performance is not one of its advantages.
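A 1x1 convolution is simply a per-pixel linear map across channels. A minimal NumPy sketch (shapes chosen only for illustration) showing dimensionality reduction from 64 to 16 channels followed by a nonlinearity:

```python
import numpy as np

H, W, C_in, C_out = 28, 28, 64, 16
features = np.random.randn(H, W, C_in)      # input feature map
kernel = np.random.randn(C_in, C_out)       # a 1x1 kernel is just channel-mixing weights

# Every spatial position gets the same channel-wise linear map (feature pooling /
# dimensionality change), optionally followed by a nonlinearity such as ReLU.
reduced = np.maximum(features @ kernel, 0)
print(reduced.shape)                        # (28, 28, 16)
```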

1x1 Convolution is also known as?

Pointwise convolution
Depthwise convolution
Spatial convolution
Grouped convolution
1x1 Convolution is also known as Pointwise convolution.

Which ML model first introduced the concept of Grouped convolution?

AlexNet
ResNeXt
ResNet50
MobileNetV1
Grouped convolution was first used in AlexNet, released in 2012. It was later used in the ResNeXt model as well.
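For illustration, recent versions of tf.keras expose grouped convolution through the groups argument of Conv2D (the shapes below are hypothetical):

```python
import tensorflow as tf

# 8 input channels split into 2 groups of 4; each group produces 8 of the 16 filters.
inputs = tf.random.normal([1, 32, 32, 8])
grouped_conv = tf.keras.layers.Conv2D(filters=16, kernel_size=3,
                                      padding="same", groups=2)
print(grouped_conv(inputs).shape)    # (1, 32, 32, 16)
# 2 * (3*3*4*8) weights + 16 biases = 592 parameters, vs 1168 for an ungrouped conv.
print(grouped_conv.count_params())
```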

Depthwise convolution is performed over which dimension of input?

Channel
Batch size
Height
Width
Depthwise convolution is performed over each channel of the input independently.
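A minimal NumPy sketch (hypothetical shapes): each input channel is convolved with its own K x K filter, producing one output channel per input channel:

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """x: (H, W, C) input, kernels: (K, K, C) -- one KxK filter per channel."""
    H, W, C = x.shape
    K = kernels.shape[0]
    out = np.zeros((H - K + 1, W - K + 1, C))
    for c in range(C):                        # each channel is handled independently
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j, c] = np.sum(x[i:i+K, j:j+K, c] * kernels[:, :, c])
    return out

x = np.random.randn(8, 8, 3)
kernels = np.random.randn(3, 3, 3)
print(depthwise_conv2d(x, kernels).shape)     # (6, 6, 3)
```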

Which special kind of Convolution is popular in MobileNetV1 model?

Depthwise Separable convolution
Pointwise convolution
Spatial convolution
Grouped convolution
Depthwise Separable convolution is used in MobileNetV1. It chains two convolutions: a Depthwise Convolution followed by a Pointwise (1x1) Convolution.
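The appeal of this pairing is the drop in parameters and multiplications. A rough comparison for a single layer (hypothetical sizes, bias ignored):

```python
K, C_in, C_out = 3, 32, 64          # kernel size, input channels, output channels

standard = K * K * C_in * C_out                       # one dense KxK convolution
depthwise_separable = K * K * C_in + C_in * C_out     # depthwise + 1x1 pointwise

print(standard, depthwise_separable)              # 18432 vs 2336
print(round(standard / depthwise_separable, 1))   # ~7.9x fewer weights
```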

Which type of Convolution is used in Inception models like InceptionV1?

Spatial convolution
Pointwise convolution
Depthwise convolution
Grouped convolution
Spatial convolution is used in Inception models. Spatial convolution is applied over the spatial dimensions, i.e. height and width.

How many Convolution layers does the ResNet50 model have?

48
10
37
72
The ResNet50 model has 48 Convolution layers along with 1 MaxPool and 1 AvgPool layer. It is one of the most widely used Machine Learning models.

Which one is a non-conventional kernel size in a Convolution layer?

8
1
3
5
The most common kernel sizes in Convolution are 1 and 3. In a few models, kernel sizes 5 and 7 are also used. Other kernel sizes like 2 or 8 have not been used in mainstream models.

In Quantized Convolution, which extra parameter is needed compared to FP32 Convolution?

Min and Max values
Quantized value of input
Quantized copy of kernel
INT32
The min and max values of the original input and kernel are the extra parameters in Quantized Convolution. They are used for scaling during quantization and for dequantization of the result.
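A minimal sketch of how the min/max values drive affine quantization to uint8 and dequantization back to FP32 (the scheme is simplified for illustration):

```python
import numpy as np

def quantize(x):
    """Affine quantization of an FP32 tensor to uint8 using its min/max values."""
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / 255.0
    q = np.round((x - x_min) / scale).astype(np.uint8)
    return q, x_min, x_max                  # min/max travel with the quantized tensor

def dequantize(q, x_min, x_max):
    scale = (x_max - x_min) / 255.0
    return q.astype(np.float32) * scale + x_min

x = np.random.randn(4, 4).astype(np.float32)
q, lo, hi = quantize(x)
print(np.abs(dequantize(q, lo, hi) - x).max())   # small quantization error
```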

Convolution is fused with other ops for optimization. With which op can it not be fused?

Pooling
Padding
MatMul
Bias
The Convolution op cannot be fused with a Pooling op. Convolution is frequently fused with other ops like Padding, MatMul and Bias addition for optimization.

Which one is not a common value for Stride in Convolution?

4
1
2
3
Common stride values are 1, 2 and 3. A stride of 4 is not used in mainstream ML models.

What does the "dilation" in Convolution control?

Downsampling rate
Dropout rate
Output size
Precision
Dilation in Convolution controls the spacing between kernel elements; it can be viewed as upsampling the filter or, equivalently, as a downsampling rate applied to the input (atrous convolution).
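Equivalently, a dilation rate d spreads the K kernel taps d elements apart, enlarging the receptive field without adding weights. A small sketch of the output-size arithmetic ('valid' padding and stride 1 assumed):

```python
def dilated_conv_output_size(n, k, d):
    """Output length for input size n, kernel size k, dilation d ('valid', stride 1)."""
    k_eff = k + (k - 1) * (d - 1)   # effective kernel size
    return n - k_eff + 1

print(dilated_conv_output_size(32, 3, 1))   # 30 (ordinary convolution)
print(dilated_conv_output_size(32, 3, 2))   # 28 (kernel effectively spans 5 inputs)
print(dilated_conv_output_size(32, 3, 4))   # 24 (kernel effectively spans 9 inputs)
```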

What is the use of Convolution in ML models?

Extract features
Highlight specific features
Remove outliers
Detect features
The Convolution operation in ML models is used to extract features. It is the kernel that determines which features are extracted.
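For instance, a hand-crafted Sobel kernel extracts vertical edges; a learned kernel in a trained model plays the same role for whatever feature it has learned. A short SciPy sketch (toy image, cross-correlation as used in ML convolutions):

```python
import numpy as np
from scipy.signal import correlate2d

# Toy image with a vertical step edge: left half dark, right half bright.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # responds to horizontal intensity change

edges = correlate2d(image, sobel_x, mode="valid")
print(edges)   # large responses only in the columns around the edge
```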

What is the default data format for Convolution in TensorFlow?

NHWC
NCHW
NCHWc8
NDHWC
The default data format for Convolution in TensorFlow is NHWC. In fact, for CPU, TensorFlow only supports NHWC for Convolution.
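A quick check of the layout (batch, height, width, channels); the data_format argument of tf.nn.conv2d defaults to "NHWC":

```python
import tensorflow as tf

x = tf.random.normal([1, 28, 28, 3])        # NHWC: batch, height, width, channels
filters = tf.random.normal([3, 3, 3, 8])    # HWIO: kH, kW, in_channels, out_channels
y = tf.nn.conv2d(x, filters, strides=1, padding="SAME")   # data_format="NHWC" by default
print(y.shape)                              # (1, 28, 28, 8)
```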

What is the default data format for Convolution in OneDNN?

NCHW
NHWC
NCHWc8
NDHWC
The default data format for Convolution in OneDNN (as used by TensorFlow) is NCHW. In fact, during inference, the NCHW data format is internally converted to the blocked NCHWc8 data format.

The transpose of Convolution is known as?

Deconvolution
Fused Convolution
Reversed Convolution
Convolution2
The transpose of Convolution is known as "Deconvolution", a name popularized by a 2010 paper, although it is not a deconvolution in the standard signal-processing sense; "transposed convolution" is the more precise term.
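A brief tf.keras sketch (hypothetical shapes): a transposed convolution with stride 2 upsamples the spatial dimensions, which is its typical use in decoders and segmentation models:

```python
import tensorflow as tf

x = tf.random.normal([1, 16, 16, 32])
deconv = tf.keras.layers.Conv2DTranspose(filters=8, kernel_size=3,
                                         strides=2, padding="same")
print(deconv(x).shape)   # (1, 32, 32, 8) -- spatial size doubled
```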

With this article at OpenGenus, you must have had good practice answering questions on Convolution.

OpenGenus Tech Review Team

The official account of OpenGenus's Technical Review Team. This team reviews all technical articles and incorporates peer feedback. The team consists of experts in the leading domains of Computing.

Improved & Reviewed by:


Aditya Chatterjee