Practice multiple choice questions on Convolutional Layers, with answers. This is the most important layer in a Convolutional Neural Network (CNN) in terms of both functionality and computation.
If you want to revise the concept, read this article 👉:
Let us start with the questions. Click on the right option and the answer will be explained.
The Convolution Layer is what makes a CNN so powerful. How does this layer operate?
The Conv Layer's parameters consist of a set of learnable filters, each one small spatially but extending through the full depth of the input volume
The Conv Layer has a big database of features from which it can gather information to find the best output
The Conv Layer has parameters that consist of how many iterations you want to do
It analyses the features considering their input volume
During training, intuitively, the network will learn filters that activate when they see some type of visual feature, such as an edge of some orientation or a blotch of some color on the first layer.
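As a rough sketch of how one such filter operates, here is a minimal NumPy convolution; the image and the edge-detecting filter are made up purely for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small filter over the input and compute a dot
    product at every spatial position (no stride, no padding)."""
    H, W = image.shape
    k = kernel.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

# Toy image: left half dark, right half bright
image = np.zeros((5, 5))
image[:, 2:] = 1.0
# A vertical-edge filter activates where intensity changes
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)
activation = conv2d(image, edge_filter)
print(activation.shape)  # (3, 3); strong responses at the edge
```

The learned filters in a real Conv Layer play the role of `edge_filter` here, except that their values are adjusted by training rather than hand-written.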
What is the receptive field?
It is the name given to the connectivity of each neuron to only a local region of the input volume
It is the name given to the relationship between neurons
It is the name given to the connectivity of each neuron to only a local region of the output volume
It happens when neurons have difficulty finding connections, and this hyperparameter helps them
When dealing with high-dimensional inputs such as images, it is impractical to connect neurons to all neurons in the previous volume. Instead, we connect each neuron to only a local region of the input volume; the spatial extent of this connectivity is the receptive field.
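A quick back-of-the-envelope comparison shows why this local connectivity matters; the sizes below are chosen only for illustration:

```python
# Hypothetical input volume, e.g. a 224x224 RGB image
H, W, D = 224, 224, 3
F = 5  # receptive field (filter) size

# Fully connected: one neuron sees the whole input volume
full_weights_per_neuron = H * W * D   # 150528 weights

# Convolutional: one neuron sees only a local F x F region,
# but always the FULL depth of the input volume
local_weights_per_neuron = F * F * D  # 75 weights

print(full_weights_per_neuron, local_weights_per_neuron)
```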
Which answer best explains the hyperparameter Depth?
It corresponds to the number of filters we would like to use, each learning to look for something different in the input
It corresponds to how deep the connections between neurons will be
It corresponds to the number of features we would like to use, each learning to look for something different in the input
It corresponds to the number of features we would like to use, each learning to look for something different in the previous output
For example, if the first Convolutional Layer takes as input the raw image, then different neurons along the depth dimension may activate in presence of various oriented edges, or blobs of color. We will refer to a set of neurons that are all looking at the same region of the input as a depth column.
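A minimal sketch of the shapes involved; the sizes and random filters below are illustrative, not from any real network:

```python
import numpy as np

num_filters = 12  # the "depth" hyperparameter
H, W, D = 32, 32, 3
# Each filter spans the full input depth D
filters = np.random.randn(num_filters, 3, 3, D)

# One activation map per filter, so output depth == num_filters
out_H, out_W = H - 3 + 1, W - 3 + 1
output = np.zeros((out_H, out_W, num_filters))

# A "depth column": all filters' responses at one spatial location
depth_column = output[0, 0, :]
print(output.shape, depth_column.shape)  # (30, 30, 12) (12,)
```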
Which answer best explains the hyperparameter Stride?
It is the number of pixels by which the filters are moved
It is the number of filters by which the pixels are moved
It is how many features will be used
It defines how many outputs we will receive
When our stride is 1, the filters move 1 pixel at a time. If our stride is 2, they move 2 pixels at a time. Sometimes the stride is 3 or more, but that is unusual.
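The standard output-size formula makes the effect of stride concrete; the helper below is written purely for illustration:

```python
def conv_output_size(W, F, S, P=0):
    """Spatial output size of a conv layer: (W - F + 2P) / S + 1,
    for input size W, filter size F, stride S, zero-padding P."""
    assert (W - F + 2 * P) % S == 0, "filter does not fit evenly"
    return (W - F + 2 * P) // S + 1

print(conv_output_size(W=7, F=3, S=1))  # 5
print(conv_output_size(W=7, F=3, S=2))  # 3: a bigger stride shrinks the output
```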
Which answer best explains the hyperparameter Zero-padding?
It says how many zeros you use to pad the input volume
It pads the missing values with zero
It avoids zeros between the values
It says how many zeros you use to pad the output volume
Sometimes padding the input volume with zeros around the border helps us control the spatial size of the output volume.
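A small NumPy sketch, with illustrative sizes, shows how "same" zero-padding preserves the spatial size:

```python
import numpy as np

x = np.arange(25, dtype=float).reshape(5, 5)
F, S = 3, 1
P = (F - 1) // 2  # "same" padding for an odd filter size

# Surround the input with a border of zeros
padded = np.pad(x, P, mode="constant", constant_values=0)

out_size = (x.shape[0] - F + 2 * P) // S + 1
print(padded.shape, out_size)  # (7, 7) 5 -> output keeps the 5x5 spatial size
```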
What is a feature map in a Convolutional Layer?
It is a map of features learned by the CNN
It is a map of features that you use as input
It is a map of features that helps CNN to make better predictions
It is a map of features that the algorithm will not use
Every time the CNN finds an important feature, it is stored in a feature map. Also, if we have a combination of features in a certain area, it is possible that we have a more important and complex feature there.
When we are talking about the Conv Layer, what is the Kernel Size?
Kernel size = n_inputs * n_outputs
Kernel size = n_inputs / n_outputs
Kernel size = (2 * n_inputs) * n_outputs
Kernel size = n_inputs * (2 * n_outputs)
A fully connected layer connects every input with every output, so its kernel (weight matrix) has n_inputs * n_outputs terms.
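A tiny sketch with made-up layer sizes illustrates the formula:

```python
# Hypothetical layer sizes, for illustration only
n_inputs, n_outputs = 1024, 256

# A fully connected layer's kernel is its weight matrix:
# one weight for every (input, output) pair
fc_kernel_params = n_inputs * n_outputs
print(fc_kernel_params)  # 262144 weights (bias terms not counted)
```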
Why is it so important to keep the Kernel Size small?
The number of parameters grows quadratically with kernel size.
The number of features grows quadratically with kernel size.
The number of parameters decreases quadratically with kernel size.
The number of features decreases quadratically with kernel size.
This makes big convolution kernels not cost-efficient enough, even more so when we want a big number of channels.
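To make the quadratic growth concrete, here is a small illustrative calculation; the channel counts are made up:

```python
def conv_params(k, in_channels, out_channels):
    """Weights in one conv layer: k*k weights per (input, output)
    channel pair; bias terms are ignored for simplicity."""
    return k * k * in_channels * out_channels

small = conv_params(3, 64, 64)  # 36864 weights for a 3x3 kernel
big = conv_params(7, 64, 64)    # 200704 weights for a 7x7 kernel
print(big / small)              # (7/3)^2 ~ 5.4x more parameters
```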
Which Kernel Sizes are most used nowadays?
3x3 and 5x5
3x3 and 9x9
5x5 and 7x7
5x5 and 11x11
Those Kernel Sizes provide good results at a lower cost.
Conv Layers are powerful but also have a big computational cost. Which of the techniques below can make them cheaper?
In Wider Convolutions we use fewer but fatter layers, where fat means more kernels per layer. It is easier for the GPU, or other massively parallel machines for that matter, to process a single big chunk of data instead of a lot of smaller ones.
With these questions on Convolutional Layers at OpenGenus, you should now have a good understanding of Convolutional Layers. Enjoy.