Recurrent Neural Network (RNN) questions [with answers]

Practice multiple choice questions on Recurrent Neural Networks (RNNs) with answers. The RNN is an important Machine Learning model and a significant alternative to the Convolutional Neural Network (CNN).

If you want to revise the concept, read these articles 👇:

Let us start with the questions. Click on the right option and the answer will be explained.

Question 1

What is the basic concept of a Recurrent Neural Network?

Use previous inputs to find the next output according to the training set.
Use a loop between inputs and outputs in order to achieve a better prediction.
Use recurrent features from the dataset to find the best answers.
Use loops between the most important features to predict the next output.
After you train your RNN, it calculates the output according to your input. It is important to say that it processes the inputs one after another: if you have two or more inputs, it calculates the first output and carries that result forward as part of the next step's input when calculating the next output.
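
To make this concrete, here is a minimal sketch of a single step of a simple tanh-based RNN cell in NumPy, with made-up toy dimensions; it only illustrates the idea that the state from the previous step is carried forward and combined with the current input.

import numpy as np

# One RNN step: the state from the previous step (h_prev) is reused
# together with the current input (x_t) to produce the next state and output.
def rnn_step(x_t, h_prev, W_xh, W_hh, W_hy, b_h, b_y):
    h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)  # new hidden state
    y_t = W_hy @ h_t + b_y                           # output for this step
    return h_t, y_t

# Toy dimensions: 3 input features, 4 hidden units, 2 outputs
rng = np.random.default_rng(0)
W_xh, W_hh, W_hy = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), rng.normal(size=(2, 4))
b_h, b_y = np.zeros(4), np.zeros(2)

h = np.zeros(4)                    # initial hidden state
for x in rng.normal(size=(5, 3)):  # a sequence of 5 inputs, processed one after another
    h, y = rnn_step(x, h, W_xh, W_hh, W_hy, b_h, b_y)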

Question 2

What is an RNN used for, and where does it achieve the best results?

Handwriting and speech recognition
Handwriting and images recognition
Speech and images recognition
Financial predictions
Due to its sequential behavior, an RNN is great at recognizing handwriting and speech, processing each input (a letter/word, or a second of an audio file, for example) to find the correct outputs. Basically, the RNN was made to process sequences of information.
[Image: the five RNN connection patterns, numbered as examples 1 to 5]

Question 3

According to the image, classify the type of connection we have in example 1.

One to one
One to many
Many to one
Many to many
In the one to one case we have the classic feedforward neural network.

Question 4

According to the image, classify the type of connection we have in example 2.

One to many
One to one
Many to many
Many to one
In one to many applications, we read the input into the hidden state only once, and then pass that hidden state forward for several time steps, reading the information contained in it at each of them. An example of such an application is a model for describing images. In this case, we read the image into the hidden state once, but we read from that hidden state several times, once for each word generated in the image description.
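
A toy NumPy sketch of the one to many pattern with made-up dimensions (not a real captioning model): the input is read into the hidden state once, and the hidden state is then unrolled to produce several outputs.

import numpy as np

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(8, 16))  # input -> hidden, used only once
W_hh = rng.normal(size=(8, 8))   # hidden -> hidden, reused every step
W_hy = rng.normal(size=(5, 8))   # hidden -> output

image_embedding = rng.normal(size=16)
h = np.tanh(W_xh @ image_embedding)  # single read of the input

outputs = []
for _ in range(4):                   # generate 4 outputs (e.g. 4 caption words)
    h = np.tanh(W_hh @ h)
    outputs.append(W_hy @ h)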

Question 5

According to the image, classify the type of connection we have in example 3.

Many to one
Many to many
One to one
One to many
In the many to one case, we read the data over several time steps, but we produce a prediction only after reading the whole sequence. A good example of this type of application is text classification, such as sentiment analysis, where we read the whole text before making a prediction.
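
A toy NumPy sketch of the many to one pattern with made-up dimensions: every element of the sequence updates the hidden state, but a single prediction is produced only after the whole sequence has been read.

import numpy as np

rng = np.random.default_rng(0)
W_xh, W_hh = rng.normal(size=(8, 16)), rng.normal(size=(8, 8))
W_hy = rng.normal(size=(1, 8))        # one output for the whole sequence

sequence = rng.normal(size=(10, 16))  # e.g. 10 word embeddings of a text
h = np.zeros(8)
for x in sequence:
    h = np.tanh(W_xh @ x + W_hh @ h)  # read the whole sequence first

score = W_hy @ h                      # single prediction at the end (e.g. sentiment)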

Question 6

According to the image, classify the type of connection we have in example 4.

Many to many
Many to one
One to many
One to one
In this many to many case, we read the data for a few time steps before we start producing forecasts. In other words, there is a delay between reading the input sequence and emitting the output sequence. Examples of such applications are machine translation and audio transcription, where the RNN reads the sequence for some time before starting to generate the output sequence, be it the words in another language or the words corresponding to the audio.
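
A toy NumPy sketch of this delayed many to many pattern with made-up dimensions (an encoder/decoder-style loop, not a real translation model): the whole input sequence is read before any output is produced.

import numpy as np

rng = np.random.default_rng(0)
W_xh, W_hh, W_hy = rng.normal(size=(8, 16)), rng.normal(size=(8, 8)), rng.normal(size=(16, 8))

source = rng.normal(size=(6, 16))  # e.g. 6 word embeddings of a source sentence
h = np.zeros(8)
for x in source:                   # reading phase: no outputs yet
    h = np.tanh(W_xh @ x + W_hh @ h)

generated = []
for _ in range(7):                 # output phase: the output sequence may differ in length
    h = np.tanh(W_hh @ h)
    generated.append(W_hy @ h)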

Question 7

According to the image, classify the type of connection we have in example 5.

Many to many
One to many
Many to one
One to one
This many to many case is when we have sequences at both the input and the output of the network, and each input corresponds to an output in the same time period. This type of data structure appears in time series and panel data, where we want to make a forecast for the next time period given what has occurred in this period and in previous ones.
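
A toy NumPy sketch of the synchronized many to many pattern with made-up dimensions: one output is produced for every input time step.

import numpy as np

rng = np.random.default_rng(0)
W_xh, W_hh, W_hy = rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=(4, 8))

series = rng.normal(size=(12, 4))     # 12 time steps with 4 features each
h = np.zeros(8)
predictions = []
for x in series:
    h = np.tanh(W_xh @ x + W_hh @ h)
    predictions.append(W_hy @ h)      # an output at every input step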

Question 8

What is a 'gradient' when we are talking about RNNs?

A gradient is a partial derivative with respect to its inputs
It is what an RNN calls its features
The most important step of the RNN algorithm
A parameter that can help you improve the algorithm's accuracy
A gradient measures how much the output of a function changes if you change the inputs a little bit. The higher the gradient, the steeper the slope and the faster a model can learn. But if the slope is zero, the model stops learning. A gradient simply measures the change in all weights with regard to the change in error.
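
A tiny numerical illustration of this idea, using a made-up one-weight model: the gradient estimates how much the error changes when the weight changes a little bit, and gradient descent moves the weight in the opposite direction.

# Toy model: prediction = w * x, error = squared difference to the target.
def error(w, x=2.0, target=10.0):
    return (w * x - target) ** 2

w, eps = 3.0, 1e-6
gradient = (error(w + eps) - error(w)) / eps  # about -16: a steep slope
w -= 0.01 * gradient                          # one small gradient-descent step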

Question 9

One of the RNN's issues is 'Exploding Gradients'. What is that?

When the algorithm assigns a stupidly high importance to the weights, without much reason
When the algorithm assigns a stupidly high importance to the weights, because of the better features
When the algorithm assigns a stupidly high importance to the weights, when your dataset is too big
When the algorithm assigns a stupidly high importance to the weights, when your data is too small
This problem can be easily solved if you truncate or squash the gradients.
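
One common way to "truncate or squash" the gradients is norm clipping; a minimal sketch with an arbitrary threshold could look like this.

import numpy as np

# If the gradient's norm exceeds a threshold, rescale it to that threshold.
def clip_gradient(grad, max_norm=5.0):
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

exploding = np.array([120.0, -300.0, 45.0])  # an unreasonably large gradient
clipped = clip_gradient(exploding)           # same direction, norm reduced to 5.0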

Question 10

The other RNN issue is called 'Vanishing Gradients'. What is that?

When the values of a gradient are too small and the model stops learning or takes way too long because of that.
When the values of a gradient are too big and the model stops learning or takes way too long because of that.
When the values of a gradient are too small and the model gets stuck in a loop because of that.
When the values of a gradient are too big and the model gets stuck in a loop because of that.
This was a major problem in the 1990s and much harder to solve than exploding gradients. Fortunately, it was solved through the concept of LSTM by Sepp Hochreiter and Juergen Schmidhuber.
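
A tiny numerical sketch of why gradients vanish: if backpropagation multiplies the gradient by a factor smaller than 1 at every time step (here an assumed constant factor of 0.5), the contribution of the early steps shrinks toward zero.

# Backpropagating through 30 time steps, shrinking the gradient at each one.
gradient, factor = 1.0, 0.5
for _ in range(30):
    gradient *= factor
print(gradient)  # about 9.3e-10: far too small to drive learning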

Question 11

LSTM? What is that?

LSTM networks are an extension for recurrent neural networks, which basically extends their memory. Therefore it is well suited to learn from important experiences that have very long time lags in between
LSTM networks are an extension for recurrent neural networks, which basically extends their memory. Therefore it is well suited to learn from important experiences that have very low time lags in between
LSTM networks are an extension for recurrent neural networks, which basically shorten their memory. Therefore it is well suited to learn from important experiences that have very low time lags in between
LSTM networks are an extension for recurrent neural networks, which basically extends their memory. Therefore it is not recommended to use it unless you are using a small dataset.
The units of an LSTM are used as the building units for the layers of an RNN, which is then often called an LSTM network. LSTMs enable RNNs to remember their inputs over a long period of time. This is because LSTMs hold their information in a memory that works much like the memory of a computer: the LSTM can read, write and delete information from its memory.
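
A minimal NumPy sketch of one LSTM step, assuming the standard gated formulation and made-up toy dimensions; the gates are what let the cell write to, delete from and read from its memory.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One LSTM step over the memory cell c and hidden state h.
def lstm_step(x, h, c, W, U, b):
    f = sigmoid(W["f"] @ x + U["f"] @ h + b["f"])  # forget gate: "delete" from memory
    i = sigmoid(W["i"] @ x + U["i"] @ h + b["i"])  # input gate: "write" to memory
    o = sigmoid(W["o"] @ x + U["o"] @ h + b["o"])  # output gate: "read" from memory
    g = np.tanh(W["g"] @ x + U["g"] @ h + b["g"])  # candidate memory content
    c = f * c + i * g                              # updated memory cell
    h = o * np.tanh(c)                             # new hidden state
    return h, c

# Toy dimensions: 3 input features, 4 memory/hidden units
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(4, 3)) for k in "fiog"}
U = {k: rng.normal(size=(4, 4)) for k in "fiog"}
b = {k: np.zeros(4) for k in "fiog"}

h, c = np.zeros(4), np.zeros(4)
for x in rng.normal(size=(5, 3)):  # a sequence of 5 inputs
    h, c = lstm_step(x, h, c, W, U, b)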

With these questions on Recurrent Neural Networks (RNNs) at OpenGenus, you must have a good idea of the topic. Enjoy.
