Interview questions on GAN
In this article, we have presented several Interview questions on Generative Adversarial Network (GAN) along with detailed answers.
Multiple Choice Questions
1. GAN is short for:
- Generative Advertising Network
- Generative Adversarial Network
- Generate Adversarial Network
- Generation adversarial Network
ANS: Generative Adversarial Network
The full form of GAN is Generative Adversarial Network. Generative Adversarial Networks (GANs) are a powerful class of neural networks used for unsupervised learning.
2. Generative Adversarial Networks were developed and introduced by:
1. Alan Turing
2. J. Goodfellow
3. Rutherford
4. None of the above
ANS: J. Goodfellow
It was developed and introduced by Ian J. Goodfellow in 2014.
3. Which GAN implementation is among the most well-liked and effective?
1. Conditional GAN
2. Vanilla GAN
3. Deep Convolutional GAN
4. Laplacian Pyramid GAN
ANS: Deep Convolutional GAN
DCGAN is one of the most popular and most successful implementations of GAN.
It is composed mainly of convolution layers, without max pooling or fully connected layers. It uses strided convolutions for downsampling and transposed convolutions for upsampling.
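To make the downsampling/upsampling point concrete, here is a minimal sketch (assuming PyTorch, which the article does not prescribe) showing how a strided convolution halves the spatial size and a transposed convolution restores it:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 64, 64)  # a dummy 64x64 RGB image

# strided convolution downsamples; transposed convolution upsamples
down = nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1)
up = nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1)

h = down(x)
print(h.shape)      # torch.Size([1, 64, 32, 32]) -- halved by the stride
print(up(h).shape)  # torch.Size([1, 3, 64, 64])  -- restored to 64x64
```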
4. Which GAN is described as a deep learning technique that employs some conditional parameters?
1. Conditional GAN
2. Vanilla GAN
3. Deep Convolutional GAN
4. Laplacian Pyramid GAN
ANS: Conditional GAN
Conditional GAN (CGAN) can be described as a deep learning method in which some conditional parameters are put into place. In CGAN, an additional parameter 'y' is fed to the generator so that it generates data corresponding to that condition. Labels are also added to the input of the discriminator to help it distinguish the real data from the fake generated data.
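As an illustration of this conditioning, here is a minimal sketch (assuming PyTorch; the sizes are illustrative, not taken from the CGAN paper) of how the label y is embedded and concatenated with the noise before entering the generator:

```python
import torch
import torch.nn as nn

num_classes, z_dim = 10, 100

# learnable embedding that turns a class index into a dense vector
label_emb = nn.Embedding(num_classes, num_classes)

z = torch.randn(16, z_dim)                # noise batch
y = torch.randint(0, num_classes, (16,))  # class labels

gen_input = torch.cat([z, label_emb(y)], dim=1)  # shape: (16, 110)
# The discriminator receives the same label information alongside the
# real or generated sample, so it judges "real AND matching the label".
```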
5. Generative Adversarial Networks were developed and introduced in:
1. 2015
2. 2014
3. 2013
4. 2012
ANS: 2014
It was developed and introduced by Ian J. Goodfellow in 2014.
6. Into how many parts can Generative Adversarial Networks (GANs) be separated?
1. 4
2. 3
3. 2
4. 1
ANS: 3
Generative Adversarial Networks (GANs) can be broken down into three parts:
- Generative: to learn a generative model, which describes how data is generated in terms of a probabilistic model.
- Adversarial: the training of the model is done in an adversarial setting.
- Networks: deep neural networks are used as the artificial intelligence (AI) algorithms for training.
7. The ___ component is utilized to learn a generative model, which describes how data is produced in terms of a probabilistic model.
1. Adversarial
2. Generative
3. Networks
4. Discriminator
ANS: Generative
A generative model must also be probabilistic rather than deterministic. If our model is merely a fixed calculation, such as taking the average value of each pixel in the dataset, it is not generative because the model produces the same output every time. The model must include a stochastic (random) element that influences the individual samples generated by the model.
8. How accurate will the discriminator be for GAN models at the global optimum?
- 1
- 0.5
- p_data/(p_g + p_data)
- None of those
ANS: 0.5
As the generator improves with training, the discriminator performance gets worse because the discriminator can't easily tell the difference between real and fake. If the generator succeeds perfectly, then the discriminator has a 50% accuracy. In effect, the discriminator flips a coin to make its prediction.
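For reference, the original GAN paper (Goodfellow et al., 2014) derives a closed form for the optimal discriminator given a fixed generator, which makes the 50% figure explicit:

```latex
% For a fixed generator G, the optimal discriminator is
D^*(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}
% At the global optimum the generator matches the data, p_g = p_{\text{data}}, so
D^*(x) = \frac{p_{\text{data}}(x)}{2\,p_{\text{data}}(x)} = \frac{1}{2}
```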
9. The generator G's main goal is to:
- Maximize classification error for discriminator
- Minimize classification error for discriminator
- Minimize log(1 - D( G(z) ))
- Maximize log(D( G(z) ))
ANS: Minimize log(1 - D( G(z) ))
The generator seeks to minimize the log of the inverse probability predicted by the discriminator for fake images. This has the effect of encouraging the generator to generate samples that have a low probability of being fake.
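A minimal sketch (assuming PyTorch) of this objective, alongside the non-saturating variant commonly used in practice because it gives stronger gradients early in training:

```python
import torch

d_fake = torch.sigmoid(torch.randn(16, 1))  # placeholder for D(G(z))

# Original minimax objective: minimize log(1 - D(G(z))).
loss_saturating = torch.log(1 - d_fake).mean()

# Non-saturating variant: maximize log(D(G(z))), i.e. minimize
# -log(D(G(z))); same goal, but gradients do not vanish when the
# discriminator confidently rejects early fakes.
loss_non_saturating = -torch.log(d_fake).mean()
```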
10. Can the training of GAN models be compared to a Minimax 2-player game?
- Yes
- No
ANS: Yes
The GAN originally presented by Goodfellow et al. is a novel technique that uses a minimax two-player game to learn latent data distributions. The framework is adversarial in the sense that the training procedure for G tries to maximize the probability of D making a mistake; it thus corresponds to a minimax two-player game.
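The corresponding value function from Goodfellow et al. (2014) is:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] +
  \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
```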
11. Which of the following claims is true if we characterize the generator as G(z; θ_g)?
- p_z(z) represents probability distribution of data
- input noise is sampled from domain z
- theta_g represents parameters of generator model
- theta_g represents parameters of discriminator model
ANS: theta_g represents parameters of generator model
x = G(z; θ_g), where x is the produced sample and θ_g are the parameters characterizing the generative model. The input to the generator is a noise sample z drawn from a simple prior noise distribution p_z; often a standard Gaussian distribution or a uniform distribution is used as the prior.
Subjective Questions
1. Is it acceptable to extract data from a real sample using a generated sample?
Although a generated sample contains some "modified" values, a pragmatic perspective helps us decide when and how we might use it. It is acceptable to use a generated sample in applications where those modified values are not significant enough to affect the result of the process.
2. What is the role of noise and noise sampling?
Noise has a balancing function. We can certainly alter how we sample it, or even produce it as something other than pure noise, but doing so results in a biased generator feeding on the "noise input". It is therefore usually wise to keep it as random as possible.
3. What happens if you change the noise to another type of distribution?
In essence, a Conditional Generative Adversarial Network (CGAN) is created (Mirza & Osindero, 2014). In addition to image-to-image translation, CGANs have been employed in a variety of applications, including image super-resolution (Isola et al., 2016). Image-to-image translation itself covers uses such as super-resolution, image colorization, and dense semantic labelling.
4. Can we have a Deeper GAN?
Once more, practicality is what we need to prioritize. A "deeper GAN" can be created, but only when more processing of the data is required. When we feed a generated sample from one GAN into another, we are essentially using it in place of the noise input so that the second generator can perform something further on it. How many steps are required depends on our design. The drawbacks: training would undoubtedly be time-consuming, and because generated samples are being used as noise, the finished product can contain more "strange representations". There is no guarantee that it will work in all circumstances, though it may work well in some.
5. Why are Generative Adversarial Networks classified as unsupervised?
Since there are other varieties of GANs, I will restrict my answer to the original GAN. It is referred to as unsupervised learning because you do not assume that you have a target variable in your dataset, and if you have one, you do not use it. You only need the features themselves, such as photos; you do not even need to know what class these photographs fall under. Your objective is to draw samples (via the generator) from the distribution that produced these images.
6. How can GAN be used to create images solely from text?
Using an intriguing sort of GAN called StackGAN, we can create images based solely on text. StackGAN operates in two stages:
- The first stage produces a rudimentary contour, simple shapes, and a low-resolution representation of the image.
- The second stage improves the picture created in the first stage by adding more realism, and transforms it into a high-resolution image.
7. Explain Generative Adversarial Network.
GAN (Generative Adversarial Network) is an unsupervised deep learning technique that trains two networks simultaneously: the generator and the discriminator. The generator creates images that are nearly identical to the real ones, while the discriminator tells the difference between fake and real images. GAN is thus capable of creating fresh content.
8. When should we switch from other generative models to GANs?
GANs are probably a good choice for tasks that have a perceptual element. This category includes graphics applications like image synthesis, image translation, image infilling, and attribute manipulation.
9. Briefly introduce StyleGAN
StyleGAN is a GAN formulation that can produce extremely high-quality images, up to and including 1024×1024 resolution. The idea is to build a stack of layers in which the first layers produce images at a low resolution (beginning at 4×4) and subsequent layers steadily increase the resolution.
10. How may GANs be applied to non-image data?
On other continuous data, we anticipate that GANs will eventually succeed at the same level of image synthesis, but it will take better implicit priors. Finding these priors will require careful consideration of what makes sense and is computationally viable in a given domain.
We are less certain about structured data or data that is not continuous. One strategy could be to make both the generator and the discriminator agents trained with reinforcement learning, although large-scale computational resources may be necessary to make this approach successful. Finally, this issue may well call for advances in fundamental research.
11. Why do we need the cycle consistent loss?
Using adversarial loss alone, the images from the source domain could be mapped to any random permutation of images in the target domain that fits the target distribution, without preserving the content of the input. We therefore employ an additional loss, the cycle-consistency loss, to relieve this.
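A minimal sketch (assuming PyTorch; G and F are placeholder generator networks and the weight 10.0 is an illustrative choice of λ) of the cycle-consistency computation:

```python
import torch
import torch.nn as nn

G = nn.Identity()  # placeholder for the X -> Y generator
F = nn.Identity()  # placeholder for the Y -> X generator
l1 = nn.L1Loss()

real_x = torch.randn(4, 3, 64, 64)
real_y = torch.randn(4, 3, 64, 64)

# forward cycle: x -> G(x) -> F(G(x)) should reconstruct x
# backward cycle: y -> F(y) -> G(F(y)) should reconstruct y
loss_cycle = 10.0 * (l1(F(G(real_x)), real_x) + l1(G(F(real_y)), real_y))
```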
12. What are some of the use cases where Cycle GAN is preferred?
Cycle GAN is mostly utilized in situations where obtaining paired training samples is challenging. CycleGAN has a number of intriguing uses, such as photo enhancement, season transfer, converting real photographs into artistic images, and more.
13. How Cycle GAN differs from the other types of GAN?
Data is mapped from one domain to another via CycleGAN. In other words, CycleGAN maps the distribution of images from one domain to the distribution of images in another domain.
14. When is InfoGAN useful?
InfoGAN is an unsupervised variant of conditional GAN. In the conditional GAN, we impose a condition on the generator and discriminator using the class labels found in the dataset in order to create the desired image. When we have an unlabeled dataset, InfoGAN can be used to create the desired images instead.
15. Can we regulate and alter the images produced by GAN? If so, how?
With the vanilla GAN, we are unable to regulate and alter the images produced by the GAN generator. We therefore employ the conditional GAN, which gives us control and flexibility over the images the generator produces.
16. Describe the Wasserstein distance.
The Earth Mover's (EM) distance is another name for the Wasserstein distance. It serves as the distance measure in optimal transport problems, where mass needs to be moved from one configuration to another.
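Formally, the distance between two distributions p_r and p_g can be written as an infimum over all transport plans (joint distributions) with those marginals:

```latex
% \Pi(p_r, p_g) is the set of all joint distributions whose
% marginals are p_r and p_g.
W(p_r, p_g) = \inf_{\gamma \in \Pi(p_r, p_g)}
  \mathbb{E}_{(x, y) \sim \gamma}\left[ \lVert x - y \rVert \right]
```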
17. What are GAN's shortcomings and why do we require Wasserstein GAN?
In the original GAN, the JS divergence between the generator distribution and the actual data distribution is minimized. The JS divergence, however, has the drawback of being constant, and therefore uninformative, when there is no overlap between the two distributions, i.e. when they do not share the same support.
We can therefore utilize the Wasserstein GAN, which uses the Wasserstein distance rather than JS divergence, to get around this problem.
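A minimal sketch (assuming PyTorch, with a toy linear critic standing in for a real network) of the WGAN critic loss and the weight clipping used in Arjovsky et al.'s original paper to enforce the Lipschitz constraint:

```python
import torch
import torch.nn as nn

# toy critic: note there is no sigmoid, it outputs an unbounded score
critic = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

real = torch.randn(16, 784)  # placeholder real samples
fake = torch.randn(16, 784)  # placeholder generator output (detached in practice)

# Critic loss: the negative of the Wasserstein estimate, so it can be minimized.
loss_critic = critic(fake).mean() - critic(real).mean()

# After each optimizer step, clip weights to keep the critic roughly Lipschitz.
for p in critic.parameters():
    p.data.clamp_(-0.01, 0.01)
```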
18. How is the least-squares GAN helpful?
We employ sigmoid cross-entropy as the loss function in the vanilla GAN.
The issue with the sigmoid cross-entropy loss is that gradients tend to vanish once the fake samples reach the right side of the decision boundary, even if those samples are still far from the real distribution.
We employ the least-squares GAN to get around this problem.
In the least-squares GAN, even when the fake samples produced by the generator are on the right side of the decision boundary, the gradients do not vanish until the fake samples match the real distribution.
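A minimal sketch (assuming PyTorch, with placeholder discriminator scores) of the least-squares losses, where sigmoid cross-entropy is replaced with mean squared error against the real/fake labels:

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()
d_real = torch.rand(16, 1)  # placeholder discriminator scores on real images
d_fake = torch.rand(16, 1)  # placeholder discriminator scores on fake images

# Discriminator: push real scores toward 1 and fake scores toward 0.
loss_d = 0.5 * (mse(d_real, torch.ones_like(d_real)) +
                mse(d_fake, torch.zeros_like(d_fake)))

# Generator: push the scores of its fakes toward 1.
loss_g = 0.5 * mse(d_fake, torch.ones_like(d_fake))
```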
19. Explain the discriminator of DCGAN.
Convolutional and batch norm layers with leaky ReLU activations make up the DCGAN discriminator.
The discriminator first receives the image as an input, runs a series of convolution operations, and then determines whether the image is a fake image produced by the generator or a true image derived from the training data.
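A minimal sketch (assuming PyTorch; the channel counts are illustrative, not from the DCGAN paper) of such a discriminator for 64×64 RGB images:

```python
import torch.nn as nn

discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 64x64 -> 32x32
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32x32 -> 16x16
    nn.BatchNorm2d(128),
    nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, stride=2, padding=1), # 16x16 -> 8x8
    nn.BatchNorm2d(256),
    nn.LeakyReLU(0.2),
    nn.Conv2d(256, 1, 8),                        # 8x8 -> 1x1 score
    nn.Sigmoid(),                                # probability the image is real
)
```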
20. Explain the generator of DCGAN.
Transposed convolution and batch norm layers with ReLU activations make up the DCGAN generator.
The generator is first fed with noise drawn from a normal distribution. This noise is passed through the generator's transposed convolution and batch norm layers, which produce an image comparable to those in the training set.
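A minimal sketch (assuming PyTorch; channel counts again illustrative) of such a generator:

```python
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 8),                      # 1x1 -> 8x8
    nn.BatchNorm2d(256),
    nn.ReLU(),
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), # 8x8 -> 16x16
    nn.BatchNorm2d(128),
    nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 16x16 -> 32x32
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),    # 32x32 -> 64x64
    nn.Tanh(),
)

z = torch.randn(16, 100, 1, 1)  # noise from a normal distribution
fake_images = generator(z)      # shape: (16, 3, 64, 64)
```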
21. Why do we need DCGAN?
GANs are frequently used in image-related applications, such as creating new images and transforming grayscale images into colorful ones. Since CNNs are good at handling images, we employ them instead of feed-forward neural networks when dealing with images.
Similarly, we can utilize DCGAN, whose generator and discriminator use convnets rather than feed-forward networks, in place of the vanilla GAN. When it comes to image-related tasks, DCGAN is far more effective than the standard GAN.
22. What is the role of the generator and discriminator?
The generator's job is to create brand-new data points that are comparable to those in the training set, whereas the discriminator's job is to determine whether the given data points are generated by the generator or real.
23. Why are GANs called implicit density models?
The generator network produces new data points that resemble those in the training set. To do this, the generator implicitly learns the distribution of the training set and creates new data points based on this implicitly learnt distribution.
Because the generator network learns the distribution of the training set only implicitly, GANs are frequently referred to as implicit density models.
24. Explain the difference between the discriminative and generative models.
By learning the decision boundary that best divides the classes, the discriminative model organizes the data points into their corresponding classes.
The generative models may also categorize the data points, but they do so by learning the properties of each class rather than the decision boundary.
25. Why are generative adversarial networks (GANs) so popular?
There are several uses for generative adversarial networks. They have a lot of traction, and are really effective, when it comes to working with photographs:
- Art production: GANs are used to produce creative drawings, paintings, and sketches.
- Super-resolution: they are used to significantly improve the resolution of the supplied photos.
- Attribute editing: they can quickly modify certain characteristics of photographs, such as turning day into night or summer into winter.
26. What is a latent space vector?
The latent vector z is just random noise.
The most frequent distributions for that noise are uniform, z ∼ U[−1, +1], or Gaussian, z ∼ N(0, 1). I am not aware of any theoretical study about the properties derived from different priors, so I think it's a practical choice: pick the one that works best in your case.
The dimensionality of the noise depends on the architecture of the generator, but most of the GANs I've seen use a unidimensional vector of length between 100 and 256.
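A minimal sketch (assuming PyTorch) of sampling a batch of latent vectors from both common priors:

```python
import torch

batch_size, z_dim = 16, 100

z_gaussian = torch.randn(batch_size, z_dim)        # z ~ N(0, 1)
z_uniform = torch.rand(batch_size, z_dim) * 2 - 1  # z ~ U[-1, +1]
```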
27. Does it make sense to do a train-test split when training GANs?
In my opinion, training GANs is only somewhat unsupervised. For the generator it is undoubtedly unsupervised, but for the adversarial network it is supervised. New data the discriminator has never seen before can be helpful in evaluating its capacity to discriminate between real and fake samples.
In other words, if you want to examine the discriminator's ability to generalize its task to data it has never seen before, it makes sense to break your dataset into train(-validation)-test.
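A minimal sketch (assuming PyTorch, with a placeholder dataset of random tensors standing in for real images) of holding out such a test split:

```python
import torch
from torch.utils.data import TensorDataset, random_split

dataset = TensorDataset(torch.randn(1000, 3, 64, 64))  # placeholder real images

# hold out 20% of the real data the discriminator never trains on
n_test = len(dataset) // 5
train_set, test_set = random_split(dataset, [len(dataset) - n_test, n_test])
```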
28. GAN vs DCGAN difference
A Generative Adversarial Network (GAN) is built on the concept of a generator model that creates fake examples and a discriminator model that tries to determine whether the picture it receives is a fake (i.e. from the generator) or a real sample.
Initially, this was demonstrated using rather straightforward, fully connected networks.
A Deep Convolutional GAN (DCGAN), however, uses deep convolutional networks instead of those fully connected networks.
In general, conv nets look for spatial correlations, identifying areas of correlation within a picture.
As a result, a DCGAN would probably be better suited for image/video data, whereas a GAN's fundamental concept can be applied to a wider range of domains, because the model's specifics are left up to the individual model architects.
29. Is it possible for a DCGAN to do regression? What are some examples of this?
The answer is yes. There is a paper titled "Taking Control of Intra-class Variation in Conditional GANs Under Weak Supervision" by Richard Marriott, Sami Romdhani and Liming Chen from Ecole Centrale de Lyon, France, discussing it.
They propose a "C-GAN that is able to learn realistic models with continuous, semantically meaningful input parameters". They actually cover generating images of people at different ages as well.
30. Purpose of SRGAN
As the name implies, SRGAN (Super-Resolution GAN) is a way of building a GAN that uses a deep neural network together with an adversarial network to produce higher-resolution images.
This particular sort of GAN is very helpful in enhancing the details of natively low-resolution photos while minimizing errors.
31. Discuss the similarities and dissimilarities between DiscoGAN and CycleGAN
DiscoGAN and CycleGAN share the same fundamental idea:
One network learns a transformation from domain X to domain Y, while the other learns the reverse mapping. Both employ a reconstruction loss as a measure of how effectively the original image is rebuilt after being transformed twice across domains.
Both adhere to the idea that an image transformed from one domain to another and back again should match the original.
DiscoGAN and CycleGAN differ primarily in that DiscoGAN uses two reconstruction losses, one for each domain, whereas CycleGAN uses a single cycle-consistency loss.
32. What's the difference between CNNs, GANs, autoencoders and VAEs?
CNNs
CNN stands for convolutional neural network. This variety of neural network is intended for spatially structured data; photos, for instance, naturally have a spatial ordering and are ideal for CNNs. Convolutional neural networks contain numerous "filters" that "slide" across the data, producing an activation at each location. The resulting "feature map" shows how strongly the data at each location activated the filter. More precisely, this activation is the dot product of the filter and the image data at that location.
GANs
GAN stands for Generative Adversarial Network. These are a form of generative model: they learn to imitate the statistical distribution of the data you give them and can therefore create fresh, similar-looking images. The term "adversarial" comes from the two rival networks (adversaries) that compete with one another inside a GAN. The generator is a neural network that outputs an image I from a vector of random variables Z. The discriminator is a similar neural network that, given an input image I, outputs the likelihood p that the image is real: when p = 1, the discriminator strongly believes the image is real, and when p = 0, it strongly believes the image is fake.
Autoencoders
Autoencoders are pretty simple: they take an input and try to duplicate it as accurately as they can. If I input a photo of the digit "1", the autoencoder is expected to generate the exact same photo. Although it sounds unnecessary and simple, some autoencoder setups can produce intriguing results. Typically, we don't just have an input layer and an output layer, because that would allow the network to do nothing more than copy pixels between the two, which is completely worthless. Between the input and output layers, we often have one (or more) hidden layers that serve as a bottleneck.
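A minimal sketch (assuming PyTorch; the 784→32 sizes are an illustrative MNIST-style choice) of an autoencoder with a bottleneck layer:

```python
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(784, 32),  # encoder: compress 784 pixels to a 32-unit bottleneck
    nn.ReLU(),
    nn.Linear(32, 784),  # decoder: reconstruct the input from the bottleneck
    nn.Sigmoid(),
)
# Trained with a reconstruction loss, e.g. nn.MSELoss()(autoencoder(x), x)
```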
VAEs
VAE stands for Variational AutoEncoder. A VAE is somewhat similar to an autoencoder, but with a unique twist!
A variational autoencoder must reproduce its output while constraining the distribution of its hidden neurons, as opposed to an autoencoder, which only needs to reproduce its input. Because the hidden neurons are pushed toward a known distribution, we can create new images simply by drawing a sample from that distribution and feeding it into the network's hidden layer.
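A minimal sketch (assuming PyTorch, with placeholder encoder outputs) of that twist: the reparameterization trick plus the KL term that keeps the hidden distribution close to N(0, 1):

```python
import torch

mu = torch.zeros(16, 32)      # placeholder encoder output: latent means
logvar = torch.zeros(16, 32)  # placeholder encoder output: latent log-variances

# Reparameterization trick: sample z = mu + std * eps, eps ~ N(0, 1),
# so gradients can flow through the sampling step.
std = torch.exp(0.5 * logvar)
z = mu + std * torch.randn_like(std)

# KL divergence between N(mu, std^2) and N(0, 1), summed over latent dims.
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
```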