
Questions on TensorFlow (with Answers)

In this article, we have presented the most insightful and must-attempt questions on TensorFlow, with multiple options to choose from. Select an answer to find out if you got it right and to get an explanation of the answer.

This will help you prepare for interviews on TensorFlow, Google's Machine Learning framework.

To do minimum pool (minpool) in TensorFlow, which op needs to be used?

tf.reduce_min
tf.maxpool
tf.minpool
tf.pool
Minimum pooling (minpool) is not supported out of the box in TensorFlow, so a minpool() op does not exist. We can build a minimum-pooling op ourselves using tf.reduce_min(), as shown in the sketch below.
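A minimal sketch of do-it-yourself minimum pooling; the tensor shapes and window sizes here are purely illustrative. Global min pooling falls straight out of tf.reduce_min, while windowed min pooling can be emulated by negating the input and reusing max pooling:

```python
import tensorflow as tf

x = tf.random.uniform([1, 4, 4, 3])  # NHWC: batch, height, width, channels

# Global "min pooling" over the spatial axes using tf.reduce_min.
global_min = tf.reduce_min(x, axis=[1, 2])                     # shape (1, 3)

# Windowed min pooling emulated via max pooling on the negated input.
windowed_min = -tf.nn.max_pool2d(-x, ksize=2, strides=2, padding="VALID")
```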

Is TensorFlow available/ supported in C++?

Limited support
Full support
No
Depends on version
TensorFlow is available as a C++ API, though the most widely used API is the Python API. TensorFlow does not test its C++ API across all builds, so it has only limited support and may break between releases.

Which padding type is not supported in TensorFlow's Conv2D op and variants?

VALID_EXPLICIT
SAME
VALID
EXPLICIT
TensorFlow's convolution ops such as Conv2D support three padding types, namely SAME, VALID and EXPLICIT; VALID_EXPLICIT is not one of them. A sketch of the three supported types follows.
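A minimal sketch, assuming small illustrative tensor shapes, of the three padding types accepted by tf.nn.conv2d:

```python
import tensorflow as tf

x = tf.random.uniform([1, 8, 8, 3])    # NHWC input
w = tf.random.uniform([3, 3, 3, 16])   # HWIO filter

same = tf.nn.conv2d(x, w, strides=1, padding="SAME")    # output keeps the 8x8 spatial size
valid = tf.nn.conv2d(x, w, strides=1, padding="VALID")  # no implicit padding, 6x6 output

# EXPLICIT padding: one [before, after] pair per dimension (N, H, W, C).
explicit = tf.nn.conv2d(x, w, strides=1,
                        padding=[[0, 0], [1, 1], [1, 1], [0, 0]])
```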

TensorFlow has an explicit pad op, tf.pad. Which padding mode is not supported by it?

EXPLICIT
CONSTANT
REFLECT
SYMMETRIC
TensorFlow's tf.pad op supports 3 modes, namely CONSTANT, REFLECT and SYMMETRIC. Modes like SAME, VALID and EXPLICIT do not exist here, since tf.pad already takes the exact padding amounts as an argument.
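A small sketch of the three supported modes; the tensor and padding amounts are illustrative:

```python
import tensorflow as tf

t = tf.constant([[1, 2, 3], [4, 5, 6]])
paddings = tf.constant([[1, 1], [2, 2]])   # [before, after] per dimension

tf.pad(t, paddings, mode="CONSTANT", constant_values=0)  # pad with zeros
tf.pad(t, paddings, mode="REFLECT")                      # mirror without repeating the edge
tf.pad(t, paddings, mode="SYMMETRIC")                    # mirror including the edge
```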

What is the latest stable version of TensorFlow?

v2.5.0
v1.15.5
v2.0.0
v1.4.2
As of 2021, the latest stable version of TensorFlow is v2.5.0.

What is TensorBoard?

Visualization tool for TF
Debug tool for TF
Optimization tool for TF
Tool for training models
TensorBoard is TensorFlow's visualization toolkit, used for visualizing models, metrics and various operations during training.
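As a hedged sketch, one common way to feed TensorBoard is the Keras TensorBoard callback; the model and the logs/ directory below are illustrative placeholders:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/")
# model.fit(x_train, y_train, epochs=5, callbacks=[tb_callback])
# Then inspect the run with:  tensorboard --logdir logs/
```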

Which library by Google is used to optimize memory allocation in TensorFlow?

TCMalloc
jemalloc
GCC
OneDNN
TCMalloc is a library by Google that optimizes memory allocation calls such as malloc and realloc. It is recommended for use with TensorFlow, but it is not a required dependency.

Which library by Intel is used to optimize TensorFlow?

OneDNN
nauta
MLIR
cerl
OneDNN is a library by Intel that is linked with TensorFlow to optimize inference performance on CPUs. OneDNN is officially supported by TensorFlow.
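In recent CPU builds, the oneDNN code path can be toggled with the TF_ENABLE_ONEDNN_OPTS environment variable; the snippet below is a sketch and assumes a TensorFlow build where this flag is honoured:

```python
import os

# Must be set before TensorFlow is imported (value "0" disables the oneDNN path).
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf
```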

What are servables in TensorFlow?

Objects to perform computation
Prepare production environment
Serve new algorithms
Support graph optimizations
Servables are objects that perform computation. A servable can contain a single operation or an entire model. Servables are part of TensorFlow Serving, which is used to build production serving environments.

Which function destroys a variable in TensorFlow?

tf.Session.close()
tf.Variable()
tf.debugging.assert_near
tf.Session.reset()
A variable in TensorFlow 1.x is created with tf.Variable() and initialized through its initializer; it is scoped to the current session and is destroyed when tf.Session.close() is called.
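A minimal sketch of this lifetime, written against the tf.compat.v1 API so it also runs on a TF 2.x build:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

v = tf.compat.v1.Variable(3.0)
sess = tf.compat.v1.Session()
sess.run(v.initializer)   # the variable lives inside this session
print(sess.run(v))        # 3.0
sess.close()              # resources backing the variable are released
```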

Value of which one of the following cannot be changed?

tf.placeholder
tf.variable
tf.constant
tf.Conv2D
The value of tf.constant cannot be changed once created. Variables can be updated with assign(), and placeholders are fed a fresh value on every run.
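A short sketch of the difference between a constant and a variable:

```python
import tensorflow as tf

c = tf.constant([1, 2, 3])   # immutable once created
v = tf.Variable([1, 2, 3])   # mutable state
v.assign([4, 5, 6])          # allowed; updates the variable in place
# c has no assign(); "changing" a constant always means creating a new tensor.
```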

How to convert Numpy array to TensorFlow tensor?

tf.convert_to_tensor()
tf.make_ndarray()
tf.constant()
np.array()
tf.convert_to_tensor() converts various objects to Tensor objects. The function accepts Tensor objects, NumPy arrays, Python lists and Python scalars.
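For example (eager mode assumed), converting a NumPy array to a tensor and back:

```python
import numpy as np
import tensorflow as tf

arr = np.array([[1.0, 2.0], [3.0, 4.0]])
t = tf.convert_to_tensor(arr, dtype=tf.float32)  # NumPy array -> Tensor
back = t.numpy()                                 # Tensor -> NumPy array
```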

tf.dtypes defines the datatype in a TensorFlow tensor. How many datatypes are supported in TensorFlow?

30
8
16
42
tf.dtypes supports 30 different data types, including tf.qint32, tf.qint8, tf.float32 and others.
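Data types can be converted between each other with tf.cast; a tiny illustrative example:

```python
import tensorflow as tf

x = tf.constant([1.7, 2.3], dtype=tf.float32)
y = tf.cast(x, tf.int32)        # dtype conversion (values are truncated)
print(x.dtype, y.dtype)         # <dtype: 'float32'> <dtype: 'int32'>
```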

What is the step called in which graph optimizations are done by TensorFlow at runtime?

Graph layout pass
Graph tuning pass
Graph compression pass
Double pass
The graph layout pass is the step in which the operations of a model are optimized by TensorFlow at runtime, before execution. These optimizations include merging multiple operations, removing redundant operations, replacing operations with optimized versions, and more.
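As a hedged sketch, some of these graph-level (Grappler) optimizations can be toggled through tf.config.optimizer; the particular option values below are illustrative:

```python
import tensorflow as tf

tf.config.optimizer.set_experimental_options({
    "layout_optimizer": True,   # data-layout rewrites
    "remapping": True,          # fuse/merge compatible ops
    "constant_folding": True,   # pre-compute constant subgraphs
})
```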

What is MLIR in TensorFlow used for?

Defining common optimizations
Writing models
A graph optimization technique
Another programming language
MLIR (Multi-Level Intermediate Representation) is an intermediate representation (IR) system that sits between a language (like C) or a library (like TensorFlow) and the compiler backend (like LLVM). It allows code reuse between the compiler stacks of different languages, along with other performance and usability benefits.

Which function in TensorFlow is used to convert FP32 data to INT8 data?

QuantizeV2
Dequantize
QuantizedConv2D
Convert
The QuantizeV2 operation in TensorFlow is used to convert FP32 data to INT8 data. It represents the quantize step (the first operation) in quantization.
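From Python, this op is reached through tf.quantization.quantize; a small illustrative sketch with hand-picked ranges:

```python
import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0, 2.0], dtype=tf.float32)

# Lowers to the QuantizeV2 op: FP32 -> quantized INT8 plus the value range.
q = tf.quantization.quantize(x, min_range=-1.0, max_range=2.0, T=tf.qint8)
deq = tf.quantization.dequantize(q.output, q.output_min, q.output_max)
```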

How to use TensorFlow v1.x API in TensorFlow v2.x build?

tf.compat.v1
tf.compat.v2
tf.v1
tf.forward_compatible
tf.compat.v1 and tf.compat.v2 provide the TensorFlow v1.x and v2.x APIs respectively, for forward and backward compatibility.
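For instance, TF 1.x-style placeholder/feed_dict code can still run on a TF 2.x build through the compatibility module (a sketch with illustrative values):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=[2])
y = x * 2.0
with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: [1.0, 3.0]}))   # [2. 6.]
```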

How to get the TensorFlow version?

tf.version.VERSION
tf.version
tf.VERSION
Not possible
tf.version.VERSION is used to get the version number of TensorFlow; it returns a value like 2.5.0. tf.version has other attributes such as COMPILER_VERSION, GIT_VERSION, GRAPH_DEF_VERSION, GRAPH_DEF_VERSION_MIN_CONSUMER and GRAPH_DEF_VERSION_MIN_PRODUCER.
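For example:

```python
import tensorflow as tf

print(tf.version.VERSION)           # e.g. "2.5.0"
print(tf.version.GIT_VERSION)       # git describe string of the build
print(tf.version.COMPILER_VERSION)  # compiler used to build TensorFlow
```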

What is eager execution in TensorFlow?

Execute operations immediately
Create graph before execution
Do advanced optimization
Compile time memory allocation
Eager execution in TensorFlow is an approach where operations are executed immediately, without first building a graph of the model. Because there is no graph, graph-level optimizations are skipped. Eager execution is enabled by default in TensorFlow v2.x.
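A short sketch contrasting eager execution with graph building via tf.function:

```python
import tensorflow as tf

print(tf.executing_eagerly())              # True by default in TF 2.x
a = tf.constant(2.0) + tf.constant(3.0)    # runs immediately
print(a.numpy())                           # 5.0

@tf.function                               # traces a graph that can be optimized
def add(x, y):
    return x + y

print(add(tf.constant(2.0), tf.constant(3.0)).numpy())   # 5.0
```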

To do matrix multiplication in TensorFlow, which op needs to be used?

tf.matmul
tf.conv2d
tf.maxpool
tf.contrib.layers.fully_connected
tf.matmul() op is used to do matrix multiplication in TensorFlow.
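For example:

```python
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
print(tf.matmul(a, b))   # equivalent to the a @ b operator
```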

What data format is used in TensorFlow in its ops by default?

NHWC
NCHW
NCHWc8
NCDHW
The NHWC data format is used by default in TensorFlow ops. NHWC stands for Number of batches, Height, Width, Channels.
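If an NCHW layout is needed (for example for certain GPU kernels), the tensor can be transposed; a small illustrative sketch:

```python
import tensorflow as tf

x_nhwc = tf.random.uniform([1, 8, 8, 3])           # default NHWC layout
x_nchw = tf.transpose(x_nhwc, perm=[0, 3, 1, 2])   # convert to NCHW
print(x_nhwc.shape, x_nchw.shape)                  # (1, 8, 8, 3) (1, 3, 8, 8)
```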

With these questions at OpenGenus, you should now have a strong hold on TensorFlow. Enjoy.

OpenGenus Tech Review Team
