Vivek Praharsha

ML & DL Enthusiast

8 posts
Machine Learning (ML)

Jensen Shannon Divergence

Jensen-Shannon Divergence is a distribution-comparison technique that can be readily used in parametric tests in ML.
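As a quick illustration, here is a minimal NumPy sketch of the divergence; the function name and the small `eps` smoothing term are illustrative choices, not from the article:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    Symmetric, and bounded in [0, ln 2] with the natural log.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()          # normalize to valid distributions
    m = 0.5 * (p + q)                        # mixture distribution
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # identical distributions -> 0.0
```

Unlike plain KL divergence, this value is symmetric in `p` and `q` and always finite, which is what makes it convenient for comparing distributions in tests.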

Vivek Praharsha
TensorFlow

Depthwise Convolution op in TensorFlow (tf.nn.depthwise_conv2d)

This article discusses the Depthwise Convolution operation and how it is implemented in the TensorFlow framework (tf.nn.depthwise_conv2d).
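To show what the operation computes, here is a naive NumPy sketch of depthwise convolution with 'VALID' padding; it is not the TF API itself, and the shapes and names are illustrative:

```python
import numpy as np

def depthwise_conv2d(x, filters, stride=1):
    """Naive depthwise convolution ('VALID' padding).

    x:       input of shape (H, W, C)
    filters: one kernel per channel, shape (kH, kW, C)
    Each channel is convolved with its own filter; channels are never mixed.
    """
    H, W, C = x.shape
    kH, kW, _ = filters.shape
    oH = (H - kH) // stride + 1
    oW = (W - kW) // stride + 1
    out = np.zeros((oH, oW, C))
    for c in range(C):                        # one spatial filter per channel
        for i in range(oH):
            for j in range(oW):
                patch = x[i*stride:i*stride+kH, j*stride:j*stride+kW, c]
                out[i, j, c] = np.sum(patch * filters[:, :, c])
    return out

print(depthwise_conv2d(np.ones((4, 4, 2)), np.ones((3, 3, 2))).shape)  # (2, 2, 2)
```

Keeping channels separate is what makes the operation so much cheaper than a standard convolution, which sums over all input channels for every output channel.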

Vivek Praharsha
TensorFlow

New features in TensorFlow v2.8

TensorFlow 2.8 has finally been released. Let us take a look at some of the new features and improvements rolled out in this version, which brings many additions, bug fixes, and changes.

Vivek Praharsha
Machine Learning (ML)

One hot encoding in TensorFlow (tf.one_hot)

This article discusses One Hot Encoding, one of the most commonly used data pre-processing techniques in Feature Engineering, and its use in TensorFlow.
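For intuition, here is a minimal NumPy sketch that mirrors the behaviour of `tf.one_hot(indices, depth)`; the helper name is illustrative:

```python
import numpy as np

def one_hot(indices, depth):
    """Return a (len(indices), depth) matrix with a 1 in each row at the
    column given by that row's label, and 0 everywhere else."""
    out = np.zeros((len(indices), depth))
    out[np.arange(len(indices)), indices] = 1.0
    return out

print(one_hot([0, 2, 1], 3))
```

Each categorical label becomes its own binary column, so no spurious ordering between categories is fed to the model.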

Vivek Praharsha
TensorFlow

Dropout operation in TensorFlow (tf.nn.dropout)

This article discusses the Dropout layer in TensorFlow (tf.nn.dropout), a special kind of layer used in Deep Neural Networks to prevent or correct the problem of over-fitting.
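Here is a minimal NumPy sketch of inverted dropout, the scheme `tf.nn.dropout` uses; the function name and seed handling are illustrative:

```python
import numpy as np

def dropout(x, rate, training=True, seed=None):
    """Inverted dropout: during training each element is zeroed with
    probability `rate` and survivors are scaled by 1/(1-rate), so the
    expected activation is unchanged. At inference it is the identity."""
    if not training or rate == 0.0:
        return x
    rng = np.random.default_rng(seed)
    keep = rng.random(x.shape) >= rate        # Bernoulli keep-mask
    return np.where(keep, x / (1.0 - rate), 0.0)
```

The rescaling is the key detail: because surviving activations are scaled up during training, no extra scaling is needed at inference time.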

Vivek Praharsha
Machine Learning (ML)

Scaled-YOLOv4 model

The authors of YOLOv4 pushed the model forward by scaling its design and size, outperforming the benchmarks of EfficientDet. The result is the Scaled-YOLOv4 model.

Vivek Praharsha
Machine Learning (ML)

YOLOv4 model architecture

This article discusses YOLOv4's architecture. The model outperforms other object detection models in inference speed, making it the ideal choice for real-time object detection, where the input is a video stream.

Vivek Praharsha
Machine Learning (ML)

ReLU (Rectified Linear Unit) Activation Function

We will take a look at the most widely used activation function, ReLU (Rectified Linear Unit), and understand why it is preferred as the default choice for Neural Networks. This article covers the most important points about this function.
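The function itself is a single element-wise operation; a minimal NumPy sketch:

```python
import numpy as np

def relu(x):
    """ReLU: max(0, x) element-wise. Cheap to compute, and its gradient is 1
    for positive inputs, which helps mitigate vanishing gradients."""
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 3.0])))  # negatives become 0, positives pass through
```

Its simplicity and non-saturating behaviour for positive inputs are the main reasons it is the default choice in deep networks.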

Vivek Praharsha
OpenGenus IQ © 2025 All rights reserved™
Contact - Email: team@opengenus.org
Primary Address: JR Shinjuku Miraina Tower, Tokyo, Shinjuku 160-0022, JP
Office #2: Commercial Complex D4, Delhi, Delhi 110017, IN