Prashant Anand

Bachelor of Technology (2016 to 2020) in Electronics and Communications Engineering at Reva University, Bangalore | Intern at OpenGenus

Bengaluru, Karnataka, India • 13 posts
Software Engineering

What is Blockchain?

Blockchain is an emerging technology that can radically improve banking, supply chains, and other transaction networks, and it can create new opportunities for innovation. Business is full of examples of networks of individuals and organizations that collaborate to create value and wealth.
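
As a rough illustration of the core idea (not from the post itself), here is a minimal Python sketch of a chain of blocks in which each block stores the hash of its predecessor, so tampering with any earlier block invalidates every block after it; the field names are illustrative.

```python
import hashlib
import json
import time

def block_hash(fields):
    """Hash a block's contents, including the previous block's hash."""
    payload = json.dumps(fields, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(data, prev_hash):
    """Create a block that links back to the previous block via its hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

# Build a tiny chain: a genesis block, then blocks linked by hashes.
chain = [new_block("genesis", "0" * 64)]
chain.append(new_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(new_block("Bob pays Carol 2", chain[-1]["hash"]))

# Any change to an earlier block breaks the links that follow it.
for prev, curr in zip(chain, chain[1:]):
    fields = {k: prev[k] for k in ("timestamp", "data", "prev_hash")}
    assert curr["prev_hash"] == block_hash(fields)
print("chain is consistent")
```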

Prashant Anand
Machine Learning (ML)

Bias in Machine learning

Bias is a constant parameter in a Neural Network that is used to adjust the output. In other words, bias is an additional parameter that helps the model fit the given data more closely. Bias terms are also known as bias nodes, bias neurons, or bias units.
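
A minimal NumPy sketch of the role of bias, assuming a single neuron with illustrative weights: the bias term shifts the neuron's activation so the decision boundary need not pass through the origin.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # inputs to a single neuron (illustrative)
w = np.array([0.4, 0.1, -0.2])   # weights (illustrative)
b = 1.5                          # bias: shifts the activation

# Without a bias the neuron can only represent functions through the origin;
# the bias lets the output be shifted to better fit the data.
print(sigmoid(w @ x))        # no bias
print(sigmoid(w @ x + b))    # with bias
```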

Prashant Anand
Machine Learning (ML)

Generative Model

A Generative Model is a way of learning any kind of data distribution. Generative modelling algorithms process the training data and learn a compact summary of it. The main aim is to learn the true data distribution of the training set so that new data points can be generated with some variation.
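
As a toy illustration (not from the post), a generative model can be as simple as fitting a Gaussian to the training data and sampling new points from it; the numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(loc=5.0, scale=2.0, size=1000)   # "training set"

# Learn an estimate of the data distribution: here just a Gaussian's parameters.
mu, sigma = train.mean(), train.std()

# Generate new data points with some variation by sampling the learned distribution.
samples = rng.normal(loc=mu, scale=sigma, size=5)
print(mu, sigma, samples)
```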

Prashant Anand
Machine Learning (ML)

Discriminative Model

Discriminative models, also referred to as conditional models, are a class of models used in statistical classification, especially in supervised machine learning. Discriminative modelling studies P(y|x), i.e. it predicts the probability of the target y given a training sample x.
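
A small sketch of discriminative modelling using scikit-learn's LogisticRegression, which estimates P(y|x) directly without modelling P(x); the synthetic data is illustrative and not from the post.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic labels

# Logistic regression learns the conditional distribution P(y | x).
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[1.0, 0.5]]))    # estimated [P(y=0|x), P(y=1|x)]
```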

Prashant Anand
Machine Learning (ML)

XGBoost

XGBoost is short for eXtreme Gradient Boosting. It is an optimized distributed gradient boosting library designed to be highly efficient, flexible and portable. It implements machine learning algorithms under the Gradient Boosting framework and provides parallel tree boosting (also known as GBDT or GBM).
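
A minimal usage sketch, assuming the xgboost Python package and scikit-learn are installed; the dataset and hyperparameters are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier   # pip install xgboost

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Gradient-boosted trees: each new tree fits the errors of the current ensemble.
model = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
model.fit(X_tr, y_tr)
print(accuracy_score(y_te, model.predict(X_te)))
```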

Prashant Anand
Machine Learning (ML)

Independent Component Analysis (ICA)

Independent component analysis (ICA) is a statistical and computational technique for revealing hidden factors that underlie sets of random variables, measurements, or signals, and is a special case of blind source separation. A common application is isolating one person's speech in a noisy room (the cocktail party problem).
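
A small sketch of blind source separation with scikit-learn's FastICA: two synthetic signals are mixed and then recovered; the mixing matrix and signals are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                       # source 1: sinusoid
s2 = np.sign(np.sin(3 * t))              # source 2: square wave
S = np.c_[s1, s2] + 0.05 * rng.normal(size=(2000, 2))

A = np.array([[1.0, 0.5], [0.5, 2.0]])   # mixing matrix ("the room")
X = S @ A.T                              # observed mixtures (e.g. two microphones)

# ICA recovers the independent sources, up to scale and permutation.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)
print(S_est.shape)
```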

Prashant Anand
Machine Learning (ML)

Bayesian model

A Bayesian model is a statistical model where probability is used to represent the uncertainty in both the inputs and the outputs of the model. The basic idea is that we start with a prior assumption, which is then adjusted as data is observed. We look into Bayesian Linear Regression as well.
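
As a brief sketch of Bayesian Linear Regression, scikit-learn's BayesianRidge places a prior on the weights, updates it from the data, and returns predictions with an uncertainty estimate; the synthetic data is illustrative.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = 2.0 * X.ravel() + 1.0 + rng.normal(scale=0.5, size=100)

# The prior over the weights is updated by the data, so predictions come with
# an uncertainty estimate rather than just a point value.
model = BayesianRidge().fit(X, y)
mean, std = model.predict([[1.5]], return_std=True)
print(mean, std)
```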

Prashant Anand
Machine Learning (ML)

Hidden Markov Model

A Hidden Markov Model is a stochastic model describing a sequence of possible events in which the probability of each event depends on the state attained in the previous event, with the states themselves hidden and observed only indirectly. Markov models can be used in real-life forecasting problems, although a simple Markov model cannot be used for customer-level predictions.
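
A minimal NumPy sketch of the forward algorithm for a hypothetical two-state HMM; the transition, emission and initial probabilities below are made-up illustrative values, not from the post.

```python
import numpy as np

# Hypothetical HMM with hidden states (Rainy, Sunny) and observations (walk, shop, clean).
A   = np.array([[0.7, 0.3],        # state transition probabilities
                [0.4, 0.6]])
B   = np.array([[0.1, 0.4, 0.5],   # emission probabilities P(obs | state)
                [0.6, 0.3, 0.1]])
pi  = np.array([0.6, 0.4])         # initial state distribution
obs = [0, 1, 2]                    # observed sequence: walk, shop, clean

# Forward algorithm: likelihood of the observation sequence under the model.
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
print(alpha.sum())                 # P(observations | model)
```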

Prashant Anand
Machine Learning (ML)

Markov Chain

A Markov chain is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. It refers to the sequence of random variables such a process moves through, with the Markov property of serial dependence.
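
A short NumPy sketch of a Markov chain, assuming a made-up two-state weather model: the next state is sampled using only the current state's row of the transition matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
states = ["sunny", "rainy"]
P = np.array([[0.8, 0.2],    # transition matrix: row i gives next-state probabilities
              [0.5, 0.5]])

# Simulate the chain: the next state depends only on the current state.
state = 0
path = [states[state]]
for _ in range(10):
    state = rng.choice(2, p=P[state])
    path.append(states[state])
print(path)
```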

Prashant Anand
Machine Learning (ML)

Residual Network (ResNet)

ResNet makes it possible to train networks with hundreds or even thousands of layers while still achieving compelling performance. Thanks to this technique, its authors were able to train a network with 152 layers while keeping its complexity lower than VGGNet, and it achieves a top-5 error rate of 3.57%.
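
A minimal PyTorch sketch of the residual (skip) connection that makes such depths trainable; the block below is a simplified illustration, not the exact ResNet-152 building block.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Simplified residual block: output = F(x) + x (identity skip connection)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # the skip connection that defines ResNet

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)   # spatial size and channels are preserved
```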

Prashant Anand
Machine Learning (ML)

GoogleNet / InceptionNet

The winner of the ILSVRC 2014 competition was GoogleNet from Google. It achieved a top-5 error rate of 6.67%. GoogleNet has 22 layers and almost 12x fewer parameters than AlexNet (so it is faster and smaller, yet much more accurate). The idea was to build a model that could also run on a smartphone.
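
A simplified PyTorch sketch of an Inception-style block with parallel convolution branches concatenated along the channel axis; the branch widths are illustrative, not GoogleNet's actual configuration.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Simplified Inception-style block: parallel 1x1, 3x3, 5x5 and pooling branches."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 8, kernel_size=1),
                                nn.Conv2d(8, 16, kernel_size=3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 8, kernel_size=1),
                                nn.Conv2d(8, 16, kernel_size=5, padding=2))
        self.pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                  nn.Conv2d(in_ch, 16, kernel_size=1))

    def forward(self, x):
        # Concatenate the branch outputs along the channel dimension.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.pool(x)], dim=1)

x = torch.randn(1, 32, 28, 28)
print(InceptionBlock(32)(x).shape)   # channels: 16 * 4 = 64
```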

Prashant Anand
Machine Learning (ML)

Architecture of AlexNet and its current use

AlexNet is a Deep Convolutional Neural Network (CNN) for image classification that won the ILSVRC-2012 competition with a top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. We examine its architecture and compare it with GoogleNet and ResNet.
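
Assuming torchvision is installed, a randomly initialised AlexNet can be instantiated in one line and run on an ImageNet-sized input; this is just a usage sketch, not the post's content.

```python
import torch
from torchvision import models

# Instantiate AlexNet (random weights) and run a dummy ImageNet-sized batch.
net = models.alexnet()                 # 5 convolutional layers + 3 fully connected layers
x = torch.randn(1, 3, 224, 224)        # one 224x224 RGB image
print(net(x).shape)                    # torch.Size([1, 1000]) class scores
```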

Prashant Anand
Machine Learning (ML)

Types of Activation Functions used in Machine Learning

We explored the various types of activation functions that are used in Machine Learning, including the Identity function, Binary Step, Sigmoid, Tanh, ReLU, Leaky ReLU and Softmax. Activation functions help the network use the useful information and suppress the irrelevant data points.
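
A quick NumPy sketch of the listed activation functions applied to the same input vector; the input values are illustrative.

```python
import numpy as np

z = np.array([-2.0, 0.0, 3.0])

print("identity   ", z)
print("binary step", np.where(z >= 0, 1, 0))
print("sigmoid    ", 1.0 / (1.0 + np.exp(-z)))
print("tanh       ", np.tanh(z))
print("relu       ", np.maximum(0, z))
print("leaky relu ", np.where(z > 0, z, 0.01 * z))

e = np.exp(z - z.max())            # subtract the max for numerical stability
print("softmax    ", e / e.sum())
```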

Prashant Anand