
Machine Learning (ML)

Machine Learning is one of the fastest growing fields with immense potential: it enables computers to perform specific tasks better than humans. It is actively used at companies like Apple, Tesla, Google and Facebook. We cover the latest developments in the field.

Machine Learning (ML)

Understand Autoencoders by implementing in TensorFlow

Autoencoders are a type of unsupervised neural network with two components: an encoder and a decoder. We provide an intuitive explanation of how autoencoders work along with a step-by-step TensorFlow implementation.

Surya Pratap Singh
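As a rough illustration of the encoder/decoder split described above, a minimal TensorFlow/Keras sketch might look like the following; the 784-dimensional flattened input (e.g. MNIST) and the 32-dimensional code are illustrative choices for this sketch, not the article's exact setup.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = tf.keras.Input(shape=(784,))                  # flattened input image
encoded = layers.Dense(32, activation="relu")(inputs)  # encoder: compress to a small latent code
decoded = layers.Dense(784, activation="sigmoid")(encoded)  # decoder: reconstruct the input

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# Trained to reproduce its own input:
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)
```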
Machine Learning (ML)

Concept Whitening in Machine Learning

Concept Whitening is a technique for understanding the inner workings of a neural network. Rather than searching for explanations among the vast number of trained parameters, it tries to find answers by making the network itself interpretable.

Srishti Guleria
Machine Learning (ML)

ELMo: Deep contextualized word representations

ELMo is a state-of-the-art NLP model developed by researchers at the Paul G. Allen School of Computer Science & Engineering, University of Washington. In this article, we go through ELMo in depth and understand how it works.

Zuhaib Akhtar
Machine Learning (ML)

Differentiating fake faces using simple ML and computer vision

We explore a technique to differentiate fake faces from real ones using simple Machine Learning (ML) and computer vision, using the Large-scale CelebFaces Attributes (CelebA) dataset.

Akshay Atam
Machine Learning (ML)

How is an ML Dataset designed?

In this article, we explore how a Machine Learning (ML) dataset is designed, using the example of the PASCAL Visual Object Classes (VOC) dataset: how it evolved over the years and how the initial dataset was prepared.

Ayush Mehar
Machine Learning (ML)

Different Basic Operations in CNN

We explore the different operations in a CNN (Convolutional Neural Network) such as the convolution operation, pooling, flattening, padding, fully connected layers, activation functions (like softmax) and batch normalization.

Apoorva Kandpal
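As a rough sketch (not the article's exact network), the operations listed above can be strung together in a small Keras model; the input shape and layer sizes here are arbitrary illustrative choices.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),                             # e.g. single-channel 28x28 images
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),  # convolution with padding
    layers.BatchNormalization(),                                   # batch normalization
    layers.MaxPooling2D((2, 2)),                                   # pooling
    layers.Flatten(),                                              # flattening
    layers.Dense(64, activation="relu"),                           # fully connected layer
    layers.Dense(10, activation="softmax"),                        # softmax over 10 classes
])
model.summary()
```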
Machine Learning (ML)

ResNet50 v1.5 architecture

ResNet50 v1.5 is a modified version of the original ResNet50 model. It is slightly more accurate (about 0.5%) at the cost of slightly lower performance (roughly 5% slower) compared to the original ResNet50.

Ayush Mehar
Machine Learning (ML)

Multilayer Perceptron

The multilayer perceptron is a fundamental concept in Machine Learning (ML) that led to the first successful ML model, the Artificial Neural Network (ANN). We explore the idea of the multilayer perceptron in depth.

Ayush Mehar
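For illustration only, a small multilayer perceptron can be trained with scikit-learn; the Iris dataset and the two 16-unit hidden layers are assumptions made for this sketch, not part of the article.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two fully connected hidden layers of 16 units, each followed by ReLU.
mlp = MLPClassifier(hidden_layer_sizes=(16, 16), activation="relu",
                    max_iter=1000, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))
```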
Machine Learning (ML)

Multilayer Perceptrons vs CNN

We explore the key differences between the multilayer perceptron and the CNN in depth. Both are fundamental concepts in Machine Learning. When we apply activation functions to a multilayer perceptron, we get an Artificial Neural Network (ANN), one of the earliest ML models.

Ayush Mehar
Machine Learning (ML)

Disadvantages of RNN

We explore the disadvantages of RNNs in depth. Recurrent Neural Networks (RNNs) were the first neural networks of their kind able to analyze and learn from sequences of data rather than individual instances.

Dishant Parikh
Machine Learning (ML)

BERT base vs BERT large

The BERT base model has 12 encoder layers stacked on top of each other, whereas BERT large has 24.

Zuhaib Akhtar
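Assuming the Hugging Face transformers package and the standard uncased checkpoints, the layer counts can be checked directly from the published model configurations (this sketch downloads the configs on first run):

```python
from transformers import BertConfig

base = BertConfig.from_pretrained("bert-base-uncased")
large = BertConfig.from_pretrained("bert-large-uncased")
print(base.num_hidden_layers, base.hidden_size)    # 12 encoder layers, hidden size 768
print(large.num_hidden_layers, large.hidden_size)  # 24 encoder layers, hidden size 1024
```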
Machine Learning (ML)

Applications of NLP: Extraction from PDF, Language Translation and more

In this article, we explore core NLP applications such as text extraction from PDFs, language translation, text classification, question answering, text to speech, speech to text and more.

Shubham Sood
Machine Learning (ML)

Applications of NLP: Text Generation, Text Summarization and Sentiment Analysis

In this article, we explore three core NLP applications: text generation using GPT models, text summarization and sentiment analysis.

Shubham Sood
Machine Learning (ML)

Differences between Standardization, Regularization, Normalization in ML

We cover the differences between standardization, regularization and normalization in depth, along with introductory background and a complete explanation of the key terms.

Ayush Mehar
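As a minimal scikit-learn sketch of the three terms (with made-up numbers): standardization and normalization rescale the features, while regularization penalizes the model's weights.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import Ridge

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
y = np.array([1.0, 2.0, 3.0])

X_std = StandardScaler().fit_transform(X)   # standardization: zero mean, unit variance
X_norm = MinMaxScaler().fit_transform(X)    # normalization: rescale to the [0, 1] range
model = Ridge(alpha=1.0).fit(X_std, y)      # L2 regularization on the coefficients
print(X_std, X_norm, model.coef_, sep="\n")
```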
Machine Learning (ML)

ALBERT (A Lite BERT) NLP model

ALBERT stands for A Lite BERT and is a modified version of the BERT NLP model. It builds on three key ideas: parameter sharing, embedding factorization and Sentence Order Prediction (SOP).

Zuhaib Akhtar
Machine Learning (ML)

Different core topics in NLP (with Python NLTK library code)

In this article, we cover different NLP tasks and topics such as tokenization of sentences and words, stemming, lemmatization, POS tagging, Named Entity Recognition and more.

Shubham Sood
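A short sketch of the corresponding NLTK calls, assuming the required NLTK data packages (tokenizer, tagger and WordNet data) have been downloaded; the example sentence is made up for illustration.

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

for pkg in ("punkt", "averaged_perceptron_tagger", "wordnet"):
    nltk.download(pkg, quiet=True)

text = "OpenGenus articles are covering the latest developments in NLP."
tokens = nltk.word_tokenize(text)                          # tokenization of words
print(nltk.sent_tokenize(text))                            # tokenization of sentences
print([PorterStemmer().stem(t) for t in tokens])           # stemming
print(WordNetLemmatizer().lemmatize("covering", pos="v"))  # lemmatization
print(nltk.pos_tag(tokens))                                # POS tagging
```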
Machine Learning (ML)

XLNet, RoBERTa, ALBERT models for Natural Language Processing (NLP)

We explore advanced NLP models such as XLNet, RoBERTa and ALBERT and compare them to see how they differ from the fundamental model, BERT.

Shubham Sood
Machine Learning (ML)

LSTM & BERT models for Natural Language Processing (NLP)

LSTM was the fundamental NLP model used initially, but because of its drawbacks BERT became the favored model for NLP tasks.

Shubham Sood
Machine Learning (ML)

NASNet - A brief overview

NASNet stands for Neural Architecture Search (NAS) Network and is a Machine Learning model. Its key principles differ from those of standard models like GoogLeNet, and it is likely to bring a major breakthrough in AI.

Devika Nair
Machine Learning (ML)

The Idea of Indexing in NLP for Information Retrieval

We explore the fundamental idea behind Information Retrieval: indexing data. We cover various types of indexes, such as the term-document incidence matrix and the inverted index, along with boolean queries and dynamic and distributed indexing.

Shubham Sood
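As a toy illustration of the inverted-index idea, each term can be mapped to the set of documents containing it, and a boolean AND query is then answered by intersecting posting lists; the three documents below are invented for this sketch.

```python
from collections import defaultdict

docs = {
    0: "information retrieval with an inverted index",
    1: "the inverted index maps terms to documents",
    2: "boolean queries intersect posting lists",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)           # posting list for each term

# Boolean AND query: documents containing both terms
print(index["inverted"] & index["index"])  # {0, 1}
```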
Machine Learning (ML)

Applications of Random Forest

Random Forest is mainly used for classification tasks and is widely used in production applications such as credit card fraud detection, cardiovascular disease diagnosis and many more.

Mansi Meena
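For illustration, a Random Forest classifier can be fitted with scikit-learn on a synthetic, imbalanced binary dataset standing in for something like fraud detection; the data here is generated, not a real fraud dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced data: ~95% of samples in the "legitimate" class.
X, y = make_classification(n_samples=1000, n_features=10, weights=[0.95],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```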
Machine Learning (ML)

Regression vs Correlation

We explore the key differences between correlation and regression along with the basic idea behind both concepts. There are five main differences between regression and correlation.

Ayush Mehar
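A small NumPy sketch of the distinction: correlation measures the strength of the linear relationship, while regression fits a line that predicts y from x. The numbers are made up for illustration.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

r = np.corrcoef(x, y)[0, 1]             # correlation coefficient, between -1 and 1
slope, intercept = np.polyfit(x, y, 1)  # regression line y = slope*x + intercept
print(f"r = {r:.3f}, y = {slope:.2f}x + {intercept:.2f}")
```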
Machine Learning (ML)

Wide and Deep Learning Model

The Wide and Deep Learning model is an ML/DL model with two main components: a memorizing component (a linear model) and a generalizing component (a neural network), combined with cross products of the input features. The Wide and Deep Learning model is used in recommendation systems.

Sneha Gupta
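A minimal Keras functional-API sketch of the idea, assuming the cross-product features are precomputed into the wide input; the feature dimensions and layer sizes are illustrative, not from the original model.

```python
import tensorflow as tf
from tensorflow.keras import layers

wide_input = tf.keras.Input(shape=(20,), name="wide")  # raw + crossed features
deep_input = tf.keras.Input(shape=(10,), name="deep")  # dense features

wide = layers.Dense(1)(wide_input)                     # memorizing part: linear model
deep = layers.Dense(32, activation="relu")(deep_input) # generalizing part: small neural network
deep = layers.Dense(1)(layers.Dense(16, activation="relu")(deep))

# Combine both parts and squash to a probability (e.g. click / no click).
output = layers.Activation("sigmoid")(layers.Add()([wide, deep]))
model = tf.keras.Model([wide_input, deep_input], output)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```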
Machine Learning (ML)

RoBERTa: Robustly Optimized BERT pre-training Approach

RoBERTa (Robustly Optimized BERT pre-training Approach) is an NLP model developed by Facebook as a modified version of the popular NLP model BERT. It is essentially an approach to better train and optimize BERT (Bidirectional Encoder Representations from Transformers).

Zuhaib Akhtar
Machine Learning (ML)

Introduction to GPT models

Generative Pre-Training (GPT) models are trained on unlabeled datasets (which are available in abundance). We explore the different variants, such as GPT-1, GPT-2 and GPT-3.

Zuhaib Akhtar