Natural Language Processing (NLP)

Natural Language Processing (NLP) is the field of statistical analysis of textual data, covering tasks such as text summarization, topic modeling and much more.

Machine Learning (ML)

Embeddings in BERT

We will see what BERT (Bidirectional Encoder Representations from Transformers) is, how BERT actually works, and which embeddings in BERT make it so effective compared to other NLP techniques.

Adith Narein T
Natural Language Processing (NLP)

Word Embedding [Complete Guide]

We have explained the idea behind word embeddings, why they are important, and different word embedding algorithms such as embedding layers, Word2Vec and others (a minimal Word2Vec sketch follows this entry).

Kevin Ezra
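As a quick illustration of one of these algorithms, here is a minimal Word2Vec sketch, assuming the gensim library (4.x API) is installed; the toy corpus and the vector_size, window and epochs values are illustrative choices, not settings from the article.

from gensim.models import Word2Vec

# A tiny tokenized corpus, purely for illustration.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "dog", "chases", "the", "cat"],
    ["the", "cat", "chases", "the", "mouse"],
]

# Train a small skip-gram model (sg=1) on the toy corpus.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=100)

print(model.wv["king"][:5])                   # first 5 dimensions of the "king" vector
print(model.wv.most_similar("king", topn=3))  # nearest neighbours in the embedding space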
Natural Language Processing (NLP)

Why spaCy over NLTK?

We have listed 10 aspects where spaCy outshines NLTK, along with cases where NLTK outperforms spaCy.

Neeha Rathna Janjanam
Machine Learning (ML)

Applications of NLP: Extraction from PDF, Language Translation and more

In this article, we have explored core NLP applications such as text extraction, language translation, text classification, question answering, text-to-speech, speech-to-text and more.

Shubham Sood
Machine Learning (ML)

Applications of NLP: Text Generation, Text Summarization and Sentiment Analysis

In this article, we have explored 3 core NLP applications: text generation using GPT models, text summarization and sentiment analysis.

Shubham Sood
Machine Learning (ML)

ALBERT (A Lite BERT) NLP model

ALBERT stands for A Lite BERT and is a modified version of the BERT NLP model. It builds on three key ideas: parameter sharing, embedding factorization and Sentence Order Prediction (SOP).

Zuhaib Akhtar
Machine Learning (ML)

Different core topics in NLP (with Python NLTK library code)

In this article, we have covered different NLP tasks/topics such as tokenization of sentences and words, stemming, lemmatization, POS tagging, Named Entity Recognition and more (a minimal NLTK sketch follows this entry).

Shubham Sood
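As a quick illustration of these tasks, here is a minimal NLTK sketch; the sample sentence is made up, and the downloaded resource names correspond to typical NLTK releases.

import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-time download of the required NLTK resources.
for pkg in ("punkt", "averaged_perceptron_tagger", "wordnet",
            "maxent_ne_chunker", "words"):
    nltk.download(pkg, quiet=True)

sentence = "OpenGenus publishes articles on Natural Language Processing in London."

tokens = nltk.word_tokenize(sentence)            # tokenization of words
print("Tokens:", tokens)

print("Stems:", [PorterStemmer().stem(t) for t in tokens])             # stemming
print("Lemmas:", [WordNetLemmatizer().lemmatize(t) for t in tokens])   # lemmatization

tagged = nltk.pos_tag(tokens)                    # POS tagging
print("POS tags:", tagged)

print("Named entities:", nltk.ne_chunk(tagged))  # named entity recognition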
Machine Learning (ML)

XLNet, RoBERTa, ALBERT models for Natural Language Processing (NLP)

We have explored some advanced NLP models such as XLNet, RoBERTa and ALBERT, and compared them with the fundamental model, i.e. BERT, to see how they differ.

Shubham Sood
Machine Learning (ML)

LSTM & BERT models for Natural Language Processing (NLP)

The LSTM model was initially the fundamental model used for NLP tasks, but because of its drawbacks BERT became the favored model.

Shubham Sood
Machine Learning (ML)

The Idea of Indexing in NLP for Information Retrieval

We have explored the fundamental idea behind Information Retrieval, that is, indexing data. We have covered various types of indexes such as the term-document incidence matrix and the inverted index, along with boolean queries, dynamic indexing and distributed indexing (a minimal inverted index sketch follows this entry).

Shubham Sood
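As a quick illustration of the inverted index idea, here is a minimal sketch in Python; the three sample documents and the query terms are illustrative only.

from collections import defaultdict

docs = {
    0: "new home sales top forecasts",
    1: "home sales rise in july",
    2: "increase in home sales in july",
}

# Inverted index: each term maps to the set of document IDs that contain it.
inverted_index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        inverted_index[term].add(doc_id)

# Boolean AND query: documents containing both "home" and "july".
print(sorted(inverted_index["home"] & inverted_index["july"]))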
Machine Learning (ML)

Heaps' law in NLP for Frequency of Words

Heaps' law in NLP is a relation between the number of unique words and the total number of words in a document. It is also known as Herdan's law (a small numeric sketch follows this entry).

Shubham Sood
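As a quick numeric illustration of Heaps' law, V(n) ≈ K * n^beta, here is a minimal sketch; the constants K and beta below are illustrative values in their typical ranges, not figures from the article.

# Heaps' law: vocabulary size V grows with total token count n as V(n) ≈ K * n**beta.
# K and beta are illustrative constants (typical ranges: K ≈ 10-100, beta ≈ 0.4-0.6).
K, beta = 44.0, 0.49

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"n = {n:>9,d} tokens  ->  predicted vocabulary V(n) ≈ {K * n ** beta:,.0f}")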
Machine Learning (ML)

Zipf's Law in NLP

According to Zipf's law, the frequency of a given word is roughly inversely proportional to its rank. Zipf's law is one of the many important laws that play a significant part in natural language processing (a small sketch follows this entry).

Ashvith Shetty
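As a quick illustration of Zipf's law, here is a minimal sketch comparing observed word counts with the 1/rank prediction; the sample sentence is arbitrary and only meant to show the computation.

from collections import Counter

text = ("to be or not to be that is the question "
        "whether tis nobler in the mind to suffer").lower().split()

counts = Counter(text)
top_freq = counts.most_common(1)[0][1]

# Zipf's law predicts freq(rank) ≈ freq(1) / rank.
for rank, (word, freq) in enumerate(counts.most_common(5), start=1):
    print(f"rank {rank}: {word!r:12} observed {freq}, Zipf prediction ≈ {top_freq / rank:.1f}")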
Machine Learning (ML)

Byte Pair Encoding for Natural Language Processing (NLP)

Byte Pair Encoding is originally a compression algorithm that was adapted for NLP. It comes in handy for handling the out-of-vocabulary problem by building subword units through a bottom-up merge process (a minimal sketch of the merge step follows this entry).

Ethan Z. Booker
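As a quick illustration of the bottom-up merge process, here is a minimal Byte Pair Encoding sketch; the toy vocabulary is the classic low/lower/newest/widest example and is illustrative only.

from collections import Counter

# Words are represented as tuples of symbols; "</w>" marks the end of a word.
vocab = Counter({
    ("l", "o", "w", "</w>"): 5,
    ("l", "o", "w", "e", "r", "</w>"): 2,
    ("n", "e", "w", "e", "s", "t", "</w>"): 6,
    ("w", "i", "d", "e", "s", "t", "</w>"): 3,
})

def most_frequent_pair(vocab):
    # Count every adjacent symbol pair, weighted by word frequency.
    pairs = Counter()
    for word, freq in vocab.items():
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(pair, vocab):
    # Replace every occurrence of the chosen pair with a single merged symbol.
    merged = {}
    for word, freq in vocab.items():
        new_word, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                new_word.append(word[i] + word[i + 1])
                i += 2
            else:
                new_word.append(word[i])
                i += 1
        merged[tuple(new_word)] = freq
    return merged

for step in range(5):
    pair = most_frequent_pair(vocab)
    vocab = merge_pair(pair, vocab)
    print(f"merge {step + 1}: {pair}")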
Machine Learning (ML)

A Deep Learning Approach for Native Language Identification (NLI)

Native language identification (NLI) is the task of determining an author's native language based only on their writings or speeches in a second language. In this article, we will implement a model to identify the native language of the author.

Zuhaib Akhtar
Machine Learning (ML)

Complete Guide on different Spell Correction techniques in NLP

This is the complete guide on different spell correction techniques in Natural Language Processing (NLP), where we have explored approximate string matching techniques, coarse search, fine search, SymSpell and Seq2Seq, along with code demonstrations.

Ashish Kumar Sinha
Machine Learning (ML)

Different Word Representations

We have discussed different word representations such as distributional representation, clustering-based representation and distributed representation, with several sub-types for each.

Chaitanyasuma Jain
Machine Learning (ML)

Topic Modeling using Non-Negative Matrix Factorization (NMF)

Non-Negative Matrix Factorization is a statistical method to reduce the dimensionality of the input corpora. It uses a factor analysis approach to give comparatively less weight to words with less coherence (a minimal scikit-learn sketch follows this entry).

Murugesh Manthiramoorthi
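As a quick illustration of topic modeling with NMF, here is a minimal sketch assuming scikit-learn (1.0 or newer for get_feature_names_out); the four documents and the choice of 2 topics are illustrative only.

from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the stock market fell as investors sold shares",
    "shares and bonds moved lower in the market",
    "the team won the football match last night",
    "a late goal decided the football game",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

# Factorize the TF-IDF matrix into 2 non-negative topic components.
nmf = NMF(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()

for topic_id, weights in enumerate(nmf.components_):
    top_terms = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {topic_id}:", top_terms)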
Machine Learning (ML)

Sentiment Analysis Techniques

Sentiment Analysis is the task of analyzing text data and predicting the emotion associated with it. This is a challenging Natural Language Processing problem, and there are several established approaches which we will go through.

Chaitanyasuma Jain
Machine Learning (ML)

Text Summarization using RNN

An Encoder-Decoder RNN (Recurrent Neural Network) model is used to overcome the limitations faced in NLP text summarization, such as producing a short and accurate summary.

Ashutosh Vashisht
Machine Learning (ML)

Latent Dirichlet Allocation (LDA)

Latent Dirichlet Allocation (LDA) is a topic modelling technique, that is, it can assign the text in a document to a particular topic. It uses Dirichlet distributions to model the topics for each document.

Harsh Bansal
Machine Learning (ML)

Topic Modelling Techniques in NLP

Topic modelling is the task of extracting the topic or topics from a collection of documents. We have explored different techniques such as LDA, NMF, LSA, PLDA and PAM.

Murugesh Manthiramoorthi
Machine Learning (ML)

Implement Document Clustering using K Means in Python

In this article, we discuss concepts like TF-IDF, document similarity and K-Means, and create a demo of document clustering in Python (a minimal scikit-learn sketch follows this entry).

Chaitanyasuma Jain
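As a quick illustration, here is a minimal document clustering sketch with TF-IDF and K-Means, assuming scikit-learn is installed; the four sample documents and k = 2 are illustrative only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "the cat sat on the mat",
    "dogs and cats are popular pets",
    "stock markets fell sharply today",
    "investors worry about market volatility",
]

# Represent each document as a TF-IDF vector, then cluster with K-Means.
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for doc, label in zip(docs, labels):
    print(label, doc)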
Machine Learning (ML)

TextRank for Text Summarization

TextRank is a technique used in Natural Language Processing to generate document summaries. It is an unsupervised, extractive, graph-based summarization technique based on PageRank.

Chaitanyasuma Jain
Machine Learning (ML)

Text classification using K Nearest Neighbors (KNN)

In this article, we demonstrate how the K-Nearest Neighbors (KNN) algorithm can be used to classify input text into different categories. We used the 20 Newsgroups dataset for a demo (a minimal sketch follows this entry).

Harshiv Patel
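As a quick illustration, here is a minimal KNN text classification sketch on the 20 Newsgroups dataset, assuming scikit-learn is installed (the dataset is downloaded on first use); the two categories and k = 5 are illustrative choices, not necessarily those used in the article.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

cats = ["sci.space", "rec.sport.hockey"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

# Vectorize the documents with TF-IDF and classify with k-nearest neighbours.
vec = TfidfVectorizer(stop_words="english")
X_train, X_test = vec.fit_transform(train.data), vec.transform(test.data)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, train.target)
print("test accuracy:", knn.score(X_test, test.target))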
Machine Learning (ML)

PageRank

PageRank is an algorithm that assigns weights to the nodes of a graph based on the graph structure. It was developed by Larry Page and is widely used in the Google Search Engine (a minimal power-iteration sketch follows this entry).

Ashutosh Vashisht
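As a quick illustration, here is a minimal power-iteration sketch of PageRank on a tiny hand-made graph; the graph, the damping factor d = 0.85 and the iteration count are illustrative choices.

links = {          # node -> list of nodes it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

d, n = 0.85, len(links)
rank = {node: 1.0 / n for node in links}

# Power iteration: each node receives rank from the nodes linking to it.
for _ in range(50):
    new_rank = {}
    for node in links:
        incoming = sum(rank[src] / len(outs)
                       for src, outs in links.items() if node in outs)
        new_rank[node] = (1 - d) / n + d * incoming
    rank = new_rank

print({node: round(score, 3) for node, score in rank.items()})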