
Machine Learning (ML)

Machine Learning is a fast-growing field with great potential that enables a computer to perform specific tasks better than humans. It is actively used at companies like Apple, Tesla, Google and Facebook. We cover the latest developments in the field.

Machine Learning (ML)

Popular Datasets in Machine Learning

Data sets are important in Machine Learning: the more and better data we have, the better the model. Popular data sets available for machine learning include ImageNet, MNIST, NIST, CIFAR-10 and YouTube 8M.

OpenGenus Tech Review Team
TensorFlow

Key ideas in TensorFlow

TensorFlow is a popular Deep Learning library, backed by Google, that is used to build Deep Learning models. A few key ideas of TensorFlow are tensors, distributed computing, kernel abstraction, operation abstraction, computational graphs, automatic gradient computation and others.

TensorFlow

Build and install TensorFlow from source with MKL DNN support and AVX enabled

In this guide, we will walk you through building and installing TensorFlow from source with MKL DNN support and AVX enabled. By following six simple steps, you can build and install TensorFlow from source in about 20 minutes.

Machine Learning (ML)

Differences between Torch and PyTorch deep learning libraries

We have explored the differences between two popular frameworks, Torch and PyTorch, in terms of common origin, current development status, source code and implementation, usage, performance and ONNX support. As development of Torch has been paused, you should go with PyTorch.

openmp

OpenMP clauses: firstprivate, lastprivate, ordered

There are three basic OpenMP clauses: firstprivate, lastprivate and ordered. The firstprivate clause initializes each thread's private copy of a variable with its value from the serial part of the code, whereas the private clause leaves the copy uninitialized.

openmp

Basic OpenMP functions

There are 3 basic functions in OpenMP: omp_get_thread_num, omp_set_num_threads(nthreads) and omp_get_num_threads. We have given a basic C/C++ example to demonstrate the use of each function and the observed output as well.

openmp

When to use OpenMP directives?

If some section of your code can be parallelized and you have more than one processor, you can speed up the execution of your program using OpenMP directives. We have demonstrated how to use Amdahl's law to calculate the expected speedup, and how to use profiling to find which sections take the most time.

openmp

Introduction to OpenMP

OpenMP is an API for multi-threaded parallel programming on shared-memory multi-processor (multi-core) computers. In an OpenMP program, some parts run as a single thread and some parts run multi-threaded. We have covered the advantages, disadvantages, an industry use case and the overall approach of OpenMP.

Machine Learning (ML)

ONNX format for interchangeable AI models

The Open Neural Network Exchange Format (ONNX) is a format for exchanging deep learning/artificial intelligence models. It makes deep learning models portable, thus preventing vendor lock-in. We have provided a real-life use case of ONNX, its benefits, and its key ideas and challenges.

Machine Learning (ML)

MKL (Math Kernel Library), a Basic Linear Algebra Subprograms (BLAS) Library

Intel's Math Kernel Library (Intel MKL) is a Basic Linear Algebra Subprograms (BLAS) library that optimizes code with minimal effort for current and future generations of Intel processors. We have presented how to download and use MKL with an example of matrix multiplication.

Machine Learning (ML)

FLAME BLIS, a Basic Linear Algebra Subprograms Library

FLAME BLIS is an open source, portable software framework and Basic Linear Algebra Subprograms (BLAS) library for instantiating high-performance BLAS-like dense linear algebra libraries. We have presented how to install and use BLIS with an example of matrix multiplication.

Machine Learning (ML)

OpenBLAS, a Basic Linear Algebra Subprograms Library

OpenBLAS is an open source, optimized BLAS (Basic Linear Algebra Subprograms) library based on the GotoBLAS2 1.13 BSD version. We have presented how to install and use OpenBLAS with a matrix multiplication example.

Machine Learning (ML)

Basic Linear Algebra Subprograms (BLAS) Library

The BLAS (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations. There are three levels within the BLAS library: Level 1 (vector operations), Level 2 (matrix-vector operations) and Level 3 (matrix-matrix operations). Implementations include Intel's MKL, BLIS, Netlib's BLAS, OpenBLAS, BLAS++ and others.

Machine Learning (ML)

PyTorch vs TensorFlow

We have compared PyTorch and TensorFlow on various metrics to help you determine which framework to go forward with. In short, TensorFlow gives you more control and high computational efficiency, while PyTorch gives you simplicity in developing applications.

Machine Learning (ML)

Install and use NNVM Compiler

NNVM compiler is a graph compiler for the TVM stack that takes in models in the NNVM Intermediate Representation format and compiles them for various backends such as LLVM, Metal, CUDA and others. We have presented how to install and build NNVM from source and how to use it with the required configuration.

Machine Learning (ML)

NNVM Intermediate Representation

NNVM is a reusable graph Intermediate Representation stack for deep learning systems. It provides a useful API to construct, represent and transform computation graphs in order to apply the high-level optimizations needed in deep learning. NNVM is part of the TVM stack and has a compiler as well.

Machine Learning (ML)

Run a ResNet34 model in ONNX format on TVM Stack with LLVM backend

In this guide, we will run a ResNet34 model in ONNX format on the TVM Stack with the LLVM backend. You do not need any specialized equipment such as a GPU or TPU to follow this guide; a simple CPU is enough.

Machine Learning (ML)

Run a ResNet18 model in ONNX format on TVM Stack with LLVM backend

In this guide, we will run a ResNet18 model in ONNX format on the TVM Stack with the LLVM backend. You do not need any specialized equipment such as a GPU or TPU to follow this guide; a simple CPU is enough.

Machine Learning (ML)

Run a ResNet101 model in ONNX format on TVM Stack with LLVM backend

In this guide, we will run a ResNet101 model in ONNX format on the TVM Stack with the LLVM backend. You do not need any specialized equipment such as a GPU or TPU to follow this guide; a simple CPU is enough.

Machine Learning (ML)

Run a ResNet152 model in ONNX format on TVM Stack with LLVM backend

In this guide, we will run a ResNet152 model in ONNX format on the TVM Stack with the LLVM backend. You do not need any specialized equipment such as a GPU or TPU to follow this guide; a simple CPU is enough.

Machine Learning (ML)

Run a ResNet50 model in ONNX format on TVM Stack with LLVM backend

In this guide, we will run a ResNet50 model in ONNX format on the TVM Stack with the LLVM backend. You do not need any specialized equipment such as a GPU or TPU to follow this guide; a simple CPU is enough.

Machine Learning (ML)

Run a VGG16 model in ONNX format on TVM Stack with LLVM backend

In this guide, we will run a VGG16 model in ONNX format on the TVM Stack with the LLVM backend. You do not need any specialized equipment such as a GPU or TPU to follow this guide; a simple CPU is enough.

Machine Learning (ML)

Run a VGG19 model in ONNX format on TVM Stack with LLVM backend

In this guide, we will run a VGG19 model in ONNX format on the TVM Stack with the LLVM backend. You do not need any specialized equipment such as a GPU or TPU to follow this guide; a simple CPU is enough.

Machine Learning (ML)

Install TVM and NNVM from source

In this guide, we will walk you through the process of installing TVM and the NNVM compiler from source along with all their dependencies, such as HalideIR, DMLC-CORE, DLPACK and COMPILER-RT. Once installed, you can compile models from any framework for any backend of your choice.

Machine Learning (ML)

TVM: A Deep Learning Compiler Stack

TVM is an open source deep learning compiler stack for CPUs, GPUs, and specialized accelerators that takes in models from various frameworks like TensorFlow, Keras, ONNX and others and deploys them on various backends like LLVM, CUDA, Metal and OpenCL. It gives comparable or better performance than other deployment stacks.

OpenGenus IQ © 2025 All rights reserved ™
Contact - Email: team@opengenus.org
Primary Address: JR Shinjuku Miraina Tower, Tokyo, Shinjuku 160-0022, JP
Office #2: Commercial Complex D4, Delhi, Delhi 110017, IN