Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2 AVX512 VNNI FMA [Solved]
In this article, we explain the reason behind the warning "TensorFlow binary was not compiled to use: AVX AVX2 AVX512 VNNI FMA" and present 3 fixes that make the warning go away.
Table of contents:
- Reason for Warning
- Fix 1: Build from source with flags
- Fix 2: Turn off warning
- Fix 3: Pip install Intel optimized TensorFlow
When you run any code that uses TensorFlow, you may see this warning:
2022-09-28 23:34:41.981225: I tensorflow/core/platform/cpu_feature_guard.cc:142]
Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2 AVX512 AVX512F AVX512 VNNI FMA
In your case, you may see only a subset of these instructions (AVX AVX2 AVX512 AVX512F AVX512 VNNI FMA), depending on what your CPU supports.
We will understand the reason behind this warning and fix it.
Reason for Warning
The reason is that the TensorFlow binary being used was built with default options. This happens when you install TensorFlow from pip or build it from source without enabling the optimization flags.
This is a warning which means it will not impact the accuracy of your programs. The impact will be on performance.
AVX2, AVX512 and FMA are instruction sets available in modern CPUs that TensorFlow can use to execute more efficiently.
For example, if AVX512 instructions are not used, TF has to fall back to AVX2 instructions, which are slower. Similarly, AVX512 VNNI is used for optimal INT8 quantized inference and can give over a 3X performance boost compared to using AVX512 alone.
Default TensorFlow does not come with these optimizations.
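To see which of these instruction sets your own CPU supports, you can inspect the flags the kernel reports. A minimal sketch, assuming a Linux machine where /proc/cpuinfo is available (the `simd_flags` helper name is ours, not part of TensorFlow):

```python
# List the SIMD-related flags this CPU advertises (Linux only: reads
# /proc/cpuinfo; the "flags" line names extensions such as avx2 and fma).
def simd_flags(path="/proc/cpuinfo"):
    with open(path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                # Keep only the vector-extension flags TensorFlow can use.
                return sorted(f for f in flags
                              if f.startswith(("sse", "avx", "fma")))
    return []  # no "flags" line found (e.g. non-x86 CPU)

print(simd_flags())
```

If a flag such as avx512f appears in this list but the warning above mentions it, your TensorFlow build is not taking advantage of it.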
Fix 1: Build from source with flags
The steps to build TensorFlow from source with the optimization flags enabled for all instruction sets are as follows:
- Clone the repository and run the configure script:
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure
- Build the pip wheel file
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma \
  --copt=-mavx512f --copt=-mavx512vnni --copt=-mfpmath=both \
  --copt=-msse4.1 --copt=-msse4.2 \
  -k //tensorflow/tools/pip_package:build_pip_package
Note the optimization flags used, such as --copt=-mavx512f.
- Prepare the wheel file
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
- Install the wheel file and use TensorFlow
pip install /tmp/tensorflow_pkg/tensorflow-2.10.0-*.whl --force-reinstall
The exact wheel filename depends on your TensorFlow version, Python version and platform.
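After installing the wheel, a quick way to verify the result is to import TensorFlow and watch the startup log: the cpu_feature_guard message should no longer appear. A small sketch (the `check_tensorflow` helper is ours; it skips cleanly if TensorFlow is not installed in the current environment):

```python
# Try to import TensorFlow and report whether it is available. When the
# optimized build is installed, the cpu_feature_guard warning no longer
# appears in the log printed during import.
def check_tensorflow():
    try:
        import tensorflow as tf
        return f"TensorFlow {tf.__version__} imported"
    except ImportError:
        return "TensorFlow is not installed in this environment"

print(check_tensorflow())
```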
Fix 2: Turn off warning
As this is a warning, we can turn it off by setting an environment variable.
export TF_CPP_MIN_LOG_LEVEL=2
Note: with this fix, the underlying issue still exists, that is, performance remains low, but the warning is no longer displayed when using TensorFlow. Use this if the warning is bugging you and you do not need the optimized TensorFlow.
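One pitfall: the variable must be set before TensorFlow is imported, because the C++ log level is read once at import time. If you prefer setting it from Python instead of the shell, a minimal sketch:

```python
import os

# Must be set BEFORE importing TensorFlow; "2" hides INFO and WARNING
# messages ("1" hides only INFO, "3" additionally hides ERROR).
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

# Any subsequent `import tensorflow` now starts without the warning.
```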
Fix 3: Pip install Intel optimized TensorFlow
Intel's distribution of TensorFlow is highly optimized and these instruction sets are enabled by default. So, as a quick fix, you can use intel-tensorflow instead of the official TensorFlow package.
Install Intel-TensorFlow using this command:
pip install intel-tensorflow --force-reinstall
With any of the 3 fixes, you will not see the warning on your next TensorFlow run. Enjoy debugging, and read the documentation carefully.