Apache MXNet Tutorial

Apache MXNet - Installing MXNet

To get started with MXNet, the first thing we need to do is to install it on our computer. Apache MXNet works on pretty much all the platforms available, including Windows, Mac, and Linux.

Linux OS

We can install MXNet on Linux OS in the following ways −

Graphical Processing Unit (GPU)

Here, we will use various methods, namely Pip, Docker, and Source, to install MXNet when we are using a GPU for processing −

By using Pip method

You can use the following command to install MXNet on your Linux OS −

pip install mxnet

Apache MXNet also offers MKL pip packages, which are much faster when running on Intel hardware. Here, for example, mxnet-cu101mkl means that −

  1. The package is built with CUDA/cuDNN

  2. The package is MKL-DNN enabled

  3. The CUDA version is 10.1
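
For instance, if your machine has CUDA 10.1 installed, you could install this variant instead of the plain mxnet package. Treat the package name below as an example; it has to match the CUDA version actually present on your system −

pip install mxnet-cu101mkl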

For other options, you can also refer to https://pypi.org/project/mxnet/.

By using Docker

You can find the docker images with MXNet at DockerHub, which is available at https://hub.docker.com/u/mxnet. Let us check out the steps below to install MXNet by using Docker with GPU −

Step 1− First, we need to install Docker on our machine by following the docker installation instructions which are available at https://docs.docker.com/engine/install/ubuntu/.

Step 2− To enable the usage of GPUs from the docker containers, next we need to install nvidia-docker-plugin. You can follow the installation instructions given at https://github.com/NVIDIA/nvidia-docker/wiki.

Step 3− By using the following command, you can pull the MXNet docker image −

$ sudo docker pull mxnet/python:gpu

Now in order to see if mxnet/python docker image pull was successful, we can list docker images as follows −

$ sudo docker images

For the fastest inference speeds with MXNet, it is recommended to use the latest MXNet with Intel MKL-DNN. Check the commands below −

$ sudo docker pull mxnet/python:1.3.0_cpu_mkl
$ sudo docker images
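
As a quick sanity check (not part of the official steps), you can start a throwaway container from the pulled image and import mxnet inside it. The command below assumes python is on the image's PATH; for actual GPU workloads you would normally launch the container through nvidia-docker so that the GPUs are visible −

$ sudo docker run -it --rm mxnet/python:gpu python -c "import mxnet; print(mxnet.__version__)"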

From source

To build the MXNet shared library from source with GPU, first we need to set up the environment for CUDA and cuDNN as follows−

  1. Download and install CUDA toolkit, here CUDA 9.2 is recommended.

  2. Next download cuDNN 7.1.4.

  3. Now we need to unzip the file, change to the cuDNN root directory, and move the headers and libraries to the local CUDA Toolkit folder as follows −

tar xvzf cudnn-9.2-linux-x64-v7.1.tgz
sudo cp -P cuda/include/cudnn.h /usr/local/cuda/include
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
sudo ldconfig

After setting up the environment for CUDA and cuDNN, follow the steps below to build the MXNet shared library from source −

Step 1− First, we need to install the prerequisite packages. These dependencies are required on Ubuntu version 16.04 or later.

sudo apt-get update
sudo apt-get install -y build-essential git ninja-build ccache libopenblas-dev libopencv-dev cmake

Step 2− In this step, we will download the MXNet source and configure it. First, let us clone the repository by using the following command−

git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
cd mxnet
cp config/linux_gpu.cmake config.cmake # for build with CUDA

Step 3− By using the following commands, you can build the MXNet core shared library−

rm -rf build
mkdir -p build && cd build
cmake -GNinja ..
cmake --build .

Two important points regarding the above step are as follows−

If you want to build the Debug version, then specify it as follows−

cmake -DCMAKE_BUILD_TYPE=Debug -GNinja ..

In order to set the number of parallel compilation jobs, specify the following −

cmake --build . --parallel N
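
For example, to use all available CPU cores on a typical Linux machine, you can let nproc supply the value of N −

cmake --build . --parallel $(nproc)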

Once you successfully build the MXNet core shared library, you will find libmxnet.so in the build folder in your MXNet project root. It is required to install the (optional) language bindings.
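
As an illustration of that optional step, the Python bindings can be installed from the same source tree once libmxnet.so exists; this mirrors the approach used in the Raspberry Pi section later in this chapter and assumes you are still inside the build directory −

cd ../python
pip install --upgrade pip
pip install -e .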

Central Processing Unit (CPU)

Here, we will use various methods, namely Pip, Docker, and Source, to install MXNet when we are using a CPU for processing −

By using Pip method

You can use the following command to install MXNet on your Linux OS−

pip install mxnet

Apache MXNet also offers MKL-DNN enabled pip packages, which are much faster when running on Intel hardware.

pip install mxnet-mkl

By using Docker

You can find the docker images with MXNet at DockerHub, which is available at https://hub.docker.com/u/mxnet. Let us check out the steps below to install MXNet by using Docker with CPU −

Step 1− First, we need to install Docker on our machine by following the docker installation instructions which are available at https://docs.docker.com/engine/install/ubuntu/.

Step 2− By using the following command, you can pull the MXNet docker image:

$ sudo docker pull mxnet/python

Now, in order to see if mxnet/python docker image pull was successful, we can list docker images as follows −

$ sudo docker images

For the fastest inference speeds with MXNet, it is recommended to use the latest MXNet with Intel MKL-DNN.

Check the commands below −

$ sudo docker pull mxnet/python:1.3.0_cpu_mkl
$ sudo docker images

From source

To build the MXNet shared library from source with CPU, follow the steps below −

Step 1− First, we need to install the prerequisite packages. These dependencies are required on Ubuntu version 16.04 or later.

sudo apt-get update

sudo apt-get install -y build-essential git ninja-build ccache libopenblas-dev libopencv-dev cmake

Step 2− In this step, we will download the MXNet source and configure it. First, let us clone the repository by using the following command:

git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet

cd mxnet
cp config/linux.cmake config.cmake

Step 3− By using the following commands, you can build the MXNet core shared library:

rm -rf build
mkdir -p build && cd build
cmake -GNinja ..
cmake --build .

Two important points regarding the above step are as follows−

If you want to build the Debug version, then specify it as follows:

cmake -DCMAKE_BUILD_TYPE=Debug -GNinja ..

In order to set the number of parallel compilation jobs, specify the following−

cmake --build . --parallel N

Once you successfully build the MXNet core shared library, you will find libmxnet.so in the build folder in your MXNet project root. It is required to install the (optional) language bindings.

MacOS

We can install MXNet on MacOS in the following ways−

Graphical Processing Unit (GPU)

If you plan to build MXNet on MacOS with GPU, then there are NO Pip and Docker methods available. The only method in this case is to build it from source.

From source

To build the MXNet shared library from source with GPU, first we need to set up the environment for CUDA and cuDNN. You need to follow the NVIDIA CUDA Installation Guide, which is available at https://docs.nvidia.com, and the cuDNN Installation Guide for Mac OS, which is available at https://docs.nvidia.com/deeplearning.

Please note that in 2019 CUDA stopped supporting macOS. In fact, future versions of CUDA may also not support macOS.

Once you set up the environment for CUDA and cuDNN, follow the steps given below to install MXNet from source on OS X (Mac)−

Step 1− As we need some dependencies on OS X, we first need to install the prerequisite packages.

xcode-select --install #Install OS X Developer Tools

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" #Install Homebrew

brew install cmake ninja ccache opencv # Install dependencies

We can also build MXNet without OpenCV as opencv is an optional dependency.

Step 2− In this step, we will download the MXNet source and configure it. First, let us clone the repository by using the following command−

git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet

cd mxnet
cp config/darwin.cmake config.cmake

For a GPU-enabled build, it is necessary to install the CUDA dependencies first, because when one tries to build a GPU-enabled build on a machine without a GPU, the MXNet build cannot autodetect your GPU architecture. In such cases, MXNet will target all available GPU architectures.

Step 3− By using the following commands, you can build the MXNet core shared library−

rm -rf build
mkdir -p build && cd build
cmake -GNinja ..
cmake --build .

Two important points regarding the above step are as follows−

If you want to build the Debug version, then specify it as follows−

cmake -DCMAKE_BUILD_TYPE=Debug -GNinja ..

In order to set the number of parallel compilation jobs, specify the following:

cmake --build . --parallel N

Once you successfully build the MXNet core shared library, you will find libmxnet.dylib in the build folder in your MXNet project root. It is required to install the (optional) language bindings.

Central Processing Unit (CPU)

Here, we will use various methods, namely Pip, Docker, and Source, to install MXNet when we are using a CPU for processing−

By using Pip method

You can use the following command to install MXNet on MacOS −

pip install mxnet

By using Docker

You can find the docker images with MXNet at DockerHub, which is available at https://hub.docker.com/u/mxnet. Let us check out the steps below to install MXNet by using Docker with CPU−

Step 1− First, we need to install Docker on our machine by following the docker installation instructions which are available at https://docs.docker.com/docker-for-mac.

Step 2− By using the following command, you can pull the MXNet docker image−

$ docker pull mxnet/python

Now in order to see if mxnet/python docker image pull was successful, we can list docker images as follows−

$ docker images

For the fastest inference speeds with MXNet, it is recommended to use the latest MXNet with Intel MKL-DNN. Check the commands below−

$ docker pull mxnet/python:1.3.0_cpu_mkl
$ docker images

From source

Follow the steps given below to install MXNet from source on OS X (Mac)−

Step 1− As we need some dependencies on OS X, we first need to install the prerequisite packages.

xcode-select --install #Install OS X Developer Tools
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" #Install Homebrew
brew install cmake ninja ccache opencv # Install dependencies

We can also build MXNet without OpenCV as opencv is an optional dependency.

Step 2− In this step, we will download the MXNet source and configure it. First, let us clone the repository by using the following command−

git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet

cd mxnet

cp config/darwin.cmake config.cmake

Step 3− By using the following commands, you can build the MXNet core shared library:

rm -rf build
mkdir -p build && cd build
cmake -GNinja ..
cmake --build .

Two important points regarding the above step are as follows−

If you want to build the Debug version, then specify it as follows−

cmake -DCMAKE_BUILD_TYPE=Debug -GNinja ..

In order to set the number of parallel compilation jobs, specify the following−

cmake --build . --parallel N

Once you successfully build the MXNet core shared library, you will find libmxnet.dylib in the build folder in your MXNet project root. It is required to install the (optional) language bindings.

Windows OS

To install MXNet on Windows, the following are the prerequisites−

Minimum System Requirements

  1. Windows 7, 10, Server 2012 R2, or Server 2016

  2. Visual Studio 2015 or 2017 (any type)

  3. Python 2.7 or 3.6

  4. pip

Recommended System Requirements

  1. Windows 10, Server 2012 R2, or Server 2016

  2. Visual Studio 2017

  3. At least one NVIDIA CUDA-enabled GPU

  4. MKL-enabled CPU: Intel® Xeon® processor, Intel® Core™ processor family, Intel Atom® processor, or Intel® Xeon Phi™ processor

  5. Python 2.7 or 3.6

  6. pip

Graphical Processing Unit (GPU)

By using Pip method−

If you plan to build MXNet on Windows with NVIDIA GPUs, there are two options for installing MXNet with CUDA support using a Python package−

Install with CUDA Support

Below are the steps with the help of which we can set up MXNet with CUDA.

Step 1− First install Microsoft Visual Studio 2017 or Microsoft Visual Studio 2015.

Step 2− Next, download and install NVIDIA CUDA. It is recommended to use CUDA versions 9.2 or 9.0 because some issues with CUDA 9.1 have been identified in the past.

Step 3− Now, download and install NVIDIA cuDNN.

Step 4− Finally, by using the following pip command, install MXNet with CUDA−

pip install mxnet-cu92

Install with CUDA and MKL Support

Below are the steps with the help of which we can set up MXNet with CUDA and MKL.

Step 1− First install Microsoft Visual Studio 2017 or Microsoft Visual Studio 2015.

Step 2− Next, download and install Intel MKL.

Step 3− Now, download and install NVIDIA CUDA.

Step 4− Now, download and install NVIDIA cuDNN.

Step 5− Finally, by using the following pip command, install MXNet with CUDA and MKL.

pip install mxnet-cu92mkl

From source

To build the MXNet core library from source with GPU, we have the following two options−

Option 1− Build with Microsoft Visual Studio 2017

In order to build and install MXNet yourself by using Microsoft Visual Studio 2017, you need the following dependencies.

Install/update Microsoft Visual Studio.

  1. If Microsoft Visual Studio is not already installed on your machine, first download and install it.

  2. It will prompt about installing Git. Install it also.

  3. If Microsoft Visual Studio is already installed on your machine but you want to update it then proceed to the next step to modify your installation. Here you will be given the opportunity to update Microsoft Visual Studio as well.

Follow the instructions for opening the Visual Studio Installer available at https://docs.microsoft.com/en-us to modify individual components.

In the Visual Studio Installer application, update as required. After that, look for and check VC++ 2017 version 15.4 v14.11 toolset and click Modify.

Now, by using the following command, change the version of the Microsoft VS2017 toolset to v14.11−

"C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvars64.bat" -vcvars_ver=14.11

Next, you need to download and install CMake, which is available at https://cmake.org/download/. It is recommended to use CMake v3.12.2 because it is tested with MXNet.

Now, download and run the OpenCV package available at https://sourceforge.net/projects/opencvlibrary/, which will unzip several files. It is up to you whether you want to place them in another directory or not. Here, we will use the path C:\utils (mkdir C:\utils) as our default path.

Next, we need to set the environment variable OpenCV_DIR to point to the OpenCV build directory that we have just unzipped. For this, open a command prompt and type set OpenCV_DIR=C:\utils\opencv\build.

One important point is that if you do not have the Intel MKL (Math Kernel Library) installed, then you can install it.

Another open source package you can use is OpenBLAS. For the further instructions here, we are assuming that you are using OpenBLAS.

So, download the OpenBLAS package, which is available at https://sourceforge.net, unzip the file, rename it to OpenBLAS, and put it under C:\utils.

Next, we need to set the environment variable OpenBLAS_HOME to point to the OpenBLAS directory that contains the include and lib directories. For this, open a command prompt and type set OpenBLAS_HOME=C:\utils\OpenBLAS.

Now, download and install CUDA, available at https://developer.nvidia.com. Note that if you already had CUDA installed before installing Microsoft VS2017, you need to reinstall CUDA now so that you can get the CUDA toolkit components for Microsoft VS2017 integration.

Next, you need to download and install cuDNN.

Next, you also need to download and install Git, which is available at https://gitforwindows.org/.

Once you have installed all the required dependencies, follow the steps given below to build the MXNet source code−

Step 1− Open a command prompt in Windows.

Step 2− Now, by using the following command, download the MXNet source code from GitHub:

cd C:\

git clone https://github.com/apache/incubator-mxnet.git --recursive

Step 3− Next, verify the following−

The DCUDNN_INCLUDE and DCUDNN_LIBRARY environment variables are pointing to the include folder and the cudnn.lib file of your CUDA installation location.

C:\incubator-mxnet is the location of the source code you just cloned in the previous step.

Step 4− Next, by using the following commands, create a build directory and go to that directory, for example−

mkdir C:\incubator-mxnet\build
cd C:\incubator-mxnet\build

Step 5− Now, by using cmake, compile the MXNet source code as follows−

cmake -G "Visual Studio 15 2017 Win64" -T cuda=9.2,host=x64 -DUSE_CUDA=1 -DUSE_CUDNN=1 -DUSE_NVRTC=1 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_BLAS=open -DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_LIST=Common -DCUDA_TOOLSET=9.2 -DCUDNN_INCLUDE=C:\cuda\include -DCUDNN_LIBRARY=C:\cuda\lib\x64\cudnn.lib "C:\incubator-mxnet"

Step 6− Once CMake has completed successfully, use the following command to compile the MXNet source code−

msbuild mxnet.sln /p:Configuration=Release;Platform=x64 /maxcpucount

Option 2: Build with Microsoft Visual Studio 2015

In order to build and install MXNet yourself by using Microsoft Visual Studio 2015, you need the following dependencies.

Install/update Microsoft Visual Studio 2015. The minimum requirement to build MXNet from source is Update 3 of Microsoft Visual Studio 2015. You can use the Tools → Extensions and Updates… | Product Updates menu to upgrade it.

Next, you need to download and install CMake, which is available at https://cmake.org/download/. It is recommended to use CMake v3.12.2 because it is tested with MXNet.

Now, download and run the OpenCV package available at https://excellmedia.dl.sourceforge.net, which will unzip several files. It is up to you whether you want to place them in another directory or not.

Next, we need to set the environment variable OpenCV_DIR to point to the OpenCV build directory that we have just unzipped. For this, open command prompt and type set OpenCV_DIR=C:\opencv\build\x64\vc14\bin.

One important point is that if you do not have the Intel MKL (Math Kernel Library) installed, then you can install it.

Another open source package you can use is OpenBLAS. For the further instructions here, we are assuming that you are using OpenBLAS.

So, download the OpenBLAS package available at https://excellmedia.dl.sourceforge.net, unzip the file, rename it to OpenBLAS, and put it under C:\utils.

Next, we need to set the environment variable OpenBLAS_HOME to point to the OpenBLAS directory that contains the include and lib directories. You can find the directory in C:\Program files (x86)\OpenBLAS\.

Note that if you already had CUDA installed before installing Microsoft VS2015, you need to reinstall CUDA now so that you can get the CUDA toolkit components for Microsoft VS2015 integration.

Next, you need to download and install cuDNN.

Now, we need to set the environment variable CUDACXX to point to the CUDA compiler (C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\bin\nvcc.exe, for example).

Similarly, we also need to set the environment variable CUDNN_ROOT to point to the cuDNN directory that contains the include, lib, and bin directories (C:\Downloads\cudnn-9.1-windows7-x64-v7\cuda, for example).

Once you have installed all the required dependencies, follow the steps given below to build the MXNet source code−

Step 1− First, download the MXNet source code from GitHub−

cd C:\
git clone https://github.com/apache/incubator-mxnet.git --recursive

Step 2− Next, use CMake to create a Visual Studio solution in ./build.
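
The tutorial does not spell out the exact CMake invocation for this step, so the following is only a minimal sketch. It assumes the Visual Studio 2015 64-bit generator and reuses some of the USE_* options shown in the Visual Studio 2017 command earlier; adjust the options to your own setup −

mkdir C:\incubator-mxnet\build
cd C:\incubator-mxnet\build
cmake -G "Visual Studio 14 2015 Win64" -DUSE_CUDA=1 -DUSE_CUDNN=1 -DUSE_OPENCV=1 -DUSE_BLAS=open -DUSE_LAPACK=1 "C:\incubator-mxnet"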

Step 3− Now, in Visual Studio, we need to open the solution file (.sln) and compile it. These commands will produce a library called mxnet.dll in the ./build/Release/ or ./build/Debug folder.

Step 4− Once CMake has completed successfully, use the following command to compile the MXNet source code−

msbuild mxnet.sln /p:Configuration=Release;Platform=x64 /maxcpucount

Central Processing Unit (CPU)

Here, we will use various methods, namely Pip, Docker, and Source, to install MXNet when we are using a CPU for processing−

By using Pip method

If you plan to build MXNet on Windows with CPUs, there are two options for installing MXNet using a Python package−

Install with CPUs

Use the following command to install MXNet with CPU support using Python−

pip install mxnet

Install with Intel CPUs

As discussed above, MXNet has experimental support for Intel MKL as well as MKL-DNN. Use the following command to install MXNet with Intel CPU support using Python−

pip install mxnet-mkl

By using Docker

You can find the docker images with MXNet at DockerHub, available at https://hub.docker.com/u/mxnet. Let us check out the steps below to install MXNet by using Docker with CPU−

Step 1− First, we need to install Docker on our machine by following the docker installation instructions which can be read at https://docs.docker.com/docker-for-mac/install.

Step 2− By using the following command, you can pull the MXNet docker image−

$ docker pull mxnet/python

Now in order to see if mxnet/python docker image pull was successful, we can list docker images as follows−

$ docker images

For the fastest inference speeds with MXNet, it is recommended to use the latest MXNet with Intel MKL-DNN.

Check the commands below−

$ docker pull mxnet/python:1.3.0_cpu_mkl
$ docker images

Installing MXNet On Cloud and Devices

This section highlights how to install Apache MXNet on Cloud and on devices. Let us begin by learning about installing MXNet on cloud.

Installing MXNet On Cloud

You can also get Apache MXNet on several cloud providers with Graphical Processing Unit (GPU) support. Two other kinds of support you can find are as follows−

  1. GPU/CPU-hybrid support for use cases like scalable inference.

  2. Fractional GPU support with AWS Elastic Inference.

Following are the cloud providers providing GPU support with different virtual machines for Apache MXNet−

The Alibaba Console

You can create the NVIDIA GPU Cloud Virtual Machine (VM) available at https://docs.nvidia.com/ngc with the Alibaba Console and use Apache MXNet.

Amazon Web Services

It also provides GPU support and gives the following services for Apache MXNet−

Amazon SageMaker

It manages training and deployment of Apache MXNet models.

AWS Deep Learning AMI

It provides a preinstalled Conda environment for both Python 2 and Python 3 with Apache MXNet, CUDA, cuDNN, MKL-DNN, and AWS Elastic Inference.

Dynamic Training on AWS

It provides support for an experimental manual EC2 setup as well as for a semi-automated CloudFormation setup.

You can use the NVIDIA VM available at https://aws.amazon.com with Amazon Web Services.

Google Cloud Platform

Google also provides an NVIDIA GPU cloud image, which is available at https://console.cloud.google.com, to work with Apache MXNet.

Microsoft Azure

Microsoft Azure Marketplace also provides an NVIDIA GPU cloud image, available at https://azuremarketplace.microsoft.com, to work with Apache MXNet.

Oracle Cloud

Oracle also provides an NVIDIA GPU cloud image, available at https://docs.cloud.oracle.com, to work with Apache MXNet.

Central Processing Unit (CPU)

Apache MXNet works on every cloud provider’s CPU-only instance. There are various installation methods, such as−

  1. Python pip install instructions.

  2. Docker instructions.

  3. Preinstalled option like Amazon Web Services which provides AWS Deep Learning AMI (having preinstalled Conda environment for both Python 2 and Python 3 with MXNet and MKL-DNN).

Installing MXNet on Devices

Let us learn how to install MXNet on devices.

Raspberry Pi

You can also run Apache MXNet on Raspberry Pi 3B devices, as MXNet also supports Raspbian ARM-based operating systems. In order to run MXNet smoothly on the Raspberry Pi 3, it is recommended to have a device that has more than 1 GB of RAM and an SD card with at least 4 GB of free space.

Following are the ways with the help of which you can build MXNet for the Raspberry Pi and install the Python bindings for the library as well−

Quick installation

A pre-built Python wheel can be used on a Raspberry Pi 3B with Stretch for quick installation. One of the important issues with this method is that we need to install several dependencies to get Apache MXNet to work.

Docker installation

You can follow the docker installation instructions, which are available at https://docs.docker.com/engine/install/ubuntu/, to install Docker on your machine. For this purpose, we can also install and use the Community Edition (CE).

Native Build (from source)

In order to install MXNet from source, we need to follow these two steps−

Step 1

Build the shared library from the Apache MXNet C++ source code

To build the shared library on Raspbian version Wheezy and later, we need the following dependencies:

  1. Git− It is required to pull code from GitHub.

  2. Libblas− It is required for linear algebraic operations.

  3. Libopencv− It is required for computer vision related operations. However, it is optional if you would like to save your RAM and Disk Space.

  4. C++ Compiler− It is required to compile and build the MXNet source code. The following compilers that support C++ 11 are supported: G++ (4.8 or later version) or Clang (3.9-6).

Use the following commands to install the above-mentioned dependencies−

sudo apt-get update
sudo apt-get -y install git cmake ninja-build build-essential g++-4.9 c++-4.9 liblapack* libblas* libopencv* libopenblas* python3-dev python-dev virtualenv

Next, we need to clone the MXNet source code repository. For this use the following git command in your home directory−

git clone https://github.com/apache/incubator-mxnet.git --recursive

cd incubator-mxnet

Now, with the help of the following commands, build the shared library:

mkdir -p build && cd build
cmake \
   -DUSE_SSE=OFF \
   -DUSE_CUDA=OFF \
   -DUSE_OPENCV=ON \
   -DUSE_OPENMP=ON \
   -DUSE_MKL_IF_AVAILABLE=OFF \
   -DUSE_SIGNAL_HANDLER=ON \
   -DCMAKE_BUILD_TYPE=Release \
   -GNinja ..
ninja -j$(nproc)

Once you execute the above commands, it will start the build process, which will take a couple of hours to finish. You will get a file named libmxnet.so in the build directory.

Step 2

Install the supported language-specific packages for Apache MXNet

In this step, we will install the MXNet Python bindings. To do so, we need to run the following commands in the MXNet directory−

cd python
pip install --upgrade pip
pip install -e .
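
To confirm that the bindings work, you can then import mxnet from Python and print the version, in the same way the Jetson section at the end of this chapter does −

python -c "import mxnet; print(mxnet.__version__)"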

Alternatively, with the following command, you can also create a whl package installable with pip.

ci/docker/runtime_functions.sh build_wheel python/ $(realpath build)

NVIDIA Jetson Devices

You can also run Apache MXNet on NVIDIA Jetson devices, such as the TX2 or Nano, as MXNet also supports Ubuntu Aarch64-based operating systems. In order to run MXNet smoothly on NVIDIA Jetson devices, it is necessary to have CUDA installed on your Jetson device.

Following are the ways with the help of which you can build MXNet for NVIDIA Jetson devices:

  1. By using a Jetson MXNet pip wheel for Python development

  2. From source

But before building MXNet in any of the above-mentioned ways, you need to install the following dependencies on your Jetson devices−

Python Dependencies

In order to use the Python API, we need the following dependencies−

sudo apt update
sudo apt -y install \
   build-essential \
   git \
   graphviz \
   libatlas-base-dev \
   libopencv-dev \
   python-pip
sudo pip install --upgrade \
   pip \
   setuptools
sudo pip install \
   graphviz==0.8.4 \
   jupyter \
   numpy==1.15.2

Clone the MXNet source code repository

By using the following git command in your home directory, clone the MXNet source code repository−

git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet

Setup environment variables

Add the following in your .profile file in your home directory−

export PATH=/usr/local/cuda/bin:$PATH
export MXNET_HOME=$HOME/mxnet/
export PYTHONPATH=$MXNET_HOME/python:$PYTHONPATH

Now, apply the change immediately with the following command−

source .profile

Configure CUDA

Before configuring CUDA, you need to check with nvcc which version of CUDA is running−

nvcc --version

If more than one CUDA version is installed on your device or computer and you want to switch CUDA versions, then use the following commands and replace the symbolic link with the version you want−

sudo rm /usr/local/cuda
sudo ln -s /usr/local/cuda-10.0 /usr/local/cuda

The above command will switch to CUDA 10.0, which is preinstalled on NVIDIA Jetson device Nano.

Once you are done with the above-mentioned prerequisites, you can now install MXNet on NVIDIA Jetson devices. So, let us understand the ways with the help of which you can install MXNet−

By using a Jetson MXNet pip wheel for Python development− If you want to use a prepared Python wheel, then download the following to your Jetson and run it−

  1. MXNet 1.4.0 (for Python 3) available at https://docs.docker.com

  2. MXNet 1.4.0 (for Python 2) available at https://docs.docker.com
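
Once the wheel has been downloaded to the Jetson, it can be installed with pip. The filename below is only a placeholder; use the name of the wheel you actually downloaded −

sudo pip install mxnet-1.4.0-py3-none-linux_aarch64.whl   # placeholder filename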

Native Build (from source)

In order to install MXNet from source, we need to follow these two steps−

Step 1

Build the shared library from the Apache MXNet C++ source code

To build the shared library from the Apache MXNet C++ source code, you can either use the Docker method or do it manually−

Docker method

In this method, you first need to install Docker and be able to run it without sudo (which is also explained in the previous steps). Once done, run the following to execute cross-compilation via Docker−

$MXNET_HOME/ci/build.py -p jetson

Manual

In this method, you need to edit the Makefile (with the command below) to install MXNet with CUDA bindings to leverage the Graphical Processing Units (GPUs) on NVIDIA Jetson devices:

cp $MXNET_HOME/make/crosscompile.jetson.mk config.mk

After editing the Makefile, you need to edit config.mk file to make some additional changes for the NVIDIA Jetson device.

For this, update the following settings−

  1. Update the CUDA path: USE_CUDA_PATH = /usr/local/cuda

  2. Add -gencode arch=compute_62,code=sm_62 to the CUDA_ARCH setting.

  3. Update the NVCC settings: NVCCFLAGS := -m64

  4. Turn on OpenCV: USE_OPENCV = 1
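
As an illustration only, the fragment below shows what these config.mk edits might look like once applied. The exact variable names and pre-existing values in the file you copied from crosscompile.jetson.mk may differ, so treat this as a sketch rather than a drop-in configuration −

# Sketch of the config.mk changes described above - verify against your own file
USE_CUDA = 1                                      # assumption: CUDA must be enabled for the GPU build
USE_CUDA_PATH = /usr/local/cuda
CUDA_ARCH += -gencode arch=compute_62,code=sm_62
NVCCFLAGS := -m64
USE_OPENCV = 1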

Now, to ensure that MXNet builds with Pascal’s hardware-level low-precision acceleration, we need to edit the Mshadow Makefile as follows−

MSHADOW_CFLAGS += -DMSHADOW_USE_PASCAL=1

Finally, with the help of the following commands, you can build the complete Apache MXNet library−

cd $MXNET_HOME
make -j $(nproc)

Once you execute the above commands, it will start the build process, which will take a couple of hours to finish. You will get a file named libmxnet.so in the mxnet/lib directory.

Step 2

Install the Apache MXNet Python Bindings

In this step, we will install the MXNet Python bindings. To do so, we need to run the following commands in the MXNet directory−

cd $MXNET_HOME/python
sudo pip install -e .

Once done with the above steps, you are now ready to run MXNet on your NVIDIA Jetson TX2 or Nano device. It can be verified with the following commands−

import mxnet
print(mxnet.__version__)

If everything is working properly, it will print the version number.