El Blog de Visión por Computador: Llevando la tecnología a la industria


Deep Learning development setup for Ubuntu 16.04 Xenial

Last month we upgraded our deep learning servers to Ubuntu 16.04 LTS. Since neither the main deep learning frameworks nor the CUDA and cuDNN drivers directly support Ubuntu 16.04, in this post we summarize the steps we followed to configure Theano, Caffe and TensorFlow on this latest Ubuntu LTS release. This guide is also valid for other Ubuntu versions; simply skip the steps related to adapting files for compilation with gcc 5.x. We also include some configuration/test files described in this guide (link). The guide follows, in English:

1. Install prerequisites:

This installs the prerequisites required to build the different dependencies and frameworks.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential
sudo apt-get autoremove
sudo apt-get install git
git config --global user.name "$MYNAME"
git config --global user.email "$MYMAIL"
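As a quick sanity check after this step, a minimal Python sketch (ours, not part of the original guide; assumes a Python 3 interpreter is available) that verifies the build tools are on the PATH:

```python
import shutil

def missing_tools(tools):
    """Return the subset of the given tools that are not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# gcc, make and git should all be present after the apt-get commands above,
# so this should print an empty list
print(missing_tools(["gcc", "make", "git"]))
```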

2. Install nvidia graphics driver:

      • Download the Nvidia driver
      • Start a text console: Ctrl + Alt + F1
      • Stop the X server
        sudo service lightdm stop
      • Prevent the nouveau driver from being loaded:
        sudo nano /etc/modprobe.d/blacklist.conf

        blacklist nouveau
        blacklist lbm-nouveau
        options nouveau modeset=0
        alias nouveau off
        alias lbm-nouveau off

        sudo update-initramfs -u

      • Reboot and:
        • Start a text console: Ctrl + Alt + F1
        • Stop the X server
      • Run the downloaded Nvidia driver installer
        sudo ./<nvidia-driver>.run
      • Notes:
        • In order to avoid compatibility issues between the NVIDIA OpenGL driver and some cards (such as the GTX 920M), we recommend not installing the NVIDIA OpenGL files. This way, the X server will use the generic Intel driver for visualization purposes while you can still access CUDA on the NVIDIA card normally:

          • sudo ./<nvidia-driver>.run -Z --no-opengl-files
        • Nvidia card info can be obtained with the command nvidia-smi.
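nvidia-smi can also emit machine-readable output, e.g. nvidia-smi --query-gpu=name,driver_version --format=csv,noheader. A small sketch parsing one sample line of that output (the sample values below are illustrative, not from our servers):

```python
# one sample line as produced by:
#   nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
sample = "GeForce GTX 920M, 361.42"

# split the comma-separated fields and strip surrounding whitespace
name, driver_version = [field.strip() for field in sample.split(",")]
print("GPU: %s, driver: %s" % (name, driver_version))
```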

3. Install CUDA:

  • Install the CUDA toolkit:
    • Run the CUDA SDK executable
      sudo ./<cuda-sdk>.run --override

      • Notes:
        • Do not install the graphics driver, as we have already installed it above.
        • The --override option is used to bypass (for now) the unsupported-compiler error.
  • Get rid of the gcc version error on Ubuntu 16.04:
    First, we have to solve the incompatibility with the default gcc version on Xenial. To do that, the error in host_config.h has to be commented out manually:
    sudo nano /usr/local/cuda/include/host_config.h
    At line 115, comment out the error:
    //#error -- unsupported GNU version! gcc versions later than 4.9 are not supported!
  • Configure paths:
    At this stage we need to modify the $LD_LIBRARY_PATH variable so the OS can find the installed libraries.
    We will make these modifications in the user's bash profile and in the Application Environment Setup (/etc/profile.d/).
    For the sake of completeness we also include the modifications required for Caffe and conda.
    Note that we have installed all packages at /usr/progtools.
  • Content for /etc/profile.d/
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/progtools/caffe-nv/distribute/lib
    export PYTHONPATH=/usr/progtools/caffe-nv/python:$PYTHONPATH
  • Content added to ~/.bashrc:
    export PATH=$PATH:/usr/local/cuda/bin
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/lib:/usr/lib/x86_64-linux-gnu
    export PYTHONPATH=/usr/progtools/caffe-nv/python:$PYTHONPATH
    export CAFFE_HOME=/usr/progtools/caffe-nv
    export PATH=/usr/progtools/anaconda2/bin:$PATH
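The export lines above keep appending directories to colon-separated path variables; a small helper sketch (hypothetical, not part of the guide's scripts) showing how such variables compose without duplicating entries:

```python
def append_paths(current, *new_dirs):
    """Append directories to a colon-separated path variable,
    skipping empty entries and duplicates (mirroring what the
    repeated exports above effectively do)."""
    parts = [p for p in current.split(":") if p]
    for d in new_dirs:
        if d not in parts:
            parts.append(d)
    return ":".join(parts)

# the duplicate /usr/local/lib is added only once
print(append_paths("/usr/local/lib", "/usr/local/cuda/lib64", "/usr/local/lib"))
```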

4. Install cuDNN:

  • Download cuDNN linux libraries.
  • Uncompress the tar
    tar -xvf cudnn-7.5-linux-x64-v5.0-ga.tgz
  • Copy the extracted files into the CUDA installation
    sudo cp cuda/include/cudnn.h /usr/local/cuda-7.5/include/
    sudo cp cuda/lib64/libcudnn* /usr/local/cuda-7.5/lib64/
  • Run ldconfig on the lib64 folder to update the library cache
    sudo ldconfig /usr/local/cuda/lib64
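To double-check which cuDNN version ended up under /usr/local/cuda, the version macros in cudnn.h can be inspected. A sketch parsing sample header text (the define values below are illustrative, not a statement about your installation):

```python
import re

# excerpt of the kind of defines cudnn.h contains (values illustrative)
header = """
#define CUDNN_MAJOR      5
#define CUDNN_MINOR      0
#define CUDNN_PATCHLEVEL 4
"""

# collect the three version macros into a dict
version = {m.group(1): int(m.group(2))
           for m in re.finditer(r"#define CUDNN_(MAJOR|MINOR|PATCHLEVEL)\s+(\d+)", header)}
print("cuDNN %(MAJOR)d.%(MINOR)d.%(PATCHLEVEL)d" % version)
```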

5. Install Theano:

Now it is time to install Theano and our conda environment. This section is partially based on this post by Donald Kinghorn.

  • Configure and build OpenBLAS:
    sudo apt-get install gfortran
    git clone <OpenBLAS-repo-url>
    cd OpenBLAS
    make FC=gfortran
    sudo make PREFIX=/usr/local install
  • Install Conda environment:
    Conda is a really flexible package and environment system that will ease development and DL framework version changes on our development systems.
    Please note that we will install conda at /usr/progtools/anaconda2.

    • Download conda.
    • Run the conda .sh installer, then activate the root environment:
      source activate root
    • Update conda:
      conda update conda
      conda update anaconda
      conda update --all
      conda install pydot
      conda update theano

      • If you want the latest Theano version, run instead:
        pip install --upgrade --no-deps git+git://
    • Remove mkl:
      conda install nomkl
      conda install nomkl numpy scipy scikit-learn numexpr
      conda remove mkl mkl-service
    • Solve gblas problems:
      conda install libgfortran
      conda install openblas
  • Install and set Theano parameters:
    • Create a ~/.theanorc file with the following content:
          [global]
          device = gpu
          floatX = float32

          [blas]
          ldflags = -L/usr/local/lib -lopenblas

          [nvcc]
          fastmath = True

          [cuda]
          root = /usr/lib/nvidia-cuda-toolkit
    • Run python
          from theano import function, config, shared, sandbox
          import theano.tensor as T
          import numpy
          import time            
          vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
          iters = 1000            
          rng = numpy.random.RandomState(22)
          x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
          f = function([], T.exp(x))
          print f.maker.fgraph.toposort()
          t0 = time.time()
          for i in xrange(iters):
              r = f()
          t1 = time.time()
          print 'Looping %d times took' % iters, t1 - t0, 'seconds'
          print 'Result is', r
          if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
              print 'Used the cpu'
          else:
              print 'Used the gpu'
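For comparison, a dependency-free CPU stand-in for the same element-wise exp loop (pure Python rather than Theano, so the timing is only indicative of CPU cost, not a real benchmark):

```python
import math
import random
import time

vlen = 10 * 30 * 768   # same vector size as the Theano test above
iters = 10             # far fewer iterations; pure Python is slow
random.seed(22)
x = [random.random() for _ in range(vlen)]

t0 = time.time()
for _ in range(iters):
    r = [math.exp(v) for v in x]   # element-wise exp, like T.exp(x)
elapsed = time.time() - t0
print("Looping %d times took %.3f seconds" % (iters, elapsed))
```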

6. Install TensorFlow:

  • Install TensorFlow via pip (note: the prebuilt package does not support the current cuDNN version)
    $ pip install --ignore-installed --upgrade
  • In order to install TensorFlow with support for the latest cuDNN, we have to build it from sources (link):
    • Install Bazel:
      • Add the Bazel distribution URI as a package source (one-time setup)
        echo "deb <bazel-apt-repo> stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
        curl <bazel-release-key-url> | sudo apt-key add -
      • Install java 8
        sudo add-apt-repository ppa:webupd8team/java
        sudo apt-get update
        sudo apt-get install oracle-java8-installer
      • Install bazel
        sudo apt-get update && sudo apt-get install bazel
        sudo apt-get upgrade bazel
        sudo apt-get install python-numpy swig python-dev python-wheel
    • Configure and build tensorflow:
      • First we have to allow it to build with gcc 5.x
        Edit the file $tensorflow_sources_folder/third_party/gpus/crosstool/CROSSTOOL
        Add these lines:
        cxx_flag: "-D_FORCE_INLINES"
        cxx_flag: "-D_MWAITXINTRIN_H_INCLUDED"
        below every tool_path { name: "gcc" path: "clang/bin/crosstool_wrapper_driver_is_not_gcc" } entry
      • and build it:
        bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
        bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu
      • Build pip installation:
        bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
        bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
        pip install /tmp/tensorflow_pkg/tensorflow-0.8.0-py2-none-any.whl
    • Internal test:
      cd tensorflow/models/image/mnist
      python convolutional.py
  • Test TensorFlow:
    Run python and execute:

    import tensorflow as tf
    hello = tf.constant('Hello, TensorFlow!')
    sess = tf.Session()
    print(
    a = tf.constant(10)
    b = tf.constant(32)
    print( + b))
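The snippet above illustrates TensorFlow's deferred execution: constants produce values only when the session runs them. A toy analogy in plain Python (our illustration of the idea, not the TensorFlow API):

```python
# build-then-run: each node is a thunk, and "running" is calling the root thunk
def constant(value):
    return lambda: value

def add(a, b):
    return lambda: a() + b()

a = constant(10)
b = constant(32)
graph = add(a, b)    # nothing computed yet, just a graph of thunks
print(graph())       # -> 42, computed on demand
```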

7. Make Caffe and pyCaffe:

This section helps you install Caffe and pyCaffe. It is based on a wikidot blog post.

  • Install prerequisites:
  • Common dependencies:
    sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
    sudo apt-get install --no-install-recommends libboost-all-dev
    sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
  • Glog:
    tar zxvf glog-0.3.3.tar.gz
    cd glog-0.3.3
    ./configure
    make && make install
  • gflags:
    cd gflags-master
    mkdir build && cd build
    export CXXFLAGS="-fPIC" && cmake .. && make VERBOSE=1
    make && make install
  • lmdb:
    git clone <lmdb-repo-url>
    cd lmdb/libraries/liblmdb
    make && make install
  • opencv:
    • Download opencv:
      sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
      sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
      git clone <opencv-repo-url>
      cd ~/opencv
    • Modify CMakeLists.txt to override compiler version error:
    • Build opencv:
      mkdir release
      cd release
      sudo cmake -DBUILD_TIFF=ON -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local /home/apicon/opencv
      sudo make install
  • Build Caffe:
    • Download caffe into /usr/progtools:
      git clone <caffe-nv-repo-url> caffe-nv
    • Edit Makefile:
      Change the Makefile to support inline declarations and avoid gcc 5.x errors by adding the -D_FORCE_INLINES define (the same flag used for the TensorFlow build) to the NVCC flags around line 52:

            NVCCFLAGS += -D_FORCE_INLINES

      Also add the opencv libraries:

            LIBRARIES += opencv_core opencv_highgui opencv_imgproc opencv_imgcodecs
    • Edit Makefile.config:
      ## Refer to
      # Contributions simplifying and improving our build system are welcome!
      # cuDNN acceleration switch (uncomment to build with cuDNN).
      USE_CUDNN := 1
      # CPU-only switch (uncomment to build without GPU support).
      # CPU_ONLY := 1
      # uncomment to disable IO dependencies and corresponding data layers
      # USE_OPENCV := 0
      # USE_LEVELDB := 0
      # USE_LMDB := 0
      # uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
      #   You should not set this flag if you will be reading LMDBs with any
      #   possibility of simultaneous read and write
      # ALLOW_LMDB_NOLOCK := 1
      # To customize your choice of compiler, uncomment and set the following.
      # N.B. the default for Linux is g++ and the default for OSX is clang++
      # CUSTOM_CXX := g++
      # CUDA directory contains bin/ and lib/ directories that we need.
      CUDA_DIR := /usr/local/cuda
      # On Ubuntu 14.04, if cuda tools are installed via
      # "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
      # CUDA_DIR := /usr
      # CUDA architecture setting: going with all of them.
      # For CUDA < 6.0, comment the *_50 lines for compatibility.
      CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
              -gencode arch=compute_20,code=sm_21 \
              -gencode arch=compute_30,code=sm_30 \
              -gencode arch=compute_35,code=sm_35 \
              -gencode arch=compute_50,code=sm_50 \
              -gencode arch=compute_50,code=compute_50
      # BLAS choice:
      # atlas for ATLAS (default)
      # mkl for MKL
      # open for OpenBlas
      BLAS := open
      # Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
      # Leave commented to accept the defaults for your choice of BLAS
      # (which should work)!
      BLAS_INCLUDE := /usr/local/include
      BLAS_LIB := /usr/local/lib
      # Homebrew puts openblas in a directory that is not on the standard search path
      #BLAS_INCLUDE := $(shell brew --prefix openblas)/include
      #BLAS_LIB := $(shell brew --prefix openblas)/lib
      # This is required only if you will compile the matlab interface.
      # MATLAB directory should contain the mex binary in /bin.
      # MATLAB_DIR := /usr/local
      # MATLAB_DIR := /Applications/
      # NOTE: this is required only if you will compile the python interface.
      # We need to be able to find Python.h and numpy/arrayobject.h.
      # PYTHON_INCLUDE := /usr/include/python2.7
      # Anaconda Python distribution is quite popular. Include path:
      # Verify anaconda location, sometimes it's in root.
      ANACONDA_HOME := $(HOME)/anaconda2
      PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
               $(ANACONDA_HOME)/include/python2.7 \
               $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include
      # We need to be able to find or .dylib.
      #PYTHON_LIB := /usr/lib
      # Homebrew installs numpy in a non standard path (keg only)
      # PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
      # PYTHON_LIB += $(shell brew --prefix numpy)/lib
      # Uncomment to support layers written in Python (will link against Python libs)
      # Whatever else you find you need goes here.
      INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
      LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib
      LIBRARY_DIRS += /usr/lib/x86_64-linux-gnu/
      # If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
      #INCLUDE_DIRS += $(shell brew --prefix)/include
      #LIBRARY_DIRS += $(shell brew --prefix)/lib
      # Uncomment to use `pkg-config` to specify OpenCV library paths.
      # (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
      # USE_PKG_CONFIG := 1
      BUILD_DIR := build
      DISTRIBUTE_DIR := distribute
      # Uncomment for debugging. Does not work on OSX due to
      # DEBUG := 1
      # The ID of the GPU that 'make runtest' will use to run unit tests.
      TEST_GPUID := 0
      # enable pretty build (comment to see full commands)
      Q ?= @ 
      # shared object suffix name to differentiate branches
    • Make caffe:
      make all -j16
      make test -j16
      make runtest -j16
    • Troubleshooting:
      If we find problems with libgflags:
      Uninstall libgflags:
      sudo apt-get remove -y libgflags
      Delete make install versions:
      sudo rm -f /usr/local/lib/libgflags.a /usr/local/lib/libgflags_nothreads.a
      sudo rm -rf /usr/local/include/gflags
      Clean Caffe build:
      cd <path>/<to>/caffe
      make clean
      Re-install libgflags package:
      sudo apt-get install -y libgflags-dev
      Rebuild Caffe
  • Build pyCaffe
    • Install python dependencies:
      From $CAFFE_HOME/python directory run:
      for req in $(cat requirements.txt); do pip install $req; done
    • Export $PYTHONPATH on your /etc/profile.d/ and reboot:
      export PYTHONPATH=<caffe-home>/python:$PYTHONPATH
    • Make pycaffe:
      make pycaffe
      make distribute
  • Export generated libraries:
    • They have been generated at $CAFFE_HOME/distribute (the DISTRIBUTE_DIR set in Makefile.config).
    • We add the paths so they can be discovered by conda:
      • This will go to /etc/profile.d/
          export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/progtools/caffe-nv/distribute/lib
          export PYTHONPATH=/usr/progtools/caffe-nv/python:$PYTHONPATH
      • This will go to ~/.bashrc:
          export PYTHONPATH=/usr/progtools/caffe-nv/python:$PYTHONPATH
          export CAFFE_HOME=/usr/progtools/caffe-nv
    • Check that Caffe works by running the provided test script:
      import caffe    
      #output = (input - kernel_size) / stride + 1        
      net = caffe.Net('conv.prototxt', caffe.TEST)        
      print net.inputs
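The comment in the test script states the convolution output-size formula; a worked sketch of it (generic version with an optional padding term, which the comment omits):

```python
def conv_output_size(input_size, kernel_size, stride=1, pad=0):
    """output = (input + 2*pad - kernel_size) / stride + 1"""
    return (input_size + 2 * pad - kernel_size) // stride + 1

# a 100x100 input through a 5x5 kernel with stride 1 gives a 96x96 map
print(conv_output_size(100, 5))          # -> 96
# a 7x7 input through a 3x3 kernel with stride 2: (7-3)//2 + 1
print(conv_output_size(7, 3, stride=2))  # -> 3
```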

8. Build Nvidia DIGITS:

We followed the instructions from the NVIDIA GitHub repository for DIGITS.

  • Install Prerequisites:
    sudo apt-get install python-dev python-pip graphviz
    sudo apt-get install python-pil python-numpy python-scipy python-protobuf python-gevent python-flask gunicorn python-h5py
  • Download digits to /usr/progtools:
    git clone <DIGITS-repo-url> digits
  • Install python requirements:
    pip install -r requirements.txt
  • Launch server:
    • Start Development Server:
    • Start Production Server:

9. Create new users on the system:

By assigning the /usr/progtools folder to the developers group, we can generate new pre-configured environments for any user by following these easy steps:

  • Copy the content of the provided /tools folder into /usr/progtools
  • Copy the file into the /etc/profile.d folder
  • To create a new user:
    • Run sudo ./create_deep_user <USERNAME>
    • A new user will be created with:
      • Configured PATHs for all the installed frameworks
      • A project folder containing test python scripts for theano, tensorflow and caffe.
      • A soft link to the progtools folder
    • Just remember to include the proper python interpreter (conda) on PyCharm or on your favorite IDE.

The provided script:


    #!/bin/bash
    if [[ $1 == "" ]]; then
       echo "Usage: sudo ./create_deep_user <USERNAME>";
       exit 2
    fi

    if [ "$(whoami)" != "root" ]; then
            echo "Sorry, you are not root."
            exit 1
    fi

    #set the variables used below
    USERNAME=$1
    HOME_USER=/home/$USERNAME

    echo "Generating user: $USERNAME"
    sudo useradd -G sudo,developers $USERNAME

    echo "Creating home..."
    mkdir $HOME_USER
    cd $HOME_USER

    echo "Linking programming tools..."
    #link  progtools folder
    ln -s /usr/progtools progtools
    echo  "Creating projects folder"
    #create project dir
    mkdir projects
    cp -R /usr/progtools/tools/deep_test_python $HOME_USER/projects

    echo "Creating .bashrc and .theanorc files"
    cp /usr/progtools/tools/bashrc_orig $HOME_USER/.bashrc
    cp /usr/progtools/tools/theanorc_orig $HOME_USER/.theanorc

    echo "Acquiring ownership of $USERNAME's home"
    #acquire ownership (chown the full path so hidden files like .bashrc are included)
    chown -R $USERNAME:$USERNAME $HOME_USER

    echo "setting password for $USERNAME"
    #set password
    passwd  $USERNAME

    echo "Remember to set your python interpreter to /usr/progtools/anaconda2/bin/python on your IDE"
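The script passes $USERNAME straight to useradd without validating it; a hedged Python sketch of the kind of check one might add first (the regex follows common useradd defaults, which vary by distro):

```python
import re

def valid_username(name):
    """Rough POSIX-style check: starts with a lowercase letter or '_',
    then lowercase letters, digits, '-' or '_', at most 32 chars total."""
    return re.fullmatch(r"[a-z_][a-z0-9_-]{0,31}", name) is not None

print(valid_username("deepuser1"))   # True
print(valid_username("Bad User!"))   # False
```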

Deep Learning development setup for ubuntu 16.04 Xenial by Artzai Picon & Aitor Alvarez-Gila @ Tecnalia Research & Innovation.  Tested by Adrian Galdran
