My goal was to set up my new Lenovo y50 so that the integrated Intel GPU is used for all interactive UI tasks, and the NVIDIA GPU only for computation tasks. This way the entire memory of the NVIDIA GPU is available for computation, and when you are not using the GPU for computation it remains idle and doesn’t drain your battery or burn your belly.

I purchased the laptop to learn GPU-accelerated machine learning and deep learning. My initial research into this topic led me to the understanding that the situation with NVIDIA and Linux can be perilous. I encountered many frustrating dead-end black screens and freezes, but eventually, with the help of @osdf and a little additional trial and error, I was able to get things working. The order of operations is critical, and many of the problems I encountered came from doing the necessary steps in the wrong order. These instructions worked for me but may not work for you.

These directions assume you will be doing a fresh install of eOS from scratch:

Disable the discrete graphics in the BIOS. For the Y50 this means choosing UMA Graphic under Configuration > Graphic Device:

Install eOS Freya normally. I wrote the .ISO to a USB drive with UNetbootin. While the NVIDIA GPU is still disabled, add the Nouveau blacklist per osdf's guide:

The next steps are copied from the main guide. Blacklist any driver that conflicts with NVIDIA's binary driver (e.g. nouveau). Create the file blacklist-file-drivers.conf in /etc/modprobe.d/ with these contents:

```
blacklist nouveau
blacklist lbm-nouveau
blacklist amd76x_edac
blacklist vga16fb
blacklist rivatv
blacklist rivafb
blacklist nvidiafb
blacklist nvidia-173
blacklist nvidia-96
blacklist nvidia-current
blacklist nvidia-173-updates
blacklist nvidia-96-updates
alias nvidia nvidia_current_updates
alias nouveau off
alias lbm-nouveau off
```

Save the file, reboot, enter the BIOS setup, re-enable Hybrid graphics, and reboot back into eOS.

Luckily, we don't need the next step from osdf's guide: the NVIDIA toolkit 7.0 is compiled with GCC 4.8, so we can skip the part about setting up GCC 4.6 as an alternative compiler.

To verify that X11 is still running on the Intel GPU but that the NVIDIA GPU is now available, run:

```
lspci | grep 'NVIDIA\|VGA'
```

which should output something like this, showing both Intel and NVIDIA present:

```
00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 860M] (rev a2)
```

The next command requires glxinfo, which isn't installed by default on eOS. This can be rectified by running:

```
sudo apt-get install mesa-utils
```

After which we should be able to run:

```
glxinfo | egrep "OpenGL vendor|OpenGL renderer"
```

Your output should look something like this, confirming that the Intel GPU is being used:

```
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile
```

At this point I went ahead and updated eOS to the most recent packages (note `&&`, not `&`, so each command runs only after the previous one succeeds):

```
sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
```

And installed the most recent 3.19.8 Linux kernel:

(Note: a more recent kernel may be available by the time you read this; don't just copy and paste the following into your terminal unless you want to follow along pedantically.)

```
cd ~
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.19.8-vivid/linux-headers-3.19.8-031908-generic_3.19.8-031908.201505110938_amd64.deb
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.19.8-vivid/linux-headers-3.19.8-031908_3.19.8-031908.201505110938_all.deb
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.19.8-vivid/linux-image-3.19.8-031908-generic_3.19.8-031908.201505110938_amd64.deb
sudo dpkg -i linux-headers-3.19.8*.deb
sudo dpkg -i linux-image-3.19.8*.deb
```

And installed tlp for power management:

```
sudo add-apt-repository ppa:linrunner/tlp
sudo apt-get update
sudo apt-get install tlp tlp-rdw
sudo service tlp start
```

Now the fun part. Download the 346.72 driver from NVIDIA's website:

http://www.nvidia.com/Download/driverResults.aspx/84721/en-us

This is a file with the extension .run, meant to be run from the terminal. We want to install the NVIDIA driver but NOT overwrite the Intel OpenGL driver, which NVIDIA naughtily does by default. The flag that prevents this for the driver installer is --no-opengl-files.

```
chmod +x NVIDIA-Linux-x86_64-346.72.run
```

Press CTRL-ALT-F1 to switch to a text terminal, log in, and then run this command from the location where you downloaded the driver:

```
sudo service lightdm stop && sudo ./NVIDIA-Linux-x86_64-346.72.run --dkms --no-opengl-files --accept-license
```

When the driver install finishes, you should be able to reboot and verify the driver is installed:

```
sudo modprobe nvidia-uvm
nvidia-smi
```

Which should output something like this:

```
Mon May 25 18:45:28 2015
+------------------------------------------------------+
| NVIDIA-SMI 346.72     Driver Version: 346.72         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 860M    Off  | 0000:01:00.0     N/A |                  N/A |
| N/A   40C    P0    N/A /  N/A |      9MiB /  4095MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0              C+G   Not Supported                                       |
+-----------------------------------------------------------------------------+
```

Now it's time to install the CUDA framework, with the --no-opengl-libs flag this time so the Intel OpenGL is again not overwritten:

```
cd ~
wget http://developer.download.nvidia.com/compute/cuda/7_0/Prod/local_installers/cuda_7.0.28_linux.run
chmod +x cuda_7.0.28_linux.run
sudo ./cuda_7.0.28_linux.run --no-opengl-libs --toolkit --samples
```

Note that we didn't tell it to install the driver at this step. If it does ask, make sure to say NO. It will warn you about not installing the driver, but good news! We installed the driver in a previous step. It is probably worth reading the NVIDIA CUDA Linux getting started guide now.

Add the CUDA library and path to your profile (the variable references are single-quoted so they are expanded when .bashrc runs, not when you run echo):

```
echo 'export CUDA_HOME=/usr/local/cuda-7.0' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
echo 'export PATH=$CUDA_HOME/bin:$PATH' >> ~/.bashrc
```

Now close the terminal and re-open it to load the new path. Install the G++ compiler, then compile and run the samples:

```
sudo apt-get install g++
cd ~/NVIDIA_CUDA-7.0_Samples
make
1_Utilities/deviceQuery/deviceQuery
```

You should explore the other samples and make sure they run. At this point you should sign up for the NVIDIA registered developer program, as the next step will be to download and install cuDNN, which requires you to be registered.
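As an aside, the 9MiB / 4095MiB figure in the nvidia-smi table is the whole point of this setup: with X11 on the Intel GPU, essentially all of the NVIDIA GPU's memory stays free for computation. A quick sketch of reading that column programmatically (the helper and sample line are my own illustration; for real scripting, `nvidia-smi -q` gives a friendlier format):

```python
import re

# Sketch: parse the memory column from an nvidia-smi table line to confirm
# that almost all of the GPU's memory is free for compute. The sample line
# is copied from the output above.

def memory_usage(smi_line):
    """Return (used_mib, total_mib) from a line containing e.g. '9MiB / 4095MiB'."""
    used, total = re.search(r"(\d+)MiB\s*/\s*(\d+)MiB", smi_line).groups()
    return int(used), int(total)

used, total = memory_usage("| N/A   40C    P0    N/A /  N/A |      9MiB /  4095MiB |")
print(total - used)  # 4086 MiB available for computation
```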
Take the survey, agree to the conditions, then download the cuDNN Library for Linux, User Guide, and Code Samples, and install the library:

```
tar -xvf cudnn-6.5-linux-x64*.tgz
cd cudnn-6.5*
sudo cp *.h $CUDA_HOME/include
sudo cp *.so $CUDA_HOME/lib64
sudo cp *.a $CUDA_HOME/lib64
sudo cp *.h /usr/local/include
sudo cp *.so /usr/local/lib
sudo cp *.a /usr/local/lib
sudo ldconfig
```

Now build the cuDNN sample program:

```
tar -xvf cudnn-sample-v2.tgz
cd cudnn-sample-v2
make
```

If all goes well you shouldn't get any errors. Run the sample:

```
./mnistCUDNN
```

which should output:

```
Loading image data/one_28x28.pgm
Performing forward propagation ...
Resulting weights from Softmax:
4.05186e-07 0.999404 2.21383e-07 1.20837e-08 0.000587085 5.06682e-08 2.80583e-06 1.47965e-06 3.56051e-06 2.46337e-07
Loading image data/three_28x28.pgm
Performing forward propagation ...
Resulting weights from Softmax:
4.67739e-05 5.83973e-07 1.76501e-06 0.75859 1.06138e-11 0.24133 2.62157e-10 1.11104e-05 3.39113e-07 1.88164e-05
Loading image data/five_28x28.pgm
Performing forward propagation ...
Resulting weights from Softmax:
3.22452e-10 8.69774e-10 3.73033e-12 3.2219e-07 2.67785e-11 0.999992 4.58862e-06 5.08385e-10 9.35238e-07 1.87656e-06

Result of classification: 1 3 5

Test passed!
```

Now that cuDNN is installed and working, it is time to install Anaconda (free for all) or Anaconda Accelerate (paid, or free with academic affiliation). To install the free Anaconda, go to http://continuum.io/downloads, download the 64-bit installer, and run:

```
bash Anaconda-2.2.0-Linux-x86_64.sh
conda update conda
```

The easiest library to get started with is numbapro (free 30-day trial, or already included with Anaconda Accelerate), so let's install it:

```
conda install numbapro
```

If you haven't installed Chrome yet, it's time to do so.
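Before moving on, it's worth seeing how the mnistCUDNN "Result of classification: 1 3 5" line relates to the softmax weights it prints: the predicted digit is simply the index of the largest weight in each ten-way output. A sketch, with the weight vectors copied from the sample output (the code itself is my own illustration, not part of the cuDNN sample):

```python
# Sketch: the classification result is the argmax of each ten-way softmax
# output. Weights below are copied from the mnistCUDNN sample output;
# index i corresponds to digit i.

softmax_outputs = [
    # data/one_28x28.pgm
    [4.05186e-07, 0.999404, 2.21383e-07, 1.20837e-08, 0.000587085,
     5.06682e-08, 2.80583e-06, 1.47965e-06, 3.56051e-06, 2.46337e-07],
    # data/three_28x28.pgm
    [4.67739e-05, 5.83973e-07, 1.76501e-06, 0.75859, 1.06138e-11,
     0.24133, 2.62157e-10, 1.11104e-05, 3.39113e-07, 1.88164e-05],
    # data/five_28x28.pgm
    [3.22452e-10, 8.69774e-10, 3.73033e-12, 3.2219e-07, 2.67785e-11,
     0.999992, 4.58862e-06, 5.08385e-10, 9.35238e-07, 1.87656e-06],
]

predictions = [max(range(10), key=weights.__getitem__) for weights in softmax_outputs]
print(predictions)  # [1, 3, 5]
```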
I was unable to install Chrome from the download at http://www.google.com/chrome for some reason, so I had to use the repository method:

```
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
sudo apt-get update
sudo apt-get install google-chrome-stable
```

Now run Chrome at least once and let it become the default browser. I'm not sure eOS's default web browser Midori is up to the task of running IPython notebook.

Let's test things out with an IPython notebook. Go to http://nbviewer.ipython.org/gist/harrism/f5707335f40af9463c43, click the "download notebook" icon at the top right, then run this from the location where you downloaded it:

```
ipython notebook mandelbrot_numbapro.ipynb
```

This will open a web browser window with a Mandelbrot set calculation example. Go through the notebook. If everything is installed and working correctly, your time on the CUDA numbapro run should be at least 10x-20x faster than the CPU autojit version.

Now we will install Theano, PyCUDA, and Keras: GPU-accelerated deep learning libraries that don't require a license. For this we'll first need to install git, python-dev, cmake, and check:

```
sudo apt-get install git
sudo apt-get install python-dev
sudo add-apt-repository ppa:george-edison55/cmake-3.x
sudo apt-get update
sudo apt-get install cmake
sudo apt-get install check
```

Now install boost:

```
sudo apt-get install libboost-all-dev
```

Now install mako, cython, and nose:

```
pip install mako
pip install cython
conda install nose
```

h5py, from http://docs.h5py.org/en/latest/index.html:

```
conda install h5py
```

Theano, from http://deeplearning.net/software/theano/:

```
sudo pip install theano --upgrade
```

You can run the tests at http://deeplearning.net/software/theano/tutorial/using_gpu.html to verify that Theano is using the GPU.
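Incidentally, if you're curious what the Mandelbrot notebook is actually computing, the per-pixel kernel it accelerates is essentially an escape-time iteration like this sketch (my own minimal version, not the notebook's exact code). The numbapro GPU version runs the same loop once per pixel in parallel, which is where the 10x-20x speedup comes from:

```python
# Sketch: a minimal escape-time Mandelbrot kernel, similar in spirit to the
# one the notebook accelerates with numbapro. My own simplified version.

def mandel(x, y, max_iters=255):
    """Iteration count at which c = x + yi escapes, or max_iters if it never does."""
    c = complex(x, y)
    z = 0j
    for i in range(max_iters):
        z = z * z + c
        if (z.real * z.real + z.imag * z.imag) >= 4.0:
            return i
    return max_iters

print(mandel(0.0, 0.0))  # 255: the origin never escapes
print(mandel(2.0, 2.0))  # 0: far outside the set, escapes on the first step
```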
Note that Theano doesn't play so nicely with ATLAS, and there is no way to dynamically link OpenBLAS into Anaconda.

PyCUDA, from http://mathema.tician.de/software/pycuda/:

```
cd ~
git clone --recursive http://git.tiker.net/trees/pycuda.git
cd pycuda
./configure.py
make
sudo make install
```

MaxAs (Maxwell GPU assembler), from https://github.com/NervanaSystems/maxas:

```
cd ~
git clone https://github.com/NervanaSystems/maxas.git
cd maxas
perl Makefile.PL
make
sudo make install
```

Now install Keras from https://github.com/fchollet/keras:

```
cd ~
git clone https://github.com/fchollet/keras.git
cd keras
sudo python setup.py install
```

Go ahead and run the unit tests:

```
cd ~/keras/test
python test_models.py
python test_constraints.py
python test_save_weights.py
```

And an example (this will take a long time):

```
cd ~/keras/examples
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python cifar10_cnn.py
```
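For reference, that THEANO_FLAGS string is just a comma-separated list of key=value pairs. A tiny sketch of how it decomposes (the parser is only my illustration of the format; Theano itself reads the variable through its own theano.config machinery):

```python
# Sketch: THEANO_FLAGS is a comma-separated key=value list. Illustration only;
# Theano parses this itself via theano.config.

def parse_theano_flags(flags):
    return dict(pair.split("=", 1) for pair in flags.split(","))

flags = parse_theano_flags("mode=FAST_RUN,device=gpu,floatX=float32")
print(flags["device"])  # gpu: run the example on the NVIDIA GPU
```

Setting device=gpu is what sends the cifar10 example to the NVIDIA GPU rather than the CPU.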

That’s it! Now get to deep learning.