Installation
CPU version
The easiest way is to install with pip:
# Python 2.7
pip install --upgrade tensorflow
# Python 3.x
pip3 install --upgrade tensorflow
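If you prefer to keep TensorFlow isolated from your system Python packages, you can first install it into a virtualenv; the directory below is just an example:
# Optional: create and activate a virtualenv, then install as above
virtualenv --system-site-packages ~/tensorflow
source ~/tensorflow/bin/activate
pip install --upgrade tensorflow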
With Docker, use the gcr.io/tensorflow/tensorflow image to start the CPU version of TensorFlow:
docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow
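The image starts a Jupyter notebook server on port 8888. If you want your notebooks to persist outside the container, you can mount a host directory; the host path below is just an example, and /notebooks is assumed to be the directory the image serves notebooks from:
# Mount a host directory (example path) over the image's assumed notebook directory
docker run -it -p 8888:8888 -v $HOME/notebooks:/notebooks gcr.io/tensorflow/tensorflow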
Verify the installation
$ python
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
Hello, TensorFlow!
>>>
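As a further quick check, you can run a small computation in the same session (the values are arbitrary):
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> print(sess.run(a + b))
42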
GPU version
Note: starting with version 1.2, the GPU version is no longer supported on Mac OS X (the CPU version is still supported).
pip
The easiest way is to install with pip:
# Python 2.7
pip install --upgrade tensorflow-gpu
# Python 3.x
pip3 install --upgrade tensorflow-gpu
Docker
First install nvidia-docker:
# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi
Then you can start the GPU version of TensorFlow with the gcr.io/tensorflow/tensorflow:latest-gpu image:
nvidia-docker run -it -p 8888:8888 gcr.io/tensorflow/tensorflow:latest-gpu
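To quickly confirm that TensorFlow inside the container can see the GPU, you can run a one-off check; tf.test.gpu_device_name() returns the device name (e.g. /gpu:0), or an empty string if no GPU is visible:
# One-off GPU visibility check inside the container
nvidia-docker run --rm gcr.io/tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.test.gpu_device_name())"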
CUDA and cuDNN
Install CUDA:
# Check for CUDA and try to install.
if ! dpkg-query -W cuda; then
# The 16.04 installer works with 16.10.
curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
dpkg -i ./cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
rm -f ./cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
apt-get update
apt-get install libcupti-dev cuda -y
fi
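To confirm that the toolkit was installed, you can check the compiler version; nvcc may not be on your PATH yet, so the full install path is used here:
# Print the CUDA compiler version as a sanity check
/usr/local/cuda-8.0/bin/nvcc --version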
Install cuDNN:
First register at https://developer.nvidia.com/cudnn and download cuDNN v5.1, then run the following commands to install it:
wget https://www.dropbox.com/s/xdak8t60lzk11zb/cudnn-8.0-linux-x64-v5.1.tgz?dl=1 -O cudnn-8.0-linux-x64-v5.1.tgz
tar zxvf cudnn-8.0-linux-x64-v5.1.tgz
ln -s /usr/local/cuda-8.0 /usr/local/cuda
sudo cp -P cuda/include/cudnn.h /usr/local/cuda/include
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
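Depending on your shell setup, you may also need to point the build tools and the dynamic loader at the CUDA installation, for example by adding lines like these to ~/.bashrc:
export CUDA_HOME=/usr/local/cuda
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH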
Once the installation is complete, you can run nvidia-smi to check the status of the GPU devices:
$ nvidia-smi
Fri Jun 16 19:33:35 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 0000:00:04.0    Off  |                    0 |
| N/A   74C    P0    80W / 149W |      0MiB / 11439MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Verify the installation
$ python
>>> from tensorflow.python.client import device_lib
>>> print(device_lib.list_local_devices())
...
[name: "/cpu:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 9675741273569321173
, name: "/gpu:0"
device_type: "GPU"
memory_limit: 11332668621
locality {
bus_id: 1
}
incarnation: 7807115828340118187
physical_device_desc: "device: 0, name: Tesla K80, pci bus id: 0000:00:04.0"
]
>>>
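If you want to see which device each operation is actually placed on, you can enable device placement logging when creating the session; this small script (the values are arbitrary) should log the constants and the add op on /gpu:0 when a GPU is available:
import tensorflow as tf

# Two example constants and an add op; log_device_placement makes the session
# print which device each op was assigned to.
a = tf.constant([1.0, 2.0, 3.0], name='a')
b = tf.constant([4.0, 5.0, 6.0], name='b')
c = a + b

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))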