Tensorflow 1.13.2
Default environment for Tensorflow w/ Keras and TFLearn
This notebook builds a reusable environment for Tensorflow, based on the Python 3 environment. Tensorflow is compiled here, to make use of SIMD instruction sets and the cuDNN, NCCL, and TensorRT CUDA libraries.
Learn more about environments on Nextjournal.
You can quickly make use of the Tensorflow environment by remixing the Nextjournal Tensorflow template, or use the environment with any existing runtime by following these steps:
Activate the runtime settings in the sidebar.
Bring up the Environments dropdown.
Select Import environment… at the bottom of the list.
Search for this notebook, mpd/tensorflow-1.13. Select it to list all environments within.
Select the Python Tensorflow environment.

If the end state of the runtime in which Tensorflow was compiled is needed, the Build Py3 TF environment is also exported. In addition, the wheel installation file of this compiled Tensorflow is available for download here: tensorflow-1.13.2-cp36-cp36m-linux_x86_64.whl
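If you only need the wheel in another Python 3.6 runtime, a quick sanity check after installing it with pip (the file name is the one exported above; the GPU check succeeds only where the CUDA libraries are mounted) might look like this:

import tensorflow as tf

print(tf.__version__)                # expect: 1.13.2
print(tf.test.is_built_with_cuda())  # True for this build
print(tf.test.is_gpu_available())    # True only when a GPU and the CUDA libraries are present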
Showcase
Plain Tensorflow
We'll follow the deep convolutional generative adversarial networks (DCGAN) example by Aymeric Damien, from the Tensorflow Examples project, to generate digit images from a noise distribution.
Reference paper: Unsupervised representation learning with deep convolutional generative adversarial networks. A Radford, L Metz, S Chintala. arXiv:1511.06434.
First, parameters.
# Training Params
num_steps = 5000
batch_size = 32

# Network Params
image_dim = 784  # 28*28 pixels * 1 channel
gen_hidden_dim = 256
disc_hidden_dim = 256
noise_dim = 200  # Noise data points

Define networks.
# Generator Network
# Input: Noise, Output: Image
def generator(x, reuse=False):
    with tf.variable_scope('Generator', reuse=reuse):
        # TensorFlow Layers automatically create variables and calculate their
        # shape, based on the input.
        x = tf.layers.dense(x, units=6 * 6 * 128)
        x = tf.nn.tanh(x)
        # Reshape to a 4-D array of images: (batch, height, width, channels)
        # New shape: (batch, 6, 6, 128)
        x = tf.reshape(x, shape=[-1, 6, 6, 128])
        # Deconvolution, image shape: (batch, 14, 14, 64)
        x = tf.layers.conv2d_transpose(x, 64, 4, strides=2)
        # Deconvolution, image shape: (batch, 28, 28, 1)
        x = tf.layers.conv2d_transpose(x, 1, 2, strides=2)
        # Apply sigmoid to clip values between 0 and 1
        x = tf.nn.sigmoid(x)
        return x

# Discriminator Network
# Input: Image, Output: Prediction Real/Fake Image
def discriminator(x, reuse=False):
    with tf.variable_scope('Discriminator', reuse=reuse):
        # Typical convolutional neural network to classify images.
        x = tf.layers.conv2d(x, 64, 5)
        x = tf.nn.tanh(x)
        x = tf.layers.average_pooling2d(x, 2, 2)
        x = tf.layers.conv2d(x, 128, 5)
        x = tf.nn.tanh(x)
        x = tf.layers.average_pooling2d(x, 2, 2)
        x = tf.contrib.layers.flatten(x)
        x = tf.layers.dense(x, 1024)
        x = tf.nn.tanh(x)
        # Output 2 classes: Real and Fake images
        x = tf.layers.dense(x, 2)
        return x
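As a quick check on the shape comments in the generator: with the default 'valid' padding, a transposed convolution's output size is (input - 1) * stride + kernel. A tiny sketch:

# Output size of a 'valid'-padded conv2d_transpose: (in - 1) * stride + kernel
def deconv_out(in_size, kernel, stride):
    return (in_size - 1) * stride + kernel

print(deconv_out(6, 4, 2))   # 14, matching the (batch, 14, 14, 64) comment
print(deconv_out(14, 2, 2))  # 28, matching the (batch, 28, 28, 1) comment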
Network setup.

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

# Import MNIST data (http://yann.lecun.com/exdb/mnist/)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Build Networks
# Network Inputs
noise_input = tf.placeholder(tf.float32, shape=[None, noise_dim])
real_image_input = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])

# Build Generator Network
gen_sample = generator(noise_input)

# Build 2 Discriminator Networks (one from real image input, one from generated samples)
disc_real = discriminator(real_image_input)
disc_fake = discriminator(gen_sample, reuse=True)
disc_concat = tf.concat([disc_real, disc_fake], axis=0)

# Build the stacked generator/discriminator
stacked_gan = discriminator(gen_sample, reuse=True)

# Build Targets (real or fake images)
disc_target = tf.placeholder(tf.int32, shape=[None])
gen_target = tf.placeholder(tf.int32, shape=[None])

# Build Loss
disc_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=disc_concat, labels=disc_target))
gen_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=stacked_gan, labels=gen_target))

# Build Optimizers
optimizer_gen = tf.train.AdamOptimizer(learning_rate=0.001)
optimizer_disc = tf.train.AdamOptimizer(learning_rate=0.001)

# Training Variables for each optimizer
# By default in TensorFlow, all variables are updated by each optimizer, so we
# need to specify for each of them the exact variables to update.
# Generator Network Variables
gen_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='Generator')
# Discriminator Network Variables
disc_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='Discriminator')

# Create training operations
train_gen = optimizer_gen.minimize(gen_loss, var_list=gen_vars)
train_disc = optimizer_disc.minimize(disc_loss, var_list=disc_vars)

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
Finally, training.

# Start training
sess = tf.Session()

# Run the initializer
sess.run(init)

for step in range(1, num_steps+1):
    # Prepare Input Data
    # Get the next batch of MNIST data (only images are needed, not labels)
    batch_x, _ = mnist.train.next_batch(batch_size)
    batch_x = np.reshape(batch_x, newshape=[-1, 28, 28, 1])
    # Generate noise to feed to the generator
    z = np.random.uniform(-1., 1., size=[batch_size, noise_dim])

    # Prepare Targets (Real image: 1, Fake image: 0)
    # The first half of the data fed to the discriminator are real images,
    # the other half are fake images (coming from the generator).
    batch_disc_y = np.concatenate(
        [np.ones([batch_size]), np.zeros([batch_size])], axis=0)
    # Generator tries to fool the discriminator, thus targets are 1.
    batch_gen_y = np.ones([batch_size])

    # Training
    feed_dict = {real_image_input: batch_x, noise_input: z,
                 disc_target: batch_disc_y, gen_target: batch_gen_y}
    _, _, gl, dl = sess.run([train_gen, train_disc, gen_loss, disc_loss],
                            feed_dict=feed_dict)
    if step % 1000 == 0 or step == 1:
        print('Step %i: Generator Loss: %f, Discriminator Loss: %f' % (step, gl, dl))

        # Generate images from noise, using the generator network.
        f, a = plt.subplots(4, 10, figsize=(10, 4))
        for i in range(10):
            # Noise input.
            z = np.random.uniform(-1., 1., size=[4, noise_dim])
            g = sess.run(gen_sample, feed_dict={noise_input: z})
            for j in range(4):
                # Generate image from noise. Extend to 3 channels for matplot figure.
                img = np.reshape(np.repeat(g[j][:, :, np.newaxis], 3, axis=2),
                                 newshape=(28, 28, 3))
                a[j][i].imshow(img)
        plt.suptitle("Step {}".format(step))
        plt.savefig("/results/step-{}.svg".format(step))
        plt.close()
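The session state is lost when the runtime stops. If you want to keep the trained generator for later use, a minimal sketch (checkpoint path assumed here) is to write a checkpoint before closing the session:

# Save only the generator variables; restore later with saver.restore(sess, path)
saver = tf.train.Saver(var_list=gen_vars)
saver.save(sess, "/results/dcgan-generator.ckpt")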
Keras
Adapted from mnist_mlp.py in the Keras examples collection. It can run on CPU or GPU, depending on what the runtime's Machine Type is set to.
Trains a simple deep NN on the MNIST dataset. Gets to 98.40% test accuracy after 20 epochs (there is *a lot* of margin for parameter tuning), at about 2 seconds per epoch on a K520 GPU.
Imports and settings.
from __future__ import print_function

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop

batch_size = 128
num_classes = 10
epochs = 20

Data.
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
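For example, to_categorical simply turns an integer label into a one-hot row:

# A label of 3 becomes a one-hot row with a 1 in position 3
print(keras.utils.to_categorical([3], 10))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]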
Define the model.

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

model.summary()

model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

Training. We can save our result to a file at the end.
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
model.save("/results/mnist.kerasave")
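model.fit also returns a History object, which makes it easy to plot the learning curves. An optional sketch (output path assumed; the metric keys are 'acc'/'val_acc' in this Keras version, 'accuracy'/'val_accuracy' in later ones):

import matplotlib.pyplot as plt

# Plot training vs. validation accuracy per epoch
plt.plot(history.history['acc'], label='train')
plt.plot(history.history['val_acc'], label='test')
plt.xlabel('epoch')
plt.legend()
plt.savefig("/results/mnist-accuracy.svg")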
In a new runtime, load the test data and the saved, trained model.

import keras
from keras.datasets import mnist
from keras.models import load_model

num_classes = 10

(_, _), (x_test, y_test) = mnist.load_data()
x_test = x_test.reshape(10000, 784)
x_test = x_test.astype('float32')
x_test /= 255
y_test = keras.utils.to_categorical(y_test, num_classes)

# Path to the mnist.kerasave file exported above
model = load_model("mnist.kerasave")

Evaluate.
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
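The loaded model can also classify individual images; a hypothetical spot check on the first test digit:

import numpy as np

# Compare the predicted class of the first test image to its label
pred = model.predict(x_test[:1])
print('predicted:', np.argmax(pred), 'actual:', np.argmax(y_test[0]))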
TFLearn
From the TFLearn + Tensorflow layers.py example.
from __future__ import print_function

import tensorflow as tf
import tflearn

# --------------------------------------
# High-Level API: Using TFLearn wrappers
# --------------------------------------

# Using MNIST Dataset
import tflearn.datasets.mnist as mnist
mnist_data = mnist.read_data_sets(one_hot=True)

# User defined placeholders
with tf.Graph().as_default():
    # Placeholders for data and labels
    X = tf.placeholder(shape=(None, 784), dtype=tf.float32)
    Y = tf.placeholder(shape=(None, 10), dtype=tf.float32)

    net = tf.reshape(X, [-1, 28, 28, 1])

    # Using TFLearn wrappers for network building
    net = tflearn.conv_2d(net, 32, 3, activation='relu')
    net = tflearn.max_pool_2d(net, 2)
    net = tflearn.local_response_normalization(net)
    net = tflearn.dropout(net, 0.8)
    net = tflearn.conv_2d(net, 64, 3, activation='relu')
    net = tflearn.max_pool_2d(net, 2)
    net = tflearn.local_response_normalization(net)
    net = tflearn.dropout(net, 0.8)
    net = tflearn.fully_connected(net, 128, activation='tanh')
    net = tflearn.dropout(net, 0.8)
    net = tflearn.fully_connected(net, 256, activation='tanh')
    net = tflearn.dropout(net, 0.8)
    net = tflearn.fully_connected(net, 10, activation='linear')

    # Defining other ops using Tensorflow
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=net, labels=Y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)

    # Initializing the variables
    init = tf.global_variables_initializer()

    # Launch the graph
    with tf.Session() as sess:
        sess.run(init)
        batch_size = 128
        for epoch in range(2):  # 2 epochs
            avg_cost = 0.
            total_batch = int(mnist_data.train.num_examples / batch_size)
            for i in range(total_batch):
                batch_xs, batch_ys = mnist_data.train.next_batch(batch_size)
                sess.run(optimizer, feed_dict={X: batch_xs, Y: batch_ys})
                cost = sess.run(loss, feed_dict={X: batch_xs, Y: batch_ys})
                avg_cost += cost / total_batch
                if i % 20 == 0:
                    print("Epoch:", '%03d' % (epoch + 1), "Step:", '%03d' % i,
                          "Loss:", str(cost))
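The example above mixes TFLearn layers with a hand-rolled session loop; TFLearn can also own the whole training loop. A minimal sketch of that fully wrapped style (smaller network, hyperparameters chosen arbitrarily here):

import tflearn
import tflearn.datasets.mnist as mnist

X, Y, testX, testY = mnist.load_data(one_hot=True)
X = X.reshape([-1, 28, 28, 1])
testX = testX.reshape([-1, 28, 28, 1])

# Let TFLearn manage inputs, loss, optimizer, and the training loop
net = tflearn.input_data(shape=[None, 28, 28, 1])
net = tflearn.conv_2d(net, 32, 3, activation='relu')
net = tflearn.max_pool_2d(net, 2)
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='adam', learning_rate=0.01,
                         loss='categorical_crossentropy')

model = tflearn.DNN(net)
model.fit(X, Y, n_epoch=2, validation_set=(testX, testY), show_metric=True)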
Setup
Build Tensorflow
Building Tensorflow allows use of SIMD CPU enhancements like AVX. CUDA 9.2 supports up to GCC 7. To get the Nvidia CUDA libraries, we must set the environment variable NEXTJOURNAL_MOUNT_CUDA in the runtime configuration. Tensorflow can also see some speedups if we give it libjemalloc.
apt-get -qq update
apt-get install --no-install-recommends \
  xutils-dev zlib1g-dev libjemalloc-dev
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 25
update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-7 25
echo "/usr/local/cuda/extras/CUPTI/lib64" > /etc/ld.so.conf.d/cupti.conf
ldconfig

Install TensorRT from the tarfile downloaded in the Appendix. We have to fudge the Python install because the wheel file is minor-version specific for some reason.
conda install protobuf
cd /usr/local
tar zxf NJ__REF_
ln -s TensorRT* tensorrt
echo '/usr/local/tensorrt/lib' > /etc/ld.so.conf.d/tensorrt.conf
ldconfig
cd tensorrt
cp python/tensorrt-4.0.1.6-cp35-cp35m-linux_x86_64.whl \
   python/tensorrt-4.0.1.6-cp37-cp37m-linux_x86_64.whl
pip install python/tensorrt-4.0.1.6-cp37-cp37m-linux_x86_64.whl \
  uff/uff*.whl graphsurgeon/graphsurgeon*.whl

Install dependencies for the pip package build, listed here.
conda install \
  absl-py astor gast protobuf tensorboard termcolor \
  keras-applications keras-preprocessing

The Tensorflow compilation configure script is hardcoded to look for libnccl.so in <nccl_install_dir>/lib, but we have /lib64, so we need to set up some links to redirect it.
mkdir -p /usr/local/nccl_redir
cd /usr/local/nccl_redir
for i in `ls /usr/local/cuda`; do ln -s /usr/local/cuda/$i ./; done
ln -s lib64 lib

Install Bazel. Tensorflow 1.13.2 works with Bazel 0.19.2.
export BAZEL_VERSION=0.19.2
export BAZEL_FILE=bazel-${BAZEL_VERSION}-installer-linux-x86_64.sh
wget --progress=dot:giga \
  https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/$BAZEL_FILE
chmod +x $BAZEL_FILE
./$BAZEL_FILE

Clone the source and check out the release.
git clone https://github.com/tensorflow/tensorflow
cd tensorflow
git checkout v1.13.2

The configure script uses environment variables to do a non-interactive configuration. The march flag set through CC_OPT_FLAGS is of particular interest for CPU-only computation, as it controls which SIMD instruction sets Tensorflow will use, which can have a large performance impact. Some important flag values:
nehalem: Core-i family (circa 2008). Supports MMX, SSE1-4.2, and POPCNT; equivalent to the corei7 march flag pre-GCC5.
sandybridge: Adds AVX (large potential speedups), AES, and PCLMUL, and is the oldest family that the Google Cloud runs (2011). Requires GCC5+.
skylake: Adds a wide variety of SIMD instructions, including AVX2, and is currently the newest family the Google Cloud has. Requires GCC6+.
Also of interest for CPU computation is TF_NEED_MKL. Enabling this compiles Tensorflow to use the Intel Math Kernel Library, which is highly optimized for any CPU the Google Cloud will provide. In Tensorflow the MKL and CUDA are mutually exclusive—MKL is reserved for CPU-optimized builds.
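To decide on a march value, you can inspect which instruction sets the current machine actually advertises; a small sketch, assuming a Linux host with /proc/cpuinfo:

# List which SIMD instruction sets the CPU supports
with open('/proc/cpuinfo') as f:
    flags = next(line for line in f if line.startswith('flags')).split()

for isa in ('sse4_2', 'popcnt', 'avx', 'avx2', 'avx512f'):
    print(isa, isa in flags)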
cd /tensorflow
export TF_ROOT="/opt/tensorflow"
export PYTHON_BIN_PATH="/opt/conda/bin/python"
export PYTHON_LIB_PATH="$($PYTHON_BIN_PATH -c 'import site; print(site.getsitepackages()[0])')"
export PYTHONPATH=${TF_ROOT}/lib
export PYTHON_ARG=${TF_ROOT}/lib

export TF_NEED_GCP=1      # Google Cloud
export TF_NEED_HDFS=1     # Hadoop Filesystem access
export TF_NEED_S3=1       # Amazon S3
export TF_NEED_AWS=0      # Amazon AWS
export TF_NEED_IGNITE=1
export TF_NEED_KAFKA=1    # Apache KAFKA
export TF_NEED_JEMALLOC=1 # Alternative malloc
export TF_NEED_GDR=0      # GPU Direct RDMA
export TF_NEED_VERBS=0    # VERBS RDMA

export TF_NEED_CUDA=1
export CUDA_TOOLKIT_PATH=/usr/local/cuda
export TF_CUDA_VERSION="$($CUDA_TOOLKIT_PATH/bin/nvcc --version | sed -n 's/^.*release \(.*\),.*/\1/p')"
export TF_CUDA_COMPUTE_CAPABILITIES=7.0,6.1,6.0,3.7 # V100, P100, P4, K80
export CUDNN_INSTALL_PATH=/usr/local/cuda
export TF_CUDNN_VERSION="$(sed -n 's/^#define CUDNN_MAJOR\s*\(.*\).*/\1/p' $CUDNN_INSTALL_PATH/include/cudnn.h)"
export TF_NEED_TENSORRT=1 # Nvidia TensorRT
export TENSORRT_INSTALL_PATH=/usr/local/tensorrt
export NCCL_INSTALL_PATH=/usr/local/nccl_redir # Nvidia NCCL
export TF_NCCL_VERSION="$(sed -n 's/^#define NCCL_MAJOR\s*\(.*\).*/\1/p' $NCCL_INSTALL_PATH/include/nccl.h)"
export TF_CUDA_CLANG=0    # Use clang compiler instead of nvcc

export TF_NEED_OPENCL=0
export TF_NEED_OPENCL_SYCL=0
export TF_NEED_ROCM=0
export TF_ENABLE_XLA=0    # Accelerated Linear Algebra JIT compiler
export TF_NEED_MKL=0      # Intel Math Kernel Library
export TF_DOWNLOAD_MKL=0
export TF_NEED_MPI=0      # Message Passing Interface
export TF_SET_ANDROID_WORKSPACE=0

export GCC_HOST_COMPILER_PATH=$(which gcc)
export CC_OPT_FLAGS="-march=sandybridge"

./configure

Finally, the build. This takes about six hours.
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/local/nvidia/lib64"
export CUDNN_INCLUDE_DIR="/usr/local/cuda/include"
export CUDNN_LIBRARY="/usr/local/cuda/lib64/libcudnn.so"
export TMP="/tmp"

cd /tensorflow
bazel build --config=opt --config=cuda --verbose_failures --jobs="auto" \
  --action_env="LD_LIBRARY_PATH=${LD_LIBRARY_PATH}" \
  --action_env="CUDNN_INCLUDE_DIR=${CUDNN_INCLUDE_DIR}" \
  --action_env="CUDNN_LIBRARY=${CUDNN_LIBRARY}" \
  //tensorflow/tools/pip_package:build_pip_package

We'll export this environment just in case anyone wants to play with the compiled result, but the important part here is the creation of a .whl wheel file which can be installed via pip.
cd /tensorflow
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
cp /tmp/tensorflow_pkg/tensorflow*.whl /results/

Install Tensorflow and Frontends to Environment
Finally, we'll install the package we created in a clean environment, plus the TFLearn and standalone Keras frontends.
conda install -c anaconda -c intel \
  absl-py astor gast protobuf termcolor mock pbr \
  keras-applications keras-preprocessing \
  h5py grpcio markdown werkzeug cython jemalloc \
  pyyaml graphviz pydot # for use with Keras
conda clean -qtipy

cd /usr/local
tar zxf NJ__REF_
ln -s TensorRT* tensorrt
echo '/usr/local/tensorrt/lib' > /etc/ld.so.conf.d/tensorrt.conf
cd tensorrt
cp python/tensorrt-4.0.1.6-cp35-cp35m-linux_x86_64.whl \
   python/tensorrt-4.0.1.6-cp37-cp37m-linux_x86_64.whl
pip install python/tensorrt-4.0.1.6-cp37-cp37m-linux_x86_64.whl \
  uff/uff*.whl graphsurgeon/graphsurgeon*.whl

pip install NJ__REF_ \
  keras git+https://github.com/tflearn/tflearn.git

echo "/usr/local/cuda/extras/CUPTI/lib64" > /etc/ld.so.conf.d/cupti.conf
ldconfig
du -hsx /

Appendix
Download TensorRT. The link needs to be pulled from the console when downloading off the Nvidia website.
wget --progress="dot:giga" \
  'https://developer.download.nvidia.com/compute/machine-learning/tensorrt/secure/4.0/ga/TensorRT-4.0.1.6.Ubuntu-16.04.4.x86_64-gnu.cuda-9.2.cudnn7.1.tar.gz?I7uYxt0PQMJN3bjc7DP1uq62yAu4xIOcH8L78_k2DARdAwlb1rptdRiHUEO00WgJU9owKbFszuUK0eZfiSdlZYwns2mNKUazshNJJeR-PQgNpXb4M4U8RRNdWP66k071yy6f1xemg5uhAAAJHC2gQATo0jEzBenlnn8QKdKEvL0aSgZLvy8HUesQ3xq5PK31oinD8Y4d5rP1GCCgxc5Bg7HzQWTnE3DlrqPk3HoZMq8KDA'
fn=`ls TensorRT*`
mv "$fn" "/results/${fn%\?*}"