# Tutorials

The following posts are intended for people who want to understand very basic TensorFlow (TF) workflows.

### TensorFlow and C++ - Once And For All

Browsing Stack Overflow for the tags tensorflow and c++, one might get the impression that combining the two is a direct road to hell. In fact, it is quite simple to use both together without ending up in the loony bin.

### Introduction TensorFlow - Understanding the Computation Graph

Most introduction guides just rephrase the official MNIST example from the TensorFlow documentation without adding any information of their own. And the official guide is definitely not the best way to get into working with TensorFlow (TF). To understand what happens behind the scenes, you need to understand TensorFlow's core concept, the symbolic computation graph, and start with very basic examples.
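To make that separation concrete, here is a minimal sketch (using the TF 1.x API these posts are written against): building the graph only defines symbolic operations, and nothing is computed until a session runs it. The names and values are purely illustrative.

```python
import tensorflow as tf

# Graph construction: a, b, c are symbolic nodes, not values.
a = tf.placeholder(tf.float32, shape=[], name="a")
b = tf.placeholder(tf.float32, shape=[], name="b")
c = a * b + 2.0  # still symbolic; no multiplication has happened yet

with tf.Session() as sess:
    # Only now is the graph actually evaluated, with concrete values fed in.
    print(sess.run(c, feed_dict={a: 3.0, b: 4.0}))  # prints 14.0
```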

### Introduction TensorFlow - Optimization of Objective Functions

As ‘deep learning’ is simply a nicer slogan for ‘non-linear optimization + data’, we will now consider how to optimize “things” using TensorFlow without computing all the derivatives by hand.
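As a small taste of that workflow, here is a minimal sketch (TF 1.x API) that minimizes the toy objective f(x) = (x - 5)^2; the gradient comes from TensorFlow's automatic differentiation, not from a hand-derived formula. The objective and hyperparameters are illustrative stand-ins.

```python
import tensorflow as tf

# Toy objective: f(x) = (x - 5)^2, minimized at x = 5.
x = tf.Variable(0.0, name="x")
loss = tf.square(x - 5.0)

# TF derives d(loss)/dx automatically and applies gradient descent steps.
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run(x))  # close to 5.0
```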

### Introduction TensorPack - Data Prefetching

There are many libraries and wrappers for TensorFlow claiming to be easy to apply to your problems. Most of them oversimplify the usage to the point of not being flexible enough. Just take a look at Keras, PrettyTensor, TfSlim, sugarTensor, tflearn, … and there are probably many more doing the same thing. But they have all been looking at the wrong problem. It is not hard to write a Conv2D layer or a ReLU layer (yes, people actually wrap tf.nn.relu). So writing layers is not the issue, efficient training is: the entire interplay between CPU processing and GPU processing. You do not want one of these units to sit bored while waiting for the other. Yet so many people think they need to provide another TF wrapper for common layers, and they make the same mistake again and again. What I care about is speed during training, which is what data prefetching addresses; a minimal sketch of the idea follows below.
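To illustrate the principle without depending on TensorPack's own API, here is a minimal sketch in plain TensorFlow using tf.data (available from TF 1.4): the CPU-side map/shuffle/batch work is overlapped with the consumer via prefetch, so the GPU is not left waiting for input. The dummy data and parameter values are assumptions for illustration only.

```python
import tensorflow as tf

def preprocess(x):
    # CPU-side work that should overlap with GPU compute.
    return tf.cast(x, tf.float32) / 255.0

# Illustrative dummy data standing in for a real dataset.
dataset = (tf.data.Dataset.from_tensor_slices(tf.zeros([1000, 28, 28], tf.uint8))
           .map(preprocess, num_parallel_calls=4)  # parallel CPU preprocessing
           .shuffle(buffer_size=256)
           .batch(32)
           .prefetch(2))  # keep batches buffered ahead of the consumer

batch = dataset.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    first = sess.run(batch)  # while this batch is consumed, the next is prepared
```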