PatWie - Blog
June 5, 2017
The GPUs have been running for several weeks, and you got angry emails from your colleagues about the occupied compute machines. At this moment your mind struggles: 'Did I contribute to global climate change?'. Nonetheless, the training error decreased, and the cross-validation results put you in a happy mood. Your model's performance surpassed all competitors' methods. Then it is time to deploy your model!
June 4, 2017
In the last post, I covered a nice way of producing training data on the fly. In this post, we are going to build a model with trainable parameters to predict high-resolution MNIST digits.
March 1, 2017
There are many libraries/wrappers for TensorFlow claiming to make it easy to apply to your problems. Most of them oversimplify the usage to the point where they are no longer flexible enough. Just take a look at Keras, PrettyTensor, TfSlim, sugarTensor, tflearn, … . There are probably many more libraries doing the same thing. But they have been looking at the wrong problem all along. It is not hard to write a Conv2D layer or a ReLU layer (yes, people actually wrap tf.nn.relu). So writing a layer is not the issue; efficient training is! Yet so many people think they need to provide yet another TF wrapper for common layers, and they make the same mistake again and again. I care about speed during training.
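To illustrate how little work such a wrapper actually saves, here is a minimal sketch of a ReLU in plain Python (my own illustration, not any particular library's API):

```python
def relu(xs):
    """Rectified linear unit: element-wise max(0, x)."""
    return [max(0.0, x) for x in xs]

print(relu([-2.0, -0.5, 0.0, 1.5, 3.0]))  # -> [0.0, 0.0, 0.0, 1.5, 3.0]
```

If a "layer" boils down to a one-liner like this, wrapping it adds little value; the real engineering effort sits in the training loop and the input pipeline.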
January 4, 2017
As ‘deep learning’ is simply a nicer slogan for ‘non-linear optimization’, we will now look at how to optimize functions using TensorFlow without computing all these derivatives by hand.
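For contrast, here is what the manual approach looks like on a tiny problem: gradient descent on f(x) = (x - 3)², with the gradient f'(x) = 2(x - 3) derived by hand. This bookkeeping is exactly what TensorFlow's automatic differentiation takes off your hands (a plain-Python sketch, not code from the post itself):

```python
def f(x):
    """Objective to minimize: a simple parabola with minimum at x = 3."""
    return (x - 3.0) ** 2

def grad_f(x):
    # Derivative computed by hand: d/dx (x - 3)^2 = 2 * (x - 3)
    return 2.0 * (x - 3.0)

x = 0.0    # initial guess
lr = 0.1   # learning rate
for _ in range(100):
    x -= lr * grad_f(x)  # one gradient-descent step

print(round(x, 4))  # converges toward the minimum at x = 3
```

For a parabola the hand-derived gradient is trivial, but for a deep network it is not, which is why automatic differentiation matters.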
January 2, 2017
During this term, I gave an introduction to TensorFlow at the university. Most introduction guides just rephrase the official MNIST example from the TensorFlow documentation without any additional information. And this is definitely not the best way to get into working with TF.
To understand what happens behind the scenes, you need to grasp the entire concept of TensorFlow and start with basic examples.