Beautiful diagrams

VGG is a convolutional neural network for object recognition developed and trained by Oxford’s Visual Geometry Group. And what is VGG-19? It is the variant of this network that is 19 layers deep (16 convolutional layers plus 3 fully connected ones), and it can classify images into 1000 object categories.


Wojciech Rosinski performed a training time comparison for three popular frameworks: TensorFlow 1.4.0, Keras 2.1.1, and PyTorch 0.2.0+f964105.

Training is performed for 10 epochs. Each model is trained with both Adam (an adaptive learning rate optimization algorithm, first published in 2014) and SGD (stochastic gradient descent), with batch sizes of 4 and 16, which results in 4 runs per model per framework. PyTorch wins, at least on the VGG-19 network tests:
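The benchmark grid described above (2 optimizers × 2 batch sizes) might be set up like this in PyTorch. This is a sketch, not the benchmark's actual code; the tiny linear model stands in for VGG-19 and the learning rates are illustrative:

```python
import itertools
import torch

def make_model():
    return torch.nn.Linear(10, 2)  # stand-in for VGG-19

optimizers = {
    "adam": lambda params: torch.optim.Adam(params),          # default lr=1e-3
    "sgd":  lambda params: torch.optim.SGD(params, lr=0.01),  # SGD requires an explicit lr
}
batch_sizes = [4, 16]

# One run per (optimizer, batch size) combination: 4 runs per model per framework
runs = []
for (opt_name, make_opt), bs in itertools.product(optimizers.items(), batch_sizes):
    model = make_model()
    runs.append((opt_name, bs, model, make_opt(model.parameters())))

print(len(runs))  # 4
```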





Just tell me: which one is better, TensorFlow or PyTorch?

TensorFlow has been developed by Google, and PyTorch has been developed by Facebook. Different CNN models are compared by training duration:


Keras… One more framework? Do not be surprised, there are a few more.


In PyTorch (unlike TensorFlow), you can define dynamic computational graphs. This is helpful, for example, when using variable-length inputs in RNNs.
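A minimal sketch of what "dynamic" means here: the graph is built operation by operation as the code runs, so it can depend on the length of the input, and gradients still flow through it:

```python
import torch

def accumulate(x):
    # x: 1-D tensor of arbitrary length; the graph grows with the input,
    # one addition node per element, decided at runtime
    h = torch.zeros(1, requires_grad=True)
    total = h
    for t in range(x.shape[0]):
        total = total + 2.0 * x[t]
    return total

short = accumulate(torch.tensor([1.0, 2.0]))            # graph with 2 steps
long = accumulate(torch.tensor([1.0, 2.0, 3.0, 4.0]))   # graph with 4 steps
long.backward()  # autograd traverses the graph that was just built on the fly
```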

TensorFlow offers real-time visualization of computational graphs through a tool called TensorBoard, which gives us a pictorial representation of the neural network.




Classic problem in dynamics: Inverted pendulum

It is often implemented with the pivot point mounted on a cart that can move horizontally under the control of an electronic servo system. This setup is frequently called a Cart-Pole. The following model learns to control a real Cart-Pole system from scratch in only 7 trials and 17.5 seconds:

An inverted pendulum in which the pivot is oscillated rapidly up and down can be stable in the inverted position. This is called Kapitza’s pendulum, after the Russian physicist Pyotr Kapitza, who first analysed it.

The Cart-Pole Python model is implemented as an OpenAI Gym environment:

The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.
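The interaction loop described above can be sketched with a random agent. This assumes the `gym` package is installed; note that `reset` and `step` return different tuples depending on the Gym version, which the sketch handles explicitly:

```python
import gym

env = gym.make("CartPole-v0")

# Older Gym returns just the observation; newer Gym returns (observation, info)
result = env.reset()
obs = result[0] if isinstance(result, tuple) else result

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # random choice: push cart left (0) or right (1)
    out = env.step(action)
    if len(out) == 5:   # newer Gym: (obs, reward, terminated, truncated, info)
        obs, reward, terminated, truncated, _ = out
        done = terminated or truncated
    else:               # older Gym: (obs, reward, done, info)
        obs, reward, done, _ = out
    total_reward += reward  # +1 for every timestep the pole remains upright

env.close()
print(total_reward)
```

A random agent typically balances the pole for only a couple of dozen timesteps before the angle or position limit ends the episode.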

The model is implemented using the concept of Deep Q-Learning (DQN).
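To make the idea concrete, here is a minimal DQN sketch for Cart-Pole in PyTorch: a small Q-network, a replay buffer, epsilon-greedy action selection, and one gradient step on the temporal-difference error against a target network. The network size and hyperparameters are illustrative assumptions, not the settings of the model above:

```python
import random
from collections import deque

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a 4-dim Cart-Pole state to Q-values for the 2 actions."""
    def __init__(self, n_obs=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_obs, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

q_net = QNetwork()
target_net = QNetwork()
target_net.load_state_dict(q_net.state_dict())  # target starts as a copy
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)                   # replay buffer of transitions
gamma, epsilon = 0.99, 0.1

def select_action(state):
    # Epsilon-greedy: explore with probability epsilon, else pick argmax Q
    if random.random() < epsilon:
        return random.randrange(2)
    with torch.no_grad():
        return int(q_net(torch.as_tensor(state)).argmax())

def train_step(batch_size=32):
    # One gradient step on the TD error over a random minibatch
    if len(buffer) < batch_size:
        return None
    batch = random.sample(buffer, batch_size)
    s, a, r, s2, done = map(torch.as_tensor, zip(*batch))
    q = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # TD target: reward plus discounted best Q of next state (0 if terminal)
        target = r.float() + gamma * target_net(s2.float()).max(1).values * (1 - done.float())
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

In a full training loop one would store each `(state, action, reward, next_state, done)` transition in `buffer`, call `train_step()` after every environment step, and periodically copy `q_net`'s weights into `target_net`.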

Wow… It can even be a double inverted pendulum; a comparison between two controllers starts at 2:48.