
Playing around with neural networks – Python version

In my last post I said that I would try to replicate the code in Python. Well, here it is.

It is a first attempt, and unfortunately the predictive power of the resulting network is awful (it’s even worse than a random guess…). I need to explore the module’s options more deeply in order to understand where the difference between the R network and this one lies.

As you can see if you compare with the other version, numpy is a real help when it comes to reformatting data: there is no need for a custom vectorization function, and the image is imported directly in the right format.

Pybrain

I used the pybrain module, a child project of scipy itself. The syntax is quite easy to pick up, and the module is written so that you can easily create the network architecture that best suits your needs, by specifying a specific activation function for any layer or by playing with a whole range of other parameters.

You can also execute each iteration of the training process individually, which lets you control the process more precisely.
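For example, here is a minimal sketch of driving the training by hand (it assumes a network net and a dataset ds built as shown further down; the error threshold is an arbitrary choice):

trainer = BackpropTrainer(net, dataset=ds)
for epoch in range(100):
    error = trainer.train() # one pass over the dataset, returns the current error
    print(epoch, error)
    if error < 0.01: # arbitrary stopping threshold for this sketch
        break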

The DataSet objects you’ll be using to store your inputs and outputs are formatted just the right way for the kind of data your problem deals with. You can even split them randomly into samples, obtaining a training and a testing set with the .splitWithProportion() method.
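For instance, a 75/25 split looks like this (a sketch, assuming ds_full has already been filled):

ds_train, ds_test = ds_full.splitWithProportion(0.75) # 75% training, 25% testing
print(len(ds_train), len(ds_test))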

If, like me, you’re running Python 3.3, you can still use the module; just switch the

reduce()

call on line 250 of the file /pybrain/supervised/trainers/backprop.py to

functools.reduce()

(and don’t forget to import functools).

The code

Start by importing numpy, skimage’s imread, and the parts of pybrain we will need to build and train our network.

import numpy as np
from skimage.io import imread
from pybrain.tools.shortcuts import buildNetwork
from pybrain.supervised.trainers import BackpropTrainer
from pybrain.datasets import ClassificationDataSet
from pybrain.structure.modules import SoftmaxLayer
from pybrain.structure import TanhLayer

Next, instantiate two ClassificationDataSet objects, specifying the structure of the data they are going to hold (SupervisedDataSet and SequentialDataSet are also available), then fill them with their corresponding data.

category = 0
shapes = ["rectangle", "triangle", "circle"]

ds_training = ClassificationDataSet(1024, nb_classes=3, class_labels=shapes)
ds_testing = ClassificationDataSet(1024, nb_classes=3)

for shape in shapes:
    for i in range(15):
        image = imread('C:/.../Neural Networks/Visual classification/shapes/training/'+shape+str(i+1)+'.png', as_grey=True, plugin=None, flatten=None)
        image_vector = image.flatten() # make it a 1-dimensional array
        ds_training.appendLinked(image_vector, [category]) #append the (input, output) pair to the dataset
    category+=1

category = 0

for shape in shapes:
    for i in range(8):
        image = imread('C:/.../Neural Networks/Visual classification/shapes/testing/'+shape+str(i+1)+'.png', as_grey=True, plugin=None, flatten=None)
        image_vector = image.flatten()
        ds_testing.appendLinked(image_vector, [category])
    category+=1

To check how many members of each class are in each dataset and to inspect the structure of the output:

print(ds_training.calculateStatistics())
print(ds_testing.calculateStatistics())
print(ds_training.getClass(0))
print(ds_training.getField('target'))

Recall that in the case of a classification problem, the number of output nodes needs to equal the number of classes. So we have to reorganize our output into 3 columns, with the value 1 in the column corresponding to the right class and 0 everywhere else.

ds_training._convertToOneOfMany(bounds=[0, 1])
ds_testing._convertToOneOfMany(bounds=[0, 1])
print(ds_training.getField('target'))
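To see concretely what the conversion does, here is a tiny hand-rolled sketch of the same one-of-many encoding (a hypothetical helper, not part of pybrain):

def one_of_many(category, nb_classes):
    # build a row of zeros with a 1 at the class index
    row = [0] * nb_classes
    row[category] = 1
    return row

print(one_of_many(1, 3)) # [0, 1, 0]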

Now for the network. buildNetwork() lets you specify the architecture: you can choose as many hidden nodes as you like and set the activation function of any layer (linear, tanh, sigmoid, Gaussian, custom, etc.). The activation functions and the other building blocks live in pybrain.structure.
We then parameterize the trainer and set it to work until convergence is reached (if convergence is ever reached).

net = buildNetwork(1024, 12, 12, 3, hiddenclass=TanhLayer, outclass=SoftmaxLayer)
trainer = BackpropTrainer(net, dataset=ds_training, verbose=True, learningrate=0.01)

trainer.trainUntilConvergence()
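Note that trainUntilConvergence() holds out part of the dataset for validation (25% by default) and stops when the validation error stops improving; if you’re worried it might never stop, you can also cap the number of epochs. A sketch, with an arbitrary cap:

trainer.trainUntilConvergence(maxEpochs=100) # stop after 100 epochs at the latest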

Finally, let’s test our model on the testing data, because we’re so eager to see how badly it will perform (yay):

out = net.activateOnDataset(ds_testing)
out = out.argmax(axis=1) # pick the class with the highest output activation
print(out)
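To put a number on it, compare the predictions with the true classes using pybrain’s percentError helper (a sketch; the 'class' field keeps the original labels after _convertToOneOfMany()):

from pybrain.utilities import percentError

true_classes = ds_testing.getField('class') # original labels, preserved by _convertToOneOfMany()
print('error: %.2f%%' % percentError(out, true_classes))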

I will come back to this post to improve the code and get it to work properly, but it’s already a starting point if you want to give it a try!

References:

http://www.pybrain.org/
