Using Torch for an autoencoder

There are many different DNN frameworks. I tried several of them when I taught DNNs earlier this year. Torch gave me the impression that it is very easy to install and use. I mainly used Python/Theano while teaching, since I wanted my students to program their networks, the feed-forward pass, and backpropagation all from scratch.

Now, I am testing ideas (not only CNNs). Torch seems a capable framework that is also easy to work with, so I turned back and started to learn Torch.

Torch has very good online tutorials, demos, and examples. Many of them are for CNNs and RNNs.

I want to try some ideas with autoencoders, so I started looking for autoencoder examples and basic syntax examples.

To implement any DNN, there are four steps.
1. Prepare data
2. Construct DNN
3. Train DNN
4. Test the DNN

1. Prepare data
1.1. load in data from a file
Usually, everything starts with a data file, and the format of that file should be known. For example, I have a Torch tensor data file that contains four columns: the first three columns are the inputs and the last is the target output. I can load it using torch.load:
data = torch.load('example.t7')

If the data file I have is a CSV file, which is very common, I need to convert it to a t7 file. There is a csv2tensor package for this. First,
require 'csv2tensor'

Then, call the following to load the CSV file as a tensor:
data = csv2tensor.load('test.csv')

If the csv2tensor package is not installed, run
luarocks install csv2tensor
to install it.

If I want to load an image file, I need the image package.
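For example, loading an image looks like this (the file name here is just a placeholder):

```lua
-- The image package can be installed with: luarocks install image
require 'image'

-- image.load returns a tensor of size channels x height x width,
-- with pixel values scaled to [0, 1] by default.
img = image.load('photo.png')  -- placeholder file name
print(img:size())
```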

If I want to use nn.StochasticGradient, I need to make sure the dataset object has a :size() function and can be indexed with [i], where each element is an {input, target} pair.
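A minimal sketch of such a dataset wrapper (assuming the training tensor is named train, as in the split further below; for an autoencoder the target is the input itself):

```lua
-- Wrap a 2D tensor so nn.StochasticGradient can consume it.
-- dataset:size() returns the number of examples, and dataset[i]
-- returns an {input, target} pair; for an autoencoder the target
-- is simply the input again.
local dataset = {}
function dataset:size()
  return train:size(1)
end
setmetatable(dataset, {__index = function(self, i)
  local x = train[i]
  return {x, x}
end})
```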

1.2 Preprocessing
I need to standardize the data, so I want to make the training data's mean 0.0 and standard deviation 1.0.

I have to be careful here: I don't want to include the test data when I compute the normalization statistics, so I need to split the data first.

For example, I want to use 20% of the data for testing and 80% for training. Here the data do not contain labels; an autoencoder does not need any labels.

total_N = data:size(1)       -- number of rows; data:size() alone returns a LongStorage
test_frac = 0.2
test_n = math.floor(total_N * test_frac)
train_n = total_N - test_n

Then I can extract the training data:
train = data:narrow(1, 1, train_n)

Using the training data, I estimate the mean and standard deviation:
mean = train:mean()
stdv = train:std()

Now, using mean and stdv, we can normalize the training data (note the result gets a new name, so the example count train_n is not overwritten):
train_norm = (train - mean) / stdv
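The held-out test rows must be normalized with the same mean and stdv estimated from the training set, never with their own statistics. A sketch, using the split variables from above:

```lua
-- Extract the remaining rows as the test set and normalize them
-- using the statistics estimated from the training data only.
test = data:narrow(1, data:size(1) - test_n + 1, test_n)
test_norm = (test - mean) / stdv
```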

2. Setup Network
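As a starting point for this step, a minimal autoencoder for the three-input example file above can be built with the nn package. The 2-unit bottleneck and the Tanh nonlinearity are illustrative choices, not requirements:

```lua
require 'nn'

-- Minimal autoencoder: 3 inputs -> 2-unit bottleneck -> 3 outputs.
autoencoder = nn.Sequential()
autoencoder:add(nn.Linear(3, 2))   -- encoder
autoencoder:add(nn.Tanh())
autoencoder:add(nn.Linear(2, 3))   -- decoder

-- Mean squared error between the reconstruction and the input.
criterion = nn.MSECriterion()
```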
