Announcing neural2d, a neural net optimized for image processing

November 24, 2014
Neural2d is an open-source neural net simulator with features for image processing.

Links

Video introduction (YouTube, 11 min.):  https://www.youtube.com/watch?v=yB43jj-wv8Q
Landing page with links: http://neural2d.net

Features

✔ Optimized for 2D image data — input data can be read from .bmp image files
✔ Neuron layers can be abstracted as 1D or 2D arrangements of neurons
✔ Network topology is defined in a text file (see the example below)
✔ Neurons in layers can be fully or sparsely connected
✔ Selectable transfer function per layer
✔ Adjustable or automatic training rate (eta)
✔ Optional momentum (alpha) and regularization (lambda)
✔ Any layer(s) can be configured as convolution filters
✔ Standalone console program
✔ Simple, heavily-commented code, < 3000 lines, suitable for prototyping, learning, and experimentation
✔ Optional web-browser-based GUI controller
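
For example, a topology file for a small image net might look like the sketch below. This is illustrative only — the exact keywords are documented on neural2d.net:

    input size 32x32
    layerHidden size 32x32 from input radius 8x8
    output size 2 from layerHidden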

posted in C++ by Dave


9 Comments to "Announcing neural2d, a neural net optimized for image processing"

  1. IMAGE SENSOR wrote:

    Thanks for this nice article. I like the post.

  2. Kapila Gunasekera wrote:

    I am a postdoctoral research fellow in Cancer Biology. Before moving into molecular biology, I was an electrical and electronic engineer by education and some work experience, but I developed an interest in experimental biology while following a master's course, in which I had to study a number of biology-related subjects and practicals.

    I am interested in building a neural-network-based machine learning algorithm to classify whether the spectrum of a peptide, with a post-translational modification assigned to one of its residues, belongs to one of three categories. So it's a classification problem with around 9 inputs.

    Your comments and advice would be greatly appreciated.

    Best regards

    Kapila

  3. Dave wrote:

    Hi Kapila, sounds like an interesting project. It sounds like the output layer of your neural net could be just three neurons corresponding to the three categories you’re recognizing. The input layer of the net would have nine inputs. Your training data would be a set of sample cases that train the net to output a high signal (like +1.0) on one of the outputs to indicate which class was recognized, while outputting the inverse (like -1.0) on the other outputs. The number of hidden layers and the number of neurons that you will need is hard to predict, so you may need to experiment a bit. Put it all in a loop and keep presenting the training samples to the backprop net until it converges on a solution — or not. Keep fiddling with the network topology and other parameters to find the best results.
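
    Roughly, that loop could look like the sketch below. It's only a sketch — it assumes the Net class from my C++ tutorial (feedForward(), backProp(), getRecentAverageError()), and the Sample struct here is just illustrative:

        #include <vector>

        // Hypothetical container for one training case:
        struct Sample {
            std::vector<double> inputs;   // the 9 input values
            std::vector<double> targets;  // e.g. { 1.0, -1.0, -1.0 } for the first class
        };

        void train(Net &net, const std::vector<Sample> &trainingSet)
        {
            double error;
            do {
                // Present every training sample, then check the running error:
                for (const Sample &s : trainingSet) {
                    net.feedForward(s.inputs);   // forward pass
                    net.backProp(s.targets);     // nudge the weights toward the targets
                }
                error = net.getRecentAverageError();
            } while (error > 0.05);              // "converged" threshold — tune to taste
        }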

  4. Kapila Gunasekera wrote:

    Hi Dave,

    Thank you for your reply and valuable info and suggestions. I will keep fiddling with the network topology. What would you suggest as a reasonable starting number of hidden layers? Would it be logical to include, say, hidden layers equal to the number of inputs (in my case, n inputs and therefore n hidden layers)?

    I thought the same way for the outputs, i.e., to toggle among 1.0, 0.0, and -1.0 classifications.

    Would it be alright if I keep posting in this post, or is there another place I could post my questions and achievements?

    Best regards

    Kapila

  5. Dave wrote:

    Hi Kapila, The number of hidden neurons a net needs is not so much a function of the number of input neurons. It’s more related to how detailed a curve fit you need to make. When you train a neural net, you’re just creating a function that fits a curve to some data. If the nature of your data requires only a rough, smooth curve fit, then you won’t need very many hidden neurons. But if your problem requires a higher-order, more detailed curve fit, then you’ll need more hidden neurons to represent that. Too many hidden layers and neurons introduce the risk of over-fitting, so personally I’d suggest starting out with a very simple topology, like just a single hidden layer, and see how training goes, adding hidden layers or neurons only when experiments show that they improve the net accuracy.
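
    For your 9-input, 3-class problem, a simple starting point in neural2d topology terms might look like the sketch below (illustrative only — check the topology-file documentation for the exact syntax):

        input size 9
        layerHidden size 12 from input
        output size 3 from layerHidden

    The single hidden layer of 12 neurons here is just a guess to start from; grow or shrink it based on your experiments.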

    Feel free to post here for anything related to neural2d.

  6. Kapila Gunasekera wrote:

    Hi Dave,

    I used your tutorial script with the following topology.

    input neurons: 9

    hidden layers: 13 (I found this provides the best estimation with 250 training points). I included 1/4 of the training points with output 1, another 1/4 with output -1, and 1/2 with output 0.

    output neurons: 1

    And if we look at the last 4 estimations, it seems that this topology with 250 data points does a pretty good job.

    Training data number – 247
    Input values : 0 0 0 0 0 0 0 0 0
    Predicted spectra bin : -0.947972
    Manually annotated spectra bin : -1
    NN spectral classification : Bronz
    NN recent average estimation error : 0.0532736

    Training data number – 248
    Input values : 0.2 0.166667 0.142857 0.125 0.111111 0 0 0 0
    Predicted spectra bin : -0.00153653
    Manually annotated spectra bin : 0
    NN spectral classification : Silver
    NN recent average estimation error : 0.0527614

    Training data number – 249
    Input values : 0 0 0 0 0.2 0.166667 0.142857 0.125 0.111111
    Predicted spectra bin : -0.00233952
    Manually annotated spectra bin : 0
    NN spectral classification : Silver
    NN recent average estimation error : 0.0522622

    Training data number – 250
    Input values : 0.333333 7 7 0.25 0.2 0.166667 0.142857 0.125 0.111111
    Predicted spectra bin : 0.999764
    Manually annotated spectra bin : 1
    NN spectral classification : Gold
    NN recent average estimation error : 0.0517471

    What do you think?

    Best regards

    Kapila

  7. Dave wrote:

    Very nice @Kapila, it looks like you got a trained net :-)

  8. Kapila wrote:

    Hi Dave,

    I am constructing the validation and testing section of my NN and have a few questions. Since I could use another set of manually classified data during validation, I could ask the script to output both the predicted output and the actual output, together with the average error between them, without performing back propagation. That means I should disable back propagation during both validation and testing, right?

    Best regards

    Kapila

  9. Dave wrote:

    Yes, you’re on the right track.
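
    In code, a validation or testing pass boils down to something like this sketch (reusing the illustrative Sample struct and the tutorial Net assumed in the training sketch above — feed forward and read the results, but never call backProp()):

        #include <cmath>
        #include <vector>

        double validate(Net &net, const std::vector<Sample> &validationSet)
        {
            double sumSqError = 0.0;
            for (const Sample &s : validationSet) {
                net.feedForward(s.inputs);    // forward pass only
                std::vector<double> results;
                net.getResults(results);      // read the predicted output
                double err = results[0] - s.targets[0];
                sumSqError += err * err;      // accumulate error — no backProp() here
            }
            return std::sqrt(sumSqError / validationSet.size());  // RMS error
        }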
