Announcing neural2d, a neural net optimized for image processing

Neural2d is an open-source neural net simulator with features for image processing.


Video introduction (YouTube, 11 min.):
Landing page with links:


✔ Optimized for 2D image data — input data can be read from .bmp image files
✔ Neuron layers can be abstracted as 1D or 2D arrangements of neurons
✔ Network topology is defined in a text file
✔ Neurons in layers can be fully or sparsely connected
✔ Selectable transfer function per layer
✔ Adjustable or automatic training rate (eta)
✔ Optional momentum (alpha) and regularization (lambda)
✔ Any layer(s) can be configured as convolution filters
✔ Standalone console program
✔ Simple, heavily-commented code, < 3000 lines, suitable for prototyping, learning, and experimentation
✔ Optional web-browser-based GUI controller


  1. I am a postdoctoral research fellow in Cancer Biology. Before moving into molecular biology, I was an electrical and electronic engineer by education and some work experience, but I developed an interest in experimental biology during my master's course, in which I had to study a number of biology-related subjects and practicals.

    I am interested in building a neural-network-based machine learning algorithm to classify whether the spectrum of a peptide, having a residue with a post-translational modification assigned, belongs to one of three categories. So it's a classification problem, and I have around 9 inputs.

    Your comments and advice would be greatly appreciated.

    Best regards


    1. Hi Kapila, sounds like an interesting project. It sounds like the output layer of your neural net could be just three neurons corresponding to the three categories you’re recognizing. The input layer of the net would have nine inputs. Your training data would be a set of sample cases that train the net to output a high signal (like +1.0) on one of the outputs to indicate which class was recognized, while outputting the inverse (like -1.0) on the other outputs. The number of hidden layers and the number of neurons that you will need is hard to predict, so you may need to experiment a bit. Put it all in a loop and keep presenting the training samples to the backprop net until it converges on a solution — or not. Keep fiddling with the network topology and other parameters to find the best results.
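      The ±1 target encoding described above can be sketched in C++ (a minimal illustration only; the function name makeTargets is hypothetical and not part of neural2d):

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <vector>

      // Build the target vector for one training sample: +1.0 on the
      // output neuron for the true class, -1.0 on all the others.
      std::vector<double> makeTargets(std::size_t numClasses, std::size_t trueClass) {
          std::vector<double> targets(numClasses, -1.0);
          targets[trueClass] = 1.0;
          return targets;
      }

      int main() {
          // Three categories; this sample belongs to class 1 (0-based):
          std::vector<double> t = makeTargets(3, 1);
          assert(t[0] == -1.0 && t[1] == 1.0 && t[2] == -1.0);
          return 0;
      }
      ```

      During training, each sample's target vector would be compared against the three output neurons until the backprop loop converges.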

  2. Hi Dave,

    Thank you for your reply and the valuable info and suggestions. I will keep fiddling with the network topology. What would you suggest as a reasonable starting number of hidden layers? Would it be logical to include, say, a number of hidden layers equal to the number of inputs (in my case, n inputs and therefore n hidden layers)?

    I thought the same way for the outputs, i.e., to toggle among 1.0, 0.0, and -1.0 classifications.

    Would it be alright if I keep posting in this thread, or is there another place I could post my questions and achievements?

    Best regards


    1. Hi Kapila, The number of hidden neurons a net needs is not so much a function of the number of input neurons. It’s more related to how detailed a curve fit you need to make. When you train a neural net, you’re just creating a function that fits a curve to some data. If the nature of your data requires only a rough, smooth curve fit, then you won’t need very many hidden neurons. But if your problem requires a higher-order, more detailed curve fit, then you’ll need more hidden neurons to represent that. Too many hidden layers and neurons introduces the risk of over-fitting, so personally I’d suggest starting out with a very simple topology, like just a single hidden layer, and see how training goes, adding hidden layers or neurons only when experiments show that they improve the net accuracy.
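      A very simple starting topology of the kind suggested above might be expressed in a neural2d topology text file roughly like this (the layer names and sizes here are illustrative guesses; check the neural2d documentation for the exact syntax):

      ```
      input size 9
      layerHidden size 12 from input
      output size 3 from layerHidden
      ```

      From there, hidden neurons or layers would be added only when experiments show they improve accuracy.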

      Feel free to post here for anything related to neural2d.

  3. Hi Dave,

    I used your tutorial script with the following topology.

    input neurons: 9

    hidden layers: 13 (I found this provides the best estimation with 250 training points). I made 1/4 of the samples have output 1, another 1/4 have output -1, and 1/2 have output 0.

    output neurons: 1

    And if we look at the last 4 estimations, it seems that this topology with 250 data points does a pretty good job.

    Training data number – 247
    Input values : 0 0 0 0 0 0 0 0 0
    Predicted spectra bin : -0.947972
    Manually annotated spectra bin : -1
    NN spectral classification : Bronz
    NN recent average estimation error : 0.0532736

    Training data number – 248
    Input values : 0.2 0.166667 0.142857 0.125 0.111111 0 0 0 0
    Predicted spectra bin : -0.00153653
    Manually annotated spectra bin : 0
    NN spectral classification : Silver
    NN recent average estimation error : 0.0527614

    Training data number – 249
    Input values : 0 0 0 0 0.2 0.166667 0.142857 0.125 0.111111
    Predicted spectra bin : -0.00233952
    Manually annotated spectra bin : 0
    NN spectral classification : Silver
    NN recent average estimation error : 0.0522622

    Training data number – 250
    Input values : 0.333333 7 7 0.25 0.2 0.166667 0.142857 0.125 0.111111
    Predicted spectra bin : 0.999764
    Manually annotated spectra bin : 1
    NN spectral classification : Gold
    NN recent average estimation error : 0.0517471
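    The bin assignment in the log above amounts to snapping the net's single continuous output to the nearest of the three targets. A sketch of that mapping (the label strings mirror the log output verbatim; the function name and thresholds are hypothetical):

    ```cpp
    #include <cassert>
    #include <string>

    // Map the net's single output to the nearest target bin and its label:
    // below -0.5 -> bin -1, above +0.5 -> bin +1, otherwise bin 0.
    std::string classifyBin(double predicted) {
        if (predicted < -0.5) return "Bronz";   // bin -1
        if (predicted >  0.5) return "Gold";    // bin +1
        return "Silver";                        // bin 0
    }

    int main() {
        assert(classifyBin(-0.947972)   == "Bronz");
        assert(classifyBin(-0.00153653) == "Silver");
        assert(classifyBin(0.999764)    == "Gold");
        return 0;
    }
    ```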

    What do you think?

    Best regards


  4. Hi Dave,

    I am constructing the validation and testing section of my NN and have a few questions. Since I use another set of manually classified data during validation, I could have the script output both the predicted output and the actual output, together with the average error between the target and the prediction, without performing backpropagation. That means I should disable backpropagation during both validation and testing, right?

    Best regards


  5. Refreshing to find probably the only person on this planet that is able to explain such a complicated subject so clearly, and as a bonus you provide easy code to experiment with. Thanks.

  6. @ Gordon Barnett. I couldn’t agree with you more. Hello Dave, I have found in you a teacher already. Please tell me: for instance, with the neural network that you trained in your other tutorials, having finished training, how could you make it remain in the trained state, or do you have to train it whenever you want to use it? If you did the training today by feeding training data to it, could you pick it up and use it another day with it still trained? What could be done to keep a network trained?

    1. Fortunately, you only need to train it once. In neural2d, you can save the weights using Net::saveWeights(), then later restore them using Net::loadWeights(). See this wiki article and compare the example main() functions for TRAINING vs. TRAINED modes of operation. Also see the section “How do I use a trained net on new data?” in the user manual.
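      The idea behind saving and restoring weights can be illustrated with plain file I/O (this is only a sketch of the concept; neural2d's actual Net::saveWeights() and Net::loadWeights() operate on the whole net and use their own file format):

      ```cpp
      #include <cassert>
      #include <fstream>
      #include <string>
      #include <vector>

      // Persist trained weights, one per line, in plain text.
      void saveWeights(const std::vector<double> &weights, const std::string &path) {
          std::ofstream out(path);
          for (double w : weights) {
              out << w << "\n";
          }
      }

      // Restore the weights later; the net can then run without retraining.
      std::vector<double> loadWeights(const std::string &path) {
          std::ifstream in(path);
          std::vector<double> weights;
          double w;
          while (in >> w) {
              weights.push_back(w);
          }
          return weights;
      }

      int main() {
          std::vector<double> trained = {0.25, -1.5, 3.0};
          saveWeights(trained, "weights.txt");
          std::vector<double> restored = loadWeights("weights.txt");
          assert(restored == trained);  // same weights back, no retraining
          return 0;
      }
      ```

      In neural2d itself, the TRAINING-mode main() would end with a save, and the TRAINED-mode main() would begin with a load.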

  7. Hi Dave
    I have tried your neural2d code to train my own data. It is very fast. Thanks for sharing such a beautiful piece of code for deep-learning learners. I have observed that during training, sometimes (not always) I get the error “vector subscript out of range”. When I debug, I find the following lines of neural2d.cpp at the error:

    if (sample.targetVals[maxIdx] > 0.0) {
        info << " " << string("Correct");

    In the above code, sample.targetVals[maxIdx] is trying to access data beyond some range. Basically, this kind of error arises when we try to access an array beyond its index range. So, if I am not wrong, can you please tell me whether you have fixed any limit for targetVals? If we could increase that limit, maybe our code would run.
    My Data format is: { 0.534722 } 0.00964286

    Is it because my data points are beyond some limits?

  8. Hi Dave
    I forgot to mention the function name where I am getting the error: it is in neural2d.cpp, in the function:

    void Net::reportResults(const Sample &sample) const

    1. Hi Mousumi, sorry about that error message. I don’t immediately see how it could happen during training, but I think there’s a quick workaround for your use case. From your training sample, it appears that you have a net with one input and one output, and you’re not using it as a “classifier” kind of network. The block of code where you got the error is mainly useful for reporting success/fail results as a classifier, so you can disable that block of code by changing line 1202 in reportResults() to “if (false)”. If the program were smarter, it wouldn’t even run that block of code when the net has only a single output neuron.

      However, the out-of-bounds condition should not have happened whether you’re using the net as a classifier or not. At line 1214, the number of target values and the number of output neurons should both be 1 in your case. At line 1214, maxIdx holds the index of the output neuron with the highest value, and so it can only have the value 0. If you want to pursue this bug, I’d suggest setting a breakpoint around line 1214 to check that sample.targetVals.size() and layers.back()->neurons[0].size() are both 1 and that maxIdx is 0. If they don’t match and you can’t figure out why and want to pursue the issue, feel free to submit a bug report at, and include your topology configuration file.
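      A defensive version of the reporting check would add a bounds guard before indexing (a sketch using a stand-in Sample struct; the real code in reportResults() differs):

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <vector>

      // Stand-in for the neural2d Sample type, holding only target values.
      struct Sample {
          std::vector<double> targetVals;
      };

      // True only when maxIdx is in range AND that target is positive,
      // avoiding the "vector subscript out of range" error discussed above.
      bool isCorrectClass(const Sample &sample, std::size_t maxIdx) {
          return maxIdx < sample.targetVals.size()
              && sample.targetVals[maxIdx] > 0.0;
      }

      int main() {
          Sample s{{-1.0, 1.0}};
          assert(isCorrectClass(s, 1));   // in range, positive target
          assert(!isCorrectClass(s, 0));  // in range, negative target
          assert(!isCorrectClass(s, 5));  // out of range: guarded, no crash
          return 0;
      }
      ```

      The short-circuit `&&` guarantees the subscript is never evaluated when the index is out of range.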
