
neurolab's Issues

Learning Rate is not present

I cannot find an option for setting the learning rate. In earlier versions I used something like this:

error = Classifier.train(Inputs, Targets, epochs=500, show=10, goal=1, lr=0.001)

and it worked fine for me.

What version of the product are you using? On what operating system?
I am using version 0.2.3 on Ubuntu 12.04 LTS.

Original issue reported on code.google.com by [email protected] on 9 Feb 2014 at 4:55

strange result

Hello there, I am getting a strange result when training the NN. Here is its configuration:
----PYTHON CODE ---- 
net = nl.net.newff([[0.123, 19.182], [0.2, 359.9]], [5,1])

err = net.train(input, target, epochs=500, show=100, goal=0.2)

net.save('test1.net')

result = net.sim(input)
---------------
Here is the strange result; it happens every time:
...
 [ 1.]
 [ 1.]
 [ 1.]
 [ 1.]
 [ 1.]
 [ 1.]
 [ 1.]]

So you see, the result is always 1, which is really odd and far from what I want.

I wonder if it is a problem with the Python build.

Thanks, André Pereira

Original issue reported on code.google.com by [email protected] on 22 Jul 2013 at 3:50

Training Fails for Non-Default Activation Functions

I've modified the example of how to use a feed-forward network by passing an argument to the newff function.

import neurolab as nl
import numpy as np

# Create train samples
x = np.linspace(-7, 7, 20)
y = np.sin(x) * 0.5

size = len(x)

inp = x.reshape(size,1)
tar = y.reshape(size,1)

# Create network with 2 layers and random initialized
net = nl.net.newff([[-7, 7]],[5, 1], transf=[nl.net.trans.LogSig]*2)

# Train network
error = net.train(inp, tar, epochs=500, show=100, goal=0.02)
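One thing worth checking (an assumption, not a confirmed diagnosis): LogSig outputs lie strictly in (0, 1), while the targets above are in [-0.5, 0.5], so a LogSig output layer can never reach them. A minimal numpy sketch of remapping the targets into the logistic range first:

```python
import numpy as np

x = np.linspace(-7, 7, 20)
y = np.sin(x) * 0.5                      # targets in [-0.5, 0.5]

# LogSig outputs lie strictly in (0, 1); remap targets into [0, 1]
# before training, and invert the mapping after simulation
tar = ((y - y.min()) / (y.max() - y.min())).reshape(-1, 1)
```

The inverse mapping (`tar * (y.max() - y.min()) + y.min()`) recovers the original scale after `net.sim`.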


Original issue reported on code.google.com by [email protected] on 8 Jun 2014 at 7:31

Failing to add Levenberg-Marquardt-training

Hello everyone!

I've been trying to add the Levenberg-Marquardt algorithm as a training method, but I am failing to do so: I get a strange error message that I don't understand.

Here's the small method I've added to spo.py, using scipy.optimize.leastsq:
class TrainLM(TrainSO):
    def __call__(self, net, input, target):
        from scipy.optimize import leastsq
        x = leastsq(self.fcn, self.x.copy())
        self.x[:] = x

And in train/__init__.py I added the line train_lm = trainer(spo.TrainLM)

A simple test run using the XOR function looks like this:
>>> import neurolab
>>> target = [[0], [1], [1], [0]]
>>> input = [[0,0], [0,1], [1,0], [1,1]]
>>> net = neurolab.net.newff([[-0.5, 0.5], [-0.5, 0.5]], [5, 1])
>>> net.trainf = neurolab.train.train_lm
>>> err = net.train(input, target, show=15)

I get this error-message:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/neurolab/core.py", line 163, in train
    return self.trainf(self, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/neurolab/core.py", line 345, in __call__
    train(net, *args)
  File "/usr/local/lib/python2.7/dist-packages/neurolab/train/spo.py", line 44, in __call__
    x = leastsq(self.fcn, self.x.copy())
  File "/usr/lib/python2.7/dist-packages/scipy/optimize/minpack.py", line 278, in leastsq
    raise TypeError('Improper input: N=%s must not exceed M=%s' % (n,m))
TypeError: Improper input: N=21 must not exceed M=1

I've been playing around with this, but I do not understand what M is in this case. What am I doing wrong?
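For what it's worth, scipy's leastsq requires the objective to return a vector of at least N residuals (M is the length of that vector, N the number of parameters), one residual per sample, rather than a single scalar error. A toy sketch of the distinction (the 2-parameter linear model here is purely illustrative):

```python
import numpy as np

# Toy 2-parameter model y = a*x + b fitted to 4 samples
xs = np.array([0.0, 1.0, 2.0, 3.0])
targets = np.array([1.0, 3.0, 5.0, 7.0])

def residuals(params):
    # One residual per sample: M = 4 >= N = 2, which leastsq accepts
    a, b = params
    return a * xs + b - targets

def scalar_error(params):
    # Collapsing to a single number gives M = 1, which triggers
    # "Improper input: N=... must not exceed M=1" whenever N > 1
    return np.array([np.sum(residuals(params) ** 2)])
```

So the fix is likely to feed leastsq a per-sample residual function instead of the network's aggregated error.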

Thanks in advance!

Original issue reported on code.google.com by [email protected] on 6 Nov 2011 at 5:52

Missing newelm example in doc

Hi. Your doc pages:

https://pythonhosted.org/neurolab/ex_newff.html
https://pythonhosted.org/neurolab/ex_newelm.html

have the same example.

The newelm example is missing.


Original issue reported on code.google.com by [email protected] on 20 Aug 2014 at 3:35

PureLin in output layer does not work

input = np.random.uniform(-0.5, 0.5, (10, 1))
target = input
net = nl.net.newff([[-1, 1]],[5, 1],
                    [nl.trans.TanSig(), nl.trans.PureLin()])
net.train(input,
          target,
          epochs=100, show=1, goal=0.00000)


Training gives NaNs...

Original issue reported on code.google.com by [email protected] on 7 Jun 2013 at 9:05

cannot save nn

Just followed one the examples, and tried 

>> net.save("test") 

it returns 

 File "/usr/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
    raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle function objects

Original issue reported on code.google.com by [email protected] on 16 Mar 2014 at 9:49

fmin_bfgs() got an unexpected keyword argument 'lr', train func does not take lr as parameter

What steps will reproduce the problem?
1. net = nl.net.newff([[-1.2499051235,7.7401906134]], [2, 15, 1], [nl.trans.TanSig(), nl.trans.TanSig(), nl.trans.PureLin()])
2. error = net.train(inputs, targets, epochs=1000, show=50, goal=0.02, lr=0.001)

What is the expected output? What do you see instead?


What version of the product are you using? On what operating system?
Neurolab 0.3.4, Linux Mint

Please provide any additional information below.

I can't adjust the learning rate by adding the extra parameter lr = "some float".
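The mechanism behind the message appears to be (a sketch with a stand-in optimizer, not neurolab's actual code): the trainer forwards unknown keyword arguments straight to the underlying optimizer, and a BFGS-style optimizer has no lr parameter, so Python raises the TypeError there. Gradient-descent trainers such as train_gd do accept lr.

```python
def fmin_bfgs_stub(f, x0, gtol=1e-5):
    # Stand-in for an optimizer that accepts no 'lr' keyword
    return x0

def train(net, *args, **kwargs):
    # Unknown kwargs (e.g. lr=0.001) are forwarded untouched,
    # so they reach the optimizer and raise TypeError there
    return fmin_bfgs_stub(lambda x: 0.0, [0.0], **kwargs)

try:
    train(None, lr=0.001)
    failed = False
except TypeError:
    failed = True
```

Switching the network's trainf to a gradient-descent trainer would be the usual workaround.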


Original issue reported on code.google.com by [email protected] on 19 Sep 2014 at 10:27

0.3.5 version not available in pypi

Hello Zuev, 0.3.5 version was released in the end of January 
(https://code.google.com/p/neurolab/source/detail?r=134) but the tarball is not 
available in pypi website (https://pypi.python.org/pypi/neurolab/0.3.5).

On the 0.3.4 page there is a big green download button (https://pypi.python.org/pypi/neurolab/0.3.4); it is not available on the 0.3.5 page.

Thank you for all your work with neurolab!

Original issue reported on code.google.com by [email protected] on 5 Feb 2015 at 2:24

Added regularization and cross-entropy error to Neurolab

I have added l2 and l1 regularization capabilities for training feed-forward 
networks to the Neurolab library. I have also added cross-entropy error (used 
in logistic regression) to error.py.

I would like to see these modifications incorporated into the standard 
distribution of Neurolab, but I'm not sure how to contact the developers of 
Neurolab. If anyone has advice, please let me know.

You can find my modifications, an explanation, and a demonstration here:
1. https://github.com/kwecht/NeuroLab
2. http://nbviewer.ipython.org/github/kwecht/NeuroLab/blob/master/Adding%20Regularization%20to%20Neurolab.ipynb
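For readers unfamiliar with it, the cross-entropy error used in logistic regression has roughly this shape (a generic numpy sketch, not the exact code from the fork above):

```python
import numpy as np

def cross_entropy(target, output, eps=1e-12):
    # Mean binary cross-entropy; clipping avoids log(0)
    o = np.clip(output, eps, 1.0 - eps)
    return -np.mean(target * np.log(o) + (1.0 - target) * np.log(1.0 - o))
```

For a target of 1 and a predicted probability of 0.5 this gives log(2) ≈ 0.693, the familiar "coin-flip" loss.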

Original issue reported on code.google.com by [email protected] on 22 Dec 2014 at 9:18

Cannot install successfully for Python 3.2

The package installs automatically and successfully into the Python 2.7 directories. I am using Python 3.2 and cannot import it into a Python 3.2 program; I get a "no module found" error. When I set the path to the 2.7 directory and try to 'import neurolab', it finds __init__.py and then raises an error saying no 'net' module was found. I admit I am new to Python and Linux, so any help that can be given is appreciated. I have installed this package using easy_install, pip, and the setup.py methods and get the same result.


Original issue reported on code.google.com by [email protected] on 2 Jan 2014 at 3:29

Problem when using a modified network property

What steps will reproduce the problem?
1. get weights from matlab
2. put them in new neurolab network
3. try to simulate

What is the expected output? What do you see instead?
I expect a float, which is the output of the modified network; instead I get an error.

What version of the product are you using? On what operating system?
0.2.0 on Windows 8

Please provide any additional information below.
import neurolab as ann
import numpy as np

net = ann.net.newff([[0,1]], [23,1])

input_w = [32.3171744820845, 32.2043800918988, 32.136163926, -32.4341000141303, 
-32.1704015080038, -32.2030201318901, -32.2184171138208, 32.2054821716765, 
-32.2000345407995, 32.1932919978403, -32.2152346298993, -32.2009029189835, 
-32.2030452045251, -32.197564589311, -32.2262278025944, -32.2007690478912, 
-32.2061326911494, 32.2054374979395, -32.2000906699287, 32.2002622663218, 
32.2136938638243, -32.2613292694393, 32.085593347726]
input_w = np.array(input_w)
input_w = np.reshape(input_w, (23,1))

layer_w =[1.10124332908008, -0.494722289691433, -0.101584394298617, 
-0.28572291232131, 0.0564139037327468, -0.0673409338356199, -0.10433867778315, 
0.0444076509409609, 0.0108271751116084, 0.170011665078436, 0.818805906556299, 
-0.85047177872318, -0.208978701318012, 0.0195281063602321, 0.895884671127506, 
-0.767414283405509, -0.542756561580723, -0.435516658445986, -0.072659996288381, 
0.290697018638296, -0.288497132738669, -0.0155286198283307, 0.0586577490603509]
layer_w = np.array(layer_w)
layer_w = np.reshape(layer_w, (23,1))

bias_l = [-32.0828297294381, -29.2671282870437, -26.4229098531693, 
23.084892247868, 20.5330427640324, 17.5574048061564, 14.5933028851275, 
-11.6953673408047, 8.78125670346926, -5.88688037216909, 2.8049409477511, 
0.01632911630747, -2.87473387686928, -5.86458351035239, -8.67274920868569, 
-11.7074322977483, -14.6220474424409, 17.5543393781814, -20.4907773265343, 
23.4177496369265, 26.3296732266106, -29.2015427394162, 32.3143938192403]
bias_l = np.array(bias_l)
bias_l = np.reshape(bias_l, (23,))


input_bias = 0.386570823201307

net.layers[0].np['w'] = input_w
net.layers[1].np['w'] = layer_w
net.layers[1].np['b'] = input_bias
net.layers[0].np['b'] = bias_l


print net.sim([[0.5]])
-------------------------------------------------------------
output of the program:
0 [ 0.5]
Traceback (most recent call last):
  File "C:\Users\houssem\.spyder2\.temp.py", line 37, in <module>
    print net.sim([[0.5]])
  File "C:\Python27\lib\site-packages\neurolab\core.py", line 146, in sim
    output[inp_num, :] = self.step(inp)
ValueError: output operand requires a reduction, but reduction is not enabled

--------------------------------------------------------------
Any help would be appreciated :p
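A pattern that tends to avoid shape surprises when overwriting weights (a general numpy sketch, assuming the layer's parameter arrays keep their original shapes): copy new values into the existing arrays instead of rebinding the dictionary entries, so any shape mismatch raises immediately at the assignment.

```python
import numpy as np

w = np.zeros((23, 1))                  # stands in for net.layers[0].np['w']
new_w = np.arange(23.0).reshape(23, 1)
w[:] = new_w                           # in-place copy; wrong shapes raise here

b = np.zeros(1)                        # a (1,)-shaped bias vector
b[:] = 0.386570823201307               # scalar bias stays an array, not a float
```

In particular, assigning a bare Python float to a bias slot (as done with input_bias above) replaces the expected array with a scalar, which may be what trips up sim().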

Original issue reported on code.google.com by [email protected] on 20 Feb 2013 at 7:54

competitive transfer function produces incorrect output

What steps will reproduce the problem?
1. Import neurolab as nl and create a simple vector n with 3 values, two values < 0 and one value > 0, e.g. n = (-0.5763, 0.8345, -0.1234)
2. let f = nl.trans.Competitive() 
3. a = f(n)

What is the expected output? What do you see instead?
I expect [0, 1, 0]; instead I see [1, 0, 0].

What version of the product are you using? On what operating system?
Version 0.1.0 on Ubuntu 11.04

Please provide any additional information below.
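The behavior the reporter expects, in isolation (a plain numpy sketch of a competitive, winner-take-all transfer: 1 at the index of the maximum, 0 elsewhere):

```python
import numpy as np

def competitive(n):
    # Winner-take-all: 1 at the position of the largest value, 0 elsewhere
    out = np.zeros_like(n)
    out[np.argmax(n)] = 1.0
    return out

a = competitive(np.array([-0.5763, 0.8345, -0.1234]))
```

For the reporter's vector the maximum (0.8345) is at index 1, so the expected output is [0, 1, 0].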


Original issue reported on code.google.com by [email protected] on 5 Aug 2011 at 7:21

Parameters Ignored in Training Function Construction

For TrainGDM, TrainGDA, and TrainGDX (and possibly other classes), some of the parameters passed to the constructor are ignored; the parameter is instead initialized to its default value. For example, here mc is given the default value 0.9 in the function signature but is subsequently ignored in the function body.

    def __init__(self, net, input, target, lr=0.01, adapt=False, mc=0.9):
        super(TrainGDM, self).__init__(net, input, target, lr, adapt)
        self.mc = 0.9
        self.dw = [0] * len(net.layers)
        self.db = [0] * len(net.layers)
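The fix is presumably a one-liner: store the argument instead of the literal. A minimal sketch (the base class here is a stand-in, not neurolab's actual TrainGD):

```python
class TrainGD(object):                      # stand-in base class
    def __init__(self, net, input, target, lr, adapt):
        self.lr, self.adapt = lr, adapt

class TrainGDM(TrainGD):
    def __init__(self, net, input, target, lr=0.01, adapt=False, mc=0.9):
        super(TrainGDM, self).__init__(net, input, target, lr, adapt)
        self.mc = mc                        # use the argument, not 0.9
```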

Original issue reported on code.google.com by [email protected] on 8 Jun 2014 at 1:22

Linear Activation Leads to NaN minmax

A slightly modified version of the standard feed-forward example:

x = np.linspace(-7,7,20)
y = np.sin(x) * .5
size = len(x)
inp = x.reshape(size,1)
tar = y.reshape(size,1)

net = nl.net.newff([[-7,7]], [5,1], transf=[nl.net.trans.PureLin()]*2)

This leads to an infinite minmax in core.py:
self.init()   # line 97, minmax = [[-inf inf]]

which in turn causes a problem in init.py, lines 129/130:
x = 2. / (minmax[:, 1] - minmax[:, 0])
y = 1. - minmax[:, 1] * x
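The arithmetic that goes wrong, in isolation (a numpy sketch of the two lines above evaluated with an unbounded minmax):

```python
import numpy as np

minmax = np.array([[-np.inf, np.inf]])   # PureLin's unbounded output range
x = 2. / (minmax[:, 1] - minmax[:, 0])   # 2 / inf  -> 0.0
y = 1. - minmax[:, 1] * x                # inf * 0  -> nan
```

So the initializer's rescaling silently produces NaN whenever a layer advertises an infinite output range.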



Original issue reported on code.google.com by [email protected] on 9 Jun 2014 at 10:00

output norm and different results

Hi there,
I just realized that the newff output is fixed to the range [-1, 1], so I did the following to test how outputs outside that range should work.

import neurolab as nl
import numpy as np

# Create train samples
x = np.linspace(-7, 7, 20)
y = x * 10

size = len(x)

inp = x.reshape(size,1)
tar = y.reshape(size,1)

norm_inp = nl.tool.Norm(inp)
inp = norm_inp(inp)

norm_tar = nl.tool.Norm(tar)
tar = norm_tar(tar)

# Create network with 2 layers and random initialized
# As I normalized the inp, the input range is set to [0, 1]
# (BTW, I don't know how to norm it to [-1, 1])
net = nl.net.newff([[0, 1]],[5, 1])

# Train network
error = net.train(inp, tar, epochs=500, show=100, goal=0.02)

# Simulate network
out = norm_tar.renorm(net.sim([[ 0.21052632 ]]))

print "final output:-----------------"
print out
inp before norm
[[-7.        ]
 [-6.26315789]
 [-5.52631579]
 [-4.78947368]
 [-4.05263158]
 [-3.31578947]
 [-2.57894737]
 [-1.84210526]
 [-1.10526316]
 [-0.36842105]
 [ 0.36842105]
 [ 1.10526316]
 [ 1.84210526]
 [ 2.57894737]
 [ 3.31578947]
 [ 4.05263158]
 [ 4.78947368]
 [ 5.52631579]
 [ 6.26315789]
 [ 7.        ]]

tar before norm
[[-70.        ]
 [-62.63157895]
 [-55.26315789]
 [-47.89473684]
 [-40.52631579]
 [-33.15789474]
 [-25.78947368]
 [-18.42105263]
 [-11.05263158]
 [ -3.68421053]
 [  3.68421053]
 [ 11.05263158]
 [ 18.42105263]
 [ 25.78947368]
 [ 33.15789474]
 [ 40.52631579]
 [ 47.89473684]
 [ 55.26315789]
 [ 62.63157895]
 [ 70.        ]]

I expect the output to be around -40 after renorm for the input 0.21052632, but the results are not repeatable: sometimes it is right (around -40), but sometimes it is wrong (it becomes -70).

I am wondering why the training results are not stable, and whether there is a better way to train a NN that produces output values outside the range [-1, 1].
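On the side question of normalizing to [-1, 1]: a plain numpy min-max rescale does it (a sketch independent of nl.tool.Norm):

```python
import numpy as np

x = np.linspace(-7, 7, 20).reshape(-1, 1)

# Min-max rescale to [-1, 1]; keep lo/hi around to invert the mapping later
lo, hi = x.min(), x.max()
x_scaled = 2.0 * (x - lo) / (hi - lo) - 1.0
```

The inverse is `(x_scaled + 1.0) * (hi - lo) / 2.0 + lo`, which plays the same role as Norm.renorm.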

Many thanks,
Derrick


Original issue reported on code.google.com by [email protected] on 28 Apr 2014 at 2:30

Elman network

Could you show an example of an Elman network predicting numeric sequences? Having trained the network on the sequence [1, 3, 5, 7, 9] and fed it as input, I want it to predict the next element (11). Skimming through the sources, I could not find an answer: step() takes an element and returns roughly the same element.
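One common way to frame next-element prediction as supervised training data (a generic numpy sketch, not specific to neurolab's Elman API): pair each element of the sequence with its successor.

```python
import numpy as np

seq = np.array([1.0, 3.0, 5.0, 7.0, 9.0])

# Each input is an element, each target is the element that follows it
inp = seq[:-1].reshape(-1, 1)   # [[1], [3], [5], [7]]
tar = seq[1:].reshape(-1, 1)    # [[3], [5], [7], [9]]
```

After training on such pairs, feeding the last known element to sim() yields the network's guess for the next one.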

Original issue reported on code.google.com by [email protected] on 17 Oct 2011 at 11:02

Multiprocessing can not pickle unbound function

What steps will reproduce the problem?
When using the neurolab Net modules, it is not possible to parallelize execution, because unbound functions are unpicklable and cannot be passed through queues.

This could be solved by not assigning unbound functions to trainf or errorf during instantiation of the Net structure. A quick fix is to store errorf and trainf as strings and resolve them at the right moment, as in the error-function call:

return getattr(error, net.errof)(target - output)
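The idea in miniature (a pure-Python sketch with stand-in names; math stands in for neurolab's error module): store the function's name, which pickles as a plain string, and resolve it with getattr only at call time.

```python
import math
import pickle

class Net(object):
    def __init__(self):
        self.errorf_name = 'hypot'        # a string pickles without trouble

    def error(self, a, b):
        # Resolve the function only when it is actually needed
        return getattr(math, self.errorf_name)(a, b)

net = pickle.loads(pickle.dumps(Net()))   # round-trips cleanly
```

The instance now survives the pickling that multiprocessing queues perform, because no function object is stored on it.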

Original issue reported on code.google.com by [email protected] on 6 Mar 2014 at 1:13

feedforward network not learning

What steps will reproduce the problem?
1. Train a newff network
2. Large input: 42 input neurons with floating-point values as inputs
3. Outputs: (0,0,0,1), (0,0,1,0), (0,1,0,0), (1,0,0,0)

What is the expected output? What do you see instead?
   A decrease in error; instead, the error stays constant.

What version of the product are you using? On what operating system?
   Most recent version, on Fedora 17 (Linux 3.7)

Please provide any additional information below.

This was to recognize hand gestures, where the inputs are x, y, z acceleration values and the 4 patterns are the possible outputs.

Original issue reported on code.google.com by sarath.sp06 on 10 Mar 2013 at 5:56

Attachments:

Setup issue

What steps will reproduce the problem?
1. I installed via: pip install neurolab
2. Ran the single layer perceptron example (newp)


What is the expected output? What do you see instead?
The output is:
Traceback (most recent call last):
  File "neurolab.py", line 10, in <module>
    import neurolab
  File "/home/lilei/neurolab.py", line 17, in <module>
    net = neurolab.net.newp([[0, 1],[0, 1]], 1)
AttributeError: 'module' object has no attribute 'net'

And if I copy the neurolab file into the same path as the example file, then the error is fixed.
What version of the product are you using? On what operating system?
0.2.3

Please provide any additional information below.
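The traceback hints at the cause (an inference, not confirmed): the script itself lives at /home/lilei/neurolab.py, so it shadows the installed package when Python resolves `import neurolab`. Renaming the script should fix it. A small self-contained demonstration of module shadowing with a throwaway file:

```python
import importlib
import os
import sys
import tempfile

# A local file named like a package wins over site-packages
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'neurolab.py'), 'w') as f:
    f.write('shadow = True\n')

sys.path.insert(0, tmp)     # mimics running a script from that directory
mod = importlib.import_module('neurolab')
```

Here `mod.__file__` points at the local throwaway file, not at any installed package, which is exactly the AttributeError scenario above.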


Original issue reported on code.google.com by [email protected] on 26 Jun 2013 at 4:32

Citing neurolab

First, thanks for this great project.
I would like to cite neurolab in my papers. Is there any paper about neurolab 
itself?

Thank you.

Original issue reported on code.google.com by [email protected] on 1 May 2014 at 8:19
