Comments (7)
Ah, it looks like you're using the `Conv2DLayer` implementation that doesn't like batches that aren't exactly of size `batch_size`. The cuda_convnet-based implementation (used in the tutorial) doesn't have this problem. See here for a discussion with two possible solutions.
from nolearn.
It turns out that there's a much easier solution to this problem. In the tutorial, I made an error and incorrectly set the input layer's `shape[0]` (the batch size) to 128. This should have been `None`. I verified that with this setting the "legacy" Theano convnet layer (for CPU) is happy, and every other layer I tested was too.
So that means I could undo the `forced_even` change again. Please update your code.
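For readers following along, the fix above amounts to giving the input layer a variable batch dimension. A minimal sketch of the layer definition, assuming nolearn's Lasagne wrapper and the 300x400 grayscale input seen later in this thread (the `conv1` hyperparameters here are illustrative, not the tutorial's exact values):

```python
# Sketch only: assumes nolearn and Lasagne are installed.
from lasagne import layers
from nolearn.lasagne import NeuralNet

net = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('conv1', layers.Conv2DLayer),
        # ... further layers and an output layer would follow ...
    ],
    # None (rather than a fixed 128) lets the compiled Theano graph
    # accept any batch size, including a final batch of, say, 31 samples:
    input_shape=(None, 1, 300, 400),
    conv1_num_filters=16,          # illustrative value
    conv1_filter_size=(3, 3),      # illustrative value
    # ... remaining hyperparameters ...
)
```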
Could you try to print out the values of `Xb.shape` and `yb.shape` at this point:

```python
batch_train_loss = self.train_iter_(Xb, yb)
```
```
Xb (31, 1, 300, 400)
yb (31,)
```
Well, maybe an easier solution to what's discussed in that other ticket is to just hack `BatchIterator` to skip remainder batches that are smaller than `batch_size`. Could be an option to `BatchIterator`; maybe you want to send a pull request.
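The workaround suggested above can be sketched in plain Python. This is a hypothetical stand-alone iterator, not nolearn's actual `BatchIterator` API; the function name and the `skip_remainder` flag are assumptions for illustration:

```python
import numpy as np

def iter_batches(X, y, batch_size, skip_remainder=True):
    """Yield (Xb, yb) mini-batches from X and y; when skip_remainder
    is True, drop a trailing batch smaller than batch_size
    (hypothetical helper, not nolearn's BatchIterator)."""
    for start in range(0, len(X), batch_size):
        Xb = X[start:start + batch_size]
        yb = y[start:start + batch_size]
        if skip_remainder and len(Xb) < batch_size:
            continue  # e.g. a final 31-sample batch when batch_size is 128
        yield Xb, yb

# 287 samples -> two full batches of 128; the remainder of 31 is skipped
X = np.zeros((287, 1, 3, 4), dtype=np.float32)
y = np.zeros(287, dtype=np.int32)
print(len(list(iter_batches(X, y, 128))))                         # 2
print(len(list(iter_batches(X, y, 128, skip_remainder=False))))   # 3
```

With the input layer's batch dimension set to `None`, this skipping becomes unnecessary; it only matters when the compiled graph insists on a fixed batch size.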
Is there any performance gain to having the Theano compiler know the batch size up front?
Tried with a large net, didn't see any performance difference.