nathaniel-rodriguez / alectrnn

A CTRNN implementation for the arcade learning environment

License: MIT License
Make the default IAF neurons have no refractory period. A refractory period can be added later as a new neuron type.
Adjust the IAF equations to properly handle time-scaling.
We would need to add a soft-max function and an RNG for it, which also means passing in a seed on agent creation.
Soft-max could be an option in the agent parameters.
Soft-max would be applied to the neural net output, and the RNG would then use the soft-max output to select an action for the agent.
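A minimal sketch of the proposed action selection, assuming the motor-layer outputs arrive as a numpy array and the seed is used to construct a numpy Generator on agent creation (function names here are illustrative, not the actual alectrnn API):

```python
import numpy as np

def softmax(x):
    """Numerically stable soft-max over the motor-layer outputs."""
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def select_action(outputs, rng):
    """Sample an action index from the soft-max distribution.

    `outputs` is the neural net's motor-layer output vector; `rng` is a
    seeded numpy Generator (the seed passed in on agent creation).
    """
    probs = softmax(np.asarray(outputs, dtype=float))
    return rng.choice(len(probs), p=probs)
```

Seeding the generator once per agent keeps episode rollouts reproducible across runs.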
See newest asyncga additions.
Add a new agent, layer, integrator, bindings, and nn code for an agent which gets motor inputs and reward feedback.
Support running NN agents with numpy arrays from Python and maintaining a log of the NN history. The whole time-series array of inputs would be provided to the agent and then run. This may involve adding support in experiments and experiment managers for this capability for a given agent.
Add a class that supports building and running a neural network without ALE.
The current IAF equation doesn't account for step size in the inputs.
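One way the step-size scaling could look, sketched as a forward-Euler update in which `dt` scales both the leak and the input term (the equation form and parameter names are assumptions for illustration, not the repo's actual integrator):

```python
import numpy as np

def iaf_step(v, inputs, dt, tau, v_reset=0.0, v_thresh=1.0):
    """One forward-Euler step of a leaky integrate-and-fire neuron.

    dv/dt = (-v + I) / tau  ->  v += (dt / tau) * (-v + I)
    so the input term is scaled by the step size along with the leak.
    """
    v = v + (dt / tau) * (-v + np.asarray(inputs, dtype=float))
    spiked = v >= v_thresh          # threshold crossing
    v = np.where(spiked, v_reset, v)  # reset spiking neurons
    return v, spiked
```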
This could be added by expanding the conv + recurrent layers so that a graph is built by default connecting neurons that are adjacent on a grid within a specific channel.
Add a function like execute_async_batch that executes all batches in a list before closing the scheduler.
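A minimal sketch of such a helper, assuming a scheduler with `submit`/`close` methods and futures exposing `result()` (these names are illustrative stand-ins, not the actual asyncga scheduler API):

```python
def execute_async_batch(scheduler, batches):
    """Submit every batch, wait for all results, then close the scheduler.

    The scheduler is closed in a `finally` block so it is shut down even
    if a batch raises.
    """
    try:
        futures = [scheduler.submit(batch) for batch in batches]
        return [f.result() for f in futures]
    finally:
        scheduler.close()
```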
Attempt to work around the numpy memory leak
Updated load_experiment function that uses consolidated results.
New load_experiment function that loads just a single batch result.
Update the async batch function to consolidate results following a run.
During spike weight normalization the weights are shrunk rather than multiplied.
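A sketch of the intended multiplicative behavior, assuming the goal is to rescale the weight vector to a target norm rather than shrink the weights additively (the function name and the L1-norm choice are assumptions for illustration):

```python
import numpy as np

def normalize_spike_weights(weights, target_norm=1.0):
    """Rescale weights multiplicatively so their L1 norm equals
    `target_norm`, preserving the relative magnitudes and signs."""
    w = np.asarray(weights, dtype=float)
    norm = np.abs(w).sum()
    if norm == 0.0:
        return w  # nothing to rescale
    return w * (target_norm / norm)
```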
Add a member that allows for parameter rescaling with the asyncga.
The Eigen-based RNN integrator/layer combination has a bug where the previous state is not carried over and the wrong equation is calculated.
Add support for other CSA GAs by adding a new member.
Use Python macros to release the GIL in order to allow other threads to use the interpreter while the objective function runs.
Also support a new weight initialization method.
Current single runs are not agnostic to the type of optimizer; they try to load from the optimizer. Instead, make a single-run function that takes parameters directly.
Add a new 3-dim binary motor format. Make a new agent that follows the new format.
Re-evaluate this. It requires -1 to 1 activations, which means non-spiking, and it means weights leading up to the motor layer should be both positive and negative (an alternative might be a 0-2.0 range with 0.5-1.5 as the center).
The experiment, its results, and the batch are loaded fully. This can be costly in terms of time and memory. If we implement the load to return a generator, then experimental results and experiments can be pulled as they are used.
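A minimal sketch of a generator-based loader, assuming one pickled result file per batch on disk (the file layout and function name are assumptions for illustration, not the repo's actual storage format):

```python
import pickle
from pathlib import Path

def iter_experiment_results(result_dir):
    """Yield one batch result at a time instead of materializing the
    whole experiment in memory; results are pulled lazily as used."""
    for path in sorted(Path(result_dir).glob("batch_*.pkl")):
        with open(path, "rb") as f:
            yield pickle.load(f)
```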
This object would be injected into a batch and its callback executed to store the network. The object can then simply be added to the storage section of the batch object.
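A sketch of such a callback object, assuming the batch invokes it with the network state at each logging point (the class name and call signature are hypothetical):

```python
class NetworkLogger:
    """Callback object injected into a batch: each call appends a
    snapshot of the network, and the accumulated history can then be
    attached to the batch's storage section."""

    def __init__(self):
        self.history = []

    def __call__(self, network_state):
        self.history.append(network_state)
```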
A little bug: the index of the first layer corresponds to 1 rather than 0, because the inputs are counted and we need one initial buffer value to start. This can be fixed by subtracting 1 from the layer indices so that they match the bound indices.
Make sure seeds are properly working.
See whether multi-episode runs properly support summing scores and whether adjusting random starts works.
Currently the motor layer is created automatically, preventing it from adopting a different activation function (in the case of spiking networks, for instance) and preventing the adoption of, say, laterally-inhibiting motor layers.
Option 1: Add an override for the motor layer in the Python NeuralSystem class.
Option 2: Currently all layers use the same activation function; add an override to select activation functions on a layer-by-layer basis.
Update the async_execute function to use arguments from the batch to configure the scheduler so that loopback and nannies can be used.