lava-nc / lava
A Software Framework for Neuromorphic Computing
Home Page: https://lava-nc.org
License: Other
See builder.py and check all the build() methods.
While this works, it does not seem to be good style.
Implement a dataloader that can pull data from a GitHub LFS link.
lava/lava/utils/dataloader/mnist.py
Lines 31 to 36 in 9d80ea1
Or update instructions to use git lfs.
git lfs install
git lfs pull
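A minimal sketch of how such a dataloader could fetch the file on demand. The URL below is a placeholder for the actual GitHub LFS/raw link, and the cache path is illustrative:

```python
import os
import urllib.request

# Placeholder URL: substitute the actual raw/LFS link to the MNIST file.
MNIST_URL = "https://example.com/path/to/mnist.npy"


def fetch_mnist(cache_path="mnist.npy", url=MNIST_URL):
    """Download the dataset once and reuse the cached copy afterwards."""
    if not os.path.exists(cache_path):
        urllib.request.urlretrieve(url, cache_path)
    return cache_path
```

If the file is already cached locally, no network access happens, which keeps the loader usable offline after the first download.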
After changing the repository structure, the install instructions (specifically the PYTHONPATH to set) are out of date.
Currently a RefPort can only be connected to exactly one VarPort (1:1 connection).
Enabling 1:many connections to set multiple Vars
Open questions:
A preliminary version of a Monitor process should be implemented to monitor/probe internal states and outputs (OutPorts) of other processes during runtime. The collected data should be available to the user afterwards. In the preliminary version, a Monitor process should be able to monitor at least a single variable of a process.
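A minimal sketch of the monitoring idea, using a plain callback-and-buffer scheme as a stand-in. The eventual Monitor process API in Lava may look quite different:

```python
class SketchMonitor:
    """Records the value of one watched variable at every time step."""

    def __init__(self):
        self.data = []

    def probe(self, get_value):
        # get_value: zero-argument callable returning the current value.
        self._get_value = get_value

    def tick(self):
        # Called once per simulated time step; stores a snapshot.
        self.data.append(self._get_value())


state = {"v": 0}
mon = SketchMonitor()
mon.probe(lambda: state["v"])
for step in range(3):
    state["v"] += 1
    mon.tick()
assert mon.data == [1, 2, 3]
```

The key property is that collected data stays available to the user after the run, as the issue requests.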
A process (hierarchical or sub) does not terminate after the process.run() command when connected recurrently. The same process can be connected to another process serially and runs without problems.
A minimal example of this behavior is given below. Two processes are created: process 1 has one InPort and one OutPort, and process 2 has only an InPort. Initially, process 1 is connected to itself and run. This leads to non-terminating behavior; calling get() to inspect the process variable also fails to terminate. The same process can be connected to process 2 and run without any problems. The problem appears to be recurrent connections between processes. The same behavior was observed for a hierarchical process that housed sub-processes with recurrent connections (not shown in the code below). The code that demonstrates this behavior is given below and has been commented to make the issue clear. @joyeshmishra
import numpy as np
from lava.magma.core.process.process import AbstractProcess
from lava.magma.core.process.variable import Var
from lava.magma.core.process.ports.ports import InPort, OutPort
from lava.magma.core.sync.protocols.loihi_protocol import LoihiProtocol
from lava.magma.core.model.py.ports import PyInPort, PyOutPort
from lava.magma.core.model.py.type import LavaPyType
from lava.magma.core.resources import CPU
from lava.magma.core.decorator import implements, requires
from lava.magma.core.model.py.model import PyLoihiProcessModel
from lava.magma.core.model.sub.model import AbstractSubProcessModel
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg
class RecurrentProcess(AbstractProcess):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        shape = kwargs.pop("shape", (1, 1))
        self.s_in = InPort(shape=shape)
        self.x = Var(shape=shape, init=np.zeros(shape))
        self.a_out = OutPort(shape=shape)


class OutProbeProcess(AbstractProcess):
    def __init__(self, **kwargs):
        """Use to read the output spike from a process.

        Kwargs:
            out_shape (int tuple): set the output shape to a custom value
        """
        super().__init__(**kwargs)
        shape = kwargs.pop("out_shape", (1, 1))
        self.s_in = InPort(shape=shape)
        self.spike_out = Var(shape=shape, init=np.zeros(shape))


@implements(proc=OutProbeProcess, protocol=LoihiProtocol)
@requires(CPU)
class PyOPPModel(PyLoihiProcessModel):
    s_in: PyInPort = LavaPyType(PyInPort.VEC_DENSE, np.int32, precision=24)
    spike_out: np.ndarray = LavaPyType(np.ndarray, np.int32, precision=24)

    def run_spk(self):
        s_in = self.s_in.recv()
        self.spike_out = s_in


@implements(proc=RecurrentProcess, protocol=LoihiProtocol)
@requires(CPU)
class PyRPModel(PyLoihiProcessModel):
    s_in: PyInPort = LavaPyType(PyInPort.VEC_DENSE, np.int32, precision=24)
    x: np.ndarray = LavaPyType(np.ndarray, np.int32, precision=24)
    a_out: PyOutPort = LavaPyType(PyOutPort.VEC_DENSE, np.int32, precision=24)

    def run_spk(self):
        s_in = self.s_in.recv()
        self.x += s_in
        self.a_out.send(self.x)
        # self.a_out.flush()


if __name__ == '__main__':
    input_spike = np.array([[1], [2]])
    rec_process = RecurrentProcess(shape=input_spike.shape)
    # Another process made to test the serial connection, which works
    # (the program terminates).
    out_spike_process = OutProbeProcess(out_shape=input_spike.shape)
    # Uncomment the line below and comment out the line after it to see how
    # the serial connection runs without issues (change blocking to True).
    # rec_process.a_out.connect(out_spike_process.s_in)
    rec_process.a_out.connect(rec_process.s_in)
    print("Starting Recurrent Process")
    # blocking=False for the recurrent process, True for the serial connection
    rec_process.run(condition=RunSteps(num_steps=1, blocking=True),
                    run_cfg=Loihi1SimCfg(select_sub_proc_model=False))
    # Running get() never returns, but only for the recurrent connection:
    # print(rec_process.vars.x.get())
    print("Trying to end Recurrent Process")
    rec_process.stop()
    # The recurrent process never reaches the end of execution.
    print("Process Ended")
It's difficult to replicate, but here is the behavior:
[INFO] Executing unit tests from Python modules in /home/sshresth/lava-nc/lava/tests
Runtime not started yet.
Runtime not started yet.
Runtime not started yet.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/usr/lib/python3.8/multiprocessing/spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
File "/usr/lib/python3.8/multiprocessing/shared_memory.py", line 102, in __init__
self._fd = _posixshmem.shm_open(
FileNotFoundError: [Errno 2] No such file or directory: '/psm_ee759658'
$ python -m unittest discover tests/
.....................................................................................sss......Runtime not started yet.
Exception ignored in: <function AbstractProcess.__del__ at 0x7f9e9b076820>
Traceback (most recent call last):
  File "/home/sshresth/lava-nc/lava/src/lava/magma/core/process/process.py", line 256, in __del__
    self.stop()
  File "/home/sshresth/lava-nc/lava/src/lava/magma/core/process/process.py", line 417, in stop
    self.runtime.stop()
  File "/home/sshresth/lava-nc/lava/src/lava/magma/runtime/runtime.py", line 272, in stop
    self._messaging_infrastructure.stop()
  File "/home/sshresth/lava-nc/lava/src/lava/magma/runtime/message_infrastructure/multiprocessing.py", line 56, in stop
    actor.join()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 147, in join
    assert self._parent_pid == os.getpid(), 'can only join a child process'
AssertionError: can only join a child process
(The "Runtime not started yet." message and the AssertionError traceback above repeat several more times, interleaved across the concurrently exiting processes.)
..Runtime not started yet.
......E..
======================================================================
ERROR: test_source_sink (lava.proc.conv.test_models.TestConvProcessModels)
Test for source-sink process.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/sshresth/lava-nc/lava/tests/lava/proc/conv/test_models.py", line 203, in test_source_sink
sink.run(condition=run_condition, run_cfg=run_config)
File "/home/sshresth/lava-nc/lava/src/lava/magma/core/process/process.py", line 398, in run
File "/home/sshresth/lava-nc/lava/src/lava/magma/runtime/runtime.py", line 92, in initialize
File "/home/sshresth/lava-nc/lava/src/lava/magma/runtime/runtime.py", line 146, in _build_sync_channels
File "/home/sshresth/lava-nc/lava/src/lava/magma/compiler/builder.py", line 685, in build
File "/home/sshresth/lava-nc/lava/src/lava/magma/compiler/channels/pypychannel.py", line 292, in __init__
File "/usr/lib/python3.8/multiprocessing/managers.py", line 1385, in SharedMemory
File "/usr/lib/python3.8/multiprocessing/connection.py", line 502, in Client
File "/usr/lib/python3.8/multiprocessing/connection.py", line 628, in SocketClient
File "/usr/lib/python3.8/socket.py", line 231, in __init__
OSError: [Errno 24] Too many open files
----------------------------------------------------------------------
Ran 105 tests in 8.874s
FAILED (errors=1, skipped=3)
/usr/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 2 leaked shared_memory objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
Right now, we require users to call Runtime.stop() or AbstractProcess.stop() explicitly at the end of a run to shut remote processes and RuntimeServices down.
However, people tend to forget this. Therefore we should overload the Runtime or AbstractProcess destructor to call stop automatically when these objects get garbage collected in the current user system process, i.e. when the user code finishes. This will become even more critical once the Runtime allocates actual Loihi systems.
Users should still be encouraged to call stop() explicitly as soon as they are done with their Lava processes but it should not be a requirement.
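A minimal sketch of the idea. The Runtime class here is a simplified stand-in, not the actual Lava class; the point is an idempotent stop() that the destructor can safely call as a fallback:

```python
class Runtime:
    """Simplified stand-in for a runtime that must be shut down explicitly."""

    def __init__(self):
        self._running = True

    def stop(self):
        # Idempotent: safe to call from both user code and the destructor.
        if self._running:
            self._running = False
            # ... shut down remote processes and RuntimeServices here ...

    def __del__(self):
        # Safety net: stop automatically when the object is garbage
        # collected, in case the user forgot to call stop() explicitly.
        self.stop()
```

Making stop() idempotent matters because a user who did call stop() explicitly will still trigger the destructor later.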
Just noticed that we have no pure unit tests that exercise and validate PyProcModel <-> PyProcModel channel communication in isolation. There are several tests that exercise execution of PyProcModels as part of the Runtime unit tests, and the feature is tested indirectly by the fact that higher-level tutorials work.
Nevertheless, this is an oversight we need to fix. Perhaps the pull request on PyPorts fixes this partially, but we also need to actually validate that PyPorts work completely as part of a PyProcModel.
Implement pause(..) in Runtime and enable it in the PyLoihiProcessModel and LoihiPyRuntimeService.
All LICENSE files throughout lava and its libraries that declare the BSD-3 license give copyright to "Intel NRC Ecosystem".
It should (probably) be "Intel Corporation" instead?
@mgkwill Can you confirm this is in fact true?
Find and fix any typos in any tutorials.
Right now, the currently only available PyProcBuilder validates that the Vars and Ports defined in a Proc have a corresponding Var and Port implementation in a ProcModel.
Since this will also be required for other types of ProcModels in the future, we should pull this out of the builder, put it into the compiler directly, and call it as early as possible once a ProcModel has been selected for a Proc.
As a user I'd like to be able to use different ports based on sparsity using vectors and scalars.
Implied ports:
Retrofit to work with other input types/shapes:
lava/lava/magma/core/model/py/ports.py
Lines 53 to 75 in 9d80ea1
Ensure that ports process data correctly when receiving from the matrix:
|                  | Vector Receive Dense | Vector Receive Sparse | Scalar Receive Dense | Scalar Receive Sparse |
|------------------|----------------------|-----------------------|----------------------|-----------------------|
| Vec Send Dense   |                      |                       |                      |                       |
| Vec Send Sparse  |                      |                       |                      |                       |
| Sca Send Dense   |                      |                       |                      |                       |
| Sca Send Sparse  |                      |                       |                      |                       |
To have an asynchronous recv capability in a ProcessModel (i.e., calling recv() on a PyInPort only when data has actually reached the port), we need the probe method of PyInPort to be implemented, so that we can probe the PyInPort and only call recv() when probe() returns True.
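A minimal sketch of the probe-before-recv pattern, using a plain queue.Queue as a stand-in for the underlying CSP channel (the actual PyInPort/CspRecvPort interfaces may differ):

```python
import queue


class SketchInPort:
    """Stand-in for a PyInPort backed by a single channel queue."""

    def __init__(self):
        self._channel = queue.Queue()

    def probe(self) -> bool:
        # True if at least one message is waiting, without consuming it.
        return not self._channel.empty()

    def recv(self):
        # Blocks until data is available; safe after probe() returned True.
        return self._channel.get()


port = SketchInPort()
assert not port.probe()          # nothing has arrived yet
port._channel.put([1, 2, 3])     # simulate a sender
if port.probe():                 # only call recv() when data is present
    data = port.recv()
```

With this pattern, run_spk can poll the port without blocking the whole time step when no spike has arrived.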
Data sent to a Var through the set_var(..) command is currently sent one integer item after another when it is an array.
Make this more efficient by sending the whole array at once.
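A minimal sketch of the idea, not the actual channel API: instead of looping over array elements, serialize the whole ndarray to bytes and reconstruct it on the receiving side.

```python
import numpy as np


def send_whole_array(arr: np.ndarray) -> bytes:
    # One message carrying the full buffer instead of one per element.
    return arr.tobytes()


def recv_whole_array(payload: bytes, dtype, shape) -> np.ndarray:
    # Reconstruct the array from the raw bytes on the receiving side.
    return np.frombuffer(payload, dtype=dtype).reshape(shape)


a = np.arange(12, dtype=np.int32).reshape(3, 4)
b = recv_whole_array(send_whole_array(a), np.int32, (3, 4))
assert (a == b).all()
```

One bulk transfer amortizes the per-message channel overhead that dominates when sending an array item by item.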
PyLoihiProcessModel and LoihiPyRuntimeService
Reference:
Originally posted by @awintel in #46 (comment)
During development code often crashes. When this happens in a parallel system Process (involving Python multiprocessing) via jupyter then there are a lot of parallel Python processes left that need to be killed manually.
Interestingly, this does not happen when running the same code from a *.py file (via PyCharm).
This needs to be investigated and fixed.
Currently, the @tag decorator adds a ProcessModel.tags attribute, which is used by the appropriate RunConfigs. If a ProcessModel is not decorated with the decorator, then there is no tags attribute. This should be handled gracefully by raising a verbose exception; currently, we only get <...> has no attribute tags.
As a user, I'd like to use Python type hint checking like flake8-annotations, which would enable us to commit code that has proper type hints and keep people from committing code with improper type hints.
Right now we have two issues in the RuntimeService:
There are multiple places across the lava and lava libs codebase that include explicit namespace packages.
Such as:
Line 1 in cfcd72a
lava/src/lava/magma/__init__.py
Line 1 in cfcd72a
However, since Python 3.3, namespaces are defined implicitly:
PEP 420 -- Implicit Namespace Packages
This issue involves
Currently, only np.ndarray and int are supported as LavaPyTypes when building a PyProcModel within the Runtime. Supporting float should be a straightforward fix.
# Initialize Vars
for name, v in self.vars.items():
    # Build variable
    lt = self._get_lava_type(name)
    if issubclass(lt.cls, np.ndarray):
        var = lt.cls(v.shape, lt.d_type)
        var[:] = v.value
    elif issubclass(lt.cls, int):
        var = v.value
    else:
        raise NotImplementedError
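A minimal sketch of the suggested fix, assuming the scalar branch can simply be widened to accept float as well. The helper below is illustrative and does not reproduce the surrounding builder code:

```python
import numpy as np


def build_var(lava_cls, d_type, shape, value):
    """Sketch: treat float like int as a plain scalar LavaPyType."""
    if issubclass(lava_cls, np.ndarray):
        var = np.empty(shape, dtype=d_type)
        var[:] = value
        return var
    elif issubclass(lava_cls, (int, float)):  # widened scalar branch
        return value
    raise NotImplementedError(f"Unsupported LavaPyType: {lava_cls}")


assert build_var(float, None, None, 1.34) == 1.34
assert build_var(int, None, None, 3) == 3
```

The only change relative to the snippet above is the (int, float) tuple in the elif; ndarray handling stays as-is.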
When sending floating-point values across processes, the values get rounded down. This is most likely an issue with the get() function. The code below is a minimal example of the issue: process 1 has a Var containing floating-point values and sends it to process 2. When the variable is accessed from process 2 with get() and printed, the result is [[1., 1., 1.]] instead of [[1.34, 1.0, 1.0]].
class InSpikeSetProcess(AbstractProcess):
    def __init__(self, **kwargs):
        """Use to set the value of the input spike to a process.

        Kwargs
        ------
        in_shape : int tuple, optional
            Set a_out to a custom value
        spike_in : 1-D array, optional
            Input spike value to send
        """
        super().__init__(**kwargs)
        shape = kwargs.pop("in_shape", (1, 1))
        self.a_out = OutPort(shape=shape)
        self.spike_inp = Var(shape=shape, init=kwargs.pop("spike_in", 0))


class OutProbeProcess(AbstractProcess):
    def __init__(self, **kwargs):
        """Use to read the output spike from a process.

        Kwargs
        ------
        out_shape : int tuple, optional
            Set the output shape to a custom value
        """
        super().__init__(**kwargs)
        shape = kwargs.pop("out_shape", (1, 1))
        self.s_in = InPort(shape=shape)
        self.spike_out = Var(shape=shape, init=np.zeros(shape))


@implements(proc=InSpikeSetProcess, protocol=LoihiProtocol)
@requires(CPU)
class PyISSModel(PyLoihiProcessModel):
    a_out: PyOutPort = LavaPyType(PyOutPort.VEC_DENSE, np.float64)
    spike_inp: np.ndarray = LavaPyType(np.ndarray, np.float64)

    def run_spk(self):
        a_out = self.spike_inp
        self.a_out.send(a_out)
        self.a_out.flush()


@implements(proc=OutProbeProcess, protocol=LoihiProtocol)
@requires(CPU)
class PyOPPModel(PyLoihiProcessModel):
    s_in: PyInPort = LavaPyType(PyInPort.VEC_DENSE, np.float64)
    spike_out: np.ndarray = LavaPyType(np.ndarray, np.float64)

    def run_spk(self):
        s_in = self.s_in.recv()
        self.spike_out = s_in


def test_floating_send(self):
    input_spike = np.array([[1.34], [1], [1]])
    in_spike_process = InSpikeSetProcess(
        in_shape=input_spike.shape, spike_in=input_spike
    )
    out_spike_process = OutProbeProcess(out_shape=in_spike_process.a_out.shape)
    in_spike_process.a_out.connect(out_spike_process.s_in)
    in_spike_process.run(
        condition=RunSteps(num_steps=1), run_cfg=Loihi1SimCfg()
    )
    in_spike_process.pause()
    print(out_spike_process.vars.spike_out.get())
    in_spike_process.stop()
Each lava module contains a top-level license that explains the licensing of the module, but the utils folder is missing such a license.
This issue is to provide such a license file for BSD-3-Clause.
We are currently in a transitory state where Runtime.wait() and Runtime.pause() are not fully implemented, so running in non-blocking mode does not actually make sense. So perhaps we should just not allow blocking=False right now and throw an error as long as this is not implemented.
But regardless of that, when someone runs with blocking=False, then there is the potential that users call stop() while the underlying processes are still not done running.
This currently ends in not very meaningful error messages saying that some processes did not TERMINATE but are only DONE. Obviously this should be fixed.
The expected behavior would be:
If the Runtime gets stopped while still running, the TERMINATE token gets distributed to all processes. All processes stop at the next possible iteration and then they should acknowledge with the TERMINATED token. If not, the error message should specify why that's not possible.
The @tags decorator for ProcessModels will distinguish between ProcessModel classes of the same name, which @implements the same Process class, perhaps on the same backend. For example,
@tags('keyword1', 'keyword2')
class TestModel(AbstractProcessModel):
    ...
This will add 'keyword1' and 'keyword2' to a tags list in the TestModel class, which can enable the compiler to differentiate it from another TestModel class, perhaps implementing slightly different behaviour for the same Process class.
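A minimal sketch of such a decorator (the actual Lava implementation may store or validate tags differently):

```python
def tags(*keywords):
    """Class decorator that attaches a list of tag keywords to a class."""
    def decorator(cls):
        cls.tags = list(keywords)
        return cls
    return decorator


@tags('floating_pt', 'cpu')
class TestModel:
    ...


assert TestModel.tags == ['floating_pt', 'cpu']
```

A RunConfig could then select among same-named ProcessModels by inspecting the tags list.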
Depends on issue #53 (implement pause)
Calling get/set Var should be possible during pause.
Lava was released with a combination of BSD3 and LGPL 2.1+ licenses. Any other license should be removed.
The LICENSE files in the directory structure seem to reflect our licensing scheme, but all file headers are BSD-3, including all files in lava/magma/compiler and lava/magma/runtime.
Is it meant to be that way? I was expecting all files under compiler and runtime to have LGPL-2.1 headers.
Why does the Lava developer guide state the following?
For lava-nc/magma/compiler and lava-nc/magma/runtime use either BSD 3 or LGPL 2.1+.
If the headers are indeed incorrect, feel free to assign me.
Right now there is a lot of content duplication in the lava readme and the lava-nc.org landing page.
Now that the project has been launched and we have more and more actual code content, we should revisit both pages, simplify them, and avoid having to manage the same content in different places.
The lava readme should become shorter and simpler and only reference key information in other places.
The content on the lava-nc.org page could be broken up and put on different sub pages as well.
In addition, we should think about search engine optimization for lava-nc.org so that the page structure gets displayed better by google.
As a user, I would like to monitor more than one variable with each Monitor process. This would make code for process monitoring much more concise.
Monitoring multiple Vars requires disabling checks for coherence between Process and ProcessModels.
Dynamically adding new probes to a Monitor process (i.e. creating new RefPorts and InPorts in the Monitor process) requires disabling the checks for the existence of a corresponding LavaPyType for each newly created RefPort/InPort. More specifically, the following lines in compiler.py check for this coherence between Process and ProcessModels:
b.set_variables(v)
self._get_port_dtype(pt, pm)
self._map_var_port_class(pt, proc_groups)
There is a year-old pull request (#238) that tried to enable this; however, it never went through or got reviewed, as it was deprioritized at the time.
The install instructions for Windows in the README.md file currently include:
source python3_venv\bin\activate.bat
Generally, I am not sure where those commands should be executed: the Windows cmd or PowerShell? There, source does not exist.
I believe the correct command for cmd would be
python3_venv\Scripts\activate.bat
I suggest changing that command in the README.md and also adding a line at the beginning to explain where to execute all those commands.
Implement conv process and models with floating-point and fixed-point precision implementations.
The Runtime and RuntimeService do currently contain a number of hacks that need to be cleaned up:
Side question: Should we use asyncio in PyProcModels and PyRuntimeService instead of busy waiting on channels? Will this be more efficient?
It looks like build scripts pack lava incorrectly.
For example:
> wget https://github.com/lava-nc/lava/releases/download/v0.1.0/lava-nc-0.1.0.tar.gz
> pip install lava-nc-0.1.0.tar.gz
> python -c "import lava"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'lava'
/usr/lib/python3.9/site-packages/ contains
lava_nc.egg-info/
magma/
proc/
tutorials/
utils/
but no lava directory exists.
Moreover, uninstalling attempts to delete everything in /usr/lib/python3.9/site-packages/
> sudo pip uninstall lava-nc
Found existing installation: lava-nc 0.1.0
Uninstalling lava-nc-0.1.0:
Would remove:
/usr/lib/python3.9/site-packages/*
Would not remove (might be manually added):
...
Currently, there is an 'unexpected argument' type warning in PyProcBuilder.build() at:
port = port_cls(csp_ports, pm, p.shape, lt.d_type)
This is because PyInPort, PyOutPort, ... have constructors like this:
def __init__(self, csp_recv_ports: ty.List[CspRecvPort], *args):
    self._csp_recv_ports = csp_recv_ports
    super().__init__(*args)
... calling the AbstractPortImplementation constructor with the following signature:
def __init__(
        self,
        process_model: "AbstractProcessModel",  # noqa: F821
        shape: ty.Tuple[int, ...] = tuple(),
        d_type: type = int):
While this works, it is not good style. The Builder only expects an AbstractPyPort which does not have its own overloaded init method but inherits it from AbstractPortImplementation. Thus the Builder throws the 'unexpected argument' warning.
PyInPort and PyOutPort must have a common parent class with an __init__ function signature that expects the csp_ports passed to it by the Builder.
Then we can also get rid of accepting an anonymous argument list like *args, which is unnecessary.
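A minimal sketch of what such a common parent could look like. Class and parameter names here are illustrative, not the actual Lava hierarchy:

```python
import typing as ty


class AbstractPyPortSketch:
    """Common parent whose __init__ explicitly expects the csp ports."""

    def __init__(self, csp_ports: ty.List[object], process_model: object,
                 shape: ty.Tuple[int, ...] = (), d_type: type = int):
        self._csp_ports = csp_ports
        self._process_model = process_model
        self._shape = shape
        self._d_type = d_type


class PyInPortSketch(AbstractPyPortSketch):
    pass  # inherits the explicit signature; no *args forwarding needed


port = PyInPortSketch([], None, (2,), int)
assert port._shape == (2,)
```

With an explicit signature on the shared parent, the Builder's call matches a declared parameter list and the 'unexpected argument' warning disappears.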
On a related note: Why is there an AbstractCspSendPort in the type hierarchy between CspSendPort and AbstractCspPort?
The Runtime and ChannelBuilder need to support one channel receiving inputs from multiple channels, or one channel sending outputs to multiple channels, as well as the PyPorts.
See https://github.com/lava-nc/lava/blob/main/tests/magma/core/process/test_ports.py
lava/tests/magma/core/process/test_ports.py
Lines 67 to 82 in 9d80ea1
https://github.com/lava-nc/lava/blob/main/lava/magma/core/model/py/ports.py
Fork:
In the one-to-many case, iterate and copy the data over the many channels.
Join:
Needs to define a reduce method, i.e. how the data from multiple ports should be combined when they connect to a single port.
Possible visitor pattern implementation.
Implement ReduceOps for the reduce function.
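A minimal sketch of the fork/join idea using plain lists as stand-in channels and a pluggable reduce operation (the actual CspPort interfaces will differ):

```python
from functools import reduce
import operator


def fork(data, channels):
    # One-to-many: copy the same data onto every outgoing channel.
    for ch in channels:
        ch.append(data)


def join(channels, reduce_op):
    # Many-to-one: combine one item from each incoming channel.
    items = [ch.pop(0) for ch in channels]
    return reduce(reduce_op, items)


outs = [[], [], []]
fork(5, outs)                           # every channel receives 5
assert all(ch == [5] for ch in outs)
assert join(outs, operator.add) == 15   # ReduceOp: sum
```

Passing the reduce operation as a parameter is the essence of the proposed ReduceOps: the same join machinery works for sum, max, logical OR, and so on.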
The initialization of RefPorts and VarPorts during build(..) in the PyProcessBuilder expects 2 csp_ports, one receiving and one sending.
Currently there could be a list of csp_ports. This needs to be cleaned up.
Should be done in parallel with Issue #55
Reference:
This is not so elegant and looks like a hack. Normally there could be multiple csp_ports in case of 1:many or many:1 connectivity.
Here there are multiple ports because we've just misused it to pass the different send and recv ports. That's conceptually different. If we throw errors that forbid non 1:1 connections then this will work but we should find a better solution.
Either implement a better way, or file a descriptive issue to address this hack and make it cleaner.
Originally posted by @awintel in #46 (comment)
@ashishrao7's quadratic programming process in lava-optim seems to be super slow compared to a pure numpy implementation. Since the only main difference between the two implementations seems to be the event-based message passing in between, we need to figure out why that is the case.
Leaky-Integrate-and-Fire dynamics is implemented in Loihi hardware as the default neuron dynamics. There should be a ProcessModel that implements this behaviour exactly as it takes place inside Loihi.
Initial enabling of RefPorts connecting to VarPorts.
This allows a user to access variables of another process.
Limitations:
Currently the compiler searches for a ProcessModel with the 'implements' decorator in the same directory as the Process.
'implements' could probably be changed into 'has_models' to strictly define all behaviours for a Process class.
Then there won't be an issue with using a model from a different directory, and the compiler code will be simplified.
Example:
@requires(CPU)
class ProcessCPUModel(AbstractProcessModel):
    ...

@requires(GPU)
class ProcessGPUModel(AbstractProcessModel):
    ...

@has_models(ProcessCPUModel, ProcessGPUModel)
class Process(AbstractProcess):
    ...
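A minimal sketch of what a has_models decorator could do. This is a proposal being illustrated, not existing Lava API:

```python
def has_models(*model_classes):
    """Class decorator that registers the ProcessModels of a Process."""
    def decorator(process_cls):
        process_cls.process_models = list(model_classes)
        # The compiler could then read Process.process_models directly
        # instead of scanning the Process's directory for @implements.
        return process_cls
    return decorator


class CPUModel: ...
class GPUModel: ...


@has_models(CPUModel, GPUModel)
class MyProcess: ...


assert MyProcess.process_models == [CPUModel, GPUModel]
```

Registering models on the Process class removes the directory-scanning step entirely, which is where the current cross-directory limitation comes from.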
None of our unit tests so far has detected that errors happening during the building of a remote ProcessModel or during execution do not get thrown to the user process.
Instead, the Runtime just deadlocks and it is very hard to figure out where the problem happened.
Presumably, this is because the remote system process, running the ProcessModel, dies and thus the Runtime or RuntimeService waits forever on a channel response that the ProcessModel is done doing what it was tasked to do.
Perhaps, if we detect that something failed in the remote system process and communicate this back via an ERROR management token, then the system can terminate gracefully and may even print any exceptions that get thrown.
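A minimal sketch of the ERROR-token idea, using a thread and a queue as stand-ins for the remote system process and the management channel (the real implementation would use multiprocessing and Lava's channel types):

```python
import queue
import threading

DONE, ERROR = "DONE", "ERROR"


def remote_model(mgmt_channel):
    try:
        raise RuntimeError("ProcessModel build failed")  # simulated failure
    except Exception as e:
        # Report the failure instead of dying silently, which would
        # otherwise leave the Runtime waiting forever for DONE.
        mgmt_channel.put((ERROR, repr(e)))
    else:
        mgmt_channel.put((DONE, None))


mgmt = queue.Queue()
t = threading.Thread(target=remote_model, args=(mgmt,))
t.start()
t.join()
token, detail = mgmt.get(timeout=1)
assert token == ERROR
```

On receiving ERROR, the Runtime can terminate gracefully and re-raise or print the exception detail to the user, instead of deadlocking.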
Aside from fixing this issue, we should also create a ProcessTester utility which wraps around any individual ProcessModel (without a Compiler and Runtime), creates complementary Ports to any ProcessModel Ports and allows to inject targeted messages into the process in order to debug it without having to set up the entire machinery of Compiler and Runtime.
We should combine the creation of Var- and PortInitializers (for Vars, I/O Ports, RefPorts, VarPorts) in both the PyProcCompiler and the CProcCompiler. There are currently many copies of the same functionality with slight differences (for instance, RefPorts vs. InPorts).
inport_initializers = self._create_inport_initializers(process)
outport_initializers = self._create_outport_initializers(process)
refport_initializers = self._create_refport_initializers(process)
varport_initializers = self._create_varport_initializers(process)
All these methods could be based on the same method with parameters.
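A sketch of what that single parameterized method could look like (the class fields and the in_ports/out_ports/ref_ports/var_ports attribute names are assumptions for this sketch; see pyproc_compiler.py for the real signatures):

```python
from dataclasses import dataclass

@dataclass
class PortInitializer:
    """Simplified stand-in for the compiler's initializer classes."""
    name: str
    shape: tuple
    port_kind: str  # "in", "out", "ref", or "var"

def create_port_initializers(process, port_kind):
    """One parameterized method replacing the four near-identical
    _create_*_initializers methods. Assumes the process exposes
    collections named in_ports, out_ports, ref_ports, var_ports."""
    ports = getattr(process, f"{port_kind}_ports")
    return [PortInitializer(p.name, p.shape, port_kind) for p in ports]

# Tiny demo with a fake process: the four separate calls become a loop.
class FakePort:
    def __init__(self, name, shape):
        self.name, self.shape = name, shape

class FakeProcess:
    in_ports = [FakePort("s_in", (4,))]
    out_ports = [FakePort("a_out", (4,))]
    ref_ports = []
    var_ports = []

initializers = {kind: create_port_initializers(FakeProcess(), kind)
                for kind in ("in", "out", "ref", "var")}
```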
See lava/src/lava/magma/compiler/subcompilers/py/pyproc_compiler.py
Reference:
Originally posted by @awintel in #46 (comment)
https://github.com/lava-nc/lava/blob/main/lava/magma/compiler/channels/channel_utils.py
# Copyright (C) 2021 Intel Corporation
# SPDX-License-Identifier: BSD-3-Clause
import numpy as np
from lava.magma.compiler.channels.pypychannel import __PyPyChannel
def __create_pypy_mgmt_channel(smm, name) -> __PyPyChannel:
"""
Helper function to create a python to python Mgmt channel. This is typically
backed by shared memory. Shared memory needs to be managed by the creator.
:param smm: Shared Memory Manager
:param name: Name of the Mgmt Channel. Only Mgmt Commands are sent on this.
:return: __PyPyChannel Channel Handler
"""
channel = __PyPyChannel(
smm=smm, name=name, shape=(1,), dtype=np.int32, size=8
)
return channel
I was looking into improving the test coverage and noticed that the function __create_pypy_mgmt_channel looks out of date. There doesn't appear to be a __PyPyChannel class in the code base currently; possibly it has been renamed to PyPyChannel with a different __init__ signature.
Without knowing the codebase well, it seems __create_pypy_mgmt_channel either needs to be updated or deleted.
Currently a csp_port has a fixed shape for sending data. Sometimes, though, a "header" is additionally needed, which is likely to have a different shape than the data.
Example:
Originally posted by @awintel in #46 (comment)
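One possible workaround, sketched here under the assumption that header and payload can be serialized into a single flat byte buffer that fits a fixed-dtype channel (this is not the csp_port API, just an illustration of the shape mismatch):

```python
import numpy as np

def pack_message(header, data):
    """Pack a small int32 header and a float32 payload of a different
    shape into one flat byte buffer (workaround sketch)."""
    hbytes = header.astype(np.int32).tobytes()
    dbytes = data.astype(np.float32).tobytes()
    return np.frombuffer(hbytes + dbytes, dtype=np.uint8)

def unpack_message(buf, header_len, data_shape):
    """Split the flat buffer back into header and payload."""
    hsize = header_len * 4  # int32 header entries are 4 bytes each
    header = np.frombuffer(buf[:hsize].tobytes(), dtype=np.int32)
    data = np.frombuffer(buf[hsize:].tobytes(), dtype=np.float32)
    return header, data.reshape(data_shape)

hdr = np.array([7, 3], dtype=np.int32)            # e.g. message type, length
payload = np.arange(6, dtype=np.float32).reshape(2, 3)
buf = pack_message(hdr, payload)
h, d = unpack_message(buf, header_len=2, data_shape=(2, 3))
```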
This was recently recommended by someone for efficient graph partitioning:
http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview
We could explore its applicability to partitioning Lava processes onto Loihi neuro cores.
In the example code in README.md, we have local variables:
lif1 = LIF()
dense = Dense()
lif2 = LIF()
but connect them using self:
lif1.out_ports.s_out.connect(self.dense.in_ports.s_in)
dense.out_ports.a_out.connect(self.lif2.in_ports.a_in)
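Assuming the snippet is plain module-level code (no class context), the consistent version simply drops self (or, inside a hierarchical Process, all three references would be prefixed with self). A runnable sketch with minimal stand-ins for LIF and Dense (the real classes come from lava.proc; Port/Proc here are illustrative only):

```python
# Minimal stand-ins so the corrected connection pattern runs on its own.
class Port:
    def __init__(self):
        self.peers = []
    def connect(self, other):
        self.peers.append(other)

class Ports:
    def __init__(self, **kw):
        self.__dict__.update(kw)

class Proc:
    """Stand-in for LIF/Dense with the in_ports/out_ports attribute style."""
    def __init__(self, **ports):
        self.in_ports = Ports(**{k: v for k, v in ports.items()
                                 if k.endswith("_in")})
        self.out_ports = Ports(**{k: v for k, v in ports.items()
                                  if k.endswith("_out")})

lif1 = Proc(s_out=Port(), a_in=Port())
dense = Proc(a_out=Port(), s_in=Port())
lif2 = Proc(s_out=Port(), a_in=Port())

# The consistent form: local variables throughout, no stray `self.`.
lif1.out_ports.s_out.connect(dense.in_ports.s_in)
dense.out_ports.a_out.connect(lif2.in_ports.a_in)
```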
Requirements:
[ ] Compile a C program from a specified source in the python object
[ ] Load a C function into a python object
[ ] Simple C API for channel communication (send, recv, peek, probe)
[ ] Share state (numpy data and protocol signals) with the python object
Possibilities:
Starting definition:
lava/lava/magma/core/model/c/model.py
Line 8 in 9d80ea1
As an NCL developer, I want the PyProcessBuilder to use the same code to initialize PyPorts, RefPorts, and VarPorts in the build() method, to avoid code duplication.
Currently the initialization of PyPorts, RefPorts and VarPorts is done separately within the build() method, although they share common code. Ideally, a helper method should be created to handle the initialization of the different Port types, which can then be called within the build() method.
The code lives in lava/src/lava/magma/compiler/builders/py_builder.py
Note: This is not a full user story because it is only a refactoring effort that has no direct value to external customers.
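A sketch of the kind of helper meant here, with tiny stand-ins for the builder's initializers and ports (the field names and the Initializer/FakePort classes are assumptions; the real code lives in py_builder.py):

```python
from dataclasses import dataclass
from types import SimpleNamespace

@dataclass
class Initializer:
    """Stand-in for the builder's Py/Ref/VarPortInitializer (fields assumed)."""
    name: str
    shape: tuple
    port_cls: type

class FakePort:
    def __init__(self, name, shape):
        self.name, self.shape = name, shape

def build_ports(proc_model, *initializer_groups):
    """One helper replacing the separate per-port-type loops in build():
    instantiate each port from its initializer and attach it to the model."""
    for group in initializer_groups:
        for init in group:
            setattr(proc_model, init.name, init.port_cls(init.name, init.shape))

# Usage sketch: one call covers PyPorts, RefPorts and VarPorts alike.
model = SimpleNamespace()
py_inits = [Initializer("s_in", (4,), FakePort)]
ref_inits = [Initializer("ref_v", (1,), FakePort)]
var_inits = [Initializer("var_v", (1,), FakePort)]
build_ports(model, py_inits, ref_inits, var_inits)
```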