Contributors: calumroy
htm's Issues

learning function decreases temporal pooling.

When an input doesn't change, the learning function still updates a column's synapses. This can result in poor temporal pooling: while a lower level is temporally pooling, a higher layer's input stays constant, yet the higher layer's column synapses keep being updated. Any synapses that are not currently active but are used in other temporally pooled patterns are decremented harshly, so the column can "forget" previously temporally pooled patterns.
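The implied fix is to skip the synapse update while the input is static. A minimal sketch of that idea (function and parameter names are illustrative, not the repository's actual API):

```python
import numpy as np

def update_column_synapses(permanences, active_input, prev_input,
                           inc=0.1, dec=0.02):
    """Only adapt a column's proximal synapses when the input changed."""
    if prev_input is not None and np.array_equal(active_input, prev_input):
        # Input is static (e.g. a lower layer is temporally pooling):
        # leave permanences alone so synapses used by other temporally
        # pooled patterns are not decremented away.
        return permanences
    # Normal Hebbian-style update: strengthen synapses on active input
    # bits, weaken the rest, keeping permanences in [0, 1].
    return np.clip(permanences + np.where(active_input > 0, inc, -dec),
                   0.0, 1.0)
```

With this gate, a higher layer whose input is held constant by a lower layer's temporal pooling performs no updates at all, so nothing is forgotten.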

Spatial pooler GUI viewer

The current GUI allows each column to be selected to show that column's connected spatial pooler synapses. Since the input can now have different dimensions to the HTM column array, this view in the GUI should be drawn on the input and not on the HTM.

Temporal pooling degraded by spatial learning bug.

The temporal pooling of columns relies on columns becoming activated after predicting that they would become active. On the next input the column is then given a greater chance of staying active by boosting its overlap with the input. If the boosted overlap is still larger than the minOverlap parameter, the column remains active: it has been temporally pooled. The problem arises when a column's spatial pooler has pruned most of the input synapses away so that the column only responds to a single feature in the input. Once this happens it is unlikely to temporally pool successfully on the next input, since most of its potential synapses are no longer connected.

What needs to happen is that the column must allow all of its potential synapses, including unconnected ones, to contribute to the overlap, helping the column temporally pool and stay active into the next input.
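A sketch of that proposal (names and the flag are assumptions, not the repository's real API): a column that correctly predicted its activation counts every potential synapse on an active input bit, connected or not, so heavy spatial pruning cannot break temporal pooling.

```python
import numpy as np

def temporal_overlap(potential_perm, input_bits, connect_perm=0.3,
                     was_predicting_and_active=False):
    """Overlap score for one column's potential synapses."""
    if was_predicting_and_active:
        # Temporal pooling case: every potential synapse on an active
        # input bit contributes, even if its permanence is below the
        # connected threshold.
        return int(np.sum(input_bits > 0))
    # Normal spatial pooler overlap: only connected synapses count.
    connected = potential_perm >= connect_perm
    return int(np.sum(connected & (input_bits > 0)))
```

A column pruned down to one connected synapse still scores its full potential overlap while it is temporally pooling, so it can clear minOverlap and stay active.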

Debug ipdb with PyQt

Use:

from PyQt4.QtCore import pyqtRemoveInputHook
import ipdb
pyqtRemoveInputHook()  # stop Qt swallowing console input
ipdb.set_trace()

Add a way to specify where each layer's feedback comes from

Add to the config a parameter specifying, for each layer, the layer and level it receives feedback from. The feedback is the output of the specified layer in the specified level; it is combined with the new input to form the total input to the layer.
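One possible shape for that parameter (the 'feedback' key and its fields are assumptions, not the repository's actual config schema):

```python
# Illustrative layer config fragment: the layer names the level and
# layer whose output is fed back and joined with its own input.
layer_config = {
    'desiredLocalActivity': 1,
    'minOverlap': 3,
    # Proposed addition: source of this layer's feedback.
    'feedback': {'level': 1, 'layer': 2},
}

def feedback_source(cfg):
    """Return the (level, layer) whose output feeds back into this
    layer, or None when no feedback is configured."""
    fb = cfg.get('feedback')
    return (fb['level'], fb['layer']) if fb else None
```

Layers without a 'feedback' entry would behave as they do now, taking only their feed-forward input.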

np_activeCells profiling speed

Related to Issue #5

The updateActiveCells function has now (1bf77dc) been extracted into its own calculator class. A few changes were made, mainly to stop constantly resizing arrays and to give them a fixed size. This improves speed since arrays are no longer being appended to, but it has made the update structure, which stores information about which synapses learning may be performed on, more complicated.

Further improvements can be made by storing, in addition to the tensor holding the end connections of all cells' distal synapses (distalSynapses), another tensor holding the starting position of every cell's distal synapses (distalSynapsesSegOrigin). This would improve the updateActiveCells function: instead of calling getBestMatchingSegment to find, for a cell, the segment that was most active for a particular timeStep, we can go through the list of active cells for that timeStep and see which distal synapses have an end connection to them. This means only visiting the synapses that are active instead of checking all distal synapses of a particular cell. Each active distal synapse adds 1 to the segment it originates from; once all active synapses have been checked, a total score exists for each segment. This total score indicates which segment is the best match for a particular cell, and getBestMatchingSegment can then return that segment.
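The proposed vote-counting scheme can be sketched as follows (data layout and names are illustrative; the real implementation would index into the distalSynapses / distalSynapsesSegOrigin tensors rather than use Python lists):

```python
from collections import defaultdict

def best_matching_segment(cell_synapses, active_cells):
    """cell_synapses: (segment_index, end_cell) pairs for one cell's
    distal synapses. active_cells: cells active this timeStep.
    Each synapse ending on an active cell votes for the segment it
    originates from; the segment with the most votes is the best match.
    """
    votes = defaultdict(int)
    for seg_idx, end_cell in cell_synapses:
        if end_cell in active_cells:
            votes[seg_idx] += 1
    if not votes:
        return None  # no segment had any active synapses
    return max(votes, key=votes.get)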

Below is a cProfile run of the updateActiveCells function for the following parameters. This is the standard np_activeCells calculator class (no distalSynapsesSegOrigin tensor); it uses the following inputs:

timeStep, activeColumns, predictiveCells, activeSeg, distalSynapses

Parameters;

    numRows = 400
    numCols = 40
    cellsPerColumn = 10
    numColumns = numRows * numCols
    maxSegPerCell = 10
    maxSynPerSeg = 10
    minNumSynThreshold = 1
    minScoreThreshold = 1
    newSynPermanence = 0.3
    timeStep = 3
2264896 function calls in 11.559 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     4001    0.073    0.000    0.190    0.000 np_activeCells.py:284(newRandomPrevActiveSynapses)
     4001    0.029    0.000    0.031    0.000 np_activeCells.py:295(findLeastUsedSeg)
     3943    0.010    0.000    0.027    0.000 np_activeCells.py:313(checkColBursting)
     1981    0.001    0.000    0.003    0.000 np_activeCells.py:329(findActiveCell)
     1962    0.002    0.000    0.004    0.000 np_activeCells.py:340(findLearnCell)
    25602    0.045    0.000    0.047    0.000 np_activeCells.py:349(setActiveCell)
     7944    0.014    0.000    0.015    0.000 np_activeCells.py:361(setLearnCell)
   425663    1.891    0.000    1.891    0.000 np_activeCells.py:373(checkCellActive)
     1962    0.001    0.000    0.001    0.000 np_activeCells.py:383(checkCellLearn)
    40010    0.047    0.000    0.047    0.000 np_activeCells.py:393(checkCellPredicting)
    40010    0.796    0.000    2.682    0.000 np_activeCells.py:403(segmentHighestScore)
   400100    7.749    0.000    7.882    0.000 np_activeCells.py:420(segmentNumSynapsesActive)
    40010    0.259    0.000    8.159    0.000 np_activeCells.py:443(getBestMatchingSegment)
        1    0.116    0.116   10.958   10.958 np_activeCells.py:474(updateActiveCellScores)
        1    0.235    0.235   11.559   11.559 np_activeCells.py:503(updateActiveCells)
    40010    0.083    0.000    0.116    0.000 random.py:293(sample)
    40010    0.022    0.000    0.022    0.000 {hasattr}
   568144    0.041    0.000    0.041    0.000 {len}
    40010    0.004    0.000    0.004    0.000 {method 'add' of 'set' objects}
    33546    0.003    0.000    0.003    0.000 {method 'append' of 'list' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
    40010    0.004    0.000    0.004    0.000 {method 'random' of '_random.Random' objects}
   505974    0.136    0.000    0.136    0.000 {range}

Temporal pooling disturbs spatial pooling

When temporal pooling occurs, the pooled column stays active. This active column influences other columns in the inhibition step of the spatial pooling on the next time step, e.g. columns that would not otherwise have been active can now win the inhibition round because of the way the temporally pooled column's inhibition radius influences the outcome.

This has the effect that learning takes longer than it should. It's related to issue #12.

Pause and load the GUI in a test.

Within the nosetests the HTM GUI can be loaded, halting the testing. This can be very useful for seeing what is happening in the HTM network during a test. Insert the following code into the test:

app = QtGui.QApplication(sys.argv)
self.htmGui = GUI_HTM.HTMGui(self.htm, self.InputCreator)
sys.exit(app.exec_())

Note this doesn't allow you to return to the test afterwards, so the test is reported as failed.

Integration testing

More tests need to be written to test the large-scale features of the HTM networks. Features such as temporal and sequence pooling need to be thoroughly tested. Temporal pooling in particular, since the new Q-learning function will greatly depend on its reliability and output.

spatial pooler active column inhibition bug

The inhibition step of the spatial pooler contains a bug. A column may incorrectly not be activated because it doesn't have a larger overlap value than minLocalActivity. The minLocalActivity is calculated by choosing the largest overlap value from the neighbouring columns that results in only a certain number of columns becoming active (the desiredLocalActivity parameter). This calculation doesn't take into account columns that have already been inhibited; those columns' overlap values shouldn't affect the minLocalActivity of other columns.

What happens is that columns which have a larger overlap than minOverlap and are more than the inhibition radius away from active columns may still not be activated. See the screenshot below.

The HTM level is on the right, with green squares showing active columns and red squares inactive ones. The white squares show the inhibition radius around the centre selected column. The inhibition radius is 2 in this case, and the selected column in the middle of the highlighted white square on the right should be active. Its overlap with the input is shown on the left, highlighted in white (green squares are active and red squares inactive in the input). The minOverlap value is 3, but this column has an overlap value of 10. It should therefore be active, since it is further than 2 columns (inhibitionRadius=2) away from any of the active columns shown on the right.

(screenshot: bugspatialpooler_inhibradius_is_2)
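The fix implied above is to exclude already-inhibited columns when computing the threshold. A minimal sketch, assuming a simplified neighbourhood representation (names are illustrative, not the np_inhibition API):

```python
import numpy as np

def min_local_activity(neighbor_overlaps, neighbor_inhibited,
                       desired_local_activity):
    """Overlap threshold for one neighbourhood, ignoring columns that
    were already inhibited so their overlaps cannot suppress columns
    lying outside the winners' inhibition radii."""
    overlaps = np.asarray(neighbor_overlaps, dtype=float)
    inhibited = np.asarray(neighbor_inhibited, dtype=bool)
    eligible = np.sort(overlaps[~inhibited])[::-1]  # descending
    if len(eligible) < desired_local_activity:
        return 0.0  # fewer eligible columns than allowed winners
    # k-th largest overlap among the still-eligible neighbours.
    return float(eligible[desired_local_activity - 1])
```

With neighbours scoring [10, 8, 6] where the 8 is already inhibited and desiredLocalActivity=2, the threshold is 6 rather than 8, so the column with overlap 6 is no longer wrongly suppressed.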

Optimizing synapse connection.

The HTM could be optimised for better performance if each cell stored references to the synapses that connect to it. This would make it quick to find which synapses should be checked in which segments when determining which cells are predicting. This issue is related to increasing the speed of the HTM (#5).

example HTM learning is broken

The example HTM code doesn't learn the pattern correctly. The self.minScoreThreshold parameter is not working; repeated patterns are being learnt over and over.

I had a similar problem previously, which is why self.minScoreThreshold was introduced. It doesn't seem to work any longer.

The spatial pooler bias disrupts temporal pooling

When a column has learnt a sequence and has temporally pooled over it, sometimes the column is not correctly set as the winning column when part of its temporally pooled pattern appears as an input.

This happens in the following situation:

  1. The column first bursts because it was not expecting its temporally pooled pattern to begin.
  2. One of the next inputs, which that column's spatial pooler has temporally pooled over, has also been temporally pooled by another column as part of a different pattern.
  3. That other column has been given a slight bias by the inhibition calculator because of its position. This bias is added by the overlap calculator to resolve any ties in the spatial pooler.

That other column wins the inhibition stage and therefore becomes active. This is incorrect, as we were receiving inputs corresponding to a sequence that had been learnt and pooled by the first column. Normally a column that has learnt and temporally pooled over a pattern is given preference, but that doesn't happen here because the first input was unexpected and caused bursting.

If the inhibition calculator gave a slight bias to any column that had been active on the previous input, this may fix the problem.
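The suggested bias could look something like this (names and the bias value are assumptions; the real change would live in the inhibition calculator):

```python
def biased_overlaps(overlaps, prev_active, tie_bias=0.5):
    """Before the inhibition round, add a small tie-breaking bonus to
    any column that was active on the previous timestep, so a column
    that has temporally pooled a pattern beats a column favoured only
    by the positional tie-breaker."""
    return [o + tie_bias if was_active else o
            for o, was_active in zip(overlaps, prev_active)]
```

Keeping tie_bias below 1 (less than one synapse's worth of overlap) means it only breaks ties and can never override a genuinely larger overlap.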

The temporal pooler disrupts the spatial pooler.

When a column temporally pools over a sequence of inputs such as A, B, C, the spatial pooler learns to activate that column when it sees A, B or C. This means that when a different sequence A, X, Y is seen, that column will likely be activated on input A. That column's spatial pooler will then, over time, forget the synapses to B and C and only activate when A, X or Y is seen.

This creates a problem where the spatial pooler has forgotten a pattern because of the temporal pooler. The cells in the column (sequence pooler) still remember the sequence A, B, C, but the spatial pooler has forgotten it.

Is it reasonable for the spatial pooler to forget the old pattern? In this case a new column would be chosen to relearn the pattern A, B, C, and the original column would keep the pattern A, X, Y. Maybe this is OK; more testing needs to be done.

appending arrays of 2d arrays

To append a 2D array to an array of 2D arrays you need to use the command np.append(array, [newGrid], axis=0).
The newGrid (which is a 2D array) needs to be enclosed in "[]", otherwise append will complain about the sizes being incompatible. Also axis=0 must be specified, otherwise append flattens the two input arrays.
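A runnable version of the note above:

```python
import numpy as np

stack = np.zeros((2, 3, 4))   # a stack of two 3x4 grids
new_grid = np.ones((3, 4))    # one more 3x4 grid to append

# Wrapping new_grid in [] gives it a leading axis of length 1 so its
# shape (1, 3, 4) is compatible with stack; axis=0 stops np.append
# from flattening both arrays into 1D.
stack = np.append(stack, [new_grid], axis=0)
# stack.shape is now (3, 3, 4)
```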

A column beginning to temporally pool a pattern can "throw off" a higher layer.

When a column in a layer tries to temporally pool, the layer's output is disrupted because the column stays active longer than it normally would. This isn't a problem in itself, but it means the columns that would normally activate next are eventually skipped and never activated. The output sequence from this layer to a higher layer changes because of this, so columns in the higher layer burst because they were not expecting a change of sequence.

This bursting every time a lower column tries to temporally pool makes forming stable higher-level temporal patterns take longer than it should. A solution needs to be found where the higher layer doesn't burst when a lower column temporally pools.

predictiveCells Calculator profiling

The current calculator that computes the predictive cells is just a numpy implementation. Its updatePredictiveState function uses about half the total calculation time of an HTM step.

See the profile below.

Number of TimeSteps=5
------------------------------------------
NEW TimeStep
PART 1 Update Input
PART 2 Update HTM
         233216 function calls in 0.288 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.288    0.288 HTM_network.py:1002(spatialTemporal)
        1    0.000    0.000    0.000    0.000 HTM_network.py:1105(updateHTMInput)
        1    0.000    0.000    0.288    0.288 HTM_network.py:1173(spatialTemporal)
        3    0.000    0.000    0.000    0.000 HTM_network.py:599(getPotentialOverlaps)
        3    0.000    0.000    0.000    0.000 HTM_network.py:662(updateInput)
        3    0.000    0.000    0.000    0.000 HTM_network.py:673(updateOutput)
        3    0.000    0.000    0.018    0.006 HTM_network.py:716(Overlap)
        3    0.000    0.000    0.019    0.006 HTM_network.py:738(inhibition)
        3    0.000    0.000    0.038    0.013 HTM_network.py:754(spatialLearning)
        3    0.000    0.000    0.147    0.049 HTM_network.py:765(sequencePooler)
        3    0.000    0.000    0.016    0.005 HTM_network.py:777(calcActiveCells)
        3    0.000    0.000    0.122    0.041 HTM_network.py:797(calcPredictCells)
        3    0.000    0.000    0.009    0.003 HTM_network.py:809(sequenceLearning)
        3    0.000    0.000    0.065    0.022 HTM_network.py:823(temporalPooler)
        1    0.000    0.000    0.000    0.000 HTM_network.py:951(updateRegionInput)
        4    0.000    0.000    0.000    0.000 _methods.py:37(_any)
       20    0.000    0.000    0.000    0.000 arraypad.py:101(<genexpr>)
       20    0.000    0.000    0.000    0.000 arraypad.py:1069(<genexpr>)
        2    0.000    0.000    0.000    0.000 arraypad.py:1072(_validate_lengths)
        8    0.000    0.000    0.000    0.000 arraypad.py:111(_append_const)
        2    0.000    0.000    0.000    0.000 arraypad.py:1117(pad)
       20    0.000    0.000    0.000    0.000 arraypad.py:135(<genexpr>)
        8    0.000    0.000    0.000    0.000 arraypad.py:77(_prepend_const)
        4    0.000    0.000    0.000    0.000 arraypad.py:989(_normalize_shape)
       15    0.000    0.000    0.002    0.000 cc.py:1525(__call__)
        2    0.000    0.000    0.000    0.000 fromnumeric.py:2767(round_)
        8    0.000    0.000    0.000    0.000 fromnumeric.py:43(_wrapit)
        8    0.000    0.000    0.000    0.000 fromnumeric.py:823(argsort)
       24    0.014    0.001    0.018    0.001 function_module.py:482(__call__)
        3    0.000    0.000    0.000    0.000 link.py:324(__get__)
        3    0.000    0.000    0.000    0.000 np_activeCells.py:213(getCurrentLearnCellsList)
        3    0.000    0.000    0.000    0.000 np_activeCells.py:221(getActiveCellsList)
        3    0.000    0.000    0.000    0.000 np_activeCells.py:225(getSegUpdates)
       96    0.005    0.000    0.005    0.000 np_activeCells.py:230(findNumSegs)
       32    0.000    0.000    0.001    0.000 np_activeCells.py:245(getSegmentActiveSynapses)
       32    0.000    0.000    0.010    0.000 np_activeCells.py:266(getBestMatchingCell)
       32    0.001    0.000    0.001    0.000 np_activeCells.py:334(newRandomPrevActiveSynapses)
      131    0.001    0.000    0.001    0.000 np_activeCells.py:359(findLeastUsedSeg)
        4    0.000    0.000    0.000    0.000 np_activeCells.py:385(checkColBursting)
        4    0.000    0.000    0.000    0.000 np_activeCells.py:412(findLearnCell)
      108    0.000    0.000    0.000    0.000 np_activeCells.py:421(setActiveCell)
       36    0.000    0.000    0.000    0.000 np_activeCells.py:433(setLearnCell)
      362    0.000    0.000    0.000    0.000 np_activeCells.py:445(checkCellActive)
        4    0.000    0.000    0.000    0.000 np_activeCells.py:458(checkCellLearn)
       96    0.000    0.000    0.000    0.000 np_activeCells.py:468(checkCellPredicting)
     1920    0.005    0.000    0.006    0.000 np_activeCells.py:495(segmentNumSynapsesActive)
      192    0.001    0.000    0.007    0.000 np_activeCells.py:521(getBestMatchingSegment)
        3    0.000    0.000    0.004    0.001 np_activeCells.py:552(updateActiveCellScores)
        3    0.001    0.000    0.016    0.005 np_activeCells.py:582(updateActiveCells)
      496    0.018    0.000    0.018    0.000 np_inhibition.py:270(calcualteInhibition)
        3    0.001    0.000    0.019    0.006 np_inhibition.py:333(calculateWinningCols)
    14090    0.032    0.000    0.034    0.000 np_learning.py:67(updatePermanence)
        3    0.004    0.001    0.038    0.013 np_learning.py:78(updatePermanenceValues)
        3    0.000    0.000    0.000    0.000 np_predictCells.py:117(getActiveSegTimes)
        3    0.000    0.000    0.000    0.000 np_predictCells.py:122(getSegUpdates)
      960    0.001    0.000    0.001    0.000 np_predictCells.py:177(checkCellActive)
    30690    0.084    0.000    0.091    0.000 np_predictCells.py:190(segmentNumSynapsesActive)
        3    0.031    0.010    0.122    0.041 np_predictCells.py:210(updatePredictiveState)
       64    0.001    0.000    0.001    0.000 np_sequenceLearning.py:101(updateCurrentSegSyn)
       32    0.000    0.000    0.002    0.000 np_sequenceLearning.py:137(adaptSegments)
     6174    0.006    0.000    0.006    0.000 np_sequenceLearning.py:168(checkCellTime)
        3    0.002    0.001    0.009    0.003 np_sequenceLearning.py:182(sequenceLearning)
       32    0.001    0.000    0.001    0.000 np_sequenceLearning.py:78(addNewSegSyn)
       19    0.000    0.000    0.000    0.000 np_temporal.py:116(setLearnCell)
     6138    0.007    0.000    0.007    0.000 np_temporal.py:126(checkCellPredict)
     4092    0.002    0.000    0.011    0.000 np_temporal.py:139(checkCellActivePredict)
    13887    0.014    0.000    0.037    0.000 np_temporal.py:149(checkColBursting)
        2    0.000    0.000    0.001    0.000 np_temporal.py:280(getPrev2NewLearnCells)
        3    0.006    0.002    0.043    0.014 np_temporal.py:365(updateProximalTempPool)
        3    0.005    0.002    0.022    0.007 np_temporal.py:428(updateDistalTempPool)
       22    0.000    0.000    0.000    0.000 np_temporal.py:84(checkCellLearn)
    33964    0.028    0.000    0.028    0.000 np_temporal.py:94(checkCellActive)
        2    0.000    0.000    0.000    0.000 numeric.py:141(ones)
       26    0.000    0.000    0.001    0.000 numeric.py:406(asarray)
        6    0.000    0.000    0.000    0.000 numeric.py:476(asanyarray)
        9    0.000    0.000    0.000    0.000 numeric.py:79(zeros_like)
       15    0.000    0.000    0.002    0.000 op.py:742(rval)
      320    0.000    0.000    0.001    0.000 random.py:293(sample)
       12    0.000    0.000    0.001    0.000 safe_asarray.py:12(_asarray)
        1    0.000    0.000    0.000    0.000 sdrFunctions.py:29(joinInputArrays)
        6    0.000    0.000    0.000    0.000 shape_base.py:113(atleast_3d)
        2    0.000    0.000    0.000    0.000 shape_base.py:319(dstack)
        3    0.000    0.000    0.000    0.000 theano_overlap.py:304(checkNewInputParams)
        2    0.000    0.000    0.000    0.000 theano_overlap.py:314(addPaddingToInput)
        3    0.000    0.000    0.002    0.001 theano_overlap.py:458(addVectTieBreaker)
        3    0.000    0.000    0.005    0.002 theano_overlap.py:463(maskTieBreaker)
        3    0.000    0.000    0.001    0.000 theano_overlap.py:476(getColInputs)
        3    0.000    0.000    0.000    0.000 theano_overlap.py:522(getPotentialOverlaps)
        3    0.000    0.000    0.016    0.005 theano_overlap.py:528(calculateOverlap)
        3    0.000    0.000    0.002    0.001 theano_overlap.py:564(removeSmallOverlaps)
       36    0.000    0.000    0.000    0.000 type.py:385(<lambda>)
       36    0.000    0.000    0.001    0.000 type.py:67(filter)
        3    0.000    0.000    0.002    0.001 vm.py:204(__call__)
       15    0.002    0.000    0.002    0.000 {cutils_ext.cutils_ext.run_cthunk}
       56    0.000    0.000    0.000    0.000 {getattr}
       24    0.000    0.000    0.000    0.000 {hasattr}
       38    0.000    0.000    0.000    0.000 {isinstance}
    49258    0.002    0.000    0.002    0.000 {len}
        8    0.000    0.000    0.000    0.000 {math.ceil}
      116    0.000    0.000    0.000    0.000 {math.floor}
    13864    0.002    0.000    0.002    0.000 {max}
        4    0.000    0.000    0.000    0.000 {method 'any' of 'numpy.ndarray' objects}
      215    0.000    0.000    0.000    0.000 {method 'append' of 'list' objects}
        8    0.000    0.000    0.000    0.000 {method 'argsort' of 'numpy.ndarray' objects}
        2    0.000    0.000    0.000    0.000 {method 'astype' of 'numpy.ndarray' objects}
        2    0.000    0.000    0.000    0.000 {method 'copy' of 'numpy.ndarray' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        6    0.000    0.000    0.000    0.000 {method 'flatten' of 'numpy.ndarray' objects}
      320    0.000    0.000    0.000    0.000 {method 'random' of '_random.Random' objects}
        2    0.000    0.000    0.000    0.000 {method 'ravel' of 'numpy.ndarray' objects}
        4    0.000    0.000    0.000    0.000 {method 'reduce' of 'numpy.ufunc' objects}
        9    0.000    0.000    0.000    0.000 {method 'reshape' of 'numpy.ndarray' objects}
        2    0.000    0.000    0.000    0.000 {method 'round' of 'numpy.ndarray' objects}
        2    0.000    0.000    0.000    0.000 {method 'setdefault' of 'dict' objects}
       10    0.000    0.000    0.000    0.000 {method 'tolist' of 'numpy.ndarray' objects}
      546    0.000    0.000    0.000    0.000 {min}
       39    0.001    0.000    0.001    0.000 {numpy.core.multiarray.array}
       10    0.000    0.000    0.000    0.000 {numpy.core.multiarray.concatenate}
       11    0.000    0.000    0.000    0.000 {numpy.core.multiarray.copyto}
        9    0.000    0.000    0.000    0.000 {numpy.core.multiarray.empty_like}
        2    0.000    0.000    0.000    0.000 {numpy.core.multiarray.empty}
        2    0.000    0.000    0.000    0.000 {numpy.core.multiarray.unravel_index}
       52    0.000    0.000    0.000    0.000 {numpy.core.multiarray.zeros}
    54022    0.007    0.000    0.007    0.000 {range}
       96    0.000    0.000    0.000    0.000 {time.time}
       29    0.000    0.000    0.000    0.000 {zip}

np_temporal profiling speed

The numpy temporal calculator is slow.
Here is the profiling for the top HTM layer performing temporal pooling, with the following configuration parameters for the HTM network:

testParameters = {
                  'HTM': {
                        'numLevels': 1,
                        'columnArrayWidth': 11,
                        'columnArrayHeight': 31,
                        'cellsPerColumn': 3,

                        'HTMRegions': [{
                            'numLayers': 3,
                            'enableHigherLevFb': 0,
                            'enableCommandFeedback': 0,

                            'HTMLayers': [{
                                'desiredLocalActivity': 1,
                                'minOverlap': 3,
                                'wrapInput':0,
                                'inhibitionWidth': 4,
                                'inhibitionHeight': 2,
                                'centerPotSynapses': 1,
                                'potentialWidth': 5,
                                'potentialHeight': 5,
                                'spatialPermanenceInc': 0.1,
                                'spatialPermanenceDec': 0.02,
                                'activeColPermanenceDec': 0.02,
                                'tempDelayLength': 3,
                                'permanenceInc': 0.1,
                                'permanenceDec': 0.02,
                                'tempSpatialPermanenceInc': 0,
                                'tempSeqPermanenceInc': 0,
                                'connectPermanence': 0.3,
                                'minThreshold': 5,
                                'minScoreThreshold': 5,
                                'newSynapseCount': 10,
                                'maxNumSegments': 10,
                                'activationThreshold': 6,
                                'colSynPermanence': 0.1,
                                'cellSynPermanence': 0.4
                                },
                                {
                                'desiredLocalActivity': 1,
                                'minOverlap': 2,
                                'wrapInput':0,
                                'inhibitionWidth': 8,
                                'inhibitionHeight': 4,
                                'centerPotSynapses': 1,
                                'potentialWidth': 7,
                                'potentialHeight': 7,
                                'spatialPermanenceInc': 0.2,
                                'spatialPermanenceDec': 0.02,
                                'activeColPermanenceDec': 0.02,
                                'tempDelayLength': 3,
                                'permanenceInc': 0.1,
                                'permanenceDec': 0.02,
                                'tempSpatialPermanenceInc': 0.2,
                                'tempSeqPermanenceInc': 0.1,
                                'connectPermanence': 0.3,
                                'minThreshold': 5,
                                'minScoreThreshold': 3,
                                'newSynapseCount': 10,
                                'maxNumSegments': 10,
                                'activationThreshold': 6,
                                'colSynPermanence': 0.1,
                                'cellSynPermanence': 0.4
                                },
                                {
                                'desiredLocalActivity': 1,
                                'minOverlap': 2,
                                'wrapInput':1,
                                'inhibitionWidth': 30,
                                'inhibitionHeight': 2,
                                'centerPotSynapses': 1,
                                'connectPermanence': 0.3,
                                'potentialWidth': 34,
                                'potentialHeight': 31,
                                'spatialPermanenceInc': 0.1,
                                'spatialPermanenceDec': 0.01,
                                'activeColPermanenceDec': 0.0,
                                'tempDelayLength': 10,
                                'permanenceInc': 0.15,
                                'permanenceDec': 0.05,
                                'tempSpatialPermanenceInc': 0.04,
                                'tempSeqPermanenceInc': 0.1,
                                'minThreshold': 5,
                                'minScoreThreshold': 3,
                                'newSynapseCount': 10,
                                'maxNumSegments': 10,
                                'activationThreshold': 6,
                                'colSynPermanence': 0.1,
                                'cellSynPermanence': 0.4
                                }]
                            }]
                        }
                    }

The profiling of one step of this layer is shown below.

 142438 function calls in 0.169 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.169    0.169 HTM_network.py:1002(spatialTemporal)
        1    0.000    0.000    0.000    0.000 HTM_network.py:1105(updateHTMInput)
        1    0.000    0.000    0.169    0.169 HTM_network.py:1173(spatialTemporal)
        3    0.000    0.000    0.000    0.000 HTM_network.py:599(getPotentialOverlaps)
        3    0.000    0.000    0.000    0.000 HTM_network.py:662(updateInput)
        3    0.000    0.000    0.000    0.000 HTM_network.py:673(updateOutput)
        3    0.000    0.000    0.015    0.005 HTM_network.py:716(Overlap)
        3    0.000    0.000    0.018    0.006 HTM_network.py:738(inhibition)
        3    0.000    0.000    0.010    0.003 HTM_network.py:754(spatialLearning)
        3    0.000    0.000    0.031    0.010 HTM_network.py:765(sequencePooler)
        3    0.000    0.000    0.011    0.004 HTM_network.py:777(calcActiveCells)
        3    0.000    0.000    0.012    0.004 HTM_network.py:797(calcPredictCells)
        3    0.000    0.000    0.008    0.003 HTM_network.py:809(sequenceLearning)
        3    0.000    0.000    0.095    0.032 HTM_network.py:823(temporalPooler)
        1    0.000    0.000    0.000    0.000 HTM_network.py:951(updateRegionInput)
       20    0.000    0.000    0.000    0.000 arraypad.py:101(<genexpr>)
       20    0.000    0.000    0.000    0.000 arraypad.py:1069(<genexpr>)
        2    0.000    0.000    0.000    0.000 arraypad.py:1072(_validate_lengths)
        8    0.000    0.000    0.000    0.000 arraypad.py:111(_append_const)
        2    0.000    0.000    0.000    0.000 arraypad.py:1117(pad)
       20    0.000    0.000    0.000    0.000 arraypad.py:135(<genexpr>)
        8    0.000    0.000    0.000    0.000 arraypad.py:77(_prepend_const)
        4    0.000    0.000    0.000    0.000 arraypad.py:989(_normalize_shape)
        3    0.000    0.000    0.000    0.000 basic.py:4352(perform)
       15    0.000    0.000    0.002    0.000 cc.py:1525(__call__)
        2    0.000    0.000    0.000    0.000 fromnumeric.py:2767(round_)
        8    0.000    0.000    0.000    0.000 fromnumeric.py:43(_wrapit)
        8    0.000    0.000    0.000    0.000 fromnumeric.py:823(argsort)
       36    0.024    0.001    0.036    0.001 function_module.py:482(__call__)
        3    0.000    0.000    0.000    0.000 function_module.py:691(free)
        3    0.000    0.000    0.000    0.000 link.py:324(__get__)
        3    0.000    0.000    0.000    0.000 np_activeCells.py:213(getCurrentLearnCellsList)
        3    0.000    0.000    0.000    0.000 np_activeCells.py:221(getActiveCellsList)
        3    0.000    0.000    0.000    0.000 np_activeCells.py:225(getSegUpdates)
       57    0.003    0.000    0.003    0.000 np_activeCells.py:230(findNumSegs)
       19    0.000    0.000    0.000    0.000 np_activeCells.py:245(getSegmentActiveSynapses)
       19    0.000    0.000    0.005    0.000 np_activeCells.py:266(getBestMatchingCell)
       19    0.000    0.000    0.001    0.000 np_activeCells.py:334(newRandomPrevActiveSynapses)
       85    0.000    0.000    0.000    0.000 np_activeCells.py:359(findLeastUsedSeg)
       90    0.000    0.000    0.000    0.000 np_activeCells.py:377(checkColPrevActive)
       10    0.000    0.000    0.000    0.000 np_activeCells.py:385(checkColBursting)
        6    0.000    0.000    0.000    0.000 np_activeCells.py:401(findActiveCell)
        4    0.000    0.000    0.000    0.000 np_activeCells.py:412(findLearnCell)
       84    0.000    0.000    0.000    0.000 np_activeCells.py:421(setActiveCell)
       38    0.000    0.000    0.000    0.000 np_activeCells.py:433(setLearnCell)
      310    0.000    0.000    0.000    0.000 np_activeCells.py:445(checkCellActive)
        6    0.000    0.000    0.000    0.000 np_activeCells.py:458(checkCellLearn)
       84    0.000    0.000    0.000    0.000 np_activeCells.py:468(checkCellPredicting)
        9    0.000    0.000    0.000    0.000 np_activeCells.py:478(segmentHighestScore)
     1410    0.004    0.000    0.004    0.000 np_activeCells.py:495(segmentNumSynapsesActive)
      141    0.001    0.000    0.005    0.000 np_activeCells.py:521(getBestMatchingSegment)
        3    0.000    0.000    0.004    0.001 np_activeCells.py:552(updateActiveCellScores)
        3    0.000    0.000    0.011    0.004 np_activeCells.py:582(updateActiveCells)
      431    0.017    0.000    0.017    0.000 np_inhibition.py:270(calcualteInhibition)
        3    0.001    0.000    0.018    0.006 np_inhibition.py:333(calculateWinningCols)
       56    0.001    0.000    0.001    0.000 np_sequenceLearning.py:101(updateCurrentSegSyn)
       28    0.000    0.000    0.001    0.000 np_sequenceLearning.py:137(adaptSegments)
     6194    0.005    0.000    0.005    0.000 np_sequenceLearning.py:168(checkCellTime)
        3    0.001    0.000    0.008    0.003 np_sequenceLearning.py:182(sequenceLearning)
       28    0.000    0.000    0.000    0.000 np_sequenceLearning.py:78(addNewSegSyn)
       19    0.000    0.000    0.000    0.000 np_temporal.py:116(setLearnCell)
     6138    0.006    0.000    0.006    0.000 np_temporal.py:126(checkCellPredict)
     4092    0.002    0.000    0.011    0.000 np_temporal.py:139(checkCellActivePredict)
    20901    0.021    0.000    0.063    0.000 np_temporal.py:149(checkColBursting)
        1    0.000    0.000    0.000    0.000 np_temporal.py:163(updateAvgPesist)
        2    0.000    0.000    0.001    0.000 np_temporal.py:280(getPrev2NewLearnCells)
        3    0.011    0.004    0.074    0.025 np_temporal.py:365(updateProximalTempPool)
        3    0.005    0.002    0.021    0.007 np_temporal.py:428(updateDistalTempPool)
       24    0.000    0.000    0.000    0.000 np_temporal.py:84(checkCellLearn)
    50046    0.046    0.000    0.046    0.000 np_temporal.py:94(checkCellActive)
        2    0.000    0.000    0.000    0.000 numeric.py:141(ones)
       77    0.000    0.000    0.002    0.000 numeric.py:406(asarray)
        6    0.000    0.000    0.000    0.000 numeric.py:476(asanyarray)
        9    0.000    0.000    0.000    0.000 numeric.py:79(zeros_like)
       15    0.000    0.000    0.002    0.000 op.py:742(rval)
        6    0.000    0.000    0.005    0.001 op.py:767(rval)
      190    0.000    0.000    0.000    0.000 random.py:293(sample)
       63    0.000    0.000    0.002    0.000 safe_asarray.py:12(_asarray)
        3    0.000    0.000    0.002    0.001 scan_op.py:638(<lambda>)
        3    0.000    0.000    0.002    0.001 scan_op.py:670(rval)
        1    0.000    0.000    0.000    0.000 sdrFunctions.py:29(joinInputArrays)
        6    0.000    0.000    0.000    0.000 shape_base.py:113(atleast_3d)
        2    0.000    0.000    0.000    0.000 shape_base.py:319(dstack)
        3    0.005    0.002    0.005    0.002 subtensor.py:2084(perform)
        3    0.000    0.000    0.010    0.003 theano_learning.py:133(updatePermanenceValues)
        3    0.000    0.000    0.000    0.000 theano_overlap.py:304(checkNewInputParams)
        2    0.000    0.000    0.000    0.000 theano_overlap.py:314(addPaddingToInput)
        3    0.000    0.000    0.001    0.000 theano_overlap.py:458(addVectTieBreaker)
        3    0.000    0.000    0.005    0.002 theano_overlap.py:463(maskTieBreaker)
        3    0.000    0.000    0.001    0.000 theano_overlap.py:476(getColInputs)
        3    0.000    0.000    0.000    0.000 theano_overlap.py:522(getPotentialOverlaps)
        3    0.000    0.000    0.014    0.005 theano_overlap.py:528(calculateOverlap)
        3    0.000    0.000    0.001    0.000 theano_overlap.py:564(removeSmallOverlaps)
        3    0.000    0.000    0.000    0.000 theano_predictCells.py:250(getActiveSegTimes)
        3    0.000    0.000    0.000    0.000 theano_predictCells.py:259(getSegUpdates)
       14    0.000    0.000    0.000    0.000 theano_predictCells.py:264(getSegmentActiveSynapses)
       14    0.000    0.000    0.000    0.000 theano_predictCells.py:287(checkCellPredicting)
       14    0.000    0.000    0.000    0.000 theano_predictCells.py:297(setPredictCell)
      140    0.000    0.000    0.000    0.000 theano_predictCells.py:314(checkCellActive)
        3    0.000    0.000    0.012    0.004 theano_predictCells.py:347(updatePredictiveState)
       93    0.000    0.000    0.000    0.000 type.py:385(<lambda>)
        3    0.000    0.000    0.000    0.000 type.py:579(value_zeros)
       93    0.000    0.000    0.003    0.000 type.py:67(filter)
        3    0.000    0.000    0.002    0.001 vm.py:204(__call__)
       15    0.002    0.000    0.002    0.000 {cutils_ext.cutils_ext.run_cthunk}
       83    0.000    0.000    0.000    0.000 {getattr}
       36    0.000    0.000    0.000    0.000 {hasattr}
      119    0.000    0.000    0.000    0.000 {isinstance}
    24339    0.001    0.000    0.001    0.000 {len}
        8    0.000    0.000    0.000    0.000 {math.ceil}
       92    0.000    0.000    0.000    0.000 {math.floor}
      280    0.000    0.000    0.000    0.000 {max}
      202    0.000    0.000    0.000    0.000 {method 'append' of 'list' objects}
        8    0.000    0.000    0.000    0.000 {method 'argsort' of 'numpy.ndarray' objects}
        2    0.000    0.000    0.000    0.000 {method 'astype' of 'numpy.ndarray' objects}
        2    0.000    0.000    0.000    0.000 {method 'copy' of 'numpy.ndarray' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
        6    0.000    0.000    0.000    0.000 {method 'flatten' of 'numpy.ndarray' objects}
        9    0.000    0.000    0.000    0.000 {method 'item' of 'numpy.ndarray' objects}
        3    0.000    0.000    0.000    0.000 {method 'keys' of 'dict' objects}
      190    0.000    0.000    0.000    0.000 {method 'random' of '_random.Random' objects}
        2    0.000    0.000    0.000    0.000 {method 'ravel' of 'numpy.ndarray' objects}
        9    0.000    0.000    0.000    0.000 {method 'reshape' of 'numpy.ndarray' objects}
        2    0.000    0.000    0.000    0.000 {method 'round' of 'numpy.ndarray' objects}
        2    0.000    0.000    0.000    0.000 {method 'setdefault' of 'dict' objects}
       10    0.000    0.000    0.000    0.000 {method 'tolist' of 'numpy.ndarray' objects}
      136    0.000    0.000    0.000    0.000 {min}
        3    0.000    0.000    0.000    0.000 {numpy.core.multiarray.arange}
       90    0.002    0.000    0.002    0.000 {numpy.core.multiarray.array}
       10    0.000    0.000    0.000    0.000 {numpy.core.multiarray.concatenate}
       11    0.000    0.000    0.000    0.000 {numpy.core.multiarray.copyto}
        9    0.000    0.000    0.000    0.000 {numpy.core.multiarray.empty_like}
        2    0.000    0.000    0.000    0.000 {numpy.core.multiarray.empty}
        2    0.000    0.000    0.000    0.000 {numpy.core.multiarray.unravel_index}
       56    0.000    0.000    0.000    0.000 {numpy.core.multiarray.zeros}
    25139    0.003    0.000    0.003    0.000 {range}
        3    0.002    0.001    0.002    0.001 {theano.scan_module.scan_perform.perform}
      144    0.000    0.000    0.000    0.000 {time.time}
       41    0.000    0.000    0.000    0.000 {zip}

Note the temporal pooling time of ~0.095 seconds; this is over half the total calculation time.
(0.095 0.032 HTM_network.py:823(temporalPooler))
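The table above is cProfile output sorted by cumulative time. As a reference, a minimal sketch of how such a profile can be generated (`run_step` is a hypothetical stand-in for one HTM network timestep, not a function from this project):

```python
import cProfile
import io
import pstats

def run_step():
    # stand-in workload for a single HTM timestep
    total = 0
    for i in range(1000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
run_step()
profiler.disable()

# sort by cumulative time, as in the table above, and show the top entries
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(10)
print(stream.getvalue())
```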

Temporal pooler isn't robust.

Temporal pooler brittleness.

Currently (commit 7a07099) the temporal pooler works by keeping the first set of active columns on longer throughout a sequence, pooling over the sequence and increasing temporal stability. Once a sequence has been temporally pooled, this method does not include any columns active other than the first ones of that sequence.

An example:
Sequence A,B,C,D is learnt and temporally pooled over. The columns making up pattern A stay active throughout the sequence. This works fine except when noise is introduced or when the temporally pooled pattern is compared to other similar patterns.

Similar temporally pooled patterns produce different output SDR's.

The output of the temporally pooled pattern for A,B,C,D will be 100% different from the output of the temporally pooled pattern B,C,D. This is bad since the two sequences are very similar and should create similar output SDRs.

Introducing noise disrupts the temporally pooled patterns significantly.

If a single input of the above sequence A,B,C,D contains noise then the temporally pooled output pattern can vary wildly. E.g. the unlearnt sequence A,E,C,D produces a completely different output SDR after the input E, since E was unexpected and causes new columns to burst. What should happen is that the unexpected input E causes bursting, but the next input C does not, since the temporally pooled pattern A,B,C,D should have been expecting it.
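A toy illustration of the brittleness (not the project's actual implementation): if pooling simply holds the first pattern's columns active for the whole sequence, the pooled SDRs for A,B,C,D and B,C,D share no columns at all, even though the sequences agree on three of four patterns:

```python
def pool_first_pattern(sequence):
    """Toy model of the current pooling scheme: once a learnt sequence
    starts, the first pattern's columns are held active throughout,
    so the pooled SDR is just the first pattern's column set."""
    return set(sequence[0])

# each pattern is a set of active column indices (hypothetical values)
A, B, C, D = {0, 1}, {2, 3}, {4, 5}, {6, 7}

pooled_full = pool_first_pattern([A, B, C, D])  # columns of A
pooled_sub = pool_first_pattern([B, C, D])      # columns of B

# the two pooled SDRs share no active columns
overlap = pooled_full & pooled_sub
print(len(overlap))  # 0
```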

averageReceptiveFeildSize is incorrectly calculated.

The averageReceptiveFeildSize is being incorrectly calculated. It is used to adjust the inhibition field size for each column, and it uses the length of every column's connected synapse list. Unfortunately columns start with all their potential synapses connected. These synapses are weakened over time, but only when the column becomes activated. The problem is that columns which are never activated still have many connected synapses, and this pushes up the averageReceptiveFeildSize.

The averageReceptiveFeildSize is important because it changes the inhibition radius. This radius determines how far away columns inhibit each other. It's supposed to be adjusted so that the sparsity of SDRs stays constant no matter what inputs the layer is given. This needs to be fixed, perhaps by including only columns that have been activated when calculating the average receptive field size.
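A sketch of the suggested fix, averaging only over columns that have been active at least once so that never-activated, still fully connected columns don't inflate the value (the function name and data here are hypothetical, not from the codebase):

```python
import numpy as np

def avg_receptive_field_size(columns, activity_counts):
    """Average the connected-synapse counts over only those columns
    that have been active at least once."""
    sizes = [len(col) for col, n in zip(columns, activity_counts) if n > 0]
    if not sizes:
        return 0.0
    return float(np.mean(sizes))

# hypothetical data: each column is a list of connected synapses;
# activity_counts records how often each column has won.
# The two never-active columns are still fully connected (20 synapses).
columns = [[1] * 20, [1] * 4, [1] * 6, [1] * 20]
activity_counts = [0, 5, 3, 0]

# averaging over active columns only gives 5.0; a naive average over
# all columns would give 12.5, inflating the inhibition radius
print(avg_receptive_field_size(columns, activity_counts))  # 5.0
```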

Active times for synapses may be incorrectly updated and stored.

When clicking on a cell in the GUI and selecting one of the cell's segments to display, the active times on the synapses sometimes appear to be incorrect. It's not clear whether this is just a display update issue or whether it affects the learning and function of the cells. Needs more investigation.

HTM Speed

Speed

The HTM code needs to be optimized and refactored so it can run faster. Previous work has not focused on making the algorithms faster, and as such the current implementation is very slow even for the small HTM layers being tested. Before the code is refactored to increase its speed, profiling of the current code needs to be performed. Unit tests also need to be written to ensure the new code runs the same as the old code (this should have been done a long time ago, but I'm slack and under-resourced for this project :( ).
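A sketch of the kind of regression test meant here: run the old and new implementations on the same seeded input and assert identical results. `old_overlap` and `new_overlap` are hypothetical stand-ins, not the project's real functions:

```python
import numpy as np

def old_overlap(grid):
    # stand-in for the original (slow, loop-based) implementation
    total = 0
    for row in grid:
        for v in row:
            total += v
    return total

def new_overlap(grid):
    # stand-in for the refactored (vectorised) implementation
    return int(grid.sum())

# regression check: both implementations must agree on identical input;
# a fixed seed makes the comparison repeatable
np.random.seed(42)
grid = (np.random.rand(8, 8) > 0.5).astype(int)
assert old_overlap(grid) == new_overlap(grid)
print("old and new implementations agree")
```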

The algorithm could probably be optimized firstly by refactoring the structure of synapses and other objects in the code. After this, a multithreaded parallelization of the algorithm could give vast improvements in the size and speed of the HTM.
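A minimal sketch of how the per-column overlap work could be parallelized with a process pool, assuming the overlap calculation can be made a pure function of its inputs (the data layout and column count here are hypothetical):

```python
from multiprocessing import Pool

import numpy as np

def column_overlap(args):
    """Compute one column's overlap with the input; a pure function so
    it can run in a worker process. (Sketch only; the real overlap
    calculation lives in theano_overlap.py.)"""
    synapses, inp = args
    return int(np.sum(inp[synapses]))

if __name__ == "__main__":
    inp = np.random.randint(0, 2, size=100)
    # each column connects to 10 random input bits (hypothetical layout)
    cols = [np.random.choice(100, 10, replace=False) for _ in range(64)]
    with Pool(2) as pool:
        overlaps = pool.map(column_overlap, [(c, inp) for c in cols])
    print(len(overlaps))  # 64
```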

High level commands

The HTM hierarchy has been implemented in the balancer project. Feedback commands have not been tested on any results yet. An issue with the current design (commit 51b2037) is that there is no way to direct the commands coming from the highest level.

A possible solution is to add some sort of SDR recognizer. This could perform a function where it recognizes SDRs that are "desirable" and then attempts to issue only commands that have been known to produce the desired SDR. This function could be something the thalamus does in the real neocortex by gating the output of SDRs from different levels. It could be thought of as the thalamus remembering a desirable past experience and attempting to change the output of the neocortex to reproduce the same experience.

command Input

Currently the input to the command layer is from the layer below. This won't work, as only the next input will reinforce the current command. The command layer needs to reinforce its commands with a different input than the layer below.

Variable size input

If the input doesn't have the same dimensions as the HTM column array then the spatial pooler doesn't match columns to the input correctly.
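One possible fix (a sketch, not the current code): scale each column's position in the column grid to a centre position in the input, so a column array of any size maps evenly onto an input of any size:

```python
def column_input_centre(col_pos, col_shape, input_shape):
    """Map a column's (row, col) position in the column grid to the
    corresponding centre position in an input of different dimensions
    by scaling each axis independently."""
    row, col = col_pos
    c_rows, c_cols = col_shape
    in_rows, in_cols = input_shape
    # max(..., 1) guards against a grid with a single row or column
    in_row = int(round(row * (in_rows - 1) / max(c_rows - 1, 1)))
    in_col = int(round(col * (in_cols - 1) / max(c_cols - 1, 1)))
    return in_row, in_col

# a 4x4 column grid over a 10x10 input: corner columns map to corners
print(column_input_centre((0, 0), (4, 4), (10, 10)))  # (0, 0)
print(column_input_centre((3, 3), (4, 4), (10, 10)))  # (9, 9)
```

Each column's potential synapses would then be drawn from a neighbourhood around this centre rather than from the column's raw grid coordinates.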

Updating the spatial pooler when no cells win

The potential synapses that connect a column to the input are only updated when a column wins. This presents a problem when no column wins, as no potential synapse permanences will be updated and no column will learn this pattern.

Sometimes no columns win because all the columns have already learnt other inputs and no column's potential connected synapses connect to this new input. How should a column be chosen to update its synapses so this new input can be learnt?
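One possible policy (a sketch; the scoring and names are hypothetical): fall back to the column whose potential synapses (connected and unconnected) best overlap the new input, with a small tie-break towards less used columns so new patterns get learnt somewhere:

```python
import numpy as np

def pick_fallback_column(potential_overlaps, activity_counts):
    """When no column reaches minOverlap, choose the column whose
    potential synapses overlap the input most, breaking ties towards
    the least used column."""
    potential_overlaps = np.asarray(potential_overlaps, dtype=float)
    activity_counts = np.asarray(activity_counts, dtype=float)
    # a small penalty per past activation acts as the tie breaker
    score = potential_overlaps - 0.001 * activity_counts
    return int(np.argmax(score))

# hypothetical numbers: columns 1 and 2 tie on potential overlap,
# but column 2 has been used less, so it is chosen to learn the input
print(pick_fallback_column([2, 5, 5, 1], [10, 8, 3, 0]))  # 2
```

The chosen column would then have its potential synapse permanences increased towards the new input, as in the normal learning step.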
