

Python RandomDistributedScalarEncoder.getBucketIndices Method Code Examples

This article collects typical usage examples of the Python method nupic.encoders.random_distributed_scalar.RandomDistributedScalarEncoder.getBucketIndices. If you have been wondering what RandomDistributedScalarEncoder.getBucketIndices does, how to call it, or where to find working examples, the curated code samples below should help. You can also explore further usage examples of the enclosing class, nupic.encoders.random_distributed_scalar.RandomDistributedScalarEncoder.


Below are 5 code examples of RandomDistributedScalarEncoder.getBucketIndices, sorted by popularity by default. You can upvote the examples you like or find useful; your ratings help the system recommend better Python code examples.
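
Before the examples, here is a minimal usage sketch (illustrative only; it is not taken from the examples below, and the parameter values are arbitrary). getBucketIndices returns a single-element list holding the bucket index for a value, while encode returns the value's sparse binary representation:

from nupic.encoders.random_distributed_scalar import RandomDistributedScalarEncoder

# resolution controls how far apart two values must be to land in different
# buckets; w is the number of on bits and n is the total number of bits.
enc = RandomDistributedScalarEncoder(resolution=1.0, w=21, n=400)

bucketIdx = enc.getBucketIndices(23.0)[0]  # getBucketIndices returns a one-element list
encoding = enc.encode(23.0)                # numpy array with exactly w on bits
print("bucket index: %s" % bucketIdx)
print("on bits: %s" % encoding.sum())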

Example 1: testMapBucketIndexToNonZeroBits

# Required import: from nupic.encoders.random_distributed_scalar import RandomDistributedScalarEncoder [as alias]
# Or: from nupic.encoders.random_distributed_scalar.RandomDistributedScalarEncoder import getBucketIndices [as alias]
  def testMapBucketIndexToNonZeroBits(self):
    """
    Test that mapBucketIndexToNonZeroBits works and that max buckets and
    clipping are handled properly.
    """
    enc = RandomDistributedScalarEncoder(resolution=1.0, w=11, n=150)
    # Set a low number of max buckets
    enc._initializeBucketMap(10, None)
    enc.encode(0.0)
    enc.encode(-7.0)
    enc.encode(7.0)

    self.assertEqual(len(enc.bucketMap), enc._maxBuckets,
      "_maxBuckets exceeded")
    self.assertTrue(
      (enc.mapBucketIndexToNonZeroBits(-1) == enc.bucketMap[0]).all(),
      "mapBucketIndexToNonZeroBits did not handle negative index")
    self.assertTrue(
      (enc.mapBucketIndexToNonZeroBits(1000) == enc.bucketMap[9]).all(),
      "mapBucketIndexToNonZeroBits did not handle index larger than _maxBuckets")

    e23 = enc.encode(23.0)
    e6  = enc.encode(6)
    self.assertEqual((e23 == e6).sum(), enc.getWidth(),
      "Values not clipped correctly during encoding")

    e_8 = enc.encode(-8)
    e_7 = enc.encode(-7)
    self.assertEqual((e_8 == e_7).sum(), enc.getWidth(),
      "Values not clipped correctly during encoding")

    self.assertEqual(enc.getBucketIndices(-8)[0], 0,
                "getBucketIndices returned negative bucket index")
    self.assertEqual(enc.getBucketIndices(23)[0], enc._maxBuckets-1,
                "getBucketIndices returned bucket index that is too large")
Developer: AI-Cdrone | Project: nupic | Lines: 37 | Source: random_distributed_scalar_test.py
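
The clipping behavior verified above is a design property worth noting: once the bucket map is full, values below the lowest bucket or above the highest bucket are encoded into the edge buckets (index 0 and _maxBuckets-1), so out-of-range inputs still produce valid, fixed-width encodings.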

Example 2: testVerbosity

# Required import: from nupic.encoders.random_distributed_scalar import RandomDistributedScalarEncoder [as alias]
# Or: from nupic.encoders.random_distributed_scalar.RandomDistributedScalarEncoder import getBucketIndices [as alias]
  def testVerbosity(self):
    """
    Test that nothing is printed out when verbosity=0.
    """
    _stdout = sys.stdout
    sys.stdout = _stringio = StringIO()
    encoder = RandomDistributedScalarEncoder(name="mv", resolution=1.0,
                                             verbosity=0)
    output = numpy.zeros(encoder.getWidth(), dtype=defaultDtype)
    encoder.encodeIntoArray(23.0, output)
    encoder.getBucketIndices(23.0)
    sys.stdout = _stdout
    self.assertEqual(len(_stringio.getvalue()), 0,
                     "zero verbosity doesn't lead to zero output")
Developer: 08s011003 | Project: nupic | Lines: 14 | Source: random_distributed_scalar_test.py
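
The manual sys.stdout swap above is the idiomatic way to capture output in the Python 2 codebase these examples target. On Python 3, the same capture pattern can be written with contextlib.redirect_stdout (a generic sketch, independent of nupic):

import contextlib
import io

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print("captured")  # anything written to stdout inside the block lands in buf
assert buf.getvalue() == "captured\n"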

Example 3: testEncoding

# Required import: from nupic.encoders.random_distributed_scalar import RandomDistributedScalarEncoder [as alias]
# Or: from nupic.encoders.random_distributed_scalar.RandomDistributedScalarEncoder import getBucketIndices [as alias]
  def testEncoding(self):
    """
    Test basic encoding functionality. Create encodings without crashing and
    check they contain the correct number of on and off bits. Check some
    encodings for expected overlap. Test that encodings for old values don't
    change once we generate new buckets.
    """
    # Initialize with non-default parameters and encode with a number close to
    # the offset
    enc = RandomDistributedScalarEncoder(name='enc', resolution=1.0, w=23,
                                         n=500, offset = 0.0)
    e0 = enc.encode(-0.1)

    self.assertEqual(e0.sum(), 23, "Number of on bits is incorrect")
    self.assertEqual(e0.size, 500, "Width of the vector is incorrect")
    self.assertEqual(enc.getBucketIndices(0.0)[0], enc._maxBuckets / 2,
                     "Offset doesn't correspond to middle bucket")
    self.assertEqual(len(enc.bucketMap), 1, "Number of buckets is not 1")

    # Encode with a number that is resolution away from offset. Now we should
    # have two buckets and this encoding should be one bit away from e0
    e1 = enc.encode(1.0)
    self.assertEqual(len(enc.bucketMap), 2, "Number of buckets is not 2")
    self.assertEqual(e1.sum(), 23, "Number of on bits is incorrect")
    self.assertEqual(e1.size, 500, "Width of the vector is incorrect")
    self.assertEqual(computeOverlap(e0, e1), 22,
                     "Overlap is not equal to w-1")

    # Encode with a number that is resolution*w away from offset. Now we should
    # have many buckets and this encoding should have very little overlap with
    # e0
    e25 = enc.encode(25.0)
    self.assertGreater(len(enc.bucketMap), 23, "Too few buckets were created")
    self.assertEqual(e25.sum(), 23, "Number of on bits is incorrect")
    self.assertEqual(e25.size, 500, "Width of the vector is incorrect")
    self.assertLess(computeOverlap(e0, e25), 4,
                     "Overlap is too high")

    # Test encoding consistency. The encodings for previous numbers
    # shouldn't change even though we have added additional buckets
    self.assertEqual((e0 == enc.encode(-0.1)).sum(), 500,
      "Encodings are not consistent - they have changed after new buckets "
      "have been created")
    self.assertEqual((e1 == enc.encode(1.0)).sum(), 500,
      "Encodings are not consistent - they have changed after new buckets "
      "have been created")
Developer: AI-Cdrone | Project: nupic | Lines: 48 | Source: random_distributed_scalar_test.py
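
Example 3 calls a computeOverlap helper that is defined elsewhere in the test module and does not appear in the snippet. A minimal sketch consistent with how it is used here, assuming the encodings are integer-typed binary numpy arrays as nupic encoders produce:

import numpy

def computeOverlap(x, y):
  # Count the positions where both binary encodings have an on bit.
  return (numpy.asarray(x) & numpy.asarray(y)).sum()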

Example 4: runHotgym

# Required import: from nupic.encoders.random_distributed_scalar import RandomDistributedScalarEncoder [as alias]
# Or: from nupic.encoders.random_distributed_scalar.RandomDistributedScalarEncoder import getBucketIndices [as alias]
def runHotgym(numRecords):
  with open(_PARAMS_PATH, "r") as f:
    modelParams = yaml.safe_load(f)["modelParams"]
    enParams = modelParams["sensorParams"]["encoders"]
    spParams = modelParams["spParams"]
    tmParams = modelParams["tmParams"]

  timeOfDayEncoder = DateEncoder(
    timeOfDay=enParams["timestamp_timeOfDay"]["timeOfDay"])
  weekendEncoder = DateEncoder(
    weekend=enParams["timestamp_weekend"]["weekend"])
  scalarEncoder = RandomDistributedScalarEncoder(
    enParams["consumption"]["resolution"])

  encodingWidth = (timeOfDayEncoder.getWidth()
                   + weekendEncoder.getWidth()
                   + scalarEncoder.getWidth())

  sp = SpatialPooler(
    inputDimensions=(encodingWidth,),
    columnDimensions=(spParams["columnCount"],),
    potentialPct=spParams["potentialPct"],
    potentialRadius=encodingWidth,
    globalInhibition=spParams["globalInhibition"],
    localAreaDensity=spParams["localAreaDensity"],
    numActiveColumnsPerInhArea=spParams["numActiveColumnsPerInhArea"],
    synPermInactiveDec=spParams["synPermInactiveDec"],
    synPermActiveInc=spParams["synPermActiveInc"],
    synPermConnected=spParams["synPermConnected"],
    boostStrength=spParams["boostStrength"],
    seed=spParams["seed"],
    wrapAround=True
  )

  tm = TemporalMemory(
    columnDimensions=(tmParams["columnCount"],),
    cellsPerColumn=tmParams["cellsPerColumn"],
    activationThreshold=tmParams["activationThreshold"],
    initialPermanence=tmParams["initialPerm"],
    connectedPermanence=spParams["synPermConnected"],
    minThreshold=tmParams["minThreshold"],
    maxNewSynapseCount=tmParams["newSynapseCount"],
    permanenceIncrement=tmParams["permanenceInc"],
    permanenceDecrement=tmParams["permanenceDec"],
    predictedSegmentDecrement=0.0,
    maxSegmentsPerCell=tmParams["maxSegmentsPerCell"],
    maxSynapsesPerSegment=tmParams["maxSynapsesPerSegment"],
    seed=tmParams["seed"]
  )

  classifier = SDRClassifierFactory.create()
  results = []
  with open(_INPUT_FILE_PATH, "r") as fin:
    reader = csv.reader(fin)
    headers = reader.next()  # first header row: field names
    reader.next()            # second header row: field types
    reader.next()            # third header row: special flags

    for count, record in enumerate(reader):

      if count >= numRecords: break

      # Convert the date string into a Python datetime object.
      dateString = datetime.datetime.strptime(record[0], "%m/%d/%y %H:%M")
      # Convert data value string into float.
      consumption = float(record[1])

      # To encode, we need to provide zero-filled numpy arrays for the encoders
      # to populate.
      timeOfDayBits = numpy.zeros(timeOfDayEncoder.getWidth())
      weekendBits = numpy.zeros(weekendEncoder.getWidth())
      consumptionBits = numpy.zeros(scalarEncoder.getWidth())

      # Now we call the encoders to create bit representations for each value.
      timeOfDayEncoder.encodeIntoArray(dateString, timeOfDayBits)
      weekendEncoder.encodeIntoArray(dateString, weekendBits)
      scalarEncoder.encodeIntoArray(consumption, consumptionBits)

      # Concatenate all these encodings into one large encoding for Spatial
      # Pooling.
      encoding = numpy.concatenate(
        [timeOfDayBits, weekendBits, consumptionBits]
      )

      # Create an array to represent active columns, all initially zero. This
      # will be populated by the compute method below. It must have the same
      # dimensions as the Spatial Pooler.
      activeColumns = numpy.zeros(spParams["columnCount"])

      # Execute Spatial Pooling algorithm over input space.
      sp.compute(encoding, True, activeColumns)
      activeColumnIndices = numpy.nonzero(activeColumns)[0]

      # Execute Temporal Memory algorithm over active mini-columns.
      tm.compute(activeColumnIndices, learn=True)

      activeCells = tm.getActiveCells()

      # Get the bucket info for this input value for classification.
      bucketIdx = scalarEncoder.getBucketIndices(consumption)[0]
#......... the remainder of this code is omitted .........
Developer: Erichy94 | Project: nupic | Lines: 103 | Source: complete-algo-example.py
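
Example 4 is truncated just after the bucket index is retrieved; the omitted remainder feeds bucketIdx and the raw consumption value into the SDR classifier, which Example 5 below shows in full.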

Example 5: runHotgym

# Required import: from nupic.encoders.random_distributed_scalar import RandomDistributedScalarEncoder [as alias]
# Or: from nupic.encoders.random_distributed_scalar.RandomDistributedScalarEncoder import getBucketIndices [as alias]

#......... the beginning of this code is omitted .........
    # be considered neighbors when mapping columns to inputs.
    wrapAround=False
  )

  tm = TemporalMemory(
    # Must be the same dimensions as the SP
    columnDimensions=(tmParams["columnCount"],),
    # How many cells in each mini-column.
    cellsPerColumn=tmParams["cellsPerColumn"],
    # A segment is active if it has >= activationThreshold connected synapses
    # that are active due to infActiveState
    activationThreshold=tmParams["activationThreshold"],
    initialPermanence=tmParams["initialPerm"],
    # TODO: This comes from the SP params; is this normal?
    connectedPermanence=spParams["synPermConnected"],
    # Minimum number of active synapses for a segment to be considered during
    # search for the best-matching segments.
    minThreshold=tmParams["minThreshold"],
    # The max number of synapses added to a segment during learning
    maxNewSynapseCount=tmParams["newSynapseCount"],
    permanenceIncrement=tmParams["permanenceInc"],
    permanenceDecrement=tmParams["permanenceDec"],
    predictedSegmentDecrement=0.0,
    maxSegmentsPerCell=tmParams["maxSegmentsPerCell"],
    maxSynapsesPerSegment=tmParams["maxSynapsesPerSegment"],
    seed=tmParams["seed"]
  )

  classifier = SDRClassifierFactory.create()
  results = []
  with open(_INPUT_FILE_PATH, "r") as fin:
    reader = csv.reader(fin)
    headers = reader.next()  # first header row: field names
    reader.next()            # second header row: field types
    reader.next()            # third header row: special flags

    for count, record in enumerate(reader):

      if count >= numRecords: break

      # Convert the date string into a Python datetime object.
      dateString = datetime.datetime.strptime(record[0], "%m/%d/%y %H:%M")
      # Convert data value string into float.
      consumption = float(record[1])

      # To encode, we need to provide zero-filled numpy arrays for the encoders
      # to populate.
      timeOfDayBits = numpy.zeros(timeOfDayEncoder.getWidth())
      weekendBits = numpy.zeros(weekendEncoder.getWidth())
      consumptionBits = numpy.zeros(scalarEncoder.getWidth())

      # Now we call the encoders to create bit representations for each value.
      timeOfDayEncoder.encodeIntoArray(dateString, timeOfDayBits)
      weekendEncoder.encodeIntoArray(dateString, weekendBits)
      scalarEncoder.encodeIntoArray(consumption, consumptionBits)

      # Concatenate all these encodings into one large encoding for Spatial
      # Pooling.
      encoding = numpy.concatenate(
        [timeOfDayBits, weekendBits, consumptionBits]
      )

      # Create an array to represent active columns, all initially zero. This
      # will be populated by the compute method below. It must have the same
      # dimensions as the Spatial Pooler.
      activeColumns = numpy.zeros(spParams["columnCount"])

      # Execute Spatial Pooling algorithm over input space.
      sp.compute(encoding, True, activeColumns)
      activeColumnIndices = numpy.nonzero(activeColumns)[0]

      # Execute Temporal Memory algorithm over active mini-columns.
      tm.compute(activeColumnIndices, learn=True)

      activeCells = tm.getActiveCells()

      # Get the bucket info for this input value for classification.
      bucketIdx = scalarEncoder.getBucketIndices(consumption)[0]

      # Run classifier to translate active cells back to scalar value.
      classifierResult = classifier.compute(
        recordNum=count,
        patternNZ=activeCells,
        classification={
          "bucketIdx": bucketIdx,
          "actValue": consumption
        },
        learn=True,
        infer=True
      )

      # Print the best prediction for 1 step out.
      oneStepConfidence, oneStep = sorted(
        zip(classifierResult[1], classifierResult["actualValues"]),
        reverse=True
      )[0]
      print("1-step: {:16} ({:4.4}%)".format(oneStep, oneStepConfidence * 100))
      results.append([oneStep, oneStepConfidence * 100, None, None])

    return results
Developer: mrcslws | Project: nupic | Lines: 104 | Source: complete-algo-example.py


Note: The examples of the nupic.encoders.random_distributed_scalar.RandomDistributedScalarEncoder.getBucketIndices method in this article were collected by 纯净天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The code snippets are drawn from open-source projects contributed by many developers, and copyright remains with the original authors; consult each project's license before distributing or reusing the code. Do not reproduce without permission.