

Python Test.assertEquals Method Code Examples

This article collects typical usage examples of the Python method test_helper.Test.assertEquals. If you are wondering what Test.assertEquals does, how to call it, or what real-world usage looks like, the curated code examples below should help. You can also browse further usage examples for the enclosing class, test_helper.Test.


Below are 15 code examples of the Test.assertEquals method, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Python code examples.
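
For readers new to the helper, the basic call pattern is Test.assertEquals(actual, expected, message): when the two values compare equal it prints '1 test passed.', otherwise it reports '1 test failed' along with the message (see Example 10 below). A minimal, self-contained sketch; the values are placeholders rather than part of any particular lab:

from test_helper import Test

answer = 2 + 2
Test.assertEquals(answer, 4, 'answer should equal 4')                     # prints '1 test passed.'
Test.assertEquals(sorted([3, 1, 2]), [1, 2, 3], 'list should be sorted')  # also works on lists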

Example 1: run_tests

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
def run_tests():
  Test.assertEquals(test_year(1945, df), [u'Mary', u'Linda', u'Barbara', u'Patricia', u'Carol'], 'incorrect top 5 names for 1945')
  Test.assertEquals(test_year(1970, df), [u'Jennifer', u'Lisa', u'Kimberly', u'Michelle', u'Amy'], 'incorrect top 5 names for 1970')
  Test.assertEquals(test_year(1987, df), [u'Jessica', u'Ashley', u'Amanda', u'Jennifer', u'Sarah'], 'incorrect top 5 names for 1987')
  Test.assertTrue(len(test_year(1945, df)) <= 5, 'list not limited to 5 names')
  Test.assertTrue(u'James' not in test_year(1945, df), 'male names not filtered')
  Test.assertTrue(test_year(1945, df) != [u'Linda', u'Linda', u'Linda', u'Linda', u'Mary'], 'year not filtered')
  Test.assertEqualsHashed(test_year(1880, df), "2038e2c0bb0b741797a47837c0f94dbf24123447", "incorrect top 5 names for 1880")
Developer: smoltis, Project: spark, Lines of code: 10, Source file: Lab.py
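
The test_year helper exercised in Example 1 is not included in the excerpt. Purely as an illustration of what the assertions expect, here is one possible sketch; it assumes df is a Spark DataFrame with hypothetical columns 'firstName', 'sex', 'year', and 'total', which may not match the schema the original lab uses:

from pyspark.sql.functions import desc

def test_year(year, df):
    # Top five female first names for the given year, most popular first.
    rows = (df.filter((df.year == year) & (df.sex == 'F'))
              .orderBy(desc('total'))
              .select('firstName')
              .take(5))
    return [r[0] for r in rows]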

Example 2: float

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
# Remember to cast the value you extract from the Vector using float()
getElement = udf(lambda v, i: float(v[i]), DoubleType())

irisSeparateFeatures = (irisTwoFeatures
                        .withColumn('sepalLength', getElement('features', lit(0)))
                        .withColumn('sepalWidth', getElement('features', lit(1))))
display(irisSeparateFeatures)


# COMMAND ----------

# TEST
from test_helper import Test
firstRow = irisSeparateFeatures.select('sepalWidth', 'features').map(lambda r: (r[0], r[1])).first()
Test.assertEquals(firstRow[0], firstRow[1][1], 'incorrect definition for getElement')

# COMMAND ----------

# MAGIC %md
# MAGIC What about using `Column`'s `getItem` method?

# COMMAND ----------

from pyspark.sql.functions import col

display(irisTwoFeatures.withColumn('sepalLength', col('features').getItem(0)))

# COMMAND ----------

# MAGIC %md
Developer: Inscrutive, Project: spark, Lines of code: 32, Source file: V.py

Example 3: filter

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
        
    splt = re.split(split_regex, string)
    
    fltr = filter(None,splt)
    
    return fltr

#print simpleTokenize(quickbrownfox)
print simpleTokenize(quickbrownfox) # Should give ['a', 'quick', 'brown', ... ]


# In[5]:

# TEST Tokenize a String (1a)
Test.assertEquals(simpleTokenize(quickbrownfox),
                  ['a','quick','brown','fox','jumps','over','the','lazy','dog'],
                  'simpleTokenize should handle sample text')
Test.assertEquals(simpleTokenize(' '), [], 'simpleTokenize should handle empty string')
Test.assertEquals(simpleTokenize('!!!!123A/456_B/789C.123A'), ['123a','456_b','789c','123a'],
                  'simpleTokenize should handle punctuations and lowercase result')
Test.assertEquals(simpleTokenize('fox fox'), ['fox', 'fox'],
                  'simpleTokenize should not remove duplicates')


# ### **(1b) Removing stopwords**
# #### *[Stopwords][stopwords]* are common (English) words that do not contribute much to the content or meaning of a document (e.g., "the", "a", "is", "to", etc.). Stopwords add noise to bag-of-words comparisons, so they are usually excluded.
# #### Using the included file "stopwords.txt", implement `tokenize`, an improved tokenizer that does not emit stopwords.
# [stopwords]: https://en.wikipedia.org/wiki/Stop_words

# In[6]:
Developer: Mvrm, Project: Spark, Lines of code: 32, Source file: 3.py
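
Example 3 ends with the lab asking for tokenize, a stopword-aware version of simpleTokenize. A sketch of one possible answer; split_regex and the stopwords set are assumptions here, since the excerpt shows neither the regex the lab defines nor how "stopwords.txt" is loaded:

import re

split_regex = r'\W+'                                    # assumed; the lab defines its own pattern
stopwords = set(['the', 'a', 'is', 'to', 'of', 'and'])  # assumed; the lab loads stopwords.txt

def tokenize(string):
    # Lowercase, split on non-word characters, drop empty tokens and stopwords.
    tokens = filter(None, re.split(split_regex, string.lower()))
    return [t for t in tokens if t not in stopwords]

print tokenize('A quick brown fox jumps over the lazy dog.')  # -> ['quick', 'brown', 'fox', 'jumps', 'over', 'lazy', 'dog']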

Example 4: DenseVector

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
# COMMAND ----------

# MAGIC %md
# MAGIC Create a `DenseVector` with the values 1.5, 2.5, 3.0 (in that order).

# COMMAND ----------

# ANSWER
denseVec = Vectors.dense([1.5, 2.5, 3.0])

# COMMAND ----------

# TEST
from test_helper import Test
Test.assertEquals(denseVec, DenseVector([1.5, 2.5, 3.0]), 'incorrect value for denseVec')

# COMMAND ----------

# MAGIC %md
# MAGIC Create a `LabeledPoint` with a label equal to 10.0 and features equal to `denseVec`

# COMMAND ----------

# ANSWER
labeledP = LabeledPoint(10.0, denseVec)

# COMMAND ----------

# TEST
Test.assertEquals(str(labeledP), '(10.0,[1.5,2.5,3.0])', 'incorrect value for labeledP')
Developer: smoltis, Project: spark, Lines of code: 32, Source file: 1-mllib-datatypes_answers.py

Example 5: product

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
v = np.arange(5, 10, .5)

elementWise = u*v
dotProduct = u.dot(v)
print 'u: {0}'.format(u)
print 'v: {0}'.format(v)
print '\nelementWise\n{0}'.format(elementWise)
print '\ndotProduct\n{0}'.format(dotProduct)


# In[14]:

# TEST Element-wise multiplication and dot product (2b)
Test.assertTrue(np.all(elementWise == [ 0., 2.75, 6., 9.75, 14., 18.75, 24., 29.75, 36., 42.75]),
                'incorrect value for elementWise')
Test.assertEquals(dotProduct, 183.75, 'incorrect value for dotProduct')


# #### ** (2c) Matrix math **
# #### With NumPy it is very easy to perform matrix math.  You can use [np.matrix()](http://docs.scipy.org/doc/numpy/reference/generated/numpy.matrix.html) to generate a NumPy matrix.  Just pass a two-dimensional `ndarray` or a list of lists to the function.  You can perform matrix math on NumPy matrices using `*`.
# #### You can transpose a matrix by calling [numpy.matrix.transpose()](http://docs.scipy.org/doc/numpy/reference/generated/numpy.matrix.transpose.html) or by using `.T` on the matrix object (e.g. `myMatrix.T`).  Transposing a matrix produces a matrix where the new rows are the columns from the old matrix. For example: $$  \begin{bmatrix} 1 & 2 & 3 \\\ 4 & 5 & 6 \end{bmatrix}^\mathbf{\top} = \begin{bmatrix} 1 & 4 \\\ 2 & 5 \\\ 3 & 6 \end{bmatrix} $$
#  
# #### Inverting a matrix can be done using [numpy.linalg.inv()](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.inv.html).  Note that only square matrices can be inverted, and square matrices are not guaranteed to have an inverse.  If the inverse exists, then multiplying a matrix by its inverse will produce the identity matrix.  $ \scriptsize ( \mathbf{A}^{-1} \mathbf{A} = \mathbf{I_n} ) $  The identity matrix $ \scriptsize \mathbf{I_n} $ has ones along its diagonal and zero elsewhere. $$ \mathbf{I_n} = \begin{bmatrix} 1 & 0 & 0 & \dots & 0 \\\ 0 & 1 & 0 & \dots & 0 \\\ 0 & 0 & 1 & \dots & 0 \\\ \vdots & \vdots & \vdots & \ddots & \vdots \\\ 0 & 0 & 0 & \dots & 1 \end{bmatrix} $$
# #### For this exercise, multiply $ \mathbf{A} $ times its transpose $ ( \mathbf{A}^\top ) $ and then calculate the inverse of the result $ (  [ \mathbf{A} \mathbf{A}^\top ]^{-1}  ) $.

# In[15]:

# TODO: Replace <FILL IN> with appropriate code
from numpy.linalg import inv

A = np.matrix([[1,2,3,4],[5,6,7,8]])
Developer: navink, Project: Apache-Spark_CS190.1x, Lines of code: 33, Source file: ML_lab1_review_student.py
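
The TODO cell at the end of Example 5 is cut off before the answer. A sketch of one way to finish the exercise described above (multiply A by its transpose, then invert the result); the variable names AAt and AAtInv are placeholders, since the excerpt does not show what the lab calls them:

import numpy as np
from numpy.linalg import inv

A = np.matrix([[1, 2, 3, 4], [5, 6, 7, 8]])
AAt = A * A.T        # 2x2 product of A with its transpose
AAtInv = inv(AAt)    # AAt is square and full rank, so the inverse exists
print AAtInv * AAt   # numerically the 2x2 identity matrix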

Example 6: SparseVector

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
Test.assertEqualsHashed(sampleOHEDictManual[(0,'mouse')],
                        'da4b9237bacccdf19c0760cab7aec4a8359010b0',
                        "incorrect value for sampleOHEDictManual[(0,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'black')],
                        '77de68daecd823babbb58edb1c8e14d7106e83bb',
                        "incorrect value for sampleOHEDictManual[(1,'black')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'tabby')],
                        '1b6453892473a467d07372d45eb05abc2031647a',
                        "incorrect value for sampleOHEDictManual[(1,'tabby')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'mouse')],
                        'ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4',
                        "incorrect value for sampleOHEDictManual[(2,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'salmon')],
                        'c1dfd96eea8cc2b62785275bca38ac261256e278',
                        "incorrect value for sampleOHEDictManual[(2,'salmon')]")
Test.assertEquals(len(sampleOHEDictManual.keys()), 7,
                  'incorrect number of keys in sampleOHEDictManual')


# ** Sparse vectors **
import numpy as np
from pyspark.mllib.linalg import SparseVector

aDense = np.array([0., 3., 0., 4.])
aSparse = SparseVector(4, [[0,0.], [1,3.], [2,0.], [3,4.]])

bDense = np.array([0., 0., 0., 1.])
bSparse = SparseVector(4, [[0,0.], [1,0.], [2,0.], [3,1.]])

w = np.array([0.4, 3.1, -1.4, -.5])
print aDense.dot(w)
print aSparse.dot(w)
Developer: samkujovich, Project: SparkExperience, Lines of code: 34, Source file: ClickThroughPrediction.py
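
A side note on the SparseVector construction in Example 6: passing the zero entries explicitly works, but the usual idiom is to supply only the non-zero positions, either as a dict or as parallel index/value lists. A small sketch of the equivalent vectors:

from pyspark.mllib.linalg import SparseVector

aSparse = SparseVector(4, {1: 3., 3: 4.})  # same vector as np.array([0., 3., 0., 4.])
bSparse = SparseVector(4, [3], [1.])       # indices list plus values list form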

Example 7: makePlural

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
# One way of completing the function
def makePlural(word):
    return word + 's'

print makePlural('cat')


# In[8]:

# Load in the testing code and check to see if your answer is correct
# If incorrect it will report back '1 test failed' for each failed test
# Make sure to rerun any cell you change before trying the test again
from test_helper import Test
# TEST Pluralize and test (1b)
Test.assertEquals(makePlural('rat'), 'rats', 'incorrect result: makePlural does not add an s')


# #### ** (1c) Apply `makePlural` to the base RDD **
# #### Now pass each item in the base RDD into a [map()](http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.map) transformation that applies the `makePlural()` function to each element. And then call the [collect()](http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.collect) action to see the transformed RDD.

# In[9]:

# TODO: Replace <FILL IN> with appropriate code
wordsList = ['cat', 'elephant', 'rat', 'rat', 'cat']
wordsRDD = sc.parallelize(wordsList, 4)
def makePlural(word):
    return word + 's'
pluralRDD = wordsRDD.map(makePlural)
print pluralRDD.collect()
Developer: Mvrm, Project: Spark, Lines of code: 31, Source file: 2.py

Example 8: makePlural

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
# One way of completing the function
def makePlural(word):
    return word + 's'

print makePlural('cat')


# In[4]:

# Load in the testing code and check to see if your answer is correct
# If incorrect it will report back '1 test failed' for each failed test
# Make sure to rerun any cell you change before trying the test again
from test_helper import Test
# TEST Pluralize and test (1b)
Test.assertEquals(makePlural('rat'), 'rats', 'incorrect result: makePlural does not add an s')


# #### ** (1c) Apply `makePlural` to the base RDD **
# #### Now pass each item in the base RDD into a [map()](http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.map) transformation that applies the `makePlural()` function to each element. And then call the [collect()](http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.collect) action to see the transformed RDD.

# In[7]:

# TODO: Replace <FILL IN> with appropriate code
pluralRDD = wordsRDD.map(makePlural)
print pluralRDD.collect()


# In[ ]:

# TEST Apply makePlural to the base RDD(1c)
Developer: harishashok, Project: Big-Data-with-Apache-Spark-, Lines of code: 32, Source file: lab2_wordcount.py

Example 9: lit

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
# COMMAND ----------

# ANSWER
from pyspark.sql.functions import lit, concat

pluralDF = wordsDF.select(concat('word', lit('s')).alias('word'))
pluralDF.show()

# COMMAND ----------

# Load in the testing code and check to see if your answer is correct
# If incorrect it will report back '1 test failed' for each failed test
# Make sure to rerun any cell you change before trying the test again
from test_helper import Test
# TEST Using DataFrame functions to add an 's' (1b)
Test.assertEquals(pluralDF.first()[0], 'cats', 'incorrect result: you need to add an s')
Test.assertEquals(pluralDF.columns, ['word'], "there should be one column named 'word'")

# COMMAND ----------

# PRIVATE_TEST Using DataFrame functions to add an 's' (1b)
Test.assertEquals(pluralDF.first()[0], 'cats', 'incorrect result: you need to add an s')
Test.assertEquals(pluralDF.columns, ['word'], "there should be one column named 'word'")

# COMMAND ----------

# MAGIC %md
# MAGIC ** (1c) Length of each word **
# MAGIC 
# MAGIC Now use the SQL `length` function to find the number of characters in each word.  The [`length` function](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.length) is found in the `pyspark.sql.functions` module.
Developer: ashokvardhankari, Project: mooc-setup, Lines of code: 32, Source file: cs105_lab1b_word_count.py
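
The (1c) cell at the end of Example 9 stops right after introducing the SQL length function. A minimal sketch of the step it describes, assuming the same wordsDF DataFrame with a single 'word' column; the result variable name is a placeholder:

from pyspark.sql.functions import length

wordLengthsDF = wordsDF.select(length('word').alias('length'))
wordLengthsDF.show()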

Example 10: hash

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
assert shakespeareCount == 122395


# ### ** Part 2: Check class testing library **

# #### ** (2a) Compare with hash **

# In[ ]:

# TEST Compare with hash (2a)
# Check our testing library/package
# This should print '1 test passed.' on two lines
from test_helper import Test

twelve = 12
Test.assertEquals(twelve, 12, "twelve should equal 12")
Test.assertEqualsHashed(
    twelve, "7b52009b64fd0a2a49e6d8a939753077792b0554", "twelve, once hashed, should equal the hashed value of 12"
)


# #### ** (2b) Compare lists **

# In[ ]:

# TEST Compare lists (2b)
# This should print '1 test passed.'
unsortedList = [(5, "b"), (5, "a"), (4, "c"), (3, "a")]
Test.assertEquals(sorted(unsortedList), [(3, "a"), (4, "c"), (5, "a"), (5, "b")], "unsortedList does not sort properly")

Developer: pombredanne, Project: BigDataSpark, Lines of code: 31, Source file: lab0_student.py

Example 11: getCountsAndAverages

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
assert oneSorted1 == twoSorted1

def getCountsAndAverages(IDandRatingsTuple):
    """ Calculate average rating
    Args:
        IDandRatingsTuple: a single tuple of (MovieID, (Rating1, Rating2, Rating3, ...))
    Returns:
        tuple: a tuple of (MovieID, (number of ratings, averageRating))
    """
    tup = len(IDandRatingsTuple[1])
    return (IDandRatingsTuple[0], (tup, sum(IDandRatingsTuple[1]) / float(tup)))



Test.assertEquals(getCountsAndAverages((1, (1, 2, 3, 4))), (1, (4, 2.5)),
                            'incorrect getCountsAndAverages() with integer list')
Test.assertEquals(getCountsAndAverages((100, (10.0, 20.0, 30.0))), (100, (3, 20.0)),
                            'incorrect getCountsAndAverages() with float list')
Test.assertEquals(getCountsAndAverages((110, xrange(20))), (110, (20, 9.5)),
                            'incorrect getCountsAndAverages() with xrange')



def sortFunction(tuple):
    """ Construct the sort string (does not perform actual sorting)
    Args:
        tuple: (rating, MovieName)
    Returns:
        sortString: the value to sort with, 'rating MovieName'
    """
Developer: JBed, Project: edx-spark, Lines of code: 32, Source file: Lab5.py
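
Example 11 ends just after the sortFunction docstring. Based on that docstring, a plausible completion pads the rating into a fixed-width string and appends the movie name, so that plain string sorting orders by rating first; the exact format string the original author used is not shown:

def sortFunction(tuple):
    """ Construct the sort string (does not perform actual sorting) """
    key = '{0:06.3f}'.format(tuple[0])  # zero-padded rating, e.g. '04.500'
    value = tuple[1]                    # movie name
    return (key + ' ' + value)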

Example 12: of

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
    Args:
        IDandRatingsTuple: a single tuple of (MovieID, (Rating1, Rating2, Rating3, ...))
    Returns:
        tuple: a tuple of (MovieID, (number of ratings, averageRating))
    """
    movieId = IDandRatingsTuple[0]
    ratings = IDandRatingsTuple[1]
    return (movieId, (len(ratings), float(sum(ratings)) / len(ratings)))


# In[10]:

# TEST Number of Ratings and Average Ratings for a Movie (1a)

Test.assertEquals(
    getCountsAndAverages((1, (1, 2, 3, 4))), (1, (4, 2.5)), "incorrect getCountsAndAverages() with integer list"
)
Test.assertEquals(
    getCountsAndAverages((100, (10.0, 20.0, 30.0))),
    (100, (3, 20.0)),
    "incorrect getCountsAndAverages() with float list",
)
Test.assertEquals(
    getCountsAndAverages((110, xrange(20))), (110, (20, 9.5)), "incorrect getCountsAndAverages() with xrange"
)


# #### **(1b) Movies with Highest Average Ratings**
# #### Now that we have a way to calculate the average ratings, we will use the `getCountsAndAverages()` helper function with Spark to determine movies with highest average ratings.
# #### The steps you should perform are:
# * #### Recall that the `ratingsRDD` contains tuples of the form (UserID, MovieID, Rating). From `ratingsRDD` create an RDD with tuples of the form (MovieID, Python iterable of Ratings for that MovieID). This transformation will yield an RDD of the form: `[(1, <pyspark.resultiterable.ResultIterable object at 0x7f16d50e7c90>), (2, <pyspark.resultiterable.ResultIterable object at 0x7f16d50e79d0>), (3, <pyspark.resultiterable.ResultIterable object at 0x7f16d50e7610>)]`. Note that you will only need to perform two Spark transformations to do this step.
Developer: avenezia, Project: CS100.1x-Introduction-to-Big-Data-with-Apache-Spark, Lines of code: 33, Source file: lab4_machine_learning_student.py
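
The bullet at the end of Example 12 covers only the first transformation. A sketch of how the (1b) pipeline it describes might continue, assuming ratingsRDD holds (UserID, MovieID, Rating) tuples and that a moviesRDD of (MovieID, MovieName) pairs is available for the join; all variable names here are placeholders:

movieIDsWithAvgRatingsRDD = (ratingsRDD
                             .map(lambda r: (r[1], r[2]))   # (MovieID, Rating)
                             .groupByKey()                  # (MovieID, iterable of Ratings)
                             .map(getCountsAndAverages))    # (MovieID, (count, average))

movieNameWithAvgRatingsRDD = (moviesRDD
                              .join(movieIDsWithAvgRatingsRDD)
                              .map(lambda r: (r[1][1][1], r[1][0], r[1][1][0])))  # (average, MovieName, count)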

Example 13: pca

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
ax.set_zlim((-20, 120)), ax.set_ylim((-20, 100)), ax.set_xlim((30, 75))
ax.plot_surface(xx, yy, z, alpha=.1)
plt.tight_layout()
pass


threeDData = sc.parallelize(dataThreeD)
componentsThreeD, threeDScores, eigenvaluesThreeD = pca(threeDData)

print 'componentsThreeD: \n{0}'.format(componentsThreeD)
print ('\nthreeDScores (first three): \n{0}'
       .format('\n'.join(map(str, threeDScores.take(3)))))
print '\neigenvaluesThreeD: \n{0}'.format(eigenvaluesThreeD)


Test.assertEquals(componentsThreeD.shape, (3, 2), 'incorrect shape for componentsThreeD')
Test.assertTrue(np.allclose(np.sum(eigenvaluesThreeD), 969.796443367),
                'incorrect value for eigenvaluesThreeD')
Test.assertTrue(np.allclose(np.abs(np.sum(componentsThreeD)), 1.77238943258),
                'incorrect value for componentsThreeD')
Test.assertTrue(np.allclose(np.abs(np.sum(threeDScores.take(3))), 237.782834092),
                'incorrect value for threeDScores')


scoresThreeD = np.asarray(threeDScores.collect())

fig, ax = preparePlot(np.arange(20, 150, 20), np.arange(-40, 110, 20))
ax.set_xlabel(r'New $x_1$ values'), ax.set_ylabel(r'New $x_2$ values')
ax.set_xlim(5, 150), ax.set_ylim(-45, 50)
plt.scatter(scoresThreeD[:,0], scoresThreeD[:,1], s=14**2, c=clrs, edgecolors='#8cbfd0', alpha=0.75)
pass
Developer: JsNoNo, Project: Spark-Test-Program, Lines of code: 33, Source file: PCAtest.py

Example 14: display

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
# TODO: Replace <FILL IN> with appropriate code
from pyspark.ml.feature import StringIndexer

stringIndexer = (<FILL IN>
                 .<FILL IN>
                 .<FILL IN>)

indexerModel = stringIndexer.<FILL IN>
irisTrainIndexed = indexerModel.<FILL IN>
display(irisTrainIndexed)

# COMMAND ----------

# TEST
from test_helper import Test
Test.assertEquals(irisTrainIndexed.select('indexed').take(50)[-1][0], 2.0, 'incorrect values in indexed column')
Test.assertTrue(irisTrainIndexed.schema.fields[2].metadata != {}, 'indexed should have metadata')

# COMMAND ----------

# MAGIC %md
# MAGIC We've updated the metadata for the field.  Now we know that the field takes on three values and is nominal.

# COMMAND ----------

print irisTrainIndexed.schema.fields[1].metadata
print irisTrainIndexed.schema.fields[2].metadata

# COMMAND ----------

# MAGIC %md
Developer: smoltis, Project: spark, Lines of code: 33, Source file: 4-trees_student.py
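
The <FILL IN> cell in Example 14 is left as an exercise. One possible completion, assuming the training DataFrame is named irisTrain and that the column being indexed is called 'label' (neither name is visible in the excerpt; the test only fixes the output column name 'indexed'):

from pyspark.ml.feature import StringIndexer

stringIndexer = (StringIndexer()
                 .setInputCol('label')
                 .setOutputCol('indexed'))

indexerModel = stringIndexer.fit(irisTrain)
irisTrainIndexed = indexerModel.transform(irisTrain)
display(irisTrainIndexed)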

Example 15: makePlural

# Required module: from test_helper import Test [as alias]
# Or: from test_helper.Test import assertEquals [as alias]
def makePlural(word):
    return word + "s"


print makePlural("cat")


# In[5]:

# Load in the testing code and check to see if your answer is correct
# If incorrect it will report back '1 test failed' for each failed test
# Make sure to rerun any cell you change before trying the test again
from test_helper import Test

# TEST Pluralize and test (1b)
Test.assertEquals(makePlural("rat"), "rats", "incorrect result: makePlural does not add an s")


# #### ** (1c) Apply `makePlural` to the base RDD **
# #### Now pass each item in the base RDD into a [map()](http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.map) transformation that applies the `makePlural()` function to each element. And then call the [collect()](http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.collect) action to see the transformed RDD.

# In[6]:

# TODO: Replace <FILL IN> with appropriate code
pluralRDD = wordsRDD.map(makePlural)
print pluralRDD.collect()


# In[7]:

# TEST Apply makePlural to the base RDD(1c)
Developer: luckylouis, Project: MachineLearningSpark, Lines of code: 33, Source file: lab1_word_count_student.py


Note: the test_helper.Test.assertEquals examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are drawn from open-source projects contributed by their respective authors, and copyright of the source code remains with those authors; please consult each project's license before redistributing or reusing the code, and do not republish this article without permission.