This article collects typical usage examples of the Python method opus_core.datasets.dataset.DatasetSubset.get_primary_attribute_names. If you have been wondering what exactly DatasetSubset.get_primary_attribute_names does and how to use it in practice, the curated code example below may help. You can also read further about the containing class, opus_core.datasets.dataset.DatasetSubset.
The following shows 1 code example of the DatasetSubset.get_primary_attribute_names method, sorted by popularity by default.
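Before the full example, the method's role can be sketched in isolation. The toy class below is a hypothetical stand-in, not the real opus_core implementation: it illustrates only the contract the example relies on, namely that get_primary_attribute_names() returns the names of the columns physically stored in the dataset's table (as opposed to computed variables), and that get_attribute_by_index() slices a stored column by row index.

```python
# Hypothetical stand-in for an Opus dataset (NOT the real opus_core class):
# a plain dict of columns plays the role of the dataset's storage table.
class ToyDataset:
    def __init__(self, table_data):
        self._table = dict(table_data)

    def get_primary_attribute_names(self):
        # Primary attributes are the columns physically present in the table.
        return list(self._table.keys())

    def get_attribute_by_index(self, name, index):
        # Return the values of one column at the given row indices.
        column = self._table[name]
        return [column[i] for i in index]

ds = ToyDataset({"parcel_id": [1, 2, 3], "year_built": [1990, 2001, 1985]})
print(sorted(ds.get_primary_attribute_names()))  # prints ['parcel_id', 'year_built']
```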
Example 1: run
# Required module import: from opus_core.datasets.dataset import DatasetSubset [as alias]
# Or: from opus_core.datasets.dataset.DatasetSubset import get_primary_attribute_names [as alias]
#......... part of the code is omitted here .........
        ','.join([col+"="+str(criterion[col]) for col in column_names]) + '\n'
    #if diff < 0: #TODO demolition; not yet supported

    ## log status
    action = "0"
    if this_sampled_index.size > 0:
        action_num = total_spaces_in_sample_dataset[this_sampled_index].sum()
        if diff > 0: action = "+" + str(action_num)
        if diff < 0: action = "-" + str(action_num)

    cat = [str(criterion[col]) for col in column_names]
    cat += [str(actual_num), str(target_num), str(expected_num), str(diff), action]

    if PrettyTable is not None:
        status_log.add_row(cat)
    else:
        logger.log_status("\t".join(cat))

if PrettyTable is not None:
    logger.log_status("\n" + status_log.get_string())
if error_log:
    logger.log_error(error_log)
#logger.log_note("Updating attributes of %s sampled development events." % sampled_index.size)
result_data = {}
result_dataset = None
index = array([], dtype='int32')
if sampled_index.size > 0:
    ### ideally duplicate_rows() is all that is needed to add the newly cloned rows;
    ### to be more cautious, copy the data to be cloned, remove elements, then append the cloned data
    ##realestate_dataset.duplicate_rows(sampled_index)
    #result_data.setdefault('year_built', resize(year, sampled_index.size).astype('int32'))  # Commented out because year_built is overwritten after the loop below.
    ## also add 'independent_variables' to the new dataset
    for attribute in set(sample_from_dataset.get_primary_attribute_names() + independent_variables):
        if attribute in reset_attribute_value:  # dict.has_key() in the original Python 2 code
            result_data[attribute] = resize(array(reset_attribute_value[attribute]), sampled_index.size)
        else:
            result_data[attribute] = sample_from_dataset.get_attribute_by_index(attribute, sampled_index)
    # Reset the year_built attribute.
    result_data['year_built'] = resize(year, sampled_index.size).astype('int32')
    # TODO: Uncomment the following three lines to reset land_area, tax_exempt and zgde.
    # Tests still to be done; parcel_id should be changed by the location choice model.
    #result_data['land_area'] = resize(-1, sampled_index.size).astype('int32')
    #result_data['tax_exempt'] = resize(-1, sampled_index.size).astype('int32')
    #result_data['zgde'] = resize(-1, sampled_index.size).astype('int32')
    if id_name and result_data and id_name not in result_data:
        result_data[id_name] = arange(sampled_index.size, dtype='int32') + 1

    storage = StorageFactory().get_storage('dict_storage')
    storage.write_table(table_name=table_name, table_data=result_data)

    result_dataset = Dataset(id_name=id_name,
                             in_storage=storage,
                             in_table_name=table_name,
                             dataset_name=dataset_name)
    index = arange(result_dataset.size())
if append_to_realestate_dataset:
    if len(result_data) > 0:
        logger.start_block('Appending development events and living units')
        logger.log_note("Append %d sampled development events to real estate dataset."
                        % len(result_data[list(result_data.keys())[0]]))  # .keys()[0] in the original Python 2 code
        index = realestate_dataset.add_elements(result_data, require_all_attributes=False,
                                                change_ids_if_not_unique=True)
        logger.start_block('Creating id mapping')
        # remember the ids from the development_event_history dataset.
        mapping_new_old = self.get_mapping_of_old_ids_to_new_ids(result_data, realestate_dataset, index)
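Stripped of the Opus-specific machinery, the core of the example above is a clone-and-override pattern: every primary attribute of the sampled rows is copied, attributes listed in reset_attribute_value are overwritten with a constant, and the cloned rows are appended to the main table. The sketch below is a simplified, self-contained illustration of that pattern; all names are hypothetical, plain lists stand in for numpy arrays, and append_rows() only loosely mirrors add_elements().

```python
# Hypothetical sketch of the clone-and-override pattern from the example above.
def clone_rows(table, sampled_index, reset_attribute_value):
    """Copy the sampled rows of every column, overriding reset attributes."""
    result = {}
    for attribute in table:  # stands in for get_primary_attribute_names()
        if attribute in reset_attribute_value:
            # mirrors resize(array(value), n) for a scalar reset value
            result[attribute] = [reset_attribute_value[attribute]] * len(sampled_index)
        else:
            result[attribute] = [table[attribute][i] for i in sampled_index]
    return result

def append_rows(table, new_rows):
    """Loosely mirrors add_elements(): append clones, return their new indices."""
    n_old = len(next(iter(table.values())))
    n_new = len(next(iter(new_rows.values())))
    for attribute, values in new_rows.items():
        table[attribute].extend(values)
    return list(range(n_old, n_old + n_new))

table = {"year_built": [1950, 1960, 1970, 1980], "land_area": [10, 20, 30, 40]}
cloned = clone_rows(table, [1, 3], {"year_built": 2005})
new_index = append_rows(table, cloned)
print(cloned["year_built"], new_index)  # prints [2005, 2005] [4, 5]
```

Note the design choice this mirrors: rather than mutating the sampled rows in place, the example builds a fresh result_data dictionary and appends it, which keeps the source dataset untouched and lets add_elements() reassign ids when they collide.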