

Python DataSelectionApplet.configure_operator_with_parsed_args Method Code Examples

This article collects typical usage examples of the Python method ilastik.applets.dataSelection.DataSelectionApplet.configure_operator_with_parsed_args. If you are wondering what this method does, how to call it, or where to find working examples, the curated code samples below should help. You can also explore further usage examples of ilastik.applets.dataSelection.DataSelectionApplet, the class this method belongs to.


The following presents 6 code examples of DataSelectionApplet.configure_operator_with_parsed_args, listed in order of popularity. Each example is excerpted from an open-source ilastik workflow.
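
Before the full workflow classes, here is a minimal sketch of the calling pattern the examples share: build a DataSelectionApplet, declare its dataset roles, split the command line with parse_known_cmdline_args, and hand the parsed namespace to configure_operator_with_parsed_args. The helper name, role list, and attribute checks below are illustrative assumptions modeled on the examples, not a fixed API.

    from ilastik.applets.dataSelection import DataSelectionApplet

    def configure_input_from_cmdline(workflow, workflow_cmdline_args):
        # "Raw Data" is a placeholder; each workflow defines its own role list.
        role_names = ["Raw Data"]

        # Constructor arguments follow Example 1 below: (workflow, title, project file group name).
        data_selection_applet = DataSelectionApplet(workflow, "Input Data", "Input Data")
        data_selection_applet.topLevelOperator.DatasetRoles.setValue(role_names)

        # Split the options this applet understands from the rest of the command line.
        input_args, unused_args = data_selection_applet.parse_known_cmdline_args(
            workflow_cmdline_args, role_names)

        # Apply the parsed input settings to the applet's top-level operator.
        # The attribute names mirror the checks in the examples (input_files / raw_data)
        # and depend on the configured roles, hence the defensive getattr().
        if getattr(input_args, "input_files", None) or getattr(input_args, "raw_data", None):
            data_selection_applet.configure_operator_with_parsed_args(input_args)

        return data_selection_applet, unused_args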

Example 1: DataConversionWorkflow

# Required import: from ilastik.applets.dataSelection import DataSelectionApplet
# Method demonstrated: ilastik.applets.dataSelection.DataSelectionApplet.configure_operator_with_parsed_args
class DataConversionWorkflow(Workflow):
    """
    Simple workflow for converting data between formats.  Has only two applets: Data Selection and Data Export.
    
    Also supports a command-line interface for headless mode.
    
    For example:
    
    .. code-block:: bash

        python ilastik.py --headless --new_project=NewTemporaryProject.ilp --workflow=DataConversionWorkflow --output_format="png sequence" ~/input1.h5 ~/input2.h5
    
    Or if you have an existing project with input files already selected and configured:

    .. code-block:: bash

        python ilastik.py --headless --project=MyProject.ilp --output_format=jpeg
    
    .. note:: Beware of issues related to absolute vs. relative paths.  Relative links are stored relative to the project file.
              To avoid this issue entirely, either 
                 (1) use only absolute filepaths
              or (2) cd into your project file's directory before launching ilastik.
    
    """
    def __init__(self, shell, headless, workflow_cmdline_args, project_creation_args, *args, **kwargs):

        
        # Create a graph to be shared by all operators
        graph = Graph()
        super(DataConversionWorkflow, self).__init__(shell, headless, workflow_cmdline_args, project_creation_args, graph=graph, *args, **kwargs)
        self._applets = []

        # Create applets 
        self.dataSelectionApplet = DataSelectionApplet(self, 
                                                       "Input Data", 
                                                       "Input Data", 
                                                       supportIlastik05Import=True, 
                                                       batchDataGui=False,
                                                       force5d=False)

        opDataSelection = self.dataSelectionApplet.topLevelOperator
        role_names = ["Input Data"]
        opDataSelection.DatasetRoles.setValue( role_names )

        self.dataExportApplet = DataExportApplet(self, "Data Export")

        opDataExport = self.dataExportApplet.topLevelOperator
        opDataExport.WorkingDirectory.connect( opDataSelection.WorkingDirectory )
        opDataExport.SelectionNames.setValue( ["Input"] )        

        self._applets.append( self.dataSelectionApplet )
        self._applets.append( self.dataExportApplet )

        # Parse command-line arguments
        # Command-line args are applied in onProjectLoaded(), below.
        self._workflow_cmdline_args = workflow_cmdline_args
        self._data_input_args = None
        self._data_export_args = None
        if workflow_cmdline_args:
            self._data_export_args, unused_args = self.dataExportApplet.parse_known_cmdline_args( workflow_cmdline_args )
            self._data_input_args, unused_args = self.dataSelectionApplet.parse_known_cmdline_args( unused_args, role_names )
            if unused_args:
                logger.warn("Unused command-line args: {}".format( unused_args ))

    def onProjectLoaded(self, projectManager):
        """
        Overridden from Workflow base class.  Called by the Project Manager.
        
        If the user provided command-line arguments, use them to configure 
        the workflow inputs and output settings.
        """
        # Configure the batch data selection operator.
        if self._data_input_args and self._data_input_args.input_files:
            self.dataSelectionApplet.configure_operator_with_parsed_args( self._data_input_args )
        
        # Configure the data export operator.
        if self._data_export_args:
            self.dataExportApplet.configure_operator_with_parsed_args( self._data_export_args )

        if self._headless and self._data_input_args and self._data_export_args:
            # Now run the export and report progress....
            opDataExport = self.dataExportApplet.topLevelOperator
            for i, opExportDataLaneView in enumerate(opDataExport):
                logger.info( "Exporting file #{} to {}".format(i, opExportDataLaneView.ExportPath.value) )
    
                sys.stdout.write( "Result #{}/{} Progress: ".format( i, len( opDataExport ) ) )
                def print_progress( progress ):
                    sys.stdout.write( "{} ".format( progress ) )
    
                # If the operator provides a progress signal, use it.
                slotProgressSignal = opExportDataLaneView.progressSignal
                slotProgressSignal.subscribe( print_progress )
                opExportDataLaneView.run_export()
                
                # Finished.
                sys.stdout.write("\n")

    def connectLane(self, laneIndex):
        opDataSelectionView = self.dataSelectionApplet.topLevelOperator.getLane(laneIndex)
        opDataExportView = self.dataExportApplet.topLevelOperator.getLane(laneIndex)
#......... (remaining code omitted) .........
Contributor: stuarteberg, Project: ilastik, Lines of code: 103, Source file: dataConversionWorkflow.py

Example 2: PixelClassificationWorkflow

# Required import: from ilastik.applets.dataSelection import DataSelectionApplet
# Method demonstrated: ilastik.applets.dataSelection.DataSelectionApplet.configure_operator_with_parsed_args

#......... (preceding code omitted) .........
        # "Regular" (i.e. with the images that the user selected as input data)
        if slotId == "Predictions":
            return self.pcApplet.topLevelOperator.HeadlessPredictionProbabilities
        elif slotId == "PredictionsUint8":
            return self.pcApplet.topLevelOperator.HeadlessUint8PredictionProbabilities
        # "Batch" (i.e. with the images that the user selected as batch inputs).
        elif slotId == "BatchPredictions":
            return self.opBatchPredictionPipeline.HeadlessPredictionProbabilities
        if slotId == "BatchPredictionsUint8":
            return self.opBatchPredictionPipeline.HeadlessUint8PredictionProbabilities
        
        raise Exception("Unknown headless output slot")


    def onProjectLoaded(self, projectManager):
        """
        Overridden from Workflow base class.  Called by the Project Manager.
        
        If the user provided command-line arguments, use them to configure 
        the workflow for batch mode and export all results.
        (This workflow's headless mode supports only batch mode for now.)
        """
        if self.generate_random_labels:
            self._generate_random_labels(self.random_label_count, self.random_label_value)
            logger.info("Saving project...")
            self._shell.projectManager.saveProject()
            logger.info("Done.")
        
        if self.print_labels_by_slice:
            self._print_labels_by_slice( self.label_search_value )

        # Configure the batch data selection operator.
        if self._batch_input_args and (self._batch_input_args.input_files or self._batch_input_args.raw_data):
            self.batchInputApplet.configure_operator_with_parsed_args( self._batch_input_args )
        
        # Configure the data export operator.
        if self._batch_export_args:
            self.batchResultsApplet.configure_operator_with_parsed_args( self._batch_export_args )

        if self._batch_input_args and self.pcApplet.topLevelOperator.classifier_cache._dirty:
            logger.warn("Your project file has no classifier.  A new classifier will be trained for this run.")

        if self._headless:
            # In headless mode, let's see the messages from the training operator.
            logging.getLogger("lazyflow.operators.classifierOperators").setLevel(logging.DEBUG)
        
        if self.retrain:
            # Cause the classifier to be dirty so it is forced to retrain.
            # (useful if the stored labels were changed outside ilastik)
            self.pcApplet.topLevelOperator.opTrain.ClassifierFactory.setDirty()
            
            # Request the classifier, which forces training
            self.pcApplet.topLevelOperator.FreezePredictions.setValue(False)
            _ = self.pcApplet.topLevelOperator.Classifier.value

            # store new classifier to project file
            projectManager.saveProject(force_all_save=False)

        if self._headless and self._batch_input_args and self._batch_export_args:
            # Make sure we're using the up-to-date classifier.
            self.pcApplet.topLevelOperator.FreezePredictions.setValue(False)
        
            # Now run the batch export and report progress....
            opBatchDataExport = self.batchResultsApplet.topLevelOperator
            for i, opExportDataLaneView in enumerate(opBatchDataExport):
                logger.info( "Exporting result {} to {}".format(i, opExportDataLaneView.ExportPath.value) )
Contributor: fdiego, Project: ilastik, Lines of code: 70, Source file: pixelClassificationWorkflow.py

Example 3: CarvingWorkflow

# Required import: from ilastik.applets.dataSelection import DataSelectionApplet
# Method demonstrated: ilastik.applets.dataSelection.DataSelectionApplet.configure_operator_with_parsed_args

#......... (preceding code omitted) .........
        
        # Expose to shell
        self._applets = []
        self._applets.append(self.projectMetadataApplet)
        self._applets.append(self.dataSelectionApplet)
        self._applets.append(self.preprocessingApplet)
        self._applets.append(self.carvingApplet)

    def connectLane(self, laneIndex):
        ## Access applet operators
        opData = self.dataSelectionApplet.topLevelOperator.getLane(laneIndex)
        opPreprocessing = self.preprocessingApplet.topLevelOperator.getLane(laneIndex)
        opCarvingLane = self.carvingApplet.topLevelOperator.getLane(laneIndex)
        
        opCarvingLane.connectToPreprocessingApplet(self.preprocessingApplet)
        op5 = OpReorderAxes(parent=self)
        op5.AxisOrder.setValue("txyzc")
        op5.Input.connect(opData.Image)

        ## Connect operators
        opPreprocessing.InputData.connect(op5.Output)
        #opCarvingTopLevel.RawData.connect(op5.output)
        opCarvingLane.InputData.connect(op5.Output)
        opCarvingLane.FilteredInputData.connect(opPreprocessing.FilteredImage)
        opCarvingLane.MST.connect(opPreprocessing.PreprocessedData)
        opCarvingLane.UncertaintyType.setValue("none")
        
        # Special input-input connection: WriteSeeds metadata must mirror the input data
        opCarvingLane.WriteSeeds.connect( opCarvingLane.InputData )
        
        self.preprocessingApplet.enableDownstream(False)

    def handleAppletStateUpdateRequested(self):
        # If no data, nothing else is ready.
        opDataSelection = self.dataSelectionApplet.topLevelOperator
        input_ready = len(opDataSelection.ImageGroup) > 0

        # If preprocessing isn't configured yet, don't allow carving
        preprocessed_data_ready = input_ready and self.preprocessingApplet._enabledDS
        
        # Enable each applet as appropriate
        self._shell.setAppletEnabled(self.preprocessingApplet, input_ready)
        self._shell.setAppletEnabled(self.carvingApplet, preprocessed_data_ready)

    def onProjectLoaded(self, projectManager):
        """
        Overridden from Workflow base class.  Called by the Project Manager.

        If the user provided command-line arguments, apply them to the workflow operators.
        Currently, we support command-line configuration of:
        - DataSelection
        - Preprocessing, in which case preprocessing is immediately executed
        """
        # If input data files were provided on the command line, configure the DataSelection applet now.
        # (Otherwise, we assume the project already had a dataset selected.)
        input_data_args, unused_args = DataSelectionApplet.parse_known_cmdline_args(self.workflow_cmdline_args, DATA_ROLES)
        if input_data_args.raw_data:
            self.dataSelectionApplet.configure_operator_with_parsed_args(input_data_args)

        #
        # Parse the remaining cmd-line arguments
        #
        filter_indexes = { 'bright-lines' : OpFilter.HESSIAN_BRIGHT,
                           'dark-lines'   : OpFilter.HESSIAN_DARK,
                           'step-edges'   : OpFilter.STEP_EDGES,
                           'original'     : OpFilter.RAW,
                           'inverted'     : OpFilter.RAW_INVERTED }

        parser = argparse.ArgumentParser()
        parser.add_argument('--run-preprocessing', action='store_true')
        parser.add_argument('--preprocessing-sigma', type=float, required=False)
        parser.add_argument('--preprocessing-filter', required=False, type=str.lower,
                            choices=filter_indexes.keys())

        parsed_args, unused_args = parser.parse_known_args(unused_args)
        if unused_args:
            logger.warn("Did not use the following command-line arguments: {}".format(unused_args))

        # Execute pre-processing.
        if parsed_args.run_preprocessing:
            if len(self.preprocessingApplet.topLevelOperator) != 1:
                raise RuntimeError("Can't run preprocessing on a project with no images.")

            opPreprocessing = self.preprocessingApplet.topLevelOperator.getLane(0) # Carving has only one 'lane'

            # If user provided parameters, override the defaults.
            if parsed_args.preprocessing_sigma is not None:
                opPreprocessing.Sigma.setValue(parsed_args.preprocessing_sigma)

            if parsed_args.preprocessing_filter:
                filter_index = filter_indexes[parsed_args.preprocessing_filter]
                opPreprocessing.Filter.setValue(filter_index)

            logger.info("Running Preprocessing...")
            opPreprocessing.PreprocessedData[:].wait()
            logger.info("FINISHED Preprocessing...")

            logger.info("Saving project...")
            self._shell.projectManager.saveProject()
            logger.info("Done saving.")
Contributor: CVML, Project: ilastik, Lines of code: 104, Source file: carvingWorkflow.py
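
For reference, here is a hypothetical headless invocation that would exercise the preprocessing options parsed above. The flag names and filter choices come from the argparse definitions in this example, --headless and --project follow the pattern from Example 1, and the project path and sigma value are placeholders:

    python ilastik.py --headless --project=MyCarvingProject.ilp --run-preprocessing --preprocessing-sigma=1.6 --preprocessing-filter=bright-lines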

Example 4: CountingWorkflow

# Required import: from ilastik.applets.dataSelection import DataSelectionApplet
# Method demonstrated: ilastik.applets.dataSelection.DataSelectionApplet.configure_operator_with_parsed_args

#......... (preceding code omitted) .........
        
        # Connect Image pathway:
        # Input Image -> Features Op -> Prediction Op -> Export
        opBatchFeatures.InputImage.connect( opBatchInputs.Image )
        opBatchPredictionPipeline.FeatureImages.connect( opBatchFeatures.OutputImage )
        
        opBatchResults.SelectionNames.setValue( ['Probabilities'] )        
        # opBatchResults.Inputs is indexed by [lane][selection],
        # Use OpTranspose to allow connection.
        opTransposeBatchInputs = OpTransposeSlots( parent=self )
        opTransposeBatchInputs.OutputLength.setValue(0)
        opTransposeBatchInputs.Inputs.resize(1)
        opTransposeBatchInputs.Inputs[0].connect( opBatchPredictionPipeline.HeadlessPredictionProbabilities ) # selection 0
        
        # Now opTransposeBatchInputs.Outputs is level-2 indexed by [lane][selection]
        opBatchResults.Inputs.connect( opTransposeBatchInputs.Outputs )

        # We don't actually need the cached path in the batch pipeline.
        # Just connect the uncached features here to satisfy the operator.
        opBatchPredictionPipeline.CachedFeatureImages.connect( opBatchFeatures.OutputImage )

        self.opBatchPredictionPipeline = opBatchPredictionPipeline

    def onProjectLoaded(self, projectManager):
        """
        Overridden from Workflow base class.  Called by the Project Manager.
        
        If the user provided command-line arguments, use them to configure 
        the workflow for batch mode and export all results.
        (This workflow's headless mode supports only batch mode for now.)
        """
        # Configure the batch data selection operator.
        if self._batch_input_args and (self._batch_input_args.input_files or self._batch_input_args.raw_data):
            self.batchInputApplet.configure_operator_with_parsed_args( self._batch_input_args )
        
        # Configure the data export operator.
        if self._batch_export_args:
            self.batchResultsApplet.configure_operator_with_parsed_args( self._batch_export_args )

        if self._batch_input_args and self.countingApplet.topLevelOperator.classifier_cache._dirty:
            logger.warn("Your project file has no classifier.  "
                        "A new classifier will be trained for this run.")

        if self._headless:
            # In headless mode, let's see the messages from the training operator.
            logging.getLogger("lazyflow.operators.classifierOperators").setLevel(logging.DEBUG)
        
        if self._headless and self._batch_input_args and self._batch_export_args:
            # Make sure we're using the up-to-date classifier.
            self.countingApplet.topLevelOperator.FreezePredictions.setValue(False)

            csv_path = self.parsed_counting_workflow_args.csv_export_file
            if csv_path:
                logger.info( "Exporting Object Counts to {}".format(csv_path) )
                sys.stdout.write("Progress: ")
                sys.stdout.flush()
                def print_progress( progress ):
                    sys.stdout.write( "{:.1f} ".format( progress ) )
                    sys.stdout.flush()

                self.batchResultsApplet.progressSignal.connect(print_progress)
                req = self.batchResultsApplet.prepareExportObjectCountsToCsv( csv_path )
                req.wait()

                # Finished.
                sys.stdout.write("\n")
Contributor: stuarteberg, Project: ilastik, Lines of code: 70, Source file: countingWorkflow.py
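
Similarly, a hypothetical headless run that would reach the CSV-export branch above. The --csv-export-file flag name is inferred from the parsed attribute csv_export_file and should be verified against your ilastik version; the project path, output path, and trailing batch input files are placeholders:

    python ilastik.py --headless --project=MyCountingProject.ilp --csv-export-file=/tmp/object_counts.csv ~/batch_image1.h5 ~/batch_image2.h5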

Example 5: ObjectClassificationWorkflow

# Required import: from ilastik.applets.dataSelection import DataSelectionApplet
# Method demonstrated: ilastik.applets.dataSelection.DataSelectionApplet.configure_operator_with_parsed_args

#......... (preceding code omitted) .........

        # Connect the batch OUTPUT applet
        opBatchExport = self.batchExportApplet.topLevelOperator
        opBatchExport.RawData.connect( batchInputsRaw )
        opBatchExport.RawDatasetInfo.connect( opTransposeDatasetGroup.Outputs[0] )
        
        # See EXPORT_SELECTION_PREDICTIONS and EXPORT_SELECTION_PROBABILITIES, above
        opBatchExport.SelectionNames.setValue( ['Object Predictions', 'Object Probabilities'] )        
        # opBatchResults.Inputs is indexed by [lane][selection],
        # Use OpTranspose to allow connection.
        opTransposeBatchInputs = OpTransposeSlots( parent=self )
        opTransposeBatchInputs.OutputLength.setValue(0)
        opTransposeBatchInputs.Inputs.resize(2)
        opTransposeBatchInputs.Inputs[EXPORT_SELECTION_PREDICTIONS].connect( opBatchClassify.PredictionImage ) # selection 0
        opTransposeBatchInputs.Inputs[EXPORT_SELECTION_PROBABILITIES].connect( opBatchClassify.ProbabilityChannelImage ) # selection 1
        
        # Now opTransposeBatchInputs.Outputs is level-2 indexed by [lane][selection]
        opBatchExport.Inputs.connect( opTransposeBatchInputs.Outputs )

    def onProjectLoaded(self, projectManager):
        if self._headless and self._batch_input_args and self._batch_export_args:
            
            # Check for problems: Is the project file ready to use?
            opObjClassification = self.objectClassificationApplet.topLevelOperator
            if not opObjClassification.Classifier.ready():
                logger.error( "Can't run batch prediction.\n"
                              "Couldn't obtain a classifier from your project file: {}.\n"
                              "Please make sure your project is fully configured with a trained classifier."
                              .format(projectManager.currentProjectPath) )
                return

            # Configure the batch data selection operator.
            if self._batch_input_args and self._batch_input_args.raw_data:
                self.dataSelectionAppletBatch.configure_operator_with_parsed_args( self._batch_input_args )
            
            # Configure the data export operator.
            if self._batch_export_args:
                self.batchExportApplet.configure_operator_with_parsed_args( self._batch_export_args )

            self.opBatchClassify.BlockShape3dDict.disconnect()

            # For each BATCH lane...
            for lane_index, opBatchClassifyView in enumerate(self.opBatchClassify):
                # Force the block size to be the same as image size (1 big block)
                tagged_shape = opBatchClassifyView.RawImage.meta.getTaggedShape()
                try:
                    tagged_shape.pop('t')
                except KeyError:
                    pass
                try:
                    tagged_shape.pop('c')
                except KeyError:
                    pass
                opBatchClassifyView.BlockShape3dDict.setValue( tagged_shape )

                # For now, we force the entire result to be computed as one big block.
                # Force the batch classify op to create an internal pipeline for our block.
                opBatchClassifyView._ensurePipelineExists( (0,0,0,0,0) )
                opSingleBlockClassify = opBatchClassifyView._blockPipelines[(0,0,0,0,0)]

                # Export the images (if any)
                if self.input_types == 'raw':
                    # If pixel probabilities need export, do that first.
                    # (They are needed by the other outputs, anyway)
                    if self._export_args.export_pixel_probability_img:
                        self._export_batch_image( lane_index, EXPORT_SELECTION_PIXEL_PROBABILITIES, 'pixel-probability-img' )
Contributor: jakirkham, Project: ilastik, Lines of code: 70, Source file: objectClassificationWorkflow.py


Example 6: PixelClassificationWorkflow

# Required import: from ilastik.applets.dataSelection import DataSelectionApplet
# Method demonstrated: ilastik.applets.dataSelection.DataSelectionApplet.configure_operator_with_parsed_args

#......... (preceding code omitted) .........
    def handleAppletStateUpdateRequested(self):
        """
        Overridden from Workflow base class
        Called when an applet has fired the :py:attr:`Applet.appletStateUpdateRequested`
        """
        # If no data, nothing else is ready.
        opDataSelection = self.dataSelectionApplet.topLevelOperator
        input_ready = len(opDataSelection.ImageGroup) > 0 and not self.dataSelectionApplet.busy

        opFeatureSelection = self.featureSelectionApplet.topLevelOperator
        featureOutput = opFeatureSelection.OutputImage
        features_ready = input_ready and \
                         len(featureOutput) > 0 and  \
                         featureOutput[0].ready() and \
                         (TinyVector(featureOutput[0].meta.shape) > 0).all()

        opDataExport = self.dataExportApplet.topLevelOperator
        predictions_ready = features_ready and \
                            len(opDataExport.Input) > 0 and \
                            opDataExport.Input[0].ready() and \
                            (TinyVector(opDataExport.Input[0].meta.shape) > 0).all()

        # Problems can occur if the features or input data are changed during live update mode.
        # Don't let the user do that.
        opPixelClassification = self.pcApplet.topLevelOperator
        live_update_active = not opPixelClassification.FreezePredictions.value

        self._shell.setAppletEnabled(self.dataSelectionApplet, not live_update_active)
        self._shell.setAppletEnabled(self.featureSelectionApplet, input_ready and not live_update_active)
        self._shell.setAppletEnabled(self.pcApplet, features_ready)
        self._shell.setAppletEnabled(self.dataExportApplet, predictions_ready)

        if self.batchInputApplet is not None:
            # Training workflow must be fully configured before batch can be used
            self._shell.setAppletEnabled(self.batchInputApplet, predictions_ready)
    
            opBatchDataSelection = self.batchInputApplet.topLevelOperator
            batch_input_ready = predictions_ready and \
                                len(opBatchDataSelection.ImageGroup) > 0
            self._shell.setAppletEnabled(self.batchResultsApplet, batch_input_ready)
            
        # Lastly, check for certain "busy" conditions, during which we 
        #  should prevent the shell from closing the project.
        busy = False
        busy |= self.dataSelectionApplet.busy
        busy |= self.featureSelectionApplet.busy
        busy |= self.dataExportApplet.busy
        self._shell.enableProjectChanges( not busy )

    def getHeadlessOutputSlot(self, slotId):
        # "Regular" (i.e. with the images that the user selected as input data)
        if slotId == "Predictions":
            return self.pcApplet.topLevelOperator.HeadlessPredictionProbabilities
        elif slotId == "PredictionsUint8":
            return self.pcApplet.topLevelOperator.HeadlessUint8PredictionProbabilities
        # "Batch" (i.e. with the images that the user selected as batch inputs).
        elif slotId == "BatchPredictions":
            return self.opBatchPredictionPipeline.HeadlessPredictionProbabilities
        if slotId == "BatchPredictionsUint8":
            return self.opBatchPredictionPipeline.HeadlessUint8PredictionProbabilities
        
        raise Exception("Unknown headless output slot")
    
    def onProjectLoaded(self, projectManager):
        """
        Overridden from Workflow base class.  Called by the Project Manager.
        
        If the user provided command-line arguments, use them to configure 
        the workflow for batch mode and export all results.
        (This workflow's headless mode supports only batch mode for now.)
        """
        # Configure the batch data selection operator.
        if self._batch_input_args and self._batch_input_args.input_files: 
            self.batchInputApplet.configure_operator_with_parsed_args( self._batch_input_args )
        
        # Configure the data export operator.
        if self._batch_export_args:
            self.batchResultsApplet.configure_operator_with_parsed_args( self._batch_export_args )

        if self._headless and self._batch_input_args and self._batch_export_args:
            
            # Make sure we're using the up-to-date classifier.
            self.pcApplet.topLevelOperator.FreezePredictions.setValue(False)
        
            # Now run the batch export and report progress....
            opBatchDataExport = self.batchResultsApplet.topLevelOperator
            for i, opExportDataLaneView in enumerate(opBatchDataExport):
                print "Exporting result {} to {}".format(i, opExportDataLaneView.ExportPath.value)
    
                sys.stdout.write( "Result {}/{} Progress: ".format( i, len( opBatchDataExport ) ) )
                def print_progress( progress ):
                    sys.stdout.write( "{} ".format( progress ) )
    
                # If the operator provides a progress signal, use it.
                slotProgressSignal = opExportDataLaneView.progressSignal
                slotProgressSignal.subscribe( print_progress )
                opExportDataLaneView.run_export()
                
                # Finished.
                sys.stdout.write("\n")
Contributor: lfiaschi, Project: ilastik, Lines of code: 104, Source file: pixelClassificationWorkflow.py


Note: The ilastik.applets.dataSelection.DataSelectionApplet.configure_operator_with_parsed_args examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets are excerpted from open-source projects contributed by their authors, and copyright of the source code remains with the original authors; consult the corresponding project's license before redistributing or reusing it, and do not republish without permission.