

Java GraphicOverlay Class Code Examples

This article collects typical usage examples of the Java class com.google.android.gms.samples.vision.ocrreader.ui.camera.GraphicOverlay. If you are wondering what GraphicOverlay is for, how to use it, or want to see it in context, the curated class examples below may help.


The GraphicOverlay class belongs to the com.google.android.gms.samples.vision.ocrreader.ui.camera package. Seven code examples of the class are shown below, sorted by popularity by default.
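
A quick orientation before the examples: in the android-vision ocr-reader sample, GraphicOverlay is the view drawn on top of the camera preview (CameraSourcePreview), OcrGraphic instances are the items it renders, and OcrDetectorProcessor is the detector callback that adds them. The onCreate examples below call a createCameraSource(autoFocus, useFlash) helper that is not shown; the following sketch, loosely based on the upstream sample, shows how that helper typically wires a TextRecognizer and the sample's CameraSource to the overlay. mCameraSource is an assumed field, and the flash/focus builder methods are those of the sample's local ui.camera.CameraSource class.

import android.content.Context;
import android.hardware.Camera;

import com.google.android.gms.samples.vision.ocrreader.ui.camera.CameraSource;
import com.google.android.gms.vision.text.TextRecognizer;

// Sketch only: createCameraSource() as it might look in the activities below.
private void createCameraSource(boolean autoFocus, boolean useFlash) {
    Context context = getApplicationContext();

    // The recognizer feeds detected TextBlocks to the processor, which in
    // turn adds OcrGraphic instances to mGraphicOverlay (see Example 6).
    TextRecognizer textRecognizer = new TextRecognizer.Builder(context).build();
    textRecognizer.setProcessor(new OcrDetectorProcessor(mGraphicOverlay));

    // mCameraSource is an assumed field; setFlashMode/setFocusMode belong to
    // the sample's ui.camera.CameraSource builder, not the Play services one.
    mCameraSource = new CameraSource.Builder(context, textRecognizer)
            .setFacing(CameraSource.CAMERA_FACING_BACK)
            .setRequestedPreviewSize(1280, 1024)
            .setRequestedFps(15.0f)
            .setFlashMode(useFlash ? Camera.Parameters.FLASH_MODE_TORCH : null)
            .setFocusMode(autoFocus ? Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO : null)
            .build();
}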

Example 1: OcrGraphic

import com.google.android.gms.samples.vision.ocrreader.ui.camera.GraphicOverlay; // import the required package/class
OcrGraphic(GraphicOverlay overlay, TextBlock text) {
    super(overlay);

    mText = text;

    if (sRectPaint == null) {
        sRectPaint = new Paint();
        sRectPaint.setColor(TEXT_COLOR);
        sRectPaint.setStyle(Paint.Style.STROKE);
        sRectPaint.setStrokeWidth(1.0f);
    }

    if (sTextPaint == null) {
        sTextPaint = new Paint();
        sTextPaint.setColor(TEXT_COLOR);
        sTextPaint.setTextSize(13.6f);
    }
    // Redraw the overlay, as this graphic has been added.
    postInvalidate();
}
 
Developer ID: thegenuinegourav, Project: Questor, Lines of code: 21, Source: OcrGraphic.java

Example 2: OcrGraphic

import com.google.android.gms.samples.vision.ocrreader.ui.camera.GraphicOverlay; // import the required package/class
OcrGraphic(GraphicOverlay overlay, TextBlock text) {
    super(overlay);

    mText = text;

    if (sRectPaint == null) {
        sRectPaint = new Paint();
        sRectPaint.setColor(TEXT_COLOR);
        sRectPaint.setStyle(Paint.Style.STROKE);
        sRectPaint.setStrokeWidth(4.0f);
    }

    if (sTextPaint == null) {
        sTextPaint = new Paint();
        sTextPaint.setColor(TEXT_COLOR);
        sTextPaint.setTextSize(54.0f);
    }
    // Redraw the overlay, as this graphic has been added.
    postInvalidate();
}
 
Developer ID: JimSeker, Project: googleplayAPI, Lines of code: 21, Source: OcrGraphic.java
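
Examples 1 and 2 show only the OcrGraphic constructor; the paints it sets up are used by the draw(Canvas) override that GraphicOverlay invokes for every graphic it holds. The following sketch of that override is based on the upstream android-vision sample: translateX()/translateY() are the base Graphic class's helpers for mapping detector coordinates to view coordinates, and Text/RectF come from com.google.android.gms.vision.text.Text and android.graphics.RectF.

// Sketch only: the draw() override that typically accompanies the constructor above.
@Override
public void draw(Canvas canvas) {
    TextBlock text = mText;
    if (text == null || text.getBoundingBox() == null) {
        return;
    }

    // Draw the bounding box around the whole TextBlock, translated from
    // detector coordinates into overlay (view) coordinates.
    RectF rect = new RectF(text.getBoundingBox());
    rect.left = translateX(rect.left);
    rect.top = translateY(rect.top);
    rect.right = translateX(rect.right);
    rect.bottom = translateY(rect.bottom);
    canvas.drawRect(rect, sRectPaint);

    // Draw each recognized line of text at its own position.
    for (Text line : text.getComponents()) {
        float left = translateX(line.getBoundingBox().left);
        float bottom = translateY(line.getBoundingBox().bottom);
        canvas.drawText(line.getValue(), left, bottom, sTextPaint);
    }
}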

Example 3: onCreate

import com.google.android.gms.samples.vision.ocrreader.ui.camera.GraphicOverlay; // import the required package/class
/**
 * Initializes the UI and creates the detector pipeline.
 */
@Override
public void onCreate(Bundle icicle) {
    super.onCreate(icicle);
    setContentView(R.layout.ocr_capture);

    mToolbar = (Toolbar) findViewById(R.id.toolbar);
    setSupportActionBar(mToolbar);

    mPreview = (CameraSourcePreview) findViewById(R.id.preview);
    mGraphicOverlay = (GraphicOverlay<OcrGraphic>) findViewById(R.id.graphicOverlay);

    // read parameters from the intent used to launch the activity.
    boolean autoFocus = getIntent().getBooleanExtra(AutoFocus, false);
    boolean useFlash = getIntent().getBooleanExtra(UseFlash, false);

    // Check for the camera permission before accessing the camera.  If the
    // permission is not granted yet, request permission.
    int rc = ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA);
    if (rc == PackageManager.PERMISSION_GRANTED) {
        createCameraSource(autoFocus, useFlash);
    } else {
        requestCameraPermission();
    }

    gestureDetector = new GestureDetector(this, new CaptureGestureListener());
    scaleGestureDetector = new ScaleGestureDetector(this, new ScaleListener());

    Snackbar.make(mGraphicOverlay, "Tap to capture. Pinch/Stretch to zoom",
            Snackbar.LENGTH_LONG)
            .show();
}
 
Developer ID: thegenuinegourav, Project: Questor, Lines of code: 35, Source: OcrCaptureActivity.java
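
Example 3 (and the variants below) branch to requestCameraPermission() when the CAMERA permission has not yet been granted. That helper is not part of the example; here is a sketch of how the upstream sample implements it, using the GraphicOverlay as the Snackbar anchor. RC_HANDLE_CAMERA_PERM is an assumed request-code constant, and the literal strings stand in for the sample's string resources.

// Sketch only: the permission-request helper called from onCreate() above.
private static final int RC_HANDLE_CAMERA_PERM = 2;  // assumed request code

private void requestCameraPermission() {
    final String[] permissions = new String[]{Manifest.permission.CAMERA};

    // Request directly unless the system recommends showing a rationale first.
    if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.CAMERA)) {
        ActivityCompat.requestPermissions(this, permissions, RC_HANDLE_CAMERA_PERM);
        return;
    }

    // Otherwise explain why the permission is needed, then request it on tap.
    final Activity thisActivity = this;
    Snackbar.make(mGraphicOverlay, "Camera access is required to recognize text",
            Snackbar.LENGTH_INDEFINITE)
            .setAction("OK", new View.OnClickListener() {
                @Override
                public void onClick(View view) {
                    ActivityCompat.requestPermissions(thisActivity, permissions,
                            RC_HANDLE_CAMERA_PERM);
                }
            })
            .show();
}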

Example 4: onCreate

import com.google.android.gms.samples.vision.ocrreader.ui.camera.GraphicOverlay; // import the required package/class
/**
 * Initializes the UI and creates the detector pipeline.
 */
@Override
public void onCreate(Bundle icicle) {
    super.onCreate(icicle);
    setContentView(R.layout.ocr_capture);

    mPreview = (CameraSourcePreview) findViewById(R.id.preview);
    mGraphicOverlay = (GraphicOverlay<OcrGraphic>) findViewById(R.id.graphicOverlay);

    // read parameters from the intent used to launch the activity.
    boolean autoFocus = getIntent().getBooleanExtra(AutoFocus, false);
    boolean useFlash = getIntent().getBooleanExtra(UseFlash, false);

    // Check for the camera permission before accessing the camera.  If the
    // permission is not granted yet, request permission.
    int rc = ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA);
    if (rc == PackageManager.PERMISSION_GRANTED) {
        createCameraSource(autoFocus, useFlash);
    } else {
        requestCameraPermission();
    }

    gestureDetector = new GestureDetector(this, new CaptureGestureListener());
    scaleGestureDetector = new ScaleGestureDetector(this, new ScaleListener());

    Snackbar.make(mGraphicOverlay, "Tap to capture. Pinch/Stretch to zoom",
            Snackbar.LENGTH_LONG)
            .show();
}
 
Developer ID: JimSeker, Project: googleplayAPI, Lines of code: 32, Source: OcrCaptureActivity.java

Example 5: onCreate

import com.google.android.gms.samples.vision.ocrreader.ui.camera.GraphicOverlay; // import the required package/class
/**
 * Initializes the UI and creates the detector pipeline.
 */
@Override
public void onCreate(Bundle bundle) {
    super.onCreate(bundle);
    setContentView(R.layout.ocr_capture);

    mPreview = (CameraSourcePreview) findViewById(R.id.preview);
    mGraphicOverlay = (GraphicOverlay<OcrGraphic>) findViewById(R.id.graphicOverlay);

    // Set good defaults for capturing text.
    boolean autoFocus = true;
    boolean useFlash = false;

    // Check for the camera permission before accessing the camera.  If the
    // permission is not granted yet, request permission.
    int rc = ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA);
    if (rc == PackageManager.PERMISSION_GRANTED) {
        createCameraSource(autoFocus, useFlash);
    } else {
        requestCameraPermission();
    }

    gestureDetector = new GestureDetector(this, new CaptureGestureListener());
    scaleGestureDetector = new ScaleGestureDetector(this, new ScaleListener());

    Snackbar.make(mGraphicOverlay, "Tap to Speak. Pinch/Stretch to zoom",
            Snackbar.LENGTH_LONG)
            .show();

    // TODO: Set up the Text To Speech engine.
}
 
Developer ID: googlesamples, Project: android-vision, Lines of code: 34, Source: OcrCaptureActivity.java

Example 6: OcrDetectorProcessor

import com.google.android.gms.samples.vision.ocrreader.ui.camera.GraphicOverlay; // import the required package/class
OcrDetectorProcessor(GraphicOverlay<OcrGraphic> ocrGraphicOverlay) {
    mGraphicOverlay = ocrGraphicOverlay;
}
 
Developer ID: thegenuinegourav, Project: Questor, Lines of code: 4, Source: OcrDetectorProcessor.java
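
Example 6 shows only the constructor of OcrDetectorProcessor. The class implements Detector.Processor<TextBlock>, and its receiveDetections() callback is where the GraphicOverlay is actually populated: the previous frame's graphics are cleared and one OcrGraphic is added per detected TextBlock. A sketch based on the upstream sample (SparseArray is android.util.SparseArray; Detector comes from com.google.android.gms.vision):

// Sketch only: the callbacks that typically accompany the constructor above.
@Override
public void receiveDetections(Detector.Detections<TextBlock> detections) {
    // Replace the previous frame's graphics with one OcrGraphic per TextBlock.
    mGraphicOverlay.clear();
    SparseArray<TextBlock> items = detections.getDetectedItems();
    for (int i = 0; i < items.size(); ++i) {
        TextBlock item = items.valueAt(i);
        if (item != null && item.getValue() != null) {
            mGraphicOverlay.add(new OcrGraphic(mGraphicOverlay, item));
        }
    }
}

@Override
public void release() {
    // Clear the overlay when the associated detector is released.
    mGraphicOverlay.clear();
}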

Example 7: onCreate

import com.google.android.gms.samples.vision.ocrreader.ui.camera.GraphicOverlay; // import the required package/class
/**
 * Initializes the UI and creates the detector pipeline.
 */
@Override
public void onCreate(Bundle bundle) {
    super.onCreate(bundle);
    setContentView(R.layout.ocr_capture);

    mPreview = (CameraSourcePreview) findViewById(R.id.preview);
    mGraphicOverlay = (GraphicOverlay<OcrGraphic>) findViewById(R.id.graphicOverlay);

    // Set good defaults for capturing text.
    boolean autoFocus = true;
    boolean useFlash = false;

    // Check for the camera permission before accessing the camera.  If the
    // permission is not granted yet, request permission.
    int rc = ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA);
    if (rc == PackageManager.PERMISSION_GRANTED) {
        createCameraSource(autoFocus, useFlash);
    } else {
        requestCameraPermission();
    }

    gestureDetector = new GestureDetector(this, new CaptureGestureListener());
    scaleGestureDetector = new ScaleGestureDetector(this, new ScaleListener());

    Snackbar.make(mGraphicOverlay, "Tap to Speak. Pinch/Stretch to zoom",
            Snackbar.LENGTH_LONG)
            .show();

    // Set up the Text To Speech engine.
    TextToSpeech.OnInitListener listener =
            new TextToSpeech.OnInitListener() {
                @Override
                public void onInit(final int status) {
                    if (status == TextToSpeech.SUCCESS) {
                        Log.d("OnInitListener", "Text to speech engine started successfully.");
                        tts.setLanguage(Locale.US);
                    } else {
                        Log.d("OnInitListener", "Error starting the text to speech engine.");
                    }
                }
            };
    tts = new TextToSpeech(this.getApplicationContext(), listener);
}
 
Developer ID: googlesamples, Project: android-vision, Lines of code: 47, Source: OcrCaptureActivity.java
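
Example 7 initializes the tts engine but does not show where it is used. In the OCR codelab this example follows, speech happens in an onTap() helper invoked by the CaptureGestureListener: the graphic under the tap is looked up on the overlay and its text is read aloud. The sketch below assumes a getGraphicAtLocation() helper on GraphicOverlay and a getTextBlock() accessor on OcrGraphic, neither of which appears in the examples above, so treat both as assumptions; TextToSpeech#speak with a Bundle and utterance ID requires API 21+.

// Sketch only: a tap handler that speaks the tapped TextBlock, assuming the
// codelab's getGraphicAtLocation()/getTextBlock() helpers exist.
private boolean onTap(float rawX, float rawY) {
    OcrGraphic graphic = mGraphicOverlay.getGraphicAtLocation(rawX, rawY);
    TextBlock text = null;
    if (graphic != null) {
        text = graphic.getTextBlock();
        if (text != null && text.getValue() != null) {
            tts.speak(text.getValue(), TextToSpeech.QUEUE_ADD, null, "DEFAULT");
        }
    }
    return text != null;
}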


Note: The com.google.android.gms.samples.vision.ocrreader.ui.camera.GraphicOverlay class examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets are selected from open-source projects contributed by their respective authors, and the copyright of the source code remains with those authors; for distribution and use, please refer to each project's license. Do not repost without permission.