

Tensorflow.js tf.train.Optimizer.computeGradients() Usage and Examples

Tensorflow.js is an open-source library developed by Google for running machine learning models and deep learning neural networks in the browser or in a Node.js environment.

The computeGradients() method executes f() and computes the gradients of the scalar output of f() with respect to the list of trainable variables provided by varList. If no list is provided, it defaults to all trainable variables.

Syntax:

Optimizer.computeGradients(f, varList?);

Parameters:

  • f (() => tf.Scalar): the function to execute, whose scalar output is used to compute gradients with respect to the variables.
  • varList (tf.Variable[]): an optional list of variables with respect to which the gradients are computed. If specified, gradients are computed only for the trainable variables in varList. Defaults to all trainable variables.

Return value: { value: tf.Scalar, grads: { [name: string]: tf.Tensor } }
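For orientation, here is a minimal sketch of what the returned object contains and how it pairs with applyGradients(). It uses a Node-style require and assumes @tensorflow/tfjs is installed; the variable w and plain SGD optimizer are chosen only for illustration:

Javascript

const tf = require("@tensorflow/tfjs");

const w = tf.scalar(2).variable();    // trainable variable, w = 2
const optimizer = tf.train.sgd(0.1);  // plain SGD, learning rate 0.1

// f() = w^2, so df/dw = 2w = 4 at w = 2.
const {value, grads} = optimizer.computeGradients(() => w.square());

console.log(value.dataSync()[0]);          // loss value: 4
console.log(grads[w.name].dataSync()[0]);  // gradient: 4

// computeGradients() does not update variables; applyGradients() does.
optimizer.applyGradients(grads);
console.log(w.dataSync()[0]);              // w - 0.1 * 4 = 1.6

The grads object is keyed by variable name, which is why it can be passed directly to applyGradients().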



Example 1:

Javascript


// Importing tensorflow
import * as tf from "@tensorflow/tfjs"

const xs = tf.tensor1d([3, 4, 5]);
const ys = tf.tensor1d([3.5, 4.7, 5.3]);

const x = tf.scalar(Math.random()).variable();
const y = tf.scalar(Math.random()).variable();

// Define a function f(x) = (x^2) - y.
const f = x => (x.square()).sub(y);
const loss = (pred, label) =>
    pred.sub(label).square().mean();

const learningRate = 0.05;

// Create the adam optimizer
const optimizer = tf.train.adam(learningRate);

// Compute the gradients of the loss. Note that computeGradients()
// only computes them; it does not update x and y.
for (let i = 0; i < 6; i++) {
  optimizer.computeGradients(() => loss(f(xs), ys));
}

// Make predictions.
console.log(`x:${x.dataSync()}, y:${y.dataSync()}`);
const preds = f(xs).dataSync();
preds.forEach((pred, i) => {
  console.log(`x:${i}, pred:${pred}`);
});

Output:

x:0.38272422552108765, y:0.7651948928833008
x:0, pred:8.2348051071167
x:1, pred:15.2348051071167
x:2, pred:24.234806060791016
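In the output above, x and y are still their initial random values: computeGradients() never applies the gradients it computes. A minimal sketch of actually updating the variable with applyGradients(), on the same data (Node-style require shown, assuming @tensorflow/tfjs is installed; y starts at 0 here to make the run deterministic):

Javascript

const tf = require("@tensorflow/tfjs");

const xs = tf.tensor1d([3, 4, 5]);
const ys = tf.tensor1d([3.5, 4.7, 5.3]);

const y = tf.scalar(0).variable();
const f = x => x.square().sub(y);
const loss = (pred, label) => pred.sub(label).square().mean();

const optimizer = tf.train.adam(0.05);

// Compute AND apply the gradients on every step.
for (let i = 0; i < 100; i++) {
  const {grads} = optimizer.computeGradients(() => loss(f(xs), ys));
  optimizer.applyGradients(grads);
}

// y now moves toward the least-squares value mean(x^2 - label) ≈ 12.17.
console.log(`y:${y.dataSync()}`);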

Example 2:

Javascript


// Importing tensorflow
import * as tf from "@tensorflow/tfjs"

const xs = tf.tensor1d([0, 1, 2, 3]);
const ys = tf.tensor1d([1.3, 3.7, 12.4, 26.6]);

// Choosing random coefficients
const a = tf.scalar(Math.random()).variable();
const b = tf.scalar(Math.random()).variable();
const c = tf.scalar(Math.random()).variable();

// Define the function f(x) = 3*a*x + b^2 + c
const f = x => a.mul(x.mul(3)).add(b.square()).add(c);
const loss = (pred, label) => pred.sub(label).square().mean();

// Set the configuration for our optimizer
const learningRate = 0.01;
const initialAccumulatorValue = 10;

// Create the optimizer
const optimizer = tf.train.adagrad(learningRate,
    initialAccumulatorValue);

// Compute the gradients of the loss. As above, computeGradients()
// does not update a, b and c.
for (let i = 0; i < 5; i++) {
  optimizer.computeGradients(() => loss(f(xs), ys));
}

// Make predictions.
console.log(`a:${a.dataSync()},
    b:${b.dataSync()}, c:${c.dataSync()}`);
const preds = f(xs).dataSync();
preds.forEach((pred, i) => {
  console.log(`x:${i}, pred:${pred}`);
});

Output:

a:0.22211307287216187,
b:0.2304522693157196,
c:0.42621928453445435
x:0, pred:0.479327529668808
x:1, pred:1.1456668376922607
x:2, pred:1.8120059967041016
x:3, pred:2.4783451557159424
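For ordinary training, tf.train.Optimizer.minimize() is the usual shorthand: it computes the gradients and applies them in a single call. A minimal sketch (Node-style require shown, assuming @tensorflow/tfjs is installed; the variable w and SGD optimizer are illustrative choices):

Javascript

const tf = require("@tensorflow/tfjs");

const w = tf.scalar(5).variable();
const optimizer = tf.train.sgd(0.1);

// One step on f() = w^2: the gradient is 2w = 10, so w <- 5 - 0.1 * 10 = 4.
optimizer.minimize(() => w.square());
console.log(w.dataSync()[0]); // 4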

Reference: https://js.tensorflow.org/api/latest/#tf.train.Optimizer.computeGradients






Note: This article was selected and compiled by 純淨天空 from the original English work Tensorflow.js tf.train.Optimizer class .computeGradients() Method by satyam00so. Unless otherwise stated, the copyright of the original code belongs to the original author; please do not reproduce or copy this translation without permission or authorization.