Differentiating the cost
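With the linear hypothesis H(x) = Wx, the cost and its derivative with respect to W are (standard calculus, stated here for reference):

$$\operatorname{cost}(W) = \frac{1}{m}\sum_{i=1}^{m}(Wx_i - y_i)^2, \qquad \frac{\partial \operatorname{cost}}{\partial W} = \frac{2}{m}\sum_{i=1}^{m}(Wx_i - y_i)\,x_i$$

Gradient descent repeatedly applies W := W - α · ∂cost/∂W. The code below uses the mean of (W·x - y)·x without the factor of 2; since 2 is a constant, it is simply absorbed into the learning rate α.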
minimizing_cost_gradient_update.py
import tensorflow as tf

x_data = [1, 2, 3]
y_data = [1, 2, 3]

W = tf.Variable(tf.random_normal([1]), name='weight')
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

# Our hypothesis for linear model X * W
hypothesis = X * W

# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Minimize: gradient descent using the derivative: W -= learning_rate * derivative
learning_rate = 0.1
gradient = tf.reduce_mean((W * X - Y) * X)
descent = W - learning_rate * gradient
update = W.assign(descent)

# Launch the graph in a session.
sess = tf.Session()
# Initialize global variables in the graph.
sess.run(tf.global_variables_initializer())

for step in range(21):
    sess.run(update, feed_dict={X: x_data, Y: y_data})
    print(step, sess.run(cost, feed_dict={X: x_data, Y: y_data}), sess.run(W))
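As a sanity check, the same manual update can be reproduced outside TensorFlow. The following is a minimal NumPy sketch (an addition for illustration, not part of the original post) of the identical rule; with x = y, W converges to 1.0 just like the graph version above:

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])

W = np.random.randn()        # random initial weight, like tf.random_normal([1])
learning_rate = 0.1

for step in range(21):
    gradient = np.mean((W * x - y) * x)   # same d(cost)/dW, factor of 2 folded into the rate
    W -= learning_rate * gradient
    cost = np.mean((W * x - y) ** 2)
    print(step, cost, W)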
minimizing_tf_optimizer.py
import tensorflow as tf

# tf Graph input
X = [1, 2, 3]
Y = [1, 2, 3]

# Set a wrong model weight on purpose
W = tf.Variable(5.0)

# Linear model
hypothesis = X * W

# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Minimize: let the optimizer handle the gradient descent "magic"
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train = optimizer.minimize(cost)

# Launch the graph in a session.
sess = tf.Session()
# Initialize global variables in the graph.
sess.run(tf.global_variables_initializer())

for step in range(100):
    print(step, sess.run(W))
    sess.run(train)
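To peek at what the optimizer is doing under the hood, TF 1.x also lets you split minimize() into compute_gradients() and apply_gradients(). Here is a short sketch (an illustrative variant, not from the original post) that prints the gradient alongside W:

import tensorflow as tf

X = [1, 2, 3]
Y = [1, 2, 3]
W = tf.Variable(5.0)

hypothesis = X * W
cost = tf.reduce_mean(tf.square(hypothesis - Y))

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
# compute_gradients returns (gradient, variable) pairs instead of applying them
gvs = optimizer.compute_gradients(cost)
train = optimizer.apply_gradients(gvs)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for step in range(10):
    print(step, sess.run(gvs), sess.run(W))
    sess.run(train)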