
[ML] Minimizing Cost in Linear Regression

by graygreat 2018. 5. 8.


  Differentiating the cost
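Differentiating the squared-error cost with respect to W gives the gradient that the first script below implements by hand. Writing the data as pairs \((x_i, y_i)\):

\[
\mathrm{cost}(W) = \frac{1}{m}\sum_{i=1}^{m}\left(Wx_i - y_i\right)^2,
\qquad
\frac{\partial\,\mathrm{cost}}{\partial W} = \frac{2}{m}\sum_{i=1}^{m}\left(Wx_i - y_i\right)x_i
\]

Gradient descent then repeats the update \(W \leftarrow W - \alpha\,\partial\mathrm{cost}/\partial W\). In the code, the constant factor 2 is dropped; it only rescales the learning rate \(\alpha\).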




minimizing_cost_gradient_update.py

import tensorflow as tf
 
x_data = [1, 2, 3]
y_data = [1, 2, 3]
 
W = tf.Variable(tf.random_normal([1]), name='weight')
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
 
# Our hypothesis for linear model X * W
hypothesis = X * W
 
# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))
 
# Minimize: Gradient Descent using the derivative: W -= learning_rate * derivative
learning_rate = 0.1
# d(cost)/dW, with the constant factor 2 folded into the learning rate
gradient = tf.reduce_mean((W * X - Y) * X)
descent = W - learning_rate * gradient
update = W.assign(descent)
 
# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())
 
for step in range(21):
    sess.run(update, feed_dict={X:x_data, Y:y_data})
    print(step, sess.run(cost, feed_dict={X:x_data, Y:y_data}), sess.run(W))
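
Since the data satisfy y = x exactly, the cost drops toward 0 and W converges to 1.0 well within the 21 steps. The script above targets TensorFlow 1.x (tf.placeholder and tf.Session were removed in 2.x); as a rough sketch, the same hand-written update in TensorFlow 2 eager mode could look like this (the filename is just illustrative):

minimizing_cost_gradient_update_tf2.py

import tensorflow as tf  # assumes TensorFlow 2.x

x_data = tf.constant([1.0, 2.0, 3.0])
y_data = tf.constant([1.0, 2.0, 3.0])

W = tf.Variable(tf.random.normal([1]), name='weight')
learning_rate = 0.1

for step in range(21):
    # Hand-written gradient; the factor 2 is folded into the learning rate
    gradient = tf.reduce_mean((W * x_data - y_data) * x_data)
    W.assign(W - learning_rate * gradient)
    cost = tf.reduce_mean(tf.square(W * x_data - y_data))
    print(step, cost.numpy(), W.numpy())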



minimizing_tf_optimizer.py

import tensorflow as tf
 
# tf Graph Input
X = [1, 2, 3]
Y = [1, 2, 3]
 
# Set the weight to a deliberately wrong starting value
W = tf.Variable(5.0)
 
# Linear model
hypothesis = X * W
 
# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))
 
# Minimize: Gradient Descent Magic
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train = optimizer.minimize(cost)
 
# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())
 
for step in range(100):
    print(step, sess.run(W))
    sess.run(train)
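
Here W starts at the deliberately wrong value 5.0 and the optimizer pulls it to 1.0; note the loop prints W before each update, so step 0 shows 5.0. For reference, a minimal TensorFlow 2 sketch of the same optimizer-driven loop, with tf.GradientTape standing in for the session (again, the filename is just illustrative):

minimizing_tf_optimizer_tf2.py

import tensorflow as tf  # assumes TensorFlow 2.x

X = tf.constant([1.0, 2.0, 3.0])
Y = tf.constant([1.0, 2.0, 3.0])

W = tf.Variable(5.0)  # deliberately wrong starting weight
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for step in range(100):
    print(step, W.numpy())
    with tf.GradientTape() as tape:
        cost = tf.reduce_mean(tf.square(X * W - Y))
    grads = tape.gradient(cost, [W])  # autodiff replaces the hand-written gradient
    optimizer.apply_gradients(zip(grads, [W]))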

