Neural Net
XOR with logistic regression
Source code
import tensorflow as tf
import numpy as np

tf.set_random_seed(777)  # for reproducibility
learning_rate = 0.1

x_data = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_data = [[0], [1], [1], [0]]
x_data = np.array(x_data, dtype=np.float32)
y_data = np.array(y_data, dtype=np.float32)

X = tf.placeholder(tf.float32, [None, 2])
Y = tf.placeholder(tf.float32, [None, 1])

W = tf.Variable(tf.random_normal([2, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Hypothesis using sigmoid: tf.div(1., 1. + tf.exp(tf.matmul(X, W)))
hypothesis = tf.sigmoid(tf.matmul(X, W) + b)

# cost/loss function
cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
train = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)

# Accuracy computation
# True if hypothesis > 0.5 else False
predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))

# Launch graph
with tf.Session() as sess:
    # Initialize TensorFlow variables
    sess.run(tf.global_variables_initializer())

    for step in range(10001):
        sess.run(train, feed_dict={X: x_data, Y: y_data})
        if step % 100 == 0:
            print(step, sess.run(cost, feed_dict={X: x_data, Y: y_data}),
                  sess.run(W))

    # Accuracy report
    h, c, a = sess.run([hypothesis, predicted, accuracy],
                       feed_dict={X: x_data, Y: y_data})
    print("\nHypothesis: ", h, "\nCorrect: ", c, "\nAccuracy: ", a)
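Note: this code uses the TensorFlow 1.x graph API (tf.placeholder, tf.Session). If you only have TensorFlow 2.x installed (an assumption about your environment, not part of the original post), a common workaround is the v1 compatibility shim:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # restores placeholders, sessions, etc.

With those two lines replacing the plain import, the rest of the script runs unchanged.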
Execution result
Looking at the output, the accuracy is poor even though the X and Y data are given correctly, and it stays low even after 10,000 training steps. The reason is that a single logistic regression unit draws only one linear decision boundary, and the four XOR points are not linearly separable, so no choice of W and b can classify them all.

How do we solve this? By using a neural net. Stacking a second layer lets the model combine two linear boundaries into a non-linear one, which is enough to compute XOR, as the sketch below illustrates.
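To see why two layers suffice, here is a hand-crafted sketch (the weights are chosen by hand for illustration, not taken from the trained model): one hidden unit computes OR, the other AND, and the output combines them as OR AND (NOT AND), which is exactly XOR.

import numpy as np

def step(z):
    # hard threshold instead of sigmoid, to keep the logic exact
    return (z > 0).astype(np.float32)

x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)

# hidden layer: unit 1 acts as OR(x1, x2), unit 2 as AND(x1, x2)
W1 = np.array([[1, 1], [1, 1]], dtype=np.float32)
b1 = np.array([-0.5, -1.5], dtype=np.float32)
h = step(x @ W1 + b1)

# output layer: OR AND (NOT AND) == XOR
W2 = np.array([[1], [-1]], dtype=np.float32)
b2 = np.array([-0.5], dtype=np.float32)
print(step(h @ W2 + b2).ravel())  # [0. 1. 1. 0.]

The trained network below finds weights of this flavor on its own via gradient descent.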
Source code
import tensorflow as tf
import numpy as np

tf.set_random_seed(777)  # for reproducibility
learning_rate = 0.1

x_data = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_data = [[0], [1], [1], [0]]
x_data = np.array(x_data, dtype=np.float32)
y_data = np.array(y_data, dtype=np.float32)

X = tf.placeholder(tf.float32, [None, 2])
Y = tf.placeholder(tf.float32, [None, 1])

# First (hidden) layer: 2 inputs -> 2 units
W1 = tf.Variable(tf.random_normal([2, 2]), name='weight1')
b1 = tf.Variable(tf.random_normal([2]), name='bias1')
layer1 = tf.sigmoid(tf.matmul(X, W1) + b1)

# Second (output) layer: 2 units -> 1 output
W2 = tf.Variable(tf.random_normal([2, 1]), name='weight2')
b2 = tf.Variable(tf.random_normal([1]), name='bias2')
hypothesis = tf.sigmoid(tf.matmul(layer1, W2) + b2)

# cost/loss function
cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
train = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)

# Accuracy computation
# True if hypothesis > 0.5 else False
predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))

# Launch graph
with tf.Session() as sess:
    # Initialize TensorFlow variables
    sess.run(tf.global_variables_initializer())

    for step in range(10001):
        sess.run(train, feed_dict={X: x_data, Y: y_data})
        if step % 100 == 0:
            print(step, sess.run(cost, feed_dict={X: x_data, Y: y_data}),
                  sess.run([W1, W2]))

    # Accuracy report
    h, c, a = sess.run([hypothesis, predicted, accuracy],
                       feed_dict={X: x_data, Y: y_data})
    print("\nHypothesis: ", h, "\nCorrect: ", c, "\nAccuracy: ", a)
Execution result
We can see that the accuracy has become perfect: with one hidden layer, the network learns XOR.
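For reference, here is a minimal sketch of the same 2-2-1 network in modern TF 2.x / Keras (assuming TensorFlow 2.x; the layer sizes, sigmoid activations, SGD optimizer, and learning rate mirror the code above, though convergence can vary with initialization):

import numpy as np
import tensorflow as tf

x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y_data = np.array([[0], [1], [1], [0]], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, activation='sigmoid', input_shape=(2,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
              loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_data, y_data, epochs=10000, verbose=0)
print(model.predict(x_data))  # should approach [0, 1, 1, 0]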