MNIST
epoch: one full pass over the entire dataset counts as 1 epoch.
batch size: when the dataset is too large to train on all at once, it is split into chunks and fed to the model piece by piece; the size of each chunk is the batch size.
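As a quick sanity check on how these two numbers interact, here is a minimal sketch in plain Python. The 55,000-image size of the standard MNIST training split is an assumption here; in the script below it comes from mnist.train.num_examples.

# Sketch of the epoch/batch arithmetic (55,000 training images assumed).
num_examples = 55000        # size of the MNIST training split (assumption)
batch_size = 100            # images fed to one gradient step
training_epochs = 15        # full passes over the dataset

iterations_per_epoch = num_examples // batch_size           # 550 gradient steps per epoch
total_iterations = iterations_per_epoch * training_epochs   # 8,250 steps for the whole run
print(iterations_per_epoch, total_iterations)               # 550 8250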
Source code
import tensorflow as tf
import random
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

nb_classes = 10

# MNIST data image of shape 28 * 28 = 784
X = tf.placeholder(tf.float32, [None, 784])
# 0 ~ 9 digits recognition = 10 classes
Y = tf.placeholder(tf.float32, [None, nb_classes])

W = tf.Variable(tf.random_normal([784, nb_classes]))
b = tf.Variable(tf.random_normal([nb_classes]))

hypothesis = tf.nn.softmax(tf.matmul(X, W) + b)

cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)

# Test model
is_correct = tf.equal(tf.argmax(hypothesis, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))

# parameters
training_epochs = 15
batch_size = 100

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for epoch in range(training_epochs):
        avg_cost = 0
        total_batch = int(mnist.train.num_examples / batch_size)

        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            c, _ = sess.run([cost, optimizer], feed_dict={X: batch_xs, Y: batch_ys})
            avg_cost += c / total_batch

        print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.9f}'.format(avg_cost))

    print("Learning finished")

    # Test the model using test sets
    print("Accuracy: ", accuracy.eval(session=sess, feed_dict={
        X: mnist.test.images, Y: mnist.test.labels}))

    # Get one and predict
    r = random.randint(0, mnist.test.num_examples - 1)
    print("Label: ", sess.run(tf.argmax(mnist.test.labels[r:r + 1], 1)))
    print("Prediction: ", sess.run(
        tf.argmax(hypothesis, 1), feed_dict={X: mnist.test.images[r:r + 1]}))

    plt.imshow(
        mnist.test.images[r:r + 1].reshape(28, 28),
        cmap='Greys', interpolation='nearest')
    plt.show()
accuracy.eval(session=sess, ...) and sess.run(accuracy, ...) are equivalent: Tensor.eval() is just a shorthand that runs that single tensor in the given session.
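For example, the test-set accuracy in the script above could be written either way. This is a small sketch reusing the same variable names as the code, and it must run inside the same with tf.Session() as sess: block.

# Both calls return the same value for the same feed.
feed = {X: mnist.test.images, Y: mnist.test.labels}
acc_eval = accuracy.eval(session=sess, feed_dict=feed)   # Tensor.eval() shorthand
acc_run = sess.run(accuracy, feed_dict=feed)             # explicit sess.run()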
Execution results