# Neural Networks (神经网络)

## Perceptron (感知器)

In TensorFlow this can be implemented with tf.nn.softmax:

Y = tf.nn.softmax(tf.matmul(X, W) + b)
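A minimal NumPy sketch of what this softmax layer computes (the batch size and the zero-initialized weights here are just illustrative assumptions):

```python
import numpy as np

def softmax(z):
    # subtract the row max before exponentiating; the result is unchanged
    # but large logits no longer overflow
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

X = np.random.randn(4, 784)   # 4 flattened 28x28 images (toy data)
W = np.zeros((784, 10))       # zero init, as in the TensorFlow snippet
b = np.zeros(10)

Y = softmax(X @ W + b)        # each row is a probability distribution
```

With all-zero weights every logit is 0, so each row of `Y` is the uniform distribution over the 10 classes.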


### Cross-Entropy (交叉熵)

• When the network's actual outputs are close to the desired outputs for all training inputs, the cross-entropy is very close to 0.
• Weight updates scale with the error: when the error is large the weights update quickly, and when the error is small they update slowly.
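Both bullets can be checked numerically. For a softmax output with cross-entropy loss, the gradient with respect to the logits is simply (prediction − target), so the update size tracks the error. A toy NumPy check (the probability values are made up for illustration):

```python
import numpy as np

y_true = np.array([0.0, 0.0, 1.0])   # one-hot target

def cross_entropy(p, y):
    # cross-entropy between predicted distribution p and one-hot target y
    return -np.sum(y * np.log(p))

p_bad  = np.array([0.60, 0.30, 0.10])  # large error: true class gets 0.1
p_good = np.array([0.05, 0.05, 0.90])  # small error: true class gets 0.9

# for softmax + cross-entropy, dL/dlogits = p - y
grad_bad  = p_bad - y_true
grad_good = p_good - y_true
```

The better prediction yields a loss near 0 and a small gradient; the worse one yields a large loss and a large gradient, matching the second bullet.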

### TensorFlow

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# load the MNIST data used in the training loop below
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

X = tf.placeholder(tf.float32, [None, 28, 28, 1])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
init = tf.global_variables_initializer()  # initialize_all_variables() is deprecated

# model
Y=tf.nn.softmax(tf.matmul(tf.reshape(X,[-1, 784]), W) + b)
Y_ = tf.placeholder(tf.float32, [None, 10])
# loss function
cross_entropy = -tf.reduce_sum(Y_ * tf.log(Y))

# % of correct answers found in batch
is_correct = tf.equal(tf.argmax(Y,1), tf.argmax(Y_,1))
accuracy = tf.reduce_mean(tf.cast(is_correct,tf.float32))

optimizer = tf.train.GradientDescentOptimizer(0.003)  # learning rate is an illustrative choice
train_step = optimizer.minimize(cross_entropy)

sess = tf.Session()
sess.run(init)

for i in range(10000):
    batch_X, batch_Y = mnist.train.next_batch(100)
    train_data = {X: batch_X, Y_: batch_Y}

    # train
    sess.run(train_step, feed_dict=train_data)

    # success ? add code to print it
    a, c = sess.run([accuracy, cross_entropy], feed_dict=train_data)

    # success on test data ?
    test_data = {X: mnist.test.images, Y_: mnist.test.labels}
    a, c = sess.run([accuracy, cross_entropy], feed_dict=test_data)
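The accuracy computation above is just an argmax comparison between predictions and one-hot labels; a NumPy equivalent with toy values:

```python
import numpy as np

Y  = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])  # predicted distributions
Y_ = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])  # one-hot labels

# a prediction counts as correct when its argmax matches the label's argmax
is_correct = np.argmax(Y, axis=1) == np.argmax(Y_, axis=1)
accuracy = is_correct.astype(np.float32).mean()  # fraction of correct answers
```

Here the second prediction picks class 0 while the label is class 1, so the accuracy is 2/3.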


## FNN (Feedforward Neural Network)

• BP network: a feedforward network whose connection weights are trained with the Back Propagation learning algorithm, using the sigmoid activation function.
• Convolutional Neural Network (CNN): consists of one or more convolutional layers topped by fully connected layers (as in a classic neural network), along with tied weights and pooling layers. This structure lets a CNN exploit the two-dimensional structure of the input data. CNNs give better results in image and speech recognition while requiring fewer parameters to estimate, which makes them an attractive deep-learning architecture.
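The core operation a convolutional layer performs can be sketched in NumPy. This is a single-channel "valid" convolution with no padding; the kernel values are a made-up toy filter:

```python
import numpy as np

def conv2d(image, kernel):
    # valid cross-correlation (what deep-learning frameworks call "convolution"):
    # slide the kernel over the image and take elementwise products
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])        # toy 2x2 difference filter
feature_map = conv2d(image, kernel)      # shape (3, 3)
```

The same 4 kernel weights are reused at every spatial position (tied weights), which is why a conv layer needs far fewer parameters than a fully connected one.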

### TensorFlow

K = 200
L = 100
M = 60
N = 30

W1 = tf.Variable(tf.truncated_normal([28*28, K] ,stddev=0.1))
B1 = tf.Variable(tf.zeros([K]))
W2 = tf.Variable(tf.truncated_normal([K, L], stddev=0.1))
B2 = tf.Variable(tf.zeros([L]))
W3 = tf.Variable(tf.truncated_normal([L, M], stddev=0.1))
B3 = tf.Variable(tf.zeros([M]))
W4 = tf.Variable(tf.truncated_normal([M, N], stddev=0.1))
B4 = tf.Variable(tf.zeros([N]))
W5 = tf.Variable(tf.truncated_normal([N, 10], stddev=0.1))
B5 = tf.Variable(tf.zeros([10]))

X = tf.reshape(X, [-1, 28*28])

Y1 = tf.nn.sigmoid(tf.matmul(X, W1) + B1)
Y2 = tf.nn.sigmoid(tf.matmul(Y1, W2) + B2)
Y3 = tf.nn.sigmoid(tf.matmul(Y2, W3) + B3)
Y4 = tf.nn.sigmoid(tf.matmul(Y3, W4) + B4)
Y = tf.nn.softmax(tf.matmul(Y4, W5) + B5)


TensorFlow has a very convenient function that computes softmax and cross-entropy in a single step, implemented in a numerically stable way. To use it, you isolate the raw weighted sums plus the biases of your last layer, before the softmax is applied (called "logits" in neural-network jargon). For example, replace Y = tf.nn.softmax(tf.matmul(Y4, W5) + B5) with:

Ylogits = tf.matmul(Y4, W5) + B5
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_)
cross_entropy = tf.reduce_mean(cross_entropy)  # the function returns one loss per example
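The numerical-stability point can be seen in NumPy: computing log(softmax(z)) naively overflows for large logits, while the log-sum-exp form used inside fused softmax-cross-entropy implementations does not (the extreme logit values are chosen just to trigger the overflow):

```python
import numpy as np

def logsumexp(z):
    # log(sum(exp(z))) computed without overflow by factoring out the max
    m = z.max()
    return m + np.log(np.sum(np.exp(z - m)))

def stable_xent(logits, y):
    # cross-entropy directly on logits: -sum(y * log_softmax(logits))
    log_probs = logits - logsumexp(logits)
    return -np.sum(y * log_probs)

logits = np.array([1000.0, 0.0, -1000.0])  # extreme values to force overflow
y = np.array([1.0, 0.0, 0.0])

# naive route: exp(1000) overflows to inf, so the result is nan
naive = -np.sum(y * np.log(np.exp(logits) / np.exp(logits).sum()))
# stable route: the true class dominates, so the loss is ~0
stable = stable_xent(logits, y)
```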


## DNN (Deep Neural Network)

• LeNet5
• AlexNet
• GoogLeNet and Inception v3/v4
• Residual Networks (ResNet)
• SqueezeNet
• Efficient Neural Network (ENet)
• FractalNet