Added hidden layers to handwritten digit recognition, but the accuracy doesn't change
tensorflow forum
All replies
level 1
yyyyyycz (OP)
I've just started learning tensorflow. I tried to improve the accuracy by adding hidden layers, but when I run it the accuracy doesn't change at all.
Code below:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the dataset
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
# Size of each mini-batch
batch_size = 100
# Number of batches per epoch
n_batch = mnist.train.num_examples // batch_size
# Placeholders for the images and the one-hot labels
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
# Hidden layer 1: 784 inputs -> 100 units
weigh_L1 = tf.Variable(tf.zeros([784, 100]))
basic_L1 = tf.Variable(tf.zeros([100]))
output_L1 = tf.nn.sigmoid(tf.matmul(x, weigh_L1) + basic_L1)
# Hidden layer 2: 100 -> 50 units
weigh_L2 = tf.Variable(tf.zeros([100, 50]))
basic_L2 = tf.Variable(tf.zeros([50]))
output_L2 = tf.nn.sigmoid(tf.matmul(output_L1, weigh_L2) + basic_L2)
# Hidden layer 3: 50 -> 20 units
weigh_L3 = tf.Variable(tf.zeros([50, 20]))
basic_L3 = tf.Variable(tf.zeros([20]))
output_L3 = tf.nn.sigmoid(tf.matmul(output_L2, weigh_L3) + basic_L3)
# Output layer: 20 -> 10 classes
weigh_L4 = tf.Variable(tf.zeros([20, 10]))
basic_L4 = tf.Variable(tf.zeros([10]))
output = tf.nn.softmax(tf.matmul(output_L3, weigh_L4) + basic_L4)
# Quadratic (mean squared error) cost
loss = tf.reduce_mean(tf.square(y - output))
optimizer = tf.train.GradientDescentOptimizer(0.2).minimize(loss)
# Op that initializes all global variables
init = tf.global_variables_initializer()
# Accuracy: compare predicted class against the label
prection_correct = tf.equal(tf.argmax(y, 1), tf.argmax(output, 1))
accuracy = tf.reduce_mean(tf.cast(prection_correct, tf.float32))
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(21):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print("Iter" + str(epoch) + ",Testing Accuracy" + str(acc))
The output looks like this:
Extracting MNIST_data\train-images-idx3-ubyte.gz
Extracting MNIST_data\train-labels-idx1-ubyte.gz
Extracting MNIST_data\t10k-images-idx3-ubyte.gz
Extracting MNIST_data\t10k-labels-idx1-ubyte.gz
Iter0,Testing Accuracy0.1135
Iter1,Testing Accuracy0.1135
Iter2,Testing Accuracy0.1135
Iter3,Testing Accuracy0.1135
Iter4,Testing Accuracy0.1135
Iter5,Testing Accuracy0.1135
Iter6,Testing Accuracy0.1135
Iter7,Testing Accuracy0.1135
Iter8,Testing Accuracy0.1135
Iter9,Testing Accuracy0.1135
Iter10,Testing Accuracy0.1135
Iter11,Testing Accuracy0.1135
Iter12,Testing Accuracy0.1135
Iter13,Testing Accuracy0.1135
Iter14,Testing Accuracy0.1135
Iter15,Testing Accuracy0.1135
Iter16,Testing Accuracy0.1135
Iter17,Testing Accuracy0.1135
Iter18,Testing Accuracy0.1135
Iter19,Testing Accuracy0.1135
Iter20,Testing Accuracy0.1135
Could anyone here help me figure out why this happens?
2020-03-19 19:03  #1
level 1
yyyyyycz (OP)
Thanks in advance!
2020-03-19 19:03  #2
level 1
You can't initialize all the weights to zero. Use random_normal instead and the accuracy won't stay stuck; a minimal sketch of the change is below.
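(A minimal sketch of that change, reusing the variable names from the OP's code; the stddev=0.1 value is just an illustrative choice, not something prescribed here.)

# Random initialization breaks the symmetry caused by all-zero weights:
# with zeros, every unit in a layer computes the same value and receives
# the same gradient, so the hidden layers never learn distinct features.
weigh_L1 = tf.Variable(tf.random_normal([784, 100], stddev=0.1))
basic_L1 = tf.Variable(tf.zeros([100]))   # biases can stay at zero
output_L1 = tf.nn.sigmoid(tf.matmul(x, weigh_L1) + basic_L1)
# ...apply the same pattern to weigh_L2, weigh_L3 and weigh_L4.

With only this change, the same training loop should start moving off the ~0.11 accuracy, which is roughly what you get when the network predicts a single class for every test image.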
2020-03-25 09:03  #3