Lesson 1: Using TensorFlow to build a neural network that fits a linear function

import tensorflow as tf
import numpy as np
#import matplotlib.pyplot as plt

#create data
x_data=np.random.rand(200).astype(np.float32)
y_data=0.1*x_data+0.3

#create structure of Neural Network
#define the variables and their initial values
Weights=tf.Variable(tf.random_uniform([1],-1.0,1.0))#weight initialized to a random value between -1 and 1
biases=tf.Variable(tf.zeros([1]))#bias initialized to 0

y=Weights*x_data+biases

loss=tf.reduce_mean(tf.square(y-y_data))#mean squared error (L2 loss) as the objective function
#set up the optimizer, using gradient descent
optimizer=tf.train.GradientDescentOptimizer(0.5)#0.5 is the learning rate
train=optimizer.minimize(loss)#minimize the objective function

#init=tf.initialize_all_variables()#deprecated way to create the initializer op, see the note below
init=tf.global_variables_initializer()
#create structure end

#launch a TensorFlow session
sess=tf.Session()
sess.run(init)

for step in range(201):#run 201 training steps in total
    sess.run(train)
    if step%20==0:
        print(step,sess.run(Weights),sess.run(biases))
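
To see what optimizer.minimize(loss) is doing on each sess.run(train), here is a minimal NumPy sketch of the same gradient descent update (my own illustration, not TensorFlow's internal code): for the MSE loss, the gradient with respect to Weights is mean(2*(y-y_data)*x_data), the gradient with respect to biases is mean(2*(y-y_data)), and both parameters are moved against their gradient scaled by the learning rate 0.5.

import numpy as np

x_data=np.random.rand(200).astype(np.float32)
y_data=0.1*x_data+0.3

W=np.random.uniform(-1.0,1.0)#same initialization idea as above
b=0.0
lr=0.5#same learning rate as above

for step in range(201):
    y=W*x_data+b
    grad_W=np.mean(2*(y-y_data)*x_data)#d(loss)/dW for loss=mean((y-y_data)^2)
    grad_b=np.mean(2*(y-y_data))#d(loss)/db
    W-=lr*grad_W#one gradient descent step
    b-=lr*grad_b
    if step%20==0:
        print(step,W,b)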

Running result (with the deprecated tf.initialize_all_variables()):

WARNING:tensorflow:From <ipython-input-3-0eb2a08f92c0>:21: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
0 [0.5747759] [0.04398211]
20 [0.21589634] [0.23572873]
40 [0.12847619] [0.2842083]
60 [0.10699672] [0.29611993]
80 [0.10171912] [0.29904667]
100 [0.1004224] [0.29976577]
120 [0.1001038] [0.29994246]
140 [0.1000255] [0.29998586]
160 [0.10000626] [0.29999655]
180 [0.10000155] [0.29999915]
200 [0.10000039] [0.2999998]

We can see that as the number of iterations increases, the weight and bias gradually approach 0.1 and 0.3, the parameters of the linear function y=0.1x+0.3.
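
If you want to see the fit visually, here is a minimal sketch (assuming matplotlib is installed, as hinted at by the commented-out import above) that plots the sample points and the fitted line; run it after the training loop while the session is still open:

import matplotlib.pyplot as plt

W_final=sess.run(Weights)
b_final=sess.run(biases)
plt.scatter(x_data,y_data,s=5,label='data')#the generated samples
plt.plot(x_data,W_final*x_data+b_final,'r',label='fit')#the fitted line y=Wx+b
plt.legend()
plt.show()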

Note:

As the Warning in the running result shows, tf.initialize_all_variables() was deprecated and scheduled for removal after March 2, 2017; it can be replaced with tf.global_variables_initializer().
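
In code, the replacement is a single line:

#init=tf.initialize_all_variables()#old, deprecated
init=tf.global_variables_initializer()#new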

The running result after the replacement:

0 [0.19939265] [0.34320277]
20 [0.11608802] [0.2911705]
40 [0.10445002] [0.29755774]
60 [0.1012309] [0.29932448]
80 [0.10034049] [0.29981315]
100 [0.10009419] [0.29994833]
120 [0.10002603] [0.29998574]
140 [0.10000721] [0.29999605]
160 [0.10000198] [0.29999894]
180 [0.10000056] [0.2999997]
200 [0.10000015] [0.29999992]

The results are basically the same; the only difference is that the warning is gone, haha~
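
As a side note, this script uses the TensorFlow 1.x API (tf.Session, tf.train.GradientDescentOptimizer, etc.). A rough sketch of how it can still be run under TensorFlow 2.x, assuming the compat.v1 module behaves as documented:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()#restore the graph-and-session behaviour used above
#the rest of the script stays the same (tf.random_uniform, tf.Session, ...)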