[Dive into Deep Learning] 08 Linear Regression + Basic Optimization Algorithms
Linear regression from scratch
Corresponding notebook: notebooks/chapter_linear-networks/linear-regression-scratch.ipynb
Exercise 1:
```python
import random
import torch
from d2l import torch as d2l

def synthetic_data(w, b, num_examples):
    """Generate y = Xw + b + noise."""
    X = torch.normal(0, 1, (num_examples, len(w)))
    Y = torch.matmul(X, w) + b
    Y += torch.normal(0, 0.01, Y.shape)
    return X, Y.reshape(-1, 1)

# Generate the dataset
W_true = torch.tensor([2, -3.4])
b_true = 4.2
features, labels = synthetic_data(W_true, b_true, 100)

def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        batch_indices = torch.tensor(indices[i:min(i + batch_size, num_examples)])
        yield features[batch_indices], labels[batch_indices]

# w = torch.normal(0, 0.01, size=(2, 1), requires_grad=True)
w = torch.zeros(size=(2, 1), requires_grad=True)  # initialize the parameters to zero
b = torch.zeros(1, requires_grad=True)

def linreg(X, w, b):
    return torch.matmul(X, w) + b

def squared_loss(y_hat, y):
    return (y_hat - y.reshape(y_hat.shape)) ** 2 / 2

def sgd(params, lr, batch_size):
    with torch.no_grad():
        for param in params:
            param -= lr * param.grad / batch_size
            param.grad.zero_()

lr = 0.03
num_epochs = 3
net = linreg
loss = squared_loss
batch_size = 2
for epoch in range(num_epochs):
    for X, y in data_iter(batch_size, features, labels):
        los = loss(net(X, w, b), y)
        los.sum().backward()
        sgd([w, b], lr, batch_size)
    with torch.no_grad():
        train_l = loss(net(features, w, b), labels)
        print(f'epoch {epoch + 1}, loss {float(train_l.mean()):f}')
print(f'Estimation error of w: {W_true - w.reshape(W_true.shape)}')
print(f'Estimation error of b: {b_true - b}')
```
This runs fine: the model is a single linear layer, so zero-initialized parameters still receive nonzero gradients and training proceeds normally. In a multi-layer neural network, however, this initialization is problematic: with all weights zero, the hidden units receive identical (or zero) gradients, so symmetry is never broken.
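As a sketch (a hypothetical two-layer network, not from the notebook), the symmetry problem can be made concrete: with every weight and bias set to zero, the gradient flowing back into the first layer is exactly zero, so that layer can never start learning.

```python
# Hypothetical sketch: zero initialization in a two-layer net.
# With all parameters zero, the second layer's weights are zero,
# so no gradient signal reaches the first layer at all.
import torch
from torch import nn

net = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))
for p in net.parameters():
    nn.init.zeros_(p)

X = torch.randn(8, 2)
y = torch.randn(8, 1)
loss = ((net(X) - y) ** 2).mean()
loss.backward()

print(net[0].weight.grad)  # all zeros: the first layer is stuck
```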
Exercise 2:
Yes. By Ohm's law, U = IR, so we can build a linear model and make the resistance R the parameter to be learned:
```python
import random
import torch
from d2l import torch as d2l

def synthetic_data(R, num_examples):
    X = torch.normal(0, 1, (num_examples, 1))
    Y = torch.matmul(X, R)
    Y += torch.normal(0, 0.01, Y.shape)
    return X, Y.reshape(-1, 1)

# Generate the dataset
R_true = torch.tensor([[10.0]])
features, labels = synthetic_data(R_true, 1000)

def data_iter(batch_size, features, labels):
    num_examples = len(features)
    indices = list(range(num_examples))
    random.shuffle(indices)
    for i in range(0, num_examples, batch_size):
        batch_indices = torch.tensor(indices[i:min(i + batch_size, num_examples)])
        yield features[batch_indices], labels[batch_indices]

R = torch.normal(0, 0.01, size=(1, 1), requires_grad=True)

def linreg(X, R):
    return torch.matmul(X, R)

def squared_loss(y_hat, y):
    return (y_hat - y.reshape(y_hat.shape)) ** 2 / 2

def sgd(params, lr, batch_size):
    with torch.no_grad():
        for param in params:
            param -= lr * param.grad / batch_size
            param.grad.zero_()

lr = 0.03
num_epochs = 3
net = linreg
loss = squared_loss
batch_size = 2
for epoch in range(num_epochs):
    for X, y in data_iter(batch_size, features, labels):
        los = loss(net(X, R), y)
        los.sum().backward()
        sgd([R], lr, batch_size)
    with torch.no_grad():
        train_l = loss(net(features, R), labels)
        print(f'epoch {epoch + 1}, loss {float(train_l.mean()):f}')
print(f'Estimation error of R: {R_true - R}')
```
# Output
epoch 1, loss 0.000050
epoch 2, loss 0.000053
epoch 3, loss 0.000051
Estimation error of R: tensor([[0.0012]], grad_fn=<SubBackward0>)
Exercise 3:
It can be done, but I did not write it up.
Exercise 4:
A second derivative must be computed by differentiating the first derivative. By default, the computation graph is freed after the first backward pass, so calling .backward() a second time raises an error. Passing create_graph=True, i.e. loss.backward(create_graph=True), retains the graph and additionally builds a graph for the gradient computation itself, which makes second-order differentiation possible.
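A minimal sketch (a simple scalar function, not the notebook's loss) of how create_graph=True enables a second derivative:

```python
# Sketch: second derivative of y = x**3 at x = 2.
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3

# create_graph=True records the gradient computation itself,
# so the gradient can be differentiated once more.
grad1, = torch.autograd.grad(y, x, create_graph=True)  # dy/dx  = 3x^2
grad2, = torch.autograd.grad(grad1, x)                 # d2y/dx2 = 6x
print(grad1.item(), grad2.item())  # 12.0 12.0
```

The same flag works with loss.backward(create_graph=True), after which the second derivative can be taken with respect to the accumulated .grad values.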
Exercise 5:
To make y_hat and y the same shape so that the error is computed elementwise. Otherwise a shape mismatch such as (n, 1) versus (n,) would silently broadcast into an (n, n) matrix instead of a per-example loss.
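To see why the reshape matters, here is a small sketch of the broadcasting trap it avoids:

```python
# Sketch: subtracting a (3,) vector from a (3, 1) column broadcasts to (3, 3).
import torch

y_hat = torch.ones(3, 1)
y = torch.tensor([1.0, 2.0, 3.0])

print((y_hat - y).shape)                       # torch.Size([3, 3]) -- not a per-example loss
print((y_hat - y.reshape(y_hat.shape)).shape)  # torch.Size([3, 1]) -- elementwise, as intended
```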
Exercise 6:
If the learning rate is too large, the parameters oscillate around (or overshoot past) the minimum and may diverge; if it is too small, optimization converges unnecessarily slowly.
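A tiny sketch (gradient descent on f(x) = x², not the notebook's model) illustrates both failure modes:

```python
# Sketch: gradient descent on f(x) = x^2, whose gradient is 2x.
def gd(lr, steps=20, x=1.0):
    for _ in range(steps):
        x -= lr * 2 * x  # gradient step
    return x

print(gd(0.01))  # too small: after 20 steps, still far from the minimum 0
print(gd(0.4))   # moderate: essentially at the minimum
print(gd(1.1))   # too large: |x| grows every step -- divergence
```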
Exercise 7:
It does not break anything. In data_iter, the slice indices[i:min(i + batch_size, num_examples)] is clamped to the dataset size, so when num_examples is not divisible by batch_size the final batch still works; it simply contains fewer than batch_size samples.
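A quick sketch (10 examples, batch size 3) of how that slice handles the leftover:

```python
# Sketch: the min(...) clamp makes the last batch smaller instead of failing.
num_examples, batch_size = 10, 3
indices = list(range(num_examples))

batch_sizes = [len(indices[i:min(i + batch_size, num_examples)])
               for i in range(0, num_examples, batch_size)]
print(batch_sizes)  # [3, 3, 3, 1]
```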
Concise implementation of linear regression
Corresponding notebook: notebooks/chapter_linear-networks/linear-regression-concise.ipynb
Exercise 1:
Replacing .sum() with .mean() shrinks the loss, and hence the gradient, by a factor of 1/batch_size, so the learning rate must be multiplied by batch_size to keep the updates the same. (In the from-scratch version, sgd explicitly divided the gradient by batch_size; taking the mean builds that division into the loss itself, so the extra batch_size factor has to be restored in the learning rate.)
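A small sketch (toy tensors, not the notebook's data) confirming the 1/batch_size scaling:

```python
# Sketch: the gradient of .mean() is 1/batch_size times the gradient of .sum().
import torch

w = torch.tensor([1.0], requires_grad=True)
X = torch.arange(1.0, 5.0).reshape(4, 1)  # a batch of 4 examples

(X @ w).sum().backward()
g_sum = w.grad.clone()
w.grad.zero_()

(X @ w).mean().backward()
g_mean = w.grad.clone()

print(g_sum.item(), g_mean.item())        # 10.0 2.5
print(torch.allclose(g_sum, g_mean * 4))  # True
```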
Exercise 2:
See the HuberLoss page in the PyTorch documentation. The PyTorch version shown at the top left of the page should match the version you are actually using.
```python
import numpy as np
import torch
from torch.utils import data
from d2l import torch as d2l
from torch import nn

true_w = torch.tensor([2, -3.4])
true_b = 4.2
features, labels = d2l.synthetic_data(true_w, true_b, 1000)

def load_array(data_arrays, batch_size, is_train=True):  #@save
    dataset = data.TensorDataset(*data_arrays)
    return data.DataLoader(dataset, batch_size, shuffle=is_train)

batch_size = 10
data_iter = load_array((features, labels), batch_size)

net = nn.Sequential(nn.Linear(2, 1))
net[0].weight.data.normal_(0, 0.01)
net[0].bias.data.fill_(0)

loss = nn.HuberLoss(reduction='mean', delta=1.0)  # switch to the Huber loss
trainer = torch.optim.SGD(net.parameters(), lr=0.03)

num_epochs = 3
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = loss(net(X), y)
        trainer.zero_grad()
        l.backward()
        trainer.step()
    l = loss(net(features), labels)
    print(f'epoch {epoch + 1}, loss: {l:f}')
```
### Output ###
epoch 1, loss: 2.200701
epoch 2, loss: 0.523420
epoch 3, loss: 0.003929
Exercise 3:
```python
net[0].weight.data  # weight values
net[0].weight.grad  # weight gradient
net[0].bias.data    # bias values
net[0].bias.grad    # bias gradient
```