I've only just fallen into the PyTorch pit and haven't fully digested the code yet. After getting used to Keras, my first run with PyTorch still feels a little awkward, so I hope the veterans out there will share some pointers. First, let's talk about visualizing a model. In Keras it takes one line: model.summary(), or keras.utils.plot_model(), lays the model out in full detail. PyTorch does not seem to ship an API that shows the model this directly, but a fellow netizen shared a snippet that draws the graph, which for me was a godsend. Without further ado, here is the code.

import torch
import torch.nn as nn
from torch.autograd import Variable
from graphviz import Digraph


class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )
        self.out = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)  # flatten to (batch, 32*7*7)
        return self.out(x)


def make_dot(var, params=None):
    """Produces a Graphviz representation of the PyTorch autograd graph.

    Blue nodes are the Variables that require grad; orange are Tensors
    saved for backward in torch.autograd.Function.

    params: dict of (name, Variable) to add names to nodes that
        require grad (TODO: make optional)
    """
    if params is not None:
        assert isinstance(list(params.values())[0], Variable)
        param_map = {id(v): k for k, v in params.items()}

    node_attr = dict(style='filled',
                     shape='box',
                     align='left',
                     fontsize='12',
                     ranksep='0.1',
                     height='0.2')
    dot = Digraph(node_attr=node_attr, graph_attr=dict(size="12,12"))
    seen = set()

    def size_to_str(size):
        return '(' + ', '.join('%d' % v for v in size) + ')'

    def add_nodes(var):
        if var not in seen:
            if torch.is_tensor(var):
                dot.node(str(id(var)), size_to_str(var.size()), fillcolor='orange')
            elif hasattr(var, 'variable'):
                u = var.variable
                name = param_map[id(u)] if params is not None else ''
                node_name = '%s\n %s' % (name, size_to_str(u.size()))
                dot.node(str(id(var)), node_name, fillcolor='lightblue')
            else:
                dot.node(str(id(var)), str(type(var).__name__))
            seen.add(var)
            if hasattr(var, 'next_functions'):
                for u in var.next_functions:
                    if u[0] is not None:
                        dot.edge(str(id(u[0])), str(id(var)))
                        add_nodes(u[0])
            if hasattr(var, 'saved_tensors'):
                for t in var.saved_tensors:
                    dot.edge(str(id(t)), str(id(var)))
                    add_nodes(t)

    add_nodes(var.grad_fn)
    return dot


if __name__ == '__main__':
    net = CNN()
    x = Variable(torch.randn(1, 1, 28, 28))
    y = net(x)
    g = make_dot(y)
    g.view()

    params = list(net.parameters())
    k = 0
    for i in params:
        l = 1
        print("This layer's structure: " + str(list(i.size())))
        for j in i.size():
            l *= j
        print("This layer's parameter count: " + str(l))
        k = k + l
    print("Total parameter count: " + str(k))
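Before moving on, it may help to see what make_dot actually traverses. A minimal sketch (my own illustration, not from the original snippet): every result of an autograd operation carries a grad_fn, whose next_functions point back at the operations and leaf Variables that produced it, and that linked structure is exactly what the code above turns into DOT nodes and edges.

import torch
from torch.autograd import Variable

a = Variable(torch.ones(2, 2), requires_grad=True)
b = (a * 3).sum()
print(b.grad_fn)                 # the root Function that make_dot starts from
print(b.grad_fn.next_functions)  # its parents in the backward graph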
The model is simple and so is the code: conv -> relu -> maxpool -> conv -> relu -> maxpool -> fc. To visualize your own network, just copy the make_dot function, then instantiate a net and prepare an input of the shape the network expects. Taking the code above as the example: build the model net, create input data x with shape (batch, channels, height, width), feed x through the model to get the output y, and pass y to make_dot to render the graph.

x = Variable(torch.randn(1, 1, 28, 28))  # (batch, channels, height, width)
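make_dot just returns a graphviz Digraph, so if you would rather save the figure than pop up a viewer, graphviz can write it straight to disk. A minimal sketch, assuming the code above has already produced y (the filename cnn_viz is just a placeholder):

g = make_dot(y)
g.render('cnn_viz', format='png')  # writes the DOT source 'cnn_viz' plus 'cnn_viz.png'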
Finally, print the structure and parameter count of each layer, plus the total.
This layer's structure: [16, 1, 5, 5]
This layer's parameter count: 400
This layer's structure: [16]
This layer's parameter count: 16
This layer's structure: [32, 16, 5, 5]
This layer's parameter count: 12800
This layer's structure: [32]
This layer's parameter count: 32
This layer's structure: [10, 1568]
This layer's parameter count: 15680
This layer's structure: [10]
This layer's parameter count: 10
Total parameter count: 28938
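As a sanity check (my own arithmetic, not from the original post), the per-layer counts follow directly from the shapes above, and PyTorch's tensor.numel() gives the same total in one line:

conv1 = 16 * 1 * 5 * 5 + 16   # conv1 weights + biases = 416
conv2 = 32 * 16 * 5 * 5 + 32  # conv2 weights + biases = 12832
fc = 10 * 32 * 7 * 7 + 10     # fc weights + biases    = 15690
print(conv1 + conv2 + fc)     # 28938

print(sum(p.numel() for p in net.parameters()))  # also 28938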