Last edited by fc013 on 2018-6-16 20:29

Reading guide:
1. What is the 5-4-9 model?
2. What is the five-step method?
3. Why use functional programming?

As programmers, we can learn deep-learning model development the same way we learn programming. Taking Keras as an example, the whole workflow can be summarized by a 5-4-9 model: 5 steps + 4 basic elements + 9 basic layer types. We can understand how they relate through a diagram that groups them into the five-step method, the 4 basic elements, and the 9 basic layer types.
The five-step method

The five-step method is the five steps for solving a problem with deep learning:

1. Construct the network model
2. Compile the model
3. Train the model
4. Evaluate the model
5. Use the model to predict
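Before turning to the Keras calls themselves, the five phases can be sketched end to end in plain numpy, training a tiny softmax classifier on synthetic data. This is a hedged illustration of the workflow only, not Keras internals; all the names and the toy data here are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: "construct" -- a single dense layer with a softmax output
n_in, n_out = 4, 3
W = rng.normal(scale=0.1, size=(n_in, n_out))
b = np.zeros(n_out)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # stabilized softmax
    return e / e.sum(axis=1, keepdims=True)

def predict_proba(X):
    return softmax(X @ W + b)

# Step 2: "compile" -- choose a loss (categorical cross-entropy)
# and an optimizer (plain gradient descent with learning rate lr)
lr = 0.05

def loss_fn(X, Y):
    p = predict_proba(X)
    return -np.mean(np.sum(Y * np.log(p + 1e-12), axis=1))

# Synthetic data: three well-separated clusters, one-hot labels
means = rng.normal(scale=3.0, size=(3, n_in))
X = rng.normal(size=(90, n_in)) + np.repeat(means, 30, axis=0)
labels = np.repeat(np.arange(3), 30)
Y = np.eye(3)[labels]

loss_before = loss_fn(X, Y)

# Step 3: "fit" -- gradient descent on the cross-entropy loss
for _ in range(500):
    P = predict_proba(X)
    G = (P - Y) / len(X)       # gradient of the loss w.r.t. the logits
    W -= lr * (X.T @ G)
    b -= lr * G.sum(axis=0)

# Step 4: "evaluate" -- accuracy on the (toy) data
accuracy = np.mean(predict_proba(X).argmax(axis=1) == labels)

# Step 5: "predict" -- class indices for new inputs
classes = predict_proba(X[:5]).argmax(axis=1)
```

Each numbered comment corresponds to one of the five steps; in Keras, the same phases become `Sequential`/`add`, `compile`, `fit`, `evaluate`, and `predict`.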
Of these five steps, the crucial one is really the first. Once the model structure is settled, the remaining parameters can all be set according to it.

1. Constructing the network model procedurally

Let's start with the easiest approach to understand: building the network model procedurally. Keras provides the Sequential container for procedural construction; you simply add layer structures to it with Sequential's add method. The 9 basic layer types will be covered in detail later.

Example:

```python
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(units=64, input_dim=100))
model.add(Activation('relu'))
model.add(Dense(units=10))
model.add(Activation('softmax'))
```

Which layer structures suit which kinds of problems will be covered in the examples later.

2. Compiling the model

Once the model is constructed, the next step is to compile it with Sequential's compile method:

```python
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
```

Compilation requires specifying two of the basic elements: loss, the loss function, and optimizer, the optimization function. If you only need the basic behavior, a string name is enough. If you want to configure more parameters, instantiate the corresponding class. For example, to give stochastic gradient descent Nesterov momentum, create an SGD object:

```python
from keras.optimizers import SGD
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=0.01, momentum=0.9, nesterov=True))
```

Here lr is the learning rate.

3. Training the model

Call fit, passing the input values X, the labeled values y, the number of training epochs, and the batch_size:

```python
model.fit(x_train, y_train, epochs=5, batch_size=32)
```

4. Evaluating the model

How well the model trained cannot be judged on the training data; it needs to be evaluated on test data:

```python
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)
```

5. Using the model to predict

The whole point of training is prediction:

```python
classes = model.predict(x_test, batch_size=128)
```

The 4 basic elements

1. Network structure

The network structure is assembled from the layer types described later. How do you design it? You can refer to published papers: for example, in this paper, whether it is the 19-layer VGG-19 on the left or the 34-layer ResNet on the right, you only need to implement what the diagram shows.

2. Activation functions
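The common activation choices (relu, sigmoid, tanh, softmax) are one-line functions. Here is a plain-numpy sketch of what each computes; this is an illustration of the math, not Keras's implementation.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)        # zero out negatives, pass positives through

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squash any real number into (0, 1)

def softmax(z):
    e = np.exp(z - np.max(z))        # subtract max for numerical stability
    return e / e.sum()               # outputs form a probability distribution

z = np.array([-2.0, 0.0, 3.0])
r = relu(z)        # [0., 0., 3.]
s = sigmoid(0.0)   # 0.5
p = softmax(z)     # sums to 1; the largest logit gets the largest probability
```

tanh is simply `np.tanh`. In Keras these are selected by name, e.g. `Activation('relu')` or `Dense(..., activation='softmax')`.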
3. Loss functions
For multi-class classification, categorical_crossentropy is the main choice.

4. Optimizers
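To make the last two elements concrete, here is a plain-numpy sketch of what categorical_crossentropy computes, and of the SGD-with-Nesterov-momentum update configured earlier with SGD(lr=0.01, momentum=0.9, nesterov=True). This is an illustration of the underlying math, not Keras's actual code, and the toy objective is made up for the sketch.

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    # mean over samples of -sum_k y_true[k] * log(y_pred[k])
    y_pred = np.clip(y_pred, eps, 1.0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=-1))

y_true = np.array([[0., 0., 1.]])     # one-hot: the true class is index 2
y_pred = np.array([[0.1, 0.2, 0.7]])  # predicted probabilities
ce = categorical_crossentropy(y_true, y_pred)  # -log(0.7) ~= 0.357

# SGD with Nesterov momentum, minimizing the toy objective f(x) = x^2
lr, momentum = 0.01, 0.9
x, v = 5.0, 0.0
for _ in range(500):
    g = 2.0 * (x + momentum * v)  # gradient taken at the look-ahead point
    v = momentum * v - lr * g     # velocity update
    x = x + v                     # parameter update
# x has been driven close to the minimum at 0
```

Note that the loss only penalizes the probability assigned to the true class, which is why confident wrong answers are punished so heavily, and that the Nesterov variant evaluates the gradient at the look-ahead point rather than at the current parameters.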
This article will focus on the latter two kinds of tutorials.

Functional programming in deep learning

The basic layers introduced above can do more than be added in series into a Sequential container: they are themselves callable objects, and calling them returns another callable object. They can therefore be treated as functions and chained by calling them. An official example:

```python
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(784,))
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)

model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels)
```

Why use functional programming? Because not every complex network structure is a linear sequence of layers added to a container. There are parallel branches, reused layers, all kinds of situations, and that is where callables shine. The Google Inception module below, for example, contains parallel branches. Our code naturally answers parallelism with parallelism: a single input, input_img, is reused by three towers:

```python
import keras
from keras.layers import Conv2D, MaxPooling2D, Input

input_img = Input(shape=(256, 256, 3))

tower_1 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_1 = Conv2D(64, (3, 3), padding='same', activation='relu')(tower_1)

tower_2 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_2 = Conv2D(64, (5, 5), padding='same', activation='relu')(tower_2)

tower_3 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(input_img)
tower_3 = Conv2D(64, (1, 1), padding='same', activation='relu')(tower_3)

output = keras.layers.concatenate([tower_1, tower_2, tower_3], axis=1)
```

Case tutorials

1. Handwritten-digit recognition on MNIST with a CNN

Talk without practice is empty, so let's walk through an MNIST example that follows the five-step method, starting with the core model code. Since the model is linear, we again use the Sequential container. The core is two convolutional layers (Conv2D(32, (3, 3)) and Conv2D(64, (3, 3)) in the complete listing below). To guard against overfitting, we add a max-pooling layer followed by a Dropout layer. Before moving on to the fully connected output, a Flatten layer converts the data between those two stages. Then comes the fully connected layer, with relu as its activation function. Still wary of overfitting, we add yet another Dropout layer!
Finally, a fully connected layer with a softmax activation produces the output:

```python
model.add(Dense(num_classes, activation='softmax'))
```

Next we compile the model. The loss function is categorical_crossentropy, the multi-class log loss, and the optimizer is Adadelta:

```python
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
```

Here is the complete, runnable code:

```python
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```

2. Machine translation with a sequence-to-sequence network

Then, following our usual approach, let's look at the core code first. There is little to debate here: sequence-processing problems like this one call for an RNN, usually an LSTM:

```python
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
encoder_states = [state_h, state_c]

decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
```

The optimizer is rmsprop and the loss function is again categorical_crossentropy. validation_split randomly splits one data set into a training part and a validation part:

```python
# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)
```

Finally, since training a model is no small effort, we save it:

```python
model.save('s2s.h5')
```

To close, here is the complete machine-translation code (more than 100 lines counting comments and blank lines) for anyone who needs it:

```python
from __future__ import print_function

from keras.models import Model
from keras.layers import Input, LSTM, Dense
import numpy as np

batch_size = 64    # Batch size for training.
epochs = 100       # Number of epochs to train for.
latent_dim = 256   # Latent dimensionality of the encoding space.
num_samples = 10000  # Number of samples to train on.
# Path to the data txt file on disk.
data_path = 'fra-eng/fra.txt'

# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
with open(data_path, 'r', encoding='utf-8') as f:
    lines = f.read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
    input_text, target_text = line.split('\t')
    # We use 'tab' as the 'start sequence' character
    # for the targets, and '\n' as 'end sequence' character.
    target_text = '\t' + target_text + '\n'
    input_texts.append(input_text)
    target_texts.append(target_text)
    for char in input_text:
        if char not in input_characters:
            input_characters.add(char)
    for char in target_text:
        if char not in target_characters:
            target_characters.add(char)

input_characters = sorted(list(input_characters))
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters)
num_decoder_tokens = len(target_characters)
max_encoder_seq_length = max([len(txt) for txt in input_texts])
max_decoder_seq_length = max([len(txt) for txt in target_texts])

print('Number of samples:', len(input_texts))
print('Number of unique input tokens:', num_encoder_tokens)
print('Number of unique output tokens:', num_decoder_tokens)
print('Max sequence length for inputs:', max_encoder_seq_length)
print('Max sequence length for outputs:', max_decoder_seq_length)

input_token_index = dict(
    [(char, i) for i, char in enumerate(input_characters)])
target_token_index = dict(
    [(char, i) for i, char in enumerate(target_characters)])

encoder_input_data = np.zeros(
    (len(input_texts), max_encoder_seq_length, num_encoder_tokens),
    dtype='float32')
decoder_input_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')
decoder_target_data = np.zeros(
    (len(input_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')

for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
    for t, char in enumerate(input_text):
        encoder_input_data[i, t, input_token_index[char]] = 1.
    for t, char in enumerate(target_text):
        # decoder_target_data is ahead of decoder_input_data by one timestep
        decoder_input_data[i, t, target_token_index[char]] = 1.
        if t > 0:
            # decoder_target_data will be ahead by one timestep
            # and will not include the start character.
            decoder_target_data[i, t - 1, target_token_index[char]] = 1.

# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)
# Save model
model.save('s2s.h5')

# Inference mode (sampling).
encoder_model = Model(encoder_inputs, encoder_states)

decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)

# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
    (i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
    (i, char) for char, i in target_token_index.items())


def decode_sequence(input_seq):
    # Encode the input as state vectors.
    states_value = encoder_model.predict(input_seq)

    # Generate empty target sequence of length 1.
    target_seq = np.zeros((1, 1, num_decoder_tokens))
    # Populate the first character of target sequence with the start character.
    target_seq[0, 0, target_token_index['\t']] = 1.

    # Sampling loop for a batch of sequences
    # (to simplify, here we assume a batch of size 1).
    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict(
            [target_seq] + states_value)

        # Sample a token
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += sampled_char

        # Exit condition: either hit max length
        # or find stop character.
        if (sampled_char == '\n' or
           len(decoded_sentence) > max_decoder_seq_length):
            stop_condition = True

        # Update the target sequence (of length 1).
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_token_index] = 1.

        # Update states
        states_value = [h, c]

    return decoded_sentence


for seq_index in range(100):
    # Take one sequence (part of the training set)
    # for trying out decoding.
    input_seq = encoder_input_data[seq_index: seq_index + 1]
    decoded_sentence = decode_sequence(input_seq)
    print('-')
    print('Input sentence:', input_texts[seq_index])
    print('Decoded sentence:', decoded_sentence)
```

Source: weixin | Author: 數(shù)據(jù)派THU