Last time we solved the MNIST classification problem; this time we look at the regression case.

Code: https://github.com/masatoi/mgl-user/blob/master/src/regression.lisp



Representing data points and setting them on the model

What changes between classification and regression? First, the training signal is a real-valued vector rather than a label, so the datum structure must be redefined. Second, where classification set a batch-size list on the output layer's target, regression creates a separate lump to hold the training signals and sets its nodes slot to a matrix packing a batch's worth of target vectors.

(defstruct regression-datum
  (id nil :type fixnum)
  (target nil :type mat)
  (array nil :type mat))

(defun sample-regression-datum-array (sample)
  (regression-datum-array (first sample)))

(defun clamp-regression-data (samples mat)
  (assert (= (length samples) (mat-dimension mat 0)))
  (map-concat #'copy! samples mat :key #'sample-regression-datum-array))

(defun sample-regression-datum-target (sample)
  (regression-datum-target (first sample)))

(defun clamp-regression-target (samples mat)
  (assert (= (length samples) (mat-dimension mat 0)))
  (map-concat #'copy! samples mat :key #'sample-regression-datum-target))

(defclass regression-fnn (fnn) ())

(defmethod set-input (samples (bpn regression-fnn))
  (let* ((inputs (find-clump 'inputs bpn))
         (targets (find-clump 'targets bpn)))
    (clamp-regression-data samples (nodes inputs))
    (clamp-regression-target samples (nodes targets))))

Normalizing the data

With MNIST the inputs were already normalized to [0,1], so nothing special was needed, but for some datasets it is better to normalize each input and output dimension to mean 0 and variance 1 by subtracting the mean and dividing by the standard deviation.

(defun copy-regression-dataset (dataset)
  (let ((new-dataset (map 'vector
                          (lambda (datum) (copy-regression-datum datum))
                          dataset)))
    (loop for new-datum across new-dataset
          for datum across dataset
          do (setf (regression-datum-array new-datum)
                   (copy-mat (regression-datum-array datum))
                   (regression-datum-target new-datum)
                   (copy-mat (regression-datum-target datum))))
    new-dataset))

(defun regression-dataset-average (dataset)
  (let* ((first-datum-array (regression-datum-array (aref dataset 0)))
         (first-datum-target (regression-datum-target (aref dataset 0)))
         (input-dim (mat-dimension first-datum-array 0))
         (output-dim (mat-dimension first-datum-target 0))
         (input-ave (make-mat input-dim))
         (output-ave (make-mat output-dim)))
    (loop for datum across dataset
          do (axpy! (/ 1.0 (length dataset)) (regression-datum-array datum) input-ave)
             (axpy! (/ 1.0 (length dataset)) (regression-datum-target datum) output-ave))
    (values input-ave output-ave)))

(defun regression-dataset-variance (dataset input-ave output-ave)
  (let* ((input-dim (mat-dimension input-ave 0))
         (output-dim (mat-dimension output-ave 0))
         (input-var (make-mat input-dim))
         (output-var (make-mat output-dim))
         (input-diff (make-mat input-dim))
         (output-diff (make-mat output-dim)))
    (loop for datum across dataset
          do (copy! input-ave input-diff)
             (axpy! -1.0 (regression-datum-array datum) input-diff)
             (.square! input-diff)
             (axpy! (/ 1.0 (length dataset)) input-diff input-var)
             (copy! output-ave output-diff)
             (axpy! -1.0 (regression-datum-target datum) output-diff)
             (.square! output-diff)
             (axpy! (/ 1.0 (length dataset)) output-diff output-var))
    (values input-var output-var)))

(defun regression-dataset-normalize! (dataset &key test-dataset (noise-degree 1.0))
  (let* ((first-datum-array (regression-datum-array (aref dataset 0)))
         (input-dim (mat-dimension first-datum-array 0))
         (first-datum-target (regression-datum-target (aref dataset 0)))
         (output-dim (mat-dimension first-datum-target 0))
         (input-noise (make-mat input-dim :initial-element noise-degree))
         (output-noise (make-mat output-dim :initial-element noise-degree)))
    (multiple-value-bind (input-ave output-ave)
        (regression-dataset-average dataset)
      (multiple-value-bind (input-var output-var)
          (regression-dataset-variance dataset input-ave output-ave)
        (axpy! 1.0 input-noise input-var)
        (axpy! 1.0 output-noise output-var)
        (.sqrt! input-var)
        (.sqrt! output-var)
        (.inv! input-var)
        (.inv! output-var)
        (flet ((normalize! (datum)
                 (axpy! -1.0 input-ave (regression-datum-array datum))
                 (geem! 1.0 input-var (regression-datum-array datum)
                        0.0 (regression-datum-array datum))
                 (axpy! -1.0 output-ave (regression-datum-target datum))
                 (geem! 1.0 output-var (regression-datum-target datum)
                        0.0 (regression-datum-target datum))))
          (loop for datum across dataset do (normalize! datum))
          (if test-dataset
              (loop for datum across test-dataset do (normalize! datum)))
          'done)))))

Here axpy! adds matrices (more precisely, it destructively stores alpha * X + Y into Y), and geem! is the MGL-MAT function for a destructive elementwise product; there are many others, such as .sqrt! and .inv!, which destructively take the elementwise square root and reciprocal. All of these functions benefit from CUDA acceleration.
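To make the semantics concrete, here is a small sketch of what these destructive operations do, assuming MGL-MAT is loaded (the values are made up for illustration):

```lisp
;; axpy! stores alpha*X + Y into Y; geem! computes an elementwise
;; product: C := alpha * A .* B + beta * C.
(let ((x (make-mat 3 :initial-contents '(1.0 2.0 3.0)))
      (y (make-mat 3 :initial-contents '(10.0 10.0 10.0))))
  (axpy! 2.0 x y)        ; y is now #(12.0 14.0 16.0)
  (geem! 1.0 x y 0.0 y)  ; y is now #(12.0 28.0 48.0)
  y)
```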

Training and monitoring

In the classification problem, evaluation was by accuracy; in regression, we measure the RMSE (root mean squared error) between the predictions and the training signals.
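As a concrete illustration in plain Common Lisp (independent of MGL), the RMSE of a batch of prediction/target pairs is just the square root of the mean squared difference:

```lisp
;; A minimal RMSE sketch over two equal-length lists of numbers.
;; MGL's rmse-counter performs this kind of aggregation internally
;; over the monitored cost.
(defun rmse (predictions targets)
  (sqrt (/ (loop for p in predictions
                 for y in targets
                 summing (expt (- p y) 2))
           (length predictions))))

;; (rmse '(1.0 2.0 3.0) '(1.0 2.0 5.0)) ; => ~1.1547 (= sqrt 4/3)
```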

The monitor held by the learner has two slots: a measurer, which specifies how to measure, and a counter, which aggregates the measurements. The mgl-bp::cost set as the measurer represents the L2 norm of the difference between prediction and training signal. For counters there are basic-counter, which takes a plain average, and rmse-counter, which averages as an RMSE; here we use rmse-counter.

The rest is much the same as classification.

(defun log-regression-cost (optimizer learner)
  (when (zerop (n-instances optimizer))
    (report-optimization-parameters optimizer learner))
  (log-msg "train/test at n-instances: ~S (~A epochs)~%"
           (n-instances optimizer)
           (/ (n-instances optimizer) (length (training optimizer))))
  (log-padded
   (let ((bpn (bpn learner))
         (monitors (monitors learner)))
     (append
      (monitor-bpn-results (make-sampler (training optimizer) :max-n 10000)
                           bpn (list (car monitors)))
      (if (test optimizer)
          (monitor-bpn-results (make-sampler (test optimizer))
                               bpn (cdr monitors))))))
  (log-mat-room)
  (log-msg "---------------------------------------------------~%"))

(defun train-regression-fnn-with-monitor
    (fnn training &key test (n-epochs 3000) (learning-rate 0.1) (momentum 0.9))
  (let* ((optimizer
           (monitor-optimization-periodically
            (make-instance 'segmented-gd-optimizer-with-data
                           :training training
                           :test test
                           :segmenter
                           (constantly
                            (make-instance 'sgd-optimizer
                                           :learning-rate learning-rate
                                           :momentum momentum
                                           :batch-size (max-n-stripes fnn))))
            `((:fn log-regression-cost :period ,(length training))
              (:fn reset-optimization-monitors :period ,(length training) :last-eval 0))))
         (measurer (lambda (instances bpn)
                     (declare (ignore instances))
                     (mgl-bp::cost bpn)))
         (monitors
           (cons (make-instance
                  'monitor
                  :measurer measurer
                  :counter (make-instance
                            'rmse-counter
                            :prepend-attributes '(:event "rmse." :dataset "train")))
                 (if test
                     (list (make-instance
                            'monitor
                            :measurer measurer
                            :counter (make-instance
                                      'rmse-counter
                                      :prepend-attributes '(:event "rmse." :dataset "test")))))))
         (learner (make-instance 'bp-learner :bpn fnn :monitors monitors))
         (dataset (make-sampler training :n-epochs n-epochs)))
    (minimize optimizer learner :dataset dataset)
    fnn))

(defun train-regression-fnn-process-with-monitor
    (fnn training &key test (n-epochs 30) (learning-rate 0.1) (momentum 0.9)
                       without-initialize)
  (with-cuda* ()
    (repeatably ()
      (if (null without-initialize)
          (init-bpn-weights fnn :stddev 0.01))
      (train-regression-fnn-with-monitor fnn training
                                         :test test
                                         :n-epochs n-epochs
                                         :learning-rate learning-rate
                                         :momentum momentum)))
  (log-msg "End")
  fnn)

Model definition

The network definition is also nearly identical to the classification case, but the output layer is defined a little differently. For regression the output layer's activation function is the identity, so the prediction is the output part of the last ->activation object, which can be accessed with the activations-output function.

In the output layer, ->squared-difference, which represents the squared error between prediction and training signal, is combined with ->loss. The constructor of ->squared-difference is given the output part of the ->activation object and an ->input object named targets. This ->input object is where set-input stores the training signals.

For example, a network with two-dimensional input and one-dimensional output can be written like this.

(defun activations-output (activations)
  (aref (clumps activations) 3))

(defun find-last-activation (bpn)
  (let ((clumps-vec (clumps bpn)))
    (loop for i from (1- (length clumps-vec)) downto 0 do
      (let ((clump (aref clumps-vec i)))
        (typecase clump
          (->activation (return clump)))))))

(defparameter fnn-regression
  (build-fnn (:class 'regression-fnn :max-n-stripes 100)
    (inputs (->input :size 2))
    (f1-activations (->activation inputs :name 'f1 :size 1200))
    (f1 (->relu f1-activations))
    (f2-activations (->activation f1 :name 'f2 :size 1200))
    (f2 (->relu f2-activations))
    (f3-activations (->activation f2 :name 'f3 :size 1200))
    (f3 (->relu f3-activations))
    (prediction-activations (->activation f3 :name 'prediction :size 1))
    (prediction (->loss (->squared-difference
                         (activations-output prediction-activations)
                         (->input :name 'targets :size 1))
                        :name 'prediction))))

Prediction

Almost the same as before; only how the output layer's prediction is obtained differs.

(defun predict-regression-datum (fnn regression-datum)
  (let* ((a (regression-datum-array regression-datum))
         (len (mat-dimension a 0))
         (input-nodes (nodes (find-clump 'inputs fnn)))
         (output-nodes (nodes (activations-output (find-last-activation fnn)))))
    (loop for i from 0 to (1- len) do
      (setf (mref input-nodes 0 i) (mref a i)))
    (forward fnn)
    (reshape output-nodes (mat-dimension output-nodes 1))))
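A hypothetical usage sketch, assuming a trained network is bound to fnn-regression; the input values here are placeholders, and real inputs should be normalized with the same parameters as the training data:

```lisp
;; Build a throwaway datum and run it through the trained network.
(let ((datum (make-regression-datum
              :id 0
              :target (make-mat 1 :initial-element 0.0)
              :array (make-mat 2 :initial-contents '(0.5 -0.5)))))
  (predict-regression-datum fnn-regression datum))
;; => a mat holding the predicted (normalized) output
```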

Example: approximating the Rastrigin function

Let's approximate the Rastrigin function, which I used as a regression example back when I wrote the article on the two-individual distributed GA.

First we build a training dataset from the Rastrigin function's values at random inputs, and a test dataset on a grid of inputs.

Then we normalize the data. The test data must be normalized with the same normalization parameters as the training data.

Finally, we run training with the fnn-regression defined above.

(defun rastrigin (x-list)
  (let ((n (length x-list)))
    (+ (* 10 n)
       (loop for xi in x-list
             summing (- (* xi xi) (* 10 (cos (* 2 pi xi))))))))

(defparameter *rastrigin-dataset*
  (let* ((n 10000)
         (arr (make-array n)))
    (loop for i from 0 to (1- n) do
      (let ((x (- (random 10.24) 5.12))
            (y (- (random 10.24) 5.12)))
        (setf (aref arr i)
              (make-regression-datum
               :id (1+ i)
               :target (make-mat 1 :initial-element (rastrigin (list x y)))
               :array (make-mat 2 :initial-contents (list x y))))))
    arr))

(defparameter *rastrigin-test*
  (let* ((n (* 64 64))
         (arr (make-array n)))
    (loop for i from 0 to 63 do
      (loop for j from 0 to 63 do
        (let ((x (- (* i 0.16) 5.04))
              (y (- (* j 0.16) 5.04)))
          (setf (aref arr (+ (* i 64) j))
                (make-regression-datum
                 :id (1+ (+ (* i 64) j))
                 :target (make-mat 1 :initial-element (rastrigin (list x y)))
                 :array (make-mat 2 :initial-contents (list x y)))))))
    arr))

(defparameter *rastrigin-dataset-normal* (copy-regression-dataset *rastrigin-dataset*))
(defparameter *rastrigin-test-normal* (copy-regression-dataset *rastrigin-test*))

(regression-dataset-normalize! *rastrigin-dataset-normal*
                               :test-dataset *rastrigin-test-normal*
                               :noise-degree 0.0)

(train-regression-fnn-process-with-monitor fnn-regression
                                           *rastrigin-dataset-normal*
                                           :test *rastrigin-test-normal*
                                           :n-epochs 500)

A plot of training progress: the vertical axis is the RMSE (smaller is better), and the horizontal axis is the number of passes over the training data.



On the left is the original test data; on the right, the prediction.



The function is reproduced almost exactly.