1 Why record feature transformation behavior?

When we mine data with machine learning algorithms and models, things do not always go as planned. Relying on our understanding of the business, our analysis of the data, and our experience, we propose a set of features, yet after training the model some of them may turn out to carry little weight: a feature we believed to be highly relevant ends up unimportant, and we have to ask whether proposing it was reasonable in the first place. Some features may even point the opposite way: one we expected to be positively correlated comes out negatively correlated, which usually means the sample does not represent the population, or that the model is too complex and has overfit. How, then, do we compare our prior assumptions with the final result?

Linear models usually expose the attribute coef_: a coefficient greater than 0 indicates a positive correlation, and one less than 0 a negative correlation. Other models expose the attribute feature_importances_, which, as the name suggests, measures how important each feature is. With these two attributes we can compare the trained model against the correlations (or importances) we assumed beforehand. Reality, however, is less accommodating: after a series of transformations the feature matrix X no longer looks like the original data. Dummy encoding adds features, feature selection removes them, and dimensionality reduction maps them into a different space.

Should we give up? Not if we can map the final features back to the original ones; then analyzing coefficients and importances becomes meaningful again. So recording every feature transformation performed during training (or during transformation) is worthwhile. Unfortunately, sklearn does not provide such a facility yet. In this post we record the behavior of some common transformers; readers can build on this and extend it further.
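As a minimal sketch of the two attributes mentioned above (added here for illustration, not part of the original post; the estimators and dataset are only examples), the snippet below fits a linear model and a tree ensemble and prints coef_ and feature_importances_:

```python
# Minimal sketch: inspecting coef_ and feature_importances_ (example estimators).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()

# Linear model: one coefficient per feature (and per class for multi-class data);
# the sign hints at positive or negative correlation
lr = LogisticRegression().fit(iris.data, iris.target)
print(lr.coef_)

# Tree ensemble: one importance score per feature, higher means more important
rf = RandomForestClassifier(n_estimators=100).fit(iris.data, iris.target)
print(rf.feature_importances_)
```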
2 What kinds of feature transformation are there?

The post 《使用sklearn做單機特征工程》 summarizes a number of common transformers:
Grouped by whether the number of features changes, these transformer classes fall into the following categories:
For transformers that do not change the number of features, we simply keep the feature list unchanged; every transformer not handled specially below is treated this way by default. Here we focus on the transformers that do change the feature set. Grouped by the form of the mapping, they fall into the following categories:
A one-to-one mapping between original and new features usually occurs in feature selection: an original feature either becomes a new feature directly (if selected) or is dropped. Dummy encoding is the typical one-to-many mapping: an original feature that needs dummy encoding is turned into several new features. Among many-to-many mappings, PolynomialFeatures does not require every new feature to depend on every original feature; with a degree-2 polynomial transformation, for example, the squared term of the first original feature is a new feature built from that single original feature. Dimensionality reduction, by contrast, maps the original feature matrix X into a lower-dimensional space, usually through matrix multiplication, so every original feature contributes to every new feature and every new feature is built from all original features.
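The mapping information is exposed on the fitted transformers themselves. The sketch below (an illustration added here, not from the original post) shows one attribute per mapping type: get_support for feature selection, powers_ for PolynomialFeatures, and components_ for PCA:

```python
# Where each kind of mapping can be read off a fitted transformer.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import PolynomialFeatures
from sklearn.decomposition import PCA

iris = load_iris()
X, y = iris.data, iris.target

# One-to-one: boolean mask telling which original features were kept
selector = SelectKBest(chi2, k=2).fit(X, y)
print(selector.get_support())

# Many-to-many (sparse): exponent of each original feature in each new feature
poly = PolynomialFeatures(degree=2, include_bias=False).fit(X)
print(poly.powers_)            # shape (n_new_features, n_original_features)

# Many-to-many (dense): every new feature mixes all original features
pca = PCA(n_components=2).fit(X)
print(pca.components_.shape)   # (2, 4)
```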
3 Combining feature transformations

The post 《使用sklearn優(yōu)雅地進行數(shù)據(jù)挖掘》 presents a basic data-mining scenario (illustrated by a figure in that post): feature transformations are usually a combination of pipelined and parallel steps. We therefore redesign the pipeline class Pipeline and the parallel class FeatureUnion so that they record a "log" of the transformations applied by each transformer. How this log is represented matters. As that figure shows, the combined transformation process forms an acyclic network, so a network is a natural description of the log: nodes represent features and directed edges represent transformations.

To build this network we add two classes, Feature and Transform: Feature represents a node and Transform a directed edge. Python's networkx library already represents and manipulates networks very well, so is this reinventing the wheel? Not quite. Consider how to name the node for a new feature: it must not share a name with any existing node, or the two would be confused. Because sklearn may run transformers in parallel during training, building the networkx graph directly would make it hard to avoid duplicate node names. That is why the network is first described with these two new types, in which node names may repeat; a breadth-first traversal later converts it into a networkx graph, and since that traversal is serial, the current node count can be used as the sequence number of each newly added node. The two classes (feature.py) are defined as follows:

```python
import numpy as np

class Transform(object):
    def __init__(self, label, feature):
        super(Transform, self).__init__()
        # Edge label, used when drawing the graph with networkx etc.
        self.label = label
        # The node this edge points to
        self.feature = feature

class Feature(object):
    def __init__(self, name):
        super(Feature, self).__init__()
        # Node name; not unique within the network. For some mappings the name
        # is passed on directly to the new feature.
        self.name = name
        # Node label; unique within the network, used when drawing with networkx etc.
        self.label = '%s[%d]' % (self.name, id(self))
        # List of directed edges leaving this node
        self.transformList = np.array([])

    # Create a directed edge from self to feature
    def transform(self, label, feature):
        self.transformList = np.append(self.transformList, Transform(label, feature))

    # Depth-first print of the network rooted at this node
    def printTree(self):
        print self.label
        for transform in self.transformList:
            feature = transform.feature
            print '--%s-->' % transform.label,
            feature.printTree()

    def __str__(self):
        return self.label
```
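A quick usage sketch (added here for illustration; it assumes feature.py above is importable and the feature names are made up):

```python
# A quick sketch of how Feature and Transform link up.
from feature import Feature

root = Feature('root')
length = Feature('Sepal.Length')
scaled = Feature('Sepal.Length')

root.transform('init', length)            # root --init--> Sepal.Length
length.transform('MinMaxScaler', scaled)  # Sepal.Length --MinMaxScaler--> Sepal.Length

root.printTree()                          # depth-first dump of the little network
```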
4 Analyzing the sklearn source

Transformations that do not change the number of features can be logged in a uniform way: in the log network, the node of each original feature gets a directed edge to a single node representing the corresponding new feature. Transformations that do change the number of features, however, need logging (network-building) code written per transformer class. The code for the no-change case (default.py) is as follows:

```python
import numpy as np
from feature import Feature

def doWithDefault(model, featureList):
    leaves = np.array([])

    n_features = len(featureList)

    # For each original node, create a new node and link them
    for i in range(n_features):
        feature = featureList[i]
        newFeature = Feature(feature.name)
        feature.transform(model.__class__.__name__, newFeature)
        leaves = np.append(leaves, newFeature)

    # Return the list of new nodes; it is called leaves because these are the
    # frontier nodes of the network
    return leaves
```

4.1 One-to-one mappings

When the mapping is one-to-one, the transformer is usually a feature selector: an original feature is either turned into exactly one new feature or dropped. Reading the sklearn source shows that the feature selection classes all mix in sklearn.feature_selection.base.SelectorMixin, so they all provide the get_support method, which reports which features were selected.

When logging, we therefore check whether the transformer mixes in SelectorMixin; if so, we call get_support to obtain the mask (or indices) of the selected features, and draw an edge from each selected feature to its new feature. The code (one2one.py) is as follows:

```python
import numpy as np
from sklearn.feature_selection.base import SelectorMixin
from feature import Feature

def doWithSelector(model, featureList):
    assert(isinstance(model, SelectorMixin))

    leaves = np.array([])

    n_features = len(featureList)

    # Mask of the selected features
    mask_features = model.get_support()

    for i in range(n_features):
        feature = featureList[i]
        # The original node was selected: create a new node and link them
        if mask_features[i]:
            newFeature = Feature(feature.name)
            feature.transform(model.__class__.__name__, newFeature)
            leaves = np.append(leaves, newFeature)
        # The original node was dropped: create a node named Abandoned and link
        # them, but do not add it to the list of nodes that keep growing
        else:
            newFeature = Feature('Abandoned')
            feature.transform(model.__class__.__name__, newFeature)

    return leaves
```

4.2 One-to-many mappings

OneHotEncoder is the typical one-to-many transformer. Two fitted attributes, feature_indices_ and active_features_, combined with the parameters categorical_features and n_values, describe the transformation: feature_indices_ gives the range of output columns owned by each categorical feature, active_features_ (available when n_values='auto') lists the output columns whose values actually occur in the training data, categorical_features marks which input columns are categorical, and n_values controls how many values each categorical feature may take.
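For a quick look at what these attributes contain, here is a sketch on a made-up two-column array; it assumes the legacy OneHotEncoder API targeted by this post (releases that still expose feature_indices_ and active_features_):

```python
# Sketch of the legacy OneHotEncoder attributes used by one2many.py.
import numpy as np
from sklearn.preprocessing import OneHotEncoder

X = np.array([[0, 1],
              [1, 0],
              [2, 1]])

enc = OneHotEncoder(sparse=False).fit(X)
print(enc.feature_indices_)     # e.g. [0 3 5]: column 0 owns outputs 0..2, column 1 owns 3..4
print(enc.active_features_)     # output columns whose values occur in the training data
print(enc.transform([[1, 0]]))  # one row expanded into 5 dummy columns
```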
Putting this together, the code handling OneHotEncoder (one2many.py) is as follows:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from feature import Feature

def doWithOneHotEncoder(model, featureList):
    assert(isinstance(model, OneHotEncoder))
    assert(hasattr(model, 'feature_indices_'))

    leaves = np.array([])

    n_features = len(featureList)

    # Mask of the categorical features
    if model.categorical_features == 'all':
        mask_features = np.ones(n_features)
    else:
        mask_features = np.zeros(n_features)
        mask_features[model.categorical_features] = 1

    # Number of categorical features
    n_qualitativeFeatures = len(model.feature_indices_) - 1
    # If the number of values per categorical feature is determined automatically
    # from the training data
    if model.n_values == 'auto':
        # Number of active (actually occurring) values
        n_activeFeatures = len(model.active_features_)
    # j indexes the categorical features, k indexes the active values
    j = k = 0
    for i in range(n_features):
        feature = featureList[i]
        # Categorical feature
        if mask_features[i]:
            if model.n_values == 'auto':
                # Create one new node per active value of the j-th categorical
                # feature and link it to the original node
                while k < n_activeFeatures and model.active_features_[k] < model.feature_indices_[j+1]:
                    newFeature = Feature(feature.name)
                    feature.transform('%s[%d]' % (model.__class__.__name__, model.active_features_[k] - model.feature_indices_[j]), newFeature)
                    leaves = np.append(leaves, newFeature)
                    k += 1
            else:
                # Create one new node per possible value of the j-th categorical
                # feature and link it to the original node
                for k in range(model.feature_indices_[j], model.feature_indices_[j+1]):
                    newFeature = Feature(feature.name)
                    feature.transform('%s[%d]' % (model.__class__.__name__, k - model.feature_indices_[j]), newFeature)
                    leaves = np.append(leaves, newFeature)
            j += 1
        # Not a categorical feature: the new node is generated directly from the
        # original node
        else:
            newFeature = Feature(feature.name)
            feature.transform('%s[r]' % model.__class__.__name__, newFeature)
            leaves = np.append(leaves, newFeature)

    return leaves
```

4.3 Many-to-many mappings

PCA is the typical many-to-many transformer; its fitted attribute n_components_ gives the number of new features after the transformation. As discussed earlier, a dimensionality-reduction transformer maps every original feature to all new features and builds every new feature from all original features. The code handling PCA (many2many.py) is therefore:

```python
import numpy as np
from sklearn.decomposition import PCA
from feature import Feature

def doWithPCA(model, featureList):
    leaves = np.array([])

    n_features = len(featureList)

    # Create one new node per principal component
    for i in range(model.n_components_):
        newFeature = Feature(model.__class__.__name__)
        leaves = np.append(leaves, newFeature)

    # Link every original node to every new node
    for i in range(n_features):
        feature = featureList[i]
        for j in range(model.n_components_):
            newFeature = leaves[j]
            feature.transform(model.__class__.__name__, newFeature)

    return leaves
```
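As a small usage sketch (added here; it assumes feature.py and many2many.py above are importable), fitting PCA on the iris data and passing in four Feature nodes yields two new nodes, each linked from every original node:

```python
# Usage sketch for doWithPCA with hand-built Feature nodes.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from feature import Feature
from many2many import doWithPCA

iris = load_iris()
pca = PCA(n_components=2).fit(iris.data)

featureList = [Feature(name) for name in
               ['Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width']]
leaves = doWithPCA(pca, featureList)

print(len(leaves))          # 2 new nodes, one per principal component
featureList[0].printTree()  # every original node links to both new nodes
```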
5 Putting it into practice

We can now turn to the pipeline and parallel-processing modules themselves. To keep the core functionality of Pipeline and FeatureUnion intact, we derive two subclasses, PipelineExt and FeatureUnionExt. Both gain a method getFeatureList, whose single parameter featureList is the list of features (feature.Feature objects) entering the pipeline or parallel step, and which returns the feature list after that step. The internal function _doWithModel, called by getFeatureList, is the common entry point: it dispatches to the appropriate handler (defined in one2one.py, one2many.py and many2many.py) according to the type of each transformer. We also need an initRoot function that initializes the network and returns its root node. Finally, we read the custom network into networkx and display it graphically with matplotlib. This part of the code (ple.py) is as follows:

```python
from sklearn.feature_selection.base import SelectorMixin
from sklearn.preprocessing import OneHotEncoder
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline, FeatureUnion, _fit_one_transformer, _fit_transform_one, _transform_one
from sklearn.externals.joblib import Parallel, delayed
from scipy import sparse
import numpy as np
import networkx as nx
from matplotlib import pyplot as plt
from default import doWithDefault
from one2one import doWithSelector
from one2many import doWithOneHotEncoder
from many2many import doWithPCA
from feature import Feature

# Derived Pipeline class
class PipelineExt(Pipeline):
    def _pre_get_features(self, featureList):
        leaves = featureList
        for name, transform in self.steps[:-1]:
            leaves = _doWithModel(transform, leaves)
        return leaves

    # getFeatureList method
    def getFeatureList(self, featureList):
        leaves = self._pre_get_features(featureList)
        model = self.steps[-1][-1]
        if hasattr(model, 'fit_transform') or hasattr(model, 'transform'):
            leaves = _doWithModel(model, leaves)
        return leaves

# Derived FeatureUnion class: besides recording transformations, it also lets each
# parallel branch work on its own subset of the columns (idx_list)
class FeatureUnionExt(FeatureUnion):
    def __init__(self, transformer_list, idx_list, n_jobs=1, transformer_weights=None):
        self.idx_list = idx_list
        FeatureUnion.__init__(self, transformer_list=map(lambda trans:(trans[0], trans[1]), transformer_list), n_jobs=n_jobs, transformer_weights=transformer_weights)

    def fit(self, X, y=None):
        transformer_idx_list = map(lambda trans, idx:(trans[0], trans[1], idx), self.transformer_list, self.idx_list)
        transformers = Parallel(n_jobs=self.n_jobs)(
            delayed(_fit_one_transformer)(trans, X[:,idx], y)
            for name, trans, idx in transformer_idx_list)
        self._update_transformer_list(transformers)
        return self

    def fit_transform(self, X, y=None, **fit_params):
        transformer_idx_list = map(lambda trans, idx:(trans[0], trans[1], idx), self.transformer_list, self.idx_list)
        result = Parallel(n_jobs=self.n_jobs)(
            delayed(_fit_transform_one)(trans, name, X[:,idx], y,
                                        self.transformer_weights, **fit_params)
            for name, trans, idx in transformer_idx_list)

        Xs, transformers = zip(*result)
        self._update_transformer_list(transformers)
        if any(sparse.issparse(f) for f in Xs):
            Xs = sparse.hstack(Xs).tocsr()
        else:
            Xs = np.hstack(Xs)
        return Xs

    def transform(self, X):
        transformer_idx_list = map(lambda trans, idx:(trans[0], trans[1], idx), self.transformer_list, self.idx_list)
        Xs = Parallel(n_jobs=self.n_jobs)(
            delayed(_transform_one)(trans, name, X[:,idx], self.transformer_weights)
            for name, trans, idx in transformer_idx_list)
        if any(sparse.issparse(f) for f in Xs):
            Xs = sparse.hstack(Xs).tocsr()
        else:
            Xs = np.hstack(Xs)
        return Xs

    # getFeatureList method
    def getFeatureList(self, featureList):
        transformer_idx_list = map(lambda trans, idx:(trans[0], trans[1], idx), self.transformer_list, self.idx_list)
        leaves = np.array(Parallel(n_jobs=self.n_jobs)(
            delayed(_doWithModel)(trans, featureList[idx])
            for name, trans, idx in transformer_idx_list))
        leaves = np.hstack(leaves)
        return leaves

# Common entry point for logging: dispatch to the handler matching the transformer type
def _doWithModel(model, featureList):
    if isinstance(model, SelectorMixin):
        return doWithSelector(model, featureList)
    elif isinstance(model, OneHotEncoder):
        return doWithOneHotEncoder(model, featureList)
    elif isinstance(model, PCA):
        return doWithPCA(model, featureList)
    elif isinstance(model, FeatureUnionExt) or isinstance(model, PipelineExt):
        return model.getFeatureList(featureList)
    else:
        return doWithDefault(model, featureList)

# Initialize the root of the network; the argument is the list of original feature names
def initRoot(featureNameList):
    root = Feature('root')
    for featureName in featureNameList:
        newFeature = Feature(featureName)
        root.transform('init', newFeature)
    return root
```
Now let us verify the result, reusing the scenario from the post 《使用sklearn優(yōu)雅地進行數(shù)據(jù)挖掘》:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import Imputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import FunctionTransformer
from sklearn.preprocessing import Binarizer
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline, FeatureUnion
from ple import PipelineExt, FeatureUnionExt, initRoot

def datamining(iris, featureList):
    step1 = ('Imputer', Imputer())
    step2_1 = ('OneHotEncoder', OneHotEncoder(sparse=False))
    step2_2 = ('ToLog', FunctionTransformer(np.log1p))
    step2_3 = ('ToBinary', Binarizer())
    step2 = ('FeatureUnionExt', FeatureUnionExt(transformer_list=[step2_1, step2_2, step2_3], idx_list=[[0], [1, 2, 3], [4]]))
    step3 = ('MinMaxScaler', MinMaxScaler())
    step4 = ('SelectKBest', SelectKBest(chi2, k=3))
    step5 = ('PCA', PCA(n_components=2))
    step6 = ('LogisticRegression', LogisticRegression(penalty='l2'))
    pipeline = PipelineExt(steps=[step1, step2, step3, step4, step5, step6])
    pipeline.fit(iris.data, iris.target)
    # The final feature list
    leaves = pipeline.getFeatureList(featureList)
    # Print each final feature with its coefficients (one per class)
    for i in range(len(leaves)):
        print leaves[i], pipeline.steps[-1][-1].coef_[:, i]

def main():
    iris = load_iris()
    # Prepend a random categorical 'color' column and append a row of NaNs so that
    # OneHotEncoder and Imputer both have work to do
    iris.data = np.hstack((np.random.choice([0, 1, 2], size=iris.data.shape[0]+1).reshape(-1,1), np.vstack((iris.data, np.full(4, np.nan).reshape(1,-1)))))
    iris.target = np.hstack((iris.target, np.array([np.median(iris.target)])))
    root = initRoot(['color', 'Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width'])
    featureList = np.array([transform.feature for transform in root.transformList])

    datamining(iris, featureList)

    root.printTree()

if __name__ == '__main__':
    main()
```

Running the program prints the final features with their corresponding coefficients, followed by a depth-first dump of the network structure (both shown as screenshots in the original post).

To present the network formed by the transformations more clearly, we can also build a directed graph with networkx and display it with matplotlib (ple.py, continued):

```python
# Depth-first recursion that builds the networkx directed graph
def _draw(G, root, nodeLabelDict, edgeLabelDict):
    nodeLabelDict[root.label] = root.name
    for transform in root.transformList:
        G.add_edge(root.label, transform.feature.label)
        edgeLabelDict[(root.label, transform.feature.label)] = transform.label
        _draw(G, transform.feature, nodeLabelDict, edgeLabelDict)

# Check whether the graph contains a cycle
def _isCyclic(root, walked):
    if root in walked:
        return True
    else:
        walked.add(root)
        for transform in root.transformList:
            ret = _isCyclic(transform.feature, walked)
            if ret:
                return True
        walked.remove(root)
        return False

# Breadth-first traversal that produces a waterfall layout
def fall_layout(root, x_space=1, y_space=1):
    layout = {}
    if _isCyclic(root, set()):
        raise Exception('Graph is cyclic')

    queue = [None, root]
    nodeDict = {}
    levelDict = {}
    level = 0
    while len(queue) > 0:
        head = queue.pop()
        if head is None:
            if len(queue) > 0:
                level += 1
                queue.insert(0, None)
        else:
            if head in nodeDict:
                levelDict[nodeDict[head]].remove(head)
            nodeDict[head] = level
            levelDict[level] = levelDict.get(level, []) + [head]
            for transform in head.transformList:
                queue.insert(0, transform.feature)

    for level in levelDict.keys():
        nodeList = levelDict[level]
        n_nodes = len(nodeList)
        offset = - n_nodes / 2
        for i in range(n_nodes):
            layout[nodeList[i].label] = (level * x_space, (i + offset) * y_space)

    return layout

def draw(root):
    G = nx.DiGraph()
    nodeLabelDict = {}
    edgeLabelDict = {}

    _draw(G, root, nodeLabelDict, edgeLabelDict)
    # Waterfall layout for the network
    pos = fall_layout(root)

    nx.draw_networkx_nodes(G, pos, node_size=100, node_color="white")
    nx.draw_networkx_edges(G, pos, width=1, alpha=0.5, edge_color='black')
    # Node labels and their style
    nx.draw_networkx_labels(G, pos, labels=nodeLabelDict, font_size=10, font_family='sans-serif')
    # Edge labels and their style
    nx.draw_networkx_edge_labels(G, pos, edgeLabelDict)

    plt.show()
```
The network structure displayed graphically (screenshot in the original post).

6 Summary

As the sharp-eyed reader will have noticed, the best moment to record a feature transformation is while the transformation itself is being performed. Unfortunately, sklearn does not support that at present. In this post I have folded the functionality into the pipeline and parallel-processing modules, which is only a stopgap, but better than nothing. This post is also just a starting point: plenty of other transformer classes define their own kinds of mappings between original and new features. I have therefore created a repository on GitHub containing the handler code for all the transformers used in these examples, and I will keep filling in the gaps until the end of time, or until sklearn ships this feature itself.

7 References