This article is reposted from AI源创评论.
# Simple random sampling: draw 100 rows from the DataFrame without replacement
sample_df = df.sample(100)
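The one-liner above assumes a DataFrame df already exists. A minimal self-contained sketch (the DataFrame contents here are made up purely for illustration):

import pandas as pd

# Toy DataFrame standing in for the article's df
df = pd.DataFrame({"user_id": range(1000), "age": [20 + i % 50 for i in range(1000)]})

# Simple random sampling: 100 rows without replacement;
# random_state makes the draw reproducible
sample_df = df.sample(n=100, random_state=42)
print(sample_df.shape)  # (100, 2)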

Town A has 1 million workers,
Town B has 2 million workers, and
Town C has 3 million retirees.
from sklearn.model_selection import train_test_split

# Stratified split: stratify=y preserves the class proportions of y
# in both the train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.25)
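For the towns example above, stratified sampling means drawing from each town in proportion to its size rather than from the pooled population. A small sketch of that idea with pandas (the table and column names are illustrative, not from the original code):

import pandas as pd

# Hypothetical population table, scaled down but keeping the 1:2:3 ratio of the towns
population = pd.DataFrame({
    "town": ["A"] * 100 + ["B"] * 200 + ["C"] * 300
})

# Draw 10% from every stratum, so the sample keeps the 1:2:3 proportions
stratified = population.groupby("town").sample(frac=0.10, random_state=42)
print(stratified["town"].value_counts())  # A: 10, B: 20, C: 30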

Suppose you have a stream of items whose length is so large and unknown that we can only iterate over it once.
Devise an algorithm that picks an item at random from this stream, such that every item is equally likely to be chosen.
import random

def generator(max):
    number = 1
    while number < max:
        number += 1
        yield number

# Create as stream generator
stream = generator(10000)

# Doing Reservoir Sampling from the stream
k = 5
reservoir = []
for i, element in enumerate(stream):
    if i + 1 <= k:
        reservoir.append(element)
    else:
        probability = k / (i + 1)
        if random.random() < probability:
            # Select item in stream and remove one of the k items already selected
            reservoir[random.choice(range(0, k))] = element

print(reservoir)
------------------------------------
[1369, 4108, 9986, 828, 5589]
Consider, for example, a reservoir of size 2 after three items have arrived. The probability of the first item being removed is the probability of item 3 being selected, multiplied by the probability of item 1 being randomly chosen as the replacement candidate among the 2 items in the reservoir. That probability is:
2/3 * 1/2 = 1/3
Therefore, the probability that item 1 ends up selected is:
1 - 1/3 = 2/3
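As a sanity check on this argument, here is a small simulation (not from the original article) that runs reservoir sampling many times over a short stream and counts how often each item lands in the reservoir; the frequencies should come out roughly equal:

import random
from collections import Counter

def reservoir_sample(stream, k):
    # Standard reservoir sampling: keep the first k items, then replace
    # a random slot with item i+1 with probability k/(i+1)
    reservoir = []
    for i, element in enumerate(stream):
        if i < k:
            reservoir.append(element)
        elif random.random() < k / (i + 1):
            reservoir[random.randrange(k)] = element
    return reservoir

counts = Counter()
trials = 100_000
for _ in range(trials):
    counts.update(reservoir_sample(range(10), k=2))

# Each of the 10 items should land in the size-2 reservoir about
# trials * 2/10 = 20,000 times
print(sorted(counts.items()))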
import pandas as pd
from sklearn.datasets import make_classification

# Build an imbalanced binary dataset: roughly 90% class 0 and 10% class 1
X, y = make_classification(
    n_classes=2, class_sep=1.5, weights=[0.9, 0.1],
    n_informative=3, n_redundant=1, flip_y=0,
    n_features=20, n_clusters_per_class=1,
    n_samples=100, random_state=10
)
X = pd.DataFrame(X)
X['target'] = y
num_0 = len(X[X['target'] == 0])
num_1 = len(X[X['target'] == 1])
print(num_0, num_1)

# random undersample: shrink the majority class down to the minority size
undersampled_data = pd.concat([X[X['target'] == 0].sample(num_1), X[X['target'] == 1]])
print(len(undersampled_data))

# random oversample: resample the minority class (with replacement) up to the majority size
oversampled_data = pd.concat([X[X['target'] == 0], X[X['target'] == 1].sample(num_0, replace=True)])
print(len(oversampled_data))
------------------------------------------------------------
OUTPUT:
90 10
20
180
A Tomek link is a pair of samples from opposite classes that are each other's nearest neighbours. In this algorithm, we end up removing the majority-class member of each Tomek link, which gives the classifier a cleaner decision boundary.
from imblearn.under_sampling import TomekLinks

# Note: this uses the older imbalanced-learn API; recent releases replaced
# ratio with sampling_strategy and fit_sample with fit_resample
tl = TomekLinks(return_indices=True, ratio='majority')
X_tl, y_tl, id_tl = tl.fit_sample(X, y)
from imblearn.over_sampling import SMOTE

smote = SMOTE(ratio='minority')
X_sm, y_sm = smote.fit_sample(X, y)
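The two snippets above follow the older imbalanced-learn API. On recent releases of the library, where ratio and fit_sample were replaced by sampling_strategy and fit_resample, an equivalent sketch looks like this (treat the exact version boundary as an assumption):

from imblearn.under_sampling import TomekLinks
from imblearn.over_sampling import SMOTE

# Undersample: drop the majority-class member of each Tomek link
tl = TomekLinks(sampling_strategy='majority')
X_tl, y_tl = tl.fit_resample(X, y)

# Oversample: synthesize new minority-class points with SMOTE
smote = SMOTE(sampling_strategy='minority')
X_sm, y_sm = smote.fit_resample(X, y)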

