Project author: iqiukp

Project description: Python code for anomaly detection or fault detection using Support Vector Data Description (SVDD)
Language: Python
Repository: git://github.com/iqiukp/SVDD.git
Created: 2020-03-25T08:44:27Z
Community: https://github.com/iqiukp/SVDD

License: MIT License


Support Vector Data Description (SVDD)

Python code for anomaly detection or fault detection using Support Vector Data Description (SVDD)


Version 1.1, 11-NOV-2021


Email: iqiukp@outlook.com



Main features

  • SVDD BaseEstimator based on sklearn.base for one-class or binary classification
  • Multiple kinds of kernel functions (linear, Gaussian, polynomial, sigmoid)
  • Visualization of decision boundaries for 2D data

Requirements

  • cvxopt
  • matplotlib
  • numpy
  • scikit-learn
  • scikit-opt (optional, only used for parameter optimization)

Notices

  • The label must be 1 for a positive sample and -1 for a negative sample (see the snippet after this list).
  • For detailed applications, please see the examples.
  • This code is for reference only.
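
For reference, a minimal sketch of the label convention; it mirrors the label construction used in the KPCA example below:

  import numpy as np

  # column vector of labels: 50 positive samples (1) followed by
  # 50 negative samples (-1)
  y = np.append(np.ones((50, 1), dtype=np.int64),
                -np.ones((50, 1), dtype=np.int64),
                axis=0)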

Examples

01. svdd_example_unlabeled_data.py

An example of SVDD model fitting using unlabeled data.
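
The script itself is not reproduced here; the following is a minimal sketch of the unlabeled workflow. It assumes, consistent with the labeled examples below, that fit, predict, plot_boundary, and get_distance also accept X without labels:

  import sys
  sys.path.append("..")
  import numpy as np
  from src.BaseSVDD import BaseSVDD

  # 100 unlabeled points with 2 dimensions (synthetic data for illustration)
  X = np.random.randn(100, 2)

  # create and fit the SVDD model without labels
  svdd = BaseSVDD(C=0.9, gamma=0.3, kernel='rbf', display='on')
  svdd.fit(X)

  # predicted labels: 1 inside the boundary, -1 outside
  y_predict = svdd.predict(X)

  # visualize the decision boundary and the distance to the sphere center
  svdd.plot_boundary(X)
  radius = svdd.radius
  distance = svdd.get_distance(X)
  svdd.plot_distance(radius, distance)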




02. svdd_example_hybrid_data.py

An example of SVDD model fitting with negative samples.

  import sys
  sys.path.append("..")
  from src.BaseSVDD import BaseSVDD, BananaDataset

  # Banana-shaped dataset generation and partitioning
  X, y = BananaDataset.generate(number=100, display='on')
  X_train, X_test, y_train, y_test = BananaDataset.split(X, y, ratio=0.3)

  # create an SVDD object with an RBF kernel
  svdd = BaseSVDD(C=0.9, gamma=0.3, kernel='rbf', display='on')

  # fit the SVDD model
  svdd.fit(X_train, y_train)

  # plot the fitted boundary on the training data
  svdd.plot_boundary(X_train, y_train)

  # predict the test data
  y_test_predict = svdd.predict(X_test, y_test)

  # plot the distance of the test data to the sphere center
  radius = svdd.radius
  distance = svdd.get_distance(X_test)
  svdd.plot_distance(radius, distance)




03. svdd_example_kernel.py

An example of SVDD model fitting using different kernel functions.

  import sys
  sys.path.append("..")
  from src.BaseSVDD import BaseSVDD, BananaDataset

  # Banana-shaped dataset generation and partitioning
  X, y = BananaDataset.generate(number=100, display='on')
  X_train, X_test, y_train, y_test = BananaDataset.split(X, y, ratio=0.3)

  # SVDD models with different kernels
  kernelList = {"1": BaseSVDD(C=0.9, kernel='rbf', gamma=0.3, display='on'),
                "2": BaseSVDD(C=0.9, kernel='poly', degree=2, display='on'),
                "3": BaseSVDD(C=0.9, kernel='linear', display='on')}

  # fit each model and plot its decision boundary
  for svdd in kernelList.values():
      svdd.fit(X_train, y_train)
      svdd.plot_boundary(X_train, y_train)





04. svdd_example_KPCA.py

An example of SVDD model fitting using nonlinear principal components.

The KPCA algorithm is used to reduce the dimensionality of the original data.

  import sys
  sys.path.append("..")
  import numpy as np
  from src.BaseSVDD import BaseSVDD
  from sklearn.decomposition import KernelPCA

  # create 100 points with 5 dimensions
  X = np.r_[np.random.randn(50, 5) + 1, np.random.randn(50, 5)]
  y = np.append(np.ones((50, 1), dtype=np.int64),
                -np.ones((50, 1), dtype=np.int64),
                axis=0)

  # reduce the data to 2 dimensions using KPCA
  kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1, fit_inverse_transform=True)
  X_kpca = kpca.fit_transform(X)

  # create the SVDD model
  svdd = BaseSVDD(C=0.9, gamma=10, kernel='rbf', display='on')

  # fit and predict
  svdd.fit(X_kpca, y)
  y_test_predict = svdd.predict(X_kpca, y)

  # plot the distance curve
  radius = svdd.radius
  distance = svdd.get_distance(X_kpca)
  svdd.plot_distance(radius, distance)

  # plot the boundary
  svdd.plot_boundary(X_kpca, y)




05. svdd_example_PSO.py

An example of parameter optimization using particle swarm optimization (PSO).

The scikit-opt package is required for this example: https://github.com/guofei9987/scikit-opt

  import sys
  sys.path.append("..")
  from src.BaseSVDD import BaseSVDD, BananaDataset
  from sko.PSO import PSO
  import matplotlib.pyplot as plt

  # Banana-shaped dataset generation and partitioning
  X, y = BananaDataset.generate(number=100, display='off')
  X_train, X_test, y_train, y_test = BananaDataset.split(X, y, ratio=0.3)

  # objective function: 1 - training accuracy for parameters (C, gamma)
  def objective_func(x):
      x1, x2 = x
      svdd = BaseSVDD(C=x1, gamma=x2, kernel='rbf', display='off')
      return 1 - svdd.fit(X_train, y_train).accuracy

  # run PSO over C in [0.01, 1] and gamma in [0.01, 3]
  pso = PSO(func=objective_func, n_dim=2, pop=10, max_iter=20,
            lb=[0.01, 0.01], ub=[1, 3], w=0.8, c1=0.5, c2=0.5)
  pso.run()
  print('best_x is', pso.gbest_x)
  print('best_y is', pso.gbest_y)

  # plot the optimization history
  fig = plt.figure(figsize=(6, 4))
  ax = fig.add_subplot(1, 1, 1)
  ax.plot(pso.gbest_y_hist)
  ax.yaxis.grid()
  plt.show()



06. svdd_example_confusion_matrix.py

An example of drawing the confusion matrix and ROC curve.
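
The script itself is not reproduced here; below is a minimal sketch using scikit-learn's metrics. It assumes that the hard predictions from predict and the (negated) distances returned by get_distance can serve as predicted labels and decision scores, respectively:

  import sys
  sys.path.append("..")
  import matplotlib.pyplot as plt
  from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, roc_curve, auc
  from src.BaseSVDD import BaseSVDD, BananaDataset

  # Banana-shaped dataset generation and partitioning
  X, y = BananaDataset.generate(number=100, display='off')
  X_train, X_test, y_train, y_test = BananaDataset.split(X, y, ratio=0.3)

  # fit the SVDD model
  svdd = BaseSVDD(C=0.9, gamma=0.3, kernel='rbf', display='off')
  svdd.fit(X_train, y_train)

  # confusion matrix from the hard predictions
  y_pred = svdd.predict(X_test)
  cm = confusion_matrix(y_test.ravel(), y_pred.ravel(), labels=[1, -1])
  ConfusionMatrixDisplay(cm, display_labels=[1, -1]).plot()

  # ROC curve: negate the distance to the sphere center so that
  # larger scores mean "more likely positive"
  scores = -svdd.get_distance(X_test).ravel()
  fpr, tpr, _ = roc_curve(y_test.ravel(), scores, pos_label=1)
  plt.figure()
  plt.plot(fpr, tpr, label='AUC = %0.3f' % auc(fpr, tpr))
  plt.xlabel('False positive rate')
  plt.ylabel('True positive rate')
  plt.legend()
  plt.show()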




07. svdd_example_cross_validation.py

An example of k-fold cross validation.

  import sys
  sys.path.append("..")
  from src.BaseSVDD import BaseSVDD, BananaDataset
  from sklearn.model_selection import cross_val_score

  # Banana-shaped dataset generation and partitioning
  X, y = BananaDataset.generate(number=100, display='on')
  X_train, X_test, y_train, y_test = BananaDataset.split(X, y, ratio=0.3)

  # create an SVDD object
  svdd = BaseSVDD(C=0.9, gamma=0.3, kernel='rbf', display='on')

  # cross validation (k-fold)
  k = 5
  scores = cross_val_score(svdd, X_train, y_train, cv=k, scoring='accuracy')

  # print the score of each fold and the mean score
  print("Cross validation scores:")
  for score_ in scores:
      print(score_)
  print("Mean cross validation score: {:4f}".format(scores.mean()))

Results

  Cross validation scores:
  0.5714285714285714
  0.75
  0.9642857142857143
  1.0
  1.0
  Mean cross validation score: 0.857143

08. svdd_example_grid_search.py

An example of parameter selection using grid search.

  import sys
  sys.path.append("..")
  from src.BaseSVDD import BaseSVDD, BananaDataset
  from sklearn.model_selection import GridSearchCV

  # Banana-shaped dataset generation and partitioning
  X, y = BananaDataset.generate(number=100, display='off')
  X_train, X_test, y_train, y_test = BananaDataset.split(X, y, ratio=0.3)

  # candidate parameter grids for each kernel
  param_grid = [
      {"kernel": ["rbf"], "gamma": [0.1, 0.2, 0.5], "C": [0.1, 0.5, 1]},
      {"kernel": ["linear"], "C": [0.1, 0.5, 1]},
      {"kernel": ["poly"], "C": [0.1, 0.5, 1], "degree": [2, 3, 4, 5]},
  ]

  # grid search with 5-fold cross validation
  svdd = GridSearchCV(BaseSVDD(display='off'), param_grid, cv=5, scoring="accuracy")
  svdd.fit(X_train, y_train)
  print("best parameters:")
  print(svdd.best_params_)
  print("\n")

  # report the mean and std of the test score for each parameter setting
  best_model = svdd.best_estimator_
  means = svdd.cv_results_["mean_test_score"]
  stds = svdd.cv_results_["std_test_score"]
  for mean, std, params in zip(means, stds, svdd.cv_results_["params"]):
      print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
  print()

Results

  best parameters:
  {'C': 0.5, 'gamma': 0.1, 'kernel': 'rbf'}

  0.921 (+/-0.159) for {'C': 0.1, 'gamma': 0.1, 'kernel': 'rbf'}
  0.893 (+/-0.192) for {'C': 0.1, 'gamma': 0.2, 'kernel': 'rbf'}
  0.857 (+/-0.296) for {'C': 0.1, 'gamma': 0.5, 'kernel': 'rbf'}
  0.950 (+/-0.086) for {'C': 0.5, 'gamma': 0.1, 'kernel': 'rbf'}
  0.921 (+/-0.131) for {'C': 0.5, 'gamma': 0.2, 'kernel': 'rbf'}
  0.864 (+/-0.273) for {'C': 0.5, 'gamma': 0.5, 'kernel': 'rbf'}
  0.950 (+/-0.086) for {'C': 1, 'gamma': 0.1, 'kernel': 'rbf'}
  0.921 (+/-0.131) for {'C': 1, 'gamma': 0.2, 'kernel': 'rbf'}
  0.864 (+/-0.273) for {'C': 1, 'gamma': 0.5, 'kernel': 'rbf'}
  0.807 (+/-0.246) for {'C': 0.1, 'kernel': 'linear'}
  0.821 (+/-0.278) for {'C': 0.5, 'kernel': 'linear'}
  0.793 (+/-0.273) for {'C': 1, 'kernel': 'linear'}
  0.879 (+/-0.184) for {'C': 0.1, 'degree': 2, 'kernel': 'poly'}
  0.836 (+/-0.305) for {'C': 0.1, 'degree': 3, 'kernel': 'poly'}
  0.771 (+/-0.416) for {'C': 0.1, 'degree': 4, 'kernel': 'poly'}
  0.757 (+/-0.448) for {'C': 0.1, 'degree': 5, 'kernel': 'poly'}
  0.871 (+/-0.224) for {'C': 0.5, 'degree': 2, 'kernel': 'poly'}
  0.814 (+/-0.311) for {'C': 0.5, 'degree': 3, 'kernel': 'poly'}
  0.800 (+/-0.390) for {'C': 0.5, 'degree': 4, 'kernel': 'poly'}
  0.764 (+/-0.432) for {'C': 0.5, 'degree': 5, 'kernel': 'poly'}
  0.871 (+/-0.224) for {'C': 1, 'degree': 2, 'kernel': 'poly'}
  0.850 (+/-0.294) for {'C': 1, 'degree': 3, 'kernel': 'poly'}
  0.800 (+/-0.390) for {'C': 1, 'degree': 4, 'kernel': 'poly'}
  0.771 (+/-0.416) for {'C': 1, 'degree': 5, 'kernel': 'poly'}