Security of Linear Regression Models
In machine learning, a poisoning attack is one in which an adversary deliberately injects malicious data into the dataset used to train a model, with the goal of causing the model to make incorrect predictions. This can have serious consequences wherever a model's predictions inform important decisions, such as in healthcare, finance, or security. The focus is on developing a new poisoning attack algorithm, named gdpa, that produces larger errors than previous attack algorithms at the same proportion of poisoned data points. To counter this attack, a new defense algorithm, named ebda, is also proposed. The proposed attack and defense algorithms are evaluated on datasets of housing prices, loans, and pharmaceuticals. The results demonstrate that the gdpa attack algorithm can effectively compromise the accuracy of machine learning models, and that the ebda defense algorithm mitigates the effects of the attack. Overall, the paper highlights the importance of securing machine learning models against potential attacks, and provides new insights and techniques for improving the robustness of such models.
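To make the threat model concrete, the following is a minimal sketch of data poisoning against ordinary least-squares regression. It is not the paper's gdpa algorithm: it simply injects a small fraction of crudely chosen adversarial points (rather than optimizing them) to show how even 5% poisoned data can drag the fitted slope far from the true trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: y ≈ 2x + 1 with small Gaussian noise
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 100)

def fit(x, y):
    """Ordinary least squares for y = w*x + b."""
    A = np.column_stack([x, np.ones_like(x)])
    w, b = np.linalg.lstsq(A, y, rcond=None)[0]
    return w, b

w_clean, b_clean = fit(x, y)

# Poisoning: inject 5 adversarial points (5% of the training set)
# with labels placed far below the trend to drag the slope down.
# An optimized attack like gdpa would choose these points to
# maximize the induced error instead of using fixed values.
x_p = np.full(5, 10.0)
y_p = np.full(5, -50.0)
w_pois, b_pois = fit(np.concatenate([x, x_p]),
                     np.concatenate([y, y_p]))

print(f"clean slope   : {w_clean:.2f}")
print(f"poisoned slope: {w_pois:.2f}")  # far below the true slope of 2
```

Because least squares penalizes squared residuals, a handful of high-leverage outliers dominates the loss, which is exactly why defenses such as the proposed ebda try to detect and discard suspicious training points before fitting.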
| Year of publication: | 2024 |
|---|---|
| Authors: | Veena, S. T.; Hariharan, S.; Girivasan, R. |
| Published in: | Leveraging Futuristic Machine Learning and Next-Generational Security for e-Governance. - IGI Global Scientific Publishing, ISBN 9798369378854. - 2024, p. 183-206 |
Similar items by person
- Diversified expansion by large established firms (Montgomery, Cynthia A., 1991)
- Correlation between vaccination and child mortality rate using multivariate linear regression model (Appavoo, Revathi, 2024)
- Plant Scale in Entry Decisions: A Comparison of Start-Ups and Established Firm Entrants (Hariharan, S., 1999)