In 2015, Microsoft provided the data science community with an unprecedented malware dataset, encouraging open-source progress on effective techniques for grouping malware variants into their respective families. Formatted as a Kaggle competition, it featured a very large (for that time) dataset comprising almost 40GB of compressed files containing disarmed malware samples and their corresponding disassembled ASM code.
Alejandro Mosquera is an online safety expert and Kaggle Grandmaster working in cybersecurity. His main research interests are Trustworthy AI and NLP.
https://orcid.org/0000-0002-6020-3569

Tuesday, December 6, 2022
On the Intriguing Properties of Backdoored Neural Networks
Introduction
Malicious actors can alter the expected behavior of a neural network so that it responds to data containing certain triggers known only to the attacker, without degrading model performance on normal inputs. An adversary will commonly force these misclassifications by performing either trigger injection [19] or dataset poisoning [6]. Less popular techniques that operate at the hardware level, such as manipulating the binary code of a neural network or tainting the physical circuitry [26, 8], can be equally effective.
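To make the dataset-poisoning route concrete, here is a minimal sketch of a BadNets-style attack on an image training set: a small pixel patch is stamped onto a fraction of the samples, which are then relabeled with an attacker-chosen target class. The `poison_dataset` helper, the patch size and position, and the assumption of image arrays scaled to [0, 1] are all illustrative, not taken from any of the cited works.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_fraction=0.1, seed=0):
    """Sketch of trigger-based dataset poisoning.

    Stamps a 3x3 bright patch (the trigger) onto a random fraction of
    the training images and relabels them with the attacker's target
    class. A model trained on the result tends to associate the patch
    with target_class while still classifying clean inputs normally.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a bright patch in the bottom-right corner of each image.
    images[idx, -3:, -3:] = 1.0
    labels[idx] = target_class
    return images, labels, idx

# Illustrative usage on synthetic data: 100 blank 28x28 "images".
imgs = np.zeros((100, 28, 28))
lbls = np.zeros(100, dtype=int)
p_imgs, p_lbls, p_idx = poison_dataset(imgs, lbls, target_class=7)
```

At inference time, the adversary only needs to stamp the same patch onto any input to steer the backdoored model toward the target class, which is why such backdoors are hard to detect from clean-data accuracy alone.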