
Robustness verification

groot.verification.kantchelian_attack

KantchelianAttackWrapper (AttackWrapper)

adversarial_examples(self, X, y, order, options={})

Create adversarial examples for each input sample. This method has to be overridden!

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | array-like of shape (n_samples, n_features) | Samples to attack. | required |
| y | array-like of shape (n_samples,) | True labels for the samples. | required |
| order | {0, 1, 2, inf} | L-norm order to use. See the numpy documentation for more explanation. | required |
| options | dict | Extra attack-specific options. | {} |

Returns:

| Type | Description |
| --- | --- |
| ndarray of shape (n_samples, n_features) | Adversarial examples. |
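
A minimal usage sketch is shown below. Only the `adversarial_examples(X, y, order, options={})` call is documented on this page; how the wrapper is constructed from a fitted tree ensemble is an assumption, so check the `KantchelianAttackWrapper` constructor in the GROOT source for the exact signature. Note that the Kantchelian attack solves a mixed-integer program per sample, so it is exact but can be slow on large ensembles.

```python
# Hedged sketch: generating adversarial examples with the Kantchelian attack
# wrapper. The constructor call below is an assumption, not documented here.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier

from groot.verification.kantchelian_attack import KantchelianAttackWrapper

X, y = make_moons(n_samples=100, noise=0.3, random_state=1)
ensemble = RandomForestClassifier(n_estimators=10, random_state=1).fit(X, y)

# Assumption: the wrapper wraps a fitted scikit-learn tree ensemble directly.
attack = KantchelianAttackWrapper(ensemble)

# Documented call: one adversarial example per input sample, minimizing the
# L-infinity perturbation needed to change the prediction.
X_adv = attack.adversarial_examples(X, y, order=np.inf)
print(X_adv.shape)  # (n_samples, n_features)
```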

attack_feasibility(self, X, y, order, epsilon, options={})

Determine whether an adversarial example is feasible for each sample given the maximum perturbation radius epsilon.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| X | array-like of shape (n_samples, n_features) | Samples to attack. | required |
| y | array-like of shape (n_samples,) | True labels for the samples. | required |
| order | {0, 1, 2, inf} | L-norm order to use. See the numpy documentation for more explanation. | required |
| epsilon | float | Maximum distance by which samples can move. | required |
| options | dict | Extra attack-specific options. | {} |

Returns:

| Type | Description |
| --- | --- |
| ndarray of shape (n_samples,) of booleans | Vector of True/False. Whether an adversarial example is feasible. |
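
A hedged sketch of using `attack_feasibility` to summarize robustness under a fixed perturbation budget follows. As above, the wrapper construction is an assumption; only the `attack_feasibility(X, y, order, epsilon, options={})` call is documented on this page.

```python
# Hedged sketch: estimating robustness with attack_feasibility under an
# L-infinity budget epsilon. The constructor call is an assumption.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier

from groot.verification.kantchelian_attack import KantchelianAttackWrapper

X, y = make_moons(n_samples=100, noise=0.3, random_state=1)
ensemble = RandomForestClassifier(n_estimators=10, random_state=1).fit(X, y)
attack = KantchelianAttackWrapper(ensemble)  # constructor signature assumed

epsilon = 0.1  # maximum L-infinity distance by which each sample may move

# Documented call: True where an adversarial example exists within epsilon.
feasible = attack.attack_feasibility(X, y, order=np.inf, epsilon=epsilon)

# Fraction of samples that cannot be attacked within the budget, a common
# robustness summary (often reported as adversarial accuracy).
print(f"Robust fraction at eps={epsilon}: {1.0 - np.mean(feasible):.3f}")
```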