Adversarial Octopus is an attack technique developed by independent researchers to target facial recognition systems. It affects a number of current AI-driven facial recognition tools, leaving them vulnerable to manipulation.
Adversarial Octopus – The Attack
Researchers created this new attack against AI-driven face recognition systems that can alter photographs so that the AI system recognizes a different person, or any person of the attacker's choice.
- The attack's major feature is that it can target a variety of AI implementations, including physical devices and web APIs, and can adapt to the environment in which it is deployed.
- It can be used in both evasion and poisoning scenarios by deceiving computer vision algorithms, with potentially severe consequences.
- The attack bypasses face recognition services, applications, and APIs. It also affects PimEyes, the most advanced online facial recognition search engine.
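The core evasion idea described above can be illustrated with a minimal sketch. The model, weights, and images below are toy assumptions (a two-pixel "face" and a linear softmax classifier standing in for a real face recognition network), not the researchers' actual code; the signed-gradient step is the standard FGSM-style technique such attacks build on.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_step(x, W, b, target, eps):
    """One signed-gradient (FGSM-style) step pushing image x toward the
    `target` identity. For a linear softmax model, the gradient of the
    cross-entropy loss w.r.t. x is W.T @ (p - onehot(target))."""
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[target] = 1.0
    grad = W.T @ (p - onehot)
    # Move against the loss gradient, keep pixel values in [0, 1]
    return np.clip(x - eps * np.sign(grad), 0.0, 1.0)

# Toy setup (assumed for illustration): 2-pixel "face", two identities
W = np.eye(2)
b = np.zeros(2)
x = np.array([0.9, 0.1])        # model currently favours identity 0

x_adv = targeted_step(x, W, b, target=1, eps=0.1)
p_before = softmax(W @ x + b)[1]
p_after = softmax(W @ x_adv + b)[1]
```

A real attack iterates many such small steps against a deep network, but the principle is the same: each pixel moves by at most `eps`, so the change stays visually subtle while the model's output shifts toward the chosen identity.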
PimEyes, the search engine, attacked!
The Adversarial Octopus attack against PimEyes combined the following tactics from the attack framework:
- It was trained against multiple facial recognition models, with random blur and noise applied, to improve transferability.
- The technique computes adversarial changes at every layer of the neural network and uses a random face detection frame for improved accuracy.
- It is optimized for tiny changes to every pixel and uses dedicated methods to smooth the adversarial noise, making it harder to perceive.
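The first and last tactics above can be sketched together: averaging gradients over an ensemble of models and random blur/noise transformations (the expectation-over-transformation idea behind transferability), then smoothing the resulting perturbation. Everything here is an assumed toy setup with linear models; it is not the researchers' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_grad(x, W, b, target):
    """Gradient of the cross-entropy loss w.r.t. the input
    for a linear softmax model."""
    p = softmax(W @ x + b)
    p[target] -= 1.0
    return W.T @ p

def blur(x, k=3):
    """Simple moving-average blur (a linear, symmetric operator)."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def eot_ensemble_grad(x, models, target, n_samples=16, sigma=0.05):
    """Average the input gradient over an ensemble of models and random
    noise/blur transformations, so the perturbation transfers better."""
    g = np.zeros_like(x)
    for W, b in models:
        for _ in range(n_samples):
            xt = x + rng.normal(0, sigma, size=x.shape)  # random noise
            if rng.random() < 0.5:
                # gradient through the blur: the kernel is symmetric,
                # so the blur approximates its own adjoint here
                g += blur(ce_grad(blur(xt), W, b, target))
            else:
                g += ce_grad(xt, W, b, target)
    return g / (len(models) * n_samples)

# Toy ensemble: three random linear "face models" over a 16-pixel image
x = rng.uniform(size=16)
models = [(rng.normal(size=(3, 16)), np.zeros(3)) for _ in range(3)]

g = eot_ensemble_grad(x, models, target=2)
delta = blur(-0.05 * np.sign(g))   # smooth the adversarial noise
x_adv = np.clip(x + delta, 0.0, 1.0)
```

Smoothing the signed perturbation with the same blur keeps its magnitude bounded while removing high-frequency speckle, which is one way to make the noise less perceptible.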
This exploit demonstrates that AI systems require considerably more security attention, and new attack methods like this one help raise awareness. It will also help businesses address the issues that exist today in adversarial machine learning. Furthermore, researchers are working with businesses to safeguard AI applications against this type of assault.
To read more, please check the eScan Blog.