Hello everyone
I have been doing research in the field of adversarial robustness for a few months now. I have been reading a good deal of the literature on adversarial robustness, and a few questions keep coming up that I feel have not been satisfactorily answered:
- Have we been able to frame adversarial robustness properly?
- It feels to me like reality itself (take, for example, a traffic scene) is very high-dimensional, while the images we capture of it are comparatively low-dimensional. If that is true, might projecting the high-dimensional world onto a low-dimensional representation discard critical information, and could that lost information be part of what causes adversarial issues in DL models? (See the toy sketch after this list.)
- Why are we not trying to address adversarial robustness from a cognitive angle? It feels like natural perception, or the human brain, is an adversarially robust system. If so, I think we should investigate whether artificial models trained on principles from cognitive science are more or less robust than standard DNNs.
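A minimal toy sketch of the second bullet (numpy only, made-up data; the class layout is purely illustrative): two classes that are cleanly separated in a "high-dimensional" space become indistinguishable once the separating dimension is dropped, the way a 2-D image drops depth.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# The two classes differ only along the third axis ("depth"),
# which a 2-D projection throws away.
class_a = rng.normal(loc=[0.0, 0.0, -2.0], scale=0.3, size=(n, 3))
class_b = rng.normal(loc=[0.0, 0.0, +2.0], scale=0.3, size=(n, 3))

def overlap(a, b):
    """Fraction of points of one class lying inside the bounding box of the other."""
    lo, hi = b.min(axis=0), b.max(axis=0)
    return np.mean(np.all((a >= lo) & (a <= hi), axis=1))

print("overlap in 3-D           :", overlap(class_a, class_b))          # ~0.0, separable
print("overlap after dropping z :", overlap(class_a[:, :2], class_b[:, :2]))  # ~1.0, classes collapse
```

Obviously real cameras and real scenes are far more complicated than this, but it shows the kind of information loss the bullet is asking about.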
Sometimes it looks to me like everything has an underlying geometric configuration. Adversarial attacks perturb the surface-level configuration, which is why models misclassify, but the underlying geometric configuration, i.e. the underlying manifold structure of the data, is left essentially untouched by the attack. (A toy illustration follows below.)
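Here is a minimal sketch of that intuition, assuming a toy linear "model" with spurious weights (the names mu0, mu1, w, eps, and the whole setup are hypothetical, numpy only): an FGSM-style perturbation flips the model's prediction, yet a simple geometric view of the data (nearest class centroid) is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 200                                   # "image" dimensionality

# Two class centroids separated only along the first axis.
mu0 = np.zeros(d)
mu1 = np.zeros(d); mu1[0] = 10.0

# A "learned" linear classifier that also picked up small spurious weights
# on the other d-1 uninformative dimensions.
w = mu1 / 10.0 + 0.5 * rng.normal(size=d)
b = -5.0                                  # decision boundary near the midpoint
logit = lambda z: z @ w + b               # > 0 -> predict class 1

# A clean input from class 0.
x = mu0 + 0.2 * rng.normal(size=d)

# FGSM-style step: nudge every coordinate in the direction that raises the
# class-1 logit (for this linear model, the gradient w.r.t. x is just w).
eps = 0.15
x_adv = x + eps * np.sign(w)

nearest_centroid = lambda z: int(np.linalg.norm(z - mu1) < np.linalg.norm(z - mu0))

print("model on clean x      :", int(logit(x) > 0))        # expected: 0
print("model on x_adv        :", int(logit(x_adv) > 0))    # typically flips to 1
print("nearest centroid, adv :", nearest_centroid(x_adv))  # still 0
print("per-pixel shift / class separation:", eps / 10.0)   # small
```

The perturbation is tiny relative to the class separation and the point stays near its own centroid, but the spurious weights let it accumulate into a flipped prediction, which is roughly what I mean by the outer configuration being damaged while the underlying structure is not.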
Are we fundamentally missing something?