M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen et al., TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.

G. Allaire, Numerical Analysis and Optimization, 2007.
URL : https://hal.archives-ouvertes.fr/hal-01242950

C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics), 2006.

A. Boopathy, T. Weng, P. Chen, S. Liu, and L. Daniel, CNN-Cert: An efficient framework for certifying robustness of convolutional neural networks, 2018.

S. Boucheron, G. Lugosi, and P. Massart, Concentration Inequalities: A Nonasymptotic Theory of Independence, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00794821

F. Chollet, Keras, 2015.

N. Couellan, The coupling effect of Lipschitz regularization in deep neural networks, 2019.
URL : https://hal.archives-ouvertes.fr/hal-02090498

A. Dembo, Bounds on the extreme eigenvalues of positive-definite Toeplitz matrices, IEEE Transactions on Information Theory, vol.34, issue.2, pp.352-355, 1988.

A. Fawzi, S. Moosavi-dezfooli, and P. Frossard, The robustness of deep networks: A geometrical perspective, IEEE Signal Processing Magazine, vol.34, issue.6, pp.50-62, 2017.

C. Finlay, A. Oberman, and B. Abbasi, Improved robustness to adversarial examples using Lipschitz regularization of the loss, 2018.

I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, 2016.

H. Gouk, E. Frank, B. Pfahringer, and M. Cree, Regularisation of neural networks by enforcing Lipschitz continuity, 2018.

D. Harrison and D. Rubinfeld, Hedonic prices and the demand for clean air, Journal of Environmental Economics and Management, vol.5, pp.81-102, 1978.

D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, CoRR, abs/1412.6980, 2015.

J. Z. Kolter and E. Wong, Provable defenses against adversarial examples via the convex outer adversarial polytope, 2017.

S. Lacoste-Julien, M. Schmidt, and F. Bach, A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method, 2012.
URL : https://hal.archives-ouvertes.fr/hal-00768187

C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, Journal of Research of the National Bureau of Standards, vol.45, issue.4, pp.255-282, 1950.
URL : https://hal.archives-ouvertes.fr/hal-01712947

A. Nedic and S. Lee, On stochastic subgradient mirror-descent algorithm with weighted averaging, SIAM Journal on Optimization, vol.24, issue.1, pp.84-107, 2014.

A. Oberman and J. Calder, Lipschitz regularized deep neural networks converge and generalize, 2018.

R. K. Pace and R. Barry, Sparse spatial autoregressions, Statistics and Probability Letters, vol.33, issue.3, pp.291-297, 1997.

A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang et al., Automatic differentiation in pytorch, 2017.

M. Staib and S. Jegelka, Distributionally robust optimization and generalization in kernel methods, 2019.

C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan et al., Intriguing properties of neural networks, International Conference on Learning Representations (ICLR), 2014.

Python Core Team, Python: A dynamic, open source programming language, Python Software Foundation, 2015.

B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, Least angle regression, The Annals of Statistics, vol.32, issue.2, pp.407-499, 2004.

A. Virmaux and K. Scaman, Lipschitz regularity of deep neural networks: analysis and efficient estimation, Advances in Neural Information Processing Systems, 2018.

L. Weng, P. Chen, L. Nguyen, M. Squillante, A. Boopathy et al., PROVEN: Verifying robustness of neural networks with a probabilistic approach, Proceedings of the 36th International Conference on Machine Learning, vol.97, pp.6727-6736, 2019.

H. Xu, C. Caramanis, and S. Mannor, Robustness and regularization of support vector machines, J. Mach. Learn. Res, vol.10, pp.1485-1510, 2009.