Dr. Saurabh Shintre is a Senior Principal Research Engineer. His research focuses on adversarial machine learning, distributed trust mechanisms, and cryptography. Dr. Shintre has published a number of papers and patents in the areas of insider threat detection, cryptography, machine learning, and privacy. In 2018, Dr. Shintre was selected as one of the Future Leaders by the Science and Technology in Society (STS) forum. He is a prolific public speaker and also served as a program committee member of the 2018 RSA Conference Asia Pacific and Japan, where he organized a seminar on blockchain technology.
Dr. Shintre received his Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University, and his Master of Technology and Bachelor of Technology degrees in Electrical Engineering from the Indian Institute of Technology Bombay, India.
Selected Academic Papers
In Proceedings of ACM CHI Conference on Human Factors in Computing Systems (CHI 2020) (Honorable Mention Award)
Our online survey of 902 individuals studies why users struggle to adhere to expert-recommended security, privacy, and identity-protection practices. We examined 30 of these practices, finding that gender, education, technical background, and prior negative experiences correlate with adoption levels. Practices were abandoned when users perceived them as low-value or inconvenient, or when they were overridden by subjective judgment. We discuss how tools and expert recommendations can better align with user needs.
In Proceedings of the 2019 ENISA Annual Privacy Forum (APF 2019)
We specifically analyze how the “right to be forgotten” provided by the European Union's General Data Protection Regulation can be implemented on current machine learning models, and which techniques can be used to build future models that can forget. This document also serves as a call to action for researchers and policy-makers to identify other technologies that can serve this purpose.
In Proceedings of the Annual Conference of the PHM Society (PHM 2020)
We introduce turbidity detection as a practical superset of the adversarial input detection problem, coping with adversarial campaigns rather than statistically invisible one-offs. This perspective is coupled with ROC-theoretic design guidance that prescribes an inexpensive domain adaptation layer at the output of a deep learning model during an attack campaign. The result aims to approximate the Bayes-optimal mitigation that ameliorates the detection model's degraded health.
To appear in the proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC '20)
We propose SEAL, a family of new searchable encryption schemes with adjustable leakage. In SEAL, the amount of privacy loss is expressed in leaked bits of search or access pattern and can be defined at setup. As our experiments show, when protecting only a few bits of leakage (e.g., three to four bits of access pattern), enough for existing and even new, more aggressive attacks to fail, SEAL query execution time remains practical for real-world applications (a little over one order of magnitude slowdown compared to traditional SE-based encrypted databases).