The Centre is seeking talented AI Security Researchers to advance the frontier of secure and trustworthy AI systems. We invite you to join our vibrant community of scholars and industry partners shaping the future of responsible AI technologies.
Key Responsibilities:
- Conduct research on trustworthy AI techniques, including fairness, robustness, and explainability.
- Develop innovative methods to enhance the reliability and accountability of AI systems.
- Collaborate on integrating trustworthiness principles (e.g., bias mitigation, adversarial defenses) into machine learning models.
- Analyze and interpret experimental results, ensuring rigorous validation and real-world applicability.
- Prepare and publish high-impact research at top venues (e.g., S&P, USENIX Security, NeurIPS, CCS, NDSS, ICML, AAAI, ICLR).
Job Requirements:
- Master’s degree in Computer Science, AI, Cybersecurity, Mathematics, or a related field with a strong foundation in AI/ML and quantitative methods.
- Prior work in trustworthy AI (e.g., fairness, robustness, explainability, or AI ethics) is highly preferred.
- Publications in top-tier AI/ML venues are a strong advantage.
- Proficiency in Python/PyTorch/TensorFlow and experience with AI security.
- Strong analytical rigor, problem-solving ability, and the teamwork skills to collaborate with world-class researchers.
We regret that only shortlisted candidates will be notified.
Hiring Institution: NTU