PhD or Postdoctoral Position: Advancing Trustworthy AI Security - Resilience, Responsibility, and Real-World Impact
Deadline: 7 June 2025
Our Research Environment: At Simula, we cultivate a research environment that values collaboration and minimizes traditional hierarchies. Researchers at all levels work as colleagues alongside senior scientists, fostering an atmosphere where ideas are judged on merit. Our Oslo headquarters acts as a hub for discovery, featuring specialized research labs, HPC clusters, and collaborative spaces designed to spark interdisciplinary breakthroughs. You will join a diverse international community bringing varied perspectives to tackle complex global challenges.
Beyond Research Papers: Simula actively translates research into real-world impact. We maintain strong connections with industry partners, government agencies, and tech startups eager to apply our findings. Your research can directly influence how AI systems are designed, deployed, and governed in critical sectors. We have a track record of successfully spinning out companies from our research, and researchers with entrepreneurial interests can access innovation support and connect with Norway's dynamic tech ecosystem.
Life at Simula & Oslo: We embrace Norway's progressive approach to work-life balance, ensuring you have time for both impactful research and personal well-being through generous holiday allowances and flexible working arrangements. Oslo, consistently ranked among the world's most livable cities, offers an exceptional quality of life. Enjoy its vibrant culture, urban amenities, and unparalleled access to stunning natural landscapes: fjords, forests, and mountains perfect for year-round outdoor activities. Furthermore, Simula encourages participation in science communication and outreach, helping you develop broader skills and share your passion for technology.
Institution: Simula Cybersecurity, Oslo, Norway
Simula Cybersecurity is seeking a highly motivated PhD candidate or Postdoctoral researcher to contribute to cutting-edge research at the nexus of Artificial Intelligence and Cybersecurity. This fully-funded research position (typically three years) offers a unique opportunity to shape the future of secure, reliable, and trustworthy AI systems essential for societal safety and security, while working alongside world-class researchers in Oslo, Norway.
Project Description
In an era where AI increasingly underpins critical infrastructure and decision-making across society, we face an urgent challenge: how do we ensure these powerful systems are secure against sophisticated threats, function reliably, operate fairly, and ultimately deserve societal trust? This research addresses that fundamental question by developing novel methods, frameworks, and evaluation techniques for Trustworthy AI Security. You will investigate how to build AI systems that are not only robust against attack but are also developed and deployed responsibly.
What You Will Investigate
AI Model Robustness & Security: Design, build, and evaluate AI/ML systems resilient to specific threats like adversarial attacks (evasion), data poisoning, and prompt injection.
AI for Cyber Defense Assurance: Investigate AI as a force multiplier for security (e.g., threat detection/response), critically assessing its effectiveness, limitations, fairness, and potential biases.
Meaningful Security Evaluation: Develop novel methods, metrics, and frameworks to rigorously assess the effectiveness, robustness, fairness, and trustworthiness of AI security solutions in realistic contexts, potentially exploring AI Red Teaming approaches.
Responsible & Secure AI Design: Explore the practical implementation of Responsible AI principles (fairness, transparency, privacy) and "Secure by Design" methodologies throughout the entire AI lifecycle.
Real-world Validation & Societal Impact: Test implementations and frameworks against authentic security challenges with industry and public sector partners, ensuring societal relevance and addressing real-world needs.
Your Contributions Will Include
Developing innovative AI security techniques, frameworks, or evaluation methodologies with demonstrable improvements in trustworthiness and resilience.
Publishing findings in top-tier AI and Cybersecurity conferences and journals (e.g., NeurIPS, ICML, IEEE S&P, USENIX Security, ACM CCS).
Presenting your work at international venues and collaborating effectively across disciplines.
Implementing working prototypes or tools addressing real-world AI security and trustworthiness challenges.
Contributing to the wider research community (e.g., through open-source tools, datasets, best practices).
Communicating research effectively, highlighting societal implications to broader audiences.
(For Postdocs): Potentially contributing to student supervision and grant proposal writing, depending on experience and interest.
What We Are Looking For
For PhD Candidates: MSc (or equivalent) in Computer Science, Cybersecurity, Artificial Intelligence, Machine Learning, Data Science, or a closely related technical field.
For Postdoctoral Researchers: A completed PhD in one of the fields mentioned above.
Demonstrated experience implementing and evaluating machine learning or cybersecurity systems (extent expected to correlate with career stage).
Strong programming skills (e.g., Python) and practical experience with AI/ML frameworks (PyTorch, TensorFlow, etc.).
Excellent analytical thinking, critical assessment, and creative problem-solving abilities.
A strong interest and foundational knowledge spanning both cybersecurity principles and AI/ML concepts.
A passion for addressing challenging technical research questions with significant real-world societal impact.
A collaborative mindset and excellent communication skills in English.
Desirable Skills (Applicable to both levels, depth may vary)
Research experience in AI/ML Security (e.g., Adversarial Machine Learning - AML), AI Ethics, or the application of AI in Cybersecurity.
Familiarity with AI-specific vulnerabilities (e.g., prompt injection, evasion, data poisoning) and attack techniques.
Understanding of Responsible AI principles, fairness evaluation, and data privacy considerations in AI/ML.
Knowledge of AI security/governance frameworks (e.g., MITRE ATLAS, NIST AI RMF, OWASP Top 10 for LLMs).
Experience with robust experimental design and statistical analysis for ML/security evaluation.
Familiarity with AI Red Teaming concepts or tools.
(For Postdocs): A strong publication record relevant to the research area.
What We Offer
Competitive salary based on qualifications and experience, following Norwegian PhD/PostDoc salary scales, starting from NOK 536,200 for PhD students and NOK 575,400 for Postdocs.
Dedicated research funding for conferences, equipment, international research visits, and publication costs.
Access to Simula's high-performance computing (HPC) resources and state-of-the-art research facilities.
Mentorship appropriate to your career stage from leading international experts in AI, Cybersecurity, and Responsible Technology.
Integration into a dynamic, inclusive, and highly collaborative research community focused on both scientific excellence and societal impact.
A vibrant, international work environment (currently 185 researchers from 35 countries) located in Oslo.
Significant career development opportunities through strong industry partnerships (from startups to large enterprises) and academic networks.
The opportunity to shape the research agenda in a field critical for societal safety, security, and trust.
Competitive employment terms, including generous equipment budgets (e.g., computer, phone, and subscription), a comprehensive travel and health insurance policy, subsidized canteen meals, access to the company cabin, and sponsored social events.
Relocation assistance: accommodation, visas, complimentary Norwegian language courses, etc.
We strongly encourage applications from all qualified candidates and are committed to promoting diversity and gender balance in science. We particularly welcome applications from women, who are currently underrepresented in this research field. We believe diverse teams drive innovation and lead to better research outcomes.
Application requirements
Qualified candidates should submit the following documents through our online portal:
CV: Highlighting relevant research experience, projects, technical skills, publications, and any supervision or grant experience (especially for Postdoc applicants).
Academic Transcripts: Copies of transcripts and degree certificates (Bachelor's, Master's, and PhD certificate for Postdoc applicants).
Cover Letter (max 2 pages): Clearly articulate your motivation for this position. Describe your research interests within Trustworthy AI Security, explain how your skills and background align with the requirements (mentioning if applying as PhD or Postdoc), and share your perspective on the societal importance and challenges of this research area. Postdoc applicants should also outline potential research directions they wish to pursue within the project's scope.
Sample of Technical Writing: This could be a Master's thesis chapter (for PhD applicants), key publications (especially for Postdoc applicants), a technical report, or a link to a significant code repository (e.g., GitHub) with clear documentation.
Contact Information: Names and contact details for 2-3 professional or academic references.
Application Deadline: June 8, 2025.
Applications will be evaluated on a rolling basis, and the expected start date is August-September 2025.
For inquiries about the position, please contact:
Jostein Jensen, Director of Simula Cybersecurity: jostein@simula.no
We look forward to receiving your application!
Simula Research Laboratory uses background checks in our recruitment process.
In accordance with the Norwegian Freedom of Information Act (Offentleglova), information about the applicant may be included in the public list of applicants, even in cases where the applicant has requested non-disclosure.