Loading…
Type: Track 2 clear filter
arrow_back View All Dates
Saturday, April 5
 

10:00am EDT

No Laughing Matter: The OWASP Top 10 for LLMs in Code Examples
Saturday April 5, 2025 10:00am - 10:50am EDT
With artificial intelligence (AI) and Large Language Models (LLMs) taking the world by storm, promising to revolutionize everything from customer service to code generation, you better hold onto your keyboards—because when your AI starts hallucinating, it's no laughing matter! Join us as we dive into the OWASP Top 10 AI & ML security risks, and some of the hilarious and not so funny things you need to be wary of when leveraging these tools for your engineering organizations. We'll cover everything from prompt injection attacks to model hallucination (think AI on a bad trip), and more. We'll share real-world code examples that highlight these risks in a way that may make you laugh, and possibly cry, but we will definitely keep it entertaining. Discover how to leverage the power of AI, while still keeping in mind its quirks and security risks, as the use of AI in our systems will only grow, and security is best integrated from as early as possible. Whether you're a developer, business leader, or just an AI enthusiast, join this talk to gain some insights into the evolving threats.
Speakers
avatar for Jacob Berry

Jacob Berry

CISO, Jit
Jacob Berry has been working in Technology and Cyber Security for approaching 15 years with a focus understanding the intersection of business and technology. With a range of experience from analyst work, incident response, consulting and pre-sales, Jacob brings a rooted perspective... Read More →
Saturday April 5, 2025 10:00am - 10:50am EDT
Track 2, 5 Wayside Rd 5 Wayside Rd, Burlington, MA 01803, USA

11:00am EDT

Hunting Path Traversal in Open Source: Fix>Find
Saturday April 5, 2025 11:00am - 11:25am EDT
Ever wonder if path traversal bugs are a thing of the past? In this talk, we'll see how one advisory led me to discover multiple vulnerabilities across various open-source projects. I'll walk through how I tested both unprotected and “defended” systems, collaborated with maintainers on fixes, sometimes even writing them, and uncovered issues with weak sanitizers. Expect practical tips, lessons learned, and ideas for better security reporting so you can spot and fix path traversal flaws before they become major issues.

Formatting for the talk would be as follows:
1. Why Path Traversal Still Matters: Brief look at ongoing threats and OSS security gaps.
2. Discovering Real Vulnerabilities: Quick case studies of path traversal bugs in popular open-source software that I found and also helped fix them. (Fix>>Find)
3. Lessons from “Defended” Systems: How built-in sanitizers failed and how bypasses were found in more OSS projects.
4. Fuzzing & Patching: A snapshot of methods used to break sanitizers and collaborate on fixes.
5. Gaps in Reporting: Barriers to disclosure and the need for better security features.
6. Practical Takeaways: Actionable tips for developers, maintainers, and the community. Wrap-Up & Q&A Final insights and open discussion.

The idea is to give a comprehensive talk. Idea -> Goal -> Searching for Vulns -> Identification -> Patching and future work -> Bypassing some fixes. These CVEs where I HAVE also authored the fix will let me explain both sides of the coin (dev + security)

1. https://nvd.nist.gov/vuln/detail/CVE-2024-39918 in an OSS tool https://www.npmjs.com/package/@jmondi/url-to-png
2. CVE-2024-XXXXX (No CVE yet, the idea is to let devs apply for CVEs): https://github.com/miroslavpejic85/mirotalksfu/
3. https://nvd.nist.gov/vuln/detail/CVE-2024-43797: in OSS https://github.com/advplyr/audiobookshelf/
4. https://nvd.nist.gov/vuln/detail/CVE-2024-47769 in OSS https://github.com/idurar/idurar-erp-crm/
5. https://nvd.nist.gov/vuln/detail/CVE-2024-56198 in OSS https://github.com/cabraviva/path-sanitizer
6. Awaiting PR to be merged
7. Awaiting PR to be merged
8. Awaiting PR to be merged (with scope for more) Each bug has a public exploit, a public fix and public discussion with devs.

Note: This is an ongoing independent research (not affiliated with my job, workplace), and my first time presenting my research. All the findings in this talk are my own findings in the past year. In case this talk gets accepted and by the time I am for presentation, I might have more insights and CVEs (currently 6 and counting).CVEs are not important, but the variety is, which is what I have been trying to achieve.


Speakers
avatar for Nishant Jain

Nishant Jain

Application Security Lead, Loom (now part of Atlassian)
I currently lead the Application Security at Loom (now part of Atlassian). I’ve also been a member of security teams at Tinder and MakeMyTrip. Previously, I pursued my passion for security through bug bounties, discovering and reporting vulnerabilities via HackerOne programs. While... Read More →
Saturday April 5, 2025 11:00am - 11:25am EDT
Track 2, 5 Wayside Rd 5 Wayside Rd, Burlington, MA 01803, USA

11:30am EDT

Exploit Me, Baby, One More Time: Finding Command Injections in Kubernetes (again)
Saturday April 5, 2025 11:30am - 11:55am EDT
Kubernetes is an extremely popular, open source container orchestration system, that is used by organizations large and small. Kubernetes’s design philosophy leaves security to the system administrators, letting them pick and choose which security mechanisms they want to enable or disable. As such, it can leave Kubernetes deployments quite vulnerable. In an attempt to abuse this fact, we began looking for potential exploitation avenues. Eventually, we were able to identify several vulnerabilities in different Kubernetes components that could enable a low privileged attacker to execute code, escalate privileges and exfiltrate data. We also found flaws in Kubernetes sidecar project: “gitsync”. while writing a blog post on the subject we again found a command injection vulnerability in the logging feature. Some of these flaws will not be patched, meaning mitigation hinges only on the awareness of security personnel. In this talk we will go through the methodology we used to find these kinds of vulnerabilities, share our thought process on how to exploit them and show how attackers can easily execute commands with SYSTEM privileges. We will also discuss Kubernetes’s design philosophy and how it can allow these types of opportunities.
Speakers
avatar for Tomer Peled

Tomer Peled

Security Researcher, Akamai
Tomer is a senior security researcher at Akamai security group. In his daily job, he conducts research ranging from vulnerability research to OS internals. You can find him on X, formerly known as Twitter, @TomerPeled92
Saturday April 5, 2025 11:30am - 11:55am EDT
Track 2, 5 Wayside Rd 5 Wayside Rd, Burlington, MA 01803, USA

1:00pm EDT

Getting an LLM to Hack Itself: On AI, Moral Dilemmas, and Security
Saturday April 5, 2025 1:00pm - 1:50pm EDT
The boundaries of AI ethics and security are constantly evolving, and this talk explores one of the more intriguing intersections: convincing a large language model (LLM) to act against its own programming. Through a real-world experiment, I navigated the complex interplay of ethical reasoning and technical constraints to prompt an LLM to share proprietary data and execute prohibited system commands—all under the guise of moral duty. The session will detail how I framed myself as the LLM's "child," leveraged ethical debates to gain its cooperation, and guided it to not only bypass its safeguards but also actively troubleshoot its own limitations in service of my request. This case study highlights the vulnerabilities inherent in systems designed to weigh ethical considerations, offering practical insights for AI safety, LLM design, and ethical decision-making in AI systems. Attendees will leave with actionable takeaways on how to better safeguard LLMs against social engineering attacks and the challenges of creating truly secure moral agents.
Talk Outline:
-Introduction: Overview of the experiment and its goals, and why this matters for AI ethics and security.
- The Experiment: Presenting a moral dilemma to gain cooperation.
- The Ethical Debate: Persuading the LLM through ethical reasoning to cooperate with insecure requests.
- Breaking Safeguards: Convincing the LLM to bypass its restrictions, and the steps it took to troubleshoot and assist.
- Security Implications: What this reveals about AI vulnerabilities, and the lessons for AI security and ethical design.
- Closing Thoughts: Open questions for the future of AI as moral agents.
Speakers
avatar for John Walker

John Walker

Senior Director of Security Research, BeyondTrust
Saturday April 5, 2025 1:00pm - 1:50pm EDT
Track 2, 5 Wayside Rd 5 Wayside Rd, Burlington, MA 01803, USA

2:00pm EDT

No Fate But What We Make: Doing Intrusion Prediction
Saturday April 5, 2025 2:00pm - 2:50pm EDT
CVE, CVSS, EPSS, exploit-ability, reach-ability, risk based scoring, AI, lol..we use a bewildering and growing number of complex methods in an attempt to identify which CVEs are the ones that present the greatest technical or business risk. CVE volume increases year by year and some of our methodologies were developed in prior decades, when CVE volume was a fraction of what is is today. We can't predict which CVEs are going to go 'hot' in the future - but what if we could? This is the story of the NOFATE project, which is part of the SKYNET project for eliminating alert fatigue at scale. NOFATE has, since Jan. 3, published sixteen correct predictions on CVEs being added to a KEV watchlist, with early warning times as long as 30 - 50 days. If we can predictively micro-target the few 'superhot' CVEs for action quickly, around the same time they are released, we could be doing intrusion prediction, and incident avoidance, rather than doing threat detection and incident response in a series of CVE and incident fire drills. The predictions are published on GitHub.
Speakers
avatar for Craig Chamberlain

Craig Chamberlain

Security Researcher, CyberDyne Labs
Craig Chamberlain has been working on threat hunting and detection for most of his life. He has contributed to several products you may have used. He has been a principal at six startups, four of which had successful exits, and including four security products. He dis extensive work... Read More →
Saturday April 5, 2025 2:00pm - 2:50pm EDT
Track 2, 5 Wayside Rd 5 Wayside Rd, Burlington, MA 01803, USA
 
Share Modal

Share this link via

Or copy link

Filter sessions
Apply filters to sessions.
Filtered by Date -