ZDI-CAN-30873: LiteLLM
Securing CI/CD for an open source project: lessons from Cilium
As a maintainer, here's how we secure GitHub Actions in the Cilium OSS project. A few highlights:
- SHA pinning every GitHub Action
- Separating trusted vs. untrusted code paths in pull_request_target
- Isolating CI credentials from production release credentials
- Cosign signing + SBOM attestations
- Vendoring Go dependencies to make supply chain changes visible in review
- Treating blast radius reduction as the core design principle
and a few gaps:
- no SLSA provenance yet
- remaining mutable @main references
- no dependency review at PR time
- missing govulncheck integration
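For readers unfamiliar with the first practice: SHA pinning replaces a mutable tag reference with the full commit hash, so a re-pointed tag or compromised action repo can't silently change what CI runs. A minimal sketch (the hash below is a placeholder, not a real pin):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Mutable tag -- moves if the tag is re-pointed or the repo is compromised:
      # - uses: actions/checkout@v4
      # SHA-pinned -- immutable; the trailing comment records the human-readable tag:
      - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # v4 (placeholder SHA)
```

Tools like Dependabot can still track the upstream tag and propose pin updates, so the pin stays reviewable rather than frozen.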
[link] [comments]
SecLens: Role-specific Evaluation of LLMs for security vulnerability detection
Existing benchmarks for LLM-based vulnerability detection compress model performance into a single metric, which fails to reflect the distinct priorities of different stakeholders. For example, a CISO may emphasize high recall of critical vulnerabilities, an engineering leader may prioritize minimizing false positives, and an AI officer may balance capability against cost. To address this limitation, we introduce SecLens-R, a multi-stakeholder evaluation framework structured around 35 shared dimensions grouped into 7 measurement categories. The framework defines five role-specific weighting profiles: CISO, Chief AI Officer, Security Researcher, Head of Engineering, and AI-as-Actor. Each profile selects 12 to 16 dimensions with weights summing to 80, yielding a composite Decision Score between 0 and 100.
We apply SecLens-R to evaluate 12 frontier models on a dataset of 406 tasks derived from 93 open-source projects, covering 10 programming languages and 8 OWASP-aligned vulnerability categories. Evaluations are conducted across two settings: Code-in-Prompt (CIP) and Tool-Use (TU). Results show substantial variation across stakeholder perspectives, with Decision Scores differing by as much as 31 points for the same model. For instance, Qwen3-Coder achieves an A (76.3) under the Head of Engineering profile but a D (45.2) under the CISO profile, while GPT-5.4 shows a similar disparity. These findings demonstrate that vulnerability detection is inherently a multi-objective problem and that stakeholder-aware evaluation provides insights that single aggregated metrics obscure.
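The abstract doesn't spell out the aggregation formula, but a plausible reading of the scheme (per-dimension scores on 0-100, per-profile weights summing to 80, weighted mean as the composite) can be sketched as follows. The dimension names, weights, and scores below are illustrative, not taken from the paper:

```python
# Sketch of a role-weighted composite Decision Score, assuming a simple
# weighted average. All dimension names and numbers here are made up to
# illustrate how one model can score very differently under two profiles.

def decision_score(dim_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean over the dimensions a profile selects (result in 0..100)."""
    total_weight = sum(weights.values())  # 80 in the SecLens-R profiles
    weighted = sum(dim_scores[d] * w for d, w in weights.items())
    return weighted / total_weight

# Hypothetical profiles: CISO weights recall of critical findings heavily,
# engineering leadership weights false-positive cost heavily.
ciso = {"critical_recall": 30, "severity_coverage": 25, "exploit_context": 25}
eng = {"false_positive_rate": 40, "triage_cost": 20, "critical_recall": 20}

# One model's hypothetical per-dimension scores (0..100):
scores = {"critical_recall": 45, "severity_coverage": 50, "exploit_context": 40,
          "false_positive_rate": 80, "triage_cost": 75}

print(decision_score(scores, ciso))  # 45.0 -- weak under a recall-heavy profile
print(decision_score(scores, eng))   # 70.0 -- strong under a precision-heavy profile
```

The same score vector yields a 25-point gap between profiles, which is the kind of disparity the paper reports.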
[link] [comments]
Those who are in Detection engineering
I work in detection engineering and wanted to ask others in the same role: do y'all ever use Python in your work? How important do you find it for detection engineering?
I mean, making HTTP requests and parsing responses can all be done with low-code tools like Logic Apps, and query languages are quite simple as well.
I recently had an interview that I don't think I'll clear because I've never used Python at work. Not that I ever needed to: I could build all of my SOAR automations with Logic Apps / SOAR platforms / PowerShell scripts / bash scripts. But apparently not knowing how to write Python is a big deal? I can read Python code, just not write it, and I've never had a use case that required writing it.
It seems pretty shallow to judge a detection engineer purely on programming skills.
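For what it's worth, the "HTTP request plus parsing" task interviewers probe for is usually a few lines of glue code, the kind a SOAR playbook would otherwise handle. A stdlib-only sketch (the endpoint and alert fields are hypothetical):

```python
# Minimal sketch: fetch JSON alerts from a (hypothetical) API and keep the
# high-severity ones -- the sort of glue a Logic App or SOAR action replaces.
import json
import urllib.request

def filter_high_severity(alerts: list[dict], min_severity: int = 7) -> list[dict]:
    """Keep alerts at or above the given severity."""
    return [a for a in alerts if a.get("severity", 0) >= min_severity]

def fetch_alerts(url: str) -> list[dict]:
    """Pull a JSON list of alerts from an HTTP endpoint (endpoint is made up)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

sample = [{"id": 1, "severity": 9}, {"id": 2, "severity": 3}]
print(filter_high_severity(sample))  # [{'id': 1, 'severity': 9}]
```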
[link] [comments]
SANS Courses: How do people get their employers to pay?
My employer wouldn't pay for a CompTIA exam. How are people finding employers to pay multi-thousand dollars for these classes?
[link] [comments]
Would getting Security+ be worthless for me?
Just 'cause I know it's a bit of an HR checkbox cert.
I have a master's degree in cybersecurity
Have 2.5 years of experience in the field
Have done 3 SANS courses
Any use for getting sec+ or nah just skip?
[link] [comments]
NIS2 Article 21: turning compliance controls into technical security evidence
Hi everyone,
Disclosure: I own the project linked below. I’m sharing it because I’m working on the technical side of NIS2 evidence collection, not to pitch services or solicit DMs.
Project context:
https://www.softwareapp-hb.de/projekte.html
The security engineering problem I’m looking at is this:
NIS2 Article 21 requires organizations to address areas like risk management, incident handling, business continuity, supply-chain security, vulnerability handling, access control, asset management, MFA, secure communications, and cyber hygiene. In practice, a lot of “evidence” for these areas still ends up as screenshots, policy PDFs, manual exports, spreadsheets, or consultant-maintained checklists.
That may satisfy some audit workflows, but from a security operations perspective it has obvious weaknesses: evidence goes stale, checks are difficult to reproduce, and there is often a gap between what the policy says and what the infrastructure actually looks like.
I’m building an open-source, self-hostable platform that tries to map NIS2 requirements to concrete technical checks and produce traceable evidence from actual system state. The current design focus is not to replace GRC platforms, legal review, auditors, or an ISMS. The goal is narrower: make certain parts of the evidence layer more repeatable, technical, and defensible.
Examples of evidence areas where this might be useful:
- asset inventory and system classification
- patch/vulnerability state
- account and privilege configuration
- MFA and authentication posture
- backup existence and test evidence
- logging and monitoring configuration
- firewall and network exposure checks
- incident-response process evidence
- technical control mappings to NIS2 Article 21
The hard question is where automation helps and where it becomes misleading.
For example, a system can verify that logging is enabled, but not necessarily that logs are reviewed effectively. A tool can collect patch state, but not decide whether risk acceptance was appropriate. It can validate backup configuration, but not prove that recovery objectives are realistic unless restore tests are captured properly.
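To make that boundary concrete, here's a rough sketch of what the "measurable technical state" half might look like: a probe whose result is captured as a timestamped, reproducible evidence record. The systemd unit name, record schema, and function names are my illustration, not the project's actual format:

```python
# Sketch of a reproducible evidence check: "is a logging daemon active?"
# The unit name and the evidence schema below are illustrative assumptions.
import subprocess
from datetime import datetime, timezone

def evidence_record(check: str, observed: str, expected: str) -> dict:
    """Build a timestamped record for one technical check."""
    return {
        "check": check,
        "passed": observed == expected,
        "observed": observed,
        "expected": expected,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

def check_service_active(unit: str) -> dict:
    """Probe live system state via systemctl and capture it as evidence."""
    result = subprocess.run(
        ["systemctl", "is-active", unit],
        capture_output=True, text=True,
    )
    return evidence_record(f"service-active:{unit}", result.stdout.strip(), "active")

# Usage (on a systemd host): check_service_active("rsyslog")
```

Note that this only proves the daemon is running at collection time; whether anyone reviews the logs remains exactly the human-judgment gap described above.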
For people working in security engineering, SOC, vulnerability management, infrastructure, audit support, or compliance operations:
Where do you think technical automation genuinely improves NIS2 evidence quality?
And where do you think compliance automation creates false confidence?
I’m especially interested in the boundary between measurable technical state and areas that still require human assessment, process maturity, or auditor judgment.
[link] [comments]
UK Advice Needed - VA+ Training?
I’m relatively new to cyber security. Our head of security is leaving soon and I’ve been asked to step up, mostly with regard to performing CE and CE+ assessments.
Initially I was tasked to take the CSTM but after the exam last week I’m worried it’s a step too far at this point. Haven’t had the results yet but I struggled.
I’m considering doing the VA+ in the first instance at least so we can keep doing CE+ when my colleague leaves.
Thing is... I can find hardly any resources on how to prepare for it and there don’t seem to be any official courses I can go on.
Can someone who achieved VA+ let me know how they prepared? Maybe there are some courses (in person preferred) but I’m struggling to find anything.
Hope you can help point me in the right direction.
[link] [comments]
Second security incident at Instructure (Canvas)
Looks like ShinyHunters wasn't done after all... they've apparently defaced several university/college login websites on May 7 to put pressure on Instructure. They may have succeeded, though, since Instructure is no longer listed on their leak site as of May 8. The current timeline is:
- April 29 - first incident involving data exfiltration
- May 5 - they posted the list of impacted universities/colleges/districts
- May 7 - second defacement incident
- May 8 - Instructure removed from their leak site
It'd be interesting to know whether Instructure paid and, if they did, how much.
[link] [comments]
Gateweb - Secure Web Gateway
We built gateweb.io - a local SWG with HTTPS inspection that doesn't send your traffic through someone else's cloud. Free for up to 5 users. Curious what the security community thinks about the local-first approach.
[link] [comments]
MSPs, how are you handling AI usage across your customer environments today?
Are you able to:
• Detect Shadow AI tools being used by employees?
• Monitor what AI platforms are accessing sensitive data?
• Identify AI policy violations before they become risks?
• Offer AI governance as a managed service?
With AI adoption accelerating, it feels like most MSPs still don’t have clear visibility or control over AI activity inside customer environments.
Curious to know:
Is this already becoming a concern for your clients? And are there any tools today that actually solve this well?
[link] [comments]
eCPPTv3 Exam in 3–4 Days
Hey everyone,
I’m planning to take the eCPPT exam in the next 3–4 days and wanted to get some advice from people who’ve already cleared it.
What should I focus on the most during these last few days of preparation? Any common mistakes to avoid or things you wish you knew before attempting the exam?
Also, if you know any Hack The Box or TryHackMe machines/labs that are similar to the exam style, I’d really appreciate the recommendations.
Thanks in advance!
[link] [comments]