Securing CI/CD for an open source project: lessons from Cilium

As a Cilium maintainer, here is our take on how we secure GitHub Actions in the OSS project. A few highlights:

  • SHA pinning every GitHub Action
  • Separating trusted vs untrusted code paths in pull_request_target
  • Isolating CI credentials from production release credentials
  • Cosign signing + SBOM attestations
  • Vendoring Go dependencies to make supply chain changes visible in review
  • Treating blast radius reduction as the core design principle
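The first highlight is the easiest to show concretely. SHA pinning replaces a mutable tag (like `@v4`) with a full commit digest, so the action's code cannot change underneath a workflow. A minimal sketch, with a placeholder digest rather than a real pin:

```yaml
# Hedged sketch: the SHA below is a placeholder, not a real pin.
# Convention: keep the human-readable tag in a trailing comment so
# reviewers (and update bots) can see which release the digest maps to.
steps:
  - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # v5 (placeholder)
```

A tag can be force-moved by whoever controls the action's repository; a 40-character commit SHA cannot, which is why pinning is the standard mitigation for action supply-chain compromise.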

and a few gaps:

  • no SLSA provenance yet
  • remaining mutable `@main` references
  • no dependency review at PR time
  • missing govulncheck integration
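The last gap is usually a small addition. A sketch of what a govulncheck job could look like (job and step names are illustrative, and the action digests are placeholders that would be real SHA pins in practice):

```yaml
# Hypothetical CI job scanning the Go module for known vulnerabilities.
vulncheck:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # placeholder SHA
    - uses: actions/setup-go@0123456789abcdef0123456789abcdef01234567 # placeholder SHA
      with:
        go-version: stable
    # govulncheck reports only vulnerabilities reachable from the module's code
    - run: go install golang.org/x/vuln/cmd/govulncheck@latest
    - run: govulncheck ./...
```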
submitted by /u/xmull1gan
[link] [comments]

SecLens: Role-Specific Evaluation of LLMs for Security Vulnerability Detection

Existing benchmarks for LLM-based vulnerability detection compress model performance into a single metric, which fails to reflect the distinct priorities of different stakeholders. For example, a CISO may emphasize high recall of critical vulnerabilities, an engineering leader may prioritize minimizing false positives, and an AI officer may balance capability against cost. To address this limitation, we introduce SecLens-R, a multi-stakeholder evaluation framework structured around 35 shared dimensions grouped into 7 measurement categories. The framework defines five role-specific weighting profiles: CISO, Chief AI Officer, Security Researcher, Head of Engineering, and AI-as-Actor. Each profile selects 12 to 16 dimensions with weights summing to 80, yielding a composite Decision Score between 0 and 100.
We apply SecLens-R to evaluate 12 frontier models on a dataset of 406 tasks derived from 93 open-source projects, covering 10 programming languages and 8 OWASP-aligned vulnerability categories. Evaluations are conducted across two settings: Code-in-Prompt (CIP) and Tool-Use (TU). Results show substantial variation across stakeholder perspectives, with Decision Scores differing by as much as 31 points for the same model. For instance, Qwen3-Coder achieves an A (76.3) under the Head of Engineering profile but a D (45.2) under the CISO profile, while GPT-5.4 shows a similar disparity. These findings demonstrate that vulnerability detection is inherently a multi-objective problem and that stakeholder-aware evaluation provides insights that single aggregated metrics obscure.
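The abstract does not give the exact scoring formula, but the mechanics of a role-weighted composite can be sketched as follows. Assumptions not in the source: per-dimension scores lie in [0, 1], and the weighted sum is normalized by the total weight (80 per profile) onto the 0–100 scale; the dimension names and weights below are invented for illustration.

```python
# Hypothetical sketch of a role-weighted composite Decision Score.
# Dimension names, scores, and weights are illustrative, not from SecLens-R.

def decision_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted composite of per-dimension scores (each in [0, 1]),
    normalized by total weight onto a 0-100 scale."""
    total_weight = sum(weights.values())  # per the paper, profile weights sum to 80
    weighted = sum(weights[d] * scores[d] for d in weights)
    return 100.0 * weighted / total_weight

# Same model scores, two invented stakeholder profiles:
scores = {"critical_recall": 0.9, "low_false_positives": 0.4, "cost_efficiency": 0.7}
ciso = {"critical_recall": 50, "low_false_positives": 10, "cost_efficiency": 20}
head_eng = {"critical_recall": 10, "low_false_positives": 50, "cost_efficiency": 20}

print(round(decision_score(scores, ciso), 1))      # recall-heavy weighting
print(round(decision_score(scores, head_eng), 1))  # precision-heavy weighting
```

Because the two profiles weight the same underlying scores differently, the same model lands on different composites, which is the paper's central point about single aggregated metrics.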

submitted by /u/subho007
[link] [comments]