The immutable mountain: Understanding distributed ledgers through the lens of alpine climbing
In modern enterprises, we often default to centralized command-and-control structures. But in high-stakes environments — whether a whiteout on an Andean peak or a volatile global supply chain — centralization is a single point of failure. To manage complexity and risk, we must look to the architecture of the decentralized network.
A storm at high camp
The stone walls of the refuge did little to settle our hearts against the pounding storm outside. Wind whistled through cracks in the masonry as frozen rain pelted the windows like handfuls of marbles. I lay on my back, bundled in a 10-degree sleeping bag, staring at the bottom of the bunk above me. My pack stood upright beside me, boots and gear stacked with obsessive neatness for maximum efficiency at go-time. My journal, filled with the week’s entries, sat atop the pack next to my headlamp.
I focused on regulating my breathing to acclimate, taking full inhales and exhales to extract every bit of oxygen from the thin air. At 1:30 AM, our head guide entered the room to announce the weather was challenging; we would hold. Then came 2:00, 2:30, 2:45, 2:46… if there was ever a climb I wanted to skip, this was it.
At 2:48, the light flickered on. Damn, I thought.
Our lead guide announced that while the weather was horrible, we would make a go of it. The eight of us moved with sudden purpose. We rallied outside in our four rope teams, confirmed the route and left the safety of the refuge for our summit attempt on Cayambe.
Our guides did not dictate every footstep as in a traditional hierarchical construct, where information must travel to the top for a decision to be made and then back down to be executed. Instead, the expedition operated as a series of nodes (rope teams). Each guide was authoritative within their specific context, having the autonomy to make real-time decisions based on the immediate terrain.
On a mountain, that latency is fatal. By distributing authority, the expedition becomes composable. Each team operates independently but remains synchronized through a shared “state” of the mountain.
The relentless scramble
Our first segment required a difficult scramble: 1,500 feet of exposed rock while we were pounded by the elements. It was relentless. Our headlamps were nearly useless against the whiteout. Frozen rain crusted my face, crystals formed on my brows and my goggles iced over. I kept my head down to protect my face, my lamp illuminating only a few feet of black volcanic ash and ice.
We rose slowly. Each step ended with a deliberate straightening of the trailing leg — the “rest step” — to grant a moment of relief. I lost sight of two teams; the lights from the third glimmered like dying sparks several hundred feet away. The roar of the wind was broken only by short “blips” from the radio. Through the static, I heard the muffled voices of guides discussing locations, hazards and routes. Even in the isolation of the storm, I knew we were connected.
The “blip” of truth
We reached the glacier independently. I stepped into my harness, strapped on crampons and tied off on the rope. Once a team was double-checked for safety, they vanished into the dark. In short order, the distance between us grew until I had no visual reference for the others. My guide and I settled into a rhythm, the rope kept taut between us.
It is in these moments — when no one else can be seen on the mountain — that time slows. The challenge becomes internal, and you begin to question every life choice that led you to a frozen ridge at 19,000 feet.
The radio blips continued. On this day, I was the subject of those blips. Bronchitis had settled in from our previous summit of Antisana, and my blood oxygen was dropping below 85%. My rescue inhaler was failing at the altitude. Two-thirds of the way to the summit, the coughing started.
I pushed until I simply couldn’t. I bent over, coughing hard, my lungs burning and wheezing as fluid began to move. Suddenly, my Apple Watch buzzed — it was dialing emergency services. My mind shifted into a strange, analytical gear: I wonder how the signal even propagates from here? Is it connecting on GPS? Where’s the satellite? How would a rescue team even get here? What would they even do? Does an emergency line actually connect here? Apparently, my life as an INTP had reached a new level: I was analyzing my own demise while it was happening.
I disabled the watch, stood straight, ate nacho-flavored chips and drank water. We moved on; the “blips” continued, more frequent now. Eventually, reality caught up. I was bent over again, moving more fluid from my lungs. In that moment of clarity, I remembered: I’m on vacation. Is this my vacation? What is wrong with me? Am I qualified to make my own life choices? We turned around for a descent that was anything but graceful.
The distributed journal
That afternoon, we met for lunch. Each team member highlighted their journey. Stories were reconciled, and a complete picture of the mountain emerged. We checked into our “cybercast” to recount the story to the world.
The decision to turn back was recorded, not just in my mind, but across the collective memory of the team. This is the essence of immutability. In a distributed ledger, once an event is verified and added to the “block” or the day’s journey, it cannot be altered or erased. It becomes part of a permanent, auditable chain of events that provides a “single source of truth” for the entire organization.
To this day, I am still amazed at the architecture of an expedition. Each guide is authoritative within their rope team, working autonomously yet connected. The head guide doesn’t make every micro-decision; they delegate that to the nodes — the guides on the ground — to do what is best for their specific context. Together, each team’s experience, when reconciled, becomes the “truth” of the trip.
This is exactly how a distributed ledger works. In the workplace, a distributed journal or “composable authoritative source” can be split across systems and databases. Much like our rope teams, different organizations or departments (customers, suppliers, buyers, manufacturing) each have ownership of their part of a dataset. They work independently, yet together they provide a singular, authoritative ledger.
Consensus mechanisms in high-entropy environments
The most critical challenge of any distributed system — digital or human — is consensus. How do multiple independent actors agree on a single version of the truth? Maintaining an ongoing, agreed-upon record of transactions is a core function of any business, and a consensus protocol offers a way to synchronize those transactions between multiple systems, across internal functions or externally with business partners, providing a holistic view of a value stream.
In a distributed ledger, we find “truth” through two main methods, both of which I saw on Cayambe:
- Synchronous consensus: Through radios, our guides provided status updates to ensure current information was mirrored across our rope teams. Reconciling these views across the day ensures “Proof of Work” — the validation that the progress recorded actually happened.
- Gossip protocol: This is the alternative communication method where guides discuss routes and risks with other teams as they pass each other. Information “hops” from team to team. In a digital ledger, this isn’t a “whisper down the lane” where information degrades; it is a rapid, peer-to-peer synchronization that ensures every system eventually holds the same exact data.
In 2026, we see movement beyond consensus mechanisms that rely on a synchronous “Proof of Work” toward more resilient asynchronous models. Staying with our storyline on Cayambe, synchronous consensus can incorporate fault tolerance that allows the network to reach agreement even if some “nodes” (climbers) are offline or sending conflicting signals. Further, the gossip protocol can be extended to pass along the history of who said what and when as a Directed Acyclic Graph (DAG). Unlike a linear chain, a DAG allows multiple “events” to be recorded simultaneously. On the mountain, this meant Team A could be navigating a rockfall while Team B was crossing a glacier, and both realities were synchronized into the master record without one waiting for the other to finish.
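As a rough illustration of the gossip idea, here is a minimal Python sketch (the team names and events are invented for the example): nodes repeatedly pair off and merge their event sets until every node holds the same view of the mountain.

```python
import random

# Toy gossip protocol: each "rope team" is a node holding a set of observed
# events. A gossip round pairs two random nodes and merges their event sets;
# after enough rounds, every node converges on the same shared state.
class Node:
    def __init__(self, name, events):
        self.name = name
        self.events = set(events)

    def gossip_with(self, peer):
        # Peer-to-peer sync: both nodes end up with the union of their events.
        merged = self.events | peer.events
        self.events = merged
        peer.events = merged

def gossip_until_converged(nodes, rng=random.Random(0)):
    rounds = 0
    # Converged when every node holds an identical event set.
    while len({frozenset(n.events) for n in nodes}) > 1:
        a, b = rng.sample(nodes, 2)
        a.gossip_with(b)
        rounds += 1
    return rounds

teams = [
    Node("A", ["rockfall@ridge"]),
    Node("B", ["crevasse@glacier"]),
    Node("C", ["whiteout@summit"]),
]
gossip_until_converged(teams)
# Every node now holds all three events, with no central coordinator.
```

Unlike a broadcast from a head guide, no single node ever needs to reach all the others directly; information hops until the network agrees.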
Immutability: The frozen record
Our trip reports and journals “lock down” the information for the day. Movements between camps and the summit are codified as blocks of information. These are sequenced together to create a chain of events. If someone tried to change the history of Day 2, it wouldn’t align with the reality of Day 2.
In a digital blockchain, this data is hashed and sequenced so that the history of a transaction is permanent and verifiable by any participant. By design, transactions cannot be deleted or modified. That said, when regulation imposes a right to be forgotten, there are pragmatic industry approaches: permissioned platforms such as Hyperledger Fabric, for example, support keeping sensitive data off the chain itself so it can be amended or purged without breaking the ledger’s history.
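The “lock down” can be sketched as a minimal hash chain in Python (illustrative only, not production blockchain code): each day’s block commits to the previous block’s hash, so rewriting the history of one day invalidates everything that follows.

```python
import hashlib
import json

# Minimal hash-chain sketch: a block is the day's journal entries plus the
# hash of the previous block. The block's own hash covers both.
def make_block(prev_hash, entries):
    block = {"prev": prev_hash, "entries": entries}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain):
    for i, block in enumerate(chain):
        payload = json.dumps(
            {"prev": block["prev"], "entries": block["entries"]},
            sort_keys=True).encode()
        # Each block's stored hash must match its contents...
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        # ...and must link to the hash of the block before it.
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

day1 = make_block("genesis", ["reached high camp"])
day2 = make_block(day1["hash"], ["summit attempt", "turned back"])
chain = [day1, day2]
assert verify_chain(chain)

# Tampering with Day 1's history breaks verification for the whole chain.
day1["entries"] = ["summited"]
assert not verify_chain(chain)
```

Real blockchains add signatures, consensus, and Merkle trees on top, but the immutability property comes from exactly this linkage.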
Blockchain concepts in the alpine environment
| Concept | Alpine example | Distributed ledger |
|---|---|---|
| Node | An individual climber or rope team | Computer or system |
| Transaction | An occurrence of an event or fact while climbing | Data record |
| Consensus | Agreement through radio communications, storytelling or peer-to-peer gossip | Proof or validation of work state |
| Block | Completed and verified segment of trip | Bundle of verified transactions |
| Chain | Continuous route from basecamp through the segments to completion of the trip | Chronological link of blocks |
Strategic implications for the enterprise
Why does this matter for the C-Suite? By adopting a distributed ledger mindset, businesses can achieve a distributed value stream with ledgers maintained across external business providers, customers and vendors, accelerating business. This includes:
- Flexibility and agility: Through distributed ledgers, organizations can shift from monolithic systems to composable systems built on microservices, orchestrated together.
- Radical transparency: Every stakeholder has access to an identical, real-time record of truth. This may even include information across boundaries with external business partners, including customers or suppliers, creating a fully integrated, composable value stream.
- Operational resilience: If one “node” (a supplier or a regional office) fails, the rest of the network maintains integrity of the data.
- Reduced friction: Trust is built into the architecture of the system, rather than relying on manual audits and third-party verification.
Ultimately, a distributed ledger is less about the underlying code and more about the philosophy of collective trust. Whether navigating the “death zone” of a mountain or the complexities of a global market, the truth is most resilient when it is not owned by a single leader but held by everyone brave enough to participate in the journey.

Cisco Introduces Model Provenance Kit to Strengthen AI Supply Chain Security
Cisco’s open-source Model Provenance Kit helps organizations verify AI model origins, trace lineage, and reduce AI supply chain security risks.
The Mythos AI Vulnerability Storm: What to Do Next
AI is transforming both software development and software risk.
How Claude Planted Malicious Code In A Crypto-Trading App
A malicious campaign by North Korean state actors saw a malicious npm package dependency slipped into a crypto trading agent by an AI coding agent, according to a new report by ReversingLabs. The incident highlights a troubling new frontier in software supply chain attacks: hackers targeting developers...and the AI tools writing their code.
The npm Threat Landscape: Attack Surface and Mitigations (Updated May 1)
Unit 42 analyzes npm supply chain evolution post-Shai Hulud. Discover wormable malware, CI/CD persistence, multi-stage attacks and more.

Xinference PyPI Supply Chain Poisoning Warning
Overview: Recently, NSFOCUS CERT detected that Xinference had suffered supply chain poisoning in its PyPI repository. The attacker stole the PyPI release credentials of Xinference maintainers and published three consecutive malicious, trojan-implanted versions on April 22 (GMT+8). When triggered by the user, the malware collects cloud credentials, SSH keys, API tokens, sensitive […]
Hypersonic Supply Chain Attacks: One Solution That Didn’t Need to Know the Payload
In 2026, the question for security leaders is not whether a supply chain attack is coming. Every serious organization should assume it is. The question is whether their defense architecture can stop a payload it has never seen before. It’s a question that takes on even more critical implications at a time when trusted agentic automation increasingly becomes the norm.
In three weeks this spring, three threat actors each ran a tier-1 supply chain attack against widely deployed software: LiteLLM, a core AI infrastructure package, Axios, the most downloaded HTTP client in the JavaScript ecosystem, and CPU-Z, a trusted system diagnostic tool. Different vectors, different actors, different techniques. SentinelOne® stopped all three on the same day each attack launched, with no prior knowledge of any payload.
The more important story is the how. Each attack arrived as a zero-day at the moment of execution. Each exploited a trusted delivery channel: an AI coding agent running with unrestricted permissions, a phantom dependency staged eighteen hours before detonation, a properly signed binary from an official vendor domain. No signature existed for any of them. No IOA matched.
SentinelOne stopped all three. That outcome is a direct answer to the question every security leader is now running against: What does your defense do when the attack arrives through a channel you explicitly trust, carrying a payload you have never seen before?
The AI Arms Race in Security is Underway
Adversaries are no longer running manual campaigns at human speed. In September 2025, Anthropic disclosed a Chinese state-sponsored group that jailbroke an AI coding assistant and ran a full espionage campaign against approximately 30 organizations. The AI handled 80–90% of tactical operations autonomously (i.e., reconnaissance, vulnerability discovery, exploit development, credential harvesting, lateral movement, exfiltration) with minimal human direction. Anthropic noted only 4–6 human decision points per campaign. The attack achieved limited success across those targets, but the trajectory is clear: AI is compressing the human bottleneck in offensive operations. Security programs designed around manual-speed adversaries are calibrating to a threat that is moving faster.
The LiteLLM attack is the clearest recent example of what this looks like inside an AI development workflow. On March 24, 2026, threat actor TeamPCP compromised the LiteLLM Python package by obtaining PyPI credentials through a prior supply chain compromise of Trivy, a widely-used open-source security scanner. Two malicious versions (1.82.7 and 1.82.8) were published. Any system with those versions during the exposure window executed the embedded credential theft payload automatically. In one confirmed detection, an AI coding agent running with unrestricted permissions (claude --dangerously-skip-permissions) auto-updated to the infected version without human review — no approval, no alert, no visible action before the payload ran. SentinelOne detected and blocked the malicious Python execution on the same day across multiple environments. Most organizations running AI development workflows didn’t know they were exposed until after the fact. The gap where human review processes don’t reach is wide, and it grows with every AI agent added to a pipeline.
Security programs were built for a different adversary. Vulnerability management, triage queues, patch cadences: all of it assumes an attacker who moves at a pace where human response can still close the window. This year’s SentinelOne Annual Threat Report documented what happens when that assumption breaks: adversaries are shifting left, embedding malicious logic in the build process before software ever reaches production. Likewise, the Verizon 2025 Data Breach Investigations Report found that edge device vulnerabilities are now being mass-exploited at or before the day of CVE publication, while organizations take a median of 32 days to patch them. The old model worked when it was designed. Attackers just weren’t running AI yet.
Three Attacks, One Common Failure Mode
Each attack ran through the same gap. Authorization was treated as a sufficient security boundary, and when authorization is automated, that assumption has no floor.
An AI agent with install permissions doesn’t stop to ask whether a package looks right. It installs. Trusted source, valid credentials, done. Supply chain attacks have always exploited trusted delivery channels, but a human at the keyboard introduces at least one friction point: Someone might notice something off, slow down, ask a question. Agents don’t do that. They execute at the speed of the next API call. When you give an agent install permissions, you’ve extended your trust model to cover everything it will ever run. Authorized agents execute exactly what their permissions allow. That’s the design. Treating permission as a proxy for safety is what turns a compromised supply chain hypersonic.
LiteLLM was compromised via credentials stolen through Trivy, a security scanner. The Axios attacker bypassed every npm security control the project had in place by exploiting a legacy access token the maintainers had forgotten to revoke. The CPUID attackers went after the vendor’s distribution infrastructure directly, so anyone who downloaded from the official website got a properly signed binary with a payload inside. In all three cases, the identity was legitimate. The intent wasn’t.
SentinelOne’s Annual Threat Report named the failure precisely: “The identity is verified, but the intent has been subverted, rendering traditional access controls ineffective against the resulting supply chain contamination.” Signature libraries, IOA rule sets, reputation lookups: All of them check authorization. None check intent. These attacks were designed to exploit exactly that. When the authorization model runs automatically, so does the exposure.
What Actually Stopped Them
In each incident, SentinelOne’s on-device behavioral AI flagged the execution pattern, not a known signature or hash for that specific attack.
The LiteLLM detection flagged a Python interpreter executing Base64-decoded code in a spawned subprocess. SentinelOne killed the process preemptively, terminating 424 related events in under 44 seconds, before any human was in a position to observe it. The Axios detection, via the Lunar behavioral engine, caught PowerShell executing under a renamed binary from a non-standard path. The engine flagged the technique regardless of what the payload contained. The first infection occurred 89 seconds after the malicious package went live; the behavioral detection fired on the same day of publication. The CPU-Z detection flagged cpuz_x64.exe building an anomalous process chain: spawning PowerShell, which spawned csc.exe, which spawned cvtres.exe. CPU-Z does not do that. The platform terminated the execution chain mid-attack during a 19-hour active distribution window.
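The underlying technique can be illustrated with a toy sketch (this is not SentinelOne’s engine, and the baseline pairs are invented): behavioral detection compares observed parent-to-child process links against a baseline of expected behavior and flags deviations, regardless of what the payload contains.

```python
# Toy behavioral baseline: which child processes each binary is expected
# to spawn. Both entries are hypothetical examples, not real policy.
EXPECTED_CHILDREN = {
    "cpuz_x64.exe": set(),          # a diagnostics tool shouldn't spawn shells
    "powershell.exe": {"csc.exe"},  # assumed baseline for the example
}

def anomalous_links(chain):
    """Return parent->child links that fall outside the baseline."""
    flagged = []
    for parent, child in zip(chain, chain[1:]):
        if child not in EXPECTED_CHILDREN.get(parent, set()):
            flagged.append((parent, child))
    return flagged

# The chain described above: cpuz_x64.exe -> powershell.exe -> csc.exe -> cvtres.exe
chain = ["cpuz_x64.exe", "powershell.exe", "csc.exe", "cvtres.exe"]
print(anomalous_links(chain))
# Flags cpuz_x64.exe spawning powershell.exe and csc.exe spawning cvtres.exe,
# without needing any signature for the payload itself.
```

The point of the sketch is that the detection keys on *behavior* (an unexpected process chain), so a never-before-seen payload delivered through a trusted binary still trips the check.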
This is the operational output of Autonomous Security Intelligence (ASI), the intelligence fabric built into the Singularity Platform. ASI runs on-device at the edge as part of the core architecture. It is already running when the attack starts, killing the process before the threat can escalate.
Where customers had SentinelOne fully deployed with the right policies enabled, they were covered. Where they did not, they were exposed, and with average ransomware recovery costs exceeding $4M per incident, that exposure has a real price. If you are not certain your deployment matches the configuration that stopped these three attacks, that certainty is worth getting.
AI to Fight AI
This is the product reality behind the thesis SentinelOne brought to RSAC: AI to fight AI. A machine-speed adversary requires a machine-speed defense. That is an architectural requirement, not a positioning statement. ASI monitors behavioral patterns at the point of execution and kills the process when something deviates, at machine speed, without waiting for a human to write a query or approve a kill.
According to an IDC study, organizations using SentinelOne’s AI platform identify threats 63% faster and remediate 55% faster than legacy solutions, neutralizing 99% of threats without a single manual step. For organizations in regulated industries (healthcare, financial services, manufacturing, critical infrastructure), the stakes compound beyond breach cost. An exposure window that stays open through manual investigation is a potential regulatory notification event, an audit finding, and a conversation the CISO has with the board under circumstances no one wants. The difference between a stopped attack and an active breach is whether the architecture acts before the attacker establishes persistence. By the time a human analyst approves the kill, redundant persistence mechanisms may already be installed. The CPU-Z attack deployed three of them specifically because partial cleanup leaves the payload operational.
Human-driven workflows, manual validation, and legacy tooling cannot keep pace with that attack cadence. When defense relies on investigation before action, the advantage shifts to the adversary. The gap is in the architecture. You cannot tune your way out of it.
Conclusion | The Only Question That Matters
SentinelOne’s latest Annual Threat Report documented the pattern these three attacks confirm: Adversaries are “shifting left” by integrating malicious logic into the build process itself, compromising software before it reaches production. It is the current operating model of advanced threat actors, and it is accelerating.
Three attacks. Three detections. Three outcomes, all in a matter of weeks. The architecture that survived them is real-time, AI-native, and built into the edge.
The question every security leader should be able to answer: Could your current solution have stopped LiteLLM, Axios, and CPU-Z autonomously, on the day of each attack, with no prior knowledge of any payload?
If the answer depends on a signature update, a cloud verdict, a manual investigation step, or a policy that wasn’t enabled, that is your answer.
Read the full technical breakdown of each incident:
- How SentinelOne Stopped the LiteLLM Supply Chain Attack
- Securing the Supply Chain: The Axios Attack
- How SentinelOne Blocked the CPU-Z Watering Hole Attack
Third-Party Trademark Disclaimer:
All third-party product names, logos, and brands mentioned in this publication are the property of their respective owners and are for identification purposes only. Use of these names, logos, and brands does not imply affiliation, endorsement, sponsorship, or association with the third-party.

Why planning structures must evolve in modern manufacturing
Across many manufacturing organizations I have worked with, I keep seeing the same puzzling pattern.
Companies invest in better forecasting tools. They implement advanced planning systems. They improve supply chain processes.
Yet something strange still happens.
Some components are overplanned. Others are repeatedly short. Production teams start expediting parts. Suppliers are pushed to deliver faster.
Eventually, leaders ask the obvious question:
If planning systems are improving, why do these imbalances still occur — and why are teams still relying on spreadsheets and manual workarounds?
In my experience, the issue is rarely forecasting accuracy, execution capability or supplier performance. It begins with how planning parameters are defined inside enterprise systems.
Most ERP environments I have worked with still rely on static assumptions, while the real supply chain behaves dynamically. This mismatch between static planning logic and dynamic operational behavior is where structural imbalances originate.
The hidden problem: Static planning parameters
Across implementations, I consistently find that three tightly connected parameters drive planning behavior:
- Planning Bills of Materials (Planning BOMs)
- Lead Times
- Safety Stock
These are typically maintained as master data, reviewed periodically and updated manually, generally once or twice a year. That approach may have worked in stable environments, but modern manufacturing operates under continuous change. Product configurations evolve, customer preferences shift and supply conditions fluctuate.
When these assumptions remain static, the system does not fail; it drifts. And that drift manifests as imbalance across components, time and availability.
Example #1: Planning BOM
In one environment I worked with, the Planning BOM assumed that 70% of orders used a standard PLC module and 30% used an advanced PLC. Over time, actual demand shifted and advanced PLC usage exceeded 50%.
However, the planning structure did not change, largely because updating it required significant manual effort and coordination across teams.
The result was not simply excess inventory — it was misalignment:
- Overplanning of standard components
- Underplanning of advanced components
- Repeated substitutions and expediting
The forecast itself remained reasonably accurate. The imbalance emerged because demand was being translated through outdated structural assumptions.
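The arithmetic of that misalignment is easy to make concrete. A short Python sketch, using the article’s 70/30 planned mix and an assumed 50/50 actual mix over a hypothetical 100 weekly units:

```python
# Worked example: total forecast is correct, but the structural split between
# components is stale. The 50/50 actual mix is assumed for illustration.
weekly_units = 100
planned_mix = {"standard_plc": 0.70, "advanced_plc": 0.30}
actual_mix = {"standard_plc": 0.50, "advanced_plc": 0.50}

for part in planned_mix:
    planned = weekly_units * planned_mix[part]
    needed = weekly_units * actual_mix[part]
    gap = planned - needed
    print(f"{part}: planned {planned:.0f}, needed {needed:.0f}, gap {gap:+.0f}")
# standard_plc ends up overplanned by 20 units/week while advanced_plc is
# short by 20: aggregate demand is right, the structural translation is wrong.
```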
More fundamentally, I have observed that Planning Bills of Materials, while central to ERP-driven planning, were never designed to capture the full complexity of manufacturing execution. Traditional BOM structures define what needs to be built, but not how it is built.
This limitation has been highlighted in patent US10832197B1, which introduces the concept of a “bill of work” to represent the actual activities, routing and process steps required for manufacturing. However, this type of execution-aware structural modeling is still rarely implemented in most ERP systems, which continue to rely primarily on static BOM definitions.
In my experience, this gap reinforces a broader point: Static planning structures alone are insufficient to model dynamic, real-world production environments.
Example #2: Lead time
I have seen cases where average demand remained stable at 100 units per week and lead time was assumed to be static at 10 weeks. In reality, lead time fluctuated between 8 and 14 weeks.
This did not just affect total inventory; it disrupted timing alignment:
- Materials arriving too early for some components
- Materials arriving too late for others
The issue was not quantity. It was synchronization across time.
Example #3: Safety stock
When shortages occur, organizations often increase safety stock. Most enterprise systems support this through simple mechanisms:
- Fixed quantities
- Coverage-based calculations
Safety Stock = Average Daily Demand × Days of Coverage
Both approaches assume relatively stable demand variability and supply risk.
However, real supply chains are not stable. Demand patterns shift, suppliers fluctuate and disruptions occur frequently. In this context, increasing safety stock often protects a distorted signal rather than correcting it.
In my work on inventory optimization, sometimes referred to as Garg’s Principle, I evaluate safety stock across the full forecast horizon rather than at a single point.
A simplified representation is:
Safety Stock = Target Service Inventory − Minimum Projected Inventory Across the Forecast Horizon
This approach identifies the lowest projected inventory point and ensures buffers protect that constraint. It transforms safety stock from a static buffer into a forward-looking stability mechanism.
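A minimal Python sketch contrasting the two formulas, with invented demand, receipt and target numbers:

```python
# Coverage-based safety stock looks only at average demand.
avg_daily_demand = 10
days_of_coverage = 5
coverage_based = avg_daily_demand * days_of_coverage  # 50 units

# Horizon-based approach (per the formula above): project end-of-period
# inventory across the forecast horizon, find the lowest point, and buffer
# against that constraint. Receipts and demand below are illustrative.
on_hand = 120
receipts = [0, 80, 0, 0, 100, 0]
demand = [40, 30, 50, 45, 20, 35]

projected = []
balance = on_hand
for r, d in zip(receipts, demand):
    balance += r - d
    projected.append(balance)
# projected = [80, 130, 80, 35, 115, 80]; the binding low point is 35.

target_service_inventory = 60
horizon_based = target_service_inventory - min(projected)  # 60 - 35 = 25

print(coverage_based, min(projected), horizon_based)
```

With these numbers, the coverage rule would hold 50 units everywhere, while the horizon view shows the plan is only 25 units short at its single low point: the buffer is placed against the actual constraint rather than spread uniformly.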
In practice, I consistently see that increasing buffers alone does not resolve imbalance:
- Some components become over-buffered
- Others remain constrained
- Overall inventory may increase, but instability persists
The problem is not how much safety stock exists; it is how it is aligned.
Individually, each of the above three examples (planning BOM, lead time and safety stock) introduces distortion. Together, they amplify it.
Why static planning structures break in a dynamic world
Many ERP planning systems were designed for environments where product configurations, supplier behavior and demand patterns changed slowly.
That reality no longer exists.
Today’s manufacturing environments operate in constant change. Product variants evolve rapidly, customer expectations shift quickly and supply chains face ongoing disruption. Yet many planning models still assume stable product mixes, fixed lead times and constant buffers.
This gap between dynamic markets and static planning structures is where imbalances begin.
At a broader level, this reflects a structural limitation of ERP-centric planning. ERP systems are highly effective at executing transactions and maintaining control, but they extend past data into the future using relatively fixed assumptions. As highlighted in Why ERP-Centric Planning Can’t Keep Up with Modern Supply Chains, such systems often struggle to keep pace when demand patterns, supply variability and product configurations change continuously.
In many cases, supply chains do not struggle because forecasts are wrong; they struggle because the parameters translating demand into supply decisions remain static, are updated infrequently, or require significant manual effort to maintain.
Execution systems cannot fix planning imbalance
Planning imbalances do not remain confined to ERP systems; they propagate across the entire manufacturing stack.
Manufacturing Execution Systems (MES) and shop-floor operations depend on the plans they receive. When those plans are structurally imbalanced, execution systems cannot correct them; they simply operationalize the imbalance.
This relationship between planning and execution has been widely discussed in the context of modern MES platforms, which act as the bridge between enterprise systems and real-time production environments, as explored in Manufacturing execution systems: A comprehensive guide to selection and implementation.
I have also discussed a similar pattern in Why your ERP still can’t solve inventory drift — and the architecture that will, where ERP systems struggle not because they are broken, but because they operate on outdated assumptions.
From what I have seen, once a structural error enters the system, it flows through:
Forecast → Planning BOM → ERP → MES → Shop-floor execution
By the time production begins, the imbalance is already embedded.
From static to dynamic planning architecture
For CIOs, I do not see the solution as replacing ERP systems. Instead, I see an opportunity to modernize the intelligence layer that feeds them.
In my experience, artificial intelligence can transform static planning parameters into adaptive models that continuously learn from enterprise data.
AI-driven planning systems can incorporate:
- Historical configurations and production data
- Sales inputs and forward-looking programs
- Engineering changes and substitution patterns
- Supplier performance and variability
Using these inputs, machine learning models can estimate the probability distribution of components and dynamically generate Planning BOMs that reflect real-world behavior.
In parallel:
- Lead times can be adjusted dynamically
- Safety stock can be aligned with forward-looking variability
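As one illustration of the second point (a sketch, not a description of any specific product; the function name and the 95% service-level default are my assumptions), safety stock that accounts for both demand and lead-time variability can use the standard combined-variance formula SS = z·sqrt(L·σ_d² + d²·σ_L²):

```python
import math

def dynamic_safety_stock(avg_daily_demand, demand_std,
                         avg_lead_time_days, lead_time_std, z=1.65):
    """Combined-variability safety stock:
    SS = z * sqrt(L * sigma_d^2 + d^2 * sigma_L^2).
    z=1.65 targets roughly a 95% cycle service level."""
    variance = (avg_lead_time_days * demand_std ** 2
                + avg_daily_demand ** 2 * lead_time_std ** 2)
    return z * math.sqrt(variance)
```

When observed lead-time variability rises for a component, its buffer grows specifically for that component, instead of the uniform across-the-board increases described later in the case study.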
In practice, this works through four steps:
- Build a structural signature from early demand signals
- Identify comparable configurations using historical data
- Predict component mix probabilities
- Generate a dynamic Planning BOM
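The four steps above can be sketched as a simple attach-rate model (illustrative only: the function name and threshold are my assumptions, and a production system would condition on the richer sales and engineering inputs listed earlier rather than raw co-occurrence):

```python
from collections import Counter

def planning_bom(historical_orders, threshold=0.05):
    """Estimate per-component attach rates from historical configurations
    and emit a dynamic Planning BOM: {component: attach_rate}.
    Components whose attach rate falls below `threshold` are dropped."""
    counts = Counter()
    for order in historical_orders:  # each order: iterable of component IDs
        counts.update(set(order))    # count presence, not quantity
    n = len(historical_orders)
    return {c: round(k / n, 3) for c, k in counts.items() if k / n >= threshold}
```

Here the "structural signature" is reduced to component presence across comparable orders; the resulting rates feed planning as expected usage per end item.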
ERP remains the execution engine, but the structure feeding it becomes adaptive.
When I experimented with dynamic planning approaches, the impact was structural:
| Behavior | Traditional Static Planning | Dynamic Planning |
| --- | --- | --- |
| Component alignment | Frequent mismatch | Improved alignment |
| Expediting | Frequent | Reduced by ~30–40% |
| Production schedules | Unstable | More predictable |
| ERP–MES alignment | Frequent substitutions | Improved synchronization |
| Safety stock behavior | Increasing without stability | Targeted and stable |
These results reinforce a broader lesson:
Planning challenges are not driven by lack of inventory; they are driven by lack of alignment.
Mini case study: Resolving structural imbalance
In one manufacturing environment I worked with, forecasting accuracy was strong and supplier performance was stable. Yet planning imbalance persisted.
At a system level, inventory appeared sufficient. However:
- Critical components were frequently unavailable
- Non-critical components accumulated
- Production schedules required constant adjustment
The issue was not shortage; it was misalignment.
When I analyzed the system, I found:
- Planning BOMs reflected outdated configurations
- Lead times were fixed despite variability
- Safety stock was increased uniformly
This created a cycle of persistent imbalance and expediting.
We shifted to a dynamic planning approach:
- BOM assumptions aligned with actual demand
- Lead times adjusted based on observed variability
- Inventory evaluated across the planning horizon
Within a few cycles:
- Imbalance reduced significantly
- Expediting declined
- Production schedules stabilized
The key change was not more inventory; it was better alignment.
A strategic opportunity for CIOs and supply chain VPs
From a CIO perspective, this represents a fundamental shift.
The question is no longer: “How do we improve planning tools?”
The better question is: “How do we transform static planning parameters into adaptive planning intelligence?”
Because in modern manufacturing, planning structure is strategy.
Conclusion
Based on my experience, traditional planning systems rely on static assumptions, while modern supply chains operate in constant change.
The challenge is not about inventory levels; it is about planning alignment.
When planning structures remain static, imbalances persist — even when forecasting and execution improve.
But when planning becomes dynamic, when assumptions evolve with reality, those imbalances begin to disappear.
The next era of manufacturing advantage will come not from more inventory or faster execution, but from dynamic real-time alignment between planning assumptions and real-world behavior.
This article is published as part of the Foundry Expert Contributor Network.
Vibe Coding vs. SBOM: One Builds Fast. The Other Tells You What You Just Built

Explore the clash between "Vibe Coding" and modern software governance. Learn why high-speed AI generation demands stronger SBOM transparency and accountability in 2026.
The post Vibe Coding vs. SBOM: One Builds Fast. The Other Tells You What You Just Built appeared first on Security Boulevard.
Python Supply-Chain Compromise
This is news:
A malicious supply chain compromise has been identified in the Python Package Index package litellm version 1.82.8. The published wheel contains a malicious .pth file (litellm_init.pth, 34,628 bytes) which is automatically executed by the Python interpreter on every startup, without requiring any explicit import of the litellm module.
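For context on the mechanism: Python's site module executes any line in a .pth file that begins with `import` at interpreter startup, which is what makes this vector so quiet. A minimal audit sketch (the function name is mine, not from the advisory):

```python
import pathlib
import site

def suspicious_pth_lines(site_dirs=None):
    """List lines in site-packages .pth files that the interpreter
    will execute at startup (any line beginning with 'import')."""
    dirs = site_dirs if site_dirs is not None else site.getsitepackages()
    findings = []
    for d in dirs:
        for pth in pathlib.Path(d).glob("*.pth"):
            lines = pth.read_text(errors="ignore").splitlines()
            for lineno, line in enumerate(lines, start=1):
                if line.startswith(("import ", "import\t")):
                    findings.append((str(pth), lineno, line.strip()))
    return findings
```

Legitimate tools (setuptools, coverage hooks) also ship code-executing .pth files, so hits need manual review rather than automatic deletion.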
There are a lot of really boring things we need to do to help secure all of these critical libraries: SBOMs, SLSA, SigStore. But we have to do them.
Banning Routers Won’t Secure the Internet

Washington’s push to ban foreign-made Wi-Fi routers may sound tough on cybersecurity, but like earlier bans on foreign drones and telecom gear it risks becoming security theater that ignores the real problem: Millions of unpatched devices already sitting on American networks.
The post Banning Routers Won’t Secure the Internet appeared first on Security Boulevard.
The Cyber Express Weekly Roundup: Ransomware, and Supply Chain Breaches Surge

Hasbro Cyberattack Disrupts Operations Amid Rising Ransomware Concerns
Hasbro has reported a cyberattack after detecting unauthorized network access on March 28, 2026. The company responded swiftly by initiating containment measures, isolating affected systems, and engaging external experts to assess the breach. While core operations remain functional under contingency plans, some delays are expected. Read more...
Mercor Breach Exposes Supply Chain Risks in AI Ecosystems
A significant development in this weekly roundup involves AI startup Mercor, which confirmed a breach linked to a supply chain compromise in the LiteLLM open-source project. The attack stemmed from a malicious package update, affecting thousands of organizations relying on the software. The group known as TeamPCP has been associated with the incident, while Lapsus$ has also claimed involvement. Read more...
Lazarus Group Tied to Axios Supply Chain Attack
Another major highlight is a widespread attack targeting the Axios JavaScript library. The operation has been attributed to North Korea’s Lazarus Group, known for conducting advanced cyber campaigns. Attackers inserted a malicious dependency into the package, enabling backdoor access across multiple operating systems through automated installations. Read more...
Personal Email Breach of FBI Director Raises Security Questions
Hackers linked to Iran compromised the personal email account of FBI Director Kash Patel. The breach resulted in the leak of emails and personal data as part of a coordinated “hack-and-leak” campaign. Attributed to the Handala Hack Team, the attack appears designed to inflict reputational damage and psychological pressure. Read more...
CareCloud Cyberattack Impacts Health Records System
Healthcare provider CareCloud disclosed a cyberattack involving unauthorized access to its electronic health record system. Detected on March 16, the incident lasted approximately eight hours before being contained. While investigations are ongoing, the breach raises concerns about potential exposure of sensitive patient data. Read more...
"764" Cybercrime Case Highlights Dark Web Exploitation Networks
In a separate case, a U.S. individual pleaded guilty to charges related to child exploitation and cyberstalking linked to the extremist “764” network. The case illustrates how cybercriminal ecosystems extend beyond financial motives, involving coordinated abuse, manipulation, and exploitation facilitated by online platforms. Read more...
Weekly Takeaway
This edition of The Cyber Express weekly roundup emphasizes the growing scale and complexity of global cybersecurity news, where ransomware, supply chain compromises, and targeted attacks intersect. From corporate breaches and nation-state operations to exploitation networks, the threat landscape continues to expand in both scope and impact. To mitigate these risks, organizations must strengthen supply chain oversight, enforce robust access controls, and prioritize rapid incident response capabilities. As highlighted throughout this weekly roundup, maintaining resilience in today’s environment requires a multi-layered approach that integrates technology, governance, and continuous monitoring to stay ahead of modern-day cyber threats.
Threat Brief: Widespread Impact of the Axios Supply Chain Attack
Unit 42 discusses the supply chain attack targeting Axios. Learn about the full attack chain, from the dropper to forensic cleanup.
The post Threat Brief: Widespread Impact of the Axios Supply Chain Attack appeared first on Unit 42.

Weaponizing the Protectors: TeamPCP’s Multi-Stage Supply Chain Attack on Security Infrastructure
TeamPCP continues its string of supply chain attacks, and announces a partnership with Vect ransomware group.
The post Weaponizing the Protectors: TeamPCP’s Multi-Stage Supply Chain Attack on Security Infrastructure appeared first on Unit 42.

Axios supply chain attack chops away at npm trust
Researchers found that compromised Axios versions installed a Remote Access Trojan.
Axios is a promise-based HTTP Client for node.js, basically a helper tool that developers use behind the scenes to let apps talk to the internet. For example, Axios makes requests such as “get my messages from the server” or “send this form to the website” easier and more reliable for programmers and it saves them from having to write a lot of low‑level networking code themselves.
Since it works both in the browser and on servers (Node.js), a lot of modern JavaScript‑based projects include it as a standard building block. Even if you never install Axios yourself, you might indirectly run into it when you:
- Use web apps built with frameworks like React, Vue, or Angular.
- Use mobile apps or desktop apps built with web technologies like Electron, React Native, and others.
- Visit smaller Software-as-a-Service (SaaS) tools, admin panels, or self‑hosted services built by developers who picked Axios.
You could compare it to the plumbing in your house. Usually you don’t notice the pipes, but they bring the water to where you open a faucet. And you don’t need to know where they are until a leak occurs.
What happened?
Using the compromised credentials of a lead Axios maintainer, an attacker published poisoned packages to npm: axios@1.14.1 and axios@0.30.4. The malicious versions inject a new dependency, plain-crypto-js@4.2.1, which is never imported anywhere in the axios source code.
Together the two affected packages reach up to 100 million weekly downloads on npm, giving the attack a huge impact radius across web apps, services, and pipelines.
It is important to note that the affected Axios versions do not appear in the project’s official GitHub tags. This means the people and projects affected are developers and environments that ran an npm install which resolved to:
- axios@1.14.1, or
- axios@0.30.4, or
- the dependency plain-crypto-js@4.2.1
Any workflow that installed one of those versions with install scripts enabled may have exposed injected secrets (cloud keys, repo deploy keys, npm tokens, etc.) to an interactive attacker. The postinstall script (node setup.js), which runs automatically on npm install, downloaded an obfuscated dropper that retrieves a platform-specific RAT payload for macOS, Windows, or Linux.
If you are a developer deploying Axios, treat any machine that installed the bad versions as potentially fully compromised and rotate secrets. The attacker may have obtained repo access, signing keys, API keys, or other secrets that can be used to backdoor future releases or attack your backend and users.
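One quick check is to scan lockfiles for the known-bad versions. A sketch, assuming an npm v2/v3 package-lock.json with a top-level "packages" map (the function name is mine):

```python
import json
import pathlib

# Known-bad versions from this incident, as listed in the article.
BAD = {("axios", "1.14.1"), ("axios", "0.30.4"), ("plain-crypto-js", "4.2.1")}

def flag_compromised(lockfile="package-lock.json"):
    """Scan an npm v2/v3 lockfile's 'packages' map and return any
    (name, version) pairs matching the known-bad set."""
    data = json.loads(pathlib.Path(lockfile).read_text())
    hits = []
    for path, meta in data.get("packages", {}).items():
        # Keys look like "node_modules/axios"; "" is the root project.
        name = path.split("node_modules/")[-1] if path else data.get("name", "")
        if (name, meta.get("version", "")) in BAD:
            hits.append((name, meta["version"]))
    return hits
```

A hit means the machine should be treated as compromised per the guidance above and its secrets rotated, not merely reinstalled.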
Users of apps built with Axios have no direct reason to worry. If you’re just loading an app in a browser, you’re not directly executing this RAT via Axios. The infection path is the install/build step, not app runtime.
Indicators of Compromise (IOCs)
As the researchers pointed out, the malware dropper cleans up after itself:
“Any post-infection inspection of node_modules/plain-crypto-js/package.json will show a completely clean manifest. There is no postinstall script, no setup.js file, and no indication that anything malicious was ever installed. Running npm audit or manually reviewing the installed package directory will not reveal the compromise.”
What you can look for, then, are these IOCs:
Domain: sfrclak[.]com
IP address: 142.11.206.73
(both blocked by Malwarebytes products)
Files:
- macOS: /Library/Caches/com.apple.act.mond
- Linux: /tmp/ld.py
- Windows: %PROGRAMDATA%\wt and %TEMP%\6202033.vbs/.ps1 which only exist briefly during execution
Malicious npm packages:
axios@1.14.1 sha-256 checksum: 2553649f2322049666871cea80a5d0d6adc700ca
axios@0.30.4 sha-256 checksum: d6f3f62fd3b9f5432f5782b62d8cfd5247d5ee71
plain-crypto-js@4.2.1 sha-256 checksum: 07d889e2dadce6f3910dcbc253317d28ca61c766
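To compare a downloaded artifact against published checksums, a generic digest helper is enough (a sketch; pass whatever hash algorithm the advisory actually lists):

```python
import hashlib

def file_digest(path, algo="sha256"):
    """Stream a file through hashlib and return its hex digest."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

Streaming in 64 KB chunks keeps memory flat even for large tarballs; compare the result against the published value before trusting a cached package.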
Hackers Poison Axios npm Package with 100 Million Weekly Downloads
TeamPCP Uses Fake Ringtone File in Tainted Telnyx SDK to Steal Credentials
What the UK Cyber Security & Resilience Bill Means for Security Practitioners
The UK Cyber Security & Resilience Bill is progressing through Parliament, with Royal Assent expected later in 2026.
The UK's Cyber Security and Resilience Bill is working its way through Parliament, and if you haven't started paying serious attention yet, now is the time. Introduced to the House of Commons in November 2025, the Bill represents the most significant overhaul of UK cyber regulation since the NIS Regulations in 2018, and its implications for security practitioners are immediate and practical.
What's Actually Changing
At its core, the Bill expands the existing Network and Information Systems regulatory framework. It brings more organisations into scope, imposes stricter incident notification requirements, and hands regulators substantially more enforcement power. Secondary legislation and statutory Codes of Practice will follow, but the primary architecture of what you'll be working within is already taking shape.
One of the most significant shifts for practitioners working in or alongside managed services is the creation of a new regulated entity category: the Relevant Managed Service Provider (RMSP). For the first time, MSPs providing services to in-scope sectors face direct regulatory obligations. If your organisation is an MSP, or relies heavily on one, your compliance exposure has materially changed.
⚠ Key Point - Incident Reporting Timelines: the Bill introduces a 24-hour initial notification requirement for significant incidents, so detection and escalation workflows must be able to produce an initial report within one day.
Maximum Penalty Structure
- Standard maximum penalty - £10m or 2% of global turnover
- Higher maximum (serious breaches) - £17m or 4% of worldwide turnover
- Continuing contraventions (daily) - Up to £100,000 per day
- Extended ceiling (exceptional cases) - Up to 10% of worldwide turnover
These penalties are not hypothetical. Regulators will also gain cost recovery powers, with the ability to levy periodic fees to fund their oversight activities. Expect more active enforcement, not passive monitoring.
UK vs NIS2: Don't Assume Alignment
If your organisation already operates under the EU's NIS2 framework, a critical warning: the UK Bill and NIS2 share objectives but diverge in material ways. Reporting thresholds differ, customer notification requirements differ, and the sectors in scope are structured differently. A NIS2-aligned incident response playbook will not automatically satisfy UK obligations.
Practitioners managing cross-border environments will need jurisdiction-specific runbooks. A single process attempting to satisfy both simultaneously risks failing both under pressure.
Supply Chain Risk Is Now Statutory
The Bill introduces the concept of designated "critical suppliers": organisations whose compromise could cause major disruption to the economy or wider society, even if they are not themselves regulated entities. These suppliers will receive formal written notice and will have the right to make representations or appeal.
Secondary legislation will likely impose specific supply chain security obligations on regulated entities, potentially including contractual requirements, security assessments, and continuity planning mandates. The era of passing a questionnaire and considering supply chain risk managed is ending.
The Bill has passed its Report Stage in the Commons and is heading to the House of Lords. Royal Assent is expected later in 2026. Waiting for the final text before acting is not a defensible position. In the meantime, practical steps include:
- Determine whether your organisation or key MSPs fall into newly in-scope categories, including data centres with Rated IT Load above 1 MW
- Review incident detection and escalation workflows against the 24-hour initial notification requirement
- Map divergence between your current NIS/NIS2 compliance posture and what the UK Bill will require
- Audit your supplier assurance programme: move beyond annual questionnaires towards continuous oversight
- Engage legal, compliance, and operational teams together; this cannot be owned by security alone
- Monitor the Bill's progress and watch for secondary legislation, which will contain the operational detail
The regulatory environment for UK cyber security is shifting substantially. The organisations best placed when the Bill receives Royal Assent will be those treating this as a live operational project, not a future compliance task.
Track the Bill's progress via the UK Parliament Bills tracker and the House of Commons Library briefing.
The post What the UK Cyber Security & Resilience Bill Means for Security Practitioners appeared first on Security Boulevard.
