Exploits and vulnerabilities in Q1 2026

May 7, 2026, 07:00

During Q1 2026, the exploit kits leveraged by threat actors to target user systems expanded once again, incorporating new exploits for the Microsoft Office platform, as well as Windows and Linux operating systems.

In this report, we dive into the statistics on published vulnerabilities and exploits, as well as the known vulnerabilities leveraged by popular C2 frameworks throughout Q1 2026.

Statistics on registered vulnerabilities

This section provides statistical data on registered vulnerabilities. The data is sourced from cve.org.

We examine the number of registered CVEs for each month starting from January 2022. The total volume of vulnerabilities continues rising and, according to current reports, the use of AI agents for discovering security issues is expected to further reinforce this upward trend.

Total published vulnerabilities per month from 2022 through 2026

Next, we analyze the number of new critical vulnerabilities (CVSS > 8.9) over the same period.

Total critical vulnerabilities published per month from 2022 through 2026

The graph indicates that while the volume of critical vulnerabilities slightly decreased compared to previous years, an upward trend remained clearly visible. At present, we attribute this to the fact that the end of last year was marked by the disclosure of several severe vulnerabilities in web frameworks. The current growth is driven by high-profile issues like React2Shell, the release of exploit frameworks for mobile platforms, and the uncovering of secondary vulnerabilities during the remediation of previously discovered ones. We will be able to test this hypothesis in the next quarter; if correct, the second quarter will show a significant decline, similar to the pattern observed in the previous year.

Exploitation statistics

This section presents statistics on vulnerability exploitation for Q1 2026. The data draws on open sources and our telemetry.

Windows and Linux vulnerability exploitation

In Q1 2026, threat actor toolsets were updated with exploits for new, recently registered vulnerabilities. However, we first examine the list of veteran vulnerabilities that consistently account for the largest share of detections:

  • CVE-2018-0802: a remote code execution (RCE) vulnerability in the Equation Editor component
  • CVE-2017-11882: another RCE vulnerability also affecting Equation Editor
  • CVE-2017-0199: a vulnerability in Microsoft Office and WordPad that allows an attacker to gain control over the system
  • CVE-2023-38831: a vulnerability resulting from the improper handling of objects contained within an archive
  • CVE-2025-6218: a vulnerability allowing the specification of relative paths to extract files into arbitrary directories, potentially leading to malicious command execution (see the containment-check sketch after this list)
  • CVE-2025-8088: a directory traversal bypass vulnerability during file extraction utilizing NTFS Streams
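Both archiver flaws above come down to extraction paths escaping the intended destination directory. Below is a minimal Python sketch, purely illustrative and not taken from any affected product, of the kind of containment check whose absence such bugs exploit: an entry name containing traversal sequences resolves outside the output directory and is rejected.

```python
from pathlib import Path

def safe_extract_path(dest_dir: str, entry_name: str) -> Path:
    """Resolve an archive entry name inside dest_dir, rejecting traversal.

    Illustrative only: a crafted entry such as '../../startup/evil.cmd'
    resolves outside dest_dir and raises instead of being written there.
    """
    dest = Path(dest_dir).resolve()
    target = (dest / entry_name).resolve()
    # The resolved target must stay inside the destination directory.
    if target != dest and dest not in target.parents:
        raise ValueError(f"blocked traversal attempt: {entry_name!r}")
    return target

print(safe_extract_path("/tmp/out", "docs/readme.txt"))   # extracted normally
try:
    safe_extract_path("/tmp/out", "../../etc/cron.d/job")
except ValueError as err:
    print(err)                                            # rejected
```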

Among the newcomers, we have observed exploits targeting the Microsoft Office platform and Windows OS components. Notably, these new vulnerabilities exploit logic flaws arising from the interaction between multiple systems, making them technically difficult to isolate within a specific file or library. A list of these vulnerabilities is provided below:

  • CVE-2026-21509 and CVE-2026-21514: security feature bypass vulnerabilities: despite Protected View being enabled, a specially crafted file can still execute malicious code without the user’s knowledge. Malicious commands are executed on the victim’s system with the privileges of the user who opened the file.
  • CVE-2026-21513: a vulnerability in the Internet Explorer MSHTML engine, which is used to open websites and render HTML markup. The vulnerability involves bypassing rules that restrict the execution of files from untrusted network sources. Interestingly, the data provider for this vulnerability was an LNK file.

These three vulnerabilities were utilized together in a single chain during attacks on Windows-based user systems. While this combination is noteworthy, we believe the widespread use of the entire chain as a unified exploit will likely decline due to its instability. We anticipate that these vulnerabilities will eventually be applied individually as initial entry vectors in phishing campaigns.

Below is the trend of exploit detections on user Windows systems starting from Q1 2025.

Dynamics of the number of Windows users encountering exploits, Q1 2025 – Q1 2026. The number of users who encountered exploits in Q1 2025 is taken as 100%

The vulnerabilities listed here can be leveraged to gain initial access to a vulnerable system and for privilege escalation. This underscores the critical importance of timely software updates.

On Linux devices, exploits for the following vulnerabilities were detected most frequently:

  • CVE-2022-0847: a vulnerability known as Dirty Pipe, which enables privilege escalation and the hijacking of running applications
  • CVE-2019-13272: a vulnerability caused by improper handling of privilege inheritance, which can be exploited to achieve privilege escalation
  • CVE-2021-22555: a heap out-of-bounds write vulnerability in the Netfilter kernel subsystem
  • CVE-2023-32233: a vulnerability in the Netfilter subsystem that allows for Use-After-Free conditions and privilege escalation through the improper processing of network requests

Dynamics of the number of Linux users encountering exploits, Q1 2025 – Q1 2026. The number of users who encountered exploits in Q1 2025 is taken as 100%

In the first quarter of 2026, the number of detected exploits declined from the previous quarter; however, detection rates rose relative to the same period last year. For the Linux operating system, the timely installation of security patches remains critical.

Most common published exploits

The distribution of published exploits by software type in Q1 2026 features an updated set of categories; once again, we see exploits targeting operating systems and Microsoft Office suites.

Distribution of published exploits by platform, Q1 2026

Vulnerability exploitation in APT attacks

We analyzed which vulnerabilities were utilized in APT attacks during Q1 2026. The ranking provided below includes data based on our telemetry, research, and open sources.

TOP 10 vulnerabilities exploited in APT attacks, Q1 2026

In Q1 2026, threat actors continued to utilize high-profile vulnerabilities registered in the previous year for APT attacks. The hypothesis we previously proposed has been confirmed: security flaws affecting web applications remain heavily exploited in real-world attacks. However, we are also observing a partial refresh of attacker toolsets. Specifically, during the first quarter of the year, APT campaigns leveraged recently discovered vulnerabilities in Microsoft Office products, edge networking device software, and remote access management systems. Although the most recent vulnerabilities are being exploited most heavily, their general characteristics continue to reinforce established trends regarding the categories of vulnerable software. Consequently, we strongly recommend applying the security patches provided by vendors.

C2 frameworks

In this section, we examine the most popular C2 frameworks used by threat actors and analyze the vulnerabilities targeted by the exploits that interacted with C2 agents in APT attacks.

The chart below shows the frequency of known C2 framework usage in attacks against users during Q1 2026, according to open sources.

TOP 10 C2 frameworks used by APTs to compromise user systems, Q1 2026

Metasploit has returned to the top of the list of the most common C2 frameworks, displacing Sliver, which now shares the second position with Havoc. These are followed by Covenant and Mythic, the latter of which previously saw greater popularity. After studying open sources and analyzing samples of malicious C2 agents that contained exploits, we determined that the following vulnerabilities were utilized in APT attacks involving the C2 frameworks mentioned above:

  • CVE-2023-46604: an insecure deserialization vulnerability allowing for arbitrary code execution within the server process context if the Apache ActiveMQ service is running
  • CVE-2024-12356 and CVE-2026-1731: command injection vulnerabilities in BeyondTrust software that allow an attacker to send malicious commands even without system authentication
  • CVE-2023-36884: a vulnerability in the Windows Search component that enables command execution on the system, bypassing security mechanisms built into Microsoft Office applications
  • CVE-2025-53770: an insecure deserialization vulnerability in Microsoft SharePoint that allows for unauthenticated command execution on the server
  • CVE-2025-8088 and CVE-2025-6218: similar directory traversal vulnerabilities that allow files to be extracted from an archive to a predefined path, potentially without the archiving utility displaying any alerts to the user

The nature of the described vulnerabilities indicates that they were exploited to gain initial access to the system. Notably, the majority of these security issues are used to bypass authentication mechanisms. This is likely because C2 agents are being detected effectively, prompting threat actors to reduce the probability of discovery by utilizing bypass exploits.

Notable vulnerabilities

This section highlights the most significant vulnerabilities published in Q1 2026 that have publicly available descriptions.

CVE-2026-21519: Desktop Window Manager vulnerability

At the core of this vulnerability is a Type Confusion flaw. By attempting to access a resource within the Desktop Window Manager subsystem, an attacker can achieve privilege escalation. A necessary condition for exploiting this issue is existing authorization on the system.

It is worth noting that the DWM subsystem has been under close scrutiny by threat actors for quite some time. Historically, the primary attack vector involves interacting with the NtDComposition* function set.

RegPwn (CVE-2026-21533): a system settings access control vulnerability

CVE-2026-21533 is essentially a logic vulnerability that enables privilege escalation. It stems from the improper handling of privileges within Remote Desktop Services (RDS) components. By modifying service parameters in the registry and replacing the configuration with a custom key, an attacker can elevate privileges to the SYSTEM level. This vulnerability is likely to remain a fixture in threat actor toolsets as a method for establishing persistence and gaining high-level privileges.

CVE-2026-21514: a Microsoft Office vulnerability

This vulnerability was discovered in the wild during attacks on user systems. Notably, an LNK file is used to initiate the exploitation process. CVE-2026-21514 is also a logic issue that allows for bypassing OLE technology restrictions on malicious code execution and the transmission of NetNTLM authentication requests when processing untrusted input.

Clawdbot (CVE-2026-25253): an OpenClaw vulnerability

This vulnerability in the AI agent leaks credentials (authentication tokens) when queried via the WebSocket protocol. It can lead to the compromise of the infrastructure where the agent is installed: researchers have confirmed the ability to access local system data and execute commands with elevated privileges. The danger of CVE-2026-25253 is further compounded by the fact that its exploitation has generated numerous attack scenarios, including the use of prompt injections and ClickFix techniques to install stealers on vulnerable systems.

CVE-2026-34070: LangChain framework vulnerability

LangChain is an open-source framework designed for building applications powered by large language models (LLMs). A directory traversal vulnerability allowed attackers to access arbitrary files within the infrastructure where the framework was deployed. The core of CVE-2026-34070 lies in the fact that certain functions within langchain_core/prompts/loading.py handled configuration files insecurely. This could potentially lead to the processing of files containing malicious data, which could be leveraged to execute commands and expose critical system information or other sensitive files.

CVE-2026-22812: an OpenCode vulnerability

CVE-2026-22812 is another vulnerability identified in AI-assisted coding software. By default, the OpenCode agent provided local access for launching authorized applications via an HTTP server that did not require authentication. Consequently, attackers could execute malicious commands on a vulnerable device with the privileges of the current user.

Conclusion and advice

We observe that the registration of vulnerabilities is steadily gaining momentum in Q1 2026, a trend driven by the widespread development of AI tools designed to identify security flaws across various software types. This trajectory is likely to result not only in a higher volume of registered vulnerabilities but also in an increase in exploit-driven attacks, further reinforcing the critical necessity of timely security patch deployment. Additionally, organizations must prioritize vulnerability management and implement effective defensive technologies to mitigate the risks associated with potential exploitation.

To ensure the rapid detection of threats involving exploit utilization and to prevent their escalation, it is essential to deploy a reliable security solution. Key features of such a tool include continuous infrastructure monitoring, proactive protection, and vulnerability prioritization based on real-world relevance. These mechanisms are integrated into Kaspersky Next, which also provides endpoint security and protection against cyberattacks of any complexity.

UIDAI, NFSU Sign 5-Year Pact to Boost Cybersecurity and Digital Forensics


The collaboration between the Unique Identification Authority of India and the National Forensic Sciences University marks a significant development in India's security landscape and digital forensics. In a move aimed at strengthening the country’s digital infrastructure, UIDAI and NFSU have formalized a five-year partnership to advance research, training, and operational capabilities in cybersecurity and digital forensics. 

According to an official statement, UIDAI and NFSU have established a structured collaboration designed to address emerging challenges in cybersecurity and digital forensics.

UIDAI and NFSU Join Forces on Cybersecurity and Digital Forensics

The agreement, announced on May 5 in Ahmedabad, provides a comprehensive framework to bring together expertise from both institutions. It is intended to reinforce cyber resilience across UIDAI’s systems, which form the backbone of India’s digital identity ecosystem.

The Ministry of Electronics and Information Technology highlighted that this partnership creates an umbrella structure for coordinated efforts in research, technical development, and capacity building. The initiative underscores the growing importance of cybersecurity and digital forensics as critical components of national digital infrastructure.

Six Strategic Pillars Driving UIDAI and NFSU Collaboration 

The UIDAI and NFSU partnership is structured around six key pillars, each targeting specific aspects of cybersecurity and digital forensics. These include academic and professional development, aimed at building skilled talent in the field, as well as strengthening information security and system integrity within UIDAI’s ecosystem.

Another major focus area is the development of advanced forensic infrastructure and laboratory capabilities. This will support deeper investigation and analysis of cyber incidents. Additionally, the agreement outlines provisions for technical support in cybersecurity operations, ensuring that UIDAI benefits from NFSU’s specialized expertise.

The collaboration also emphasizes joint research and technical advisory in emerging technologies. Areas such as artificial intelligence, blockchain, cryptography, and deepfake detection are expected to play a central role. The sixth pillar focuses on strategic placement and outreach, creating pathways for NFSU students to gain hands-on experience and career opportunities within UIDAI-related projects.

Strengthening India’s Digital Backbone

India’s digital identity framework, powered by UIDAI, requires continuous upgrades to counter evolving cyber threats. The UIDAI and NFSU partnership aims to address this need by integrating advanced cybersecurity and digital forensics practices into the system’s core operations. UIDAI Chief Executive Officer Vivek Chandra Verma described the agreement as a crucial step toward enhancing the security architecture of India’s digital public infrastructure. He stated that the collaboration will significantly improve forensic readiness and resilience, ensuring stronger protection against cyber risks. The signing ceremony was attended by senior officials from both institutions, including Deputy Director General Abhishek Kumar Singh and NFSU Gujarat Campus Director S. O. Junare. Their presence highlighted the institutional commitment to advancing cybersecurity and digital forensics through sustained collaboration. 

Expanding Access While Enhancing Security 

Alongside this partnership, UIDAI has also taken steps to improve accessibility to its services. Collaborations with digital platforms like MapmyIndia and Google now allow users to locate authorized Aadhaar centers more easily. These platforms provide information on available services, operating hours, and accessibility features. While these initiatives focus on user convenience, they also align with the broader objective of strengthening the integrity of India’s digital identity system. By combining improved accessibility with robust cybersecurity and digital forensics measures, UIDAI aims to maintain trust in its infrastructure.
What is data analytics? Transforming data into better decisions

May 5, 2026, 07:00

What is data analytics?

Data analytics focuses on gleaning insights from data. It comprises the processes, tools, and techniques of data analysis and management, and its chief aim is to apply statistical analysis and technologies on data to find trends and solve problems. Data analytics has become increasingly important in the enterprise to shape business processes and improve decision-making and business results.

Data analytics draws from a range of disciplines, including computer programming, mathematics, and statistics, to perform analysis on data in an effort to describe, predict, and improve performance. To ensure robust analysis, data analytics teams leverage a range of data management techniques, including data mining, data cleansing, data transformation, data modeling, and more.

What is AI data analytics?

AI data analytics is a rapidly growing specialty within data analytics that applies AI to support, automate, and simplify data analysis. It leverages ML, natural language processing, and data mining, along with foundational models and chat assistance for predictive analytics, sentiment analysis, and AI-enhanced business intelligence. AI tools can be used for data collection and data preparation, while ML models can be trained to extract insights and patterns.

The four types of data analytics

Analytics breaks down broadly into four types: descriptive analytics attempts to describe what has transpired at a particular time; diagnostic analytics assesses why something has happened; predictive analytics ascertains the likelihood of something happening in the future; and prescriptive analytics provides recommended actions to take to achieve a desired outcome.

To explore these more specifically, descriptive analytics uses historical and current data from multiple sources to describe the present state, or a specified historical state, by identifying trends and patterns. Descriptive analytics is the purview of business intelligence (BI). Diagnostic analytics uses data, often generated via descriptive analytics, to discover the factors or reasons for past performance. Predictive analytics applies techniques such as statistical modeling, forecasting, and ML to the output of descriptive and diagnostic analytics to make predictions about future outcomes. Predictive analytics is often considered a type of advanced analytics, and frequently depends on ML and/or deep learning. And prescriptive analytics is another type of advanced analytics that involves the application of testing and other techniques to recommend specific solutions that will deliver the desired outcomes. In business, prescriptive analytics uses ML, business rules, and algorithms.

Data analytics methods and techniques

Data analysts use a number of methods and techniques to analyze data. According to Emily Stevens, managing editor at CareerFoundry, seven of the most popular include:

  1. Regression analysis: A set of statistical processes used to estimate the relationships between variables to determine how changes to one or more might affect another, like how social media spending might affect sales.
  2. Monte Carlo simulation: A mathematical technique, frequently used for risk analysis, that relies on repeated random sampling to determine the probability of various outcomes of an event that can’t otherwise be readily predicted due to degrees of uncertainty in its inputs (a minimal sketch follows this list).
  3. Factor analysis: A statistical method for taking a massive data set and reducing it to a smaller, more manageable one to uncover hidden patterns, like when analyzing customer loyalty.
  4. Cohort analysis: A form of analysis in which a dataset is broken into groups that share common characteristics, or cohorts, for analysis like understanding customer segments.
  5. Cluster analysis: A statistical method in which items are classified and organized into clusters in an effort to reveal structures in data. Insurance firms might use cluster analysis to investigate why certain locations are associated with particular insurance claims, for instance.
  6. Time series analysis: A statistical technique in which data in set time periods or intervals is analyzed to identify trends over time, such as weekly sales numbers or quarterly sales forecasting.
  7. Sentiment analysis: A technique that uses natural language processing, text analysis, computational linguistics, and other tools to understand sentiments expressed in data, such as how customers feel about a brand or product based on responses in customer forums. While the previous six methods seek to analyze quantitative or measurable data, sentiment analysis seeks to interpret and classify qualitative data by organizing it all into themes.
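To make the second item above concrete, here is a minimal Monte Carlo sketch in Python. The project-cost assumptions are invented for illustration: repeated random sampling of uncertain inputs produces a distribution of outcomes from which a probability, such as the chance of exceeding a budget, can be read off.

```python
import random

def simulate_project_cost(n_trials: int = 100_000) -> None:
    """Minimal Monte Carlo example with made-up cost assumptions.

    Each trial samples uncertain inputs (labor hours and hourly rate);
    the distribution of totals estimates the chance of overrunning a
    $120,000 budget.
    """
    totals = []
    over_budget = 0
    for _ in range(n_trials):
        hours = random.gauss(1_000, 150)   # uncertain labor hours
        rate = random.uniform(90, 130)     # uncertain hourly rate
        total = hours * rate
        totals.append(total)
        if total > 120_000:
            over_budget += 1
    print(f"mean cost estimate: ${sum(totals) / n_trials:,.0f}")
    print(f"probability of exceeding budget: {over_budget / n_trials:.1%}")

simulate_project_cost()
```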

Data analytics tools

Data analysts use a range of tools to help them surface insights from data. Some of the most popular include:

  • Apache Spark: An open source data science platform to process big data and create cluster computing engines. 
  • AskEnola AI: A conversational analytics tool for business users.
  • Data analysis with ChatGPT: OpenAI’s chatbot can generate code to perform data analysis, transformation, and visualization tasks using Python.
  • dbt: An open source analytics engineering tool for data analysts and engineers.
  • Domo Analytics: A BI SaaS platform to gather and transform data.  
  • Excel: Microsoft’s spreadsheet software for mathematical analysis and tabular reporting. 
  • Julius AI: An AI assistant to analyze spreadsheets and databases.
  • Knime: A free and open source data cleaning and analysis tool for data mining.
  • Looker: Google’s data analytics and BI platform. 
  • MySQL: An open source relational database management system to store application data used in data mining.
  • Observable: A data analysis platform with AI tools for exploratory data analysis and data visualization.
  • Orange: A data mining tool ideal for smaller projects.
  • Power BI: Microsoft’s data visualization and analysis tool to create and distribute reports and dashboards. 
  • Python: An open source programming language popular among data scientists to extract, summarize, and visualize data. 
  • Qlik: A suite of tools to explore data and create data visualizations. 
  • R: An open source data analytics tool for statistical analysis and graphical modeling. 
  • RapidMiner: A data science platform that includes a visual workflow designer. 
  • SAS: An analytics platform for business intelligence and data mining. 
  • Sisense: A popular self-service BI platform. 
  • Tableau: Data analysis software from Salesforce to create data dashboards and visualizations.

Data analytics vs. data science

Data analytics is a component of data science used to understand what an organization’s data looks like. Generally, the outputs of data analytics are reports and visualizations. Data science takes the output of analytics to study and solve problems. The difference between data analytics and data science is often about timescale: data analytics describes the current or historical state of reality, whereas data science uses that data to predict and/or understand the future.

Data analytics vs. data analysis

While the terms data analytics and data analysis are frequently used interchangeably, data analysis is a subset of data analytics concerned with examining, cleansing, transforming, and modeling data to derive conclusions. Data analytics includes the tools and techniques used to perform data analysis.

Data analytics vs. business analytics

Business analytics is another subset of data analytics. It uses data analytics techniques, including data mining, statistical analysis, and predictive modeling, to drive better business decisions. Gartner defines business analytics as solutions used to build analysis models and simulations to create scenarios, understand realities, and predict future states.

Data analytics examples

Organizations across all industries leverage data analytics to improve operations, increase revenue, and facilitate digital transformations. Here are three examples:

UPS transforms air cargo operations with data, AI: UPS’s Gateway Technology Automation Platform (GTAP) uses AI and digital asset tracking to reduce costs, improve on-time performance, and enhance operational safety at its Worldport air hub.

NFL leverages AI and predictive analytics to reduce injuries: The NFL’s Digital Athlete platform leverages AI and ML to run millions of simulations of in-game scenarios, using video and player tracking data to identify the highest risk of injury during plays, and develop individualized injury prevention courses.

Fresenius Medical Care anticipates complications with predictive analytics: Fresenius Medical Care, which specializes in providing kidney dialysis services, is pioneering the use of a combination of near real-time IoT data and clinical data to predict when kidney dialysis patients might suffer a potentially life-threatening complication called intradialytic hypotension (IDH).

Data analytics salaries

According to data from PayScale, the average annual salary for a data analyst is $70,384, with a reported range from $51,000 to $95,000. Salary data on similar positions include:

Job title | Salary range | Average salary
Analytics manager | $79,000 to $140,000 | $110,581
Business analyst, IT | $58,000 to $114,000 | $80,610
Data scientist | $73,000 to $145,000 | $103,441
Quantitative analyst | $74,000 to $161,000 | $109,421
Senior business analyst | $72,000 to $127,000 | $95,484
Statistician | $61,000 to $139,000 | $97,082

PayScale also identifies cities where data analysts earn salaries that are higher than the national average. These include San Francisco (24.2%), Seattle (10.2%), and New York (9.5%).

SAS makes AI governance the centerpiece of its agent strategy

April 28, 2026, 22:38

Enterprises are quickly moving from AI experimentation to deployment. However, when agentic AI begins making more decisions, invoking more tools, and operating across fragmented data environments, there can be an erosion of visibility, governance, and trust.

SAS laid out its answer to that problem at its annual conference, SAS Innovate, introducing a new family of copilots, agent frameworks, Model Context Protocol (MCP) plugins, and management tools to help enterprises operationalize AI without losing control of it.

“What we’re seeing here is really a shift from AI that forms to AI that acts,” Marinela Profi, the company’s global AI and generative AI market strategy lead, said at the event. “This is a significant leap, because it introduces new requirements around trust, around governance, around accountability.”

Interacting with agents more intuitively

To begin with, SAS today announced SAS Viya Copilot, a human-governed, conversational AI assistant embedded into its Viya platform. It integrates with Microsoft Foundry and operates within analytics workflows, helping developers, data scientists, and other users instruct it in natural language to analyze data, build models, and make decisions across workflows.

“You have an expert assistant that allows you to take actions, ask questions, and help you navigate across the full analytical lifecycle,” Profi explained.

Its capabilities include: General Q&A across core Viya applications; production of documented and explainable AI-generated code; model pipeline guidance including recommendations and next steps; conversational dashboarding; and visual investigation with AI-assisted search and alert narratives. Copilot capabilities will eventually extend to data management, model management, and AI infrastructure, according to SAS. 

The company is initially launching two Copilots: Asset and Liability Management (ALM), for developing scenarios, executing and interpreting financial risk workflows, and translating natural language inputs into analytic models; and Health Clinical Data Discovery, for analyzing data, creating cohorts, and investigating research papers and other medical documents.

SAS plans to expand Viya Copilot into additional industries, including banking and manufacturing, later this year.

Going beyond embedded AI assistants, SAS is providing tools and infrastructure to connect and govern internal and external agents. The new SAS Viya MCP server standardizes connections so external agents can safely access SAS tools, data, and models, using the large language model (LLM) or interface of their choice (Claude, GPT, Gemini), without having to create custom integrations, duplicate logic, or bypass controls.

“The Copilot is not only answering questions for you, it can invoke capabilities across Viya in a more structured way,” Profi said.

In addition, a new Agentic AI Accelerator provides a collection of code, interfaces, components, and best practices that allow teams across skill levels (developers, low-code or no-code users) to design, build, deploy, and manage agents within SAS Viya, she explained.

Current Viya users can access both the MCP server and AI Accelerator via GitHub.

Maintaining human judgment

SAS continues to emphasize the importance of oversight, trustworthy AI, and human-in-the-loop control.

Furthering this mission, the company is introducing SAS AI Navigator. The Software-as-a-Service (SaaS) tool helps enterprises inventory, govern, and apply policies to underlying AI models.

Available in Q3 2026 on Microsoft Azure Marketplace, the platform will offer an end-to-end view of all AI models and tools in use in an enterprise, whether built in-house or provided by third parties. Using it, enterprises will be able to apply internal policies and external regulations and frameworks to AI use cases.

“It’s giving visibility into your AI inventory,” Reggie Townsend, VP of SAS’ data governance and ethics practice, said at today’s event. “But it also answers the really basic question: How are we doing?”

Enterprises want “enough data at a glance” to consider tension points when they’re juggling factors like reputation, efficiency, and cost, he pointed out. They’re also viewing trust as a new business differentiator, even as a currency. 

Navigator started with a really simple idea, he noted: “What happens if we can make being responsible irresistible?” AI governance is one way to preserve human judgment amidst what he called “tech asymmetry.”

Technology unevenness has been a long-standing problem: while there’s strong technical capability, enterprises struggle to adapt to the pace of change at scale. “What folks need to do is try to translate some of these capabilities into a sustainable business advantage,” said Townsend.

As AI capabilities (and offerings) continue to expand, he urged users to gain “sufficient literacy,” approach AI with curiosity, and think critically about how evolving tools can apply to both business and personal life.

“In an emerging landscape like this, we’ve got to suspend certainty,” he said. “Certainty breeds rigidity, and rigidity suspends this idea of nuanced judgment, which we need right now.”

The next chapter of AI is about scaling that judgment, governing at speed, and turning trust into that competitive advantage, he emphasized.

Getting to the right enterprise data

Enterprise data can be fragmented across many different ecosystems (on-prem, in legacy infrastructure, or in private or public clouds), noted SAS industry market lead Alyssa Farrell. Beyond that, she said, “[enterprises] have low trust in the data itself, which is leading to low trust in decisions.” Further, performance constraints can hamper AI progress.

To address these issues, SAS today announced a targeted refresh of SAS Data Management, its cloud-native portfolio built on the Viya platform, adding or expanding its AI-ready data management, governance by design, agentic AI and copilots, and cloud-native analytic acceleration. It provides lineage, transparency, and control capabilities within workflows where data is accessed, prepared, and activated, Farrell explained.

“Agents and AI crave data more than ever before,” she said. “It’s really important that organizations get this right from the beginning, especially if they’re adding automation to that decision process.”

The re-architected platform grounds AI in trusted data, making raw data assets usable for AI. Notably, it brings analytics and AI to the data itself through SpeedyStore, the company’s cloud-native analytical data platform, negating the need to move volumes of data for processing, Farrell explained. Enterprises still retain digital sovereignty and can control workflows across their various data stores.

“We’re making sure our customers have everything they need to meet this moment [and] tools that access the data, manage the data and gain value from it,” Farrell noted. “They can really proceed at scale to operationalize AI with confidence.”

AI won’t fix your data problems. Data engineering will

April 28, 2026, 09:00

Most enterprise AI investments today focus on models, compute and tooling. The assumption is that intelligence is the binding constraint and that a more capable model will produce better outcomes across every dimension that matters. This is a reasonable starting point, but it is also where most initiatives go wrong.

The models organizations are deploying were trained on public data at scale. None of your internal systems, customer schema, pricing logic or support taxonomy appeared in that training.

When a model encounters your internal data, it processes it as best it can, but without the grounding that comes from having been trained on it. Early AI initiatives are struggling not because the models are weak, but because the context they need to operate reliably inside your organization is something they have never seen before.

Data engineering holds the key to this context.

Why context breaks first

Think about what an AI agent handling a support escalation needs to function well: The customer’s support history across time, not just the most recent ticket. Billing records matter too, because the character of a problem often depends on what the customer is paying for and whether anything has changed recently. Product usage data is equally essential, as what a customer reports is frequently explained by how they have been using the product. None of these things live in a single place, as they are scattered across systems that were each built by different teams, on different timelines, with different definitions of what a customer record is supposed to capture.
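As a purely hypothetical sketch of that assembly step (the system names and fields are invented, not drawn from any particular product), the snippet below pulls fragments from separate systems into one record and keeps an explicit list of what is missing, which is the bookkeeping a human agent does implicitly.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class EscalationContext:
    """Hypothetical context an agent needs before acting on an escalation."""
    ticket_history: List[Dict[str, Any]] = field(default_factory=list)
    billing: Optional[Dict[str, Any]] = None
    product_usage: Optional[Dict[str, Any]] = None
    gaps: List[str] = field(default_factory=list)   # what we failed to find

def assemble_context(customer_id: str, sources: Dict[str, Any]) -> EscalationContext:
    """Gather fragments from separate systems and record what is missing."""
    ctx = EscalationContext()
    ctx.ticket_history = sources.get("support", {}).get(customer_id, [])
    ctx.billing = sources.get("billing", {}).get(customer_id)
    ctx.product_usage = sources.get("usage", {}).get(customer_id)
    for name, value in (("billing", ctx.billing), ("usage", ctx.product_usage)):
        if value is None:
            ctx.gaps.append(name)   # downstream logic can refuse to act blindly
    return ctx
```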

Human agents work around these gaps through judgment developed over time. They know which system to trust for a particular type of question, they know the usage data runs six hours behind and they know how to weigh conflicting signals based on context that is never written down anywhere. AI systems do not have that judgment. They process whatever they receive and act on it, which means that when the context is inconsistent or incomplete, the output reflects that, not as a visible error but as a subtly wrong decision. The customer notices before anyone on your team does.

When bad data stops being annoying and starts being operational

In the analytics era, data quality problems surfaced as numbers that looked off in dashboards. Analysts were the error-detection layer, and when something looked wrong, they would investigate, find the issue and get it fixed. The feedback loop was slow, but it existed, and it caught most problems before they reached the business in any consequential way.

AI agents making operational decisions do not have that buffer. They have no way of knowing that a schema migration introduced silent gaps or that a pipeline is running four hours late. Refunds go out incorrectly because the billing context was incomplete at the moment of decision.

What an analytics team could absorb as an occasional anomaly in a report becomes a real problem when an automated system acts on degraded context hundreds of times a day before anyone identifies the pattern. The volume is what makes it dangerous, and by the time it surfaces, the damage is already distributed across thousands of interactions.

The role data engineers play now

For the past decade, data engineering meant building pipelines that fed warehouses so analysts could query data and produce dashboards. The work was foundational but treated as background infrastructure, and its value was measured in pipeline reliability, query performance and reporting freshness.

The agent era changes the purpose of that work entirely. When AI systems make operational decisions, the goal is no longer producing data that is queryable. The goal is producing context that is reliable enough for a system to act on, and those are different problems with different requirements. That starts with entity resolution across systems, providing a consistent and trustworthy answer about each entity across every data source that touches it.

This also means handling late-arriving data explicitly, because agents cannot act on a state of the world that no longer holds. Freshness thresholds need to be calibrated to the decision type, since a personalization recommendation can tolerate six-hour-old usage data in ways that a refund workflow cannot. Lineage needs to survive schema changes and reorganizations, so that the provenance of any piece of context can be traced when something goes wrong.
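The freshness point lends itself to a small, concrete check. A minimal sketch, with illustrative rather than prescriptive threshold values: the same usage record can be acceptable for a personalization decision and unacceptable for a refund.

```python
from datetime import datetime, timedelta, timezone

# Illustrative freshness budgets per decision type; real values would be
# calibrated to each workflow's tolerance for stale context.
FRESHNESS_BUDGETS = {
    "personalization": timedelta(hours=6),
    "refund": timedelta(minutes=15),
}

def context_is_fresh(decision_type: str, record_timestamp: datetime) -> bool:
    """Return True if the record is recent enough for this decision type."""
    age = datetime.now(timezone.utc) - record_timestamp
    return age <= FRESHNESS_BUDGETS[decision_type]

# A usage record that is five hours old:
stale_ts = datetime.now(timezone.utc) - timedelta(hours=5)
print(context_is_fresh("personalization", stale_ts))  # True: within the 6-hour budget
print(context_is_fresh("refund", stale_ts))           # False: a refund needs fresher data
```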

None of that is a model problem, nor does it yield to prompt engineering. This is data engineering work, and organizations that treat it as anything else will spend a long time debugging production failures that look like AI problems but are infrastructure problems.

Context is only half the problem

Getting the right information to an agent is necessary, but it is not sufficient. There is a second challenge that most organizations have not yet confronted: How do you coordinate, govern and operate dozens or hundreds of autonomous agents making real decisions across your business?

Agent frameworks handle reasoning well. What they do not handle is everything around the agent: Scheduling when it runs, controlling what it is allowed to spend, enforcing who can approve its decisions, managing retries when external systems fail and ensuring that when an agent needs human sign-off, it does not tie up compute for hours while it waits. These are not AI problems. They are operational infrastructure problems, and they are the same class of problems that orchestration platforms have been solving for data pipelines for over a decade.

One agent answering questions in a sandbox is a proof of concept. Fifty agents making operational decisions across finance, compliance and customer operations is a fleet management problem, and it requires the same kind of scheduling, governance, cost controls and auditability that enterprises already demand from their data infrastructure.

Orchestration is typically the one layer that already has visibility across platforms, spanning your warehouse, your transformation layer, your external APIs and your operational databases. That cross-platform vantage point is what makes it possible to build a context layer that is comprehensive rather than siloed.

Governance needs to execute at runtime, not live in documentation. Policies about data access, cost limits and human approval requirements need to be enforced in code as agents run, not described in guidelines that agents cannot read and humans forget to follow.
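To make that distinction concrete, here is a minimal sketch of a runtime policy gate; the limits, field names, and actions are made up for illustration rather than drawn from any specific orchestration platform. The point is that the check runs in code on every action, not in a document.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgentPolicy:
    """Illustrative runtime policy; the limits and flags are made up."""
    max_spend_usd: float = 500.0
    requires_human_approval: bool = True

def execute_action(action: str, cost_usd: float, approved_by: Optional[str],
                   policy: AgentPolicy) -> str:
    """Enforce the policy in code before the agent's action actually runs."""
    if cost_usd > policy.max_spend_usd:
        return f"blocked: {action} exceeds spend limit (${cost_usd:,.2f})"
    if policy.requires_human_approval and approved_by is None:
        return f"queued: {action} awaits human sign-off"
    return f"executed: {action} (approved by {approved_by or 'policy default'})"

policy = AgentPolicy()
print(execute_action("issue_refund", cost_usd=120.0, approved_by=None, policy=policy))
print(execute_action("issue_refund", cost_usd=120.0, approved_by="ops_lead", policy=policy))
```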

What this means going forward

The organizations that deploy AI agents at scale will have invested in two things before those agents reach production.

First, a context layer that gives agents a reliable, cross-platform understanding of the enterprise’s data. This means not just raw access to tables, but semantic knowledge of what the data means, where it comes from and how much to trust it.

Second, an operational layer that governs how agents act, with the scheduling, cost controls, auditability and human-in-the-loop checkpoints that enterprise deployment demands.

These two investments are not independent. They form a flywheel. Better context makes agents more effective, which drives broader adoption, which generates richer operational metadata, which deepens the context layer further.

Data engineers are becoming the people who determine whether automated decisions are trustworthy, not because they control the models but because they control both the context on which those models operate and the infrastructure through which they act. The organizations that understand this early will keep building on it. The ones that keep treating data engineering and orchestration as background infrastructure will keep rediscovering the same production failures, just with different names on the postmortem each time.

This article is published as part of the Foundry Expert Contributor Network.

Q1 2026 Cyber Attack Statistics

April 28, 2026, 06:51

I aggregated the statistics created from the cyber attack timelines published in the first quarter of 2026. In this period, I collected a total of 528 events (5.87 events/day) dominated by Cyber Crime with 66%, followed by Cyber Espionage with 18%, Hacktivism with 3%, and finally Cyber Warfare with 2%.
Converged analytics is the refinery for the age of sovereign AI and data

April 27, 2026, 09:07

“Data is the new oil” is one of the most overused phrases in enterprise technology. Yet it still captures something fundamentally true about the modern enterprise, if we extend the analogy.

Crude oil has limited value until it is refined into the fuels, chemicals, plastics, polymers, synthetic fibers, and industrial materials that power entire societies and permeate nearly every aspect of modern life. Similarly, the real value of data does not lie in its raw accumulation but in its transformation, through systems, into decisions, intelligence, and operational impact.

In this context, converged analytics has emerged as the refinery of the data economy. Organizations that lead will be those with the most effective refining layer. 

Traditional analytics architectures evolved in silos, no longer compatible with the dynamic AI world

Over the past decade, enterprises have invested heavily in extracting, storing, and moving data. Data lakes, warehouses, streaming platforms, and cloud pipelines have created an unprecedented accumulation of information. And yet only 13% of enterprises globally are successfully achieving ROI from their AI initiatives. 

“Enterprises now sit on massive reserves of structured, semi-structured, and unstructured data generated by applications, devices, and digital interactions. Yet despite this abundance, many CIOs still struggle to translate data into consistent, real-time business value. The issue is not scarcity—it is fragmentation,” says Quais Taraki, CTO, EnterpriseDB (EDB).

The value of data is trapped when it’s siloed and spread across systems and teams. 

Transactional systems were optimized for operational workloads. Analytical systems were built for reporting and historical analysis. Streaming systems handled real-time events. Each requires different infrastructure, tools, and governance models. Data has to be copied, moved, transformed, and reconciled across environments before it can be used. This introduces latency, complexity, duplication, and risk. Insights often arrive too late to influence outcomes, while operational systems remain disconnected from analytical intelligence.

Converged analytics solves the largest challenge for AI-ready data

What makes crude oil valuable is not extraction alone but its combination with the refinery—the integrated industrial system that processes, synthesizes, and upgrades raw hydrocarbons into usable products. 

Comparable in the world of enterprise technology is converged analytics, which addresses data systems fragmentation by unifying capabilities into a single, sovereign architectural paradigm. It brings together transactional processing, analytical processing, and streaming-data handling within a cohesive system. 

“Instead of moving data across multiple specialized platforms, converged analytics enables computation to occur where the data resides, across different workloads and time horizons. This integration collapses latency, reduces duplication, and preserves context, allowing organizations to move from retrospective analysis to real-time decision-making,” says Taraki of EDB.

AI raises the stakes 

While generative AI and now agentic AI have captured executive attention, their effectiveness depends on access to fresh, well-governed, and contextually rich data. Models trained on stale or fragmented datasets deliver limited value. 

Converged analytics provides the foundation for continuous data pipelines, real-time feature engineering, and low-latency inference. It enables architectures such as retrieval-augmented generation and supports ongoing feedback loops that improve model performance over time. In this sense, it is not just complementary to AI; it is a prerequisite for operationalizing it at scale.

AI also intensifies the cost of fragmentation. 

“Every time data must be copied, moved, or reconciled across specialized systems, organizations introduce latency, duplication, and loss of context,” says Taraki. 

Converged analytics reduces that friction by enabling computation closer to where data already resides, allowing decisions to happen in real time rather than after the fact.

Converged analytics offers non-AI and data companies a pathway to increased relevance and value

Unlike point solutions that address isolated parts of the data pipeline, converged analytics platforms sit at the center of the entire data lifecycle. They intersect with storage, compute, networking, and security, making them a natural integration point for a wide range of technologies. 

For hardware vendors, this creates demand for high-performance infrastructure capable of handling mixed workloads with low latency and high throughput. For service providers, it opens the door to long-term engagements around platform design, deployment, optimization, and governance.

Converged analytics workloads are not peripheral use cases; they are core to business performance. Real-time fraud detection, predictive maintenance, personalized customer experiences, and supply chain optimization all depend on the ability to process and act on data as it is generated. These workloads are both compute intensive and mission critical, making converged analytics an especially valuable category for vendors seeking to align with enterprise priorities.

The shift toward hybrid and edge computing environments adds another dimension to the opportunity. As enterprises distribute workloads across cloud, on-premises, and edge locations, the need for consistent analytics capabilities across these environments becomes critical. 

Converged analytics platforms are increasingly designed to operate seamlessly across this spectrum, enabling data to be processed and acted upon wherever it is generated. This creates additional insertion points for both hardware and services vendors, from edge devices and accelerators to orchestration, lifecycle management, and ongoing operational support.

Making it work at enterprise scale

In the early stages of the oil industry, value was concentrated in extraction. Over time, it shifted to refining and distribution, with efficiency, scale, and integration determining competitive advantage. The same transition is now underway in the data economy. Enterprises already possess vast reserves of data; the differentiator will be their ability to refine it rapidly, efficiently, and in context.

Converged analytics represents that refining capability. It is why hardware vendors are optimizing for data-intensive workloads and why services firms are reorganizing around platform engineering. But the practical reality is that this refining layer cannot succeed as software alone. It depends on the hardware, services, support, and operational expertise required to deploy and run it at scale.

For CIOs, this is no longer just a question of architecture. It is a prerequisite for making data a true driver of business value.

California Engineer Identified in Suspected Shooting at White House Correspondents’ Dinner

April 26, 2026, 00:26
A 31-year-old engineer and self-described indie game developer is suspected of firing shots at the annual event attended by President Donald Trump, high-profile media figures, and US government officials.

White House Says China-Linked Actors Tried to ‘Steal American AI’

April 23, 2026, 14:11

The White House says China-linked actors are using industrial-scale distillation to extract American AI breakthroughs, with US action planned.

Column | The real variable in AI ROI is organizational design, not technology

April 23, 2026, 23:15

The organizations that invest the most in AI often create the least value. This paradox is fueling debate over whether AI actually generates value. But that is not the essential question. At the task level, the evidence is already abundant: across coding, writing, analysis, customer support, and other areas, AI has repeatedly been shown to deliver measurable productivity gains.

The problem is that these gains do not translate into company-wide financial results. MIT research found that 95% of AI pilot projects fail to produce a meaningful impact on the profit and loss statement (P&L) in their early stages. McKinsey likewise found that only the roughly 6% of surveyed companies classified as high performers generate more than 5% of EBIT from AI. Boston Consulting Group (BCG) estimated that about 60% of AI transformation projects deliver limited value or fail to create real value at all.

The common pattern is clear: results show up in the pilot stage, but the value fails to carry through when scaled across the organization.

Meanwhile, the AI adoption gap between large enterprises and small businesses is narrowing quickly. According to US Small Business Administration (SBA) data, adoption rose steadily for both groups between November 2023 and August 2025: from under 6% to over 12% for large enterprises, and from roughly 4% to over 8% for small businesses. Large enterprises still lead, but small businesses are adopting faster, and the gap is gradually closing.

AI works at the edge but stalls at the core

Even as adoption accelerates at large enterprises, AI that reaches production does not settle in easily. Decades of accumulated systems, regulatory regimes, layered governance, and complex cross-departmental dependencies stand in the way. After adoption, AI must clear security reviews, procurement processes, legal assessments, architecture board reviews, and constraints on integrating with legacy systems. Each step exists for a reason, but together they slow the pace of change and dilute the impact.

An AI pilot can show results within a single department. But the moment it is scaled across the organization, it collides with the existing operating model. When data ownership, accountability structures, and decision rights are unclear, the cost of scaling grows further. AI that worked in a contained environment stalls as it expands across the organization, and the expected value evaporates in the process of scaling.

Small businesses have their own challenges: cash flow constraints, limited staff, and customer risk, among others. But they face relatively few veto points in their decision-making. A founder rarely convenes a cross-functional committee to experiment with AI-based quote automation or a follow-up customer response system.

Decisions are made quickly and feedback loops are short. Because each employee accounts for a large share of total productivity, the effect of a change shows up immediately. When a five-person company automates 20% of its administrative work, the result is measurable right away.

Their structural advantage is simplicity: fewer legacy systems, shorter decision paths, and less layered governance. They can also adopt SaaS-based solutions quickly and integrate them with little friction. None of this guarantees better decisions, but it is a clear advantage in speed of execution.

The essence of AI ROI is organizational readiness

Large enterprises, by contrast, carry heavy system integration requirements, elaborate governance, and distributed accountability. These are effective at reducing operational risk, but they slow the conversion of new technical capability into actual financial results. This is why an AI pilot can prove its potential technically yet fail to lift the economic performance of the company as a whole.

This confronts executives with a fundamental choice, one they often try to avoid. Framing AI ROI as a purely technical problem lets them delegate it to the IT organization, the data team, or an innovation unit. Framing it as an organizational design problem is a different matter: it cannot be solved without enterprise-wide change.

AI tends to amplify structural problems rather than resolve them. Unclear decision rights become more visible, weak data governance magnifies risk, and misaligned incentives deepen faster. This is why productivity gains at the task level do not automatically translate into improved profitability for the company.

None of this is new. A similar pattern appeared in the early days of the internet: the technology itself worked, but the companies that redesigned their organizations outperformed those that simply bolted it onto existing structures.

AI is following the same pattern today. What limits ROI is not model performance but the organization's readiness to absorb and scale change.

The question, then, is not "Why isn't AI generating ROI?" ROI is generated by organizations, not by AI. The real question is whether the organization is prepared to redesign how it works, how it makes decisions, how it governs, and how it measures performance.

Without that change, AI is likely to remain a tool that raises productivity at the margins. If the organization can absorb it, AI becomes a core engine of sustainable economic value.
