Hosting Journalist

HostingJournalist.com is a news portal covering the global web & cloud hosting industry, including cloud servers, dedicated servers, virtual servers, reseller hosting, managed hosting, CDN, and colocation.

21/11/2025

Cisco Showcases AI-Driven Hybrid Mesh Firewall at Cisco Live Keynote: In this Cisco Live center-stage keynote, Raj Chopra - Senior Vice President and Chief Product Officer for Cisco’s Security Business Group - shares how AI is fundamentally reshaping network security. Speaking to an audience of security and infrastruc...

http://dlvr.it/TPNl8Y

21/11/2025

Xen 4.21 Expands Performance, Security for Cloud and Automotive: The Xen Project has released Xen 4.21, marking one of the hypervisor’s most substantial modernization steps in recent years as it expands its role across cloud, data center, automotive, and emerging embedded workloads. The new release updates core toolchains, improves x86 performance efficiency, strengthens security on Arm-based platforms, and introduces early RISC-V enablement for future architectures.

Hosted by the Linux Foundation, the open-source virtualization platform continues to evolve beyond its roots as a cloud hypervisor, aiming to serve as a unified foundation for compute environments ranging from hyperscale servers to safety-critical vehicle systems.

For cloud providers, data center operators, and virtualization vendors, Xen 4.21 brings measurable performance improvements. Enhancements to memory handling, cache management, and PCI capabilities on x86 promise higher VM density and improved performance per watt - an increasingly important metric as operators refine infrastructure for AI, GPU-accelerated workloads, and large-scale multitenant environments.

The release introduces a new AMD Collaborative Processor Performance Control (CPPC) driver, allowing finer-grained CPU frequency scaling on AMD platforms. Combined with an updated page-index compression (PDX) algorithm and support for resizable BARs in PVH dom0, the update is designed to extract more capability from modern multi-core CPUs without demanding architectural rewrites from operators.
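For readers unfamiliar with page-index compression, the idea is to map a sparse physical address space - RAM banks separated by large holes - onto a dense index, so per-page metadata tables do not have to cover the gaps. The Python sketch below is a toy illustration of that general idea with made-up address ranges; it is not Xen's actual PDX algorithm or code.

```python
# Toy illustration of page-index compression: map sparse page frame numbers
# (PFNs) from disjoint RAM banks onto a dense index, so per-page metadata
# arrays do not need entries for the address holes in between.
# All values are hypothetical; this is not Xen's PDX implementation.

WIDTH = 40  # number of PFN bits considered

def removable_bits(pfns):
    """Bit positions that are zero in every valid PFN can be squeezed out."""
    union = 0
    for pfn in pfns:
        union |= pfn
    return {b for b in range(WIDTH) if not (union >> b) & 1}

def pfn_to_index(pfn, removable):
    """Pack the remaining, varying bits into a compact index."""
    idx, out = 0, 0
    for b in range(WIDTH):
        if b in removable:
            continue
        idx |= ((pfn >> b) & 1) << out
        out += 1
    return idx

# Two RAM banks separated by a large hole in the physical address map.
pfns = list(range(0x000, 0x100)) + list(range(0x4000000, 0x4000100))
gone = removable_bits(pfns)
dense_span = max(pfn_to_index(p, gone) for p in pfns) + 1
print(f"sparse span: {max(pfns) + 1:#x} entries, dense span: {dense_span:#x} entries")
```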

Xen’s role in the automotive and embedded sectors continues to expand as the industry shifts toward software-defined vehicles powered by heterogeneous SoCs.

Xen 4.21 includes expanded support for Arm-based platforms with new security hardening, stack-protection mechanisms, MISRA-C compliance progress, and features designed to meet the stringent requirements of safety-certifiable systems. The release adds support for eSPI ranges on SoCs with GICv3.1+ and introduces advancements to dom0less virtualization - an architecture increasingly used in automotive deployments to isolate workloads such as infotainment, digital instrument clusters, and advanced driver-assistance systems. Demonstrations by AMD and Honda at Xen Summit 2025 showcased the hypervisor running on production-grade automotive hardware, signaling growing industry readiness.

RISC-V support also advances with the addition of UART and external interrupt handling in hypervisor mode. While full guest virtualization is still under development, this early work lays the groundwork for future RISC-V systems that may require secure workload isolation in edge, automotive, or custom compute environments.

Hypervisor Modernization

Cody Zuschlag, Community Manager for the Xen Project, said the 4.21 release reflects a broader modernization strategy. “We’re modernizing the hypervisor from the inside out: updating toolchains, expanding architecture support, and delivering the performance that next-generation hardware deserves. It’s exciting to see Xen powering everything from next-generation cloud servers to real-world automotive systems,” he said.

Toolchain updates represent one of the most significant architectural shifts in the release. Xen 4.21 raises minimum supported versions of GCC, Binutils, and Clang across all architectures - an essential but complex step that reduces technical debt and improves the platform’s long-term security and maintainability. The update also formalizes support for qemu-xen device models inside Linux stubdomains, an approach favored by security-focused Linux distributions, including QubesOS.

The Xen Project remains backed by a wide ecosystem of contributors from AMD, Arm, AWS, EPAM, Ford, Honda, Renesas, Vates, XenServer, and numerous independent maintainers. Enterprise vendors leveraging Xen for commercial offerings welcomed the update.

Citrix, for example, emphasized improvements that translate into better performance and reliability for users of XenServer. “Updates like the newly introduced page index compression algorithm and better memory cache attribute management translate into better performance and improved scalability for all our enterprise XenServer users,” said Jose Augustin, Product Management at Citrix.

Arm echoed the significance of the release for software-defined automotive and edge platforms. “Virtualization is becoming central to how automotive and edge systems deliver safety, performance, and flexibility,” said Andrew Wafaa, Senior Director of Software Communities at Arm. “By expanding support for Arm Cortex-R technology, the latest Xen 4.21 release will help advance more scalable, secure, and safety-critical deployments on Arm-based platforms.”

As cloud and AI workloads accelerate, and automotive manufacturers adopt virtualization for isolation and safety, Xen continues to position itself as a hypervisor built for the next generation of distributed compute environments. Xen 4.21 signals not only modernization, but a strategic expansion into industries where performance, resilience, and safety converge.

Executive Insights FAQ: The Xen 4.21 Release

How does Xen 4.21 improve performance for cloud and data center workloads?

The release enhances memory handling, cache efficiency, PCI performance, and CPU scaling - allowing operators to run more virtual machines with lower overhead and greater performance per watt on modern x86 hardware.

Why is the automotive sector interested in Xen?

Xen’s dom0less architecture, MPU progress, MISRA-C compliance work, and strong isolation capabilities align with automotive safety and reliability requirements for systems such as ADAS, dashboards, and infotainment.

What makes this release significant for Arm-based platforms?

Xen 4.21 adds stack protection, eSPI support, refined Kconfig options, and Cortex-R MPU progress - key elements for building safety-certifiable embedded and automotive deployments.

How far along is RISC-V support?

Xen 4.21 introduces early hypervisor-mode capabilities such as UART and external interrupt handling, laying the foundation for full guest support in future releases.

Why were toolchain upgrades emphasized in this release?

Modern compilers and build tools improve code quality, reduce vulnerabilities, and enable architectural features needed for next-generation hardware - ensuring Xen remains maintainable and secure for long-term industry use.

http://dlvr.it/TPNhjp

21/11/2025

Palo Alto to Buy Chronosphere for $3.35B to Boost AI Observability: Palo Alto Networks is making one of its most aggressive moves yet in the race to build infrastructure for AI-driven enterprises. The cybersecurity giant announced a definitive agreement to acquire Chronosphere, a fast-growing observability platform engineered to handle the scale, latency, and resilience requirements of modern cloud and AI workloads.

The $3.35 billion acquisition signals Palo Alto Networks’ intention to unify telemetry, AI automation, and security into a single data platform capable of supporting the next wave of hyperscale applications.

At the center of this deal is an industrywide shift: AI data centers and cloud-native environments now depend on uninterrupted uptime, deterministic performance, and the ability to detect and remediate failures instantly. Observability - once a domain of dashboards and log aggregation - has become mission-critical infrastructure. For Palo Alto Networks, Chronosphere represents the architectural foundation for this new reality.

Chronosphere’s platform was built for organizations operating at extreme scale, including two leading large language model providers. Its architecture emphasizes cost-optimized data ingestion, real-time visibility across massive cloud environments, and resilience under unpredictable workloads. Chronosphere has also gained industry validation, recently recognized as a Leader in the 2025 Gartner Magic Quadrant for Observability Platforms.

Palo Alto Networks Chairman and CEO Nikesh Arora said Chronosphere’s design aligns with the operational demands of AI-native companies. He emphasized that the acquisition will extend the reach of Palo Alto Networks’ AgentiX, its autonomous security and remediation framework. The combined offering is intended to shift observability from passive alerting to active, AI-driven remediation. According to Arora, AI agents will be deployed across Chronosphere’s telemetry streams to detect anomalies, investigate root causes, and autonomously implement fixes - turning observability into a real-time automated control plane.

Chronosphere co-founder and CEO Martin Mao described the acquisition as the natural next chapter for the company’s mission to “provide scalable resiliency for the world’s largest digital organizations.” He framed Palo Alto Networks as the right strategic match to expand Chronosphere’s capabilities globally, while deepening integration between security data and operational telemetry. Both companies aim to build a consolidated data layer that can keep pace with the explosion of metrics, traces, logs, and events produced by AI-powered infrastructure.

Managing Observability Costs

Beyond automation, the acquisition reflects rising pressure on enterprises to manage observability costs. Cloud-native architectures generate telemetry at petabyte scale, creating unsustainable ingestion and retention expenses for many organizations. Chronosphere’s optimized pipeline and data transformation technology promise to reduce operational costs by routing, deduplicating, and prioritizing telemetry in ways traditional observability stacks cannot.
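To make the routing and deduplication idea concrete, here is a minimal Python sketch of a telemetry-shaping stage: exact duplicate data points are dropped and the remainder is routed by priority, so high-value signals keep full fidelity while low-priority metrics can go to cheaper storage. The names and rules are hypothetical and are not Chronosphere's actual pipeline or API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricPoint:
    name: str
    labels: tuple      # e.g. (("service", "checkout"), ("region", "eu"))
    timestamp: int     # epoch seconds
    value: float

def shape(points, is_critical, seen=None):
    """Yield (destination, point) pairs after dedup and priority routing."""
    seen = set() if seen is None else seen
    for p in points:
        if p in seen:                     # drop exact duplicate points
            continue
        seen.add(p)
        dest = "hot-store" if is_critical(p) else "cheap-archive"
        yield dest, p

# Example rule: keep latency and error series at full fidelity.
is_critical = lambda p: p.name.endswith(("_latency_ms", "_errors_total"))
```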

Chronosphere would bring more than technology. The company reports annual recurring revenue above $160 million as of September 2025, with triple-digit year-over-year growth - an uncommon trajectory in the observability market, which has grown crowded and competitive. Palo Alto Networks expects the acquisition to close in the second half of its fiscal 2026, subject to regulatory approval.

The move positions Palo Alto Networks as an emerging heavyweight in observability, setting it up to compete more directly with Datadog, Dynatrace, and New Relic. But unlike its rivals, Palo Alto Networks aims to merge observability with active AI agents and security telemetry, betting that customers will increasingly prioritize unified control across performance, cost, and cyber risk.

For enterprises navigating the uncertainty of AI-era operations, the promise of a consolidated observability and remediation engine may prove compelling. As workloads become distributed across clouds, GPUs, edge devices, and emerging AI fabrics, the companies argue that the old model of isolated dashboards can no longer keep up with the volume or velocity of operational data. Instead, the future will require autonomous systems capable of interpreting telemetry and responding in real time - exactly the space Palo Alto Networks hopes to define through this acquisition.

Executive Insights FAQ: Palo Alto Networks + Chronosphere

What strategic gap does Chronosphere fill for Palo Alto Networks?

Chronosphere gives Palo Alto Networks a cloud-scale observability platform optimized for high-volume AI and cloud workloads, enabling unified security and performance visibility.

How will AgentiX integrate with Chronosphere’s platform?

AgentiX will use Chronosphere’s telemetry streams to deploy AI agents that detect issues, investigate root causes, and autonomously remediate failures across distributed environments.

Why is observability suddenly mission-critical for AI workloads?

AI data centers require continuous uptime and deterministic performance; observability becomes the real-time sensor layer that ensures reliability and cost-efficient scaling.

What financial impact does Chronosphere bring?

Chronosphere reports more than $160M in ARR with triple-digit annual growth, giving Palo Alto Networks a fast-expanding revenue engine in an increasingly competitive market.

How will customers benefit from the combined offering?

Enterprises would gain deeper visibility across security and observability data at petabyte scale, paired with automated remediation and significant cost reductions in telemetry ingestion.

http://dlvr.it/TPNg8K

20/11/2025

Read: Lata Varghese - Rackspace Technology - HostingJournalist.com

20/11/2025

StorPool Launches New HCI Stack Integrated with Oracle Virtualization: While StorPool Storage delivers sub-100µs in-VM latency and a robust data-management platform for always-on operations, it also effortlessly connects with Oracle Virtualization, which offers full-featured KVM management with simple, predictable main...

http://dlvr.it/TPNHzR

20/11/2025

Cloudflare Outage Traced to Internal Error, Not Cyberattack: Cloudflare is detailing the root cause of a major global outage that disrupted traffic across a large portion of the Internet on November 18, 2025, marking the company’s most severe service incident since 2019. While early internal investigations briefly raised the possibility of a hyper-scale DDoS attack, Cloudflare cofounder and CEO Matthew Prince confirmed that the outage was entirely self-inflicted.

The Cloudflare disruption, which began at 11:20 UTC, produced spikes of HTTP 5xx errors for users attempting to access websites, APIs, security services, and applications running through Cloudflare’s network - an infrastructure layer relied upon by millions of organizations worldwide.

Prince said the outage was caused by a misconfiguration in a database permissions update, which triggered a cascading failure in the company’s Bot Management system and, in turn, caused Cloudflare’s core proxy layer to fail at scale.

The error originated from a ClickHouse database cluster that was in the process of receiving new, more granular permissions. A query designed to generate a ‘feature file’ - a configuration input for Cloudflare’s machine-learning-powered Bot Management classifier - began producing duplicate entries once the permissions change allowed the system to see more metadata than before. The file doubled in size, exceeded the memory pre-allocation limits in Cloudflare’s routing software, and triggered software panics across edge machines globally.

Those feature files are refreshed every five minutes and propagated to all Cloudflare servers worldwide. The intermittent nature of the database rollout meant that some nodes generated a valid file while others created a malformed one, causing the network to oscillate between functional and failing states before collapsing into a persistent failure mode.

The initial symptoms were misleading. Traffic spikes, noisy error logs, intermittent recoveries, and even a coincidental outage of Cloudflare’s independently hosted status page contributed to early suspicion that the company was under attack. Only after correlating file-generation timestamps with error propagation patterns did engineers isolate the issue to the Bot Management configuration file.

By 14:24 UTC, Cloudflare had frozen propagation of new feature files, manually inserted a known-good version into the distribution pipeline, and forced resets of its core proxy service - known internally as FL and FL2. Normal traffic flow began stabilizing around 14:30 UTC, with all downstream services recovering by 17:06 UTC.

The impact was widespread because the faulty configuration hit Cloudflare’s core proxy infrastructure, the traffic-processing layer responsible for TLS termination, request routing, caching, security enforcement, and API calls. When the Bot Management module failed, the proxy returned 5xx errors for all requests relying on that module. On the newer FL2 architecture, this manifested as widespread service errors; on the legacy FL system, Bot scores defaulted to zero, creating potential false positives for customers blocking bot traffic.
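The difference between the two proxy generations comes down to how a module failure is handled. The Python sketch below, with hypothetical names and thresholds, contrasts failing closed (returning a 5xx error when the bot score cannot be computed, as on FL2) with failing open on a default score (as on FL, where a default of zero can look like "definitely a bot" to customer blocking rules).

```python
from dataclasses import dataclass

class ModuleUnavailable(Exception):
    pass

@dataclass
class Request:
    path: str
    customer_blocks_bots: bool

DEFAULT_SCORE = 0      # a score of 0 reads as "automated" to many blocking rules
BLOCK_THRESHOLD = 30   # hypothetical customer rule: block anything scored below this

def handle(req: Request, score_fn, fail_open: bool):
    try:
        score = score_fn(req)
    except ModuleUnavailable:
        if not fail_open:
            return 503, "bot management unavailable"   # fail closed: request errors out
        score = DEFAULT_SCORE                           # fail open: risks false positives
    if req.customer_blocks_bots and score < BLOCK_THRESHOLD:
        return 403, "blocked as likely bot"
    return 200, "ok"
```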

Multiple services either failed outright or degraded, including Turnstile (Cloudflare’s authentication challenge), Workers KV (the distributed key-value store underpinning many customer applications), Access (Cloudflare’s Zero Trust authentication layer), and portions of the company’s dashboard. Internal APIs slowed under heavy retry load as customers attempted to log in or refresh configurations during the disruption.

Cloudflare emphasized that email security, DDoS mitigation, and core network connectivity remained operational, although spam-detection accuracy temporarily declined due to the loss of an IP reputation data source.

Prince acknowledged the magnitude of the disruption, noting that Cloudflare’s architecture is intentionally built for fault tolerance and rapid mitigation, and that a failure blocking core proxy traffic is deeply painful to the company’s engineering and operations teams. The outage, he said, violated Cloudflare’s commitment to keeping the Internet reliably accessible for organizations that depend on the company’s global network.

Cloudflare has already begun implementing systemic safeguards. These include hardened validation of internally generated configuration files, global kill switches for key features, more resilient error-handling across proxy modules, and mechanisms to prevent debugging systems or core dumps from consuming excessive CPU or memory during high-failure events.
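A hardened validation step of the kind described above might look like the following Python sketch: an internally generated feature file is rejected before propagation if it exceeds the consumers' memory budget or contains duplicate feature rows. The file format, limits, and names here are assumptions for illustration, not Cloudflare's actual tooling.

```python
import json

MAX_FEATURES = 200      # assumed per-consumer pre-allocation limit
MAX_BYTES = 1_000_000   # assumed hard cap on the generated artifact

class FeatureFileRejected(Exception):
    pass

def validate_feature_file(raw: bytes) -> list:
    """Return parsed features, or raise instead of propagating a bad file."""
    if len(raw) > MAX_BYTES:
        raise FeatureFileRejected(f"{len(raw)} bytes exceeds cap of {MAX_BYTES}")
    features = json.loads(raw)
    names = [f["name"] for f in features]
    duplicates = {n for n in names if names.count(n) > 1}
    if duplicates:
        raise FeatureFileRejected(f"duplicate feature rows: {sorted(duplicates)}")
    if len(features) > MAX_FEATURES:
        raise FeatureFileRejected(f"{len(features)} features exceeds limit of {MAX_FEATURES}")
    return features   # only now is the file safe to push to the edge
```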

The full incident timeline reflects a multi-hour race to diagnose symptoms, isolate root causes, contain cascading failures, and bring the network back online. Automated detection triggered alerts within minutes of the first malformed file reaching production, but fluctuating system states and misleading external indicators complicated root-cause analysis. Cloudflare teams deployed incremental mitigations - including bypassing Workers KV’s reliance on the proxy - while working to identify and replace the corrupted feature files.

By the time a fix reached all global data centers, Cloudflare’s network had stabilized, customer services were back online, and downstream errors were cleared.

As AI-driven automation and high-frequency configuration pipelines become fundamental to global cloud networks, the Cloudflare outage underscores how a single flawed assumption - in this case, about metadata visibility in ClickHouse queries - can ripple through distributed systems at Internet scale. The incident serves as a high-profile reminder that resilience engineering, configuration hygiene, and robust rollback mechanisms remain mission-critical in an era where edge networks process trillions of requests daily.

Executive Insights FAQ: Understanding the Cloudflare Outage

What triggered the outage in Cloudflare’s global network?

A database permissions update caused a ClickHouse query to return duplicate metadata, generating a Bot Management feature file twice its expected size. This exceeded memory limits in Cloudflare’s proxy software, causing widespread failures.

Why did Cloudflare initially suspect a DDoS attack?

Systems showed traffic spikes, intermittent recoveries, and even Cloudflare’s external status page went down by coincidence - all patterns resembling a coordinated attack, contributing to early misdiagnosis.

Which services were most affected during the disruption?

Core CDN services, Workers KV, Access, and Turnstile all experienced failures or degraded performance because they depend on the same core proxy layer that ingests the Bot Management configuration.

Why did the issue propagate so quickly across Cloudflare’s global infrastructure?

The feature file responsible for the crash is refreshed every five minutes and distributed to all Cloudflare servers worldwide. Once malformed versions began replicating, the failure rapidly cascaded across regions.

What long-term changes is Cloudflare making to prevent future incidents?

The company is hardening configuration ingestion, adding global kill switches, improving proxy error handling, limiting the impact of debugging systems, and reviewing failure modes across all core traffic-processing modules.

http://dlvr.it/TPNDGV

20/11/2025

IONOS Deploys Distributed High-Performance Network with VyOS: VyOS Networks is expanding its footprint in the enterprise cloud ecosystem as IONOS, one of Europe’s largest hosting and infrastructure providers, has completed a broad deployment of the VyOS open-source network operating system across its Bare Metal platform.

The rollout marks a significant architectural shift for IONOS, replacing centralized, hardware-dependent networking models with a distributed, software-defined approach designed to support massive scale, improve resilience, and reduce operational costs.

The deployment reflects a growing trend among global cloud providers: leveraging open-source network operating systems to accelerate infrastructure modernization while avoiding vendor lock-in. For IONOS, the move to VyOS enables the company to scale to hundreds of nodes, orchestrate workloads more flexibly across its European data centers, and achieve high-performance throughput without the licensing costs associated with traditional proprietary systems.

According to IONOS, the shift was driven by a need to eliminate architectural bottlenecks and reduce the risk of outages tied to centralized network chokepoints. By distributing VyOS instances across its infrastructure, the company has built a fault-tolerant environment that maintains service continuity even when individual components fail. The redesign also positions IONOS to better support increasingly data-intensive customer workloads spanning bare metal compute, hybrid cloud deployments, and latency-sensitive applications.

“VyOS gave us the freedom to build a resilient, distributed network without sacrificing performance or control,” said Tomás Montero, Head of Hosting Network Services at IONOS. “We can scale to hundreds of nodes efficiently and securely.”

Performance metrics from the deployment indicate that VyOS is delivering high throughput at scale. Across IONOS clusters, aggregate speeds reach into the hundreds of gigabits per second. Individual clusters achieve peak throughput of 20 Gbps and sustain roughly 1.5 million packets per second. These figures position the open-source platform squarely within the performance range of commercial network operating systems traditionally relied upon by large cloud providers.
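As a rough cross-check of those figures, 20 Gbps at roughly 1.5 million packets per second works out to an average packet size of about 1,670 bytes - close-to-MTU traffic rather than a small-packet workload. The snippet below simply reproduces that arithmetic using the reported numbers.

```python
throughput_bps = 20e9       # reported peak per-cluster throughput (20 Gbps)
packets_per_second = 1.5e6  # reported sustained rate (~1.5 million pps)

avg_packet_bytes = throughput_bps / 8 / packets_per_second
print(f"implied average packet size: {avg_packet_bytes:.0f} bytes")  # ~1667 bytes
```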

VyOS Networks emphasized that the collaboration highlights a broader industry shift in favor of open-source networking as a strategic foundation for next-generation infrastructure. “IONOS’s adoption of VyOS demonstrates how open-source networking solutions can rival and even outperform proprietary systems in scalability, reliability, and cost efficiency,” said Santiago Blanquet, Chief Revenue Officer at VyOS Networks. “This collaboration showcases how enterprises can leverage VyOS to build cloud-ready, high-throughput infrastructures that deliver exceptional performance and resilience.”

The move to VyOS has also yielded cost benefits for IONOS. The company reports significant savings tied to the elimination of traditional hardware and licensing expenditures. Instead of renewing contracts with established networking vendors, IONOS is investing in software-defined infrastructure that can scale horizontally and adapt to workload demands without requiring specialized hardware appliances.

Looking ahead, IONOS plans to deepen its integration with the VyOS ecosystem. The company is preparing to adopt Vector Packet Processing (VPP) in VyOS 1.5 to further push throughput and efficiency across its networking layer. Additional enhancements planned for upcoming phases include expanded orchestration support and advanced load-balancing capabilities to optimize multi-tenant infrastructure performance. Taken together, these investments signal a long-term commitment to open-source networking as the backbone of IONOS’s infrastructure strategy.

VyOS Networks, which has spent more than a decade developing open-source routing, firewall, and VPN technologies, now occupies a growing role in enterprise infrastructure modernization initiatives. Its software is deployed across bare-metal environments, hyperscale clouds, and distributed edge systems, giving organizations a unified networking platform that can be automated and scaled across heterogeneous environments.

With competition in cloud infrastructure intensifying, the collaboration positions IONOS to offer customers more flexible, high-performance network services without the constraints of legacy architectures. For VyOS, it strengthens the company’s presence in the European infrastructure market and highlights the maturing role of open-source networking within mission-critical cloud platforms.

Executive Insights FAQ: What This News Means for Enterprise Networking

How does VyOS improve network scalability for cloud providers?

VyOS enables distributed deployment across hundreds of nodes, allowing cloud operators to scale network capacity horizontally without relying on centralized hardware.

What performance gains did IONOS achieve with VyOS?

Clusters reached peak throughput of 20 Gbps and about 1.5 million PPS, with aggregate speeds in the hundreds of Gbps across the environment.

How does VyOS reduce operational and financial risk?

The distributed design eliminates single points of failure and VyOS’s open-source model removes licensing fees, reducing both downtime risk and recurring cost.

Why is open-source networking gaining traction in hyperscale and cloud environments?

Enterprises want vendor independence, automation-friendly infrastructure, and cost efficiency - areas where open-source NOS platforms increasingly match or surpass proprietary options.

What comes next in the VyOS–IONOS collaboration?

IONOS plans to adopt VPP in VyOS 1.5, enhance orchestration, and expand load-balancing capabilities to further improve throughput and operational efficiency across its bare-metal platform.

http://dlvr.it/TPNDDs

20/11/2025

VAST Data, Microsoft Unite to Deliver High-Scale Agentic AI on Azure: VAST Data and Microsoft are deepening their alignment around next-generation AI infrastructure, announcing a new collaboration that will bring the VAST Data AI Operating System (AI OS) natively to Microsoft Azure. Unveiled at Microsoft Ignite, the partnership positions VAST Data as a strategic technology layer supporting what both companies describe as the coming wave of agentic AI.

These AI systems, composed of autonomous, continuously reasoning software agents, operate on massive, real-time datasets.

For Azure customers, the integration means they will be able to deploy VAST’s full data platform directly within the Microsoft cloud, using the same governance, security, operational tooling, and billing frameworks that define Azure-native services. The VAST AI OS, long known in enterprise AI circles for its performance-oriented architecture and unified data model, will now be available as a cloud service, simplifying deployment for organizations scaling AI workloads across on-premises, hybrid, and multi-cloud environments.

The partnership gives enterprises access to VAST’s unified storage, data cataloging, and database services, designed to support increasingly complex AI pipelines that incorporate vector search, retrieval-augmented generation (RAG), model training, inference, and real-time agentic processing. VAST’s architecture will run on Azure infrastructure, including the new Laos VM Series and Azure Boost accelerated networking, which are optimized for high-bandwidth AI workloads.

Jeff Denworth, co-founder of VAST Data, described the partnership as an inflection point for enterprise AI deployment. “Performance, scale, and simplicity are converging,” he said. “Azure customers will be able to unify their data and AI pipelines across environments with the same power, simplicity, and performance they expect from VAST - now combined with the elasticity and geographic reach of Microsoft’s cloud.”

Microsoft, for its part, sees the integration as a way to streamline the data and storage foundations required for the fast-growing segment of AI model builders working within Azure. “Many of the world’s leading AI developers leverage VAST for its scalability and breakthrough performance,” said Aung Oo, Vice President of Azure Storage. “Running VAST’s AI OS on Azure will help customers accelerate time-to-insight while reducing operational and cost barriers.”

At the center of the offering is a platform designed for agentic AI. VAST’s InsightEngine provides stateless compute and database services optimized for vector search, RAG pipelines, and high-performance data preparation. Its companion AgentEngine coordinates autonomous AI agents working across distributed environments, enabling continuous reasoning over data streams without requiring multi-step orchestration frameworks.
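For context, the retrieval step in a RAG pipeline of the kind described here typically embeds a query, finds the nearest stored chunks by cosine similarity, and folds them into the model's prompt. The Python sketch below shows that generic pattern; it is an illustration only, not the InsightEngine or AgentEngine API.

```python
import numpy as np

def top_k_chunks(query_vec, chunk_vecs, chunks, k=3):
    """Return the k chunks whose embeddings are most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    m = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    scores = m @ q                                # cosine similarity per chunk
    best = np.argsort(scores)[::-1][:k]
    return [(chunks[i], float(scores[i])) for i in best]

def build_prompt(question, retrieved):
    """Assemble retrieved chunks into the context block of a prompt."""
    context = "\n---\n".join(text for text, _ in retrieved)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```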

Azure CPU and GPU Clusters

From an infrastructure perspective, the VAST AI OS is engineered to maximize utilization of Azure CPU and GPU clusters. The platform integrates intelligent caching, metadata-aware I/O, and high-throughput data services to ensure predictable performance across training, fine-tuning, and inference cycles. This aligns with Microsoft’s broader strategy of building vertically integrated AI infrastructure - one that increasingly includes custom silicon investments.

A key differentiator of the VAST approach is its exabyte-scale DataSpace, which creates a unified global namespace across on-prem, co-lo, and cloud environments. The model gives enterprises the ability to burst GPU-intensive workloads into Azure without redesigning pipelines or migrating data - a capability that has traditionally slowed hybrid AI adoption.

VAST Data’s disaggregated, shared-everything (DASE) architecture extends into Azure as well, allowing compute and storage resources to scale independently. With built-in Similarity Reduction technology reducing the storage footprint of large AI datasets, the combined platforms aim to give customers both elasticity and cost containment - critical factors as model development increasingly demands multi-region, multi-petabyte environments.

The collaboration arrives as AI infrastructure requirements evolve rapidly. Autonomous agents, context-rich retrieval systems, and continuous-learning workflows require consistent performance across heterogeneous environments - something neither legacy storage architectures nor siloed cloud services were built to handle. By positioning VAST as a unified data substrate for Azure-based AI, Microsoft is betting on an architecture that can bridge those gaps at cloud scale.

Both companies say they will co-engineer future capabilities as Microsoft advances its next-generation compute programs. The long-term goal, they emphasize, is to ensure that regardless of model architecture or processor design, the underlying data layer can support AI workloads with predictability and scale.

Executive Insights FAQ

What does this partnership enable for Azure customers?

Azure users will be able to deploy the VAST AI Operating System natively in the cloud, giving them unified data services, high-performance storage, and AI-optimized compute pipelines without managing separate infrastructure.

How does the VAST AI OS support agentic AI?

VAST’s InsightEngine and AgentEngine allow organizations to run autonomous AI agents and stateful reasoning systems directly on real-time data streams, enabling continuous decision-making across hybrid and multi-cloud environments.

What advantages does the integration bring for AI model builders?

The platform keeps Azure GPU clusters fully utilized through high-throughput data services, intelligent caching, and metadata-optimized I/O - ensuring predictable performance for training, fine-tuning, and inference at scale.

How does VAST improve hybrid AI workflows?

Its global DataSpace functions as a unified namespace, allowing organizations to burst workloads into Azure without data migration or pipeline redesign, enabling seamless hybrid and multi-cloud operations.

How will the collaboration evolve as Microsoft introduces new AI hardware?

VAST Data and Microsoft will co-engineer future platform requirements so that emerging Azure infrastructure - including custom silicon initiatives - remains fully compatible with VAST’s AI OS, ensuring long-term scalability and performance.

http://dlvr.it/TPN6Cr

Address

Coehoornsingel 58
Zutphen
7201AD
