A Principled Technologies report: Hands-on testing. Real-world results.

Move more data in real time with up to 73% greater Kafka throughput. Deliver faster streaming responses with up to 78% lower Kafka latency. Speed database transactions with 80% more OLTP performance.

Accelerate your containerized workloads with VMware vSphere Kubernetes Service

On Kafka and OLTP workloads, VMware vSphere Kubernetes Service outperformed an OpenShift 4.19 solution in our testing

Enterprises have many options for tailoring Kubernetes deployments to their performance needs, operational models, and existing infrastructure. Two main approaches, virtualized and bare metal, differ considerably. Virtualized Kubernetes environments can offer resource isolation, simple management, and shared infrastructure across diverse workloads. Bare metal Kubernetes deployments can minimize abstraction layers and provide direct access to hardware resources without virtualization overhead. Understanding the performance impact of these approaches for workloads such as modern streaming data pipelines or traditional online transaction processing can help you choose the Kubernetes platform that more closely aligns with your operational goals.

We set out to compare the performance of two popular Kubernetes platforms: VMware® vSphere® Kubernetes Service (VKS) with VMware Cloud Foundation (VCF) and Red Hat® OpenShift® Container Platform (OCP) on bare metal. VKS represents a VM-based Kubernetes approach, while Red Hat OCP on bare metal runs Kubernetes directly on hardware. To capture performance data for both modern and traditional containerized workloads, we used a Kafka workload and a HammerDB TPROC-C workload with PostgreSQL. On both workloads, the VKS solution delivered significantly better performance than the Red Hat OCP bare metal solution. Our report explains our findings and how your choice of Kubernetes platform could impact your business.

What we found: Kafka

Scale real-time data streaming with less lag

Many modern organizations rely on the open-source, distributed event-streaming platform Apache Kafka for ingesting and processing streaming data tied to real-time decision-making, digital services, and more.1 As the volume of real-time streaming data increases, sustaining high throughput and low latency becomes essential for keeping Kafka applications responsive and reliable to meet critical service-level agreements (SLAs) and keep data moving for users. Even small delays can impact application behavior, user experience, and downstream systems that depend on immediate data availability.

How we tested

To understand how both Kubernetes platforms can handle real-world Kafka demands, we deployed twelve Kafka pods, three per Kubernetes worker node, to both Kubernetes platforms. On the VCF testbed, we deployed a single Kubernetes worker node per physical host. Each pod contained a Kafka application and necessary resources. We also ran client pods on the servers; these acted as producers, generating and sending records.

We started with a single producer writing to a single Kafka topic (a named stream of records) and ran the producer workload. Then we scaled up the number of topics, with each topic having a single producer, and ran the workload again at each level (two, four, six, and eight topic/producer pairs).

Throughput

For enterprises that depend on rapid ingestion, such as those in retail, logistics, and fintech, higher throughput can mean systems process more data in less time. This can help reduce backlogs, keep dashboards up to date, and ensure that downstream applications receive fresh data as quickly as possible.

With one topic, the Kubernetes solutions supported equivalent throughput, showing that VKS can match bare-metal performance even at low levels of contention and with little to no overhead from vSphere virtualization. As we increased the number of topics, VKS delivered higher throughput than OCP. Figure 1 shows all the throughput data from our Kafka producer testing.

Handle up to 73% more streaming throughput. Kafka producer workload throughput results in MB per second. Higher is better. At 1 topic, the VKS solution processed 221.42 MB/sec, and the OCP solution processed 221.43 MB/sec. At 2 topics, the VKS solution processed 436.56 MB/sec, and the OCP solution processed 420.10 MB/sec. At 4 topics, the VKS solution processed 845.04 MB/sec, and the OCP solution processed 723.01 MB/sec. At 6 topics, the VKS solution processed 1,202.66 MB/sec, and the OCP solution processed 870.81 MB/sec. At 8 topics, the VKS solution processed 1,450.24 MB/sec and the OCP solution processed 833.72 MB/sec.
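The percentage advantages implied by Figure 1 can be checked with a short script. The throughput values below are copied directly from the figure; the calculation itself is simple arithmetic.

```python
# Throughput (MB/sec) from Figure 1, keyed by topic count: (VKS, OCP).
throughput = {
    1: (221.42, 221.43),
    2: (436.56, 420.10),
    4: (845.04, 723.01),
    6: (1202.66, 870.81),
    8: (1450.24, 833.72),
}

# Percent throughput advantage of VKS over OCP at each topic count.
gains = {t: (vks / ocp - 1) * 100 for t, (vks, ocp) in throughput.items()}
for topics, gain in gains.items():
    print(f"{topics} topic(s): VKS delivered {gain:+.1f}% vs. OCP")

# The headline "up to 73%" comes from the 8-topic run.
print(f"Peak advantage: {max(gains.values()):.1f}%")
```

Note that at one topic the two solutions are effectively tied, and the VKS advantage grows with topic count, matching the report's narrative about contention.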
Kafka producer workload throughput results from the VKS and OCP solutions. Source: PT.

Latency

Low and consistent latency is central to the success of streaming applications. Faster response times help ensure that fraud detection engines remain accurate, supply chain systems stay synchronized, and user-facing applications provide timely updates. In our testing, the VKS solution supported lower latency at every level (see Figure 2).

Not only did the VKS solution support lower latency, but it also maintained that advantage while pushing a larger throughput pipeline with two additional topics (22.51 ms for the VKS solution at six topics vs. 39.00 ms for the OCP solution at four topics). Though latency increased with load, the VKS solution continued to push more data than the OCP solution.

Support up to 78% lower latency. Kafka producer workload average latency results in ms. Lower is better. At 1 topic, the VKS solution had a 1.84ms latency, and the OCP solution had a 1.99ms latency. At 2 topics, the VKS solution had a 2.47ms latency, and the OCP solution had a 6.64ms latency. At 4 topics, the VKS solution had an 8.24ms latency, and the OCP solution had a 39.00ms latency. At 6 topics, the VKS solution had a 22.51ms latency, and the OCP solution had a 79.62ms latency. At 8 topics, the VKS solution had a 106.06ms latency, and the OCP solution had a 156.99ms latency.
Kafka producer workload latency results from the VKS and OCP solutions. Source: PT.

Meeting throughput and latency targets

Every team sets its own latency thresholds, which define the limits of acceptable performance, based on use case and preference. For our testing, we set a threshold of 50 milliseconds, a publisher latency that could introduce noticeable delays in end-to-end real-time streaming.2 While still low, latency above this level could indicate a bottleneck in the system, which could affect transaction processing, event sequencing, and other time-sensitive streaming operations.

Based on this threshold, the VKS solution delivered acceptable performance all the way up through 6 topics, while the OCP solution could support only 4 topics with latency under 50ms, meaning that the VKS solution could support 50% more topics with acceptable latency.
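Using the average latencies from Figure 2 (values copied from the figure), a quick check confirms which topic counts stay under the 50 ms threshold for each solution, and where the "up to 78% lower latency" figure comes from.

```python
# Average producer latency (ms) from Figure 2, keyed by topic count: (VKS, OCP).
latency = {
    1: (1.84, 1.99),
    2: (2.47, 6.64),
    4: (8.24, 39.00),
    6: (22.51, 79.62),
    8: (106.06, 156.99),
}

THRESHOLD_MS = 50  # the acceptable-latency ceiling used in this report

# Highest topic count each solution supports while staying under the threshold.
vks_max = max(t for t, (vks, _) in latency.items() if vks < THRESHOLD_MS)
ocp_max = max(t for t, (_, ocp) in latency.items() if ocp < THRESHOLD_MS)
print(f"VKS stays under {THRESHOLD_MS} ms through {vks_max} topics")
print(f"OCP stays under {THRESHOLD_MS} ms through {ocp_max} topics")

# Percent latency reduction of VKS relative to OCP at each level.
reductions = {t: (1 - vks / ocp) * 100 for t, (vks, ocp) in latency.items()}
print(f"Largest reduction: {max(reductions.values()):.1f}% (at 4 topics)")
```

Six topics versus four is the 50% difference in supportable topics that the report cites, and the largest per-level reduction (at four topics) rounds down to the headline 78%.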

At 8 topics, both the VKS solution and the OCP solution surpassed the 50ms latency threshold we set. For organizations with a higher latency threshold, however, it may be valuable to note that the VKS solution delivered significantly higher throughput and lower latency than its competitor.

What we found: PostgreSQL

Support faster transaction processing

SQL databases form the operational backbone of organizations of all sizes, from mom-and-pop retail stores to large, multinational enterprises. Because many of these databases are mission-critical, maintaining high and reliable performance is essential.

For our testing, we used PostgreSQL, an open-source relational database used for “many web, mobile, geospatial, and analytics applications.”3 For real-world relevance, we used the HammerDB TPROC-C workload, which simulates five common types of online transaction processing (OLTP) transactions.

The VKS solution provided dramatically stronger performance than the OCP solution (see Figure 3). With VKS, the system supported 80 percent more new orders per minute (NOPM) and significantly lower latency. Note that to achieve the best possible results for each solution, we tuned the number of virtual users and present each solution's highest-NOPM results.

When your database can process more transactions in the same amount of time, your customers and staff get a more responsive experience, which is critical for engagement and productivity.

Support 80% stronger PostgreSQL performance. HammerDB TPROC-C results in NOPM. Higher is better. The VKS solution supported 112,947 NOPM, and the OCP solution supported 62,459 NOPM.
HammerDB TPROC-C results from the VKS and OCP solutions using a PostgreSQL database. Source: PT.
Support 72% lower latency for PostgreSQL workloads. HammerDB TPROC-C latency results in ms. Lower is better. The VKS solution had an 18.33ms latency, and the OCP solution had a 66.57ms latency.
HammerDB TPROC-C latency results from the VKS and OCP solutions using a PostgreSQL database. We took the weighted average of the six latency measurements that the benchmark provided. Source: PT.
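The OLTP percentages follow directly from the NOPM and latency figures reported above; a short script makes the arithmetic explicit.

```python
# HammerDB TPROC-C results reported above: (VKS, OCP).
nopm = (112_947, 62_459)      # new orders per minute; higher is better
latency_ms = (18.33, 66.57)   # weighted-average latency in ms; lower is better

# Percent NOPM advantage and percent latency reduction for VKS vs. OCP.
nopm_gain = (nopm[0] / nopm[1] - 1) * 100
latency_cut = (1 - latency_ms[0] / latency_ms[1]) * 100

# Rounding down gives the report's "80 percent more NOPM"
# and "72% lower latency" headline figures.
print(f"VKS NOPM advantage: {nopm_gain:.1f}%")
print(f"VKS latency reduction: {latency_cut:.1f}%")
```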

Conclusion

Kubernetes powers many real-time services, streaming analytics, and mission-critical databases. These workloads demand high throughput, low latency, and the ability to scale efficiently as data volumes grow. In our testing, a VMware VKS virtualized Kubernetes deployment delivered stronger performance than a Red Hat OCP bare metal environment across both Kafka producer and OLTP workloads.

VKS sustained higher throughput for Kafka producer workloads, including up to 73 percent more throughput and up to 78 percent lower latency, and supported more topics before exceeding the potentially problematic 50ms latency threshold.4 In OLTP testing with PostgreSQL, VKS delivered 80 percent more NOPM than OCP on bare metal. Together, these results indicate that deploying a virtualized Kubernetes environment with VKS can deliver faster modern application responses, improved real-time data processing, and more efficient use of infrastructure.

  1. Apache, “Powered By,” accessed December 2, 2025, https://kafka.apache.org/powered-by.
  2. Penghui Li and David Kjerrumgaard, “Latency Numbers Every Data Streaming Engineer Should Know,” accessed December 3, 2025, https://streamnative.io/blog/latency-numbers-every-data-streaming-engineer-should-know.
  3. PostgreSQL Global Development Group, “About - What is PostgreSQL?” accessed December 5, 2025, https://www.postgresql.org/about/.
  4. Penghui Li and David Kjerrumgaard, “Latency Numbers Every Data Streaming Engineer Should Know,” accessed December 3, 2025, https://streamnative.io/blog/latency-numbers-every-data-streaming-engineer-should-know.

This project was commissioned by Broadcom.

January 2026

Principled Technologies is a registered trademark of Principled Technologies, Inc.

All other product names are the trademarks of their respective owners.
