Why it’s time to rethink your cloud strategy

Commissioned: The decision by the three largest U.S. public cloud providers to waive data transfer fees is a predictable response to the European Data Act’s move to eradicate contractual terms that stifle competition.

A quick recap: Google made waves in January when it dropped its data transfer fee, the money customers pay to move data off a cloud platform. Amazon Web Services and Microsoft followed suit in March. The particulars of each move vary, forcing customers to read the fine print closely.

Regardless, the moves offer customers another opportunity to rethink where they run application workloads. That rethinking, which often involves repatriating workloads to on-premises environments, has gained steam in recent years as IT has become more decentralized.

The roll-back may gain more momentum as organizations build new AI workloads, such as generative AI chatbots and other applications, and run them in house or in other locations that let them retain control over their data.

To the cloud and back

Just a decade ago, organizations pondered whether they should migrate workloads to the public cloud. Then the trend became cloud-first, and everywhere else second.

Computing trends have shifted again as organizations seek to optimize workloads.

Some organizations clawed back apps they’d lifted and shifted to the cloud after finding them difficult to run there. Others found the operational costs too steep or failed to consider performance requirements. Still others stumbled upon security and governance issues that they either hadn’t accounted for or had to reconcile to meet local compliance laws.

“Ultimately, they didn’t consider everything that was included in the cost of maintaining these systems, moving these systems and modernizing these systems in the cloud environment and they balked and hit the reset button,” said David Linthicum, a cloud analyst at SiliconANGLE.

Much ado about egress fees

Adding to organizations’ frustration with cloud software are the vendors’ egress fees. Such fees can range from 5 cents to 9 cents per gigabyte, which can add up to tens of thousands of dollars for organizations moving petabytes of data. Generally, fees vary based on where data is being transferred to and from, as well as how it is moved.
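For a rough sense of scale, the sketch below applies that per-gigabyte range to a one-petabyte transfer. The rates and the flat decimal conversion are simplifying assumptions for illustration only; real bills depend on tiering, destinations, free allowances and any waivers a provider applies.

```python
# Back-of-the-envelope egress cost estimate.
# The per-gigabyte rates below are illustrative assumptions drawn from the
# 5-9 cents/GB range cited above, not any provider's published price list.

def egress_cost(terabytes: float, rate_per_gb: float) -> float:
    """Estimated cost, in dollars, to move `terabytes` of data out of a cloud."""
    gigabytes = terabytes * 1_000  # decimal terabytes -> gigabytes
    return gigabytes * rate_per_gb

for rate in (0.05, 0.09):
    print(f"1 PB at ${rate:.2f}/GB: ${egress_cost(1_000, rate):,.0f}")
# 1 PB at $0.05/GB: $50,000
# 1 PB at $0.09/GB: $90,000
```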

Regulators fear the switching costs will keep customers locked into the platform hosting their data, thus reducing choice and hindering innovation. Customers view these fees and other surcharges as part of a broader strategy to squeeze them for fatter margins.

That strategy takes the form of technically cumbersome and siloed solutions (proprietary and costly to connect to rivals’ services), as well as steep discounts that result in customers purchasing additional software they may or may not need. Never mind that consuming more services, and thus generating even more data, makes that data more challenging and costly to move. Data gravity weighs on IT organizations’ decisions to move workloads.

In that vein, the hyperscalers’ preemptive play is designed to get ahead of Europe’s pending regulations, which commence in September 2025. Call it what you want – just don’t call it philanthropy.

The egress fee cancellation adds another consideration for IT leaders mulling a move to the cloud. Emerging technology trends, including a broadening of workload locations, are other factors.

AI and the expanding multicloud ecosystem

While public cloud software remains a $100 billion-plus market, the computing landscape has expanded, as noted earlier.

Evolving employee and customer requirements that accelerated during the pandemic have helped diversify workload allocation. Data requirements have also become more decentralized, as applications are increasingly served by on-premises systems, multiple public clouds, edge networks, colo facilities and other environments.

The proliferation of AI technologies is busting datacenter boundaries, as keeping data close to compute and storage capacity often delivers the best outcomes. No workload embodies this more than GenAI, whose large language models (LLMs) require large amounts of compute.

While it may make sense to run some GenAI workloads in public clouds, particularly for speedy proofs of concept, organizations also recognize that their corporate data is one of their key competitive differentiators. As such, organizations using their corporate IP to fuel and augment their models may opt to keep their data in house, or bring their AI to their data, to maintain control.

The on-premises approach may also offer a better hedge against the risks of shadow AI, in which employees’ unintentional gaffes can lead to data leakage that harms their brand’s reputation. Fifty-five percent of organizations cite preventing the exposure of sensitive and critical data as a top concern, according to Technalysis Research.

With application workloads becoming more distributed to maximize performance, it may make sense to build, augment, or train models in house and run the resulting applications in multiple locations. This is an acceptable option, assuming corporate governance and guardrails are respected.

Ultimately, whether organizations choose to run their GenAI workloads on premises or in multiple other locations, they must weigh the options that will afford them the best performance and control.

Companies unsure of where to begin their GenAI journey can count on Dell Technologies, part of a broad partner ecosystem, for help. Dell offers AI-optimized servers, client devices for the modern workplace and professional services to help organizations deploy GenAI in trusted, secure environments.

Learn more about Dell Generative AI Solutions.

Brought to you by Dell Technologies.
