Generative AI: What it takes to turn hyperbole into a transformational paradigm

1.    Evolutionary shift of Artificial Intelligence toward Generative AI

Rapid advancements in artificial intelligence (AI) and machine learning (ML) techniques – e.g., natural language processing (NLP) and large language model (LLM) capabilities – have intensified the influence of technology on the functioning of business, government, and society in recent times. As part of this evolutionary shift in AI/ML, generative AI capabilities driven by large language models have emerged as a subject of deep interest for the exploration of innovation use cases and their adoption by business firms. These models are pre-trained on large corpora of data spanning diverse domains and are designed to adopt unsupervised or semi-supervised learning approaches to progressively enhance the accuracy of their outcomes.

The emergence of generative AI opens new frontiers of disruptive innovation across domains – reshaping business models and enabling new forms of intelligent applications. Market research indicates that the global generative AI market is growing at a CAGR of 32.2% and will reach $53.9 billion by 2028. At the same time, growing concerns about the lack of authenticity and low explainability of outcomes, along with surrounding ethical issues, require the setting of basic guardrails.

2.    Promise and Potential

The massive buzz created by the LLM-powered ChatGPT after its release in November 2022 unexpectedly brought generative AI to the centerstage of the technology revolution. Wide frenzy aside, ChatGPT – as an example of a large multimodal model – brings a wide range of creative potential to reinvent business through perceptive use cases aimed at innovation and productivity. Human-like conversation, generation or editing of text/image/video content, summarization of conversations or content, and technical writing and authoring are basic examples of the creative abilities of generative AI. Amid intensified interest, the frenetic launch of diverse off-the-shelf pre-trained LLMs and platforms by technology giants sets a new trajectory of AI adoption in the industry, enabling democratized access to generative AI for business firms.

While the promise of generative AI driven transformation across a wide spectrum of domains has caught the imagination of people, growing concerns about potential risks and ethical issues underline intrinsic vulnerabilities – if left unaddressed. Lack of factual accuracy, hallucinated or imaginary output, low emotional intelligence and empathy, as well as disturbing or confrontational responses are real-world hazards to be avoided in all business scenarios. All the while, concerns regarding the devaluation of human potential and job losses, dark fantasies, discrimination, and bias are serious ethical binds arising from the uncontrolled functioning of generative AI. Also, the massive energy consumption needed to support the huge computing power involved in training LLMs increases carbon footprints and hampers firms' intent of turning carbon-neutral in the coming years.

3.    Dilemma of financial firms

Beyond early curiosity and excitement about the novelty of generative AI, a large swathe of financial firms is still struggling to realistically decipher the big-picture context – in terms of its functional relevance, application areas and use cases, costs and benefits, as well as the legal and regulatory risks involved in business adoption. There are hardly any reliable answers as to what it takes to quantify and qualify the unclear spectrum of opportunities and costs, or to define coherent next steps toward enterprise-level adoption for building customized solutions. Also, concerns regarding data privacy, plagiarism, copyright infringement, and regulatory ambiguity further undermine a massive early-adoption scenario in the industry.

Amid intensified hype, two contrasting approaches to the adoption of generative AI are emerging in the financial services landscape. The divergent positions taken by leading financial firms broadly signify the dilemma faced by the industry about its future direction. At one end, firms have embarked on full-blown strategic initiatives focused on generative AI – i.e., beyond experimentation, adoption in specific application areas focused on select use cases to augment productivity and intelligent insights through bespoke solutions. At the other end, firms are extremely cautious and staying away from any notion of adoption, at least in the near term, considering risk control and regulatory compliance concerns – including third-party software issues, the sensitivity of confidential client information, and breaches of data privacy. In between these two extremes, firms' early intention to embark on exploration and experimentation requires extensive evaluation. Guided by the guardrails of Responsible AI principles, the business and technical viability of use cases and, importantly, a reliable architectural construct for LLM integration with core business processes become fundamental requisites.

4.    Enterprise-scale adoption: Key imperatives

While it is too early to predict the future direction and course of the industry's generative AI adoption path, the FOMO (fear of missing out) syndrome is expected to dominate the thinking of financial firms, driving investment in experimentation on a set of narrowly focused use cases. Importantly, the integration of generative AI based capabilities into the core business processes of financial firms requires an extensive assessment of foundational factors. Considering the wide divergence in use-case focus, barely comparable features, and the complexity of benchmarking available LLMs, evaluating their capabilities and commercial and deployment terms becomes an onerous exercise. It entails a detailed evaluation of the suitability of use cases, cost-benefit analysis of innovation features, technology viability, and human impact, as well as legal, regulatory, and reputational risk nuances, besides their control and mitigation measures.

From a technology standpoint, enterprise-scale adoption of generative AI critically depends on technology and data ecosystem factors – including a business firm's data science practices and platforms. Apart from the context of the business problem to be solved in diverse application areas, the maturity level of AI adoption, the readiness of modelling platforms and tools, and the scaling of data engineering pipelines are key requisites for moving forward on a generative AI focused value exploration path. Sizable augmentation of the enterprise AI platform – enabling access to various open-source or proprietary models, new-generation modelling tools and workflows, and high-performance computing infrastructure – constitutes the basic first steps.

4.1  Finding the right model and its contextualization

Aligning to the centricity of business use cases, firms can choose a model from a range of available LLMs – open-source models (e.g., Meta's OPT and LLaMA, EleutherAI's GPT-NeoX, BigScience's BLOOM on Hugging Face, Databricks' Dolly 2.0, Stability AI's StableLM) or closed proprietary models (e.g., ChatGPT from OpenAI, Google's PaLM, Jurassic from AI21 Labs, and models from Cohere and LightOn). Typically, open-source models offer a rich ecosystem of developers, with the potential for swift iteration cycles as well as enhanced techniques for inference optimization and scaling. Another critical consideration involves selecting between a large general-purpose model catering to wider business domains and a relatively lightweight domain-specific model (e.g., BloombergGPT for finance). Given the multiplicity of pre-trained models available with diverse use-case coverage, building its own model from scratch is the least practical option for a firm. Moreover, that approach is fraught with the risk of producing less reliable outcomes even after incurring sizable costs, handling the complexities of intense modelling rigor, and provisioning the huge domain data corpus required for pre-training.
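
As an illustration, the sketch below shows how a firm might prototype with an off-the-shelf open-source checkpoint. It assumes the Hugging Face transformers library; the model name and prompt are illustrative placeholders, not recommendations.

```python
# Minimal prototyping sketch with an open-source checkpoint via Hugging Face
# transformers; the model choice and prompt are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "databricks/dolly-v2-3b"  # any permissively licensed checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Summarize the key risks of unsecured consumer lending:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```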

4.2  Cost of fine-tuning and retraining models

The flexibility of transfer learning – fine-tuning available models in a specific domain context with a limited quantum of user data, in the form of few-shot or zero-shot learning – becomes a critical factor for swifter deployment aligned with business needs. Also, the evolution of data science and computing technology brings an element of dynamism to models and requires regular upgrades and iteration. In effect, it adds new learning and retraining needs for an implemented model. Even if inference API or service-call charges for training data and model usage typically appear to be tiny fractions of a dollar (say, $0.000N per 1K tokens), the cumulative cost of fine-tuning or retraining a model can be astronomically high. As realized during the cloud journey, seemingly nominal charges applied by hyperscalers for data egress and ingress under different deployment models routinely surpass the projected OPEX outgo. Given the unpredictability of LLM upgrades across the lifecycle and the consequent learning or retraining needs, a realistic projection of these cost heads is hard to make.
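
A back-of-the-envelope calculation shows how quickly per-token charges compound at enterprise volumes. All figures below are hypothetical placeholders, not any vendor's published pricing.

```python
# Hypothetical token-cost estimate; the rate and volumes are illustrative only.
price_per_1k_tokens = 0.0004   # USD per 1K tokens (placeholder rate)
tokens_per_request = 1_500     # assumed prompt + completion size
requests_per_day = 200_000     # assumed enterprise workload

daily_cost = (tokens_per_request / 1_000) * price_per_1k_tokens * requests_per_day
print(f"Daily: ${daily_cost:,.2f}, Annual: ${daily_cost * 365:,.2f}")
# -> Daily: $120.00, Annual: $43,800.00 - before any fine-tuning or retraining runs
```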

4.3  Technology and computing infrastructure requirements

The complexity of LLMs demands significant computational power for swift training and inference. Training models with billions or trillions of parameters and processing massive data corpora requires specialized hardware, memory, and compute resources in a parallel or distributed set-up. To take an example, one analysis indicates that training a 65-billion-parameter LLaMA model on a 1.4-trillion-token dataset, at 380 tokens/sec/GPU across 2,048 A100 GPUs with 80 GB of RAM each, takes approximately 21 days, entailing GPU and infrastructure costs of around USD 4.05 million. In scenarios involving multiple or iterative training runs over longer durations, computing costs can jump to an exorbitantly prohibitive figure. Thus, provisioning cost-efficient computing infrastructure that supports large-scale processing at optimal performance becomes a vital requirement.
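
The quoted duration can be sanity-checked directly from the stated throughput figures:

```python
# Sanity check of the ~21-day training estimate using the figures above.
tokens = 1.4e12          # dataset size in tokens
throughput = 380         # tokens/sec/GPU
gpus = 2048              # A100 80GB GPUs

seconds = tokens / (throughput * gpus)
print(f"{seconds / 86_400:.1f} days")  # -> 20.8 days, consistent with ~21 days
```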

4.4  Modelling tools and workflows

Access to a diverse set of pretrained models, advanced modelling tools, and accelerators is a foundational requirement for exploring the potential of innovation use cases across the design, development, testing, and deployment stages of customized solutions. Apart from enabling computing and data engineering infrastructure, generative AI platforms must support a comprehensive range of services across model training, fine-tuning, inference, and deployment, backed by an integrated AI workflow. Helpfully, the leading hyperscalers have reformulated their AI-centric platforms and services to provide high-performance foundation models, computing hardware, and software frameworks that capture the growing market interest in LLMs. A typical service portfolio comprises pretrained models, built-in solution models and services, workflow tools and capabilities, as well as APIs and frameworks for building applications at scale.
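
In practice, much of this portfolio is consumed through hosted inference endpoints. The sketch below illustrates the general shape of such a call; the URL, payload fields, and auth header are hypothetical stand-ins, not any specific provider's API.

```python
# Illustrative call to a hosted text-generation endpoint; the endpoint URL,
# payload schema, and credentials are hypothetical placeholders.
import requests

response = requests.post(
    "https://api.example-cloud.com/v1/generate",  # hypothetical endpoint
    headers={"Authorization": "Bearer <API_KEY>"},
    json={
        "model": "foundation-model-xl",           # hypothetical model id
        "prompt": "Draft a client onboarding checklist:",
        "max_tokens": 150,
    },
    timeout=30,
)
print(response.json())
```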

4.5  Risk control and safeguards

In recent years, regulatory emphasis (and rulemaking in progress) across major jurisdictions has focused on comprehensive standards for Responsible AI frameworks of practices and risk governance measures to ensure fairness, explainability, and trust in AI systems. Integration of AI models into core business processes entails enhanced risk controls and safeguards for data privacy and security concerns, as well as to ensure the trust and reliability of outcomes. Effective handling of risk control and compliance issues becomes vital on two basic counts – first, the typical hybrid cloud model for running enterprise applications and workloads remains susceptible to unknown vulnerabilities, despite an array of security protection measures. Second, the usage of large data corpora involving client-confidential or business-sensitive information poses new levels of risk from unintended data exposure. Also, distorted emotion or behavior patterns imbued from few-shot or zero-shot training data carry an indeterminate level of bias into outcomes. Certainly, ensuring trust and reliability has significant business ramifications and needs holistic oversight and control across the model's lifecycle, beyond box-ticking compliance with Responsible AI guidance.
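
A simple operational safeguard is to redact obvious client identifiers before any prompt leaves the firm's boundary. The sketch below is a minimal illustration of that idea, assuming regex-based pattern matching; production systems would need far more robust PII detection and policy enforcement.

```python
# Minimal pre-processing guardrail: redact obvious identifiers from prompts
# before they are sent to an external model. Patterns are illustrative only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with bracketed type labels."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Client john.doe@bank.com, account 123456789012, SSN 123-45-6789"))
# -> Client [EMAIL], account [ACCOUNT], SSN [SSN]
```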

5.    Way forward: Gearing to traverse an uncertain path

The wide transformational influence of generative AI across business functions is bound to change the way financial firms have been striving to reinvent themselves as data-led organizations in recent years. While waiting for more discernible signs of technological and regulatory evolution, the immediate spur for firms points toward cautious exploratory ventures focused on a few narrow use cases to build a potent competitive edge. Aiming to reinvent business and harness productivity and cost-effectiveness advantages, use cases in internal business processes appear to be the first set of candidates for generative AI application. Starting with such a cautious approach will keep business, regulatory, and legal risks to a minimum.

In the prevailing realm of unpredictability along the AI evolution curve, enterprise-wide adoption with a set of killer apps appears a long path away, owing to various real-world factors – data privacy concerns, the evolution of a coherent legal and regulatory model, model training and retraining needs, and computing platform and infrastructure costs and constraints. As use cases and adoption stories advance along exploratory paths, experimentation journeys are likely to span the expanse between minimal investment to get counted in the vanity race at one extreme and curiosity-filled passivity at the other. Importantly, the evolution of AI supercomputing on the computing side, better clarity on copyright aspects on the legal side, and deeper embedding of the Responsible AI core into models could completely reframe the generative AI paradigm, or its futuristic avatars, in the coming years.
