Use Case #1: Code Generation
Use Case #2: Chat Bots and Virtual Agents
Use Case #3: Auto-generation of Visualizations and Dashboard Layouts
Use Case #4: Apps, Automation and Workflows
Generative AI Caveats
Technology Options for Generative AI
Tips for Success with Generative AI

Use Case #1: Code Generation

A core Generative AI use case is the practical application of large language models (LLMs) to generate initial code, which accelerates the overall development lifecycle. We do not view Generative AI as a full replacement for well-structured, thoughtfully written code; however, the productivity gains that come from automatically generating template code, or from maintaining living repositories of previously created code for specific use cases, expedite delivery when applied correctly.

Another practical application of text-based LLMs is converting legacy code bases into a target code base, acting as a migration accelerator. A basic example is an analytics migration scenario, such as converting Qlik Sense reporting to Power BI. A core element of this type of engagement is refactoring proprietary Qlik syntax into DAX code on the front end of the reporting. Ordinarily this would require an individual conversant in both tools. Generative AI allows all basic expressions to be converted from Qlik syntax to DAX, helping expedite delivery of the solution itself.

Use Case #2: Chat Bots and Virtual Agents

If you are toying with the idea of adding a chatbot to your site to expand your customer service options, the advent of broad-based LLMs makes the implementation and roll-out of chatbots far more accessible than in the past.
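As an illustration of the migration pattern in Use Case #1, the sketch below assembles a conversion prompt for an LLM. This is a minimal sketch; the function name, prompt wording, and example Qlik expression are illustrative assumptions, not a fixed API:

```python
def build_migration_prompt(source_expr: str,
                           source_dialect: str = "Qlik Sense",
                           target_dialect: str = "DAX") -> str:
    """Assemble a prompt asking an LLM for a one-to-one expression conversion.

    The wording here is a starting point; adding few-shot examples drawn
    from your own codebase typically improves conversion quality.
    """
    return (
        f"You are an expert in both {source_dialect} and {target_dialect}.\n"
        f"Convert the following {source_dialect} expression into an "
        f"equivalent {target_dialect} expression. Return only the code.\n\n"
        f"{source_expr}"
    )

# The returned string would be sent as the user message to whichever
# chat-completion endpoint you use (OpenAI, Azure OpenAI, etc.).
prompt = build_migration_prompt("Sum({<Year={2023}>} Sales)")
```

In practice, you would loop this over an inventory of extracted expressions and have a developer conversant in the target tool review each result before it lands in the migrated report.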
Chatbot integration within front-end analytics can add a compelling contextual dimension to existing reporting, independent of the visualizations themselves.

Cloud platforms are rapidly building “data-in” features that deploy cognitive search services and off-the-shelf LLMs against your own datasets, so the barrier to entry keeps getting lower. These chatbots can then be integrated into workflows via API endpoints or native app deployments, depending on your hyperscaler of choice. Open-source frameworks such as LangChain can be deployed in a similar manner.

Use Case #3: Auto-generation of Visualizations and Dashboard Layouts

Visualizations and chart elements can be generated from auto-prompt suggestions using a BI tool’s native AI capabilities, and the same capabilities can be applied to constructing an entire report layout. This can be a significant timesaver when creating new dashboards and when refining existing visualizations that have grown less intuitive over time.

Use Case #4: Apps, Automation and Workflows

Lightweight application interfaces (Zapier, Power Apps, Power Automate) can make API calls to GPT and other LLMs to create simple trigger/action apps. Think of situations in which you can train GPT to adopt the persona of a customer service agent, or trawl a company intranet to derive key information and return it via a form application. Simple automations can also be built, such as form email responses to inbound inquiries, or emails that use a trigger mechanism to generate an auto prompt.

Generative AI Caveats

While there is great potential in Generative AI, certain risks should be accounted for before employing it in your data strategy:

Basis of Evidence: Generative AI rests on LLMs and neural networks, models whose outputs emerge from enormous numbers of learned parameters rather than from explicit rules.
This poses a distinct challenge in articulating the rationale behind why specific code, design choices, or guidance were adopted as part of an overall process.

Security, IP, and PII Risks to Data: Ease of use is a core selling point of Generative AI. It can also present a challenge, in that sensitive, proprietary, or personally identifiable information can end up in a training dataset without proper guardrails in place.

Accuracy: Publicly accessible LLMs such as ChatGPT are backed by the open internet. In a private organizational setting, the same LLMs are only as accurate as the training data or metadata provided to them. Accuracy of result sets can suffer when low-quality data goes into the models.

Cost: While the barrier to entry has never been lower, the potential for cost overruns has never been higher. Cognitive search via an LLM is a computationally intensive operation. Be highly aware of the cost implications of deploying and running LLMs before moving to a production environment.

Technology Options for Generative AI

Most mainstream analytics tools offer Generative AI capabilities in different forms. Here is a list of tools you could harness for Generative AI and their typical applications.

Microsoft

Azure OpenAI Service enables the use of large-scale generative AI models, with pre-trained models as well as options for custom models, and both token- and image-based pricing.

Copilot can generate visualizations, insights within reporting, DAX expressions, and narrative summaries within Power BI.

Qlik

Qlik offers a suite of OpenAI connectors. The OpenAI Analytics Connector allows for generative content within front-end Qlik Sense apps. The OpenAI Connector for Application Automation lets developers enhance their workflows when creating expressions, commands, or scripts.

Google

Vertex AI allows models to be customized and embedded within applications.
Models can be tuned using Generative AI Studio on Vertex AI.

Generative AI App Builder is an entry-level builder that lets developers build and deploy chatbots and search applications.

AWS

Amazon Bedrock is a fully managed AWS service that makes third-party LLMs, as well as Amazon’s own base models, available for the development and deployment of Generative AI applications.

Tableau

Tableau Pulse is powered by Tableau GPT, which is built on Einstein GPT. Tableau Pulse provides automated analytics and the ability to surface insights in natural language and visual formats.

Sigma

Sigma AI represents a broad suite of AI-powered features built into the platform. These include Input Tables AI, which allows users to construct AI-generated input tables; Natural Language Workbooks, which turns natural-language text into workbook elements within Sigma; and Helpbot, a chatbot that assists users by indexing all help and community articles within Sigma.

LangChain

LangChain is an open-source framework that lets developers connect large language models to external components to create LLM-based applications. LangChain enables applications to be built from LLMs offered by providers such as OpenAI and Hugging Face, along with various data sources.

Tips for Success with Generative AI

Understand the mechanics of open Generative AI platforms.

Familiarize yourself with how open Generative AI platforms function. Get a handle on structuring prompts and on the level of detail needed to generate the best possible results. The barrier to entry for experimenting with something like ChatGPT is incredibly low.

A tip: if necessary, adjust the “temperature” of your model to alter its responses; lower values produce more deterministic, consistent output, while higher values produce more varied, creative output.

Set master prompting as a standard.

Tone is extremely important in how we use generative AI.
If you’re leveraging Generative AI to create standard content across an organization, it is crucial to set master prompting as a standard. This entails creating two declarative statements: the first describing who your organization is, and the second describing the tone you want the LLM to adopt in its communication. With master prompting standards in place, generated text will adopt a consistent structure and feel, reducing the likelihood of company-wide communication speaking in differing voices.

Understand the cost structure.

There is a stark cost difference between using Generative AI for one-off, individual use cases and scaling it out across an organization. Cost optimization and strategy need to be firm so as not to incur significant overruns when first rolling a solution out to production. It’s recommended to keep a strong handle on cost and usage patterns during development, and to keep development access relatively restricted at first to regulate R&D within your organization.

Have a firm data strategy in place.

You need to understand where your data lives and how it’s being maintained and structured before rolling out AI. As part of your data strategy, you also need strong data governance protocols.

Plan out a strong use case.

Without firm use cases in mind, Generative AI is just another novelty. Think of the core processes within your organization and how Generative AI could be leveraged to automate their workflows.

Make data privacy paramount.

Ensure that only the right data, under the right protocols, is making it into your LLMs. There are simple steps to enact, such as turning off chat history and training in ChatGPT.
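The “right data under the right protocols” principle can also be enforced mechanically before a prompt ever leaves your environment. Below is a minimal sketch that masks email addresses and US-style phone numbers; the patterns and placeholder tokens are illustrative assumptions, and a production deployment should rely on a dedicated PII-detection service rather than hand-rolled regexes:

```python
import re

# Illustrative patterns only: real PII detection needs far broader coverage
# (names, addresses, account numbers, locale-specific phone formats, etc.).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def scrub(prompt: str) -> str:
    """Mask obvious PII before the text is sent to an external LLM."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

clean = scrub("Contact jane.doe@example.com or 555-867-5309 about the order.")
# The masked string is what gets forwarded to the model.
```

A scrubbing step like this sits naturally in the same middleware layer that logs usage and enforces access controls, so every prompt passes through it regardless of which application generated it.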
When expanding development into the major cloud platform providers, it’s crucial to know things such as data retention policies and where your data is ultimately stored in the process of turning prompts into responses.

Continue to enrich your data and supporting metadata.

This is a prerequisite for impactful use of LLMs within a closed, corporate setting. Publicly available LLMs such as ChatGPT, Bard, and LLaMA are engaging precisely because their transformer models consume large volumes of information derived from the open internet. The richness of your corporate data, and the volume of it available for training, will be critical to achieving something similarly engaging.

Choose between general and domain-specific LLMs.

Which approach fits your needs best? Domain-specific LLMs are trained on data specific to vertical use cases (e.g., BloombergGPT), whereas general-use LLMs may not understand the industry-specific terminology in your prompts.

Understand your cloud platform.

Cloud platforms (AWS, Microsoft, Google) are in a race to bring accessible LLM development and deployment to the masses. Each platform brings a slightly different flavor to how document storage, vector databases, embedding models, LLMs, and cognitive search come together to generate responses to your prompts. It’s crucial to have a firm handle on the services and resources necessary to deploy chat services and Generative AI apps. Also of paramount importance is understanding your cloud platform’s usage-based cost structure when deploying these solutions to a wider audience.

Consider the relationship between Generative AI and BI.

LLMs undeniably present a fundamental shift in how we can interpret and access data. There is a future in which traditional dashboarding and reporting become obsolete, as structured prompts enable users to surface insights with a central data platform powering the data collection and aggregation behind it.
There is also a future of peaceful co-existence, in which traditional BI tools leverage Generative AI to form hybrid analytics. Regardless of how it takes shape, end users and stakeholders should keep in mind how Generative AI can augment, or in certain cases replace, their overall BI strategy and landscape moving forward.
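The cognitive search pattern mentioned under “Understand your cloud platform” (store documents as embeddings, rank them by similarity to the query, then ground the LLM’s answer in the best match) can be illustrated with a minimal cosine-similarity ranking. The three-dimensional vectors and document names below are toy assumptions; a real system would obtain embeddings from a model offered by OpenAI, Vertex AI, or Amazon Bedrock and store them in a vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length, non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three help-center documents.
documents = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "warranty terms": [0.7, 0.2, 0.3],
}
query = [0.85, 0.15, 0.05]  # embedding of the user's question

# Retrieve the closest document; its text would be injected into the
# prompt so the LLM answers from your data rather than from memory.
best = max(documents, key=lambda name: cosine(query, documents[name]))
```

The retrieved document’s text, not the vector itself, is what gets placed in the prompt, which is why the richness of the underlying corporate data matters as much as the choice of model.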