In this blog, we break down six practical ways your peers are using Generative AI to get more value from their data. We also cover key considerations, best practices, and technology options for integrating Generative AI effectively.

If you know Generative AI has potential, but you are struggling to figure out real-world applications in your work, here are six practical use cases to consider — plus key factors to keep in mind before you get started.


A flowchart titled “AI’s Role in the Data Analytics Lifecycle” outlines six stages where AI enhances data processes: Data Collection & Integration, Data Governance & Quality, Data Processing & Transformation, Exploration & Insights, Visualization & Reporting, and Automation & Workflow Orchestration. Each stage is accompanied by a brief description of AI’s capabilities, such as automated data mapping, anomaly detection, natural language queries, AI-generated dashboards, and workflow optimization.

Generative AI plays a crucial role throughout the data analytics lifecycle, from data integration and governance to visualization and workflow automation.

Use Case #1 Code Generation: How Generative AI Accelerates Development

A core Generative AI use case in data analytics is using large language models (LLMs) to generate initial code, which accelerates the overall development lifecycle. Generative AI doesn’t replace well-structured, thoughtfully written code, but when used correctly, it helps teams work faster by generating template code or maintaining reusable repositories for common use cases.

Another practical application is converting legacy code to modern platforms, making migrations more efficient. For example, when moving from Qlik Sense reporting to Power BI, Generative AI can refactor proprietary Qlik syntax into DAX, automating the conversion of basic expressions and reducing the manual effort typically required.
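To make the migration workflow concrete, here is a minimal sketch of the first step: a prompt template that asks an LLM to refactor a Qlik Sense expression into DAX. The template wording and function name are illustrative assumptions; the actual model call is left to your provider’s API.

```python
# Sketch: building a Qlik-to-DAX conversion prompt for an LLM.
# The model call itself is not shown; in practice you would send
# `prompt` to your provider's chat-completion endpoint.

QLIK_TO_DAX_TEMPLATE = """You are a BI migration assistant.
Convert the following Qlik Sense expression to an equivalent DAX measure.
Return only the DAX code, with a one-line comment explaining any
assumptions about the data model.

Qlik expression:
{expression}
"""

def build_conversion_prompt(qlik_expression: str) -> str:
    """Return a prompt that asks an LLM to refactor Qlik syntax into DAX."""
    return QLIK_TO_DAX_TEMPLATE.format(expression=qlik_expression.strip())

prompt = build_conversion_prompt("Sum({<Year={2024}>} Sales)")
```

Keeping the template in one place makes it easy to version and refine as you learn which instructions produce the most reliable conversions.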

Use Case #2 Chatbots and Virtual Agents: How to Enhance Interactions with AI

If you’re considering adding a chatbot to your site to expand your customer service options, broad-based LLMs make the implementation and roll-out of chatbots far more accessible than in the past. But chatbots aren’t just for answering customer inquiries — a chatbot can be an internal tool that helps business users understand and explore their data more effectively.

Integrated into analytics platforms, these AI-powered chatbots can summarize dashboards, explain key metrics, and answer follow-up questions about the data. Unlike static reports, they allow users to query data conversationally, making it easier to extract insights without manually navigating dashboards or writing queries.

Cloud-based platforms like Databricks and Snowflake are rapidly building “data-in” features that deploy cognitive search services and off-the-shelf LLMs against your own datasets, so the barrier to entry for deploying an LLM-based chatbot keeps getting lower. You can integrate these chatbots into workflows via API endpoints or as native applications, depending on your cloud provider. If you prefer an open-source approach, frameworks such as LangChain offer another way to build AI-powered chatbots.
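To show the shape of such a chatbot, here is a minimal sketch: a naive keyword-overlap retrieval step grounds the model’s answer in internal metric documentation. The metric docs and function names are illustrative; a production system would use a vector store and a framework like LangChain.

```python
# Sketch: a minimal retrieval step for an internal data chatbot.
# "Retrieval" here is naive keyword overlap, just to show the flow:
# find relevant documentation, then ground the LLM's answer in it.

METRIC_DOCS = {  # illustrative internal documentation
    "churn_rate": "Churn rate = customers lost in period / customers at start.",
    "arr": "ARR = monthly recurring revenue at period end * 12.",
}

def retrieve_context(question: str) -> str:
    """Pick the metric doc whose terms best overlap the question."""
    words = set(question.lower().split())
    best = max(
        METRIC_DOCS,
        key=lambda k: len(words & set(METRIC_DOCS[k].lower().split())),
    )
    return METRIC_DOCS[best]

def build_chat_prompt(question: str) -> str:
    """Ground the LLM's answer in retrieved internal documentation."""
    context = retrieve_context(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The key design point is the same regardless of tooling: the model answers from your documentation, not from its general training data.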

Use Case #3 Data Governance: Using AI to Automate Documentation and Improve Trust

Major platforms like Databricks now integrate Generative AI into their governance tooling, automating metadata generation, improving data documentation, and tracking lineage more intelligently. These capabilities streamline traditionally time-consuming data governance tasks, helping you maintain robust data practices without sacrificing agility.

Beyond basic documentation, Generative AI helps document processes and improve quality assurance. It analyzes existing workflows, generates comprehensive documentation, and identifies areas for improvement. This is especially valuable when you’re building or updating data governance frameworks, ensuring consistency and completeness across your data ecosystem.

AI can also improve user trust — when someone questions a metric or analysis, Generative AI can quickly reference your documented data governance framework to provide clear, contextual explanations of data lineage, calculations, and business rules.
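One way this documentation work starts in practice is by generating first-draft data-dictionary entries from schema metadata, which an LLM (or a human reviewer) then completes. The schema and field names below are illustrative, not from any specific platform.

```python
# Sketch: generating first-draft data-dictionary stubs from a table
# schema. An LLM would fill in the business descriptions; the structure
# here just shows the automation pattern.

SCHEMA = {  # illustrative table metadata
    "orders": [
        ("order_id", "INT"),
        ("order_ts", "TIMESTAMP"),
        ("amount_usd", "DECIMAL"),
    ],
}

def draft_data_dictionary(schema: dict) -> str:
    """Emit a markdown stub per column for review and completion."""
    lines = []
    for table, columns in schema.items():
        lines.append(f"## {table}")
        for name, dtype in columns:
            lines.append(f"- `{name}` ({dtype}): TODO describe business meaning")
    return "\n".join(lines)
```

Even this simple scaffolding removes the blank-page problem that stalls many governance documentation efforts.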

Use Case #4 AI-Generated Visualizations: Creating Dashboards and Reports Faster

Modern BI platforms like Databricks AI/BI and Power BI have built-in Gen AI capabilities that let you create and enhance data visualizations through natural language interactions. With simple conversational prompts, you can generate sophisticated visualizations and entire dashboard layouts in seconds. This goes beyond basic chart creation, letting you quickly iterate on designs and optimize layouts for different audiences and purposes.

The agentic AI tool Zenlytic takes this further by integrating its AI analyst (Zöe), which not only creates visualizations but also helps interpret the data and suggests relevant insights. Meanwhile, Power BI’s Copilot changes how you interact with your data, offering AI-driven features to generate visualizations, create DAX expressions, and produce narrative summaries — all through natural language commands.

This generative AI-driven approach to UI creation saves time and makes self-service analytics more accessible, especially for those without deep SQL or visualization expertise. For example, you could ask, “show me monthly sales trends with year-over-year comparison,” and instantly receive professionally designed visualizations. This accessibility helps build a data-driven culture where insights are available to everyone, not just technical analysts.
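Under the hood, these tools translate a natural-language request into a structured chart specification. Here is a deliberately simplified keyword-based sketch of that contract (text in, chart spec out); real products use an LLM for the parsing, and the spec fields are illustrative.

```python
# Sketch: mapping a natural-language request to a chart specification.
# Products like Copilot or Databricks AI/BI do this with an LLM; this
# keyword version only illustrates the input/output contract.

def prompt_to_chart_spec(prompt: str) -> dict:
    """Translate a plain-English request into a simple chart spec."""
    p = prompt.lower()
    spec = {"chart": "bar", "x": None, "y": None, "compare": None}
    if "trend" in p or "over time" in p:
        spec["chart"] = "line"          # trends read best as line charts
    if "monthly" in p:
        spec["x"] = "month"
    if "sales" in p:
        spec["y"] = "sales"
    if "year-over-year" in p:
        spec["compare"] = "yoy"
    return spec

spec = prompt_to_chart_spec(
    "show me monthly sales trends with year-over-year comparison"
)
```

The spec would then be handed to the platform’s rendering layer, which is why iteration is so fast: changing the prompt changes only the spec, not the plumbing.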

Use Case #5 Automating Workflows: How Generative AI Streamlines Data Processes

With workflow automation tools like Zapier, Power Apps, and Power Automate, you can now embed Generative AI directly into your existing business applications and workflows without complex development efforts. These integrations automate analytical requests, from simple data summaries to complex report generation, while maintaining your organization’s security and governance standards. Low-code platforms and API integrations make insights more accessible to business users.

The real power of these integrations comes from their ability to connect different systems and data sources seamlessly. Whether you’re generating weekly performance reports, creating data-driven email responses, or building interactive analytical applications, these workflows reduce manual effort while keeping insights consistent. You can automate workflows that monitor business metrics, generate analytical summaries with natural language explanations, and distribute insights through existing communication channels like email or Teams — ensuring stakeholders get the right information at the right time.
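A minimal sketch of one such workflow step follows: check a business metric against a threshold and draft a natural-language summary for distribution. The metric names and thresholds are illustrative, and the templated `summarize` function stands in for an LLM call.

```python
# Sketch: an automated workflow step that checks a metric and drafts a
# summary for distribution via email or Teams. The summary is templated
# here; an LLM call could replace `summarize` for richer narratives.

def check_metric(value: float, threshold: float) -> bool:
    """Flag the metric when it falls below its alert threshold."""
    return value < threshold

def summarize(name: str, value: float, threshold: float) -> str:
    """Draft a one-line, human-readable status message."""
    status = "below" if check_metric(value, threshold) else "at or above"
    return f"{name} is {value:,.0f}, {status} the threshold of {threshold:,.0f}."

message = summarize("Weekly revenue", 92_000, 100_000)
```

In a tool like Power Automate or Zapier, this logic becomes a scheduled flow whose output feeds directly into your existing communication channels.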

Use Case #6 AI Agents: Handling Complex Analytical Tasks

AI agents go beyond workflow automation by handling complex analytical tasks that require reasoning and adaptation. While workflow automation focuses on structured processes, AI agents adapt dynamically to different analytical requests and refine their approach based on new data.

Agent frameworks like Mosaic, LangGraph, AutoGen, and CrewAI let you build specialized components that work together — just like human analysts solving complex problems. When properly implemented, AI agents break tasks into logical steps and execute them systematically. (This process should not rest entirely in the hands of AI – your oversight is essential to ensure accuracy and consistency.)

You can apply these frameworks within analytics platforms to handle routine analytical workflows. For example, when you’re investigating a business metric, an analytics agent can follow a structured approach: identifying relevant data sources, performing statistical analysis, and generating preliminary insights. You can enhance this workflow by deploying multiple specialized agents — one for data preparation, another for statistical analysis, and a third for visualization. Proper coordination is key to getting accurate results.

While AI agents adjust their approach based on initial findings, they should enhance — not replace — your analysis. The multi-agent approach streamlines routine analytical tasks and highlights key insights, but it works best when you set clear boundaries and use cases. If you’re implementing agent-based analytics, maintain oversight and validation processes to ensure the accuracy and reliability of automated analysis.
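The coordination pattern described above can be sketched with three specialized “agents” as plain functions, run in sequence with a validation gate that mirrors the human-oversight requirement. Frameworks like CrewAI or LangGraph formalize this pattern; the logic here is illustrative.

```python
# Sketch: three specialized "agents" coordinated in sequence, with a
# validation gate enforcing oversight before reporting. Real agent
# frameworks add LLM reasoning, state, and dynamic task routing.

def prep_agent(raw: list) -> list:
    """Data preparation: drop missing values."""
    return [x for x in raw if x is not None]

def analysis_agent(clean: list) -> dict:
    """Statistical analysis: basic summary statistics."""
    return {"n": len(clean), "mean": sum(clean) / len(clean)}

def viz_agent(stats: dict) -> str:
    """Visualization/reporting: a one-line narrative of the stats."""
    return f"Analyzed {stats['n']} records; mean value {stats['mean']:.1f}."

def run_pipeline(raw: list) -> str:
    """Coordinate the agents, failing fast when validation fails."""
    clean = prep_agent(raw)
    assert clean, "validation gate: no usable records"  # oversight step
    return viz_agent(analysis_agent(clean))

report = run_pipeline([10, None, 20, 30])
```

The validation gate is the important part: each hand-off between agents is a natural checkpoint for the accuracy and reliability checks the section above recommends.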

Generative AI Caveats: Common Challenges and Risks to Watch For

Generative AI offers significant potential, but you need to consider certain risks before integrating it into your data strategy:

  1. Basis of Evidence: Generative AI relies on LLMs and neural networks, whose outputs are probabilistic rather than deterministic. This makes it difficult to explain why a specific piece of code, design choice, or recommendation was produced in any given process.
  2. Security, IP, and PII Risks to Data: The ease of use of Generative AI is one of its biggest advantages — but also a risk. Without proper safeguards, sensitive, proprietary, or personally identifiable information can end up in a training dataset, creating compliance and security concerns.
  3. Accuracy: Public LLMs like ChatGPT are trained on publicly available data. In a private setting, their accuracy depends entirely on the quality of your training data and metadata. Poor data leads to poor results, so you need strong data governance to ensure reliable outputs.
  4. Cost: The barrier to entry has never been lower — but cost overruns have never been higher. Cognitive search with an LLM is resource-intensive, and if you’re not careful, deployment and scaling can drive up costs quickly. Monitor usage closely before rolling out AI in production.
  5. Rapid Evolution: The Generative AI landscape is constantly changing, with frequent updates to models, tools, and frameworks. This evolution can break workflows and require ongoing maintenance to keep your AI implementation secure and effective.
  6. Response Consistency: Even when you use the same inputs and data, foundation models can generate different outputs. This inconsistency is especially challenging for production use cases where reliable, repeatable results are essential.
A chart titled "Generative AI Risks vs. Mitigation Strategies" displays six key risks of Generative AI on the left and corresponding mitigation strategies on the right. The risks include lack of explainability, security & compliance, accuracy & data quality, high cost of AI workloads, model drift & evolution, and inconsistent outputs. The mitigation strategies include audit & validation, AI security, data governance, cost monitoring, model versioning, and standardized prompting techniques. Each risk is visually linked to its corresponding strategy.

Mitigate the risks of Generative AI by implementing strong security, governance, cost monitoring, and validation strategies.
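The response-consistency risk in particular can be partially contained by validating the shape of model output before it enters a production pipeline. Here is a minimal stdlib sketch; the required field names are illustrative, and libraries like Pydantic offer a more complete version of this pattern.

```python
# Sketch: guarding against inconsistent model outputs by validating the
# response shape before downstream use. Field names are illustrative.

import json

REQUIRED_FIELDS = {"metric", "value", "explanation"}

def validate_response(raw: str) -> dict:
    """Parse a model's JSON reply and fail fast if fields are missing."""
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"model output missing fields: {sorted(missing)}")
    return data
```

Failing fast on malformed output lets you trigger a retry (or a human review) instead of silently propagating an inconsistent answer.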

Generative AI Tools and Platforms: Choosing the Right Technology

Most mainstream analytics tools offer Generative AI capabilities in different forms. The right platform depends on your organization’s needs — whether you’re looking for built-in AI features within your existing analytics stack or open-source frameworks for customization. Here’s a breakdown of the available options:

  • AWS: AWS Bedrock is a fully managed service that makes third-party LLMs and Amazon’s own foundation models available for developing and deploying Generative AI applications.
  • Google: Vertex AI lets you customize and embed models within applications, and models can be tuned using Generative AI Studio on Vertex AI. Generative AI App Builder is an entry-level tool that lets developers build and deploy chatbots and search applications.
  • Microsoft: Azure OpenAI Service provides access to large-scale generative AI models, including pre-trained models and options for custom models, with token- and image-based pricing available. Copilot can generate visualizations, in-report insights, DAX expressions, and narrative summaries within Power BI.
  • Databricks AI/BI: Databricks AI/BI leverages the lakehouse architecture to enable natural language querying, automated visualizations, and AI-assisted analytics. The platform integrates with foundation models while maintaining enterprise-grade security and governance within existing Databricks environments.
  • Qlik: Qlik offers a suite of OpenAI connectors. The OpenAI Analytics Connector enables generative content within front-end Qlik Sense apps, while the OpenAI Connector for Application Automation lets developers enhance their workflows when creating expressions, commands, or scripts.
  • Sigma: Sigma AI is a broad suite of AI-powered features built into the platform. These include Input Tables AI, which lets users construct AI-generated input tables; Natural Language Workbooks, which turns natural-language text into workbook elements within Sigma; and Helpbot, a chatbot that assists users by indexing all help and community articles within Sigma.
  • Tableau: Tableau Pulse is powered by Tableau GPT, which is built on Einstein GPT. Tableau Pulse provides automated analytics and surfaces insights in natural language and visual formats.
  • Zenlytic: Zenlytic is an LLM-powered BI platform that combines dashboards, self-serve exploration, and an AI data analyst (named Zöe). Zenlytic allows you to explore, pivot, and ask your data questions like you’re talking to an analyst.

Generative AI Frameworks for Implementation: A Structure to Get Started  

These frameworks help you build, deploy, and manage Generative AI applications by providing structure, automation, and integration capabilities. Whether you’re developing AI-powered chatbots, multi-agent systems, or analytics automation, choosing the right framework depends on your use case and technical requirements.

  • LangChain/LangGraph: LangChain and LangGraph are open-source frameworks for building LLM-powered applications. LangChain lets you integrate external components and data sources, while LangGraph extends these capabilities by allowing developers to create structured, state-aware agent workflows using graph-based architectures.
  • Pydantic AI: Pydantic AI provides a structured framework for building type-safe LLM applications, ensuring data validation and consistent output formatting. With Pydantic AI, you can build reliable AI applications with predictable response structures and error handling.
  • AutoGen: AutoGen helps you build multi-agent systems that can collaborate to solve complex tasks. This framework lets you develop conversational agents that work together, share context, and execute multi-step workflows autonomously.
  • Crew AI: Crew AI enables you to orchestrate multiple AI agents to handle complex tasks. With this platform, you can create agent-based workflows where specialized agents collaborate, delegate tasks, and share information to achieve specific goals.
  • Mosaic: Mosaic gives you a framework for developing and deploying production-ready AI agents. You can use it to build, test, and manage intelligent agents that handle complex analytical and operational tasks while maintaining reliability.
  • AtomicAgents: AtomicAgents lets you create modular, reusable AI agents that can be combined into larger systems. This platform enables developers to build scalable agent-based applications while maintaining consistency and reliability across deployments.

Tips for Success with Generative AI

  • Practice your prompts. Familiarize yourself with how open Generative AI platforms function so that you can use them effectively. Learn how to structure prompts and adjust verbosity to get the best possible results. Since the barrier to entry is low, you can easily experiment with tools like ChatGPT to refine your approach.
  • Adjust your model’s “temperature” setting. Values closer to 0 keep responses grounded in facts, while higher values give the model more creative license. Determine what is appropriate for your use cases and business.
  • Context is everything. LLMs need rich contextual information to generate meaningful results for your organization. Unlike human analysts, they lack built-in knowledge of your organization’s specific metrics, terminology, business rules, or technical architecture. Without clear context, responses can be inaccurate or irrelevant. To get the best results, include relevant business metrics, internal terminology, calculation methods, and specific use case details in your prompts.
  • Set master prompting as a standard. Tone matters when using Generative AI. If you’re using AI to generate standardized content across your organization, establish a master prompting approach. Create two clear statements: one defining your organization’s identity and another setting the tone AI should adopt. This ensures consistency in AI-generated text and prevents mismatched communication styles across teams.
  • Understand the cost structure. The cost of Generative AI varies greatly between small-scale and enterprise-wide use. Without a clear strategy, costs can escalate quickly. Track your usage and limit access during development to control spending and optimize costs before full deployment.
  • Have a firm data strategy in place. Before rolling out AI, know where your data lives and how it’s being maintained and structured. A strong data strategy includes clear data governance protocols to maintain accuracy and security.
  • Plan out a strong use case. Generative AI is a tool, not a solution on its own. Identify key processes in your organization where AI can streamline workflows or automate repetitive tasks to drive real value.
  • Make data privacy paramount. Only the right data should feed into your LLMs under the right security protocols. Simple steps like disabling chat history and restricting training data in ChatGPT can help. If you’re using a cloud provider, understand its data retention policies and how it stores prompt-related information.
  • Continue to enrich your data and supporting metadata. In a corporate setting, the effectiveness of LLMs depends on the quality and depth of your internal data. Public LLM services like ChatGPT and Bard work well because they draw from vast online datasets. If you want similar results in a closed environment, ensure your AI models have access to high-quality, well-structured internal data.
  • Choose between general and domain-specific LLMs. Which works best for you — a general-purpose LLM or a domain-specific one? Industry-specific models (like BloombergGPT for finance) offer more relevant insights than general AI models that may not understand niche terminology.
  • Bigger isn’t always better. Large language models get most of the attention, but smaller models like Mistral-7B, Phi-4, and SmolVLM can be just as powerful while requiring fewer resources. If you’re working with structured data and automated workflows, these smaller models often deliver faster results at a lower cost.
  • Understand your cloud platform. AWS, Microsoft, and Google offer different approaches to LLM development and deployment. Each platform handles storage, vector databases, embeddings, and cognitive search differently — so learn how your cloud provider structures these services before launching AI-driven applications. Also, keep track of usage-based costs to avoid unexpected expenses.
  • Consider the relationship between Generative AI and BI. LLMs are changing how users interact with data. In the future, structured prompts may replace traditional dashboards and reports by letting users surface insights instantly. However, there’s also potential for hybrid analytics where BI tools integrate AI to enhance data exploration. Think about how AI can complement or transform your analytics strategy moving forward.
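The master-prompting tip above can be sketched as two standard statements prepended to every request as a system message. The identity and tone statements below are placeholders for your own, and the message format follows the common chat-completion convention.

```python
# Sketch: a "master prompt" assembled from two standard statements,
# one for organizational identity and one for tone, applied to every
# request. Both statements are illustrative placeholders.

IDENTITY = "You write on behalf of Acme Analytics, a data consultancy."
TONE = "Use a concise, professional, jargon-free tone."

def build_messages(user_prompt: str) -> list:
    """Return a chat-completion message list with the master prompt applied."""
    return [
        {"role": "system", "content": f"{IDENTITY} {TONE}"},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Draft a summary of this week's sales dashboard.")
```

Centralizing these statements in one function (rather than copying them into each team’s prompts) is what keeps AI-generated text consistent across the organization.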

Analytics8 Gen AI Offerings

No matter where you are on your generative AI journey, there is a strategy package that fits your needs.

A white and blue graphic highlighting three Gen AI service packages Analytics8 offers: the Essential Gen AI Launchpad, which helps identify and implement a high-value AI use case in 2 weeks; the Enhanced Gen AI Strategy, providing fundamental training, maturity assessment, workshops, architecture design, and a roadmap for implementation in 4 weeks; and the Chatbot Quickstart, leveraging proprietary company information to address specific use cases such as meeting prep, content creation, customer service, and onboarding, with a custom timeframe.

More about our packaged Gen AI offerings

Talk With a Data Analytics Expert

Kevin Lobo Kevin is our VP of Consulting and is based out of our Chicago office. He leads the entirety of our consulting organization, including 100+ consultants in the U.S. and Europe. Outside of work, Kevin enjoys spending time with his wife and two daughters, going to concerts, and running the occasional half-marathon.
John Bemenderfer John is a Senior Consultant based out of our Dallas office. He has experience across the entire data stack, from data engineering to analytics, helping clients get the most value out of their data. He also helps lead the Power BI practice for Analytics8. Outside of work, John enjoys spending time with his daughter and wife, dungeons and dragons, and anything Star Wars related.