Review the latest updates for key data and analytics technologies and platforms including Snowflake, Databricks, dbt, BigQuery, Looker, Qlik, Tableau, and Power BI.

We know the product release notes from the vendors can be very detailed and overwhelming. So, we have outlined the major product updates you need to know about for Q1 and Q2 2022, how these updates can impact you, how they can be applied, and other major news for key technologies within the data and analytics space.

Snowflake

Q2 Product Release Updates

Unistore

  • Unistore’s hybrid table functionality allows fast single-row operations needed to support transactional business applications directly on Snowflake.
  • The new transaction-optimized tables (which support database constraints and row-level locking) can be combined with Snowflake’s existing table format which is optimized for analytics. Transactional business applications can now be built directly on Snowflake rather than shuttling data back and forth between Snowflake and transactional systems.
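For illustration, here is a minimal sketch of what working with a hybrid table might look like from Python using the snowflake-connector-python package. The table, index, and connection details are hypothetical, and hybrid tables were in private preview at announcement, so the exact syntax may differ.

```python
# Hypothetical sketch of a Unistore hybrid table (preview-era syntax).
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>"
)
cur = conn.cursor()

cur.execute("""
    CREATE HYBRID TABLE orders (
        order_id INT PRIMARY KEY,          -- hybrid tables enforce constraints
        customer_id INT,
        status STRING,
        INDEX idx_customer (customer_id)   -- secondary index for fast point lookups
    )
""")

# Fast single-row DML (the transactional side of the workload) runs against
# the same table that analytical queries read.
cur.execute("UPDATE orders SET status = 'SHIPPED' WHERE order_id = 42")
```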

Native Application Framework

  • Snowflake’s Data Marketplace has been renamed Snowflake Marketplace because it now enables applications to be distributed, not just datasets.
  • Snowflake now offers a platform for building, monetizing, and deploying data-intensive applications in the cloud. This framework is made possible by prior feature releases such as stored procedures, user-defined functions (UDFs), and user-defined table functions (UDTFs) which are core functionalities used to build Snowflake native applications.

Snowpark API for Python — Preview (announced 6/22)

  • The Snowpark API for Python has entered public preview, while Java and Scala support reached general availability (GA) in May.
  • According to Snowflake: “We are pleased to announce the preview of the Snowpark API for Python. Snowpark is a new developer experience that provides an intuitive API for querying and processing data in a data pipeline. Using this library, you can build applications that process data in Snowflake without moving data to the system where your application code runs.”
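To make that concrete, here is a minimal sketch of the Snowpark for Python developer experience, assuming the snowflake-snowpark-python package and real connection parameters; the ORDERS table is hypothetical.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

connection_parameters = {
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "<warehouse>", "database": "<database>", "schema": "<schema>",
}
session = Session.builder.configs(connection_parameters).create()

# Transformations are built lazily and pushed down to Snowflake as SQL;
# no data leaves Snowflake until an action such as show() or collect() runs.
orders = session.table("ORDERS")  # hypothetical table
top_customers = (
    orders.filter(col("STATUS") == "SHIPPED")
          .group_by("CUSTOMER_ID")
          .count()
          .sort(col("COUNT").desc())
          .limit(10)
)
top_customers.show()
```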

Process Unstructured Data Using Java UDFs — Preview (announced 4/2022)

  • Reduces the need for additional tools alongside Snowflake to manage and utilize unstructured data.
  • According to Snowflake: “Unstructured data is data that lacks a predefined structure. It is often textual, such as open-ended survey responses and social media conversations, but can also be non-textual, including images, video, and audio. Java UDFs enable you to perform custom operations using the Java programming language to manipulate data and return either scalar or tabular results. Call your custom UDFs and compiled code to extract text, process images, and perform other operations on unstructured data for analysis. You can either include the Java code inline in the function definition or package the code in a JAR file and copy the file to an internal or external stage. Call the UDF with the input as a scoped URL, file URL, or the string file path for one or more files located in an internal or external stage. The new SnowflakeFile class enables you to easily pass additional file attributes when calling the UDF, such as file size, to filter the results. Previously, Snowflake customers were limited to processing unstructured files using external functions and remote API services.”
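As a rough sketch of the calling pattern described in that quote, the query below runs a hypothetical Java UDF named extract_text over every file in a stage's directory table, passing each file as a scoped URL; the stage name is also hypothetical.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>"
)
cur = conn.cursor()

# DIRECTORY() lists the files in a stage; BUILD_SCOPED_FILE_URL turns each
# relative path into the scoped URL the Java UDF accepts as input.
cur.execute("""
    SELECT relative_path,
           extract_text(BUILD_SCOPED_FILE_URL(@docs_stage, relative_path))
    FROM DIRECTORY(@docs_stage)
""")
for path, text in cur:
    print(path, text[:80])
```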

Directed Acyclic Graph (DAG) Support for Tasks — Preview (announced 6/2022)

  • Enhances ability to build and orchestrate complex data pipelines directly in Snowflake through use of tasks. DAGs enable parallel processing, for example to update a set of dimension tables concurrently before aggregating facts for a dashboard.
  • According to Snowflake: “We are pleased to announce preview DAG support for tasks. A DAG is a series of tasks composed of a single root task and additional tasks, organized by their dependencies. Previously, users were limited to task trees, in which each task had at most a single predecessor (parent) task. In a DAG, each non-root task can have dependencies on multiple predecessor tasks, as well as multiple subsequent (child) tasks that depend on it.”
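Here is a minimal sketch of the dimension-then-fact pattern mentioned above, expressed as a small task DAG; the warehouse, task, and stored procedure names are hypothetical.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>"
)
cur = conn.cursor()

# The root task runs on a schedule; the two dimension tasks run in parallel
# after it, and the fact task waits on both -- the DAG shape this release enables.
cur.execute("""CREATE OR REPLACE TASK load_raw
    WAREHOUSE = etl_wh SCHEDULE = '60 MINUTE'
    AS CALL load_raw_proc()""")
cur.execute("""CREATE OR REPLACE TASK build_dim_customer
    WAREHOUSE = etl_wh AFTER load_raw
    AS CALL build_dim_customer_proc()""")
cur.execute("""CREATE OR REPLACE TASK build_dim_product
    WAREHOUSE = etl_wh AFTER load_raw
    AS CALL build_dim_product_proc()""")
# Multiple predecessors in a single AFTER clause are new with DAG support.
cur.execute("""CREATE OR REPLACE TASK build_fact_sales
    WAREHOUSE = etl_wh AFTER build_dim_customer, build_dim_product
    AS CALL build_fact_sales_proc()""")
# Tasks are created suspended; resume the children and then the root to activate the DAG.
```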

Object Dependencies — GA (April 2022)

  • Snowflake announced the general availability of object dependencies, reported in the OBJECT_DEPENDENCIES view (Account Usage).
  • This update provides data stewards and data engineers a unified picture of the relationships between referencing objects and referenced objects. For example, when a table owner plans to modify a column, querying the OBJECT_DEPENDENCIES view based on the table name returns all the objects (e.g., views) that will be affected by the modification.
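That impact-analysis query might look like the following sketch, where the ORDERS table is hypothetical:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>"
)
cur = conn.cursor()

# List every object (view, materialized view, etc.) that references ORDERS --
# i.e., everything a column change could break.
cur.execute("""
    SELECT referencing_database, referencing_schema,
           referencing_object_name, referencing_object_domain
    FROM snowflake.account_usage.object_dependencies
    WHERE referenced_object_name = 'ORDERS'
      AND referenced_object_domain = 'TABLE'
""")
for row in cur:
    print(row)
```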

Q2 Other News

  • In a recent JP Morgan Massive-Scale CIO Survey, which polled 142 CIOs controlling more than $100B of IT spend, analyst Mark Murphy notes that Snowflake has excellent standing with its customers. He says: “Snowflake ranked No. 1 in installed base spending intentions, beating out Microsoft, Alphabet-owned Google Cloud Platform, and CrowdStrike. Snowflake also ranked No. 1 among emerging companies whose vision most impressed respondents… Snowflake enjoys excellent standing among customers as evident in customer interviews and a recently laid out clear long-term vision at its Investor Day toward cementing its position as a critical emerging platform layer of the enterprise software stack.”
In a JP Morgan survey from June 2022, Snowflake is surging into elite territory.

Q1 Product Release Updates

  • Release 6.0 (1/19): Snowflake Scripting – Preview: “Snowflake Scripting is an extension to Snowflake SQL that adds support for procedural logic. You can use Snowflake Scripting to write stored procedures in SQL.”
    • Snowflake Scripting is a step forward in usability and capability. It allows procedural SQL logic to be executed directly, no longer requiring stored procedures wrapped in JavaScript. This increases the number of projects that can be performed entirely in Snowflake without third-party tools (see the first sketch after this list).
  • Release 6.1 (1/24): Unstructured Data Support – General Availability: “Enables users to access, load, govern, and share unstructured files of data types for which Snowflake has no native support, including some industry-specific types. Support for unstructured files adds to the existing robust support for structured and semi-structured data.”
    • This update reduces the need for additional tools alongside Snowflake to manage and utilize unstructured data.
  • Release 6.5 (2/8): External Table Support for Delta Lake — Preview: “With this release, we are pleased to announce preview support for Delta Lake in external tables. Delta Lake is a table format on your data lake that supports ACID (atomicity, consistency, isolation, durability) transactions among other features. All data in Delta Lake is stored in Apache Parquet format. Query the Parquet files in a Delta Lake by creating external tables that reference your cloud storage locations enhanced with Delta Lake.”
    • This improves Snowflake’s flexibility to be deployed in architectures alongside Databricks (see the second sketch after this list).
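First sketch: a minimal example of the procedural logic Snowflake Scripting enables, run as an anonymous block from Python; the orders table is hypothetical.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>"
)
cur = conn.cursor()

# Variables, IF/ELSE, and RETURN in plain SQL -- no JavaScript wrapper needed.
cur.execute("""
EXECUTE IMMEDIATE $$
DECLARE
    row_count NUMBER;
BEGIN
    SELECT COUNT(*) INTO :row_count FROM orders;
    IF (row_count = 0) THEN
        RETURN 'orders is empty -- skipping downstream build';
    ELSE
        RETURN 'orders has ' || row_count || ' rows';
    END IF;
END;
$$
""")
print(cur.fetchone()[0])
```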
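Second sketch: creating an external table over a Delta Lake location; the stage and table names are hypothetical.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>"
)
conn.cursor().execute("""
    CREATE EXTERNAL TABLE sales_delta
      LOCATION = @delta_stage/sales/
      FILE_FORMAT = (TYPE = PARQUET)   -- Delta Lake stores its data as Parquet
      TABLE_FORMAT = DELTA             -- read the Delta transaction log for current files
      AUTO_REFRESH = FALSE             -- Delta external tables are refreshed manually
""")
```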

Q1 Other News 

  • Snowflake released new industry-specific data clouds. These are solutions to accelerate capabilities of organizations in financial services; healthcare and life sciences; retail and CPG; advertising, media, and entertainment; public sector; education; and technology.

 

Databricks

Q2 Product Release Updates

  • Delta Lake 2.0: Open sourcing all Delta Lake enhancements by Databricks, including those that were previously available only to Databricks customers. This update is aimed at allowing customers to maintain a fully open data architecture, one of the core principles of building a data lakehouse.
  • Spark Connect: Decouples the Apache Spark™ client and server so Spark can be embedded everywhere: application servers, IDEs, notebooks, and any programming language.
  • Project Lightspeed: Bringing the next generation of Spark Structured Streaming.
  • Databricks SQL Serverless: Available in preview on AWS, providing instant, secure, and fully managed elastic compute for improved performance at a lower cost. Photon, the record-setting query engine for lakehouse systems, will be generally available on Databricks Workspaces in the coming weeks.
  • Unity Catalog: Generally available on AWS and Azure in the coming weeks, Unity Catalog offers a centralized governance solution for all data and AI assets, with built-in search and discovery, automated lineage for all workloads, and the performance and scalability of a lakehouse on any cloud.
  • Databricks Marketplace: Provides an open marketplace to package and distribute data sets and a host of associated analytics assets like notebooks, sample code, and dashboards.
  • Cleanrooms: Available in the coming months, Cleanrooms will provide a way to share and join data across organizations with a secure, hosted environment and no data replication required.
  • Databricks Terraform provider (GA, June 22): One of the coolest updates to Databricks this quarter is the addition of a Terraform provider to manage all Databricks workspaces and the underlying architecture setup. It works with the Databricks REST APIs to allow complete automation of a Databricks deployment using Terraform scripting. This can automate architecture setup, cluster management, job scheduling, workspace provisioning, and user access configuration, all without having to use the GUI!
  • Delta Live Tables support SCD type 2 (GA, June 21): Delta Live Tables (DLT) now supports type 2 slowly changing dimensions (previously only type 1 was supported). This means developers no longer have to write custom code to handle changes to slowly changing dimensions when history needs to be tracked. Instead, using the apply changes function, we simply identify the table and column(s) we want to track history for, and Databricks handles it automatically (see the first sketch after this list).
  • Parameter values can be passed between Databricks job tasks (GA, June 13): Values can now be passed from the output of one task to downstream tasks. This is useful when a query filter needs to be dynamic, depending on changes in the data, and is calculated in an upstream task. This essentially lets us save variables from a task and persist them in memory, so we do not have to create and save a table just to refer back to it in a future task (see the second sketch after this list). This allows for efficiency and time savings.
  • Jobs matrix view (GA, April 27): You can now view Databricks jobs in a matrix view in the jobs user interface, which gives an overview of job and task run details, including the start time, duration, and status of each run. This isn’t a major update, but it provides a nicer visual for seeing how a job has executed over time (image below).

    The jobs matrix view gives users an improved visual for seeing how a job has executed over time.

  • Only re-run unsuccessful tasks (GA, April 25): If a job fails, the new repair and re-run feature allows you to re-run only the unsuccessful subset of tasks that failed instead of running the entire job over again. This means that if a job fails 25 minutes into a 30-minute run, instead of having to re-run from the start and wait another 30 minutes, the developer can re-run it starting from the 25-minute mark, saving valuable time when debugging errors.
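First sketch: what the apply changes call looks like in a Delta Live Tables pipeline. This is a hedged example: the dlt module is only available inside a DLT pipeline, and the table and column names are hypothetical.

```python
import dlt
from pyspark.sql.functions import col

# Target dimension table that DLT manages for us.
dlt.create_streaming_live_table("dim_customer")

dlt.apply_changes(
    target="dim_customer",
    source="customer_updates",      # a streaming table defined elsewhere in the pipeline
    keys=["customer_id"],           # business key identifying each dimension row
    sequence_by=col("updated_at"),  # ordering column used to sequence changes
    stored_as_scd_type=2,           # keep full history instead of overwriting (type 1)
)
```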
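Second sketch: passing a value between tasks with the jobs task values API. Here, spark and dbutils are provided by the Databricks runtime, and the task, table, and key names are hypothetical.

```python
# --- In the upstream task's notebook (task name: compute_cutoff) ---
cutoff = spark.sql("SELECT MAX(load_date) AS d FROM raw.orders").first()["d"]
dbutils.jobs.taskValues.set(key="cutoff_date", value=str(cutoff))

# --- In a downstream task's notebook ---
cutoff = dbutils.jobs.taskValues.get(taskKey="compute_cutoff", key="cutoff_date")
df = spark.sql(f"SELECT * FROM staging.orders WHERE load_date > '{cutoff}'")
```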

Q1 Product Release Updates

  • Version 3.6 – Syntax highlighting and autocomplete for SQL commands in Python cells: Syntax highlighting and SQL autocomplete are available when you use SQL inside a Python command, such as in a spark.sql command.
    • These features help to reduce developer errors and increase speed of development of teams working in Databricks.
  • Public Preview – Unity Catalog: The Unity Catalog is a cross-workspace metastore which will help improve centralized data governance for the Lakehouse. One of the main features of the Unity Catalog is the Delta Lineage view, which will auto-capture and map the flow of data into the Lakehouse (across all languages down to the table and column level).
    • This will bring a level of governance and oversight to Databricks that has previously been missing.
  • Public Preview – Delta Sharing: Delta Sharing is a feature of the new Unity Catalog and will allow for Databricks-to-Databricks verified data sharing. It will allow incremental data changes to be shared using Delta Change Data Feed and has a simple UI to create and modify shares and recipients.
    • This feature adds capabilities of data sharing features similar to existing solutions offered by competitors like Snowflake.
  • Public Preview (AWS only) – Serverless DBSQL: Serverless DBSQL sets aside managed server instances, which are always running, patched, and upgraded automatically, to give customers access to instant compute power.
    • This reduces wait times for clusters to spin up. Databricks is also working to improve first query performance up to 3x using serverless DBSQL.
  • Public Preview – Databricks SQL: Databricks SQL is undergoing new updates in public preview, including a tabbed editor, advanced autocomplete, syntax error highlighting, multiple concurrent executions per tab, and the ability to share past executions with other users.
    • The same queries can be run from notebooks using SQL; however, the Databricks SQL view is a more traditional SQL query editor, which allows Databricks to compete more fully with Snowflake and other cloud DWH competitors by providing a simpler experience for querying directly against the Lakehouse.

Q1 Other News 

  • You can now create Databricks workspaces in your VPC hosted in the AWS ap-northeast-2 region. You can use a customer-managed VPC to exercise more control over your network configurations to comply with specific cloud security and governance standards your organization may require.

 

dbt

Q2 Product Release Updates

April 2022 – Audit Log 
To review actions performed by people in your organization, dbt provides logs of audited user and system events. The dbt Cloud audit log lists events triggered in your organization within the last 90 days. The audit log includes details such as who performed the action, what the action was, and when it was performed. 

  • The audit log adds an additional layer of data governance for organizations. This helps ensure stability and history without having to bring more complex tools into your stack.

April 2022 – Rebuilt Cloud Scheduler 
The dbt Cloud scheduler became the most popular discrete product in dbt Cloud, with over 75% of users engaging with it monthly. dbt greatly improved its performance in Q2, especially around prep time (the time a job waits before it starts running). A great tip for your scheduled workloads: move them off the top of the hour to the 15-minute or 45-minute mark (e.g., a cron schedule of 15 * * * * instead of 0 * * * *) to limit concurrency.

Q2 Other Updates

  • dbt + Databricks partnership: We recently wrote this blog, which showcases the value of using Databricks with dbt to unify your BI and data science teams. Traditionally, these teams have worked in separate tools for compute and storage, which adds unnecessary complexity to data stacks. By integrating dbt with Databricks, there is now a well-designed framework for engineers to transform data for both analytics and data science initiatives without having to extract the data into any additional places. A common frustration in the industry is that too many microservices have been unbundled for almost every small use case. The dbt and Databricks partnership is a substantial step toward simplifying data teams’ stacks.
  • dbt now mainstream? A strong showing at the Snowflake conference: dbt was mentioned at over 10 customer-led sessions, its booth was constantly crowded, and its best-practices session had a line out the door.
    dbt was once considered a niche tool, but its incredible growth over the last few years has made it hard to ignore. It is safe to say dbt has become one of the most popular transformation tools in the modern cloud data warehouse ecosystem, with Snowflake leading the charge. Many large companies have adopted dbt with great success.

Q2 Other News

  • Join Analytics8 at Coalesce 2022 this October in New Orleans! Let us know if you’re coming to the conference in-person, we’d love to spend time with customers.

Q1 Product Release Updates

  • Version 1.0 (stable release): dbt version 1.0 offers ease of upgrades and is backward compatible. It serves as the foundation for new functionality to be built upon in future releases. This marks a significant milestone in product maturity as ideas that previously seemed ambitious—including artifacts, advanced testing, Slim CI, and more—have materialized as incredible features in dbt v0.21 and earlier releases.
    • As product manager Jeremy Cohen mentioned at Coalesce 2021, this means the dbt core features have reached a point where they are fast, stable, intuitive, extensible, and maintainable. After upgrading to dbt 1.0, you can be certain that your future upgrades will be simple and intuitive from that point on. A few future ideas on the roadmap are column-level lineage, support for sharded tables, built-in linting, column-level Slim CI, and eventually dbt-SQL.

 

BigQuery

Q2 Product Release Updates

Column-level data masking, combined with column-level access control, lets you grant access to a column while obscuring its values

  • More control not only over who sees which columns, but also over what they see within each column.

Time Travel Window Configuration – Specify a duration from two to seven days

  • Having snapshots of what your data was and how it was being used, or the ability to roll back to a point in time, is essential to maintaining your data. Being able to configure how long that window lasts is an important tool to have.
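Configuration is a single DDL statement. Here is a hedged sketch with the google-cloud-bigquery client, where the dataset name is hypothetical; the option is set in hours, in multiples of 24 between 48 and 168.

```python
from google.cloud import bigquery

client = bigquery.Client()
client.query(
    "ALTER SCHEMA my_dataset SET OPTIONS (max_time_travel_hours = 96)"  # 4 days
).result()
```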

Query Queues – Automatically determine query concurrency for on-demand and flat-rate customers

  • With a queue, queries that exceed the 100-concurrent-query threshold no longer return an error; they wait their turn instead.

External Table Feature in Google Sheets – Connect Google Sheets to your BigQuery data for direct access to your data.

  • Give your end users direct access to your data in BigQuery via Google Sheets. Control is still maintained in BigQuery.

Q1 Product Release Updates

  • General Availability – Qualify Clause: The QUALIFY clause is key to making code more concise. It eliminates redundant subqueries when filtering on the results of window functions during transformations (see the first sketch after this list).
    • One of our clients (Driven Brands) will use the QUALIFY clause to automate deduplication of critical source data in a view within BigQuery.
  • Preview – Table Clones: Cloning tables provides a cost-effective, lightweight way to store copies of tables and test changes to a table. You are only billed for storing the cloned table once it differs from the base table (see the second sketch after this list).
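First sketch: the deduplication pattern QUALIFY enables, run with the google-cloud-bigquery client; the dataset and column names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT *
    FROM my_dataset.orders_raw
    -- keep only the latest row per order_id; no nested subquery required
    QUALIFY ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY updated_at DESC) = 1
"""
for row in client.query(sql).result():
    print(row)
```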
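Second sketch: a zero-copy table clone is one statement; the names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()
# Storage for the clone is billed only as it diverges from the base table.
client.query(
    "CREATE TABLE my_dataset.orders_test CLONE my_dataset.orders"
).result()
```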

Q1 Other News 

  • A BigQuery Reliability Guide was published in Q1 2022 to help customers create solutions that properly account for the reliability requirements of specific use cases. This helps customers determine the best service within GCP for their solution rather than trying to shoehorn BigQuery into everything when it is not the optimal service. The guide covers planning for import reliability, query reliability, read reliability, and disaster recovery.

 

Looker

Q2 Product Release Updates

Forecasting Labs Feature: Add forecasting to your new or existing Explore queries to help predict and monitor specific data points.

  • Looker has improved its native forecasting capability by incorporating seasonality, prediction intervals, and forecast length (forecast over X periods). This allows for enhanced insight into, and control over, how forecasts are constructed within Looker.

New LookML Runtime: More performant loading of Explores and dashboards, and faster SQL generation.

  • Continual improvement of how LookML is handled, served up, and utilized lets users get the latest and greatest without having to rebuild.

Q2 Other News

Hundreds of knowledge articles previously reserved for internal use are now publicly available. These “Knowledge Drops” were originally created by Looker, for Looker, and are now available for all to use. Most have already been published, so check them out while it lasts!

Q1 Product Release Updates

  • API 4.0 is generally available: Introduces new functionality that was previously inaccessible via the Looker API, like copying dashboards.
    • Copying dashboards previously could not be done via the API without some heavy coding. This turns what used to be a 50-60 line script into about 2-3 lines of code (see the sketch after this list). We immediately began using this for one of our customers, but could see it being implemented any time a regular set of dashboards needs to be copied into a new folder.
    • The ability to use Looker’s API to automate administrative and embedding use cases sets it apart from its competitors. Tedious management of user permissions and authentication can be automated down to a few keystrokes or scripts.
    • From an administration standpoint, Looker is an InfoSec dream. Permissions can be set up via a script and repeated multiple times, and pipelines for copying dashboards/checking user access can be established. The sky’s the limit with the API, but it blows the competition out of the water when it comes to administration.
  • LookML Dashboards can be moved to any content folder: Allows for more logical storage of LookML dashboards, especially for Looker instances where multiple companies are logging into the same central instance.
    • The LookML Dashboards update is a step toward fully version-controlled dashboard creation, where changes to a dashboard can be tracked over time, so users can always be confident that the dashboard they’re seeing is the correct, working version.
    • Before this update, LookML Dashboards all had to stay in one folder, which was not manageable if working on an instance with hundreds of LookML dashboards. The concept of connecting a Looker dashboard to a Git service is something that sets Looker apart from the competition, and benefits users and developers alike, knowing that changes can be tracked and rolled back, if needed.
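For reference, here is the dashboard-copy sketch promised above, using the official looker_sdk package; the dashboard and folder IDs are hypothetical, and credentials come from a looker.ini file or environment variables.

```python
import looker_sdk

sdk = looker_sdk.init40()  # API 4.0 client

# What used to be a 50-60 line script is now a single call.
copied = sdk.copy_dashboard(dashboard_id="42", folder_id="7")
print(copied.id, copied.title)
```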

Q1 Other News 

  • Looker is sunsetting its certification program to align more with Google’s ‘badge’ system.

 

Qlik

Q2 Product Release Updates

Chart level scripting 

Chart level scripting is a powerful feature that allows you to modify the dynamic dataset behind a chart using a subset of the Qlik scripting language, with techniques such as variables and loops. You can add or modify rows and columns that were not in the original dataset. This allows for calculations in chart expressions that were previously not possible, such as simulations or goal-seeking.

Chart level scripting adds a new, more precise level of customization within chart building. Although Qlik is known for its flexibility in backend scripting and frontend set analysis, there are still some limitations. Using chart scripting, you have more control over how your visualizations look and behave, which should reduce the need for workarounds or alternative methods of displaying information in your apps. Developers should be cautious about heavy chart scripting, as it may hurt performance compared with simpler, built-in chart functionality.

Support for field-level lineage 

As part of delivering explainable BI, new catalog capabilities are now available. Explainable BI is the concept that users will gain more confidence in using data and making decisions based on it if they have a good understanding, or explanation, of the information’s origin.

Lineage capabilities have expanded to not only visually show the data history by table but also by the specific field within a table—starting with applications all the way back to the original source. Field-level lineage helps you establish trust in the data. When you explore an app, you can now quickly access a data lineage summary that traces back any dimensions and measures in a chart to the original sources. This makes it easy for any user to understand where the data came from within a chart.

There is also now a distinction between Lineage and Impact Analysis. 

  • Lineage shows you a detailed visual representation of the history of a field or dataset, back through the applications and interim datasets to its original source.
  • Impact Analysis shows you a downstream view of the dependencies for a data element, including the databases, apps, files, or links that will be directly or indirectly impacted if the value of that particular field changes.

In data cataloging and management, more information and detail about the lineage of your data is always needed. With field-level lineage, you can track the source of a field throughout your pipelines and apps. This should reduce the need to track a field’s origin by physically opening apps and data sources, and provide additional clarity to data consumers.

Q1 Product Release Updates

In Q1 2022, the Qlik SaaS product continued to add functionality and integration with Qlik’s recent acquisitions.

  • Qlik Forts: Allows organizations to keep their data and applications on premises while taking advantage of SaaS functionality and upgrades.
    • Qlik Forts provides an option for clients that are hesitant to keep all of their data and applications in a cloud environment. This opens the tool to clients with security or regulatory concerns.
  • Qlik Automations: Qlik Automations has become a more robust feature in the SaaS environment. Templates now provide a library of ways automation can be used to take analytics outside the SaaS environment and gain greater control within it.
    • Qlik Automations and its templates give customers a wide range of ways to create API-driven workflows that take analytics beyond simple dashboards.

 

Tableau

Q2 Product Release Updates

Data Stories: Tableau wants to tell you stories in its 2022.2 release. The new Data Stories feature builds a “story” based on a data visualization within your dashboard. These plain-language explanations will help any user confidently access, understand, and communicate with data. Developers can add story points to distinct visualizations for additional context. Data Stories are also fully customizable, so you can tailor the stories to your audience.

Autosave: This feature lets you edit an existing workbook in a draft until you’re ready to publish so you won’t lose your changes or share them prematurely with other workbook users (note: this is for web authoring only).  

Quick Search: In Tableau Server, turn to the all-new Quick Search feature to view past searches and suggested content. Quick Search can be accessed from any page and allows you to review results within the search box.  Pairing Quick Search with Tableau’s enhanced Ask Data phrase builder ensures you can spend less time searching for your data and more time engaging with it! 

Tableau Cloud: Tableau Online is now Tableau Cloud! This product is the same easy-to-use self-service platform designed to bring the power of data to people, just with a new name.

  • Tableau Cloud will now autosave updates to an unpublished draft, which will allow users less familiar with Tableau to make changes without fear of disrupting production, sharing results prematurely, or losing unsaved work. Meanwhile, the new search experience for Tableau Server will save business users time and, hopefully, frustration by quickly returning suggestions based on popular search terms and past searches.

Ask Data: The Ask Data features for Tableau Server and Cloud help make Tableau’s software more accessible to business users as a self-service platform for obtaining data insights. Ask Data can return manipulable visualizations in response to common business questions, allowing users without technical Tableau knowledge to build views to fit their specific needs. Introduced in May, this enhancement is intended to enable business users to create self-service visualizations based upon questions they pose of their data.

Q2 Other News

  • If you missed the 2022 Tableau Conference, you can catch up on the keynotes and top sessions on-demand.
  • Tableau will enable Multi-Factor Authentication for site administrators on existing sites and all roles on new Tableau Cloud sites.

Q1 Product Release Updates

As of March 21, 2022, Tableau 2021.4 is the latest version of Tableau.

  • Virtual Connections: A content type in Tableau Server/Online that lets you create and manage access to data sources within Tableau Online. Unlike a standard published data source, virtual connections allow you to securely embed service account credentials and define data policies for row-level security, all within the Tableau Online platform. This greatly simplifies data management and access, and makes it much easier to implement and manage row-level security; previously you needed to create custom logic within a Tableau workbook, publish an entirely separate workbook data source, and manage access control outside the platform.
    • Virtual Connections is an important new feature because it automates and simplifies data management and security processes. This feature bridges the gap between the ‘core’ Tableau offering (Tableau Desktop, where you make your dashboards) and Tableau Online/Server, where you share reports. Having this single connection ensures all users are accessing fresh data sourced centrally from managed extract refresh schedules. Administrators save time by making database changes only once in the virtual connection rather than in every connection per Tableau content (data source, workbook, flow).
  • Copy and Paste: Added functionality to Tableau Desktop that allows you to copy and paste workbook elements from one page to another, even across workbooks.
    • This feature accelerates development of reports and dashboards, and it encourages and simplifies design consistency.

Q1 Other News 

  • Tableau for Slack: Allows users to share a visualization generated in Ask Data directly to Slack, putting data at the center of every conversation. Streamline communication and collaboration on one platform to move business forward.

 

Power BI

Q2 Product Release Updates

Feature Release: Power BI Datamarts 

  • Released May 2022 at MSFT Build, this feature lets business users create data warehouses for custom, self-service, departmental reporting. Power BI Datamarts also unlock the ability for analysts to use SQL queries, one of the most requested features of Power BI. Work that may typically take a data engineer weeks, including creating data pipelines and a SQL data warehouse, provisioning access, and writing additional code, can now be done much faster by creating Datamarts.
  • How it Works: The entire ETL process can happen inside the Datamart, which automatically creates a fully managed Azure SQL database, all directly within the Power BI web service. Data pipelines and models are built using low-code or no-code PowerQuery tasks online, enabling analysts to do more with their data without all the traditional steps required of data engineers. This helps organizations bridge gaps between IT and business users and reduce time and bottlenecks for self-service and custom departmental reporting.
  • Why it’s a Differentiator: While Power BI Datamarts are similar to Power BI dataflows, they add value by supporting more than just single-table use cases, making them ideal for combining and working with data across multiple sources in a single place. They also provide a great option for Mac users to engage with data in the Power BI ecosystem. Data modeling capabilities are enabled by hosting PowerQuery and modeling tools directly in the web service rather than in local development.

Q2 Other News

  • Keep up with the Power BI Roadmap for all new and generally available feature releases.
  • Microsoft Inspire (July 19-20, 2022): a partner event focused on Microsoft cloud partner programs.

Q1 Product Release Updates

  • In the February 2022 Power BI update, Microsoft released the ability to create, modify, and execute Power BI Deployment Pipelines directly from Azure DevOps using the new Power BI automation tools extension. This open-source Azure DevOps extension removes the need to use APIs or scripts to manage these pipelines.
    • The new Power BI automation tools extension for Azure DevOps allows organizations to quickly incorporate Power BI dataset and report management into their existing Azure DevOps pipelines, enabling continuous integration and continuous delivery (CI/CD) for Power BI. Automating this management process for Power BI datasets and reports gets decision makers the latest information more quickly and accurately, removing the error introduced by manual processes.
    • With the simplicity of incorporating this capability, Power BI developers can automate delivery of reports and datasets to users, unlike with other BI tools. Managing workspaces, user access, and promotion processes for these Power BI objects further enhances Power BI’s already impressive list of features and capabilities.
    • Incorporating the Power BI automation tools extension into existing Azure DevOps pipelines is fast and easy. There is no longer a need for someone to manually promote datasets and reports, freeing them up for other tasks and significantly reducing the potential for error caused by manual promotions.


These updates are current as of Q2 2022. Keep an eye out for quarterly updates on technologies within the data and analytics space.

Patrick Vinton Patrick oversees R&D and is responsible for the technical direction of Analytics8. When he's not working, he's probably playing with his 2 sons. If the kids are with the babysitter, he's sharing a bottle of wine with his wife while binging on Netflix - probably a documentary or historical drama.
Tony Dahlager Tony is Analytics8’s Managing Director of Data Management, leading sales, marketing, partnerships, and consulting enablement for our data management service line.
Kevin Lobo Kevin is our Managing Director of Analytics and is based out of our Chicago office. He leads our Analytics service line, overseeing its strategic direction and messaging, while ensuring delivery of high impact analytics solutions for our clients. Outside of work, Kevin enjoys spending time with his wife and daughter, as well as running and live music.