Single-cell genomic analysis accelerated by NVIDIA on Google Cloud

In the past decade, the Healthcare and Life Sciences industry has enjoyed a boom in technological and scientific advancement. New insights and possibilities are revealed almost daily. At Google Cloud, driving innovation in cloud computing is in our DNA. Our team is dedicated to sharing ways Google Cloud can be used to accelerate scientific discovery. For example, the recent announcement of AlphaFold2 showcases a scientific breakthrough, powered by Google Cloud, that represents a quantum leap in the field of proteomics. In this blog, we'll review another omics use case, single-cell analysis, and how Google Cloud's Dataproc and NVIDIA GPUs can help accelerate that analysis.

The Need for Performance in Scientific Analysis

The ability to understand the causal relationship between genotypes and phenotypes is one of the long-standing challenges in biology and medicine. The complexity of biological systems spans from the code of life itself (DNA), through the expression of genes (RNA), to the translation of gene transcripts into proteins that function in different pathways, cells, and tissues within an organism. Even the smallest changes in our DNA can have large impacts on protein expression, structure, and function, which ultimately drive development and response at both the cellular and organism levels. And as the omics space becomes increasingly data- and compute-intensive, research requires an adequate informatics infrastructure: one that scales with growing data demands, enables a diverse range of resource-intensive computational activities, and is affordable and efficient, reducing data bottlenecks and enabling researchers to maximize insight.

But where do all these data and compute challenges come from, and what makes scientific study so arduous? The layers of biological complexity become apparent as soon as you look not just at the genes themselves, but at their expression. Although all the cells in our body share nearly identical genotypes, our many diverse cell types (e.g. hepatocytes versus melanocytes) express a unique subset of genes necessary for specific functions, making transcriptomics a more powerful method of analysis by allowing researchers to map gene expression to observable traits. Studies have shown that gene expression is heterogeneous, even in similar cell types. Yet conventional sequencing methods require DNA or RNA extracted from an entire cell population. The development of single-cell sequencing was therefore pivotal to the omics field: single-cell RNA sequencing has been critical in allowing scientists to study transcriptomes across large numbers of individual cells.

Despite its potential, and the increasing availability of single-cell sequencing technology, several obstacles remain: an ever-increasing volume of high-dimensionality data, the need to integrate data across different types of measurements (e.g. genetic variants, transcript and protein expression, epigenetics) and across samples or conditions, as well as varying levels of resolution and the granularity needed to map specific cell types or states. These challenges present themselves in a number of ways, including background noise, signal dropouts requiring imputation, and limited bioinformatics pipelines that lack statistical flexibility. These and other challenges result in analysis workflows that are very slow, prohibiting the iterative, visual, and interactive analysis required to detect differential gene activity.

Accelerating Performance

Cloud computing can help not only with data challenges, but also with some of the biggest obstacles: scalability, performance, and automation of analysis. To address several of the data and infrastructure challenges facing single-cell analysis, NVIDIA developed end-to-end accelerated single-cell RNA sequencing workflows that can be paired with Google Cloud Dataproc, a fully managed service for running open source frameworks like Spark, Hadoop, and RAPIDS. The Jupyter notebooks that power these workflows include examples using samples such as human lung cells and mouse brain cells, and demonstrate the speedup of GPU-based workflows relative to CPU-based processing.

Google Cloud Dataproc powers the NVIDIA GPU-based approach and demonstrates data processing capabilities and acceleration that can deliver considerable performance gains. When paired with RAPIDS, practitioners can accelerate data science pipelines on NVIDIA GPUs, reducing operations like data loading, processing, and training from hours to seconds. RAPIDS abstracts away the complexities of accelerated data science by building on familiar Python and Java library APIs. When applying RAPIDS and NVIDIA-accelerated compute to single-cell genomics use cases, practitioners can churn through the analysis of a million cells in only a few minutes.
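
To make that concrete, here is a minimal, hypothetical sketch of the kind of GPU-accelerated step such a workflow performs, using the RAPIDS cuDF and cuML libraries. The file name, parameter values, and cluster count are illustrative assumptions, not the contents of NVIDIA's notebooks.

import cudf
from cuml.cluster import KMeans
from cuml.decomposition import PCA
from cuml.manifold import UMAP

# Hypothetical, already-normalized cell-by-gene matrix (rows = cells).
counts = cudf.read_csv("normalized_counts.csv")

# Dimensionality reduction, embedding, and clustering all run on the GPU.
pcs = PCA(n_components=50).fit_transform(counts)
embedding = UMAP(n_neighbors=15, min_dist=0.3).fit_transform(pcs)
clusters = KMeans(n_clusters=12).fit_predict(pcs)

print(clusters[:10])

On a GPU-equipped Dataproc cluster, calls like these stand in for their CPU-bound pandas and scikit-learn counterparts with little code change.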

Give it a Try

The journey to realizing the full potential of omics is long, but through collaboration with industry experts, customers, and partners like NVIDIA, Google Cloud is here to help shine a light on the road ahead. To learn more about the notebook provided for single-cell genomic analysis, please take a look at NVIDIA's walkthrough. To give this pattern a try on Dataproc, please visit our technical reference guide.

Converging architectures: Bringing data lakes and data warehouses together

Historically, data warehouses have been painful to manage. The legacy, on-premises systems that worked well for the past 40 years have proved to be expensive and present many challenges around data freshness and scaling. Furthermore, they cannot easily provide the AI or real-time capabilities that modern businesses need. We even see this with newly created cloud data warehouses: many still lack AI capabilities, despite positioning themselves as modern data warehouses, and are really a lift-and-shift version of the legacy on-premises environments moved to the cloud.

At the same time, on-premises data lakes have had their own challenges. They promised a lot and looked good on paper, with low cost and the ability to scale. In reality, however, those promises did not materialize for many organizations, mainly because the lakes were not easily operationalized, productionized, or utilized, which in turn increased the overall total cost of ownership. Data lakes also created significant data governance challenges: they did not work well with existing IAM and security models, and they ended up creating data silos because data is not easily shared across the Hadoop environment.

With these varying choices, customers would choose the environment that made sense for them: perhaps a pure data warehouse, perhaps a pure data lake, or a combination. This leads to a set of tradeoffs for nearly any real-world customer working with real-world data and use cases. This past approach has therefore naturally set up a model in which we see different and often disconnected teams setting up shop within organizations, with users split between the data warehouse and the data lake.

Data warehouse users tend to be closer to the business and have ideas about how to improve analysis, often without the ability to explore the underlying data to drive a deeper understanding. Data lake users, on the contrary, are closer to the raw data and have the tools and capabilities to explore it. Because they spend so much time doing this, they are focused on the data itself and less focused on the business. This disconnect robs the business of the opportunity to find insights that would drive it forward to higher revenues, lower costs, lower risk, and new opportunities.

Since then, the two systems have co-existed and complemented each other as the two main data analytics systems of the enterprise, residing side by side in the shared IT sphere. These are also the data systems at the heart of any digital transformation of the business and the move to a fully data-driven culture. As more organizations migrate their traditional on-premises systems to the cloud and to SaaS solutions, enterprises are rethinking the boundaries of these systems and moving toward a more converged analytics platform.

This rethinking has led to the convergence of data lakes and warehouses, as well as of data teams across organizations. The cloud offers managed services that help expedite this convergence so that any data person can start to get insight and value out of the data, regardless of the system it lives in. The benefits of a converged data lake and data warehouse environment present themselves in several ways, most of them driven by the ability to provide managed, scalable, and serverless technologies. As a result, the distinction between storage and computation is blurred: it is no longer important to explicitly manage where data is stored or in what format. Access is democratized, so users can reach the data regardless of infrastructure limitations. From a data user's perspective, it doesn't really matter whether the data resides in a data lake or a data warehouse; they do not look at which system the data is coming from. What they care about is what data they have, whether they can trust it, the volume of data they can ingest, and whether it is available in real time. They can also discover and manage data across varied datastores, moving away from a siloed world into an integrated data ecosystem and, most importantly, analyzing and processing data with any person or tool.

At Google Cloud, we provide a cloud-native, highly scalable and secure, converged solution that delivers choice and interoperability to customers. Our cloud-native architecture reduces cost and improves efficiency for organizations. For example, BigQuery's full separation of storage and compute allows BigQuery compute to be brought to other storage mechanisms through federated queries. The BigQuery Storage API, in turn, allows you to treat the data warehouse like a data lake: it provides direct access to the data residing in BigQuery, so you can, for example, use Spark to read data in the warehouse without affecting the performance of other jobs accessing it. On top of this, Dataplex, our intelligent data fabric service, provides data governance and security capabilities across the various storage tiers built on Cloud Storage and BigQuery.
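
As a brief illustration of that pattern, here is a minimal PySpark sketch that reads a BigQuery table through the Storage API via the spark-bigquery connector. The project, dataset, table, and column names are placeholders, and the connector must already be available on the cluster (for example, via the --jars or --packages option on Dataproc).

from pyspark.sql import SparkSession

# Assumes the spark-bigquery connector is on the cluster classpath;
# project, dataset, table, and column names below are placeholders.
spark = SparkSession.builder.appName("bq-storage-read").getOrCreate()

orders = (
    spark.read.format("bigquery")
    .option("table", "my-project.sales_dw.orders")
    .load()
)

# The read is served by the BigQuery Storage API, so Spark streams rows
# directly from BigQuery managed storage rather than issuing a query.
orders.groupBy("region").count().show()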

There are many benefits to be gained from the convergence of data warehouses and data lakes; if you would like to learn more, here's the full paper.

Save messages, money, and time with Pub/Sub topic retention

Starting today, there is a simpler, more useful way to save and replay messages that are published to Pub/Sub: topic retention. Prior to topic retention, you needed to individually configure and pay for message retention in each subscription. Now, when you enable topic retention, all messages sent to the topic within the chosen retention window are accessible to all the topic’s subscriptions, without increasing your storage costs when you add subscriptions. Additionally, messages will be retained and available for replay even if there are no subscriptions attached to the topic at the time the messages are published. 

Topic retention extends Pub/Sub’s existing seek functionality—message replay is no longer constrained to the subscription’s acknowledged messages. You can initialize new subscriptions with data retained by the topic, and any subscription can replay previously published messages. This makes it safer than ever to update stream processing code without fear of data processing errors, or to deploy new AI models and services built on a history of messages. 

Topic retention explained

With topic retention, the topic is responsible for storing messages, independently of subscription retention settings. The topic owner has full control over the topic retention duration and pays the full cost associated with message storage by the topic. As a subscription owner, you can still configure subscription retention policies to meet your individual needs.

Topic-retained messages are available even when the subscription is not configured to retain messages

Initializing data for new use cases

As organizations become more mature at using streaming data, they often want to apply new use cases to existing data streams that they’ve published to Pub/Sub topics. With topic retention, you can access the history of this data stream for new use cases by creating a new subscription and seeking back to a desired point in time.

Using the gcloud CLI

These two commands initialize a new subscription and replay data from two days in the past. Retained messages are available within a minute after the seek operation is performed.
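
The commands themselves appear in the original post as an embedded snippet; a sketch of what they might look like is below, where the topic and subscription names are placeholders.

# Create a new subscription on a topic that has topic retention enabled.
gcloud pubsub subscriptions create my-subscription --topic=my-topic

# Seek the subscription two days into the past so retained messages are
# redelivered (GNU date syntax shown; adjust for your shell).
gcloud pubsub subscriptions seek my-subscription \
  --time=$(date -u -d "2 days ago" +%Y-%m-%dT%H:%M:%SZ)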

Choosing the retention option that’s right for you

Pub/Sub lets you choose between several different retention policies for your messages; here's an overview of how we recommend using each type.

Topic retention lets you pay just once for all attached subscriptions, regardless of when they were created, to replay all messages published within the retention window. We recommend topic retention in circumstances where it is desirable for the topic owner to manage shared storage.

Subscription retention allows subscription owners, in a multi-tenant configuration, to guarantee their retention needs independently of the retention settings configured by the topic owner.

Snapshots are best used to capture the state of a subscription at the time of an important event, e.g. an update to subscriber code when reading from the subscription.

Transitioning from subscription retention to topic retention

You can configure topic retention when creating a new topic or updating an existing topic via the Cloud Console or the gcloud CLI. In the CLI, the command would look like:

gcloud pubsub topics update myTopic --message-retention-duration=7d

If you are migrating to topic retention from subscription storage, subscription storage can be safely disabled after 7 days.

What’s next

Pub/Sub topic retention makes reprocessing data with Pub/Sub simpler and more useful. To get started, you can read more about the feature, visit the pricing documentation, or simply enable topic retention on a topic using Cloud Console or the gcloud CLI.

How Renault solved scaling and cost challenges on its Industrial Data platform using BigQuery and Dataflow

Following the first significant bricks of data management implemented in our plants to serve initial use cases such as traceability and operational efficiency improvements, we confirmed we got the right solution to collect industrial data from our machines and operations at scale. We started to deploy that solution and we therefore needed a state-of-the-art data platform for contextualization, processing and hosting all the data gathered. This platform had to be scalable to deploy across our entire footprint, and also affordable and reliable, to foster the use of data in our operations. Our partnership with Google Cloud in 2020 was a key lever to reach this target.

- Jean-Marc Chatelanaz, Industry Data Management 4.0 Director, Renault

French multinational automotive manufacturer Renault Group has been investing in Industry 4.0 since the early days. A primary objective of this transformation has been to leverage manufacturing and industrial equipment data through a robust and scalable platform. Renault designed an industrial data acquisition layer and connected it to Google Cloud, using optimized big data products and services that together form Renault’s Industrial Data Platform.

In this blog, we’ll outline the evolution of that data platform on Google Cloud and how Renault worked with Google Cloud’s professional services to design and build a new architecture to achieve a secure, reliable, scalable, and cost-effective data management platform. Here’s how they’ve done it.

All started with Renault’s digital transformation

Renault Group has manufactured cars since 1898. Today, it is an international group with five brands that sold close to 3 million vehicles worldwide in 2020, providing sustainable mobility throughout the world with 40 manufacturing sites and more than 170,000 employees.

A leader in the Industry 4.0 movement, Renault launched its digital transformation plan in 2016 to achieve greater efficiency and modernize its traditional manufacturing and industrial practices. A key objective of this transformation was using the industrial data from up to 40 sites globally to promote data-based business decisions and create new opportunities.

Several initiatives were already growing in silos, such as Conditional Based Maintenance, Full Track and Trace, Industrial Data Capture and AI projects. In 2019, Renault kicked off a program named Industry Data Management 4.0 (IDM 4.0) to federate all those initiatives as well as future ones, and, most of all, to design and build a unique data platform and a framework for all Renault’s industrial data. 

The IDM 4.0 would ultimately enable Renault’s manufacturing, supply chain, and production engineering teams to quickly develop analytics, AI, and predictive applications based on a single industrial data capture and data referential platform as shown with these pillars:

Renault Group data transformation pillars

With Renault's leadership in Industry 4.0 for manufacturing and supply chain, and with Google's expertise in big data, analytics and machine learning since its founding, Google Cloud solutions were a clear match for Renault's ambitions.

Building IDM 4.0 in the cloud

Before IDM 4.0, Renault Digital was the internal team that incubated the idea of data in the cloud. Following some trials, the team was able to reduce storage costs by more than 50% by moving to Google Cloud. Key business requirements were data reliability, long-term storage, and unforgeability, all of which led the team to use Apache Spark running on fully-managed Dataproc for data processing, and Spanner as the main database.

The IDM 4.0 program aimed to scale and enable more business use cases while keeping the platform cost-effective, with maximum performance and reliability. Meeting this goal required reviewing the architecture, as cost projections for the expected business usage went beyond the project budget, holding back additional business exploration.

Accelerating the redesign of that core data platform was a pillar of the Renault-Google Cloud partnership. Google Cloud's professional services organization (PSO) helped define the best architecture to meet requirements, solve challenges, and handle 40 plants' worth of scaling data in a reliable and cost-sustainable way.

Renault Group IDM 4.0 layers

The data capture journey is complex and includes several components. Basically, Renault built a specific process relying on standard data models designed with OPC-UA, an open machine-to-machine communication protocol. In the new Google Cloud architecture, this data model is implemented in BigQuery such that most writes are append-only and it is not necessary to do lots of join operations to retrieve the required information. BigQuery is complemented with a cache in Memorystore to retrieve up-to-date states at all times. This architecture enabled high-performing and cost-efficient reads and writes.

The team also started developing with the Beam framework and Dataflow. Working with Beam was not only as easy as working with Spark, but also unlocked a number of benefits:

A unified model for batch and streaming processing: less coding variety and more flexibility in choosing the types of processing jobs depending on needs and cost target

A thriving ecosystem around Beam: many available connectors (PubsubIO, BigQueryIO, JMSIO, etc)

Simplified operations: efficient autoscaling, smoother updates, finer cost tracking, easy monitoring, end-to-end testing, and error management.

Dataflow has since become the go-to tool for most data processing needs in the platform. Pub/Sub was also a natural choice for streaming data processing, along with other third-party messaging systems. The teams are also maintaining their advanced use of Google Cloud’s managed services such as Google Kubernetes Engine (GKE), Cloud Composer, and Memorystore.
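
For readers less familiar with Beam, here is a minimal Python sketch of the kind of streaming pipeline described above, reading events from Pub/Sub and appending them to BigQuery on Dataflow. All project, region, topic, table, and bucket names are placeholders, and the actual Renault pipelines are more involved.

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder pipeline options; adjust for your environment.
options = PipelineOptions(
    streaming=True,
    runner="DataflowRunner",
    project="my-project",
    region="europe-west4",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadPlantEvents" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/plant-telemetry")
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        # Append into a table whose schema was pre-modeled in BigQuery.
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:manufacturing.telemetry",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
    )

Because Beam uses a unified model, essentially the same pipeline code can run in batch mode by swapping the source, which is the flexibility called out in the list above.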

The new platform is built to scale, and as of Q1 2021, the IDM 4.0 team has been connecting more than 4,900 industrial appliances through Renault’s internal data acquisition solution, transmitting more than 1 billion messages per day to the platform in Google Cloud!

Focusing on cost efficiency

While the migration to the new architecture was ongoing, cost optimization became a key decision driver, especially with the COVID-19 pandemic and its impact on the world economy.

Hence, Renault’s management team and Google Cloud’s PSO agreed to allocate a special task force to optimize costs on parts of the platform that were not yet migrated. 

This special task force discovered that one Apache Spark cluster was continuously performing Cloud Storage Class A and B operations every few tens of milliseconds. This was inconsistent with the majority of incoming flows, which were hourly or daily batches. One correction we applied was simply to increase the interval at which Spark polls the filesystem, i.e. the spark.sql.streaming.pollingDelay parameter. Afterwards, we confirmed a 100x drop in Cloud Storage Class A calls and a meaningful drop in Class B calls, which resulted in a 50% decrease in production costs.

Drop in cost of storage  following cost optimization actions
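
For illustration, here is a minimal PySpark sketch of how such a setting can be applied when building the Spark session. The 60-second value is an assumption for illustration only and should be tuned to the actual arrival pattern of the batches.

from pyspark.sql import SparkSession

# spark.sql.streaming.pollingDelay controls how long a structured streaming
# query waits before polling for new data when none is available (10 ms by
# default). The value below is purely illustrative.
spark = (
    SparkSession.builder
    .appName("hourly-ingest")
    .config("spark.sql.streaming.pollingDelay", "60s")
    .getOrCreate()
)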

Renault now uses  Dataflow to ingest and transform data from manufacturing plants as well as from other connected key referential databases. BigQuery is used to store and analyze massive amounts of data, such as packages and vehicle tracking, with many more data sources in the works. 

 The below diagram shows last year’s impressive achievements!

Evolution of data ingestion and platform costs (not to scale)

The new IDM 4.0 platform started to run production workloads just a few months ago, and Renault expects 10 times more industrial data to arrive over the next two years, with the platform designed to keep costs, performance, and reliability optimal.

Unlocking new opportunities and democratizing data access

The IDM 4.0 team has succeeded in gathering industrial data from plants, and merging and harmonizing this data in a scalable, reliable, and cost-effective platform. The team also managed to expose data in a controlled and secure way to data scientists, business teams or any application. 

In this blog, we've given an overview of use cases ranging from tracking to conditional maintenance, but this platform can open many other possibilities for innovation and business impact, such as self-service analytics that will democratize the use of data and support data-driven business decisions.

This Renault data transformation with Google Cloud demonstrates the potential of Industry 4.0 to improve Renault’s manufacturing, process engineering, and supply chain processes—and also provides an example for other companies looking to leverage data and cloud platforms for improved results. To learn more about BigQuery, click here

Acknowledgments 

We'd like to thank all the collaborators from Google and Renault who participated actively in writing this post, working tremendously well as one team for more than a year, fully remote:

Writers from Google:

Jean-Francois Macresy, Consultant, IDM4.0 Architecture best practices
Jeremie Gomez, Strategic Engineer, IDM4.0 Data specialist
Rifel Mahiou, Program Manager, IDM4.0 Partnership strategy & governance
Razvan Culea, Data Engineer, IDM4.0 Data optimization

Reviewers from Renault:

Sebastien Aubron, IDM4.0 Program Manager
Jean-Marc Chatelanaz, IDM4.0 Director
Matthieu Dureau, IDM4.0 platform Product Owner
David Duarte, IDM4.0 platform Data Engineer

BigQuery Admin reference guide: Monitoring

Last week, we shared information on BigQuery APIs and how to use them, along with another blog on workload management best practices. This blog focuses on effectively monitoring BigQuery usage and related metrics to operationalize the workload management practices we've discussed so far. We'll cover:

Monitoring Options for BigQuery Resource

BigQuery Monitoring Best Practices

Visualization Options For Decision Making

Tips on Key Monitoring Metrics 

Monitoring options for BigQuery

Analyzing and monitoring BigQuery usage is critical for businesses for overall cost optimization and performance reporting. BigQuery provides its native admin panel with overview metrics for monitoring. BigQuery is also well integrated with existing GCP services like Cloud Logging to provide detailed logs of individual events and Cloud Monitoring dashboards for analytics, reporting and alerting on BigQuery usage and events. 

BigQuery Admin Panel

BigQuery natively provides an admin panel with overview metrics. This feature is currently in preview and only available for flat-rate customers within the Admin Project. This option is useful for organization administrators to analyze and monitor slot usage and overall performance at the organization, folder and project levels. Admin panel provides real time data for historical analysis and is recommended for capacity planning at the organization level. However, it only provides metrics for query jobs. Also, the history is only available for up to 14 days.

Cloud Monitoring

Users can create custom monitoring dashboards for their projects using Cloud Monitoring. This provides high-level monitoring metrics, and options for alerting on key metrics and automated report exports. There is a subset of metrics that are particularly relevant to BigQuery including slots allocated, total slots available, slots available by job, etc. Cloud Monitoring also has a limit of 375 projects that can be monitored per workspace (as of August 2021). This limit can be increased upon request. Finally, there is limited information about reservations in this view and no side by side information about the current reservations and assignments.

Audit logs 

Google Cloud Audit Logs provide information regarding admin activities, system changes, data access and data updates to help meet security and compliance needs. The BigQuery data activity logs provide the following key fields:

query – The BigQuery SQL executed

startTime – Time when the job started

endTime – Time when the job ended

totalProcessedBytes – Total bytes processed for a job

totalBilledBytes – Processed bytes, adjusted by the job’s CPU usage

totalSlotMs – The total slot time consumed by the query job

referencedFields – The columns of the underlying table that were accessed

Users can set up an aggregated logs sink at organization, folder or project level to get all the BigQuery related logs:
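
For example, an aggregated sink at the organization level that routes BigQuery audit logs into a BigQuery dataset might look like the following sketch; the sink name, project, dataset, and organization ID are placeholders.

gcloud logging sinks create bq-audit-sink \
  bigquery.googleapis.com/projects/my-monitoring-project/datasets/bq_audit_logs \
  --organization=123456789012 \
  --include-children \
  --log-filter='protoPayload.serviceName="bigquery.googleapis.com"'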

Other Filters:

Logs from Data Transfer Service

protoPayload.serviceName=bigquerydatatransfer.googleapis.com

Logs from BigQuery Reservations API

protoPayload.serviceName=bigqueryreservation.googleapis.com

INFORMATION_SCHEMA VIEWS

BigQuery provides a set of INFORMATION_SCHEMA views, secured for different roles, to quickly get access to BigQuery job statistics and related metadata. These views (also known as system tables) are partitioned and clustered for faster extraction of metadata and are updated in real time. With the right set of permissions and access levels, a user can monitor and review job information at the user, project, folder and organization level. These views allow users to:

Create customized dashboards by connecting to any BI tool 

Quickly aggregate data across many dimensions such as user, project, reservation, etc.

Drill down into jobs to analyze total cost and time spent per stage

See holistic view of the entire organization

For example, the following query provides information about the top 2 jobs in the project with details on job id, user and bytes processed by each job.
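
The query in the original post is shown as an image; a sketch of an equivalent query against INFORMATION_SCHEMA is below. The region qualifier and the one-day window are assumptions to adjust for your environment.

SELECT
  job_id,
  user_email,
  total_bytes_processed
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
ORDER BY total_bytes_processed DESC
LIMIT 2;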

Data Studio

Leverage these easy to set up public Data Studio dashboards for monitoring slot and reservation usages,  query troubleshooting, load slot estimations, error reporting, etc. Check out this blog for more details on performance troubleshooting using Data Studio.

Looker 

Looker marketplace provides  BigQuery Performance Monitoring Block for monitoring BigQuery usage. Check out this blog for more details on performance monitoring using Looker. 

Monitoring best practices

Key metrics to monitor

Typical questions administrators or workload owners would like to answer include:

What is my slots utilization for a given project?

How much data scanning and processing takes place during a given day or hour?

How many users are running jobs concurrently?

How are performance and throughput changing over time?

How can I appropriately perform cost analysis for showback and chargeback?

One of the most demanding analyses is understanding how many slots are right for a given workload, i.e. do we need more or fewer slots as workload patterns change?

Below is a list of key metrics and trends to observe for better decision making on BigQuery resources:

Monitor slot usage and performance trends (week over week, month over month). Correlate trends with any workload pattern changes, for example:

Are more users being onboarded within the same slot allocation?

Are new workloads being enabled with the same slot allocation?

You may want to allocate more slots if you see:

Concurrency – consistently increasing

Throughput – consistently decreasing

Slot Utilization – consistently increasing or staying above 90%

If slot utilization has spikes, are they on a regular frequency?

In this case, you may want to leverage flex slots for predictable spikes

Can some non-critical workloads be time shifted?

For a given set of jobs with the same priority, e.g.  for a specific group of queries or users:

Avg. Wait Time – consistently increasing

Avg. query run-time – consistently increasing

Concurrency and throughput

Concurrency is the number of queries that can run in parallel with the desired level of performance, for a set of fixed resources. In contrast, throughput is the number of completed queries for a given time duration and a fixed set of resources.

In the blog BigQuery workload management best practices, we discussed in detail how BigQuery leverages dynamic slot allocation at each step of query processing. The slot replenishment process ties concurrency and throughput together: more complex queries may require more slots, and hence leave fewer slots available for other queries. If there is a requirement for a certain level of concurrency and a minimum run time, increased slot capacity may be required. In contrast, simple and smaller queries give you faster replenishment of slots, and hence higher throughput to start with for a given workload. Learn more about BigQuery's fair scheduling and query processing in detail.

Slot utilization rate

The slot utilization rate is the ratio of slots used to total available slot capacity for a given period of time. It highlights opportunities for workload optimization, so you may want to dig into the utilization rate of available slots over a period. If you see that, on average, a low percentage of available slots is being used during a certain hour, you may add more scheduled jobs within that hour to further utilize your available capacity. On the other hand, a high utilization rate means that you should either move some scheduled workloads to different hours or purchase more slots.

For example: 

Given a 500 slot reservation (capacity), the following query can be used to find total_slot_ms over a period of time:
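
The query itself appears as an image in the original post; a sketch of an equivalent query is below, where the region qualifier and time window are placeholder assumptions.

SELECT SUM(total_slot_ms) AS total_slot_ms
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time BETWEEN TIMESTAMP("2021-08-01 09:00:00 UTC")
                        AND TIMESTAMP("2021-08-01 10:00:00 UTC");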

Let's say we have the following results from the query above:

sum(total_slot_ms) for a given second is 453,708 ms
sum(total_slot_ms) for a given hour is 1,350,964,880 ms
sum(total_slot_ms) for a given day is 27,110,589,760 ms

Therefore, slot utilization rate can be calculated  using the following formula: 

Slot Utilization = sum(total_slot_ms) / slot capacity available in ms
By second: 453,708 / (500 * 1000) = 0.9074 => 90.74%
By hour: 1,350,964,880 / (500 * 1000 * 60 * 60) = 0.7505 => 75.05%
By day: 27,110,589,760 / (500 * 1000 * 60 * 60 * 24) = 0.6276 => 62.76%

Another common metric used to understand slot usage patterns is to look at the average slot time consumed over a period for a specific job or workloads tied to a specific reservation. 

Average slot usage over a period: Highly relevant for workloads with consistent usage

Metric:

SUM(total_slot_ms) / {time duration in milliseconds} => custom duration

Daily Average Usage: SUM(total_slot_ms) / (1000 * 60 * 60 * 24) => for a given day

Example Query:
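
The example query in the original post is an image; a sketch of a query implementing the daily-average formula above is shown here, with the region qualifier and look-back window as assumptions.

SELECT
  DATE(creation_time) AS usage_date,
  SUM(total_slot_ms) / (1000 * 60 * 60 * 24) AS avg_daily_slots
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY usage_date
ORDER BY usage_date;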

Average slot usage for an individual job: Job level statistics

Average slot utilization over a specific time period is useful for monitoring trends and helps you understand how slot usage patterns are changing or whether there is a notable change in a workload. You can find more details about trends in the 'Take action' section below. Average slot usage for an individual job is useful for understanding query run-time estimates, identifying outlier queries, and estimating slot capacity during capacity planning.

Chargeback

As more users and projects are onboarded to BigQuery, it is important for administrators not only to monitor and alert on resource utilization, but also to help users and groups efficiently manage cost and performance. Many organizations require that individual project owners be responsible for resource management and optimization. Hence, it is important to provide project-level reporting that summarizes costs and resources for the decision makers.

Below is an example of a reference architecture that enables comprehensive reporting,  leveraging audit logs, INFORMATION_SCHEMA and billing data. The architecture highlights persona based reporting for admin and individual users or groups by leveraging authorized view based access to datasets within a monitoring project.

Export audit log data to BigQuery with the specific resources you need (in this example, BigQuery). You can also export aggregated data at the organization level.

The INFORMATION_SCHEMA provides BigQuery metadata and job execution details for the last six months. You may want to persist relevant information for your reporting into a BigQuery dataset. 

Export billing data to BigQuery for cost analysis and spend optimization.

With BigQuery, leverage security settings such as authorized views to provide separation of data access by project or by persona for admins vs. users.

Analysis and reporting dashboards built with visual tools such as Looker represent the data from BigQuery dataset(s) created for monitoring. In the chart above, examples of dashboards include: 

Key KPIs for admins such as usage trend or spend trends

Data governance and access reports

Showback/Chargeback by projects

Job level statistics 

User dashboards with relevant metrics such as query stats, data access stats and job performance

Billing monitoring

To operationalize showback or chargeback reporting, cost metrics are important to monitor and include in your reporting application. BigQuery billing is associated with the project as the accounting entity. Google Cloud billing reports help you understand trends, project your resource costs, and answer questions such as:

What is my BigQuery project cost this month?

What is the cost trend for a resource with a specific label?

What is my forecasted future cost based on historical trends for a BigQuery project?

You can refer to these examples to get started with billing reports and understand what metrics to monitor. Additionally, you can export billing and audit metrics to a BigQuery dataset for comprehensive analysis alongside resource monitoring.

As a best practice, monitoring trends is important to optimize spend on cloud resources. This article provides a visualization option with Looker to monitor trends. You can take advantage of the readily available Looker block for spend analytics and the block for audit data visualization for your projects today.

When to use

The following tables provide guidance on using the right tool for monitoring based on the feature requirements and use cases.

The following features can be considered when choosing the mechanism to use for BigQuery monitoring:

Integration with BigQuery INFORMATION_SCHEMA – Leverage the data from INFORMATION_SCHEMA for monitoring
Integration with other data sources – Join this data with other sources like business metadata, budgets stored in Google Sheets, etc.
Monitoring at org level – Monitor all the organization's projects together
Data/filter based alerts – Alert on specific filters or data selection in the dashboard. For example, send alerts for a chart filtered by a specific project or reservation.
User based alerts – Alert for a specific user
On-demand report exports – Export the report as PDF, CSV, etc.

1 The BigQuery Admin Panel uses INFORMATION_SCHEMA under the hood.

2 Cloud monitoring provides only limited integration as it surfaces only high-level metrics.

3 You can monitor up to 375 projects at a time in a single Cloud Monitoring workspace.

BigQuery monitoring is important across different use cases and personas in the organization. 

Personas

Administrators – Primarily concerned with secure operations and health of the GCP fleet of resources. For example, SREs

Platform Operators – Often run the platform that serves internal customers. For example, Data Platform Leads

Data Owners / Users – Develop and operate applications, and manage a system that generates source data. This persona is mostly concerned with their specific workloads. For example, Developers

The following table provides guidance on the right tool to use for your specific requirements:

Take action

To get started quickly with monitoring BigQuery, you can leverage the publicly available Data Studio dashboard and related GitHub resources. Looker also provides a BigQuery Performance Monitoring Block for monitoring BigQuery usage. To quickly deploy billing monitoring on GCP, see the reference blog and related GitHub resources.

The key to successful monitoring is to enable proactive alerts. For example, setting up alerts when the reservation slot utilization rate crosses a predetermined threshold. Also, it’s important to enable the individual users and teams in the organization to monitor their workloads using a self-service analytics framework or dashboard. This allows the users to monitor trends for forecasting resource needs and troubleshoot overall performance.

Below are additional examples of monitoring dashboards and metrics:

Organization Admin Reporting (proactive monitoring)

Alert based on thresholds like 90% slot utilization rate 

Regular reviews of consuming projects

Monitor for seasonal peaks

Review job metadata from INFORMATION_SCHEMA for large queries using the total_bytes_processed and total_slot_ms metrics

Develop data slice and dice strategies in the dashboard for appropriate chargeback

Leverage audit logs for data governance and access reporting

Specific Data Owner Reporting (self-service capabilities)

Monitor for large queries executed in the last X hours

Troubleshoot job performance using concurrency, slots used and time spent per job stage, etc.

Develop error reports and alert on critical job failures

Understand and leverage INFORMATION_SCHEMA for real-time reports and alerts. Review more examples of job stats and a technical deep dive into INFORMATION_SCHEMA in this blog.

Education is leveraging cloud and AI to ensure intelligent safety and incidence management for student success

Campus safety is increasingly important to students and parents. A student poll published by the College Board and Art & Sciences Group found that 72% of students indicated that campus safety was very important to them in college consideration and choice, with 86% reporting that it was very important to their parents.

The latest annual report of the National Center for Education Statistics (NCES) on indicators of school crime and safety found that there were 836,100 total victimizations on campus in the United States, including theft, sexual assault, vandalism, and even violent deaths. Unfortunately, those acts of violence and threats to safety are a global challenge for university staff who are responsible for student safety and wellbeing.

Besides the importance of ensuring campus safety for student success, the Clery Act requires all colleges and universities participating in federal financial aid programs to keep and disclose information about crime on and near their campuses. Compliance is monitored by the Department of Education, and violations of the Clery Act's requirements have resulted in multimillion-dollar civil penalties for several academic institutions.

Google Cloud is helping educational institutions put the right solution in place to ensure campus safety and facilitate compliance with Clery Act requirements.

When we announced Student Success Services last November, our goal was to reinvent how educational institutions attract, support and engage with students and faculty. As part of the offering, we’ve partnered with NowIMS to provide a fully managed intelligent safety and incident management platform in compliance with FERPA, GDPR and COPPA.

NowIMS is provisioned to the customer’s own Google Cloud project through the Google Cloud Marketplace to facilitate automated deployment and provide integrated billing with the rest of the organization’s Google Cloud services. Some of the key functionality includes:

Automated capture and consolidation of data from different sources, including social media, websites, external alarm systems, video cameras and other sources that facilitate user-reported incidents 

Analysis and visualization of data at scale, aggregating all data and providing a single pane of glass view that ensures teams have access to the data and audit logs they need to get work done, complying with security and data governance policies.

Real-time tailored reporting and messaging: Teams will be notified as intelligence is collected and processed, and have access to self-service real-time reports. Each user group will receive tailored communication through their desired channels.

Automated generation of reports: Clery Act requirements bring significant operational overhead, and NowIMS helps to automate the generation of applicable reports, giving time back to campus safety team members to do what matters most–providing personalized support to students.

A customized approach that aligns with the needs of educational institutions

Although NowIMS is a fully managed solution, it integrates with the rest of Student Success Services to offer customization to any educational institution. Our solution is currently being used by higher ed and K-12 customers across the US. For example, Fort Bend Independent School District, which serves 78,500 students across the district's 80 schools, is using NowIMS on Google Cloud to solve student safety and emergency communication challenges. And recently, Alabama A&M University deployed NowIMS to facilitate intelligence tracking for public safety, including issues such as drugs on campus, fights, suspicious packages, and more. Captain Ruble of Alabama A&M stated, "NowIMS is the first tool we have found that will provide a single pane of glass for our operators. Before NowIMS, we had to watch more than 6 other systems." In addition to this public safety use case, the Information Technology department will also use NowIMS to track badge access entry events, as well as network anomaly and outage notifications.

To find out more about how NowIMS detects issues and rapidly responds to risks on and off campus, check out our session from Student Success Week.

What are the newest datasets in Google Cloud?

Editor’s note:  With Google Cloud’s datasets solution, you can access an ever-expanding resource of the newest datasets to support and empower your analyses and ML models, as well as frequently updated best practices on how to get the most out of any of our datasets. We will be regularly updating this blog with new datasets and announcements, so be sure to bookmark this link and check back often.

August 2021

New dataset: Google Cloud Release Notes

Looking for a new way to access Google Cloud Release Notes besides the docs, the XML feed, and Cloud Console? Check out the Google Cloud Release Notes dataset. With up-to-date release notes for all generally available Google Cloud products, this dataset allows you to use BigQuery to programmatically analyze release notes across all products, exploring security bulletin updates, fixes, changes, and the latest feature releases. 

Access the dataset

Access the BigQuery release notes dataset from https://cloud.google.com/release-notes/all

July 2021

Best practice: Use Google Trends data for common business needs

The Google Trends dataset represents the first time we’re adding Google-owned Search data into Datasets for Google Cloud. The Trends data allows users to measure interest in a particular topic or search term across Google Search, from around the United States, down to the city-level. You can learn more about the dataset here, and check out the Looker dashboard here! These tables are super valuable in their own right, but when you blend them with other actionable data you can unlock whole new areas of opportunity for your team. To learn how to make informed decisions with Google Trends data, keep reading.

Access the dataset

New dataset: COVID-19 Vaccination Search Insights

With COVID-19 vaccinations being a topic of interest around the United States, this dataset shows aggregated, anonymized trends in searches related to COVID-19 vaccination and is intended to help public health officials design, target, and evaluate public education campaigns. Check out this interactive dashboard to explore searches for COVID-19 vaccination topics by region.

Access the dataset

Source: https://google-research.github.io/vaccination-search-insights/

June 2021

New dataset: Google Diversity Annual Report 2021

Since 2014, Google has disclosed data on the diversity of its workforce in an effort to bring candid transparency to the challenges technology companies like Google face in recruitment and retention of underrepresented communities. In an effort to make this data more accessible and useful, we’ve loaded it into BigQuery for the first time ever. To view Google’s Diversity Annual Report and learn more, check it out.

Access the dataset

New dataset: Google Trends Top 25 Search terms

The most popular and surging Google Search terms are now available in BigQuery as a public dataset. View the Top 25 and Top 25 rising queries from Google Trends from the past 30-days, including 5 years of historical data across the 210 Designated Market Areas (DMAs) in the US. Keep reading.

Access the dataset

Top 25 Google Search terms, ranked by search volume (1 through 25) and with average search index score across the geographic areas (DMAs) in which it was searched.
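
As a rough illustration only, a query of the kind you might run against this dataset is sketched below; the table path, field names, and DMA value are written from memory and should be verified against the schema in the Marketplace listing before use.

SELECT term, rank, score, week, dma_name
FROM `bigquery-public-data.google_trends.top_terms`
WHERE refresh_date = (
    SELECT MAX(refresh_date) FROM `bigquery-public-data.google_trends.top_terms`)
  AND dma_name = 'New York NY'  -- placeholder DMA; check actual values in the table
ORDER BY rank
LIMIT 25;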

New dataset: COVID-19 Vaccination Access

With metrics quantifying travel times to COVID-19 vaccination sites, this dataset is intended to help Public Health officials, researchers, and Healthcare Providers to identify areas with insufficient access, deploy interventions, and research these issues. Check out how this data is being used in a number of new tools.

Access the dataset

(Image courtesy of Vaccine Equity Planner, https://vaccineplanner.org/)

Best practice: Leveraging BigQuery Public Boundaries datasets for geospatial analytics 

Geospatial data is a critical component for a comprehensive analytics strategy. Whether you are trying to visualize data using geospatial parameters or do deeper analysis or modeling on customer distribution or proximity, most organizations have some type of geospatial data they would like to use – whether it be customer zipcodes, store locations, or shipping addresses. However, converting geographic data into the correct format for analysis and aggregation at different levels can be difficult. In this post, we’ll walk through some examples of how you can leverage the Google Cloud platform alongside Google Cloud Public Datasets to perform robust analytics on geographic data. Keep reading.

Access the dataset

Get the metadata and try BigQuery sandbox 

When you’ve learned about many of our datasets and pre-built solutions from across Google, you may be ready to start querying them. Check out the full dataset directory and read all the metadata at g.co/cloud/marketplace-datasets, then dig into the data with our free-to-use BigQuery sandbox account, or $300 in credits with our Google Cloud free trial.

How to load Salesforce data into BigQuery using a code-free approach powered by Cloud Data Fusion

Organizations are increasingly investing in modern cloud warehouses and data lake solutions to augment analytics environments and improve business decisions. The business value of such repositories increases as customer relationship data is loaded and additional insights are generated.

In this post, we’ll cover different ways to incrementally move Salesforce data into BigQuery using the scalability and reliability of Google services, an intuitive drag-and-drop solution based on pre-built connectors, and the self-service model of a code-free data integration service. 

A Common Data Ingestion Pattern:

To provide a little bit more context, here is an illustrative (and common) use case:

Account, Lead and Contact Salesforce objects are frequently manipulated by call center agents when using the Salesforce application.
Changes to these objects need to be identified and incrementally loaded into a data warehouse solution using either a batch or streaming approach.
A fully managed and cloud-native enterprise data integration service is preferred for quickly building and managing code-free data pipelines.
Business performance dashboards are created by joining Salesforce and other related data available in the data warehouse.

Cloud Data Fusion to the rescue 

To address the Salesforce ETL (extract, transform and load) scenario above, we will be demonstrating the usage of Cloud Data Fusion as the data integration tool. 

Data Fusion is a fully managed, cloud-native, enterprise data integration service for quickly building and managing code-free data pipelines. Data Fusion’s web UI allows organizations to build scalable data integration solutions to clean, prepare, blend, transfer, and transform data without having to manage the underlying infrastructure. Its integration with Google Cloud ensures data is immediately available for analysis. 

Data Fusion offers numerous pre-built plugins for both batch and real-time processing. These customizable modules can be used to extend Data Fusion’s native capabilities and are easily installed though the Data Fusion Hub component.

For Salesforce source objects, the following pre-built plugins are generally available:

Batch Single Source – Reads one sObject from Salesforce. The data can be read using SOQL queries (Salesforce Object Query Language queries) or using sObject names. You can pass incremental/range date filters and also specify primary key chunking parameters. Examples of sObjects are opportunities, contacts, accounts, leads, any custom object, etc. 

Batch Multi Source – Reads multiple sObjects from Salesforce. It should be used in conjunction with multi-sinks.

Streaming Source – Tracks updates in Salesforce sObjects. Examples of sObjects are opportunities, contacts, accounts, leads, any custom object, etc.

If none of these pre-built plugins fit your needs, you can always build your own by using Cloud Data Fusion’s plugin APIs. 

For this blog, we will leverage the out of the box Data Fusion plugins to demonstrate both batch and streaming Salesforce pipeline options.

Batch incremental pipeline

There are many different ways to implement a batch incremental logic. The Salesforce batch multi source plugin has parameters such as “Last Modified After”, “Last Modified Before”, “Duration” and “Offset” which can be used to control the incremental loads.
Here's a look at a sample Data Fusion batch incremental pipeline for the Salesforce objects Lead, Contact and Account. The pipeline uses the previous run's start/end time as the guide for incremental loads.

Batch Incremental Pipeline – From Salesforce to BigQuery

The main steps of this sample pipeline are:

For this custom pipeline, we decided to store start/end times in BigQuery and demonstrate different BigQuery plugins. When the pipeline starts, timestamps are stored in a user checkpoint table in BigQuery. This information is used to guide the subsequent runs and incremental logic.

Using the BigQuery Argument Setter plugin, the pipeline reads from the BigQuery checkpoint table, fetching the minimum timestamp to read from.

With the Batch Multi Source plugin, the objects lead, contact and account are read from Salesforce, using the minimum timestamp as a parameter passed to the plugin.

BigQuery tables lead, contact and account are updated using the BigQuery Multi Table sink plugin.

The checkpoint table is updated with the execution end time followed by an update to current_time column.

Adventurous?

You can exercise this sample Data Fusion pipeline in your development environment by downloading its definition file from GitHub and importing it through the Cloud Data Fusion Studio. After completing the import, adjust the plugin properties to reflect your own Salesforce environment. You will also need to:      

1. Create a BigQuery dataset named from_salesforce_cdf_staging.
2. Create the sf_checkpoint BigQuery table in the dataset from_salesforce_cdf_staging as described below:

3. Insert the following record into the sf_checkpoint table:

Attention: The initial last_completion date = "1900-01-01T23:01:01Z" indicates that the first pipeline execution will read all Salesforce records with a LastModifiedDate greater than 1900-01-01. This is a sample value targeted at initial loads. Adjust the last_completion column as needed to reflect your environment and requirements for the initial run.

After executing this sample pipeline a few times, observe how sf_checkpoint.last_completion column evolves as executions finish. You can also validate that changes are being loaded incrementally into BigQuery tables as shown below:

BigQuery output – Salesforce incremental pipeline

Streaming pipeline  

When using the Streaming Source plugin with Data Fusion, changes in Salesforce sObjects are tracked using PushTopic events. The Data Fusion streaming source plugin can either create a Salesforce PushTopic for you, or use an existing one you defined previously using Salesforce tools. 

The PushTopic configuration defines the types of events (insert, update, delete) that trigger notifications and the object columns in scope. To learn more about Salesforce PushTopics, click here.

When streaming data, there is no need to create a checkpoint table in BigQuery: data is replicated in near real time, automatically capturing only changes as soon as they occur. The Data Fusion pipeline becomes very simple, as demonstrated in the sample below:

Salesforce streaming pipeline with Cloud Data Fusion

The main steps of this sample pipeline are:

1. Add a Salesforce streaming source and provide its configuration details. For this exercise, only inserts and updates are captured from the CDFLeadUpdates PushTopic. As a reference, here is the code we used to pre-create the CDFLeadUpdates PushTopic in Salesforce. The Data Fusion plugin can also create the PushTopic for you if preferred.

Hint: To run this Apex code, log in to Salesforce with the appropriate credentials and privileges, open the Developer Console, and click Debug | Open Execute Anonymous Window.
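The Apex approach above runs inside the Developer Console. If you would rather create the PushTopic programmatically from outside Salesforce, a rough Python equivalent using the REST API (via the simple_salesforce library) might look like the sketch below; the library, credentials, and Lead field list are illustrative assumptions.

```python
from simple_salesforce import Salesforce  # assumed client library

sf = Salesforce(username="user@example.com", password="...", security_token="...")

# Create a PushTopic named CDFLeadUpdates that notifies on Lead inserts and updates only.
sf.PushTopic.create({
    "Name": "CDFLeadUpdates",
    "Query": "SELECT Id, FirstName, LastName, Email, Company, Status FROM Lead",  # field list is illustrative
    "ApiVersion": 48.0,
    "NotifyForOperationCreate": True,
    "NotifyForOperationUpdate": True,
    "NotifyForOperationDelete": False,
    "NotifyForOperationUndelete": False,
    "NotifyForFields": "Referenced",
})
```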

2. Add a BigQuery sink to your pipeline to receive the streaming events. Notice that the BigQuery table is created automatically once the pipeline executes and the first change record is generated.

After starting the pipeline, make some modifications to the Lead object in Salesforce and observe the changes flowing into BigQuery as exemplified below:

BigQuery output – Salesforce streaming pipeline with Cloud Data Fusion

Adventurous?

You can exercise this sample Data Fusion pipeline in your development environment by downloading its definition file from GitHub and importing it through the Cloud Data Fusion Studio. After completing the import, adjust the plugin properties to reflect your own Salesforce environment.

Got deletes?  

If your Salesforce implementation allows “hard deletes” and you must capture them, here is a non-exhaustive list of ideas to consider:

An audit table to track the deletes. A database trigger, for example, can be used to populate a custom audit table. You can then use Data Fusion to load the delete records from the audit table and compare/update the final destination table in BigQuery.

An additional Data Fusion job that reads the primary keys from the source and compares/merges them with the data in BigQuery (see the sketch after this list).

A Salesforce PushTopic configured to capture delete/undelete events, with a Data Fusion Streaming Source added to capture from that PushTopic.

Salesforce Change Data Capture.
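To make the compare/merge option concrete, here is a minimal Python sketch for a single object. The simple_salesforce and google-cloud-bigquery libraries, the replicated table name, and the Id column are assumptions; a production job would batch this work and run it on a schedule.

```python
from google.cloud import bigquery          # assumed client libraries
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...", security_token="...")
bq = bigquery.Client()

# Ids that still exist in Salesforce (illustrative: Lead only).
live_ids = {r["Id"] for r in sf.query_all("SELECT Id FROM Lead")["records"]}

# Ids currently present in the replicated BigQuery table (table name is an assumption).
table = f"{bq.project}.from_salesforce_cdf_staging.lead"
replicated_ids = {row.Id for row in bq.query(f"SELECT Id FROM `{table}`").result()}

# Anything replicated but no longer in the source was hard deleted; remove it.
stale_ids = replicated_ids - live_ids
if stale_ids:
    id_list = ", ".join(f"'{i}'" for i in stale_ids)
    bq.query(f"DELETE FROM `{table}` WHERE Id IN ({id_list})").result()
    print(f"Removed {len(stale_ids)} hard-deleted Lead records from BigQuery")
```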

Conclusion

If your enterprise is using Salesforce and it’s your job to replicate data into a data warehouse, then Cloud Data Fusion has what you need. And if you already use Google Cloud tools to curate a data lake with Cloud Storage, Dataproc, BigQuery and many others, then Data Fusion’s integrations make development and iteration fast and easy.

Have a similar challenge? Try Google Cloud and this Cloud Data Fusion quickstart next. 

For a more in-depth look into Data Fusion check out the documentation.

Have fun exploring!


BigQuery Admin reference guide: Recap


Over the past few weeks, we have been publishing videos and blogs that walk through the fundamentals of architecting and administering your BigQuery data warehouse. Throughout this series, we have focused on teaching foundational concepts and applying best practices observed directly from customers. Below, you can find links to each week’s content:

Resource Hierarchy [blog]: Understand how BigQuery fits into the Google Cloud resource hierarchy, and strategies for effectively designing your organization’s BigQuery resource model.

Tables & Routines [blog]: What are the different types of tables in BigQuery? When should you use a federated connection to access external data vs. bringing data directly into native storage? How do routines help provide easy-to-use and consistent analytics? Find out here!

Jobs & Reservation Model [blog]: Learn how BigQuery manages jobs, or execution resources, and how processing jobs play into the purchase of dedicated slots and the reservation model.

Storage & Optimizations [blog]: Curious to understand how BigQuery stores data in ways that optimize query performance? Here, we go under the hood to learn about data storage and how you can further optimize how BigQuery stores your data.

Query Processing [blog]: Ever wonder what happens when you click “run” on a new BigQuery query? This week, we talked about how BigQuery divides and conquers query execution to power super fast analytics on huge datasets.

Query Optimization [blog]: Learn about different techniques to optimize queries. Plus, dig into query execution for more complex workflows to better understand tactics for saving time and money analyzing your data.

Data Governance [blog]: Understand how to ensure that data is secure, private, accessible, and usable inside of BigQuery. Also explore integrations with other GCP tools to build end-to-end data governance pipelines.

BigQuery API Landscape [blog]: Take a tour of the BigQuery APIs and learn how they can be used to automate meaningful data-fueled workflows.

Monitoring [blog]: Walk through the different monitoring data sources and platforms that can be used to continuously ensure your deployment is cost-effective, performant, and secure.

We hope these links can act as resources to help onboard new team members onto BigQuery, or as a reference for rethinking patterns and optimizations – so make sure to bookmark this page! If you have any feedback or ideas for future videos, blogs, or data-focused series, don’t hesitate to reach out to me on LinkedIn or Twitter.

Related Article

BigQuery Admin reference guide: Monitoring

This blog aims to simplify monitoring and best practices related to BigQuery, with a focus on slots and automation.



Google Cloud improves Healthcare Interoperability on FHIR


The Importance of Interoperability

In 2020, hospital systems were scrambling to prepare for COVID-19. It was not just the clinicians preparing for a possible influx of patients, but also the infrastructure & analytics teams trying to navigate a maze of Electronic Health Record (EHR) systems. By default, these EHRs are not interoperable, or able to speak to one another, so answering even a relatively simple question like “how many COVID-19 patients are in all of my hospitals?” can require many separate investigations.

Typically, the more complex a dataset is, the more difficult it is to build interoperable systems around it. Clinical data is extremely complex (a patient has many diagnoses, procedures, visits, providers, prescriptions, etc.), and EHR vendors built and managed their own proprietary data models to handle those data challenges. This has made it much more difficult for hospitals to track a patient’s progress when they switch hospitals (even within the same hospital system), and especially difficult for multiple hospital systems to coordinate care during nationwide epidemics (e.g. COVID-19, opioid abuse), which makes care less effective for patients & more expensive for hospitals.

A Big Leap Forward in Interoperability

Building an interoperable system requires: 

(1) A common data schema

(2) A mechanism for hospitals to bring their messy real-world data into that common data schema

(3) A mechanism for asking questions against that common data schema

In 2011, the FHIR (Fast Healthcare Interoperability Resources) data model and API standard provided an answer to (1): a single data schema that lets the industry speak the same data language. In the past 18 months, Google Cloud has deployed several technologies to unlock the power of FHIR and solve for (2) and (3):

Google Cloud’s Healthcare Data Engine (HDE) produces FHIR records from streaming clinical data (either HL7v2 messages out of EHR systems or legacy formats from EDWs), and then makes that data available for other applications & analytics in Google BigQuery (BQ)

Google Cloud’s Looker enables anyone in a healthcare organization to ask any question against the complex FHIR schema in Google BigQuery

Now a hospital system can quickly ask & answer a question against records from several EHR systems at once.

This dashboard tracks a hospital system’s COVID-19 cases & volume across its hospitals.

Applications Seen So Far

In less than 18 months, GCP has seen dozens of applications for HDE, BigQuery, and Looker working together to improve clinical outcomes. A few applications that have been particularly successful so far have answered questions like: 

How many readmissions should a hospital expect in the next 30 days?

How long will each inpatient stay in the hospital?

How can a hospital better track misuse & operational challenges in prescribing opioid drugs to its patients?

How can a hospital quickly identify anomalies in patients’ vital signs across its hospital system?

How can a hospital identify & minimize hospital-associated infections (e.g. CLABSI) across its facilities?

How can a hospital prepare for COVID-19 cases across its hospital system, and leverage what-if planning to prepare for the worst?

These use cases represent just the tip of the iceberg of possibilities for improving day-to-day operations for clinicians & hospitals.

Solving Major Challenges in Interoperability

Latency: Hospitals often rely on stale weekly reports for analytics; receiving analytics in near real-time enables hospitals to identify problems and make changes much more quickly. COVID-19 in particular highlighted the need for faster turnaround on analytics. GCP’s Healthcare Data Engine handles streaming clinical data in the form of HL7 messages. As soon as messages arrive, they are transformed into FHIR and sent over to the BigQuery database to be queried. There is minimal latency, and users are querying near real-time data.

Scale: The scale and scope of hospital data are rapidly increasing as hospitals track more clinical events and consolidate into larger systems. Hospitals are adopting cloud-based systems that scale automatically to handle the intensive computation needed to ingest millions of clinical records as hospital system datasets grow. GCP’s serverless, managed cloud is meeting these needs for many hospital systems today.

Manage Multiple Clinical Definitions: Today, hospitals struggle to manage definitions of complex clinical KPIs. For example, hospitals have had many different definitions for a positive COVID-19 result (based on a frequently changing set of lab results and symptoms), which creates inconsistencies in analytics. Additionally, those definitions are often buried in scripts that are hard to adjust and change. HDE has developed capabilities that consistently transform HL7 messages into the FHIR store in a scalable fashion. Looker then provides a single source of truth in an object-oriented, version-controlled semantic layer to define clinical KPIs and quickly update them.

Represent FHIR Relationally: FHIR was originally intended for XML storage to maximize schema flexibility. However, this format is usually difficult to use for analytical queries, which perform better against relational datasets. In particular, FHIR has rows of data buried (or “nested”) within a single record (e.g. a single patient record has many key-value pairs of diagnoses), which makes FHIR difficult to ask questions against. BigQuery is an analytical database that combines the analytical power of OLAP databases with the flexibility of a NoSQL data schema by natively storing FHIR data in this “nested” structure and querying against it.
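To see what querying nested FHIR data looks like in practice, here is a small Python sketch that flattens a repeated field with UNNEST at query time. The project, dataset, and table names are placeholders, and the schema shown (a Patient table with a repeated name record) is an assumption based on a typical FHIR-to-BigQuery export, not the exact HDE output.

```python
from google.cloud import bigquery  # assumed client library

client = bigquery.Client()

# Flatten the repeated `name` element of an assumed Patient table at query time,
# rather than maintaining a pre-built extract or cube.
sql = """
SELECT
  p.id AS patient_id,
  n.family AS family_name
FROM `my-project.fhir_dataset.Patient` AS p,
     UNNEST(p.name) AS n
LIMIT 10
"""
for row in client.query(sql).result():
    print(row.patient_id, row.family_name)
```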

Query Quickly against FHIR: Hand-writing queries that unnest a schema as complex as FHIR can be challenging. GCP’s Looker solution writes these “nested” queries natively against BigQuery, making it much simpler to ask & answer new questions. This also avoids the “cube / extract” problem so common in healthcare, where hospitals are forced to build, manage, and maintain hundreds of simplified data cubes to answer their questions.

Predict Clinical Outcomes: Predictive modeling with AI/ML workflows has matured significantly, and hospitals increasingly rely on AI/ML to guide patients & providers towards better outcomes. For example, predicting patient, staffing, and ventilator volumes 30 days in advance across a hospital system can minimize disruptions to care. Leveraging FHIR on GCP enables the use of GCP’s full suite of managed AI/ML tools – in particular BQML (BigQuery Machine Learning) and AutoML.
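As a hedged illustration of how BQML fits in, the sketch below trains a simple regression to predict length of stay directly in BigQuery. The dataset, table, and column names are invented for the example; in practice you would build the model over your own curated FHIR-derived views.

```python
from google.cloud import bigquery  # assumed client library

client = bigquery.Client()

# Train a linear regression in place with BigQuery ML; all names below are illustrative.
client.query("""
CREATE OR REPLACE MODEL `my-project.analytics.length_of_stay_model`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['length_of_stay_days']) AS
SELECT
  patient_age,
  admission_type,
  primary_diagnosis_code,
  length_of_stay_days
FROM `my-project.analytics.encounter_features`
""").result()

# Score new encounters with ML.PREDICT once the model exists.
predictions = client.query("""
SELECT *
FROM ML.PREDICT(
  MODEL `my-project.analytics.length_of_stay_model`,
  (SELECT patient_age, admission_type, primary_diagnosis_code
   FROM `my-project.analytics.new_encounters`))
""").result()
```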

Ensure 24/7 Data Availability: COVID-19 exposed the vulnerabilities of relying on staffed on-premise data centers; GCP’s cloud infrastructure ensures availability and security of all clinical data. 

Protect Patient Data: Interoperability blends the need for private patient data to stay private with the need to share data across hospitals. Researchers in particular often require granular security rules to access clinical data. Today, hospitals often use an extract-based approach that requires many copies of the data outside of the database, a potential security flaw. GCP’s approach ensures that hospitals can query the data where it resides, in a secure data warehouse. Additionally, every component of GCP’s FHIR solution (HDE, BQ, Looker) can be configured to be HIPAA-compliant and includes row-level, column-level, and field-level security that can be set by users to ensure cell-level control over PHI data. GCP’s Cloud Data Loss Prevention (DLP) API can also be used to automatically de-identify sensitive data.
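For completeness, here is a small sketch of calling the Cloud DLP API from Python to de-identify free-text content. The project ID, info types, and sample string are placeholders, and a real pipeline would typically run DLP over structured records or storage rather than a single string.

```python
from google.cloud import dlp_v2  # assumed: google-cloud-dlp client library

dlp = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"  # placeholder project ID

response = dlp.deidentify_content(
    request={
        "parent": parent,
        # Replace detected identifiers with their info type name, e.g. [PERSON_NAME].
        "inspect_config": {
            "info_types": [{"name": "PERSON_NAME"}, {"name": "PHONE_NUMBER"}]
        },
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [
                    {"primitive_transformation": {"replace_with_info_type_config": {}}}
                ]
            }
        },
        "item": {"value": "Patient Jane Doe called from 555-867-5309 about her results."},
    }
)
print(response.item.value)
```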

Speed to Insights: The complexity of the data can lead to long build times for analytics pipelines, delaying healthcare improvements. GCP’s FHIR solution is relatively low-effort to implement: HDE can be set up against streaming HL7 messages in days or weeks, not months or years. Looker has a pre-built FHIR Block (coming to the Looker Blocks Directory and Marketplace soon) that can be installed & configured for a hospital’s particular data needs.

Share Insights Broadly: Interoperability requires not just being able to query across multiple systems, but also to share those insights across multiple platforms. GCP’s FHIR solution allows hospital systems to analyze governed results on FHIR, then send them anywhere: to dashboards in Looker, other BI tools, embedded applications, mobile apps, etc. For example, the National Response Portal represents the promise of hospitals and other organizations sharing aggregated healthcare data for nationwide insights around COVID-19.

For a technical review of GCP’s Healthcare Data Engine against Azure’s & AWS’ solutions, see here and here.

The New Frontier

This new healthcare data stack at Google Cloud represents a significant step forward towards interoperability in healthcare. When hospitals can communicate more easily with each other and when complex analytics are easier to conduct, everyone wins. Patients have better healthcare outcomes, and hospitals can provide care more efficiently. 

Google Cloud is committed to continue partnering with the largest hospital systems in the country to solve the most challenging problems in healthcare. Patients’ lives depend on it.

Related Article

Registration is open for Google Cloud Next: October 12–14

Register now for Google Cloud Next on October 12–14, 2021

