Single-cell genomic analysis accelerated by NVIDIA on Google Cloud

In the past decade, the Healthcare and Life Sciences industry has enjoyed a boom in technological and scientific advancement. New insights and possibilities are revealed almost daily. At Google Cloud, driving innovation in cloud computing is in our DNA. Our team is dedicated to sharing ways Google Cloud can be used to accelerate scientific discovery. For example, the recent announcement of AlphaFold2 showcases a scientific breakthrough, powered by Google Cloud, that will promote a quantum leap in the field of proteomics. In this blog, we'll review another omics use case, single-cell analysis, and how Google Cloud's Dataproc and NVIDIA GPUs can help accelerate that analysis.

The Need for Performance in Scientific Analysis

The ability to understand the causal relationship between genotypes and phenotypes is one of the long-standing challenges in biology and medicine. Understanding and drawing insights from the complexity of biological systems spans from the actual code of life (DNA), through the expression of genes (RNA), to the translation of gene transcripts into proteins that function in different pathways, cells, and tissues within an organism. Even the smallest of changes in our DNA can have large impacts on protein expression, structure, and function, which ultimately drives development and response at both cellular and organism levels. And, as the omics space becomes increasingly data- and compute-intensive, research requires an adequate informatics infrastructure: one that scales with growing data demands, enables a diverse range of resource-intensive computational activities, and is affordable and efficient, reducing data bottlenecks and enabling researchers to maximize insight.

But where do all these data and compute challenges come from, and what makes scientific study so arduous? The layers of biological complexity become apparent as soon as you look not just at the genes themselves, but at their expression. Although all the cells in our body share nearly identical genotypes, our many diverse cell types (e.g. hepatocytes versus melanocytes) express a unique subset of genes necessary for specific functions, making transcriptomics a more powerful method of analysis by allowing researchers to map gene expression to observable traits. Studies have shown that gene expression is heterogeneous, even in similar cell types. Yet conventional sequencing methods require DNA or RNA extracted from a whole cell population. The development of single-cell sequencing was pivotal to the omics field: single-cell RNA sequencing has been critical in allowing scientists to study transcriptomes across large numbers of individual cells.

Despite its potential, and the increasing availability of single-cell sequencing technology, there are several obstacles: an ever increasing volume of high-dimensionality data, the need to integrate data across different types of measurements (e.g. genetic variants, transcript and protein expression, epigenetics) and across samples or conditions, as well as varying levels of resolution and the granularity needed to map specific cell types or states. These challenges present themselves in a number of ways including background noise, signal dropouts requiring imputation, and limited bioinformatics pipelines that lack statistical flexibility. These and other challenges result in analysis workflows that are very slow, prohibiting the iterative, visual, and interactive analysis required to detect differential gene activity. 

Accelerating Performance

Cloud computing can help not only with data challenges, but also with some of the biggest obstacles: scalability, performance, and automation of analysis. To address several of the data and infrastructure challenges facing single-cell analysis, NVIDIA developed end-to-end accelerated single-cell RNA sequencing workflows that can be paired with Google Cloud Dataproc, a fully managed service for running open source frameworks like Spark, Hadoop, and RAPIDS. The Jupyter notebooks that power these workflows include examples using samples like human lung cells and mouse brain cells, and demonstrate the speedup of GPU-based workflows over CPU-based processing.

Google Cloud Dataproc powers the NVIDIA GPU-based approach and demonstrates data processing capabilities and acceleration that can deliver considerable performance gains. When paired with RAPIDS, practitioners can accelerate data science pipelines on NVIDIA GPUs, reducing operations like data loading, processing, and training from hours to seconds. RAPIDS abstracts the complexities of accelerated data science by building on popular Python and Java libraries. When applying RAPIDS and NVIDIA-accelerated compute to single-cell genomics use cases, practitioners can churn through analysis of a million cells in only a few minutes.

Give it a Try

The journey to realizing the full potential of omics is long, but through collaboration with industry experts, customers, and partners like NVIDIA, Google Cloud is here to help shine a light on the road ahead. To learn more about the notebook provided for single-cell genomic analysis, please take a look at NVIDIA's walkthrough. To give this pattern a try on Dataproc, please visit our technical reference guide.


Handling duplicate data in streaming pipelines using Dataflow and Pub/Sub

Purpose

Processing streaming data to extract insights and power real-time applications is becoming more and more critical. Google Cloud Dataflow and Pub/Sub provide a highly scalable, reliable, and mature streaming analytics platform to run mission-critical pipelines. One very common challenge that developers often face when designing such pipelines is how to handle duplicate data.

In this blog, I want to give an overview of common places where duplicate data may originate in your streaming pipelines and discuss various options that are available to you to handle them. You can also check out this tech talk on the same topic.

Origin of duplicates in streaming data pipelines

This section gives an overview of the places where duplicate data may originate in your streaming pipelines; each is numbered and discussed below.

Some duplicates are automatically handled by Dataflow, while for others developers may need to use some techniques to handle them. In short, the first two categories below (source generated and publisher generated duplicates) require explicit handling, whereas duplicates introduced when reading from Pub/Sub, processing data in Dataflow, or writing to a sink like BigQuery are largely handled by built-in mechanisms.

1. Source generated duplicates
Your data source system may itself produce duplicate data. This can happen for several reasons, such as network failures or system errors. Such duplicates are referred to as 'source generated duplicates'.

One example where this could happen is when you set up trigger notifications from Google Cloud Storage to Pub/Sub in response to object changes in GCS buckets. This feature guarantees at-least-once delivery to Pub/Sub and can produce duplicate notifications.

2. Publisher generated duplicates 
Your publisher when publishing messages to Pub/Sub can generate duplicates due to at-least-once publishing guarantees. Such duplicates are referred to as ‘publisher generated duplicates’. 

Pub/Sub automatically assigns a unique message_id to each message successfully published to a topic. A message is considered successfully published when Pub/Sub returns an acknowledgement to the publisher. Within a topic, all messages have a unique message_id and no two messages share the same message_id. If success of the publish is not observed for some reason (network delays, interruptions, etc.), the same message payload may be retried by the publisher. If retries happen, we may end up with duplicate messages with different message_ids in Pub/Sub. For Pub/Sub these are unique messages, as they have different message_ids.

3. Reading from Pub/Sub
Pub/Sub guarantees at-least-once delivery for every subscription. This means that a message may be delivered more than once by the same subscription if Pub/Sub doesn't receive an acknowledgement within the acknowledgement deadline. The subscriber may acknowledge after the acknowledgement deadline, or the acknowledgement may be lost due to transient network issues. In such scenarios the same message is redelivered and subscribers may see duplicate data. It is the responsibility of the subscribing system (for example, Dataflow) to detect such duplicates and handle them accordingly.

When Dataflow receives messages from a Pub/Sub subscription, messages are acknowledged after they are successfully processed by the first fused stage. Dataflow performs an optimization called fusion, where multiple stages can be combined into a single fused stage. A break in fusion happens when there is a shuffle, which occurs if you have transforms like GROUP BY or COMBINE, or I/O transforms like BigQueryIO. If a message has not been acknowledged within its acknowledgement deadline, Dataflow attempts to maintain the lease on the message by repeatedly extending the acknowledgement deadline to prevent redelivery from Pub/Sub. However, this is best effort and there is a possibility that messages may be redelivered. This can be monitored using the metrics listed here.

However, because Pub/Sub provides each message with a unique message_id, Dataflow uses it to deduplicate messages by default if you use the built-in Apache Beam PubSubIO. Dataflow thus filters out duplicates originating from redelivery of the same message by Pub/Sub. You can read more about this topic in one of our earlier blog posts, under the section "Example source: Cloud Pub/Sub".

4. Processing data in Dataflow
Due to the distributed nature of processing in Dataflow, each message may be retried multiple times on different Dataflow workers. However, Dataflow ensures that only one of those tries wins and the processing from the other tries does not affect downstream fused stages. Dataflow guarantees exactly-once processing by leveraging checkpointing at each stage to ensure such duplicates are not reprocessed and do not affect state or output. You can read more about how this is achieved in this blog.

5. Writing to a sink
Each element can be retried multiple times by Dataflow workers and may produce duplicate writes. It is the responsibility of the sink to detect these duplicates and handle them accordingly. Depending on the sink, duplicates may be filtered out, overwritten, or appear as duplicates.

File systems as sink
If you are writing files, exactly-once processing is guaranteed, as any retries by Dataflow workers in the event of failure will overwrite the file. Beam provides several I/O connectors to write files, all of which guarantee exactly-once processing.

BigQuery as sink

If you use the built-in Apache Beam BigQueryIO to write messages to BigQuery using streaming inserts, Dataflow provides a consistent insert_id (different from the Pub/Sub message_id) for retries, and this is used by BigQuery for deduplication. However, this deduplication is best effort and duplicate writes may still appear. BigQuery provides other insert methods as well, with different deduplication guarantees.

You can read more about BigQuery insert methods in the BigQueryIO Javadoc. Additionally, for more information on BigQuery as a sink, check out the section "Example sink: Google BigQuery" in one of our earlier blog posts.

For duplicates originating from the places discussed in points 3), 4) and 5), there are built-in mechanisms in place to remove them, as discussed above, assuming BigQuery is the sink. In the following section we discuss deduplication options for 'source generated duplicates' and 'publisher generated duplicates'. In both cases, we have duplicate messages with different message_ids, which for Pub/Sub and downstream systems like Dataflow are two unique messages.

Deduplication options for source generated duplicates and publisher generated duplicates

1. Use Pub/Sub message attributes

Each message published to a Pub/Sub topic can have string key-value pairs attached as metadata under the "attributes" field of PubsubMessage. These attributes are set when publishing to Pub/Sub. For example, if you are using the Python Pub/Sub client library, you can set attributes via the "attrs" keyword arguments of the publish method. You can use a unique field from your message (e.g. event_id) as the attribute value and its field name as the attribute key.
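
To make this concrete, here is a minimal Python sketch of publishing with such an attribute using the google-cloud-pubsub client; the project, topic, and event_id values are illustrative.

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Project and topic names are illustrative.
topic_path = publisher.topic_path("my-project", "my-topic")

payload = b'{"event_id": "event-1234", "value": 42}'

# Attributes are passed to publish() as extra keyword arguments ("attrs").
# Here the unique event_id from the message is attached as an attribute so
# that downstream consumers can deduplicate on it.
future = publisher.publish(topic_path, data=payload, event_id="event-1234")
print(future.result())  # message_id assigned by Pub/Sub
```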

Dataflow can be configured to use these fields to deduplicate messages instead of the default deduplication using Pub/Sub message_id. You can do this by specifying the attribute key when reading from Pub/Sub using the built-in PubSubIO.

For the Java SDK, you can specify this attribute key in the withIdAttribute method of PubsubIO.Read().

In the Python SDK, you can specify this in the id_label parameter of the ReadFromPubSub PTransform as shown below.
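
For example, a minimal Python sketch might look like the following (the Java SDK's withIdAttribute method plays the same role); the subscription path and attribute key are illustrative.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

options = PipelineOptions()
options.view_as(StandardOptions).streaming = True

with beam.Pipeline(options=options) as p:
    messages = (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/my-sub",
            # Deduplicate on the custom attribute instead of Pub/Sub's
            # message_id. Note: id_label is honored when running on the
            # Dataflow runner.
            id_label="event_id",
        )
        | "Decode" >> beam.Map(lambda data: data.decode("utf-8"))
    )
```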

This deduplication using a Pub/Sub message attribute is only guaranteed to work for duplicate messages that are published to Pub/Sub within 10 minutes of each other.

2. Use Apache Beam Deduplicate PTransform
Apache Beam provides Deduplicate PTransforms which can deduplicate incoming messages over a time duration. Deduplication can be based on the message or on a key of a key-value pair, where the key could be derived from the message fields. The deduplication window can be configured using the withDuration method, and can be based on processing time or event time (specified using the withTimeDomain method). The default window is 10 minutes.

You can read the Java documentation or the Python documentation of this PTransform for more details on how this works.

This PTransform uses the Stateful API under the hood and maintains a state for each key observed. Any duplicate message with the same key that appears within the deduplication window is discarded by this PTransform.
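
As a rough illustration, here is a small Python sketch using the Beam Python deduplicate transform (the transform and parameter names below reflect recent Beam releases and should be checked against your SDK version); the event_id key and 10-minute window are illustrative, and a bounded in-memory source is used just to keep the example runnable.

```python
import apache_beam as beam
from apache_beam.transforms.deduplicate import DeduplicatePerKey
from apache_beam.utils.timestamp import Duration

def key_by_event_id(element):
    # Derive the deduplication key from a field of the message.
    return (element["event_id"], element)

with beam.Pipeline() as p:
    deduped = (
        p
        | "Create" >> beam.Create([
            {"event_id": "a", "value": 1},
            {"event_id": "a", "value": 1},  # duplicate key within the window
            {"event_id": "b", "value": 2},
        ])
        | "KeyByEventId" >> beam.Map(key_by_event_id)
        # Keep state per key and drop any element whose key was already seen
        # within 10 minutes of processing time.
        | "Deduplicate" >> DeduplicatePerKey(
            processing_time_duration=Duration(seconds=10 * 60))
        | "DropKey" >> beam.Values()
        | "Print" >> beam.Map(print)
    )
```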

3. Do post-processing in sink
Deduplication can also be done in the sink. This could be done by running a scheduled job that periodically deduplicates rows using a unique identifier.

BigQuery as a sink
If BigQuery is the sink in your pipeline, a scheduled query can be executed periodically that writes the deduplicated data to another table or updates the existing table. Depending on the complexity of the scheduling, you may need orchestration tools like Cloud Composer or Dataform to schedule the queries.

Deduplication can be done using a DISTINCT statement or DML like MERGE. You can find sample queries about these methods on these blogs (blog 1, blog 2).
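
As one hedged sketch of what such a scheduled job could run (using ROW_NUMBER rather than a plain DISTINCT so the latest row per key wins), assuming a table with a unique event_id and an ingest_time column; all names are illustrative.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Keep only the most recent row per event_id and write the result to a
# clean table. Table and column names are illustrative.
dedup_sql = """
CREATE OR REPLACE TABLE my_dataset.events_deduped AS
SELECT * EXCEPT(row_num)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (
      PARTITION BY event_id
      ORDER BY ingest_time DESC
    ) AS row_num
  FROM my_dataset.events_raw
)
WHERE row_num = 1
"""

client.query(dedup_sql).result()
```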

Often in streaming pipelines you may need deduplicated data available in real time in BigQuery. You can achieve this by creating materialized views on top of underlying tables using a DISTINCT statement.

Any new updates to the underlying tables will be updated in real time to the materialized view with zero maintenance or orchestration.

Technical trade-offs of different deduplication options

Each of the options above involves trade-offs: deduplication with Pub/Sub message attributes only covers duplicate messages published within 10 minutes of each other, the Apache Beam Deduplicate PTransform maintains per-key state and only covers the configured deduplication window, and post-processing in the sink (for example with scheduled queries or materialized views) covers older duplicates as well but moves the deduplication work into the warehouse.


Ad agencies choose BigQuery to drive campaign performance

Advertising agencies are faced with the challenge of providing the precision data that marketers require to make better decisions at a time when customers' digital footprints are rapidly changing. They need to transform customer information and real-time data into actionable insights that show clients what to execute to ensure the highest campaign performance.

In this post, we’ll explore how two of our advertising agency customers are turning to Google BigQuery to innovate, succeed, and meet the next generation of digital advertising head on. 

Net Conversion eliminated legacy toil to reach new heights

Paid marketing and comprehensive analytics agency Net Conversion has made a name for itself with its relentless attitude and data-driven mindset. But like many agencies, Net Conversion felt limited by traditional data management and reporting practices. 

A few years ago, Net Conversion was still using legacy data servers to mine and process data across the organization, and analysts relied heavily on Microsoft Excel spreadsheets to generate reports. The process was lengthy, fragmented, and slow—especially when spreadsheets exceeded the million-row limit.

To transform, Net Conversion built Conversionomics, a serverless platform that leverages BigQuery, Google Cloud’s enterprise data warehouse, to centralize all of its data and handle all of its data transformation and ETL processes. BigQuery was selected for its serverless architecture, high scalability, and integration with tools that analysts were already using daily, such as Google Ads, Google Analytics, and Data Hub. 

After moving to BigQuery, Net Conversion discovered surprising benefits that streamlined reporting processes beyond initial expectations. For instance, many analysts had started using Google Sheets for reports, and BigQuery’s native integration with Connected Sheets gave them the power to analyze billions of rows of data and generate visualizations right where they were already working.

As Kenneth Eisinger, Manager of Paid Media Analytics at Net Conversion, puts it: "If you're still sending Excel files that are larger than 1MB, you should explore Google Cloud."

Since modernizing their data analytics stack, Net Conversion has saved countless hours of time that can now be spent on taking insights to the next level. Plus, BigQuery’s advanced data analytics capabilities and robust integrations have opened up new roads to offer more dynamic insights that help clients better understand their audience.   

For instance, Net Conversion recently helped a large grocery retailer launch a more targeted campaign that significantly increased downloads of its mobile application. The agency was able to better understand and predict customers' needs by analyzing buyer behavior across the website, the mobile application, and purchase history. Net Conversion analyzed website data in real time with BigQuery, ran analytics on mobile app data through Firebase's integration with BigQuery, and enriched these insights with sales information from the grocery retailer's CRM to generate propensity models that accurately predicted which customers would be most likely to install the mobile app.

WITHIN helped companies weather the COVID storm

WITHIN is a performance branding company, focused on helping brands maximize growth by fusing marketing and business goals together in a single funnel. During the COVID-19 health crisis, WITHIN became an innovator in the ad agency world by sharing real-time trends and insights with customers through its Marketing Pulse Dashboard. This dashboard was part of the company’s path to adopting BigQuery for data analytics transformation. 

Prior to using BigQuery, WITHIN used a PostgreSQL database to house its data and relied on manual reporting. Not only was the team responsible for managing and maintaining the server, which took focus away from data analytics, but query latency issues often slowed them down.

BigQuery’s serverless architecture, blazing-fast compute, and rich ecosystem of integrations with other Google Cloud and partner solutions made it possible to rapidly query, automate reporting, and completely get rid of CSV files. 

Using BigQuery, WITHIN is able to run Customer Lifetime Value (LTV) analytics and quickly share the insights with their clients in a collaborative Google Sheet. In order to improve the effectiveness of their campaigns across their marketing channels, WITHIN further segments the data into high and low LTV cohorts and shares the predictive insights with their clients for in-platform optimizations.

By distilling these types of LTV insights from BigQuery, WITHIN has been able to use them to power campaigns on Google Ads, with a few notable success stories:

WITHIN worked with a pet food company to analyze historical transactional data to model predicted LTV of new customers. They found significant differences between product category and autoship vs single order customers, and they implemented LTV-based optimization. As a result, they saw a 400% increase in average customer LTV. 
WITHIN helped a coffee brand increase their customer base by 560%, with the projected 12-month LTV of newly acquired customers jumping a staggering 1280%.

Through integration with Google AI Platform Notebooks, BigQuery also advanced WITHIN’s ability to use machine learning (ML) models. Today, the team can build and deploy models to predict dedicated campaign impact across channels without moving the data.  The integration of clients’ LTV data through Google Ads has also impacted how WITHIN structures their clients’ accounts and how they make performance optimization decisions.

Now, WITHIN can capitalize on the entire data lifecycle: ingesting data from multiple sources into BigQuery, running data analytics, and empowering people with data by automatically visualizing data right in Google Data Studio or Google Sheets.

"A year ago, we delivered client reporting once a week. Now, it's daily. Customers can view real-time campaign performance in Data Studio — all they have to do is refresh," says Evan Vaughan, Head of Data Science at WITHIN.

Having a consistent nomenclature and being able to stitch together a unified code name has allowed WITHIN to scale their analytics. Today, with the help of Google Cloud, WITHIN has created an internal Media Mix Modeling (MMM) tool that they're trialing with their clients.

The overall unseen benefit of BigQuery was that it put WITHIN in a position to remain nimble and spot trends before other agencies when COVID-19 hit. This aggregated view of data allowed WITHIN to provide unique insights to serve their customers better and advise them on rapidly evolving conditions.

Ready to modernize your data analytics? Learn more about how Google BigQuery unlocks the insights hidden in your data.


Optimizing your BigQuery incremental data ingestion pipelines

When you build a data warehouse, an important question is how to ingest data from the source system into the data warehouse. If a table is small, you can fully reload it on a regular basis; if the table is large, a common technique is to perform incremental table updates. This post demonstrates how you can enhance incremental pipeline performance when you ingest data into BigQuery.

Setting up a standard incremental data ingestion pipeline

We will use the following example to illustrate a common ingestion pipeline that incrementally updates a data warehouse table. Let's say that you ingest data into BigQuery from a large and frequently updated table in the source system, and you have Staging and Reporting areas (datasets) in BigQuery.

The Reporting area in BigQuery stores the most recent, full data that has been ingested from the source system tables. Usually you create the base table as a full snapshot of the source system table. In our running example, we use BigQuery public data as the source system and create reporting.base_table as shown below. In our example each row is identified by a unique key which consists of two columns: block_hash and log_index.

In data warehouses it is common to partition a large base table by a datetime column that has a business meaning. For example, it may be a transaction timestamp, or datetime when some business event happened, etc. The idea is that data analysts who use the data warehouse usually need to analyze only some range of dates and rarely need the full data. In our example, we partition the base table by block_timestamp which comes from the source system.
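
The exact DDL from the original post is not reproduced here, but a sketch along these lines would produce such a partitioned base table; the Ethereum logs public dataset is used as an assumed source because it exposes block_hash, log_index, and block_timestamp columns.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Create the reporting base table as a full snapshot of the source data,
# partitioned by the business timestamp (block_timestamp).
# Assumes the "reporting" dataset already exists.
create_base_sql = """
CREATE OR REPLACE TABLE reporting.base_table
PARTITION BY DATE(block_timestamp) AS
SELECT *
FROM `bigquery-public-data.crypto_ethereum.logs`
"""

client.query(create_base_sql).result()
```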

After ingesting the initial snapshot, you need to capture changes that happen in the source system table and update the reporting base table accordingly. This is where the Staging area comes into the picture. The staging table will contain captured data changes that you will merge into the base table. Let's say that in our source system, on a regular basis, we have a set of new rows and also some updated records. In our example we mock the staging data as follows: first we create new data, then we mock the updated records:
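
One way this mocking could look (the load_delta table name is taken from later in the post; the date filters and the REPLACE used to fake an update are illustrative assumptions):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Mock a staging batch: some brand-new rows plus some existing rows whose
# payload we pretend has changed. Assumes the "staging" dataset exists.
mock_staging_sql = """
CREATE OR REPLACE TABLE staging.load_delta AS
-- "New" rows for the latest day.
SELECT *
FROM `bigquery-public-data.crypto_ethereum.logs`
WHERE DATE(block_timestamp) = '2021-08-31'
UNION ALL
-- "Updated" rows: existing keys with a modified payload.
SELECT * REPLACE('0x0' AS data)
FROM reporting.base_table
WHERE DATE(block_timestamp) = '2021-08-30'
  AND MOD(log_index, 100) = 0
"""

client.query(mock_staging_sql).result()
```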

Next, the pipeline merges the staging data into the base table. It joins the two tables by the unique key and then updates changed values or inserts new rows.
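
A sketch of that MERGE, matching on the two-column key from the post (the column updated in the WHEN MATCHED branch is illustrative):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Merge staged changes into the base table: update matched rows, insert new
# ones, joining on the unique (block_hash, log_index) key.
merge_sql = """
MERGE reporting.base_table AS T
USING staging.load_delta AS S
ON T.block_hash = S.block_hash
   AND T.log_index = S.log_index
WHEN MATCHED THEN
  UPDATE SET data = S.data
WHEN NOT MATCHED THEN
  INSERT ROW
"""

client.query(merge_sql).result()
```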

It is often the case that the staging table contains keys from various partitions, but the number of those partitions is relatively small. This holds, for instance, because recently added data in the source system may still get changed due to initial errors or ongoing processes, while older records are rarely updated. However, when the above MERGE gets executed, BigQuery scans all partitions in the base table and processes 161 GB of data. You might add an additional join condition on block_timestamp (that is, AND T.block_timestamp = S.block_timestamp in the ON clause).

But BigQuery would still scan all partitions in the base table because condition T.block_timestamp = S.block_timestamp is a dynamic predicate and BigQuery doesn’t automatically push such predicates down from one table to another in MERGE.

Can you improve the MERGE efficiency by making it scan less data? The answer is Yes. 

As described in the MERGE documentation, pruning conditions may be located in a subquery filter, a merge_condition filter, or a search_condition filter. In this post we show how you can leverage the first two. The main idea is to turn a dynamic predicate into a static predicate.

Steps to enhance your ingestion pipeline

The initial step is to compute the range of partitions that will be updated during the MERGE and store it in a variable. As was mentioned above, in data ingestion pipelines, staging tables are usually small so the cost of the computation is relatively low.
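
For example, using BigQuery scripting (the src_range name and STRUCT shape below are assumptions based on how the variable is referred to later in the post):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Compute the range of partitions touched by this batch from the small
# staging table and keep it in a scripting variable. In a real pipeline these
# statements would be the first part of the same script that runs the MERGE.
compute_range_sql = """
DECLARE src_range STRUCT<date_min TIMESTAMP, date_max TIMESTAMP>;

SET src_range = (
  SELECT AS STRUCT
    MIN(block_timestamp) AS date_min,
    MAX(block_timestamp) AS date_max
  FROM staging.load_delta
);
"""

client.query(compute_range_sql).result()
```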

Based on your existing ETL/ELT pipeline, you can add the above code as-is to your pipeline, or you can compute date_min and date_max as part of an already existing transformation step. Alternatively, date_min and date_max can be computed on the source system side while capturing the next ingestion data batch.

After computing date_min and date_max, you pass those values to the MERGE statement as static predicates. There are several ways to enhance the MERGE and prune partitions in the base table based on the precomputed date_min and date_max.

If your initial MERGE statement uses a subquery, you can incorporate a new filter into it:
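
A sketch of that variant, run as one script together with the src_range computation (the update/insert branches are illustrative, as above):

```python
from google.cloud import bigquery

client = bigquery.Client()

# The static range filter is applied to the staging subquery; keeping the
# T.block_timestamp = S.block_timestamp join condition lets BigQuery push the
# filter down to the partitioned base table.
merge_subquery_script = """
DECLARE src_range STRUCT<date_min TIMESTAMP, date_max TIMESTAMP> DEFAULT (
  SELECT AS STRUCT
    MIN(block_timestamp) AS date_min,
    MAX(block_timestamp) AS date_max
  FROM staging.load_delta
);

MERGE reporting.base_table AS T
USING (
  SELECT *
  FROM staging.load_delta
  WHERE block_timestamp BETWEEN src_range.date_min AND src_range.date_max
) AS S
ON T.block_hash = S.block_hash
   AND T.log_index = S.log_index
   AND T.block_timestamp = S.block_timestamp
WHEN MATCHED THEN
  UPDATE SET data = S.data
WHEN NOT MATCHED THEN
  INSERT ROW;
"""

client.query(merge_subquery_script).result()
```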

Note that you add the static filter to the staging table and keep T.block_timestamp = S.block_timestamp to convey to BigQuery that it can push that filter to the base table. This MERGE processes 41 GB of data, in contrast to the initial 161 GB. You can see in the query plan that BigQuery pushes the partition filter from the staging table to the base table.

This type of optimization, where a pruning condition is pushed from a subquery to a large partitioned or clustered table, is not unique to MERGE. It also works for other types of queries. For instance:
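
A hedged example of such a query against the same tables (the date filter is illustrative):

```python
from google.cloud import bigquery

client = bigquery.Client()

# The selective partition filter on the small staging table can be pushed
# down to the large partitioned base table.
join_sql = """
SELECT T.block_hash, T.log_index, T.data
FROM reporting.base_table AS T
JOIN (
  SELECT *
  FROM staging.load_delta
  WHERE DATE(block_timestamp) = '2021-08-31'
) AS S
ON T.block_hash = S.block_hash
   AND T.log_index = S.log_index
   AND T.block_timestamp = S.block_timestamp
"""

job = client.query(join_sql)
job.result()
print("Bytes processed:", job.total_bytes_processed)
```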

And you can check the query plan to verify that BigQuery pushed down the partition filter from one table to another.

Moreover, for SELECT statements, BigQuery can automatically infer a filter predicate on a join column and push it down from one table to another if your query meets the following criteria:

The target table must be clustered or partitioned.

The result size of the other table, i.e. after applying all filters, must qualify for a broadcast join. Namely, the result set must be relatively small, less than ~100 MB.

In our running example, reporting.base_table is partitioned by block_timestamp. If you define a selective filter on staging.load_delta and join the two tables, you can see an inferred filter on the join key pushed to the target table.

There is no requirement to join tables by partitioning or clustering key to kick off this type of optimization. However, in this case the pruning effect on the target table would be less significant.

But let us get back to the pipeline optimizations. Another way to enhance the MERGE is to modify the merge_condition filter by adding a static predicate on the base table:
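
For instance (a sketch meant to run in the same script as the src_range declaration shown earlier; the update/insert branches are illustrative):

```python
# This MERGE assumes src_range was declared and set earlier in the same
# script; the static BETWEEN predicate on the base table's partition column
# is what enables pruning here.
merge_condition_sql = """
MERGE reporting.base_table AS T
USING staging.load_delta AS S
ON T.block_hash = S.block_hash
   AND T.log_index = S.log_index
   AND T.block_timestamp BETWEEN src_range.date_min AND src_range.date_max
WHEN MATCHED THEN
  UPDATE SET data = S.data
WHEN NOT MATCHED THEN
  INSERT ROW;
"""
```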

To summarize, here are the steps that you can perform to enhance incremental ingestion pipelines in BigQuery. First you compute the range of updated partitions based on the small staging table. Next, you tweak the MERGE statement a bit to let BigQuery know to prune data in the base table.

All the enhanced MERGE statements scanned 41 GB of data, and setting up the src_range variable took 115 MB. Compare that with the initial 161 GB scan. Moreover, given that computing src_range may be incorporated into an existing transformation in your ETL/ELT, this is a performance improvement you can readily leverage in your pipelines.

In this post we described how to enhance data ingestion pipelines by turning dynamic filter predicates into static predicates and letting BigQuery prune data for us. You can find more tips on BigQuery DML tuning here.

Special thanks to Daniel De Leo, who helped with examples and provided valuable feedback on this content.


Lower TCO for managing data pipelines by 80% with Cloud Data Fusion

In today's data-driven environment, organizations need to use the various data sources available to them in order to extract timely and actionable insights. Organizations are better off making data integration easier to get faster insights from data, rather than spending time and effort coding and testing complex data pipelines.

Recently, Google Cloud sponsored two Enterprise Strategy Group (ESG) whitepapers that take a deep dive into the need for modern data integration, how Cloud Data Fusion can address these challenges, and the economic value of using Cloud Data Fusion compared with other alternatives.

Challenges in Data Integration

Data integration has notoriously been the greatest challenge data-driven organizations face as they work to better leverage data. On top of migrating some, if not all, workloads to the cloud, issues like lack of metadata, combining distributed data sets, combining different data types, and handling the rate at which source data changes are directly leading organizations to prioritize data integration. In fact, this is why improving data integration is where organizations expect to make the most significant investments in the next 12-18 months1. Organizations recognize that by simplifying and improving their data integration processes, they can enhance operational efficiency across data pipelines while ensuring they are on a path to data-driven success.

Cloud Strategy and Data Integration 

Based on the ESG report, cloud strategy impacts the way organizations implement and utilize data integration today. Organizations can choose from a single cloud, multi-cloud, or hybrid cloud strategy, and choosing the right data integration option can give them freedom and flexibility. Irrespective of the cloud strategy, organizations are embracing a centralized approach to data management, not only to reduce costs but also to ensure greater efficiency in the creation and management of data pipelines. By standardizing on a centralized approach, data lifecycle management is streamlined through data unification. Further, with improved data access and availability, data and insight reusability are achieved.

Cross-environment integration and collaboration

Organizations are increasingly in search of services and platforms that minimize lock-in while promoting cross-environment integration and collaboration. As developers dedicate more time and effort to building modern applications heavily rooted in data, knowing the technology is underpinned by open source provides peace of mind that the application and underlying code can run anywhere the open-source technology is supported. This desire for open-source technology extends to data pipelines too, where data teams have dedicated hours to optimally integrating a growing set of technologies and perfecting ETL scripts. As new technologies, use cases, or business goals emerge, enabling environment flexibility ensures organizations can embrace data in the best way possible.

Cost savings with Cloud Data Fusion

ESG conducted interviews with customers to validate the savings and benefits that they have seen in practice, and used these as the assumptions for comparing Cloud Data Fusion. Aviv, a senior validation analyst with ESG, took two use cases, building a data warehouse and building a data lake, and compared on-premises and build-it-yourself approaches with Cloud Data Fusion. The research shows that customers can realize cost savings of up to 88% to operate a hybrid cloud data lake and up to 80% to deploy, manage, and maintain data pipelines for cloud-based enterprise data warehouses in BigQuery. Here is a sneak peek into the ROI calculations for building a data warehouse in BigQuery using Cloud Data Fusion versus other alternatives.

The full whitepapers contain even more insight, as well as a thorough analysis of the data integration tools’ impact on businesses and recommended steps for unlocking its full potential. You can download the full reports below:

Accelerating Digital Transformation with Modern Data Integration

The Economic Benefits of Data Fusion versus other Alternatives 

Additionally, try a quickstart, or reach out to us for your cloud data integration needs.

1. ESG Master Survey Results, 2021 Technology Spending Intentions Survey, December 2020


Google Cloud improves Healthcare Interoperability on FHIR

The Importance of Interoperability

In 2020 hospital systems were scrambling to prepare for COVID-19. Not just the clinicians preparing for a possible influx of patients, but also the infrastructure & analytics teams trying to navigate a maze of Electronic Health Records (EHR) systems. By default these EHRs are not interoperable, or able to speak to one another, so answering a relatively simple question “how many COVID-19 patients are in all of my hospitals?” can require many separate investigations. 

Typically, the more complex a dataset is, the more difficult it is to build interoperable systems around it. Clinical data is extremely complex (a patient has many diagnoses, procedures, visits, providers, prescriptions, etc.), and EHR vendors built and managed their own proprietary data models to handle those data challenges. This has made it much more difficult for hospitals to track a patient’s performance when they switch hospitals (even within the same hospital system) and especially difficult for multiple hospitals systems to coordinate on care for nationwide epidemics (e.g. COVID-19, opioid abuse), which makes care less effective for patients & more expensive for hospitals. 

A Big Leap Forward in Interoperability

Building an interoperable system requires: 

(1) A common data schema

(2) A mechanism for hospitals to bring their messy real-world data into that common data schema

(3) A mechanism for asking questions against that common data schema

In 2011, the FHIR (Fast Healthcare Interoperability Resources) data model and API standard provided an answer to (1): a single data schema for the industry to speak the same data language. In the past 18 months, Google Cloud has deployed several technologies to unlock the power of FHIR and solve for (2) and (3):

Google Cloud’s Healthcare Data Engine (HDE) produces FHIR records from streaming clinical data (either HL7v2 messages out of EHR systems or legacy formats from EDWs). This technology then enables data use for other applications & analytics in Google BigQuery (BQ)

Google Cloud’s Looker enables anyone in a healthcare organization to ask any question against the complex FHIR schema in Google BigQuery

Now a hospital system can quickly ask & answer a question against records from several EHR systems at once.

This dashboard tracks a hospital system’s COVID-19 cases & volume across its hospitals.

Applications Seen So Far

In less than 18 months, GCP has seen dozens of applications for HDE, BigQuery, and Looker working together to improve clinical outcomes. A few applications that have been particularly successful so far have answered questions like: 

How many readmissions will a hospital expect in 30 days? 

How long will each inpatient patient stay in a hospital?

How can a hospital better track misuse & operational challenges in prescribing opioid drugs to its patients?

How can a hospital quickly identify anomalies in patients' vital signs across its hospital system?

How can a hospital identify & minimize hospital-associated infections (e.g. CLABSI) in its hospitals?

How can a hospital prepare for COVID-19 cases across a hospital system? And leverage what-if planning to prepare for the worst?

These use cases represent just the tip of the iceberg of possibilities for improving day-to-day operations for clinicians & hospitals.

Solving Major Challenges in Interoperability

Latency: Hospitals often rely on stale weekly reports for analytics; receiving analytics in near real-time enables hospitals to identify problems and make changes much more quickly. COVID-19 in particular highlighted the need for faster turnaround on analytics. GCP’s Healthcare Data Engine handles streaming clinical data in the form of HL7 messages. As soon as messages arrive, they are transformed into FHIR and sent over to the BigQuery database to be queried. There is minimal latency, and users are querying near real-time data.

Scale: The scale and scope of hospital data is rapidly increasing as hospitals track more clinical events and as hospitals consolidate into larger systems. Hospitals are adopting cloud-based systems that autonomously scale to handle the intensive computation necessary to take in millions of clinical records with growing hospital system datasets. GCP’s serverless, managed cloud is meeting these needs for many hospital systems today.

Manage Multiple Clinical Definitions: Today hospitals struggle to manage definitions of complex clinical KPIs. For example, hospitals had many different definitions for a positive COVID-19 result (based on a frequently changing set of lab results and symptoms), which creates inconsistencies in analytics. Additionally, those definitions are often buried in scripts that are hard to adjust and change. HDE has developed capabilities that consistently transform HL7 messages into the FHIR store in a scalable fashion. Looker then provides a single source of truth in an object-oriented, version-controlled semantic layer to define clinical KPIs and quickly update them.  

Represent FHIR Relationally: FHIR was originally intended for XML storage to maximize schema flexibility. However, this format is usually very difficult for analytical queries, which perform better with relational datasets. In particular, FHIR has rows of data buried (or "nested") within a single record (e.g. a single patient record has many key-value pairs of diagnoses), which makes FHIR difficult to ask questions against. BigQuery is an analytical database that combines the analytical power of OLAP databases with the flexibility of a NoSQL data schema by natively storing FHIR data in this "nested" structure and querying against it.
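
As a rough illustration of what querying this nested structure looks like, here is a sketch assuming an Observation table exported from a FHIR store; the dataset, table, and column paths follow the FHIR Observation resource shape rather than anything from the original post.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Count observations per LOINC code by unnesting the repeated code.coding
# field of the FHIR Observation resource. Names are illustrative.
fhir_sql = """
SELECT
  coding.code,
  coding.display,
  COUNT(*) AS observation_count
FROM `my-project.fhir_dataset.Observation` AS obs,
  UNNEST(obs.code.coding) AS coding
WHERE coding.system = 'http://loinc.org'
GROUP BY coding.code, coding.display
ORDER BY observation_count DESC
"""

for row in client.query(fhir_sql):
    print(row.code, row.display, row.observation_count)
```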

Query Quickly against FHIR: Writing unnested queries to a schema as complex as FHIR can be challenging. GCP’s Looker solution writes “nested” queries natively to BigQuery, making it much simpler to ask & answer new questions. This also prevents the “cube / extract” problem so common in healthcare, where hospitals are forced to build, manage, and maintain hundreds of simplified data cubes to answer their questions.

Predict Clinical Outcomes: Predictive modelling with AI/ML workflows has matured significantly. Hospitals increasingly rely on AI/ML to guide patients & providers towards better outcomes. For example, predicting patient, staffing, and ventilator volumes 30 days in advance across a hospital system can minimize disruptions to care. Leveraging FHIR on GCP enables GCP’s full suite of managed AI/ML tools – in particular BQML (BigQuery Machine Learning) and AutoML.

Ensure 24/7 Data Availability: COVID-19 exposed the vulnerabilities of relying on staffed on-premise data centers; GCP’s cloud infrastructure ensures availability and security of all clinical data. 

Protect Patient Data: Interoperability blends the need for private patient data to stay private while allowing data to be shared across hospitals. Researchers in particular often require granular security rules to access clinical data. Today hospitals often use an extract-based approach that requires many copies of the data outside of the database, a potential security flaw. GCP's approach ensures that hospitals can query the data where it resides – in a secure data warehouse. Additionally, every component of GCP's FHIR solution (HDE, BQ, Looker) can be configured to be HIPAA-compliant and includes row-level, column-level, and field-level security that can be set by users to ensure cell-level control over PHI data. GCP's Cloud Data Loss Prevention (DLP) API can also anonymize data automatically.

Speed to Insights: The complexity of data can lead to long windows to build analytics pipelines, leading to delays in healthcare improvements. GCP’s FHIR solution is relatively low-effort to implement. HDE can be set up against streaming HL7 messages in a few days and weeks, not months and years. Looker has a pre-built FHIR Block (coming to the Looker Blocks Directory and Marketplace soon) that can be installed & configured for a hospital’s particular data needs.

Share Insights Broadly: Interoperability requires not just being able to query across multiple systems, but also to share those insights across multiple platforms. GCP’s FHIR solution allows hospital systems to analyze governed results on FHIR, then send them anywhere: to dashboards in Looker, other BI tools, embedded applications, mobile apps, etc. For example, the National Response Portal represents the promise of hospitals and other organizations sharing aggregated healthcare data for nationwide insights around COVID-19.

For a technical review of GCP’s Healthcare Data Engine against Azure’s & AWS’ solutions, see here and here.

The New Frontier

This new healthcare data stack at Google Cloud represents a significant step forward towards interoperability in healthcare. When hospitals can communicate more easily with each other and when complex analytics are easier to conduct, everyone wins. Patients have better healthcare outcomes, and hospitals can provide care more efficiently. 

Google Cloud is committed to continue partnering with the largest hospital systems in the country to solve the most challenging problems in healthcare. Patients’ lives depend on it.


BigQuery Admin reference guide: Recap

Over the past few weeks, we have been publishing videos and blogs that walk through the fundamentals of architecting and administering your BigQuery data warehouse. Throughout this series, we have focused on teaching foundational concepts and applying best practices observed directly from customers. Below, you can find links to each week’s content:

Resource Hierarchy [blog]: Understand how BigQuery fits into the Google Cloud resource hierarchy, and strategies for effectively designing your organization’s BigQuery resource model.

Tables & Routines [blog]: What are the different types of tables in BigQuery? When should you use a federated connection to access external data, vs bringing data directly into native storage? How do routines help provide easy-to-use and consistent analytics? Find out here!

Jobs & Reservation Model [blog]: Learn how BigQuery manages jobs, or execution resources, and how processing jobs plays into the purchase of dedicated slots and the reservation model.

Storage & Optimizations [blog]: Curious to understand how BigQuery stores data in ways that optimize query performance? Here, we go under-the-hood to learn about data storage and how you can further optimize how BigQuery stores your data.

Query Processing [blog]: Ever wonder what happens when you click "run" on a new BigQuery query? This week, we talked about how BigQuery divides and conquers query execution to power super fast analytics on huge datasets.

Query Optimization [blog]: Learn about different techniques to optimize queries. Plus, dig into query execution for more complex workflows to better understand tactics for saving time and money analyzing your data.

Data Governance [blog]: Understand how to ensure that data is secure, private, accessible, and usable inside of BigQuery. Also explore integrations with other GCP tools to build end-to-end data governance pipelines.

BigQuery API Landscape [blog]: Take a tour of the BigQuery APIs and learn how they can be used to automate meaningful data-fueled workflows.

Monitoring [blog]: Walk through the different monitoring data sources and platforms that can be used to continuously ensure your deployment is cost effective, performant and secure.

We hope that these links can act as resources to help onboard new team members onto BigQuery or a reference for rethinking new patterns or optimizations – so make sure to bookmark this page! If you have any feedback or ideas for future videos, blogs or data focused series, don’t hesitate to reach out to me on LinkedIn or Twitter.


How to load Salesforce data into BigQuery using a code-free approach powered by Cloud Data Fusion

Organizations are increasingly investing in modern cloud warehouses and data lake solutions to augment analytics environments and improve business decisions. The business value of such repositories increases as customer relationship data is loaded and additional insights are generated.

In this post, we’ll cover different ways to incrementally move Salesforce data into BigQuery using the scalability and reliability of Google services, an intuitive drag-and-drop solution based on pre-built connectors, and the self-service model of a code-free data integration service. 

A Common Data Ingestion Pattern:

To provide a little bit more context, here is an illustrative (and common) use case:

Account, Lead, and Contact Salesforce objects are frequently manipulated by call center agents using the Salesforce application.

Changes to these objects need to be identified and incrementally loaded into a data warehouse solution using either a batch or streaming approach.

A fully managed and cloud-native enterprise data integration service is preferred for quickly building and managing code-free data pipelines.

Business performance dashboards are created by joining Salesforce and other related data available in the data warehouse.

Cloud Data Fusion to the rescue 

To address the Salesforce ETL (extract, transform and load) scenario above, we will be demonstrating the usage of Cloud Data Fusion as the data integration tool. 

Data Fusion is a fully managed, cloud-native, enterprise data integration service for quickly building and managing code-free data pipelines. Data Fusion’s web UI allows organizations to build scalable data integration solutions to clean, prepare, blend, transfer, and transform data without having to manage the underlying infrastructure. Its integration with Google Cloud ensures data is immediately available for analysis. 

Data Fusion offers numerous pre-built plugins for both batch and real-time processing. These customizable modules can be used to extend Data Fusion's native capabilities and are easily installed through the Data Fusion Hub component.

For Salesforce source objects, the following pre-built plugins are generally available:

Batch Single Source – Reads one sObject from Salesforce. The data can be read using SOQL queries (Salesforce Object Query Language queries) or using sObject names. You can pass incremental/range date filters and also specify primary key chunking parameters. Examples of sObjects are opportunities, contacts, accounts, leads, any custom object, etc. 

Batch Multi Source – Reads multiple sObjects from Salesforce. It should be used in conjunction with multi-sinks.

Streaming Source – Tracks updates in Salesforce sObjects. Examples of sObjects are opportunities, contacts, accounts, leads, any custom object, etc.

If none of these pre-built plugins fit your needs, you can always build your own by using Cloud Data Fusion’s plugin APIs. 

For this blog, we will leverage the out of the box Data Fusion plugins to demonstrate both batch and streaming Salesforce pipeline options.

Batch incremental pipeline

There are many different ways to implement a batch incremental logic. The Salesforce batch multi source plugin has parameters such as “Last Modified After”, “Last Modified Before”, “Duration” and “Offset” which can be used to control the incremental loads.
Here's a look at a sample Data Fusion batch incremental pipeline for the Salesforce objects Lead, Contact, and Account. The pipeline uses the previous run's start/end time as the guide for incremental loads.

Batch Incremental Pipeline – From Salesforce to BigQuery

The main steps of this sample pipeline are:

For this custom pipeline, we decided to store the start/end time in BigQuery and demonstrate different BigQuery plugins. When the pipeline starts, timestamps are stored in a user checkpoint table in BigQuery. This information is used to guide the subsequent runs and the incremental logic.

Using the BigQuery Argument Setter plugin, the pipeline reads from the BigQuery checkpoint table, fetching the minimum timestamp to read from.

With the Batch Multi Source plugin, the objects lead, contact and account are read from Salesforce, using the minimum timestamp as a parameter passed to the plugin.

BigQuery tables lead, contact, and account are updated using the BigQuery Multi Table sink plugin.

The checkpoint table is updated with the execution end time followed by an update to current_time column.

Adventurous?

You can exercise this sample Data Fusion pipeline in your development environment by downloading its definition file from GitHub and importing it through the Cloud Data Fusion Studio. After completing the import, adjust the plugin properties to reflect your own Salesforce environment. You will also need to:      

1. Create a BigQuery dataset named from_salesforce_cdf_staging.

2. Create the sf_checkpoint BigQuery table on dataset from_salesforce_cdf_staging as described below.

3. Insert the following record into the sf_checkpoint table:
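
The exact DDL is not reproduced here, but a sketch of these setup steps could look like the following Python snippet; only the two columns referenced in the post (last_completion and current_time) are assumed, so adjust the schema to match the imported pipeline definition.

```python
from google.cloud import bigquery

client = bigquery.Client()

# 1. Create the staging dataset used by the sample pipeline.
client.create_dataset(f"{client.project}.from_salesforce_cdf_staging",
                      exists_ok=True)

# 2. Create the checkpoint table (assumed schema) and
# 3. seed it with the initial watermark so the first run loads everything.
client.query("""
CREATE TABLE IF NOT EXISTS from_salesforce_cdf_staging.sf_checkpoint (
  last_completion TIMESTAMP,
  `current_time` TIMESTAMP
);

-- Matches the 1900-01-01T23:01:01Z watermark mentioned in the post.
INSERT INTO from_salesforce_cdf_staging.sf_checkpoint (last_completion)
VALUES (TIMESTAMP '1900-01-01 23:01:01 UTC');
""").result()
```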

Attention: The initial last_completion date = "1900-01-01T23:01:01Z" indicates that the first pipeline execution will read all Salesforce records with a LastModifiedDate greater than 1900-01-01. This is a sample value targeted at initial loads. Adjust the last_completion column as needed to reflect your environment and requirements for the initial run.

After executing this sample pipeline a few times, observe how sf_checkpoint.last_completion column evolves as executions finish. You can also validate that changes are being loaded incrementally into BigQuery tables as shown below:

BigQuery output – Salesforce incremental pipeline

Streaming pipeline  

When using the Streaming Source plugin with Data Fusion, changes in Salesforce sObjects are tracked using PushTopic events. The Data Fusion streaming source plugin can either create a Salesforce PushTopic for you, or use an existing one you defined previously using Salesforce tools. 

The PushTopic configuration defines the type of events (insert, update, delete) to trigger notifications, and the objects columns in scope. To learn more about Salesforce PushTopics, click here.   

When streaming data, there is no need to create a checkpoint table in BigQuery, as data gets replicated in near real time, automatically capturing only changes as soon as they occur. The Data Fusion pipeline becomes super simple, as demonstrated in the sample below:

Salesforce streaming pipeline with Cloud Data Fusion

The main steps of this sample pipeline are:

1. Add a Salesforce streaming source and provide its configuration details. For this exercise, only inserts and updates are being captured from CDFLeadUpdates PushTopic. As a reference, here is the code we used to pre-create the CDFLeadUpdates PushTopic in Salesforce. The Data Fusion plugin can also pre-create the PushTopic for you if preferred.

Hint: In order to run this code block, log in to Salesforce with the appropriate credentials and privileges, open the Developer Console and click on Debug | Open Execute Anonymous Window.
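
The Apex snippet itself is not reproduced above. As an alternative sketch that is not from the original post, a PushTopic of the same shape could also be created from Python with the simple-salesforce library; the credentials, SOQL query, and field list below are illustrative.

```python
from simple_salesforce import Salesforce

# Credentials are illustrative; use your own org's login details.
sf = Salesforce(
    username="user@example.com",
    password="password",
    security_token="token",
)

# Create a PushTopic that emits events for Lead inserts and updates only,
# mirroring the CDFLeadUpdates topic described in the post.
sf.PushTopic.create({
    "Name": "CDFLeadUpdates",
    "Query": "SELECT Id, Name, Company, Status FROM Lead",
    "ApiVersion": 48.0,
    "NotifyForOperationCreate": True,
    "NotifyForOperationUpdate": True,
    "NotifyForOperationDelete": False,
    "NotifyForOperationUndelete": False,
    "NotifyForFields": "Referenced",
})
```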

2. Add a BigQuery sink to your pipeline in order to receive the streaming events. Notice the BigQuery table gets created automatically once the pipeline executes and the first change record is generated.

After starting the pipeline, make some modifications to the Lead object in Salesforce and observe the changes flowing into BigQuery as exemplified below:

BigQuery output – Salesforce streaming pipeline with Cloud Data Fusion

Adventurous?

You can exercise this sample Data Fusion pipeline in your development environment by downloading its definition file from GitHub and importing it through the Cloud Data Fusion Studio. After completing the import, adjust the plugin properties to reflect your own Salesforce environment.

Got deletes?  

If your Salesforce implementation allows “hard deletes” and you must capture them, here is a non-exhaustive list of ideas to consider:

An audit table to track the deletes. A database trigger, for example, can be used to populate a custom audit table. You can then use Data Fusion to load the delete records from the audit table and compare/update the final destination table in BigQuery.

An additional Data Fusion job that reads the primary keys from the source and compares/merges with the data in BigQuery.

A Salesforce PushTopic configured to capture delete/undelete events, with a Data Fusion Streaming Source added to capture from the PushTopic.

Salesforce Change Data Capture.

Conclusion:

If your enterprise is using Salesforce and it's your job to replicate data into a data warehouse, then Cloud Data Fusion has what you need. And if you already use Google Cloud tools for curating a data lake with Cloud Storage, Dataproc, BigQuery, and many others, then Data Fusion integrations make development and iteration fast and easy.

Have a similar challenge? Try Google Cloud and this Cloud Data Fusion quickstart next. 

For a more in-depth look into Data Fusion check out the documentation.

Have fun exploring!


Wrapping up the summer: A host of new stories and announcements from Data Analytics

August is the time to sit back, relax, and enjoy the last of the summer. Or, for those in the southern hemisphere, August is the month you start looking at your swimsuits and sunglasses with interest as the weather warms. But regardless of where you live, August also seems to be when Google produces a lot of interesting Data Analytics reading.

In this monthly recap, I'll divide last month's most interesting articles into three groups: New Features and Announcements, Customer Stories, and How-Tos. You can read through in order or skip to the section that's most interesting to you.

Features and announcements

Datasets and Demo Queries – Recursion is a powerful topic, but it's also a marvelous metaphor. During August, we launched our most self-referential dataset yet: Google Cloud Release Notes are now in BigQuery. Find the product and feature announcements you need, and find them fast, using the new Google Cloud Release Notes Dataset. Additionally, we also launched the Top 25 topics in Google Trends (Looker Dashboard).
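If you want to explore the release notes dataset yourself, a query along the following lines should work from the BigQuery Python client. The table path and column names are assumptions based on the public dataset listing, so double-check them in the BigQuery console:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Assumed public table path and columns for the Google Cloud release notes dataset.
query = """
    SELECT product_name, release_note_type, description, published_at
    FROM `bigquery-public-data.google_cloud_release_notes.release_notes`
    WHERE release_note_type = 'FEATURE'
    ORDER BY published_at DESC
    LIMIT 20
"""
for row in client.query(query).result():
    print(row.published_at, row.product_name, row.description)
```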

Save messages, money, and time with Pub/Sub topic retention. When you enable topic retention, all messages sent to the topic within the chosen retention window are accessible to all the topic's subscriptions, without increasing your storage costs when you add subscriptions. Additionally, messages are retained and available for replay even if there are no subscriptions attached to the topic at the time the messages are published, allowing subscribers to see the entire history of messages sent to the topic.
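As a rough sketch, topic retention can be enabled when the topic is created (or later via an update). The project and topic names below are hypothetical, and the retention field requires a reasonably recent google-cloud-pubsub client:

```python
from google.cloud import pubsub_v1
from google.protobuf import duration_pb2

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "orders")

# Retain all messages published to the topic for seven days, independent of
# which subscriptions exist; subscribers can later seek back within this window.
publisher.create_topic(
    request={
        "name": topic_path,
        "message_retention_duration": duration_pb2.Duration(seconds=7 * 24 * 60 * 60),
    }
)
```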

Extend your Dataflow templates with UDFs. Google provides a set of Dataflow templates that customers commonly use for frequent data tasks but also as reference data pipelines that developers can extend. But what if you want to customize a Dataflow template without modifying or maintaining the Dataflow template code itself? With JavaScript user-defined functions (UDFs), customers can now extend certain Dataflow templates with custom logic to transform records on the fly. This is especially helpful for users who want to customize a pipeline’s output format without having to re-compile or maintain the template code itself.
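To make this concrete, launching a Google-provided template with a UDF mostly means pointing the template at a JavaScript file in Cloud Storage. The sketch below uses the Dataflow REST API through the Python API client; the template choice, bucket, and function name are assumptions for illustration:

```python
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")

# Launch the Google-provided Pub/Sub-to-BigQuery template, pointing it at a
# custom JavaScript UDF stored in Cloud Storage (names are hypothetical).
response = dataflow.projects().locations().templates().launch(
    projectId="my-project",
    location="us-central1",
    gcsPath="gs://dataflow-templates/latest/PubSub_to_BigQuery",
    body={
        "jobName": "pubsub-to-bq-with-udf",
        "parameters": {
            "inputTopic": "projects/my-project/topics/events",
            "outputTableSpec": "my-project:analytics.events",
            "javascriptTextTransformGcsPath": "gs://my-bucket/udfs/transform.js",
            "javascriptTextTransformFunctionName": "transform",
        },
    },
).execute()
print(response["job"]["id"])
```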

The diagram below shows the process flow for UDF-enabled Dataflow templates.

Customer stories

Renault uses BigQuery to improve its Industrial Data platform
Last month, Renault described their journey and the impact of establishing an industrial IoT analytics system using Dataflow, Pub/Sub, and BigQuery to trace tools and products, as well as to measure operational efficiency. By consolidating multiple data silos into BigQuery, the IT infrastructure team was able to reduce storage costs by 50% even while processing several times more data than on their legacy system.

The chart below shows the multifold growth in data that Renault processed each month (blue bars) along with the corresponding drop in cost (red line) between the previous system (shaded area) and the Google solution.

Constellation Brands chose Google Cloud to power Direct to Consumer (DTC) shift

Ryan Mason, Director and Head of Direct to Consumer Strategy at Constellation Brands, authored a piece on the business value of DTC channels, as well as how he implemented his pipelines and the impact they had. This story explained how to gather multiple streams from the Google Marketing Platform (Analytics 360, Tag Manager 360) and land them in BigQuery.

A key differentiator for us is that all Google Marketing Platform data is natively accessible for analysis within BigQuery.

From there, Constellation Brands can calculate key performance indicators (KPIs), such as customer acquisition cost (CAC), customer lifetime value (CLV), and Net Promoter Score (NPS), and broadcast them across the company using Looker dashboards. In this way, the entire organization can track the health of the business through common access to the same KPIs.

The operational impact of Looker [dashboards] is also substantial: our team estimates that the number of hours needed to reach critical business decisions has been reduced by nearly 60%.

How-Tos

Our DevRel Rockstar Leigha Jarett has published four very useful articles in her continuing series on the BigQuery Administration Reference Guide. This month she covered: Monitoring, API Landscape, Data Governance, and Query Optimization.

I highly recommend reading the article on query optimization. It's packed with good tips, from the most commonly used (partitioning and clustering to reduce the amount of data a query has to scan) to some lesser-known ones, such as the proper ordering of expressions.
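To illustrate the most common of those tips, creating a partitioned and clustered copy of a table is a single statement. The dataset, table, and column names below are made up for the example:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Partition by day and cluster by a frequently filtered column so that queries
# filtering on order_ts and customer_id scan far less data.
ddl = """
    CREATE TABLE IF NOT EXISTS analytics.orders_optimized
    PARTITION BY DATE(order_ts)
    CLUSTER BY customer_id AS
    SELECT * FROM analytics.orders_raw
"""
client.query(ddl).result()
```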

This article on workload management in BigQuery has some very useful tips and explains the key constructs of Commitments, Reservations, and Assignments.

If streaming is your game, then this post from Zeeshan Kahn on handling duplicate data in streaming pipelines using Dataflow and Pub/Sub will come in handy.
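The post covers several options, but one core idea can be sketched with the Apache Beam Python SDK: attach a unique attribute to each published message and let Dataflow deduplicate on it at read time. The subscription, attribute, and table names below are assumptions, and the id_label option only takes effect on the Dataflow runner:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        # Deduplicate on a publisher-supplied message attribute (Dataflow runner only).
        | "Read" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/events-sub",
            id_label="unique_event_id",
        )
        | "Decode" >> beam.Map(lambda payload: {"payload": payload.decode("utf-8")})
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",
            schema="payload:STRING",
        )
    )
```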

Zooming up a few thousand feet, Googlers Firat Tekiner and Susan Pierce offer some high-level insights as they discuss the convergence of data lakes and data warehouses. After all, who wants to manage two sets of infrastructure? A unified data platform is the way to go.

And that’s how we do a relaxing summer month here at Google Cloud! Stay tuned for more announcements, how-to blogs, and customer stories as we ramp up for Google Cloud Next coming in October.


What type of data processing organization are you?

Every organization has its own unique data culture and capabilities, yet each is expected to use technology trends and solutions in the same way as everyone else. Your organization may be built on years of legacy applications and considerable accumulated expertise and knowledge, yet you may be asked to adopt a new approach based on a technology trend. Or you may be on the other side of the spectrum: a digitally native organization built from scratch on engineering principles, without legacy systems, but expected to follow the same principles as process-driven, established organizations. The question is, should we treat these organizations the same way when it comes to data processing? In this series of blogs and papers, we explore how to set up an organization from first principles from the data analyst, data engineering, and data science points of view. In reality, no organization is driven solely by one of these; it is likely to be a combination of multiple types. What type of organization you become is then driven by how much you are influenced by each of these principles.

When you are considering what data processing technology encompasses, take a step back and make a strategic decision based on your key goals. These might include optimizing for performance or cost, reducing operational overhead, increasing operational excellence, or integrating new analytical and machine learning approaches. Or perhaps you're looking to leverage existing employees' skills while meeting all your data governance and regulatory requirements. We will explore these different themes and focus on how they guide your decision-making process. You may be coming from technologies that solved some of the problems of the past, and some of their terminology may be more familiar, but they don't scale your capabilities. There is also the opportunity cost of prioritizing legacy and new issues that arise from a transformation effort; as a result, your new initiative can set you further behind on your core business while you play catch-up with an ever-changing technology landscape.

Data value chain

The key for any ingestion and transformation tool is to extract data from a source and start acting on it. The ultimate goal is to reduce complexity and increase the timeliness of the data. Without data, it is impossible to create a data-driven organization and act on insights. As a result, data needs to be transformed, enriched, joined with other data sources, and aggregated to make better decisions. In other words, insights on good, timely data mean good decisions.

When deciding on the data ingestion pipeline, one of the best approaches is to look at the volume, velocity, and type of data that is arriving. Other considerations include the number of different data sources you are managing, whether you need to scale to thousands of sources using generic pipelines, and whether you want to create one generic pipeline and then apply data quality rules and governance. ETL tools are ideal for this use case, as generic pipelines can be written and then parameterized.

On the other hand, consider the data source. Can the data be ingested directly, without transforming and formatting it? If the data does not need to be transformed, it can be ingested directly into the data warehouse as a managed solution. This not only reduces operational costs but also allows for more timely data delivery. If the data arrives in a semi-structured format such as XML, or in a format such as EBCDIC, and needs to be transformed and formatted, then a tool with ETL capabilities can be used, depending on the speed of data arrival.
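As a simple illustration of the direct-ingestion path, loading staged files into BigQuery without an intermediate transformation step can be as small as the sketch below. The bucket, dataset, and file format are assumptions:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Load already well-formed CSV files straight from Cloud Storage into the
# warehouse; no separate transformation cluster is involved.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
load_job = client.load_table_from_uri(
    "gs://my-bucket/raw/orders_*.csv",
    "my-project.staging.orders",
    job_config=job_config,
)
load_job.result()  # wait for the load to complete
```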

It is also important to understand the speed and time of arrival of the data. Think about your SLAs and time durations/windows that are relevant for your data ingestion plans. This would not only drive the ingestion profiles but would also dictate which framework to use. As discussed above, velocity requirements would drive the decision-making process.

Type of Organization

Different organizations can be successful by employing different strategies based on the talent that they have. Just like in sports, each team plays with a different strategy with the ultimate goal of winning. 

Organizations often need to decide on the best strategy for data ingestion and processing: whether to hire an expensive group of data engineers, to rely on your data wizards and analysts to enrich and transform data so it can be acted on, or whether it would be more realistic to train the current workforce to do more functional, high-value work rather than focus on building generally understood and available foundational pieces.

On the other hand, the transformation part of ETL pipelines as we know it dictates where the processing load falls. All of this becomes a reality in the cloud-native world, where data can be enriched, aggregated, and joined in place. Loading data into a powerful, modern data warehouse means you can already join and enrich the data there using ELT. Consequently, ETL in the strict sense isn't really needed anymore if the data can be loaded directly into the data warehouse.

None of this was possible in traditional, siloed, and static data warehouses and data ecosystems, where systems would not talk to each other and there were capacity constraints on both storing and processing data in an expensive data warehouse. This is no longer the case in the BigQuery world, where storage is cheap and transformations are much more capable, without the constraints of virtual appliances.

If your organization is already heavily invested in an ETL tool, one option is to use it to load BigQuery and to transform the data initially within the ETL tool. Once the as-is and to-be states are verified to match, then with the improved knowledge and expertise you can start moving workloads into BigQuery SQL and effectively do ELT.

Furthermore, if your organization is coming from a more traditional data warehouse that relies extensively on stored procedures and scripting, the question to ask is: do I continue leveraging these skills and use the equivalent capabilities provided in BigQuery? ELT with BigQuery is a natural fit, similar to what already exists in Teradata BTEQ or Oracle PL/SQL, but migrating from ETL to ELT requires changes. This change then makes it possible to exploit streaming use cases, such as real-time scenarios in retail, because there is no preceding step before data is loaded and made available.
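For teams coming from BTEQ or PL/SQL, the ELT step can stay in SQL. A minimal sketch, with hypothetical dataset and table names, wraps the transformation in a BigQuery stored procedure and calls it after the raw data has been loaded:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Define the transformation once as a stored procedure...
client.query("""
    CREATE OR REPLACE PROCEDURE analytics.refresh_daily_sales()
    BEGIN
      CREATE OR REPLACE TABLE analytics.daily_sales AS
      SELECT DATE(order_ts) AS day, SUM(amount) AS revenue
      FROM staging.orders
      GROUP BY day;
    END;
""").result()

# ...and call it each time new raw data lands (ELT: transform after load).
client.query("CALL analytics.refresh_daily_sales();").result()
```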

Organizations can be broadly classified into three types: Data Analyst Driven, Data Engineering Driven, and Blended. We will cover Data Science Driven organizations within the Blended category.

Data Analyst Driven

Analysts understand the business and are used to working with SQL and spreadsheets. Allowing them to do advanced analytics through interfaces they are accustomed to enables scaling. As a result, easy-to-use ETL tooling that brings data quickly into the target system becomes a key driver. Ingesting data directly from a source or staging area also becomes critical, as it allows analysts to exploit their key skills using ELT and increases the timeliness of the data. This is commonplace with traditional EDWs and is realized through extended capabilities such as stored procedures and scripting. Data is enriched, transformed, and cleansed using SQL, while ETL tools act as orchestration tools.

The separation of storage and compute brought by cloud computing changes the face of the EDW as well. Rather than creating complex ingestion pipelines, the role of ingestion becomes bringing data close to the cloud, staging it in a storage bucket or on a messaging system before it is ingested into the cloud EDW. This frees data analysts to focus on data insights using the tools and interfaces they are accustomed to.

Data Engineering / Data Science Driven 

Building complex data engineering pipelines is expensive but unlocks increased capabilities. It allows creating repeatable processes and scaling the number of sources, and once complemented with the cloud, it enables agile data processing methodologies. Data science driven organizations, on the other hand, are good at carrying out experiments and producing applications that work for specific use cases but are often not productionised or generalized.

Real-time analytics enables immediate responses, and there are specific use cases where low-latency anomaly detection applications are required. In other words, the business requirement is that data must be acted upon on the fly, as it arrives. Processing this type of data or application requires transformation done outside of the target.

All of the above usually requires custom applications or state-of-the-art tooling, which is achieved by organizations that excel in their engineering capabilities. In reality, very few organizations can be truly engineering driven. Many fall into what we call here the blended organization.

Blended Organization

The above classification can also be used to guide tool selection for each project. For example, rather than choosing a single tool, choose the right tool for the right workload; this reduces operational and licensing costs and makes the best use of the tools available. Let business requirements be the deciding factor: each business unit or team knows the applications they need to connect with to get valuable business insights. This, coupled with the data maturity of the organization, is the key to making sure the data processing tool is the right fit.

In reality, you are likely to be somewhere on a spectrum. Digitally native organizations are likely to be closer to engineering driven, due to their culture and the business they are in, whereas brick-and-mortar organizations tend to be closer to analyst driven due to the significant number of legacy systems and processes they possess. These organizations are either considering or working toward digital transformation, with an aspiration of having a data engineering and software engineering culture like Google's.

A blended organization with strong data engineering skills will have built the platform and frameworks; increasing the number of reusable patterns increases productivity and reduces costs. Data engineers focus on running Spark on Kubernetes, whereas infrastructure engineers focus on the container work. This in turn provides unparalleled capabilities: application developers focus on the data pipelines, and even if the underlying technologies or platforms change, the code stays the same. As a result, security issues, latency requirements, cost demands, and portability are addressed at multiple layers.

Conclusion – What type of organization are you?

Often an organization's infrastructure is not flexible enough to react to a fast-changing technological landscape. Whether your organization is engineering driven or analyst driven, teams frequently look at technical requirements to inform which architecture to implement. But a key, and frequently overlooked, component needed to truly become a data-driven organization is the impact of the architecture on your data users. When you take into account the responsibilities, skill sets, and trust of your data users, you can create the right data platform to meet the needs of your IT department as well as your business.

To become a truly data-driven organization, the first step is to design and implement an analytics data platform that meets your technical and business needs. The reality is that each organization is different, with its own culture, skills, and capabilities. The key is to leverage those strengths to stay competitive while adopting new technologies when needed and as they fit your organization.

To learn more about the elements of how to build an analytics data platform depending on the organization you are, read our paper here.
