Archives 2023

How Charlotte Tilbury Beauty uses Google Cloud to respond to customer data requests

Editor’s note: Today we hear from global beauty brand Charlotte Tilbury Beauty on how they use Google Cloud Workflows and Sensitive Data Protection to respond quickly and at scale to customer data requests.

Launched in September 2013 by iconic beauty entrepreneur Charlotte Tilbury MBE, Charlotte Tilbury Beauty was born out of Charlotte’s long-held desire to empower everyone to feel like the most beautiful version of themselves, helping people around the world gain the confidence to achieve their biggest and boldest dreams.

Through Charlotte’s vision and leadership, at Charlotte Tilbury Beauty we continue to break records across regions, channels, and categories. We now sell more than 500 products across color, complexion, and skincare; we have a physical presence in over 50 global markets, serve 42 countries via charlottetilbury.com, and maintain over 2,000 points of distribution worldwide, including department stores and travel retail.

We are a customer-obsessed brand that strives to build a direct, emotional, and trusting relationship with our customers and partners. Our products and experiences are even better and more tailored when customers choose to share their information with us, trusting us to redact and forget that information should they wish. Organizations that collect customer data have a responsibility to respond to customer data requests, be it a Right-To-Be-Forgotten (RTBF) request or a Data Subject Access Request (DSAR). Below, we focus on the data that resides on our Google infrastructure and explore how Google Cloud Workflows and Google Cloud Sensitive Data Protection (formerly Cloud Data Loss Prevention) have made it easier to respond to customer data deletion requests and enabled our data team to develop an automated RTBF process.

To provide some context to the problem, our tool selection was driven by several needs and challenges:

A desire to have complete visibility over what, where, and how much PII is being stored in our Google Cloud environments
Data deletion from our data warehouse being heavily dependent on the deletion status in source systems
The existing process being highly manual and unscalable for future volumes, as well as having a limited audit trail
A need for the solution to be independent from data warehouse processing workloads
A need for the solution to integrate with our consent management platform (OneTrust) whilst being able to perform complex BigQuery procedures and statements
The language used in our product or marketing-specific content containing names that could be mistaken as PII

Google Cloud Sensitive Data Protection (SDP) provides us with a way to scan the entirety of our BigQuery data and apply business-specific rules and edge cases to tune the performance of data profiling. This has significantly increased our understanding of our PII footprint, not just within our data warehouse but across the business. SDP scans for sensitive data at daily, weekly, or monthly intervals, or when triggered by schema changes or changes in rows of data. The results, called data profiles, can be pushed to a number of destinations including Security Command Center, Chronicle, Dataplex, Pub/Sub, and BigQuery. In our case, we push to BigQuery for simplicity of consumption as part of our data deletion process.
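To illustrate how these profiles can be consumed downstream, here is a minimal sketch that queries a BigQuery table of exported SDP data profiles for columns flagged as PII. The dataset, table, and column names are illustrative assumptions for this example, not the actual SDP export schema.

# Minimal sketch: list columns that SDP profiling flagged as likely PII.
# The table `sdp_profiles.column_profiles` and its schema are
# hypothetical placeholders, not the real SDP export schema.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
    SELECT dataset_id, table_id, column_name, info_type
    FROM `my-project.sdp_profiles.column_profiles`
    WHERE info_type IN ('PERSON_NAME', 'EMAIL_ADDRESS', 'PHONE_NUMBER')
"""
for row in client.query(sql).result():
    print(row.dataset_id, row.table_id, row.column_name, row.info_type)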

As well as data discovery, SDP has the ability to encrypt and mask data based on the type of sensitive data identified.
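As a concrete illustration of that capability, the following minimal sketch masks email addresses in free text using the google-cloud-dlp client; the project ID and the info type list are placeholders.

# Minimal sketch: mask email addresses with the Sensitive Data
# Protection (DLP) API. "my-project" is a placeholder project ID.
import google.cloud.dlp_v2

dlp = google.cloud.dlp_v2.DlpServiceClient()
response = dlp.deidentify_content(
    request={
        "parent": "projects/my-project",
        "inspect_config": {"info_types": [{"name": "EMAIL_ADDRESS"}]},
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [{
                    "primitive_transformation": {
                        "character_mask_config": {"masking_character": "#"}
                    }
                }]
            }
        },
        "item": {"value": "Contact me at jane.doe@example.com"},
    }
)
print(response.item.value)  # e.g. "Contact me at ##################"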

Cloud Workflows gives us the ability to rapidly deploy a process that orchestrates multiple services in order to:

Retrieve the most recent open customer requests
Scan our SDP data profiles for customer occurrences
Apply business rules to determine the course of action
Fulfill the deletion request via the BigQuery API
Provide a post-deletion report for manual checks

By using serverless infrastructure, we bypassed the burden of setting up development environments, SDKs, and API clients. This freed us to focus on meticulously defining and automating the data deletion request process through workflow YAML files and BigQuery SQL. Ultimately, Workflows fulfills the need to orchestrate and control the sequence of API requests to OneTrust, BigQuery (for routine execution and consuming SDP Data Profile results), and Cloud Functions (for processing of OneTrust API JSON response bodies), which allows us to automate a process that must consider dependencies between several systems.
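For example, a Cloud Function that extracts open requests from a OneTrust-style API response might look like the minimal sketch below; the JSON shape and field names are hypothetical, not the actual OneTrust schema.

# Minimal sketch of a Cloud Function that pulls open data-deletion
# requests out of a OneTrust-style JSON response body. The field names
# ("requests", "status", "subjectEmail") are hypothetical.
import functions_framework

@functions_framework.http
def extract_open_requests(request):
    body = request.get_json(silent=True) or {}
    open_requests = [
        {"request_id": r.get("id"), "email": r.get("subjectEmail")}
        for r in body.get("requests", [])
        if r.get("status") == "OPEN"
    ]
    return {"open_requests": open_requests}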

The combination of Google SDP and Cloud Workflows brings simplicity to a complex problem. It has automated PII discovery in our BigQuery assets and has simplified the automation of data redaction through easy-to-define interactions with the BigQuery API and the consent management system API. This approach has future-proofed our solution: new data products or assets are automatically included in future SDP data profiles, and redacting a new data product only requires a new BigQuery routine definition and a new routine invocation defined in the Workflow YAML.
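As an illustration of what such a routine invocation looks like, the sketch below calls a hypothetical BigQuery stored procedure from Python; in our solution the equivalent call is issued from the Workflow definition.

# Minimal sketch: invoke a redaction routine via the BigQuery API.
# `ops.redact_customer` is a hypothetical stored procedure standing in
# for a per-data-product redaction routine.
from google.cloud import bigquery

client = bigquery.Client()
job = client.query(
    "CALL `my-project.ops.redact_customer`(@customer_id)",
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("customer_id", "STRING", "12345")
        ]
    ),
)
job.result()  # wait for the routine to finish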

It is worth mentioning that our infrastructure state is managed in Terraform Cloud, which means our automated deployments are made consistently in a declarative manner. There are still opportunities for continuous improvement, but these tools have given us a strong foundation on which to continue building trust with our customers when it comes to their data.

Source: Data Analytics

Google recognized as a Leader and positioned furthest in vision among all vendors evaluated in the 2023 Gartner Magic Quadrant for Cloud Database Management Systems

We’re excited to share that Gartner has recognized Google as a Leader, positioned furthest in vision among all vendors evaluated, in the 2023 Gartner® Magic Quadrant™ for Cloud Database Management Systems. As a Leader in this report for four consecutive years, we believe Google’s position is a testament to our continuous customer innovation in areas such as built-in intelligence, open data ecosystems, and unified data offerings.

Download the complimentary 2023 Gartner Magic Quadrant for Cloud Database Management Systems report.

“The market has been undergoing a significant shift as companies look to move to more modern, cloud-first approaches. This shift is being driven by a number of factors, including the need for greater agility, scalability, cost savings, and the desire to leverage generative AI capabilities in conjunction with data to innovate,” said Sanjeev Mohan, Principal Analyst at SanjMo and Former Gartner VP. “The way in which Google Cloud is approaching the entire data and AI stack through principles of simplification, unification and open standards is well-aligned to what customers are looking for in a long-term strategic partner.”

Data is critically important to every organization’s AI journey. According to interviews with hundreds of global executives, a strong data foundation can accelerate enterprise-wide AI and revenue growth. The research also found that the higher an organization’s data maturity, the stronger its AI capabilities and offerings tend to be.

However, many organizations are still struggling to extract the full business value of data due to disparate tools and data sources, as well as poor data quality. Moreover, organizations are managing an increasing variety of databases and analytics workloads, resulting in additional complexity, overhead, and risk.

With Google’s Data and AI Cloud, our vision is to bring the simplicity, scalability, security, and intelligence of Google’s data approach to your business, allowing you to unify your data and leverage the best of open-source compatibility, all while providing the latest AI capabilities so you can unlock the full potential of that data.

Manage all data with a unified data cloud

Google’s Data and AI Cloud is based on the principle of simplicity, enabling you to interconnect your data at multiple levels. It provides an open and unified data platform that allows organizations to manage every stage of the data lifecycle — from running operational databases for applications, to managing analytical workloads across data warehouses and data lakes, to data-driven decision making, to AI and machine learning.

Google’s Data and AI Cloud architecture is distinctive, so you can unify your data, people, and workloads. Our databases are built on a highly scalable, distributed storage system with fully disaggregated resources and high-performance Google-owned global networking. This combination allows us to provide tightly integrated data services across AlloyDB, BigQuery, BigLake, Bigtable, Cloud SQL, Dataflow, Dataplex, Dataproc, and Spanner.

We recently launched several capabilities that further strengthen these integrations, making it even easier to accelerate innovation:

Unification of workspace for data and AI people – BigQuery Studio, in preview, brings data engineering, analytics, and ML workloads together, enabling you to edit SQL, Python, Spark and other languages and easily run analytics at petabyte scale without any additional infrastructure management overhead. BigQuery Studio gives you direct access to Colab Enterprise, a new offering that brings Google Cloud’s enterprise-level security and compliance support to Colab.

Unification of transactional and analytical systems – We announced the general availability of Spanner Data Boost, a breakthrough technology that allows you to analyze your Spanner data via services such as BigQuery, Spark on Dataproc, or Dataflow — all with virtually no impact to your transactional workloads. We also made it easier to ingest data from BigQuery back to operational databases like Bigtable, our HBase-compatible, NoSQL database, with just a few clicks. With the new BigQuery Export to Bigtable feature, you can serve analytical insights from your applications without having to touch any ETL tools.

Unified data management and governance – We introduced intelligent data profiling and data quality capabilities to help you understand the completeness, accuracy, and validity of your data. We also launched extended data management and governance capabilities in Dataplex. Now, you have a single experience for all your data and AI assets, including Vertex AI models and datasets, operational databases, and analytical systems.

Unification of all types of data – BigLake lets you work with data of any type, in any location. You no longer have to worry about underlying storage formats and can reduce cost and inefficiencies because BigLake is deeply integrated with BigQuery. We launched the general availability of BigLake Object Tables to help data users easily access, traverse, process, and query unstructured data like images, audio, and documents using SQL. BigLake is in hyper-growth; just since the beginning of this year, there has been a 27x increase in BigLake usage.

Run all your data where it is, with an open data ecosystem

Google Cloud provides industry-leading integration with open source and open APIs, helping to ensure portability and flexibility while reducing the risk of vendor lock-in. These integrations include BigQuery Migration Service, which accelerates migration from traditional data warehouses, and Database Migration Service, which helps you migrate and modernize your operational databases. You can also take advantage of our managed database services that are fully compatible with popular open-source engines such as PostgreSQL, MySQL, and Redis.

We continue to focus on making Google Cloud the most open data cloud. Some recent launches in this area include:

Analyze BigQuery data in other clouds – Many of our customers manage and analyze their data on Google Cloud, AWS, or Azure with BigQuery Omni, which provides a single pane of glass across clouds. Taking BigQuery Omni one step further, we added support for cross-cloud materialized views and cross-cloud joins. In the past six months, BigQuery Omni saw over 120% growth in data processed by customers while querying across AWS and Azure environments.

Run AlloyDB virtually anywhere – We announced the general availability of AlloyDB Omni, the downloadable edition of AlloyDB that runs virtually anywhere — on Google Cloud, AWS, Azure, Google Distributed Cloud Hosted, on-premises, and even on developer laptops. AlloyDB Omni provides the flexibility to run the same enterprise-class database, AlloyDB for PostgreSQL, across all your environments, backed by Google’s enterprise support organization, and at a fraction of the cost of legacy databases. We also launched the preview of the AlloyDB Omni Kubernetes operator, which simplifies common database tasks including database provisioning, backups, secure connectivity, and observability, allowing you to run AlloyDB Omni in most Kubernetes environments.

Support for open table formats – We launched support in BigLake for Hudi, Delta, and Iceberg tables. This allows customers to use streaming ingestion for data in Google Cloud Storage and gain a fully managed experience with automatic storage optimizations, as well as perform DML transactions to enable consistent modifications and improved data security, all while retaining full Iceberg reader compatibility.

Enhance performance for enterprise workloads – We announced Enterprise Plus edition for MySQL and PostgreSQL, providing new availability, performance, and data protection enhancements in Cloud SQL. Cloud SQL Enterprise Plus edition for MySQL delivers up to three times higher performance than Amazon’s comparable MySQL service. We’ve also been working on supercharging our Memorystore for Redis offering with a brand new fully managed offering, Memorystore for Redis Cluster, which is now generally available. With Memorystore for Redis Cluster, you get an easy-to-use, open-source-compatible Redis Cluster service that provides up to 60 times more throughput than Memorystore for Redis, with microsecond latencies.

We’ve also significantly expanded our data cloud partner ecosystem, and are increasing our partner investments across many new areas. Today, more than 1,000 software partners are building their products using Google’s data cloud, and more than 70 data platform partners offer validated integrations through our Google Cloud Ready – BigQuery initiative. We also announced Google Cloud Ready for Cloud SQL, a program that recognizes partner solutions that have met integration requirements with Cloud SQL. This program joins our existing Google Cloud Ready for AlloyDB partner program.

Unlock new value from data with integrated AI

AI provides numerous opportunities to activate your data. So, we made AI easily accessible to all your data teams and also made it effortless to use your data to train AI models. Here are some recent launches:

Easily build enterprise gen AI apps – We introduced AlloyDB AI, an integral part of AlloyDB, which offers an integrated set of capabilities for easily building enterprise gen AI apps, everywhere. AlloyDB AI runs vector queries up to 10x faster compared to standard PostgreSQL when using the IVFFlat index, allows you to easily generate embeddings from within your database, and fully integrates with Vertex AI and open-source gen AI tools. We also launched the preview of BigQuery feature tables and vector embeddings to store all your ML features and vector embeddings. This allows you to build powerful semantic searches and run recommendation queries at the scale of your BigQuery data in real time.

Access to foundational models – We enabled users to access Vertex AI’s foundation models directly from BigQuery. With just a single statement, you can connect a BigQuery table to a large language model (LLM) and tune prompts with your BigQuery data (see the sketch after this list). This allows you to use gen AI capabilities such as text analysis on your data or generate new attributes to enrich your data model. Additionally, we launched the BigQuery ML inference engine, which allows you to embrace the ecosystem of pretrained models and open ML frameworks. It helps you run predictions on Google vision, natural language, and translation models in BigQuery, import models in additional formats like TensorFlow Lite, ONNX, and XGBoost, and directly use models hosted in Vertex AI.

Enhance productivity with Duet AI – We launched Duet AI in BigQuery to simplify data analysis, generate code, and assist with writing SQL queries and Python code, allowing you to focus more on logic and outcomes. We also announced Duet AI in Spanner, which allows you to generate code to structure, modify, or query your data using natural language. And to make it even easier to modernize your Oracle databases, we brought the power of Duet AI to the last mile of Oracle to PostgreSQL migrations with Duet AI in Database Migration Service. Finally, Duet AI in Dataplex can be used for metadata insights to solve the cold-start problem — how do I know which questions I can ask my data?
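To make the “single statement” idea concrete, here is a minimal sketch of connecting a BigQuery table to an LLM from the Python BigQuery client. The dataset, connection, and table names are placeholders, and the exact model options may differ for your project.

# Minimal sketch: create a remote model backed by a Vertex AI LLM, then
# generate text over table rows. All names below are illustrative.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
    CREATE OR REPLACE MODEL `mydataset.llm_model`
    REMOTE WITH CONNECTION `us.my_vertex_connection`
    OPTIONS (REMOTE_SERVICE_TYPE = 'CLOUD_AI_LARGE_LANGUAGE_MODEL_V1')
""").result()

rows = client.query("""
    SELECT ml_generate_text_result
    FROM ML.GENERATE_TEXT(
        MODEL `mydataset.llm_model`,
        (SELECT CONCAT('Summarize: ', review_text) AS prompt
         FROM `mydataset.product_reviews` LIMIT 10),
        STRUCT(0.2 AS temperature, 256 AS max_output_tokens))
""").result()
for row in rows:
    print(row.ml_generate_text_result)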

We are honored to be recognized as a Leader in the 2023 Gartner Magic Quadrant for Cloud Database Management Systems.

What’s next

We look forward to continuing to innovate and partner with you on your digital transformation journey. Download the complimentary 2023 Gartner Magic Quadrant for Cloud Database Management Systems report.

2023 Gartner Magic Quadrant for Cloud Database Management Systems, Adam Ronthal, Henry Cook, Rick Greenwald, Aaron Rosenbaum, Ramke Ramakrishnan, Xingyu Gu, December 18, 2023.

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research and Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark, and MAGIC QUADRANT is a registered trademark, of Gartner, Inc. and/or its affiliates in the US and internationally, and are used herein with permission. All rights reserved. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Google.

Source: Data Analytics

Zeotap builds marketer’s AI companion with Vertex AI

In today’s fast-changing marketing world, data is king. Marketers are under pressure to show solid returns on investment (ROI). With tightening budgets, marketers find themselves leaning heavily on data for strategic planning, audience targeting, performance evaluation, and efficient resource allocation.

As businesses strive to better understand their customers and deliver meaningful experiences, Customer Data Platforms (CDPs) have emerged as crucial tools in a marketer’s kit, enabling brands to build unified profiles of their customer data from all channels. They clean and standardize data across sources for easy integration with AdTech and MarTech platforms.

One pioneering CDP providing marketers with innovative and comprehensive customer solutions is Zeotap. By partnering with Google Cloud and leveraging the same cloud technologies powering Search, YouTube, and Google Maps, Zeotap has built an intuitive marketers’ CDP.

As our collaboration redefines how brands manage and engage with customer data, this blog shows how Zeotap is leveraging Google’s generative AI to help marketers derive even more value from their customer data: a CDP that is easy to use yet robust, driving deeper insights and marketing success.

Building a marketing companion with Vertex AI

Building effective and impactful marketing campaigns requires new ways to build deeper relationships with your customers while delivering results. For large brands with multiple customer touch points, complex segmentation models are essential to provide context and time-based alerts and offers. However, these models can be difficult for non-technical users to understand and leverage effectively.

Ada™, Zeotap’s AI Companion, is here to guide marketers through intuitive steps to build and analyze customer data and make insight-driven decisions. This seamless, accessible introduction enables all marketers, regardless of technical skill, to unlock valuable insights from their data. By simply conversing with Ada and describing their business goals and available data, marketers can effortlessly build custom segments that Ada translates into actionable rules to review, save, or activate.

Architecture overview

The foundation of this application lies atop Google’s Large Language Model (LLM) PaLM 2 on Vertex AI, which possesses an extensive understanding of human language and context. This model serves as the core component responsible for interpreting natural language commands. The deployment includes autonomous agents using this powerful LLM, serving as asynchronous threads of thought that coalesce toward one common goal. Zeotap uses an ensemble [1] of such agents [2], called a Mixture of Experts, that work as a team to refine their ideas and provide a clear, straightforward response. Before taking any action, the automated assistants map out exactly what they plan to do using a method called ReAct [3].
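While the production implementation is more elaborate, the following minimal sketch shows the ReAct pattern of interleaving reasoning with tool calls; the llm callable, the tool registry, and the "FINISH:" protocol are illustrative assumptions, not Zeotap’s actual code.

# Minimal ReAct-style loop: reason, act with a tool, observe, repeat.
# `llm` stands in for a call to PaLM 2 on Vertex AI; the prompt and
# action format are illustrative.
from typing import Callable, Dict

def react_agent(goal: str, llm: Callable[[str], str],
                tools: Dict[str, Callable[[str], str]],
                max_steps: int = 5) -> str:
    transcript = f"Goal: {goal}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought and next action?")
        transcript += step + "\n"
        if step.startswith("FINISH:"):          # the agent is done
            return step.removeprefix("FINISH:").strip()
        tool_name, _, tool_input = step.partition(":")
        if tool_name.strip() in tools:          # act, then record the observation
            observation = tools[tool_name.strip()](tool_input.strip())
            transcript += f"Observation: {observation}\n"
    return transcript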

Data flow

When a user describes a segment to build, our system gathers the relevant catalog from internal data stores and aggregates it into an LLM-readable (JSON) format. After precise prompt tuning and elaborately crafted flows, we provide the AI with the user’s perceived intent, reference information, and plenty of sanity checks. Once the semantic intent is understood, the AI queries the metadata from the backing databases and identifies the relevant entities. Each of these entities is refined via a business-context-aware, exhaustive set of sanity checks using custom-tailored heuristics to keep the agent’s hallucinations in check. Special care is taken to ensure that these processes do not change or interfere with the underlying client data.

Vector similarity search

Vertex AI’s Vector Search employs machine learning to grasp the essence and context of disorganized data. It relies on huge pre-trained models (text-embedding-gecko in this case) that have a broad range of knowledge and interpret meanings with great accuracy. These models translate words, sentences, or paragraphs into numerical representations that encapsulate the root meaning, so similar vectors correspond to similar ideas.
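As a minimal sketch of this idea, the snippet below embeds a set of candidate values with textembedding-gecko via the Vertex AI SDK and matches a free-form answer to the closest one by cosine similarity; the candidate values and model version are illustrative.

# Minimal sketch: match an answer to the closest allowed value using
# text-embedding vectors and cosine similarity.
import numpy as np
from vertexai.language_models import TextEmbeddingModel

model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")

def embed(texts):
    return np.array([e.values for e in model.get_embeddings(texts)])

allowed_values = ["United Kingdom", "United States", "Germany"]
answer_vec = embed(["UK"])[0]
value_vecs = embed(allowed_values)

sims = value_vecs @ answer_vec / (
    np.linalg.norm(value_vecs, axis=1) * np.linalg.norm(answer_vec))
print(allowed_values[int(np.argmax(sims))])  # -> "United Kingdom"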

To break down the task into operators and values, we make two distinct requests to Vertex AI’s LLM PaLM 2 (text-bison) using the same basic information. Each request involves the context (the user’s input), the available values, and the previous agent’s response within the input. Because the pool of operators is limited, the AI Companion can consistently provide a reasonable response without needing further refinement. However, the actual answer may be wrong or missing. Additionally, the agent can only use a limited range of values and doesn’t understand columns with multiple possibilities. To address this, we compare Ada’s answer to the possible values for that column until we find a match using similarity search.

Once we have built these three structured groups, the primary role of PaLM 2 text-bison concludes. At this point, we employ these well-structured groups to construct SQL queries, which run using a designated client SQL Query Engine. We use this output to pre-fill the segment conditions, which the user can verify before saving the audience for activation.
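As an illustration (not the production query builder), the sketch below turns structured (column, operator, value) groups into a parameterized WHERE clause; the column names are invented for the example, and parameter placeholders keep the generated SQL safe from injection.

# Minimal sketch: build segment-condition SQL from structured groups.
conditions = [
    ("country", "=", "United Kingdom"),
    ("lifetime_value", ">", 500),
    ("last_purchase_days", "<=", 30),
]

where = " AND ".join(
    f"{col} {op} @p{i}" for i, (col, op, _) in enumerate(conditions)
)
sql = f"SELECT customer_id FROM customer_profiles WHERE {where}"
params = {f"p{i}": val for i, (_, _, val) in enumerate(conditions)}
print(sql)     # SELECT customer_id FROM customer_profiles WHERE country = @p0 AND ...
print(params)  # {'p0': 'United Kingdom', 'p1': 500, 'p2': 30}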

Better together: Zeotap + Google Cloud

Zeotap and Google Cloud are working together to transform how companies manage and engage with their clients. Our collaborative solutions are already driving value for Zeotap’s customers, offering an innovative, user-friendly interface that prioritizes simplicity and results. By harnessing the power of Google Cloud’s gen AI models, we are committed to making data-driven marketing more accessible and efficient.

Google’s generative AI technology has been instrumental in helping us unlock new possibilities for our customers. The synergy between Zeotap’s platform and Google’s advanced models has enabled us to deliver innovative solutions that improve accuracy, efficiency, and personalization. We are excited to continue collaborating with Google and exploring the potential of generative AI to transform the industry.

Our vision extends beyond audience refinement; we are dedicated to enhancing user experiences and pioneering innovative solutions. This involves streamlining data integration, automating data mapping, and equipping marketers with cutting-edge AI technology for effortless customer insights. In the upcoming months and years, Zeotap is committed to continuing our collaboration with Google Cloud to capitalize on all of the benefits that gen AI will bring to our customers.

Learn more about Google Cloud’s open and innovative generative AI partner ecosystem. Read more about Zeotap and Google Cloud.

References

[1] Chen, Zixiang, et al. “Towards understanding mixture of experts in deep learning.” arXiv preprint arXiv:2208.02813 (2022).
[2] Karpas, Ehud, et al. “MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning.” arXiv preprint arXiv:2205.00445 (2022).
[3] Yao, Shunyu, et al. “ReAct: Synergizing reasoning and acting in language models.” arXiv preprint arXiv:2210.03629 (2022).
[4] Ji, Bin. “VicunaNER: Zero/Few-shot Named Entity Recognition using Vicuna.” arXiv preprint arXiv:2305.03253 (2023).

Source: Data Analytics

Dataflow and Vertex AI: Scalable and efficient model serving

If you’re considering using Vertex AI to train and deploy your models, you’re on the right track! Data is essential for machine learning: the more data a model has, and the higher its quality, the better the model will perform. Before training a model, the data must be preprocessed, which means cleaning, transforming, and aggregating it into a format that the model can understand. Data preprocessing is also important when serving a model, but it can be more complex due to factors such as real-time streaming data, hardware scalability, and incomplete data.

When you’re handling large amounts of data, you need a service that’s both scalable and reliable. Dataflow fits the bill perfectly, as it can process data in both real-time and batch mode, and it’s ideal for models with high throughput and low latency requirements.

Dataflow and Vertex AI work great together, so keep reading to learn how to use these two powerful services to serve models for streaming prediction requests.

Use Case: Streaming Prediction Requests

Certain applications, such as anomaly detection in sensor data and predictive maintenance for industrial equipment, demand real-time predictions from machine learning models. Surprisingly, implementing real-time prediction systems doesn’t require an overly complex setup. If your machine learning model needs to make predictions on real-time data, a straightforward approach involves utilizing a Pub/Sub topic to capture real-time data, a Dataflow pipeline to preprocess and transform the data, and a Vertex AI endpoint to execute the machine learning model and generate predictions. Additionally, you can enable model monitoring to track any data or model changes that could impact prediction accuracy.

Deploy Model to Vertex AI Endpoint

First, we will need a trained model stored in Vertex AI Model Registry before the serving solution can be implemented. This can be done by either training a model in Vertex AI or importing a pre-trained model.

Now, with just a few clicks (or API calls), you can deploy your model to an endpoint in Vertex AI, so it can serve online predictions. You can enable model monitoring without writing any additional custom code, which helps ensure that there is no skew between the training and serving data.
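For example, deploying a registered model with the Vertex AI Python SDK can be as short as the following sketch; the project, region, model resource name, and machine type are placeholders.

# Minimal sketch: deploy a model from the Vertex AI Model Registry to
# an endpoint. All identifiers are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model("projects/my-project/locations/us-central1/models/123")
endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=2,
)
print(endpoint.resource_name)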

Instead of deploying the model to an endpoint, you can use the RunInference API to serve machine learning models in your Apache Beam pipeline. This approach has several advantages, including flexibility and portability. However, deploying the model in Vertex AI offers many additional benefits, such as the platform’s built-in tools for model monitoring, TensorBoard, and model registry governance.

Vertex AI also provides the ability to use the Optimized TensorFlow runtime in your endpoints. To do this, simply specify the Optimized TensorFlow runtime container when you deploy your model.

The Optimized TensorFlow runtime is a runtime that can improve the performance and cost of TensorFlow models. You can learn more about how to use it to speed up model inference here. This blog post contains benchmark data that shows how well it performs.

Data Processing Dataflow Pipeline

Apache Beam has built-in support for sending requests to a remotely deployed Vertex AI endpoint by using the VertexAIModelHandlerJSON class. With just a couple of lines of code, we can send the preprocessed message for inference.

from apache_beam.ml.inference.vertex_ai_inference import VertexAIModelHandlerJSON

model_handler = VertexAIModelHandlerJSON(
    endpoint_id=known_args.endpoint_id,
    project=known_args.project_id,
    location=known_args.location,
)

Now, we’ll use Dataflow for the data preprocessing part. Below, you can find a code snippet of a Python Apache Beam pipeline which:

1. Reads messages from Pub/Sub

2. Preprocesses the message. This can include the following:
a. Cleaning the data
b. Handling missing values
c. Encoding categorical data
d. Feature scaling

3. Sends a prediction request to the Vertex AI endpoint using the Vertex AI model handler

4. Processes the output. In this instance, we transform the raw output of the model into a format that is easily interpretable.

5. Write to BigQuery. Store the output in BigQuery so it can be easily retrieved.
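The Preprocess and Postprocess transforms referenced in steps 2 and 4 are not spelled out in this post; a minimal, illustrative shape, with parsing and feature logic that are assumptions rather than the original code, might look like this:

# Illustrative DoFns for the pipeline below; adjust the parsing,
# feature scaling, and output mapping to your own model.
import json
import apache_beam as beam
from apache_beam.ml.inference.base import PredictionResult

class Preprocess(beam.DoFn):
    def process(self, message: bytes):
        record = json.loads(message.decode("utf-8"))
        # Handle a missing value and apply simple feature scaling.
        reading = float(record.get("sensor_reading", 0.0)) / 100.0
        yield {"reading": reading}

class Postprocess(beam.DoFn):
    def process(self, result: PredictionResult):
        score = float(result.inference[0])  # shape depends on your model
        yield {"score": score, "is_anomaly": score > 0.5}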

with beam.Pipeline(options=pipeline_options) as p:
    elements = (
        p
        | "Read PubSub" >> beam.io.ReadFromPubSub(
            subscription=known_args.subscription
        ).with_output_types(bytes)
        | "Preprocess" >> beam.ParDo(Preprocess())
        | "Run Vertex Inference" >> RunInference(model_handler)
        | "Process Output" >> beam.ParDo(Postprocess())
        | "Write to BigQuery" >> beam.io.WriteToBigQuery(
            table=f"{known_args.bq_location}"
        )
    )

What’s next?

The Apache Beam pipeline can be easily converted into a Flex Template, which allows multiple teams in the same company with similar use cases to reuse it. You can read more about flex templates here. Also, the Dataflow streaming pipeline can be run as one step of a Vertex AI Pipeline (take a look at some of the pre-built components).

In conclusion, Dataflow + Vertex AI is a powerful combination for serving machine learning models for both batch and streaming prediction requests. Dataflow can process data in both real-time and batch mode, and it’s ideal for use cases that require high throughput and low latency. Vertex AI provides a platform for deploying and managing models, and it also offers many additional benefits, such as built-in tools for model monitoring, the ability to leverage the Optimized TensorFlow runtime, and Model Registry governance.

To learn more about how to use Dataflow and Vertex AI to serve machine learning models, please visit the following resource for detailed code samples: Apache Beam RunInference with Vertex AI.

Ready to discuss your cloud needs? Learn how Google Cloud Consulting can help you implement an end-to-end solution. Visit cloud.google.com/consulting.

Source: Data Analytics

5 Reasons Data-Driven SEO Agencies Are the Future

Data analytics is undoubtedly the future of the marketing profession. Businesses spent $3.9 billion on marketing analytics in 2021 and that figure is likely to reach $14.3 billion by 2031. Big data technology has led to tremendous changes in the marketing realm in recent years. A growing number of businesses are turning to marketing agencies […]

Source: SmartData Collective