Build an image data classification model with BigQuery ML

There are countless use cases for capturing, storing, and classifying unstructured image data: social media analytics to find missing people, image analytics for tracking road traffic, or media analytics for e-commerce recommendations, to name a few. Yet most organizations are not able to be fully data driven, because the majority of data generated today is highly unstructured, and large-scale analytics across data types and formats runs into three limiting factors for enterprise applications: 1) data storage and management, 2) infrastructure management, and 3) availability of data science resources. With BigQuery's new unstructured data analysis feature, you can now store, process, analyze, model, and predict with unstructured data, and combine it with structured data in the same queries. Best of all, you can do all of this in no-code, SQL-only steps.

In this blog, we will discuss the use case of storing and analyzing images of yoga poses in BigQuery, and then implement a classification model with BigQuery ML to label the poses using only SQL constructs.

BigQuery and BQML

BigQuery is a serverless, multi-cloud data warehouse that can scale from bytes to petabytes with zero operational overhead, which makes it a great choice for storing ML training data. In addition, the built-in BigQuery Machine Learning (BQML) and analytics capabilities allow you to create no-code predictions using just SQL queries. And you can access data from external sources with federated queries, eliminating the need for complicated ETL pipelines. You can read more about everything BigQuery has to offer on the BigQuery page.

So far, we have known BigQuery as a fully managed cloud data warehouse that helps users analyze structured and semi-structured data. Now:

BigQuery has expanded to perform analytics and ML on unstructured data as well

We can use SQL queries to perform insightful analysis and ML on images, videos, audio, and more at scale, without having to write additional code

We can combine structured and unstructured data as if they all existed together in a table

We will discuss these in our Yoga Pose Classification use case covered in the next section.

Image Data Classification with BigQuery ML

With first-of-its-kind access to image data through "structured" queries, we can now predict results using machine learning classification models in BigQuery ML. For easy understanding, I have narrowed the process down to five steps.

1. Create a dataset and a BigLake connection

For our use case of detecting five yoga poses in images, I have used a publicly available dataset, which you can access from this repo. The yoga poses we are identifying are limited to Downdog, Goddess, Plank, Tree, and Warrior2.

Before you begin with the BigQuery dataset creation, make sure to select or create a Google Cloud project and check that billing is enabled on the project. Enable the BigQuery API and the BigQuery Connection API.

a. Create the dataset "yoga_set" from the BigQuery console (a programmatic alternative is sketched after these steps)

b. A BigLake connection lets us connect to an external data source, in our case Cloud Storage for the image data, while retaining fine-grained BigQuery access control and security. We will use this connection to read objects from Cloud Storage. Follow the steps below to create the BigLake connection

Click ADD DATA on the Explorer pane of the BigQuery page:

Click Connections to external data sources and select BigLake and Remote functions option:

Provide a connection name and create the connection. Remember to take note of the service account ID that is created in this process.
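
As a programmatic alternative to step (a), here is a minimal sketch using the google-cloud-bigquery Python client; the "US" location is an assumption and should match the region of your bucket and connection:

from google.cloud import bigquery

client = bigquery.Client()

# Assumption: the US multi-region; keep the dataset, bucket,
# and BigLake connection in the same location.
dataset = bigquery.Dataset(f"{client.project}.yoga_set")
dataset.location = "US"
client.create_dataset(dataset, exists_ok=True)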

2. Create a Cloud Storage bucket and grant permissions

We are going to use a Cloud Storage bucket to hold the image files of yoga poses on which we want to build the model.

a. Go to Cloud Storage Buckets page and click CREATE

b. On the Create a bucket page, enter your bucket information and continue, making sure the bucket is in the same region as the dataset and the connection discussed in the steps above, then click CREATE

c. Once the bucket is created, store your images in it (through the console, Cloud Shell commands, or programmatically) and grant the connection's service account (which we saved earlier) the necessary permissions to access the images:

> export sa="yourServiceAccountId@email.address"
> gsutil iam ch serviceAccount:$sa:objectViewer "gs://<<bucket>>"
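
If you would rather store the images programmatically, here is a minimal sketch using the google-cloud-storage client; the bucket name and local path are placeholders:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("your-yoga-bucket")  # placeholder bucket name

# Upload one local image into a per-pose folder in the bucket;
# loop over your dataset for the rest.
blob = bucket.blob("Downdog/00000097.jpg")
blob.upload_from_filename("local_images/Downdog/00000097.jpg")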

3. Create an object table

Create an external object table in BigQuery to access the unstructured data in the bucket using the connection we created. Run the CREATE statement below from the BigQuery editor:

CREATE OR REPLACE EXTERNAL TABLE `<<dataset>>.<<table_name>>`
WITH CONNECTION `us.<<connection-name>>`
OPTIONS(
  object_metadata = "SIMPLE",
  uris = ["gs://<<bucket>>/<<folder>>/*.jpg"]);

The external table is created as shown below:

Let’s quickly query a pose from the newly created table:

SELECT data, uri
FROM `yoga_set.yoga_poses`
WHERE REGEXP_CONTAINS(uri, 'gs://yoga_images/Downdog')
LIMIT 1;

As you can see in the screenshot below, you can create and operate on unstructured images as if they are structured data:

Now let's use a small Python snippet to visualize the query result from above:
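
The snippet below is a minimal sketch of that idea, assuming the google-cloud-bigquery and Pillow packages; it re-runs the query and renders the raw bytes returned in the data column:

import io

from google.cloud import bigquery
from PIL import Image

client = bigquery.Client()
sql = """
SELECT data, uri
FROM `yoga_set.yoga_poses`
WHERE REGEXP_CONTAINS(uri, 'gs://yoga_images/Downdog')
LIMIT 1
"""

# The object table's data column holds the raw image bytes,
# so they can be decoded and displayed directly.
row = list(client.query(sql).result())[0]
print(row.uri)
Image.open(io.BytesIO(row.data)).show()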

Now that we have created the external table and accessed images from Cloud Storage using only SQL queries, let us move on to the next section: creating the classification model.

4. Create the model and upload it to Cloud Storage

For this implementation, we are going to use the pre-trained ResNet 50 model to run inference on the object table we just created. The ResNet 50 model analyzes image files and outputs a batch of vectors (logits) representing the likelihood that an image belongs to each class.

Before moving on to this step, make sure you have all the necessary permissions in place. Then follow the below steps:

a. Download the model from this location and save it locally
b. It should unpackage into saved_model.pb and a variables folder
c. Upload these two (the file and the folder) into the bucket we created in the previous section (a programmatic way to obtain such a model is sketched below)
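
The blog's download link is not reproduced here; as one alternative (an assumption on my part, not the post's exact artifact), you can export an equivalent pre-trained ResNet 50 yourself with TensorFlow. Note that the layer names in your export, and therefore the model's input and output field names in BigQuery, may differ from the ones shown later in this post:

import tensorflow as tf

# Export a ResNet 50 pre-trained on ImageNet as a TensorFlow
# SavedModel; this produces saved_model.pb and a variables/ folder.
model = tf.keras.applications.ResNet50(weights="imagenet")
tf.saved_model.save(model, "resnet_model")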

Once this step is completed, your model-related files should be present in the same bucket as your images.

5. Load the model into BQML and infer!

In this step, we are going to load the model into the same BigQuery dataset as the external table we created earlier and apply it to the images we have stored in Cloud Storage.

a. From the BigQuery editor, run the following SQL statement:

CREATE MODEL `<<dataset>>.<<model_name>>`
OPTIONS(
  model_type = 'TENSORFLOW',
  model_path = 'gs://<<bucket>>/*');

Once the execution is completed, you will see the model listed in your dataset in BigQuery.

b. Inspect the model to see its input and output fields. Expand the dataset and click the model we just created, "yoga_poses_resnet", then click the Schema tab:

In the Labels section, you see the "activation_49" field, which represents the output field. In the Features section, you can see "input_1", which represents the field expected as input to the model. You will reference "input_1" in your inference (prediction) query as the field you pass in for your "test" data.

c. Infer your Yoga Pose!

Let's use the model we just created to classify our test image data. Make sure you have identified some test images (yoga poses) in your Cloud Storage bucket that made it into the external table when we created it. We are going to selectively query for those test images in BigQuery and perform inference using the BQML model we just created. Use the query below to trigger the test.

SELECT *
FROM ML.PREDICT(
  MODEL yoga_set.yoga_poses_resnet,
  (SELECT uri, ML.DECODE_IMAGE(data) AS input_1
   FROM yoga_set.yoga_poses
   WHERE REGEXP_CONTAINS(uri, 'gs://yoga_images/Downdog/00000097.jpg')));

In the query above, we select one test image, identified by a specific URI value (00000097.jpg), from the external table. The SELECT clause also applies the ML.DECODE_IMAGE construct to the data field, aliased as "input_1", so that the ML.PREDICT function can consume it.

Once execution is completed, you will see the result as shown below:

Now, for those who know the ResNet model in depth, this should help explain the classification. For those like me, let's code a small snippet to understand the classification visually.

d. Flattening the result

One way of visualizing the above output is to flatten the activation_49 field values using BigQuery SQL's UNNEST construct. Refer to the query below for flattening the result from the earlier step. If you want to textually label the resulting class, you can introduce that logic in place of the <<LABEL_LOGIC>> placeholder in the query (uncomment the line when using it).

WITH predictions AS (
  SELECT
    uri, data,
    SPLIT(uri, "/")[OFFSET(ARRAY_LENGTH(SPLIT(uri, "/")) - 1)] AS img,
    i AS label_i,
    -- <<LABEL_LOGIC>> AS label,
    score
  FROM ML.PREDICT(
    MODEL yoga_set.yoga_poses_resnet,
    (SELECT data, uri, ML.DECODE_IMAGE(data) AS input_1
     FROM yoga_set.yoga_poses
     WHERE REGEXP_CONTAINS(uri, 'gs://yoga_images/Goddess/00000007.jpg'))),
  UNNEST(activation_49) AS score WITH OFFSET i)
SELECT * FROM predictions
ORDER BY score DESC
LIMIT 5;

Without the class labeling logic, below is the output of the query:

You can read further about the model and apply the logic that works best with your data and the model output.
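
As one example of such logic (an assumption on my part, not from the original post): since this ResNet 50 variant is trained on ImageNet, a class index can be mapped to a human-readable label in Python using the labels file from the TensorFlow tutorials. Verify the class ordering against your specific SavedModel, since some exports include an extra "background" class at index 0:

import urllib.request

# The 1001-entry ImageNet labels file used in TensorFlow tutorials
# ("background" first, then the 1000 ImageNet classes).
LABELS_URL = ("https://storage.googleapis.com/download.tensorflow.org/"
              "data/ImageNetLabels.txt")
labels = urllib.request.urlopen(LABELS_URL).read().decode().splitlines()

label_i = 251  # hypothetical value from the query's label_i column
# If your model outputs 1000 logits (no background class), offset by one.
print(labels[label_i + 1])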

e. Visualizing the inference

Finally, a quick Python snippet to visualize the result of the classification! Export the above query result to a CSV file and reference it in the Python code.
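
Here is a minimal sketch of that snippet, assuming pandas and matplotlib, with "predictions.csv" as a placeholder name for the exported file:

import matplotlib.pyplot as plt
import pandas as pd

# Load the exported top-5 rows and plot the score per class index.
df = pd.read_csv("predictions.csv")
ax = df.plot.barh(x="label_i", y="score", legend=False)
ax.set_xlabel("score")
ax.set_title(df["img"].iloc[0])  # image file name from the query
plt.show()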

The image output above refers to the yoga pose "Downward Dog", which is exactly the test input we passed into the ML.PREDICT query for classification using BQML!

Unifying structured and unstructured with BigQuery

Lastly, my favorite part of this implementation is unifying fields from my structured relational table with this unstructured image data. I created a structured BigQuery table in the same dataset as the external table to hold the pose and its health-related data.

The image above represents the schema of the structured data table named "yoga_health"; its fields are pose, focus, health_benefit, and breath. The query below joins the structured and unstructured data:

SELECT SPLIT(uri, "/")[OFFSET(ARRAY_LENGTH(SPLIT(uri, "/")) - 2)] AS pose,
  a.health_benefit, breath, focus, data
FROM `abis-345004.yoga_set.yoga_health` a, yoga_set.yoga_poses b
WHERE a.pose = SPLIT(uri, "/")[OFFSET(ARRAY_LENGTH(SPLIT(uri, "/")) - 2)];

Below is the result:

Note: All of the queries we have covered in this blog can be run directly from a Python notebook using the BigQuery magic commands, for example:
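
A sketch, assuming the google-cloud-bigquery package in a Jupyter or Colab notebook:

# In one cell, load the BigQuery cell magic:
%load_ext google.cloud.bigquery

# In another cell, run any query from this blog; the result lands
# in a pandas DataFrame named after the variable you specify.
%%bigquery results_df
SELECT data, uri
FROM `yoga_set.yoga_poses`
LIMIT 5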

Try it out

That's it! We have successfully stored and queried unstructured data in BigQuery, created a classification model using BQML, and predicted test yoga poses with the model. If you would like to implement this, get started with your Google Cloud project and follow the codelab. And if you would like to learn more about databases or other end-to-end application implementations in Google Cloud, please head to my blogs. For feedback and questions, you can stay in touch with me here.
