Archives 2022

Blockchain Technology Makes USDT a Valuable Cryptocurrency

Blockchain technology has been instrumental in the growth of cryptocurrencies. Bitcoin was the first cryptocurrency to rely on blockchain technology, and many other digital coins have since emerged as the technology has matured.

USDT is one of the cryptocurrencies that has risen to prominence in blockchain’s wake. Blockchain has not only made this digital coin possible; it has also made it easier to integrate with other platforms. A number of regulators have raised concerns about it, but it is becoming popular nonetheless.

USDT is available on multiple networks and is one of the biggest stablecoins by market capitalization. It comes as no surprise that the best way to store it is in a reputable cryptocurrency wallet that can connect to the blockchain. This is where MetaMask comes into play: adding USDT to MetaMask is one of the simplest options, since the wallet connects easily to the blockchain, so let’s see how to do that.

Copy the USDT contract address

Before we start, we must mention that crypto terminology can be quite challenging for beginners. So let’s explain: MetaMask is a crypto wallet that you use to store your crypto assets, in this case the stablecoin USDT. You can easily exchange coins through the blockchain at your convenience.

In this article, we will be using crypto terms that you might not be familiar with. But don’t worry, it is easy to learn their meanings with a good crypto glossary that will familiarize you with all the terms you need.

Before you can add USDT to MetaMask, you first need to purchase some of it. While you can do so through your bank account or third-party payment solutions, it’s easiest to buy it with a debit card. The actual procedure of buying this stablecoin is pretty simple, and for more info, you can check the TradeCrypto website, which has a helpful guide on how to buy USDT with a debit card. Once you have it, in order to add USDT to MetaMask, you first need to copy its contract address. The reason for this is simple: USDT is not automatically added to MetaMask, meaning that you will need to import it as a custom token. Here is how to do that:

Go to CoinMarketCap.com; 

Look for “USDT”;

Find the part that says “contracts”;

And from there, you can copy your USDT address.

How to log in to your MetaMask wallet

If you are a new MetaMask user, you will need to sign up and make an account. In order to start, you will first need to download the MetaMask Chrome extension.

Go to the wallet’s official website;

Click the “Install MetaMask for Chrome” button;

On the new page, click “Add to Chrome”; 

Now, pick “Add extension” and open it;

All that’s left is to start the sign-up.

That is all that you need to do! You don’t need to connect directly to the blockchain, because the wallet can handle that process for you.

How to sign up on MetaMask

Start by opening the MetaMask extension or mobile app. The first thing you will need to do is agree to the user terms. Afterward, select the wallet creation button (you will need to agree with the disclaimer). You will then be shown a phrase on your screen. Make sure to copy it and keep several backups; this is your secret recovery phrase. To confirm the phrase, you will need to enter it. And that is how you create a MetaMask wallet!

How to log in to MetaMask

On the other hand, if you have already used MetaMask, then all you need to do is log in. To log in, you will need your MetaMask sign-in password. If by chance you forget the password, you can access your account by using the secret recovery phrase (the one you got when signing up for the first time). How to log in with the password:

Open the MetaMask wallet software on your device;

If you are using a phone, open the MetaMask app you installed on it;

When the main screen opens, you need to enter the MetaMask sign-in password;

Next press “Sign In” and your wallet will be open for you to use.

Click on “import tokens”

So, after you buy USDT, copy its address, and open the MetaMask wallet, there is still more to do. The next step is to import USDT. For starters, make sure that the “Ethereum Mainnet” network is selected. If by chance it isn’t, then you need to select it manually. To complete the process, you will need to scroll down and select “import tokens”. This will mark the beginning of importing USDT as a custom token.

Paste the USDT contract address

The last thing you did was “import tokens”. So now you should be on the “Search” tab. There you can search for “USDT” and add it as a custom token.

Click on “Custom Token” and begin adding a custom token.

Three fields will now appear: “Token Contract Address”, “Token Symbol”, “Token Decimal”.

Select the “Token Contract Address” field.

Paste the USDT address there.

Once you paste the address, the remaining fields will be filled in automatically.

All that is left is for you to click on “Add Custom Token”.

Send or receive USDT

Now that you clicked on “Add Custom Token”, your USDT has been added to the wallet. If you want to make sure it was added, you can always check in the tab labeled “Assets”. Having your USDT there means that you can now send or receive USDT. To send, you just need to click on “Send” and enter the recipient’s address. And to receive, you need to share your address.

How to deposit with MetaMask?

So, what do you need to do in order to deposit with MetaMask? First, you need to go to the BTSE Wallet Page and select the currency.

Then click on “Deposit” followed by selecting the MetaMask icon. After choosing MetaMask, the extension window will pop up. There you will need to press “Next” and “Connect”.

After everything is connected, you will finally be able to use the MetaMask deposit option.

Enter the amount;  

Click “Deposit”; 

“Confirm”; 

 MetaMask will inform you when the trade is confirmed.

After getting the confirmation, the trade deposit will be finished. It is not hard to do if you follow the steps closely.

Blockchain Makes USDT Wallets Possible

Blockchain is vital to the rise of cryptocurrencies like Tether (USDT). A growing number of cryptocurrency investors are investing in USDT. This digital coin has become highly popular in recent years due to the growth of blockchain technology. As a result, a growing number of investors are using USDT wallets, and anyone who wants to purchase this cryptocurrency should set one up first.

Source: SmartData Collective

90% Of SaaS Buyers Overpay for AI-Driven Services

AI has become a powerful disruptive technology in recent years. One poll showed that 86% of CEOs consider AI a mainstream technology that their companies heavily depend on.

According to a recent report by SaaS purchasing platform Vertice, 90% of buyers are overpaying for their SaaS, by an average of 20-30%. Many of these buyers are enamored with the idea of using AI to improve their business models, but don’t make sure that they are getting the right value from it. And with these tools responsible for so many indispensable business operations, this figure should be ringing alarm bells for companies of all sizes.

But why is this happening, and what can you do about it? Let’s unpack why your business might be overspending on its software subscriptions. The guidelines below will make it easier to get the most value from AI-driven SaaS services.

SaaS demand is growing in response to the proliferation of AI

Businesses need SaaS, since it is one of the easiest ways for them to leverage the power of AI. From human resources to customer relations to in-house comms, digital tools take the pressure off of members of staff by automating vital business functions and streamlining collaboration efforts.

But now that the average company is utilizing 110 different AI-driven SaaS tools each year, the sellers providing these services are responding to demand and hiking their prices. In fact, FastSpring estimates that 37% of vendors increased what they were charging between Q1 2021 and Q1 2022, driving up software costs for the average customer.

The problem of pricing visibility for SaaS companies offering artificial intelligence services

Software developers face a number of challenges when creating AI software businesses. One of their biggest struggles is finding a good price.

Rising prices make it more important than ever for AI software buyers to track down the best value plans for their organization. However, the main challenge that company IT, finance and procurement teams face when purchasing SaaS is the lack of pricing transparency. When shopping around and attempting to compare prices between different products, you may find that the vendors you’ve shortlisted haven’t made their pricing options publicly available. This makes it difficult for companies to compare the value of different AI applications.

While some choose to publish list prices online through their websites or third-party publications, many companies intentionally hide their price points — in fact, OpenView reports that as many as 55% of vendors choose to obscure them. But why does this happen?

Usually, it’s because vendors want prospective customers to reach out and get in touch with their sales teams to find out pricing information. This works to their advantage — once they learn about your needs, providers can quote a tailored plan that may come in a little higher than the standardized price you were expecting to pay. And next thing you know, you’ll be on their books and moving through the sales process.

From the customer side of the deal, this makes it difficult to compare between vendor prices and get the best deal for the business. When vendors obscure their price points, buyers are unable to see what other organizations are paying for their SaaS, so lack the insight to effectively negotiate with leverage.

So, once this is all said and done, SaaS buyers will often find themselves overpaying for SaaS — even if they don’t realize that they are doing so.

Strategies for saving money on SaaS

There are a lot of benefits and risks of using artificial intelligence. AI technology can be very valuable, but it can also be very expensive, so you have to make sure investing in it doesn’t blow a hole in your budget. If you’re not cutting costs where you can and keeping your SaaS spend to a minimum, it could quickly spiral out of control. Between inflation, hidden contract clauses, and expensive upgrades, you’re likely to find your software costs growing year-on-year. But it doesn’t have to be this way — here are some top tips for saving money on your SaaS.

1. Cut out redundant applications

Because companies now have so many tools in their SaaS stack, it can be hard to keep track of which applications are providing the most business value. As a result, you might find that you’re paying for apps that are going unused, or that the company is using multiple tools for the same purpose.

This is where application rationalization can help. Application rationalization is the process of assessing the entire SaaS stack in use across an organization, to determine which contracts can be terminated, which need to be downgraded, and which should be retained. To get started, look into the associated usage patterns for each tool and the return on investment that you’re getting from your subscription, then make informed decisions about which should stay and which should go.

2. Negotiate your contracts

Once you’ve entered into talks with a vendor, it’s up to your company’s negotiation team to procure the best value contract. The price you’re quoted isn’t always the price you’ll end up paying — so make sure you send your most experienced, best-informed negotiators to handle discussions, equipped with market insight into the price points offered to other customers.

Consider negotiating contract terms that will reduce your spending down the line, such as the removal of clauses that could see your contract automatically renewed at a higher price than originally agreed. You can also negotiate for discounts in exchange for certain terms, for example if you choose to subscribe to upfront billing or a multi-year commitment.

Once you’ve cut down your existing contract outgoings and streamlined the procurement process for new subscriptions, you’ll be getting better value on your plans — and no longer overpaying for your organization’s SaaS.

Get the Right Price When Investing in AI SaaS Tools

SaaS technology is very popular in 2022. Fortune Business Insights shows that the market is growing by over 27% a year. However, there are a lot of nuances that you have to understand. One of the most important is making sure that you pay the right price for AI-driven SaaS tools. The guidelines above should help.

Source: SmartData Collective

Perform hyperparameter tuning using R and caret on Vertex AI

Producing a sufficiently accurate machine learning model requires tuning both parameters and hyperparameters. Your model’s parameters are variables that your chosen machine learning technique adjusts to fit your data, like the weights a neural network learns to minimize loss. Hyperparameters are variables that control the training process itself. For example, in a multilayer perceptron, altering the number and size of hidden layers can have a profound effect on your model’s performance, as can the maximum depth or minimum observations per node in a decision tree.
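
To make the distinction concrete, here is a minimal sketch using the same caret and gbm packages this post relies on later. It uses the built-in iris data purely as a stand-in dataset (an assumption for illustration); the values fixed in the tuning grid are hyperparameters, while the split points and leaf values that gbm learns during training are the model’s parameters.

library(caret)

# Hyperparameters: chosen before training, they control the training process.
tune_grid <- expand.grid(
  n.trees = 200,            # number of boosting iterations
  interaction.depth = 3,    # maximum depth of each tree
  n.minobsinnode = 10,      # minimum observations per terminal node
  shrinkage = 0.1           # learning rate
)

# Parameters: the tree structures and leaf values that gbm estimates
# from the data while train() runs.
fit_control <- trainControl(method = "cv", number = 3)
fit <- train(Sepal.Length ~ ., data = iris, method = "gbm",
             trControl = fit_control, tuneGrid = tune_grid, verbose = FALSE)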

Hyperparameter tuning can be a costly endeavor, especially when done manually or when using exhaustive grid search to search over a larger hyperparameter space. 

In 2017, Google introduced Vizier, a technique used internally at Google for performing black-box optimization. Vizier is used to optimize many of our own machine learning models, and is also available in Vertex AI, Google Cloud’s machine learning platform.  Vertex AI Hyperparameter tuning for custom training is a built-in feature using Vertex AI Vizier for training jobs. It helps determine the best hyperparameter settings for an ML model.

Overview

In this blog post, you will learn how to perform hyperparameter tuning of your custom R models through Vertex AI.

Since many R users prefer to use Vertex AI from RStudio programmatically, you will interact with Vertex AI through the Vertex AI SDK via the reticulate package. 

The process of tuning your custom R models on Vertex AI comprises the following steps:

Enable Google Cloud Platform (GCP) APIs and set up the local environment

Create custom R script for training a model using specific set of hyperparameters

Create a Docker container that supports training R models, using Cloud Build and Artifact Registry

Train and tune a model using HyperParameter Tuning jobs on Vertex AI Training

Dataset

To showcase this process, you train a simple boosted tree model to predict housing prices on the California housing data set. The data contains information from the 1990 California census. The data set is publicly available from Google Cloud Storage at gs://cloud-samples-data/ai-platform-unified/datasets/tabular/california-housing-tabular-regression.csv

The tree model will predict a median housing price, given a longitude and latitude along with data from the corresponding census block group. A block group is the smallest geographical unit for which the U.S. Census Bureau publishes sample data (a block group typically has a population of 600 to 3,000 people).
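
If you want to inspect the data before containerizing anything, a quick local preview is enough. This is a minimal sketch that assumes the gsutil CLI is installed and authenticated on your machine; it uses the same public CSV and the same data.table::fread call that the training script uses later.

library(data.table)

# Copy the public CSV locally, then load and summarize it.
csv_uri <- "gs://cloud-samples-data/ai-platform-unified/datasets/tabular/california-housing-tabular-regression.csv"
system2("gsutil", c("cp", csv_uri, "./california-housing.csv"))

housing <- fread("california-housing.csv")
str(housing)                         # column names and types
summary(housing$median_house_value)  # distribution of the target variable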

Environment setup

This blog post assumes that you are either using Vertex AI Workbench with an R kernel or RStudio. Your environment should include the following requirements:

The Google Cloud SDK

Git

R

Python 3

Virtualenv

To execute shell commands, define a helper function:

library(glue)
library(IRdisplay)

sh <- function(cmd, args = c(), intern = FALSE) {
  if (is.null(args)) {
    cmd <- glue(cmd)
    s <- strsplit(cmd, " ")[[1]]
    cmd <- s[1]
    args <- s[2:length(s)]
  }
  ret <- system2(cmd, args, stdout = TRUE, stderr = TRUE)
  if ("errmsg" %in% attributes(attributes(ret))$names) cat(attr(ret, "errmsg"), "\n")
  if (intern) return(ret) else cat(paste(ret, collapse = "\n"))
}

You should also install a few R packages and update the SDK for Vertex AI:

install.packages(c("reticulate", "glue"))
sh("pip install --upgrade google-cloud-aiplatform")

Next, you define variables to support the training and deployment process, namely:

PROJECT_ID: Your Google Cloud Platform Project ID

REGION: Currently, the regions us-central1, europe-west4, and asia-east1 are supported for Vertex AI; it is recommended that you choose the region closest to you

BUCKET_URI: The staging bucket where all the data associated with your dataset and model resources are stored

DOCKER_REPO: The Docker repository name to store container artifacts

IMAGE_NAME: The name of the container image

IMAGE_TAG: The image tag that Vertex AI will use

IMAGE_URI: The complete URI of the container image

PROJECT_ID <- "YOUR_PROJECT_ID"
REGION <- "us-central1"
BUCKET_URI <- glue("gs://{PROJECT_ID}-vertex-r")
DOCKER_REPO <- "vertex-r"
IMAGE_NAME <- "vertex-r"
IMAGE_TAG <- "latest"
IMAGE_URI <- glue("{REGION}-docker.pkg.dev/{PROJECT_ID}/{DOCKER_REPO}/{IMAGE_NAME}:{IMAGE_TAG}")

When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.

sh("gsutil mb -l {REGION} -p {PROJECT_ID} {BUCKET_URI}")

Finally, you import and initialize the reticulate R package to interface with the Vertex AI SDK, which is written in Python.

library(reticulate)
library(glue)
use_python(Sys.which("python3"))

aiplatform <- import("google.cloud.aiplatform")
aiplatform$init(project = PROJECT_ID, location = REGION, staging_bucket = BUCKET_URI)

Create container images for training and tuning models

The Dockerfile for your custom container is built on top of the Deep Learning container — the same container that is also used for Vertex AI Workbench. You just add an R script for model training and tuning.

Before creating such a container, you enable Artifact Registry and configure Docker to authenticate requests to it in your region.

sh("gcloud artifacts repositories create {DOCKER_REPO} --repository-format=docker --location={REGION} --description=\"Docker repository\"")
sh("gcloud auth configure-docker {REGION}-docker.pkg.dev --quiet")

Next, create a Dockerfile.

# filename: Dockerfile - container specifications for using R in Vertex AI
FROM gcr.io/deeplearning-platform-release/r-cpu.4-1:latest

WORKDIR /root

COPY train.R /root/train.R

# Install Fortran
RUN apt-get update
RUN apt-get install gfortran -yy

# Install R packages
RUN Rscript -e "install.packages('plumber')"
RUN Rscript -e "install.packages('argparser')"
RUN Rscript -e "install.packages('gbm')"
RUN Rscript -e "install.packages('caret')"
RUN Rscript -e "install.packages('reticulate')"

RUN pip install cloudml-hypertune

Next, create the file train.R, which is used to train your R model. The script trains a gbm model (generalized boosted regression model) on the California Housing dataset. Vertex AI sets environment variables that you can utilize, and the hyperparameters for each trial are passed as command line arguments. The trained model artifacts are then stored in your Cloud Storage bucket. The results of your training script are communicated back to Vertex AI using the hypertune package, which stores a JSON file to /tmp/hypertune/output.metrics. Vertex AI uses this information to come up with a hyperparameter configuration for the next trial, and to assess which trial was responsible for the best overall result.

#!/usr/bin/env Rscript
# filename: train.R - perform hyperparameter tuning on a boosted tree model using Vertex AI

library(tidyverse)
library(data.table)
library(argparser)
library(jsonlite)
library(reticulate)
library(caret)

# The GCP Project ID
project_id <- Sys.getenv("CLOUD_ML_PROJECT_ID")

# The GCP Region
location <- Sys.getenv("CLOUD_ML_REGION")

# The Cloud Storage URI to upload the trained model artifact to
model_dir <- Sys.getenv("AIP_MODEL_DIR")

# The trial ID
trial_id <- Sys.getenv("CLOUD_ML_TRIAL_ID", 0)

# The JSON file to save metric results to
metric_file <- "/tmp/hypertune/output.metrics"

# Read hyperparameters for this trial
p <- arg_parser("California Housing Model") %>%
  add_argument("--n.trees", default = "100", help = "number of trees to fit", type = "integer") %>%
  add_argument("--interaction.depth", default = 3, help = "maximum depth of each tree") %>%
  add_argument("--n.minobsinnode", default = 10, help = "minimum number of observations in terminal node") %>%
  add_argument("--shrinkage", default = 0.1, help = "learning rate") %>%
  add_argument("--data", help = "path to the training data in GCS")

dir.create("/tmp/hypertune")
argv <- parse_args(p, unlist(strsplit(commandArgs(trailingOnly = TRUE), "=")))

# Read housing dataset
system2("gsutil", c("cp", argv$data, "./data.csv"))
data <- fread("data.csv")
print(data)

# Start model training with the hyperparameters for the trial
print("Starting Model Training")
tuneGrid <- expand.grid(
  interaction.depth = as.integer(argv$interaction.depth),
  n.trees = as.integer(argv$n.trees),
  n.minobsinnode = as.integer(argv$n.minobsinnode),
  shrinkage = as.numeric(0.1)
)
print(tuneGrid)
fitControl <- trainControl(method = "cv", number = 3)
set.seed(42)
fit <- train(median_house_value ~ .,
  method = "gbm",
  trControl = fitControl,
  tuneGrid = tuneGrid,
  metric = "MAE",
  data = data
)

mean_absolute_error <- mean(fit$resample$MAE)
cat(paste("mean absolute error:", mean_absolute_error, "\n"))

# Report hyperparameter tuning metric to Vertex AI for picking
# hyperparameter configuration for the next trial
hypertune <- import("hypertune")
hpt <- hypertune$HyperTune()
hpt$report_hyperparameter_tuning_metric(
  hyperparameter_metric_tag = "mean_absolute_error",
  metric_value = as.numeric(mean_absolute_error),
  global_step = 1000)

# Save model to Cloud Storage bucket
saveRDS(fit$finalModel, "gbm.rds")
system2("gsutil", c("cp", "gbm.rds", model_dir))

Finally, you build the Docker container image on Cloud Build – the serverless CI/CD platform.  Building the Docker container image may take 10 to 15 minutes.

sh("gcloud builds submit --region={REGION} --tag={IMAGE_URI} --timeout=1h")

Tune custom R model

Once your training application is containerized, you define the machine specifications for the tuning job. In this example, you use n1-standard-4 instances.

worker_pool_specs <- list(
  list(
    'machine_spec' = list(
      'accelerator_count' = as.integer(0),
      'machine_type' = 'n1-standard-4'
    ),
    'container_spec' = list(
      "image_uri" = IMAGE_URI,
      "command" = c("Rscript", "train.R"),
      "args" = list("--data", "gs://cloud-samples-data/ai-platform-unified/datasets/tabular/california-housing-tabular-regression.csv")
    ),
    'replica_count' = 1
  )
)

This specification is then used in a CustomJob.

MODEL_DIR <- glue("{BUCKET_URI}/aiplatform-custom-job-hpt")
custom_job <- aiplatform$CustomJob(
  display_name = "california-custom-job",
  worker_pool_specs = worker_pool_specs,
  base_output_dir = MODEL_DIR
)

Hyperparameter tuning jobs search for the best combination of hyperparameters to optimize your metrics. Hyperparameter tuning jobs do this by running multiple trials of your training application with different sets of hyperparameters.

You can control the job in the following ways:

max_trial_count: Decide how many trials you want to allow the service to run. Increasing the number of trials generally yields better results, but it is not always so. Usually, there is a point of diminishing returns after which additional trials have little or no effect on the accuracy. Before starting a job with a large number of trials, you may want to start with a small number of trials to gauge the effect your chosen hyperparameters have on your model’s accuracy. To get the most out of hyperparameter tuning, you shouldn’t set your maximum value lower than ten times the number of hyperparameters you use.

parallel_trial_count: You can specify how many trials can run in parallel. Running parallel trials has the benefit of reducing the time the training job takes (real time — the total processing time required is not typically changed). However, running in parallel can reduce the effectiveness of the tuning job overall. That is because hyperparameter tuning uses the results of previous trials to inform the values to assign to the hyperparameters of subsequent trials. When running in parallel, some trials start without having the benefit of the results of any trials still running.

In addition, you also need to specify which hyperparameters to tune. There is little universal advice to give about how to choose which hyperparameters you should tune. If you have experience with the machine learning technique that you’re using, you may have insight into how its hyperparameters behave. You may also be able to find advice from machine learning communities.

However you choose them, it’s important to understand the implications. Every hyperparameter that you choose to tune has the potential to increase the number of trials required for a successful tuning job. When you run a hyperparameter tuning job on Vertex AI, the amount you are charged is based on the duration of the trials initiated by your hyperparameter tuning job. A careful choice of hyperparameters to tune can reduce the time and cost of your hyperparameter tuning job.

Vertex AI supports several data types for hyperparameter tuning jobs.

hpt_job <- aiplatform$HyperparameterTuningJob(
  display_name = "california-hpt-job",
  custom_job = custom_job,
  max_trial_count = as.integer(14),
  parallel_trial_count = as.integer(2),
  metric_spec = list(
    "mean_absolute_error" = "minimize"
  ),
  parameter_spec = list(
    "n.trees" = aiplatform$hyperparameter_tuning$IntegerParameterSpec(
      min = as.integer(10), max = as.integer(1000), scale = "linear"
    ),
    "interaction.depth" = aiplatform$hyperparameter_tuning$IntegerParameterSpec(
      min = as.integer(1), max = as.integer(10), scale = "linear"
    ),
    "n.minobsinnode" = aiplatform$hyperparameter_tuning$IntegerParameterSpec(
      min = as.integer(1), max = as.integer(20), scale = "linear"
    )
  )
)

To tune the model, you call the method run().

hpt_job$run()
hpt_job

Finally, to list all trials and their respective results, you can inspect hpt_job$trials.

results <- lapply(hpt_job$trials,
  function(x) { c(as.integer(x$id), as.numeric(x$final_measurement$metrics[[0]]$value)) }
)
results <- as.data.frame(do.call(rbind, results))
colnames(results) <- c("id", "metric")

And find the trial with the lowest error.

best_trial <- results[results$metric == min(results$metric), ]$id
hpt_job$trials[[best_trial]]
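
If you also want the winning hyperparameter values themselves, a short sketch like the one below can print them. It assumes each trial object exposes a parameters list whose entries carry parameter_id and value fields, as in the Vertex AI Trial resource; verify the attribute names against the SDK version you are using.

# A sketch, assuming each trial exposes `parameters` entries with
# `parameter_id` and `value` fields (as in the Vertex AI Trial resource).
best <- hpt_job$trials[[best_trial]]
for (p in best$parameters) {
  cat(p$parameter_id, "=", paste(p$value), "\n")
}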

The results of this tuning job can also be inspected from the Vertex AI Console.

Summary

In this blog post, you have gone through tuning a custom R model using Vertex AI. For easier reproducibility, you can refer to this notebook on GitHub. You can deploy the resultant model from the best trial on Vertex AI Prediction following the article here.

Related article: Use R to train and deploy machine learning models on Vertex AI.

Source: Data Analytics

Surprising Benefits of Smart Technology for Home Security

Did you know that the market for smart homes is currently worth over $79 billion? There is a reason that more families are investing in this technology. Smart home technology has revolutionized the way we improve the safety of our homes and families.

Digital locks and cameras allow us to safeguard our doors, windows, and other entry points 24/7 and monitor anyone who enters our home.

When triggered, intrusion alarms make loud noises to deter burglars. 

Special surveillance software allows us to evaluate suspicious CCTV footage from any internet-connected place. This is especially beneficial in the event of a break-in while we’re gone.

Taken together, these technologies provide individuals with comprehensive security coverage suitable for any home. This is one of the reasons that smart homes help us live better lives.

You could also try Honeywell Security systems; paired with smart home technology, they are a great way to keep your family safe, secure, and connected. The innovative technology allows you to monitor activity within your home remotely from any mobile device.

Through automatic notifications and mobile alerts, users can know exactly what’s happening at all times – from who is at the front door to when a piece of furniture is moved. 

Smart Lighting 

Smart lighting systems are one of the most popular types of smart home technology available today. These systems allow you to control all of your lighting from a central location, such as your smartphone or tablet.

This gives you more control over when lights turn on and off—for example, you can set them to turn on at dusk or turn off when motion is detected in an area. This type of automation helps discourage intruders by making it seem as though someone is always home, even when they’re not. 

Smart Doorbells 

Smart doorbells are another great way to increase the security of your home. These cameras allow you to see who is at your door without having to open it—and they also record video footage so that you can review any suspicious activity that takes place outside your front door. Some smart doorbells even have AI-powered facial recognition software that can alert you if an unfamiliar face appears on camera. 

Security Systems

Security systems are a must-have for anyone looking to make their home safer. Traditional security systems rely on sensors that detect window and door openings, but these days there are also more advanced systems available with features like remote monitoring and real-time alerts via text message or email.

Depending on the system you choose, some may even include video surveillance capabilities so that you can keep an eye on your property from anywhere in the world using an app on your smartphone or computer. 

Conclusion

The way we make our homes and families more secure has been completely transformed by the advent of smart home technology.

We are able to protect our doors, windows, and other entry points around the clock with the use of digital locks and cameras, and we can also keep an eye on anyone who comes into our home.

When activated, intrusion alarms produce a lot of noise to discourage potential thieves from breaking in. You are able to assess potentially suspicious CCTV footage from any location with an internet connection thanks to specialized monitoring software. This is especially helpful in the event of a break-in while you are away.

When combined, these technological advancements offer individuals comprehensive security coverage that can be tailored to the needs of each home.

Source: SmartData Collective

Amazing Ways AI is Changing the Marketing Landscape

Did you know that companies are projected to spend over $107 billion on AI-based marketing solutions by 2028? There is no doubt that artificial intelligence is creating a number of new changes for people in the marketing profession.

However, many people are still wondering what the actual effects of AI on marketing will be. If you are in the marketing profession, you will need to learn more about some of the biggest AI trends that will change your career in the years ahead.

Biggest AI Trends Shaping the Marketing Profession

Even companies that specialize in serving local businesses that aren’t very technologically based will benefit from using AI to their advantage. Landscape marketing professionals can even use AI to help their clients grow.

Is your landscaping company struggling to keep up with the pace of the fast-changing digital world? Are you unsure how this new era should shape your brand, advertising style, and content strategy? It’s a fast-paced world out there, and unless you stay ahead of it, you risk falling behind. Therefore, investing in AI to boost your marketing strategy is more important than ever.

Proper landscape marketing can help you see an increase in your sales and eventually help you spend less on marketing. In just over a decade, digital marketing has grown from a luxury that only the biggest companies have to a necessity for landscaping businesses of all sizes. These changes have made marketing more complex, and they’re not slowing down anytime soon.

Savvy marketers will continue to use AI to improve their effectiveness. Let us look at the factors changing the marketing landscape and how your landscaping business can benefit from this.

Use AI to create graphics for online advertisements

Graphics play a very important role in online advertising. Unfortunately, many companies struggle to afford to pay for talented graphic artists.

The good news is that AI technology has made it possible for marketers to create quality art for a fraction of the cost. AI tools like NightCafe, DALL-E 2 and starryai allow designers to create up to 20 images a day for free! These AI-based graphics can be invaluable for marketers.

Use of AI-driven SEO to boost web presence and rankings

Let’s face it: the Internet is vast, and many businesses rely on it to grow. The traffic we get often depends on our ranking in search engines. This is why it behooves us to keep up with the latest trends and ensure we use the best practices for SEO, or Search Engine Optimization. Many factors go into SEO, but a few of the most important ones are:

The use of keywords: these help potential customers find you when they enter relevant searches into Google or other engines.

The quality of your content: includes not only the writing itself but also things like photos, videos, and infographics. Your content should be engaging and informative so that people will want to stay on your site once they find it.

How often you update your site: fresh content helps keep people coming back, and it also tells search engine algorithms that your site is active, which can help boost your rankings.

AI technology has become very important for modern SEO. Companies can use AI-driven tools like SEMRush, Keyword Chef or Ahrefs to automate their keyword research. They can also use AI to automate new content generation strategies to keep their websites fresh. AI will become even more important for SEO in the years to come.

While you may not have the expertise to do all of this yourself, plenty of resources are available to help you get started. You can also hire someone to do it for you if you feel like that would be a better investment of your time and money.

Use of Google My Business

Google My Business, or GMB, is a free business listing directory offered by Google. It allows business owners to manage their information and interact with customers through features such as messaging, reviews, and posts. GMB is important for two main reasons: first, it can help improve your SEO by making sure your business’s name, address, and phone number (NAP) are consistent across the web. This signals to Google that you are a legitimate business, boosting your search rankings. Secondly, it provides potential customers with an easy way to find out more about your company and what you have to offer before they even visit your website. Creating a GMB listing is simple and only takes a few minutes. If you haven’t already done so, we recommend setting one up for your business today!

AI technology can also help companies automate and optimize their Google My Business listings. In fact, Search Engine Journal reports that Google itself is using AI to automate this process for companies on their behalf.

Automated lead tracking systems

The world of sales agents is changing, and there are now automated lead tracking systems that can help many businesses organize and follow up on potential sales leads. You can get systems that track your calls, emails, and follow-up appointments to make it easier to see who you have spoken to, when you spoke to them, and what their response was.

The systems can also keep track of your sales pipeline, so you know where each lead is in the process and what needs to be done next. This can take a lot of the guesswork out of sales and help you close more deals. If you are not using a lead tracking system already, it may be worth investigating to see if one could help improve your business’s bottom line.

Automated call tracking and recording

As a landscaping business, you are likely to have many phone calls from potential customers. It can be helpful to have an automated call tracking system to see how many calls you are getting, what times of day they are coming in, and how long each call lasts.

This information can be valuable for two reasons: first, it can help you determine whether or not your current marketing efforts are working (if you see a spike in calls after running a new ad campaign, for example). Secondly, it provides data that could be used to improve your business’s processes. If most of your calls are ending up as dead ends because the potential customer is not ready to commit yet, then maybe you need to change your sales pitch.

Additionally, some call tracking systems allow recordings of the conversations between agents and customers to be made and stored. These recordings could then be listened to for quality control purposes or for training new staff members.

Well-designed websites

The design of a website is an area where many businesses have failed in the past, but it is also an area where they can now be very successful. Well-designed websites are easier to navigate than those that are overly complicated and cluttered. They are also designed to be helpful and provide the information your customers need without making them search for it.

Use of AI to get more qualified traffic with Google Ads

AI technology can also help marketers get more quality traffic with Google Ads. They can use AI tools to automate their keyword research and optimize their ad copywriting.

Many marketers now realize that there are certain advantages to be gained by using Google’s paid advertising services. Some of the benefits of using Google Ads include:

You can target your ads to people who have already shown an interest in what you are selling. This is done by targeting keywords they have searched for in the past or by targeting them based on their location.

You only pay when someone clicks on your ad, so there is no wasted spend.

You can track how many people see your ad and how many of them click through to your website so you can gauge its effectiveness.

Once you stop running the ad, the traffic will also stop. This means that you are not locked into a long-term contract like with some other forms of advertising.

If used correctly, Google Ads can be a great way to bring qualified traffic to your website and generate leads for your business without breaking the bank.

As you can see, companies that use AI to fine-tune their Google Ads strategy can enjoy a higher ROI.

AI is Changing the State of Marketing

All in all, if you want to stay ahead, you need to be aware of the factors that are changing the marketing landscape. Since AI technology is so important for modern marketers, you can’t afford to overlook its importance. You need to be ready for this new way of doing business. The best way to do this is to understand the factors changing the landscaping business and ensure your marketing is ready for this new era, which is going to be guided by new advances in AI.

Source: SmartData Collective

Google named a Leader in 2022 Gartner® Magic Quadrant™ for Cloud Database Management Systems

We’re excited to share that Gartner has recognized Google as a Leader in the 2022 Gartner® Magic Quadrant™ for Cloud Database Management Systems, for the third year in a row. We believe this recognition is a testament to Google Cloud’s vision and strong track record of delivering continuous product innovation, especially in areas like open data ecosystems and unified data cloud offerings.

Download the complimentary 2022 Gartner Magic Quadrant for Cloud Database Management Systems report. 

Modern applications need to support a large number of globally distributed users, with no downtime and fast performance. And, with the exponential growth in the amount and types of data, workloads, and users, it’s becoming incredibly complex to harness data’s full potential.

This results in a growing data-to-value gap. 

Google’s data cloud is well positioned to address the modern data needs of organizations with intelligent data and analytics services, advanced security, and a strong partner ecosystem, all integrated within a unified platform. We continue to rapidly innovate across these areas of the data space, especially with the new capabilities we announced at Google Cloud Next ’22 from our databases and data analytics portfolios.

Organizations such as Walmart, PayPal, and Carrefour, as well as tens of thousands of other customers around the world, have partnered with Google Cloud to drive innovation with a unified, open, and intelligent data ecosystem. 

Unified data management

Google’s data cloud provides an open and unified data platform that allows organizations to manage every stage of the data lifecycle — from running operational databases for applications to managing analytical workloads across data warehouses and data lakes, to data-driven decision making, to AI and Machine Learning. The way we’ve architected our platform is truly unique and enables customers to bring together their data, their people and their workloads.

Our databases are built on highly scalable distributed storage with fully disaggregated resources and high-performance Google-owned global networking. This combination allows us to provide tightly integrated data cloud services across products such as Cloud Spanner, Cloud Bigtable, AlloyDB for PostgreSQL, BigQuery, Dataproc, and Dataflow.

We recently launched several capabilities that further strengthen these integrations, making it even more seamless and easy for customers to accelerate innovation:

The unification of transactional and analytical systems. With change streams, customers can track writes, updates, and deletes to Spanner and Bigtable databases and replicate them to downstream systems such as BigQuery, Pub/Sub, and Cloud Storage. Datastream for BigQuery provides easy replication from operational database sources such as AlloyDB, PostgreSQL, MySQL, and Oracle, directly into BigQuery. This allows you to easily set up an ELT (Extract, Load, Transform) pipeline for low-latency data replication enabling real-time insights.

The unification of data of all types. BigLake enables customers to work with data of any type, in any location. Customers no longer have to worry about underlying storage formats and can reduce cost and inefficiencies because BigLake extends up from BigQuery. This level of integration allowed us to rapidly ship object tables, a new table type that provides a structured interface for unstructured data. Powered by BigLake, object tables let customers run analytics and ML on images, audio, documents natively, changing the game for data teams worldwide, who can now innovate without limits with all their data, in one unified environment.

The unification of workloads. We’ve introduced new developer extensions for workloads that require programming beyond SQL. With BigQuery stored procedures for Apache Spark, customers can run Spark programs directly from within BigQuery, unifying transformation and ingestion and enabling Spark procedures to run as a step in a set of SQL statements. This unification not only increases productivity but it also brings costs and billing benefits as customers only pay for the Spark job duration and resources consumed. And the costs are converted to either BigQuery bytes processed or BigQuery slots, giving customers a single billing unit for both data lake and data warehouse jobs. 

Open data ecosystem 

Google Cloud provides industry-leading integration with open source and open APIs, which ensures portability and flexibility, and reduces the risk of vendor lock-in. We see customers like PayPal, HSBC, Vodafone, Major League Baseball and hundreds of others increasingly leverage our suite of migration services to power their data cloud transformation journey. This includes BigQuery Migration Service to accelerate migration from traditional data warehouses and the comprehensive Database Migration Program to accelerate migrations to the cloud with the right expertise, assessments and financial support. Customers can also take advantage of our managed services that are fully compatible with the most popular open source engines such as PostgreSQL, MySQL, and Redis.

And we don’t stop there. We also offer BigQuery Omni which enables insights beyond Google Cloud to data in other cloud environments, while providing a single pane of glass for analysis, governance, and security.

We continue to focus on making Google Cloud the most open data cloud that can unlock the full potential of data and remove the barriers to digital transformation. Some recent launches in this area include:

Modernize your PostgreSQL environment. Database Migration Service now supports migrations of any PostgreSQL database to AlloyDB, in an easy-to-use, secure, and serverless manner, and with minimal downtime.

Build an open format data lake. To support data openness, we announced the general availability of BigLake, to help you break down data silos by unifying lakes and warehouses. BigLake innovations add support for Apache Iceberg, which is becoming the standard for open source table format for data lakes. And soon, we’ll add support for formats including Delta Lake and Hudi.

Bring analytics to your data. To help you analyze data irrespective of where it resides, we launched BigQuery Omni. Now we’re adding new capabilities such as cross-cloud transfer and cross-cloud larger query results that will make it easier to combine and analyze data across cloud environments.

We’ve significantly expanded our data cloud partner ecosystem, and are increasing our partner investments across many new areas. Today, more than 800 software partners are building their products using Google’s data cloud, and more than 40 data platform partners offer validated integrations through our Google Cloud Ready – BigQuery initiative. Earlier this year we launched the Data Cloud Alliance, now supported by 17 leaders in data working together to promote open standards and interoperability between popular data applications. We also announced a major expansion of the AlloyDB partner ecosystem, with more than 30 partner solutions to support business intelligence, analytics, data governance, observability, and system integration.

AI-powered innovation

At Google, AI is in our DNA. For two decades, we’ve leveraged the power of AI to organize the world’s information and make it useful to people and businesses everywhere. From enhancing the performance of our Search algorithm with ML, to sharpening content recommendations on YouTube with unsupervised learning, we have constantly leveraged AI to solve some of the toughest challenges in the market.

We continue to bring that same expertise in AI technology to make our data cloud services even more intelligent. 

Database system optimizations. Capabilities such as Cloud SQL recommenders and AlloyDB autopilot make it easier for database administrators and DevOps teams to manage performance and cost for large fleets of databases. 

Databases and AI integration. In addition to infusing AI and ML into our products, we have tightly integrated Spanner, AlloyDB and BigQuery with Vertex AI to simplify the ML experience. With these integrations, AlloyDB and Spanner users can now enable model inferencing directly within the database transaction using SQL. 

Simplified ML Ops. Models created in BigQuery using BigQuery ML are now instantly visible in Vertex AI model registry. You can then directly deploy these models to Vertex AI endpoints for real-time serving, use Vertex AI pipelines to monitor and train models and view detailed explanations for your predictions through BigQuery ML and Vertex AI integration. 

Google Cloud databases and analytics solutions are proven to operate at scale. For example, Spanner processes over 2 billion requests per second at peak, and BigQuery customers analyze over 110 terabytes of data per second. 

We are honored to be a Leader in the 2022 Gartner Magic Quadrant for Cloud Database Management Systems, and look forward to continuing to innovate and partner with you on your digital transformation journey. 

Download the complimentary 2022 Gartner Magic Quadrant for Cloud Database Management Systems report. 

Learn more about how organizations are building their data clouds with Google Cloud solutions. 

Gartner Magic Quadrant for Cloud Database Management Systems, Henry Cook, Merv Adrian, Rick Greenwald, Xingyu Gu, December 13, 2022
GARTNER is a registered trademark and service mark, and MAGIC QUADRANT is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. 
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Google.

Related article: What’s new in Google Cloud databases: More unified. More open. More intelligent.

Source: Data Analytics

Understand and optimize your BigQuery analytics queries using the query execution graph

BigQuery offers strong query performance, but it is also a complex distributed system with many internal and external factors that can affect query speed. When your queries are running slower than expected or are slower than prior runs, understanding what happened can be a challenge.

The query execution graph provides an intuitive interface for inspecting query execution details. By using it, you can review the query plan information in graphical format for any query, whether running or completed.

You can also use the query execution graph to get performance insights for queries. Performance insights provide best-effort suggestions to help you improve query performance. Since query performance is multi-faceted, performance insights might only provide a partial picture of the overall query performance.

Execution graph

When BigQuery executes a query job, it converts the declarative SQL statement into a graph of execution, broken up into a series of query stages, which themselves are composed of more granular sets of execution steps. The query execution graph provides a visual representation of the execution stages and shows the corresponding metrics. Not all stages are made equal. Some are more expensive and time consuming than others. The execution graph provides toggles for highlighting critical stages, which makes it easier to spot the potential performance bottlenecks in the query.

Query performance insights

In addition to the detailed execution graph BigQuery also provides specific insights on possible factors that might be slowing query performance.

Slot contention

When you run a query, BigQuery attempts to break up the work needed by your query into tasks. A task is a single slice of data that is input into and output from a stage. A single slot picks up a task and executes that slice of data for the stage. Ideally, BigQuery slots execute tasks in parallel to achieve high performance. Slot contention occurs when your query has many tasks ready for slots to start executing, but BigQuery can’t get enough available slots to execute them. 

Insufficient shuffle quota

Before running your query, BigQuery breaks up your query’s logic into stages. BigQuery slots execute the tasks for each stage. When a slot completes the execution of a stage’s tasks, it stores the intermediate results in shuffle. Subsequent stages in your query read data from shuffle to continue your query’s execution. Insufficient shuffle quota occurs when you have more data that needs to get written to shuffle than you have shuffle capacity.

Data input scale change

Getting this performance insight indicates that your query is reading at least 50% more data for a given input table than the last time you ran the query, which can make the query noticeably slower. You can use table change history to see whether the size of any of the tables used in the query has recently increased.
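
If you prefer to check this kind of run-over-run change programmatically rather than in the console, one option is to compare recent executions of the same query by bytes processed and slot time. The sketch below is an assumption-laden illustration rather than an official workflow: it uses the bigrquery R package (authenticated via bq_auth), the INFORMATION_SCHEMA.JOBS_BY_PROJECT view for your region, a placeholder project ID, and a hypothetical 'orders_daily' substring to identify the query of interest.

library(bigrquery)

project <- "your-project-id"  # placeholder; replace with your project

# 'orders_daily' is a hypothetical substring identifying the query you care
# about; adjust the LIKE pattern and region qualifier to match your setup.
sql <- "
SELECT job_id, creation_time, total_bytes_processed, total_slot_ms
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_type = 'QUERY'
  AND query LIKE '%orders_daily%'
ORDER BY creation_time DESC
LIMIT 10"

runs <- bq_table_download(bq_project_query(project, sql))
print(runs)  # a jump in total_bytes_processed between runs suggests input growth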

What’s next?

We continue to work on improving the visualization of the graph. We are working on adding additional metrics to each step and adding more performance insights that will make query diagnosis significantly easier. We are just getting started.

Source : Data Analytics Read More

Automate data governance, extend your data fabric with Dataplex-BigLake integration

Unlocking the full potential of data requires breaking down the silo between open-source data formats and data warehouses. At the same time, it is critical to enable data governance teams to apply policies regardless of where the data happens to reside, whether in file or columnar storage.

Today, data governance teams have to become subject-matter experts on each storage system where corporate data happens to reside. Since February 2022, Dataplex has offered a unified place to apply policies, which are propagated across both lake storage and data warehouses in GCP. Rather than specifying policies in multiple places and bearing the cognitive load of translating between “how your data should behave” and “what you want each storage system to do,” Dataplex offers a single point for unambiguous policy management. Now, we are making it easier for you to use BigLake.

Earlier this year, we launched BigLake into general availability. BigLake unifies the data fabric between data lakes and data warehouses by extending BigQuery storage to open file formats. Today, we are announcing the BigLake integration with Dataplex (available in preview). This integration eliminates configuration steps for admins taking advantage of BigLake and lets them manage policies across GCS and BigQuery from a unified console.

Previously, you could point Dataplex at a Google Cloud Storage (GCS) bucket, and Dataplex would discover and extract all metadata from the data lake and register it in BigQuery (as well as Dataproc Metastore and Data Catalog) for analysis and search. With the BigLake integration, we are building on this capability by allowing an “upgrade” of a bucket asset: instead of just creating external tables in BigQuery for analysis, Dataplex will create policy-capable BigLake tables!

The immediate implication is that admins can now assign column, row, and table policies to the BigLake tables auto-created by Dataplex, since with BigLake the infrastructure layer (GCS) is separate from the analysis layer (BigQuery). Dataplex handles the creation of a BigQuery connection and a BigQuery publishing dataset, and ensures the BigQuery service account has the correct permissions on the bucket.
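To illustrate what Dataplex is automating here, the sketch below creates the same kind of object by hand: a BigQuery external table over GCS bound to a connection, which is what makes it a policy-capable BigLake table. Every resource name is a placeholder, and connection_id support depends on the google-cloud-bigquery client version.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# External (BigLake) table definition over Parquet files in a GCS bucket.
external_config = bigquery.ExternalConfig("PARQUET")
external_config.source_uris = ["gs://my-data-lake-bucket/sales/*.parquet"]  # placeholder bucket
external_config.connection_id = "my-project.us.my-biglake-connection"  # placeholder connection

table = bigquery.Table("my-project.lake_dataset.sales")  # placeholder table ID
table.external_data_configuration = external_config
table = client.create_table(table)
print(f"Created BigLake table {table.full_table_id}")
# Row-, column-, and table-level policies can now be applied to it in BigQuery.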

But wait – there’s more.

With this release of Dataplex, we are also introducing advanced logging called governance logs. Governance logs track the exact state of policy propagation to tables and columns, adding a level of detail that goes beyond the high-level “status” for the bucket and into fine-grained status and logs for individual tables and columns.

What’s next? 

We have updated our documentation for managing buckets, which now includes additional detail regarding policy propagation and the upgrade process.

Stay tuned for an exciting roadmap ahead, with more automation around policy management.

For more information, please visit:

Google Cloud Dataplex

Source : Data Analytics Read More

8 Ways AI Contributes to Ecommerce Business Scalability

Artificial intelligence has offered a plethora of benefits for businesses in every sector. The ecommerce industry is among those most benefiting from advances in AI. Therefore, it is no surprise that the market for AI-enabled ecommerce services is projected to be worth nearly $17 billion by 2030.

Ecommerce giants like Amazon are finding creative ways to leverage AI. In 2018, Blake Morgan wrote an article in Forbes detailing how Amazon rebranded itself around AI. AI technology helped the online titan improve product forecasting, deliver a higher ROI on ads to sellers and make better product recommendations.

However, AI is arguably even more beneficial for smaller sellers. If you are running an ecommerce business, then you should try leveraging AI strategically to get the best results.

How AI Can Help Your Ecommerce Business Grow

If you’ve just opened a business in the booming world of e-commerce, you’re no doubt considering expansion opportunities. By expanding into new markets, a dependable but modest store can become a household name thanks to the power of the World Wide Web. But before you even consider growing your company, it needs a solid foundation.

Like a home, a business can’t be built on a shaky foundation. Before thinking about growth, you should ensure the quality of your products, website, and customer service. Once you’ve mastered the basics, you can expand your web company as much as you want.

The good news is that AI technology can help you make the most of the opportunities available to you. To expand your internet business, here are some of the most important things you need to do.

Increase Your Efforts with AI-Driven Inbound Marketing

More and more devices and operating systems support eCommerce, making the process quicker and more convenient for consumers, and voice-activated gadgets are increasingly popular. Consequently, e-commerce businesses that want to grow must prioritize the development of distinctive experiences for their customers across all channels. Inbound creative marketing brings consumers to your online business by providing them with useful and entertaining content. In other words, it’s a set of actions that, when taken together, will bring in scalable income for your business.

AI helps with inbound marketing in many ways. The Digital Marketing Institute has shared some examples.

Advertising and Sales

Blog posts, Facebook updates, infographics, videos, and influencer marketing campaigns are all part of an effective inbound strategy for eCommerce since they repeatedly get your brand’s name in front of your target audience.

AI helps companies create higher quality visuals for their ads. As we stated in the past, AI has helped every business become its own branding expert.

Increase Views With Google

Building a steady customer flow is essential to any online store’s success. SEO is time-consuming, and you can’t rely on social media marketing alone. You need a solid Google Ads plan to increase your business’s web traffic. That plan should include some form of Google Shopping and search advertising, if for no other reason than to re-market to those who have already visited the online shop.

AI is also helpful for SEO. You can read about some of our tips on using AI for SEO here.

Algorithm Optimization

You should also check whether your company is showing up in relevant Google Search results for its target audience; AI technology has made this easier than ever. Google’s algorithm has to be “pleased” if you want to rank well and get your brand in front of consumers. Google is chiefly concerned with providing people with useful information, so the algorithm will not consider your site relevant if people find it through the keywords you’ve chosen but leave without making a purchase.

Inspecting Old Programs

The software you use to expand your company is as crucial as the software you use to run it. You should probably check the current software to ensure it will be sufficient for the foreseeable future. You need to look at everything from accounting software to marketing software to customer relationship management (CRM) systems.

In order to keep track of your progress as your eCommerce firm expands, you’ll need a reliable inventory management system. If your company is experiencing growth and you anticipate needing extra warehouse space for merchandise, warehouse management is an additional consideration.

AI technology can help you take stock of this software. It can inspect applications to see if they are outdated and recommend replacements.

Make Your Site Better

Your choice of e-commerce platform—whether it be your own website or a third-party site like Amazon or Etsy—is crucial to your company’s success. Make sure your website is inviting and appealing to first-time visitors. It is crucial to use high-quality product photos and to provide buyers with all the information they need to make an informed purchase.

Improve the Quality of Your Customer Service

Providing excellent service to customers is essential to any business looking to grow. If your consumers have a positive experience with your organization, they may return for more purchases, tell their friends about your services, overlook past negative interactions with your firm, and even become brand advocates. An inability to solve a customer’s problem is a common reason customers leave a company. Customer retention should be a primary goal of every web company owner.

As we stated in the past, AI is great for website development and optimization. Our past article on this topic will give you some advice.

Incorporate Automation

Automation is one of the biggest benefits of AI technology. Incorporating new locations or services into your organization is a labor-intensive process. It’s not unusual for you to devote more attention to certain projects while ignoring others.

With the advent of automation, online business has a bright future. That’s because it’s designed to cut down on the time and money you spend on routine chores. The time and energy you save by automating routine processes might be better spent on higher-value activities, such as developing an engaging content strategy or innovative marketing initiatives. Task automation may pave the way for increased revenue. E-commerce systems may automate customer service interactions, including welcome emails, discount coupon surveys, and the recovery of abandoned shopping carts.
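As a small illustration of the kind of routine task that can be automated, here is a minimal, self-contained sketch of an abandoned-cart reminder job; the cart records, the 24-hour threshold, and the send_email helper are all hypothetical stand-ins for whatever your store and email provider actually use.

from datetime import datetime, timedelta

# Hypothetical cart records; in practice these would come from your store's database.
carts = [
    {"email": "shopper@example.com", "items": ["blue mug"], "updated": datetime(2022, 11, 1, 9, 30)},
    {"email": "buyer@example.com", "items": ["desk lamp"], "updated": datetime.utcnow()},
]

def send_email(address, subject, body):
    # Stand-in for a real email or marketing-automation API call.
    print(f"To: {address} | {subject} | {body}")

cutoff = datetime.utcnow() - timedelta(hours=24)  # arbitrary "abandoned" threshold
for cart in carts:
    if cart["items"] and cart["updated"] < cutoff:
        send_email(
            cart["email"],
            "You left something in your cart",
            f"Still interested in {', '.join(cart['items'])}? Finish checking out any time.",
        )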

The post 8 Ways AI Contributes to Ecommerce Business Scalability appeared first on SmartData Collective.

Source : SmartData Collective Read More

How IEC 62443 and Other Regulatory Requirements Help Enable IoT Security

As the US Government Accountability Office warns, “internet-connected technologies can improve services, but face risks of cyberattacks.” The use of IoT devices and operational technology (OT) generates new attack surfaces that can expose an organization’s critical infrastructure to hackers and other threat actors.

Building access gadgets, badge readers, fuel usage and route monitors (for vehicle fleets), and apps that connect to the enterprise IT infrastructure, among others, can be targeted by hackers to compromise not only the devices but the entire network. Worse, attacks on the IoT and OT systems used in power generating stations, production lines, medical facilities, and other critical infrastructure can result in serious or tragic outcomes, including actual loss of life.

Just like most other things that gain widespread use, regulation has started creeping into IoT products. With more than 13 billion IoT devices across the world, it is not surprising that efforts have been undertaken to ensure their security. Here’s a rundown of some notable legal and regulatory requirements imposed to ensure IoT and OT security.

IEC 62443

IEC 62443, or International Electrotechnical Commission standard 62443, is a series of standards created to counter cyber risks involving operational technology in automation and control systems. It lays out standards for different categories or roles, namely operators, service providers, and component/system manufacturers.

Introduced in 2021, IEC 62443 presents tasks and practices aimed at identifying cyber risks and determining the best defensive or counter-offensive measures. It requires organizations to create a cybersecurity management system (CSMS) that includes the following key elements: initial risk evaluation and prioritization, technical risk assessment, security policy formulation, countermeasure identification and implementation, and CSMS maintenance.

IEC 62443 does not specifically target IoT devices, but two of its sub-standards are highly relevant to IoT and OT use. IEC 62443-4-1 and IEC 62443-4-2, in particular, require IoT product makers to ensure a secure product development lifecycle and have in place technical system components that guarantee secure user identification and authentication, product usage, system integrity, data confidentiality, data flow regulation, timely security event response, and resource availability.

Properly securing IoT devices is a complex and difficult process, given that it is rarely viable to install cyber protections on individual IoT devices. However, global security standards such as IEC 62443 compel manufacturers and others involved in the production, deployment, and use of IoT to play a role in addressing the risks and threats.

IoT Cybersecurity Improvement Act of 2020

The IoT Cybersecurity Improvement Act of 2020 is a law that mandates the National Institute of Standards and Technology (NIST) and the Office of Management and Budget (OMB) to undertake steps that advance IoT security. It requires the NIST to formulate guidelines and standards to ensure the secure use and management of IoT devices in federal government offices and connected agencies. On the other hand, the law orders the OMB to review the IT security policies and principles of federal agencies in line with the standards and guidelines set by NIST.

The NIST has a website that presents the resources it has developed in response to the IoT security law. These resources include NISTIR 8259, which provides security information and guidance for IoT manufacturers; the SP 800-213 series, which contains information for federal agencies; and information on IoT security for consumers.

While the requirements set by the IoT Cybersecurity Improvement Act of 2020 are only for federal offices or agencies, these are expected to pave the way for the adoption of similar IoT security measures in the private sector. After all, if IoT device makers are already creating secure products for their government clients, there is no reason for them not to adopt the same cyber protections for the products they sell to other customers.

EU IoT Cybersecurity legislation (proposed)

The European Union does not have its own version of the US IoT cybersecurity law yet, but it already has one in the works. This proposed IoT security legislation is not a standalone bill but part of the EU Cyber Resilience Act, the first EU-wide law to impose cybersecurity rules on device manufacturers.

Once the law is enacted, companies will be required to get mandatory certificates that serve as proof of their compliance. The legislation plans to impose heavy fines on IoT product makers that fail to meet the requirements or violate regulations. Offending companies can be fined up to €15 million or 2.5 percent of their turnover from the previous year.

The EU’s proposed IoT security law is notably broader in scope compared to what the United States currently has. The proposed legislation will provide the European Commission the authority to ban or recall non-compliant IoT products, regardless of whether they are being sold to the government or to private customers.

IoT security labeling program (proposed)

Nevertheless, the United States government plans to introduce an IoT security labeling program, which in a way expands the scope of its IoT security efforts beyond federal government offices. Set to be implemented in the spring of 2023, the program will provide information (through physical labels) about the security of IoT devices on the market. It aims to help buyers of IoT products make better-informed purchase decisions.

The proposed IoT security labeling program is comparable to Energy Star labels, which provide consumers with information about the energy efficiency of appliances and electronic devices. It does not force insecure IoT products off the market, but it makes them less attractive to buyers.

There are no details yet as to the certification and labeling process. It is unclear if companies are allowed to self-certify or if they can refer to third-party certifying bodies. However, most industry players reportedly expressed support for the plan.

Other notable IoT security efforts

Other countries also acknowledge the importance of securing IoT devices. In Japan, for example, a law was passed to allow the government to hack into IoT devices used not only in government offices but in private establishments and homes. The government’s rationale: finding and addressing the security loopholes before threat actors do.

In China, the Ministry of Industry and Information Technology (MIIT) released guidelines for the establishment of a security standard for the internet of things. The standard includes guidance regarding software security, data security, and user access and authentication.

Singapore, on the other hand, already has an IoT cybersecurity labeling program that is recognized by Finland and Germany, which also have their respective labeling programs. The program is officially referred to as the Cybersecurity Labelling Scheme (CLS) for consumer smart devices.

The development of the IEC 62443 series of international cybersecurity standards and the implementation of related laws and regulations in different countries are welcome steps for IoT and operational technology security. IoT and embedded devices are more often than not ignored as cyber-attack surfaces. Organizations benefit from regulations and legislated security requirements because, without them, they are likely to disregard, downplay, or pay little attention to the increasing risks brought about by the expanding IoT ecosystem.

The post How IEC 62443 and Other Regulatory Requirements Help Enable IoT Security appeared first on SmartData Collective.

Source : SmartData Collective Read More