

Implementing hyperparameter optimization with Optuna on Amazon SageMaker



Preferred Networks (PFN) released the first major version of their open-source hyperparameter optimization (HPO) framework Optuna in January 2020. Optuna features a define-by-run (eager) API. This post introduces a method for HPO using Optuna and a reference architecture for running it on Amazon SageMaker.

Amazon SageMaker supports various frameworks and interfaces such as TensorFlow, Apache MXNet, PyTorch, scikit-learn, Horovod, Keras, and Gluon. The service offers ways for all developers and data scientists to build, train, and deploy machine learning models. Amazon SageMaker offers managed Jupyter Notebook and JupyterLab environments as well as containerized environments for training and deployment. The service also offers Automatic Model Tuning, a built-in Bayesian HPO feature.

When you use Amazon SageMaker Automatic Model Tuning, you define a search space before performing HPO. For example, see the parameters in the following code:

from sagemaker.tuner import HyperparameterTuner, IntegerParameter, CategoricalParameter, ContinuousParameter

hyperparameter_ranges = {
    'optimizer': CategoricalParameter(['sgd', 'Adam']),
    'learning_rate': ContinuousParameter(0.01, 0.2),
    'num_epoch': IntegerParameter(10, 50)
}

You can parse metrics out of logs with regular expressions, feed them to the HyperparameterTuner class, and execute HPO. See the following code:

objective_metric_name = 'Validation-accuracy'
metric_definitions = [{'Name': 'Validation-accuracy', 'Regex': 'Validation-accuracy=([0-9\.]+)'}]

tuner = HyperparameterTuner(estimator,
                            objective_metric_name,
                            hyperparameter_ranges,
                            metric_definitions,
                            max_jobs=9,
                            max_parallel_jobs=3)
{'train': train_data_location, 'test': test_data_location})

The preceding code shows that you can easily execute HPO with Bayesian optimization by specifying the maximum number of jobs and the number of concurrent jobs for the hyperparameter tuning job. For more information, see Amazon SageMaker Automatic Model Tuning: Using Machine Learning for Machine Learning.

Using Optuna for HPO

You can write HPO using eager APIs in Optuna, which simplifies the code when you optimize, for example, the number of layers in a neural network. If you had to pre-define a complicated parameter namespace that covers each branch of the network structure per selected number of layers, the definition would be cumbersome and the computational load heavy. With Optuna's eager mode, you can instead write the HPO intuitively. You can define the model as shown in the following code. The trial is the parameter set defined by Optuna; through it, you obtain the hyperparameter values for each run.

def define_model(trial):
    # We optimize the number of layers, hidden units, and dropout ratio in each layer.
    n_layers = trial.suggest_int("n_layers", 1, 3)
    layers = []

    in_features = 28 * 28
    for i in range(n_layers):
        out_features = trial.suggest_int("n_units_l{}".format(i), 4, 128)
        layers.append(nn.Linear(in_features, out_features))
        layers.append(nn.ReLU())
        p = trial.suggest_uniform("dropout_l{}".format(i), 0.2, 0.5)
        layers.append(nn.Dropout(p))
        in_features = out_features
    layers.append(nn.Linear(in_features, CLASSES))
    layers.append(nn.LogSoftmax(dim=1))

    return nn.Sequential(*layers)

To use Optuna, you define an objective function objective() inside the training script so that it returns the value you want to maximize or minimize. See the following code:

def objective(trial):
    # Generate the model.
    model = define_model(trial).to(DEVICE)

    # Generate the optimizer.
    optimizer_name = trial.suggest_categorical("optimizer", ["Adam", "RMSprop", "SGD"])
    lr = trial.suggest_loguniform("lr_{}".format(optimizer_name), 1e-5, 1e-1)
    optimizer = getattr(optim, optimizer_name)(model.parameters(), lr=lr)

    # Get the MNIST dataset.
    train_loader, test_loader = get_mnist(args)

    # Training of the model.
    model.train()
    for epoch in range(EPOCHS):
        for batch_idx, (data, target) in enumerate(train_loader):
            # Limiting training data for faster epochs.
            if batch_idx * BATCHSIZE >= N_TRAIN_EXAMPLES:
                break
            data, target = data.view(-1, 28 * 28).to(DEVICE),
            # Zeroing out gradient buffers.
            optimizer.zero_grad()
            # Performing a forward pass.
            output = model(data)
            # Computing negative log likelihood loss.
            loss = F.nll_loss(output, target)
            # Performing a backward pass.
            loss.backward()
            # Updating the weights.
            optimizer.step()

    save_model(model, '/tmp', trial.number)
    …
    return accuracy

In each execution, parameters are selected by expressions such as n_layers = trial.suggest_int('n_layers', 1, 3) in the define_model(trial) function. The optimizer is defined similarly: trial.suggest_categorical selects the optimization method and trial.suggest_loguniform selects the learning rate.
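The define-by-run behavior can be illustrated without Optuna itself. The following sketch mimics the suggest_* API with plain random sampling; FakeTrial is a hypothetical stand-in for Optuna's Trial, shown only to make the control flow concrete, and none of its names come from the Optuna codebase.

```python
import random

class FakeTrial:
    """Minimal stand-in for an Optuna Trial: each suggest_* call
    draws a value and records it under its parameter name."""
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.params = {}

    def suggest_int(self, name, low, high):
        value = self.rng.randint(low, high)  # inclusive bounds, like Optuna
        self.params[name] = value
        return value

    def suggest_categorical(self, name, choices):
        value = self.rng.choice(choices)
        self.params[name] = value
        return value

trial = FakeTrial(seed=0)
n_layers = trial.suggest_int("n_layers", 1, 3)
# Parameters for deeper layers only come into existence when those layers
# are created, so the search space is built as the code runs (define-by-run).
for i in range(n_layers):
    trial.suggest_int("n_units_l{}".format(i), 4, 128)
print(sorted(trial.params))
```

Because the per-layer parameters are created inside the loop, no branch of the search space has to be declared up front, which is exactly what the preceding paragraph describes.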

You can execute HPO by calling this defined objective function. See the following code:

study = optuna.create_study(storage=db, study_name=study_name, direction='maximize')
study.optimize(objective, n_trials=100)

In the preceding code, a study is a unit of the HPO job. It is saved in the specified relational database; for this use case, you save it in an Amazon Aurora database.
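Optuna identifies an RDB backend by a SQLAlchemy-style URL, so the db value passed as storage can be assembled as in the following sketch. The helper name and all connection values here are placeholders, not values from this walkthrough:

```python
def build_storage_url(user, password, host, db_name, port=3306):
    """Assemble a SQLAlchemy-style MySQL URL for Optuna's RDB storage,
    using the PyMySQL driver listed in requirements.txt."""
    return 'mysql+pymysql://{}:{}@{}:{}/{}'.format(user, password, host, port, db_name)

# Example with placeholder values (the real host comes from the
# CloudFormation outputs, the credentials from Secrets Manager):
db = build_storage_url('admin', 'secret',
                       'example-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com',
                       'optuna')
print(db)
```
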

Using Optuna in Amazon SageMaker

You can install Optuna with pip install optuna. Optuna stores its history in memory by default; to keep a persistent log, you can use a relational database as the backend. In this use case, you optimize in parallel using Amazon SageMaker training jobs, with Aurora MySQL, the managed database service in AWS, as that backend. The parallel workers share this MySQL database to navigate the search space. You launch the Aurora database inside a closed, virtual network environment dedicated to your account: an Amazon Virtual Private Cloud (Amazon VPC) separated from other networks in AWS. Because this database doesn't need any connection from the public internet, you can place it inside a private subnet in your VPC.

You can launch an Amazon SageMaker notebook instance in the same VPC to use a Jupyter environment for developing models. Because you need to connect to this instance from the open internet during development, place it in a public subnet. To allow connection to the Aurora database, configure the instance's virtual firewall (the security group) and the route table to define the network traffic route appropriately. You also launch the training container in the VPC and create a NAT gateway so the container can connect to the open internet to install libraries.

AWS CloudFormation templates and sample code are available in the GitHub repo to help you create the environment in AWS. Choose Create Stack and create the environment shown in the following diagram; this takes 10–15 minutes.

The status CREATE_COMPLETE displays when stack creation is complete. The CloudFormation stack creates resources that incur charges, so make sure to delete the stack after this tutorial if you don’t intend to use it further.

To connect to the database securely, keep your user name and password in AWS Secrets Manager.
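Secrets Manager returns the secret value as a JSON string, which the training script can parse with the standard library. In this sketch, the key names username and password follow the default RDS-style secret format, the helper is hypothetical rather than code from the repo, and the actual boto3 call is shown only as a comment:

```python
import json

def parse_db_secret(secret_string):
    """Extract database credentials from a Secrets Manager SecretString,
    which RDS-managed secrets store as a JSON document."""
    secret = json.loads(secret_string)
    return secret['username'], secret['password']

# In the training script, the string would come from something like:
#   boto3.client('secretsmanager', region_name=region_name) \
#        .get_secret_value(SecretId=secret_name)['SecretString']
user, password = parse_db_secret('{"username": "admin", "password": "example"}')
print(user)  # prints "admin"
```
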

Executing in Amazon SageMaker

You can now connect to the notebook instance of Amazon SageMaker.

  1. Open the Jupyter environment redirected from the AWS Management Console.
  2. Open the sample code pytorch_simple.ipynb cloned from GitHub.
  3. Choose the kernel conda_pytorch_p36.
  4. Install Optuna and a MySQL client (pip install optuna PyMySQL).

Preparation for training

Amazon SageMaker uses a Docker container for training and hosting inference endpoints. For this use case, you use the existing PyTorch Docker image in the GitHub repo with requirements.txt to set up an environment because you’re merely adding Optuna to the environment. You can install it when a container starts by specifying the versions of libraries in requirements.txt:

PyMySQL == 0.9.3
optuna == 1.4.0

Alternatively, you could create an Amazon SageMaker Docker image with an additional library for training or inference before running the container. In that way, you reduce the overhead of starting the container. However, this method is outside of the scope of this post.

The Python script (src/ specified as an entry point runs at the time of training in Amazon SageMaker. You just need to rewrite a few parts of the training script so the Python script can run in Amazon SageMaker.

In the main function, parse the hyperparameters passed when the training job executes, load the data from Amazon Simple Storage Service (Amazon S3), and save the model to Amazon S3 after the training by specifying the pre-defined directory obtained from the environment variables. For more information, see Using PyTorch with the SageMaker Python SDK.
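As a rough sketch of that boilerplate (the exact flag names and layout are illustrative, not the repo's actual script): hyperparameters arrive as command-line flags, while SageMaker passes the data channels and model directory through the SM_CHANNEL_* and SM_MODEL_DIR environment variables.

```python
import argparse
import os

def parse_args(argv=None):
    """Parse hyperparameters (passed as CLI flags by SageMaker) and the
    data/model directories (passed via SM_* environment variables)."""
    parser = argparse.ArgumentParser()
    # Hyperparameters passed to the estimator in the notebook.
    parser.add_argument('--host', type=str)
    parser.add_argument('--db-name', type=str)
    parser.add_argument('--db-secret', type=str)
    parser.add_argument('--study-name', type=str)
    parser.add_argument('--region-name', type=str)
    parser.add_argument('--n-trials', type=int, default=25)
    # Directories provided by the SageMaker container environment.
    parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAIN'))
    parser.add_argument('--test', type=str, default=os.environ.get('SM_CHANNEL_TEST'))
    parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    return parser.parse_args(argv)

args = parse_args(['--study-name', 'demo', '--n-trials', '5'])
print(args.study_name, args.n_trials)
```
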

Running the training job

Obtain the following parameters from the Outputs tab on the AWS CloudFormation console:

  • Private subnet ID and security group of training container
  • Endpoint of the Aurora database
  • Secret name of Secrets Manager

You can use the Amazon SageMaker Python SDK to upload the data to Amazon S3. For this walkthrough, you call the training script created in the previous section from the notebook instance in Amazon SageMaker. Initialize the PyTorch estimator class by specifying the version of PyTorch, the IAM role for execution, the subnet ID, the security group, and the type and number of instances. Launch the container with the fit() method. The sample notebook runs multiple training jobs in parallel on Amazon SageMaker. See the following code:

# setup SageMaker PyTorch estimator
from sagemaker.pytorch.estimator import PyTorch

pytorch_estimator = PyTorch(entry_point='',
                            source_dir="src",
                            framework_version='1.5.0',
                            role=role,
                            subnets=subnets,
                            security_group_ids=security_group_ids,
                            train_instance_count=1,
                            train_instance_type='ml.c5.xlarge',
                            hyperparameters={'host': host,
                                             'db-name': db_name,
                                             'db-secret': secret_name,
                                             'study-name': study_name,
                                             'region-name': region_name,
                                             'n-trials': 25})

# HPO in parallel
max_parallel_jobs = 4
for j in range(max_parallel_jobs - 1):{'train': train_input, 'test': test_input}, wait=False){'train': train_input, 'test': test_input})

You can see the console output during training in Jupyter Notebook. A model is saved in each call of the objective() function. Only the models from the trials that are optimal in each training job are moved to model_dir and stored in Amazon S3. The Amazon SageMaker training job ID is recorded in the user attribute of the trial for retrieval. This makes it easy to get the Amazon S3 object key where the best model is stored when the inference endpoint is deployed.
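Given the training job name recorded in the trial's user attributes, the best model's Amazon S3 location can be assembled with simple string handling. This sketch assumes the default SageMaker artifact layout, output_path/job_name/output/model.tar.gz, and all values shown are placeholders:

```python
def best_model_s3_uri(output_path, job_name):
    """SageMaker training jobs write their model artifact under
    <output_path>/<job_name>/output/model.tar.gz."""
    return '{}/{}/output/model.tar.gz'.format(output_path.rstrip('/'), job_name)

# Placeholder values; in the notebook, output_path comes from the estimator
# and job_name from study.best_trial.user_attrs.
print(best_model_s3_uri('s3://my-bucket/optuna',
                        'pytorch-training-2020-01-01-00-00-00-000'))
```
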

You can visualize the result when the training job is complete. Retrieve the result as pandas.DataFrame. See the following code:

study = optuna.load_study(study_name=study_name, storage=db)
df = study.trials_dataframe()
ax = df['value'].plot()
ax.set_xlabel('Number of trials')
ax.set_ylabel('Validation accuracy')

Through repeated trials, you can search for better hyperparameters and confirm that the accuracy has improved, which is shown in the following figure.

Creating the inference endpoint

You can now deploy the model and check the inference result.

  1. Retrieve the best model in the study executed so far.
  2. Specify the S3 bucket and object where the results are stored using the Amazon SageMaker job ID that you recorded in the previous step.

The model loading for hosting is written in

  1. Specify the instance numbers and types, and call the deploy() method. See the following code:
    from sagemaker.pytorch import PyTorchModel

    best_model_data = "{}/{}/output/model.tar.gz".format(pytorch_estimator.output_path,
                                                         study.best_trial.user_attrs['job_name'])
    best_model = PyTorchModel(model_data=best_model_data,
                              role=role,
                              entry_point='',
                              source_dir="src",
                              framework_version='1.5.0')
    predictor = best_model.deploy(instance_type="ml.m5.xlarge", initial_instance_count=1)

    Endpoint creation takes several minutes.

By sending test images to the endpoint, you can get the inference results. See the following code:

import torch

test_loader =
    datasets.MNIST('data', train=False, transform=transforms.ToTensor()),
    batch_size=5,
    shuffle=True,
)

for batch_idx, (data, target) in enumerate(test_loader):
    data, target = data.view(-1, 28 * 28).to('cpu'),'cpu')
    prediction = predictor.predict(data)
    predicted_label = prediction.argmax(axis=1)
    print('Pred label: {}'.format(predicted_label))
    print('True label: {}'.format(target.numpy()))
    break

If you see the result as the following code, the execution is successful:

Pred label: [9 7 8 6 1]
True label: [9 7 8 6 1]

Cleaning up

When you are done running tests, delete the unused endpoints to avoid unintended charges. See the following code:

predictor.delete_endpoint()
Delete the CloudFormation stack if you don’t intend to use it any longer. You can delete it on the console.


Conclusion

This post demonstrated how to execute hyperparameter optimization (HPO) using Optuna in Amazon SageMaker. It reviewed the necessary architecture and procedures using the AWS services and Optuna versions currently offered. Although this post used Aurora MySQL, you can use other RDS engines, and even Redis (experimental in Optuna v1.4.0), for the parameter database. As demonstrated in this post, you can combine Amazon SageMaker with other AWS services to run various workloads. If you have other use cases and new ideas, share your thoughts with the AWS Solutions Architects.

About the authors

Yoshitaka Haribara, Ph.D., AWS Solutions Architect. He works with machine learning startups in Japan, including PFN. His favorite services are Amazon SageMaker and Amazon Braket.

Shoko Utsunomiya, Ph.D., AWS Machine Learning Solutions Architect; she supports and provides solutions for machine learning projects in the automotive, healthcare, and gaming industries. Her favorite AWS service is Amazon SageMaker.



Why Choosing the Right CBD Product Is Important



When people first decide to use CBD products in order to enjoy the benefits of CBD, they are often confused over which product to purchase. There are many different products you can choose from these days, with many people buying CBD gummies, drops, capsules, and other products online. For those who are new to CBD, it is always important to do some research and find the right CBD products, and this is vital for a range of reasons.

Of course, you do need to look at a few key factors in order to help you to choose the right CBD products, as there are so many different options to choose from. You can do things such as look at online reviews from other people, research the manufacturer and retailer, and consider the suitability of the product for your specific needs and lifestyle. In this article, we will look at some of the reasons why you need to ensure you make the right choices.

The Importance of Doing This

There are many reasons why it is so important that you find the right CBD product for your needs as someone who is new to these products. Some of the reasons behind this are:

You Need to Ensure Quality and Safety

One of the reasons it is so important to look for the right CBD products is so that you can ensure quality and safety. As with any other type of product, you can get great quality CBD products from reputable sources, and you can find substandard ones from questionable sources. It is vital that you do not make the mistake of buying the latter, as this could leave you with a product that is ineffective or even unsafe. By choosing the right product and provider, you can benefit from quality, safety, and effectiveness.

It Is Important to Ensure Suitability

Another of the reasons you need to find the right CBD products is to ensure suitability: you need products that are well suited to your needs. To do this, look at your preferences and your lifestyle and match these to the ideal products. For instance, if you use a vape device, you could look at CBD liquids, whereas if you like sweet treats, you could consider CBD edibles.

You Must Look at Affordability

One of the other reasons you need to choose the right CBD products is to ensure you find something that is affordable and fits in with your budget. The cost of CBD products can vary widely, so you need to do some research and compare different costs in order to find ones that you can afford. Also, make sure you know how much you can afford to spend before you start researching the options, as this means you will not waste time looking at products that are out of your price range.

These are some of the reasons you need to ensure you find the right CBD products.

The post Why Choosing the Right CBD Product Is Important appeared first on 1redDrop.

PlatoAi. Web3 Reimagined. Data Intelligence Amplified.



Morgan Stanley’s robot Libor lawyers saved 50,000 hours of work



Untangling trillions of dollars worth of loans and other financial contracts from Libor is a complex, expensive and time-consuming job.
So, finance giants are turning to artificial intelligence to simplify and speed up a task mandated by regulators — and spare human lawyers some serious drudgery.

Morgan Stanley figures it’s saved legal staffers 50,000 hours of work and $10 million in attorney fees by using robot Libor lawyers instead of only the human kind. Goldman Sachs Group Inc. says computer algorithms sped things up “drastically.” These banks aren’t alone in adopting AI, and the revolution likely won’t stop with the Libor transition — but the number of contracts involved in this shift provides an ideal testing ground for the machines.

The task would be grueling for paralegals, whose torture involves parsing dense clauses to sort out which govern in a post-Libor world. Does this paragraph decide how to replace the rate, or do these? They’d sweat floating-rate options, applicable periodic rates and substitute basis to sort out the new interest payment, and grapple with whether the legalese applies just to bonds or to loans and swaps as well.

Then repeat all that grunt work over millions of pages.

‘Army of Lawyers’

“We had a client that had 15 million queries and they were able to get all that answered within a quarter,” said Lewis Liu, chief executive officer at Eigen Technologies Ltd., which helped Goldman Sachs and ING Groep NV deploy Libor-analyzing software. “The alternative would have been literally an army of lawyers and paralegals over a year, or maybe two.”

This is all happening because a decade ago major banks were caught rigging Libor (full name: the London interbank offered rate). As a consequence, the benchmark is being switched off throughout the global financial system. Newly issued loans and other products cannot be tied to the rate after Dec. 31, and it will be retired for dollar-based legacy products after June 2023.

So here come the bots. But even with AI, examining old legal documents to figure out how they change when Libor is swapped out for another interest-rate benchmark is costly. Major global banks are each spending at least $100 million this year on the job, according to Ernst & Young. And humans still need to check their work and make final decisions; once banks discover which contracts need to be renegotiated, they must sit down and haggle with their counterparty.

“A person has to look at the documents and come up with a strategy,” said Anne Beaumont, a partner at law firm Friedman Kaplan Seiler & Adelman LLP, who views AI as an enhancement rather than a threat. “It probably makes a lot of paralegals and lawyers happy that they don’t have to waste time.”

The experience is reshaping broader attitudes toward large-scale administrative tasks, pushing other cumbersome jobs to AI. JPMorgan Chase & Co. has asked its Libor robots to expand their remit and grapple with other hard tasks in the company’s corporate and investment bank, a spokesman said.

Of course, a broader industry shift to more AI could mean fewer jobs for humans in certain areas.

Feeling the Pain

Libor is keeping the bots plenty busy, though. Morgan Stanley’s software digested 2.5 million references to Libor, according to Rob Avery, a managing director at the bank. The algorithm — based on neural-network models and known as Sherlock — rifles through contracts, digging out clauses that identify how a collateralized loan obligation or a mortgage-backed security will transition to replacement rates.

Graph by Bloomberg Mercury

It categorizes them so Morgan Stanley can determine how their value will change depending on the replacement rate. That helps the bank decide whether to keep or sell the asset. The software operates “in a fraction of human processing time to assess the impact of potential rate-change scenarios,” Avery said in an interview.

Goldman Sachs, meanwhile, has seen AI “accelerating the project timescales drastically,” Managing Director Donna Mansfield said in a testimonial published by Eigen.

ING used AI to decide whether more than 1.4 million pages of loan agreements needed revision, said Rick Hoekman, a leader in the bank’s wholesale banking lending team. “It was a big success” that eliminated a lot of manual work, he said. The company’s data scientists may eventually use the software to approve the credit of clients.

That’s not to say that everyone is piling in. NatWest Markets Plc was approached a couple of years ago by consultancies offering AI, but turned them down. “We sensed it would involve a huge project to get it to work and would consume lots of time when we just wanted to crack on,” said Phil Lloyd, head of customer sales delivery. “We felt it might help but it wouldn’t be a nirvana.”

Plenty of other banks and asset managers have struggled with such software and are instead hiring offshore lawyers and paralegals to do the work after seeing the large amount of training and technology required.

But there’s likely no stopping AI from spreading throughout banking.

Bank of New York Mellon Corp. is working with Google Cloud to help market participants predict billions of dollars of U.S. Treasury trades that fail to settle each day, and with software company Evisort Inc. to manage contract negotiations.

“When your 12-year-old and my 12-year-old are our age, they’re not going to do finance the way we do — you can see their impatience with technology,” said Jason Granet, chief investment officer at BNY Mellon and the former head of the Libor transition at Goldman Sachs. “You’re not going to beat them, so you’ve got to join them.”

— By William Shaw with assistance from Greg Ritchie and Fergal O’Brien





7 Ways Machine Learning Can Enhance Your Marketing



In the digital era, no marketer can survive without mastering data, analytics, and automation; the reason is a massive surge in data generation. If you look at the stats, more than 2.5 quintillion bytes of data are generated every day; that equals 2.5 followed by a stupefying 18 zeros, according to Social Media Today.

“And by 2025, the amount of data generated each day will surge to 463 exabytes globally, according to the World Economic Forum.”

And the fun part: all the words ever spoken by humans would fit into only five exabytes of data. Now imagine the importance of mastering data, analytics, and automation, and why it is crucial today. You have probably got your answer by now.

But to stand out in the market and beat your competitors, you need to understand the ongoing and upcoming trends. How can you analyze them seamlessly? Through machine learning and advanced automation.

And in this blog, we’re going to learn how machine learning can enhance marketing in the highly competitive world. Remember, you’re not alone in the race, but you need to think and act a step in advance to beat your competitors.

If you get what I mean, let’s dive in and explore them in detail.

7 Coolest Ways Machine Learning Can Enhance Your Marketing

Marketing success depends upon many significant factors, from proper customer research to building the brand strategy, engaging with the customers, and delighting them; it takes a lot of effort and automation.

And to solve these massive problems and ease the marketer's work through accurate data analysis, machine learning has an enormous role to play. Here is a breakdown of how machine learning influences marketing.

Understanding Customers in 360 Degrees

Every day, your customers share information about themselves, but the best thing you can do is spend most of your time where your customers love to spend. When you start paying attention, you start knowing them better and better.

You get to know your customer’s last purchase, their problems, and how you and your products can help them. When you understand their pain points and are able to fulfil their needs and predict what they are likely to purchase the next time, understand the psychology behind it – you get the 360-degree view of customers.

Real-Time Analytics Gives You On-going and Up-coming Trends

Today, in the digital era, the world is changing so fast that it's tough to comprehend data, and that's one reason why business decisions keep changing from time to time: by the time you're about to make a final decision, more and more data has been bombarded at you.

A few free tools from Google are Google Keywords, Google Analytics, and Google Search Console. When you use them, you get the exact data you need to understand the ongoing and upcoming trends and how your competitors do the same for any location and product.

According to Gartner, real-time analytics is a discipline that requires logic and mathematics to make better decisions quickly. And again, according to Gartner’s research, by 2022, most companies will incorporate real-time analytics to push their firm to the ultimate level and stay ahead of their competitors — just to improve decision making.

Smart Engine Recommendations is the Smartest Move Ever

Businesses run on data, and that’s so true, but where does the data come from? From users, right? Yes, whenever you visit a website or purchase a product, the website cookies track everything, and from there, the analyst can know what other things you would be interested in and like to buy.

And they push you to do similar things when you visit their website. Let’s suppose you purchased an iPhone at this Great Indian Festival; what Amazon will show you next, the phone charger, the case, and tempered glass, saying people who have purchased iPhones have also purchased these items.

How does Amazon do that? Amazon does that using KNN (k-nearest neighbors) algorithms in its recommendation engine. That's the smartest move ever.

Predictive Engagement and Analytics (Just a Few Steps Away)

The first step of data analytics is to be able to understand the data, meaning when you know the data, you know customers and what they are looking for. From there, you might know what they might actually purchase.

And predictive analytics is all about that; it’s the likelihood of customers taking a particular action and companies using different software for the accurate prediction.

The best example is “The Big Billion Sale” campaign by Flipkart. If you have looked closely, you have seen the best deals, only seven left, and many different tactics to boost sales while the price fluctuates.

When you're about to purchase, the order goes out of stock, and then it becomes available again. Or think of how, whenever a new flagship phone launches, there are limited sales every week and delivery to the first registered customers until the device is fully available.

Chatbots are the New and Ultimate Sales Persons

Nowadays, almost every website has something called a chatbot, and it is NLP-enabled, meaning it's a self-learning algorithm that learns by itself. With this, you don't need to be active on a website 24/7.

Chatbots are your new and ultimate sales AI-Robots and can guide your visiting customers by understanding their search intent, helping you collect the leads, and later you can turn them into customers.

Personalization is the New Customer-Centric Emotion

When you look into it from different perspectives, you can always relate to customers being emotion-driven; when you present them in the right way and poke their pain points, they are most likely to take action.

But when you personalize your messages, addressing customers by name, they feel 'This company is customer-centric and values its customers a lot.' And that's what hooks them to your business.

The best way to do this is through email marketing, and we have so many tools for the same with self-learning algorithms that automate the whole process with personalization.

Voice Search is the New Generation of Search Optimization and Search Engine

In the digital era, with many advanced features on mobile and web apps, our lives have become more sophisticated. People are hardly interested in typing out their queries anymore; they voice-search them instead.

That's what the world's largest eCommerce platform, Amazon, does brilliantly with its Alexa implementation. It works on the principle of Natural Language Processing: it captures the audience's queries, looks for the best and related matches through the KNN algorithm, and showcases the most relevant items to the customers with matching keywords.

That way, Amazon makes the marketing and business model easy for the end-users and holds their customers for a long time.


When you read the whole thing, you learn how advanced and essential machine learning has become and how crucial it is to integrate it into business models.

These seven machine learning algorithms have already been game-changing. If you’re a business owner or stakeholder, you must plan to implement them in your business to see it scaling.

Also read: How to Use Machine Learning for E-Commerce

The post 7 Ways Machine Learning Can Enhance Your Marketing appeared first on AiiotTalk – Artificial Intelligence | Robotics | Technology.





Common Pay Per Click Mistakes and How to Avoid Them



There are plenty of articles online that talk about some recommended practices on how to build your marketing campaigns. There are also a variety of techniques for optimization and numerous concepts regarding how to structure effective online advertisements.

Since there are countless pieces of advice available on the internet, it is very likely that you will get lost in conflicting ideas and be confused as to which you should follow.

Things will be easier if you have an expert team to help you with your needs. Nevertheless, it is completely normal to commit mistakes as long as you learn how to avoid them next time.

Not Utilizing Negative Keyword Lists Efficiently

One of your allies in the effective execution of PPC campaigns is the proper use of keywords. Aside from that, using negative keyword lists with efficiency is also a helpful way to ensure that your PPC campaigns are doing well.

“It will be a great practice if you will have a master list of negative keywords so you can apply it to all of your campaigns with particular terms or phrases that you do not want your advertisements to appear for.” 

Regularly checking the search query reports will help you avoid wasting money on search queries that you do not want your advertisements to be shown for.

Not Matching Keywords to Ad Copy

As a wise business owner, you have to exert more effort in making your advertising campaigns as relevant as possible. Since online consumers have a very short attention span, they do not have the luxury of time to deal with unnecessary and uninteresting websites.

One of the most common mistakes in PPC is making one set of ads and using it across multiple ad groups. This works only for maintaining a broad common theme; for personalization, it will make your campaigns weak.

Since you have a lot of other things to focus on for your business, it would be wise and easier to hire an ROI-driven PPC team who are experts in making relevant and successful advertising campaigns.

Focusing Too Much on an Average Position

Advertisers make the mistake of focusing on average position. An average position of one (1) simply means that your advertisements are appearing ahead of the other paid ads in the search results.

It does not strictly mean that your ads are actually in the top spot. This is why average position is not an indication of where your ads are located when they are shown.

Key Takeaway

Now that you are knowledgeable regarding the common mistakes on PPC, take this information as your driving force to help yourself avoid committing these mistakes.

It is good that you know how to solve these problems when you have committed some mistakes but it is better that you know how to avoid these problems before you even commit some mistakes. You have to employ a proactive approach to ensure that you maximize the full potential of PPC as your marketing campaign.

Also Read, Impact of Artificial Intelligence and Machine Learning on SEO

The post Common Pay Per Click Mistakes and How to Avoid Them appeared first on AiiotTalk – Artificial Intelligence | Robotics | Technology.


