Amazon SageMaker Studio is a web-based fully integrated development environment (IDE) where you can perform end-to-end machine learning (ML) development to prepare data and build, train, and deploy models.
Like other AWS services, Studio supports a rich set of security-related features that allow you to build highly secure and compliant environments.
One of these fundamental security features allows you to launch Studio in your own Amazon Virtual Private Cloud (Amazon VPC). This allows you to control, monitor, and inspect network traffic within and outside your VPC using standard AWS networking and security capabilities. For more information, see Securing Amazon SageMaker Studio connectivity using a private VPC.
Customers in regulated industries, such as financial services, often don’t allow any internet access in ML environments. They often use only VPC endpoints for AWS services and connect only to private source code repositories in which all libraries have been vetted for both security and licensing. Other customers may want to allow internet access, but with controls such as domain name or URL filtering, access limited to specific public repositories and websites, packet inspection, or other network traffic security controls. For these cases, a deployment based on AWS Network Firewall and a NAT gateway can provide a suitable solution.
In this post, we show how you can use Network Firewall to build a secure and compliant environment by restricting and monitoring internet access, inspecting traffic, and using stateless and stateful firewall engine rules to control the network flow between Studio notebooks and the internet.
Depending on your security, compliance, and governance rules, you may not need to or cannot completely block internet access from Studio and your AI and ML workloads. You may have requirements beyond the scope of network security controls implemented by security groups and network access control lists (ACLs), such as application protocol protection, deep packet inspection, domain name filtering, and intrusion prevention system (IPS). Your network traffic controls may also require many more rules compared to what is currently supported in security groups and network ACLs. In these scenarios, you can use Network Firewall—a managed network firewall and IPS for your VPC.
When you deploy Studio in your VPC, you control how Studio accesses the internet with the AppNetworkAccessType parameter (via the Amazon SageMaker API) or by selecting your preference on the console when you create a Studio domain.
If you select Public internet only (PublicInternetOnly), all the ingress and egress internet traffic from Amazon SageMaker notebooks flows through an AWS managed internet gateway attached to a VPC in a SageMaker-managed account. The following diagram shows this network configuration.
Studio provides public internet egress through a platform-managed VPC for data scientists to download notebooks, packages, and datasets. Traffic to the attached Amazon Elastic File System (Amazon EFS) volume always goes through the customer VPC and never through the public internet egress.
To use your own control flow for the internet traffic, like a NAT or internet gateway, you must set the AppNetworkAccessType parameter to VpcOnly (or select VPC Only on the console). When you launch your app, this creates an elastic network interface in the specified subnets in your VPC. You can apply all available layers of security control (security groups, network ACLs, VPC endpoints, AWS PrivateLink, or Network Firewall endpoints) to the internal network and internet traffic to exercise fine-grained control of network access in Studio. The following diagram shows the VpcOnly network configuration.
In this mode, the direct internet access to or from notebooks is completely disabled, and all traffic is routed through an elastic network interface in your private VPC. This also includes traffic from Studio UI widgets and interfaces, such as Experiments, Autopilot, and Model Monitor, to their respective backend SageMaker APIs.
For more information about network access parameters when creating a domain, see CreateDomain.
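As an illustration, creating a domain in VpcOnly mode with the AWS CLI might look like the following sketch; the domain name, VPC, subnet, and role identifiers are placeholders, not values from this solution:

```shell
# Create a Studio domain whose app traffic stays inside your VPC.
# All IDs and the execution role ARN below are placeholders.
aws sagemaker create-domain \
  --domain-name my-studio-domain \
  --auth-mode IAM \
  --app-network-access-type VpcOnly \
  --vpc-id vpc-0123456789abcdef0 \
  --subnet-ids subnet-0123456789abcdef0 \
  --default-user-settings ExecutionRole=arn:aws:iam::111122223333:role/StudioExecutionRole
```

The same setting appears as the AppNetworkAccessType request parameter in the CreateDomain API.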
The solution in this post uses the VpcOnly option and deploys the Studio domain into a VPC with three subnets:
- SageMaker subnet – Hosts all Studio workloads. All ingress and egress network flow is controlled by a security group.
- NAT subnet – Contains a NAT gateway. We use the NAT gateway to access the internet without exposing any private IP addresses from our private network.
- Network Firewall subnet – Contains a Network Firewall endpoint. The route tables are configured so that all inbound and outbound external network traffic is routed via Network Firewall. You can configure stateful and stateless Network Firewall policies to inspect, monitor, and control the traffic.
The following diagram shows the overview of the solution architecture and the deployed components.
The solution deploys the following resources in your account:
- A VPC with a specified Classless Inter-Domain Routing (CIDR) block
- Three private subnets with specified CIDRs
- Internet gateway, NAT gateway, Network Firewall, and a Network Firewall endpoint in the Network Firewall subnet
- A Network Firewall policy and stateful domain list group with an allow domain list
- An Elastic IP address allocated to the NAT gateway
- Two security groups for SageMaker workloads and VPC endpoints, respectively
- Four route tables with configured routes
- An Amazon S3 VPC endpoint (type Gateway)
- AWS service access VPC endpoints (type Interface) for various AWS services that need to be accessed from Studio
Network routing for targets outside the VPC is configured in such a way that all ingress and egress internet traffic goes via the Network Firewall and NAT gateway. For details and reference network architectures with Network Firewall and NAT gateway, see Architecture with an internet gateway and a NAT gateway, Deployment models for AWS Network Firewall, and Enforce your AWS Network Firewall protections at scale with AWS Firewall Manager. The AWS re:Invent 2020 video Which inspection architecture is right for you? discusses which inspection architecture is right for your use case.
The solution creates a SageMaker domain and user profile.
The solution uses only one Availability Zone and is not highly available. A best practice is to use a Multi-AZ configuration for any production deployment. You can implement a highly available solution by duplicating the single-AZ setup (subnets, NAT gateway, and Network Firewall endpoints) to additional Availability Zones.
You use Network Firewall and its policies to control entry and exit of the internet traffic in your VPC. You create an allow domain list rule to allow internet access to the specified network domains only and block traffic to any domain not on the allow list.
AWS CloudFormation resources
Network Firewall is a Regional service; for more information on Region availability, see the AWS Region Table.
Your CloudFormation stack doesn’t have any required parameters. You may want to change the *CIDR parameters to avoid conflicts with your existing resources and VPC CIDR allocations. Otherwise, use the following default values:
- ProjectName –
- DomainName –
- UserProfileName –
- VPCCIDR – 10.2.0.0/16
- FirewallSubnetCIDR – 10.2.1.0/24
- NATGatewaySubnetCIDR – 10.2.2.0/24
- SageMakerStudioSubnetCIDR – 10.2.3.0/24
Deploy the CloudFormation template
To start experimenting with Network Firewall and stateful rules, you first need to deploy the provided CloudFormation template to your AWS account.
- Clone the GitHub repository:
- Create an S3 bucket in the Region where you deploy the solution:
You can skip this step if you already have an S3 bucket.
- Deploy the CloudFormation stack:
The deployment procedure packages the CloudFormation template and copies it to the S3 bucket you provided. Then the CloudFormation template is deployed from the S3 bucket to your AWS account.
The stack deploys all the needed resources like VPC, network devices, route tables, security groups, S3 buckets, IAM policies and roles, and VPC endpoints, and also creates a new Studio domain and user profile.
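The package-and-deploy steps above can be sketched with the AWS CLI as follows; the template file, bucket, and stack names are assumptions for illustration, not the repository’s actual names:

```shell
# Package the template: nested templates and artifacts are uploaded
# to the S3 bucket, and local references are rewritten to S3 URLs
aws cloudformation package \
  --template-file sagemaker-studio-firewall.yaml \
  --s3-bucket my-deployment-bucket \
  --output-template-file packaged.yaml

# Deploy the packaged template; IAM capabilities are required because
# the stack creates IAM roles and policies
aws cloudformation deploy \
  --template-file packaged.yaml \
  --stack-name sagemaker-studio-firewall \
  --capabilities CAPABILITY_NAMED_IAM
```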
When the deployment is complete, you can see the full list of stack output values by running the following command in terminal:
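A sketch of that command, assuming the stack name used at deployment (replace with your own):

```shell
# Print all stack output values as a table
aws cloudformation describe-stacks \
  --stack-name sagemaker-studio-firewall \
  --query "Stacks[0].Outputs" \
  --output table
```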
- Launch Studio via the SageMaker console.
Experiment with Network Firewall
Now you can learn how to control the internet inbound and outbound access with Network Firewall. In this section, we discuss the initial setup, accessing resources not on the allow list, adding domains to the allow list, configuring logging, and additional firewall rules.
The solution deploys a Network Firewall policy with a stateful rule group containing an allow domain list. This policy is attached to the Network Firewall. All inbound and outbound internet traffic is now blocked, except for the .kaggle.com domain, which is on the allow list.
Let’s try to access https://kaggle.com by opening a new notebook in Studio and attempting to download the site’s front page.
The following screenshot shows that the request succeeds because the domain is allowed by the firewall policy. Users can connect to this and only to this domain from any Studio notebook.
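The request can be reproduced from a Studio notebook, for example with a command like the following (run in a terminal, or prefix with `!` in a notebook cell; `curl` availability depends on the kernel image):

```shell
# Fetch only the response headers from the allowed domain
curl -sSI https://kaggle.com
```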
Access resources not on the allowed domain list
In the Studio notebook, try to clone any public GitHub repository, such as the following:
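For example (the repository below is just an illustration; any public GitHub repository behaves the same):

```shell
# Attempt to clone a public repository over HTTPS
git clone https://github.com/aws/amazon-sagemaker-examples.git
```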
This operation times out after 5 minutes because Network Firewall drops any internet traffic except traffic to and from the .kaggle.com domain.
Add a domain to the allowed domain list
To be able to run the git clone command, you must allow internet traffic to the .github.com domain.
- On the Amazon VPC console, choose Firewall policies.
- Choose the policy network-firewall-policy-<ProjectName>.
- In the Stateful rule groups section, select the group rule domain-allow-sagemaker-<ProjectName>.
You can see the domain .kaggle.com on the allow list.
- Choose Add domain and enter .github.com.
- Choose Save.
You now have two names on the allow domain list.
The firewall policy is propagated in real time to Network Firewall, and your changes take effect immediately. Any inbound or outbound traffic to or from these domains is now allowed by the firewall, and all other traffic is dropped.
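For reference, the same allow-list change can also be scripted. The following sketch assumes the stateful rule group name created by this deployment (replace the placeholder project suffix) and uses the update-token flow of the Network Firewall API:

```shell
# Fetch the current update token for the stateful rule group
TOKEN=$(aws network-firewall describe-rule-group \
  --rule-group-name domain-allow-sagemaker-myproject \
  --type STATEFUL \
  --query UpdateToken --output text)

# Replace the allow domain list with an updated set of targets
aws network-firewall update-rule-group \
  --rule-group-name domain-allow-sagemaker-myproject \
  --type STATEFUL \
  --update-token "$TOKEN" \
  --rule-group '{
    "RulesSource": {
      "RulesSourceList": {
        "Targets": [".kaggle.com", ".github.com"],
        "TargetTypes": ["HTTP_HOST", "TLS_SNI"],
        "GeneratedRulesType": "ALLOWLIST"
      }
    }
  }'
```

The update token guards against overwriting a concurrent change to the same rule group.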
To validate the new configuration, go to your Studio notebook and try to clone the same GitHub repository again:
The operation succeeds this time because Network Firewall allows access to the .github.com domain.
Network Firewall logging
In this section, you configure Network Firewall logging for your firewall’s stateful engine. Logging gives you detailed information about network traffic, including the time that the stateful engine received a packet, detailed information about the packet, and any stateful rule action taken against the packet. The logs are published to the log destination that you configured, where you can retrieve and view them.
- On the Amazon VPC console, choose Firewalls.
- Choose your firewall.
- Choose the Firewall details tab.
- In the Logging section, choose Edit.
- Configure your firewall logging by selecting what log types you want to capture and providing the log destination.
For this post, select Alert log type, set Log destination for alerts to CloudWatch Log group, and provide an existing or a new log group where the firewall logs are delivered.
- Choose Save.
To check your settings, go back to Studio and try to access pypi.org to install a Python package:
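For example (the package is arbitrary; any package hosted on the public index triggers the same traffic):

```shell
# pip resolves packages via pypi.org, which is not on the allow list
pip install pandas
```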
This command fails with a ReadTimeoutError because Network Firewall drops traffic to any domain not on the allow list, which contains only the .kaggle.com and .github.com domains.
On the Amazon CloudWatch console, navigate to the log group and browse through the recent log streams.
The log events for the pypi.org domain show the blocked action. Each log event also provides additional details, such as timestamps, protocol, port and IP details, event type, Availability Zone, and the firewall name.
You can continue experimenting with Network Firewall by adding the .pypi.org and .pythonhosted.org domains to the allowed domain list, and then validating your access to them via your Studio notebook.
Additional firewall rules
You can create any other stateless or stateful firewall rules and implement traffic filtering based on a standard stateful 5-tuple rule for network traffic inspection (protocol, source IP, source port, destination IP, destination port). Network Firewall also supports industry standard stateful Suricata compatible IPS rule groups. You can implement protocol-based rules to detect and block any non-standard or promiscuous usage or activity. For more information about creating and managing Network Firewall rule groups, see Rule groups in AWS Network Firewall.
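As an illustration of a 5-tuple stateful rule in Suricata-compatible syntax, the following hypothetical rule drops outbound SSH from the Studio subnet (the CIDR matches the default SageMakerStudioSubnetCIDR above; the sid is arbitrary):

```
drop tcp 10.2.3.0/24 any -> any 22 (msg:"Block outbound SSH from Studio subnet"; sid:1000001; rev:1;)
```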
Additional security controls with Network Firewall
In the previous section, we looked at one feature of Network Firewall: filtering network traffic based on the domain name. In addition to stateless or stateful firewall rules, Network Firewall provides several more tools and features for security controls and monitoring.
Build secure ML environments
A robust security design normally includes multi-layer security controls for the system. For SageMaker environments and workloads, you can use the following AWS security services and concepts to secure, control, and monitor your environment:
- VPC and private subnets to perform secure API calls to other AWS services and restrict internet access for downloading packages.
- S3 bucket policies that restrict access to specific VPC endpoints.
- Encryption of ML model artifacts and other system artifacts that are either in transit or at rest. Requests to the SageMaker API and console are made over a Secure Sockets Layer (SSL) connection.
- Restricted IAM roles and policies for SageMaker runs and notebook access based on resource tags and project ID.
- Access to public AWS services, such as Amazon Elastic Container Registry (Amazon ECR), restricted to VPC endpoints only.
For a reference deployment architecture and ready-to-use deployable constructs for your environment, see Amazon SageMaker with Guardrails on AWS.
In this post, we showed how you can secure, log, and monitor internet ingress and egress traffic in Studio notebooks for your sensitive ML workloads using managed Network Firewall. You can use the provided CloudFormation templates to automate SageMaker deployment as part of your Infrastructure as Code (IaC) strategy.
For more information about other possibilities to secure your SageMaker deployments and ML workloads, see Building secure machine learning environments with Amazon SageMaker.
About the Author
Yevgeniy Ilyin is a Solutions Architect at AWS. He has over 20 years of experience working at all levels of software development and solutions architecture and has used programming languages from COBOL and Assembler to .NET, Java, and Python. He develops and codes cloud native solutions with a focus on big data, analytics, and data engineering.
Falsified Satellite Images in Deepfake Geography Seen as Security Threat
By John P. Desmond, AI Trends Editor
Deepfake is a portmanteau of “deep learning” and “fake,” and refers to synthetic media in which a person in an existing image or video is typically replaced with someone else’s likeness. Deepfakes use techniques from machine learning and AI to manipulate visual and audio content with a high potential to deceive.
Deepfakes applied to geography have the potential to falsify satellite image data, which could pose a national security threat. Scientists at the University of Washington (UW) are studying this, in the hopes of finding ways to detect fake satellite images and warn of their dangers.
“This isn’t just Photoshopping things. It’s making data look uncannily realistic,” stated Bo Zhao, assistant professor of geography at the UW and lead author of the study, in a news release from the University of Washington. The study was published on April 21 in the journal Cartography and Geographic Information Science. “The techniques are already there. We’re just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it,” Zhao stated.
Fake locations and other inaccuracies have been part of mapmaking since ancient times, due to the nature of translating real-life locations to map form. But some inaccuracies in maps are created by the mapmakers to prevent copyright infringement.
National Geospatial Intelligence Agency Director Sounds Alarm
Now with the prevalence of geographic information systems, Google Earth and other satellite imaging systems, the spoofing involves great sophistication and carries more risks. The director of the federal agency in charge of geospatial intelligence, the National Geospatial Intelligence Agency (NGA), sounded the alarm at an industry conference in 2019.
“We’re currently faced with a security environment that is more complex, interconnected, and volatile than we’ve experienced in recent memory—one which will require us to do things differently if we’re to navigate ourselves through it successfully,” stated NGA Director Vice Adm. Robert Sharp, according to an account from SpaceNews.
To study how satellite images can be faked, Zhao and his team at UW used an AI framework that has been used to manipulate other types of digital files. When applied to the field of mapping, the algorithm essentially learns the characteristics of satellite images from an urban area, then generates a deepfake image by feeding those learned characteristics onto a different base map. The researchers employed a generative adversarial network machine learning framework to achieve this.
The researchers combined maps and satellite images from three cities—Tacoma, Seattle and Beijing—to compare features and create new images of one city, drawn from the characteristics of the other two. The untrained eye may have difficulty detecting the differences between real and fake, the researchers noted. The researchers studied color histograms and frequency, texture, contrast, and spatial domains, to try to identify the fakes.
Simulated satellite imagery can serve a legitimate purpose, for example when used to represent how an area is affected by climate change over time. If there are no images for a certain period, filling in the gaps can provide perspective. The simulations do, however, need to be labeled as such.
The researchers hope to learn how to detect fake images, to help geographers develop data literacy tools, similar to fact-checking services. As technology continues to evolve, this study aims to encourage more holistic understanding of geographic data and information, so that we can demystify the question of absolute reliability of satellite images or other geospatial data, Zhao stated. “We also want to develop more future-oriented thinking in order to take countermeasures such as fact-checking when necessary,” he said.
In an interview with The Verge, Zhao stated the aim of his study “is to demystify the function of absolute reliability of satellite images and to raise public awareness of the potential influence of deep fake geography.” He stated that although deepfakes are widely discussed in other fields, his paper is likely the first to touch upon the topic in geography.
“While many GIS [geographic information system] practitioners have been celebrating the technical merits of deep learning and other types of AI for geographical problem-solving, few have publicly recognized or criticized the potential threats of deep fake to the field of geography or beyond,” stated the authors.
US Army Researchers Also Working on Deepfake Detection
US Army researchers are also working on a deepfake detection method. Researchers at the US Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory, in collaboration with Professor C.C. Jay Kuo’s research group at the University of Southern California, are examining the threat that deepfakes pose to our society and national security, according to a release from the US Army Research Laboratory (ARL).
Their work is featured in the paper titled “DefakeHop: A light-weight high-performance deepfake detector,” which will be presented at the IEEE International Conference on Multimedia and Expo 2021 in July.
ARL researchers Dr. Suya You and Dr. Shuowen (Sean) Hu noted that most state-of-the-art deepfake video detection and media forensics methods are based upon deep learning, which has inherent weaknesses in robustness, scalability, and portability.
“Due to the progression of generative neural networks, AI-driven deepfakes have advanced so rapidly that there is a scarcity of reliable techniques to detect and defend against them,” You stated. “We have an urgent need for an alternative paradigm that can understand the mechanism behind the startling performance of deepfakes, and to develop effective defense solutions with solid theoretical support.”
Relying on their experience with machine learning, signal analysis, and computer vision, the researchers developed a new theory and mathematical framework they call the Successive Subspace Learning, or SSL, as an innovative neural network architecture. SSL is the key innovation of DefakeHop, the researchers stated.
“SSL is an entirely new mathematical framework for neural network architecture developed from signal transform theory,” Kuo stated. “It is radically different from the traditional approach. It is very suitable for high-dimensional data that have short-, mid- and long-range covariance structures. SSL is a complete data-driven unsupervised framework, offering a brand-new tool for image processing and understanding tasks such as face biometrics.”
Read the source articles and information in a news release from the University of Washington, in the journal Cartography and Geographic Information Science, an account from SpaceNews, a release from the US Army Research Laboratory, and in the paper titled “DefakeHop: A light-weight high-performance deepfake detector.”
Data Science is Where to Find the Most AI Jobs and Highest Salaries
By John P. Desmond, AI Trends Editor
Jobs in data science grew nearly 46% in 2020, with salaries in the range of $100,000 to $130,000 annually, according to a recent account in TechRepublic based on information from LinkedIn and LHH, formerly Lee Hecht Harrison, a global provider of talent and leadership development.
Related job titles include data science specialist and data management analyst. Companies hiring were called out in the TechRepublic account, including:
Novacoast, which helps organizations build a cybersecurity posture through engineering, development, and managed services. Founded in 1996 in Santa Barbara, the company has many remote employees and a presence in the UK, Canada, Mexico, and Guatemala.
The company offers a security operations center (SOC) cloud offering called novaSOC, which analyzes emerging challenges. “We work to have an answer ready before we’ve been asked,” stated CEO Paul Anderson in a press release issued on the company’s inclusion on a list of the top 250 Managed Service Providers from MSSP Alert. novaSOC automatically collects endpoint data and correlates it with threat intelligence sources, adding in analysis and reporting to make a responsive security monitoring service. Novacoast is planning to hire 60 employees to open a new SOC in Wichita, Kansas.
Pendo is an information technology services company that provides step-by-step guides to help workers master new software packages. The software aims to boost employee proficiency through personalized training and automated support. Founded in 2013 in Raleigh, N.C., the company has raised $209.5 million to date, according to Crunchbase. Demand for the company’s services soared in 2020 as schools shifted to online teaching and many companies permitted employees to work from home.
“More people are using digital products. Many had planned to go digital but they could not afford to wait. That created opportunities for us,” stated Todd Olson, cofounder and CEO, in an account in Newsweek. The company now has about 2,000 customers, including Verizon, RE/MAX, Health AB, John Wiley & Sons, LabCorp, Mercury Insurance, OpenTable, Okta, Salesforce and Zendesk. The company plans to hire 400 more employees this year to fuel its growth as it invests in its presence overseas in an effort to win more large customers. The company recently had 169 open positions.
Infosys is a multinational IT services company headquartered in India that is expanding its workforce in North America. The company recently announced it would be hiring 500 people in Calgary, Alberta, Canada over the next three years, which would double its Canadian workforce to 4,000 employees. “Calgary is a natural next step of our Canadian expansion. The city is home to a thriving talent pool. We will tap into this talent and offer skills and opportunities that will build on the city’s economic strengths,” stated Ravi Kumar, President of Infosys, in a press release.
Over the last two years, Infosys has created 2,000 jobs across Toronto, Vancouver, Ottawa, and Montreal. The Calgary expansion will enable Infosys to scale work with clients in Western Canada, Pacific Northwest, and the Central United States across various industries, including natural resources, energy, media, retail, and communications. The company will hire tech talent from fourteen educational institutions across the country, including the University of Calgary, University of Alberta, Southern Alberta Institute of Technology, University of British Columbia, University of Toronto, and Waterloo. Infosys also plans to hire 300 workers in Pennsylvania as part of its US hiring strategy, recruiting for a range of opportunities across technology and digital services, administration and operations.
AI is Where the Money Is
In an analysis of millions of job postings across the US, the labor market information provider Burning Glass wanted to see which professions had the highest percentage of job postings requesting AI skills, according to an account from Dice. Data science was requested by 22.4% of the postings, by far the highest. Next was data engineer at 5.5%, database architect at 4.6% and network engineer/architect at 3.1%.
Burning Glass sees machine learning as a “defining skill” among data scientists, needed for day-to-day work. Overall, jobs requiring AI skills are expected to grow 43.4% over the next decade. The current median salary for jobs heavily using AI skills is $105,000, good compared to many other professions.
Hiring managers will test for knowledge of fundamental concepts and ability to execute. A portfolio of AI-related projects can help a candidate’s prospects.
Burning Glass recently announced an expansion and update of its CyberSeek source of information on America’s cybersecurity workforce. “These updates are timely as the National Initiative for Cybersecurity Education (NICE) Strategic Plan aims to promote the discovery of cybersecurity careers and multiple pathways to build and sustain a diverse and skilled workforce,” stated Rodney Petersen, Director of NICE, in a Burning Glass press release.
NICE is a partnership between government, academia, and the private sector focused on supporting the country’s ability to address current and future cybersecurity education and workforce challenges.
Trends for AI in 2021 in the beginning of the latter stages of the global pandemic were highlighted in a recent account in VentureBeat as:
- Hyperautomation, the application of AI and machine learning to augment workers and automate processes to a higher degree;
- Ethical AI, because consumers and employees expect companies to adopt AI in a responsible manner; companies will choose to do business with partners that commit to data ethics and data handling practices that reflect appropriate values;
- And Workplace AI, to help with transitions to new models of work, especially with knowledge workers at home; AI will be used to augment customer services agents, to track employee health and for intelligent document extraction.
Read the source articles and information in TechRepublic, in a press release from Novacoast, in Newsweek, in a press release from Infosys, in an account from Dice, in a Burning Glass press release and in an account in VentureBeat.
Benefits of Using AI for Facebook Retargeting In 2021
Artificial intelligence has really transformed the state of digital marketing. A growing number of marketers are using AI to connect with customers across various platforms. This includes Facebook.
There are a lot of great reasons to integrate AI technology into your Facebook marketing campaigns. One of the benefits is that you can use retargeting. AI algorithms have made it easier to reach customers that have already engaged with your website. These users might be a lot more likely to convert, which will help you grow your sales and improve your brand image.
AI Can Help Improve Your Facebook Marketing Dramatically with Retargeting
Untapped resources that come from engagement can lead to a better understanding of Facebook retargeting. A SaaS SEO agency shared some solid recommendations that are a great starting point for any business. To understand Facebook retargeting with AI technology in depth, take these tips to heart when organizing your resources.
What is Facebook Retargeting?
On average, each Facebook user clicks on ads at least eight times per month. These are considered to be high intent clicks from the biggest advertising platform in the world. Even the most successful marketing and advertising campaigns miss consumers on their first run.
Retargeting uses a Facebook campaign’s most essential tools to target specific people based on their most relevant data. This is one of the best ways to use data to improve your social media marketing strategies.
The data used is recycled from previous information attached to your old advertising. This includes information from apps, customer files, engagement and offline activity. Anything that has a metric attached to an individual can be used with Facebook retargeting.
How Can It Help Your Business?
There will always be missed opportunities before, during and after a marketing campaign. Reinvesting the data gained from the previous campaign prevents you from starting completely over. Instead of starting from scratch, you’ll gain a clear insight into what gets consumers to cross the finish line at checkout. Retargeting is meant to be a powerful tool that thrives on previous data that would otherwise go unnoticed.
Retention comes into play, but doesn’t make up the entire story of retargeting on Facebook. You can run a retargeting campaign and only look into new consumers. It’s flexible, and meant to enhance your business based on your specific needs.
The Different Types of Retargeting
The two main types of Facebook retargeting are list-based and pixel-based. Each serves a purpose, with their own specific pros and cons.
List-based retargeting is a limited but fascinating concept. It uses the data you already have on hand to create a specialized list that Facebook uses to show ads. This method works on many of the major social media platforms, but has shown significant advantages on Facebook. Since list-based targeting uses email lists as its base, companies are at the mercy of that particular resource. An outdated or inaccurate email list will lead to low quality retargeting efforts.
When relying on list-based retargeting, a larger email list is not always a guaranteed win for a company.
Upselling and Cross Selling
Even when the customer is happy, proving the value of an upsell is an ongoing process. This led to a rise in cross selling, but was only beneficial to companies that had the resources. As you reconnect with old and new customers, upselling or cross selling becomes part of the closing process.
Both methods are difficult, but become trivial once you have the data to back up your new campaign. Most companies see an increase in profits in a short amount of time. This makes Facebook retargeting a valuable way to test drive upselling and cross selling methods.
Brand awareness is the golden goose that all businesses constantly chase. Once you have a notable brand, it becomes the identity of your entire company. Protecting the brand is important, and sometimes entire marketing campaigns are launched to reinvigorate the company image. So, how does Facebook retargeting work its way into this?
Facebook lookalike audiences became a thing when companies wanted to reach new customers with similar interests and habits as their current best customers. Creating a lead that finds this new audience is possible when brand awareness reaches its peak. If you want to keep brand awareness high, then Facebook retargeting does all of the tough work while increasing your reach to new consumers.
Conversions are hard to pull off without a specific time investment. All of that goes to waste if you’re not positioning yourself to use previous data to convert customers. No matter how visitors arrive at your website, their presence is proof that there is interest in purchasing a product or service.
If they leave without making a purchase, it’s up to you to figure out why. A lost sale is not the same as losing a customer. Being able to convert that customer into an actual sale is a major strength of retargeting. And even if it’s unsuccessful, you’ll be able to use the additional data to convert another customer.
Influence Buying Decisions
When a consumer becomes firm in their buying decision, your influence gains a significant bump. At this point your retargeting is directly influencing the buying decisions of individuals or groups. You’ll see a visual representation of this in online feedback and testimonials. All of the positive information provided comes from consumers who are satisfied with the entire sales experience.
Even the negative feedback plays a role, and can serve as the proof you need to reuse data to improve a weak point in your marketing. When a company puts effort into retargeting their ads, they gain monumental increases in customer conversions, ad recognition, clicks, sales and branded searches.
Remarketing Vs. Retargeting
Learn the difference between retargeting and remarketing. Retargeting gains the attention of interested customers that never purchased your products or services. Remarketing leans more towards gaining the attention of inactive or lost customers. Don’t make the mistake of running a retargeting campaign when remarketing would work better. The good news is that the data used from one is still essential for the other. An email list with decent accuracy can be a valuable asset for remarketing or retargeting.
Making the Right Choice
Facebook retargeting should be a priority with how you manage your data collection. Embracing its use will optimize the most important part of your business. Once you get the hang of things, your ROI will reach a whole new level.