Security perimeters in the cloud aren’t dead—They’re ephemeral

Companies migrating IT systems and operations to the cloud face a growing number of security challenges. From securing cloud-native applications in a continuous release environment to managing digital identity and remote access to sensitive data, virtually every aspect of day-to-day IT and cybersecurity operations is changing rapidly.

With Gartner forecasting $124 billion in worldwide cybersecurity spending this year, it's clear that organizations are racing to find a model that works. There's just one problem: far too many are trying to shoehorn legacy security models into a cloud-native world, and it's not working. In fact, even some of the best security teams in the world, including those at Facebook, Google, Microsoft and a growing list of others, struggle to keep their systems secure.

What's going on here? It's quite simple, actually: just as the move to the cloud forces organizations to undergo fundamental shifts in their business and IT operations, it also demands equally substantial changes to their core assumptions about cybersecurity.

In response, many IT and security experts are quick to proclaim that "the perimeter is dead." While it's certainly true that the old dichotomies of inside/outside or attackers/trusted parties have become less clear, the notion that perimeters are altogether gone is a dangerous misconception. In today's cloud-first world, the perimeter isn't gone; it's ephemeral. Instead of being fixed and centrally organized, perimeters are now just like every other aspect of the cloud: distributed and ever-shifting as conditions change.

What's needed is a cloud-native security model. The good news is that it already exists, and it's been around for nearly a decade: zero trust.

Zero trust is a radical change from pre-cloud security models. Instead of assuming it's possible to create a central perimeter that protects everything inside (the "hard shell, soft center" approach), zero trust focuses on two things: (1) reducing an organization's security risk by minimizing the probability of an attack via proper identification and authentication, and (2) minimizing the impact of successful attacks by microsegmenting authorization. That is, if someone steals a user's credentials, they can no longer slip through the organization's defenses undetected. As the user moves through a system, they must continuously verify that they are who they say they are, with the level of verification required scaling alongside the sensitivity of the access and the task they are requesting or performing.
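
To make the shape of that decision concrete, here is a minimal Python sketch, with hypothetical action names and assurance levels of my own choosing, of a per-request check in which the strength of identity proof required scales with the sensitivity of the action rather than being granted once at a perimeter.

```python
from enum import IntEnum


class Assurance(IntEnum):
    """Confidence in the caller's identity, from weakest to strongest proof."""
    NONE = 0
    PASSWORD = 1
    MFA = 2
    MFA_ON_MANAGED_DEVICE = 3


# Hypothetical mapping: the more sensitive the action, the stronger the proof it demands.
REQUIRED_ASSURANCE = {
    "read:public-docs": Assurance.PASSWORD,
    "read:customer-pii": Assurance.MFA,
    "update:payroll": Assurance.MFA_ON_MANAGED_DEVICE,
}


def authorize(action: str, current_assurance: Assurance) -> str:
    """Decide per request: allow, ask for stronger proof, or deny outright."""
    required = REQUIRED_ASSURANCE.get(action)
    if required is None:
        return "deny"      # unknown action: fail closed, never open
    if current_assurance >= required:
        return "allow"     # the proof presented covers this one action
    return "step-up"       # re-verify before this specific, more sensitive task


# A stolen password alone cannot reach payroll; the request triggers a step-up.
print(authorize("update:payroll", Assurance.PASSWORD))    # -> step-up
print(authorize("read:public-docs", Assurance.PASSWORD))  # -> allow
```

A stolen credential still gets an attacker something, but only what the weakest tier allows; anything sensitive forces fresh, stronger verification instead of riding on a one-time login.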

Where do you start? While every organization is different, there are three core principles that guide all zero trust implementations:

There Is No Distinction Between "Insiders" and "Outsiders"

While most organizations today recognize that the line between "insiders" and "outsiders" is blurring, they still follow the same model and view an employee on an "authenticated" device as trusted. But BYOD, phishing and rogue insiders demolish that distinction. Everyone should be considered hostile: no implicit trust should be attributed to the location of the user, the network or the device they are using.

For example, Mobile Device Management (MDM) tools give the IT team control over the use of a personal device on the network, such as a tablet, but they don't provide true security, only the illusion of it. If the user clicks on something that the MDM agent on the device can't see, the network can be easily compromised. For instance, employees in the HR department receive and open emails with attachments containing resumes all the time, yet MDM doesn't protect against attachment-borne malware that installs a trojan agent on the employee's device. If the security control is to treat every device under management as trusted, then this trojan agent can simply hide in plain sight.
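
That contrast can be sketched in a few lines of Python; the request fields and corporate address prefix below are invented for illustration, and the two functions simply stand in for a perimeter-style check versus a zero trust check.

```python
from dataclasses import dataclass


@dataclass
class Request:
    user: str
    source_ip: str
    device_managed: bool  # reported by an MDM agent, which a trojan can sit behind


CORPORATE_PREFIX = "10.0."  # hypothetical internal address range


def legacy_is_trusted(req: Request) -> bool:
    # Perimeter model: "inside" traffic from a managed device is implicitly trusted.
    return req.source_ip.startswith(CORPORATE_PREFIX) and req.device_managed


def zero_trust_is_trusted(req: Request) -> bool:
    # Zero trust: location and MDM status never grant implicit trust; every
    # request still has to pass explicit identification and authentication.
    return False


hr_laptop = Request("hr-employee", "10.0.4.17", device_managed=True)
print(legacy_is_trusted(hr_laptop))      # True: the trojan hides in plain sight
print(zero_trust_is_trusted(hr_laptop))  # False: it still has to prove who it is
```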

Discrete One-time Super Authentication Must End

Most organizations have implemented 2FA, which is good, but the problem is that its use is too far-reaching. Once a user is verified with 2FA, they can access all sorts of systems because the model considers their identity confirmed going forward. However, identification and authentication are contextual, and the results should be valid only for the duration of the specific transaction for which they are requested. One successful spear phishing campaign can turn a user's device from safe to risky if malware is downloaded onto it. Very few organizations do transaction-level verification. Banks do it for certain types of high-level transactions, but outside of that, it's rare. Two-factor authentication is a discrete event; it shouldn't grant you elevated privileges indefinitely.
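
As a rough illustration of transaction-level verification, the Python sketch below scopes a 2FA result to a single transaction and a short freshness window; the StepUpGrant type, the 120-second TTL and the transfer_funds function are assumptions made for this example, not any particular product's API.

```python
import time
import uuid
from dataclasses import dataclass, field


@dataclass
class StepUpGrant:
    """A 2FA result scoped to one transaction, not to the whole session."""
    transaction_id: str
    issued_at: float = field(default_factory=time.monotonic)
    ttl_seconds: float = 120.0  # hypothetical freshness window

    def covers(self, transaction_id: str) -> bool:
        fresh = (time.monotonic() - self.issued_at) <= self.ttl_seconds
        return fresh and transaction_id == self.transaction_id


def transfer_funds(amount: float, grant: StepUpGrant, transaction_id: str) -> str:
    # The grant must name this exact transaction and still be fresh; a 2FA
    # prompt passed an hour ago for something else does not count here.
    if not grant.covers(transaction_id):
        return "re-authentication required"
    return f"transferred {amount:.2f}"


txn = str(uuid.uuid4())
grant = StepUpGrant(transaction_id=txn)                 # issued after a fresh 2FA prompt
print(transfer_funds(500.0, grant, txn))                # covered: this exact transaction
print(transfer_funds(500.0, grant, str(uuid.uuid4())))  # different transaction: re-auth
```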

We understand this in other contexts, such as flying. You don't just buy a ticket and then walk right onto the plane. You have to check in at the airline desk, go through security with your boarding pass and ID, and then show your boarding pass again at the gate. Each new level of verification allows deeper access, but not carte blanche access. A boarding pass in conjunction with a valid ID gets you past security, but not into sensitive areas of the airport. Similarly, a boarding pass gets you onto a plane, but does not allow you to access the flight deck. Another example is the medical field: certain prescriptions, like narcotics, require patients to provide ID and take extra steps to guard against fraud.

Use Dynamic Policies With Broad Inputs

Too often, organizations treat policies as static and rely on too few inputs, partially out of habit and partially due to historical limits on compute power and the inability to ingest and process a multitude of signals. But to be effective, policies need to be dynamic and evaluated against the largest possible set of inputs. This means that trust in the identification and authenticity of the user, and the risk associated with the task they are attempting to perform, are constantly evaluated.

For example, you can have static policies that say: if a user is authenticated, let them in; if a user is authenticated and has completed biometric verification, let them in; if the user is authenticated but it's after 5 p.m., ask for biometric verification and then let them in. Requests from managed devices provide more inputs for establishing the level of trust. But static policies break because they cannot process new information or interpret new context, so they fall back to a default rule, which may or may not be the right approach. Dynamic policies allow systems to act on incomplete information and provide an answer that is a trust level on a continuum rather than a binary yes/no.
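
One way to picture that is a small Python sketch that folds whichever signals happen to be available into a score on a continuum and maps the score to allow, step-up or deny; the signal names, weights and thresholds below are invented for illustration.

```python
from typing import Optional

# Hypothetical weights for context signals; the names and values are illustrative.
SIGNAL_WEIGHTS = {"device_managed": 0.4, "known_location": 0.3, "business_hours": 0.3}


def trust_score(authenticated: bool,
                device_managed: Optional[bool] = None,   # None = signal unavailable
                known_location: Optional[bool] = None,
                business_hours: Optional[bool] = None) -> float:
    """Fold the available signals into a trust level between 0.0 and 1.0."""
    if not authenticated:
        return 0.0
    observed = {"device_managed": device_managed,
                "known_location": known_location,
                "business_hours": business_hours}
    # Each positive signal adds its weight; an unavailable signal simply adds
    # nothing, so the policy can still act on incomplete information.
    bonus = sum(w for name, w in SIGNAL_WEIGHTS.items() if observed[name] is True)
    return 0.3 + 0.7 * bonus


def decide(score: float) -> str:
    if score >= 0.75:
        return "allow"
    if score >= 0.3:
        return "step-up"  # e.g. prompt for biometric verification before proceeding
    return "deny"


# Managed device at a known location: enough corroboration to allow.
print(decide(trust_score(True, device_managed=True, known_location=True)))  # allow
# Password alone with no corroborating context: ask for more proof rather than deny.
print(decide(trust_score(True)))                                            # step-up
```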

Such static policies aren't reliable; their lack of sophistication creates a false sense of security and/or needlessly interferes with productivity. As humans, we make dynamic decisions and judgment calls all the time: assessing the riskiness of a neighborhood while walking down the street at certain times of day, or deciding what to wear by checking the weather forecast and looking at the sky. Security policies need to allow for similar degrees of input and to reflect the reality of changes in users, devices and contexts.

In a hyperconnected world, there is no such thing as absolute security or zero risk. Now more than ever, security is about continuously evaluating trust and making decisions to minimize risk. Zero trust principles offer clear and actionable guidance: by operating under the assumption that all devices and systems might be compromised, they ensure data is protected no matter which users or devices attempt to access it.

Baber Amin, CTO West, Ping Identity

Source: https://www.scmagazine.com/home/opinion/executive-insight/security-perimeters-in-the-cloud-arent-dead-theyre-ephemeral/
