How to create a zero trust environment in financial services (Boris Bialek)

It wasn’t that long ago that security professionals protected their IT in much the same way that mediaeval guards protected a walled city – they made it as difficult as possible to get inside. But once someone was past the perimeter, they had generous access to the riches contained within. In the financial sector, that means access to personally identifiable information (PII), a highly “marketable data set” of credit card numbers, names, social security numbers and more. Sadly, there have been many cases where the castle was stormed and customers were left on the back foot. The most famous is still the Equifax incident, where a single breach led to years of unhappy customers.

Since then the mindset has changed. Users increasingly access networks and applications from any geography, on any device, on platforms hosted in the cloud – classic point-to-point security is obsolete. The perimeter has dissolved, and so has the assumption that it alone can protect everything behind it.

Zero trust presents a new paradigm for cybersecurity. In a zero trust environment, the perimeter is assumed to have been breached, there are no trusted users and no user or device gains trust simply because of its physical or network location. Every user,
device and connection must be continually verified and audited. 

Given the huge amount of confidential client and customer data that the financial services industry handles daily – and the strict regulations that govern it – zero trust needs to be an even bigger priority here than elsewhere. The perceived value of this data also makes financial services organisations a primary target for data breaches.

Here is what you need to think about to create a zero trust environment. 

Securing the data 

While securing access to banking apps and online services is vital, the database that sits behind these applications is just as important a part of creating a zero trust environment. The database holds much of an organisation’s sensitive, and regulated, information, as well as data that may not be sensitive but is critical to keeping the organisation running. This is why it is imperative that a database is ready and able to work in a zero trust environment.

As more databases become cloud-based services, a big part of this is ensuring that the database is secure by default – in other words, secure out of the box. This takes some of the responsibility for security out of administrators’ hands, because the highest levels of security are in place from the start without requiring attention from users or administrators. To allow access, users and administrators must proactively make changes – nothing is granted automatically.
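
As a minimal sketch of what “nothing is granted automatically” looks like in practice, the snippet below creates an application user that holds only the single role it needs on a single database. It assumes MongoDB and the PyMongo driver purely for illustration – the article does not prescribe a particular database – and the host, user names, database name and credentials are hypothetical.

```python
from pymongo import MongoClient

# Connect as an administrator over an authenticated, encrypted channel
# (connection string is illustrative only).
admin_client = MongoClient(
    "mongodb://dbAdmin:admin-password@db.example.internal:27017/"
    "?authSource=admin&tls=true"
)

# Explicitly grant the application user the one role it needs on the one
# database it needs - no broader access is ever implied.
admin_client["payments"].command(
    "createUser",
    "paymentsApp",
    pwd="a-strong-generated-secret",
    roles=[{"role": "readWrite", "db": "payments"}],
)
```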

As more financial institutions embrace the cloud, this can get more complicated. Security responsibilities are divided between the client’s own organisation, the cloud providers and the vendors of the cloud services being used. This is known as the shared responsibility model. It moves away from the classic model in which IT hardens the servers, then the software on top – say, the version of the database software – and then the actual application code. Under shared responsibility, the hardware (CPU, network, storage) is solely in the realm of the cloud provider that provisions these systems. The service provider for a Data-as-a-Service model then delivers the database to the client already hardened, with a designated endpoint. Only then do the client’s own team, their application developers and their DevOps team come into play for the actual “solution”.

Security and resilience in the cloud are only possible when everyone is clear on their roles and responsibilities. Shared responsibility recognises that cloud vendors must ensure their products are secure by default while remaining available, but also that organisations must take appropriate steps to protect the data they keep in the cloud.

Authentication for customers and users 

In banks and finance organisations, there is always a lot of focus on customer authentication – making sure that accessing funds is as secure as possible. But it is also important to make sure that access to the database at the other end is secure. An IT organisation can use any number of methods to allow users to authenticate themselves to a database. Most often that includes a username and password, but given financial services organisations’ heightened need to protect the privacy of confidential customer information, this should only be viewed as a base layer.

At the database layer, it is important to have transport layer security (TLS) and SCRAM authentication, which together ensure that traffic from clients to the database is authenticated and encrypted in transit.
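
As an illustrative sketch – again assuming MongoDB and the PyMongo driver, since the article does not name a specific database – a client connection that enforces TLS and SCRAM-SHA-256 authentication might look like the following; the host name, credentials and CA file path are hypothetical.

```python
from pymongo import MongoClient

# Enforce TLS for encryption in transit and SCRAM-SHA-256 for
# challenge-response authentication of the client to the database.
client = MongoClient(
    host="db.example.internal",
    port=27017,
    username="paymentsApp",
    password="a-strong-generated-secret",
    authSource="payments",
    authMechanism="SCRAM-SHA-256",
    tls=True,
    tlsCAFile="/etc/ssl/certs/internal-ca.pem",  # CA that signed the server cert
)

# Any operation now travels over an encrypted, authenticated channel.
print(client["payments"]["accounts"].estimated_document_count())
```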

Passwordless authentication should also be considered – not just for customers, but for internal teams as well. With the database, this can be done in multiple ways: with auto-generated certificates that are required to access the database, or with advanced options for organisations that already use X.509 certificates and have a certificate management infrastructure in place.
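
Continuing the same illustrative assumption (MongoDB with PyMongo), passwordless, certificate-based authentication replaces the username and password with a client X.509 certificate; the file paths below are hypothetical.

```python
from pymongo import MongoClient

# No password is sent; the client proves its identity with an X.509
# certificate issued by the organisation's certificate authority.
client = MongoClient(
    host="db.example.internal",
    port=27017,
    tls=True,
    tlsCertificateKeyFile="/etc/ssl/private/paymentsApp.pem",  # client cert + key
    tlsCAFile="/etc/ssl/certs/internal-ca.pem",
    authMechanism="MONGODB-X509",
)
```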

Logging and auditing 

In such a highly regulated industry, it is also important to monitor your zero trust environment to ensure that it remains in force and encompasses your database. The database should be able to log all actions, or have functionality to apply filters that capture only specific events, users or roles.

Role-based auditing lets you log and report activities by specific roles, such as userAdmin or dbAdmin, coupled with any roles inherited by each user, rather than having to extract activity for each individual administrator. This approach makes it easier
for organisations to enforce end-to-end operational control and maintain the insight necessary for compliance and reporting. 
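
As a rough sketch of how such role-based reporting might be consumed downstream, the snippet below filters a JSON-lines audit log for events performed by users holding administrative roles. The log path and field names (ts, atype, users, roles) are assumptions about the audit format, not a prescribed schema.

```python
import json

# Hypothetical path and field names; the real location and schema depend on
# how auditing is configured for your database.
AUDIT_LOG = "/var/log/db/audit.json"
ADMIN_ROLES = {"userAdmin", "dbAdmin"}


def events_for_roles(path, roles):
    """Yield audit events performed by users holding any of the given roles."""
    with open(path) as log:
        for line in log:
            event = json.loads(line)
            granted = {entry.get("role") for entry in event.get("roles", [])}
            if granted & roles:
                yield event


if __name__ == "__main__":
    for event in events_for_roles(AUDIT_LOG, ADMIN_ROLES):
        print(event.get("ts"), event.get("atype"), event.get("users"))
```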

Encryption 

With large amounts of valuable data, financial institutions also need to make sure that they are embracing encryption – in flight, at rest and even in use. Securing data with client-side field-level encryption allows you to move to managed services in the cloud with greater confidence. The database only ever works with encrypted fields, and organisations control their own encryption keys rather than having the database provider manage them. This additional layer of security enforces an even more fine-grained separation of duties between those who use the database and those who administer and manage it.
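
To make that separation of duties concrete, here is a minimal sketch of explicit client-side field-level encryption, again assuming MongoDB with PyMongo (plus the `pymongo[encryption]` extra) purely as an example. The namespace, field and key handling are illustrative; a real deployment would use a managed KMS rather than a locally generated master key.

```python
import os

from bson.codec_options import CodecOptions
from pymongo import MongoClient
from pymongo.encryption import Algorithm, ClientEncryption

# A locally held 96-byte master key stands in for a real KMS (AWS KMS,
# Azure Key Vault, etc.); in production the key would never live in code.
kms_providers = {"local": {"key": os.urandom(96)}}

client = MongoClient("mongodb://db.example.internal:27017")

# The key vault collection stores data-encryption keys, themselves encrypted
# with the master key, so the database provider never sees usable key material.
client_encryption = ClientEncryption(
    kms_providers,
    "encryption.__keyVault",  # key vault namespace (hypothetical)
    client,
    CodecOptions(),
)

# Create a data-encryption key, then encrypt a sensitive field on the client
# before it is ever sent to the database.
data_key_id = client_encryption.create_data_key("local")
encrypted_ssn = client_encryption.encrypt(
    "123-45-6789",
    Algorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Deterministic,
    key_id=data_key_id,
)

# The server stores only ciphertext for this field.
client["payments"]["customers"].insert_one({"name": "Ada", "ssn": encrypted_ssn})
```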

Also, as more data is transmitted and stored in the cloud – some of it belonging to highly sensitive workloads – additional technical options to control and limit access to confidential and regulated data are needed. That data still needs to be usable, however, so ensuring that in-use data encryption is part of your zero trust solution is vital. This enables organisations to store sensitive data with confidence, meeting compliance requirements while still allowing different parts of the business to access it and draw insights from it.

In a world where the security of data is only becoming more important, financial services organisations sit among those with the most to lose from data getting into the wrong hands. Ditching the perimeter mentality and moving towards zero trust – especially as more cloud and as-a-service offerings are embedded in infrastructure – is the only way to truly protect such a valuable asset.
