
Banking on mainframe-led digital transformation for financial services – IBM Blog





We already covered how mainframe modernization isn’t just for the financial industry, so why not address the elephant in the room? The world’s biggest modernization challenges are concentrated in the banking industry.          

Before the internet and cloud computing, and before smartphones and mobile apps, banks were shuttling payments through massive electronic settlement gateways and operating mainframes as systems of record.

Financial services companies are considered institutions because they manage and move the core aspects of our global economic system. And the beating heart of financial institutions is the IBM mainframe.

Banks have the most to gain if they succeed (and the most to lose if they fail) at bringing their mainframe application and data estates up to modern standards of cloud-like flexibility, agility and innovation to meet customer demand.

Why mainframe application modernization stalls

Recent memory offers no shortage of global economic uncertainty, from the 2008 “too big to fail” crisis to today’s post-pandemic high interest rates, which have left certain large depositor banks overexposed and insolvent.

While bank failures are often the result of bad management decisions and policies, there’s good reason to attribute some blame to delayed modernization initiatives and strategies. Couldn’t execs have run better analyses to spot risks within the data? Why did they fail to launch a new mobile app? Did someone hack them and lock customers out?

Everyone knows there’s an opportunity cost of putting off mainframe application modernization, but there’s a belief that it’s risky to change systems that are currently supporting operations.

Community and regional banks may lack the technical resources, whereas larger institutions have an overwhelming amount of technical debt, high-gravity data movement issues, or struggle with the business case.

Banks large and small have all likely failed at one or more modernization or migration initiatives. As those efforts were scrapped, IT leaders within these organizations felt they had bitten off more than they could chew.

Transforming the modernization effort should not require a wholesale rewrite of mainframe code, nor a laborious and expensive lift-and-shift exercise. Instead, teams should modernize what makes sense for the most important priorities of the business.

Here are some great examples of banks that went beyond simply restarting modernization initiatives to significantly improve the value of their mainframes in the context of highly distributed software architectures and today’s high customer-experience expectations.

Transforming core system and application code

Many banks are afraid to address technical debt within their existing mainframe code, which may have been written in COBOL or other languages before the advent of distributed systems. Often, the engineers who designed the original system are no longer present, and business interruptions are not a good option, so IT decision-makers delay transformation by tinkering around in the middle tier.

Atruvia AG is one of the world’s leading banking service technology vendors. More than 800 banks rely on their innovative services for nearly 100 billion annual transactions, supported by eight IBM z15 systems running in four data centers. 

Instead of rip-and-replace, they decided to refactor in place, writing RESTful services in Java alongside the existing COBOL running on the mainframes. By gradually replacing 85% of their core banking transactions with modern Java, they were able to build new functionality for bank customers, while improving performance of workloads on the mainframe by 3X.
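Conceptually, this “refactor in place” pattern works like a dispatcher: transactions that have already been rewritten in Java are registered against their transaction codes, and everything else falls through to the untouched COBOL path. The sketch below is purely illustrative, with hypothetical transaction codes and a simulated legacy call, not Atruvia’s actual code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of gradual in-place refactoring: migrated transactions live in a
// dispatch table; anything unregistered still runs on the legacy COBOL path.
public class TxnDispatcher {
    // Transactions already refactored to Java, keyed by transaction code.
    private final Map<String, Function<String, String>> javaTxns = new HashMap<>();

    void register(String txnCode, Function<String, String> handler) {
        javaTxns.put(txnCode, handler);
    }

    String execute(String txnCode, String payload) {
        Function<String, String> handler = javaTxns.get(txnCode);
        if (handler != null) {
            return handler.apply(payload);        // new Java path
        }
        return callLegacyCobol(txnCode, payload); // untouched COBOL path
    }

    // Placeholder for a call into the existing COBOL program
    // (in a real system, e.g. a CICS or native bridge).
    String callLegacyCobol(String txnCode, String payload) {
        return "COBOL:" + txnCode + ":" + payload;
    }

    public static void main(String[] args) {
        TxnDispatcher d = new TxnDispatcher();
        d.register("BALQ", acct -> "JAVA balance for " + acct); // migrated
        System.out.println(d.execute("BALQ", "12345")); // served by Java
        System.out.println(d.execute("XFER", "12345")); // still COBOL
    }
}
```

As more transaction codes are registered over time, the share of traffic handled in Java grows without ever taking the legacy programs offline.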

Read the Atruvia AG case study

Ensuring cyber resiliency through faster recovery

Most banks have a data protection plan that includes some form of redundancy for disaster recovery (DR), such as a primary copy of the production mainframe in the data center and perhaps an offsite secondary backup or virtual tape solution that gets a new batch upload every few months.

As data volumes inexorably grow, with more transactions and application endpoints, making copies through legacy backup technologies becomes increasingly costly and time-consuming, and restoring them is slow, which can leave a downtime gap in DR. Modern banks critically need timelier backups and faster recovery to protect their computing environments, including against threats like ransomware.

ANZ, a top-five bank in Australia, sought to increase its capacity for timelier mainframe backups and faster DR performance to ensure high availability for its more than 8.5 million customers.

They built out inter-site resiliency, running mirrored IBM zSystems servers with the HyperSwap function to enable multi-target storage swaps without outages: any of the identical servers can take over production workloads while another undergoes a backup or recovery process.
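The idea behind multi-target swapping can be illustrated with a toy model (this is a conceptual sketch, not the real HyperSwap implementation): identical mirrored targets receive every write, and if the active target goes offline for backup or recovery, I/O silently continues against a healthy mirror:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of multi-target mirroring: several identical storage targets
// hold the same data; reads and writes transparently skip offline targets.
public class MirroredStore {
    static class Target {
        final Map<String, String> data = new LinkedHashMap<>();
        boolean online = true;
    }

    private final Target[] targets;

    MirroredStore(int copies) {
        targets = new Target[copies];
        for (int i = 0; i < copies; i++) targets[i] = new Target();
    }

    // Synchronous mirroring: every write lands on all online targets.
    void write(String key, String value) {
        for (Target t : targets) if (t.online) t.data.put(key, value);
    }

    // Reads come from the first healthy target; the "swap" is implicit.
    String read(String key) {
        for (Target t : targets) if (t.online) return t.data.get(key);
        throw new IllegalStateException("no target available");
    }

    void takeOfflineForBackup(int index) { targets[index].online = false; }

    // A real system would resynchronize the mirror before reuse;
    // this toy model omits that step.
    void bringOnline(int index) { targets[index].online = true; }
}
```

The point of the sketch: taking one copy offline for backup never interrupts reads or writes, which is what lets ANZ back up without scheduling downtime.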

ANZ’s IT leadership gains peace of mind from better system availability, and, more importantly, the bank now has a modern disaster recovery posture that can be certified to provide business continuity for its customers.

Read the ANZ case study

Gaining visibility through enterprise-wide business and risk analytics

Banks depend on advanced analytics for almost every aspect of key business decisions that affect customer satisfaction, financial performance, infrastructure investment and risk management.

Complex analytical queries atop huge datasets on the mainframe can eat up compute budgets and take hours or days to run. Moving the data somewhere else—such as a cloud data warehouse—can come with even greater transport delays, resulting in stale data and poor quality decisions.

Garanti BBVA, Turkey’s second-largest bank, deployed IBM Db2 Analytics Accelerator for z/OS, which accelerates query workloads while reducing mainframe CPU consumption.

The separation of analytics workloads from the concerns and costs of the mainframe production environment allows Garanti to run more than 300 analytics batch jobs every night, and a compliance report that used to take two days to run now only takes one minute.
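The underlying pattern is offloading heavy analytical queries while keeping short transactional lookups on the production engine. In Db2 Analytics Accelerator the routing decision is actually made by the Db2 optimizer, not by application code, so the application-level router below is purely illustrative, with a crude heuristic standing in for the optimizer’s cost model:

```java
// Illustrative query routing only: heavy analytical queries go to the
// accelerator, short transactional lookups stay on the mainframe engine.
// (In the real product, the Db2 optimizer makes this decision.)
public class QueryRouter {
    enum Engine { MAINFRAME_DB2, ACCELERATOR }

    static Engine route(String sql) {
        String q = sql.toUpperCase();
        // Crude stand-in for a cost model: aggregation and joins
        // suggest an analytical workload worth offloading.
        boolean analytical = q.contains("GROUP BY") || q.contains("SUM(")
                || q.contains("AVG(") || q.contains(" JOIN ");
        return analytical ? Engine.ACCELERATOR : Engine.MAINFRAME_DB2;
    }

    public static void main(String[] args) {
        // Short OLTP lookup stays on the production engine:
        System.out.println(route("SELECT balance FROM accounts WHERE id = ?"));
        // Aggregation over a large table is offloaded:
        System.out.println(route("SELECT branch, SUM(amount) FROM txns GROUP BY branch"));
    }
}
```

Keeping this split means nightly batch analytics and compliance reports no longer compete with production transactions for mainframe CPU.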

Read the Garanti BBVA case study

Improving customer experience at DevOps speed

Banks compete on their ability to deliver innovative new applications and service offerings to customers, so agile dev/test teams are constantly contributing software features. We naturally tend to think of these as front-end improvements to smartphone apps and API-driven integrations with cloud services.

But almost every one of these new features will eventually touch the mainframe. Why not bring mainframe teams forward as first-class participants in the DevOps movement?

Danske Bank decided to bring nearly 1,000 internal mainframe developers into a firm-wide DevOps transformation movement, using the IBM Application Delivery Foundation for z/OS (ADFz) as a platform for feature development, debugging, testing and release management.

Even existing COBOL and PL/I code could be ingested into the CI/CD management pipeline, then opened and edited intuitively within developers’ IDEs. No more mucking with green screens here. The bank can now bring new offerings to market in half the time it used to take.

Read the Danske Bank case study https://www.ibm.com/case-studies/danske_bank_as

The Intellyx Take

Even newer “born-in-the-cloud” fintech companies would be wise to consider how their own innovations need to interact with an ever-changing hybrid computing environment of counterparties.

A transaction on a mobile app will still eventually hit global payment networks, regulatory entities and other banks—each with their own mainframe compute and storage resources behind each request fulfillment.

There will never be a singular path forward here because no two banks are identical, and there are many possible transformations that could be made on the mainframe application modernization journey.

IT leaders need to start somewhere and select use cases that are the best fit for their business needs and the architecture of the unique application estate the mainframe will live within.

Learn more about mainframe modernization by checking out the IBM Z and Cloud Modernization Center
