
MSRCosmos provides state-of-the-art application development support to boost operations for a financial services provider in India

Summary

A financial services provider in India sought to improve its asset management operations by upgrading its native system for customer use and overall channel management. MSRCosmos delivered a powerful end-to-end solution to strengthen workflows and improve business outcomes. The solution is on track to streamline key workflows across the client's core financial systems and reduce task turnaround times.

About the Client

The client ranks among the largest financial services providers in India.

Incorporating functional changes to drive digital transformation

An easily accessible, functional website is appreciated by all. But designing a UX that works for multiple generations of users is a challenge. With smart technology gaining a foothold in every household, aligning with ever-changing user behavior has become a necessity for every business. Accommodating all of these evolving needs through one foolproof platform is what our client strove for.

Having understood the need of the times, the client wanted a similarly minimal-search user interface (UI) backed by a powerful backend engine for its asset management operations. As part of the upgrade process, the team identified areas of improvement in both the front-end and the backend tech infrastructure. While they carried out a series of modernizations on their website in-house, improving the backend infrastructure required the help of specialists. For this, they brought in MSRCosmos, which had been a reliable partner to their group companies in solution-focused endeavors.

About MSRCosmos

MSRCosmos LLC is a diversity- and WBENC-certified software services company whose core mission is to serve enterprises and workforces across varied verticals with a broad range of IT service offerings. We deliver insightful solutions that help organizations drive innovation and accelerate growth, offering a full spectrum of big data and advanced analytics, application development and lifecycle maintenance, enterprise resource planning, and cloud services that keep them ahead of the competition and deliver real business value.

The Perfect Solution Partner

MSRCosmos was tasked with improving the 'time to respond' of the back-end systems. MSR had already partnered with the client's parent company on several projects and, as a testament to its strong delivery record, was awarded this assignment.

Detecting the challenge

Through discussion, MSR and the client identified three core pillars that needed a technology upgrade to keep the infrastructure strong: the front end (website), the logic engine, and the database. As part of the first phase of the digital upgrade, MSR set out to improve the tenant system.

On close scrutiny, the MSRCosmos team found that the old backend operating system fell short of requirements: the APIs backing the processes lacked 1) service logs and 2) code-level execution details. This limited database accessibility and caused high network latency, so the team was slow to resolve the errors and issues that came their way. The setup called for an upgrade that could support new-age requirements with an effective engine without disrupting the database.
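To illustrate the gap (a minimal sketch of our own, not the client's actual code), this is roughly what per-call service logging with code-level execution detail looks like:

```python
import json
import logging
import time
import uuid
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("service")

def service_log(func):
    """Log a request ID, the handler name, and execution time for each API call."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        request_id = uuid.uuid4().hex
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
            status = "ok"
            return result
        except Exception:
            status = "error"
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info(json.dumps({
                "request_id": request_id,
                "handler": func.__name__,   # code-level execution detail
                "status": status,
                "elapsed_ms": round(elapsed_ms, 2),
            }))
    return wrapper

@service_log
def get_scheme_details(folio_number: str) -> dict:
    # Hypothetical handler; the real APIs front line-of-business systems.
    return {"folio": folio_number, "scheme": "EQ-GROWTH"}
```

Without this kind of trail, every error investigation starts from scratch, which is exactly the lag the team observed.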

Overcoming the tide

After intensive discussions with the client on the essential mechanics of a robust backend system, MSRCosmos proposed the immediate deployment of Microsoft BizTalk Server to automate business processes. On receiving the client's approval, MSR brought in its skilled specialists to complete the task. The outcome: high transparency and quick response rates, with response turnaround reduced by a factor of 12.

Over 200 APIs across multiple business functions were reworked, prioritized by business criticality.

A few APIs were:

  • Withdrawing an amount from a given folio and scheme
  • Getting scheme and bank details based on the folio number
  • Validating Foreign Account Tax Compliance Act (FATCA) details
  • Getting payment bank IDs and names
  • Saving cart details against a folio
  • Saving pre-payment details for purchase and SIP
  • Saving post-payment details for purchase and SIP
  • KYC and Aadhaar validation
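As an illustration only, with a hypothetical host and response shape rather than the client's actual contract, the second API above (scheme and bank details by folio number) might be called like this:

```python
import requests  # assumes the third-party `requests` package is installed

BASE_URL = "https://api.example.com/amc"  # hypothetical host

def get_scheme_and_bank_details(folio_number: str) -> dict:
    """Fetch scheme and bank details for a folio (illustrative request shape)."""
    response = requests.get(
        f"{BASE_URL}/folios/{folio_number}/scheme-bank-details",
        timeout=5,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"folio": "...", "scheme": "...", "bank": {...}}
```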

 
Old System vs. New System

The complete architecture of the legacy system was redesigned to increase transparency and response rate. The improved system is compartmentalized to streamline information flow.

The business data flow channel as depicted in the new system is as follows:

Artifact | Task | Description
-------- | ---- | -----------
Receive Port | Determine Log Request | To determine whether or not an added LOB data record is associated with an update operation on the table, the original operation value is logged to the LOB.
Receive Port | Confirm Subscription Validation | Confirm and validate the subscription; in case of unauthorized access, send a response to the endpoint owner.
Receive Port | Confirm Checksum Validation | Perform checksum validation and, in case of a mismatch, alert with a response.
Receive Pipeline | Conversion of JSON to XML | Quickly convert the JSON request to an XML request for BizTalk internal processing.
Orchestration | Initiate Business Processing | Initiate the whole business process.
Business Rules | Business Validation | Validate the request message against existing data and perform corrections where needed.
Send Pipeline | Conversion of XML to JSON | Convert the response back from XML to JSON format.
Send Port | Reduce Average Log Response Time | Reduce turnaround time and respond to messages by quickly logging to the LOB log database.
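BizTalk's receive and send pipelines handle the JSON-to-XML and XML-to-JSON steps above with built-in pipeline components; the following minimal Python sketch only illustrates the transformation itself, for flat payloads:

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(payload: str, root_tag: str = "Request") -> str:
    """Receive-pipeline step: wrap a flat JSON object in an XML envelope."""
    root = ET.Element(root_tag)
    for key, value in json.loads(payload).items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def xml_to_json(document: str) -> str:
    """Send-pipeline step: flatten the XML response back to JSON."""
    root = ET.fromstring(document)
    return json.dumps({child.tag: child.text for child in root})

xml_req = json_to_xml('{"FolioNumber": "12345", "Amount": "5000"}')
print(xml_req)              # <Request><FolioNumber>12345</FolioNumber>...</Request>
print(xml_to_json(xml_req)) # round trip back to JSON
```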

Things took great shape

The client has now successfully built a holistic platform for its critical asset management function, powered by a reliable backend engine. The new setup makes it easy for users to access data and get faster results, empowering every employee and business partner to achieve their purpose.

With BizTalk running the backend functions, the client can now easily manage an infrastructure that is modern and compliant with all security- and infrastructure-related controls. The icing on the cake was the lower software development cost enabled by BizTalk's out-of-the-box cloud-to-on-premises and line-of-business application integration. In short, it is an overall improvement in the IT landscape, accelerating the client's digital transformation journey for asset management.

Metric | Before | After
------ | ------ | -----
Time to respond | High | Low
Program/code transparency | Low | High
Services availability | As per SLAs | High
Security | Appropriate | Enhanced
Documentation | Not descriptive and inconsistent | Detailed; helps developers upgrade the system in the future

Data Lake Failures: How to Avoid?

A data-lake is a single source of information for business users, containing enterprise-level data used by various functional units. Whenever the flow of requisite information from an enterprise data-lake is hindered, the impact may span the whole organization or just a few operational aspects, leading to business disruption that can be very costly.

Data-lake failures are generally of three types –

  • A data-lake is totally down
  • A data-lake is only partially available
  • A data-lake is frequently unavailable

 

Regardless of which type of failure it is, the effects of disruption to data flow are multitudinous.

  1. Business users' functioning (decision-making/policy-making) gets affected
  2. The BI/reporting team's work and delivery are affected
  3. Downstream business applications consuming the data are rendered ineffective
  4. People/users don't get the data/information they are waiting for

Obviously, the end result of all these failures, their impact on the business, is undesirable for the company. From operational logjams, reduced output/performance, and customer service or delivery failures to collateral damage, soured investor sentiment, and dented top and bottom lines, organizations can face severe challenges.

We at MSRCosmos believe in empowering our customers to be proactive rather than providing remedial measures after a data-lake failure has occurred (which, of course, we also do when left with no viable alternative).

Accordingly, we propose to all our customers a multi-pronged approach for preventing data-lake failures.

The 5-pronged data-lake failure prevention strategy

Data-lake failures, as we mentioned above, vary in the degree and magnitude of their impact, and there are numerous reasons why data-lake failures and data-flow disruptions occur. Outages can stem from various factors: users, policies, infrastructure, lack of preparedness, lack of timely intervention, and so on. Thus, our failure prevention strategy is closely intertwined with the various ways in which failures occur.

1. Data-lake security and policies

The first one deals with data security and the associated policies governing the same.

Platform Access and Privileges

Who accesses the data platform, and the extent of their privileges, needs to be tightly controlled and constantly monitored. Some users, unintentionally or willfully, tamper with the data; if, for example, someone accidentally deletes records that the data flow depends on, the results could be disastrous. Therefore, to nip such possibilities in the bud, you have to keep a check on user access.

Network Isolation

A strong firewall around the enterprise network would not only make it difficult to breach but would also insulate (isolate) it from intruders.

Data Protection

Data encryption will help ensure your enterprise data is protected and safe.
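As a quick sketch (assuming the third-party `cryptography` package, which is our choice for illustration and not a tool named in this article), symmetric encryption of a record before it lands in the lake might look like:

```python
from cryptography.fernet import Fernet  # assumes `pip install cryptography`

# In practice the key lives in a key-management service, never in code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"customer_id": "C-1001", "balance": 25000}'
token = cipher.encrypt(record)           # ciphertext safe to store in the lake
print(cipher.decrypt(token) == record)   # True: round trip recovers the data
```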

Document-Level Security

Implement role-based, document-level security to ensure that only authorized personnel access documents, and only the documents they have permission to access.
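A minimal sketch of the idea, with hypothetical role names and document metadata:

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    allowed_roles: frozenset  # roles permitted to read this document

def can_access(user_roles: set, document: Document) -> bool:
    """Role-based, document-level check: deny unless roles intersect."""
    return bool(user_roles & document.allowed_roles)

report = Document("q3-risk-report", frozenset({"risk_analyst", "auditor"}))
print(can_access({"auditor"}, report))    # True
print(can_access({"marketing"}, report))  # False
```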

2. Performance evaluation and scaling

If the data flow to the intended recipients, especially downstream applications, is delayed for lack of adequate speed, the output and performance of those apps suffer severely. It is therefore crucial to establish an optimal speed and maintain it at all times through a continuous, analytics-powered performance evaluation strategy.

Then comes the question of scalability, which is critically important for big-data analytics. As business operations grow, the data size inevitably grows with them. A data-lake that supports only a few TB of data won't suffice and will collapse as extensive data pours in. You therefore need the right number of licenses as well as a scalable infrastructure.

Big-data analytics frameworks such as Hadoop and Spark are designed to scale horizontally: as the data and/or processing grows, you simply add more nodes to the cluster. This allows continuous, seamless processing without interruptions. For this to succeed, however, the storage layer must also scale linearly.
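As a sketch, assuming PySpark and a hypothetical lake path: the job code stays the same whether the cluster has three nodes or three hundred, because capacity comes from adding workers, not from rewriting jobs:

```python
from pyspark.sql import SparkSession

# The job logic is independent of cluster size; capacity is added by
# adding worker nodes/executors, not by changing this code.
spark = (SparkSession.builder
         .appName("lake-aggregation")
         .getOrCreate())

events = spark.read.parquet("hdfs:///lake/raw/events")  # hypothetical path
daily = events.groupBy("event_date").count()
daily.write.mode("overwrite").parquet("hdfs:///lake/curated/daily_counts")
spark.stop()
```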


3. High Availability (HA) and Disaster Recovery (DR)

One of the most essential steps you can take to prevent data-lake failures is to put the right HA measures in place. Having a spare server that is automatically invoked should there be any issues with the master server also greatly reduces the chances of a data-lake failure.

A few HA approaches that can be adopted:

  • Metadata HA

Metadata HA is most helpful, almost critical, for long-running cluster operations, as metadata includes critical information about the location of application data and its associated replicas.

  • MapReduce HA

MapReduce HA keeps job execution going even when the associated job trackers and resource managers go down.

  • NFS HA

Another effective HA measure is to mount the cluster via an HA-enabled NFS. This ensures continuous, undisrupted access both to streaming-in data and to applications that require random read/write operations.

  • Rolling upgrades

Rolling upgrades are another good measure that helps minimize disruption. Deploying component updates incrementally ensures there is no downtime. Further, by performing maintenance or software upgrades on the cluster a few nodes at a time, while the system continues to run, you can eliminate planned downtime.

Another critical step towards data-lake failure prevention is to have a sturdy disaster recovery (DR) set-up.

Backups
Incorporate a Hadoop distribution as part of your DR strategy. It gives you the ability to take a snapshot of a cluster at the volume level (all the data, including files and database tables). A snapshot is taken instantaneously and represents a consistent view of the data, since the state of a snapshot never changes.
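For a plain HDFS-based lake, the same idea can be scripted; the directory path here is hypothetical, and an administrator must first mark it snapshottable with `hdfs dfsadmin -allowSnapshot`:

```python
import subprocess
from datetime import datetime

LAKE_PATH = "/lake/raw"  # hypothetical snapshottable directory

def create_snapshot(path: str) -> str:
    """Take a point-in-time HDFS snapshot of a directory (assumes the
    `hdfs` CLI is on PATH and snapshots are enabled for the path)."""
    name = "backup-" + datetime.now().strftime("%Y%m%d-%H%M%S")
    subprocess.run(["hdfs", "dfs", "-createSnapshot", path, name], check=True)
    return name

print(create_snapshot(LAKE_PATH))  # e.g. backup-20240101-020000
```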

Have a Converged Data Platform

Experience shows that backups alone may not be enough for disaster recovery. It is therefore prudent to set up a converged data platform for big-data disaster recovery. It lets you manage multiple big-data clusters across several locations and infrastructure types (cloud or on-premises), irrespective of the service provider, ensuring that data remains consistent and up to date across all clusters.

4. Effective data governance

Establish an effective data governance policy covering how the data-lake is organized, what recovery mechanisms are in place, and whether access rules are being adhered to. This makes it easy to regenerate information that has been, or could be, affected.

5. Semantic Consistency

Semantic consistency is achieved when two data units have the same semantic meaning and data values. In practice, a semantic layer maintains metadata that downstream apps check before starting; if the data (columns) has changed, the apps adapt accordingly before processing. It is therefore highly advisable to have a semantic layer on top of your raw data.
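A minimal sketch of such a check, with hypothetical column names standing in for the metadata a semantic layer would publish:

```python
EXPECTED_COLUMNS = {"customer_id", "txn_date", "amount"}  # what this app was built for

def check_schema(published_columns: set) -> None:
    """Fail fast if the semantic layer reports a schema the app doesn't expect."""
    missing = EXPECTED_COLUMNS - published_columns
    added = published_columns - EXPECTED_COLUMNS
    if missing:
        raise RuntimeError(f"Upstream schema dropped columns: {sorted(missing)}")
    if added:
        print(f"New columns upstream (review before use): {sorted(added)}")

# The published schema would come from the semantic/metadata layer.
check_schema({"customer_id", "txn_date", "amount", "channel"})
```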

We believe that if these five steps are implemented properly, it is safe to say data-lake failures will be minimal, if they occur at all.

Balaji Kandregula
Lead Architect – Big-data & Analytics
