Over half of all website traffic worldwide is now generated through mobile phones. Big data has transformed several sectors, unsecured lending being one of them. However, all stakeholders need to converge and establish responsible terms of data acquisition, use and management. Transparency will allow innovation and the AI industry to achieve scale and stability.
In the past, lending institutions have used in-person, ‘high touch’ methods of addressing the credit needs of individuals from low-income segments. In the absence of collateral or credit history, they relied on subjective assessments of a person’s capacity and intent to repay – the two criteria for deciding whether the applicant’s request for a loan should be granted, and on what terms. This manual method is cost- and time-intensive. The high processing costs did not justify small-ticket unsecured loans to individuals who were considered ‘high-risk’ for want of data to assess them. The sunk cost of initiating a credit assessment plays a major role in the financial exclusion of approximately 3 billion people across the world. As a consequence, their productive potential is curtailed, particularly in emerging economies where it is needed the most.
The use of data itself is not new to personal finance. However, the scale and extent of the use of data has grown immensely – so much so that it has disrupted several processes within the financial sector.
The banking and finance sectors were among the earliest adopters of technology and automation. The same happened when data became ‘big’, followed by advanced analytics. Only about a quarter of the adult population in India has a bureau score. Within this segment, the majority have scores that are too low to access affordable, unsecured credit. Today, India leads the world in data usage per individual smartphone. The unprecedented amount of data generated by smartphone owners is a viable input to the credit risk assessment process. Expanding the market for unsecured retail credit has been one of the biggest outcomes of big data and AI.
As the disruptive and transformative power of data has become evident, the roles and responsibilities of stakeholders across the new ecosystem need to be re-examined. Today, alternative data-based credit risk assessment platforms meet the definition of traditional credit bureaus. Their role goes beyond data management to data collection, analytics and decisioning within defined risk thresholds. Almost every public and private sector lender in India acknowledges the role of big data in credit risk assessment. However, I sometimes see lenders hesitate to partner fully with technology and advanced analytics platforms, even when these decisioning models have demonstrated up to 25% higher approval rates than traditional methods of risk assessment. This hesitation arises largely from a lack of clarity about roles and responsibilities around the collection, management and ownership of customer data – something that needs to be established quickly.
Here are a few of the core tenets and principles that should outline a framework for big data:
The customer owns the data.
Since about 80% of data is privately owned, the quality of big data-based underwriting products is linked to the quantity and quality of data a platform can acquire. What is important to keep in mind is that the individual owns her digital footprint. This is what should underscore the regulatory and policy framework for the collection, management and use of big data in finance.
In July 2019, Indian regulators announced the launch of account aggregators (AAs) to provide services related to the collection and transfer of consumer data, with the explicit consent of the individual. The data is not visible to the AAs, nor can they store or sell it. The laudable part is that the permission settings are dynamic and can be adjusted by the individual. This kind of enabling infrastructure and ecosystem has the potential to accelerate financial inclusion tremendously. That said, in a complex, highly stratified and low-literacy environment, the devil is often in the detail. Concurrent efforts to improve financial literacy and awareness of data security, along with multilingual and highly intuitive UI/UX, are just a few of the things that will help formal lenders penetrate the segment of 40 million smartphone owners.
A consent-based architecture for big data deployment in the financial sector is necessary to build transparency and trust in alternative lending. People should understand what data is being used, and for what purpose.
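To make the idea of consent-based architecture concrete, here is a minimal sketch of what a consent record might look like in code. The class name, fields and values are all hypothetical illustrations, not any actual AA specification; the point is that access is narrow (specific data categories, a declared purpose), time-bound, and revocable by the customer at any time.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Consent:
    """A hypothetical consent record: who may use which data, for what, until when."""
    data_owner: str    # the customer, who owns the data
    data_user: str     # e.g. a lender requesting bank statements
    categories: set    # data categories covered, e.g. {"bank_statements"}
    purpose: str       # the declared purpose, e.g. "credit_assessment"
    expires_at: datetime
    revoked: bool = False

    def permits(self, requester: str, category: str, purpose: str) -> bool:
        """Access is allowed only while consent is live, unrevoked and on-purpose."""
        return (
            not self.revoked
            and requester == self.data_user
            and category in self.categories
            and purpose == self.purpose
            and datetime.now(timezone.utc) < self.expires_at
        )

# The customer grants narrow, time-bound consent...
c = Consent("customer-123", "lender-A", {"bank_statements"},
            "credit_assessment",
            datetime.now(timezone.utc) + timedelta(days=30))
print(c.permits("lender-A", "bank_statements", "credit_assessment"))  # True
print(c.permits("lender-A", "bank_statements", "marketing"))          # False

# ...and can withdraw it later, reflecting the dynamic permission settings
# described above.
c.revoked = True
print(c.permits("lender-A", "bank_statements", "credit_assessment"))  # False
```

The design choice worth noting is that purpose is part of the check: data shared for credit assessment cannot silently be reused for marketing, which is exactly the transparency the paragraph above calls for.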
Addressing data privacy and security is not a choice. Those who are in this for the long haul have to develop an ecosystem that is responsible and built on trust.
Balancing privacy with innovation.
The rapid adoption of India Stack and its transformation of day-to-day business has intensified the debate over balancing privacy with innovation and the scaling of service delivery. Big data now allows a person to open a bank account in less than a minute and has drastically brought down customer onboarding and risk assessment costs, helping lenders reach a wider segment of the population with smaller ticket sizes and more affordable credit.
Security, by design.
The race to build big data products and get them to market often leads to security measures taking a backseat. Shortcuts like these will lead to major security glitches down the road, eroding trust in the big data economy. It must, therefore, be mandatory to ensure that security features such as end-to-end encryption, masking of personal data, high-security cloud infrastructure and regular penetration testing are part of product development. Firms can use white hat hackers to further strengthen big data products and establish a culture of data security across the ecosystem.
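Of the features listed above, masking of personal data is the easiest to illustrate. The sketch below shows two common (and here, purely illustrative) techniques: display masking, which hides most of an identifier, and salted pseudonymization, which lets analytics join records without ever seeing the raw value. The field names and salt are assumptions for the example, not any particular platform's scheme.

```python
import hashlib

def mask_phone(number: str) -> str:
    """Show only the last two digits - enough for a support view,
    not enough to re-identify the customer."""
    return "*" * (len(number) - 2) + number[-2:]

def pseudonymize(value: str, salt: str) -> str:
    """One-way salted hash: the same input always maps to the same token,
    so records can be joined for analytics without exposing raw PII."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

record = {"name": "A. Kumar", "phone": "9876543210", "income": 54000}
masked = {
    "phone": mask_phone(record["phone"]),
    "customer_id": pseudonymize(record["phone"], salt="per-deployment-secret"),
    "income": record["income"],  # non-identifying fields pass through untouched
}
print(masked["phone"])  # ********10
```

Keeping the salt secret and separate from the data store is what makes the pseudonym hard to reverse; baking such steps into the pipeline, rather than bolting them on later, is what "security by design" means in practice.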
The Indian Data Protection Framework is a welcome step by the state to address the fast-evolving needs of the big data economy in India. High-stakes sectors like finance, healthcare and essential services have the most to gain from a robust data security framework.
Watching the watchers
Because big data has multiple origins, managers and applications, we’re seeing a blurring of lines between industries and roles. This reality needs to be acknowledged and accounted for when working towards a framework. Fortunately, the technology around data security itself has evolved considerably. Biometrics, unique IDs and blockchain are just some of the tools that we need to consider.
A consultative approach will go a long way in both educating stakeholders and eliciting creative solutions that resonate with a wider audience. Given its increasingly ubiquitous nature, I also see a strong case for basic concepts around big data and AI to be part of the school curriculum.
As human beings, it is natural to fear what we don’t understand. The good news is that with access to information becoming easier, we now have tactical and efficient ways of addressing the hype around big data. By normalizing responsible practices around data security and privacy, we can build an environment in which people feel safe and confident about sharing their information.