Fintech company Stratyfy recently announced its latest project: using artificial intelligence to promote fair credit lending practices.
In a press release, the company announced its partnership with the Beneficial State Foundation on the Underwriting for Racial Justice program, also known as the URJ. Through the new initiative, Stratyfy will work alongside the URJ to expand access to credit for members of BIPOC communities.
As a machine learning platform, Stratyfy will focus on applying artificial intelligence in the new two-year pilot program, which brings together approximately 60 BIPOC leaders and innovators. Twenty lenders will participate, including Beneficial State Bank, NBT Bank and Community Vision. These lending institutions will use Stratyfy’s AI-based decision-making technology to help eliminate racial disparities in credit lending.
“Stratyfy is a key partner in this effort, using their credit risk solution to help lenders confidently make bold and meaningful changes, while managing risk and meeting regulatory requirements for safety and soundness,” said Erin Kilmer Neel, executive director and chief impact officer at Beneficial State Foundation, in a statement. “Beneficial State Foundation launched the Underwriting for Racial Justice program to guide lenders through a process to increase access to fair credit.”
Although AI is increasingly used across industries and has spread into most aspects of daily life, experts have pointed out that parts of the technology still need improvement.
AI has long been linked to racial discrimination in medicine, art and more.
Doctors in particular have pointed out the downside of AI’s use in medicine: members of BIPOC communities are often underrepresented in the data used to train the machine learning models that drive these systems.
In banking, AI has also proven problematic.
A 2021 AP News report found that, with AI, lenders were 80% more likely to deny a mortgage to Black applicants and 70% more likely to deny Native American applicants compared with white applicants. Lenders were also 50% more likely to deny Asian applicants and 40% more likely to deny Latino applicants.
Two years later, bias in AI continues to be an issue. In recent remarks to the National Fair Housing Alliance, Michael Barr, Vice Chair for Supervision of the Board of Governors of the Federal Reserve System, warned about AI’s potential for “digital redlining,” denying members of certain communities access to credit and housing.
“While these technologies have enormous potential, they also carry risks of violating fair lending laws and perpetuating the very disparities that they have the potential to address,” Barr said in remarks reported by CNBC.