Algorithms are making the same mistakes assessing credit scores that humans did a century ago

10:00, 15 May 2018. Source: qz.com

BEIJING, Dec. 18, 2017: A customer pays with GCash in Manila, the Philippines, Nov. 11, 2017. (Xinhua/Qin Qing via Getty Images)

Money20/20, the largest finance tradeshow in the world, takes place each year at the Venetian Hotel in Las Vegas. At a recent gathering, above the din of slot machines on the casino floor downstairs, cryptocurrency startups pitched their latest coin offerings, while on the main stage, PayPal President and CEO Dan Schulman made an impassioned speech to thousands about the globe’s working poor and their need for access to banking and credit. The future, according to PayPal and many other companies, is algorithmic credit scoring, where payments and social media data coupled with machine learning will make lending decisions that another enthusiast argues are “better at picking people than people could ever be.”

Credit in China is now in the hands of a company called Alipay, which uses thousands of consumer data points—including what they purchase, what type of phone they use, what augmented reality games they play, and their friends on social media—to determine a credit score. In a culture where the elderly casually pull out their phones to pay for groceries and even the homeless don QR codes to accept donations, there’s plenty of data to draw on. And while the credit score can dictate the terms of a loan, it also acts as a proxy for general good character. In China, having a high credit rank can help your chances of accessing employment, for example, or of getting a visa to travel within Europe, and even finding a partner via online dating. One Chinese dating site, Baihe.com, offers greater visibility to users with high credit scores.
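
Alipay's actual model is proprietary, but the basic shape of such a system can be sketched in a few lines: normalize each behavioral signal, weight it, and collapse everything into a single number on the 350-950 band that Zhima Credit, Alipay's scoring service, reportedly uses. The feature names and weights below are invented placeholders, not anything Alipay has disclosed.

```python
# Illustrative sketch only: the real model is proprietary and far more complex.
# Feature names, weights, and the combining formula here are invented.

FEATURE_WEIGHTS = {
    "on_time_payments":  0.40,   # repayment behavior
    "avg_monthly_spend": 0.25,   # purchase history
    "phone_price_tier":  0.15,   # device as a wealth proxy
    "friend_score_mean": 0.15,   # social graph signals
    "ar_games_played":  -0.05,   # leisure signals, negatively weighted
}

def credit_score(features):
    """Combine behavioral features (each normalized to [0, 1]) into a
    score on the 350-950 band Zhima Credit reportedly uses."""
    raw = sum(w * features.get(name, 0.0) for name, w in FEATURE_WEIGHTS.items())
    raw = min(max(raw, 0.0), 1.0)          # clamp to [0, 1]
    return round(350 + 600 * raw)

print(credit_score({"on_time_payments": 1.0, "avg_monthly_spend": 0.8,
                    "phone_price_tier": 0.9, "friend_score_mean": 0.7,
                    "ar_games_played": 0.6}))   # -> one opaque number (836)
```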

A close-up view of a Bitcoin ATM is pictured at the opening of the Ducatus cafe, a cashless cafe that accepts cryptocurrencies such as Bitcoin, in Singapore, December 21, 2017. (REUTERS/Edgar Su)

And all of it is dictated by the algorithm.

In China, having a high credit rank can help your chances of accessing employment, for example, or of getting a visa to travel.   The decisions made by algorithmic credit scoring applications are not only said to be more accurate in predicting risk than traditional scoring methods; their champions argue they are also fairer, because the algorithm is unswayed by the racial, gender, and socioeconomic biases that have skewed access to credit in the past. It might not be clear why playing video games, owning an Android phone, and having 400 Facebook friends can help to determine whether or not a loan application is successful, but a decade after the financial crisis, the logic goes, we need to trust that the numbers don’t lie.

Alipay isn’t alone. Aside from Chinese competitors like WeChat Pay, other companies are using machine learning to make lending decisions in Sub-Saharan Africa. One such company, called Branch, is capitalizing on mobile phone adoption in Kenya, drawing on data gleaned from the hugely popular mobile payments platform M-Pesa to devise credit scores. And of course, algorithmic credit scoring isn’t confined to emerging credit markets. In Germany, Kreditech, a lending service determined to build the “Amazon for consumer finance,” is moving away from traditional metrics such as repayment histories, to mine the personality clues hidden in the Facebook data its customers surrender. Meanwhile, a U.S. company called ZestFinance uses big data to target customers whose ratings arguably never recovered from the subprime mortgage crisis.

FILE - This March 10, 2015, file photo shows signage outside PayPal headquarters in San Jose, Calif. (AP Photo/Jeff Chiu, File)

Algorithmic credit scoring is fueled by a desire to capitalize on the world’s ‘unbanked.’   As Schulman’s Money20/20 speech suggests, algorithmic credit scoring is fueled by a desire to capitalize on the world’s ‘unbanked,’ drawing in billions of customers who, for lack of a traditional financial history, have thus far been excluded. But the rise of algorithmic credit also responds to anxieties in developed economies—particularly in the aftermath of the financial crisis. A decade post-crash, there’s a whiff of hope that big data might finally shore up the risky business of consumer credit everywhere. Whether we ought to have faith in that promise remains an open question—and one that is hard to answer given the impenetrability of machine learning.

In 2002, J.P. Martin, an executive at Canadian Tire, began to analyze transactional data from the previous year. The company sold sports and recreation equipment, homewares, and automotive supplies, and issued a credit card that was widely accepted. By examining transactional histories, Martin traced correlations between the purchases that customers made and the likelihood they would default on their repayments. Responsible and socially oriented purchases such as birdseed or tools to remove snow from roofs correlated with future creditworthiness, while cheap brands of motor oil indicated a higher likelihood of default.
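
Martin's analysis can be approximated with nothing more elaborate than a frequency table: group past transactions by purchase category and compare observed default rates. A minimal sketch, with invented data standing in for Canadian Tire's transactional histories:

```python
# Estimate default rates conditional on purchase category. Data is invented.
from collections import defaultdict

transactions = [
    # (customer_id, purchase_category, defaulted_later)
    (1, "birdseed", False),
    (1, "snow_rake", False),
    (2, "cheap_motor_oil", True),
    (3, "cheap_motor_oil", True),
    (3, "birdseed", False),
    (4, "snow_rake", False),
    (5, "cheap_motor_oil", False),
]

counts = defaultdict(lambda: [0, 0])   # category -> [defaults, total]
for _, category, defaulted in transactions:
    counts[category][0] += int(defaulted)
    counts[category][1] += 1

for category, (defaults, total) in sorted(counts.items()):
    print(f"{category:16s} default rate: {defaults / total:.0%}")
# birdseed 0%, snow_rake 0%, cheap_motor_oil 67%
```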

PARIS, FRANCE - DECEMBER 05: A man looks at a Bitcoin symbol on a window of the offices of the bank 'La Maison du Bitcoin' on December 05, 2017 in Paris, France. (Photo by Chesnot/Getty Images)

Some companies curtailed their customers’ credit if charges appeared for counseling, because depression and marital strife were signs of potential job loss.   Shortly afterwards, some credit card companies began using these and other discoveries to scrutinize their customers. In the US, every transaction processed by Visa or MasterCard is coded by a “merchant category”—5122 for drugs, for example; 7277 for debt, marriage, or personal counseling; 7995 for betting and wagers; or 7273 for dating and escort services. Some companies curtailed their customers’ credit if charges appeared for counseling, because depression and marital strife were signs of potential job loss or expensive litigation.
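
The screening itself needs no machine learning; a lookup over merchant category codes suffices. The four MCCs below are the ones cited above, but the flagging rule is a hypothetical illustration, not any issuer's documented policy:

```python
# Flag accounts whose transactions carry MCCs treated as distress signals.
# The codes come from the text; the policy itself is hypothetical.

DISTRESS_MCCS = {
    "5122": "drugs",
    "7277": "debt, marriage, or personal counseling",
    "7995": "betting and wagers",
    "7273": "dating and escort services",
}

def flag_account(mcc_history):
    """Return the 'distress' categories present in an account's transactions."""
    return sorted({DISTRESS_MCCS[m] for m in mcc_history if m in DISTRESS_MCCS})

# A customer whose statement includes one counseling charge gets flagged.
print(flag_account(["5411", "7277", "5541"]))
# -> ['debt, marriage, or personal counseling']
```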

While these calculations were based on transactional histories, credit-scoring algorithms respond to datasets with thousands of variables aggregated from payment histories, social media, demographic, and even GPS data. ZestFinance’s patent describes the use of payments data, social behavior, browsing behaviors, and details of users’ social networks as well as “any social graph information for any or all members of the borrower’s network.” Similarly, Branch’s privacy policy mentions such factors as personal data, text message logs, social media data, financial data, and handset details including make, model, and browser type. These applications don’t just draw on this aggregated data to make a decision; they create systems that recursively analyze and refine their results against a desired output, enabling the algorithm to “learn” by making its own connections. As the CEO of ZestFinance recently argued, “all data is credit data,” and the machinations of the algorithm are no longer so straightforward as snow rake = good; marriage counseling = bad.
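
None of these companies publish their models, but the "learning" described here can be illustrated with a standard supervised setup: fit a classifier on historical features and repayment outcomes, then score new applicants. The sketch below uses scikit-learn on synthetic data; the four features are invented stand-ins for the kinds of signals in Branch's and ZestFinance's disclosures.

```python
# A minimal supervised-learning sketch, not any lender's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Invented features: [sms_count, social_contacts, phone_age, gps_radius]
X = rng.normal(size=(n, 4))
# Synthetic ground truth: default risk driven by a hidden linear rule + noise.
true_w = np.array([-0.8, -0.5, 0.6, 0.3])
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
# The fitted weights are the "connections" the algorithm makes on its own;
# no human wrote the rule they encode.
print("learned weights:", model.coef_.round(2))
applicant = np.array([[0.2, -1.0, 1.5, 0.4]])
print("estimated default probability:",
      model.predict_proba(applicant)[0, 1].round(2))
```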

Woman at an ATM

The black box of the algorithm dictates that no one person really knows what data—or what combinations of data—will prove significant.   While companies are generally up-front about what data is input to refine and upgrade the decision-making processes, the black box of the algorithm dictates that no one person really knows what data—or what combinations of data—will prove significant. With a little trial and error, for example, Joe Deville, a researcher at Lancaster University in the U.K., discovered that simply changing the screen resolution on his phone seemed to result in a different score for some algorithmic lenders, while others have suggested that actions as mysterious as charging your phone more often may produce a more favorable result. Meanwhile, the chief executive of Branch speaks whimsically of their machine-learning algorithm as a “robot in the sky” — a kind of AI fairy that makes lending decisions based on whether its users are naughty or nice. If you’re unhappy with the number that emerges from the black box, there’s little you can do to change or dispute it.
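
Deville's experiment amounts to black-box probing: hold every input fixed, vary one seemingly irrelevant feature, and watch the score move. The scoring function below is an invented stand-in (no real lender's weights are public), but the probing loop is the method itself:

```python
def opaque_score(features):
    # Hypothetical stand-in: the lender's real function is unobservable.
    return (600
            + 40 * features["on_time_payments"]
            - 15 * (features["screen_resolution"] == "low")
            + 5 * features["charges_phone_daily"])

# Probe: change only the screen resolution and compare the outputs.
base = {"on_time_payments": 1, "screen_resolution": "high",
        "charges_phone_daily": 1}
for resolution in ("high", "low"):
    probe = dict(base, screen_resolution=resolution)
    print(resolution, "->", opaque_score(probe))   # 645 vs. 630
```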

Algorithmic credit scores might seem futuristic, but these practices do have roots in credit scoring practices of yore. Early credit agencies, for example, hired human reporters to dig into their customers’ credit histories. The reports were largely compiled from local gossip and colored by the speculations of the predominantly white, male middle class reporters. Remarks about race and class, asides about housekeeping, and speculations about sexual orientation all abounded. One credit reporter from Buffalo, New York noted that “prudence in large transactions with all Jews should be used,” while a reporter in Georgia described a liquor store he was profiling as “a low Negro shop.” Similarly, the Retail Credit Company, founded in 1899 (now Equifax), made use of information gathered by Welcome Wagon representatives to collate files on millions of Americans for the next 60 years.

Man at an ATM

By 1935, whole neighborhoods in the US were classified according to their credit characteristics. A map from that year of Greater Atlanta comes color-coded in shades of blue (desirable), yellow (definitely declining) and red (hazardous). The legend recalls a time when an individual’s chances of receiving a mortgage were shaped by their geographic status. The neighborhoods that received a hazardous rating were frequently poor or dominated by racial and ethnic minorities. The scoring practice, known today as redlining, acted as a device to reduce mobility and to keep African American families from moving into neighborhoods dominated by whites.

The Fair Credit Reporting Act in 1970 and the 1974 Equal Credit Opportunity Act were attempts to rectify these discriminatory practices. Today, or so the fintech narrative goes, we have detailed and unbiased scoring algorithms that are perceptually blind to gender, class, and ethnicity in their search for a creditworthy individual. And yet, burgeoning studies of how algorithms classify and make decisions mirror these historic geographies of exclusion, leading academics such as Cathy O’Neil and Frank Pasquale, who study the social, economic, and political effects of algorithmic decision making, to point to emergent practices of “weblining,” where algorithmic scores reproduce the same old credit castes and inequalities. Because these systems learn from existing data sets, it often follows that existing bias shapes what the machine decides is good, bad, normal, or creditworthy.

Burgeoning studies of how algorithms classify and make decisions mirror historic geographies of exclusion.   These systems are fast becoming the norm. The Chinese government is now close to launching its own algorithmic “Social Credit System” for its 1.4 billion citizens, a metric that uses online data to rate trustworthiness. As these systems become pervasive, and scores come to stand for individual worth, determining access to finance, services, and basic freedoms, the stakes of one bad decision are that much higher. This is to say nothing of the legitimacy of using such algorithmic proxies in the first place.

While it might seem obvious to call for greater transparency in these systems, with machine learning and massive datasets it’s extremely difficult to locate bias. Even if we could peer inside the black box, we probably wouldn’t find a clause in the code instructing the system to discriminate against the poor, or people of color, or even people who play too many video games. More important than understanding how these scores get calculated is giving users meaningful opportunities to dispute and contest adverse decisions that are made about them by the algorithm.
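
One audit that outsiders can run without ever seeing the code is an outcome test. A common first check, borrowed from US employment law, is the "four-fifths" disparate-impact rule: compare approval rates across groups and treat a ratio below 0.8 as a red flag. A minimal sketch with invented decisions:

```python
# Disparate-impact check on observed lending outcomes. Data is invented.
decisions = [  # (group, approved)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

ratio = approval_rate("B") / approval_rate("A")
print(f"disparate impact ratio: {ratio:.2f}")  # below 0.80 flags adverse impact
```

A failing ratio does not prove the model is biased, and a passing one does not prove it is fair; but unlike the weights inside the black box, outcomes are observable, which is why audits of this kind pair naturally with meaningful rights to dispute a decision.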

Maybe then we can really see if these systems are giving credit where credit is due.

This article was originally published on Undark. Read the original article.

Learn how to write for Quartz Ideas. We welcome your comments at ideas@qz.com.
