We’re all being judged by a secret ‘trustworthiness’ score
ANDREW CHEETHAM · 1 day ago
Nearly everything we buy, how we buy it, and where we buy it from is secretly fed into AI-powered verification services that help companies guard against credit-card fraud and other forms of fraud, according to the Wall Street Journal.
More than 16,000 signals are analyzed by a service called Sift, which generates a “Sift score” ranging from 1 to 100. The score is used to flag devices, credit cards and accounts that a vendor may want to block based on a person’s or entity’s overall “trustworthiness,” according to a company spokeswoman.
From the Sift website: “Each time we get an event — be it a page view or an API event — we extract features related to those events and compute the Sift Score. These features are then weighed based on fraud we’ve seen both on your site and within our global network, and determine a user’s Score. There are features that can negatively impact a Score as well as ones which have a positive impact.”
The system is similar to a credit score – except there’s no way to find out your own Sift score.
Factors that contribute to one’s Sift score (per the WSJ):
• Is the account new?
• Are there a lot of digits at the end of an email address?
• Is the transaction coming from an IP address that’s unusual for your account?
• Is the transaction coming from a region where there are a lot of hackers, such as China, Russia or Eastern Europe?
• Is the transaction coming from an anonymization network?
• Is the transaction happening at an odd time of day?
• Has the credit card being used had chargebacks associated with it?
• Is the browser different from what you typically use?
• Is the device different from what you typically use?
• Is the cadence of the way you typed out your password typical for you? (tracked by some advanced systems)
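To give a rough sense of how signals like these could feed a 1–100 score, here is a toy sketch. It is purely illustrative: Sift’s actual model, feature names, and weights are proprietary, and every name and number below is invented for the example.

```python
# Toy illustration of a weighted-signal risk score on a 1-100 scale.
# All feature names and weights are invented; the real Sift model
# weighs thousands of signals and is not public.

def toy_risk_score(signals: dict) -> int:
    weights = {
        "new_account": 10,
        "digits_in_email": 8,
        "unusual_ip": 15,
        "high_risk_region": 12,
        "anonymization_network": 20,
        "odd_hour": 5,
        "card_has_chargebacks": 25,
        "new_browser": 6,
        "new_device": 9,
        "atypical_typing_cadence": 7,
    }
    baseline = 1  # lowest possible score
    # Sum the weight of every signal that fired for this transaction.
    score = baseline + sum(w for name, w in weights.items() if signals.get(name))
    return min(score, 100)  # clamp to the 1-100 range

# Example: a transaction from a new device over an anonymizing network.
print(toy_risk_score({"new_device": True, "anonymization_network": True}))  # 30
```

A real system would weight these features from observed fraud outcomes (as Sift describes) rather than hand-coding them, and could also include features that lower the score.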
Sources: Sift, SecureAuth, Patreon
The system is used by companies such as Airbnb, OpenTable, Instacart and LinkedIn.