AirBnB wants to know whether you have a "Machiavellian" personality before you rent that beach house.
The company may be using software to judge whether you're trustworthy enough to rent a house, based on what you post on Facebook, Twitter and Instagram.
The company owns a technology patent designed to rate the "personalities" of potential guests by analyzing their social media activity, to decide whether they are risky guests who might damage a host's house.
The end product of its technology is to assign each AirBnB guest a "trustworthiness score." According to reports, this will be based not only on social media activity, but also on other data found online, including blog posts and legal records.
The technology was developed by Trooly, which AirBnB acquired three years ago. Trooly created an artificial intelligence-based tool designed to "predict trustworthy relationships and interactions," and it uses social media as a data source.
The software builds the score from perceived "personality traits" it identifies, including some you might predict, such as conscientiousness, openness, extraversion and agreeableness, and some stranger ones, such as "narcissism" and "Machiavellianism." (Interestingly, the software also looks for involvement in civil litigation, suggesting that people may, now or in the future, be banned based on the prediction that they are more likely to sue.)
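No vendor publishes its scoring internals, but the general shape of such a system is easy to sketch. The trait names, weights and threshold below are invented purely for illustration; this is not Trooly's or AirBnB's actual model, just a toy stand-in for the kind of opaque weighted scoring the reports describe.

```python
# Hypothetical sketch of a trait-weighted "trustworthiness score".
# All trait names, weights and the threshold are invented for
# illustration; no real vendor's model is shown here.

TRAIT_WEIGHTS = {
    "conscientiousness": 0.30,    # assumed positive signals
    "openness": 0.10,
    "extraversion": 0.05,
    "agreeableness": 0.25,
    "narcissism": -0.35,          # assumed negative signals
    "machiavellianism": -0.45,
    "litigation_involvement": -0.50,
}

def trustworthiness_score(traits: dict[str, float]) -> float:
    """Combine per-trait estimates (each 0.0-1.0) into one score.

    Returns a value in [0, 1]; higher means "more trusted"
    under this toy model.
    """
    raw = sum(TRAIT_WEIGHTS[name] * value
              for name, value in traits.items()
              if name in TRAIT_WEIGHTS)
    # Rescale into [0, 1] so a single cutoff can be applied.
    lo = sum(w for w in TRAIT_WEIGHTS.values() if w < 0)
    hi = sum(w for w in TRAIT_WEIGHTS.values() if w > 0)
    return (raw - lo) / (hi - lo)

def is_risky(traits: dict[str, float], threshold: float = 0.5) -> bool:
    """A guest is flagged when the score falls below the cutoff."""
    return trustworthiness_score(traits) < threshold
```

The point of the sketch is its shape, not its numbers: a handful of weights and one cutoff, none of which anyone outside the company can inspect, deciding who gets to rent.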
AirBnB has not said whether it uses the software.
If you are surprised, shocked or unhappy about this news, then you're like most people, who are unaware of the enormous and rapidly growing practice of judging people (customers, citizens, employees and students) using AI applied to social media.
AirBnB is not the only organization scanning social media to judge personality or predict behavior. Others include the Department of Homeland Security, employers, school districts, police departments, the CIA, insurance companies and many more.
Some estimates say that up to half of all university admissions officers use AI-based social media monitoring tools as part of the applicant-vetting process.
Human resources departments and hiring managers also increasingly use AI-based social media monitoring before making hires.
U.S. government agencies, especially those that employ people who need security clearances, are also relying on social media monitoring to screen for untrustworthy employees.
And, as I've reported in this space, the number of smartphone searches performed by U.S. Customs and Border Protection when people enter the United States is growing radically every year. These searches include social media accounts, which could later be monitored and analyzed using AI.
And not only are schools increasingly monitoring students' social media activity, but some states are beginning to require it by law.
AI-based social media monitoring has become a very popular bandwagon. And organizations of all kinds are jumping on it.
There is only one problem.
AI-based social media monitoring is not that smart
Various organizations have flirted with social media monitoring for years. But the recent emergence of AI-based monitoring tools has created an industry and an occupational specialty.
These tools look for personality traits such as intelligence, social responsibility and financial responsibility, and for behaviors such as obeying the law and acting responsibly.
The question is not whether AI applied to data collection works. It surely does. The question is whether social media reveals truths about its users. I'm questioning the quality of the data.
For example, scanning someone's Instagram account may "reveal" that they are fabulously wealthy and travel the world enjoying champagne and caviar. The truth may be that they are broke, stressed-out influencers who trade social exposure for hotel rooms and restaurant meals, where they take heavily manipulated photos created exclusively to build a reputation. Some people use social media to deliberately construct a false image of themselves.
A Twitter account may show a user as a prominent, constructive and productive member of society, while a second, anonymous account unknown to the social media monitoring systems would have revealed that person as a sociopathic troll who just wants to watch the world burn. People maintain multiple social media accounts for different facets of their personalities. And some of them are anonymous.
And a person's Facebook account can be peppered with the user's outrageous sense of humor, full of profanity and hyperbole, which monitoring tools may conclude reveals an untrustworthy personality, when in reality the problem is that the machines have no sense of humor or irony. The creators of AI tools may also lack any real understanding of personality.
For example, using profanity online can lower a person's trustworthiness score, based on the assumption that rude language indicates a lack of ethics or morality. But recent research suggests the opposite: potty-mouthed people may, on average, be more trustworthy, as well as more intelligent, more honest and more professionally capable. Do we trust that Silicon Valley software companies know or care about the subtleties and complexities of human personality?
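To see how crude that assumption can be in practice, consider the simplest way a tool might implement it. The word list and the penalty amount below are invented for illustration; the point is that a keyword heuristic cannot tell hostility from humor.

```python
# Hypothetical keyword heuristic of the kind the assumption implies.
# The word list and per-hit penalty are invented for illustration.

PROFANITY = {"damn", "hell", "crap"}

def profanity_penalty(posts: list[str]) -> float:
    """Deduct a fixed amount per profane word, capped at 1.0."""
    hits = sum(
        1
        for post in posts
        for word in post.lower().split()
        if word.strip(".,!?") in PROFANITY
    )
    return min(1.0, 0.1 * hits)

# A joking, self-deprecating user gets penalized...
joker = ["Burned the damn toast again. I am a hazard to breakfast."]
# ...while a hostile but polite user sails through untouched.
polite_troll = ["People who disagree with me deserve misfortune."]
assert profanity_penalty(joker) > profanity_penalty(polite_troll)
```

The heuristic dutifully punishes the harmless joke and waves through the venom, because it is counting tokens, not reading people.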
And finally, some people are nonstop, obsessive users of many social networking sites. Other people never use social media. Most fall somewhere in between.
There is also a generational divide. Younger people are statistically less likely to post publicly, preferring private messaging and small-group social interaction. Is AI-based social media monitoring fundamentally ageist?
Women are more likely than men to post personal information (information about themselves) on social media, while men are more likely than women to post impersonal information. Posting about personal matters can be more revealing of personality. Is AI-based social media monitoring fundamentally sexist?
Is anyone asking these questions before leaping into this kind of hyper-consequential surveillance?
Companies like AirBnB are trying to solve a real problem. In AirBnB's case, it is essentially a matchmaking service in which the "product" for one user is … another user. So it faces a quality-assurance problem: How do you minimize the harm one user can do to another?
Here is the caveat: For the past 40 years, the technology industry has always overhyped the magic pixie dust of the moment. Right now, that happens to be AI. What I fear is that companies like AirBnB, faced with a problem, will conclude that the solution is to let the magic of AI magically solve it. They will unleash the systems on social media, run the algorithms and get results. The systems will say who not to admit to school, who not to hire, who to strip of a security clearance and who to ban from AirBnB.
For people on the other end of this process, there will be no transparency, no notice and no appeals process.
Did the AI reject the right people? How will anyone know?
Did some of the people the AI deemed "trustworthy" earn that distinction by somehow gaming the system? How will anyone know?
If you search the internet for advice about social media monitoring, you will find plenty of tips to "watch what you post online." That sounds like reasonable advice, until you really think about what it implies. It basically says that if you are someone who truly should be fired, not hired or rejected from a school based on your social media activity, you should be smart and fake the social media activity of someone who isn't objectionable.
As awareness of the scope of social media monitoring spreads, the practice of curating oneself on social sites (that is, performing for the AI audience and feeding the machines false information so they judge you trustworthy) will become commonplace.
Let me state it more plainly. Many kinds of organizations, from government agencies to businesses to Silicon Valley technology companies of every stripe, are boarding the AI-based social media monitoring train. Dozens of companies are springing up to specialize in these tools. The practice is becoming widespread.
And when the public wakes up to the reality of this widespread practice, the inevitable response will be to change social media behavior, push the right buttons to maintain one's "trustworthiness score" and hack the system, rendering the whole thing useless and obsolete.
It's time to start worrying about AI-based social media monitoring
Here is something you can definitely intuit from scanning social media, even without AI: The tech-savvy public is generally wary and dismissive of "surveillance capitalism" practices such as personal data harvesting, web activity tracking and the widespread contact-harvesting practices of various sites and apps, which they use to gain access to the personal information of everyone you know without their knowledge or permission.
Everyone seems to talk about it. No one seems to like it. But it is also true that the actual, material "harm" of this kind of everyday monitoring is difficult to pinpoint.
Meanwhile, you rarely hear online conversations about AI-based social media monitoring. Yet the potential "harms" are gigantic: losing your job, being rejected from school, paying higher insurance rates and not being able to rent a beach house on AirBnB.
I am not here with "advice" for the "untrustworthy" on how to game the system and trick the machines into trusting them. I am here to tell you that the system can be gamed. And that AI-based social media monitoring to determine "trustworthiness" is itself … not trustworthy.