In April 2019, amid growing questions about the effect social networks have on mental health, Instagram disclosed its plans to test a feed without likes. You would still see how many people tapped the heart button under your photo, but the total number would remain invisible to everyone else.

The Verge published a detailed report explaining why Instagram was so committed to the idea. “It’s about young people,” Instagram chief Adam Mosseri said that November, just ahead of the test arriving in the United States. “The idea is to try and depressurize Instagram, make it less of a competition, give people more space to focus on connecting with people that they love, things that inspire them. But it’s really focused on young people.”

Two years of testing suggest that likes don’t actually have a harmful effect on mental health. Like counts will remain publicly visible by default, but Instagram users will be able to hide them whenever they want, either across their whole feed or on a per-post basis.

“What we heard from people and experts was that not seeing like counts was beneficial for some, and annoying to others, particularly because people use like counts to get a sense for what’s trending or popular, so we’re giving you the choice,” the company said in a blog post.

So what happened on Instagram?

“It turned out that it didn’t actually change nearly as much about … how people felt, or how much they used the experience as we thought it would,” Mosseri said in a briefing with reporters this week. “But it did end up being pretty polarizing. Some people really liked it, and some people really didn’t.”

The New York Times reported last year that there is little evidence that the use of smartphones or social networks changes mental health. Just this month, a 30-year study of teenagers and technology from Oxford University reached a similar conclusion.

It’s worth emphasizing that these studies don’t prove social networks are good for teenagers or anyone else; they just don’t show the opposite, either. With that in mind, it stands to reason that changes to the user interface of individual apps would also have a limited effect.

Still, the experiment wasn’t a waste. It highlights a lesson social networks are often too reluctant to learn: rigid, one-size-fits-all platform policies make people miserable.

Last month, Intel took a roasting over Bleep, an experimental AI tool for censoring voice chat in multiplayer online video games. If you’ve ever played an online shooter, you know exactly the kind of talk it targets. Rather than censoring the torrent of racist, misogynist, and homophobic speech outright, Intel said it would put the choice in your hands.

What is the way out?

Some questions, especially those related to non-sexual nudity, are interpreted differently across cultures, so it is awkward to force them all under one global standard. Letting users decide whether they can see each other’s likes, or whether breastfeeding photos appear in their feed, seems like a reasonable way out.

However, a fully free-to-choose system is impossible to build, as it would add too much complexity to the product. Companies will still have to draw hard lines around tricky issues, including hate speech and misinformation.

On the other hand, if users get more freedom in the choices they make, both people and platforms profit. People get software that maps more closely to their cultures and preferences, and platforms can offload a series of impossible-to-solve riddles from their policy teams to an eager user base.

Mosseri appears to be moving in this direction.

“It ended up being that the clearest path forward was something that we already believe in, which is giving people choice,” he said this week. “I think it’s something that we should do more of.”
