Over the weekend, another social media platform exploded into the fray: Airchat. The app is like a combination of Twitter and Clubhouse. Instead of typing a message, you speak it. The app quickly transcribes what you say, and as your followers scroll through their feed, they'll hear your voice alongside the transcription.
Built by Naval Ravikant, founder of AngelList, and Brian Norgard, former head of product at Tinder, Airchat takes a refreshingly intimate approach to social media. There are people I've known online for years, and it wasn't until I followed them on Airchat that I realized I'd never heard their real voices. The platform makes it feel like we're actually talking to each other, but because Airchat is asynchronous, it doesn't feel as intimidating as joining a Clubhouse room and holding a live conversation with strangers.
Posting with your voice can seem daunting, but it's lower-stakes than it appears: you can re-record your message if you misspeak. And if you're the kind of person who sends friends three-minute voice memos instead of texting (or if you host a podcast), Airchat will feel intuitive.
Airchat wouldn't be worth using if the transcriptions were poor, but it's the best speech-to-text product I've ever used. It almost always hits the mark in English; it even transcribes Pokémon names correctly (yes, I tested this thoroughly). It seems to work well in other languages, too: I found it functional in Spanish, and TechCrunch reporter Ivan Mehta said the app did a decent job transcribing Hindi. Sometimes the app translates speech directly into English, and while those translations were generally accurate in our testing, it's not clear why or when the app translates rather than transcribes.
So, is Airchat here to stay? That depends on the kind of people who can find a community on the platform. For now, the feed looks like a San Francisco coffee shop: most of the app's users have ties to the tech industry, which may simply be because tech enthusiasts are often the first to pile into new apps. But that wasn't the case for Threads at launch (it's essentially an extension of Instagram), or even Bluesky, which developed an early culture of absurdist memes and irreverence. Airchat has paused invitations for the moment, so this isn't likely to change in the near future.
The app's current culture may also reflect its founders, who are influential in Silicon Valley and venture capital circles. And it's telling that when Airchat introduced a channels feature, two of the first to appear were "Crypto" and "e/acc."
This doesn't have to be an automatic red flag: I (somewhat reluctantly) use Twitter/X every day, and the tech industry is particularly loud there, too. But at least on X, my feed also includes posts about my favorite baseball team, the music I like, and the ongoing debate over adding bike lanes to my neighborhood. So far on Airchat, I haven't seen many conversations that don't involve technology in one way or another.
What I do consider a red flag is Airchat's naive approach to content moderation.
"We're going to try to put as many moderation tools in the hands of users as possible. We want to be as hands-off as possible. That said, sometimes you just don't have a choice," Ravikant said on Airchat.
That hands-off framing is reminiscent of Substack, a platform that lost popular publications like Platformer and Garbage Day after it refused to proactively remove pro-Nazi content.
Ravikant argues that Airchat should work like a dinner party: you wouldn't kick someone out of your home for engaging in a civil debate. But if they started violently screaming at you, it would be wise to intervene.
“We don’t want to moderate for content, but we will moderate for tone,” Ravikant said.
In real-life social situations, it's completely normal to disagree with someone and explain why you see things differently. That's a manageable situation to handle at your own dinner table. But Airchat isn't a normal social situation, because you're talking with thousands of other people. Without more rigorous content moderation, this approach is like staging a huge music festival with a single person working security, hoping everyone will enjoy the music and behave themselves unsupervised. That's not realistic. Just look at Woodstock '99.
This is another way Airchat parallels Clubhouse. Clubhouse's approach to content moderation was even more extreme, since for months after launch there was no way to block people (fortunately, Airchat already has blocking and muting features). But Clubhouse repeatedly hosted antisemitic and misogynistic conversations without consequence.
With such a minimalist approach to content moderation, it's not hard to see how Airchat could find itself in a sticky situation. What happens if someone shares copyrighted audio on the platform? What happens when someone doxxes another user, or uploads CSAM? Without a real plan for dealing with these situations, where does that leave Airchat?
I hope people behave themselves, because I think the concept behind Airchat is great. But we can't afford to be that naive. I'd like to know that if neo-Nazis tried to politely explain to me why Hitler was right, the platform would be able to protect me.