Three leading social media companies have agreed to undergo independent assessments of how effectively they protect the mental health of teenage users, submitting to a battery of tests announced Tuesday by a coalition of advocacy organizations.
The platforms will be graded on whether they mandate breaks and provide options to turn off endless scrolling, among a host of other measures of their safety policies and transparency commitments. Companies that reviewers rate highly will receive a blue shield badge, while those that fare poorly will be branded as unable to block harmful content. Meta, which operates Facebook and Instagram, along with TikTok and Snap, are the first three companies to sign up for the process.
“I hope that by having this new set of standards and ratings it does improve teens’ mental health,” said Dan Reidenberg, managing director of the National Council for Suicide Prevention, who oversaw the development of the standards. “At the same time, I also really hope that it changes the technology companies: that it really helps shape how they design and they build and they implement their tools.”
Teenagers represent a coveted demographic for social media sites and the new standards come as the tech industry faces increasing pressure to better protect young users.
A wave of lawsuits alleges that leading firms have engineered their platforms to be addictive. Congress is weighing a suite of bills designed to protect children’s safety online. And state lawmakers have sought to impose age limits on social apps.
But those efforts have borne little fruit. Some legal experts argue that teens and their families may face difficulty proving in court the connection between social media use and their struggles. Officials in Washington, meanwhile, have been unable to agree on how to regulate the industry, and laws passed by the states have run into First Amendment challenges.
The voluntary standards represent an alternative approach. Reidenberg said in an interview that the ratings are not a substitute for legislation but will be a helpful way for teenagers and parents to decide how to engage with particular apps. The project is backed by the Mental Health Coalition, an advocacy group founded by fashion designer Kenneth Cole.
Cole said in a statement that the standards “recognize that technology and social media now play a central role in mental health — especially for young people — and they offer a clear path toward digital spaces that better support well-being.”
There is still no scientific consensus on whether social media is on the whole harmful for children and teenagers. While some research has found that the heaviest users have worse mental health, studies have also found that young people who are not online can also struggle. But teenagers themselves have reported becoming more uneasy about the time they spend online, with girls in particular telling pollsters at the Pew Research Center in 2024 that apps were affecting their self-confidence, sleep patterns and overall mental health.
Reidenberg said it’s clear that in some cases young people’s time online becomes problematic. He said the system was developed without funding from the tech industry, but companies will have to volunteer to participate.
Antigone Davis, Meta’s global head of safety, said the standards will “provide the public with a meaningful way to evaluate platform protections and hold companies accountable.” TikTok’s American arm said it looked forward to the ratings process. Snap called the Mental Health Coalition’s work “truly impactful.”
Organizers compared the process to how Hollywood assigns age ratings to movies or the government assesses the safety of new cars. Companies will submit internal policies and designs for review by outside experts who will develop their ratings. In all, the companies’ performance will be measured in about two dozen areas covering their policies, app design, internal oversight, user education and content.
Many of the standards specifically target users’ exposure to content about suicide and self-harm. But one also targets the sheer length of time that some people spend scrolling, crediting platforms for offering either voluntary or mandatory “take-a-break” features.
The standards are being launched at an event in Washington on Tuesday. Sen. Mark R. Warner (D-Virginia) said in a statement that he welcomed the standards but that they weren’t a substitute for regulatory action.
“Congress has a responsibility to put lasting, enforceable guardrails in place so that every platform is held accountable to the young people and families who use them,” he added.
It’s one of the most emotionally searing images circulated in recent months: a malnourished child behind a fence, desperate eyes piercing through the camera lens, with a woman stretching out a bowl for food. It’s been published by international media, invoked by politicians, and shared by millions online. It has come to symbolize, for many, the reported famine in Gaza.
But there’s just one problem. The photo’s origin and context are hotly disputed — and increasingly, experts say, deliberately manipulated.
Earlier this week, Israeli Prime Minister Benjamin Netanyahu told his 3.4 million followers on X:
“There is no starvation in Gaza, no policy of starvation in Gaza.”
His remarks unleashed a digital firestorm. Former President Donald Trump broke ranks with his usual ally and responded:
“There is real starvation in Gaza. You can’t fake that.”
This rare division between two strong allies laid bare the intensifying war not just over territory, but over information — a propaganda war playing out across social media, newsrooms, and governments.
Hamas’s Propaganda Machinery and Media Blindness
Many analysts and security experts argue that Hamas is adept at exploiting global sympathy through carefully staged imagery. Images of skeletal children, overwhelmed hospitals, and food queues are frequently disseminated, often with little journalistic scrutiny.
Take, for instance, the viral image of a girl at a community kitchen. On X (formerly Twitter), thousands of users — aided by Elon Musk’s AI chatbot, Grok — claimed the photo was from 2014 and showed a Yazidi girl fleeing ISIS in Iraq.
But BBC Verify journalist Shayan Sardarizadeh debunked that claim. He identified the photo’s true source:
“The image is from Gaza, taken on July 26, 2025, by AP photographer Abdel Kareem Hana.”
Reverse image tools like TinEye confirmed the original publication date and location. Grok was simply wrong.
As Sardarizadeh noted:
“AI chatbots, including Grok, are not fact-checking tools and should not be used for that purpose, particularly in relation to breaking and developing events.”
Still, damage was done. The manipulated claim was spread, repeated, and believed by many — a clear example of how quickly misinformation can overshadow the truth.
The Case of Mohammed Zakaria al-Mutawaq
The New York Times’ July 25 front page showed a picture captioned: “Mohammed Zakaria al-Mutawaq, about 18 months, with his mother, Hedaya al-Mutawaq, who said he was born healthy but recently diagnosed with severe malnutrition. A doctor said the number of children dying of malnutrition in Gaza had risen sharply.” On July 30, the paper acknowledged that Mohammed “had pre-existing health problems affecting his brain and his muscle development.”
Another image that shocked global audiences was that of 18-month-old Mohammed Zakaria al-Mutawaq. Published by The New York Times in a piece titled “Gazans Are Dying of Starvation”, the toddler was described as emaciated, with his father reportedly killed while searching for food.
“As an adult, I can bear the hunger, but my kids can’t,” his mother was quoted as saying.
But investigative journalist David Collier quickly raised flags. He cited medical records showing Mohammed suffered from severe genetic disorders since birth and had required special supplements even before the war began.
In response, The New York Times issued an editor’s note:
“We have since learned new information… and have updated our story to add context about his pre-existing health problems.”
They noted that while Mohammed’s condition had worsened due to the lack of medical care, his malnutrition was compounded, not caused, by the current war.
To critics, the update wasn’t enough.
“So you guys lied, got called out, and issued a complete non-apology,” one user posted on X.
On Wednesday, a UN-backed food security task force warned that famine “is currently playing out” in Gaza. Their analysis said Gaza City had crossed famine thresholds for food consumption and acute malnutrition.
The Hamas-run Gaza Health Ministry reports 154 deaths from hunger since October 2023 — including 89 children. However, critics question the credibility of the ministry’s figures, noting its alignment with Hamas and history of inflated or unverifiable statistics.
Meanwhile, UN Secretary-General António Guterres called the situation “a humanitarian catastrophe of epic proportions.” Human rights organizations, including Israel-based B’Tselem and Physicians for Human Rights, claim Israel is committing genocide through starvation, mass displacement, and bombings.
Yet at the same time, The New York Times also recently reported Israeli military officials denying Hamas’s alleged theft of UN aid — suggesting the crisis may be more due to distribution chaos, logistical breakdowns, and internal Hamas mismanagement than direct Israeli policy.
A Media Reckoning Is Overdue
The Western media’s responsibility in this tragedy cannot be ignored. In the rush to file emotionally evocative stories, due diligence has often been sacrificed. As the New York Budgets Editorial Standards make clear, verifying visual content, especially in wartime, is not optional; it is essential.
“Every journalist must ask: Who took this photo? Where? When? Under what conditions?”
Hamas has repeatedly demonstrated it will exploit suffering for propaganda. That doesn’t mean suffering isn’t real — but it does mean every claim must be thoroughly scrutinized. Too often, however, global outlets like The New York Times, The Guardian, and Stuff have published without confirmation, only issuing updates days later.
Starvation in Gaza may well be occurring. Humanitarian groups have sounded the alarm. But in a media landscape rife with misinformation, every image, every anecdote must be questioned — not to deny suffering, but to preserve the truth.
Because when lies masquerade as evidence, the real victims — whether Palestinian civilians or the truth itself — are the ones who suffer the most.
Meta on Wednesday prevailed against a group of 13 authors in a major copyright case involving the company’s Llama artificial intelligence model, but the judge made clear his ruling was limited to this case.
U.S. District Judge Vince Chhabria sided with Meta’s argument that the company’s use of books to train its large language models, or LLMs, is protected under the fair use doctrine of U.S. copyright law.
Lawyers representing the plaintiffs, including Sarah Silverman and Ta-Nehisi Coates, alleged that Meta violated the nation’s copyright law because the company did not seek permission from the authors to use their books for the company’s AI model, among other claims.
Notably, Chhabria said that it “is generally illegal to copy protected works without permission,” but in this case, the plaintiffs failed to present a compelling argument that Meta’s use of books to train Llama caused “market harm.” Chhabria wrote that the plaintiffs had put forward two flawed arguments for their case.
“On this record Meta has defeated the plaintiffs’ half-hearted argument that its copying causes or threatens significant market harm,” Chhabria said. “That conclusion may be in significant tension with reality.”
Meta’s practice of “copying the work for a transformative purpose” is protected by the fair use doctrine, the judge wrote.
“We appreciate today’s decision from the Court,” a Meta spokesperson said in a statement. “Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology.”
Though there could be valid arguments that Meta’s data training practice negatively impacts the book market, the plaintiffs did not adequately make their case, the judge wrote.
Attorneys representing the plaintiffs said in a statement that they “respectfully disagree” with the decision.
“The court ruled that AI companies that ‘feed copyright-protected works into their models without getting permission from the copyright holders or paying for them’ are generally violating the law,” the statement said. “Yet, despite the undisputed record of Meta’s historically unprecedented pirating of copyrighted works, the court ruled in Meta’s favor.”
Still, Chhabria noted several flaws in Meta’s defense, including the notion that the “public interest” would be “badly disserved” if the company and other businesses were prohibited “from using copyrighted text as training data without paying to do so.”
“Meta seems to imply that such a ruling would stop the development of LLMs and other generative AI technologies in its tracks,” Chhabria wrote. “This is nonsense.”
The judge left the door open for other authors to bring similar AI-related copyright lawsuits against Meta, saying that “in the grand scheme of things, the consequences of this ruling are limited.”
“This is not a class action, so the ruling only affects the rights of these thirteen authors — not the countless others whose works Meta used to train its models,” he wrote. “And, as should now be clear, this ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”
Additionally, Chhabria noted that there is still a pending, separate claim made by the plaintiffs alleging that Meta “may have illegally distributed their works (via torrenting).”
Earlier this week, a federal judge ruled that Anthropic’s use of books to train its AI model Claude was also “transformative,” thus satisfying the fair use doctrine. Still, that judge said that Anthropic must face a trial over allegations that it downloaded millions of pirated books to train its AI systems.
“That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft, but it may affect the extent of statutory damages,” the judge wrote.
Junk food ads are flooding your teenager’s social media feeds and it’s influencing what they choose to eat. (Jene Young/The NewYorkBudgets)
Social media’s harmful impact on the mental health of children and teenagers is well documented.
Now, new research suggests that the widespread marketing of unhealthy food and drinks on social media is influencing the food choices of young people and potentially impacting their physical health.
A University of Oxford team found “strong and consistent evidence” that digital marketing of unhealthy foods and drinks is widespread on social media, and that it influences children and teenagers.
And a recent study led by the University of Queensland found that problematic and excessive social media use is linked to young teens’ increased consumption of sweets and sugar, as well as the tendency to skip breakfast.
So, what is going on with social media and children’s diet? And what are the links?
Teens regularly exposed to junk food ads
Australian GP Isabel Hanson, from the research team behind the Oxford study, says that when young people see junk food being marketed on platforms like Instagram, YouTube or TikTok, it affects what they want to eat.
“My co-authors and I reviewed studies from around the world and saw a clear pattern: kids and teens are regularly exposed to marketing for foods high in sugar, salt and fat, often without realising it,” she says.
The marketing of unhealthy foods to children is unregulated, except in South Australia, which has banned the advertising of junk food on public transport. (Pexels/Pixabay)
One of those studies found Australian children aged 13 to 17 are exposed to 17 food ads each hour, with an average of almost 170 per week.
“This exposure shapes their preferences, increases their desire for those foods, and can lead to higher consumption.”
It’s something she sees play out in her work as a GP.
“Young people who grow up in environments filled with lots of screen time, social media, and exposure to advertising often have poorer diets and can struggle with their weight,” she says.
“Of course, there are lots of factors at play, but [social media] is one we can do something about.”
‘Harder to resist’
Asad Khan led the University of Queensland study that reviewed the data of 223,000 adolescents aged 13 to 14 from 41 countries.
The study found the mindless use of social media often leads to mindless eating — and sometimes mindlessly not eating.
Teens skipping breakfast is particularly problematic, according to Professor Khan, although he concedes the study only examined the amount of time teens spent on social media and not the type of content they consumed, making the link between the two difficult to plot.
Professor Asad Khan believes social media companies should “take some responsibility” for the proliferation of junk food ads on social media. (University of Queensland)
“What we found is that the mindless [and excessive] use of social media is more problematic. And that kind of mindless use is leading towards the overconsumption of sweet, sugary drinks and skipping breakfast,” he tells ABC Australian Radio.
So why do these ads for junk food on social media impact the diet of children and teens as much as they do?
Dr Hanson says these ads are designed to be appealing, and young people are generally more susceptible to this type of marketing.
“They are colourful, fun, often linked to trends or popular people, and that has a real effect on young people’s choices.”
“Young people are smart and savvy in many ways. They can spot trends quickly, navigate digital spaces with ease, and often know more about online platforms than adults do.
“But the brain continues to develop until we are in our mid-twenties, particularly the areas responsible for impulse control, decision-making and assessing risk.
“That means children and teenagers can be more influenced by social approval and less likely to pause and reflect on where a message is coming from, especially when it’s wrapped up in entertaining or peer-driven content.”
Social media advertising often doesn’t look like traditional advertising, which makes it harder to spot and easier to absorb.
And the social media algorithm, peers and influencers also play a huge part in how young people interact with food ads.
“Social media platforms are built to keep users engaged. Once a young person interacts with food content, they’re likely to see more of it,” Dr Hanson says.
“At the same time, young people are heavily influenced by what their peers are watching, liking or sharing, so if a snack or drink is popular in their online circles, it can spread quickly.”
As for the influencers spruiking junk food, they are seen as relatable and trustworthy by young people.
“When influencers promote a food or drink, even subtly, it carries a lot of weight.
“Our review showed that this kind of marketing is especially effective because it doesn’t feel like marketing. That makes it harder to recognise, and harder to resist.”
Food for good mental health
An adolescent’s relationship with food can be a complicated one.
Rates of obesity among children and young people have tripled over the past three decades, the study found.
Add the impacts of social media, courtesy of junk food ads, influencers and time-consuming scrolling, and things can become even murkier.
Sugary and highly processed foods can lead to a range of chronic diseases if over-consumed, says paediatric dietitian Miriam Raleigh.
Miriam Raleigh is a paediatric dietitian and the founder of Child Nutrition, a group of dietitians specialising in children’s food services.
Having a variety of foods from all core food groups is essential for a child’s body and brain, she says.
“We know that a diet rich in wholefoods — not those found in packets — is important for good mental health. Foods are more than vitamins and minerals, they also contain phytochemicals and antioxidants which feed our body, mind and gut.
“Having a broad range of foods allows our gut microbiome to contain a diverse range of different beneficial bacteria that is thought to have a direct link to mental health.”
“Sugary foods and highly processed foods contain little nutritional value for children and teens’ growing bodies,” Raleigh says.
Holding social media companies accountable
Dr Hanson would like to see more government regulation around junk food marketing on social media rather than the voluntary industry codes that “don’t hold up in the digital space” that are currently in place.
Policies that help reduce children’s exposure to digital junk food marketing are needed and social media companies need to do more to protect young users, she argues.
“Education and social media literacy might help a bit, but let’s be honest — it’s the same for adults. When you are constantly flooded with advertising for unhealthy food, it makes you want it,” she says.
“These are highly skilled marketers using proven techniques to influence behaviour. Expecting young people to resist that, day after day, isn’t realistic.”
When asked about the federal government’s response to the issue, a spokesperson from the health department said the government has provided more than $500,000 for the University of Wollongong to deliver a feasibility study to examine the current landscape of unhealthy food marketing to children.
The feasibility study will provide a better understanding of the options available for consideration by all governments and is expected to be finalised in the second half of 2025.
MoviePass, the startup that made its mark with its movie theater subscription service, has always been known for shaking things up, and its latest venture is no exception.
The company announced on Thursday the beta launch of Mogul, a new daily fantasy entertainment platform designed specifically for the Hollywood industry.
To understand what Mogul is, it’s important to first grasp the concept of daily fantasy sports. This subcategory of fantasy sports allows players to compete over short-term periods, rather than an entire season. Players assume the role of team managers, creating their own dream teams made up of real-world athletes and earning points based on how those athletes perform in actual games.
Mogul adapts this idea, allowing users, presumably passionate movie enthusiasts, to act as studio heads in the film industry. Players are provided with a budget and “studio credits” (in-game currency) to spend on selecting actors for their leagues.
Users can update their lineup of movie actors each day. They then participate in fantasy-style tournaments that last about a week, plus one-on-one competitions and solo challenges. Participants make calls on the results of various things, such as box office results, audience turnout, critic ratings, and potential award winners.
As users level up, they earn digital collectibles — think signed posters and memorabilia — that help them climb the leaderboard.
Mogul is built on Sui, a layer 1 blockchain and smart contract platform developed by Mysten Labs. Beta testers will receive a digital wallet to securely store their in-game virtual currency, rewards, and collectibles.
MoviePass is taking a bold leap with the introduction of Mogul, as nothing quite like it has been done before. But CEO Stacy Spikes believes it’s a huge market waiting to be tapped. He said, “People can name more actors than they can probably name sports athletes. So I think there’s a really big market opportunity there.”
Initially, when we first learned about Mogul, we didn’t anticipate that it would take off, at least not in the early stages. We wondered if there are many movie fans willing to compete with others about box office revenue or ratings.
However, we may have underestimated its appeal. The company claims that more than 400,000 people have already signed up for the early-access waitlist. It remains to be seen whether it can maintain this level of interest leading up to the official launch, but it could become popular among niche film industry followers.
During our initial conversation with Spikes, he positioned Mogul as a predictive market platform. Later on, we were told that a more fitting description would be to classify Mogul as a daily fantasy sports platform, though it may evolve to include prediction-market functionality in the future. For now, Mogul operates exclusively with virtual currency.
This distinction is important, especially considering the regulated nature of daily fantasy sports, as opposed to prediction market platforms, which currently exist in a legal gray area. Kalshi, for instance, has been in ongoing legal battles with state gambling regulators.
“It’s murky what needs to be approved. There are different types of clearances, depending on the markets you want in the U.S. You have to go state by state. It literally is like a Chinese puzzle with stuff all over the place,” Spikes said.
Mogul represents the initial phase of MoviePass’s long-term web3 strategy. The company has previously revealed its intention to provide on-chain rewards for attending movies. It’s also backed by Animoca Brands, a venture capital firm specializing in blockchain technology.
Last year, MoviePass partnered with Sui to allow subscribers to make payments using USD coin.
Mark Zuckerberg’s plan is to make Meta the market leader in artificial intelligence. Investors will want to know how President Donald Trump’s tariffs-heavy trade policies will impact that strategy.
Those answers could start to come as soon as this week as Meta’s AI strategy takes center stage when the company hosts its first Llama-branded conference for AI developers on Tuesday then reports its latest quarterly earnings the next day.
Already, tech companies are starting to talk about the potential impact they’re bracing for as a result of the Trump tariffs.
Intel Chief Financial Officer David Zinsner said Thursday during the chip giant’s first-quarter earnings call that U.S. trade policies “have increased the chance of an economic slowdown, with the probability of a recession growing.” Meanwhile, Google CFO Anat Ashkenazi said that day during a first-quarter earnings call that the tech giant remains committed to its $75 billion investment in capital expenditures, or capex, this year, but also acknowledged that the “timing of deliveries and construction schedules” could cause some quarter-to-quarter spending fluctuation.
For now, analysts expect Meta to follow Google’s lead and remain firm in its plan to spend as much as $65 billion in capex for AI infrastructure this year when it reports earnings Wednesday. Some analysts believe Meta could even raise the figure because AI is a core priority for the company.
“We do not expect META to cut its CapX guidance of $60B-$65B in 2025, for its GenAI infrastructure, because they see this as an important 10-year investment, we believe,” Needham analysts wrote in a research note published Wednesday. “However, tariffs add risks of upward cost revisions.”
Investors will also be monitoring Meta’s LlamaCon event at its Menlo Park, California, headquarters for any signs that its AI investments are having an immediate business impact. This will be the first time Meta hosts a developer conference specifically for its Llama family of AI models.
“Investors want to see ROI on all these AI investments, and while Meta has shown clear benefits from leveraging AI to improve its products and drive faster revenue growth, it’s been hard to quantify those benefits,” Truist Securities analyst Youssef Squali told CNBC.
Meta in April released a couple of its new Llama 4 models, which Meta Chief Product Officer Chris Cox previously said can help power so-called AI agents that can perform tasks for users via web browsers and other online interfaces.
It’s critical that Meta keep improving Llama to create a major business involving AI agents that companies can use to interact with their customers within apps like Facebook and WhatsApp, William Blair research analyst Ralph Schackart said.
“Meta has an early mover advantage at scale in a multi-trillion dollar market,” Schackart said in an email. “We believe Meta is very well positioned to leverage its billions of global users across multiple platforms.”
Meta is unlikely to curb its Llama investment anytime soon, but should eventually consider doing so if it fails to generate enough money to justify its costs, said Ken Gawrelski, a Wells Fargo managing director of equity research.
“We do believe that over time Meta needs to continue to evaluate whether Llama needs to be competitive with the leading-edge models,” Gawrelski said. “This is a very expensive proposition and thus far, unlike Google, Meta does not directly monetize its model in any material way.”
Chris Cox, Chief Product Officer at Meta Platforms, speaks during The Wall Street Journal’s WSJ Tech Live Conference in Laguna Beach, California, on October 17, 2023. (Patrick T. Fallon/AFP/Getty Images)
Meta AI and the consumer
Analysts are also following the Meta AI digital assistant. That’s because the ChatGPT rival represents the second pillar of Zuckerberg’s AI strategy.
Zuckerberg in January said he believes 2025 “is going to be the year when a highly intelligent and personalized AI assistant reaches more than 1 billion people, and I expect Meta AI to be that leading AI assistant.”
In February, The Budgets reported that Meta was planning to debut a stand-alone Meta AI app during the second quarter and test a paid subscription service, in which users could pay monthly fees to access more powerful versions like users can with ChatGPT.
Although Meta’s enormous user base across its family of apps gives Meta AI an advantage over rivals like ChatGPT in terms of reach, they may not interact with Meta AI in the same way they do with rival chat apps, said Cantor Fitzgerald analyst Deepak Mathivanan.
Gawrelski said that people may not want to use Meta AI within Facebook and Instagram if all they want to do is passively watch the short videos that Meta algorithmically recommends to their feeds.
“This is why a separate Meta AI, where Meta could clearly articulate its use case and value proposition, could be helpful,” Gawrelski said.
A stand-alone Meta AI app could help the company better market the digital assistant and distinguish it from rivals, said Debra Aho Williamson, founder and chief analyst at Sonata Insights.
“ChatGPT has such wide brand awareness, that it’s become a moat that is soon going to be very hard to overcome,” Williamson said.
The billionaire leaders of social media giants have long been under pressure to quell the spread of mis- and disinformation. No system to date, from human fact-checkers to automation, has satisfied critics on the left or the right.
One novel approach winning plaudits recently has been Community Notes. The crowdsourced method, first introduced by Twitter before Elon Musk acquired it and rebranded it as X, allows regular users to submit additional context to posts, offering up supporting evidence to set the record straight. For Musk, the system is the centerpiece of his “free speech” claims, a democratized check that circumvents traditional gatekeepers of information. “You are the media,” he tells his 220 million followers.
Starting Tuesday, Mark Zuckerberg’s Meta Platforms Inc. will broadly expand the method when it begins testing its own Community Notes system for Facebook, Instagram and Threads, citing X as its inspiration. In a controversial about-face after years of paying professional fact-checkers, Zuckerberg said the company’s existing initiatives had become “too politically biased” and that an army of volunteer users would do a “better job.” YouTube began testing a version of Community Notes on its site in June.
The system has advantages over the alternatives, but its limits as an antidote to misinformation are clear. So are its benefits for executives who have been dogged by intense scrutiny over misinformation and censorship for the better part of a decade. It allows them to outsource responsibility for what happens on their platforms to their users. And also the blame.
A mockup of how users viewed a Community Note attached to a fake image posted during the California wildfires. (Photo illustration: Taylor Tyson/Bloomberg; Frank Eliason via Unsplash; X)
A Bloomberg media analysis of 1.1 million Community Notes — written in English, from the start of 2023 to February 2025 — shows that the system has fallen well short of counteracting the incentives, both political and financial, for lying, and allowing people to lie, on X.
Furthermore, many of the most cited sources of information that make Community Notes function are under relentless and prolonged attack — by Musk, the Trump administration, and a political environment that has undermined the credibility of truly trustworthy sources of information.
Eliminating the rewards for promoting misinformation would go much further than crowdsourcing to clean up social media. But in a social media world of growing incentives to make money in the viral casino, Community Notes is ultimately fighting a losing battle. This column seeks to fully examine how the people behind it are fighting that battle, and what strengths and weaknesses Meta and YouTube stand to inherit by adopting its practices.
The proponents of Community Notes can point to some successes. The system has proved to be faster and is regarded as more trustworthy and transparent than professional fact-checkers. On X, offending posts receive fewer retweets and are more likely to be deleted. Internally, the company felt Community Notes did a better job than traditional media of minimizing the spread of doctored or misattributed images of violence in the Israel-Gaza conflict (though a Bloomberg News analysis suggested it failed to stop a flood of deceit). It limited the virality of some hoaxes during the Los Angeles wildfires, with Notes users pouncing on false images of the famous Hollywood sign aflame.
And indeed, as Musk has repeatedly stated, Community Notes often corrects him — 167 of his posts have received a note since Community Notes began.
Just Scratching the Surface
On X, users who volunteer for Community Notes can submit one to any post, adding context and links to trustworthy or original sources of information. The note’s helpfulness is then voted upon by other volunteers. If enough people agree it is worth publishing, it will be made visible to all X users under the original post. However, this happens only if a consensus is reached among users who have disagreed on other topics in the past, as judged by a bridging algorithm. The developers behind the system say this indicates the discovery of a common ground less likely to be biased in any direction.
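The “bridging” requirement described above can be caricatured in a few lines of code. This is a toy stand-in, not X’s actual algorithm (the production system scores raters and notes jointly with far more nuance); the function name, camp labels and threshold are all invented for illustration:

```python
# Toy sketch of a "bridging" consensus check: a note is published only if
# BOTH camps of raters who have historically disagreed independently rate
# it helpful by majority. All names and the threshold are invented.

def bridged_consensus(ratings, camps, threshold=0.5):
    """ratings: {rater_id: bool (found helpful)}; camps: {rater_id: 'A' or 'B'}."""
    votes = {"A": [], "B": []}
    for rater, helpful in ratings.items():
        votes[camps[rater]].append(helpful)
    # Require each camp to be non-empty and to rate the note helpful
    # by more than the threshold.
    return all(v and sum(v) / len(v) > threshold for v in votes.values())

ratings = {"u1": True, "u2": True, "u3": True, "u4": False}
camps = {"u1": "A", "u2": "A", "u3": "B", "u4": "B"}
print(bridged_consensus(ratings, camps))  # camp B is split 1-1, so no consensus
```

The point of the toy is the failure mode the column goes on to describe: a note that one camp loves and the other camp blocks never publishes, no matter how lopsided the overall vote.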
Among Community Notes’ main achievements is its speed in addressing misinformation relative to fact-checking operations staffed with researchers or reporters. In January 2023, the median time it took to attach a note to a misleading X post was 30 hours. By February 2025, it was less than 14 hours. In contrast, on Meta, fact-checkers could sometimes take more than a week, according to one analysis.
But even with these improvements, notes typically appear after a post’s most viral stage of diffusion — in other words, after the damage is already done.
[Chart: “Fact Checks Are Getting Faster, But Not Fast Enough.” Percentage of Community Notes on X visible within hours of the original post, comparing notes from January 2023 and February 2025. Source: The NewYorkBudgets analysis of X Community Notes.]
It’s unclear how much misinformation is on X — if it could be counted, it could be deleted. But from X’s data, it’s obvious that most misleading posts go unaddressed. A high algorithmic bar for flagging misinformation means fewer than 10% of notes are regarded as “helpful” by the required quorum of users with diverse viewpoints — a percentage that has been trending downward as the system has scaled up.
[Chart: “Fewer Fact Checks on X Are Breaking Through.” Most Community Notes don’t see the light of day as users increasingly fail to reach consensus. Source: The NewYorkBudgets analysis of X Community Notes. Excludes notes pertaining to scams and terms-of-service violations; notes currently rated “helpful” are considered published.]
One reason for this downward trend is that a significant share of published notes are later unpublished. Notes on divisive topics are routinely trapped in purgatory as users cannot agree — or, rather, come to see Community Notes as yet another online battlefield. Analysis for this column shows that even notes initially rated “helpful” — and published — get removed 26% of the time after disagreement sets in.
The removal rate is even higher for certain contentious topics and figures. From a sample of 2,674 notes about Russia and Ukraine in 2024, the data suggests more than 40% were unpublished after initial publication. Removals were driven by the disappearance of 229 out of 392 notes on posts by Russian government officials or state-run media accounts, based on analysis of posts that were still up on X at the time of writing. It is not uncommon to see instances of pro-Russia voices corralling their followers to collectively vote against a proposed or published note.
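As a rough illustration of how a removal rate like the ones above might be computed, here is a hypothetical sketch; the record layout and field names are assumptions for illustration, not X’s actual data schema:

```python
# Hypothetical sketch: given note records carrying a status history,
# count how often a note that was once rated "helpful" (i.e. published)
# no longer holds that status. Field names are invented.

def removal_rate(notes):
    published = [n for n in notes if "helpful" in n["status_history"]]
    removed = [n for n in published if n["status_history"][-1] != "helpful"]
    return len(removed) / len(published) if published else 0.0

notes = [
    {"id": 1, "status_history": ["needs_more_ratings", "helpful"]},
    {"id": 2, "status_history": ["helpful", "needs_more_ratings"]},  # unpublished
    {"id": 3, "status_history": ["needs_more_ratings"]},             # never published
    {"id": 4, "status_history": ["helpful"]},
]
print(removal_rate(notes))  # 1 of 3 once-published notes lost its status
```

Measured this way, only notes that cleared the consensus bar at least once count toward the denominator, which matches the column’s framing of “published, then removed.”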
[Chart: “Notes on Contentious Topics Are More Likely to Be Removed.” Published notes related to Ukraine or Russia have a higher likelihood of disappearing than notes about other subjects. Source: Bloomberg analysis of 4,684 Community Notes that mention Ukraine, Russia, Kyiv, Moscow, Zelenskiy or Putin.]
Community Notes on Musk’s posts are also more likely than the average to be removed once published. According to data collected by research group Bright Data, of Musk’s 167 noted posts, just 88 still had a note publicly visible at the time of writing. So, while Musk maintains he couldn’t “change a Community Note if someone put a gun to my head,” as he told podcaster Lex Fridman, he often doesn’t need to: His supporters frequently see to it for him.
Reliable Sources Still in Demand
Despite Musk’s support of Community Notes, he has recently signaled annoyance at some of its conclusions.
When Musk shared content alleging President Volodymyr Zelenskiy of Ukraine was polling unfavorably among his citizens, Community Notes users set the record straight (his approval rating is typically above 50% and has risen more recently). Musk lashed out, saying he would “fix” Community Notes because it was “increasingly being gamed by governments & legacy media.”
In truth, our analysis showed that these sources provided the backbone for Community Notes to function. Musk’s frequent attacks on journalism, such as calling for CBS journalists to be jailed, willfully ignore this.
Bloomberg Opinion’s analysis suggests the mainstream media was the leading source of information in published Community Notes between January 2023 and February of this year: Sites categorized by online security group Cloudflare as “news & media” and “magazines” accounted for 31% of links cited within notes. Social networks were the next leading category with 20%, followed by educational sites with 11%.
A closer examination of the top 40 most-referenced domains within Community Notes, which accounted for more than 50% of all notes, showed the sources Musk most maligns are doing essential legwork in providing trustworthy reporting referenced in “helpful” notes. They included the Reuters news agency (“the most deceptive news organization on earth,” Musk said), the BBC (“British Pravda”) and NPR (“run by woke Stasi”).
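A tally of cited domains like the one underlying these figures could be sketched roughly as follows; the note format and helper names here are invented for illustration, not the column’s actual analysis code:

```python
# Rough sketch: extract URLs from note bodies, normalize each to its
# domain, and rank domains by citation count. Invented note format.
from collections import Counter
from urllib.parse import urlparse
import re

URL_RE = re.compile(r"https?://\S+")

def top_domains(note_texts, n=5):
    counts = Counter()
    for text in note_texts:
        for url in URL_RE.findall(text):
            # Strip a leading "www." so www.reuters.com and reuters.com merge.
            counts[urlparse(url).netloc.removeprefix("www.")] += 1
    return counts.most_common(n)

notes = [
    "Misleading. See https://www.reuters.com/article/x and https://en.wikipedia.org/wiki/Y",
    "Original clip: https://www.youtube.com/watch?v=Z",
    "Context: https://www.reuters.com/article/q",
]
print(top_domains(notes))
```

A real version would need more care — deduplicating link shorteners, handling trailing punctuation captured by the regex, and mapping domains to Cloudflare’s categories — but the shape of the computation is the same.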
Cited more often than any other single news source, however, is Wikipedia. The online encyclopedia, touted as the definitive model of how crowdsourced information gathering can provide a reliable resource, has had its funding challenged by Musk and his acolytes, who say the platform is “controlled by far-left activists.” Musk draws little distinction between Wikipedia and the “legacy media,” given Wikipedia’s strict policies on acceptable sources.
One rebuttal to the importance of “legacy media” within Community Notes is that many notes link directly to source material, such as court documents or, particularly in the case of influencer or celebrity gossip, other social media posts. Indeed, the two most cited domains within Community Notes were X.com — meaning other posts on X — and clips on YouTube.com.
Still, an examination of this material shows mainstream media plays an important role. A random sampling of 400 notes citing X posts as a source showed 12% were posts by professional journalists, or directly referenced the work of media organizations. In a sample of 400 notes referencing YouTube clips, mainstream media footage was present in 29%.
Research suggests Community Notes benefits from a curious quirk of human nature: Users seem to more readily believe a stranger on the internet who links to a single New York Times article, for example, than they do the New York Times itself when it offers a fact check directly.
It is the online equivalent of podcaster Joe Rogan searching Google during a show, or a friend pulling out Wikipedia to settle a debate in a bar. But, as our analysis makes clear, for this approach to work, high-quality information must be available. Musk’s attacks, and Meta’s yanking of funding from fact-checking organizations, are damaging this ecosystem. So, too, are the large-scale job cuts being made by many prominent news organizations.
As well as losing money from Meta, international news organizations and fact-checking outfits are sounding the alarm over critical funding shortfalls as a result of Musk’s sweeping DOGE agenda.
The Trump administration has also taken a hacksaw to several government websites that are reservoirs of reliable sources, such as the website for the Centers for Disease Control and Prevention.
Meta Broadens the Experiment
The Community Notes concept will face a bigger test when it is introduced to Meta’s apps — Facebook, Instagram and Threads — which are used by some 3.3 billion people.
From what is known so far, much of it will be operationally identical to how it works on X — though Meta has not committed to publishing data on its performance. In recent years, Meta has taken steps to limit researcher access to audit what takes place on its apps.
Meta’s take on Community Notes uses source code made public by X. (Screenshots provided by Meta)
Another question is whether Zuckerberg can foster the same kind of enthusiasm among his users as Musk has been able to do on X, where Community Notes has benefited from harnessing users’ desire to support Musk’s “free speech” agenda. This enthusiasm extends to doing the kind of work that might be expected of paid moderation staff. In February, almost a third of submitted Community Notes were addressing basic terms of service violations (such as posting gambling advertisements) or warning of scams; in other words, free labor for the network owned by the world’s richest man.
Zuckerberg is a less popular figure than Musk, and much of what is said on Meta’s apps is in more private spaces like groups or instant messaging. Still, more than 200,000 volunteer users had signed up for the new initiative, the company said. Never underestimate the innate human urge to correct someone who is wrong on the internet.
Competing Interests
It may well be that no system could ever work sufficiently well at scale to counteract tech leaders’ opposing incentives to encourage as much highly engaging content as possible.
At the same time that it is ditching its fact-checkers, Meta is boosting its programs for dishing out money to popular creators. YouTube has similar revenue-sharing arrangements with its users. Some of X’s most notorious users often share the thousands of dollars the platform has handed them as a reward for their popular posts. Stories too good to be true, or too shocking to be ignored, are an easy shortcut to attention and success.
All this is happening as X, Meta and Google all rush to promote the use of generative AI tools that make manipulating video and images significantly easier and cheaper. In a relatively short amount of time, AI “slop” has made our information ecosystem murkier.
In tackling the clear-cut cases, Community Notes has been partially effective. When issues are politically contentious, the system becomes paralyzed and weak. It’s in these areas where our information crisis festers, when details are messy and ground truths are harder to establish. Facts can evolve, experts can and do change their minds.
These nuances expose Community Notes’ most glaring weakness. The system allows tech leaders and their companies to wash their hands of the responsibility to adequately police their own platforms, outsourcing as much as they can to users.
To truly stem the spread of misinformation and disinformation on their platforms, social media executives need to remove the incentives that encourage it instead of hiding behind the crowd and hollow proclamations about free speech.
Twenty years ago this past week, YouTube co-founder Jawed Karim posted the very first YouTube video, titled “Me at the Zoo.”
“All right. So here we are, in front of the elephants. The cool thing about these guys is that they have really, really, really long trunks. And that’s cool. … And that’s pretty much all there is to say.”
YouTube was so new that our Charles Osgood had to define it for “Sunday Morning” viewers back in 2006: “A website that lets just about anyone post videos for the whole world to see.”
Today, it doesn’t need explaining. YouTube is the second most-visited website on Earth, after Google, which bought YouTube for $1.65 billion in 2006.
Every single day, we collectively watch more than a billion hours of YouTube videos. Funny videos … how-to videos … cat videos. In these first 20 years, we’ve uploaded 20 billion videos to YouTube.
The most-watched of all? “Baby Shark Dance,” with about 16 billion views.
And people aren’t just watching on their phones. “People watch YouTube more than they watch any other streaming service on their big screens in their living rooms now,” said David Craig, who teaches media and culture at the University of Southern California at Annenberg.
Craig says that a key moment was the day YouTube started paying people for making videos. “YouTube came along and said, ‘Why don’t we give you some advertising revenue in exchange for the fact that you’re helping us grow our service?’” he said.
Today, YouTube roughly splits the ad revenue with the creator, according to Craig: “It does probably change a little bit for some of the bigger-name players out there who they obviously need to make sure are very happy with the service.”
Those bigger-name players include Rhett McLaughlin and Link Neal, creators of a daily show called “Good Mythical Morning.” Thirty-four million subscribers have watched their shows 14 billion times.
McLaughlin described the show’s appeal: “Two old friends hanging out, where you can be the third person in that friendship. We kind of stumbled upon this secret formula for having people come back every single day.”
They may film in a traditional TV studio, but what is the difference between YouTube and TV? “I’d like to say our talent,” Neal laughed.
“A big part of it is responding to the audience,” said McLaughlin. “You’ve got comments, right? So, there’s ways that you can connect with people online.”
David Craig said, “Creators on YouTube, specifically, are not content creators. They are for-profit community organizers. They are using this platform to build online communities that they can build a dozen different business models off of.”
For McLaughlin and Neal, those business models could include tours, books, sweatshirts, hoodies, magnets and pins. “And you can start to go bigger and sell hair products,” said Neal. “If we’re gonna spend as much time as we both spend on our hair, we are going to monetize it!”
Nobody’s monetized it better than Jimmy Donaldson, better known as MrBeast, whose videos of colossal giveaways and physical challenges have made him the most-followed YouTuber of all, with 380 million fans.
Last year, Amazon Prime spent $100 million to produce a MrBeast game show.
I asked David Craig, “Is being a YouTube star now considered a greater ambition than becoming a television star?”
“I hate to tell you this, David, but that’s been the case now for over 10 years,” Craig replied. “They’ve been surveying young people, and they’ve all said they want to grow up to be a creator or an influencer more than a celebrity – or, I’m sorry to say, a journalist.”
Rhett McLaughlin and Link Neal don’t think that the advertising industry has quite caught up with YouTube’s dominance. “If you look at the 18-to-34 age group, we outperform all of the other late-night shows combined,” said Neal. “But if you look at revenue that’s being spent on those shows versus our show, it’s not quite there yet.”
“And honestly, this is one of the reasons that we have really been interested in winning an Emmy,” McLaughlin added. “You know, we’re a part of the cultural conversation, as much as many shows that have won Emmys.”
YouTube’s detractors also worry about the algorithm. It studies which videos seem to grab your attention, and feeds you more videos like them. YouTube has been accused of letting the algorithm lead people to extreme viewpoints.
“We have this enormous diversity of opinions on our platform,” said YouTube CEO Neal Mohan. “We don’t allow adult content. We obviously don’t allow spam and fraud. And we have policies to protect young people and kids on the platform. But it’s fundamentally a platform for freedom of speech.”
So, with YouTube’s 20th anniversary upon us, what are the next few years going to be like? According to Mohan, “One of the areas that I’m very excited about is artificial intelligence. You can tell YouTube when you’re creating a video, ‘Put us in Central Park, and change the background, and have these types of birds because it’s a spring day.’ And that magical technology exists today.”
I asked, “Is there something about evolution or psychology that makes us so interested in watching other people?”
“I think it goes back to we, as human beings, are social beings,” said Mohan. “We connect with other people. We are storytellers. That is what happens billions of times a day on YouTube. And it’s back to our mission: give everyone a voice and show them the world.”
In a 2008 email, Mark Zuckerberg wrote that “it is better to buy than to compete.” Now, the Federal Trade Commission is trying to prove that Zuckerberg applied that same thinking when he acquired Instagram and WhatsApp, thereby snuffing out two emerging competitors to secure Facebook’s social networking supremacy.
This argument is at the heart of the FTC’s long-awaited legal showdown with Meta, which kicked off Monday. The lawsuit was filed during Trump’s first term before being refiled in 2021 under the Biden FTC. It hinges on whether the company violated antitrust law by scooping up companies that threatened its social networking monopoly in what the government calls a “buy or bury” scheme. The case, which is being tried in a Washington, D.C., district court, is expected to last months and will feature testimony from Zuckerberg himself and other top officials, including former chief operating officer Sheryl Sandberg.
“For more than 100 years, American public policy has insisted firms must compete if they want to succeed,” Daniel Matheson, the FTC’s lead litigator, said Monday, according to The New York Times. “The reason we are here is that Meta broke the deal.”
As the FTC makes its case, it will point to Zuckerberg’s words as a smoking gun.
In emails throughout the years, Zuckerberg repeatedly referred to acquisitions as a way of buying Facebook a chance to catch up to what other social networks were doing. After the company’s failed attempt to acquire Twitter, for one, Zuckerberg wrote in an email in 2008, “I was looking forward to the extra time that would have given us to get our product in order.” Four years later, as he made the case for buying Instagram, he wrote, “[W]hat we’re really buying is time,” noting that buying Instagram would “give us a year or more to integrate their dynamics before anyone can get close to their scale again.”
The FTC’s lawsuit points to other executives’ communications as well. After Facebook acquired Instagram for $1 billion, emails showed Instagram co-founder Kevin Systrom asking why Facebook was limiting promotion of his app. In response, another Facebook executive said that the company’s vice president of product Chris Cox, was concerned “about Instagram’s feed cannibalizing our own.” All of this, the FTC argues in the suit, amounts to clear evidence that “Facebook neutralized Instagram as an independent competitor.”
Zuckerberg’s emails suggest a similar strategy, the FTC argues, when it came to acquiring WhatsApp. Fresh off the Instagram purchase, Zuckerberg wrote in an email, “WhatsApp is already ahead of us in messaging in the same way Instagram was ‘ahead’ of us in photos” and said he would “pay $[1 billion] for them if we could get them.” Two years later, Facebook bought WhatsApp for $19 billion. “For the second time in two years, Facebook employees celebrated the neutralization of an existential competitive threat,” the FTC’s complaint reads.
The company now known as Meta has called the FTC’s case “weak.” In a blog post Sunday, the company’s chief legal officer Jennifer Newstead wrote that both Instagram and WhatsApp have become “better, more reliable and more secure” under Meta’s ownership. Newstead also pointed to the challenges inherent to the FTC’s case. To prove that Meta harmed competition to protect its monopoly, the government must first define the market that Meta dominates. The FTC has tried to define that market narrowly so as not to include competitors like TikTok or YouTube because including those other giants would make it far more difficult for the FTC to argue that Meta controls a dominant share of the market.
“They’ve gerrymandered a fictitious market in which Facebook and Instagram compete only with Snapchat and an app called MeWe,” Newstead wrote. “In reality, more time is spent on TikTok and YouTube than on either Facebook or Instagram.”
For Meta, the stakes of the case couldn’t be higher, which might have something to do with Zuckerberg’s recent MAGA transformation. A loss in court could mean the breakup of the company, forcing Meta to divest WhatsApp and Instagram. Earlier this month, Zuckerberg reportedly sought to avoid that possibility by lobbying Donald Trump to settle the case during a meeting at the Oval Office. But in an interview with Bloomberg last month, FTC chair Andrew Ferguson said, “We don’t intend to sort of take our foot off the gas.”
Mary Kate Cornett, a then-18-year-old student at the University of Mississippi, moved into emergency campus housing not long after sports talk show host Pat McAfee, whose ESPN show has 2.8 million subscribers on YouTube, spread a wholly unsubstantiated and vicious rumor on a February broadcast about an unnamed freshman on that campus who, he said, “allegedly” had sex with her boyfriend’s father.
When a phone number for the teenager, who vehemently denies the rumor, circulated online, she began receiving hateful messages, including messages instructing her to kill herself. In what NBC News confirmed was a “swatting” case, police showed up to Cornett’s mother’s house with their guns drawn. Cornett and her family told NBC News they intend to take legal action against McAfee, and against ESPN, which licenses his show, for amplifying a nasty rumor that has made their lives hell.
Thus, McAfee is once again embroiled in a conversation about sports media, “journalistic standards” and the responsibility that comes with a platform as enormous as his. Cornett spoke about her ordeal this month, first for a lengthy piece by The Athletic’s Katie Strang, and then later to NBC News’ Tom Llamas.
Cornett is the victim of a sports media environment that prioritizes salaciousness and seems disinterested in distinguishing between what’s true and what’s false. But as she rightly told NBC News, she’s not a public figure, and McAfee should have never amplified a campus rumor that seems to have originated on Yik Yak, an anonymous, message-based gossip app popular among the college set, before spreading to X. And no responsible adult, especially not one with an audience of millions, should be mining social media for salacious rumors to discuss nonpublic figures. Even nonjournalists used to agree that some subjects were off-limits, especially private citizens and children.
McAfee appeared to address the controversy for the first time in a live show Wednesday night, saying he never wants “to be a part of anything negative in anybody’s life,” though he did not elaborate further. Neither McAfee nor ESPN has commented more explicitly on the case, but McAfee’s defenders are quick to note that he didn’t name the woman during the segment and that he repeatedly said “allegedly” — as if that automatically absolves him of responsibility when discussing a nonpublic figure in front of his millions of followers. In the past, McAfee, who has a history of amplifying misinformation, has repeatedly denied being a journalist and has mocked the idea that he be held to “journalistic standards.”
There’s therefore a slight irony in his repeated, almost derisive use of the word “allegedly”: It’s a convention almost exclusively used by journalists and, at times, law enforcement and legal professionals, to hedge while discussing accused crimes. (It should also be noted there’s considerable debate among journalists, especially those of us who often cover gender-based violence, about the use of “allegedly” when covering domestic violence or sexual assault cases; some contend that the word confers disbelief and doubt toward accusers.) Still, despite leaning on that journalistic convention, McAfee insists that he not be held to journalistic standards.
I’d argue that regardless of the name or size of the platform, everyone with a microphone should have the human decency not to parrot unsubstantiated rumors involving nonpublic figures — especially nonpublic figures who are teenagers. That goes double when you have the institutional backing of an entity like ESPN. But for too long there’s been a blurring of the line between journalists and entertainers, within sports media in general, including at ESPN. Full disclosure: I used to write for ESPN and appear on the network’s shows, and can confidently assert that the network employs numerous journalists and entertainers who are very good at their jobs.
During the past year, in response to criticisms of McAfee and his apparent allergy to fact-checking, ESPN has said the company does, in fact, “bear some responsibility” for what gets put on its platform. ESPN licenses McAfee’s show, so he’s technically not an employee, although that does not automatically negate any potential legal exposure for ESPN over things McAfee says on its airwaves.
In November, MSNBC’s Chris Hayes called out McAfee and NFL quarterback Aaron Rodgers when they cited a made-up stat that claimed Detroit Lions quarterback Jared Goff was 6-0 in games where he’d thrown at least four interceptions. After McAfee and Rodgers credulously spotlighted it, X user MisterCiv, the person who made the original post, wrote, “if you’ve ever wondered how easy it is to spread fake information, i made this stat up while laying in bed at halftime of the game.”
As Hayes said then, “Thankfully, this is a totally harmless example of disinformation and the only consequence was McAfee getting embarrassed and having to walk it back. But what happened in that exchange between McAfee and Aaron ‘Do your own Research‘ Rodgers is basically the entire story of our information environment right now.”
But McAfee devoting more than two minutes to discussing a rumor about a father-son-girlfriend love triangle wasn’t harmless. Mary Kate Cornett says his amplification of that lie upended her life.
We can’t continue to give people a pass on the responsibility that comes with their platforms. Cornett’s case is a stark example of how being flippant and unconcerned with the truth can hurt people, even if they aren’t named.