Category: Social Media

  • Big Social Media Platforms Agree to Independent Teen Safety Ratings

    Three leading social media companies have agreed to undergo independent assessments of how effectively they protect the mental health of teenage users, submitting to a battery of tests announced Tuesday by a coalition of advocacy organizations.

The platforms will be graded on whether they mandate breaks and provide options to turn off endless scrolling, among a host of other measures of their safety policies and transparency commitments. Companies that reviewers rate highly will receive a blue shield badge, while those that fare poorly will be branded as unable to block harmful content. Meta, which operates Facebook and Instagram; TikTok; and Snap are the first three companies to sign up for the process.

    “I hope that by having this new set of standards and ratings it does improve teens’ mental health,” said Dan Reidenberg, managing director of the National Council for Suicide Prevention, who oversaw the development of the standards. “At the same time, I also really hope that it changes the technology companies: that it really helps shape how they design and they build and they implement their tools.”

Teenagers represent a coveted demographic for social media sites, and the new standards come as the tech industry faces increasing pressure to better protect young users.

    A wave of lawsuits alleges that leading firms have engineered their platforms to be addictive. Congress is weighing a suite of bills designed to protect children’s safety online. And state lawmakers have sought to impose age limits on social apps.

    But those efforts have borne little fruit. Some legal experts argue teens and their families may face difficulty in court cases proving the connection between social media use and their struggles. Officials in Washington, meanwhile, have been unable to agree on how to regulate the industry and laws passed by the states have run into First Amendment challenges.

    The voluntary standards represent an alternative approach. Reidenberg said in an interview that the ratings are not a substitute for legislation but will be a helpful way for teenagers and parents to decide how to engage with particular apps. The project is backed by the Mental Health Coalition, an advocacy group founded by fashion designer Kenneth Cole.

    Cole said in a statement that the standards “recognize that technology and social media now play a central role in mental health — especially for young people — and they offer a clear path toward digital spaces that better support well-being.”

    There is still no scientific consensus on whether social media is on the whole harmful for children and teenagers. While some research has found that the heaviest users have worse mental health, studies have also found that young people who are not online can also struggle. But teenagers themselves have reported becoming more uneasy about the time they spend online, with girls in particular telling pollsters at the Pew Research Center in 2024 that apps were affecting their self-confidence, sleep patterns and overall mental health.

    Reidenberg said it’s clear that in some cases young people’s time online becomes problematic. He said the system was developed without funding from the tech industry, but companies will have to volunteer to participate.

    Antigone Davis, Meta’s global head of safety, said the standards will “provide the public with a meaningful way to evaluate platform protections and hold companies accountable.” TikTok’s American arm said it looked forward to the ratings process. Snap called the Mental Health Coalition’s work “truly impactful.”

Organizers compared the process to how Hollywood assigns age ratings to movies or the government assesses the safety of new cars. Companies will submit internal policies and designs for review by outside experts who will develop their ratings. In all, the companies’ performance will be measured in about two dozen areas covering their policies, app design, internal oversight, user education and content.
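The roll-up described above (scores across roughly two dozen areas condensed into a single badge decision) can be sketched as follows. The area names, the scores, and the badge threshold here are all hypothetical, since the coalition has not published its exact scoring formula; this only illustrates the general shape of such a rubric.

```python
from statistics import mean

# Hypothetical area scores (0-100) for one platform; the real rubric
# covers about two dozen areas spanning policy, app design, internal
# oversight, user education, and content.
area_scores = {
    "break_reminders": 80,
    "infinite_scroll_opt_out": 65,
    "self_harm_content_filtering": 90,
    "transparency_reporting": 70,
}

BADGE_THRESHOLD = 75  # hypothetical cutoff for the blue shield badge

overall = mean(area_scores.values())
badge = "blue shield" if overall >= BADGE_THRESHOLD else "flagged"

print(f"overall score {overall:.1f} -> {badge}")
```

A real scheme would likely weight areas unevenly (for example, weighting self-harm content handling more heavily than user education), which would replace `mean` with a weighted sum.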

Many of the standards specifically target users’ exposure to content about suicide and self-harm. But one also targets the sheer length of time that some people spend scrolling, crediting platforms for offering either voluntary or mandatory “take-a-break” features.

    The standards are being launched at an event in Washington on Tuesday. Sen. Mark R. Warner (D-Virginia) said in a statement that he welcomed the standards but they weren’t a substitute for regulatory action.

    “Congress has a responsibility to put lasting, enforceable guardrails in place so that every platform is held accountable to the young people and families who use them,” he added.

  • BBC to Apologize After Broadcasting Edited Version of Donald Trump Speech

    Panorama ‘completely misled’ viewers with its coverage of Donald Trump’s Capitol Hill speech, a report found. © Shawn Thew/EPA/Bloomberg

    The BBC will apologise for the misleading editing of a Donald Trump speech in a Panorama documentary, the Telegraph can disclose.

    Samir Shah, the BBC’s chairman, will write to the culture, media and sport committee on Monday to express regret for the way the speech, made on the day of the Jan 6 2021 Capitol riot, was spliced together.

    The apology will heap further pressure on Tim Davie, the BBC’s director general, to quit over an 8,000-word dossier compiled by a whistleblower that alleged widespread bias within the corporation.

    The Telegraph has previously disclosed that both Mr Davie and Mr Shah were warned of the doctored footage in May but appear to have kept quiet.

    The decision to issue an apology now raises questions about why it has taken them six months to admit viewers were misled.

    The Telegraph understands the apology will be for the misleading editing of the Trump speech. It is not clear what Mr Shah will say about the coverage of the Gaza war or alleged bias in the BBC’s reporting on gender, but it is understood that he may also advocate changes to the management and oversight of BBC Arabic.

    The Panorama episode, broadcast a week before the 2024 US election, “completely misled” viewers, according to the memo written by Michael Prescott, a former standards adviser to the BBC.

    His memo was circulated amongst senior managers, who “refused to accept there had been a breach of standards”.

    Mr Prescott is then understood to have warned Mr Shah of the “very, very dangerous precedent” set by Panorama, but received no reply.

    The existence of the dossier and its contents were revealed by The Telegraph last week, prompting calls from senior politicians, including the former prime minister Boris Johnson, for Mr Davie to resign.

    On Friday night, the White House accused the BBC of “purposeful dishonesty”, claiming it was a “Leftist propaganda machine”.

    The dossier also highlighted anti-Israel bias, especially in coverage of the war in Gaza, on its dedicated BBC Arabic news service.

    Sir Vernon Bogdanor, Britain’s foremost constitutional expert, also called on Mr Davie to resign with “immediate effect” on Saturday.

    The academic, a former professor of government at the University of Oxford, said the broadcaster had “ignored” a separate report he had sent to it, warning of distortion and bias in its reporting on Gaza.

    The Telegraph has been told that Mr Shah’s apology for misleading viewers on the editing of Mr Trump’s speech will be contained in a letter sent to Dame Caroline Dinenage, the chairman of the culture, media and sport committee.

    It is likely to raise questions over whether Mr Shah and Mr Davie tried to cover up internal concerns over the Trump edit, given that they are only now apologising in the face of intense media scrutiny.

    Danny Cohen, a former director of BBC Television, said on Saturday night: “It is extraordinary that the BBC’s leadership has been missing in action for a week amidst this growing crisis.

    “Both BBC director general Tim Davie and chairman Samir Shah were in the room when the faked Trump video was raised as a serious problem six months ago. This makes it very hard for them to excuse away the scandal.”

    In his report, Mr Prescott wrote: “Examining the charge that Trump had incited protesters to storm Capitol Hill, it turned out that Panorama had spliced together two clips from separate parts of his speech. This created the impression that Trump said something he did not and, in doing so, materially misled viewers.”

    ‘The BBC has become the story’

    In an email sent to news staff on Friday evening, Deborah Turness, the chief executive of BBC News and Current Affairs, appeared to lay the ground for the apology. She said in her email: “I’m writing to you today because it’s always difficult when the BBC becomes a story – as it has, in some quarters, this week.”

    She went on: “You will all have seen the news coverage following the leaking of a letter to the BBC board from Michael Prescott, who is a former adviser to the BBC’s editorial guidelines and standards committee (EGSC). The EGSC is a sub-committee of the BBC board.”

She said the BBC had received a letter from Dame Caroline “seeking reassurance from the BBC”, adding: “The chairman will be providing a full response on Monday, and this will be shared with you, but I felt it was important for me to come to you as CEO of BBC News before the end of the week.”

    In a statement, a BBC spokesman said on Saturday night: “The BBC chairman will provide a full response to the culture, media and sport committee on Monday.”

    ‘Serious manipulation’

    Sir John Whittingdale, the former culture secretary, in an interview with Radio 5 Live on Saturday night, said: “The BBC does great work and I’m a huge supporter of the BBC World Service, its investigative journalism has been outstanding. But all of that has been threatened in the case of the Trump speech.

    “It’s a very serious manipulation to present a picture that is not accurate and that will cast doubt on everything that the BBC says.”

    Sir John, who is MP for Maldon, said the “buck stops” with Mr Davie.

    He added: “I think part of the problem is that the director general also has the title of editor-in-chief. Ultimately he is responsible and previous director generals have had to resign.

    “If Tim Davie is to continue he has got to show that he recognises what a serious threat to the reputation of the BBC this is and to show that he is going to act very swiftly and make sure things improve and that it can’t happen again.”

    On being asked if he thought Mr Davie’s job was under threat, Sir John said: “Yes I do.”

    He added: “There are already people saying that the director general will have to resign.”

‘We need to listen and learn’

    Nick Robinson, presenter of the BBC Today programme, said on X: “We live in a time of deep divisions – about politics and culture – Gaza/Israel, trans and women’s rights, Donald Trump’s policies and politics – to name just three.

    “The BBC like many public organisations faces competing pressures about how we navigate these treacherous waters.

    “We, like others, need to listen and learn. We can and will do better but we should stand up to those who prefer propaganda and disinformation.

“I look forward to hearing what the chairman of the BBC will say in response to legitimate concerns which have been raised but I have no idea what he plans to say nor did he – or any of my other bosses – know what I said on air today or here on X.”

  • Charlie Kirk’s Assassination Marks a Turning Point in America

    As the nation grapples with the news that conservative activist and commentator Charlie Kirk was gunned down in cold blood while conducting one of his signature campus debates at Utah Valley University, it is of paramount importance for our political leaders both to recognize the political moment we are in and to try to defuse a potentially combustible situation. While conservatives will be tempted to demonize whoever the deluded shooter turns out to be, and liberals will undoubtedly call for more ineffective gun control legislation, neither reaction can hope to lead to anything productive.

    Another tribute is in order to honor the memory of Kirk, who was one of the first to recognize that the radicalization of the left has been driven largely by economic disenfranchisement, a point that often goes unacknowledged by leading intellectual voices on the political right. There is a particular ideological bias that one notices in conservatives above a certain age (let’s say 45) who tend to dominate the positions of leadership in the right-leaning political organizations and think tanks now often referred to as “Conservative, Inc.” This bias is characterized by a certain disdain for the materialism and softness of young people who, having grown up in the wealthiest nation in the world with a prosperity unrivaled in human history, are simply unable to grasp the value of commitment, hard work, and the importance of moral virtue as the path to success in life. Spoiled, coddled, and ignorant of the struggles of previous generations, they feel like they are entitled to a prestigious position in the professional world and a valued social status without having to work to attain it. The problem is, according to this view, the collapse of a strong, coherent moral code and the older understanding that the sequence of success is one that demands self-denial and the deferment of gratification. These damn young people think the world owes them a living.

    Kirk was uniquely clear-eyed in seeing this as not only a disastrous political analysis but also as a whitewash of the terrible policy choices that have caused so many Americans, particularly young people, to give up on the American Dream. While it is certainly true that the collapse of traditional Christian morality and the institutions that support it over the last 50 years has significantly eroded the stability of marriage, family, communities, and other mediating institutions that have historically served as bulwarks against tyranny, the Conservative Inc. caricature of Gen Z as rich, lazy, and entitled is hardly a complete—or fair—assessment of the political and social reality we face. The truth is that Gen Z, by a whole variety of measures, faces a much tougher socio-economic reality than any other American generation in memory. The decision to get married and form a family is not only negatively affected by the decline of the Christian ethic, but it is discouraged by an economy and a social structure that is no longer working for young people. The reality is that America, circa 2025, is not conducive to affordable family formation, and Gen Z is well aware of this. Even many of their parents are waking up to the fact that the American Dream of providing the next generation with greater prospects of professional achievement and material wealth than the last seems to be on life support.

    The measures are all around us. College admission has become increasingly more competitive, and college itself is less affordable for the average American student. At the same time, the market value of a university degree and the guarantee of a lucrative career track based on that degree have steadily declined. The job market in the U.S. has changed drastically over the last four decades, with mass immigration pushing down wages and the outsourcing of production and labor abroad eliminating opportunities. Since 1981, the median age of homebuyers, the best measure of affordability and an ownership stake in the community, has gone from 31 to 56 today. The median age of first-time homebuyers has gone from 28 to 38 during the same time. Since the 1980s, stock market wealth has become increasingly concentrated among older Americans, and the percentage of young Americans with stock ownership has plummeted since the financial crisis of 2008. The recent public relations campaign of the World Economic Forum, selling the notion to young people in Western societies that “you will own nothing and be happy” because of all the efficiency and ease of a modern technological economy, seems to have been adopted wholesale by the American managerial elite responsible for creating the perverse incentives that have created and handed down a system characterized by degraded prospects of advancement and stability for people at the prime age of forming families.

We are seeing the rise of an entire generation of dispossessed modern-day serfs in a rental economy full of technological toys and conveniences that distract them from the fact that they own nothing, living in a country where things are visibly degrading, in contrast to the hopeful, upwardly mobile economy of previous American generations. And this is where Conservative Inc. gets it drastically wrong. Our young people may be softer, more addicted to material comfort, and less able to do things for themselves than previous generations, but that doesn’t mean they possess real wealth or any ownership stake in society whatsoever. That softness is largely a product of technological progress and an affluent society, one in which wealth is increasingly concentrated at the top among the ownership elite who reap all the gains of economic growth at the expense of the hollowed-out middle class.

    Kirk argued that the more that the rising generation concludes that the current system is not working for them, the more prone they are to endorse radical, revolutionary, “tear down the system” political solutions that are purely destructive, including socialism and communism. Part of the appeal of Donald Trump and the MAGA movement is this same fundamental perception that the current system is not working and needs to be radically revamped. But unless we Make Family Formation Affordable Again, we are going to lose the political and policy argument, especially if we focus exclusively on moral decline as the root of the problem. It’s not what you would call a winning message to castigate the rising generation of voters as lazy, whiny, entitled brats who deserve their fate.

    There can be no better tribute to Kirk—who literally gave his life attempting to reintegrate young people into the American system and to get Republicans to wake up to the dangers of radicalization if they failed to do so—than for President Trump and the GOP Congress to enact a Charlie Kirk Act for Affordable Family Formation. Republicans in Washington need to seize the moment so that some good can come of this horrific incident.

  • The images of starvation in Gaza are deeply misleading

    It’s one of the most emotionally searing images circulated in recent months: a malnourished child behind a fence, desperate eyes piercing through the camera lens, with a woman stretching out a bowl for food. It’s been published by international media, invoked by politicians, and shared by millions online. It has come to symbolize, for many, the reported famine in Gaza.

    But there’s just one problem. The photo’s origin and context are hotly disputed — and increasingly, experts say, deliberately manipulated.

    Earlier this week, Israeli Prime Minister Benjamin Netanyahu told his 3.4 million followers on X:

    “There is no starvation in Gaza, no policy of starvation in Gaza.”

    His remarks unleashed a digital firestorm. Former President Donald Trump broke ranks with his usual ally and responded:

    “There is real starvation in Gaza. You can’t fake that.”

    This rare division between two strong allies laid bare the intensifying war not just over territory, but over information — a propaganda war playing out across social media, newsrooms, and governments.

    Hamas’s Propaganda Machinery and Media Blindness

    Many analysts and security experts argue that Hamas is adept at exploiting global sympathy through carefully staged imagery. Images of skeletal children, overwhelmed hospitals, and food queues are frequently disseminated, often with little journalistic scrutiny.

    Take, for instance, the viral image of a girl at a community kitchen. On X (formerly Twitter), thousands of users — aided by Elon Musk’s AI chatbot, Grok — claimed the photo was from 2014, portraying a Yazidi girl fleeing ISIS in Iraq.

Claims on social media said this photo was taken in 2014 in Iraq or Syria. In fact it was taken in Gaza City, northern Gaza Strip, on Saturday, July 26, 2025, showing Palestinians struggling to get donated food at a community kitchen. © AP Photo/Abdel Kareem Hana

    Grok responded:

    “Yes, the photo is from August 2014… on Mount Sinjar in Iraq.”

    Citing Reuters, it labeled the image a case of repurposed content.

    A girl from the minority Yazidi sect, fleeing the violence in the Iraqi town of Sinjar, rests at the Iraqi-Syrian border crossing in Fishkhabour, Dohuk province on August 13, 2014. © Youssef Boudlal—REUTERS

    But BBC Verify journalist Shayan Sardarizadeh debunked that claim. He identified the photo’s true source:

    “The image is from Gaza, taken on July 26, 2025, by AP photographer Abdel Kareem Hana.”

    Reverse image tools like TinEye confirmed the original publication date and location. Grok was simply wrong.
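Reverse-image services of this kind generally work by comparing compact perceptual fingerprints rather than raw pixels: a re-encoded or resized copy of a photo produces a hash differing from the original in only a few bits, while an unrelated image differs in roughly half of them. A minimal sketch of that comparison step, using hypothetical 64-bit hash values (in a real tool, these would be computed from the image data itself):

```python
def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Count differing bits between two 64-bit perceptual hashes."""
    return bin(hash_a ^ hash_b).count("1")

def likely_same_image(hash_a: int, hash_b: int, threshold: int = 10) -> bool:
    # Near-duplicates differ in only a few bits; unrelated images
    # differ in roughly half of the 64 bits.
    return hamming_distance(hash_a, hash_b) <= threshold

# Hypothetical hashes: a photo, a recompressed copy, and an unrelated image.
original   = 0xF0E1D2C3B4A59687
reuploaded = 0xF0E1D2C3B4A59686  # one bit flipped by recompression
unrelated  = 0x0F1E2D3C4B5A6978

print(likely_same_image(original, reuploaded))  # True
print(likely_same_image(original, unrelated))   # False
```

This is why such lookups can confirm a first-publication date even when the recirculated copy has been cropped or recompressed, and why they are more reliable for provenance checks than asking a chatbot.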

    As Sardarizadeh noted:

    “AI chatbots, including Grok, are not fact-checking tools and should not be used for that purpose, particularly in relation to breaking and developing events.”

    Still, damage was done. The manipulated claim was spread, repeated, and believed by many — a clear example of how quickly misinformation can overshadow the truth.

    The Case of Mohammed Zakaria al-Mutawaq

    Another image that shocked global audiences was that of 18-month-old Mohammed Zakaria al-Mutawaq. Published by The New York Times in a piece titled “Gazans Are Dying of Starvation”, the toddler was described as emaciated, with his father reportedly killed while searching for food.

“As an adult, I can bear the hunger, but my kids can’t,” his mother was quoted as saying.

    But investigative journalist David Collier quickly raised flags. He cited medical records showing Mohammed suffered from severe genetic disorders since birth and had required special supplements even before the war began.

    In response, The New York Times issued an editor’s note:

    “We have since learned new information… and have updated our story to add context about his pre-existing health problems.”

    They noted that while Mohammed’s condition had worsened due to the lack of medical care, his malnutrition was compounded, not caused, by the current war.

    To critics, the update wasn’t enough.

    “So you guys lied, got called out, and issued a complete non-apology,” one user posted on X.

    On Wednesday, a UN-backed food security task force warned that famine “is currently playing out” in Gaza. Their analysis said Gaza City had crossed famine thresholds for food consumption and acute malnutrition.

    The Hamas-run Gaza Health Ministry reports 154 deaths from hunger since October 2023 — including 89 children. However, critics question the credibility of the ministry’s figures, noting its alignment with Hamas and history of inflated or unverifiable statistics.

    Meanwhile, UN Secretary-General António Guterres called the situation “a humanitarian catastrophe of epic proportions.” Human rights organizations, including Israel-based B’Tselem and Physicians for Human Rights, claim Israel is committing genocide through starvation, mass displacement, and bombings.

    Yet at the same time, The New York Times also recently reported Israeli military officials denying Hamas’s alleged theft of UN aid — suggesting the crisis may be more due to distribution chaos, logistical breakdowns, and internal Hamas mismanagement than direct Israeli policy.

    A Media Reckoning Is Overdue

The Western media’s responsibility in this tragedy cannot be ignored. In the rush to file emotionally evocative stories, due diligence has often been sacrificed. As newsroom editorial standards make clear, verifying visual content, especially in wartime, is not optional; it is essential.

    “Every journalist must ask: Who took this photo? Where? When? Under what conditions?”

    Hamas has repeatedly demonstrated it will exploit suffering for propaganda. That doesn’t mean suffering isn’t real — but it does mean every claim must be thoroughly scrutinized. Too often, however, global outlets like The New York Times, The Guardian, and Stuff have published without confirmation, only issuing updates days later.

    Starvation in Gaza may well be occurring. Humanitarian groups have sounded the alarm. But in a media landscape rife with misinformation, every image, every anecdote must be questioned — not to deny suffering, but to preserve the truth.

    Because when lies masquerade as evidence, the real victims — whether Palestinian civilians or the truth itself — are the ones who suffer the most.

  • Meta won its AI copyright case, but the judge indicated that other lawsuits on the matter are still possible

    Meta on Wednesday prevailed against a group of 13 authors in a major copyright case involving the company’s Llama artificial intelligence model, but the judge made clear his ruling was limited to this case.

    U.S. District Judge Vince Chhabria sided with Meta’s argument that the company’s use of books to train its large language models, or LLMs, is protected under the fair use doctrine of U.S. copyright law.

    Lawyers representing the plaintiffs, including Sarah Silverman and Ta-Nehisi Coates, alleged that Meta violated the nation’s copyright law because the company did not seek permission from the authors to use their books for the company’s AI model, among other claims.

    Notably, Chhabria said that it “is generally illegal to copy protected works without permission,” but in this case, the plaintiffs failed to present a compelling argument that Meta’s use of books to train Llama caused “market harm.” Chhabria wrote that the plaintiffs had put forward two flawed arguments for their case.

    “On this record Meta has defeated the plaintiffs’ half-hearted argument that its copying causes or threatens significant market harm,” Chhabria said. “That conclusion may be in significant tension with reality.”

    Meta’s practice of “copying the work for a transformative purpose” is protected by the fair use doctrine, the judge wrote.

    “We appreciate today’s decision from the Court,” a Meta spokesperson said in a statement. “Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology.”

    Though there could be valid arguments that Meta’s data training practice negatively impacts the book market, the plaintiffs did not adequately make their case, the judge wrote.

Attorneys representing the plaintiffs said in a statement that they “respectfully disagree” with the decision.

    “The court ruled that AI companies that ‘feed copyright-protected works into their models without getting permission from the copyright holders or paying for them’ are generally violating the law,” the statement said. “Yet, despite the undisputed record of Meta’s historically unprecedented pirating of copyrighted works, the court ruled in Meta’s favor.”

    Still, Chhabria noted several flaws in Meta’s defense, including the notion that the “public interest” would be “badly disserved” if the company and other businesses were prohibited “from using copyrighted text as training data without paying to do so.”

    “Meta seems to imply that such a ruling would stop the development of LLMs and other generative AI technologies in its tracks,” Chhabria wrote. “This is nonsense.”

    The judge left the door open for other authors to bring similar AI-related copyright lawsuits against Meta, saying that “in the grand scheme of things, the consequences of this ruling are limited.”

    “This is not a class action, so the ruling only affects the rights of these thirteen authors — not the countless others whose works Meta used to train its models,” he wrote. “And, as should now be clear, this ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”

    Additionally, Chhabria noted that there is still a pending, separate claim made by the plaintiffs alleging that Meta “may have illegally distributed their works (via torrenting).”

Earlier this week, a federal judge ruled that Anthropic’s use of books to train its AI model Claude was also “transformative,” thus satisfying the fair use doctrine. Still, that judge said that Anthropic must face a trial over allegations that it downloaded millions of pirated books to train its AI systems.

    “That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft, but it may affect the extent of statutory damages,” the judge wrote.

  • Trump Gives TikTok Another 90-Day Extension to Comply With Sale-or-Ban Order

    TikTok just got another lifeline from the White House, with President Donald Trump set to delay enforcement of the sale-or-ban law by another 90 days.

    “President Trump will sign an additional Executive Order this week to keep TikTok up and running,” Karoline Leavitt, White House press secretary, said in a statement on Tuesday. “As he has said many times, President Trump does not want TikTok to go dark. This extension will last 90 days, which the Administration will spend working to ensure this deal is closed so that the American people can continue to use TikTok with the assurance that their data is safe and secure.”

    On Thursday, Trump confirmed that he’d signed an executive order delaying enforcement of the law by 90 days in a Truth Social post. The deadline for TikTok parent company ByteDance to hand over control of TikTok’s US operations is now September 17.

It’s been about five months since a law requiring TikTok to be banned in the United States unless it’s sold off by its China-based parent company technically went into effect. But thanks to President Donald Trump’s promises not to enforce the law, neither of those things has happened, aside from an approximately 14-hour blackout in January. Tuesday’s announcement marks Trump’s third extension of the ban.

    The announcement means that the app will remain accessible for its 170 million American users despite the legislation that passed last year with bipartisan support over concerns that TikTok’s Chinese ownership poses a US national security risk. And it comes as both the United States and China seek leverage in tense trade talks, in which TikTok appears to have become a bargaining chip.

    The TikTok sale-or-ban law went into effect on January 19 after it was signed by former President Joe Biden last year. TikTok briefly took itself offline, sparking outcry from creators, but quickly came back after Trump signed an order delaying the ban’s enforcement by 75 days. It was one of his first acts as president, made in hopes of reaching a deal to keep the app “alive.”

    In April, a deal that would have transferred majority control of TikTok’s US operations to American ownership was nearly finalized. But it fell apart after Trump announced additional tariffs on China, forcing the president to announce another 75-day delay to keep the app operational in the United States.

    “There are key matters to be resolved. Any agreement will be subject to approval under Chinese law,” TikTok parent company ByteDance said after Trump’s tariff policy stalled progress on the deal in April.

    That pause was set to expire on June 19, before Trump’s Thursday executive order.

    Trump’s latest enforcement delay raises questions about the status of a deal that could secure TikTok’s long-term future in the United States. The Chinese government has offered little public indication that it would be willing to approve a sale beyond suggesting that any deal could not include TikTok’s “algorithm,” which has been called the app’s secret sauce.

    In a statement on Thursday, TikTok indicated that it is still in talks with the office of Vice President JD Vance — whom Trump appointed to oversee the effort — on a deal that would secure the popular short-form video platform’s future in the United States.

    “We are grateful for President Trump’s leadership and support in ensuring that TikTok continues to be available for more than 170 million American users and 7.5 million U.S. businesses that rely on the platform as we continue to work with Vice President Vance’s Office,” TikTok said in a statement.

    The new extension comes after the United States and China agreed on a framework to ease export controls, a move expected to reduce tensions and prevent further escalation of export and other restrictions between the two countries. It’s not clear whether a TikTok deal is included in the framework, but cooperation between the two sides could make an agreement to transfer control of the app to a US buyer more likely.

    Earlier on Tuesday, Trump told reporters that a TikTok deal would “probably” require approval by the Chinese government and said, “I think we’ll get it.”

    “I think President Xi will ultimately approve it, yes,” the US president added.

    The deal that had been in the making earlier this year would have involved several American venture capital funds, private equity firms and tech giants investing in a company that would control TikTok’s US operations. TikTok’s China-based owner, ByteDance, would have retained a 20% stake in the spinoff company — a key stipulation of the law.

    Several other high-profile bidders had also put their hands up to acquire the platform, including a group led by billionaire Frank McCourt and “Shark Tank”-famous investor Kevin O’Leary, Amazon, AI firm Perplexity and a separate group of investors that included YouTube and TikTok star Jimmy Donaldson, known online as MrBeast.

    It was Trump who first tried to ban TikTok during his previous administration, but he has said he changed his mind after he “got to use it.” TikTok CEO Shou Chew attended Trump’s inauguration, seated on stage alongside Cabinet secretaries and other tech CEOs.

  • Teens’ Social Media Feeds Are Flooded With Junk Food Ads

    Teens’ Social Media Feeds Are Flooded With Junk Food Ads

    Junk food ads are flooding your teenager’s social media feeds and it’s influencing what they choose to eat. (Jene Young/The NewYorkBudgets)

    Social media’s harmful impact on the mental health of children and teenagers is well documented.

    Now, new research suggests that the widespread marketing of unhealthy food and drinks on social media is influencing the food choices of young people and potentially impacting their physical health.

    A University of Oxford team found “strong and consistent evidence” that digital marketing of unhealthy foods and drinks is widespread on social media, and that it influences children and teenagers.

    And a recent study led by the University of Queensland found that problematic and excessive social media use is linked to young teens’ increased consumption of sweets and sugar, as well as the tendency to skip breakfast.

    So, what is going on with social media and children’s diet? And what are the links?

    Teens regularly exposed to junk food ads

    Australian GP Isabel Hanson, from the research team behind the Oxford study, says that when young people see junk food being marketed on platforms like Instagram, YouTube or TikTok, it affects what they want to eat.

    “My co-authors and I reviewed studies from around the world and saw a clear pattern: kids and teens are regularly exposed to marketing for foods high in sugar, salt and fat, often without realising it,” she says.

    The marketing of unhealthy foods to children is unregulated, except in South Australia, which has banned the advertising of junk food on public transport. (Pexels/Pixabay)

    One of those studies found Australian children aged 13 to 17 are exposed to 17 food ads each hour, with an average of almost 170 per week.

    “This exposure shapes their preferences, increases their desire for those foods, and can lead to higher consumption.”

    It’s something she sees play out in her work as a GP.

    “Young people who grow up in environments filled with lots of screen time, social media, and exposure to advertising often have poorer diets and can struggle with their weight,” she says.

    “Of course, there are lots of factors at play, but [social media] is one we can do something about.”

    ‘Harder to resist’

    Asad Khan led the University of Queensland study that reviewed the data of 223,000 adolescents aged 13 to 14 from 41 countries. 

    The study found the mindless use of social media often leads to mindless eating — and sometimes mindlessly not eating.

    Teens skipping breakfast is particularly problematic, according to Professor Khan, although he concedes the study only examined the amount of time teens spent on social media and not the type of content they consumed, making the link between the two difficult to establish.

    Professor Asad Khan believes social media companies should “take some responsibility” for the proliferation of junk food ads on social media.  (University of Queensland)

    “What we found is that the mindless [and excessive] use of social media is more problematic. And that kind of mindless use is leading towards the over consumption of sweet, sugary drinks and skipping breakfast,” he tells ABC Australian Radio.

    So why do these ads for junk food on social media impact the diet of children and teens as much as they do?

    Dr Hanson says these ads are designed to be appealing, and young people are generally more susceptible to this type of marketing.

    “They are colourful, fun, often linked to trends or popular people, and that has a real effect on young people’s choices.”

    “Young people are smart and savvy in many ways. They can spot trends quickly, navigate digital spaces with ease, and often know more about online platforms than adults do.

    “But the brain continues to develop until we are in our mid-twenties, particularly the areas responsible for impulse control, decision-making and assessing risk.

    “That means children and teenagers can be more influenced by social approval and less likely to pause and reflect on where a message is coming from, especially when it’s wrapped up in entertaining or peer-driven content.”

    Social media advertising often doesn’t look like traditional advertising, which makes it harder to spot and easier to absorb.

    And the social media algorithm, peers and influencers also play a huge part in how young people interact with food ads.

    “Social media platforms are built to keep users engaged. Once a young person interacts with food content, they’re likely to see more of it,” Dr Hanson says.

    “At the same time, young people are heavily influenced by what their peers are watching, liking or sharing, so if a snack or drink is popular in their online circles, it can spread quickly.”

    As for the influencers spruiking junk food, they are seen as relatable and trustworthy by young people.

    “When influencers promote a food or drink, even subtly, it carries a lot of weight.

    “Our review showed that this kind of marketing is especially effective because it doesn’t feel like marketing. That makes it harder to recognise, and harder to resist.”

    Food for good mental health

    An adolescent’s relationship with food can be a complicated one.

    A major global study led by Australia’s ABC estimates that 50 per cent of children and young people (aged 5-24 years) in Australia will be overweight or obese by 2050.

    Rates of obesity among children and young people have tripled over the past three decades, the study found.

    Add the impacts of social media, courtesy of junk food ads, influencers and time-consuming scrolling, and things can become even murkier.

    Sugary and highly processed foods can lead to a range of chronic diseases if over-consumed, says paediatric dietitian Miriam Raleigh.

    Miriam Raleigh is a paediatric dietitian and the founder of Child Nutrition, a group of dietitians specialising in children’s food services.

    Having a variety of foods from all core food groups is essential for a child’s body and brain, she says.

    “We know that a diet rich in wholefoods — not those found in packets — is important for good mental health. Foods are more than vitamins and minerals, they also contain phytochemicals and antioxidants which feed our body, mind and gut.

    “Having a broad range of foods allows our gut microbiome to contain a diverse range of different beneficial bacteria that is thought to have a direct link to mental health.”

    “Sugary foods and highly processed foods contain little nutritional value for children and teens’ growing bodies,” Raleigh says.

    Holding social media companies accountable

    Dr Hanson would like to see more government regulation around junk food marketing on social media rather than the voluntary industry codes that “don’t hold up in the digital space” that are currently in place.

    Policies that help reduce children’s exposure to digital junk food marketing are needed and social media companies need to do more to protect young users, she argues.

    “Education and social media literacy might help a bit, but let’s be honest — it’s the same for adults. When you are constantly flooded with advertising for unhealthy food, it makes you want it,” she says.

    “These are highly skilled marketers using proven techniques to influence behaviour. Expecting young people to resist that, day after day, isn’t realistic.”

    When asked about the federal government’s response to the issue, a spokesperson from the health department said the government has provided more than $500,000 for the University of Wollongong to deliver a feasibility study to examine the current landscape of unhealthy food marketing to children.

    The feasibility study will provide a better understanding of the options available for consideration by all governments and is expected to be finalised in the second half of 2025.

  • X Experiences Temporary Outage Affecting Thousands of Users

    X Experiences Temporary Outage Affecting Thousands of Users

    Social media platform X, formerly known as Twitter, was briefly inaccessible for thousands of US users early Saturday, according to Downdetector.com, which tracks internet disruptions.

    The site appears to have resolved the outage, as DownDetector reports are down to 690 as of 11:30 a.m. ET.

    Users in the United States began reporting issues on DownDetector at about 8 a.m. ET on Saturday. By 8:26 a.m. ET, more than 25,000 US users reported issues with the X platform on the mobile app and website. Users also reported issues with the server connection.

    More than 11,000 users in the United Kingdom and hundreds in other countries have also reported issues.

    DownDetector tracks user-reported issues, so the numbers may not reflect the full scale of X’s outage.

    Problems accessing X on Friday stemmed from a data center outage, according to a post by X’s engineering team on Friday at 8:03 p.m. ET. Tech magazine Wired reported there was a fire at a data center leased by X in Hillsboro, Oregon, on Thursday morning.

    According to Downdetector, users began experiencing issues on Thursday at about 2:00 p.m. ET. According to the X developer platform, there was a site-wide outage from Thursday to Friday that has been “resolved.” But logins with X began experiencing “degraded performance” on Friday and the “incident is ongoing.”

    “Our team is working 24/7 to resolve this. Thanks for your patience — updates soon,” X wrote in the post.

    “Back to spending 24/7 at work and sleeping in conference/server/factory rooms. I must be super focused on X/xAI and Tesla (plus Starship launch next week), as we have critical technologies rolling out,” Elon Musk, who acquired the platform in 2022, wrote in response to a post on X Saturday morning which said the outages may stem from the data center fire. “As evidenced by the X uptime issues this week, major operational improvements need to be made. The failover redundancy should have worked, but did not.”

    In late March, X experienced a widespread outage that was due to a “massive cyberattack,” according to Musk.

    X said in 2024 that the site averages about 250 million daily active users. Musk announced on March 28 that he sold X to xAI, his artificial intelligence start-up.

  • Senator Presses Spotify Over Podcasts Promoting Online Drug Sales

    Senator Presses Spotify Over Podcasts Promoting Online Drug Sales

    Following reports from The NY Budgets and other news outlets, Senator Maggie Hassan is demanding information about how Spotify is handling phony podcasts promoting potentially illegal online pharmacies.

    Spotify said last week that it had removed dozens of podcasts identified by CNN that blatantly promoted the online pharmacies purportedly selling drugs such as Adderall and Oxycontin, in some cases without a prescription. Business Insider also reported that it had flagged 200 podcasts that Spotify subsequently removed.

    The fake podcasts — which had showed up among the top suggestions in searches for drug names — violated Spotify’s rules and threatened to direct users to spammy and potentially illegal websites.

    US law prohibits buying controlled substances online without a prescription. Parents, experts and lawmakers have urged tech giants to do more to prevent the sale of counterfeit or illicit drugs to young people through their platforms, after multiple teens died of overdoses from pills bought online.

    Now Hassan, a New Hampshire Democrat and the ranking member of the Joint Economic Committee, wants answers about how these fake podcasts proliferated on Spotify and what the company is doing to stop it from happening again. In a letter sent Thursday, Hassan urged Spotify CEO Daniel Ek to “take action to prevent fake podcasts that facilitate the illicit sale of drugs.”

    “Far too many parents have experienced the unimaginable pain of losing their child to an accidental overdose,” Hassan told CNN News in an exclusive statement ahead of the letter’s release. “Spotify has a responsibility to significantly ramp up its efforts to stop criminals from using the platform to facilitate deadly drug sales to anyone, especially teens.”

    The letter asks Spotify to provide details about the content it has taken down; how many users interacted with the drug sales podcasts before they were removed; whether the company earned any revenue from the podcasts; and whether Spotify works with law enforcement when it discovers illegal content. It also asks what moderation tools and practices the company has implemented to identify drug-related content and whether it will be making any updates considering the recent reports.

    Hassan has asked Spotify to respond by June 12.

    In a statement to The Budgets last week, a Spotify spokesperson said: “We are constantly working to detect and remove violating content across our service.” In response to Hassan’s letter, the company reiterated that statement, and a spokesperson added that such content also exists on other platforms and that the company has earned no revenue from the phony podcasts.

  • Deepfake Laws Lead to Prosecution and Penalties — and Some Pushback

    Deepfake Laws Lead to Prosecution and Penalties — and Some Pushback

    Pennsylvania’s attorney general recently accused a police officer of taking photos in a women’s locker room, secretly filming people while on duty and possessing a stolen handgun. But he was unable to bring charges related to a cache of photos found on the officer’s work computer featuring lurid images of minors created by artificial intelligence. When the computer was seized, in November, creating digital fakes was not yet considered a crime.

    Since then, a statewide ban on such content has taken effect. While it came too late to apply to the police officer’s case, the state’s attorney general, Dave Sunday, has already used the law to charge another man who was accused of having 29 files of A.I.-generated child sexual abuse material in his home.

    Over the past two years, American legislators have grown increasingly alarmed by the threat of malicious deepfakes. Sexual images of middle school students have been digitally faked without their permission. Vice President JD Vance disavowed an almost certainly inauthentic clip that mimicked his voice to criticize Elon Musk. An ad featuring an A.I.-generated version of the actress Jamie Lee Curtis was removed from Instagram only after she posted a public complaint.

    Legislators are responding. Already this year, 26 laws governing various kinds of deepfakes have been enacted, following 80 in 2024 and 15 in 2023, according to the political database Ballotpedia. This month in Tennessee, sharing deepfake sexual images without permission became a felony that carries up to 15 years of prison time and as much as $10,000 in fines. Iowa enacted two bills related to sexually explicit deepfakes last year, one of which established sexual images of children generated by A.I. as a felony punishable by up to five years in prison and a $10,245 fine for the first offense. In New Jersey, a recently approved ban on malicious deepfakes could result in a fine of up to $30,000 and prison time.

    California has been especially aggressive in reacting to deepfakes, passing eight related bills in September alone, including five on a single day.

    Academy Award-winning actress Jamie Lee Curtis poses with her Oscar trophy, the morning after her win at the 95th Oscars ceremony, at the Beverly Hills Hotel in 2023. (Jay L. Clendenin / Los Angeles Times)

    “We’re in a very dangerous time, and we’re playing defense on everything that we do,” said Josh Lowenthal, a Democrat in the California Assembly, while introducing a session last week in Sacramento on the dangers of deepfakes.

    Mr. Lowenthal, who co-sponsored a recently introduced bill targeting sexually explicit deepfake material, later watched a demonstration of the technology spit out a realistic image of him in a prison cell and produce a fake news story about comments he never made.

    “I would’ve thought that was me,” he said after hearing deepfake audio of his voice, generated on the spot.

    Reining in deepfakes has also become a federal priority, and a markedly bipartisan one. Congress overwhelmingly passed the Take It Down Act, which criminalizes the nonconsensual sharing of sexually explicit photos and videos, including A.I. content, and requires tech platforms to quickly remove the content once they are notified. President Trump signed the bill in the White House Rose Garden on Monday, accompanied by his wife, Melania, who backed the legislation.

    But lawmakers’ enthusiasm for deepfake legislation has also set off a surge of pushback. Critics complain that many of the laws stifle free speech, constrain American competitiveness and are so complicated to enforce that they are, in effect, toothless.

    Because of those concerns, some Republicans in Congress are trying to curb the state actions. They are now considering a 10-year moratorium that would stop states from enforcing and passing legislation related to artificial intelligence, giving the federal government sole regulatory authority and lessening the pressure on A.I. companies. Soon after re-entering office, Mr. Trump revoked an executive order from his predecessor that sought to ensure the technology’s safety and transparency, issuing his own executive order that decried “barriers to American A.I. innovation” and pushed the United States “to retain global leadership” in the field.

    Regulating artificial intelligence requires balance, said Representative Josh Gottheimer, a Democrat from New Jersey who has helped write multiple deepfake bills. For all its potential dangers, he said, the technology could also become a powerful engine for job creation and creative expression.

    “It’s an ever-evolving space,” said Mr. Gottheimer, a candidate for governor who last month posted a video that featured, with a disclosure, a digitally generated version of himself boxing with Mr. Trump. “The key is making sure that people are protected as we harness the opportunities here.”

    Some state laws have also been challenged in court. In California, a conservative YouTube creator who posted an edited campaign video spoofing former Vice President Kamala Harris’s voice sued the attorney general last fall over two laws focused on election-related deepfakes. His argument: The regulations force social media companies to censor protected political speech, including parodies, and allow anybody to sue over content that he or she dislikes.

    The lawsuit now includes plaintiffs such as The Babylon Bee, a right-wing satirical site; Rumble, the right-wing streaming platform; and X, the social media company owned by Mr. Musk (which last month also sued Minnesota over a similar law). A federal judge ordered that enforcement of one of the California laws be temporarily paused, saying it “acts as a hammer instead of a scalpel.”

    In Dubuque County, Iowa, Sheriff Joseph L. Kennedy is assisting a local police department with a case involving male high schoolers who shared images of female students’ faces attached to artificially generated nude bodies. (Facebook)

    Litigation isn’t the only challenge to regulating deepfakes. In Dubuque County, Iowa, Sheriff Joseph L. Kennedy is assisting a local police department with a case involving male high schoolers who shared images of female students’ faces attached to artificially generated nude bodies.

    Such cases are time-consuming to work through, requiring careful documentation, data preservation efforts, subpoenas and search warrants for devices, Sheriff Kennedy said. Occasionally, the companies behind the websites or apps that people use to make A.I. images are uncooperative, especially if they are based in a country where an Iowa law has no power, he said.

    “That’s where you can hit snags and are short on options for what you can do,” he said. “Sometimes, it just seems like we’re chasing our tails.”

    First lady Melania Trump has used AI to record her audiobook. (AP)

    While most deepfake bans are focused on sexual, political or artistic content, the technology also has banks and other businesses on high alert. Michael S. Barr, a member of the Federal Reserve’s board of governors, said in a speech last month that the technology “has the potential to supercharge identity fraud.”

    One deepfake scam bilked Arup, a British design and engineering company that worked on the Sydney Opera House and Beijing’s Bird’s Nest stadium, out of $25 million last year. Fraudsters also tried to target Ferrari last summer, using WhatsApp messages that mimicked the southern Italian accent of the automaker’s chief executive.

    “If this technology becomes cheaper and more broadly available to criminals — and fraud detection technology does not keep pace — we are all vulnerable to a deepfake attack,” Mr. Barr said.