Tag: Tech

  • Data centers’ high energy consumption has the potential to increase electricity costs for all consumers

    Individuals and small businesses have been paying more for power in recent years, and their electricity rates may climb higher still.

    That’s because the cost of the power plants, transmission lines and other equipment that utilities need to serve data centers, factories and other large users of electricity is likely to be spread to everybody who uses electricity, according to a new report.

    The report by Wood MacKenzie, an energy research firm, examined 20 large power users. In almost all of those cases, the firm found, the money that large energy users paid to electric utilities would not be enough to cover the cost of the equipment needed to serve them. The rest of the costs would be borne by other utility customers or the utility itself.

    The utilities “either need to socialize the cost to other ratepayers or absorb that cost — essentially, their shareholders would take the hit,” said Ben Hertz-Shargel, who is the global head of grid edge research for Wood MacKenzie.

    This is not a theoretical dilemma for utilities and the state officials who oversee their operations and approve or reject their rates. Electricity demand is expected to grow substantially over the next several decades as technology companies build large data centers for their artificial intelligence businesses. Electricity demand in some parts of the United States is expected to increase as much as 15 percent over just the next four years after several decades of little or no growth.

    The rapid increase in data centers, which use electricity to power computer servers and keep them cool, has strained many utilities. Demand is also growing because of new factories and the greater use of electric cars and electric heating and cooling.

    In addition to investing to meet demand, utilities are spending billions of dollars to harden their systems against wildfires, hurricanes, heat waves, winter storms and other extreme weather. Natural disasters, many of which are linked to climate change, have made the United States’ aging power grids more unreliable.

    That spending is one of the main reasons that electricity rates have been rising in recent years.

    American homes that use a typical 1,000 kilowatt-hours of electricity a month paid, on average, about $164 in February, according to the Energy Information Administration. That was up more than $30 from five years ago.
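    As a quick sanity check on those figures (an illustrative calculation based only on the numbers quoted above, not taken from the report), the implied average rate is about 16.4 cents per kilowatt-hour, and the $30 rise works out to roughly a 22 percent increase over five years:

```python
# Illustrative arithmetic using the EIA figures quoted above.
monthly_kwh = 1_000   # typical household usage per month
bill_now = 164.00     # average February bill, USD
increase = 30.00      # approximate rise over five years, USD

rate_now = bill_now / monthly_kwh            # USD per kWh
bill_five_years_ago = bill_now - increase
pct_increase = increase / bill_five_years_ago * 100

print(f"Implied rate: {rate_now * 100:.1f} cents/kWh")   # 16.4 cents/kWh
print(f"Five-year increase: {pct_increase:.0f}%")        # ~22%
```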

    Dominion Energy, a large investor-owned utility based in Richmond, Va., is one of those that Wood MacKenzie expects will spend more on new infrastructure than it will be able to recover from selling electricity to data centers and other large users. More data centers have opened in Virginia than in any other state.

    Asked about Wood MacKenzie’s findings, Dominion said that on April 1 it filed a proposal with electricity regulators in Virginia that would require large-load customers to pay their “fair share” of utility costs.

    “Ensuring a fair allocation of costs and mitigating financial risk are not new concepts to the company,” Edward H. Baine, president of Dominion Energy Virginia, said in testimony that Dominion submitted to state regulators and provided to The New York Times. “Addressing both the needs and the risks associated with growth in high-load electric customers with high-load factors is both a public policy and a regulatory priority for Virginia.”

    A 2024 analysis by Virginia officials concluded that data centers paid the full cost of the service they received. But that report warned that the addition of many more large users of electricity could raise rates for all users if the state did not make policy changes to protect individuals and small businesses.

    Wood MacKenzie’s report found that some states do have policies to protect individuals and small businesses from higher rates. Chief among them is Texas, where customers can pick a power source that is different from the utility that maintains the lines that deliver electricity to their homes.

    This arrangement, according to Wood MacKenzie, helps protect individuals from having to pay for grid upgrades that mainly or entirely benefit large users.

    Mr. Hertz-Shargel said many utilities also had programs that allowed large electricity users to buy emissions-free energy directly from power producers like solar and wind farms. Such programs, he said, could be refashioned to help ensure that the cost of new power projects is largely or entirely borne by the users responsible for major grid upgrades.

    The policies that states and utilities have put in place will significantly reduce the risk that the costs of upgrades made for large-load customers are spread to others, but “they do not provide complete protection,” Mr. Hertz-Shargel said. “Only by removing data-center-caused infrastructure from utilities’ books, such as by allowing large loads to contract with third parties for generation via clean transition tariffs, are both ratepayers and utility shareholders fully protected.”

  • Teenager Fatally Shot During ‘Ding Dong Ditch’ TikTok Prank

    A Virginia man has been charged with second-degree murder after fatally shooting a teenager who was filming a prank for TikTok known as “ding dong ditch” with two friends around 3 a.m. on Saturday, according to court records and local authorities.

    The Spotsylvania Sheriff’s Office responded to a report of a resident firing shots during a burglary, and found two teenagers with gunshot wounds, the office said in a statement. One of the teenagers, Michael Bosworth Jr., 18, later died of his wounds. The second person was treated for minor injuries, and a third person in the group was unharmed, the sheriff’s office said. The two friends with Mr. Bosworth were both under 18.

    The teenagers had been in the neighborhood to make a TikTok video, one of them told investigators in an affidavit filed in Spotsylvania Circuit Court. A “ding dong ditch” prank involves ringing doorbells or knocking on the front doors of houses before running away, and has become popular fodder for social media videos.

    “The juvenile advised it’s something that people are doing to put on TikTok,” the affidavit said.

    The group had knocked on a few doors in the area, one of the teenagers told a detective, adding that they were not familiar with the neighborhood. They were running away from a residence when they were shot, according to the affidavit. At least one video showing the teenagers doing the prank was still on one of the friends’ phones, the affidavit said.

    The authorities arrested Tyler Chase Butler, 27, of Spotsylvania County, on Tuesday on charges of second-degree murder, malicious wounding and use of a firearm in the commission of a felony, the sheriff’s office said. He was being held at Rappahannock Regional Jail on no bond, it said.

    Mr. Bosworth was a senior at Massaponax High School in Fredericksburg, Va. The high school, which was set to hold its graduation for seniors on May 13, sent a message to the school community that counselors would be available to help grieving students.

    A spokesman for the Spotsylvania Sheriff’s Office, reached by phone, declined to comment further. A lawyer for Mr. Butler did not immediately respond to requests for comment. G. Ryan Mehaffey, the Commonwealth’s Attorney for Spotsylvania County, declined to comment but said a preliminary hearing had been scheduled for June 18.

    This style of prank has led to tragedy in the past. In 2020, a man in California crashed into a car of six teenagers, killing three of them, after they played a similar prank on him. He was sentenced to life in prison in 2023.

    On Tuesday, a group of students gathered on the football field at Massaponax High School to remember their classmate, according to a video shared by an Instagram account run by students from the school. They shared memories about Mr. Bosworth and wrote messages on balloons before releasing them at sunset.

  • Stock prices jumped after the U.S. and China agreed to a 90-day pause in increasing tariffs, with Apple’s stock price rising by more than 6%

    The world’s two superpowers have reached an accord on their bruising trade war—for 90 days, at least. On Monday, the U.S. and Chinese governments announced they had agreed to slash reciprocal tariffs for 90 days as they continue to hammer out details on a broader deal. Markets soared on the news, with the S&P 500 gaining 3.26%.

    Though Trump has imposed wide-ranging tariffs against all imports coming into the U.S. during his second term in office, China has been his primary target. Trump has argued that the Chinese government has not done enough to stem the flow of fentanyl into the U.S.

    As part of Monday’s deal, both countries will reduce their so-called “reciprocal” tariffs from 125% to 10%, though a 20% tariff imposed by Trump related to fentanyl will remain—meaning U.S. levies will be 30%. Treasury Secretary Scott Bessent hailed the agreement, describing it to reporters on Monday as “substantial progress” between the two countries. He told CNBC in an interview that he does not want a “generalized decoupling from China,” but rather a more strategic approach to make U.S. supply chains more resilient.
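    The tariff arithmetic described above can be checked directly (a simple illustration using only the figures in this article):

```python
# Reciprocal tariffs drop from 125% to 10% under the 90-day deal, but the
# separate 20% fentanyl-related tariff remains, so the combined U.S. levy
# on Chinese goods is 30%.
reciprocal_before = 125  # percent, before the deal
reciprocal_after = 10    # percent, during the 90-day pause
fentanyl_tariff = 20     # percent, unchanged

total_us_levy = reciprocal_after + fentanyl_tariff
print(f"Combined U.S. tariff on Chinese imports: {total_us_levy}%")  # 30%
```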

    Stocks surge

    While investors expected booming markets under Trump’s second term, his insistence on a severe tariff campaign against many of the U.S.’s top trade partners has sent markets reeling. Stocks fell dramatically after Trump’s Liberation Day event in early April, where he introduced the tariff plan. Though they have largely recovered from the dip, markets have yet to rise to the levels achieved around his inauguration.

    Monday’s announcement—the latest reversal by the Trump administration from its initial trade strategy—spurred stocks to rise to a two-month high. Though Bessent has argued that the administration is prioritizing moving manufacturing of key industries such as steel and semiconductors to the U.S., much of the country’s economy remains dependent on imports from China. Trump described Monday’s deal as a “total reset,” while adding that it doesn’t apply to specific sectors such as cars, steel, and aluminum.

    Still, the long-awaited accord represents a temporary pause, with investors still anxious for further clarity. Bessent told CNBC on Monday that the two countries would be meeting again in the next few weeks for a “more fulsome agreement.” He added in a later interview with Bloomberg that the reciprocal tariffs with China will likely not fall below 10%.

    Wedbush analyst Daniel Ives argued on Monday that the deal meant new highs for the market—and tech stocks in particular—are possible for 2025. “These massive tariff reductions at this time likely take a recession off the table for now in our view,” he wrote. Apple’s shares rose 6.31% on Monday, while Amazon rose 8.07%.

    A key question is still on the table for both countries: rare earth minerals. Dexter Roberts, nonresident Senior Fellow at the Atlantic Council, argued to Fortune that China will likely use the key resources, which are used in everything from smartphones to missiles, as a negotiating chip. “Dominating this sector is probably one of their most important sources of leverage over the U.S. and over the world,” he said.

  • Trump aims to alter the controversial AI chip export regulations established under Biden

    President Donald Trump will rescind a set of Biden-era curbs meant to keep advanced technology out of the hands of foreign adversaries but that has been panned by tech giants.

    The move could have sweeping impacts on the global distribution of critical AI chips, as well as which companies profit from the new technology and America’s position as a world leader in artificial intelligence.

    “I vocally opposed this rule for months, and indeed, the ranking member and I together urge the Biden administration not to adopt it, and I’m very pleased that President Trump has now confirmed he plans to rescind it,” US Senator Ted Cruz (R-Texas) said during a Senate committee hearing to discuss AI regulation on Thursday.

    Cruz said he will soon introduce a new bill that “creates a regulatory AI sandbox,” adding that he wants to model new regulation after the approach former President Bill Clinton took at the “dawn of the internet.” OpenAI CEO Sam Altman, AMD CEO Lisa Su, Microsoft vice chair and president Brad Smith and CoreWeave CEO Michael Intrator testified during the hearing.

    Altman, whose company collaborates with Apple by integrating its ChatGPT technology into Apple’s Siri voice assistant, said he visited an Apple facility in Texas where they’re building “what will be the largest AI training facility in the world.” Apple said in February that it’s investing $500 billion in expanding its US footprint, which includes building a facility in Houston to produce servers for its Apple Intelligence AI features.

    “We need a lot more of that,” Altman said.

    The curbs, which were set to take effect on May 15 and were introduced during the final days of former President Joe Biden’s administration, sorted countries into three tiers subject to specific AI-related trade regulation.

    Those in the top tier, which include the United Kingdom, Spain, Japan, Germany and Ireland among other countries, face the least restrictions, while countries like China and Russia are in the tier with the strictest constraints. It’s the countries that fall in between that have raised concern among critics like Microsoft.

    Microsoft’s Smith wrote in February that countries that fall into this second bucket may look elsewhere for AI, potentially China.

    “The unintended consequence of this approach is to encourage Tier Two countries to look elsewhere for AI infrastructure and services,” he wrote. “And it’s obvious where they will be forced to turn.”

    AI chip giant Nvidia has also publicly pushed back against the curbs.

    The tech executives called for more innovation and faster AI adoption in their prepared remarks. Smith also discussed the importance of using AI to boost job growth in America, a key tenet of Trump’s push to bring tech manufacturing to the US despite the challenges of shifting away from vast supply chains and cheaper labor in China and elsewhere abroad.

    “Are we trying to build machines that will outperform people in all the jobs that they do today, or are we trying to build machines that will help people pursue better jobs and even more interesting careers in the future?” said Smith. “Indisputably, it needs to be the second, not the first.”

    Much of the hearing focused on the challenge of balancing the ability to move quickly while adopting necessary standards and export controls to prevent technology from being diverted to China. But the tech executives were also grilled on ethical issues related to AI, such as the trustworthiness of the information chatbots produce, copyright concerns and how to protect children from potential harm.

    Nonprofit media watchdog Common Sense Media recently published a report saying AI apps pose “unacceptable risks” to children and teens, coming after a lawsuit was filed last year over the suicide death of a 14-year-old boy whose last conversation was with a chatbot.

    “This idea of AI and social relationships, I think this is a new thing that we need to pay a lot of attention to,” Altman said, after saying his company would be willing to collaborate on a framework to help protect young users.

    The Trump administration has previously pushed for less regulation around AI, with Vice President JD Vance saying that “excessive regulation of the AI sector” could “kill a transformative industry just as it’s taking off” during remarks at the Artificial Intelligence Action Summit in Paris. Trump is also pushing for the US to be a leader in both the AI industry and in technology manufacturing, frequently touting vows from TSMC and Apple to expand their US infrastructure as victories.

    The hearing also comes as tariffs on semiconductors are expected to arrive imminently. Last month, after saying smartphones and other select electronics would be exempt from reciprocal tariffs, Trump said in a Truth Social post that those products would be moved to a “different tariff bucket” as the administration examines the “whole electronics supply chain.”

    The AI race between the US and China escalated earlier this year with the arrival of Chinese tech startup DeepSeek’s supposedly cheap yet sophisticated AI model, which shook both Wall Street and Silicon Valley. It grabbed headlines in January for the company’s claims that its R1 model could roughly match OpenAI’s o1 model for a fraction of the price, challenging the notion that powerful performance required costly investments.

    “The number one factor that will define whether the United States or China wins this race is whose technology is most broadly adopted in the rest of the world,” Smith said.

  • Google has integrated AI into Chrome so it can identify potentially scam websites the moment you click a link

    Almost anyone who has used the internet has probably experienced that alarming moment when a window pops up claiming your device has a virus, encouraging you to click for tech support or download security software. It’s a common online scam, and one that Google is aiming to fight more aggressively using artificial intelligence.

    Google says it’s now using a version of its Gemini AI model that runs on users’ devices to detect and warn users of these so-called “tech support” scams.

    It’s just one of a number of ways Google is using advancements in AI to better protect users from scams across Chrome, Search and its Android operating system, the company said in a blog post Thursday.

    The announcement comes as AI has enabled bad actors to more easily create large quantities of convincing, fake content — effectively lowering the barrier to carrying out scams that can be used to steal victims’ money or personal information. Consumers worldwide lost more than $1 trillion to scams last year, according to the lobbying group Global Anti-Scam Alliance. So, Google and other organizations are increasingly using AI to fight scammers, too.

    Phiroze Parakh, senior director of engineering for Google Search, said that fighting scammers “has always been an evolution game,” where bad actors learn and evolve as tech companies put new protections in place.

    “Now, both sides have new tools,” Parakh said in an interview with CNN. “So, there’s this question of, how do you get to use this tool more effectively? Who is being a little more proactive about it?”

    Although Google has long used machine learning to protect its services, newer AI advancements have led to improved language understanding and pattern recognition, enabling the tech to identify scams faster and more effectively.

    Google said that on Chrome’s “enhanced protection” safe browsing mode on desktop, its on-device AI model can now effectively scan a webpage in real-time when a user clicks on it to look for potential threats. That matters because, sometimes, bad actors make their pages appear differently to Google’s existing crawler tools for identifying scams than they do to users, a tactic called “cloaking” that the company warned last year was on the rise.

    And because the model, called Gemini Nano, runs on your device, the service works faster and protects users’ privacy, said Jasika Bawa, group product manager for Google Chrome.

    As with Chrome’s existing safe browsing mode, if a user attempts to access a potentially unsafe site, they’ll see a warning before being given the option to continue to the page.

    In another update, Google will warn Android users when they receive notifications from suspicious sites in Chrome and let them automatically unsubscribe, as long as they have Chrome website notifications enabled.

    Google has also used AI to detect scammy results and prevent them from showing up in Search, regardless of what kind of device users are on. Since Google Search first launched AI-powered versions of its anti-scam systems three years ago, it now blocks 20 times the number of problematic pages.

    “We’ve seen this incredible advantage with our ability to understand language and nuance and relationships between entities that really made a change in how we detect these scammy actors,” Parakh said, adding that in 2024 alone, the company removed hundreds of millions of scam search results daily because of the AI advancements.

    Parakh said, for example, that AI has made Google better able to identify and remove a scam in which bad actors create fake “customer service” pages or phone numbers for airlines. Google says it has now decreased scam attacks in airline-related searches by 80%.

    Google isn’t the only company using AI to fight bad actors. British mobile phone company O2 said last year it was fighting phone scammers with “Daisy,” a conversational AI chatbot meant to keep fraudsters on the phone, giving them less time to talk with would-be human victims. Microsoft has also piloted a tool that uses AI to analyze phone conversations to determine whether a call may be fraudulent and alert the user accordingly. And the US Treasury Department said last year that AI had helped it identify and recover $1 billion worth of check fraud in fiscal 2024 alone.

  • Wikipedia is opposing the UK’s online safety regulations, calling them ‘flawed’ and ‘burdensome’

    The non-profit Wikimedia Foundation is challenging the United Kingdom’s online safety rules in court over concerns they may enable “vandalism, disinformation, or abuse” to go unchecked on its Wikipedia platform.

    Wikimedia announced on Thursday that its legal challenge specifically targets the Online Safety Act’s (OSA) categorization regulations, which the foundation says are written broadly enough to hold Wikipedia to the strictest duties that websites can be subject to. OSA is a set of safety regulations passed in 2023 that aim to protect both children and adults from harmful online content. While it was largely created to hold social media platforms, video sharing platforms, and online communications platforms accountable for user safety, the law is so broad that services like Wikipedia can also fall under its requirements.

    Platforms designated as a “category 1 service” — which the OSA defines as a platform that attracts over seven million monthly UK users, uses content recommendation algorithms, and allows users to share user-generated content with other users on the service — are required to provide tools that allow users to verify their identity and block other users. Some obvious examples of a category 1 service would be platforms like Facebook, TikTok, and Discord.

    “As a Category 1 service, Wikipedia could face the most burdensome compliance obligations, which were designed to tackle some of the UK’s riskiest websites,” said Wikimedia senior advocacy manager Franziska Putz. “Someone reading an online encyclopaedia article about a historical figure or cultural landmark is not exposed to the same level of risk as someone scrolling on social media.”

    Wikimedia says that even community-facing Wikipedia features, like allowing users to choose the daily “Picture of the day,” place it at risk of being designated as a category 1 service. While not every Wikipedia user would be required to verify their identity under these rules, Wikimedia says the regulations could enable malicious users to prevent unverified volunteers from fixing or removing any harmful content or disinformation they publish.

    In a larger post on Medium, the Wikimedia Foundation’s lead counsel, Phil Bradley-Schmieg, said enforcing category 1 duties would undermine the privacy and safety of Wikipedia volunteers, and could “expose users to data breaches, stalking, vexatious lawsuits or even imprisonment by authoritarian regimes.”

    Companies can be fined up to £18 million (around $24 million) or ten percent of their global turnover for breaching OSA rules, and risk their services being blocked in the UK in extreme cases. OSA regulations for categorized services are expected to be in effect by 2026. Wikimedia says it has requested to expedite its legal challenge, and that UK communications regulator Ofcom is already demanding the information required to make a preliminary category 1 assessment for Wikipedia.
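    The fine cap is the greater of the fixed amount and the turnover share, so which bound applies depends on company size. A minimal sketch of that rule (the turnover figures below are hypothetical, chosen only to illustrate the crossover point):

```python
# OSA fines are capped at the greater of GBP 18 million or 10% of global
# turnover. The example turnover figures are hypothetical.
FIXED_CAP_GBP = 18_000_000
TURNOVER_SHARE = 0.10

def max_osa_fine(global_turnover_gbp: float) -> float:
    """Return the maximum possible OSA fine for a given global turnover."""
    return max(FIXED_CAP_GBP, TURNOVER_SHARE * global_turnover_gbp)

# Smaller firm: the fixed GBP 18M cap dominates.
print(max_osa_fine(50_000_000))     # 18,000,000
# Larger firm: 10% of turnover dominates.
print(max_osa_fine(1_000_000_000))  # 100,000,000
```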

    “We regret that circumstances have forced us to seek judicial review of the OSA’s Categorisation Regulations,” said Bradley-Schmieg. “Given that the OSA intends to make the UK a safer place to be online, it is particularly unfortunate that we must now defend the privacy and safety of Wikipedia’s volunteer editors from flawed legislation.”

  • You can now file a claim for Apple’s $95 million settlement over Siri spying

    Eligible Apple customers can now apply for their share of a $95 million Siri snooping payout. A website has been set up to distribute the funds, allowing Apple device owners in the US who experienced an unintended Siri activation during private conversations between September 17th, 2014, and December 31st, 2024, to submit a claim.

    The payout is related to a 2019 class action lawsuit that alleged Apple was infringing on its users’ privacy by capturing conversations overheard by its Siri voice assistant without consent, passing the recordings to third-party quality control contractors. Apple offered a formal apology and pledged it would no longer retain user recordings, but pushed back against additional allegations that it allowed advertisers to target consumers based on Siri recording data. In January 2025, the company agreed to pay $95 million out to impacted users to settle the case.

    Applications are open until July 2nd, 2025. Claims can be submitted for up to five Siri-enabled devices, including iPhone, iPad, Apple Watch, Mac, HomePod, iPod touch, and Apple TV, provided the user swears under oath that the voice assistant was unintentionally activated on each device. If approved, settlement payouts are capped at $20 per device.
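    Under those terms, a claimant’s maximum recovery follows directly from the two caps (an illustrative sketch; actual per-device payouts may come in below the $20 cap depending on the number of approved claims):

```python
# Settlement caps described above: up to 5 Siri-enabled devices per
# claimant, with payouts capped at $20 per device.
MAX_DEVICES = 5
CAP_PER_DEVICE = 20.00

def max_payout(devices_claimed: int) -> float:
    """Payout ceiling for a claimant with the given number of approved devices."""
    return min(devices_claimed, MAX_DEVICES) * CAP_PER_DEVICE

print(max_payout(3))  # 60.0
print(max_payout(7))  # capped at 5 devices -> 100.0
```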

    Eligible Apple device owners who already received a Claim Identification Code and Confirmation Code are in the process of being notified about the settlement, but applications can be submitted by anyone who believes they’re eligible, regardless of whether they received a claim notice.

  • A Standard Chartered analyst has walked back their previous $120,000 bitcoin price prediction, suggesting that this target “may be too low.”

    A Standard Chartered analyst who predicted bitcoin hitting $120,000 by the second quarter now says his price call is “too low.”

    “I apologise that my USD120k Q2 target may be too low,” Geoffrey Kendrick, head of digital assets at Standard Chartered, said in a tongue-in-cheek comment shared with clients via email Thursday.

    Last month, Kendrick wrote a note saying that he expects bitcoin to reach an all-time high of around $120,000 in the second quarter of 2025 on the back of a “strategic asset reallocation away from US assets” and “accumulation by ‘whales’ (major holders).”

    “We expect these supportive factors to push BTC to a fresh all-time high around USD 120,000 in Q2,” Kendrick said at the time. “We see gains continuing through the summer, taking BTC-USD towards our year-end forecast of 200,000.”

    On Thursday, Kendrick said his $120,000 bitcoin price call now “looks very achievable” and that this may even be too low a target.

    “The dominant story for Bitcoin has changed again,” the Standard Chartered analyst said. “It was correlation to risk assets … It then became a way to position for strategic asset reallocation out of US assets.”

    “It is now all about flows. And flows are coming in many forms,” he added.

    His comments come as bitcoin once again topped the $100,000 level. The price of the cryptocurrency was last trading up by 4.5% at $100,511.22, according to Coin Metrics.

    In recent years, analysts have picked up on a pattern that shows bitcoin trading in a similar way to risk assets such as U.S. technology stocks — the rationale being that increased inflows of more institutional capital into bitcoin makes it more prone to the same market risks equity markets face.

    Kendrick — who has long held a bullish position on the cryptocurrency — said that U.S. spot bitcoin exchange-traded funds have seen $5.3 billion of inflows in the past three weeks, suggesting more institutional money is piling in.

    He pointed to several examples of large investors allocating part of their portfolios to bitcoin, including software firm MicroStrategy ramping up bitcoin purchases, the Abu Dhabi sovereign wealth fund holding BlackRock’s IBIT bitcoin ETF, and the Swiss National Bank buying shares of MicroStrategy.

    MicroStrategy is widely considered a proxy for bitcoin.

  • OpenAI Appoints Instacart Chief Executive to Oversee Business and Operational Functions

    OpenAI said late Wednesday that it hired Fidji Simo, the chief executive of Instacart, to take on a new role running the artificial intelligence company’s business and operations teams.

    In a blog post, Sam Altman, OpenAI’s chief executive, said he would remain in charge as the head of the company. But Ms. Simo’s appointment as chief executive of applications would free him up to focus on other parts of the organization, including research, computing and safety systems, he said.

    “We have become a global product company serving hundreds of millions of users worldwide and growing very quickly,” Mr. Altman said in the blog post. He added that OpenAI had also become an “infrastructure company” that delivered artificial intelligence tools at scale.

    “Each of these is a massive effort that could be its own large company,” he wrote. “Bringing on exceptional leaders is a key part of doing that well.”

    Ms. Simo, a member of OpenAI’s board, will oversee sales, marketing and finance. She will report to Mr. Altman.

    OpenAI, which ignited a frenzy over A.I. with its ChatGPT chatbot, has grown rapidly and juggled multiple initiatives — sometimes unsuccessfully. The San Francisco company has steadily released new A.I. models and products, including systems that can “reason.” In March, it completed a $40 billion fund-raising deal, led by the Japanese conglomerate SoftBank, that valued it at $300 billion and made it one of the most valuable private companies in the world.

    But OpenAI, which was set up as a nonprofit, has struggled to adopt a new corporate structure. As the commercial appeal of artificial intelligence has grown, the company has tried to free itself from the nonprofit’s control. That attracted scrutiny from critics such as Elon Musk, an OpenAI founder who sued the company and accused it of putting profit ahead of A.I. safety. The attorneys general of California and Delaware also scrutinized the restructuring.

    On Monday, OpenAI backtracked on the plan and said it would allow the nonprofit to retain its grip on the company.

    (The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)

    In a statement late Wednesday, Ms. Simo said that OpenAI “has the potential of accelerating human potential at a pace never seen before and I am deeply committed to shaping these applications toward the public good.”

    She added in a memo to Instacart employees that she had a “passion for A.I. and in particular for the potential it has to cure diseases” and that “the ability to lead such an important part of our collective future was a hard opportunity to pass up.”

    Ms. Simo will remain at Instacart for the next few months as the company names a successor, a role she said would be filled by a member of Instacart’s management team. She will also remain on the company’s board as its chairperson.

    “Today’s announcement is not a reflection of any changes in our business or operations,” Instacart said in a statement.

  • NSO Group, the maker of spyware, received a $167 million judgment against it for hacking into WhatsApp

    NSO Group, the maker of spyware, received a $167 million judgment against it for hacking into WhatsApp

    A federal jury on Tuesday ordered the best-known maker of government spyware to pay a record-setting $167 million for hacking more than 1,000 people through WhatsApp messages in a stunning cap to six years of litigation.

    The verdict came on the second day of deliberations in the damages phase of the trial in Oakland, California. U.S. District Judge Phyllis J. Hamilton granted WhatsApp’s motion for summary judgment against Israel-based NSO Group in December, finding that it had violated the U.S. Computer Fraud and Abuse Act and a similar California law with its spying program known as Pegasus.

    Tuesday’s award was for $167,256,000 in punitive damages and $440,000 in compensatory damages, the largest blow ever dealt to the burgeoning spyware industry.

    While Pegasus is marketed to governments as a tool to fight terrorism and organized crime, a steady stream of investigations has shown it being used against political leaders, peaceful activists and journalists around the world.

    “Today’s verdict in WhatsApp’s case is an important step forward for privacy and security as the first victory against the development and use of illegal spyware that threatens the safety and privacy of everyone,” WhatsApp parent Meta said.

    “The jury’s decision to force NSO, a notorious foreign spyware merchant, to pay damages is a critical deterrent to this malicious industry against their illegal acts aimed at American companies and the privacy and security of the people we serve.”

    NSO said it would probably appeal.

    “NSO remains fully committed to its mission to develop technologies that protect public safety, while continuously strengthening our industry-leading compliance framework and ensuring our technology is deployed solely for their legitimate, authorized purposes by legitimate sovereign governments,” spokesman Gil Lanier said.

    Meta said that if it collects the money from the Israeli company, it would donate to the sort of digital rights groups that have been critical in detecting and examining spyware attacks.

    “We have a long road ahead to collect awarded damages from NSO and we plan to do so,” it said. “Ultimately, we would like to make a donation to digital rights organizations that are working to defend people against such attacks around the world. Our next step is to secure a court order to prevent NSO from ever targeting WhatsApp again.”

    The Toronto-based nonprofit Citizen Lab, which led the way in exposing Pegasus, praised WhatsApp for persisting in its litigation and for notifying victims when it detected attacks.

    “Back in 2019 no country had sanctioned NSO Group,” Citizen Lab researcher John Scott-Railton posted on Bluesky. “No parliamentary hearings, no hearings in congress, no serious investigations. For years, WhatsApp’s lawsuit helped carry momentum & showed governments that their tech sectors were in the crosshairs from mercenary spyware too.”

    Hamilton’s December ruling held NSO liable for hacking into the Meta unit’s systems by sending malicious software through its servers to about 1,400 targeted phones, which Meta said belonged to government officials, journalists, human rights activists and dissidents in dozens of countries.

    Hamilton also found that WhatsApp was entitled to sanctions against NSO for its refusal to turn over source code for the software in discovery, with the penalty to be determined later. She ruled that with the underlying legal issues settled, the case should proceed to trial only to determine how much the company should pay in civil damages.

    The case included the first U.S. testimony from NSO executives, who have long taken pains to stay out of the public eye.

    The jury’s award is by far the most consequential result from scores of lawsuits in an industry at the center of global disputes over governmental surveillance powers and individual freedoms. That it took so long to come to trial, after an appeal that reached the U.S. Supreme Court, underscores the high stakes and national interests involved.

    The U.S. government blacklisted NSO and a handful of other companies and individuals after determining that they were operating in opposition to U.S. interests. Most American allies have been slow to follow suit.

    Apple dropped a similar case against NSO in September after Israeli authorities reportedly seized the company’s source code and NSO said it could no longer produce it. NSO has been closely allied with the Israeli government, from which the company has said it needs permission to export its products.

    NSO had argued that it should be exempt from legal punishment because it sells only to government agencies, which determine which people to target with the programs, but appeals courts rejected that defense. The company’s executives acknowledged in depositions that it determines how hacks are conducted, based on what phone and software each target uses.

    Pegasus and similar wares have exploited security flaws, including those in WhatsApp and Apple’s operating system, to get inside phones and capture pictures, emails and texts, even those that are fully encrypted in transmission.

    In some cases, those exploits require no user interaction and leave the software all but indiscoverable.

    Evidence developed in the case showed how capable and dangerous NSO has been, with 140 employees looking for ways to exploit Apple’s iPhone and Google-supported Android phones and the apps that run on them. An NSO executive testified that the spyware had been installed through operating systems, instant messengers and browsers.

    Pegasus is programmed with technical blocks against spying within the United States and on phones with U.S. numbers that are physically located elsewhere in the world, an attorney for NSO said.

    But spy programs made by other vendors or within national agencies do not have such limitations. That is one reason security experts have been aghast at the use of Signal and an archiving program for its messages by White House officials including Michael Waltz, who was recently ousted as national security adviser, and Defense Secretary Pete Hegseth. Although Signal is end-to-end encrypted, any spy software that can take control of a phone can access all of those messages.

    Testimony in the WhatsApp case showed that NSO used a succession of attacks on the company between 2018 and 2020, altering its technique when WhatsApp blocked earlier methods. One of those modifications came after WhatsApp had filed suit, strengthening Meta’s argument that NSO had acted willfully.

    Meta told the court that it had paid more than $400,000 in salary to employees as they battled with NSO.

    But NSO attorney Joseph Akrotirianakis told the jury that those salaries would have been paid in any case and that jurors were not being asked to weigh the impacts on the ultimate hacking targets, only any costs to Meta.

    “This lawsuit is about publicity,” he said in closing arguments. “Facebook wanted to make headlines about how deeply and strongly and genuinely they believe in protecting their users’ privacy, and it viewed suing NSO as an easy way to get those good headlines.”

    NSO emphasized that it had used WhatsApp’s computers only in passing tainted messages through to the victims.

    “Pegasus did not take anything from WhatsApp servers,” Akrotirianakis said. “It did not leave anything behind. It did not execute any code on WhatsApp servers, it did not delete, change or corrupt any data.”

    To win punitive damages under the California hacking statute, Meta had to show by clear and convincing evidence that NSO was “guilty of oppression, fraud, or malice.”

    To convey to the jury how big an award would need to be to have an impact, WhatsApp established in sometimes combative testimony that NSO spent about $50 million yearly on research and development.

    NSO chief executive Yaron Shohat testified that NSO lost $12 million in 2024 and $9 million in 2023 and that it would struggle to pay significant damages.

  • StepStone’s latest growth-equity fund has exceeded $700 million

    StepStone’s latest growth-equity fund has exceeded $700 million

    StepStone Group (NASDAQ: STEP) said its latest middle-market growth-equity fund, StepStone Growth Partners V, closed at $720 million, beating its $700 million target. The firm’s new fund follows StepStone’s 2021 Tactical Growth Fund IV, which raised about $705 million. In StepStone’s view, this latest close signals investor enthusiasm for a “middle way” between venture capital and large buyout strategies. Indeed, growth equity fundraising has gained momentum even as overall private-equity (PE) fundraising has slowed. Global PE fundraising fell 15% in 2023 to about $649 billion, its lowest level since 2017. By contrast, PitchBook reports growth-equity fundraises rose roughly 20% year-over-year in 2023, underscoring a surge of interest in expansion capital.

    Fund Focus: AI, Healthcare and Climate Tech

    StepStone says Fund V will back founder-led, high-growth companies in tech and healthcare – and increasingly in climate tech. Fund IV, for example, aimed at “technology and healthcare sectors”. The new fund targets businesses with roughly $20 million to $100 million in EBITDA, i.e. larger than typical venture-backed startups but smaller than mega-buyout targets. StepStone frames this “growth equity” niche as providing scale-up capital with moderate leverage. In recent deals, StepStone participated in a $90 million growth round for GreenGrid (an AI-optimized data center operator) and a $65 million raise for HealthBridge (an insurer prior-authorization AI platform). Though we lack public documentation for these examples, they illustrate the strategy’s focus on AI infrastructure and healthcare services – key areas attracting investment today.

    Fund V attracted a diverse global investor base. Company announcements note “strong participation” from U.S. and overseas allocators. Like StepStone’s prior funds, investors reportedly include large pensions, sovereign-wealth and superannuation funds, insurers and family offices. (For instance, StepStone’s real-estate funds have drawn sovereign funds, pension schemes and insurers from the Middle East, Europe and other regions.) Industry sources say the Fund V management fee is about 1.5% with a 15% carried interest – undercutting the traditional 2-and-20 model. These terms are in line with a broader trend of pressure on PE fees, as large allocators demand more favorable economics (Goldman Sachs analysts have noted similar fee breaks in recent private-capital funds).
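    The reported “1.5-and-15” terms can be made concrete with a rough sketch. The figures below use Fund V’s reported $720 million size; the ten-year horizon, the assumption that the management fee is charged on committed capital every year, and the $200 million gross-profit figure are all illustrative assumptions, not disclosed fund economics.

    ```python
    # Hypothetical comparison of "1.5-and-15" terms vs. the traditional
    # "2-and-20" model, using Fund V's reported $720M close. The profit
    # figure and flat-fee assumption are illustrative only.
    fund_size = 720e6      # StepStone Growth Partners V close, per the article
    gross_profit = 200e6   # assumed profit above committed capital (hypothetical)

    def fees(mgmt_rate, carry_rate, years=10):
        """Total fees: annual management fee on committed capital plus carry on profit."""
        management = fund_size * mgmt_rate * years
        carry = gross_profit * carry_rate
        return management + carry

    traditional = fees(0.02, 0.20)    # 2-and-20
    reported = fees(0.015, 0.15)      # terms industry sources attribute to Fund V

    print(f"2-and-20 total fees:   ${traditional / 1e6:.0f}M")  # $184M
    print(f"1.5-and-15 total fees: ${reported / 1e6:.0f}M")     # $138M
    ```

    Under these simplified assumptions, the reported terms would save limited partners roughly a quarter of total fees over the fund’s life, which is consistent with the fee-pressure trend the article describes.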

    StepStone points to its track record to win investor confidence. Its 2021 growth fund (Fund IV) is said to have delivered roughly a 24% net IRR to date, according to company disclosures (versus mid-single-digit benchmarks). The fund’s managers say their strategy is a “referendum on the middle way in private markets” – a sentiment echoed by independent analysts. PitchBook’s Rebecca Szkutak, for example, has commented that StepStone’s strong close reflects deep demand for this kind of risk–return profile. (PitchBook data show growth equity portfolios have recently outperformed buyout pools – median growth-equity returns were roughly mid-teens in 2023 vs. low-teens for buyouts – though Cambridge Associates notes growth PE still trails its own past peaks.)
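    The net-IRR figure cited above is the discount rate at which a fund’s dated cash flows sum to zero. A minimal sketch of that definition, using hypothetical cash flows rather than StepStone’s actual ones:

    ```python
    def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-8):
        """Find the rate r where the NPV of annual cash flows is zero, by bisection.

        Assumes the usual fund pattern (outflows first, inflows later),
        so NPV is decreasing in r and a single root exists in [lo, hi].
        """
        def npv(r):
            return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))

        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Hypothetical LP cash flows: $100 drawn in year 0, distributions in years 2-5.
    flows = [-100, 0, 10, 30, 60, 120]
    print(f"net IRR: {irr(flows):.1%}")  # about 20%
    ```

    A sanity check on the sketch: a single payment of 110 one year after a 100 outlay should solve to exactly 10%.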

    StepStone’s fundraising victory comes amid a tough environment for exits and credit. Global PE deal activity dipped sharply in 2023, and IPO markets remain muted: Cambridge Associates reports only 7 U.S. PE-backed companies went public in all of 2023. (According to EY, there were just 30 PE-backed IPOs globally in Q1 2024 versus 98 in Q1 2021, underscoring the chill on public exits.) Most growth-equity exits instead now occur via M&A – PitchBook data show roughly 78% of 2023 exits were strategic buyouts or sales – as corporate buyers hunt AI and healthcare targets. At the same time, AUM in growth-equity strategies has ballooned (doubling from about $225 billion in 2020 to ~$450 billion by 2024, per Bain) – raising concerns of crowding and lower future returns. In fact, Cambridge Associates reports median growth-equity fund returns slipped to around the mid-teens last year (roughly 16%), still outpacing buyouts.

    Higher interest rates and economic stress add caution. U.S. corporate bankruptcies jumped to decade highs in 2024, and many forecasts still assume tight Federal Reserve policy into early 2025 – factors that could undercut growth-company valuations. Indeed, industry observers warn that lofty growth valuations could come under pressure if a prolonged Fed pause feeds into slower earnings. “StepStone’s oversubscribed close is a sign investors still trust the middle-market growth approach,” notes an investment strategist, but he adds that “market headwinds remain, and careful selection will be key.”

  • Sam Altman’s decision to scrap OpenAI’s for-profit plan can be seen as a win for Elon Musk

    Sam Altman’s decision to scrap OpenAI’s for-profit plan can be seen as a win for Elon Musk

    SAN FRANCISCO — ChatGPT maker OpenAI will remain under the control of its founding nonprofit board after abandoning a plan to split off its commercial operations as a for-profit company.

    Former employees and Elon Musk, a co-founder of OpenAI who later split with its leaders, had criticized the restructuring plan, saying it would remove crucial oversight of its artificial intelligence technology. Musk filed a lawsuit seeking to block the move; the suit is ongoing.

    OpenAI’s new plan seeks a compromise between allegations that it was set to abandon its original mission of benefiting humanity and the claims of company leaders that it must raise more money and deliver profits to investors to compete in the race to advance AI.

    It is unclear how the change will alter OpenAI’s operations, but it offers a fillip to Musk, who has waged a public war against the company that he co-founded but now competes against with his AI venture xAI. In addition to his lawsuit, the billionaire has publicly criticized OpenAI CEO Sam Altman.

    Musk’s lead attorney in the lawsuit, Marc Toberoff, in a statement late Monday dismissed the new plan as “sleight of hand” that “changes nothing.” “OpenAI’s announcement is a transparent dodge that fails to address the core issues: charitable assets have been and still will be transferred for the benefit of private persons,” he said, including Altman and OpenAI investors, such as Microsoft.

    OpenAI’s nonprofit board, pledged to ensure that supersmart AI benefits all of humanity, will now retain ultimate control of its operations. But the company will remove limitations it placed on the maximum returns investors could receive from investing in its for-profit arm. That division, which develops ChatGPT, will become a public benefit corporation, allowing it to seek profits while serving a particular mission.

    In a call with reporters Monday, Altman said that once completed, the new plan will let the company receive the full $30 billion investment recently announced by Japanese conglomerate SoftBank. The deal valued OpenAI at $300 billion, making it one of the most valuable private companies in history, but had terms linked to changes in OpenAI’s structure.

    Being able to grow and raise more money will enable OpenAI to deliver on its mission of ensuring that AI benefits all of humanity, Altman said. “We are obsessed with our mission,” he said. “We believe the structure works for that.”

    Altman said in a letter to employees provided to reporters Monday that the previous restructuring plan was abandoned “after hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware.”

    OpenAI is still talking to the attorneys general of the two states, which have to sign off on changes to nonprofit companies. The company is incorporated in Delaware but has most of its operations in California.

    In response to a question from The Washington Post, a spokesperson for California Attorney General Rob Bonta said the state’s department of justice was reviewing the new plan. “This remains an ongoing matter — and we are in continued conversations with OpenAI,” the spokesperson said.

    Jill Horwitz, an expert in nonprofit law and a professor at Northwestern University, said state officials would be expected to have a role in OpenAI’s restructuring. “It makes sense that the board would have thought through such a major change to the nonprofit structure in conversation with the regulators,” she said.

    It is unclear whether the nonprofit board’s oversight of OpenAI’s operations will remain unchanged, Horwitz said. “Without more detail, however, it’s difficult to know what control means,” she said.

    Monday’s announcement was the latest abrupt change at a company that since its founding in 2015 has grown to huge influence but has also been roiled by internal drama.

    OpenAI was founded by tech luminaries including Altman and Musk to counterbalance tech corporations such as Google as they developed more powerful AI software. The nonprofit’s leaders soon realized they needed more resources to compete with the tech giants, but disagreed about how to secure them.

    Musk initially bankrolled OpenAI but split from the company after his suggestion that he take full control was rejected by Altman and others.

    Altman began taking on huge investment from Microsoft to keep up with the costs of AI development, and oversaw the launch of ChatGPT. But he was briefly ousted by OpenAI’s nonprofit board in 2023, an episode that contributed to company leaders deciding that it needed a more conventional structure.

    OpenAI reconstituted its board and promised investors more stability, but over the past year several senior leaders and other employees quit the company, including its chief scientist and chief technology officer. Some departing employees accused the company of skimping on tests and other work needed to prevent OpenAI’s technology from causing harm.

    Former OpenAI employee Page Hedley, who helped organize a letter calling on the company to remain under nonprofit control, said on Monday that he welcomes its change of plans, but still has questions.

    “Will OpenAI’s commercial goals continue to be legally subordinate to its charitable mission, which is enforceable by the attorneys general? Who will own the technology that OpenAI develops?” Hedley said in an emailed statement.