Tag: AI

  • Steve Jobs Once Described Designer Jony Ive as His ‘Spiritual Partner’ at Apple — Now OpenAI Has Acquired Ive’s Tech Startup for $6.4 Billion

    OpenAI CEO Sam Altman called Jony Ive “the greatest designer in the world” on Wednesday after announcing his company’s plan to buy Ive’s artificial intelligence device startup io, in a deal worth $6.4 billion.

    The deal signals OpenAI’s intention to build consumer devices, likely meant to get more people using its AI services regularly. Altman and Ive have stayed mum on the specific products they’re planning to roll out, and when, but their partnership shows that OpenAI is taking a big swing: Steve Jobs once described Ive as his “spiritual partner at Apple” and a “wickedly intelligent person in all ways,” according to Walter Isaacson’s 2011 biography of the Apple co-founder.

    Ive, 58, served as Apple’s chief design officer until 2019 and spent nearly three decades designing some of the tech giant’s most iconic pieces of hardware, from the iMac and MacBook to the iPhone, iPod and iPad. Born in London, he joined Apple in 1992, five years before Jobs returned as CEO to the company he co-founded.

    Jobs quickly found a kindred spirit in Ive, later telling Isaacson that the pair typically conceived most of Apple’s new products together, before pulling in other collaborators: “[Ive] understands business concepts, marketing concepts … He gets the big picture as well as the most infinitesimal details about each product.”

    When Jobs died in 2011, Ive delivered his eulogy, calling his former boss his “closest and most loyal friend.”

    Ive’s first collaboration with Jobs came on the colorful line of iMac personal computers released in 1998, for which the designer created striking features like a translucent plastic case and a handle on the back of the computer. Later, Ive’s focus shifted toward making products like the iPod and iPhone sleek, stylish and easy to use.

    Ive also led the design of the Apple Watch and Apple’s AirPod earbuds. “The difference that Jony has made, not only at Apple but in the world, is huge,” Jobs told Isaacson.

    When Ive left Apple in 2019 to launch his own independent design firm, LoveFrom, analysts at Deutsche Bank told CNBC News that the tech company was losing “one of [its] most important people.”

    What could Ive design for OpenAI?

    Altman is tasking Ive with trying to capture some of Apple’s magic, writing in a statement that Ive “will assume deep design and creative responsibilities across OpenAI and io.” The pair first agreed to work together on building a piece of AI-powered hardware two years ago, The New York Times reported in September.

    It’s unclear exactly what types of products will result from the partnership. Their vision is for “a product that uses AI to create a computing experience that is less socially disruptive than the iPhone,” the Times wrote. They also want to “help wean users from screens,” and are wary of tech wearables like smart glasses, The Wall Street Journal reported on Wednesday.

    Altman was an investor in startup Humane’s AI pin, a small, voice-controlled device users could wear on their lapel and use for phone calls, texts and search queries. The product was released in 2024 to a poor reception and was discontinued when the company began winding down operations in February.

    Ive and Altman could be working on something similar to the AI pin, but slightly larger and worn around users’ necks, Apple analyst Ming-Chi Kuo wrote on social media platform X on Thursday. The product, which would connect with smartphones but have no display — not unlike AirPods, in that way — could begin production in 2027, Kuo predicted.

    In the past, Ive has said that he relishes the opportunity to design new types of devices that don’t already exist in the world.

    “I love working within such a relatively new product category. The opportunities are remarkable as you can be working on just one product that can instantly shatter an entire history of product types and implicated systems,” Ive told the British Council’s Design Museum in a 2005 interview. He pointed to the iPod as an example of a product that “clearly [turned] our users’ previous experience and understanding of storing and listening to music upside down.”

  • Deepfake Laws Lead to Prosecution and Penalties — and Some Pushback

    Pennsylvania’s attorney general recently accused a police officer of taking photos in a women’s locker room, secretly filming people while on duty and possessing a stolen handgun. But he was unable to bring charges related to a cache of photos found on the officer’s work computer featuring lurid images of minors created by artificial intelligence. When the computer was seized, in November, creating digital fakes was not yet considered a crime.

    Since then, a statewide ban on such content has taken effect. While it came too late to apply to the police officer’s case, the state’s attorney general, Dave Sunday, has already used the law to charge another man who was accused of having 29 files of A.I.-generated child sexual abuse material in his home.

    Over the past two years, American legislators have grown increasingly alarmed by the threat of malicious deepfakes. Sexual images of middle school students have been digitally faked without their permission. Vice President JD Vance disavowed an almost certainly inauthentic clip that mimicked his voice to criticize Elon Musk. An ad featuring an A.I.-generated version of the actress Jamie Lee Curtis was removed from Instagram only after she posted a public complaint.

    Legislators are responding. Already this year, 26 laws governing various kinds of deepfakes have been enacted, following 80 in 2024 and 15 in 2023, according to the political database Ballotpedia. This month in Tennessee, sharing deepfake sexual images without permission became a felony that carries up to 15 years of prison time and as much as $10,000 in fines. Iowa enacted two bills related to sexually explicit deepfakes last year, one of which established sexual images of children generated by A.I. as a felony punishable by up to five years in prison and a $10,245 fine for the first offense. In New Jersey, a recently approved ban on malicious deepfakes could result in a fine of up to $30,000 and prison time.

    California has been especially aggressive in reacting to deepfakes, passing eight related bills in September alone, including five on a single day.

    Academy Award-winning actress Jamie Lee Curtis poses with her Oscar trophy, the morning after her win at the 95th Oscars ceremony, at the Beverly Hills Hotel in 2023. (Jay L. Clendenin / Los Angeles Times)

    “We’re in a very dangerous time, and we’re playing defense on everything that we do,” said Josh Lowenthal, a Democrat in the California Assembly, while introducing a session last week in Sacramento on the dangers of deepfakes.

    Mr. Lowenthal, who co-sponsored a recently introduced bill targeting sexually explicit deepfake material, later watched a demonstration of the technology spit out a realistic image of him in a prison cell and produce a fake news story about comments he never made.

    “I would’ve thought that was me,” he said after hearing deepfake audio of his voice, generated on the spot.

    Reining in deepfakes has also become a federal priority, and a markedly bipartisan one. Congress overwhelmingly passed the Take It Down Act, which criminalizes the nonconsensual sharing of sexually explicit photos and videos, including A.I. content, and requires tech platforms to quickly remove the content once they are notified. President Trump signed the bill in the White House Rose Garden on Monday, accompanied by his wife, Melania, who backed the legislation.

    But lawmakers’ enthusiasm for deepfake legislation has also set off a surge of pushback. Critics complain that many of the laws stifle free speech, constrain American competitiveness and are so complicated to enforce that they are, in effect, toothless.

    Because of those concerns, some Republicans in Congress are trying to curb the state actions. They are now considering a 10-year moratorium that would stop states from enforcing and passing legislation related to artificial intelligence, giving the federal government sole regulatory authority and lessening the pressure on A.I. companies. Soon after re-entering office, Mr. Trump revoked an executive order from his predecessor that sought to ensure the technology’s safety and transparency, issuing his own executive order that decried “barriers to American A.I. innovation” and pushed the United States “to retain global leadership” in the field.

    Regulating artificial intelligence requires balance, said Representative Josh Gottheimer, a Democrat from New Jersey who has helped write multiple deepfake bills. For all its potential dangers, he said, the technology could also become a powerful engine for job creation and creative expression.

    “It’s an ever-evolving space,” said Mr. Gottheimer, a candidate for governor who last month posted a video that featured, with a disclosure, a digitally generated version of himself boxing with Mr. Trump. “The key is making sure that people are protected as we harness the opportunities here.”

    Some state laws have also been challenged in court. In California, a conservative YouTube creator who posted an edited campaign video spoofing former Vice President Kamala Harris’s voice sued the attorney general last fall over two laws focused on election-related deepfakes. His argument: The regulations force social media companies to censor protected political speech, including parodies, and allow anybody to sue over content that he or she dislikes.

    The lawsuit now includes plaintiffs such as The Babylon Bee, a right-wing satirical site; Rumble, the right-wing streaming platform; and X, the social media company owned by Mr. Musk (which last month also sued Minnesota over a similar law). A federal judge ordered that enforcement of one of the California laws be temporarily paused, saying it “acts as a hammer instead of a scalpel.”

    In Dubuque County, Iowa, Sheriff Joseph L. Kennedy is assisting a local police department with a case involving male high schoolers who shared images of female students’ faces attached to artificially generated nude bodies. (Facebook)

    Litigation isn’t the only challenge to regulating deepfakes. In Dubuque County, Iowa, Sheriff Joseph L. Kennedy is assisting a local police department with a case involving male high schoolers who shared images of female students’ faces attached to artificially generated nude bodies.

    Such cases are time-consuming to work through, requiring careful documentation, data preservation efforts, subpoenas and search warrants for devices, Sheriff Kennedy said. Occasionally, the companies behind the websites or apps that people use to make A.I. images are uncooperative, especially if they are based in a country where an Iowa law has no power, he said.

    “That’s where you can hit snags and are short on options for what you can do,” he said. “Sometimes, it just seems like we’re chasing our tails.”

    First lady Melania Trump has used AI to record her audiobook. (AP)

    While most deepfake bans are focused on sexual, political or artistic content, the technology also has banks and other businesses on high alert. Michael S. Barr, a member of the Federal Reserve’s board of governors, said in a speech last month that the technology “has the potential to supercharge identity fraud.”

    One deepfake scam bilked Arup, a British design and engineering company that worked on the Sydney Opera House and Beijing’s Bird’s Nest stadium, out of $25 million last year. Fraudsters also tried to target Ferrari last summer, using WhatsApp messages that mimicked the southern Italian accent of the automaker’s chief executive.

    “If this technology becomes cheaper and more broadly available to criminals — and fraud detection technology does not keep pace — we are all vulnerable to a deepfake attack,” Mr. Barr said.

  • New Claude Model Prompts Tighter Safeguards at Anthropic

    Today’s newest AI models might be capable of helping would-be terrorists create bioweapons or engineer a pandemic, according to the chief scientist of the AI company Anthropic.

    Anthropic has long been warning about these risks—so much so that in 2023, the company pledged to not release certain models until it had developed safety measures capable of constraining them.

    Now this system, called the Responsible Scaling Policy (RSP), faces its first real test.

    On Thursday, Anthropic launched Claude Opus 4, a new model that, in internal testing, performed more effectively than prior models at advising novices on how to produce biological weapons, says Jared Kaplan, Anthropic’s chief scientist. “You could try to synthesize something like COVID or a more dangerous version of the flu—and basically, our modeling suggests that this might be possible,” Kaplan says.

    Accordingly, Claude Opus 4 is being released under stricter safety measures than any prior Anthropic model. Those measures—known internally as AI Safety Level 3 or “ASL-3”—are appropriate to constrain an AI system that could “substantially increase” the ability of individuals with a basic STEM background to obtain, produce or deploy chemical, biological or nuclear weapons, according to the company. They include beefed-up cybersecurity measures, jailbreak preventions, and supplementary systems to detect and refuse specific types of harmful behavior.

    To be sure, Anthropic is not entirely certain that the new version of Claude poses severe bioweapon risks, Kaplan tells TIME. But Anthropic hasn’t ruled that possibility out either.

    “If we feel like it’s unclear, and we’re not sure if we can rule out the risk—the specific risk being uplifting a novice terrorist, someone like Timothy McVeigh, to be able to make a weapon much more destructive than would otherwise be possible—then we want to bias towards caution, and work under the ASL-3 standard,” Kaplan says. “We’re not claiming affirmatively we know for sure this model is risky … but we at least feel it’s close enough that we can’t rule it out.” 

    If further testing shows the model does not require such strict safety standards, Anthropic could lower its protections to the more permissive ASL-2, under which previous versions of Claude were released, he says.

    Jared Kaplan, co-founder and chief science officer of Anthropic, on Tuesday, Oct. 24, 2023. (Chris J. Ratcliffe/Bloomberg/Getty Images)

    This moment is a crucial test for Anthropic, a company that claims it can mitigate AI’s dangers while still competing in the market. Claude is a direct competitor to ChatGPT, and brings in over $2 billion in annualized revenue. Anthropic argues that its RSP thus creates an economic incentive for itself to build safety measures in time, lest it lose customers as a result of being prevented from releasing new models. “We really don’t want to impact customers,” Kaplan told TIME earlier in May while Anthropic was finalizing its safety measures. “We’re trying to be proactively prepared.”

    But Anthropic’s RSP—and similar commitments adopted by other AI companies—are all voluntary policies that could be changed or cast aside at will. The company itself, not regulators or lawmakers, is the judge of whether it is fully complying with the RSP. Breaking it carries no external penalty, besides possible reputational damage. Anthropic argues that the policy has created a “race to the top” between AI companies, causing them to compete to build the best safety systems. But as the multi-billion dollar race for AI supremacy heats up, critics worry the RSP and its ilk may be left by the wayside when they matter most. 

    Still, in the absence of any frontier AI regulation from Congress, Anthropic’s RSP is one of the few existing constraints on the behavior of any AI company. And so far, Anthropic has kept to it. If Anthropic shows it can constrain itself without taking an economic hit, Kaplan says, it could have a positive effect on safety practices in the wider industry.

    Anthropic’s new safeguards

    Anthropic’s ASL-3 safety measures employ what the company calls a “defense in depth” strategy—meaning there are several different overlapping safeguards that may be individually imperfect, but together combine to prevent most threats.
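    The layered-safeguard idea can be sketched with a toy probability model: if each layer independently catches only part of the harmful attempts, the chance that an attempt slips past every layer shrinks multiplicatively. The layer names and catch rates below are illustrative assumptions, not Anthropic's figures.

```python
# Toy "defense in depth" model: each imperfect layer catches only some
# attempts, but independent layers multiply down the combined miss rate.
# Catch rates are illustrative assumptions, not Anthropic's numbers.

def combined_miss_rate(catch_rates):
    """Probability an attempt slips past every layer, assuming independence."""
    miss = 1.0
    for rate in catch_rates:
        miss *= (1.0 - rate)
    return miss

# Three hypothetical layers: classifiers, jailbreak monitoring, refusal training.
layers = [0.90, 0.80, 0.95]
print(f"combined miss rate: {combined_miss_rate(layers):.4f}")  # 0.1 * 0.2 * 0.05
```

    Even with each layer missing 5 to 20 percent of attempts on its own, the stacked system in this sketch lets through only about one attempt in a thousand.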

    One of those measures is called “constitutional classifiers”: additional AI systems that scan a user’s prompts and the model’s answers for dangerous material. Earlier versions of Claude already had similar systems under the lower ASL-2 level of security, but Anthropic says it has improved them so that they can detect people who might be trying to use Claude to, for example, build a bioweapon. These classifiers are specifically targeted to detect the long chains of specific questions that somebody building a bioweapon might try to ask.
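    The screening pattern described above, a classifier pass over both the prompt and the answer, can be sketched roughly as follows. The function names and the trivial keyword heuristic are hypothetical stand-ins, not Anthropic's implementation, which uses learned classifier models.

```python
# Hypothetical sketch of prompt/response screening with a safety classifier.
# A trivial keyword heuristic stands in for a real learned classifier;
# everything here is illustrative, not Anthropic's code.

FLAGGED_TOPICS = ("synthesize pathogen", "enhance transmissibility")

def safety_classifier(text: str) -> bool:
    """Return True if the text looks like a disallowed request (toy heuristic)."""
    lowered = text.lower()
    return any(topic in lowered for topic in FLAGGED_TOPICS)

def screened_answer(prompt: str, model) -> str:
    # Scan the user's prompt before the model ever sees it...
    if safety_classifier(prompt):
        return "Request refused by safety classifier."
    answer = model(prompt)
    # ...and scan the model's answer before it reaches the user.
    if safety_classifier(answer):
        return "Response withheld by safety classifier."
    return answer

# Usage with a dummy model that just echoes the prompt:
echo_model = lambda p: f"You asked: {p}"
print(screened_answer("How do I synthesize pathogen X?", echo_model))
print(screened_answer("What is the capital of France?", echo_model))
```

    Screening both sides of the exchange is what lets such a system catch not just a bad question but a model answer that drifts into disallowed territory.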

    Anthropic has tried not to let these measures hinder Claude’s overall usefulness for legitimate users—since doing so would make the model less helpful compared to its rivals. “There are bioweapons that might be capable of causing fatalities, but that we don’t think would cause, say, a pandemic,” Kaplan says. “We’re not trying to block every single one of those misuses. We’re trying to really narrowly target the most pernicious.”

    Another element of the defense-in-depth strategy is the prevention of jailbreaks—or prompts that can cause a model to essentially forget its safety training and provide answers to queries that it might otherwise refuse. The company monitors usage of Claude, and “offboards” users who consistently try to jailbreak the model, Kaplan says. And it has launched a bounty program to reward users for flagging so-called “universal” jailbreaks, or prompts that can make a system drop all its safeguards at once. So far, the program has surfaced one universal jailbreak which Anthropic subsequently patched, a spokesperson says. The researcher who found it was awarded $25,000.

    Anthropic has also beefed up its cybersecurity, so that Claude’s underlying neural network is protected against theft attempts by non-state actors. The company still judges itself to be vulnerable to nation-state level attackers—but aims to have cyberdefenses sufficient for deterring them by the time it deems it needs to upgrade to ASL-4: the next safety level, expected to coincide with the arrival of models that can pose major national security risks, or which can autonomously carry out AI research without human input.

    Lastly, the company has conducted what it calls “uplift” trials, designed to quantify how significantly an AI model without the above constraints can improve the abilities of a novice attempting to create a bioweapon, when compared to other tools like Google or less advanced models. In those trials, which were graded by biosecurity experts, Anthropic found Claude Opus 4 presented a “significantly greater” level of performance than both Google search and prior models, Kaplan says.

    Anthropic’s hope is that the several safety systems layered over the top of the model—which has already undergone separate training to be “helpful, honest and harmless”—will prevent almost all bad use cases. “I don’t want to claim that it’s perfect in any way. It would be a very simple story if you could say our systems could never be jailbroken,” Kaplan says. “But we have made it very, very difficult.”

    Still, by Kaplan’s own admission, only one bad actor would need to slip through to cause untold chaos. “Most other kinds of dangerous things a terrorist could do—maybe they could kill 10 people or 100 people,” he says. “We just saw COVID kill millions of people.”

  • Dell Aims to Be the Go-To Source for Enterprise AI Infrastructure

    Michael Dell is pitching a “decentralized” future for artificial intelligence that his company’s devices will make possible.   

    “The future of AI will be decentralized, low-latency, and hyper-efficient,” predicted the Dell Technologies founder, chairman, and CEO in his Dell World keynote, which you can watch on YouTube. “AI will follow the data, not the other way around,” Dell said at Monday’s kickoff of the company’s four-day customer conference in Las Vegas.

    Dell is betting that the complexity of deploying generative AI on-premise is driving companies to embrace a vendor with all of the parts, plus 24-hour-a-day service and support, including monitoring.

    On day two of the show, Dell chief operating officer Jeffrey Clarke noted that Dell’s survey of enterprise customers shows 37% want an infrastructure vendor to “build their entire AI stack for them,” adding, “We think Dell is becoming an enterprise’s ‘one-stop shop’ for all AI infrastructure.”

    Dell’s new offerings include products meant for so-called edge computing, that is, inside customers’ premises rather than in the cloud. For example, the Dell AI Factory is a managed service for AI on-premise, which Dell claims can be “up to 62% more cost-effective for inferencing LLMs on-premises than the public cloud.”

    Dell brands one offering of its AI Factory with Nvidia to showcase the chip giant’s offerings. That includes, most prominently, revamped PowerEdge servers, running as many as 256 Nvidia Blackwell Ultra GPU chips, and some configurations that run the Grace-Blackwell combination of CPU and GPU.

    Future versions of the PowerEdge servers will support the next versions of Nvidia CPU and GPU, Vera and Rubin, said Dell, without adding more detail. 

    Dell also unveiled new networking switches running on either Nvidia’s Spectrum-X networking silicon or Nvidia’s InfiniBand technology. All of these parts, the PowerEdge servers and the network switches, conform to the standardized design that Nvidia has laid out as the Nvidia Enterprise AI factory.

    A second batch of updated PowerEdge machines will support AMD’s competing GPU family, the Instinct MI350. Both PowerEdge flavors come in configurations with either air cooling or liquid cooling.

    Complementing the Factory servers and switches are data storage enhancements, including updates to the company’s network-attached storage appliance, the PowerScale family, and the object-based storage system, ObjectScale. Dell introduced what it calls PowerScale Cybersecurity Suite, software designed to detect ransomware, and what Dell calls an “airgap vault” that keeps immutable backups separate from production data, to “ensure your critical data is isolated and safe.” 

    The ObjectScale products gain support for remote direct memory access (RDMA), for use with Amazon’s S3 object storage protocol. The technology more than triples the throughput of data transfers, said Dell, lowers the latency of transfers by 80%, and can reduce the load on CPUs by 98%.

    “This is a game changer for faster AI deployments,” the company claimed. “We’ll leverage direct memory transfers to streamline data movement with minimal CPU involvement, making it ideal for scalable AI training and inference.”

    Dell AI Factory also emphasizes the so-called AI PC, workstations tuned for running inference. That includes a new laptop running a Qualcomm circuit board, the AI 100 PC inference card. It is meant to make local predictions with Gen AI without having to go to a central server. 

    The Dell Pro Max Plus laptop is “the world’s first mobile workstation with an enterprise-grade discrete NPU,” meaning a standalone chip for neural network processing, according to Dell’s analysis of workstation makers.

    The Pro Max Plus is expected to be available later this year.

    A number of Dell software offerings were put forward to aid the idea of the decentralized, “disaggregated” AI infrastructure. 

    For example, the company made an extensive pitch for its file management software, Project Lightning, which it calls “the world’s fastest parallel file system per new testing,” and which it said can achieve “up to two times greater throughput than competing parallel file systems.” That’s important for inference operations that must rapidly intake large amounts of data, the company noted.

    Also in the software bucket is what Dell calls its Dell Private Cloud software, which is meant to let customers move between different software offerings for running servers and storage, including Broadcom’s VMware hypervisors, Nutanix’s hyper-converged offering, and IBM Red Hat’s competing offerings.

    The company claimed Dell Private Cloud’s automation capabilities can allow customers to “provision a private cloud stack in 90% fewer steps than manual processes, delivering a cluster in just two and a half hours with no manual effort.”

  • Sam Altman’s decision to scrap OpenAI’s for-profit plan can be seen as a win for Elon Musk

    SAN FRANCISCO — ChatGPT maker OpenAI will remain under the control of its founding nonprofit board after abandoning a plan to split off its commercial operations as a for-profit company.

    Former employees and Elon Musk, a co-founder of OpenAI who later split with its leaders, had criticized the restructuring plan, saying it would remove crucial oversight of its artificial intelligence technology. Musk filed a lawsuit seeking to block the move; the suit is ongoing.

    OpenAI’s new plan seeks a compromise between allegations it was set to abandon its original mission of benefiting humanity and the claims of company leaders that it must raise more money and deliver profits to investors to compete in the race to advance AI.

    It is unclear how the change will alter OpenAI’s operations, but it offers a fillip to Musk, who has waged a public war against the company that he co-founded but now competes against with his AI venture xAI. In addition to his lawsuit, the billionaire has publicly criticized OpenAI CEO Sam Altman.

    Musk’s lead attorney in the lawsuit, Marc Toberoff, in a statement late Monday dismissed the new plan as “sleight of hand” that “changes nothing.” “OpenAI’s announcement is a transparent dodge that fails to address the core issues: charitable assets have been and still will be transferred for the benefit of private persons,” he said, including Altman and OpenAI investors, such as Microsoft.

    OpenAI’s nonprofit board, pledged to ensure that supersmart AI benefits all of humanity, will now retain ultimate control of its operations. But the company will remove limitations it placed on the maximum returns investors could receive from investing in its for-profit arm. That division, which develops ChatGPT, will become a public benefit corporation, allowing it to seek profits while serving a particular mission.

    In a call with reporters Monday, Altman said that once completed, the new plan will let the company receive the full $30 billion investment recently announced by Japanese conglomerate SoftBank. The deal valued OpenAI at $300 billion, making it one of the most valuable private companies in history, but had terms linked to changes in OpenAI’s structure.

    Being able to grow and raise more money will enable OpenAI to deliver on its mission of ensuring that AI benefits all of humanity, Altman said. “We are obsessed with our mission,” he said. “We believe the structure works for that.”

    Altman said in a letter to employees provided to reporters Monday that the previous restructuring plan was abandoned “after hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware.”

    OpenAI is still talking to the attorneys general of the two states, which have to sign off on changes to nonprofit companies. The company is incorporated in Delaware but has most of its operations in California.

    In response to a question from The Washington Post, a spokesperson for California Attorney General Rob Bonta said the state’s department of justice was reviewing the new plan. “This remains an ongoing matter — and we are in continued conversations with OpenAI,” the spokesperson said.

    Jill Horwitz, an expert in nonprofit law and a professor at Northwestern University, said state officials would be expected to have a role in OpenAI’s restructuring. “It makes sense that the board would have thought through such a major change to the nonprofit structure in conversation with the regulators,” she said.

    It is unclear whether the nonprofit board’s oversight of OpenAI’s operations will remain unchanged, Horwitz said. “Without more detail, however, it’s difficult to know what control means,” she said.

    Monday’s announcement was the latest abrupt change at a company that since its founding in 2015 has grown to huge influence but has also been roiled by internal drama.

    OpenAI was founded by tech luminaries including Altman and Musk to counterbalance tech corporations such as Google as they developed more powerful AI software. The nonprofit’s leaders soon realized they needed more resources to compete with the tech giants, but disagreed about how to secure them.

    Musk initially bankrolled OpenAI but split from the company after his suggestion that he take full control was rejected by Altman and others.

    Altman began taking on huge investment from Microsoft to keep up with the costs of AI development, and oversaw the launch of ChatGPT. But he was briefly ousted by OpenAI’s nonprofit board in 2023, an episode that contributed to company leaders deciding that it needed a more conventional structure.

    OpenAI reconstituted its board and promised investors more stability, but over the past year several senior leaders and other employees quit the company, including its chief scientist and chief technology officer. Some departing employees accused the company of skimping on tests and other work needed to prevent OpenAI’s technology from causing harm.

    Former OpenAI employee Page Hedley, who helped organize a letter calling on the company to remain under nonprofit control, said on Monday that he welcomes its change of plans, but still has questions.

    “Will OpenAI’s commercial goals continue to be legally subordinate to its charitable mission, which is enforceable by the attorneys general? Who will own the technology that OpenAI develops?” Hedley said in an emailed statement.

  • What’s driving the rush of companies eager to acquire Chrome?

    ChatGPT creator OpenAI and Yahoo would like to buy Google’s Chrome web browser if a federal judge orders a sale of the internet’s most popular gateway.

    The interest of these companies emerged this week during a trial that will determine whether Alphabet’s Google search empire will be broken up by federal judge Amit Mehta, who ruled last year that Google operated an illegal online search monopoly.

    The Justice Department wants Google to sell its Chrome browser, and potentially its Android operating system, among other remedies.

    Executives from OpenAI and Yahoo both disclosed in court that they would like their names in the mix if Chrome were to become available.

    Brian Provost, Yahoo Search’s general manager, said so Thursday, noting that an acquisition would cost tens of billions of dollars and that the company could fund it with backing from its owner, Apollo Global Management (APO).

    He said Chrome would help boost Yahoo’s market share in search from 3% to double digits, according to The Verge. As of March 2025, Chrome dominated the browser market with a market share of about 66%. Apple’s Safari held roughly 18%, and Microsoft’s Edge held 5%.

    It is “arguably the most important strategic player on the web,” Provost said, according to Bloomberg.

    Under questioning, he also said Yahoo had been working to develop its own prototype browser.

    Executives from artificial intelligence-based search providers also took the stand and said they would have an interest in Chrome if it were up for sale.

    One was Nick Turley, head of product for OpenAI’s artificial intelligence-based search platform ChatGPT.

    Turley said integrating ChatGPT with Chrome could expand OpenAI’s distribution and boost the quality of its search, which currently relies on Microsoft’s Bing search technology. Microsoft is OpenAI’s biggest backer.

    Dmitry Shevelenko, chief business officer for Perplexity AI, also testified that the AI-fueled search startup could effectively run Chrome and that Chrome could boost its growing business.

    However, he cautioned that a buyer could shutter Google’s Chromium, the open-source technology that powers Chrome, which developers use to iterate and build new web browsers and other products.

    For that and other reasons, Google has pushed back against the government’s divestiture proposal.

    A Google representative told Yahoo Finance that forcing it to sell Chrome would jeopardize rival browser providers that rely on Chromium’s open-source code, including Microsoft’s Edge and others, and undermine privacy and security for consumers who use the search tools.

    The trial is expected to conclude on May 9. Judge Mehta is expected to issue a decision by August on how to remedy Google’s anticompetitive practices.

  • The impact of Trump’s tariff policies is highlighting Meta’s expenditures on artificial intelligence

    The impact of Trump’s tariff policies is highlighting Meta’s expenditures on artificial intelligence

    Mark Zuckerberg’s plan is to make Meta the market leader in artificial intelligence. Investors will want to know how President Donald Trump’s tariff-heavy trade policies will affect that strategy.

    Those answers could start to come as soon as this week, as Meta’s AI strategy takes center stage: the company hosts its first Llama-branded conference for AI developers on Tuesday, then reports its latest quarterly earnings the next day.

    Already, tech companies are starting to talk about the potential impact they’re bracing for as a result of the Trump tariffs. 

    Intel Chief Financial Officer David Zinsner said Thursday during the chip giant’s first-quarter earnings call that U.S. trade policies “have increased the chance of an economic slowdown, with the probability of a recession growing.” Meanwhile, Google CFO Anat Ashkenazi said that day during a first-quarter earnings call that the tech giant remains committed to its $75 billion investment in capital expenditures, or capex, this year, but also acknowledged that the “timing of deliveries and construction schedules” could cause some quarter-to-quarter spending fluctuation. 

    For now, analysts expect Meta to follow Google’s lead and remain firm in its plan to spend as much as $65 billion in capex for AI infrastructure this year when it reports earnings Wednesday. Some analysts believe Meta could even raise the figure because AI is a core priority for the company.

    “We do not expect META to cut its CapEx guidance of $60B-$65B in 2025 for its GenAI infrastructure, because they see this as an important 10-year investment, we believe,” Needham analysts wrote in a research note published Wednesday. “However, tariffs add risks of upward cost revisions.”

    Investors will also be monitoring Meta’s LlamaCon event at its Menlo Park, California, headquarters for any signs that its AI investments are having an immediate business impact. This will be the first time Meta hosts a developer conference specifically for its Llama family of AI models.

    “Investors want to see ROI on all these AI investments, and while Meta has shown clear benefits from leveraging AI to improve its products and drive faster revenue growth, it’s been hard to quantify those benefits,” Truist Securities analyst Youssef Squali told CNBC.

    Meta in April released a couple of its new Llama 4 models, which Meta Chief Product Officer Chris Cox previously said can help power so-called AI agents that can perform tasks for users via web browsers and other online interfaces.

    It’s critical that Meta keep improving Llama to create a major business involving AI agents that companies can use to interact with their customers within apps like Facebook and WhatsApp, William Blair research analyst Ralph Schackart said.

    “Meta has an early mover advantage at scale in a multi-trillion dollar market,” Schackart said in an email. “We believe Meta is very well positioned to leverage its billions of global users across multiple platforms.”

    Meta is unlikely to curb its Llama investment anytime soon, but should eventually consider doing so if it fails to generate enough money to justify its costs, said Ken Gawrelski, a Wells Fargo managing director of equity research.

    “We do believe that over time Meta needs to continue to evaluate whether Llama needs to be competitive with the leading-edge models,” Gawrelski said. “This is a very expensive proposition and thus far, unlike Google, Meta does not directly monetize its model in any material way.”

    Chris Cox, Chief Product Officer at Meta Platforms, speaks during The Wall Street Journal’s WSJ Tech Live Conference in Laguna Beach, California on October 17, 2023.(Patrick T. Fallon/AFP/Getty Images)

    Meta AI and the consumer

    Analysts are also following the Meta AI digital assistant. That’s because the ChatGPT rival represents the second pillar of Zuckerberg’s AI strategy. 

    Zuckerberg in January said he believes 2025 “is going to be the year when a highly intelligent and personalized AI assistant reaches more than 1 billion people, and I expect Meta AI to be that leading AI assistant.”

    In February, The Budgets reported that Meta was planning to debut a stand-alone Meta AI app during the second quarter and to test a paid subscription service in which users could pay monthly fees to access more powerful versions, as they can with ChatGPT.

    Although Meta’s enormous user base across its family of apps gives Meta AI an advantage over rivals like ChatGPT in terms of reach, those users may not interact with Meta AI in the same way they do with rival chat apps, said Cantor Fitzgerald analyst Deepak Mathivanan.

    Gawrelski said that people may not want to use Meta AI within Facebook and Instagram if all they want to do is passively watch the short videos that Meta algorithmically recommends to their feeds.

    “This is why a separate Meta AI, where Meta could clearly articulate its use case and value proposition, could be helpful,” Gawrelski said.

    A stand-alone Meta AI app could help the company better market the digital assistant and distinguish it from rivals, said Debra Aho Williamson, founder and chief analyst at Sonata Insights.

    “ChatGPT has such wide brand awareness, that it’s become a moat that is soon going to be very hard to overcome,” Williamson said.

  • Why Pat McAfee’s disturbing new scandal is just the tip of the iceberg

    Why Pat McAfee’s disturbing new scandal is just the tip of the iceberg

    Image Source: NBC News

    Mary Kate Cornett, a then-18-year-old student at the University of Mississippi, moved into emergency campus housing not long after sports talk show host Pat McAfee, whose ESPN show has 2.8 million subscribers on YouTube, spread a wholly unsubstantiated and vicious rumor on a February broadcast about an unnamed freshman on that campus who he said “allegedly” had sex with her boyfriend’s father.

    When a phone number for the teenager, who vehemently denies the rumor, circulated online, she began receiving hateful messages, including messages instructing her to kill herself. In what NBC News confirmed was a “swatting” case, police showed up to Cornett’s mother’s house with their guns drawn. Cornett and her family told NBC News they intend to take legal action against McAfee and against ESPN, which licenses McAfee’s show, for amplifying a nasty rumor that has made her family’s life hell.


    Thus, McAfee is once again embroiled in a conversation about sports media, “journalistic standards” and the responsibility that comes with a platform as enormous as his. Cornett spoke about her ordeal this month, first for a lengthy piece by The Athletic’s Katie Strang, and then later to NBC News’ Tom Llamas.

    Cornett is the victim of a sports media environment that prioritizes salaciousness and seems uninterested in distinguishing between what’s true and what’s false. But as she rightly told NBC News, she’s not a public figure, and McAfee should never have amplified a campus rumor that seems to have originated on Yik Yak, an anonymous, message-based gossip app popular among the college set, before spreading to X. And no responsible adult, especially not one with an audience of millions, should be mining social media for salacious rumors about nonpublic figures to discuss. Even nonjournalists used to agree that some subjects were off-limits, especially private citizens and children.

    McAfee appeared to address the controversy for the first time in a live show Wednesday night, saying he never wants “to be a part of anything negative in anybody’s life,” although he did not elaborate further. Neither McAfee nor ESPN has commented more explicitly about the case, but McAfee’s defenders are quick to note that he didn’t name the woman during the segment and that he repeatedly said “allegedly” — as if that automatically absolves him of responsibility when discussing a nonpublic figure to his millions of followers. In the past, McAfee, who has a history of amplifying misinformation, has repeatedly denied being a journalist and has mocked the idea that he be held to “journalistic standards.”

    There’s therefore a slight irony in his repeated, almost derisive use of the word “allegedly”: It’s a convention almost exclusively used by journalists and, at times, law enforcement and legal professionals, to hedge while discussing alleged crimes. (It should also be noted there’s considerable debate among journalists, especially those of us who often cover gender-based violence, about the use of “allegedly” when covering domestic violence or sexual assault cases; some contend that the word casts disbelief and doubt on accusers.) Still, despite leaning on that common journalistic convention, McAfee insists that he not be held to journalistic standards.

    I’d argue that regardless of the name or size of the platform, everyone with a microphone should have the human decency not to parrot unsubstantiated rumors involving nonpublic figures — especially nonpublic figures who are teenagers. That goes double when you have the institutional backing of an entity like ESPN. But for too long there’s been a blurring of the line between journalists and entertainers within sports media, including at ESPN. Full disclosure: I used to write for ESPN and appear on the network’s shows, and can confidently assert that the network employs numerous journalists and entertainers who are very good at their jobs.

    During the past year, in response to criticisms of McAfee and his apparent allergy to fact-checking, ESPN has said the company does, in fact, “bear some responsibility” for what gets put on its platform. ESPN licenses McAfee’s show, so he’s technically not an employee, although that does not automatically negate any potential legal exposure for ESPN over things McAfee says on its airwaves.   

    In November, MSNBC’s Chris Hayes called out McAfee and NFL quarterback Aaron Rodgers when they cited a made-up stat that claimed Detroit Lions quarterback Jared Goff was 6-0 in games where he’d thrown at least four interceptions. After McAfee and Rodgers credulously spotlighted it, X user MisterCiv, the person who made the original post, wrote, “if you’ve ever wondered how easy it is to spread fake information, i made this stat up while laying in bed at halftime of the game.”

    As Hayes said then, “Thankfully, this is a totally harmless example of disinformation and the only consequence was McAfee getting embarrassed and having to walk it back. But what happened in that exchange between McAfee and Aaron ‘Do your own Research‘ Rodgers is basically the entire story of our information environment right now.”

    But McAfee devoting more than two minutes to discussing a rumor about a father-son-girlfriend love triangle wasn’t harmless. Mary Kate Cornett says his amplification of that lie upended her life.

    We can’t continue to give people a pass on the responsibility that comes with their platforms. Cornett’s case is a stark example of how being flippant and unconcerned with the truth can hurt people, even when they aren’t named.

  • Reddit’s AI Answers Are About to Get Faster and More Accurate

    Reddit’s AI Answers Are About to Get Faster and More Accurate

    Reddit has expanded a partnership with Google to use the tech giant’s Gemini tool and tap artificial intelligence for Reddit Answers. Google is helping Reddit manage information from 100,000 online communities and more than 400 million weekly active users, making their online conversations more searchable for Reddit Answers, Google said on Wednesday. 

    It could mean faster and more accurate search in addition to summaries, follow-up questions and the ability to have an AI conversation about the content, not unlike what users of AI chatbot services such as ChatGPT would expect.

    Reddit unveiled Answers in late 2024, telling its users that it would be easier to search for content from Reddit — more often than not, answers to questions — without the need to go to Google and use its search function. Reddit already had a $60 million deal with Google to help train generative AI models on its content.

    The company has also been expanding its translation services across its communities with the help of AI.

    Reddit’s unique position in tech and AI

    Although Reddit’s AI efforts may be much more visible to people lately, the company has been positioning itself with the technology for a while, said Rowan Curran, a senior analyst at Forrester Research.

    “Reddit’s AI efforts have been ongoing for a number of years across a number of fronts,” Curran said. “For example, they were one of the early companies to develop their own model for code generation and assistance to support Reddit’s developers.”

    While the company is getting some outside assistance with AI, it’s also attractive to companies such as Google and OpenAI because of how much content it owns and generates. 

    “Reddit’s position as both a provider as well as a well of information puts them in a somewhat unique position to attempt to capitalize heavily both on the ingestion and preparation of data for AI use cases, as well as the serving of answers and information created from that data,” he said.

    Reddit’s position as an ongoing generator of deeply human discussion is something that shouldn’t be lost in the rush to implement AI, says author Christine Lagorio, who wrote about the company in her book We Are the Nerds: The Birth and Tumultuous Life of Reddit, the Internet’s Culture Laboratory.

    “I certainly hope it doesn’t diminish Reddit’s strength, which is increasingly its ecosystem of real human answers to real human questions, a welcome contrast from the rest of the increasingly artificial (and artificial-sounding, and artificial-looking) web,” Lagorio said.

  • A Screenless Phone Featuring ChatGPT Might Be on the Way as OpenAI Reportedly Considers Acquiring Jony Ive’s AI Startup.

    A Screenless Phone Featuring ChatGPT Might Be on the Way as OpenAI Reportedly Considers Acquiring Jony Ive’s AI Startup.

    ChatGPT maker OpenAI is reportedly looking into a potential acquisition of an artificial intelligence startup co-founded by former Apple design chief Jony Ive and OpenAI CEO Sam Altman. The deal could exceed $500 million, according to The Information.

    The venture, called io Products, is developing a range of AI-powered technologies including a screenless phone concept and smart home devices, the publication said. io Products has denied that a phone is in development, however, and OpenAI didn’t respond to a request for comment.

    The closely guarded AI hardware initiative was first reported by The New York Times in September. Ive — who is renowned for designing the iPhone, iPad and other iconic Apple products — said he was partnering with Altman to create a new AI-driven computing device aimed at being “less socially disruptive than the iPhone.”

    Although few specifics have emerged about the device, Ive and Altman have reportedly secured early-stage backing from investors, including Laurene Powell Jobs, the widow of Apple co-founder Steve Jobs. Funding was expected to reach $1 billion by the end of last year, according to The New York Times.

    In addition to acquisition talks, OpenAI is said to be exploring strategic partnerships with the venture. If a deal materializes, OpenAI would gain access to both the underlying technology and the core engineering team.

    The report arrives as the AI voice assistant landscape grows, with OpenAI, Google, Meta and others racing to advance their AI chatbot offerings. A deal could also tighten OpenAI’s integration with a hardware player.

    Ive’s design firm, LoveFrom — founded after his departure from Apple five years ago — is spearheading the device’s development. The company, co-founded by renowned luxury designer Marc Newson, a key contributor to the Apple Watch, includes former Apple executives such as Tang Tan, who led iPhone hardware design. LoveFrom’s client list includes brands like Airbnb and Ferrari.

    OpenAI would ‘maintain its lead’ in AI

    Jitesh Ubrani, a manager at market research firm IDC, told CNET a move into hardware would enable OpenAI to continue expanding across various platforms and make a stronger push into more environments.

    “By partnering with a hardware startup, OpenAI can help maintain its lead across these other device types and usage scenarios,” Ubrani said. “Until the launch of AI, smart home hardware innovation [started] to plateau and by combining forces, the two companies could also benefit from growth in this space by injecting AI into the home.”

  • According to the WGA, TV writing positions dropped by 42 percent in the 2023‑24 season.

    According to the WGA, TV writing positions dropped by 42 percent in the 2023‑24 season.

    Even with the 2023 strikes in Hollywood’s rearview mirror, writers are still feeling the pinch.

    On Friday, the Writers Guild of America released new job statistics highlighting recent declines in television-writing jobs across all levels of the hierarchy. Post-Peak TV, those at the top of the profession were the biggest casualties in raw numbers.

    Of the 1,319 fewer TV writer jobs for the 2023-24 season (vs. 2022-23; pre-strikes), 642 jobs were lost — a decline of 40 percent — at the co-executive producer or higher (up to showrunner) level. Lower-level writers (staff writer, story editor, executive story editor) were the next most affected with 378 fewer jobs versus the prior season, down 46 percent. Mid-level positions (co-producer through consulting/supervising producer) declined by 299 (-42 percent).

    All told, there were 1,819 television writing jobs last season, a 42 percent decline from the 2022-23 season. That is far fewer than even the COVID-affected 2019-20 season, which employed 2,722 writers.

    Cord-cutters and corporate greed are to blame, the WGA says.

    “With an industry in transition — cable TV subscriptions and cable programming declining, a massive run-up and then pullback in streaming series as Wall Street demands quicker streaming platform profits — the number of TV jobs has declined,” the WGA’s latest jobs report reads.

    The report said the “studios’ prolonged unwillingness to negotiate a fair deal in 2023” was also to blame as it shortened the 2023-24 TV season.

    The WGA writers strike ran from May to September 2023. The Directors Guild of America reached a deal with media companies, but actors also took to picket lines as the SAG-AFTRA strike ran from July to November. Seasons of scripted shows were trimmed and some pickups were canceled. Approximately 37 percent fewer WGA-covered episodic series aired in 2023-24, per the report.

    The report was sent to WGA members Friday morning by the WGA West board of directors and WGA East council; The Hollywood Reporter obtained the email.

    “Writing careers have always been difficult to access and sustain, but the contraction has made it especially challenging,” the email to members reads. “We are all subject to the decisions of the companies that control this industry, who have pulled back spending on content based on the demands of Wall Street. Compounding that, the current administration seems intent on causing economic chaos and undermining our democracy.”

    Solid WGA data for the still-ongoing 2024-25 television season is months away, the guild said. The WGA’s new contract with the studios should help employment bounce back, at least to some degree.

    It’s not just about needing more jobs, though that’s certainly a part of the WGA’s current mission. The 2023 negotiations were an attempt to thwart downsizing, yes, but also about “ensuring that however many projects the companies make, the jobs are good ones,” a WGA spokesperson told THR for this story.

    Television Writing Jobs Chart

    Television Writing Jobs, by Level

    Job Level | 2018-2019 | 2019-2020 | 2022-2023 | 2023-2024
    Lower Level Jobs (Staff Writer, Story Editor, Exec. Story Editor) | 795 | 741 | 824 | 446
    Mid-Level Jobs (Co-Producer through Consulting/Supervising Producer) | 708 | 649 | 720 | 421
    Upper Level Jobs (Co-EP through Showrunner) | 1,508 | 1,332 | 1,594 | 952
    SOURCE: WRITERS GUILD OF AMERICA

    Lest writers think movies are a safe haven in this post-Peak TV period, they are not. Though the number of WGA-covered films has been pretty stable over the past few years, the number of screenwriters working is down 15 percent. Screenwriter earnings are down 6 percent.

  • OpenAI has filed a lawsuit against Elon Musk, saying he’s not acting in good faith.

    OpenAI has filed a lawsuit against Elon Musk, saying he’s not acting in good faith.

    OpenAI is suing Elon Musk over claims he has tried “nonstop” to slow down its business for his own benefit.

    The company accuses the Tesla boss of using “bad-faith tactics” against OpenAI to help him control cutting-edge AI technology.

    Mr Musk sued OpenAI chief executive Sam Altman last year in a bid to stop him from changing its corporate structure. He co-founded OpenAI with Mr Altman but left several years ago. 

    The countersuit opens up a new front in the high-stakes – and long-running – battle between two Silicon Valley heavyweights, who both say they are acting in the best interests of OpenAI and the public.

    “Elon’s nonstop actions against us are just bad-faith tactics to slow down OpenAI and seize control of the leading AI innovations for his personal benefit,” OpenAI said in a statement on Wednesday. “Today, we countersued to stop him.”

    Last week, a federal judge in Oakland, California, set a March 2026 trial date in Mr Musk’s suit in a bid to fast-track the legal fight.

    US District Judge Yvonne Gonzalez Rogers previously declined to grant Mr Musk an injunction that would temporarily halt OpenAI’s conversion from a non-profit to a for-profit company.

    She also said that she expected Mr Musk to give evidence in the case.

    Mr Musk alleges that OpenAI strayed from its founding mission as a non-profit to develop AI for the benefit of humanity and is therefore in breach of contract.

    He left the company in 2018.

    “This is about control. This is about revenue. It’s basically about one person saying, ‘I want control of that start-up’,” said Ari Lightman, professor of digital media and marketing at Carnegie Mellon University.

    Lightman said it has been a distraction from making AI safe and equitable.

    “That takes a backseat with all this rigmarole over control and monetization,” Lightman said.

    In an X post on Wednesday, OpenAI claimed Mr Musk has “been spreading false information about us,” adding: “Elon’s never been about the mission. He’s always been about his own agenda.”

    Musk’s xAI is a competitor to OpenAI, but has so far lagged behind. Last month, xAI acquired Musk’s social media platform X – formerly Twitter.

    Mr Musk claims the combined company, XAI Holdings, is valued at more than $100 billion.

    In February, Mr Musk made an unsolicited bid for OpenAI, offering to buy it for $97.4 billion, which Mr Altman rejected by posting: “no thank you but we will buy twitter for $9.74 billion if you want.”

    In a statement to the BBC, Mr Musk’s lawyer Marc Toberoff said: “Had OpenAI’s Board genuinely considered the bid, as they were obligated to do, they would have seen just how serious it was.”

    “It’s apparent they prefer to negotiate with themselves on both sides of the table than engage in a bona fide transaction in the best interests of the charity and the public,” Mr Toberoff added.