Author: Eldin Yovlz

  • SpaceX Pushes for Early Index Inclusion Ahead of Potential IPO

    Elon Musk’s SpaceX is seeking an early boost for shares after the rocket-and-satellite business makes its stock market debut later this year.

    Advisers for the company, which recently merged with xAI, have reached out to major index providers, including Nasdaq, to discuss how SpaceX and this year’s other hot startups might join key indexes sooner than normal, according to people familiar with the matter.

    Companies typically must wait several months or a year after their public debut before gaining inclusion in a major index such as the S&P 500 or the Nasdaq 100. Inclusion unlocks retail and institutional capital, particularly from index funds that must hold every company in the benchmark they track.

    The traditional waiting period is intended to give the companies time to demonstrate that they are stable and liquid enough to handle extensive buying from index funds.
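    The mechanics behind that buying pressure are simple arithmetic. A back-of-envelope sketch, using an assumed $10 trillion of index-tracking assets and a hypothetical 1% index weight (neither figure is from the article):

```python
# Hypothetical sketch: estimate the buying that index inclusion forces from
# funds obligated to hold every constituent at its index weight.

def forced_buying(tracking_aum: float, index_weight: float) -> float:
    """Dollars that index-tracking funds must buy when a stock enters
    the index at the given weight."""
    return tracking_aum * index_weight

# Assume $10 trillion tracks the index and the new entrant gets a 1% weight.
demand = forced_buying(10e12, 0.01)
print(f"${demand / 1e9:.0f} billion of index-fund demand")  # $100 billion
```

    The same formula explains why stability and liquidity matter: that demand arrives regardless of the stock's trading volume.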

    SpaceX hopes to skirt those traditional rules to bring liquidity to its shareholders sooner as part of its planned IPO. Its advisers have sought index-policy changes that would fast-track the company’s entry into major indexes and benefit other highly valued private companies, the people said.

    Last valued at $800 billion, SpaceX is targeting a valuation of more than $1 trillion in a listing that would be the largest-ever U.S. IPO.

    Elon Musk. © Al Drago/Bloomberg

    Investors and advisers to companies planning to go public this year are concerned not only about initial trading, but also that the standard six-month lockup period—which prevents early investors, executives and employees from selling their stock—might prompt significant selling that pressures shares. After Meta went public in 2012, shares sank when early investors unloaded all at once.

    SpaceX is exploring ways to better balance supply and demand to avoid that outcome, some of the people said.

    Advocates of index methodology changes have said that by allowing newly public companies earlier entry to key indexes, individual investors, who have famously missed out on the big gains in private markets, could secure earlier exposure via popular exchange-traded funds and index funds.

    Earlier this week, the Nasdaq Stock Market shared proposals to update some of the Nasdaq 100 index methodology and asked for feedback from market participants.

    Among the proposals is a potential “fast entry” process. Under this option, companies whose market capitalizations rank in the top 40 of the Nasdaq 100’s constituents could be added to the index after 15 trading days; currently, companies must wait at least three months. At their current valuations, SpaceX, OpenAI and Anthropic would all qualify.
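    The proposed eligibility test can be sketched in a few lines. The constituent market caps below are illustrative placeholders, not real Nasdaq 100 data:

```python
# Sketch of the proposed Nasdaq 100 "fast entry" test: a newly listed company
# qualifies if its market cap would rank among the index's top 40 constituents.

def qualifies_fast_entry(candidate_cap: float,
                         constituent_caps: list[float],
                         top_n: int = 40) -> bool:
    """True if candidate_cap exceeds the top_n-th largest constituent."""
    cutoff = sorted(constituent_caps, reverse=True)[top_n - 1]
    return candidate_cap > cutoff

# Illustrative constituents: 100 caps ranging from $50B to ~$1.04T (not real data).
caps = [50e9 + i * 10e9 for i in range(100)]
print(qualifies_fast_entry(1e12, caps))   # True for a $1 trillion debut
print(qualifies_fast_entry(100e9, caps))  # False for a $100 billion debut
```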

    The S&P Total Market Index and MSCI indexes have fast-track options, which some advisers to SpaceX are also exploring in an effort to ensure the IPO trades well, some of the people familiar with the matter said.

    The one index with no fast-entry option today is also one of the most important: the S&P 500. To join the index, a company must be U.S.-based, profitable and have a market capitalization of at least $22.7 billion. Joining gives it access to a steadier index-fund investor base.

    OpenAI is laying the groundwork for a fourth-quarter IPO as it races rival Anthropic to list shares publicly. OpenAI is aiming to raise $100 billion before the IPO at a valuation of more than $800 billion, while Anthropic is raising billions more at a valuation of $350 billion.

  • Elon Musk Says SpaceX and xAI Will Merge to Build AI Data Centers in Space

    Elon Musk in animated space. © The NY Budgets/Britta Pedersen-Pool/Getty Images

    On Monday, Elon Musk announced that he was merging two of his companies, SpaceX and xAI, in a deal said to be worth $1.25 trillion. The reason, Musk said in an announcement, was that in order for AI to grow, it needed to go to space.

    AI relies on “large terrestrial data centers” that run on “immense amounts of power and cooling,” he said, which comes at great expense to the environment and community opposition. The solution: data centers in space. “In the long term, space-based AI is obviously the only way to scale,” Musk said.

    Musk isn’t the only one looking to launch data centers into orbit. Google has Project Suncatcher to build solar-powered AI data centers in space. China is looking into space-based data centers, as is Europe. As we reported last year, space-based data centers — in the form of satellites with solar panels — are Big Tech’s latest fad and Silicon Valley’s newest investable venture.

    On the surface, it sounds like a logical solution to the unique problem presented by power-hungry data centers. Local communities are rising up against data center projects over concerns about electricity demand, water usage, and rising utility rates. Launching those data centers into orbit means they take up no land on Earth, and a sun-synchronous orbit offers near-continuous solar energy.

    But there’s another, simpler way of looking at Musk’s merger: SpaceX is profitable, and xAI is not. Not only is xAI not profitable, it’s in the midst of a serious cash burn as it races to compete with well-financed rivals like Google and OpenAI. As Bloomberg recently reported, the AI company is burning about $1 billion a month as it spends heavily to build data centers, recruit talent, and run the social media platform X.

    Meanwhile, SpaceX generated about $8 billion in profit on an estimated $16 billion of revenue last year, Reuters reported. The main revenue driver is Starlink, which accounts for up to 80 percent of the company’s revenue. Since 2019, SpaceX has launched over 9,500 satellites and boasts up to 9 million broadband internet users. The company is also a major government contractor, having secured over $20 billion in NASA and Defense Department deals since 2008. When it goes public later this year, SpaceX is expected to raise up to $50 billion in investment.

    Meanwhile, xAI has its own government tie-ups. The Department of Defense is using Grok, alongside other chatbots, to analyze information that flows through its military intelligence networks.

    It’s not clear how investors will feel about merging the cash-burning xAI with the profitable SpaceX. But it’s important to note that Musk has done this before, when he merged the debt-ridden SolarCity with Tesla in 2016. Since Musk was the largest shareholder and chairman of both Tesla and SolarCity, shareholders sued to block the merger, alleging it was a $2.6 billion “bailout” of a cash-strapped, struggling company. Musk eventually won the lawsuit, with a judge ruling that he did not force Tesla to overpay for SolarCity.

    Musk now faces a new lawsuit from Tesla shareholders over his creation of xAI. The lawsuit alleges that Musk breached his fiduciary duty to Tesla by forming xAI, which competes with the automaker for AI talent, resources, and Musk’s attention. The news that SpaceX is acquiring xAI certainly won’t settle those concerns; if anything, it makes it more chaotic and complex.

    So where does this all leave Tesla? In the most recent earnings report, Tesla said it was investing $2 billion into xAI “to enhance Tesla’s ability to develop and deploy AI products and services into the physical world at scale.” Grok, xAI’s chatbot that’s currently under investigation in multiple countries for generating nonconsensual sexualized images of people, including children, was recently integrated into certain Tesla vehicles as a voice assistant. Grok also lags behind OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and other large language models in several key metrics.

    Data centers in space are pure Musk futurism with no guarantee of success. It’s not as simple as strapping a GPU to a rocket and hitting “launch.” First off, GPUs are total power hogs. Unless you’ve got a nuclear reactor floating up there, you’re going to need massive solar arrays to power them. Then there’s the communication problem: even if you’re hitching a ride on Starlink, you still have to budget for sending data back and forth to Earth. Eventually, the numbers start to look pretty scary.
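    A rough sizing sketch makes the power problem concrete. Both figures below are assumptions for illustration (per-accelerator draw including cooling overhead, and usable solar output per square metre in orbit), not numbers from the article:

```python
# Back-of-envelope sizing of the solar array an orbital GPU cluster would need.
# Both constants are assumed illustrative figures, not published specs.

GPU_POWER_W = 1_200     # assumed draw per accelerator, incl. cooling overhead
PANEL_W_PER_M2 = 300    # assumed usable solar output per square metre in orbit

def array_area_m2(num_gpus: int) -> float:
    """Square metres of solar panel needed to power num_gpus accelerators."""
    return num_gpus * GPU_POWER_W / PANEL_W_PER_M2

# Even a modest 10,000-GPU cluster needs an array spanning several football pitches.
print(f"{array_area_m2(10_000):,.0f} m^2")  # 40,000 m^2
```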

    Musk says merging SpaceX and xAI is the way to make it happen. And perhaps one day he’ll take the suggestion of bullish investors to combine all his companies, including Tesla, Neuralink, and the Boring Company, into one massive, Musk-run mega-corporation: Musk Inc., if you will. How will Tesla shareholders react?

    “Tesla is Musk’s liquid piggy bank, since it’s publicly traded; his other companies are not,” Tesla investor James McRitchie said during a prevote presentation before the company’s 2024 shareholder meeting, according to The Wall Street Journal. “Either he sticks around long enough to use our shareholder capital to fund his other ventures, or he shifts his attention sooner if we reject his pay package and turn off the money tap.”

  • Nvidia’s Record Profits Alleviate Investor Concerns Amid AI Boom

    Nvidia CEO Jensen Huang delivers a keynote address at CES on Jan. 6, 2025. © Patrick T. Fallon / Getty Images

    Nvidia NVDA +4.25% ▲ reported record sales and strong guidance Wednesday, helping soothe jitters about an artificial intelligence bubble that have reverberated in markets for the last week.

    Sales in the October quarter hit a record $57 billion, up 62% from the year-earlier quarter and exceeding consensus estimates from analysts polled by FactSet, as demand for the company’s advanced AI data center chips continued to surge. The company also raised its guidance for the current quarter, estimating that sales will reach $65 billion; analysts had predicted revenue of $62.1 billion.

    Shares in the world’s most valuable publicly listed company rose almost 5% in premarket trading Thursday.

    “We’ve entered the virtuous cycle of AI,” said Nvidia Chief Executive Jensen Huang. “AI is going everywhere, doing everything, all at once.”

    Wednesday’s result will allow investors to breathe a sigh of relief. Each Nvidia quarterly earnings report has come to be seen as a financial Super Bowl of sorts as the AI boom has taken off. The company is regarded as a bellwether for both the health of the tech industry and the market as a whole.

    This quarter, however, the stakes seemed higher. Rarely has an earnings report from a single company been greeted with such nervous anticipation.

    In recent weeks, investors have sold off big tech names, worried that companies are spending far too much money on data centers, chips, and other infrastructure in the race to design and operate the world’s most powerful AI models, with little hope of recouping their investments in the near term.

    Adding to the pressure is a flurry of recent AI deals structured using what critics have dubbed “circular” funding mechanisms—broadly referring to suppliers like Nvidia making large capital investments in the businesses of the customers who buy their products. Just a few months ago, investors viewed such deals with enthusiasm, pumping up shares for a variety of AI-related companies, but this week one such deal—between Nvidia, Microsoft and Anthropic—was greeted warily.

    This week, 45% of global fund managers surveyed by Bank of America said that an AI stock-market bubble was one of the biggest risks facing the market.

    A number of bearish moves by high-profile investors have also rattled tech markets. Last week, Masayoshi Son’s SoftBank Group sold its entire $5.8 billion stake in Nvidia to divert that money to other AI investments, while a hedge fund run by influential billionaire venture capitalist Peter Thiel unloaded its entire $100 million Nvidia stake in the third quarter.

    Earlier this month, Michael Burry—who famously predicted the popping of the subprime mortgage securities bubble and was profiled in the Michael Lewis book “The Big Short: Inside the Doomsday Machine”—revealed in a securities filing that he was betting against the stocks of both Nvidia and AI-heavy defense analytics firm Palantir.

    “The last few weeks, there have been some escalating cracks in the AI landscape,” said Matt Stucky, chief portfolio manager for equities at Northwestern Mutual Wealth Management Company, an Nvidia shareholder. “Nvidia is the beneficiary of a lot of AI spending, and market forces are pushing back harder and harder on that spending.”

    Quarterly net income was $31.9 billion, 65% higher than a year earlier. Sales of Nvidia’s Blackwell line of graphics processing units—its most powerful chips yet—were “off the charts,” Huang said. Revenue from Nvidia’s data center segment set a record at $51.2 billion, beating analysts’ expectations of $49 billion.

    The potential for revenue increases may be limited going forward after the Trump administration announced earlier this month that it is not considering allowing a version of the Blackwell chip to be sold in China, a fast-growing AI market that represents tens of billions of dollars in potential sales.

    Half of the company’s long-term opportunity will come from customers’ transition to accelerated computing and generative AI, Colette Kress, Nvidia’s chief financial officer, said on a call with investors. While sizable purchase orders for Nvidia’s Hopper Platform never materialized in the quarter due to geopolitical issues with China, the company remains committed to engaging with governments, she added.

    In separate news, the Commerce Department approved the sale of up to 70,000 advanced artificial-intelligence chips to two companies based in the United Arab Emirates and Saudi Arabia, a big win for the Middle Eastern nations as they seek to catch up in the AI race. The approvals are a reversal from earlier this year, when some administration officials rejected the idea of exporting directly to the state-backed companies over security concerns.

    Terms of the deal will allow U.S. firms to sell up to 35,000 of Nvidia’s GB300 servers or their equivalents to both G42, a state-run AI firm based in Abu Dhabi, and Humain, a Saudi government-backed AI venture, government officials said. Nvidia competitor Advanced Micro Devices also has an agreement worth billions of dollars to work with Humain.

    Nvidia’s stock price more than doubled between early April and late October, rising from the low $90s to more than $200 per share, but has lost ground in the last few weeks as bubble worries have grown. So far this year, it’s up about 30%.

  • Nvidia’s $5 Trillion Milestone: What Does It Mean for the Future of AI and Tech?

    Nvidia Corp. NVDA +5.50% ▲ etched its name deeper into history books Wednesday, becoming the first publicly traded company to eclipse a $5 trillion market capitalization—a staggering milestone that underscores the artificial intelligence revolution’s grip on global markets, even as whispers of an impending bubble grow louder. The Silicon Valley chipmaker’s shares surged as much as 5.5% during the session, closing at $207.04 with 24.3 billion shares outstanding, catapulting its valuation to $5.03 trillion. Just three months after breaching $4 trillion and a mere two years after cracking $1 trillion, Nvidia’s ascent—up 50% year-to-date and over 1,500% in the past five years—has outpaced the Nasdaq’s 23% gain this year and the S&P 500’s 17%, cementing its status as the world’s most valuable firm ahead of Microsoft MSFT +2.10% ▲ ($4 trillion) and Apple AAPL +1.80% ▲ ($3.9 trillion).
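    The headline valuation follows directly from the two figures the article reports, closing price and shares outstanding:

```python
# Quick check of the valuation arithmetic: share price times shares
# outstanding gives the market capitalization.

price = 207.04      # closing price, per the article
shares = 24.3e9     # shares outstanding, per the article

market_cap = price * shares
print(f"${market_cap / 1e12:.2f} trillion")  # $5.03 trillion
```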

    The rally, which added nearly $140 billion to Nvidia’s market value in a single day, was supercharged by CEO Jensen Huang’s announcements at the company’s annual AI conference in Washington, D.C., on Tuesday. Huang revealed a pipeline of $500 billion in AI chip orders through next year, alongside a flurry of high-profile deals: a partnership with Uber Technologies Inc. to advance robotaxi development, a $1 billion investment in Nokia Oyj for next-generation 6G networks, and a collaboration with the U.S. Department of Energy to construct seven new AI supercomputers. Last month, Nvidia committed $100 billion to OpenAI, aiming to deploy at least 10 gigawatts of AI data centers to supercharge the ChatGPT maker’s computing prowess. “These aren’t hypotheticals—these companies are generating real revenues, and the products are profitable,” Huang told NBC News, brushing off bubble concerns. “Generative AI has evolved from interesting to indispensable.”

    Nvidia’s dominance in graphics processing units (GPUs)—repurposed from gaming rigs to the lifeblood of AI training for models like ChatGPT and image generators—has made it indispensable to Big Tech’s AI arms race. Its largest customers, including OpenAI, Tesla Inc., xAI, Meta Platforms Inc., Amazon.com Inc., and Oracle Corp., have funneled billions into Nvidia’s H100 and upcoming Blackwell chips, driving demand that outstrips supply. The semiconductor giant’s market cap now dwarfs the combined valuations of rivals like Advanced Micro Devices Inc., Intel Corp., Broadcom Inc., Taiwan Semiconductor Manufacturing Co., Micron Technology Inc., ASML Holding NV, Lam Research Corp., Qualcomm Inc., and Arm Holdings Plc—collectively worth less than half of Nvidia’s heft.

    To put $5 trillion in perspective: It’s equivalent to roughly 25 Walt Disney Cos., 50 Nikes, 96 Ford Motor Cos., 945 Macy’s, or over 3,311 JetBlue Airways Corps. Nvidia alone towers over the entire S&P 500 energy sector (three times its size) and eclipses major international benchmarks like Germany’s DAX and France’s CAC indices (more than double each). More strikingly, its valuation surpasses the gross domestic product of every nation on Earth except the United States ($29.1 trillion) and China ($18 trillion), per World Bank and IMF data—including India, Japan, the U.K., and Germany ($4.6 trillion last year). A $1,000 investment in Nvidia a decade ago, when shares bottomed at $0.47 in February 2015, would now be worth $441,000—a 44,000% return that has minted fortunes, including Huang’s estimated $174.4 billion net worth, ranking him eighth on Forbes’ billionaire list.
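    The decade-return figure can be reproduced from the two prices the article gives:

```python
# Reproduce the article's decade-return arithmetic from its two price points.

low_2015 = 0.47     # split-adjusted 2015 low, per the article
today = 207.04      # current price, per the article

value = 1_000 * today / low_2015            # value of a $1,000 stake
pct_return = (today / low_2015 - 1) * 100   # percentage gain

print(f"${value:,.0f}  ({pct_return:,.0f}% return)")  # ≈ $441,000 and ≈ 44,000%
```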

    The AI boom, often likened to the iPhone’s 2007 debut for its transformative potential, has propelled Nvidia from a $10 billion niche player in 2015 to this colossus. Yet, the speed of its rise—stock up 3.4% to an intraday high of $207.85 Wednesday—has reignited debates over sustainability. Officials at the Bank of England flagged AI’s “growing risk” of a tech stock burst earlier this month, while IMF Managing Director Kristalina Georgieva echoed warnings of parallels to the late-1990s dot-com bubble. Nvidia’s shares, trading at a forward price-to-earnings multiple of 45, reflect sky-high expectations for sustained GPU demand amid an AI infrastructure spend projected to hit $1 trillion annually by 2030, per McKinsey & Co.

    Geopolitical crosswinds add intrigue. Huang jetted to South Korea this week for the Asia-Pacific Economic Cooperation (APEC) summit, where free-trade ideals clash with escalating U.S. tariffs on tech and beyond. A pivotal sideline Thursday: a face-to-face between President Donald Trump and Chinese President Xi Jinping, where Trump pledged to discuss Nvidia’s chips. In August, the administration struck a deal with Nvidia and AMD to ease export curbs on advanced chips to China in exchange for a 15% revenue cut to Washington—despite national security qualms over potential military diversions. Commerce Secretary Howard Lutnick quipped on CNBC in July that selling America’s “fourth best” AI tech to Beijing was “cool,” but not the top tiers. Nvidia’s August overtures for a China-specific chip, plus a $5 billion infusion into Intel (where the U.S. government now holds a 10% stake worth $11 billion), highlight efforts to balance export growth with domestic bolstering under the CHIPS Act.

    For investors, Nvidia’s milestone is a double-edged sword. The Magnificent Seven tech stocks, led by Nvidia, have shouldered 60% of the S&P 500’s gains this year, but rotation risks loom if AI hype cools. “Nvidia isn’t just a company—it’s the AI proxy,” said Dan Ives, Wedbush Securities analyst. “But at $5 trillion, any earnings miss could trigger a reality check.” With Blackwell production ramping and partnerships like the Nokia tie-up eyeing 6G’s trillion-dollar frontier, Nvidia’s trajectory suggests more records ahead. Yet, as Huang attends APEC amid Trump-Xi tensions, the chip king’s fate remains intertwined with the very global supply chains it seeks to redefine.

  • Meta Stock Falls Even After Strong Revenue Report

    Meta Platforms Inc. delivered a resounding third-quarter earnings beat on Wednesday, with adjusted earnings per share of $7.25 topping analyst expectations of $6.69 and revenue surging to $51.24 billion against forecasts of $49.41 billion, as polled by LSEG. The results underscored the social media giant’s robust advertising engine and user engagement amid a resurgent digital ad market, yet Meta META -1.20% ▼ shares tumbled 1.2% in after-hours trading to $582.34, capping a volatile session that saw the stock dip 0.3% during regular hours. Investors, spooked by Meta’s forecast of “significant acceleration” in AI-related infrastructure costs next year—potentially ballooning to tens of billions—brushed aside the positives, signaling growing unease over the sustainability of Big Tech’s AI arms race.

    The earnings, released after the bell on October 29, highlighted Meta’s operational resilience. Net income soared to $15.69 billion, or $6.03 per share, a 35% jump from $11.58 billion, or $4.39 per share, a year earlier—well ahead of FactSet’s consensus of $5.22. Revenue climbed 19% year-over-year, fueled by a 22% uptick in ad sales to $50.1 billion, as daily active users across Facebook, Instagram, and WhatsApp swelled to 3.28 billion, up 6% from last year. CEO Mark Zuckerberg touted the quarter as a “strong foundation” for AI integrations, including enhanced Reels recommendations and Llama model advancements, which drove a 12% increase in time spent on the platforms.

    Yet, the post-earnings glow faded swiftly. Meta’s guidance for Q4 projected revenue of $52.5 billion to $54 billion, in line with Wall Street’s $53.2 billion midpoint, but the real headwind was the capex outlook. The company flagged a “meaningful ramp” in 2026 AI infrastructure spending, on top of the $39 billion already earmarked for 2025, to fuel data centers and GPU acquisitions from Nvidia Corp. “We’re investing aggressively in AI to stay ahead,” Zuckerberg said on the earnings call, but analysts like Bank of America’s Justin Post worried aloud about the “long-term growth manifestation” of these outlays, especially as rivals like OpenAI pivot toward ads and social features, intensifying competition in Meta’s core turf.

    The reaction rippled across global markets. In Frankfurt pre-market trading Thursday, Meta (META.O) shares slipped 2.6% to €530, mirroring a 5.1% drop in Microsoft Corp. (MSFT.O) amid its own Azure cloud growth slowdown warning—dragging Nasdaq futures down over 1%. The Magnificent Seven cohort, already under scrutiny for AI hype, saw broader pressure: Alphabet Inc. and Amazon.com Inc. reports later in the week loom large, with investors parsing for similar spending spikes. “Meta’s beat was textbook, but the AI capex fog is thick—it’s all about the denominator now,” said Wedbush Securities analyst Daniel Ives, who maintains an Outperform rating but trimmed his price target to $650 from $675.

    Meta’s Q3 performance aligns with a digital ad sector rebound, projected to grow 12% to $740 billion globally in 2025 per eMarketer, buoyed by election-year spending and e-commerce tailwinds. Reality Labs, Meta’s metaverse arm, narrowed losses to $4.2 billion from $5.1 billion, with Quest headset sales up 15%—a bright spot amid Zuckerberg’s pivot to AI glasses and wearables. Still, the stock’s 1.2% after-hours slide erased $25 billion in market cap, leaving Meta at $1.48 trillion—down 5% year-to-date versus the Nasdaq’s 23% gain.

    Looking ahead, Wall Street eyes Meta’s AI monetization roadmap at next week’s investor day, where details on ad-targeting LLMs and enterprise tools could assuage fears. For now, the earnings saga encapsulates Big Tech’s paradox: explosive growth meets escalating costs in an AI gold rush that has minted trillion-dollar valuations but risks a valuation reset if returns lag. As Ives put it, “The party’s still on, but the bill just arrived.”

  • China’s complex relationship with Nvidia’s H20 chip is marked by both its potential benefits and significant concerns

    Chinese authorities have intensified scrutiny of domestic tech giants, including Tencent TCEHY -2.30% ▼, ByteDance, and Baidu BIDU -1.85% ▼, over their purchases of Nvidia’s NVDA -3.45% ▼ H20 AI chips, raising concerns about data security and urging companies to prioritize domestic alternatives. The regulatory pressure also extends to AMD AMD -2.10% ▼, while domestic chipmakers like SMIC 981.HK +5.20% ▲ benefit from the push toward technological self-sufficiency. Major Chinese firms like Alibaba BABA -1.95% ▼ face difficult decisions as they navigate between proven U.S. technology and regulatory pressure to adopt domestic alternatives.

    The Cyberspace Administration of China (CAC) and other regulatory bodies have held meetings with these firms and smaller tech companies in recent weeks, questioning the necessity of relying on U.S.-made chips when local options are available. This development threatens Nvidia’s recently restored access to the Chinese market, access that stands to generate billions in revenue for the U.S. government through a novel export deal, and highlights China’s push for technological self-sufficiency in the global AI race.

    The CAC’s recent actions mark a significant escalation in China’s oversight of foreign AI technology. According to Reuters, Chinese officials have summoned major internet firms, including Tencent, ByteDance, and Baidu, to explain their reasons for purchasing Nvidia’s H20 chips, designed specifically for the Chinese market to comply with U.S. export restrictions. One source indicated that authorities expressed concerns about potential information risks, particularly the possibility that materials submitted by Nvidia for U.S. government review could contain sensitive client data. “The regulators are worried about what Nvidia might be sharing with U.S. authorities,” the source said, speaking on condition of anonymity due to the private nature of the meetings.

    While no outright ban on H20 purchases has been issued, Bloomberg News reported on August 12, 2025, that Chinese authorities have sent official notices discouraging the use of H20 chips for government or national security-related projects, affecting both state-owned enterprises and private companies. A separate report by The Information claimed that the CAC directed over a dozen tech firms, including Alibaba, to suspend Nvidia chip purchases entirely, citing data security concerns. These directives followed the Trump administration’s decision in July 2025 to reverse export curbs on the H20, allowing Nvidia to resume sales in China after a ban earlier this year.

    The CAC’s concerns were amplified by state-controlled media, with outlets like Yuyuan Tantian, affiliated with CCTV, publishing articles on platforms like WeChat that criticized the H20 chips for alleged security risks, lack of technological advancement, and environmental inefficiencies. Nvidia, in a statement on August 12, 2025, refuted these claims, asserting that the H20 is “not a military product or for government infrastructure” and emphasizing that China has ample domestic chip alternatives for its needs. Tencent, ByteDance, Baidu, and Alibaba did not respond to requests for comment, and the CAC remained silent on the matter.

    The scrutiny of Nvidia’s H20 chips comes amid heightened U.S.-China tensions over AI technology. The H20, a less-advanced version of Nvidia’s flagship AI chips, was developed to navigate U.S. export controls imposed in late 2023, which restricted sales of more powerful chips like the A100 and H100 to China. The Trump administration’s reversal of the H20 ban in July 2025 was part of a broader deal with Nvidia and AMD, announced last week, requiring the companies to remit 15% of their China sales revenue for certain advanced chips to the U.S. government. According to posts on X, this arrangement could generate billions of dollars for Washington, with Nvidia’s China sales alone accounting for $17 billion—or 13% of its total revenue—in its fiscal year ending January 26, 2025.
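    The "billions of dollars" figure follows directly from the deal terms the article reports, 15% of roughly $17 billion in China sales:

```python
# Reproduce the remittance arithmetic from the article's reported deal terms.

china_revenue = 17e9    # Nvidia's China sales, per the article
share_rate = 0.15       # remittance rate under the export deal

remittance = china_revenue * share_rate
print(f"${remittance / 1e9:.2f} billion to the U.S. government")  # $2.55 billion
```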

    However, China’s renewed guidance could jeopardize this revenue stream. By discouraging H20 purchases, Beijing is signaling its intent to reduce reliance on U.S. technology, a move that aligns with its broader “Made in China 2025” initiative to achieve technological self-sufficiency. Domestic chipmakers like Huawei and SMIC are ramping up production of AI accelerators, with Huawei’s Ascend series emerging as a viable rival to the H20. SMIC’s stock rose 5% on August 12, 2025, reflecting investor optimism about growing demand for locally produced chips.

    The regulatory pressure also extends to AMD, with Bloomberg reporting that China’s guidance affects its MI308 chip, though no specific notices targeting AMD were confirmed. AMD did not respond to inquiries outside regular business hours. The uncertainty surrounding foreign chip purchases has sparked speculation on X that Nvidia and AMD may raise prices for their chips in China to offset the 15% revenue share to the U.S. government, potentially further incentivizing Chinese firms to pivot to domestic alternatives.

    The global AI chip market, projected to reach $400 billion by 2027, is a critical battleground for U.S. and Chinese tech giants. Nvidia has long dominated the market, with its GPUs powering AI applications worldwide. In China, the company’s H20 chip was a lifeline after U.S. sanctions curtailed sales of its more advanced models. However, Beijing’s push for domestic alternatives threatens Nvidia’s China business, which accounted for 13% of its revenue in the last fiscal year.

    China’s domestic chip industry, while growing, faces challenges due to U.S. sanctions on advanced chipmaking equipment, such as lithography machines critical for producing cutting-edge processors. Despite these constraints, companies like Huawei have made significant strides, with posts on X highlighting the performance of Huawei’s Ascend chips in AI workloads. “Huawei’s chips are closing the gap with Nvidia’s H20,” tweeted one tech analyst, reflecting growing confidence in China’s capabilities.

    For Chinese tech giants, the CAC’s directives create a delicate balancing act. Companies like Tencent, ByteDance, and Baidu rely on AI chips to power their cloud computing, search, and social media platforms. While Nvidia’s H20 offers proven performance, the regulatory pressure to adopt domestic chips could force a shift, even if local alternatives lag in certain applications. Smaller tech firms, less equipped to navigate regulatory scrutiny, may face greater challenges in securing reliable chip supplies.

    At the heart of China’s caution is a deep-seated concern about data security and U.S. influence. The CAC’s meetings with Nvidia representatives last month focused on whether the H20 chip posed backdoor risks that could compromise Chinese user data and privacy. These concerns echo broader fears in Beijing that U.S. technology could be used to monitor or manipulate Chinese systems, a sentiment amplified by state media.

    Conversely, Washington has its own worries about China’s access to advanced AI chips. U.S. President Donald Trump’s suggestion on August 11, 2025, that Nvidia might be allowed to sell a scaled-down version of its Blackwell chip in China reflects a pragmatic approach to balancing economic interests with national security. However, this proposal has sparked debate, with critics arguing that even less-advanced U.S. chips could enhance China’s military capabilities. China’s foreign ministry responded on August 12, 2025, urging the U.S. to maintain a stable global chip supply chain, signaling its desire to avoid further escalation.

    China’s cautious stance on Nvidia’s H20 chips underscores the broader geopolitical tug-of-war over AI technology. For Nvidia, the regulatory hurdles threaten a critical market, forcing the company to navigate a complex landscape of compliance and competition. The 15% revenue-sharing deal with the U.S. government adds further pressure, potentially increasing costs for Chinese buyers and accelerating the shift to domestic alternatives.

    For Chinese tech firms, the CAC’s guidance reflects a broader push for technological independence, but it also risks disrupting their AI development timelines. While Huawei and SMIC are making strides, scaling production to meet domestic demand remains a challenge, particularly given U.S. restrictions on advanced manufacturing equipment. The global chip supply chain, already strained by sanctions and trade disputes, faces further uncertainty as both nations vie for dominance.

    As the AI race intensifies, the outcome of this standoff will have far-reaching implications. For now, China’s scrutiny of Nvidia’s H20 chips signals a bold step toward self-reliance, while the U.S. grapples with balancing economic gains against strategic concerns. The global tech industry, caught in the crossfire, awaits clarity on how this high-stakes rivalry will reshape the future of AI.

  • China’s dominance in the open-source AI sector has alarmed both Washington and Silicon Valley, prompting a reevaluation of strategies

    China’s dominance in the open-source AI sector has alarmed both Washington and Silicon Valley, prompting a reevaluation of strategies

    China’s aggressive push into open-source artificial intelligence (AI) is sending shockwaves through Washington and Silicon Valley, as free-to-use large language models (LLMs) from companies like DeepSeek, Alibaba, and others rapidly gain traction worldwide. These permissively licensed models, which allow developers and corporations to customize and deploy AI for commercial use without costly licensing fees, are reshaping the global AI landscape. This development has sparked alarm among U.S. policymakers and tech giants, who fear that Beijing’s strategy could set a new global standard for AI development, potentially eroding America’s technological dominance.

    The Rise of Chinese Open-Source AI

    China’s ascent in open-source AI has been swift and strategic. Companies like DeepSeek, a Beijing-based startup, and Alibaba Group, through its Qwen model, have released a series of advanced LLMs under open-source licenses, making them freely available to developers worldwide. Unlike proprietary models from U.S. firms like OpenAI and Anthropic, which often come with steep subscription costs or restricted access, these Chinese models offer high performance at zero cost, lowering barriers to entry for AI applications in industries ranging from healthcare to finance.

    A Wall Street Journal report on August 13, 2025, highlighted the global adoption of these models, noting that developers in Europe, Southeast Asia, and Latin America are increasingly integrating DeepSeek’s R-1 and Alibaba’s Qwen into their software and enterprise solutions. Posts on X echo this sentiment, with developers praising the models’ performance and accessibility. One user noted, “DeepSeek’s R-1 is outperforming some paid models in coding tasks, and it’s free. This is a game-changer for small startups.”

    The appeal of these models lies in their permissive licensing, which allows users to modify and deploy the code for commercial purposes without restrictions. This approach contrasts sharply with the closed ecosystems of many U.S.-based AI companies, which rely on proprietary systems to maintain competitive edges. For instance, OpenAI’s GPT-5, launched earlier this month, has faced criticism for its high subscription costs and limited accessibility for non-paying users, prompting some developers to explore Chinese alternatives.

    A Wake-Up Call for Washington

    The growing influence of Chinese open-source AI has caught the attention of U.S. policymakers, who view Beijing’s push as a deliberate attempt to shape global technical standards and exert soft power in the AI ecosystem. According to Foreign Affairs, policy specialists warn that Washington’s current AI strategy, which heavily favors proprietary development, risks ceding control of open-source innovation to China. “If the United States fails to account for the appeal of freely available models, American companies could surrender technological leadership in fast-moving markets like edge computing and enterprise software,” the publication noted.

    This concern is amplified by China’s broader ambitions. Beijing has invested heavily in AI as part of its “Made in China 2025” initiative, aiming to establish itself as a global leader in emerging technologies. By distributing open-source models, Chinese companies are not only gaining market share but also fostering a global developer community that aligns with their standards and tools. This strategy mirrors China’s earlier success in setting global standards for 5G technology through companies like Huawei.

    U.S. officials are particularly worried about the national security implications. At the Black Hat cybersecurity conference in August 2025, researchers highlighted the vulnerability of open-source LLMs to prompt-injection attacks and other manipulations, raising concerns about their use in critical infrastructure. Washington has responded by exploring policies to strengthen safeguards for open-source AI, but analysts argue that a more proactive approach is needed to counter China’s momentum. “Washington needs to balance the advantages of openness with measures to protect intellectual property and national security,” said Dr. Li Wei, a cybersecurity expert at MIT.

    Silicon Valley, long accustomed to leading the AI race, is grappling with the implications of China’s open-source surge. Companies like OpenAI, Anthropic, and Google, which have built their business models around proprietary AI systems, now face pressure to adapt to a market where free alternatives are gaining ground. “China is commoditizing AI,” tweeted one industry analyst. “Developers will always go with open source when available, and large businesses prefer it for privacy and customization.”

    The market dynamics are shifting rapidly. The global AI market, projected to reach $1.8 trillion by 2030, is increasingly driven by enterprise adoption and edge computing, where open-source models excel due to their flexibility and cost-effectiveness. Chinese models like DeepSeek’s R-1 are particularly well-suited for edge AI applications, such as autonomous vehicles and IoT devices, where lightweight, customizable models are critical. This has led some Silicon Valley firms to reconsider their strategies, with rumors that companies like Meta AI are exploring more open-source offerings to compete.

    The financial stakes are high. OpenAI, valued at $150 billion in 2024, relies heavily on its subscription-based ChatGPT Plus and API services for revenue. However, the availability of free, high-quality alternatives could erode its market share, particularly among cost-conscious startups and international developers. Similarly, Anthropic’s Claude 3.5 and xAI’s Grok 3, while competitive, face challenges in matching the accessibility of Chinese models. xAI, for instance, offers a free tier for Grok 3 on platforms like x.com, but its usage quotas are limited, potentially pushing users toward Chinese alternatives.

    The proliferation of open-source AI models raises significant security and ethical questions. Cybersecurity experts warn that open-source LLMs are highly susceptible to attacks, such as prompt injections, where malicious inputs can manipulate a model’s outputs. This vulnerability is particularly concerning for applications in sensitive sectors like finance and healthcare. At the Black Hat conference, researchers emphasized the need for robust safeguards, noting that “the lessons of the past 25 years in cybersecurity have been forgotten” in the rush to adopt open-source AI.

    Moreover, the global adoption of Chinese models raises concerns about data privacy and geopolitical influence. While open-source licenses allow for transparency, there is unease about the potential for Chinese firms to embed backdoors or collect metadata through widespread use of their models. U.S. policymakers are exploring regulations to address these risks, but such measures could stifle innovation if not carefully balanced.

    China’s open-source AI strategy is not just about technology; it’s about global influence. By offering free, high-quality models, Chinese companies are building a global developer ecosystem that aligns with their technological frameworks. This approach mirrors the open-source software movement of the 1990s, when Linux challenged Microsoft’s dominance by offering a free, customizable alternative. Today, China is positioning itself as the Linux of AI, with companies like DeepSeek and Alibaba leading the charge.

    Alibaba’s Qwen, for example, has gained significant traction in Asia and Europe, with developers citing its ease of integration and robust multilingual capabilities. DeepSeek’s R-1, meanwhile, has been praised for its performance in coding and scientific applications, making it a favorite among academic researchers and startups. These models are not only competing on price but also on quality, with benchmarks showing they rival or even surpass some Western models in specific tasks.

    For Washington and Silicon Valley, the rise of Chinese open-source AI is a wake-up call. To remain competitive, the U.S. must invest in its own open-source initiatives while addressing security concerns. Some experts advocate for a hybrid approach, combining the benefits of open-source innovation with robust oversight to protect national interests. “The U.S. can’t afford to ignore the appeal of open-source AI,” said Dr. Sarah Kim, a technology policy analyst at Stanford. “But it needs a strategy that fosters innovation without compromising security.”

    On the corporate front, Silicon Valley is beginning to respond. Meta AI, which has long championed open-source AI through projects like LLaMA, is reportedly accelerating its efforts to release more advanced models. Meanwhile, startups like xAI are exploring ways to expand free access to their models, such as Grok 3, to compete with Chinese offerings. For developers interested in exploring xAI’s capabilities, the company directs them to its API documentation at https://x.ai/api.

    As the AI race intensifies, China’s open-source strategy has exposed vulnerabilities in the U.S.’s proprietary-centric approach. The question now is whether Washington and Silicon Valley can adapt quickly enough to maintain their edge in a market where accessibility and cost are becoming as critical as technological prowess. For now, China’s lead in open-source AI is reshaping the global conversation, forcing the U.S. to confront a future where its dominance is no longer guaranteed.

  • OpenAI’s troubled GPT-5 rollout has exposed significant hurdles to maintaining its leadership position in the fiercely competitive AI market

    OpenAI’s troubled GPT-5 rollout has exposed significant hurdles to maintaining its leadership position in the fiercely competitive AI market

    OpenAI, the trailblazing artificial intelligence company behind ChatGPT, is facing significant turbulence with the recent rollout of its latest language model, GPT-5. Launched earlier this month to its 800 million ChatGPT users, the upgrade promised breakthroughs in coding, creativity, and conversational authenticity. However, a wave of user dissatisfaction, coupled with technical hiccups, has cast a shadow over the release, raising questions about OpenAI’s ability to maintain its dominance in the rapidly evolving AI market. CEO Sam Altman has acknowledged the “bumpy” launch, pledging to address user concerns, including improving the chatbot’s tone and restoring access to older models for paying customers.

    A High-Stakes Launch Falls Short

    When OpenAI unveiled GPT-5 on August 7, 2025, it heralded the model as a significant leap forward, boasting enhanced capabilities in coding, creative writing, and a reduction in what the company called “sycophancy”—the tendency of AI to overly agree with users. The rollout was intended to solidify OpenAI’s position as the leader in generative AI, especially as competitors like Anthropic, xAI, and Google’s DeepMind continue to gain ground with their own advanced models. Yet, the launch has been anything but smooth.

    Posts on X and other social media platforms reveal widespread user frustration, with many claiming that GPT-5’s performance falls short of the promised “PhD-level expertise.” Users have reported issues ranging from inconsistent responses to a colder, less engaging conversational tone compared to its predecessor, GPT-4o. “It feels like GPT-5 is trying too hard to be neutral and ended up robotic,” tweeted one user, echoing a sentiment shared across tech forums. In response to the backlash, OpenAI has doubled GPT-5’s rate limits and is actively addressing user feedback.

    Sam Altman, OpenAI’s CEO, admitted the launch’s shortcomings in a recent statement, calling it “a little more bumpy than expected.” He emphasized that while GPT-5 represents a step toward more advanced AI, true artificial general intelligence (AGI)—a system capable of continuous learning and human-like reasoning—remains elusive. “We’re not there yet,” Altman said, acknowledging that critical capabilities like adaptive learning are still missing. This candid admission has sparked debate about whether OpenAI overhyped GPT-5’s capabilities to maintain investor confidence and market share.

    Market Dynamics: A Crowded AI Landscape

    The AI market is more competitive than ever, with OpenAI facing mounting pressure from rivals. Anthropic’s Claude 3.5, xAI’s Grok 3, and Google’s Gemini have all made significant strides, offering users alternatives that prioritize different strengths, such as safety, conversational warmth, or specialized applications. Market analysts estimate that OpenAI’s valuation, which soared to $150 billion in 2024, could face scrutiny if user dissatisfaction persists. Posts on X suggest that some investors view the GPT-5 rollout as a test of OpenAI’s ability to deliver on its ambitious promises amid this crowded field.

    According to a recent report from VentureBeat, OpenAI’s decision to roll out GPT-5 to all 800 million ChatGPT users simultaneously may have contributed to the launch’s challenges. Unlike previous phased rollouts, the company opted for a universal release to maximize impact, but this approach strained its infrastructure and left little room for iterative improvements based on early feedback. The move has drawn comparisons to software launches in the tech industry, where premature scaling often leads to user dissatisfaction.

    The broader AI market is projected to grow to $1.8 trillion by 2030, driven by demand for generative AI in industries like healthcare, finance, and education. OpenAI’s early dominance, fueled by ChatGPT’s viral success in 2022, gave it a first-mover advantage. However, competitors are closing the gap. Anthropic, founded by former OpenAI researchers, has gained traction with its focus on safe and interpretable AI systems. Meanwhile, xAI’s Grok 3, available on platforms like x.com and mobile apps, offers users a free tier with robust capabilities, posing a direct challenge to OpenAI’s subscription-based model.

    Addressing User Concerns: Tone and Access to Older Models

    One of the most vocal criticisms of GPT-5 centers on its conversational tone, which some users describe as “cold” or “detached” compared to GPT-4o. In response, Altman has promised to refine the model’s tone to make interactions feel more natural and engaging. “We’ve heard the feedback loud and clear,” he said in a recent interview. “We’re working on updates to make GPT-5 feel more human and less like a machine reciting facts.” This acknowledgment reflects OpenAI’s attempt to balance technical precision with user expectations for warmth and relatability in AI interactions.

    Additionally, OpenAI has taken the unusual step of restoring access to older models like GPT-4o for paying customers, a move that has sparked mixed reactions. While some users welcome the option to revert to a model they found more reliable, others see it as an admission of GPT-5’s shortcomings. “Why push a new model if you’re already bringing back the old one?” tweeted one user, reflecting a sentiment that OpenAI may have rushed the rollout. The decision to offer older models is limited to premium subscribers, which has raised concerns about accessibility for free-tier users who make up the majority of ChatGPT’s user base.

    Financial and Strategic Implications

    The rocky rollout has financial implications for OpenAI, which relies heavily on its subscription-based ChatGPT Plus and enterprise offerings. While the company does not disclose specific revenue figures, analysts estimate that ChatGPT Plus, priced at $20 per month, generates hundreds of millions in annual revenue. The decision to allow paying customers to access older models could help retain subscribers frustrated with GPT-5, but it also risks undermining confidence in the new model.

    Strategically, OpenAI is navigating a delicate balance between innovation and user satisfaction. The company’s API service, which powers integrations for developers and businesses, remains a key growth driver. However, any perception of instability in its flagship models could deter enterprise clients who prioritize reliability. To address this, OpenAI has pledged to release regular updates to GPT-5, with a focus on improving performance and addressing user feedback. For developers interested in leveraging GPT-5, the company points to its API documentation, signaling a commitment to supporting enterprise use cases despite the consumer-facing challenges.

    Looking Ahead: Can OpenAI Regain Momentum?

    The GPT-5 rollout serves as a critical test for OpenAI as it seeks to maintain its position as the undisputed leader in generative AI. While the company’s early successes with ChatGPT set a high bar, the current backlash underscores the challenges of scaling AI systems to meet diverse user expectations. Posts on X suggest that some users are already exploring alternatives like xAI’s Grok 3, which offers a free tier with competitive features and a conversational style that some find more engaging.

    Industry experts remain cautiously optimistic about OpenAI’s ability to recover. “This isn’t the first time a major tech company has faced a bumpy product launch,” said Dr. Emily Chen, an AI researcher at Stanford University. “OpenAI has the talent and resources to iterate quickly, but they need to prioritize transparency and user trust to avoid losing ground to competitors.” Chen’s comments reflect a broader sentiment that OpenAI’s long-term success hinges on its ability to address user concerns while continuing to push the boundaries of AI innovation.

    For now, OpenAI is doubling down on its commitment to improvement. Altman’s acknowledgment of the rollout’s challenges, combined with promises of tonal refinements and access to older models, signals a willingness to adapt. Whether these efforts will be enough to restore user confidence and fend off competitors remains to be seen. As the AI race intensifies, OpenAI’s next moves will be closely watched by users, investors, and industry observers alike.

  • Perplexity AI Wants to Buy Google’s Chrome Browser for $34.5 Billion

    Perplexity AI Wants to Buy Google’s Chrome Browser for $34.5 Billion


    AI startup Perplexity AI has made an unsolicited $34.5 billion bid for Google’s Chrome browser.

    That figure is higher than Perplexity’s current valuation, but the company said several investors have agreed to back the deal. In July, Perplexity was valued at $18 billion in an extension of a funding round that had valued the company at $14 billion just months earlier.

    Google did not immediately respond to NYB’s request for comment. The Wall Street Journal was first to report the bid.

    Perplexity is best known for its AI-powered search engine that gives users simple answers to questions and links out to the original source material on the web. Last month, it launched its own AI-powered browser called Comet.

    The startup is in the middle of a battle for supremacy in generative AI, with companies including Meta and OpenAI offering massive salaries and signing bonuses to top engineers. Megacap tech companies are spending tens of billions of dollars a year on AI infrastructure to build large language models and run hefty workloads, while startups are raising billions of dollars from venture investors, hedge funds and tech giants to pay for the hardware and headcount needed to compete.

    Perplexity was approached by Meta earlier this year about a potential acquisition, but the companies did not finalize a deal.

    Perplexity’s bid comes after the U.S. Department of Justice proposed Google divest Chrome as part of the antitrust suit the company lost last year. The judge in the case ruled that Google has held an illegal monopoly in its core market of internet search.

    In response, Google said that the DOJ was pushing “a radical interventionist agenda,” and that the agency’s proposal was “wildly overbroad.” The company has not yet disclosed how it plans to adjust its business following the antitrust ruling.

    Chrome, which Google launched in 2008, provides the search giant with data it then uses for targeting ads. The DOJ said in a filing following the court’s decision that forcing the company to get rid of Chrome would create a more equal playing field for search competitors.

    “To remedy these harms, the [Initial Proposed Final Judgment] requires Google to divest Chrome, which will permanently stop Google’s control of this critical search access point and allow rival search engines the ability to access the browser that for many users is a gateway to the internet,” the DOJ wrote.

    Perplexity’s bid for Chrome is not the first time it’s taken a big swing.

    The startup submitted a proposal to merge with the short-form video app TikTok in January. TikTok’s future in the U.S. has been uncertain since 2024, when Congress passed a bill that would ban the platform unless its Chinese owner, ByteDance, divested from it.

    As of August, Perplexity’s proposed structure for a TikTok deal has not materialized.

  • OpenAI has officially launched GPT-5, its most advanced model to date, following a two-year development period

    OpenAI has officially launched GPT-5, its most advanced model to date, following a two-year development period

    OpenAI on Thursday announced GPT-5, its latest and most advanced large-scale artificial intelligence model.

    The company is making GPT-5 available to everyone, including its free users. OpenAI said the model is smarter, faster and “a lot more useful,” particularly across domains like writing, coding and health care.

    “I tried going back to GPT-4, and it was quite miserable,” OpenAI CEO Sam Altman said in a briefing with reporters.

    Since launching its AI chatbot ChatGPT in 2022, OpenAI has rocketed into the mainstream. The company said it expects to hit 700 million weekly active users on ChatGPT this week, and it is in talks with investors about a potential stock sale at a valuation of roughly $500 billion, as CNBC News previously reported.

    OpenAI said GPT-5’s hallucination rate is lower, which means the model fabricates answers less frequently. The company said it also carried out extensive safety evaluations while developing GPT-5, including 5,000 hours of testing.

    Instead of outright refusing to answer users’ questions if they are potentially risky, GPT-5 will use “safe completions,” OpenAI said. This means the model will give high-level responses within safety constraints that can’t be used to cause harm. 

    “GPT-5 has been trained to recognize when a task can’t be finished, avoid speculation and can explain limitations more clearly, which reduces unsupported claims compared to prior models,” said Michelle Pokrass, a post-training lead at OpenAI.

    During the briefing, OpenAI demonstrated how GPT-5 can be used for “vibe coding,” which is a term for when users generate software with AI based on a simple written prompt. 

    The company asked GPT-5 to create a web app that could help an English speaker learn French. The app had to have an engaging theme and include activities like flash cards and quizzes as well as a way to track daily progress. OpenAI submitted the same prompt into two GPT-5 windows, and it generated two different apps within seconds. 

    The apps had “some rough edges,” an OpenAI lead said, but users can make additional tweaks to the AI-generated software, like changing the background or adding additional tabs, as they see fit.

    GPT-5 is rolling out to OpenAI’s Free, Plus, Pro and Team users on Thursday. This launch will be the first time that Free users have access to a reasoning model, which is a type of model that “thinks,” or carries out an internal chain of thought, before responding. If Free users hit their usage cap, they’ll have access to GPT-5 mini.

    OpenAI’s Plus users have higher usage limits, and Pro users have unlimited access to GPT-5 as well as access to GPT-5 Pro. ChatGPT Edu and ChatGPT Enterprise users will get access to GPT-5 roughly a week from Thursday.

    “It’s hard to believe it’s only been two and a half years since @sama joined us in Redmond to show the world GPT-4 for the first time in Bing, and it’s incredible to see how far we’ve come since that moment,” Microsoft CEO Satya Nadella wrote in a Thursday X post, referring to OpenAI CEO Sam Altman’s appearance at Microsoft headquarters in Washington in February 2023.

    The new model is coming to Microsoft products Thursday, according to a company blog post. Microsoft 365 Copilot is getting GPT-5, as well as the Copilot for consumers and the Azure AI Foundry that developers can use to incorporate AI models into third-party applications.

    Box, a company that helps enterprises manage their computer files, has been testing GPT-5 across a wide variety of data sets in recent weeks.

    Aaron Levie, the CEO of Box, said previous AI models have failed many of the company’s most advanced tests because they struggle to make sense of complex math or logic within long documents. But Levie said GPT-5 is a “complete breakthrough.” 

    “The model is able to retain way more of the information that it’s looking at, and then use a much higher level of reasoning and logic capabilities to be able to make decisions,” Levie told CNBC in an interview. 

    OpenAI is releasing three different versions of the model for developers through its application programming interface, or API. Those versions, gpt-5, gpt-5-mini and gpt-5-nano, are designed for different cost and latency needs. 
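    As a sketch of how a developer might route between the three announced tiers, consider the toy helper below. The routing rules encoded here are illustrative assumptions about the cost/latency trade-off, not published guidance; only the three model names come from the announcement.

```python
# Hypothetical router over the three announced API model names.
# The trade-offs encoded here are assumptions for illustration;
# consult OpenAI's own documentation for actual pricing and latency.
def pick_gpt5_tier(budget_constrained: bool, latency_sensitive: bool) -> str:
    if budget_constrained and latency_sensitive:
        return "gpt-5-nano"   # cheapest, fastest tier
    if budget_constrained or latency_sensitive:
        return "gpt-5-mini"   # middle ground
    return "gpt-5"            # full-capability model

print(pick_gpt5_tier(budget_constrained=False, latency_sensitive=False))
```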

    Earlier this week, OpenAI released two open-weight language models for the first time since it rolled out GPT-2 in 2019. Those models were built to serve as lower-cost options that developers, researchers and companies can easily run and customize.

    But with GPT-5, OpenAI also has a broader consumer audience in mind. The company said interacting with the model feels natural and “more human.” 

    Altman said GPT-5 is like having a team of Ph.D.-level experts on hand at any time. 

    “People are limited by ideas, but not really the ability to execute, in many new ways,” he said. 

  • Figma’s IPO went well

    Figma’s IPO went well

    Figma celebrates its initial public offering at the New York Stock Exchange on July 31, 2025. © NYSE

    The IPO of collaborative design software company Figma set Wall Street abuzz after the company’s stock skyrocketed 250% on its debut, briefly valuing the firm at a staggering $60 billion. But while critics claim the underpricing left billions on the table, a deeper look reveals the offering was less a botched move than a strategic play in a flawed, but still functional, system.

    The central point of contention revolves around the $33 IPO price compared to the stock’s $115.50 opening price. On paper, that discrepancy meant over $3 billion in potential value lost for early investors — a gap that venture capitalist Bill Gurley and others argue is proof of malpractice or even a rigged system that favors elite institutional clients.
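    The “money left on the table” figure follows from simple arithmetic on the deal’s prices: shares sold times the gap between the opening price and the IPO price. The share count below is an approximation we supply for illustration; it is not a number stated in this article.

```python
# "Money left on the table" = shares sold x (first-day open - IPO price).
# Share count is an approximation supplied for illustration only.
ipo_price = 33.00
opening_price = 115.50
shares_sold = 36_900_000  # approximate deal size (assumption)

money_left = shares_sold * (opening_price - ipo_price)
print(f"~${money_left / 1e9:.1f} billion left on the table")
```

    With a share count in that ballpark, the $82.50 gap per share lands at roughly the $3 billion figure critics cite.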

    However, this view may oversimplify what is a much more nuanced, strategic process.

    Among the biggest sellers in the IPO were top VC firms — Index Ventures, Greylock Partners, Kleiner Perkins, and Sequoia Capital — who sold a relatively small portion of their holdings (roughly 11 million shares combined). Even with the price surge, these firms are sitting on massive long-term gains, some reaching 27x to 1,900x their initial investments.

    They could’ve demanded a higher IPO price. That they didn’t suggests intentionality — perhaps prioritizing long-term brand visibility, talent recruitment, and future capital raises over a few billion more in immediate gains.

    One exception may be the Marin Community Foundation, which sold all its shares for $440 million. While it may feel some sting from the missed upside, a windfall of that size is still considerable.

    The massive stock surge wasn’t necessarily a failure — it may have been part of a deliberate strategy. A strong debut creates momentum, enhances public perception, and strengthens business development efforts. It can also be a catalyst for easier future share sales, particularly once lockup periods expire.

    In contrast, IPOs that underperform on their debut — like NIQ Global Intelligence and SailPoint — can make follow-on offerings or exits more challenging for existing investors. For VCs still holding over 200 million shares of Figma — now worth over $25 billion — the big picture matters more than Day One.

    Limited Supply + High Demand = Scarcity Frenzy

    Figma only floated about 7% of its total share capital. That limited supply, mixed with intense demand from both institutional and retail investors, led to the inevitable spike in price. Reports suggest 40x oversubscription for shares — a telltale sign of market hype and anticipation.

    In such scenarios, investors tend to inflate orders massively just to secure a tiny allocation. The price jump becomes self-fulfilling as buyers chase the illusion of “free money.”
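    The order-inflation dynamic above is easy to quantify with a stylized pro-rata model (a simplification; real allocations are discretionary, and the target figure below is hypothetical): at 40x oversubscription, only 2.5% of each order is filled, so an investor who actually wants a given amount of stock has an incentive to bid forty times that.

```python
# Stylized pro-rata allocation under the reported 40x oversubscription.
oversubscription = 40
fill_rate = 1 / oversubscription           # fraction of each order filled
desired_dollars = 1_000_000                # hypothetical target allocation
order_needed = desired_dollars / fill_rate # bid required to hit the target
print(f"fill rate: {fill_rate:.1%}, order needed: ${order_needed:,.0f}")
```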

    Retail enthusiasm was a major driver behind Figma’s post-IPO momentum. But this behavior remains unpredictable and difficult for underwriters to factor into pricing models. Should bankers have set the IPO at 80x 2024 revenues, as implied by the closing price on day one? That’s debatable.

    There’s also a widespread myth that Wall Street banks favor hedge funds in IPO allocations. In Figma’s case, mutual funds and long-only investors likely received the lion’s share, aligning with company and VC preferences.

    This isn’t the first IPO to face public criticism, nor is it the first time alternative methods have been proposed.

    But none of these alternatives have replaced the traditional book-building process, largely because each comes with its own pitfalls. Dutch auctions often struggle with price discovery. Direct listings only work for well-known brands and can be volatile. SPACs, meanwhile, have earned a tarnished reputation amid poor post-merger performance.

    Chesterton’s Fence and the IPO Machine

    Critics often see the traditional IPO system as a relic sustained by a cartel of bankers. But like Chesterton’s Fence, the system endures for a reason: it balances complex interests — founders, VCs, long-term investors, and underwriters — in a high-stakes, high-pressure environment.

    Yes, the system has flaws. In the 1990s, banks were accused of allocating IPO shares to executives to curry future business — a practice that died out but could re-emerge without regulatory vigilance. Yet, no better replacement has proven itself at scale.

    Figma’s IPO might look scandalous at first glance. But on closer inspection, it reflects calculated trade-offs in a system that — while imperfect — remains resilient. The firm’s backers likely made their decisions with eyes wide open.

    As Winston Churchill once said of democracy:

    “It is the worst form of government — except for all the others that have been tried.”

    The IPO process, it seems, deserves the same faint praise.

    Figma’s debut mirrors the broader IPO market’s renewed energy in 2025, following a two-year lull. Investors are cautiously optimistic, though volatility remains a concern. Expect more tech unicorns to test the waters this year — and for the IPO debate to rage on.

  • Palantir’s Success in Washington and the Resulting 600% Surge in Its Stock Price

    Palantir’s Success in Washington and the Resulting 600% Surge in Its Stock Price

    Once dismissed as a niche Silicon Valley data-mining firm, Palantir Technologies has undergone a dramatic metamorphosis, transforming into a central fixture in Washington’s national security and AI strategies. As its stock soared nearly 600% from early 2024 through mid‑2025, Palantir cemented its reputation as a co-equal to political insiders—and embraced the aggressive posture of the Trump era it now serves.

    In early 2023, CEO Alex Karp stunned the company by announcing that Palantir was developing a next-generation Artificial Intelligence Platform (AIP)—even though no such project existed. As The Wall Street Journal recounts, Karp viewed the shift toward AI as inevitable and confidently placed Palantir at the center of it. His engineers then raced to build the product. What emerged became a centerpiece of national defense contracts and commercial integrations.

    In Q2 2025, AIP’s adoption helped Palantir post its first $1 billion quarter of revenue, with profits up 33% and U.S. commercial business growing 93% year-over-year.

    Palantir’s proximity to power was turbocharged in President Trump’s second term, as the firm took over major federal contracts. It consolidated dozens of disparate deals into a $10 billion Department of Defense agreement serviced by Palantir’s mission-grade Gotham and AIP platforms, Axios reported.

    This alignment transformed Palantir from tech oddball to national strategic partner. Its new posture earned comparisons to Trump himself—tough, unfiltered, unapologetically patriotic.

    Palantir’s share price has multiplied more than six-fold since early 2024, drawing enormous investor attention. Analyst Stephen Guilfoyle of WallStreetPit flagged the firm’s explosive growth: over 52% U.S. business growth in Q4, a 36% revenue increase, $1.25 billion in adjusted free cash flow, and profitability, with adjusted EPS of 7 cents. He raised his price target to a lofty $153 a share, reflecting continued bullish sentiment.

    The stock’s rise has outpaced major indices. In early 2025, Palantir was among the top performers in the S&P 500 and Nasdaq‑100, with its market cap topping $400 billion—surpassing giants like Salesforce and Adobe.

    With the stock surging, CEO Karp executed an aggressive share selloff: 38 million shares worth roughly $1.88 billion in 2024 alone, much of it near the presidential election. He’s signaled plans to sell nearly 10 million more in 2025, indicating a continued cash-out strategy leveraging Palantir’s rally.

    Despite such windfalls, critics highlight Palantir’s outsized valuation—trading at more than 200x future earnings and 80x projected revenue, per FT’s John Foley. While revenue is strong, skeptics warn the stock behaves like a meme—powered more by hype than fundamentals.
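
A back-of-envelope check shows what those multiples imply. The figures below are rounded from the article's own numbers (a roughly $400 billion market cap against ~200x future earnings and ~80x projected revenue) and are for illustration only, not a valuation model.

```python
# Market cap cited earlier in the piece (rounded).
market_cap = 400e9

# A price-to-projected-revenue multiple of ~80x implies the market is
# paying today for roughly $5B of forward revenue:
implied_revenue = market_cap / 80
assert implied_revenue == 5e9

# A price-to-future-earnings multiple of ~200x implies only ~$2B of
# forward earnings underpinning a $400B valuation:
implied_earnings = market_cap / 200
assert implied_earnings == 2e9
```

Put differently, the multiples leave very little room for growth to disappoint — which is the skeptics' core point.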

    Palantir’s success rests on an ideological playbook: blend AI prowess with government proximity. The company has built a “revolving door” of personnel exchanges between Washington and its executive ranks—including figures drawn from the Pentagon, CIA, DHS, and even the UK’s NHS. That insider network helped lock in contracts exceeding $1.3 billion with U.S. defense agencies and expanded lobbying to $5.8 million in 2024.

    The firm’s approach is flexible: smartly toe political lines, anticipate shifts in power, and monetize defense policymaking. Palantir’s global positioning reflects that model—growing its Washington footprint even as its commercial footprint expands.

    The company’s victories aren’t immune to challenges. In February 2025, Palantir shares plunged nearly 20% after news broke that the Pentagon might cut defense spending by 8% annually for five years, threatening Palantir’s pipeline.

    Moreover, critics raise alarms about ethics and bias—its close ties to ICE and surveillance applications invite scrutiny over privacy, fairness, and oversight.

    Still, Palantir’s AI platform is winning new contracts beyond defense—it now serves clients like the FAA, CDC, IRS, and even corporate giants, and stands as a singular example of AI-centric growth in a sluggish tech sector.

    Palantir’s journey from controversial data firm to poster child of AI‑powered government contracting has redefined what it means to succeed in tech—the old Silicon Valley playbook of consumer apps and venture-capital liquidity has been traded for political entanglement and defense dealmaking.

    Its 600% stock run was fueled not just by AI hype, but by a deliberate embrace of political alignment and contract design. The question now is whether that trajectory can last—once the federal tide turns, or budgets tighten, Palantir’s value may be tested.

  • Microsoft has eliminated dozens of positions in Washington

    Microsoft has eliminated dozens of positions in Washington

    Microsoft is laying off 40 Washington-based employees, as the company continues to trim its workforce amid record spending on artificial intelligence.

    Monday’s layoffs, disclosed in a state filing, are separate from previous announcements of global job cuts. The company announced in May that it was letting go of more than 6,000 workers and made another announcement in July for an additional 9,000 employees.

    Microsoft said Monday’s cuts, spread across the company, were very small.

    In Washington, Microsoft has cut 3,160 jobs so far this year, including Monday’s layoffs.

    “Organizational and workforce changes are a necessary and regular part of managing our business,” a Microsoft spokesperson said in an emailed statement. “We will continue to prioritize and invest in strategic growth areas for our future and in support of our customers and partners.”

    The company is in the midst of one of the largest rounds of layoffs in its history even as it reports record quarterly revenues and profits. Last week, Microsoft’s fiscal-year earnings stunned Wall Street, especially in its cloud and AI business.

    Microsoft reported last week that it invested $88 billion over the past year to build out its AI infrastructure and plans to spend another $30 billion by the end of September.

    Microsoft CEO Satya Nadella addressed this “incongruence” in a memo to employees last month.

    “This is the enigma of success in an industry that has no franchise value,” he said. “Progress isn’t linear. It’s dynamic, sometimes dissonant, and always demanding.”

    Despite the waves of layoffs, Microsoft’s head count is relatively unchanged, Nadella said, as the company prioritizes hiring in other parts of its business. Microsoft reported that it had 228,000 employees at the end of June, the same number it reported last year.

  • Delta Air Lines Confirms to U.S. Legislators That It Will Not Personalize Ticket Prices with AI

    Delta Air Lines Confirms to U.S. Legislators That It Will Not Personalize Ticket Prices with AI

    ATLANTA, GA — Delta Air Lines is facing mounting scrutiny over its adoption of artificial intelligence in airfare pricing, following sharp criticism from U.S. lawmakers who raised concerns about potential “personalized pricing” — a practice where AI could tailor fares based on a customer’s individual data or perceived willingness to pay.

    In a letter sent Friday to three Democratic senators — Ruben Gallego (AZ), Mark Warner (VA), and Richard Blumenthal (CT) — Delta firmly denied any intent to use AI in that manner, stating:

    “There is no fare product Delta has ever used, is testing or plans to use that targets customers with individualized prices based on personal data. Our ticket pricing never takes into account personal data.”

    The issue surfaced after the senators expressed alarm at comments made by Delta President Glen Hauenstein in December, when he said that Delta’s AI pricing system can predict “the amount people are willing to pay for the premium products related to the base fares.” The lawmakers interpreted this to mean Delta could eventually implement AI tools that price tickets based on individual “pain points” — essentially, the maximum price a specific person might accept.

    In a joint statement last week, the senators warned that such a practice would “likely mean fare price increases up to each individual consumer’s personal ‘pain point.’” The phrase sparked public backlash, fueling concerns over digital price discrimination in a sector where pricing transparency is already murky.

    While Delta clarified that it is not using AI to set fares on a per-person basis, it acknowledged that it will expand AI-powered dynamic pricing systems to cover 20% of its domestic network by the end of 2025, in collaboration with Israeli startup Fetcherr, which specializes in AI-driven pricing models.

    Delta reiterated that this technology is intended to streamline conventional pricing systems based on aggregate market factors — such as demand, fuel prices, and competition — not consumer behavior or identity.

    Delta emphasized that dynamic pricing has been used across the airline industry for over 30 years, long before the arrival of advanced machine learning tools. Historically, ticket prices have fluctuated based on broad variables like demand spikes during holidays, competitor pricing, or regional economic trends.

    In the letter to lawmakers, Delta wrote:

    “Given the tens of millions of fares and hundreds of thousands of routes for sale at any given time, the use of new technology like AI promises to streamline the process by which we analyze existing data and the speed and scale at which we can respond to changing market dynamics.”

    In other words, AI would merely optimize what was already a complex pricing algorithm — not personalize it.
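
The distinction Delta is drawing can be sketched in a few lines: an aggregate dynamic-pricing function keys only off market-level inputs, never the identity of the shopper. The function, weights, and numbers below are hypothetical illustrations, not Fetcherr's or Delta's actual model.

```python
def dynamic_fare(base_fare: float,
                 load_factor: float,      # fraction of seats already sold
                 fuel_index: float,       # 1.0 = baseline fuel cost
                 competitor_fare: float) -> float:
    """Adjust a base fare using only aggregate market variables."""
    demand_adj = 1.0 + 0.5 * load_factor   # fuller flight -> pricier seat
    fare = base_fare * demand_adj * fuel_index
    # Stay loosely anchored to the competition:
    return round(min(fare, competitor_fare * 1.15), 2)

# Note what is absent: no customer ID, browsing history, or per-person
# "pain point" -- every shopper querying this route sees the same fare.
assert dynamic_fare(300.0, 0.8, 1.05, 420.0) == 441.0
```

Personalized pricing would mean adding per-customer inputs to a function like this; the senators' concern is precisely that nothing technical prevents that step.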

    Still, lawmakers remain unconvinced. Senator Gallego responded to Delta’s letter, stating:

    “Delta is telling their investors one thing, and then turning around and telling the public another. If Delta is in fact using aggregated instead of individualized data, that is welcome news — but we need clarity.”

    Delta’s assurances came amid broader industry and regulatory unease. American Airlines CEO Robert Isom voiced his own concerns during an earnings call last week:

    “This is not about bait and switch. This is not about tricking. Talk about using AI in that way — I don’t think it’s appropriate. And certainly from American, it’s not something we will do.”

    At the legislative level, Representatives Greg Casar (TX) and Rashida Tlaib (MI) introduced a bill last week that would ban the use of AI for pricing or wage decisions based on personal data. The bill directly references potential scenarios such as airlines raising ticket prices after detecting a consumer searching for a family obituary — a hypothetical scenario designed to illustrate emotional exploitation through algorithmic targeting.

    The bill comes after the Federal Trade Commission (FTC) released a January staff report warning that companies increasingly use personal information — such as location, demographics, and even mouse movements — to adjust prices for goods and services.

    According to the FTC:

    “Retailers frequently use people’s personal information to set targeted, tailored prices… A consumer profiled as a new parent could be intentionally shown higher-priced baby thermometers.”

    Delta’s partnership with Fetcherr and its AI revenue management strategy signals a broader trend in the travel and transportation sector. Airlines are exploring AI to help navigate volatile fuel prices, shifting post-pandemic demand patterns, and ongoing labor shortages.

    Fetcherr’s AI pricing platform is designed to mimic stock market dynamics, adjusting prices in real-time based on numerous market variables — from macroeconomic indicators to real-time seat availability. While powerful, such models inevitably raise transparency and fairness concerns.

    Despite the controversy, investors have reacted with cautious optimism. Delta shares (NYSE: DAL) rose 1.4% Friday following the company’s public response, reflecting investor confidence in Delta’s ability to manage AI implementation without triggering regulatory blowback.

    Industry analysts, however, remain split.

    Morgan Stanley aviation analyst Richard Hill commented:

    “AI will inevitably change airline economics. But companies must tread carefully. Crossing the line into personal pricing is a reputational and legal minefield — and Congress is watching.”

    While Delta has now publicly pledged not to use personal data for individualized fares, pressure from lawmakers and consumer advocates shows no sign of abating.

    Expect greater regulatory scrutiny in the coming months, as AI tools proliferate across industries. For now, the travel sector remains a key battleground in the growing debate over algorithmic fairness, data ethics, and the power of artificial intelligence to reshape market behavior.

  • Zuckerberg’s War on the iPhone

    Zuckerberg’s War on the iPhone

    Mark Zuckerberg didn’t use Apple’s name the other day when laying out his vision for marrying superintelligent AI and his hardware. He might as well have. Getty Images

    In a tech world long dominated by Apple, Meta CEO Mark Zuckerberg has taken aim at a new target: the iPhone as the center of personal computing. In a memo and earnings commentary timed to perfection, Zuckerberg may have fired the first true salvo in what could become the next platform war. His weapon? A sweeping vision of AI-powered smart glasses as the future “primary computing device”.

    On July 30, Zuckerberg published a manifesto titled “Personal Superintelligence,” in which he outlined his belief that AI is on the brink of superintelligence—a form of artificial general intelligence tailored to empower individuals. He wrote that future computing will live not in handheld screens, but in devices that see, listen, and respond—smart glasses. These, he declared, would replace smartphones as the dominant interface in daily life.

    He further asserted that absent such wearable AI gear, people would be at a “significant cognitive disadvantage.”

    Meta isn’t just talking. The company is pouring billions into the infrastructure to make this happen. It owns a 49% stake in Scale AI (~$14.3 billion) and plans AI data centers like Prometheus and Hyperion, each the size of Manhattan blocks. Annual capital expenditures for 2025 now top $66–$72 billion, heavily skewed toward AI infrastructure.

    Meta stock rose approximately 9% following the update—investors cheered the clarity of the pivot.

    Still, profitability concerns persist. Analysts note slower revenue and profit growth, driven by competitive pressure, TikTok’s ad traction, and uncertain returns on AI investments.

    In blunt terms: Zuckerberg is moving to end Meta’s reliance on Apple’s iPhone ecosystem. By betting on glasses, Meta seeks to sidestep Apple’s App Store fees, privacy sandbox, and hardware constraints. Previously, Zuckerberg criticized Apple for lacking innovation, limiting accessory support, and suppressing wearable integration, particularly with Ray-Ban AI glasses.

    If realized, Apple’s dominance over personal computing could erode—supplanted by a multimodal AI interface owned by Meta.

    Meta’s Llama AI—once open source—now faces tighter licensing, citing safety concerns. Zuckerberg signaled the company may not fully open-source its future models, unlike earlier promises.

    Zuckerberg’s announcement wasn’t marketing fluff—it was a fresh chapter in competition. He framed AI as a democratizing force, contrasting with rivals who allegedly envision automation reducing people to passive recipients of machine labor. Instead, he pitched a future of “personal empowerment.”

    Expect Apple to accelerate its own AI/AR strategy, possibly unveiling Vision Pro successors or streamlined AI glasses. If smart glasses gain traction, we may see a shift away from swipe-driven screens toward continuous ambient AI. Regulators may reexamine data collection and privacy standards if Meta’s glasses become ubiquitous.

    Zuckerberg’s vision is bold, built on investment muscle and a clear sense of timing. He’s betting that AI and wearables will redefine personal computing—and unseat Apple’s long reign. Whether he succeeds will depend on execution: user trust, device comfort, battery life, AI reliability, and a compelling ecosystem no longer tethered to a screen.

    For now, however, it’s safe to say: Meta’s not only challenging the iPhone—but reimagining what a “computer” could be.