    Tech

    Elon Musk confirms xAI used OpenAI’s models to train Grok

By longda · May 2, 2026 · 2 Mins Read

    In a federal courtroom in California on Thursday, Elon Musk testified that his own AI startup, xAI, has used OpenAI’s models to improve its own.

The matter in question is model distillation, a common industry practice in which a larger AI model acts as a “teacher” that passes knowledge on to a smaller “student” model. Although it is often used legitimately within a company, with one of its own AI models training another, the practice is also sometimes used by smaller AI labs to make their models mimic the performance of a larger competitor’s model.
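The teacher/student idea above can be sketched in a few lines. This is a minimal, illustrative example of the standard distillation objective (not any lab’s actual code): the student is trained to match the teacher’s temperature-softened output distribution, typically by minimizing the KL divergence between the two.

```python
# Minimal sketch of the distillation objective: a student model learns by
# matching the "soft" probability distribution a teacher model produces,
# rather than only hard labels. Pure-stdlib for illustration.
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                         # subtract max for stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's soft targets to the student's output."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's current guess
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, [3.0, 1.0, 0.2]))  # 0.0: student matches teacher
print(distillation_loss(teacher, [0.2, 1.0, 3.0]))  # positive: student disagrees
```

A higher temperature flattens the teacher’s distribution, exposing how the teacher ranks the *wrong* answers too, which is much of what makes distillation effective as a transfer method.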

Asked on the stand whether he knew what model distillation was, Musk said it means using one AI model to train another. When asked whether xAI has distilled OpenAI’s technology, Musk seemed to avoid the question, saying that “generally all the AI companies” do such a thing. And when asked if that was a yes, he said, “Partly.”

    When pressed, Musk said, “It is standard practice to use other AIs to validate your AI.”

Model distillation has been on the rise in recent years and has stirred growing controversy among AI labs, since the line between what is legal and what violates a company’s terms or policies often falls in a gray area. Companies like OpenAI and Anthropic have accused Chinese firms of distilling their models, with OpenAI publicly stating its concerns about DeepSeek, and Anthropic specifically naming DeepSeek, Moonshot, and MiniMax. Google, too, has taken steps to prevent what it calls “distillation attacks,” which it describes as “a method of intellectual property theft that violates Google’s terms of service.”

    In Anthropic’s own blog post on the matter, the company wrote, “Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.”
