    Blockchain Echo
    Open-source AI isn’t the end-all game—Bringing AI onchain is

    By John Smith · May 6, 2025


    Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

    In January 2025, DeepSeek’s R1 surpassed ChatGPT as the most downloaded free app on the US Apple App Store. Unlike proprietary models like ChatGPT, DeepSeek is open-source, meaning anyone can access the code, study it, share it, and use it for their own models.

    This shift has fueled excitement about transparency in AI, pushing the industry toward greater openness. In February 2025, Anthropic released Claude 3.7 Sonnet, a hybrid reasoning model partially open for research previews, further amplifying the conversation around accessible AI.

    Yet, while these developments drive innovation, they also expose a dangerous misconception: that open-source AI is inherently more secure (and safer) than other closed models.

    The promise and the pitfalls

    Open-source AI models like DeepSeek’s R1 and Replit’s latest coding agents show us the power of accessible technology. DeepSeek claims it built its system for just $5.6 million, nearly one-tenth the cost of Meta’s Llama model. Meanwhile, Replit’s Agent, supercharged by Claude 3.5 Sonnet, lets anyone, even non-coders, build software from natural language prompts.

    The implications are huge. Practically anyone, including smaller companies, startups, and independent developers, can now build new specialized AI applications, including new AI agents, on top of these existing (and very robust) models at lower cost, at a faster rate, and with far greater ease. This could create a new AI economy where accessibility to models is king.

    But where open-source shines—accessibility—it also faces heightened scrutiny. Free access, as seen with DeepSeek’s $5.6 million model, democratizes innovation but opens the door to cyber risks. Malicious actors could tweak these models to craft malware or exploit vulnerabilities faster than patches emerge.

    Open-source AI doesn’t lack safeguards by default. It builds on a legacy of transparency that has fortified technology for decades. Historically, engineers leaned on “security through obscurity,” hiding system details behind proprietary walls. That approach faltered: vulnerabilities surfaced, often discovered first by bad actors. Open-source flipped this model, exposing code, like DeepSeek’s R1 or Replit’s Agent, to public scrutiny and fostering resilience through collaboration. Yet neither open nor closed AI models inherently guarantee robust verification.

    The ethical stakes are just as critical. Open-source AI, much like its closed counterparts, can mirror biases or produce harmful outputs rooted in training data. This isn’t a flaw unique to openness; it’s a challenge of accountability. Transparency alone doesn’t erase these risks, nor does it fully prevent misuse. The difference lies in how open-source invites collective oversight, a strength that proprietary models often lack, though it still demands mechanisms to ensure integrity.

    The need for verifiable AI

    For open-source AI to be more trusted, it needs verification. Without it, both open and closed models can be altered or misused, amplifying misinformation or skewing automated decisions that increasingly shape our world. It’s not enough for models to be accessible; they must also be auditable, tamper-proof, and accountable. 

    By using distributed networks, blockchains can certify that AI models remain unaltered, that their training data stays transparent, and that their outputs can be validated against known baselines. Unlike centralized verification, which hinges on trusting one entity, blockchain’s decentralized, cryptographic approach stops bad actors from tampering behind closed doors. It also flips the script on third-party control, spreading oversight across a network and creating incentives for broader participation. Today, by contrast, unpaid contributors fuel trillion-token datasets without consent or reward, then pay to use the results.

    A blockchain-powered verification framework brings layers of security and transparency to open-source AI. Storing models onchain or via cryptographic fingerprints ensures modifications are tracked openly, letting developers and users confirm they’re using the intended version. 
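    The fingerprinting idea can be sketched in a few lines. The following is a minimal illustration, not any particular protocol's implementation: it assumes a model's weights live in a file and that a trusted SHA-256 fingerprint has been published (for example, recorded onchain) for comparison. The function names here are hypothetical.

```python
import hashlib


def model_fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 fingerprint of a model weights file, streamed in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model(path: str, registered_hash: str) -> bool:
    """Check a local copy of the model against the published fingerprint.

    Any modification to the weights, even a single byte, changes the
    SHA-256 digest, so a mismatch means the copy is not the intended version.
    """
    return model_fingerprint(path) == registered_hash
```

    In a real deployment the registered hash would be read from a smart contract or similar tamper-resistant record; here it is simply a string parameter.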

    Capturing training data origins on a blockchain proves models draw from unbiased, quality sources, cutting risks of hidden biases or manipulated inputs. Plus, cryptographic techniques can validate outputs without exposing personal data users share (often unprotected), balancing privacy with trust as models strengthen.
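    The privacy-preserving validation described above would use zero-knowledge techniques in practice. As a much simpler stand-in, a salted hash commitment shows the basic shape of the idea: a party can publish a digest of an output without revealing the output itself, and later prove the output matches the commitment. The function names and salt format are assumptions for illustration.

```python
import hashlib
import secrets


def commit(output: str) -> tuple[str, str]:
    """Commit to an output: publish the digest, keep the salt and output private."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{salt}:{output}".encode()).hexdigest()
    return digest, salt


def reveal_matches(output: str, salt: str, digest: str) -> bool:
    """Later, anyone given the output and salt can verify the published digest."""
    return hashlib.sha256(f"{salt}:{output}".encode()).hexdigest() == digest
```

    Unlike a true zero-knowledge proof, this requires eventually revealing the output to verify it; it only illustrates the commit-then-verify pattern.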

    Blockchain’s transparent, tamper-resistant nature offers the accountability open-source AI desperately needs. Where AI systems now thrive on user data with little protection, blockchain can reward contributors and safeguard their inputs. By weaving in cryptographic proofs and decentralized governance, we can build an AI ecosystem that’s open, secure, and less beholden to centralized giants.

    AI’s future is based on trust… onchain

    Open-source AI is an important piece of the puzzle, and the AI industry should work to achieve even more transparency—but being open-source is not the final destination.

    The future of AI and its relevance will be built on trust, not just accessibility. And trust can’t be open-sourced. It must be built, verified, and reinforced at every level of the AI stack. Our industry needs to focus its attention on the verification layer and the integration of safe AI. For now, bringing AI onchain and leveraging blockchain tech is our safest bet for building a more trustworthy future.

    David Pinger

    David Pinger is the co-founder and CEO of Warden Protocol, a company that focuses on bringing safe AI to web3. Before co-founding Warden, he led research and development at Qredo Labs, driving web3 innovations such as stateless chains, WebAssembly, and zero-knowledge proofs. Before Qredo, he held roles in product, data analytics, and operations at both Uber and Binance. David began his career as a financial analyst in venture capital and private equity, funding high-growth internet startups. He holds an MBA from Panthéon-Sorbonne University.


