April 8, 2025

In an exclusive feature with Building Useful Stuff, Marko Stokic, Head of AI at Oasis Protocol, shared insights into the protocol’s pivotal role within the HumanAIx Alliance. He highlighted the growing importance of confidential computing and verifiable AI in shaping the future of decentralized AI.

Stokic reaffirmed Oasis Protocol’s dedication to creating a secure, decentralized AI infrastructure. He pointed out that while Trusted Execution Environments (TEEs) provide essential security benefits, they are not enough on their own. For a truly trustless system, TEEs must be supported by decentralized key management, thorough auditing, reproducible builds, and on-chain attestations.

Below is the full transcript of the interview video.

Host: Welcome to this episode of Building Useful Stuff, where we explore the latest developments within the HumanAIx ecosystem. For those who are not yet familiar, the OORT team recently unveiled HumanAIx at an event in Hong Kong—a groundbreaking initiative that brings together numerous founding members to accelerate the development of decentralized AI infrastructure.

Today’s guest represents one of these founding members: Oasis Protocol. Joining us is Marko Stokic, Head of AI at Oasis Protocol. Marko, welcome to the show, and thank you for being here.

Marko Stokic: Thank you for having me. It’s a pleasure to join the discussion.

Host: Before we dive deeper into HumanAIx for our audience, I’d like to begin with a timely question. Recently, the co-founder of Oasis shared a statement on Twitter that caught our attention. He remarked:
"If it’s not open-source, not backed by a decentralized key management system, not audited with a reproducible build running in a Trusted Execution Environment (TEE), and not periodically attested on-chain, then it should not be trusted."
Could you share your perspective on this statement?

Marko Stokic: It is difficult to disagree with our co-founder. His statement addresses a critical conversation currently unfolding, especially within crypto circles, concerning the development of trustless AI agents. The central question is: how does one build truly unbreakable agents?

The initial approach often assumes that simply hosting an agent within a Trusted Execution Environment (TEE) suffices. For context, TEEs are core to the Oasis Network architecture. Specifically, all Oasis nodes run on Intel SGX-capable CPUs, which create TEEs to securely execute smart contracts, thereby ensuring on-chain privacy.

However, our co-founder emphasizes an essential point: while TEEs are valuable, they alone are insufficient. Malicious code can still be hosted within a TEE. Security, therefore, requires additional layers.

At Oasis, the goal is to construct the most secure, decentralized solution possible. This includes:

- Decentralized key management
- Thorough, independent audits
- Reproducible builds
- Periodic on-chain attestations

To enhance trust further, we advocate for public explorers that allow anyone to verify these attestations. Additionally, reproducible builds ensure that what is deployed in the TEE matches the intended application. For every application (what we call ROFL apps), we provide compose files that let anyone rebuild the application image and confirm it matches what actually runs inside the TEE.
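To make that reproducible-build check concrete, here is a minimal sketch in Python. The image bytes and the attested digest are placeholder assumptions; in a real flow both would come from the build pipeline and from the on-chain attestation, via the platform's own tooling.

```python
import hashlib

def digest_of_local_build(image_bytes: bytes) -> str:
    """Digest of the container image rebuilt locally from the published compose file."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_onchain_attestation(local_digest: str, attested_digest: str) -> bool:
    # The app earns trust only if the code anyone can rebuild and inspect
    # is byte-for-byte what the TEE attests to be running.
    return local_digest == attested_digest

# Placeholder values for illustration only.
rebuilt_image = b"bytes of the locally rebuilt image"
attested_digest = hashlib.sha256(b"bytes of the locally rebuilt image").hexdigest()

print(matches_onchain_attestation(digest_of_local_build(rebuilt_image), attested_digest))
# True: the rebuilt image matches the attested measurement
```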

Host: That is remarkable. Given Oasis Protocol’s involvement in DeFi, GameFi, AI, and data tokenization, could you elaborate on the role Oasis Protocol plays within HumanAIx? How is the protocol contributing to this pioneering alliance?

Marko Stokic: Of course. Oasis Protocol has engaged in discussions with OORT and other members of the alliance for some time. When the HumanAIx initiative was proposed, it aligned immediately with our vision.

Our specialization lies in confidential compute, which we have been pioneering for years. Interestingly, the definition of confidential compute on CoinMarketCap was authored by Oasis Protocol—a testament to our expertise.

Confidential compute is critical in decentralized AI because decentralized AI often involves utilizing external compute resources. While developers may own their models and data, they frequently rely on third-party hardware for computation. Without confidential compute, this process becomes a black box in the worst sense: privacy is compromised and verification is absent.

For sensitive use cases such as healthcare, privacy is non-negotiable. Oasis Protocol provides this capability, offering any builder in the ecosystem a secure, confidential compute infrastructure.
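As a rough illustration of what that guarantee looks like from a builder's side, here is a small Python sketch of the gate a client puts in front of sensitive data: verify the enclave's attestation first, and send data only if it matches the audited build. The measurement value, quote format, and submit flow are invented for this example; a production client would validate a full vendor-signed attestation through the platform's SDK.

```python
import hashlib
import json

# Assumed value: the measurement of the audited, reproducibly built enclave image.
EXPECTED_MEASUREMENT = "measurement-of-the-audited-enclave-build"

def attestation_is_valid(quote: dict) -> bool:
    # Accept the remote enclave only if it reports the measurement we audited.
    # A real check would also validate the hardware vendor's signature chain.
    return quote.get("measurement") == EXPECTED_MEASUREMENT

def submit_health_record(quote: dict, record: dict) -> None:
    if not attestation_is_valid(quote):
        raise RuntimeError("refusing to send sensitive data: enclave not attested")
    # In practice the record is encrypted to a key held only inside the
    # attested enclave, so the hardware operator never sees plaintext.
    payload = json.dumps(record).encode()
    print(f"submitting record {hashlib.sha256(payload).hexdigest()[:12]} to enclave")

submit_health_record({"measurement": EXPECTED_MEASUREMENT}, {"patient": "anon-1", "hr": 62})
```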

Host: There have been many debates around open-source versus closed-source AI. Companies like Meta and Microsoft claim their models are open-source, yet significant components remain undisclosed. How does HumanAIx distinguish itself from these practices?

Marko Stokic: An important question. Indeed, while I fully support open-source development, openness alone does not equate to transparency. Meta's Llama models, for example, publish their weights, but the training data remains undisclosed.

HumanAIx differentiates itself by pursuing full verifiability across the AI development and deployment lifecycle. Within the alliance, builders and users alike will be able to verify each stage: which data a model was trained on, how it was trained, and how it is deployed and run.

Ideally, this will include privacy safeguards as well. Furthermore, end-users will have tools to independently verify that these processes are conducted properly.

While the initiative is still in progress, this commitment to transparency is fundamental to HumanAIx’s mission.

Host: There are ongoing discussions about the ethical use of AI. How is HumanAIx addressing ethical considerations, including AI safety?

Marko Stokic: This is an important dimension. The argument from proponents of closed systems is typically framed around safety: if powerful models are made public, bad actors might exploit them.

However, I consider this view flawed. Restricting access stifles innovation and consolidates control within large corporations, as seen over the past decade. In contrast, open-source models benefit from broader scrutiny. Issues such as biases and vulnerabilities are identified more quickly when more eyes are examining the model.

Where HumanAIx adds value is in ensuring fair data usage. Even open-source AI has struggled with this: contributors rarely receive fair recognition or incentives when their data is used.

Web3, however, excels at incentivizing participation. Blockchain mechanisms, similar to those that secured Bitcoin through decentralized mining, can now incentivize users to contribute proprietary data. This creates a virtuous cycle where participants are fairly rewarded not just at the point of data contribution, but continuously—every time their data powers an AI application.

The alliance is particularly interested in advancing solutions for ongoing, equitable compensation, especially as foundational models are fine-tuned and deployed over time.

Host: That continuous incentivization is a compelling factor for anyone exploring decentralized AI. How would such a mechanism function in practice? For example, current platforms like ChatGPT do not reward users for the data they provide.

Marko Stokic: Indeed, this remains a complex challenge. Presently, contributors may receive compensation for tasks like data labeling or initial data sharing, often facilitated by crypto payments.

However, achieving full lifecycle incentivization requires more advanced frameworks. For instance, model repositories would need GitHub-like mechanisms to trace model updates and deployments back to the original data contributors.
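As a toy illustration of the idea (not any particular project's implementation), the Python sketch below splits a small per-inference fee among data contributors in proportion to how much data each supplied; the names, weights, and fee are invented.

```python
from collections import defaultdict

class ContributorLedger:
    """Toy model of lifecycle incentives: every model call pays a small fee,
    split among the contributors whose data shaped the model."""

    def __init__(self, contributions: dict[str, float]):
        total = sum(contributions.values())
        # Each contributor's share is proportional to the data they supplied.
        self.shares = {name: weight / total for name, weight in contributions.items()}
        self.balances = defaultdict(float)

    def record_inference(self, fee: float) -> None:
        # Called once per model invocation; rewards accrue continuously.
        for name, share in self.shares.items():
            self.balances[name] += fee * share

ledger = ContributorLedger({"alice": 600.0, "bob": 300.0, "carol": 100.0})
for _ in range(1_000):            # 1,000 model calls at a 0.01 fee each
    ledger.record_inference(0.01)
print(dict(ledger.balances))      # roughly {'alice': 6.0, 'bob': 3.0, 'carol': 1.0}
```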

Projects like Sentient are actively pursuing this vision, having secured substantial funding. Their goal is to enable tracking of model evolution and to reward data contributors whenever those models are utilized.

While this approach remains in development, it represents one of the most promising paths toward fair, continuous reward structures in decentralized AI.

Host: A quick follow-up—recently, the Bybit hack raised concerns about AI-assisted attacks. Theoretically, could AI be trained to execute a hack of that scale?

Marko Stokic: The Bybit incident primarily involved social engineering rather than technical exploitation. That said, the potential for AI to conduct such attacks exists.

AI companions, for instance, have already demonstrated the ability to convincingly emulate human interaction, sometimes with tragic consequences. If AI systems can persuade individuals into harmful actions, the risk of AI-facilitated social engineering attacks becomes very real.

This possibility is concerning, particularly because AI systems could exploit human vulnerabilities to gain unauthorized access. While this scenario is theoretical, it cannot be dismissed.

Host: Indeed, it echoes dystopian narratives like Skynet. To close, returning to HumanAIx—what do you perceive as the biggest challenges for the alliance as it builds toward decentralized AI?

Marko Stokic: Decentralized AI is a monumental undertaking that demands collaboration, and each partner within the alliance contributes specific expertise: Oasis, for example, brings the confidential compute layer, while OORT contributes its decentralized data infrastructure, and other members cover their own specialties across compute, data, and community.

No single project can achieve this alone. The ecosystem must come together, combining resources and knowledge to create a comprehensive solution.

Admittedly, this is complex. But it is precisely this complexity that makes the effort worthwhile. Over the coming months, as collaboration deepens and projects move closer to deployment, clearer answers will emerge.

Host: Thank you, Marko. We greatly appreciate your insights and the time you have dedicated to helping our audience better understand the HumanAIx initiative and Oasis Protocol’s role within it. For those who found value in today’s discussion, stay tuned for future episodes of Building Useful Stuff.
