AiThority Interview with Dariia Porechna, Head of Protocol Engineering at Autonomys Network
Dariia Porechna, Head of Protocol Engineering at Autonomys Network, discusses using AI to scale program builds, AI innovations from around the world, and the role of blockchain in this interview.
————
Hi Dariia, tell us about yourself and your role at Autonomys.
I am the Head of Protocol Engineering at Autonomys Network, where I drive the technological advancements of our hyper-scalable decentralized AI ecosystem. This infrastructure is designed to support secure super dApps and on-chain agents, equipping them with advanced AI capabilities. I hold a degree in Applied Mathematics and Cryptography from Kyiv Polytechnic University, where I focused on exploring practical vulnerabilities in encryption standards. My professional journey includes working with Stephen Wolfram at Wolfram Research, which deepened my expertise in cryptography and blockchain technology. Since joining Autonomys in May 2022, I have been instrumental in developing critical components of our architecture, including consensus mechanisms (Proof-of-Space + Proof-of-Time), distributed storage, and decoupled execution. As the primary author of our upcoming whitepaper, my goal is to articulate an ambitious vision for an autonomous post-AI economy rooted in scalability, security, and resilience. My work is driven by a commitment to creating human-centered, universally accessible systems that empower individuals and communities to engage with a transparent and equitable decentralized future.
How has Autonomys evolved over the years?
Autonomys has undergone a remarkable evolution, starting with its inception by two builders back in 2018 and the founding of Subspace Labs in 2021. Initially focused on building a hyper-scalable and eco-friendly blockchain, we introduced the Proof of Archival Storage consensus protocol, which set the foundation for the Distributed Storage Network. During this period, our Aries and Gemini testnets saw significant milestones, including the Gemini II incentivized testnet, which reached over 30,000 live nodes across 60+ countries and pledged over 1.3 PiB in storage. These early successes were reinforced by a $33 million strategic funding round led by Pantera Capital, supporting our mission to deliver scalable, decentralized infrastructure. The most transformative moment came in 2024 when we rebranded as Autonomys, reflecting our growing commitment to decentralized AI and autonomous systems for both humans and AI. As we transition from the Gemini testnet to our Mainnet, we are launching critical components such as permanent storage with EVM compatibility, allowing Ethereum developers to deploy dApps seamlessly. Autonomys is now positioned not only to meet the demands of decentralized storage and compute but also to lead the charge into the future of AI-driven blockchain innovation. Our vision continues to evolve with the goal of building an inclusive, autonomous future powered by decentralized AI technology.
We'd love to hear about your recent collaboration and how it empowers your plans for AI and blockchain innovation.
Our partnership with Masa is a game-changer for decentralized AI innovation. By integrating Autonomys’ decentralized infrastructure with Masa’s real-time data streams, we’re equipping AI developers with access to rich datasets for optimizing AI agents and training models. Developers working on our execution platform will be able to leverage Masa’s data to enhance the accuracy and efficiency of their AI solutions, allowing for more advanced AI-powered dApps. This partnership breaks down traditional barriers to AI development, providing a decentralized and permissionless environment for developers to build cutting-edge applications that are secure, scalable, and globally deployable.
What should developers keep in mind when using AI to scale program builds?
When using AI to scale program builds, developers should prioritize integrating AI in a way that enhances both efficiency and security without sacrificing decentralization. This means selecting scalable infrastructure that supports high throughput and real-time data, like decentralized AI platforms, to handle the vast amounts of data AI requires for training and inference. Additionally, it's crucial to design systems that maintain transparency, ensuring that AI decision-making processes and actions taken by AI agents are auditable and trustworthy. Developers should also be mindful of modularity, enabling flexible upgrades and future-proofing their builds for the evolving landscape of AI and blockchain technologies.
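As a minimal sketch of the auditability point above, here is a hash-chained, append-only log of agent decisions. The class, field names, and hashing scheme are illustrative assumptions for this interview, not part of any Autonomys API:

```python
import hashlib
import json
import time


class AgentDecisionLog:
    """Append-only log of AI-agent decisions, hash-chained so tampering is detectable.

    Hypothetical illustration: names and structure are assumptions, not a real API.
    """

    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64  # genesis placeholder

    def record(self, agent_id: str, action: str, inputs: dict) -> str:
        entry = {
            "agent_id": agent_id,
            "action": action,
            "inputs": inputs,
            "timestamp": time.time(),
            "prev_hash": self.prev_hash,
        }
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append({**entry, "hash": digest})
        self.prev_hash = digest
        return digest

    def verify(self) -> bool:
        """Recompute the hash chain to confirm no entry was altered or reordered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True


log = AgentDecisionLog()
log.record("agent-7", "rebalance_storage", {"shard": 42, "reason": "load"})
assert log.verify()
```

In a decentralized setting, the periodic anchoring of such a chain's head hash to a public ledger is one common way to make agent behavior independently auditable.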
Can you talk about the most exciting AI innovations from around the world that have piqued your interest?
Personally, one of the most exciting developments in AI right now is the evolution of multi-agent systems and frameworks that allow many agents to collaborate and solve complex tasks together. These agents can be specialized or general, and when working in concert, they show potential for applications in everything from advanced simulations to real-world autonomous systems like supply chain logistics, smart grids, or large-scale data analysis. The ability for these agents to communicate and make decisions collectively marks a significant leap in how AI can address multifaceted challenges.
Another key innovation is the development of compact AI models that can run on edge devices. Traditionally, AI models required immense computational resources, often needing cloud infrastructure for processing. However, advancements in model compression, quantization, and efficient architectures (such as transformers for edge devices) are enabling powerful AI to operate directly on devices like smartphones, wearables, or IoT sensors. This shift reduces latency, enhances privacy, and opens up new possibilities for AI in healthcare monitoring, autonomous drones, and smart home applications, where constant connectivity to the cloud isn’t feasible or desirable.
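To make the compression idea above concrete, here is a minimal sketch of post-training dynamic quantization in PyTorch. The model is a toy placeholder; real edge deployments would typically combine this with pruning, distillation, or hardware-specific runtimes:

```python
import torch
import torch.nn as nn

# Toy model standing in for a larger network destined for an edge device.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Post-training dynamic quantization: weights of the listed module types
# are stored as int8 and dequantized on the fly during inference,
# shrinking the model and often speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
with torch.no_grad():
    print(quantized(x).shape)  # torch.Size([1, 10]), with a smaller weight footprint
```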
Also, reinforcement learning in complex environments has made notable progress. Recent breakthroughs, particularly in training models to master intricate games like StarCraft or Dota 2, have demonstrated the scalability and flexibility of reinforcement learning algorithms. Beyond gaming, these techniques are being applied to industrial automation, robotics, and self-driving vehicles.
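For readers new to the field, the core loop behind those systems can be shown with a deliberately tiny tabular Q-learning sketch on a toy corridor environment; the game-playing agents mentioned above replace the table with large neural networks and distributed training:

```python
import random

# Toy corridor: states 0..N-1, start at 0, reward only at the rightmost state.
N_STATES, ACTIONS = 8, [-1, +1]              # move left or right
ALPHA, GAMMA, EPSILON, EPISODES = 0.1, 0.95, 0.1, 500

Q = [[0.0, 0.0] for _ in range(N_STATES)]    # Q[state][action_index]

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[state][i])
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap on the best estimated next-state value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the greedy policy should step right in every state.
print(["left" if Q[s][0] > Q[s][1] else "right" for s in range(N_STATES - 1)])
```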
Lastly, neurosymbolic AI—which combines symbolic reasoning with deep learning—offers an intriguing approach to making AI systems more interpretable and capable of reasoning in ways that reflect human thought. This hybrid method is expected to bridge the gap between powerful but opaque deep learning systems and the need for transparency and explainability in high-stakes domains like healthcare or legal reasoning.
Dariia Porechna is the Head of Protocol Engineering at Autonomys, where she leads the development and implementation of the network's core technologies, including consensus mechanisms, distributed storage, and scalability solutions. Originally from Kyiv, Ukraine, and now residing in Rome, Italy, she holds a degree in Applied Mathematics and Cryptography from Kyiv Polytechnic University. Since joining Autonomys in May 2022, Dariia has played a key role in advancing the project, from conducting research and debugging systems to authoring the new whitepaper, making her a driving force behind the network's innovation and growth.
Autonomys Network—the foundation layer for AI3.0—is a hyper-scalable decentralized AI (deAI) infrastructure stack encompassing high-throughput permanent distributed storage, data availability and access, and modular execution. Our deAI ecosystem provides all the essential components to build and deploy secure super dApps (AI-powered dApps) and on-chain agents, equipping them with advanced AI capabilities for dynamic and autonomous functionality. Our infrastructure is designed to not only support vast present-day use cases with truly decentralized, secure, and scalable solutions but to serve as an optimized environment for future innovation and advancements yet to be discovered.