Cryptopolitan has had the pleasure of sitting down with Tory Green, the not-so-new CEO of io.net, to talk about his vision for the company and his mission in this industry.

io.net is a decentralized network that’s all about making GPU power more accessible to everyone.

It’s designed to cut costs and speed up projects for engineers and businesses by offering quick access to a huge pool of GPUs whenever they need them. 

The network, called the Internet of GPUs (IOG), pulls together GPUs from all over the world, letting users tap into tons of computing power for things like AI, machine learning, and cloud gaming.

Here is all we learned from Tory:

QUESTION: Nice to meet you, Tory. Let’s get right into it. So, you stepped into the CEO role at a uniquely important time for io.net. What steps did you or are you taking to make sure everything runs smoothly and stays on track during this leadership change?

ANSWER: I officially took on the CEO role in June, but I’ve been running the show for the last 15 months anyway, so the switch was no big deal. 

First thing I did was lay out a clear strategy, making sure everyone knew where we’re headed and what steps we need to take to get there. Everyone being on the same page is crucial to keeping things on track.

Next, I made sure we had a solid leadership team in place. I put the right people in the right roles—people who have the know-how and the drive to handle challenges and help us grow. I also pushed for transparency and open communication. 

We’ve set up regular meetings and check-ins to keep everyone in the loop about what’s going well and what’s not. This keeps the team engaged and ready to adapt as we navigate this leadership change.

Also, I’ve kept the focus on getting things done. Transitions can easily become distractions, so I’ve made it a priority to keep our operational goals front and center. We’re staying on course, continuing to innovate, and scaling io.net like we planned.

QUESTION: io.net has been growing its decentralized GPU network really fast. What specific technical issues have come up as you try to keep everything running smoothly, especially with low latency and high reliability, as you scale up to manage hundreds of thousands of GPUs all over the world?

ANSWER: As io.net scales its decentralized GPU network, we’ve naturally faced some technical challenges. One issue is ensuring consistent performance across geographically dispersed nodes, which we address by optimizing data processing and minimizing latency, especially for real-time AI tasks.

Reliability is another focus. With a large and diverse network, we’ve implemented advanced monitoring, validation, and redundancy systems to prevent node failures and maintain high performance.

Scalability is critical as well, and we’ve developed algorithms to efficiently allocate resources and balance loads, ensuring smooth operations as we grow.
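
His point about allocation and load balancing can be pictured with a toy greedy scheduler. The sketch below is purely illustrative: the `Node` and `Job` shapes, the scoring weights, and the `pick_node` helper are our own assumptions, not io.net's actual algorithm.

```python
# Hypothetical illustration only: a greedy scheduler that places a GPU job on the
# node with the best latency/reliability score among nodes with free capacity.
# Node, Job, and the scoring weights are invented for this sketch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    node_id: str
    region: str
    free_gpus: int
    latency_ms: float      # measured round-trip latency to the requesting client
    reliability: float     # 0.0 to 1.0 score from recent uptime history

@dataclass
class Job:
    job_id: str
    gpus_needed: int
    preferred_region: str

def pick_node(job: Job, nodes: list[Node]) -> Optional[Node]:
    """Return the best node for a job, or None if nothing can host it."""
    candidates = [n for n in nodes if n.free_gpus >= job.gpus_needed]
    if not candidates:
        return None
    # Score favors low latency and high reliability, with a bonus for the
    # client's preferred region; the weights are arbitrary for illustration.
    def score(n: Node) -> float:
        region_bonus = 0.0 if n.region == job.preferred_region else 50.0
        return n.latency_ms + region_bonus - 100.0 * n.reliability
    return min(candidates, key=score)

nodes = [
    Node("a", "eu-west", free_gpus=8, latency_ms=40.0, reliability=0.99),
    Node("b", "us-east", free_gpus=2, latency_ms=25.0, reliability=0.95),
]
print(pick_node(Job("train-1", gpus_needed=4, preferred_region="us-east"), nodes).node_id)
```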

By addressing these challenges head-on, we continue to maintain the high standards our users expect as we expand.

QUESTION: How do you see DePIN working in other industries like healthcare and energy, beyond just AI and cloud gaming?

ANSWER: DePIN has significant potential beyond AI and cloud gaming, with transformative applications in industries like healthcare and energy. 

In healthcare, decentralized infrastructure can improve data sharing and access to compute for tasks like medical imaging, genomics, and AI-driven diagnostics. 

By tapping into global, underutilized resources, healthcare providers can process large datasets more efficiently, enabling faster and more accurate patient care.

In the energy sector, DePIN could support smarter energy grids and renewable energy management. 

Decentralized networks can help balance supply and demand by efficiently distributing compute power to monitor energy use, optimize distribution, and manage storage solutions in real-time. This would not only reduce costs but also improve sustainability and grid resilience.

Ultimately, DePIN’s decentralized model offers flexibility, scalability, and cost-efficiency, making it an ideal solution for industries like healthcare and energy that require robust infrastructure to handle complex, data-intensive tasks.

QUESTION: What new innovations are you bringing in to make sure your network can handle the increasingly complex tasks needed by AI and machine learning applications?

ANSWER: To handle the increasingly complex tasks required by AI and machine learning applications, io.net is implementing several innovations to ensure our decentralized GPU network remains robust and scalable. 

One of the key upgrades we’ve made is our Proof of Work system, which includes enhanced hardware verification, VRAM checks, and stricter CPU benchmarks to maintain high performance across the network.
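
To make the hardware-verification idea concrete, here is a minimal sketch of that kind of admission check. The thresholds, field names, and `check_device` function are assumptions made for illustration, not io.net's actual Proof of Work checks.

```python
# Purely hypothetical sketch: verify reported VRAM and a CPU benchmark score
# against minimum thresholds before a node is admitted to the network.
MIN_VRAM_GB = 16          # assumed minimum GPU memory
MIN_CPU_SCORE = 1_000     # assumed minimum score from a CPU benchmark run

def check_device(reported: dict) -> tuple[bool, list[str]]:
    """Return (passed, reasons) for a node's self-reported hardware profile."""
    failures = []
    if reported.get("vram_gb", 0) < MIN_VRAM_GB:
        failures.append(f"VRAM {reported.get('vram_gb')} GB below {MIN_VRAM_GB} GB")
    if reported.get("cpu_benchmark", 0) < MIN_CPU_SCORE:
        failures.append(f"CPU benchmark {reported.get('cpu_benchmark')} below {MIN_CPU_SCORE}")
    return (not failures, failures)

ok, reasons = check_device({"vram_gb": 24, "cpu_benchmark": 850})
print(ok, reasons)   # False, because the benchmark score is under the threshold
```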

We’re also optimizing scalability. By pooling global compute resources into decentralized clusters, we can flexibly scale to meet demand, enabling dynamic expansion while reducing latency, especially for inferencing operations. 

This ensures that as AI tasks grow in complexity, our network can continue to deliver the necessary performance.

In addition, we’re introducing a tiering system that requires verification for enterprise providers, ensuring the highest quality compute resources are available.

This, combined with the implementation of staking and slashing mechanisms, helps maintain network integrity and reliability as we scale up to manage hundreds of thousands of GPUs.
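
As a generic picture of how staking and slashing keep participants honest, the snippet below tracks a provider's stake and burns a fixed fraction on a failed check. The class, names, and the 10% penalty are illustrative assumptions, not io.net's on-chain logic.

```python
# Generic staking-and-slashing illustration: a provider posts a stake, and a
# fixed fraction is slashed each time it fails a verification or SLA window.
SLASH_FRACTION = 0.10   # assumed penalty rate for this sketch

class ProviderStake:
    def __init__(self, provider_id: str, staked: float):
        self.provider_id = provider_id
        self.staked = staked

    def slash(self, reason: str) -> float:
        """Burn a fraction of the stake and return the amount removed."""
        penalty = self.staked * SLASH_FRACTION
        self.staked -= penalty
        print(f"{self.provider_id} slashed {penalty:.2f} for: {reason}")
        return penalty

stake = ProviderStake("gpu-farm-7", staked=5_000.0)
stake.slash("failed hardware re-verification")
print(f"remaining stake: {stake.staked:.2f}")
```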

Ultimately, our focus is on maintaining an enterprise-grade platform that matches or exceeds the standards of traditional cloud providers, offering superior performance, reliability, and cost-efficiency for AI and machine learning applications.

QUESTION: You’ve recently brought in some people from top tech companies to your executive team. What are your personal expectations from io.net’s current staff?

ANSWER: My personal expectations for our team revolve around three key areas: accountability, innovation, and collaboration.

First, accountability is non-negotiable. Every team member has clear KPIs, and I expect them to take ownership of their responsibilities and deliver on their goals. 

We’re building a culture where results matter, and each individual plays a direct role in the success of the company.

Second, I expect innovation. We’re operating in an industry that’s evolving rapidly, and it’s essential that our team stays agile, proactive, and creative in solving problems. 

I want everyone at io.net to push boundaries and think beyond traditional solutions, especially as we continue to disrupt the cloud and AI markets.

Finally, collaboration is key. With talent coming from diverse backgrounds—whether it’s from Web3, AI, or top cloud providers like AWS and GCP—our strength lies in how well we work together. 

I expect our staff to not only bring their expertise but also foster a collaborative environment where ideas are shared openly and efficiently.

Ultimately, my expectation is that every member of the io.net team will embody these values, driving us forward as we continue to grow and innovate in the decentralized compute space.

QUESTION: You’ve talked a lot about focusing on operational excellence and discipline. Can you give some real examples of how you’re putting these ideas into practice within io.net’s decentralized setup, where the usual top-down structure isn’t as strong?

ANSWER: Operational excellence and discipline are key to io.net’s success, even in our decentralized setup. 

While we don’t have a traditional top-down structure, we maintain accountability through clear KPIs for every team member. Regular check-ins ensure progress is tracked without needing rigid oversight.

Transparency is another cornerstone. Teams share consistent updates on progress and challenges, allowing us to quickly address issues. This keeps operations running smoothly across our global workforce.

We also use automation and monitoring tools to manage our network, ensuring seamless performance with minimal manual intervention. This approach helps us maintain discipline while scaling.

In short, we rely on accountability, transparency, and smart technology to maintain operational excellence in our decentralized environment.

QUESTION: Your mission to make GPU compute power accessible to everyone is central to io.net. How are you making sure this access doesn’t end up favoring certain areas over others, especially in less-developed regions?

ANSWER: Ensuring GPU compute power is accessible to all regions, including less-developed areas, is central to io.net’s mission. 

Our decentralized network taps into underutilized GPUs worldwide, allowing us to distribute resources more equitably across 138+ countries, not just in developed regions.

We’ve implemented a tiered system with KYC/KYB verification to ensure high-quality compute is accessible globally, while a fair staking mechanism allows broad participation and rewards, regardless of location.

By working with local partners, we further tailor access to meet the unique needs of different regions, ensuring no area is left behind as we expand.

Ultimately, our decentralized model ensures equitable access to GPU power, making advanced computing available to everyone, everywhere.

QUESTION: io.net has onboarded about 20,000 GPUs to help AI startups. What are the main metrics you use to measure how well these deployments are doing, and how does that influence your plans to expand?

ANSWER: To measure the success of our GPU deployments, we focus on several key metrics. These include compute hours delivered, utilization rates, and uptime. 

These metrics give us a clear view of how effectively the GPUs are being used to support AI startups and ensure that we’re meeting their needs in terms of performance and availability.
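
For readers unfamiliar with these metrics, here is a toy computation of compute hours delivered, utilization rate, and uptime from per-node usage records. The record format and the figures are invented for this sketch and say nothing about io.net's actual reporting pipeline.

```python
# Toy metric computation from hypothetical per-node records:
# (node_id, hours_available_in_period, hours_actually_rented, hours_online)
records = [
    ("node-a", 720.0, 540.0, 715.0),
    ("node-b", 720.0, 300.0, 700.0),
]

compute_hours = sum(rented for _, _, rented, _ in records)
total_available = sum(avail for _, avail, _, _ in records)
utilization = compute_hours / total_available
uptime = sum(online for _, _, _, online in records) / total_available

print(f"compute hours delivered: {compute_hours:.0f}")   # 840
print(f"utilization rate: {utilization:.1%}")             # 58.3%
print(f"uptime: {uptime:.1%}")                            # 98.3%
```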

We also track user feedback and customer retention rates, as these help gauge satisfaction and highlight areas where we can improve. 

This data plays a crucial role in shaping our expansion plans, ensuring that as we scale, we continue to deliver high-quality compute power efficiently.

By monitoring these metrics, we can fine-tune our network, optimize resource allocation, and decide when and where to onboard additional GPUs to meet growing demand.

QUESTION: The collaboration between io.net and Chainbase involves integrating an omnichain data network into your AI projects. What specific challenges have you faced in making different blockchain networks work together seamlessly?

ANSWER: Integrating an omnichain data network like Chainbase into our AI projects comes with specific challenges, especially around ensuring seamless communication between different blockchain networks. 

One of the biggest hurdles is interoperability—making sure various chains can share data without delays or inconsistencies.

Another challenge is managing the complexity of cross-chain transactions, which can introduce latency or increase the risk of errors. 

To address this, we’ve developed robust validation protocols and optimized our architecture to ensure data flows smoothly across chains without compromising speed or security.

Ultimately, overcoming these challenges requires constant innovation, and we’re committed to refining our systems to enable efficient cross-chain collaboration as we scale our AI projects.

QUESTION: How do you make sure compute power remains reliable and consistent across such a varied and sometimes unstable supplier base, especially when dealing with mission-critical applications?

ANSWER: Ensuring reliable and consistent compute power across a varied and sometimes unstable supplier base is a top priority, especially for mission-critical applications. 

We address this by implementing advanced monitoring and validation systems that continuously track the performance and health of every node in our network. This helps us quickly identify and address underperforming or unstable suppliers.

We’ve also built redundancy into our network, ensuring that workloads can be dynamically shifted to other nodes if any issues arise, preventing disruptions in service. 
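
The failover behavior he describes can be sketched as a simple reassignment pass: when a node's health score drops below a threshold, its workloads move to the healthiest spare. The node states, the threshold, and `reassign_workloads` are all assumptions for illustration, not io.net's scheduler.

```python
# Hypothetical failover sketch: move workloads off unhealthy nodes.
HEALTH_THRESHOLD = 0.8   # assumed minimum health score to keep a workload in place

def reassign_workloads(assignments: dict[str, str], health: dict[str, float]) -> dict[str, str]:
    """Map each workload to a healthy node, moving it off any node whose
    health score has dropped below the threshold."""
    healthy = [n for n, h in health.items() if h >= HEALTH_THRESHOLD]
    new_assignments = {}
    for workload, node in assignments.items():
        if health.get(node, 0.0) >= HEALTH_THRESHOLD:
            new_assignments[workload] = node            # keep it where it is
        elif healthy:
            # move to the healthiest available node
            new_assignments[workload] = max(healthy, key=lambda n: health[n])
        else:
            new_assignments[workload] = node            # nowhere better to go
    return new_assignments

health = {"node-a": 0.55, "node-b": 0.97, "node-c": 0.91}
print(reassign_workloads({"inference-1": "node-a", "train-2": "node-c"}, health))
# {'inference-1': 'node-b', 'train-2': 'node-c'}
```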

Additionally, we’ve introduced a tiered system that requires verification for high-quality suppliers, prioritizing those with a proven track record of reliability.

For mission-critical applications, we implement stricter SLAs (Service Level Agreements) to ensure top-tier performance and reliability. 

This allows us to maintain high standards, regardless of the variability in our supplier base, while continuing to scale.

INTERVIEWER: Alright, that’s our time. Thanks for doing this, Tory.

TORY: Thanks.