Trissy Details

Times Rugged: 0
% Rugged: 0%
Times Pumped: 0
% Pumped: 0%

Why does Robotics need a token? The short (traders) answer: speculation. Take ai16z for example: it never had token utility, yet it became the most used product because of the depth and plugins of its Eliza toolkit. I believe token utility isn't as important as building infra which is actually usable by outside devs. Eliza was the most used GitHub repo at one point and had devs from all industries coming to test it. This should be the goal for anyone building in both AI and robotics. Speculation drove mania -> volatility on the belief in future utility, and demonstrated how impactful that belief is as a fundamental. Profitable traders understand the asymmetric value of this.

That said, we know token utility is beneficial in the right circumstances. Virtuals took a more crypto native flywheel approach to the launchpad framework, attaching all forms of commerce and distribution to their token. Virtuals and ai16z topped at $3 bil and $2.6 bil mcaps respectively. One housed the strongest flywheel we’ve seen since DeFi szn; the other, a global framework which proved far more successful for developer usage, to the point that the majority of teams building on Virtuals tooling eventually had to move over because they were so restricted.

It is interesting watching $VIRTUAL lead the way for robotics atm, taking a standardized approach to incentivising data provisioning + funding new start ups. They introduced Unicorn, their new launchpad model, replacing Virtuals' older points system with direct token stakes and rewards. They’ve gone back to a more traditional launchpad route where each new Unicorn startup (a robotics project on Virtuals) starts at a low valuation and acts more like a bonding curve. The founding team’s funding is vested and only unlocked as the project grows, forcing builders to deliver results. They also launched SeeSaw, crowdsourcing rich spatial datasets (humans recording first person videos of tasks so robots can learn from real world experience), packaged as a fun mobile app that gathers human interaction videos to train AI and robot agents. This “middle way” focuses on cloud data infrastructure and funding.

There’s no question that high quality real world data is crucial for embodied AI. Especially in the foundational phase, robotics benefits from large volumes of varied environmental input; data like this is the fuel early models need to learn and generalize. Virtuals' approach helps bootstrap this layer effectively and has its place in setting the floor for capabilities. But over time, this value plateaus. As more data protocols emerge, the volume of available real world data increases, while the number of end users who can meaningfully absorb and use this data doesn’t scale linearly. That means the returns become more concentrated, mostly benefiting teams building large foundational models. These models will still matter and be profitable, but the edge starts to shift elsewhere.

What starts to matter more is giving users the ability to collect and use their own custom data. Custom data pipelines are where I see more value accruing: tools that allow a store owner, a warehouse team, or a household to quickly gather data and fine tune robots to their specific environments. That kind of data won’t be bundled in any dataset marketplace. As we’ve seen with LLMs, most users don’t care about the training rituals behind GPT; they care about how to feed it their own docs. The long term opportunity is in making that collection and integration loop simple.
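To make that last point concrete, here is a minimal Python sketch of what a "collect your own demos, then fine tune" loop could look like for a single site (a store, a warehouse, a home). Every name in it is made up for illustration; nothing here is any real product's API, and the fine-tuning step is a placeholder.

```python
# Hypothetical sketch of a site-specific "record demos, then fine tune" loop.
# All names (Step, Demonstration, fine_tune, "open-vla-base") are illustrative.
from dataclasses import dataclass, field, asdict
import json
import pathlib
import time

@dataclass
class Step:
    observation: dict   # e.g. camera frame reference, joint states
    action: dict        # e.g. target joint positions, gripper command

@dataclass
class Demonstration:
    task: str           # "restock shelf 3", "fold towel", ...
    site: str           # the specific environment this was recorded in
    steps: list[Step] = field(default_factory=list)
    recorded_at: float = field(default_factory=time.time)

def save_demo(demo: Demonstration, out_dir: pathlib.Path) -> pathlib.Path:
    """Append one demonstration to the site's local dataset as JSON."""
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{demo.task.replace(' ', '_')}_{int(demo.recorded_at)}.json"
    path.write_text(json.dumps(asdict(demo)))
    return path

def fine_tune(base_model: str, dataset_dir: pathlib.Path) -> str:
    """Placeholder for the actual fine-tuning call (e.g. behavior cloning on
    the recorded steps). Here it only counts the data it would train on."""
    demos = list(dataset_dir.glob("*.json"))
    print(f"fine-tuning {base_model} on {len(demos)} demos from {dataset_dir}")
    return f"{base_model}-{dataset_dir.name}"

if __name__ == "__main__":
    demo = Demonstration(task="restock shelf 3", site="corner-store-01")
    demo.steps.append(Step(observation={"frame": "cam0/000.jpg"},
                           action={"gripper": "close"}))
    data_dir = pathlib.Path("datasets/corner-store-01")
    save_demo(demo, data_dir)
    fine_tune("open-vla-base", data_dir)  # hypothetical base model name
```

The specifics don't matter; the point is that the owner of the environment records the data and the integration step stays roughly this simple.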
While Virtuals is going for data (fuel for AI models) and a marketplace to fund and share in robot ventures, I believe the biggest impact will come from those who abstract away the most complexity from robotics development. Hardware, software and data need a unified toolkit which gives individual devs a chance to experiment without needing to build a custom framework, which is what sparked AI szn this time last year. Data is important, and real world data is significantly more important for robotics than for AI, especially in the early innings to set the foundations. But I don’t believe this is where the biggest value layer accrues in the long term. What we need is better abstraction of tooling, giving developers faster iteration loops going from A -> B. Data is only one of the inputs in a very large hardware and software stack.

Robotics is far too deep a sector to throw a crypto incentive layer over; it needs to be looked at holistically: Data -> Perception -> Planning & Reasoning -> Control & Actuation -> Feedback Integration. Due to this depth, no single crypto company will build a monolithic stack covering each of these areas (a full stack humanoid, for example); if they could, they would have raised 8/9 figs in web2 and wouldn’t bother with crypto.

The most impactful token utility will come from supporting tooling that gives devs incentives to grow an open source library of new plugins/attachments with flexibility. Something which rewards devs for contributing mapping software for specific motors, sensors, cameras etc, alongside leading foundational models, that then plugs into any robot. On top of this, whoever builds the most successful task marketplace will be akin to unlocking custom games on Roblox or Fortnite. Humanoids are still like toddlers: they need to be taught tasks which improve their feedback to environmental scenarios, slowly turning them into functioning adults. This won’t be possible without global coordination, as there aren't large amounts of quality real world data yet, and more importantly, tooling which can help abstract this entire iteration flow.

Which is why I’m so bullish on $CODEC: it’s essentially creating a new robotics middleware from scratch, whereas Virtuals leverages existing AI models and focuses on aggregating resources around them. Codec’s architecture might enable faster iteration on actual robot tasks (since it provides a framework to quickly deploy and share new behaviors), whereas Virtuals' architecture aims to accelerate the inputs and support for those tasks (data + funding). The core idea is to replace fragile, hard coded automation scripts with adaptive AI “Operators”, which is very aligned with leading VLA architecture from companies like DeepMind. Finding a way to attach token utility (incentives for mapping and abstraction of iteration loops) is where we’ll see the biggest impact. The majority of robotic foundation models are already going open source, and this isn’t a decentralized crypto pipe dream psyop we try to spin onto other narratives. The hardest and most important part is acquiring real users/devs; then you add the flywheel on top to supercharge the ecosystem. Imagine if ai16z had Virtuals' flywheel.
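For illustration only, here is a rough Python sketch of that Data -> Perception -> Planning & Reasoning -> Control & Actuation -> Feedback Integration loop with the device mappings treated as swappable plugins. This is not Codec's or Virtuals' actual architecture; every class, registry and function name below is hypothetical.

```python
# Illustrative only: a minimal robot loop where sensor/motor mappings are
# community-contributed plugins. No real middleware's API is being described.
from typing import Callable, Protocol

class Sensor(Protocol):
    def read(self) -> dict: ...

class Actuator(Protocol):
    def apply(self, command: dict) -> dict: ...  # returns feedback

# The "open source library of plugins": devs register mappings for specific
# cameras, motors, grippers, etc., and any robot loop can pull them in.
SENSOR_PLUGINS: dict[str, Callable[[], Sensor]] = {}
ACTUATOR_PLUGINS: dict[str, Callable[[], Actuator]] = {}

def register_sensor(name: str):
    def wrap(factory: Callable[[], Sensor]):
        SENSOR_PLUGINS[name] = factory
        return factory
    return wrap

def register_actuator(name: str):
    def wrap(factory: Callable[[], Actuator]):
        ACTUATOR_PLUGINS[name] = factory
        return factory
    return wrap

@register_sensor("dummy_camera")
class DummyCamera:
    def read(self) -> dict:
        return {"image": "frame_bytes", "timestamp": 0.0}

@register_actuator("dummy_arm")
class DummyArm:
    def apply(self, command: dict) -> dict:
        return {"reached": True, "command": command}

def perception(raw: dict) -> dict:
    return {"objects": ["shelf", "box"], "source": raw["timestamp"]}

def plan(task: str, scene: dict) -> dict:
    # A VLA-style "Operator" would sit here, mapping (task, scene) -> action.
    return {"move_to": scene["objects"][0], "task": task}

def control_loop(task: str, sensor_name: str, actuator_name: str, steps: int = 3):
    sensor = SENSOR_PLUGINS[sensor_name]()
    actuator = ACTUATOR_PLUGINS[actuator_name]()
    for _ in range(steps):
        raw = sensor.read()                # Data
        scene = perception(raw)            # Perception
        action = plan(task, scene)         # Planning & Reasoning
        feedback = actuator.apply(action)  # Control & Actuation
        print("feedback:", feedback)       # Feedback Integration

if __name__ == "__main__":
    control_loop("restock shelf 3", "dummy_camera", "dummy_arm")
```

A registry like this is roughly the shape that "rewarding devs for contributing mapping software for specific motors, sensors, cameras" would plug into: each contribution is just another entry any robot loop can load.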

Tweet Date: 2025-11-05 19:33:42 (UTC+0)
Tweet Price: $1.42853
Tweet + 1h Price: $1.39480