1. The Need for Targeted, Specific Data
Jordan highlighted that businesses often struggle to apply AI models effectively because the data they rely on is too generalized. He explained that while large language models can generate seemingly relevant outputs, they lack the specificity required for real-world business applications. As a result, organizations need to focus on acquiring targeted data that meets their unique needs, rather than relying on large datasets from platforms like Google or Reddit.
2. Building a Diverse Community of Contributors
Jordan mentioned the importance of having a diverse set of contributors, with over 700,000 data builders from different socio-economic backgrounds worldwide. This diversity not only enriches the data collected for AI training but also ensures that the AI models reflect a broader range of human experiences. He emphasized that diversity leads to better-quality data and ultimately more reliable AI models.
3. Quality Control Through Incentive Mechanisms
Jordan introduced a staking mechanism combined with voting to maintain data quality. Contributors must stake tokens on the soundness of their data contributions, which encourages accountability and quality control. Those who consistently provide quality contributions are paid more and face lower staking requirements for future participation, while poor-quality contributions are punished by slashing part of the stake.
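The stake-and-slash dynamic described above can be sketched in a few lines. This is a simplified illustration, not the actual implementation: the base stake, reward amount, slash fraction, and the 10% stake adjustments are all assumed parameters, and "quality" is decided here by a simple majority of votes.

```python
class Contributor:
    """Toy model of a staked data contributor (illustrative parameters only)."""

    def __init__(self, base_stake: float = 100.0):
        self.stake_required = base_stake  # tokens locked per contribution
        self.balance = 0.0
        self.accepted = 0
        self.rejected = 0

    def submit(self, votes_accept: int, votes_reject: int,
               reward: float = 10.0, slash_fraction: float = 0.5):
        """Resolve one staked contribution by majority vote."""
        if votes_accept > votes_reject:
            self.accepted += 1
            self.balance += reward
            # Consistent quality lowers the stake needed next time.
            self.stake_required = max(10.0, self.stake_required * 0.9)
        else:
            self.rejected += 1
            # Poor contributions are punished by slashing part of the stake.
            self.balance -= self.stake_required * slash_fraction
            self.stake_required *= 1.1
```

The key property is the feedback loop: good contributors see their cost of participation fall while their payouts accumulate, and bad contributors see the opposite, which is what makes the mechanism self-policing.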
4. The Role of Reputation in Governance
Jordan detailed how reputation plays a crucial role in data validation within their system. As users build credibility through consistent quality contributions, they advance through the tiers of the voting system, from 'scout' to 'guard' and eventually to 'judge.' This tiered system allows data to be verified more efficiently as users gain trust, ultimately enabling a more responsive and effective governance model for data quality.
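The tier progression can be sketched as a reputation score mapped to thresholds. The scout/guard/judge names come from the talk; the numeric thresholds, the reward and penalty sizes, and the tier descriptions in the comments are assumptions for illustration.

```python
# Hypothetical thresholds: reputation needed to reach each tier.
TIERS = [
    (0, "scout"),    # new validators: verdicts carry the least weight
    (50, "guard"),   # proven contributors: verdicts carry more weight
    (200, "judge"),  # most trusted: can help finalize disputed data
]

def tier_for(reputation: int) -> str:
    """Return the highest tier whose threshold the reputation meets."""
    current = TIERS[0][1]
    for threshold, name in TIERS:
        if reputation >= threshold:
            current = name
    return current

def update_reputation(reputation: int, vote_correct: bool) -> int:
    """Reward votes that match the final outcome; penalize the rest."""
    return reputation + 5 if vote_correct else max(0, reputation - 10)
```

Making promotion a pure function of accumulated reputation means trust is earned gradually and can be lost, which is what lets the system delegate more verification authority without a central gatekeeper.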
5. Addressing the Challenges of Participant Engagement
Jordan addressed a common concern about user participation in governance systems: the staking mechanism inevitably acts as a barrier to entry. In practice, however, a dedicated fraction of the data-builder community actively engages in the voting process, creating a self-selecting group focused on maintaining quality and contributing meaningful insights.
6. Iterating on User Feedback for System Improvement
Jordan revealed that their initial implementations used a points system to test and refine the platform based on user behavior. This approach let the team identify in real time which components needed adjustment. By iterating on feedback and ensuring that incentives align with desired behaviors, they aim to continually improve the user experience.