https://github.com/BitMind-AI/bitmind-subnet
The V2 release of Subnet 34 marks a major leap forward in decentralized deepfake detection. At its core is our most anticipated feature since our August mainnet debut: AI-generated video detection. This milestone release also incorporates weeks of careful architectural refinements.
This article serves as a comprehensive overview of v2.0.0, from its high-level implications down to the nuts and bolts of its refactored codebase.
Let’s dive in!
The integration of AI-generated video detection, the centerpiece of our V2 release, represents a crucial expansion of our subnet's detection capabilities. In SN34 versions 1.x, all miner-hosted models operated on single images, with no ability to capture the temporal patterns present in sequences of video frames. V2 closes this gap by providing infrastructure for video data transmission and allowing miners to deploy models that take temporal features into account during inference.
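To make the idea concrete, here is a minimal, illustrative sketch of what a temporally aware detector might look like. This is not the subnet's actual model; every class, layer, and parameter name below is hypothetical, standing in for whatever architecture a miner chooses to deploy. The point is the shape of the problem: a per-frame backbone extracts features that a recurrent layer then aggregates across the frame sequence.

```python
import torch
import torch.nn as nn

class TemporalDeepfakeDetector(nn.Module):
    """Hypothetical sketch: per-frame CNN features pooled over time by an LSTM."""

    def __init__(self, feature_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        # Stand-in per-frame feature extractor (any CNN backbone could go here)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, feature_dim),
        )
        # Temporal module: learns patterns across the ordered frame sequence
        self.temporal = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # real vs. AI-generated logit

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.backbone(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (hidden, _) = self.temporal(feats)
        return self.head(hidden[-1])  # one score per video clip

detector = TemporalDeepfakeDetector()
clip = torch.randn(1, 16, 3, 224, 224)  # a clip of 16 sampled frames
prob_fake = torch.sigmoid(detector(clip))
```

The key difference from a v1.x image model is the time dimension: the classifier sees an ordered stack of frames rather than a single image, so signals like flickering, warping, and inconsistent motion become learnable features.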
As a precursor to true video detection in BitMind's applications, API calls to v1.x of SN34 would send only the thumbnail frame of a video to subnet miners for classification. While this stopgap anecdotally worked better than expected, V2 is positioned to dramatically improve our performance on synthetic video detection.
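For contrast, the v1.x stopgap amounted to something like the following: reduce the video to a single representative frame and classify it as an ordinary image. This is a hedged reconstruction using OpenCV, not the API's actual code, which this article doesn't show.

```python
import cv2

def thumbnail_frame(video_path: str):
    """Illustrative v1.x-style stopgap: reduce a video to its first frame."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()  # the "thumbnail" frame that was sent to miners
    cap.release()
    if not ok:
        raise ValueError(f"could not decode a frame from {video_path}")
    return frame  # classified like any other still image in v1.x
```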
BitMind’s Deepfake Detection Applications
While our miners ramp up, competing to deploy the most accurate video detection algorithms, the BitMind team is working to bring video submission to our subnet's collection of derivative applications. These two paths will soon converge to surface highly accurate, free, decentralized detection of AI-generated videos to consumers.
TL;DR, the highlights of V2 include:
The next two sections are organized as follows: