
Code2Video introduces a revolutionary framework for generating professional educational videos directly from executable Python code. Unlike pixel-based diffusion or text-to-video models, Code2Video treats code as the core generative medium, enabling …
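
The "code as the generative medium" idea is easiest to picture with a small, hedged sketch: a program (here written by hand, but in such a framework it would be emitted by the system) fully specifies a lesson clip, and rendering that code produces the video. Manim Community Edition is used purely as an illustrative stand-in for a code-driven renderer; the scene name and contents below are made up.

```python
# Illustrative sketch only: a tiny lesson clip defined entirely in Python code.
# Manim (Community Edition) is a stand-in for the kind of code-driven renderer
# such a framework targets; the scene contents are hypothetical.
from manim import DOWN, FadeIn, Scene, Text, Write


class BinarySearchIntro(Scene):
    def construct(self):
        # Title card for the lesson.
        title = Text("Binary Search", font_size=56)
        self.play(Write(title))
        self.wait(1)

        # A one-line takeaway placed below the title.
        claim = Text("Runs in O(log n) comparisons", font_size=36)
        claim.next_to(title, DOWN, buff=0.6)
        self.play(FadeIn(claim))
        self.wait(2)
```

Rendering the file with `manim -ql lesson.py BinarySearchIntro` (hypothetical file name) produces the clip; the point is that every visual element is specified, inspected, and edited through code rather than pixels.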

CAP4D introduces a unified framework for generating photorealistic, animatable 4D portrait avatars from any number of reference images, even a single one. By combining …

Test3R is a novel and simple test-time learning technique that significantly improves 3D reconstruction quality. Unlike traditional pairwise methods such as DUSt3R, which often suffer from geometric inconsistencies and poor generalization, …
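
Test-time learning of this kind can be sketched generically: for each new scene, some parameters are optimized on a self-supervised consistency objective computed from the test images themselves, before the final reconstruction is produced. The network, loss, and pairing scheme below are toy placeholders for illustration only, not Test3R's actual architecture or objective.

```python
# Generic test-time adaptation sketch (toy model and loss; not Test3R's method).
import torch
import torch.nn as nn


class TinyPairNet(nn.Module):
    """Placeholder: takes a stacked image pair (6 channels) and predicts a
    per-pixel 3D pointmap for the first image of the pair."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, pair):
        return self.net(pair)


def test_time_adapt(model, test_images, steps=20, lr=1e-4):
    """Adapt on the test scene by enforcing that predictions for the same view
    stay consistent when it is paired with different partner views."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    i1, i2, i3 = test_images  # three views of one scene, each (1, 3, H, W)
    for _ in range(steps):
        opt.zero_grad()
        p12 = model(torch.cat([i1, i2], dim=1))  # view 1 paired with view 2
        p13 = model(torch.cat([i1, i3], dim=1))  # view 1 paired with view 3
        loss = nn.functional.mse_loss(p12, p13)  # cross-pair consistency on view 1
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    imgs = [torch.rand(1, 3, 64, 64) for _ in range(3)]
    adapted = test_time_adapt(TinyPairNet(), imgs)
    print(adapted(torch.cat(imgs[:2], dim=1)).shape)  # torch.Size([1, 3, 64, 64])
```

The key property is that no ground-truth geometry is needed at test time: the supervision signal comes entirely from agreement between overlapping predictions of the same scene.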

BlenderFusion is a novel framework that merges 3D graphics editing with diffusion models to enable precise, 3D-aware visual compositing. Unlike prior approaches that struggle with multi-object and camera disentanglement, BlenderFusion …

Ever wondered how those slick background removal tools actually work? You upload a photo, click a button, and boom, the subject pops while the clutter disappears. But behind that magic …
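
In practice, that one-click experience usually wraps a pretrained segmentation or matting model. As a hedged illustration (the tool discussed in the article may use a different model or pipeline), the open-source rembg package, which runs a U2-Net-style model under the hood, reproduces the effect in a few lines; the file names are placeholders.

```python
# Minimal background removal with the open-source rembg package (pip install rembg).
# Illustrative stand-in only; file paths below are placeholders.
from PIL import Image
from rembg import remove

input_image = Image.open("portrait.jpg")   # photo with a cluttered background
output_image = remove(input_image)         # RGBA image with the background
                                           # made transparent by the matting model
output_image.save("portrait_cutout.png")   # PNG keeps the alpha channel
```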

The Google DeepMind team has unveiled the latest evolution of its family of open models, Gemma 3, and it's a monumental leap forward. While the AI space is crowded …

LongSplat is a new framework that achieves high-quality novel view synthesis from casually captured long videos, without requiring camera poses. It overcomes challenges like irregular motion, pose drift, and memory …

DINOv3 is a next-generation vision foundation model trained purely with self-supervised learning. It introduces innovations that allow robust dense feature learning at scale, with models reaching 7B parameters, and achieves …
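
The practical draw of such a backbone is frozen dense features: per-patch embeddings you can feed to segmentation, depth, or retrieval heads without fine-tuning. As a rough sketch, here is how dense patch tokens are pulled from the earlier DINOv2 checkpoints via torch.hub, used here as a stand-in since DINOv3 weights and hub entry points may be packaged differently.

```python
# Hedged sketch: extracting dense (per-patch) features from a frozen DINO backbone.
# DINOv2 hub weights are used as a stand-in; DINOv3 checkpoints may differ.
import torch

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

image = torch.rand(1, 3, 224, 224)          # placeholder for a normalized RGB batch
with torch.no_grad():
    feats = model.forward_features(image)

patch_tokens = feats["x_norm_patchtokens"]  # (1, 256, 384) for ViT-S/14 at 224 px
cls_token = feats["x_norm_clstoken"]        # (1, 384) global image embedding
print(patch_tokens.shape, cls_token.shape)
```

The patch tokens are what downstream dense tasks consume; the class token serves as a single global descriptor for retrieval or classification.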

Genie 3 is a general-purpose world model which, given just a text prompt, generates dynamic, interactive environments in real time, rendered at 720p and 24 fps, while maintaining consistency over …

In the complex world of modern medicine, two forms of data reign supreme: the visual and the textual. On one side, a deluge of images: X-rays, MRIs, and pathology slides. …

In the fast-paced world of artificial intelligence, a new model is making waves for its innovative approach and impressive performance: MOLMO (Multimodal Open Language Model), developed by the Allen Institute for AI …

PartCrafter is the first unified 3D generative model that jointly synthesizes multiple semantically meaningful and geometrically distinct 3D parts from a single RGB image, without any segmentation required. Powered by …