🗿 Need a 3D asset for your project? Now you can generate it locally for free.

✅ Pros:
• Generates decent-quality 3D assets at 512 resolution in ~22 s
• Text or sketch input
• Near real-time: 1-second 3D generation at 128 resolution
• Runs locally (no internet required)
• Free and open source under the MIT license, allowing commercial, personal, and research use

🚫 Cons:
• Geometry can get pretty gnarly (let's see, 2 more papers down the line)
• Textured meshes are vertex-colored, which is more coarse-grained

⚙️ Installation:
📦 TripoSR 3D model: https://lnkd.in/eKBDAtVW
📦 GUI: https://lnkd.in/efwbUmBF

🎵 Audio by Simon Folwar https://lnkd.in/etuYgYRq
🎞️ Editing in DaVinci Resolve

#artificialintelligence #foss #opensource #productdesign #ai #3d
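For anyone who wants to try the local setup, here is a hypothetical command sequence. The post's shortened links hide the actual repositories, so the GitHub path, the `run.py` entry point, and its flags below are assumptions based on the publicly known TripoSR repository, not taken from the post itself.

```shell
# Assumed repo location; the post's lnkd.in links may point elsewhere.
git clone https://github.com/VAST-AI-Research/TripoSR
cd TripoSR
pip install -r requirements.txt

# Single image in, 3D mesh out (entry point and flags assumed from the repo README).
python run.py examples/chair.png --output-dir output/
```

On a recent GPU this is where the ~22 s figure from the post would apply; CPU-only runs will be considerably slower.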
Mike Sokol’s Post
More Relevant Posts
-
Data Enthusiast | Data Analyst | Data Science | ML/DL/AI | Analytics | Visualization | ETL | UI/UX | NFT | Power Apps | IT | Content Writer | Jobs/Recruitment | Quoran | Follow for more
🚀 Transforming the realm of image editing, "Diffusion Handles" introduces a groundbreaking technique to manipulate 3D objects in diffusion images seamlessly. This innovative method leverages pre-trained diffusion models and 2D depth estimation, bypassing the need for fine-tuning or 3D object retrieval. The result? Stunningly realistic edits that maintain the object's identity with impressive control over complex 3D occlusions and lighting. 🌟 Takeaway: "Diffusion Handles" is set to revolutionize creative design, offering a new dimension of generative image editing that's both user-friendly and technologically advanced. #AI #MachineLearning #ComputerVision #GenerativeDesign #Innovation #TechNews #DiffusionModels #3DEditing #ArtificialIntelligence #DeepLearning #CreativeAI
-
txt-(to-multi-images-)to-3D via Gaussian Splats reconstruction by Google. Not a new approach but a new SoTA for sure. #genAI #txtto3d #GaussianSplatting
Host, TED AI Show | Ex-Google PM (3D Maps, ARCore, 360/VR) | Scout, A16z | VFX creator with 1.4M+ subs & 445M+ views | ੴ
Create a 3D model from a single image, set of images or a text prompt in < 1 minute 😮💨 This new AI paper called CAT3D shows us that it’ll keep getting easier to produce 3D models from 2D images — whether it’s a sparser real world 3D scan (a few photos instead of hundreds) or your favorite 2D image generator like Midjourney (just an image). How does this magic work? “This architecture is similar to video diffusion models, but with camera pose embeddings for each image instead of time embeddings. The generated views are passed into a robust 3D reconstruction pipeline to create the 3D representation (Zip-NeRF or 3DGS)” The Google crew strike again! Looks better than ReconFusion too. Hope there’s a code release. 🔗 Paper/project (with interactive 3D samples you can play with): https://cat3d.github.io/ #ai #3d #cgi #ml #vfx
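The "camera poses embedded as ray coordinates" idea in the quote above can be sketched concretely: instead of a time embedding per frame, each pixel gets a 6-D ray code (unit direction plus Plücker moment) derived from the camera pose. This is an illustrative NumPy reconstruction of that idea, not CAT3D's actual code; the function name and exact parameterization are assumptions.

```python
import numpy as np

def ray_embedding(K, c2w, H, W):
    """Per-pixel camera-pose embedding as ray coordinates.

    K   : (3, 3) camera intrinsics
    c2w : (4, 4) camera-to-world pose
    Returns an (H, W, 6) array: unit ray direction + Pluecker moment.
    """
    grid = np.mgrid[0:H, 0:W].astype(np.float64)          # (2, H, W)
    ys, xs = grid
    # Homogeneous pixel centers.
    pix = np.stack([xs + 0.5, ys + 0.5, np.ones_like(xs)], axis=-1)
    dirs = pix @ np.linalg.inv(K).T                        # rays in camera frame
    dirs = dirs @ c2w[:3, :3].T                            # rotate into world frame
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)   # unit directions
    origin = np.broadcast_to(c2w[:3, 3], dirs.shape)       # camera center, per pixel
    moment = np.cross(origin, dirs)                        # Pluecker moment o x d
    return np.concatenate([dirs, moment], axis=-1)         # (H, W, 6)
```

Feeding these per-image codes to the diffusion backbone in place of time embeddings is what lets the model generate views at arbitrary target cameras.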
-
Also worth a thought in hindsight: thinking laterally, could room-acoustic parameters be encoded within 3D space the same way?
-
Major breakthrough in 3D scene reconstruction from Google. They trained a model to reconstruct a NeRF from a single image. Look at the quality, it’s amazing! “CAT3D uses a multi-view latent diffusion model to generate novel views of the scene. This model can be conditioned on any number of observed views (input images with corresponding camera poses embedded as ray coordinates), and is trained to produce multiple consistent novel images of the scene at specified target viewpoints. This architecture is similar to video diffusion models, but with camera pose embeddings for each image instead of time embeddings. The generated views are passed into a robust 3D reconstruction pipeline to create the 3D representation (Zip-NeRF or 3DGS).”
-
Director Graphics & AI Evangelism | AI PC Advocacy | Director Community Engagement Programs | Developer Relations | Creator Enthusiast | Brand Advocacy | 3D Rendering SME | Product UX Management
So many possibilities with this combination of image diffusion and NeRF. An entire immersive 3D world could be created from a single 2D image😲
-
I'm a part of the Games Studio Unity Technologies, where we help Unity Games Publishers optimize their games for better performance, port to new platforms, and support platform leaders like Google.
Dang. Camera pose embeddings and NeRF tech make it possible to gen-AI an interactive scene.
-
Are you upscaling your renders with AI?

We're about to launch our Hand Elements 3D model pack over on Visune. Not only is this the first time we've ventured into human anatomy, it's also the first time we're publicly recommending AI upscaling to inject realism into the shots.

This comparison demonstrates the power of AI upscaling, in this example using the amazing Magnific AI. Here's the process:
- Render at high resolution from KeyShot
- Downscale the image to 720p (this gives Magnific lots of headroom to work its magic)
- Upload to Magnific and upscale 4x
- Composite the original render and the Magnific output in Photoshop

Our 3D models give you a huge amount of control over the composition and provide the upscaler with lighting and tone references to build on. From importing the hand model to exporting the edit, this was a 20-minute process. And while not perfect, it's about as efficient as you can get for bringing life into your visuals.

#keyshot #render #productdesign #industrialdesign #design #3d #ai
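The resolution arithmetic behind the "headroom" trick above is easy to make explicit: downscale to a 720-pixel-tall working image while preserving aspect ratio, then let the upscaler multiply both dimensions by 4. A minimal sketch (the function name is illustrative, not any tool's API):

```python
def plan_upscale(src_w, src_h, work_h=720, factor=4):
    """Plan the downscale-then-AI-upscale workflow.

    Returns the working (downscaled) resolution and the final
    resolution after the upscaler multiplies both axes by `factor`.
    """
    # Downscale to the working height, preserving aspect ratio.
    work_w = round(src_w * work_h / src_h)
    return (work_w, work_h), (work_w * factor, work_h * factor)
```

For a 4K KeyShot render this gives a 1280x720 working image and a 5120x2880 result, comfortably above the original render for compositing back in Photoshop.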
-
How beautiful can Architecture be?
Fantastic work by our student shamus.crowe.architecture, developed during the “Parametric Intelligence” workshop led by Tim Fu. Participants learn to fully develop their design from 2D to 3D using a holistic AI workflow: AI concept design with Midjourney, parametric design with Rhino/Grasshopper, and LookX for detail development and rendering, plus how to sell a design, post-processing techniques, productivity strategies, essential resources, and an overview of other applications to prepare for the coming era of AI. Tap the 🔗 link for more information: https://lnkd.in/eR95jamA #artificialintelligence #midjourney #parametricdesign #computationaldesign #architecturestudents #formgeneration #photoshop
-
Innovation. Glass is considered one of the main contributors to Global Warming Potential (GWP), with a high carbon footprint. This concept is an eye-opener: it lets us think that glass can be used to reduce operational energy by converting it into solar panels.
Senior Product Designer • Search Personalization
This is WILD! Amazing work.