Nuke 13 machine learning
1/23/2024

It's no secret that machine learning (ML) has been on the rise in the visual effects (VFX) industry for the past few years. With its increased popularity comes an opportunity for a more streamlined and efficient way of working. This is especially important in the changing climate, with schedules getting tighter and projects becoming more complex. Artists are met with fresh challenges and need new ways of working and tools that can keep pace. ML is a saving grace for many and is becoming a crucial part of VFX pipelines everywhere.

With this in mind, Foundry, as part of the recent Nuke 13.0 release, has integrated a new suite of machine learning tools including CopyCat, a plug-in that allows artists to train neural networks to create custom effects for their own image-based tasks.

CopyCat is designed to save artists huge amounts of time. If an artist has a complex or time-consuming effect, such as creating a garbage matte that needs to be applied across a sequence, they can feed the plug-in just a few example frames. CopyCat will then train a neural network to replicate the transformation from before to after; this network can be used to apply the effect to the rest of the sequence.

We take a look at this exciting new toolset and chat with Ben Kent, Foundry Research Engineering Manager, A.I.

In the beginning, the initial ML Server created by Foundry's research team was meant to allow customers to easily experiment with ML inside of Nuke, as well as to provide an internal tool for quick network prototyping. Little did the team know it would inspire a whole new way of looking at machine learning in visual effects.

Conventional wisdom suggested that the ML Server would need a large database of images to learn from, roughly tens of thousands, in order for it to be effective. From this dataset of before and after images, the ML Server would then train a neural network to do whatever task the user asked of it, like deblurring an image. It wasn't until Ben started to test the ML Server that his mindset quickly changed.

Whilst figuring out how to start a training run, Ben experimented on an out-of-focus shot from his own film, using just eleven small crops in an attempt to bring it back into focus. With such a tiny dataset he had low expectations of the outcome; to his surprise, it worked.

"That's when it kind of occurred to me that visual effects isn't like general machine learning," Ben tells us. "With normal machine learning, you have to use these broad data sets because you never know what footage the user's going to want to use it on. The user expects instant results, possibly running on low-performance hardware like a phone, and there's no guarantee they're very skilled or tech-savvy. It's actually completely the opposite in visual effects."

While the underlying technology remains the same, in VFX we can leverage the skills and knowledge of the artist to help train the network, rather than relying on generic pre-trained tools. Even though the results may take longer, artists would rather spend the time training specifically for their shot, knowing that they'd get the best outcome possible.

It was from this realization that CopyCat was born. CopyCat forms part of the machine learning toolset that has been integrated into NukeX.
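To make the few-shot, before/after training idea concrete, here is a deliberately tiny sketch of the workflow: fit a transformation from a handful of "before"/"after" example pairs, then apply it to an unseen frame. This is purely illustrative and bears no relation to Foundry's actual architecture; the "network" is just a per-pixel 3x3 color matrix plus bias (a simple color grade), and all names and shapes here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pairs(n_crops=5, size=32):
    # Stand-in for an artist's hand-made examples: a hypothetical
    # ground-truth effect (a fixed color grade) applied to random crops.
    true_M = np.array([[1.2, 0.1, 0.0],
                       [0.0, 0.9, 0.1],
                       [0.1, 0.0, 1.1]])
    true_b = np.array([0.05, -0.02, 0.03])
    befores = [rng.random((size, size, 3)) for _ in range(n_crops)]
    afters = [img @ true_M.T + true_b for img in befores]
    return befores, afters

def train(befores, afters, lr=0.5, steps=500):
    # Flatten all crops into one (pixels, 3) design matrix and fit the
    # 3x3 matrix M and bias b by gradient descent on mean squared error.
    X = np.concatenate([b.reshape(-1, 3) for b in befores])
    Y = np.concatenate([a.reshape(-1, 3) for a in afters])
    M = np.eye(3)
    b = np.zeros(3)
    n = len(X)
    for _ in range(steps):
        err = (X @ M.T + b) - Y
        M -= lr * (err.T @ X) / n
        b -= lr * err.mean(axis=0)
    return M, b

befores, afters = make_pairs()
M, b = train(befores, afters)

# Apply the learned transform to a new, unseen frame of the "sequence".
frame = rng.random((64, 64, 3))
graded = frame @ M.T + b
```

The point of the toy is the shape of the workflow, not the model: a few paired examples suffice here because the training data comes from exactly the footage the effect will be applied to, which is the observation that motivated CopyCat.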