Snap Lenses have, until now, employed machine learning only within preset limits designated by Snap. With SnapML, those limits are lifted: Lens Creators can bring their own machine learning models to deploy any number of visually engaging AR effects in the real world.
SnapML bridges the worlds of data science and creativity to create engaging, memorable, and unique AR experiences. Not only is SnapML truly a creative sandbox; it can also unlock distinctive triggers for people to discover organically while playing with Lenses and effects. No two AR experiences need be exactly alike, and you can bake more layers and capabilities into each brand experience from the outset.
A few examples:
Custom segmentation — the ability to detect and mask any shape or object a SnapML-trained model recognises. For example, you could have the Lens recognise your brand logo, then augment an effect on top of it.
Multiple segmentation — the ability to mask different areas of the world — everything from the floor and sky down to specific objects like trees and people — with individual effects, allowing a totally unique AR experience all within one Lens.
Hand gesture recognition — the ability to trigger specific Lenses with specific hand gestures, e.g. a peace sign.
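Under the hood, segmentation-driven effects like the ones above come down to compositing with a per-pixel mask predicted by a model. The following is a minimal, framework-agnostic Python sketch (using NumPy, not Snap's actual Lens Studio API) of how a binary mask output by a segmentation model could drive an overlay effect; the function name and the flat-colour effect are illustrative assumptions:

```python
import numpy as np

def apply_masked_effect(frame, mask, effect_color):
    """Composite a flat colour effect onto the pixels selected by a
    binary segmentation mask (1 = masked object, 0 = background).

    frame: (H, W, 3) uint8 image; mask: (H, W) array of 0s and 1s.
    Returns a new image; the input frame is left untouched.
    """
    out = frame.copy()
    out[mask.astype(bool)] = effect_color  # paint only the masked region
    return out

# Toy 4x4 black "frame" with a 2x2 object region marked in the mask,
# standing in for a model's segmentation output (e.g. a detected logo).
frame = np.zeros((4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1

result = apply_masked_effect(frame, mask, effect_color=(255, 0, 0))
```

In a real Lens, the mask would come from the trained model each frame, and the "effect" would be a texture, particle system, or 3D object rather than a flat colour, but the compositing principle is the same.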