What’s Coming Up Next for Snapchat? Here’s Everything You Need to Know

SnapML at a glance, from left to right: Multiple segmentation, hand gestures, custom segmentation, ground segmentation, foot tracking.

Snap Lenses have, until now, employed machine learning only within preset limits designated by Snap. With SnapML, those limits are lifted: Lens Creators can bring their own trained machine learning models into a Lens and deploy any number of visually engaging AR effects in the real world.

SnapML bridges the worlds of data science and creativity to create new engaging, memorable, and unique AR experiences. It is not only a creative sandbox: SnapML can also unlock distinctive triggers for people to discover organically while playing with Lenses and effects. No two AR experiences need be exactly alike, and brands can bake more layers and capabilities into each experience from the outset.

A few examples:

— the ability to detect and mask any shape or object a SnapML model has been trained to recognise. For example, a Lens could recognise your brand logo and then overlay an effect on top of it.

— the ability to mask different areas of the world — everything from the floor and sky down to specific objects like trees and people — with individual effects, allowing a totally unique AR experience all within one lens.

— the ability to trigger specific Lenses with specific hand gestures, e.g. a peace sign.
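The gesture-trigger idea in the last bullet boils down to routing a classifier's per-frame output to an effect. Here is a minimal, hypothetical Python sketch of that dispatch logic; the names (`GESTURE_EFFECTS`, `trigger_effect_for`) and the confidence threshold are illustrative assumptions, not part of the real SnapML or Lens Studio API (Lens Studio itself scripts in JavaScript).

```python
# Hypothetical sketch: map gesture-classifier output to a Lens effect.
# All names here are illustrative, not the actual SnapML API.

GESTURE_EFFECTS = {
    "peace_sign": "confetti_burst",
    "thumbs_up": "brand_logo_glow",
    "open_palm": "sky_replacement",
}

def trigger_effect_for(gesture_label, confidence, threshold=0.8):
    """Return the effect to fire, or None if the detection is too weak
    or the gesture has no effect mapped to it."""
    if confidence < threshold:
        return None
    return GESTURE_EFFECTS.get(gesture_label)

# A gesture model would emit (label, confidence) pairs each frame;
# here we simulate a few frames of output.
frames = [("peace_sign", 0.93), ("peace_sign", 0.41), ("fist", 0.99)]
for label, conf in frames:
    print(label, "->", trigger_effect_for(label, conf))
```

Thresholding the confidence before dispatching is what keeps an effect from flickering on and off as the model wavers between frames; a real Lens would likely also debounce across several consecutive frames.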

Source: https://arvrjourney.com/whats-coming-up-next-for-snapchat-here-s-everything-you-need-to-know-b452c4c2f520?source=rss—-d01820283d6d—4
