DEMOCRATISING CREATIVITY

Top Sensei announcements from Adobe MAX 2020 that made me think I have what it takes to become an artistic savant.

Sneaks at Adobe MAX has historically been the most interesting stream of the conference because it showcases what the company is working on behind the scenes. Some of these technology previews eventually make it into a real Adobe product in one way or another, but some also feel like flights of fancy from the mind of a developer trying to create the next big piece of software magic.

Now, bear in mind that the term “Photoshop magic” has long been used to describe any digital image manipulation that looked like it took a professional graphic designer hours to achieve…usually on Adobe Photoshop, because that’s what all serious graphic designers used.

Today, however, our smartphones are able to perform Photoshop magic with a tap of your finger, in real-time, and even on videos. I’m talking about beautification filters, virtual backgrounds, augmented reality animations, transformations, lighting adjustments…and the list goes on. These aren’t even dedicated photo editing apps; every communication and social media app has a host of such 3D/AR/filter tools at its disposal. So, it’s actually a real feat that Adobe continuously manages to blow my mind during Adobe MAX, and this year, it’s not just Sneaks that piqued my interest, it’s just about everything.

And the one thing that really struck me was how hard Adobe has leaned into AI improvements since it announced the Sensei AI framework back in 2016. I bring this up because watching this year’s Adobe MAX really shows how Sensei integration within the Adobe universe is the perfect example of maturing AI technologies and how AI fits into our everyday lives.

Fundamentally, for me, it answers the question of whether AI is going to make all our jobs obsolete. And the answer that I’m sticking with—at least for now—is no. 

ADOBE MAX REALLY SHOWS HOW SENSEI INTEGRATION WITHIN THE ADOBE UNIVERSE IS THE PERFECT EXAMPLE OF MATURING AI TECHNOLOGIES AND HOW AI FITS INTO OUR EVERYDAY LIVES. 

THE SKY’S THE LIMIT

Let’s start with good old Photoshop magic. Two new Sensei-powered features coming to Photoshop are Neural Filters and Sky Replacement. Now, both of these tools feel like something most phone apps can do, but dialled up a hundredfold. Neural Filters, for example, doesn’t just take a photo and apply skin smoothing or beauty touch-ups; it can scan a 2D image, identify objects and do really crazy things like turning your head…in a still photo!

Sky Replacement also feels pedestrian at first. It’s just a masking and overlay filter that replaces the sky in a photo? How mundane, or so I thought. Of course, I expected improvements to things like content-aware masking and edge refinement with every new generation, but Sky Replacement doesn’t just replace the sky more accurately; it can even take your new sky and apply the correct colourisation to the rest of your picture. So if you’re replacing a morning sky with an evening sky, for example, your entire picture changes accordingly so it actually looks like it was shot in the evening.
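
Adobe hasn’t published how Sky Replacement does its harmonisation, but the core idea is easy to sketch. Here’s a minimal Python/OpenCV toy that assumes a ready-made sky mask (the part Sensei actually automates) and uses a classic Reinhard-style colour transfer as a stand-in for the real colourisation step; all file names are placeholders.

```python
# Toy sketch only: composite a new sky, then re-grade the foreground
# with a Reinhard-style colour transfer so it matches the sky's mood.
# The binary sky mask is assumed given; file names are placeholders.
import cv2
import numpy as np

def reinhard_transfer(src, ref):
    """Shift src's per-channel Lab statistics towards ref's."""
    src_lab = cv2.cvtColor(src, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref_lab = cv2.cvtColor(ref, cv2.COLOR_BGR2LAB).astype(np.float32)
    for c in range(3):
        s_mean, s_std = src_lab[..., c].mean(), src_lab[..., c].std() + 1e-6
        r_mean, r_std = ref_lab[..., c].mean(), ref_lab[..., c].std()
        src_lab[..., c] = (src_lab[..., c] - s_mean) * (r_std / s_std) + r_mean
    return cv2.cvtColor(np.clip(src_lab, 0, 255).astype(np.uint8),
                        cv2.COLOR_LAB2BGR)

photo   = cv2.imread("morning_shot.jpg")
new_sky = cv2.imread("evening_sky.jpg")
mask    = cv2.imread("sky_mask.png", cv2.IMREAD_GRAYSCALE)  # 255 = sky

new_sky = cv2.resize(new_sky, (photo.shape[1], photo.shape[0]))
foreground = reinhard_transfer(photo, new_sky)  # "evening-ify" the scene
alpha = (mask.astype(np.float32) / 255.0)[..., None]
result = (new_sky * alpha + foreground * (1 - alpha)).astype(np.uint8)
cv2.imwrite("evening_shot.jpg", result)
```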

THE AGE OF VIDEO

Video is another big topic for creators today, and if you thought Photoshop was daunting to the average person, then Premiere Pro would be like trying to climb up Mount Doom. The number of buttons, dials and options one needs to manage just to edit a ‘simple’ vlog is no joke. And that’s where Sensei comes in to help ease some of that workload with Speech to Text. At MAX 2020, Adobe announced auto-captioning/transcription for Premiere Pro. Again, this doesn’t seem too impressive at first, when YouTube already has a real-time auto-captioning option that does a reasonably good job and can be toggled on and off while streaming a video. But where Adobe impresses is that it doesn’t just transcribe; it tries to match the pacing of spoken dialogue and syncs it all to the video’s timecode, with the result fully editable.
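
To make that “synced to the timecode” part concrete: once a speech-to-text engine gives you word-level timestamps, turning them into paced caption cues is mostly bookkeeping. The rough Python sketch below, with hand-made timestamps standing in for whatever engine Premiere Pro actually uses (Adobe hasn’t detailed it), groups words into SubRip-style cues that break on pauses:

```python
# Illustrative sketch: turning word-level timestamps from any
# speech-to-text engine into timed caption cues (SubRip/.srt).
# The words list is hand-made; a real engine would supply it.
words = [("Welcome", 0.00, 0.42), ("back", 0.42, 0.70),
         ("to", 0.70, 0.81), ("the", 0.81, 0.90),
         ("channel,", 0.90, 1.45), ("everyone!", 1.60, 2.30)]

def fmt(t):
    """Seconds -> SRT timestamp 'HH:MM:SS,mmm'."""
    ms = int(round(t * 1000))
    h, ms = divmod(ms, 3600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

# Group words into cues, breaking on pauses or length, so the
# captions pace themselves to the speech instead of a fixed clock.
cues, current = [], []
for word, start, end in words:
    if current and (start - current[-1][2] > 0.5 or len(current) >= 7):
        cues.append(current)
        current = []
    current.append((word, start, end))
if current:
    cues.append(current)

for i, cue in enumerate(cues, 1):
    text = " ".join(w for w, _, _ in cue)
    print(f"{i}\n{fmt(cue[0][1])} --> {fmt(cue[-1][2])}\n{text}\n")
```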

DISNEY, HERE I COME

Taking speech recognition technology one step further, Character Animator now has Speech-Aware Animation functionality. Remember how I was already impressed that Premiere Pro uses Sensei to process speech and produce text captions with the correct pacing? Well, Speech-Aware Animation—as its name implies—tries to create natural-looking animations by generating lip-synced mouth movements, as well as corresponding head and eyebrow motion. I’m not going to dwell much on this topic since I know next to nothing about animation, but I thought it’d be a great segue into this year’s Sneaks, as Speech-Aware Animation was originally called Project Sweet Talk and was first previewed at last year’s Adobe MAX 2019 Sneaks.

IS IT TIME FOR SNEAKS 2020 YET?

So, Neural Filters, Sky Replacement and the Speech-Aware features have definitely made their way into real products. But if you’ve read this far, you’d have realised I’m quite interested in photo and video announcements, so what are my favourite Sneaks this year?

There are no pictures of the Sneaks on Adobe’s site, only videos, which you can find here: https://blog.adobe.com/en/2020/10/21/max-sneaks-2020-where-creativity-and-innovation-knows-no-bounds.html

Material World

I’ve seen apps use your phone camera or an existing photo as a design template. I’m not talking about just using a picture as a background, but extracting a design or colour scheme from a picture. For example, Samsung’s Galaxy smartwatches let you create custom watch faces so you can colour-coordinate with your #ootd. The Material World Sneak blows everything I’ve seen out of the water with its ability to produce realistic, environment- and lighting-aware 3D textures from a single 2D photo. Not only that, Adobe was able to batch process a whole bunch of pictures to turn a completely blank, white 3D scene into a fully textured and coloured world.
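
Material World’s texture synthesis is obviously well beyond a few lines of code, but the simpler trick mentioned above, pulling a colour scheme out of a picture, is easy to sketch. Here’s a toy k-means palette extractor in Python (Pillow and NumPy only; the photo path is a placeholder):

```python
# Minimal sketch: extract a k-colour palette from a photo by
# clustering its pixels with a hand-rolled k-means loop.
import numpy as np
from PIL import Image

def palette(path, k=5, iters=20):
    pixels = np.asarray(Image.open(path).convert("RGB").resize((128, 128)),
                        dtype=np.float32).reshape(-1, 3)
    rng = np.random.default_rng(0)
    centres = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then re-average.
        d = ((pixels[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centres[c] = pixels[labels == c].mean(0)
    return [tuple(int(v) for v in c) for c in centres]

print(palette("ootd.jpg"))  # e.g. [(212, 180, 140), ...]
```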

THE MATERIAL WORLD SNEAK BLOWS EVERYTHING I’VE SEEN OUT OF THE WATER WITH THE ABILITY TO PRODUCE REALISTIC, ENVIRONMENT AND LIGHTING-AWARE 3D TEXTURES FROM A SINGLE 2D PHOTO. 

On The Beat

How does a computer vision researcher perform TikTok dance challenges? With AI, of course. This Sneak showed how Adobe Sensei identifies patterns in both video and audio samples, then syncs them up to create a timed output that feels fluid. My explanation is overly simplified, but that’s essentially what it does. You’ve really got to see the video yourself to appreciate how genius this is. What makes it interesting to me is that it’s basically an expansion of the Speech-Aware capabilities from before.
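
To give a flavour of the audio half of that pipeline, here’s a rough Python sketch using librosa’s off-the-shelf beat tracker (not Adobe’s actual technology) to snap a set of made-up video cut points to the nearest musical beats:

```python
# Rough sketch: detect beats in a music track, then snap a handful
# of made-up video cut points to the nearest beat so the edit moves
# with the audio. The file name is a placeholder.
import numpy as np
import librosa

y, sr = librosa.load("savage_love_remix.mp3")
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

rough_cuts = np.array([2.1, 4.8, 7.3, 9.9])  # editor's rough cut points (s)
snapped = beat_times[np.abs(beat_times[:, None] - rough_cuts).argmin(axis=0)]
print("estimated tempo:", tempo)
print("cuts snapped to beats at:", snapped.round(2))
```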

Comic Blast

The last one on my list is Comic Blast, a tool that uses Sensei to process a standard comic text script (I think it still has to follow a particular format) and then generate a whole comic book, complete with panels, speech and effects bubbles. You can then import your art into the panels, where it seems like more content-aware tools will let you further modify the art, such as morphing character faces with your own picture, or creating animations like rotating rocks and parallax effects.
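
Adobe hasn’t published Comic Blast’s script format, so purely as an illustration of the script-to-panels idea, here’s a toy Python parser that assumes a made-up screenplay-like convention: “PANEL” starts a new panel, “NAME: line” is a speech bubble, and “SFX:” marks an effects bubble.

```python
# Toy sketch of the script-to-panels idea; the format is invented.
script = """\
PANEL
HERO: We have to get off this rock.
SFX: KRAKOOM
PANEL
SIDEKICK: You had to say rock.
"""

panels, current = [], None
for line in script.splitlines():
    line = line.strip()
    if line == "PANEL":                 # start a fresh panel
        current = {"speech": [], "sfx": []}
        panels.append(current)
    elif line.startswith("SFX:") and current is not None:
        current["sfx"].append(line[4:].strip())
    elif ":" in line and current is not None:
        name, text = line.split(":", 1)  # "NAME: line" speech bubble
        current["speech"].append((name.strip(), text.strip()))

for i, p in enumerate(panels, 1):
    print(f"Panel {i}: {len(p['speech'])} bubble(s), {len(p['sfx'])} SFX")
```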

YOU CAN IMPORT YOUR ART INTO THE PANELS, WHERE IT SEEMS LIKE MORE CONTENT-AWARE TOOLS WILL FURTHER ALLOW YOU TO MODIFY THE ART, SUCH AS MORPHING CHARACTER FACES WITH YOUR OWN PICTURE. 

BACK TO THE TOP

When I look back at older editions of Adobe MAX, I remember that new features were presented by people who were obviously deeply familiar with the apps and tools. I would normally come away wishing I could do even half of what was shown, knowing I didn’t have the creativity to pull it off.

Today, I come away from a presentation thinking, “Oh that’s cool. I can’t wait to try that myself”.

At first glance, it feels like Photoshop magic has become less magical, in the sense that the professional creative’s expertise or knowledge is no longer needed. Advanced image and video manipulation can be achieved at the touch of a button.

But look closer, and you’ll find that what’s happening with Adobe Sensei follows the same trend as AI implementation across other industries. In truth, these Sensei-powered features don’t really take away the creative’s creativity (for lack of a better word). Instead, they help remove the mundane roadblocks in a creative’s workflow.

Anyone who’s ever tried to manually mask the edges of human hair in a photo will tell you that it’s tedious work. The same goes for transcribing and timing videos, or adjusting animations frame by frame. If all these jobs can be automated and enhanced by contextually-aware AI algorithms, all the more power for creatives to actually focus on being creative.

Sure, I may now be able to change the sky in my pictures without having to go through a 20-step masking and blending Photoshop tutorial, but I’d still have to know how to take a good picture in the first place. Sensei may be able to perfectly time my wildly swinging hands to the BTS remix of Savage Love, but I’d still need to learn how to dance.

By Zachary Chan. Photos: Adobe.