Building an in-browser image editor with React

How we enable Concert Ad Manager users to upload, crop, zoom, and optimize their images — completely in-browser without any external services.

Last month, Vox Media launched Concert Ad Manager. It’s a self-serve platform for marketers of all sizes and budgets to create, target and publish ads across the Concert and Concert Local marketplaces.

For the first time ever, we’ve opened the door to smaller advertisers with limited budgets and allowed them to take advantage of the premium Concert network. These advertisers might not have the budget to hire a fancy ad agency to create ads for them.

What does this mean for you if you’re one of these advertisers?

Instead of turning to Facebook, Instagram or Google for a cheap, targeted advertising campaign with a limited budget, you can use Concert Ad Manager to create beautiful ads with your existing brand assets and run them on the Concert network.

As a bonus, your ads will be running on the sites of dozens of top-tier Concert publishers instead of in a Facebook feed next to a photo of Uncle Ron’s potato salad (with no disrespect to Uncle Ron).

A creative builder for the average user

Concert ads are designed to have a premium look and feel. Because of this, it’s unlikely that your existing assets are perfectly sized for a Concert ad without a couple of adjustments.

Since advertisers using Concert Ad Manager might not have access to an ad agency or to expensive tools like Photoshop, we wanted to build easy-to-use image editing features within the app itself. Users need to be able to upload their image assets, crop them, and scale them to fit mobile and desktop ad sizes.

With other services at Vox Media like Chorus, we might have used our existing infrastructure to process images at request time – or a server-side image manipulation solution to process and re-upload them.

However, our requirement that users have full control over focal point selection and scaling was difficult to translate into the constraints of those existing systems. Additionally, we didn’t want to put an extra burden on infrastructure used for editorial purposes at Vox Media for what is ultimately a revenue concern.

It’s also possible we could have purchased an existing third-party image editing solution to handle this. There are plenty of them!

But in our research, we found that many of these solutions require you to use that company’s hosting and delivery service. The Concert CDN serves billions of image requests every month, and this would be a huge additional cost — no matter which vendor we would have chosen.

So we decided to build one ourselves! The Concert Ad Manager creative builder allows you to upload images, crop them, and scale them to a focal point — completely within the client-side of the browser.

Here’s a video of how it looks:

The rest of this post is a technical deep-dive into how it all works.

A custom hook for custom images

Concert Ad Manager is built with React, so we started by creating a custom React hook for image editing — aptly named useEditableImage().

The most important return value from this hook is a function called handleNewImage(). This is called by the consumer whenever a new image has been added or dragged-and-dropped into the input field.

We start by validating that the image size and MIME type fit our requirements.

Next, we upload the full original image to S3 using a signed URL. It’s important we store a reference to this original image to be able to create cropped and resized renditions later on.

Once that upload is complete, we pass the URL of the hosted source image to a processImage() function, along with the dimensions of the current image field (desktop or mobile ad sizes), and a default zoom and focalPoint.

In this function, we create an HTML <canvas>, set its dimensions to the desired output dimensions, and prepare to draw the image.

Then, we gather some coordinates to reposition the image with a function called getTransformCoordinates().

Here’s the function in its entirety:
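
The original listing isn’t reproduced here, so what follows is a sketch reconstructed from the step-by-step walkthrough below. The function name comes from the post; the argument shapes and the 0–1 focal point fractions are assumptions.

```javascript
// Sketch of getTransformCoordinates(), reconstructed from the description.
// input/output: { width, height } in pixels; focalPoint: { x, y } as 0–1
// fractions; zoom: a multiplier >= 1.
function clamp(value, min, max) {
  return Math.min(Math.max(value, min), max);
}

function getTransformCoordinates(input, output, focalPoint, zoom) {
  // 1. The visual center of the output image, in real pixels.
  const centerX = output.width / 2;
  const centerY = output.height / 2;

  // 2. Scale the image to cover the output bounds, like CSS `background-size: cover`.
  const scale = Math.max(output.width / input.width, output.height / input.height);

  // 3. Apply the cover scale plus the user's zoom to the original dimensions.
  //    In most cases this makes the image larger than the canvas.
  const width = input.width * scale * zoom;
  const height = input.height * scale * zoom;

  // 4. Resolve the focal point on the scaled image.
  const focalX = width * focalPoint.x;
  const focalY = height * focalPoint.y;

  // 5. Ideally, the focal point sits exactly at the output's visual center.
  const idealLeft = centerX - focalX;
  const idealTop = centerY - focalY;

  // 6. Clamp between the max offsets (negative numbers: as far off-canvas as
  //    the overflow allows) and zero, so no letterboxing or pillarboxing occurs.
  const left = clamp(idealLeft, output.width - width, 0);
  const top = clamp(idealTop, output.height - height, 0);

  return {
    top: Math.round(top),
    left: Math.round(left),
    width: Math.round(width),
    height: Math.round(height),
  };
}
```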

Let’s break it down step-by-step.

The goal is to place the focal point of the image as close as possible to the visual center of the output image.

We start by finding the real pixel coordinates of the visual center of the output image:

Then, we determine how much we need to scale the image to fit within the bounds. We emulate CSS’s background-size: cover with Math.max:

We take that scale factor, in addition to a zoom factor, and apply it to the original image dimensions. In most cases, this will make the image larger than the canvas.

Next, we determine an “ideal center” for the image by resolving the focal point of the scaled image.

This is where my brain starts to sweat. It helps me to imagine holding a landscape picture frame over a portrait photo, and moving the photo underneath to find the right positioning:

Finally, we attempt to overlay the original image’s center at the visual center of the output dimensions (as determined in the first step):

It’s possible that we won’t be able to use this ideal positioning. For example, the user might select a focal point on the extreme edge of the image. We don’t want black bars on the edges of the image (known as letterboxing or pillarboxing).

So we need to make a compromise and find a realistic top and left position for the image within the frame. We do this by calculating how much “wiggle room” we have by finding the difference in dimensions between the scaled input image dimensions and the output image dimensions.

Then we clamp the ideal top and left coordinates between the max values (which are actually negative numbers: as far top and left off-canvas as possible) and zero, which would align the image perfectly at the top left corner of the canvas.

The return values from this function are whole numbers: top, left, width and height.

Now that we have the transform coordinates, we use them to draw the image on the canvas. We call canvas.toBlob() and return an in-memory representation of the drawn image from this function.
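
As a rough sketch of this drawing step (browser-only code; the function name and Promise wrapper are assumptions, as is the JPEG output type):

```javascript
// Draw the positioned, scaled image onto an output-sized canvas, then hand
// back an in-memory Blob. `image` is a loaded HTMLImageElement; `coords` is
// the { top, left, width, height } result from getTransformCoordinates().
function drawToBlob(image, outputWidth, outputHeight, coords) {
  const canvas = document.createElement('canvas');
  canvas.width = outputWidth;
  canvas.height = outputHeight;

  const context = canvas.getContext('2d');
  context.drawImage(image, coords.left, coords.top, coords.width, coords.height);

  // toBlob() is callback-based, so wrap it in a Promise for the caller.
  return new Promise((resolve) => canvas.toBlob(resolve, 'image/jpeg', 0.9));
}
```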

Using the in-memory blob, we upload the image again.

Why upload the same image twice? We want to ensure we always have a scaled, cropped version of the user’s image available. We wouldn’t want to traffic a giant image across our network, because we care about bandwidth and want to ensure our ads are performant.

We also can’t rely upon a user to make an initial edit to get this scaled version of the image — perhaps they like the way it looks on the first try, for example.

Finally, we call a user-provided onChange() function.

This persists the original image URL, focal point, zoom, and processed image URL to our ad assets payload. This gets synced with our GraphQL service. It also gets sent to our ad preview iframes within the creative builder for real-time updates.

Building a focal point picker

Rather than having the user specify the crop bounds of an image by hand, we decided to offer two explicit controls: a focal point picker and a zoom slider.

This keeps the experience the same for editing images for multiple canvas sizes (mobile and desktop).

We start by adding an ImageEditor component and a FocalPointPicker component which lives inside it.

Here’s the FocalPointPicker component in its entirety:
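
The original component isn’t reproduced here; this is a condensed, hypothetical sketch, with prop names taken from the description below and everything else (styling, touch support, edge cases) elided.

```jsx
import React from 'react';

const clamp = (value, min, max) => Math.min(Math.max(value, min), max);

function FocalPointPicker({ imageUrl, focalPoint, onChange }) {
  const [isDragging, setIsDragging] = React.useState(false);
  const imageRef = React.useRef(null);

  // While dragging, convert pixel movement deltas into percentage offsets.
  const handleMouseMove = (event) => {
    if (!isDragging) return;
    const rect = imageRef.current.getBoundingClientRect();
    onChange({
      x: clamp(focalPoint.x + event.movementX / rect.width, 0, 1),
      y: clamp(focalPoint.y + event.movementY / rect.height, 0, 1),
    });
  };

  // On click, place the focal point exactly where the user clicked.
  const handleClick = (event) => {
    const rect = imageRef.current.getBoundingClientRect();
    onChange({
      x: clamp((event.clientX - rect.left) / rect.width, 0, 1),
      y: clamp((event.clientY - rect.top) / rect.height, 0, 1),
    });
  };

  return (
    <div
      onMouseDown={() => setIsDragging(true)}
      onMouseMove={handleMouseMove}
      onMouseUp={() => setIsDragging(false)}
      onClick={handleClick}
    >
      <img ref={imageRef} src={imageUrl} alt="" />
      {/* The draggable "handle", absolutely positioned over the image */}
      <div
        className="focal-point-picker__handle"
        style={{ left: `${focalPoint.x * 100}%`, top: `${focalPoint.y * 100}%` }}
      />
    </div>
  );
}
```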

This component accepts a string representing the URL of the original, unprocessed image; the current focalPoint; and an onChange callback.

We display the original image for focal point selection in a standard <img> tag with 50% CSS opacity. Its container gets a black background to make it more obvious where the light-colored focal point handle resides.

Then, we render a “handle” and position it absolutely over the image. This serves as the UI the user can click and drag over the image to interactively position their desired focal point.

Inside the handle, we render a white circle with mix-blend-mode: overlay. This gives a “flashlight” effect to the handle, brightening the focused area of the darkened image below.

A dark photo of a cabin with a light-colored circle highlighting a spot in the center of the photo

We want to track a couple different states within the image and the handle:

  • When the user is dragging (onMouseDown, onMouseMove, onMouseUp)
  • When the user clicks an area of the image

To accomplish this, we add listeners for onMouseDown to both the image and the handle, and set an internal state isDragging.

We add a listener for onMouseMove as well. This event conveniently includes movementX and movementY delta payloads, meaning we don’t have to track the journey of the user’s mouse and how far it’s traveled. Instead, we convert the pixel movement parameters into percentages, add them to the original focal point, and update the focal point.
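
That conversion can be sketched as a small pure function (the name is hypothetical; it assumes the focal point is stored as 0–1 fractions and the image’s rendered size is known):

```javascript
// Convert the MouseEvent's movementX/movementY pixel deltas into fractions of
// the image's rendered size, and add them to the current focal point.
function moveFocalPoint(focalPoint, movementX, movementY, imageWidth, imageHeight) {
  const clamp01 = (value) => Math.min(Math.max(value, 0), 1);
  return {
    x: clamp01(focalPoint.x + movementX / imageWidth),
    y: clamp01(focalPoint.y + movementY / imageHeight),
  };
}
```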

When the user clicks an area of the image, we get the top and left coordinates of the click event by doing some math to compare the image bounds with the click coordinates. Then we convert those coordinates to percentages within the image, and update the focal point.
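
That “some math” amounts to comparing the click’s viewport coordinates against the image’s bounding box. A sketch (the function name is assumed; in the component, the rect would come from getBoundingClientRect() and the coordinates from event.clientX/event.clientY):

```javascript
// Convert a click's viewport coordinates into 0–1 fractions within the image.
function clickToFocalPoint(clickX, clickY, imageRect) {
  const clamp01 = (value) => Math.min(Math.max(value, 0), 1);
  return {
    x: clamp01((clickX - imageRect.left) / imageRect.width),
    y: clamp01((clickY - imageRect.top) / imageRect.height),
  };
}
```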

We also need to listen for when the user releases the mouse button so we know they have stopped dragging. This tells us we can stop recalculating the focal point with each movement.

Once the user is happy with the focal point they have selected, they press the Save button. This kicks off a callback which propagates through the ImageEditor component. It fires another function returned from our useEditableImage hook: handleImageChange().

We reuse the same math calculation from when the user uploaded the original image — only this time, we have a new focalPoint input.

This processed image is re-uploaded using a signed URL and is now ready for distribution on our CDN.

Making the focal point picker accessible

The focal point picker we’ve built is really nice for users who are able to control a mouse and can see their screens for immediate feedback.

But this excludes users who cannot use a mouse, or who cannot see their screen to see the focal point they have chosen.

Let’s fix that!

Inside our focal point picker, we’ll add a new ManualPicker component.

Here’s what it looks like:

Inside this component, we make a copy of the focalPoint and store it in a local, mutable state. Then, we split the X and Y values and display them in two separate text inputs with labels describing what they represent: percentages of the horizontal and vertical focal point.

When the user modifies the values and blurs away from the inputs, we format the values and propagate the changes up to the main focal point picker component.
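
That blur-time formatting can be sketched as a small helper (the name is hypothetical; it assumes the inputs hold 0–100 percentages while state stores 0–1 fractions):

```javascript
// Parse a text input's percentage value, clamp it to 0–100, and convert it to
// the 0–1 fraction stored in state. Fall back to the previous value on junk.
function parseFocalPercent(value, fallback) {
  const parsed = parseFloat(value);
  if (Number.isNaN(parsed)) return fallback;
  return Math.min(Math.max(parsed, 0), 100) / 100;
}
```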

Since this is a duplicate form input for our picker, we don’t want to display this to users who are using a mouse. By default, we make the accessible picker visible only to screen readers using a CSS utility class.

Then, we wrap the hidden picker with a focusable element — like a link — and add an onFocus listener. This way, when the user focuses into the element with their keyboard, we can automatically remove the CSS class hiding the accessible picker and make it visible and usable.
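
The post doesn’t show the utility class itself, but a common implementation of the screen-reader-only pattern (the class name here is an assumption, not necessarily what Concert Ad Manager uses) looks like:

```css
/* Visually hide an element while keeping it available to screen readers.
   Removing this class makes the element visible and usable again. */
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}
```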

Building a zoom slider

We need users to be able to zoom into their images to highlight a certain area.

Inside our ImageEditor component, we add another new component called ZoomSlider.

Similar to the FocalPointPicker, ZoomSlider accepts a current zoom value and a callback function.

We ended up using a third-party React component called rc-slider because it had opinionated styles ready-to-use.
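
Wiring it up is minimal. A hypothetical sketch (our component’s prop names and the min/max/step values are assumptions; Slider and its props are rc-slider’s real API):

```jsx
import React from 'react';
import Slider from 'rc-slider';
import 'rc-slider/assets/index.css';

function ZoomSlider({ zoom, onChange }) {
  // rc-slider calls onChange with the new numeric value directly.
  return <Slider min={1} max={3} step={0.05} value={zoom} onChange={onChange} />;
}
```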

When the user changes the zoom level with their mouse or their keyboard, we propagate the new zoom level up to the ImageEditor component, which handles updating the editable image hook.

Showing live previews within the ad

All of these in-browser image editing features are nice, but they’re not very useful to someone building an ad if they can’t see the effects of their edits in real time.

This was a bigger challenge than we expected, and it required an update to our ad rendering framework.

One possible solution is to keep generating a blob of the preview image, and instead of uploading it with a signed URL, we keep it in memory and display it in the ad.

The problem with this approach is that it’s very computationally expensive. We wanted a buttery-smooth editing experience, not a choppy one.

Thankfully, our pals on the Chorus team mentioned a blog post they wrote five (!!) years ago about building their own image cropping preview tool.

In this tool, they leveraged an SVG with an image tag to manipulate different cropping patterns in real time. We decided to give this a try within our ads.

Inside our ad markup, next to the img tag holding the computed, static image, we add a new svg containing an image element:
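
The markup might look something like this (the URL and all numbers are illustrative; older browsers need xlink:href instead of href on the image element):

```html
<svg viewBox="250 0 500 500" aria-hidden="true">
  <!-- width/height match the original, uncropped upload; the viewBox selects
       the "frame" computed from the user's focalPoint and zoom. -->
  <image href="https://example.com/original-upload.jpg" width="1000" height="500" />
</svg>
```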

The image element references the original, uncropped image URL. We set the width and height of the image element to the original width and height of the uploaded, uncropped image.

Then, we use the viewBox attribute of the SVG to represent the desired image, taking into account the user’s focalPoint and zoom inputs.

How do we calculate these viewBox coordinates?

Thankfully, we had already done quite a bit of math in this area (see getTransformCoordinates() above), so we felt good about rolling up our sleeves to make this work, too.

As it turns out, the logic required for providing viewBox coordinates was the inverse of the logic required for providing canvas cropping coordinates: instead of moving and stretching a picture to fit inside a frame, we needed to move and stretch the frame to display a portion of the picture.

We start by adding a new getViewBoxCoordinates() function to our library:
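
The original listing isn’t reproduced here; this is a sketch reconstructed from the breakdown below, with the same assumed argument shapes as the getTransformCoordinates() sketch.

```javascript
// Sketch of getViewBoxCoordinates(): the inverse of the canvas math. Instead
// of moving a scaled image inside a fixed frame, we move and size a "frame"
// over the fixed original image.
function clamp(value, min, max) {
  return Math.min(Math.max(value, min), max);
}

function getViewBoxCoordinates(input, output, focalPoint, zoom) {
  // 1. The ideal visual center, this time on the *input* image.
  const focalX = input.width * focalPoint.x;
  const focalY = input.height * focalPoint.y;

  // 2. Scale the frame to fit entirely inside the image (Math.min, not Math.max).
  const scale = Math.min(input.width / output.width, input.height / output.height);

  // 3. Size the frame, shrinking it as the user zooms in.
  const width = (output.width * scale) / zoom;
  const height = (output.height * scale) / zoom;

  // 4. Ideally, the frame is centered on the focal point.
  const idealLeft = focalX - width / 2;
  const idealTop = focalY - height / 2;

  // 5. Clamp the frame so it never extends past the image bounds.
  const left = clamp(idealLeft, 0, input.width - width);
  const top = clamp(idealTop, 0, input.height - height);

  // These values translate directly into viewBox="left top width height".
  return {
    top: Math.round(top),
    left: Math.round(left),
    width: Math.round(width),
    height: Math.round(height),
  };
}
```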

Let’s break this down.

Similar to getTransformCoordinates(), we accept the original image dimensions, the desired image output dimensions, a focalPoint and a zoom.

This time, we start by defining the ideal visual center of the input image (as opposed to the output image, like we do when drawing on a canvas).

Then, we determine how much we need to scale the frame to cover the entire image (using Math.min instead of Math.max this time).

We determine the width and height of the “frame” by factoring in the scale from the previous step as well as any user-provided zoom input.

From there, we calculate the ideal top and left coordinates for the frame given the ideal focalPoint from the first step.

Next, we figure out how much room we actually have to work with given the image dimensions, and we clamp the frame to ensure we respect the bounds of the image.

Finally, we return top, left, width and height values from the function, which get translated into viewBox coordinates.

Of course, we want these values to be updated in real time as the user drags their mouse or the zoom slider. This could mean a lot of events, and we certainly don’t want to make a server round trip for each of these preview values.

Instead, when the user is in live-editing mode, we store these preview values in a separate mutable asset store for the current ad, which lives in memory.

These preview values are sent over postMessage to our desktop and mobile ad iframes, which know to update the ad markup with the latest inputs.
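
A sketch of that broadcast (the message shape and function name are assumptions; in production you would pass a specific target origin rather than '*'):

```javascript
// Send the latest preview inputs to each ad preview iframe. Each frame
// listens for "message" events and updates its SVG viewBox on receipt.
function broadcastPreview(frames, preview) {
  frames.forEach((frame) => {
    frame.contentWindow.postMessage({ type: 'preview-update', ...preview }, '*');
  });
}
```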

Finally, we show and hide the SVG preview component within our ad markup using CSS based on whether the user is in Edit Mode for an image.

Once the user presses Cancel or Save, we exit Edit Mode, and the SVG component is hidden. The ad reverts back to using the static version of the image, hosted on the CDN.

Because we’ve encapsulated the image editing logic into a hook, and each component within our image field (ImageEditor, FocalPointPicker, ZoomSlider) is composable, it means we can re-use this functionality across multiple fields in our creative builder: desktop background image, mobile background image, video thumbnail, and more.

The markup in the ad itself is also minimal, so it’s easy to copy/paste any place where we display a static image within the ad.


I really enjoyed building out this complex feature with the Concert Platforms team. It was truly a team effort — most engineers have contributed to the code you see above in some way or another.

This entire image editing concept would not exist if it hadn’t come from the minds of our product and design team, either.

Be sure to check out Concert Ad Manager! It’s free to create an account, draft up a new ad and start playing with the in-browser image editing tools described above.