I recently chatted with the super-smart Amanda Zamora for her UT Knight Center course on social media analytics for journalists. Amanda’s the Senior Engagement Editor at ProPublica. She asked me to talk about some experiments we’ve done at NPR around audio and social media.

The conversation, which is available on YouTube, inspired me to pull together some tips I’ve learned over the past few years.

This isn’t everything. Just a list of things I’ve personally learned.

Know what you’re testing

Define some questions you want to answer and actually write them down before you start testing. It’s easy to just do something for the sake of doing it. Clearly articulated questions keep you focused.

For our social audio experiments, we wanted to answer two key questions:

1. Will people listen to the audio we produce at a higher-than-normal rate?
2. Will people share the audio packages?

We discovered that people listened to our social audio packages at a rate five times higher than normal, and that the vast majority of our audience came from social media.
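The "higher-than-normal rate" comparison boils down to a simple ratio. Here's a minimal sketch of that math — the function, field names, and every number in it are invented for illustration, not NPR's actual figures:

```python
# Hypothetical sketch: comparing an experiment's listen rate to a baseline.
# All numbers here are invented for illustration.

def listen_rate(listens: int, impressions: int) -> float:
    """Fraction of people who saw the post and started the audio."""
    return listens / impressions

baseline = listen_rate(listens=500, impressions=100_000)      # 0.005
experiment = listen_rate(listens=2_500, impressions=100_000)  # 0.025

lift = experiment / baseline
print(f"Experiment listened at {lift:.1f}x the baseline rate")
```

The point isn't the arithmetic — it's that you need a baseline number written down before the experiment starts, or "five times higher" has nothing to be five times higher than.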

Illustration by Russ Gossett/NPR

Find easy ways to test it

Once you know what you’re testing, cook up some creative ways you’re going to make it happen. And it’s a good idea to have a few different methods in case one falls through.

The how doesn’t need to be complicated. It could be as simple as tweeting something one day and a different version the next. If you have access to multiple social accounts, that’s another way you can measure how your experiment works in different formats and different places.
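A two-version test like that can be tallied with very little machinery. This sketch compares click rates for two framings of the same story — the variant names and numbers are made up:

```python
# Hypothetical two-version tweet test: same story, different framing,
# posted on consecutive days. All names and numbers are invented.

variants = {
    "question_headline": {"impressions": 40_000, "clicks": 1_200},
    "quote_headline":    {"impressions": 38_000, "clicks": 760},
}

for name, v in variants.items():
    rate = v["clicks"] / v["impressions"]
    print(f"{name}: {rate:.2%} click rate")
```

Comparing rates rather than raw counts matters here: the two posts almost never reach the same number of people.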

We’re lucky enough to work with member station journalists on many of our experiments. This provides a high-quality and diverse sampling from different places around the country.

For one experiment, we used printed headlines to categorize and map different types of local stories. We then used the NPR Facebook page to test against the categories we came up with.

The result was a framework that NPR member stations continue to test in their newsrooms today:

Illustration by Russ Gossett/NPR

Know how to measure it

Take a look at the questions you’re testing. Then list ways you can actually measure whether or not it’s working. This can be the hard part. While some metrics are easy to find — will this type of story get more pageviews than that type of story? — measurement is rarely that straightforward.

If you’re having trouble finding out how to measure your experiments, find someone internally or externally who might be able to help.

Keep in mind, measuring an experiment isn’t always about numbers. We often measure things like, Did this project change the way journalists produce audio?

Consider using existing data

Before you create something brand new, is there something already out there you can use?

For one of our experiments, we wanted to answer this question: Do serious stories get as much social traction as fun stories? Instead of building from scratch, we pulled data from 800 published stories and poured our analysis into a spreadsheet.

The result: Serious stories were just as shareable as fun stories.
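That kind of analysis doesn't require much beyond grouping and a summary statistic. Here's a rough sketch of the shape of it — the story records and numbers are invented, and a real version would load the ~800 rows from a spreadsheet export (for example with csv.DictReader) rather than a hand-typed list:

```python
# Hypothetical sketch: group published stories by category and compare
# typical share counts. Records and numbers are invented for illustration.
from statistics import median

stories = [
    {"category": "serious", "shares": 310},
    {"category": "serious", "shares": 420},
    {"category": "fun",     "shares": 390},
    {"category": "fun",     "shares": 350},
]

# Collect share counts per category.
by_category = {}
for story in stories:
    by_category.setdefault(story["category"], []).append(story["shares"])

for category, shares in by_category.items():
    print(f"{category}: median {median(shares)} shares")
```

Using the median instead of the mean is one small design choice worth making here: a single viral story can drag an average way up and hide what typical stories do.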

Illustration by Russ Gossett/NPR

Time-box your experiments

It might be a week. Or two weeks. Or two months. Whatever it is, come up with an end date and stick to it.

For our social audio experiments, we launched two six-week rounds with station journalists. At the end of six weeks, we stopped. Why? Because you can experiment forever and ever. The stopping points allow you to see what you’ve done, what you should do next and whether you’re doing the right thing in the first place.

That doesn’t mean you need to say goodbye to the experiment at the end of the trial. In fact, an end date gives you a natural point to decide next steps and, hopefully, build it into a new experiment.

Time-boxing experiments also keeps them manageable and on track — if you only have two weeks to test something, you need to be laser-focused.

Hold retrospectives

Another reason for time constraints: Retrospectives. At the end of an experiment, get together with your team. Talk about what worked, what didn’t work, what you’d do differently next time. Even if these are quick, they’re a valuable reset.

The secret is to get these things on the calendar well in advance. Otherwise it’s easy to lose track and move on to the next thing.

Make some rules

Time-boxes, retrospectives, planning. Essentially, it’s a good idea to add some structure.

How much? It varies from test to test. Start with the thing you’re trying to solve or the questions you’re trying to answer and build from there. What do you need to do to make that happen?

One-off mini-experiments are great; just make sure you capture what you learned. And if the mini-experiment was intriguing, make sure an opportunity to follow up doesn’t get lost.

Break some rules

Yes, some structure is important and helps you stay focused.

Here’s the tricky part: It can also kill the experiment.

If you over-engineer it, you’ll limit your ability to discover something you weren’t looking for in the first place. The purpose of structure is to create an environment where you can focus on experimenting and avoid distractions.

So create structure. Have a roadmap. State objectives. Just be flexible enough to throw some of it out halfway through because you learn something new or your assumptions are proven wrong.

Share what you find

Write a post. Tweet about it. Have conversations about it. Our team makes an effort to share what we’re working on even when we haven’t reached any conclusions.

This benefits everyone. It lets the rest of us learn what you’re doing. And it’s another step in the experimental process. It gets your information out into the world and opens it up for feedback. Some of that feedback might identify errors or challenge findings. Some of it might lead you down a new road. Some of it might reinforce your ideas. Some of it might lead to a partnership.

You never know. That’s why it’s an experiment.