

3 common questions from Hotjar’s engineering candidates, answered

At Hotjar, our Engineering team often gets questions from candidates in the hiring process about how we function internally. Questions like these tend to come up after a candidate’s technical interview, when we may have run out of time to discuss them in depth. We know this information is useful, but we can’t always address it as fully as we’d like in the moment, so we’re sharing our responses here.

Last updated: 11 Apr 2023 · Reading time: 4 min


This blog aims to answer our most frequently asked questions in more detail. The three questions below cover the following topics:

  1. Technical debt 

  2. Test types and code coverage

  3. Builds and releases

We hope it’ll be useful for future Hotjarians. Let's dive in!

Question 1: do you have any technical debt, and how much time do you spend preventing and addressing it? 

Who doesn’t have tech debt? Our codebase has existed for over seven years, and it definitely has parts that need more love.

Different squads (our internal term for a single cross-functional team) carry varying amounts of tech debt and deal with it differently depending on their needs. Even code that isn’t broken and doesn’t need to change often becomes debt simply because the parts around it evolve.

For example, an endpoint for a new feature might use a different validation library. Generally, the old endpoint stays on the old library unless there’s a pressing need to migrate it, such as the older validation library becoming outdated and blocking us from moving to the next version of Python.
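As a hedged illustration of that kind of coexistence (the post doesn’t name our validation libraries, so marshmallow as the older one and pydantic as the newer one are assumptions, not our actual stack):

```python
# Hypothetical stand-ins: marshmallow (older) and pydantic (newer)
# illustrate two validation styles coexisting in one codebase.
from marshmallow import Schema, fields
from pydantic import BaseModel

class LegacyEventSchema(Schema):
    """Old endpoint's validation: left alone while it still works."""
    name = fields.Str(required=True)
    count = fields.Int(load_default=0)

class Event(BaseModel):
    """New endpoint's validation, written with the newer library."""
    name: str
    count: int = 0

# Both styles ship side by side; the old one only becomes debt worth
# paying down once its library blocks something, like a Python upgrade.
legacy = LegacyEventSchema().load({"name": "click"})
modern = Event(name="click")
```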

We try to look at this as technical solvency rather than technical debt. No debt means you’re over-engineering for today’s requirements, which can easily change tomorrow; too much means you’re under-engineering and sacrificing quality, which hurts your reliability and slows your time to market. Striking a good balance between the two is something we try to do every day.

If we count all the initiatives we invest in, we spend around 20% of our time managing technical debt. This includes work done by squads in the various product Tribes, our Engineering Enablement Tribe, and Chapter Weeks, where we spend one week per quarter on wider-impact initiatives instead of product squad work.

Question 2: what libraries and test types do you use for Python automated testing, and what’s your code coverage?

We use pytest for testing in Python. Our test suite consists of the following (a minimal sketch follows the list):

  • Unit tests: tests that don’t touch a database, such as checking the boundaries of a validator

  • Integration tests: where the tests make a full round trip to a database and back, such as ensuring a repository method works as expected

  • API tests: where we treat an endpoint as a black box and see if the expected result occurs
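Here’s a minimal pytest sketch of the first two layers. The validator and repository below are hypothetical stand-ins, not our actual code, and the “integration” test uses an in-memory repository where the real suite would make a full round trip to a database:

```python
import re

# Hypothetical stand-ins for illustration; not Hotjar's actual code.
def validate_email(value: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

class InMemoryUserRepository:
    """Stands in for a repository that would normally hit a database."""
    def __init__(self) -> None:
        self._users: dict[int, str] = {}

    def create(self, email: str) -> int:
        user_id = len(self._users) + 1
        self._users[user_id] = email
        return user_id

    def get(self, user_id: int) -> str:
        return self._users[user_id]

def test_validator_boundaries():
    # Unit test: no database involved, just the validator's edge cases.
    assert validate_email("user@example.com")
    assert not validate_email("not-an-email")

def test_repository_round_trip():
    # Integration test shape: write, then read back and compare.
    # (The real suite would round-trip to an actual database.)
    repo = InMemoryUserRepository()
    user_id = repo.create("user@example.com")
    assert repo.get(user_id) == "user@example.com"
```

An API test would look similar in shape, but would POST to an endpoint through a test client and assert only on the response, treating everything behind the endpoint as a black box.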

We also have some legacy full-stack tests, which are no longer actively developed for various reasons, as well as smoke tests that cover some basic user interface (UI) paths in our web application and run every time a deployment occurs.

Some parts of our codebase also use mypy to ensure the correct usage of type hints.
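As a small illustration of the kind of mistake mypy catches at check time (the function below is hypothetical):

```python
# A hypothetical function, just to show what mypy flags statically.
def recordings_per_day(total: int, days: int) -> float:
    return total / days

ok = recordings_per_day(1000, 7)
# recordings_per_day("1000", 7)
# ^ mypy: Argument 1 has incompatible type "str"; expected "int"
```

Running mypy over the file flags the commented-out call before it ever reaches production, rather than failing at runtime.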

When it comes to code coverage, our backend coverage hovers around 90%. But we don’t chase coverage as a metric: it’s too low-level a signal to tell us whether our tests help us ship high-quality work effectively.

We prefer to look at the total cycle time and amount of rework needed. We also rely on our company's core values to conduct a cost-benefit analysis of our investment in testing: are we being bold and moving fast? Are we putting our customers at the heart of everything?

Question 3: how do you handle builds and releases, and how often do you release?

For our monolith, every merge request (MR) opened runs all the tests and spins up a separate environment (we call these review environments) so changes can be tested in isolation.

As for releases, any engineer can release once they feel confident. All we require is MR approval from a colleague and that the release happens within our deployment window, around 9 am–5:30 pm CET, Monday to Friday. This way, colleagues are available to help in the event of an incident, and people’s time off is respected.

We use automation to deploy a colleague’s MR, monitor it for 10 minutes, and then merge it if all looks healthy, or tell the author to roll back otherwise.
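A minimal sketch of that flow, assuming hypothetical deploy(), error_rate(), notify(), merge(), and roll_back() helpers (the post doesn’t describe our actual tooling in this detail):

```python
import time

# All helpers referenced here (deploy, error_rate, notify, merge,
# roll_back) are hypothetical names for whatever does the real work.
MONITOR_WINDOW_SECONDS = 10 * 60   # the 10-minute watch described above
CHECK_INTERVAL_SECONDS = 30
ERROR_RATE_THRESHOLD = 0.01

def release(merge_request) -> None:
    deploy(merge_request)
    deadline = time.monotonic() + MONITOR_WINDOW_SECONDS
    while time.monotonic() < deadline:
        if error_rate() > ERROR_RATE_THRESHOLD:
            notify(merge_request.author, "Elevated errors: please roll back")
            roll_back(merge_request)
            return
        time.sleep(CHECK_INTERVAL_SECONDS)
    merge(merge_request)  # ten quiet minutes: safe to merge
```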

As for how often we release?

On our monolith, we tend towards around 10–15 releases a day. In total (that is, including services and the front end), we release around 38–45 times a day.

For feature releases, we have a homegrown feature flag system that lets us show or hide functionality for specific users.
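Conceptually, a flag check can be as simple as the sketch below; this in-memory version is only an illustration, not our homegrown system:

```python
# An in-memory illustration only; the real system differs.
FLAGS: dict[str, set[str]] = {
    "new-dashboard": {"user-123", "user-456"},  # users who see the feature
}

def is_enabled(flag: str, user_id: str) -> bool:
    return user_id in FLAGS.get(flag, set())

def render_dashboard(user_id: str) -> str:
    # Show or hide functionality per user based on the flag.
    if is_enabled("new-dashboard", user_id):
        return "new dashboard"
    return "old dashboard"

assert render_dashboard("user-123") == "new dashboard"
assert render_dashboard("user-999") == "old dashboard"
```

The payoff is that deploying code and releasing a feature become separate decisions: the flag flips independently of the deploy.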

Are you a future Hotjarian?

If you want to be part of the Hotjar team, we’re sure you have even more questions. Check out our Careers page and experience our six-step Engineering recruitment process for yourself. We can’t wait to meet you.

Ready for your dream job?

Join Hotjar's growing team and help us build the best digital experience insights platform. 🚀