Scalability and Performance Improvements in Backend


Written by Martin Führlinger, Backend Engineer

At Runtastic we see traffic on our servers rise every year. We typically have a higher volume of requests in spring and summer than in autumn and winter. Throughout the year, especially in spring and early summer, there are many sporting events, like marathons, the Wings for Life World Run, Run For The Oceans, and other campaigns with lots of participants locally or around the world.

As backend engineers, we have to keep an eye on the health of our services, and we need to scale and improve our running system every year. Scaling can usually be done either by increasing the number of servers handling the requests or by making the requests faster. Both actions increase the number of requests that can be handled per minute.

Adding more servers and workers sounds easy, and it is for a backend developer in our setup, because it is done by the OPS team. More importantly though, hardware resources are also necessary. And as we all know, hardware resources cost money.

So, the cheaper way is to scale the requests by making them faster. Depending on the implementation, this can either be easy, if we find low-hanging fruit, or complex, if we rewrite parts of the code.

This spring my colleague Martin Landl and I invested some time in improving our core services, which I want to share. Since we use New Relic to analyze and monitor our services, we were able to see the improvements a few minutes after deploying the changes.


Caching

Whenever you read about performance improvements, caching is a big topic. Caching means that you store a value or an object you read or calculated before in an easily accessible storage system (like memcached), so that you don't have to fetch or calculate it again. This works in many situations. So, we added some more caching (we were, of course, already using caching).

Our clients make several requests within a few minutes during startup, to fetch running sessions, user statistics, user data, News Feed and other things. Since all of these requests pass our central gateway, we were able to save about 75% of the database calls by just adding simple caching of the current user, which is used for authentication. Today it reduces the database queries by about 60,000 per minute.


The code around this is simply:

identities_cache.fetch(cache_key) do
  identity = …
  identities_cache.store(cache_key, identity) if identity
end
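As a runnable illustration of this fetch-or-store pattern, here is a minimal sketch using a plain in-memory hash in place of the real memcached-backed cache; the class, the loader, and the cache key are made up for this example:

```ruby
# Minimal read-through cache: look the value up in the cache first,
# fall back to the expensive loader block only on a miss.
class IdentitiesCache
  def initialize
    @store = {}
  end

  def fetch(cache_key)
    return @store[cache_key] if @store.key?(cache_key)

    identity = yield # expensive lookup, e.g. a database query
    store(cache_key, identity) if identity
    identity
  end

  def store(cache_key, identity)
    @store[cache_key] = identity
  end
end

cache = IdentitiesCache.new
lookups = 0
loader = -> { lookups += 1; { user_id: 42 } }

cache.fetch("token:abc", &loader) # miss: loader runs once
cache.fetch("token:abc", &loader) # hit: served from the cache
```

Every request after the first one is served from the cache, so the database is only hit once per cache key.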

In another service (this time Java-based) we were able to cache some training plan meta information, which is basically static data, to reduce the response time by almost 50%. This was done by just adding a few lines of code in a few dedicated classes:

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)


Caching promotion data, which only changes every few months, results in a few hundred requests to the database instead of almost 5,000.
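One way to sketch caching of rarely changing data like this is a simple time-based (TTL) cache; the class name and the one-hour TTL here are illustrative, not our actual implementation:

```ruby
# Cache a computed value and reuse it until it expires.
class TtlCache
  def initialize(ttl_seconds)
    @ttl = ttl_seconds
    @value = nil
    @expires_at = nil
  end

  def fetch
    now = Time.now
    if @expires_at.nil? || now >= @expires_at
      @value = yield # e.g. load the promotions from the database
      @expires_at = now + @ttl
    end
    @value
  end
end

db_calls = 0
promotions = TtlCache.new(3600) # refresh at most once per hour

2.times { promotions.fetch { db_calls += 1; ["spring_campaign"] } }
# only the first fetch hits the database
```

For data that changes every few months, even a short TTL collapses thousands of database reads into a handful of refreshes.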

Although many database calls or calculations can be avoided by caching, quite often caching is not the way to go. Another good improvement is to avoid unnecessary work.

Avoid unnecessary work

In our services we often use hooks, which automatically run on every save or update of an entity. Some examples are: calculating the plausibility of a running session, calculating the fastest paths within a session to be able to compute the records (fastest 5K, fastest half marathon, …), geocoding and many other things. Using these hooks is quite nice, as you don't have to take care of this code in every use case. But in some cases this extra work is avoidable, because it just doesn't make sense.

By removing the plausibility calculation during a session that is still live (live tracking), we saved about 15 ms per live session update request, which is actually 20% of the request (~75 ms down to about 60 ms). This calculation is unnecessary, because the average speed and other values of that session are not final yet, as it is still live.

As the live update request is the most frequent request we have in this service (with up to 60,000 requests per minute), this even leads to an overall reduction of the response time in that service:

And the best part of this improvement is that it is just one line of code in the right place:

return true if session.type == "run_session" && session.live_tracking_active

Another improvement was reducing the amount of code executed. This includes database calls: prefetching the entities and using them from a local variable, instead of fetching them multiple times. In this case we have a request returning training plan information, which is part of the response in addition to the sessions. If the response contains a lot of sessions with a training plan assigned, this led to many requests to the database, as we fetched the training plan information per session. On average we had about 36 database queries per request, but there were some requests with far more … like this one with 1089 database queries.

Looking at the histogram of that request also shows that there are a lot of requests which are quite fast, but still plenty which are quite slow because of this.

The improved code fetches the training plan information for all sessions at once and stores it in a local variable. Assigning the information to each session then uses this local data. After deploying the improvement we see far fewer database calls on average (5.4) and a much better histogram.
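The difference between per-session lookups and one batched lookup can be sketched like this; the query counter, the data shapes, and the two fetch functions are simplified stand-ins for the real ORM calls:

```ruby
# Simulated database: counts how many queries we issue.
QUERY_COUNT = { n: 0 }

TRAINING_PLANS = { 1 => "5k beginner", 2 => "half marathon" }

def fetch_plan(plan_id) # one query per call (the N+1 pattern)
  QUERY_COUNT[:n] += 1
  TRAINING_PLANS[plan_id]
end

def fetch_plans(plan_ids) # one query for all ids at once
  QUERY_COUNT[:n] += 1
  TRAINING_PLANS.slice(*plan_ids.uniq)
end

sessions = [{ id: "a", plan_id: 1 }, { id: "b", plan_id: 2 }, { id: "c", plan_id: 1 }]

# Before: one query per session.
slow = sessions.map { |s| s.merge(plan: fetch_plan(s[:plan_id])) }
per_session_queries = QUERY_COUNT[:n]

# After: prefetch once, then assign from the local variable.
QUERY_COUNT[:n] = 0
plans = fetch_plans(sessions.map { |s| s[:plan_id] })
fast = sessions.map { |s| s.merge(plan: plans[s[:plan_id]]) }
batched_queries = QUERY_COUNT[:n]
```

Both versions build identical responses, but the batched one issues a single query no matter how many sessions the response contains.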

Avoid unnecessary code

Although avoiding unnecessary work is the best improvement, as code that is never executed is definitely the fastest code, sometimes necessary code can also be improved rather easily. When analyzing our services, I was wondering about the time needed to render a big entity: an entity with a lot of attributes took quite some time to render correctly to JSON. Digging into the serialization of a specific entity with a lot of attributes that was especially slow, I found these lines, which are used for 23 attributes in this entity:

def format_timestamp(ts)
  format_timestamp_value(@object.public_send(ts)) if @object.public_send(ts)
end

I thought it would be helpful not to read the attribute twice, so I changed these lines to this:

def format_timestamp(ts)
  val = @object.public_send(ts)
  format_timestamp_value(val) if val
end

As you can see, this is a rather simple change: just don't use the public_send method to get the value twice; instead, store the value in a local variable first. The impact was quite nice:

It reduced the rendering time of that huge entity (> 100 attributes) from around 40 ms to about 32 ms. That is 20% less time. As this was another quite central piece of code in this service, it also reduced the overall time from around 105 ms to about 93 ms.

The last improvement I want to mention is moving code so that it is executed when it is necessary, and not before. We calculate the fastest paths of a running session (as mentioned above: the fastest 5 kilometers, the fastest mile, the fastest half marathon and similar). This is already done on the mobile devices, but as there are imported sessions (e.g. from Garmin) and manual sessions too, we need to have the same logic in the backend. We also check whether the uploaded session already has all necessary values calculated, to be able to compute only the missing ones. Before the improvement we always fetched the trace outside the fastest paths calculation (we abstracted that into a gem) and passed it into the method. After changing it to just pass in a trace reader, which is able to fetch the trace when it is necessary, we saved up to 100 ms on average. So instead of around 150 ms per job, it now only takes around 50 ms on average.
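The idea of deferring the expensive fetch can be sketched with a small lazy reader object; the class name, the record check, and the fetch cost here are illustrative, not the gem's actual interface:

```ruby
# Fetches the trace only when someone actually asks for it, and at most once.
class TraceReader
  def initialize(&fetcher)
    @fetcher = fetcher
    @trace = nil
    @fetched = false
  end

  def trace
    unless @fetched
      @trace = @fetcher.call # expensive: e.g. load GPS points from storage
      @fetched = true
    end
    @trace
  end
end

fetches = 0

def fastest_paths(session, reader)
  # If the session already carries all records, the trace is never loaded.
  return session[:records] if session[:records]

  reader.trace.each_cons(2).count # placeholder for the real calculation
end

reader = TraceReader.new { fetches += 1; [[0, 0], [1, 1], [2, 2]] }

complete = { records: { fastest_5k: 1200 } }
fastest_paths(complete, reader)   # trace never fetched
incomplete = { records: nil }
fastest_paths(incomplete, reader) # trace fetched exactly once
```

Sessions that already arrive with all records computed never pay for the trace fetch at all, which is where the saved time comes from.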


To summarize our findings and ideas, I collected some guidelines we followed when searching for bottlenecks and potential improvements.

  • Improve code that is used often. Even if it only takes a few milliseconds, if it runs 2 billion times a day, that is still a lot (e.g. the plausibility calculation on every update).
  • Improve code that takes a long time (improve and avoid queries).
  • Look for data that is static and cache it (e.g. static data like training plan information, or data which is used often within a short time, like the current user in our central gateway).
  • It is not efficient to spend hours improving something that runs only once a day, speeding it up from e.g. 60 seconds to 30 seconds.
  • If a request issues dozens or hundreds of database queries, something strange is probably going on.

