I am building a Rails backend for an iPhone app.
After profiling my application, I have found the following call to be especially expensive in terms of performance:
@messages.as_json
This call returns about 30 message objects, each including many child records. As you can see, composing the JSON for a single message can require many DB calls:
def as_json(options = {})
  super(:only => [...],
        :include => {
          :user => {...},
          :checkin => {...},
          :likes => { :only => [...],
                      :include => { :user => {...} } },
          :comments => { :only => [...],
                         :include => { :user => { :only => [...] } } }
        },
        :methods => :top_highlight)
end
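One likely contributor to the slow misses is the N+1 queries triggered by the nested :include options. A sketch of eager loading when @messages is built, assuming the association names mirror the serializer keys above (:user, :checkin, :likes, :comments — a guess, not confirmed against the real models):

```ruby
# Hypothetical controller query: association names are assumed from the
# as_json :include keys, and the order/limit are placeholders.
@messages = Message.order(:created_at)
                   .limit(30)
                   .includes(:user, :checkin,
                             :likes    => :user,
                             :comments => :user)
```

With the associations preloaded, each message's as_json reads from memory instead of issuing its own queries.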
On average, the @messages.as_json call (all 30 objects) takes almost 1100ms.
Wanting to optimize, I've employed memcached. With the solution below, when all my message objects are in cache, the average response is now 200-300ms. I'm happy with this, but the issue is that it has made cache-miss scenarios even slower. In cases where nothing is in cache, it now takes over 2000ms to compute.
# Note: @messages has the 30 message objects in it, but none of the child records have been grabbed
@messages.each_with_index do |m, i|
  @messages[i] = Rails.cache.fetch("message/#{m.id}/#{m.updated_at.to_i}") do
    m.as_json
  end
end
I understand that there will have to be some overhead to check the cache for each object. But I'm guessing there is a more efficient way to do it than my current approach, which is basically serial, one-by-one lookups. Any pointers on making this more efficient?
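For reference, the per-object round trips can be collapsed with Rails.cache.read_multi, which fetches many keys in a single call, leaving only the misses to serialize and write back. Below is a runnable sketch of that pattern; MemoryCache and the Struct-based Message are stand-ins so it runs outside Rails (assumptions, not the app's real classes) — in the app, `cache` would be Rails.cache, which exposes the same read_multi/write interface, and the key scheme matches the loop above:

```ruby
# Minimal in-memory stand-in for Rails.cache, used here only so the
# batching pattern is runnable outside a Rails app.
class MemoryCache
  def initialize
    @store = {}
  end

  # Like Rails.cache.read_multi: returns a hash of only the keys found.
  def read_multi(*keys)
    @store.slice(*keys)
  end

  def write(key, value)
    @store[key] = value
  end
end

# Stand-in for the ActiveRecord model whose as_json call is expensive.
Message = Struct.new(:id, :updated_at) do
  def as_json
    { "id" => id }
  end
end

# Read all cached entries in one round trip, then serialize (and cache)
# only the messages that missed.
def fetch_messages_json(messages, cache)
  keys = messages.map { |m| "message/#{m.id}/#{m.updated_at.to_i}" }
  hits = cache.read_multi(*keys)
  messages.zip(keys).map do |m, key|
    hits[key] || m.as_json.tap { |json| cache.write(key, json) }
  end
end
```

Later Rails versions (4.1+) bundle this read-then-fill pattern directly as Rails.cache.fetch_multi.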