Some questions about how to structure a Django deployment

Hi

I'm planning on deploying a project. To keep this post from getting too long, you can find more context about the main idea of the project in a question I asked earlier on this forum here

So far, I only have experience deploying some Django projects (university projects) on Render.com free instances and on AWS EC2 free-tier machines. For the project I have now, I will be using:

  • Django (with DRF)
  • A postgres DB
  • Redis
  • Celery and celery-beat

I checked a few cloud providers and decided to go with DigitalOcean for the moment, because I found AWS (and other options like GCP) more confusing to use and their pricing harder to understand, and also because DO looks a bit cheaper.

So my original idea was to deploy everything on DigitalOcean like this:

  • DO App Platform for the Django server
  • A managed Postgres DB instance
  • A managed Redis instance
  • A droplet (a normal VM) to run the Celery workers and celery-beat

I have two main questions here.

What do you think about this setup?

I have seen in some subreddits that people recommend just setting the whole thing up (Celery, the DBs, and the main server) with, for example, Docker Compose, and running all the containers in one VM. This approach seems cheaper, but my newbie brain tells me it may not be as good as the option I mentioned above. I could be completely wrong, though, so what do you think about this approach? Which one should I go with?
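For reference, the single-VM Compose approach people describe usually looks something like this (just a minimal sketch to show the shape of it; the service names, image tags, and the `myproject` module are placeholders, not my actual config):

```yaml
# docker-compose.yml — all services on one VM (illustrative only)
services:
  web:
    build: .
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
    depends_on: [db, redis]
  worker:
    build: .
    command: celery -A myproject worker -l info
    depends_on: [db, redis]
  beat:
    build: .
    command: celery -A myproject beat -l info
    depends_on: [redis]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
volumes:
  pgdata:
```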

If you have any other tips, tricks, or recommendations I could use, that would be very helpful.

Thanks a lot, guys.

Hello there!
My rule of thumb for deployment is always:
Start small and simple, and upgrade later if you need to.

At my company we do a lot of MVPs, often a mobile app with exactly the same backend setup as yours. For the backend, we generally use one VM for Django, Celery, and Redis, and set up the database on a managed service like RDS (on AWS), because we don't have any DBAs to handle backups, performance tuning, and other scenarios that we simply don't want to take care of.
With this setup it is pretty easy to scale vertically (adding more RAM and CPU). If you ever get to a point where 8-16 CPU cores aren't handling the incoming request load, you can always optimize a few things: caching, inspecting bad queries (N+1 especially). If that is still not enough, then you have such a big user base that your system would require a team to keep it up anyway. That's when you can start thinking about decoupling your services onto separate resources. For example, it may be wise to move your Celery workers to a machine separate from your Django machine, so each keeps more resources to itself (I said machine here, but you can use some serverless offering from your cloud provider as well). At that point, if you're also using Redis as your broker, you'll need to make Redis reachable from all your machines, or use a managed service from your cloud provider (e.g., SQS on AWS).
You get the idea: tackle one problem at a time, and don't try to get ahead of the future; you won't be able to guess every aspect of it correctly.


In addition to the excellent advice above, there’s also the question of how large your budget is for on-going management and maintenance of the system.

Doing everything within a single system does work - and works well (that’s how we do it). However, managing such a system does take time and knowledge.

One benefit of paying for managed DB and redis instances is that you (your staff) don’t have to spend the time managing them. So a key question is whether you are going to have those resources available on an on-going basis to keep your system fully functional. The less time you’re going to have, the better off you’re going to be by paying someone else to take care of those things.

Keep in mind that unlike a site consisting of static pages, any site using an “active” framework requires continual effort. You cannot deploy something and then forget about it - and expect it to remain operational for the indefinite future.


Hi @KenWhitesell and @leandrodesouzadev

I think I'll go with the approach you recommended: a dedicated DB instance and one VM for the rest of the stack.

I have just a couple more questions.

Assuming my busiest times may be around 20-30 requests per second, where the main endpoint (hit by the clients' long polling) will simply make a string comparison, look up a value in Redis, and return it:
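To make the endpoint concrete, here is roughly the logic I mean, sketched in plain Python with a dict standing in for Redis (the key prefix, timings, and return shape are just illustrative, not my real code):

```python
import time

def poll_status(key, cache, prefix="job:", timeout=2.0, interval=0.05):
    """Long-poll sketch: validate the key with a string comparison,
    then wait for a value to appear in the cache (Redis in production)."""
    if not key.startswith(prefix):          # the string comparison
        return {"error": "invalid key"}
    deadline = time.monotonic() + timeout
    while True:
        value = cache.get(key)              # redis_client.get(key) with real Redis
        if value is not None:
            return {"status": value}
        if time.monotonic() >= deadline:
            return {"status": "pending"}    # client polls again later
        time.sleep(interval)
```

So each request is cheap: a prefix check plus one Redis GET per loop iteration.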

From your experience, do you think a droplet (VM) like this would be sufficient for the case I mentioned?

  • Django, Redis, and Celery VM: 4 GB RAM, 2 vCPUs, and 4000 GiB of data transfer.

Could this VM handle the load with just 2 GB of RAM, or should I leave it at 4?
Do you think the CPU for the Redis, Django, and Celery VM is enough?
Do you recommend installing Redis directly, or should I use Docker?

Thanks a lot for the help, guys.

There’s no generic pat answer for this. You should perform load testing and collect runtime characteristics for your project to determine what size environment will best suit your needs.
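Purpose-built tools like Locust, k6, or Apache Bench (`ab`) are the usual choice for real load tests, but the basic idea fits in a few lines of standard-library Python. This toy harness spins up a throwaway local server (standing in for your deployed endpoint) and fires concurrent requests to measure latency; the request counts and concurrency level are arbitrary examples:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    """Trivial stand-in for the endpoint under test."""
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *_):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def timed_request(_):
    """Issue one GET and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    with urlopen(f"http://127.0.0.1:{port}/") as resp:
        resp.read()
    return time.perf_counter() - start

# 200 requests, 20 at a time, roughly simulating concurrent long-pollers
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = sorted(pool.map(timed_request, range(200)))

server.shutdown()
p95_ms = latencies[int(len(latencies) * 0.95)] * 1000
print(f"p95 latency: {p95_ms:.1f} ms")
```

Pointing something like this (or better, a real tool) at a staging droplet at your expected 20-30 req/s will tell you far more about the 2 GB vs. 4 GB question than any rule of thumb.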
