What is the best folder structure for a Django project?

What is the best folder structure for a Django project so as to make these things easy:

  • Testing
  • Separating prod/dev settings
  • Managing templates
  • Easy deployment
  • Easy handling of static/media files
  • Any other best practices

I recommend using cookiecutter-django; it generates a well-organized project structure out of the box.
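
To try it, the usual cookiecutter invocation looks like this:

pip install cookiecutter
cookiecutter https://github.com/cookiecutter/cookiecutter-django

It then prompts you for project options (name, database, whether to use Docker, and so on) and generates the project tree.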

The best folder structure for templates namespaces them per app:

mysite/
    polls/
        templates/
            polls/
                base.html

You can also delete the second polls folder and keep templates directly under templates/, but the first, namespaced structure is recommended: Django's template loader searches every installed app's templates/ directory, so the inner polls/ folder prevents name collisions between apps.
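
For the prod/dev settings split the question mentions, a common pattern (a sketch; the module names are my assumption, not a Django requirement) is to replace settings.py with a settings package:

mysite/
    mysite/
        settings/
            __init__.py
            base.py          # settings shared by all environments
            dev.py           # local development overrides
            production.py    # production overrides

Each environment module starts from the shared base:

# mysite/mysite/settings/dev.py -- a minimal sketch
from .base import *  # noqa: F401,F403

DEBUG = True
ALLOWED_HOSTS = ["localhost", "127.0.0.1"]

You then pick the module per environment, e.g. DJANGO_SETTINGS_MODULE=mysite.settings.production, which also fits the Docker-based deployment described below.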

Well, first I’m going to make a Dockerfile based on Alpine, with uWSGI handling static files. I prefer not to have configuration files, so I tend to have long command lines, but you could do otherwise if you wanted. Also note that djcli dbcheck waits for the DB to be ready before Django starts, and that the --spooler option is required for djcall. It will look something like this:

CMD /usr/bin/dumb-init bash -euxc "djcli dbcheck && ./manage.py migrate --noinput \
  && uwsgi \
  --spooler=/spooler \
  --http-socket=0.0.0.0:8000 \
  --chdir=/app \
  --plugin=python,http,router_static,router_cache \
  --module=$UWSGI_MODULE \
  --http-keepalive \
  --harakiri=120 \
  --max-requests=100 \
  --master \
  --processes=12 \
  --chmod=666 \
  --log-5xx \
  --vacuum \
  --enable-threads \
  --post-buffering=8192 \
  --ignore-sigpipe \
  --ignore-write-errors \
  --disable-write-exception \
  --mime-file /etc/mime.types \
  --offload-threads '%k' \
  --file-serve-mode x-accel-redirect \
  --route '^/static/.* addheader:Cache-Control: public, max-age=7776000' \
  --static-map $STATIC_URL=$STATIC_ROOT \
  --static-gzip-all \
  --cache2 'name=statcalls,items=100' \
  --static-cache-paths 86400 \
  --static-cache-paths-name statcalls"
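
For context, the surrounding Dockerfile could look something like the sketch below; the base image tag, the apk plugin package names and the paths are assumptions on my part:

FROM alpine:3
# uWSGI plugins are packaged separately on Alpine; exact package names vary by release
RUN apk add --no-cache bash dumb-init python3 py3-pip \
    uwsgi uwsgi-python3 uwsgi-http uwsgi-router_static uwsgi-router_cache
ENV UWSGI_MODULE=mysite.wsgi:application \
    STATIC_ROOT=/app/static \
    STATIC_URL=/static/
WORKDIR /app
COPY requirements.txt .
# requirements.txt would pin Django and djcli, among others
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN ./manage.py collectstatic --noinput
# ... followed by the CMD shown above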

But that means that CI should build the image. Here is an example with GitLab CI, relying on the predefined environment variables to push to the registry of the repo by default:

build:
  image: docker:dind
  stage: build
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME

Then, some docker-compose files to add other services and not be limited to deploying just a Django container (an example override file follows the list):

  • docker-compose.yml: your base services
  • docker-compose.override.yml: overrides for localhost development
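
For instance, the override file could look something like this; a sketch in which the service name backend is my assumption:

# docker-compose.override.yml
version: "3.7"
services:
  backend:
    build: .
    command: ./manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"

docker-compose merges docker-compose.override.yml automatically, so a plain docker-compose up brings up the development stack.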

Of course, I don’t have rabbitmq nor memcached in them, because uWSGI provides such features natively (the spooler and cache2 options above). But I do need to make sure that the Django/uWSGI docker-compose service carries labels, so that Traefik will be able to route requests to it, and a variable image so that I can change it:

    image: ${IMAGE}
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host: ${HOST}"
      - "traefik.port=8000"

The first thing I’m going to do on a new server is run a couple of commands:

  • bigsudo yourlabs.ssh @somehost: installs my keys, hardens the configuration …
  • bigsudo yourlabs.traefik @somehost: installs docker and traefik
  • for every user I want to add by their GitHub SSH key: bigsudo yourlabs.ssh adduser username=someuser @somehost
  • I also deploy other Ansible roles, such as monitoring, firewall and the like, but they are not necessary to get started.

Then, I can deploy the project with one command:

export HOST=staging.somehost
export IMAGE=$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
docker-compose up

Note that this will work if your CI server hosts staging.somehost. It’s extremely fast when you run it on the server that built the image, because docker then does not have to pull the image from anywhere: great for ephemeral (per-branch) deployments. I will show those here because, for me, they are absolutely required to maintain a clean master branch: a developer should not need to push unfinished work to master to show ongoing development to the product team. For example, with this config:

review-deploy:
  image: yourlabs/docker
  stage: test
  artifacts:
    paths:
      - target
  environment:
    name: test/$CI_COMMIT_REF_NAME
    url: http://${CI_ENVIRONMENT_SLUG}.ci.somehost
    on_stop: review-stop  # thanks to this, gitlab will auto-destroy deployment on merge
  script:
    - export HOST=${CI_ENVIRONMENT_SLUG}.ci.somehost
    - export IMAGE=$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
    - docker-compose --project-name $CI_ENVIRONMENT_SLUG up --detach --no-build
  after_script:
    - docker-compose --project-name $CI_ENVIRONMENT_SLUG logs
    - docker-compose --project-name $CI_ENVIRONMENT_SLUG ps
  except:
    refs:
      - master
      - staging
      - production
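
The review-stop job that on_stop points to is not shown above; a minimal sketch, assuming the same compose project naming, could be:

review-stop:
  image: yourlabs/docker
  stage: test
  when: manual
  environment:
    name: test/$CI_COMMIT_REF_NAME
    action: stop
  script:
    - docker-compose --project-name $CI_ENVIRONMENT_SLUG down --volumes
  except:
    refs:
      - master
      - staging
      - production

GitLab runs it when the environment is stopped, e.g. when the branch is merged or deleted.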

Anyway, adding more is also fine; you can combine them with docker-compose’s -f flag, as shown after this list:

  • docker-compose.ephemeral.yml: does not rely on any filesystem volume, for per-branch temporary deployments,
  • docker-compose.persistent.yml: relies on having a home directory on the filesystem to persist data into, for staging/production,
  • and so on, feel free to combine.
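
A sketch of how the -f flag combines the files; later files override earlier ones, so order matters:

# ephemeral, per-branch deployment
docker-compose -f docker-compose.yml -f docker-compose.ephemeral.yml up --detach

# staging/production, with persistent volumes
docker-compose -f docker-compose.yml -f docker-compose.persistent.yml up --detach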

But then CI needs to deploy to production, which should be on another server; in this case I use bigsudo again:

production:
  image: yourlabs/ansible
  stage: production
  environment:
    name: production
    url: https://prod.somehost
  before_script:
    - mkdir -p ~/.ssh; echo "$PROD_SSH_KEY" > ~/.ssh/id_ed25519; echo "$SSH_FINGERPRINTS" > ~/.ssh/known_hosts; chmod 700 ~/.ssh; chmod 600 ~/.ssh/*
  script:
    - export $(echo $PROD_EMAIL_SETTINGS | xargs)
    - export $(echo $PROD_ENV | xargs)
    - HOST=prod.somehost
      bigsudo yourlabs.compose
      compose=docker-compose.yml,docker-compose.production.yml
      compose_backend_image=$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
      compose_backend_build=
      docker_auth_username=$CI_REGISTRY_USER
      docker_auth_password=$CI_REGISTRY_PASSWORD
      docker_auth_registry=$CI_REGISTRY
      home=/home/production
      deploy@prod.somehost
      -v | tee out
    - grep unreachable=0 out
    - grep failed=0 out

This is really a small extract of the simple but sophisticated setup I like.
