Zerossl / Letsencrypt client library with auto-renewal management

Zerossl is an Elixir library that automatically manages and renews your Zerossl and Letsencrypt certificates natively, without the need for extra applications such as the acme.sh bash script or the certbot client. The client implements the ACME (v2, RFC 8555) http-01 challenge authentication mechanism to issue and renew a genuine certificate against the configured provider.

Installation

If available in Hex, the package can be installed by adding zerossl to your list of dependencies in mix.exs:

def deps do
  [
    {:zerossl, "~> 1.1.2"}
  ]
end

The x509 library leveraged for key management has some OTP version requirements to match. Read here for more information.

Configuration

In your config.exs or prod.exs add the following config:

config :zerossl,
  provider: :letsencrypt,
  cert_domain: "myfancy-domain.com",
  certfile: "./cert.pem",
  keyfile: "./key.pem"


The key and certificate are always stored on the filesystem to avoid regenerating them on reboot.

Additional optional config

The :user_email and :account_key settings are not required for providers that do not require External Account Binding (EAB), such as Letsencrypt. When the provider requires EAB and neither of these keys is configured, the application raises an exception.
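For an EAB-enabled provider, the configuration would include both keys. A sketch, assuming the provider atom is :zerossl and using placeholder values for the email and account key:

```elixir
config :zerossl,
  provider: :zerossl,
  cert_domain: "myfancy-domain.com",
  certfile: "./cert.pem",
  keyfile: "./key.pem",
  user_email: "you@example.com",
  account_key: "your-eab-account-key"
```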

Instructions

Prerequisites

  1. docker
  2. python3 virtual env (for tests)

Virtual env

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Test

python -m unittest discover tests

Coverage

Replace firefox with the browser of your choice.

pip install coverage
coverage run -m unittest discover tests
coverage report
coverage html
firefox htmlcov/index.html

Build

./build.sh

Run

./run.sh

Additional context

Why flask

I had initially implemented the API by subclassing BaseHTTPRequestHandler, essentially to keep requirements.txt empty (an approach I generally like).

After all, this was supposed to be for a friend, so whatever works...

Then I saw you mentioned Flask in test.sh, so I "upgraded" to it just to demo its usage (and pip usage along with it).

Persistence (since you mentioned it)

I added some sqlite3 persistence to avoid having to re-submit the CSVs on restart and to keep track of the records. At the same time, for the sake of tests, I opted for a private "flush" API to clean the database. A less invasive approach would have been to physically delete the database file. In any case, that path can be kept private and excluded by a hypothetical reverse proxy.

Even though I keep the records in the database, I also maintain instant balance sums to avoid iterating over all records every time I have to calculate the balance. This comes at the cost of keeping two sources of truth (the database and the in-memory sums). In case of misalignment, restarting the service will fix it (somehow).
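The two-sources-of-truth idea above can be sketched as follows. This is a minimal illustration, not the project's actual schema or class names: the cached sum is rebuilt from the database on startup (which is why a restart fixes any misalignment) and updated in lockstep with each insert.

```python
import sqlite3

class BalanceTracker:
    """Illustrative sketch: sqlite records plus an in-memory running sum."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS records (amount REAL)")
        # Rebuild the cached balance from the database on startup, so a
        # restart realigns the in-memory sum with the persisted records.
        row = self.db.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM records"
        ).fetchone()
        self.balance = row[0]

    def add(self, amount):
        # Update both sources of truth in lockstep: the database row
        # and the in-memory running sum.
        self.db.execute("INSERT INTO records (amount) VALUES (?)", (amount,))
        self.db.commit()
        self.balance += amount
```

Reads then return `self.balance` directly instead of re-scanning the table.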

Approach to partial failures

When uploading data from CSV, if there are invalid lines (as in the example), the record is skipped and a warning is logged. This logic might not be OK, because if the invalid line was not a comment but a malformed record, the user would not be notified.
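The skip-and-warn policy can be sketched like this. Column layout and function names are illustrative, not the project's actual parser; the point is that malformed rows are logged and collected rather than aborting the whole upload.

```python
import csv
import io
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("upload")

def parse_rows(text):
    """Parse CSV text, skipping comments and warning on malformed rows."""
    good, bad = [], []
    for lineno, row in enumerate(csv.reader(io.StringIO(text)), start=1):
        if not row or row[0].startswith("#"):
            continue  # blank line or comment
        try:
            # Illustrative two-column layout: a label and a numeric amount.
            good.append((row[0], float(row[1])))
        except (IndexError, ValueError):
            # Malformed record: warn and remember the line number instead
            # of failing the whole upload.
            log.warning("skipping malformed line %d: %r", lineno, row)
            bad.append(lineno)
    return good, bad
```

Returning `bad` alongside the parsed rows is also what would let the API report failed lines back to the user in the HTTP response.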

Shortcomings

  1. I'm serving the app directly, not behind nginx/gunicorn etc., which is not ideal architecturally

  2. I used SQLite to keep things simple, since the amount of data is supposedly small. For higher volume there might be better choices (e.g. Postgres, Redis)

  3. Partial failures are allowed. In the real world this might not be OK, and there might be a need for more pre-validation or, in the worst case, a rollback. For APIs of this kind I like the apply/commit approach, where you can stage your changes live and see the result, and if things look OK you can consolidate/persist them (for example, I do this for networking at work, to avoid losing the VMs, along with a confirmation API to prove you can still reach the VM after a live network reconfiguration).

If I had more time I would

  1. add a proper logging framework
  2. add more testcases if possible (maybe completing coverage of tax_tracker.py with missed lines 51 and 52)
  3. add more error handling and pre-validation to the API endpoints
  4. reduce the amount of code within try/except statements if possible
  5. add more comments/doc to the code
  6. add a docker-compose yaml
  7. return a structured log of the CSV lines that failed to parse in the data upload API's HTTP response.