SQLite Forum

deploying 1_000_000 SQLite DBs on AWS?
Suppose, hypothetically, you take the view of "one db per user" and need to deploy 1_000_000 *.db files on AWS. What would the correct architecture be for such a problem?

The best solution I have so far (a rough code sketch follows the list) is:

  1. store each SQLite DB as a separate file in S3, at s3/user_###_###.db

  2. when a user makes a request, the cluster checks whether their db has already been pulled down locally; if so, it routes the request to that machine; if not, it picks a machine and has it download s3/user_###_###.db

  3. the machine that has the relevant db locally handles the requests

  4. after 5 minutes of inactivity (or some kind of LRU policy), the local db is copied back to S3
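For concreteness, here is a minimal sketch of steps 1-4 from a single node's point of view, assuming boto3 for the S3 calls. The bucket name, node list, cache directory, and key scheme are all hypothetical placeholders, and the routing is a plain hash rather than anything production-grade:

```python
# Minimal sketch of the pull/route/write-back cycle. Every name here
# (bucket, node list, cache dir) is a placeholder, not a recommendation.
import hashlib
import os
import threading
import time

import boto3

BUCKET = "my-sqlite-dbs"           # hypothetical S3 bucket
CACHE_DIR = "/var/cache/sqlite"    # hypothetical local cache directory
NODES = ["10.0.0.1", "10.0.0.2"]   # hypothetical cluster membership
IDLE_SECONDS = 300                 # step 4: write back after ~5 min idle

s3 = boto3.client("s3")
_lock = threading.Lock()
_last_used: dict[str, float] = {}  # user_id -> last-access time on this node

os.makedirs(CACHE_DIR, exist_ok=True)

def s3_key(user_id: str) -> str:
    # Mirrors the s3/user_###_###.db naming from step 1.
    return f"user_{user_id}.db"

def pick_node(user_id: str) -> str:
    # Step 2's "pick a machine": hash the user id so the same user always
    # lands on the same node, keeping a single writer per db file.
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

def acquire_db(user_id: str) -> str:
    # Steps 2-3: download the db from S3 on a cache miss, then hand the
    # local path to whatever actually opens the SQLite connection.
    path = os.path.join(CACHE_DIR, s3_key(user_id))
    with _lock:
        if not os.path.exists(path):
            s3.download_file(BUCKET, s3_key(user_id), path)
        _last_used[user_id] = time.time()
    return path

def evict_idle() -> None:
    # Step 4: copy dbs that have been idle past the threshold back to S3
    # and drop the local copy. Run this from a periodic background task.
    now = time.time()
    with _lock:
        for user_id, last in list(_last_used.items()):
            if now - last > IDLE_SECONDS:
                path = os.path.join(CACHE_DIR, s3_key(user_id))
                s3.upload_file(path, BUCKET, s3_key(user_id))
                os.remove(path)
                del _last_used[user_id]
```

The deterministic pick_node is the part the naive version most needs: if two machines ever hold the same db and both write it back, one copy silently wins and the other's writes are lost, since S3 offers no append or merge.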

==========

This naive strategy clearly has too many flaws to enumerate, but it describes the gist.

Is there a best-practices guide for deploying millions of SQLite DBs on S3? For those with similar experience, any advice?