github
html_url | issue_url | id | node_id | user | created_at | updated_at | author_association | body | reactions | issue | performed_via_github_app |
---|---|---|---|---|---|---|---|---|---|---|---|
https://github.com/simonw/sqlite-utils/issues/358#issuecomment-1255332217 | https://api.github.com/repos/simonw/sqlite-utils/issues/358 | 1255332217 | IC_kwDOCGYnMM5K0tl5 | 9599 | 2022-09-22T17:27:34Z | 2022-09-22T17:27:34Z | OWNER | I've been thinking about this more recently. I think the first place to explore these will be in the `create-table` command (and underlying APIs). Relevant docs: https://www.sqlite.org/lang_createtable.html#check_constraints > A CHECK constraint may be attached to a column definition or specified as a table constraint. In practice it makes no difference. Each time a new row is inserted into the table or an existing row is updated, the expression associated with each CHECK constraint is evaluated and cast to a NUMERIC value in the same way as a [CAST expression](https://www.sqlite.org/lang_expr.html#castexpr). If the result is zero (integer value 0 or real value 0.0), then a constraint violation has occurred. If the CHECK expression evaluates to NULL, or any other non-zero value, it is not a constraint violation. The expression of a CHECK constraint may not contain a subquery. Something like this: sqlite-utils create-table data.db entries id integer title text tags text --pk id --check tags:json Where `--check tags:json` uses a pre-baked recipe for using the SQLite JSON function to check that the content is valid JSON and reject it otherwise. Then we can bundle a bunch of other pre-baked recipes, but also support the following: --check 'x > 3' --check 'length(phone) >= 10' The design reason for the `column:recipe` format here is to reuse `--check` for both pre-defined recipes that affect a single column AND for freeform expressions that get added to the end of the table. Detecting `column name:recipe` with a regex feels safe to me. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1082651698 | |
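The `column:recipe` detection described in the comment above could be sketched in Python along these lines (hypothetical: `parse_check`, the `RECIPES` table, and the generated SQL are illustrative only, not part of sqlite-utils):

```python
import re

# Pre-baked recipes mapping a recipe name to a CHECK expression for
# a column. The names and generated SQL here are illustrative.
RECIPES = {
    "json": lambda col: f"json([{col}]) is not null",
}

# Matches "column name:recipe" — word characters and spaces in the
# column, a single colon, then a bare recipe name.
COLUMN_RECIPE_RE = re.compile(r"^(?P<column>[\w ]+):(?P<recipe>\w+)$")


def parse_check(value):
    """Turn one --check value into a SQL CHECK expression.

    'tags:json' applies the json recipe to the tags column; anything
    that does not look like column:recipe is passed through as a
    freeform SQL expression.
    """
    match = COLUMN_RECIPE_RE.match(value)
    if match and match.group("recipe") in RECIPES:
        return RECIPES[match.group("recipe")](match.group("column"))
    return value
```

Freeform expressions such as `x > 3` or `length(phone) >= 10` fall through the regex untouched, which is why detecting `column name:recipe` this way stays safe.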
https://github.com/simonw/sqlite-utils/issues/358#issuecomment-1255333969 | https://api.github.com/repos/simonw/sqlite-utils/issues/358 | 1255333969 | IC_kwDOCGYnMM5K0uBR | 9599 | 2022-09-22T17:29:09Z | 2022-09-22T17:29:09Z | OWNER | Quick demo of a check constraint for JSON validation: ``` sqlite> create table test (id integer primary key, tags text, check (json(tags) is not null)); sqlite> insert into test (tags) values ('["one", "two"]'); sqlite> insert into test (tags) values ('["one", "two"'); Error: stepping, malformed JSON (1) ``` | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1082651698 | |
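The same constraint can be exercised from Python's standard-library `sqlite3` module (assuming an SQLite build with the JSON functions compiled in, which is the default in modern builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "create table test (id integer primary key, tags text, "
    "check (json(tags) is not null))"
)

# Valid JSON satisfies the CHECK constraint
conn.execute("insert into test (tags) values (?)", ('["one", "two"]',))

# Malformed JSON makes json() raise, so the row is rejected
try:
    conn.execute("insert into test (tags) values (?)", ('["one", "two"',))
except sqlite3.Error as exc:
    print("rejected:", exc)
```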
https://github.com/simonw/sqlite-utils/issues/358#issuecomment-1255340974 | https://api.github.com/repos/simonw/sqlite-utils/issues/358 | 1255340974 | IC_kwDOCGYnMM5K0vuu | 9599 | 2022-09-22T17:34:45Z | 2022-09-22T17:34:45Z | OWNER | A few other recipes off the top of my head: - `title:maxlength:20` - set a max length, `length(title) <= 20` - `created:date` - check for `yyyy-mm-dd` date, `select :date == date(:date) is not null` ([demo](https://latest.datasette.io/_memory?sql=select+%3Adate+%3D%3D+date%28%3Adate%29+is+not+null&date=2022-01-01)) - `age:positiveint` - check `age` is a positive integer, `printf('%d', age) = age and age > 0` (untested) | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1082651698 | |
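A rough sketch of how the maxlength and date recipes translate into actual CHECK clauses, again via Python's `sqlite3` (the `is not null` wrapper matters for the date check: for an unparseable date, `date()` returns NULL, the comparison then evaluates to NULL, and per the docs quoted earlier a NULL CHECK result is *not* a violation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    create table entries (
        title text check (length(title) <= 20),
        created text check ((created = date(created)) is not null)
    )
    """
)

# A short title with a yyyy-mm-dd date passes both constraints
conn.execute("insert into entries values ('hello', '2022-01-01')")

# A 21-character title violates the maxlength check
try:
    conn.execute("insert into entries values (?, '2022-01-01')", ("x" * 21,))
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)

# An unparseable date makes date() return NULL, so the wrapped
# expression is 0 and the date check fails
try:
    conn.execute("insert into entries values ('ok', 'January 1st')")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```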
https://github.com/simonw/sqlite-utils/issues/358#issuecomment-1255341690 | https://api.github.com/repos/simonw/sqlite-utils/issues/358 | 1255341690 | IC_kwDOCGYnMM5K0v56 | 9599 | 2022-09-22T17:35:23Z | 2022-09-22T17:35:23Z | OWNER | Makes me think also that `sqlite-utils create-table` should have an option to dump out the SQL without actually creating the table. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
1082651698 | |
https://github.com/simonw/datasette/issues/1415#issuecomment-1255603780 | https://api.github.com/repos/simonw/datasette/issues/1415 | 1255603780 | IC_kwDOBm6k_c5K1v5E | 17532695 | 2022-09-22T22:06:10Z | 2022-09-22T22:06:10Z | NONE | This would be great! I just went through the process of figuring out the minimum permissions for a service account to run `datasette publish cloudrun` for [PUDL](https://github.com/catalyst-cooperative/pudl)'s [datasette deployment](https://data.catalyst.coop/). These are the roles I gave the service account (disclaimer: I'm not sure these are the minimum permissions): - Cloud Build Service Account: the SA needs this role to publish the build on Cloud Build. - Cloud Run Admin for the Cloud Run datasette service, so the SA can deploy the build. - I gave the SA the Storage Admin role on the bucket Cloud Build creates to store the build tar files. - The Viewer role is [required for storing build logs in the default bucket](https://cloud.google.com/build/docs/running-builds/submit-build-via-cli-api#permissions). More on this below! The Viewer role is a Basic IAM role that [Google does not recommend using](https://cloud.google.com/build/docs/running-builds/submit-build-via-cli-api#permissions): > Caution: Basic roles include thousands of permissions across all Google Cloud services. In production environments, do not grant basic roles unless there is no alternative. Instead, grant the most limited [predefined roles](https://cloud.google.com/iam/docs/understanding-roles#predefined_roles) or [custom roles](https://cloud.google.com/iam/docs/understanding-custom-roles) that meet your needs. If you don't grant the Viewer role, the `gcloud builds submit` command will successfully create a build but return exit code 1, preventing the script from getting to the cloud run step: ``` ERROR: (gcloud.builds.submit) The build is running, and logs are being written to the default logs bucket. 
This tool can only stream logs if you are Viewer/Owner of the project and, if applicable, allowed by your VPC-SC security policy. The default logs bucket is always outside any VPC-SC security perimeter. If you want your logs saved inside your VPC-SC perimeter, use your own bucket. See https://cloud.google.com/build/docs… | { "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 } |
959137143 |
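Granting the roles listed in that comment to the service account could look roughly like this sketch (assumptions: `PROJECT_ID` and `SA_EMAIL` are placeholders for your own project and service account, the role IDs shown are the predefined roles corresponding to the names above, and `${PROJECT_ID}_cloudbuild` is the default Cloud Build staging bucket — substitute your own if it differs):

```shell
#!/bin/sh
# Placeholders — substitute your own values.
PROJECT_ID="my-project"
SA_EMAIL="datasette-publisher@${PROJECT_ID}.iam.gserviceaccount.com"

# Cloud Build Service Account role, so the SA can submit the build
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/cloudbuild.builds.builder"

# Cloud Run Admin, so the SA can deploy the built image as a service
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/run.admin"

# Viewer (basic role) — only needed so `gcloud builds submit`
# can stream logs from the default logs bucket
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/viewer"

# Storage Admin scoped to the Cloud Build staging bucket
gsutil iam ch "serviceAccount:${SA_EMAIL}:roles/storage.admin" \
  "gs://${PROJECT_ID}_cloudbuild"
```

Scoping Storage Admin to the one staging bucket, rather than granting it project-wide, keeps the blast radius smaller; the Viewer grant remains the awkward part, for the log-streaming reason quoted above.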