2 changes: 1 addition & 1 deletion docs/cloud/features/scheduler/airflow.md
@@ -35,7 +35,7 @@ Start by installing the `tobiko-cloud-scheduler-facade` library in your Airflow
Make sure to include the `[airflow]` extra in the installation command:

``` bash
-$ pip install tobiko-cloud-scheduler-facade[airflow]
+pip install tobiko-cloud-scheduler-facade[airflow]
```

!!! info "Mac Users"
2 changes: 1 addition & 1 deletion docs/cloud/features/scheduler/dagster.md
@@ -48,7 +48,7 @@ dependencies = [
And then install it into the Python environment used by your Dagster project:

```sh
-$ pip install -e '.[dev]'
+pip install -e '.[dev]'
```

### Connect Dagster to Tobiko Cloud
12 changes: 6 additions & 6 deletions docs/cloud/features/security/single_sign_on.md
@@ -145,7 +145,7 @@ Here is what you will see if you are accessing Tobiko Cloud via Okta. Click on t
You can check the status of your session with the `status` command:

``` bash
-$ tcloud auth status
+tcloud auth status
```
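When a session is active, the command reports how long the session has left. A sketch of the expected output, based on the example shown later on this page:

``` bash
tcloud auth status
# Current Tobiko Cloud SSO session expires in 1439 minutes
```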


@@ -156,7 +156,7 @@ $ tcloud auth status
Run the `login` command to begin the login process:

``` bash
-$ tcloud auth login
+tcloud auth login
```

![tcloud_login](./single_sign_on/tcloud_login.png)
@@ -183,11 +183,11 @@ Current Tobiko Cloud SSO session expires in 1439 minutes
To delete your session information, use the `logout` command:

``` bash
-> tcloud auth logout
-Logged out of Tobiko Cloud
+tcloud auth logout
+# Logged out of Tobiko Cloud

-> tcloud auth status
-Not currently authenticated
+tcloud auth status
+# Not currently authenticated
```

![tcloud_logout](./single_sign_on/tcloud_logout.png)
2 changes: 1 addition & 1 deletion docs/cloud/features/xdb_diffing.md
@@ -47,7 +47,7 @@ Then, specify each table's gateway in the `table_diff` command with this syntax:
For example, we could diff the `landing.table` table across `bigquery` and `snowflake` gateways like this:

```sh
-$ tcloud sqlmesh table_diff 'bigquery|landing.table:snowflake|landing.table'
+tcloud sqlmesh table_diff 'bigquery|landing.table:snowflake|landing.table'
```

This syntax tells SQLMesh to use the cross-database diffing algorithm instead of the normal within-database diffing algorithm.
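More generally, each side of the colon is a `<gateway>|<table>` pair, so the command takes the shape below (the gateway and table names are placeholders for whatever your project defines):

```sh
tcloud sqlmesh table_diff '<source_gateway>|<schema.table>:<target_gateway>|<schema.table>'
```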
11 changes: 6 additions & 5 deletions docs/concepts/state.md
@@ -92,7 +92,7 @@ The state file is a simple `json` file that looks like:
You can export a specific environment like so:

```sh
-$ sqlmesh state export --environment my_dev -o my_dev_state.json
+sqlmesh state export --environment my_dev -o my_dev_state.json
```

Note that every snapshot that is part of the environment will be exported, not just the differences from `prod`. The reason for this is so that the environment can be fully imported elsewhere without any assumptions about which snapshots are already present in state.
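For example, a full round trip with the `state import` command documented below might look like this (a sketch; it assumes the import is run from the project that should receive the environment):

```sh
# On the source project: export the my_dev environment
sqlmesh state export --environment my_dev -o my_dev_state.json

# On the target project: import it, snapshots and all
sqlmesh state import -i my_dev_state.json
```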
@@ -102,7 +102,7 @@ Note that every snapshot that is part of the environment will be exported, not j
You can export local state like so:

```bash
-$ sqlmesh state export --local -o local_state.json
+sqlmesh state export --local -o local_state.json
```

This essentially just exports the state of the local context which includes local changes that have not been applied to any virtual data environments.
@@ -174,10 +174,11 @@ If your project has [multiple gateways](../guides/configuration.md#gateways) wit

```bash
# state export
-$ sqlmesh --gateway <gateway> state export -o state.json
+sqlmesh --gateway <gateway> state export -o state.json
```
```bash
# state import
-$ sqlmesh --gateway <gateway> state import -i state.json
+sqlmesh --gateway <gateway> state import -i state.json
```

## Version Compatibility
4 changes: 2 additions & 2 deletions docs/guides/configuration.md
@@ -269,7 +269,7 @@ gateways:
We can override the `dummy_pw` value with the true password `real_pw` by creating the environment variable. This example demonstrates creating the variable with the bash `export` function:

```bash
-$ export SQLMESH__GATEWAYS__MY_GATEWAY__CONNECTION__PASSWORD="real_pw"
+export SQLMESH__GATEWAYS__MY_GATEWAY__CONNECTION__PASSWORD="real_pw"
```

After the initial string `SQLMESH__`, the environment variable name components move down the key hierarchy in the YAML specification: `GATEWAYS` --> `MY_GATEWAY` --> `CONNECTION` --> `PASSWORD`.
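The same pattern works for any key in the hierarchy. For instance, assuming the gateway's connection also had a `user` field (hypothetical here), it could be overridden the same way:

```bash
# Hypothetical: overrides gateways.my_gateway.connection.user, if that key exists
export SQLMESH__GATEWAYS__MY_GATEWAY__CONNECTION__USER="real_user"
```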
@@ -1492,7 +1492,7 @@ Example enabling debug mode for the CLI command `sqlmesh plan`:
=== "Bash"

```bash
-$ SQLMESH_DEBUG=1 sqlmesh plan
+SQLMESH_DEBUG=1 sqlmesh plan
```

=== "MS Powershell"
2 changes: 1 addition & 1 deletion docs/guides/migrations.md
@@ -28,7 +28,7 @@ SQLMeshError: SQLMesh (local) is using version '1' which is behind '2' (remote).
The project metadata can be migrated to the latest metadata format using SQLMesh's `migrate` command.

```bash
-> sqlmesh migrate
+sqlmesh migrate
```

Migration should be performed manually by a single user; it will affect all users of the project.
12 changes: 6 additions & 6 deletions docs/integrations/dbt.md
@@ -19,19 +19,19 @@ Therefore, SQLMesh is packaged with multiple "extras," which you may optionally
At minimum, using the SQLMesh dbt adapter requires installing the dbt extra:

```bash
-> pip install "sqlmesh[dbt]"
+pip install "sqlmesh[dbt]"
```

If your project uses any SQL execution engine other than DuckDB, you must install the extra for that engine. For example, if your project runs on the Postgres SQL engine:

```bash
-> pip install "sqlmesh[dbt,postgres]"
+pip install "sqlmesh[dbt,postgres]"
```

If you would like to use the [SQLMesh Browser UI](../guides/ui.md) to view column-level lineage, include the `web` extra:

```bash
-> pip install "sqlmesh[dbt,web]"
+pip install "sqlmesh[dbt,web]"
```

Learn more about [SQLMesh installation and extras here](../installation.md#install-extras).
@@ -41,7 +41,7 @@ Learn more about [SQLMesh installation and extras here](../installation.md#insta
Prepare an existing dbt project to be run by SQLMesh by executing the `sqlmesh init` command *within the dbt project root directory* and with the `dbt` template option:

```bash
-$ sqlmesh init -t dbt
+sqlmesh init -t dbt
```

This will create a file called `sqlmesh.yaml` containing the [default model start date](../reference/model_configuration.md#model-defaults). This configuration file is a minimum starting point for enabling SQLMesh to work with your dbt project.
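As a rough sketch, the generated file contains little more than the model defaults block (the exact keys and date below are assumptions based on the linked model configuration reference):

```bash
cat sqlmesh.yaml
# model_defaults:
#   start: 2024-01-01
```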
@@ -247,8 +247,8 @@ Instead, SQLMesh provides predefined time macro variables that can be used in th
For example, the SQL `WHERE` clause with the "ds" column goes in a new jinja block gated by `{% if sqlmesh_incremental is defined %}` as follows:

```bash
-> WHERE
-> ds BETWEEN '{{ start_ds }}' AND '{{ end_ds }}'
+WHERE
+ds BETWEEN '{{ start_ds }}' AND '{{ end_ds }}'
```

`{{ start_ds }}` and `{{ end_ds }}` are the jinja equivalents of SQLMesh's `@start_ds` and `@end_ds` predefined time macro variables. See all [predefined time variables](../concepts/macros/macro_variables.md) available in jinja.
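Assembled, the complete gated block implied above would look roughly like this (a sketch: the gate and the `WHERE` clause come from the docs, while the closing `{% endif %}` and the surrounding model SQL are filled in here):

```sql
{% if sqlmesh_incremental is defined %}
WHERE
  ds BETWEEN '{{ start_ds }}' AND '{{ end_ds }}'
{% endif %}
```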
14 changes: 7 additions & 7 deletions docs/integrations/dlt.md
@@ -8,7 +8,7 @@ SQLMesh enables effortless project generation using data ingested through [dlt](h
To load data from a dlt pipeline into SQLMesh, ensure the dlt pipeline has been run or restored locally. Then execute the `sqlmesh init` command *within the dlt project root directory* using the `dlt` template option and specifying the pipeline's name with the `dlt-pipeline` option:

```bash
-$ sqlmesh init -t dlt --dlt-pipeline <pipeline-name> dialect
+sqlmesh init -t dlt --dlt-pipeline <pipeline-name> dialect
```

This will create the configuration file and directories, which are found in all SQLMesh projects:
@@ -33,7 +33,7 @@ SQLMesh will also automatically generate models to ingest data from the pipeline
The default location for dlt pipelines is `~/.dlt/pipelines/<pipeline_name>`. If your pipelines are in a [different directory](https://dlthub.com/docs/general-usage/pipeline#separate-working-environments-with-pipelines_dir), use the `--dlt-path` argument to specify the path explicitly:

```bash
-$ sqlmesh init -t dlt --dlt-pipeline <pipeline-name> --dlt-path <pipelines-directory> dialect
+sqlmesh init -t dlt --dlt-pipeline <pipeline-name> --dlt-path <pipelines-directory> dialect
```

### Generating models on demand
@@ -43,25 +43,25 @@ To update the models in your SQLMesh project on demand, use the `dlt_refresh` co
- **Generate all missing tables**:

```bash
-$ sqlmesh dlt_refresh <pipeline-name>
+sqlmesh dlt_refresh <pipeline-name>
```

- **Generate all missing tables and overwrite existing ones** (use with `--force` or `-f`):

```bash
-$ sqlmesh dlt_refresh <pipeline-name> --force
+sqlmesh dlt_refresh <pipeline-name> --force
```

- **Generate specific dlt tables** (using `--table` or `-t`):

```bash
-$ sqlmesh dlt_refresh <pipeline-name> --table <dlt-table>
+sqlmesh dlt_refresh <pipeline-name> --table <dlt-table>
```

- **Provide the explicit path to the pipelines directory** (using `--dlt-path`):

```bash
-$ sqlmesh dlt_refresh <pipeline-name> --dlt-path <pipelines-directory>
+sqlmesh dlt_refresh <pipeline-name> --dlt-path <pipelines-directory>
```

#### Configuration
@@ -83,7 +83,7 @@ Load package 1728074157.660565 is LOADED and contains no failed jobs
After the pipeline has run, generate a SQLMesh project by executing:

```bash
-$ sqlmesh init -t dlt --dlt-pipeline sushi duckdb
+sqlmesh init -t dlt --dlt-pipeline sushi duckdb
```

The SQLMesh project is now all set up. You can proceed to run the SQLMesh `plan` command to ingest the dlt pipeline data and populate the SQLMesh tables:
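A minimal sketch of that final step:

```bash
sqlmesh plan
```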
10 changes: 5 additions & 5 deletions docs/integrations/engines/bigquery.md
@@ -22,7 +22,7 @@ Follow the [quickstart installation guide](../../installation.md) up to the step
Instead of installing just SQLMesh core, we will also include the BigQuery engine libraries:

```bash
-> pip install "sqlmesh[bigquery]"
+pip install "sqlmesh[bigquery]"
```

### Install Google Cloud SDK
@@ -35,19 +35,19 @@ Follow these steps to install and configure the Google Cloud SDK on your compute
- Unpack the downloaded file with the `tar` command:

```bash
-> tar -xzvf google-cloud-cli-{SYSTEM_SPECIFIC_INFO}.tar.gz
+tar -xzvf google-cloud-cli-{SYSTEM_SPECIFIC_INFO}.tar.gz
```

- Run the installation script:

```bash
-> ./google-cloud-sdk/install.sh
+./google-cloud-sdk/install.sh
```

- Reload your shell profile (e.g., for zsh):

```bash
-> source $HOME/.zshrc
+source $HOME/.zshrc
```

- Run [`gcloud init` to set up authentication](https://cloud.google.com/sdk/gcloud/reference/init), as shown below
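The last step launches gcloud's interactive setup flow:

```bash
gcloud init
```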
@@ -114,7 +114,7 @@ The output will look something like this:
We've verified our connection, so we're ready to create and execute a plan in BigQuery:

```bash
-> sqlmesh plan
+sqlmesh plan
```

### View results in BigQuery Console
16 changes: 8 additions & 8 deletions docs/reference/configuration.md
@@ -277,33 +277,33 @@ Example enabling debug mode for the CLI command `sqlmesh plan`:
=== "Bash"

```bash
-$ sqlmesh --debug plan
+sqlmesh --debug plan
```

```bash
-$ SQLMESH_DEBUG=1 sqlmesh plan
+SQLMESH_DEBUG=1 sqlmesh plan
```

=== "MS Powershell"

```powershell
-PS> sqlmesh --debug plan
+sqlmesh --debug plan
```

```powershell
-PS> $env:SQLMESH_DEBUG=1
-PS> sqlmesh plan
+$env:SQLMESH_DEBUG=1
+sqlmesh plan
```

=== "MS CMD"

```cmd
-C:\> sqlmesh --debug plan
+sqlmesh --debug plan
```

```cmd
-C:\> set SQLMESH_DEBUG=1
-C:\> sqlmesh plan
+set SQLMESH_DEBUG=1
+sqlmesh plan
```

## Runtime Environment