mirror of https://github.com/NixOS/nix synced 2025-06-27 21:01:16 +02:00

Tagging release 2.27.1

-----BEGIN PGP SIGNATURE-----
 
 iQFHBAABCAAxFiEEtUHVUwEnDgvPFcpdgXC0cm1xmN4FAmfheacTHGVkb2xzdHJh
 QGdtYWlsLmNvbQAKCRCBcLRybXGY3kt2B/4tQvs6iDXA12d409ClHbVQjr1d0FLP
 rv8RxZ7Z4+Jaw8r2ra/I+gpr9juI5ULyEJWqfES72hTvbYPjH1Grsrrjak1tx57E
 +STs21oEPojE8LXsFH1oZamGPPIIpyQdxCvTgZs1N6cqUfCRQ3Jx97X6E6SIGJDR
 VqBM4ruSXCY57yT36HqwYydTkxzZHiNP5wwABGfSb7u9pYW5x3r8W7+fQ3udTnCw
 kCRhA5vnfxIQSlxu4j7dJqSCGzOIPnhYB19bXDV4aPhl4sn3pkBCdMZxPBlCWSwx
 it0ngMITf+TeiMpVl2TtvMBOHtlGrbhusbyKcsqzFYULGyGOC9ngTAY3
 =/JzB
 -----END PGP SIGNATURE-----

Merge tag '2.27.1' into detsys-main

Tagging release 2.27.1
Eelco Dolstra 2025-03-24 21:28:03 +01:00
commit dab0ff4f9e
200 changed files with 4734 additions and 1977 deletions

.gitignore

@@ -14,7 +14,7 @@
/tests/functional/lang/*.err
/tests/functional/lang/*.ast
/outputs
*~


@@ -106,3 +106,14 @@ pull_request_rules:
labels:
- automatic backport
- merge-queue
- name: backport patches to 2.26
conditions:
- label=backport 2.26-maintenance
actions:
backport:
branches:
- "2.26-maintenance"
labels:
- automatic backport
- merge-queue


@@ -1 +1 @@
2.27.1


@@ -342,6 +342,9 @@ const redirects = {
"scoping-rules": "scoping.html",
"string-literal": "string-literals.html",
},
"language/derivations.md": {
"builder-execution": "store/drv/building.md#builder-execution",
},
"installation/installing-binary.html": {
"linux": "uninstall.html#linux",
"macos": "uninstall.html#macos",
@@ -368,6 +371,7 @@ const redirects = {
"glossary.html": {
"gloss-local-store": "store/types/local-store.html",
"gloss-chroot-store": "store/types/local-store.html",
"gloss-content-addressed-derivation": "#gloss-content-addressing-derivation",
},
};


@@ -17,6 +17,11 @@
- [Store Object](store/store-object.md)
- [Content-Addressing Store Objects](store/store-object/content-address.md)
- [Store Path](store/store-path.md)
- [Store Derivation and Deriving Path](store/derivation/index.md)
- [Derivation Outputs and Types of Derivations](store/derivation/outputs/index.md)
- [Content-addressing derivation outputs](store/derivation/outputs/content-address.md)
- [Input-addressing derivation outputs](store/derivation/outputs/input-address.md)
- [Building](store/building.md)
- [Store Types](store/types/index.md)
{{#include ./store/types/SUMMARY.md}}
- [Nix Language](language/index.md)
@@ -126,4 +131,5 @@
- [Release 1.0.0 (2025-??-??)](release-notes-determinate/rl-1.0.0.md)
- [Nix Release Notes](release-notes/index.md)
{{#include ./SUMMARY-rl-next.md}}
- [Release 2.27 (2025-03-03)](release-notes/rl-2.27.md)
- [Release 2.26 (2025-01-22)](release-notes/rl-2.26.md)


@@ -20,7 +20,7 @@ For a local machine to forward a build to a remote machine, the remote machine m
## Testing
To test connecting to a remote [Nix instance] (in this case `mac`), run:
```console
nix store info --store ssh://username@mac
@@ -106,3 +106,5 @@ file included in `builders` via the syntax `@/path/to/file`. For example,
causes the list of machines in `/etc/nix/machines` to be included.
(This is the default.)
[Nix instance]: @docroot@/glossary.md#gloss-nix-instance
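For reference, each line of such a machines file describes one builder as whitespace-separated fields: URI, platform type(s), SSH identity file, maximum jobs, speed factor, and supported features, with `-` standing for an unset field. This is a sketch, assuming the standard `builders` field order; the host and feature names below are illustrative:

```
ssh://nix@mac aarch64-darwin - 4 1 big-parallel
```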


@@ -69,7 +69,7 @@ It can also execute build plans to produce new data, which are made available to
A build plan itself is a series of *build tasks*, together with their build inputs.
> **Important**
> A build task in Nix is called a [store derivation](@docroot@/glossary.md#gloss-store-derivation).
Each build task has a special build input executed as *build instructions* in order to perform the build.
The result of a build task can be input to another build task.


@@ -22,11 +22,11 @@ It is based on the current generation of the active [profile](@docroot@/command-
The arguments *args* map to store paths in a number of possible ways:
- By default, *args* is a set of names denoting derivations in the default Nix expression.
These are [realised], and the resulting output paths are installed.
Currently installed derivations with a name equal to the name of a derivation being added are removed unless the option `--preserve-installed` is specified.
[derivation expression]: @docroot@/glossary.md#gloss-derivation-expression
[realised]: @docroot@/glossary.md#gloss-realise
If there are multiple derivations matching a name in *args* that
@@ -65,11 +65,11 @@ The arguments *args* map to store paths in a number of possible ways:
This can be used to override the priority of the derivations being installed.
This is useful if *args* are [store paths], which don't have any priority information.
- If *args* are [store paths] that point to [store derivations][store derivation], then those store derivations are [realised], and the resulting output paths are installed.
- If *args* are [store paths] that do not point to store derivations, then these are [realised] and installed.
- By default all [outputs](@docroot@/language/derivations.md#attr-outputs) are installed for each [store derivation].
This can be overridden by adding a `meta.outputsToInstall` attribute on the derivation listing a subset of the output names.
Example:
@@ -121,6 +121,8 @@ The arguments *args* map to store paths in a number of possible ways:
manifest.nix
```
[store derivation]: @docroot@/glossary.md#gloss-store-derivation
# Options
- `--prebuilt-only` / `-b`


@@ -125,7 +125,7 @@ derivation is shown unless `--no-name` is specified.
- `--drv-path`
Print the [store path] to the [store derivation].
[store path]: @docroot@/glossary.md#gloss-store-path
[store derivation]: @docroot@/glossary.md#gloss-store-derivation
- `--out-path`


@@ -67,7 +67,7 @@ md5sum`.
- `--type` *hashAlgo*
Use the specified cryptographic hash algorithm, which can be one of
`blake3`, `md5`, `sha1`, `sha256`, and `sha512`.
- `--to-base16`
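For a feel of what base-16 output looks like, here is a sketch using coreutils `sha256sum` as a stand-in; for a regular file's contents, `nix-hash --flat --type sha256` should print the same hexadecimal digest:

```shell
# Illustrative only: coreutils sha256sum as a stand-in for
# `nix-hash --flat --type sha256` applied to the bytes "hello".
printf 'hello' | sha256sum | cut -d ' ' -f 1
```

This prints `2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824`, the well-known SHA-256 of the five bytes `hello`, in base-16.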


@@ -42,8 +42,8 @@ standard input.
- `--eval`
Just parse and evaluate the input files, and print the resulting
values on standard output.
Store derivations are not serialized and written to the store, but instead just hashed and discarded.
> **Warning**
>


@@ -42,7 +42,7 @@ the path of the downloaded file in the Nix store is also printed.
- `--type` *hashAlgo*
Use the specified cryptographic hash algorithm,
which can be one of `blake3`, `md5`, `sha1`, `sha256`, and `sha512`.
The default is `sha256`.
- `--print-path`


@@ -15,7 +15,7 @@ Each of *paths* is processed as follows:
1. If it is not [valid], substitute the store derivation file itself.
2. Realise its [output paths]:
- Try to fetch from [substituters] the [store objects] associated with the output paths in the store derivation's [closure].
- With [content-addressing derivations] (experimental):
Determine the output paths to realise by querying content-addressed realisation entries in the [Nix database].
- For any store paths that cannot be substituted, produce the required store objects:
1. Realise all outputs of the derivation's dependencies
@@ -32,7 +32,7 @@ If no substitutes are available and no store derivation is given, realisation fails.
[store objects]: @docroot@/store/store-object.md
[closure]: @docroot@/glossary.md#gloss-closure
[substituters]: @docroot@/command-ref/conf-file.md#conf-substituters
[content-addressing derivations]: @docroot@/development/experimental-features.md#xp-feature-ca-derivations
[Nix database]: @docroot@/glossary.md#gloss-nix-database
The resulting paths are printed on standard output.


@@ -11,7 +11,7 @@ This shell also adds `./outputs/bin/nix` to your `$PATH` so you can run `nix` im
To get a shell with one of the other [supported compilation environments](#compilation-environments):
```console
$ nix develop .#native-clangStdenv
```
> **Note**
@@ -93,11 +93,13 @@ It is useful to perform multiple cross and native builds on the same source tree
for example to ensure that better support for one platform doesn't break the build for another.
Meson thankfully makes this very easy by confining all build products to the build directory --- one simply shares the source directory between multiple build directories, each of which contains the build of Nix for a different platform.
Here's how to do that:
1. Instruct Nixpkgs's infra where we want Meson to put its build directory
```bash
mesonBuildDir=build-my-variant-name
```
1. Configure as usual
@@ -105,24 +107,12 @@ Here's how to do that
```bash
configurePhase
```
3. Build as usual
```bash
buildPhase
```
## System type
Nix uses a string with the following format to identify the *system type* or *platform* it runs on:
@@ -179,7 +169,8 @@ See [supported compilation environments](#compilation-environments) and instruct
To use the LSP with your editor, you will want a `compile_commands.json` file telling `clangd` how we are compiling the code.
Meson's configure always produces this inside the build directory.
Configure your editor to use the `clangd` from the `.#native-clangStdenv` shell.
You can do that either by running it inside the development shell, or by using [nix-direnv](https://github.com/nix-community/nix-direnv) and [the appropriate editor plugin](https://github.com/direnv/direnv/wiki#editor-integration).
> **Note**
>
@@ -195,6 +186,8 @@ You may run the formatters as a one-off using:
```
./maintainers/format.sh
```
### Pre-commit hooks
If you'd like to run the formatters before every commit, install the hooks:
```
@@ -209,3 +202,30 @@ If it fails, run `git add --patch` to approve the suggestions _and commit again_
To refresh the pre-commit hook's config file, do the following:
1. Exit the development shell and start it again by running `nix develop`.
2. If you also use the pre-commit hook, also run `pre-commit-hooks-install` again.
### VSCode
Insert the following JSON into your `.vscode/settings.json` file to configure `nixfmt`.
This will be picked up by the _Format Document_ command, `"editor.formatOnSave"`, etc.
```json
{
  "nix.formatterPath": "nixfmt",
  "nix.serverSettings": {
    "nixd": {
      "formatting": {
        "command": [
          "nixfmt"
        ],
      },
    },
    "nil": {
      "formatting": {
        "command": [
          "nixfmt"
        ],
      },
    },
  },
}
```


@@ -2,6 +2,8 @@
This section shows how to build and debug Nix with debug symbols enabled.
Additionally, see [Testing Nix](./testing.md) for further instructions on how to debug Nix in the context of a unit test or functional test.
## Building Nix with Debug Symbols
In the development shell, set the `mesonBuildType` environment variable to `debug` before configuring the build:
@@ -13,6 +15,15 @@ In the development shell, set the `mesonBuildType` environment variable to `debu
Then, proceed to build Nix as described in [Building Nix](./building.md).
This will build Nix with debug symbols, which are essential for effective debugging.
It is also possible to build without optimization for a faster build:
```console
[nix-shell]$ NIX_HARDENING_ENABLE=$(printLines $NIX_HARDENING_ENABLE | grep -v fortify)
[nix-shell]$ export mesonBuildType=debug
```
(The first line is needed because `fortify` hardening requires at least some optimization.)
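The first line simply filters one word out of a space-separated list. A standalone sketch with a stand-in value (the real `$NIX_HARDENING_ENABLE` comes from the development shell, and `printf '%s\n'` stands in for its `printLines` helper):

```shell
# Stand-in value; the dev shell provides the real $NIX_HARDENING_ENABLE.
NIX_HARDENING_ENABLE="bindnow format fortify pic stackprotector"
# Emulate printLines (one word per line), then drop the fortify flag.
NIX_HARDENING_ENABLE=$(printf '%s\n' $NIX_HARDENING_ENABLE | grep -v fortify)
echo "$NIX_HARDENING_ENABLE"
```

All flags except `fortify` survive the filter.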
## Debugging the Nix Binary
Obtain your preferred debugger within the development shell:


@@ -87,7 +87,11 @@ A environment variables that Google Test accepts are also worth knowing:
This is used to avoid logging passing tests.
3. [`GTEST_BREAK_ON_FAILURE`](https://google.github.io/googletest/advanced.html#turning-assertion-failures-into-break-points)
This is used to create a debugger breakpoint when an assertion failure occurs.
Putting the first two together, one might run
```bash
GTEST_BRIEF=1 GTEST_FILTER='ErrorTraceTest.*' meson test nix-expr-tests -v
```
@@ -95,6 +99,22 @@ GTEST_BRIEF=1 GTEST_FILTER='ErrorTraceTest.*' meson test nix-expr-tests -v
for short but comprehensive output.
### Debugging tests
For debugging, it is useful to combine the third option above with Meson's [`--gdb`](https://mesonbuild.com/Unit-tests.html#other-test-options) flag:
```bash
GTEST_BRIEF=1 GTEST_FILTER='Group.my-failing-test' meson test nix-expr-tests --gdb
```
This will:
1. Run the unit test with GDB
2. Run just `Group.my-failing-test`
3. Stop the program when the test fails, allowing the user to then issue arbitrary commands to GDB.
### Characterisation testing { #characaterisation-testing-unit }
See [functional characterisation testing](#characterisation-testing-functional) for a broader discussion of characterisation testing.
@@ -144,7 +164,7 @@ $ checkPhase
Sometimes it is useful to group related tests so they can be easily run together without running the entire test suite.
Each test group is in a subdirectory of `tests`.
For example, `tests/functional/ca/meson.build` defines a `ca` test group for content-addressing derivation outputs.
That test group can be run like this:
@@ -213,10 +233,10 @@ edit it like so:
bar
```
Then, running the test with [`--interactive`](https://mesonbuild.com/Unit-tests.html#other-test-options) will prevent Meson from hijacking the terminal so you can drop into GDB once the script reaches that point:
```shell-session
$ meson test ${testName} --interactive
...
+ gdb blash blub
GNU gdb (GDB) 12.1


@@ -1,5 +1,13 @@
# Glossary
- [build system]{#gloss-build-system}
Generic term for software that facilitates the building of software by automating the invocation of compilers, linkers, and other tools.
Nix can be used as a generic build system.
It has no knowledge of any particular programming language or toolchain.
These details are specified in [derivation expressions](#gloss-derivation-expression).
- [content address]{#gloss-content-address}
A
@@ -13,37 +21,45 @@
- [Content-Addressing File System Objects](@docroot@/store/file-system-object/content-address.md)
- [Content-Addressing Store Objects](@docroot@/store/store-object/content-address.md)
- [content-addressing derivation](#gloss-content-addressing-derivation)
Software Heritage's writing on [*Intrinsic and Extrinsic identifiers*](https://www.softwareheritage.org/2020/07/09/intrinsic-vs-extrinsic-identifiers) is also a good introduction to the value of content-addressing over other referencing schemes.
Besides content addressing, the Nix store also uses [input addressing](#gloss-input-addressed-store-object).
- [content-addressed storage]{#gloss-content-addressed-store}
The industry term for storage and retrieval systems using [content addressing](#gloss-content-address). A Nix store also has [input addressing](#gloss-input-addressed-store-object), and metadata.
- [store derivation]{#gloss-store-derivation}
A single build task.
See [Store Derivation](@docroot@/store/derivation/index.md#store-derivation) for details.
[store derivation]: #gloss-store-derivation
- [derivation path]{#gloss-derivation-path}
A [store path] which uniquely identifies a [store derivation].
See [Referencing Store Derivations](@docroot@/store/derivation/index.md#derivation-path) for details.
Not to be confused with [deriving path].
[derivation path]: #gloss-derivation-path
- [derivation expression]{#gloss-derivation-expression}
A description of a [store derivation] in the Nix language.
The output(s) of a derivation are store objects.
Derivations are typically specified in Nix expressions using the [`derivation` primitive](./language/derivations.md).
These are translated into store layer *derivations* (implicitly by `nix-env` and `nix-build`, or explicitly by `nix-instantiate`).
[derivation expression]: #gloss-derivation-expression
- [instantiate]{#gloss-instantiate}, instantiation
Translate a [derivation expression] into a [store derivation].
See [`nix-instantiate`](./command-ref/nix-instantiate.md), which produces a store derivation from a Nix expression that evaluates to a derivation.
@@ -55,7 +71,7 @@
This can be achieved by:
- Fetching a pre-built [store object] from a [substituter]
- Running the [`builder`](@docroot@/language/derivations.md#attr-builder) executable as specified in the corresponding [store derivation]
- Delegating to a [remote machine](@docroot@/command-ref/conf-file.md#conf-builders) and retrieving the outputs
<!-- TODO: link [running] to build process page, #8888 -->
@@ -65,7 +81,7 @@
[realise]: #gloss-realise
- [content-addressing derivation]{#gloss-content-addressing-derivation}
A derivation which has the
[`__contentAddressed`](./language/advanced-attributes.md#adv-attr-__contentAddressed)
@@ -73,7 +89,7 @@
- [fixed-output derivation]{#gloss-fixed-output-derivation} (FOD)
A [store derivation] where a cryptographic hash of the [output] is determined in advance using the [`outputHash`](./language/advanced-attributes.md#adv-attr-outputHash) attribute, and where the [`builder`](@docroot@/language/derivations.md#attr-builder) executable has access to the network.
- [store]{#gloss-store}
@@ -84,6 +100,12 @@
[store]: #gloss-store
- [Nix instance]{#gloss-nix-instance}
<!-- ambiguous -->
1. An installation of Nix, which includes the presence of a [store], and the Nix package manager which operates on that store.
A local Nix installation and a [remote builder](@docroot@/advanced-topics/distributed-builds.md) are two examples of Nix instances.
2. A running Nix process, such as the `nix` command.
- [binary cache]{#gloss-binary-cache}
A *binary cache* is a Nix store which uses a different format: its
@@ -130,7 +152,7 @@
- [input-addressed store object]{#gloss-input-addressed-store-object}
A store object produced by building a
non-[content-addressed](#gloss-content-addressing-derivation),
non-[fixed-output](#gloss-fixed-output-derivation)
derivation.
@@ -138,7 +160,7 @@
A [store object] which is [content-addressed](#gloss-content-address),
i.e. whose [store path] is determined by its contents.
This includes derivations, the outputs of [content-addressing derivations](#gloss-content-addressing-derivation), and the outputs of [fixed-output derivations](#gloss-fixed-output-derivation).
See [Content-Addressing Store Objects](@docroot@/store/store-object/content-address.md) for details.
@@ -188,7 +210,7 @@
>
> The contents of a `.nix` file form a Nix expression.
Nix expressions specify [derivation expressions][derivation expression], which are [instantiated][instantiate] into the Nix store as [store derivations][store derivation].
These derivations can then be [realised][realise] to produce [outputs][output].
> **Example**
@@ -216,7 +238,7 @@
directly or indirectly “reachable” from that store path; that is,
it's the closure of the path under the *references* relation. For
a package, the closure of its derivation is equivalent to the
build-time dependencies, while the closure of its [output path] is
equivalent to its runtime dependencies. For correct deployment it
is necessary to deploy whole closures, since otherwise at runtime
files could be missing. The command `nix-store --query --requisites` prints out
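The notion of a closure under the references relation can be made concrete with a tiny sketch. The paths and references below are made up, and plain POSIX shell plus awk stands in for the store's actual reference database:

```shell
# Toy references table: "<path> <reference>" pairs (made-up names).
refs='app lib
lib libc
doc app'

closure_of() {
  # Breadth-first walk of the toy references relation above.
  seen=$1
  while :; do
    next=$(printf '%s\n' "$refs" | awk -v s="$seen" '
      BEGIN { n = split(s, a, ","); for (i = 1; i <= n; i++) in_s[a[i]] = 1 }
      in_s[$1] && !in_s[$2] { print $2 }' | sort -u | head -n 1)
    [ -z "$next" ] && break
    seen="$seen,$next"
  done
  printf '%s\n' "$seen"
}

closure_of app
```

Running it prints `app,lib,libc`: the starting path plus everything transitively reachable from it, which is exactly what must be deployed together.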
@@ -230,14 +252,14 @@
- [output]{#gloss-output}
A [store object] produced by a [store derivation].
See [the `outputs` argument to the `derivation` function](@docroot@/language/derivations.md#attr-outputs) for details.
[output]: #gloss-output
- [output path]{#gloss-output-path}
The [store path] to the [output] of a [store derivation].
[output path]: #gloss-output-path
@@ -246,14 +268,11 @@
- [deriving path]{#gloss-deriving-path}
Deriving paths are a way to refer to [store objects][store object] that might not yet be [realised][realise].
See [Deriving Path](./store/derivation/index.md#deriving-path) for details.
Not to be confused with [derivation path].
- [deriver]{#gloss-deriver}

View file

@@ -99,8 +99,8 @@ Derivations can declare some infrequently used optional attributes.
    to make it use the proxy server configuration specified by the user
    in the environment variables `http_proxy` and friends.

    This attribute is only allowed in [fixed-output derivations][fixed-output derivation],
    where impurities such as these are okay since (the hash
    of) the output is known in advance. It is ignored for all other
    derivations.
@@ -119,135 +119,6 @@ Derivations can declare some infrequently used optional attributes.
    [`impure-env`](@docroot@/command-ref/conf-file.md#conf-impure-env)
    configuration setting.
- [`outputHash`]{#adv-attr-outputHash}; [`outputHashAlgo`]{#adv-attr-outputHashAlgo}; [`outputHashMode`]{#adv-attr-outputHashMode}\
These attributes declare that the derivation is a so-called *fixed-output derivation* (FOD), which means that a cryptographic hash of the output is already known in advance.
As opposed to regular derivations, the [`builder`] executable of a fixed-output derivation has access to the network.
Nix computes a cryptographic hash of its output and compares that to the hash declared with these attributes.
If there is a mismatch, the derivation fails.
The rationale for fixed-output derivations is derivations such as
those produced by the `fetchurl` function. This function downloads a
file from a given URL. To ensure that the downloaded file has not
been modified, the caller must also specify a cryptographic hash of
the file. For example,
```nix
fetchurl {
url = "http://ftp.gnu.org/pub/gnu/hello/hello-2.1.1.tar.gz";
sha256 = "1md7jsfd8pa45z73bz1kszpp01yw6x5ljkjk2hx7wl800any6465";
}
```
It sometimes happens that the URL of the file changes, e.g., because
servers are reorganised or no longer available. We then must update
the call to `fetchurl`, e.g.,
```nix
fetchurl {
url = "ftp://ftp.nluug.nl/pub/gnu/hello/hello-2.1.1.tar.gz";
sha256 = "1md7jsfd8pa45z73bz1kszpp01yw6x5ljkjk2hx7wl800any6465";
}
```
If a `fetchurl` derivation was treated like a normal derivation, the
output paths of the derivation and *all derivations depending on it*
would change. For instance, if we were to change the URL of the
Glibc source distribution in Nixpkgs (a package on which almost all
other packages depend) massive rebuilds would be needed. This is
unfortunate for a change which we know cannot have a real effect as
it propagates upwards through the dependency graph.
For fixed-output derivations, on the other hand, the name of the
output path only depends on the `outputHash*` and `name` attributes,
while all other attributes are ignored for the purpose of computing
the output path. (The `name` attribute is included because it is
part of the path.)
As an example, here is the (simplified) Nix expression for
`fetchurl`:
```nix
{ stdenv, curl }: # The curl program is used for downloading.
{ url, sha256 }:
stdenv.mkDerivation {
name = baseNameOf (toString url);
builder = ./builder.sh;
buildInputs = [ curl ];
# This is a fixed-output derivation; the output must be a regular
# file with SHA256 hash sha256.
outputHashMode = "flat";
outputHashAlgo = "sha256";
outputHash = sha256;
inherit url;
}
```
The `outputHash` attribute must be a string containing the hash in either hexadecimal or "nix32" encoding, or following the format for integrity metadata as defined by [SRI](https://www.w3.org/TR/SRI/).
The "nix32" encoding is an adaptation of base-32 encoding.
The [`convertHash`](@docroot@/language/builtins.md#builtins-convertHash) function shows how to convert between different encodings, and the [`nix-hash` command](../command-ref/nix-hash.md) has information about obtaining the hash for some contents, as well as converting to and from encodings.
The `outputHashAlgo` attribute specifies the hash algorithm used to compute the hash.
It can currently be `"sha1"`, `"sha256"`, `"sha512"`, or `null`.
`outputHashAlgo` can only be `null` when `outputHash` follows the SRI format.
The `outputHashMode` attribute determines how the hash is computed.
It must be one of the following values:
- [`"flat"`](@docroot@/store/store-object/content-address.md#method-flat)
This is the default.
- [`"recursive"` or `"nar"`](@docroot@/store/store-object/content-address.md#method-nix-archive)
> **Compatibility**
>
> `"recursive"` is the traditional way of indicating this,
> and is supported since 2005 (virtually the entire history of Nix).
> `"nar"` is more clear, and consistent with other parts of Nix (such as the CLI),
> however support for it is only added in Nix version 2.21.
- [`"text"`](@docroot@/store/store-object/content-address.md#method-text)
> **Warning**
>
> The use of this method for derivation outputs is part of the [`dynamic-derivations`][xp-feature-dynamic-derivations] experimental feature.
- [`"git"`](@docroot@/store/store-object/content-address.md#method-git)
> **Warning**
>
> This method is part of the [`git-hashing`][xp-feature-git-hashing] experimental feature.
- [`__contentAddressed`]{#adv-attr-__contentAddressed}
> **Warning**
> This attribute is part of an [experimental feature](@docroot@/development/experimental-features.md).
>
> To use this attribute, you must enable the
> [`ca-derivations`][xp-feature-ca-derivations] experimental feature.
> For example, in [nix.conf](../command-ref/conf-file.md) you could add:
>
> ```
> extra-experimental-features = ca-derivations
> ```
If this attribute is set to `true`, then the derivation
outputs will be stored in a content-addressed location rather than the
traditional input-addressed one.
Setting this attribute also requires setting
[`outputHashMode`](#adv-attr-outputHashMode)
and
[`outputHashAlgo`](#adv-attr-outputHashAlgo)
like for *fixed-output derivations* (see above).
It also implicitly requires that the machine to build the derivation must have the `ca-derivations` [system feature](@docroot@/command-ref/conf-file.md#conf-system-features).
- [`passAsFile`]{#adv-attr-passAsFile}\
  A list of names of attributes that should be passed via files rather
  than environment variables. For example, if you have
@@ -370,6 +241,134 @@ Derivations can declare some infrequently used optional attributes.
    ensures that the derivation can only be built on a machine with the `kvm` feature.

## Setting the derivation type
As discussed in [Derivation Outputs and Types of Derivations](@docroot@/store/derivation/outputs/index.md), there are multiple kinds of derivations and of derivation outputs.
The choice of the following attributes determines which kind of derivation we are making.
- [`__contentAddressed`]
- [`outputHash`]
- [`outputHashAlgo`]
- [`outputHashMode`]
The three types of derivations are chosen based on the following combinations of these attributes.
All other combinations are invalid.
- [Input-addressing derivations](@docroot@/store/derivation/outputs/input-address.md)
This is the default for `builtins.derivation`.
Nix only currently supports one kind of input-addressing, so no other information is needed.
`__contentAddressed = false;` may also be included, but is not needed, and will trigger the experimental feature check.
- [Fixed-output derivations][fixed-output derivation]
All of [`outputHash`], [`outputHashAlgo`], and [`outputHashMode`].
<!--
`__contentAddressed` is ignored, because fixed-output derivations always content-address their outputs, by definition.
**TODO CHECK**
-->
- [(Floating) content-addressing derivations](@docroot@/store/derivation/outputs/content-address.md)
Both [`outputHashAlgo`] and [`outputHashMode`], `__contentAddressed = true;`, and *not* `outputHash`.
If an output hash was given, then the derivation output would be "fixed" not "floating".
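For illustration, a fixed-output derivation sets all three `outputHash*` attributes, while an input-addressing derivation sets none of them; the attribute values in this sketch are placeholders (the hash is reused from the `fetchurl` example elsewhere in this manual):

```nix
# Sketch of a fixed-output derivation: all of outputHash, outputHashAlgo,
# and outputHashMode are set. An input-addressing derivation sets none of them.
builtins.derivation {
  name = "hello-2.1.1.tar.gz";
  system = "x86_64-linux";
  builder = "/bin/sh";
  args = [ "-c" "..." ]; # builder that fetches the file; elided
  outputHashMode = "flat";
  outputHashAlgo = "sha256";
  outputHash = "1md7jsfd8pa45z73bz1kszpp01yw6x5ljkjk2hx7wl800any6465";
}
```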
Here is more information on the `output*` attributes, and what values they may be set to:
- [`outputHashMode`]{#adv-attr-outputHashMode}
This specifies how the files of a content-addressing derivation output are digested to produce a content address.
This works in conjunction with [`outputHashAlgo`](#adv-attr-outputHashAlgo).
Specifying one without the other is an error (unless [`outputHash`] is also specified and includes its own hash algorithm, as described below).
The `outputHashMode` attribute determines how the hash is computed.
It must be one of the following values:
- [`"flat"`](@docroot@/store/store-object/content-address.md#method-flat)
This is the default.
- [`"recursive"` or `"nar"`](@docroot@/store/store-object/content-address.md#method-nix-archive)
> **Compatibility**
>
> `"recursive"` is the traditional way of indicating this,
> and is supported since 2005 (virtually the entire history of Nix).
> `"nar"` is more clear, and consistent with other parts of Nix (such as the CLI),
> however support for it is only added in Nix version 2.21.
- [`"text"`](@docroot@/store/store-object/content-address.md#method-text)
> **Warning**
>
> The use of this method for derivation outputs is part of the [`dynamic-derivations`][xp-feature-dynamic-derivations] experimental feature.
- [`"git"`](@docroot@/store/store-object/content-address.md#method-git)
> **Warning**
>
> This method is part of the [`git-hashing`][xp-feature-git-hashing] experimental feature.
See [content-addressing store objects](@docroot@/store/store-object/content-address.md) for more information about the process this flag controls.
- [`outputHashAlgo`]{#adv-attr-outputHashAlgo}
This specifies the hash algorithm used to digest the [file system object] data of a content-addressing derivation output.
This works in conjunction with [`outputHashMode`](#adv-attr-outputHashMode).
Specifying one without the other is an error (unless [`outputHash`] is also specified and includes its own hash algorithm, as described below).
The `outputHashAlgo` attribute specifies the hash algorithm used to compute the hash.
It can currently be `"blake3"`, `"sha1"`, `"sha256"`, `"sha512"`, or `null`.
`outputHashAlgo` can only be `null` when `outputHash` follows the SRI format, because in that case the choice of hash algorithm is determined by `outputHash`.
- [`outputHash`]{#adv-attr-outputHash}\
  This specifies the output hash of the single output of a [fixed-output derivation].
The `outputHash` attribute must be a string containing the hash in either hexadecimal or "nix32" encoding, or following the format for integrity metadata as defined by [SRI](https://www.w3.org/TR/SRI/).
The "nix32" encoding is an adaptation of base-32 encoding.
> **Note**
>
> The [`convertHash`](@docroot@/language/builtins.md#builtins-convertHash) function shows how to convert between different encodings.
> The [`nix-hash` command](../command-ref/nix-hash.md) has information about obtaining the hash for some contents, as well as converting to and from encodings.
- [`__contentAddressed`]{#adv-attr-__contentAddressed}
> **Warning**
>
> This attribute is part of an [experimental feature](@docroot@/development/experimental-features.md).
>
> To use this attribute, you must enable the
> [`ca-derivations`][xp-feature-ca-derivations] experimental feature.
> For example, in [nix.conf](../command-ref/conf-file.md) you could add:
>
> ```
> extra-experimental-features = ca-derivations
> ```
This is a boolean with a default of `false`.
It determines whether the derivation is floating content-addressed.
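As a sketch of the encodings discussed above, a hash can be converted between them with `builtins.convertHash`; the hash value here is illustrative, not a real artifact's hash:

```nix
# Convert a sha256 hash from "nix32" encoding to an SRI string.
builtins.convertHash {
  hash = "1md7jsfd8pa45z73bz1kszpp01yw6x5ljkjk2hx7wl800any6465";
  hashAlgo = "sha256";
  toHashFormat = "sri"; # also accepts "base16", "nix32", "base64"
}
```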
[`__contentAddressed`]: #adv-attr-__contentAddressed
[`outputHash`]: #adv-attr-outputHash
[`outputHashAlgo`]: #adv-attr-outputHashAlgo
[`outputHashMode`]: #adv-attr-outputHashMode
[fixed-output derivation]: @docroot@/glossary.md#gloss-fixed-output-derivation
[file system object]: @docroot@/store/file-system-object.md
[store object]: @docroot@/store/store-object.md
[xp-feature-dynamic-derivations]: @docroot@/development/experimental-features.md#xp-feature-dynamic-derivations
[xp-feature-git-hashing]: @docroot@/development/experimental-features.md#xp-feature-git-hashing

View file

@@ -1,9 +1,10 @@
# Derivations

The most important built-in function is `derivation`, which is used to describe a single store-layer [store derivation].
Consult the [store chapter](@docroot@/store/derivation/index.md) for what a store derivation is;
this section just concerns how to create one from the Nix language.

This builtin function takes as input an attribute set, the attributes of which specify the inputs to the process.
It outputs an attribute set, and produces a [store derivation] as a side effect of evaluation.

[store derivation]: @docroot@/glossary.md#gloss-store-derivation
@@ -15,7 +16,7 @@ It outputs an attribute set, and produces a [store derivation] as a side effect
- [`name`]{#attr-name} ([String](@docroot@/language/types.md#type-string))

  A symbolic name for the derivation.
  See [derivation outputs](@docroot@/store/derivation/index.md#outputs) for what this affects.
[store path]: @docroot@/store/store-path.md [store path]: @docroot@/store/store-path.md
@@ -28,17 +29,12 @@ It outputs an attribute set, and produces a [store derivation] as a side effect
  > }
  > ```
  >
  > The derivation's path will be `/nix/store/<hash>-hello.drv`.
  > The [output](#attr-outputs) paths will be of the form `/nix/store/<hash>-hello[-<output>]`

- [`system`]{#attr-system} ([String](@docroot@/language/types.md#type-string))

  See [system](@docroot@/store/derivation/index.md#system).

  > **Example**
  >
@@ -68,7 +64,7 @@ It outputs an attribute set, and produces a [store derivation] as a side effect
- [`builder`]{#attr-builder} ([Path](@docroot@/language/types.md#type-path) | [String](@docroot@/language/types.md#type-string))

  See [builder](@docroot@/store/derivation/index.md#builder).

  > **Example**
  >
@@ -117,7 +113,7 @@ It outputs an attribute set, and produces a [store derivation] as a side effect
  Default: `[ ]`

  See [args](@docroot@/store/derivation/index.md#args).

  > **Example**
  >
@@ -239,77 +235,3 @@ It outputs an attribute set, and produces a [store derivation] as a side effect
  passed as an empty string.

<!-- FIXME: add a section on output attributes -->
## Builder execution
The [`builder`](#attr-builder) is executed as follows:
- A temporary directory is created under the directory specified by
`TMPDIR` (default `/tmp`) where the build will take place. The
current directory is changed to this directory.
- The environment is cleared and set to the derivation attributes, as
specified above.
- In addition, the following variables are set:
- `NIX_BUILD_TOP` contains the path of the temporary directory for
this build.
- Also, `TMPDIR`, `TEMPDIR`, `TMP`, `TEMP` are set to point to the
temporary directory. This is to prevent the builder from
accidentally writing temporary files anywhere else. Doing so
might cause interference by other processes.
- `PATH` is set to `/path-not-set` to prevent shells from
initialising it to their built-in default value.
- `HOME` is set to `/homeless-shelter` to prevent programs from
using `/etc/passwd` or the like to find the user's home
directory, which could cause impurity. Usually, when `HOME` is
set, it is used as the location of the home directory, even if
it points to a non-existent path.
- `NIX_STORE` is set to the path of the top-level Nix store
directory (typically, `/nix/store`).
- `NIX_ATTRS_JSON_FILE` & `NIX_ATTRS_SH_FILE` if `__structuredAttrs`
is set to `true` for the derivation. A detailed explanation of this
behavior can be found in the
[section about structured attrs](./advanced-attributes.md#adv-attr-structuredAttrs).
- For each output declared in `outputs`, the corresponding
environment variable is set to point to the intended path in the
Nix store for that output. Each output path is a concatenation
of the cryptographic hash of all build inputs, the `name`
attribute and the output name. (The output name is omitted if
it's `out`.)
- If an output path already exists, it is removed. Also, locks are
acquired to prevent multiple Nix instances from performing the same
build at the same time.
- A log of the combined standard output and error is written to
`/nix/var/log/nix`.
- The builder is executed with the arguments specified by the
attribute `args`. If it exits with exit code 0, it is considered to
have succeeded.
- The temporary directory is removed (unless the `-K` option was
specified).
- If the build was successful, Nix scans each output path for
references to input paths by looking for the hash parts of the input
paths. Since these are potential runtime dependencies, Nix registers
them as dependencies of the output paths.
- After the build, Nix sets the last-modified timestamp on all files
in the build result to 1 (00:00:01 1/1/1970 UTC), sets the group to
the default group, and sets the mode of the file to 0444 or 0555
(i.e., read-only, with execute permission enabled if the file was
originally executable). Note that possible `setuid` and `setgid`
bits are cleared. Setuid and setgid programs are not currently
supported by Nix. This is because the Nix archives used in
deployment have no concept of ownership information, and because it
makes the build result dependent on the user performing the build.

View file

@@ -71,8 +71,9 @@ Boxes are data structures, arrow labels are transformations.
| evaluate | | |
| | | | |
| V | | |
| .------------. | | |
| | derivation | | | .------------------. |
| | expression |----|-instantiate-|->| store derivation | |
| '------------' | | '------------------' |
| | | | |
| | | realise |

View file

@@ -22,9 +22,9 @@ Rather than writing
"--with-freetype2-library=" + freetype + "/lib"
```
(where `freetype` is a [derivation expression]), you can instead write

[derivation expression]: @docroot@/glossary.md#gloss-derivation-expression

```nix
"--with-freetype2-library=${freetype}/lib"
@@ -148,7 +148,7 @@ An expression that is interpolated must evaluate to one of the following:
  - `__toString` must be a function that takes the attribute set itself and returns a string
  - `outPath` must be a string

  This includes [derivation expressions](./derivations.md) or [flake inputs](@docroot@/command-ref/new-cli/nix3-flake.md#flake-inputs) (experimental).

A string interpolates to itself.
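A minimal sketch of the `outPath` case (the store path here is made up for illustration):

```nix
# Any attribute set with a string outPath can be interpolated,
# which is how derivation expressions interpolate to their output paths.
let
  freetype = {
    outPath = "/nix/store/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-freetype";
  };
in
  "--with-freetype2-library=${freetype}/lib"
```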

View file

@@ -1,6 +1,8 @@
# Derivation "ATerm" file format

For historical reasons, [store derivations][store derivation] are stored on-disk in [ATerm](https://homepages.cwi.nl/~daybuild/daily-books/technology/aterm-guide/aterm-guide.html) format.

## The ATerm format used

Derivations are serialised in one of the following formats:
@@ -17,3 +19,20 @@ Derivations are serialised in one of the following formats:
The only `version-string`s that are in use today are for [experimental features](@docroot@/development/experimental-features.md):

- `"xp-dyn-drv"` for the [`dynamic-derivations`](@docroot@/development/experimental-features.md#xp-feature-dynamic-derivations) experimental feature.
## Use for encoding to store object
When a derivation is encoded to a [store object], we make the following choices:
- The store path name is the derivation name with `.drv` suffixed at the end
Indeed, the ATerm format above does *not* contain the name of the derivation, on the assumption that a store path will also be provided out-of-band.
- The derivation is content-addressed using the ["Text" method] of content-addressing derivations
Currently we always encode derivations to store objects using the ATerm format (and the previous two choices),
but we reserve the option to encode new sorts of derivations differently in the future.
[store derivation]: @docroot@/glossary.md#gloss-store-derivation
[store object]: @docroot@/glossary.md#gloss-store-object
["Text" method]: @docroot@/store/store-object/content-address.md#method-text

View file

@@ -32,6 +32,7 @@ is a JSON object with the following fields:
  For an output which will be [content addresed], the name of the hash algorithm used.
  Valid algorithm strings are:

  - `blake3`
  - `md5`
  - `sha1`
  - `sha256`

View file

@@ -35,10 +35,10 @@ In other words, the same store object residing in different store could have dif
* `deriver`:

  If known, the path to the [store derivation] from which this store object was produced.
  Otherwise `null`.

  [store derivation]: @docroot@/glossary.md#gloss-store-derivation

* `registrationTime` (optional):

View file

@@ -53,7 +53,7 @@ where
  method of content addressing store objects,
  if the hash algorithm is [SHA-256].
  Just like in the "Text" case, we can have the store objects referenced by their paths.
  Additionally, we can have an optional `:self` label to denote self-reference.

- ```ebnf
  | "output:" id

View file

@@ -0,0 +1,66 @@
# Release 2.27.0 (2025-03-03)
- `inputs.self.submodules` flake attribute [#12421](https://github.com/NixOS/nix/pull/12421)
Flakes in Git repositories can now declare that they need Git submodules to be enabled:
```nix
{
  inputs.self.submodules = true;
}
```
Thus, it's no longer needed for the caller of the flake to pass `submodules = true`.
- Git LFS support [#10153](https://github.com/NixOS/nix/pull/10153) [#12468](https://github.com/NixOS/nix/pull/12468)
The Git fetcher now supports Large File Storage (LFS). This can be enabled by passing the attribute `lfs = true` to the fetcher, e.g.
```console
nix flake prefetch 'git+ssh://git@github.com/Apress/repo-with-large-file-storage.git?lfs=1'
```
A flake can also declare that it requires LFS to be enabled:
```nix
{
  inputs.self.lfs = true;
}
```
Author: [**@b-camacho**](https://github.com/b-camacho), [**@kip93**](https://github.com/kip93)
- Handle the case where a chroot store is used and some inputs are in the "host" `/nix/store` [#12512](https://github.com/NixOS/nix/pull/12512)
The evaluator now presents a "union" filesystem view of the `/nix/store` in the host and the chroot.
This change also removes some hacks that broke `builtins.{path,filterSource}` in chroot stores [#11503](https://github.com/NixOS/nix/issues/11503).
- `nix flake prefetch` now has a `--out-link` option [#12443](https://github.com/NixOS/nix/pull/12443)
- Set `FD_CLOEXEC` on sockets created by curl [#12439](https://github.com/NixOS/nix/pull/12439)
Curl created sockets without setting `FD_CLOEXEC`/`SOCK_CLOEXEC`. This could previously cause connections to remain open forever when using commands like `nix shell`. This change sets the `FD_CLOEXEC` flag using a `CURLOPT_SOCKOPTFUNCTION` callback.
# Contributors
This release was made possible by the following 21 contributors:
- Aiden Fox Ivey [**(@aidenfoxivey)**](https://github.com/aidenfoxivey)
- Ben Millwood [**(@bmillwood)**](https://github.com/bmillwood)
- Brian Camacho [**(@b-camacho)**](https://github.com/b-camacho)
- Brian McKenna [**(@puffnfresh)**](https://github.com/puffnfresh)
- Eelco Dolstra [**(@edolstra)**](https://github.com/edolstra)
- Fabian Möller [**(@B4dM4n)**](https://github.com/B4dM4n)
- Illia Bobyr [**(@ilya-bobyr)**](https://github.com/ilya-bobyr)
- Ivan Trubach [**(@tie)**](https://github.com/tie)
- John Ericson [**(@Ericson2314)**](https://github.com/Ericson2314)
- Jörg Thalheim [**(@Mic92)**](https://github.com/Mic92)
- Leandro Emmanuel Reina Kiperman [**(@kip93)**](https://github.com/kip93)
- MaxHearnden [**(@MaxHearnden)**](https://github.com/MaxHearnden)
- Philipp Otterbein
- Robert Hensing [**(@roberth)**](https://github.com/roberth)
- Sandro [**(@SuperSandro2000)**](https://github.com/SuperSandro2000)
- Sergei Zimmerman [**(@xokdvium)**](https://github.com/xokdvium)
- Silvan Mosberger [**(@infinisil)**](https://github.com/infinisil)
- Someone [**(@SomeoneSerge)**](https://github.com/SomeoneSerge)
- Steve Walker [**(@stevalkr)**](https://github.com/stevalkr)
- bcamacho2 [**(@bcamacho2)**](https://github.com/bcamacho2)
- silvanshade [**(@silvanshade)**](https://github.com/silvanshade)
- tomberek [**(@tomberek)**](https://github.com/tomberek)

View file

@@ -0,0 +1,100 @@
# Building
## Normalizing derivation inputs
- Each input must be [realised] prior to building the derivation in question.
[realised]: @docroot@/glossary.md#gloss-realise
- Once this is done, the derivation is *normalized*, replacing each input deriving path with its store path, which we now know from realising the input.
## Builder Execution
The [`builder`](./derivation/index.md#builder) is executed as follows:
- A temporary directory is created under the directory specified by
`TMPDIR` (default `/tmp`) where the build will take place. The
current directory is changed to this directory.
- The environment is cleared and set to the derivation attributes, as
specified above.
- In addition, the following variables are set:
- `NIX_BUILD_TOP` contains the path of the temporary directory for
this build.
- Also, `TMPDIR`, `TEMPDIR`, `TMP`, `TEMP` are set to point to the
temporary directory. This is to prevent the builder from
accidentally writing temporary files anywhere else. Doing so
might cause interference by other processes.
- `PATH` is set to `/path-not-set` to prevent shells from
initialising it to their built-in default value.
- `HOME` is set to `/homeless-shelter` to prevent programs from
using `/etc/passwd` or the like to find the user's home
directory, which could cause impurity. Usually, when `HOME` is
set, it is used as the location of the home directory, even if
it points to a non-existent path.
- `NIX_STORE` is set to the path of the top-level Nix store
directory (typically, `/nix/store`).
- `NIX_ATTRS_JSON_FILE` & `NIX_ATTRS_SH_FILE` if `__structuredAttrs`
is set to `true` for the derivation. A detailed explanation of this
behavior can be found in the
[section about structured attrs](@docroot@/language/advanced-attributes.md#adv-attr-structuredAttrs).
- For each output declared in `outputs`, the corresponding
environment variable is set to point to the intended path in the
Nix store for that output. Each output path is a concatenation
of the cryptographic hash of all build inputs, the `name`
attribute and the output name. (The output name is omitted if
it's `out`.)
- If an output path already exists, it is removed. Also, locks are
acquired to prevent multiple [Nix instances][Nix instance] from performing the same
build at the same time.
- A log of the combined standard output and error is written to
`/nix/var/log/nix`.
- The builder is executed with the arguments specified by the
attribute `args`. If it exits with exit code 0, it is considered to
have succeeded.
- The temporary directory is removed (unless the `-K` option was
specified).
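As a sketch of the environment described above, a hypothetical derivation's builder could record a few of these variables in its (default) `out` output; the `system` and builder path here are assumptions for illustration:

```nix
# Illustrative only: system and builder path are assumptions.
builtins.derivation {
  name = "env-demo";
  system = "x86_64-linux";
  builder = "/bin/sh";
  # The builder runs in the temporary build directory with NIX_BUILD_TOP,
  # NIX_STORE, and $out (for the default output) set as described above.
  args = [ "-c" "echo $NIX_BUILD_TOP $NIX_STORE > $out" ];
}
```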
## Processing outputs
If the builder exited successfully, the following steps happen in order to turn the output directories left behind by the builder into proper store objects:
- **Normalize the file permissions**
Nix sets the last-modified timestamp on all files
in the build result to 1 (00:00:01 1/1/1970 UTC), sets the group to
the default group, and sets the mode of the file to 0444 or 0555
(i.e., read-only, with execute permission enabled if the file was
originally executable). Any possible `setuid` and `setgid`
bits are cleared.
> **Note**
>
> Setuid and setgid programs are not currently supported by Nix.
> This is because the Nix archives used in deployment have no concept of ownership information,
> and because it makes the build result dependent on the user performing the build.
- **Calculate the references**
Nix scans each output path for
references to input paths by looking for the hash parts of the input
paths. Since these are potential runtime dependencies, Nix registers
them as dependencies of the output paths.
Nix also scans for references to other outputs' paths in the same way, because outputs are allowed to refer to each other.
If the outputs' references to each other form a cycle, this is an error, because the references of store objects must be acyclic.
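The two post-processing steps above can be sketched in pseudocode. This is an illustrative model, not Nix's actual implementation; the function names, store paths, and hash lengths are hypothetical.

```typescript
type StorePath = string;

// Permission normalization: mode becomes 0444, or 0555 if any execute
// bit was set; setuid/setgid bits are implicitly dropped.
function normalizeMode(mode: number): number {
  return (mode & 0o111) !== 0 ? 0o555 : 0o444;
}

// Extract the hash part of a store path like "/nix/store/<hash>-<name>":
// the characters between the last '/' and the first '-'.
function hashPart(path: StorePath): string {
  const base = path.slice(path.lastIndexOf("/") + 1);
  return base.slice(0, base.indexOf("-"));
}

// Reference scanning: an output refers to an input (or sibling output)
// if that path's hash part occurs anywhere in the output's data.
function scanForReferences(
  outputData: string,
  candidates: StorePath[],
): StorePath[] {
  return candidates.filter((p) => outputData.includes(hashPart(p)));
}
```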
[Nix instance]: @docroot@/glossary.md#gloss-nix-instance
# Store Derivation and Deriving Path
Besides functioning as a [content-addressed store], the Nix store layer works as a [build system].
Other systems (like Git or IPFS) also store and transfer immutable data, but they don't concern themselves with *how* that data was created.
This is where Nix distinguishes itself.
*Derivations* represent individual build steps, and *deriving paths* are needed to refer to the *outputs* of those build steps before they are built.
<!-- The two concepts need to be introduced together because, as described below, each depends on the other. -->
## Store Derivation {#store-derivation}
A derivation is a specification for running an executable on precisely defined input to produce one or more [store objects][store object].
These store objects are known as the derivation's *outputs*.
Derivations are *built*, in which case the process is spawned according to the spec and, when it exits, is required to leave behind files which will (after post-processing) become the outputs of the derivation.
This process is described in detail in [Building](@docroot@/store/building.md).
<!--
Some of these things are described directly below, but we envision with more material the exposition will probably want to migrate to separate pages beneath this.
See outputs spec for an example of this one that migrated to its own page.
-->
A derivation consists of:
- A name
- An [inputs specification][inputs], a set of [deriving paths][deriving path]
- An [outputs specification][outputs], specifying which outputs should be produced, and various metadata about them.
- The ["system" type][system] (e.g. `x86_64-linux`) where the executable is to run.
- The [process creation fields]: to spawn the arbitrary process which will perform the build step.
[store derivation]: #store-derivation
[inputs]: #inputs
[input]: #inputs
[outputs]: ./outputs/index.md
[output]: ./outputs/index.md
[process creation fields]: #process-creation-fields
[builder]: #builder
[args]: #args
[env]: #env
[system]: #system
[content-addressed store]: @docroot@/glossary.md#gloss-content-addressed-store
[build system]: @docroot@/glossary.md#gloss-build-system
### Referencing derivations {#derivation-path}
Derivations are always referred to by the [store path] of the store object they are encoded to.
See the [encoding section](#derivation-encoding) for more details on how this encoding works, and thus exactly what store path we would end up with for a given derivation.
The store path of the store object which encodes a derivation is often called a *derivation path* for brevity.
## Deriving path {#deriving-path}
Deriving paths are a way to refer to [store objects][store object] that may or may not yet be [realised][realise].
There are two forms:
- [*constant*]{#deriving-path-constant}: just a [store path].
It can be made [valid][validity] by copying it into the store: from the evaluator, command line interface or another store.
- [*output*]{#deriving-path-output}: a pair of a [store path] to a [store derivation] and an [output] name.
In pseudo code:
```typescript
type OutputName = String;
type ConstantPath = {
path: StorePath;
};
type OutputPath = {
drvPath: StorePath;
output: OutputName;
};
type DerivingPath = ConstantPath | OutputPath;
```
Deriving paths are necessary because, in general and particularly for [content-addressing derivations][content-addressing derivation], the [store path] of an [output] is not known in advance.
We can use an output deriving path to refer to such an output, instead of the store path which we do not yet know.
[deriving path]: #deriving-path
[validity]: @docroot@/glossary.md#gloss-validity
## Parts of a derivation
A derivation is constructed from the parts documented in the following subsections.
### Inputs {#inputs}
The inputs are a set of [deriving paths][deriving path], referring to all store objects needed in order to perform this build step.
The [process creation fields] will presumably include many [store paths][store path]:
- The path to the executable normally starts with a store path
- The arguments and environment variables likely contain many other store paths.
But rather than somehow scanning all the other fields for inputs, Nix requires that all inputs be explicitly collected in the inputs field. It is instead the responsibility of the creator of a derivation (e.g. the evaluator) to ensure that every store object referenced in another field (e.g. referenced by store path) is included in this inputs field.
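As a sketch of that responsibility, a well-formed derivation could be checked by scanning the process creation fields for known store paths not covered by the declared inputs. The names and types here are illustrative, not Nix's API:

```typescript
type StorePath = string;

interface ProcessCreationFields {
  builder: string;
  args: string[];
  env: Record<string, string>;
}

// Return every known store path that appears textually in the process
// creation fields but is not among the declared inputs.
function missingInputs(
  fields: ProcessCreationFields,
  declaredInputs: Set<StorePath>,
  knownPaths: StorePath[],
): StorePath[] {
  const text = [fields.builder, ...fields.args, ...Object.values(fields.env)].join("\n");
  return knownPaths.filter((p) => text.includes(p) && !declaredInputs.has(p));
}
```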
### System {#system}
The system type on which the [`builder`](#builder) executable is meant to be run.
A necessary condition for Nix to schedule a given derivation on some [Nix instance] is for the "system" of that derivation to match that instance's [`system` configuration option] or [`extra-platforms` configuration option].
By putting the `system` in each derivation, Nix allows *heterogeneous* build plans, where not all steps can be run on the same machine or same sort of machine.
Nix can schedule builds such that it automatically builds on other platforms by [forwarding build requests](@docroot@/advanced-topics/distributed-builds.md) to other Nix instances.
[`system` configuration option]: @docroot@/command-ref/conf-file.md#conf-system
[`extra-platforms` configuration option]: @docroot@/command-ref/conf-file.md#conf-extra-platforms
[content-addressing derivation]: @docroot@/glossary.md#gloss-content-addressing-derivation
[realise]: @docroot@/glossary.md#gloss-realise
[store object]: @docroot@/store/store-object.md
[store path]: @docroot@/store/store-path.md
### Process creation fields {#process-creation-fields}
These are the three fields which describe how to spawn the process which (along with any of its own child processes) will perform the build.
You may note that this has everything needed for an `execve` system call.
#### Builder {#builder}
This is the path to an executable that will perform the build and produce the [outputs].
#### Arguments {#args}
Command-line arguments to be passed to the [`builder`](#builder) executable.
Note that these are the arguments after the first argument.
The first argument passed to the `builder` will be the value of `builder`, as per the usual convention on Unix.
See [Wikipedia](https://en.wikipedia.org/wiki/Argv) for details.
#### Environment Variables {#env}
Environment variables which will be passed to the [builder](#builder) executable.
### Placeholders
Placeholders are opaque values used within the [process creation fields] to refer to [store objects][store object] for which we don't yet know [store path]s.
They are strings in the form `/<hash>` that are embedded anywhere within the strings of those fields, and we are [considering](https://github.com/NixOS/nix/issues/12361) adding store-path-like placeholders.
> **Note**
>
> Output deriving paths exist to solve the same problem as placeholders --- that is, referring to store objects for which we don't yet know a store path.
> They also have a string syntax with `^`, [described in the encoding section](#deriving-path-encoding).
> We could use that syntax instead of `/<hash>` for placeholders, but its human-legibility would cause problems.
There are two types of placeholder, corresponding to the two cases where this problem arises:
- [Output placeholder]{#output-placeholder}:
This is a placeholder for a derivation's own output.
- [Input placeholder]{#input-placeholder}:
This is a placeholder to a derivation's non-constant [input],
i.e. an input that is an [output deriving path](#deriving-path-output).
> **Explanation**
>
> In general, we need to [realise][realise] a [store object] in order to be sure to have a store object for it.
> But for these two cases this is either impossible or impractical:
>
> - In the output case this is impossible:
>
> We cannot build the output until we have a correct derivation, and we cannot have a correct derivation (without using placeholders) until we have the output path.
>
> - In the input case this is impractical:
>
> If we always build a dependency first, and then refer to its output by store path, we would lose the ability for a derivation graph to describe an entire build plan consisting of multiple build steps.
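To illustrate the mechanics, here is a sketch of how opaque placeholder strings get rewritten once the real store paths are known. This is not Nix's actual placeholder construction (which derives the hash from the derivation); the placeholder value below is a hypothetical stand-in.

```typescript
// Rewrite every occurrence of each placeholder string in the given
// fields with the store path it resolved to. The placeholder values
// themselves are opaque strings of the form "/<hash>".
function resolvePlaceholders(
  fields: Record<string, string>,
  resolved: Map<string, string>, // placeholder -> final store path
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(fields)) {
    let s = value;
    for (const [placeholder, path] of resolved) {
      s = s.split(placeholder).join(path);
    }
    out[name] = s;
  }
  return out;
}
```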
## Encoding
### Derivation {#derivation-encoding}
There are two formats, documented separately:
- The legacy ["ATerm" format](@docroot@/protocols/derivation-aterm.md)
- The experimental, currently under development and changing [JSON format](@docroot@/protocols/json/derivation.md)
Every derivation has a canonical choice of encoding used to serialize it to a store object.
This ensures that there is a canonical [store path] used to refer to the derivation, as described in [Referencing derivations](#derivation-path).
> **Note**
>
> Currently, the canonical encoding for every derivation is the "ATerm" format,
> but this is subject to change for types of derivations which are not yet stable.
Regardless of the format used, when serializing a derivation to a store object, that store object will be content-addressed.
In the common case, the inputs of a derivation are either:
- [constant deriving paths](#deriving-path-constant) for content-addressed source objects, which are "initial inputs" rather than the outputs of some other derivation
- the outputs of other derivations
If those other derivations *also* abide by this common case (and likewise for transitive inputs), then the entire closure of the serialized derivation will be content-addressed.
### Deriving Path {#deriving-path-encoding}
- *constant*
Constant deriving paths are encoded simply as the underlying store path is.
Thus, we see that every encoded store path is also a valid encoded (constant) deriving path.
- *output*
Output deriving paths are encoded by
- encoding of a store path referring to a derivation
- a `^` separator (or `!` in some legacy contexts)
- the name of an output of the previously referred derivation
> **Example**
>
> ```
> /nix/store/lxrn8v5aamkikg6agxwdqd1jz7746wz4-firefox-98.0.2.drv^out
> ```
>
> This parses like so:
>
> ```
> /nix/store/lxrn8v5aamkikg6agxwdqd1jz7746wz4-firefox-98.0.2.drv^out
> |------------------------------------------------------------| |-|
> store path (usual encoding) output name
> |--|
> note the ".drv"
> ```
## Extending the model to be higher-order
**Experimental feature**: [`dynamic-derivations`](@docroot@/development/experimental-features.md#xp-feature-dynamic-derivations)
So far, we have used store paths to refer to derivations.
That works because we've implicitly assumed that all derivations are created *statically* --- created by some mechanism out of band, and then manually inserted into the store.
But what if derivations could also be created dynamically within Nix?
In other words, what if derivations could be the outputs of other derivations?
> **Note**
>
> In the parlance of "Build Systems à la carte", we are generalizing the Nix store layer to be a "Monadic" instead of "Applicative" build system.
How should we refer to such derivations?
A deriving path works, the same as how we refer to other derivation outputs.
But what about a dynamic derivation's output?
(i.e. how do we refer to the output of a derivation, which is itself an output of a derivation?)
For that we need to generalize the definition of deriving path, replacing the store path used to refer to the derivation with a nested deriving path:
```diff
type OutputPath = {
- drvPath: StorePath;
+ drvPath: DerivingPath;
output: OutputName;
};
```
Now, the `drvPath` field of `OutputPath` is itself a `DerivingPath` instead of a `StorePath`.
With that change, here is the updated definition:
```typescript
type OutputName = String;
type ConstantPath = {
path: StorePath;
};
type OutputPath = {
drvPath: DerivingPath;
output: OutputName;
};
type DerivingPath = ConstantPath | OutputPath;
```
Under this extended model, `DerivingPath`s are thus inductively built up from a root `ConstantPath`, wrapped with zero or more outer `OutputPath`s.
### Encoding {#deriving-path-encoding}
The encoding is adjusted in the natural way, encoding the `drvPath` field recursively using the same deriving path encoding.
The result of this is that it is possible to have a chain of `^<output-name>` at the end of the final string, as opposed to just a single one.
> **Example**
>
> ```
> /nix/store/lxrn8v5aamkikg6agxwdqd1jz7746wz4-firefox-98.0.2.drv^foo.drv^bar.drv^out
> |----------------------------------------------------------------------------| |-|
> inner deriving path (usual encoding) output name
> |--------------------------------------------------------------------| |-----|
> even more inner deriving path (usual encoding) output name
> |------------------------------------------------------------| |-----|
> innermost constant store path (usual encoding) output name
> ```
[Nix instance]: @docroot@/glossary.md#gloss-nix-instance
# Content-addressing derivation outputs
The content-addressing of an output only depends on that store object itself, not on any external information (such as how it was made, when it was made, etc.).
As a consequence, a store object will be content-addressed the same way regardless of whether it was manually inserted into the store, outputted by some derivation, or outputted by some other derivation.
The output spec for a content-addressed output must contain the following field:
- *method*: how the data of the store object is digested into a content address
The possible choices of *method* are described in the [section on content-addressing store objects](@docroot@/store/store-object/content-address.md).
Given the method, the output's name (computed from the derivation name and output spec mapping as described above), and the data of the store object, the output's store path will be computed as described in that section.
## Fixed-output content-addressing {#fixed}
In this case the content address is *fixed* in advance by the derivation itself.
In other words, when the derivation has finished [building](@docroot@/store/building.md), and the provisional output's content address is computed as part of the process to turn it into a *bona fide* store object, the calculated content address must match that given in the derivation, or the build of that derivation will be deemed a failure.
The output spec for an output with a fixed content address additionally contains:
- *hash*, the hash expected from digesting the store object's file system objects.
This hash may be of a freely-chosen hash algorithm (that Nix supports).
> **Design note**
>
> In principle, the output spec could also specify the references the store object should have, since the references and file system objects are equally parts of a content-addressed store object proper that contribute to its content address.
> However, at this time, this is not done, because all fixed content-addressed outputs are required to have no references (including no self-reference).
>
> Also in principle, rather than specifying the references and file system object data with separate hashes, a single hash that constrains both could be used.
> This could be done with the final store path's digest, or better yet, the hash that will become the store path's digest before it is truncated.
>
> These possible future extensions are included to elucidate the core property of fixed-output content addressing --- that all parts of the output must be cryptographically fixed with one or more hashes --- separate from the particulars of the currently-supported store object content-addressing schemes.
### Design rationale
What is the purpose of fixing an output's content address in advance?
In abstract terms, the answer is carefully controlled impurity.
Unlike a regular derivation, the [builder] executable of a derivation that produces fixed outputs has access to the network.
The outputs' guaranteed content-addresses are supposed to mitigate the risk of the builder being given these capabilities;
regardless of what the builder does *during* the build, it cannot influence downstream builds in unanticipated ways because all information it passed downstream flows through the outputs whose content-addresses are fixed.
[builder]: @docroot@/store/derivation/index.md#builder
In concrete terms, the purpose of this feature is fetching fixed input data like source code from the network.
For example, consider a family of "fetch URL" derivations.
These derivations download files from a given URL.
To ensure that the downloaded file has not been modified, each derivation must also specify a cryptographic hash of the file.
For example,
```jsonc
{
"outputs": {
"out": {
"method": "nar",
"hashAlgo": "sha256",
"hash": "1md7jsfd8pa45z73bz1kszpp01yw6x5ljkjk2hx7wl800any6465",
},
},
"env": {
"url": "http://ftp.gnu.org/pub/gnu/hello/hello-2.1.1.tar.gz"
// ...
},
// ...
}
```
It sometimes happens that the URL of the file changes,
e.g., because servers are reorganised or no longer available.
In these cases, we then must update the call to `fetchurl`, e.g.,
```diff
"env": {
- "url": "http://ftp.gnu.org/pub/gnu/hello/hello-2.1.1.tar.gz"
+ "url": "ftp://ftp.nluug.nl/pub/gnu/hello/hello-2.1.1.tar.gz"
// ...
},
```
If a `fetchurl` derivation's outputs were [input-addressed][input addressing], the output paths of the derivation and of *all derivations depending on it* would change.
For instance, if we were to change the URL of the Glibc source distribution in Nixpkgs (a package on which, on Linux, almost all other packages depend), massive rebuilds would be needed.
This is unfortunate for a change which we know cannot have a real effect as it propagates upwards through the dependency graph.
For content-addressed outputs (fixed or floating), on the other hand, the outputs' store path only depends on the derivation's name, data, and the `method` of the outputs' specs.
The rest of the derivation is ignored for the purpose of computing the output path.
> **History Note**
>
> Fixed content-addressing is especially important both today and historically as the *only* form of content-addressing that is stabilized.
> This is why the rationale above contrasts it with [input addressing].
## (Floating) Content-Addressing {#floating}
> **Warning**
> This is part of an [experimental feature](@docroot@/development/experimental-features.md).
>
> To use this type of output addressing, you must enable the
> [`ca-derivations`][xp-feature-ca-derivations] experimental feature.
> For example, in [nix.conf](@docroot@/command-ref/conf-file.md) you could add:
>
> ```
> extra-experimental-features = ca-derivations
> ```
With this experimental feature enabled, derivation outputs can also be content-addressed *without* fixing in the output spec what the outputs' content address must be.
### Purity
Because the derivation output is not fixed (just like with [input addressing]), the [builder] is not given any impure capabilities [^purity].
> **Configuration note**
>
> Strictly speaking, the extent to which sandboxing and deprivileging is possible varies with the environment Nix is running in.
> Nix's configuration settings indicate what level of sandboxing is required or enabled.
> Builds of derivations will fail if they request an absence of sandboxing which is not allowed.
> Builds of derivations will also fail if the level of sandboxing specified in the configuration exceeds what is possible in the given environment.
>
> (The "environment", in this case, consists of attributes such as the operating system Nix runs atop, along with the operating-system-specific privileges that Nix has been granted.
> Because of how conventional operating systems like macOS, Linux, etc. work, granting builders *fewer* privileges may ironically require that Nix be run with *more* privileges.)
That said, derivations producing floating content-addressed outputs may declare their builders as impure (like the builders of derivations producing fixed outputs).
This is provisionally supported as part of the [`impure-derivations`][xp-feature-impure-derivations] experimental feature.
### Compatibility negotiation
Any derivation producing a floating content-addressed output implicitly requires the `ca-derivations` [system feature](@docroot@/command-ref/conf-file.md#conf-system-features).
This prevents scheduling the building of the derivation on a machine without the experimental feature enabled.
Even once the experimental feature is stabilized, this is still useful in order to allow using remote builders running older versions of Nix, or alternative implementations that do not support floating content addressing.
### Determinism
In the earlier [discussion of how self-references are handled when content-addressing store objects](@docroot@/store/store-object/content-address.html#self-references), it was pointed out that methods of producing store objects ought to be deterministic regardless of the choice of provisional store path.
For store objects produced by manually inserting into the store, the "method of production" is an informal concept --- formally, Nix has no idea where the store object came from, and content-addressing is crucial in order to ensure that the store object is *intrinsically* tamper-proof.
But for store objects produced by a derivation, the method of production is quite formal --- the whole point of derivations is to be a formal notion of building, after all.
In this case, we can elevate this informal property to a formal one.
A *deterministic* content-addressing derivation should produce outputs with the same content addresses:
1. Every time the builder is run
This is because either the builder is completely sandboxed, or because any remaining impurities that leak inside the build sandbox are ignored by the builder and do not influence its behavior.
2. Regardless of the choice of any provisional output paths
Provisional store paths must be chosen for any output that has a self-reference.
The choice of provisional store path can be thought of as an impurity, since it is an arbitrary choice.
If provisional output paths are deterministically chosen, we are in the first branch of part (1).
The builder may vary the data it produces based on them in arbitrary ways, but this gets us closer to [input addressing].
Deterministically choosing the provisional path may be considered "complete sandboxing" by removing an impurity, but this is unsatisfactory.
<!--
TODO
(Both these points will be expanded-upon below.)
-->
If provisional output paths are randomly chosen, we are in the second branch of part (1).
The builder *must* not let the random input affect the final outputs it produces, and multiple builds may be performed and then compared in order to ensure that this is in fact the case.
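That comparison can be sketched as follows; `build` and `contentAddress` are hypothetical stand-ins for the real machinery, and the provisional paths are made-up examples:

```typescript
// Run the builder twice with different (randomly chosen) provisional
// output paths, and check that the resulting content addresses agree.
function isDeterministic(
  build: (provisionalPath: string) => Uint8Array,
  contentAddress: (data: Uint8Array) => string,
): boolean {
  const a = contentAddress(build("/nix/store/aaaaaaaa-out"));
  const b = contentAddress(build("/nix/store/bbbbbbbb-out"));
  return a === b;
}
```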
### Floating versus Fixed
While the distinction between content- and input-addressing is one of *mechanism*, the distinction between fixed and floating content addressing is more one of *policy*.
A fixed output that passes its content address check is just like a floating output.
It is only in the potential for that check to fail that they are different.
> **Design Note**
>
> In a future world where floating content-addressing is also stable, we in principle no longer need separate [fixed](#fixed) content-addressing.
> Instead, we could always use floating content-addressing, and separately assert the precise content address of a given store object to be used as an input (of another derivation).
> A stand-alone assertion object of this sort is not yet implemented, but its possible creation is tracked in [Issue #11955](https://github.com/NixOS/nix/issues/11955).
>
> In the current version of Nix, fixed outputs which fail their hash check are still registered as valid store objects, just not registered as outputs of the derivation which produced them.
> This is an optimization that means if the wrong output hash is specified in a derivation, and then the derivation is recreated with the right output hash, the derivation does not need to be rebuilt --- avoiding downloading potentially large amounts of data twice.
> This optimisation prefigures the design above:
> If the output hash assertion were moved outside the derivation itself, Nix could not only register that outputted store object like today, but could also make note that the derivation did in fact successfully download some data.
> For example, for the "fetch URL" example above, making such a note is tantamount to recording what data is available at the time of download at the given URL.
> It would only be when Nix subsequently tries to build something with that (refining our example) downloaded source code that Nix would be forced to check the output hash assertion, preventing it from e.g. building compromised malware.
>
> Recapping, Nix would
>
> 1. successfully download data
> 2. insert that data into the store
> 3. associate (presumably with some sort of expiration policy) the downloaded data with the derivation that downloaded it
>
> But only use the downloaded store object in subsequent derivations that depended upon the assertion if the assertion passed.
>
> This possible future extension is included to illustrate this distinction.
[input addressing]: ./input-address.md
[xp-feature-ca-derivations]: @docroot@/development/experimental-features.md#xp-feature-ca-derivations
[xp-feature-git-hashing]: @docroot@/development/experimental-features.md#xp-feature-git-hashing
[xp-feature-impure-derivations]: @docroot@/development/experimental-features.md#xp-feature-impure-derivations
# Derivation Outputs and Types of Derivations
As stated on the [main page on derivations](../index.md#store-derivation),
a derivation produces [store objects], which are known as the *outputs* of the derivation.
Indeed, the entire point of derivations is to produce these outputs, and to reliably and reproducibly produce them each time the derivation is run.
One of the parts of a derivation is its *outputs specification*, which specifies certain information about the outputs the derivation produces when run.
The outputs specification is a map from names to specifications for individual outputs.
## Output Names {#outputs}
Output names can be any string which is also a valid [store path] name.
The name mapped to each output specification is not actually the name of the output store object itself.
In the general case, the output store object has the name `derivationName + "-" + outputSpecName`.
However, an output spec named `out` describes an output store object whose name is just the derivation name.
> **Example**
>
> A derivation is named `hello`, and has two outputs, `out`, and `dev`
>
> - The derivation's path will be: `/nix/store/<hash>-hello.drv`.
>
> - The store path of `out` will be: `/nix/store/<hash>-hello`.
>
> - The store path of `dev` will be: `/nix/store/<hash>-hello-dev`.
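The naming rule can be written down as a one-line function (a sketch; the function name is hypothetical):

```typescript
// An output spec named "out" is special: the store object's name is
// just the derivation name, with no suffix.
function outputStoreObjectName(drvName: string, outputSpecName: string): string {
  return outputSpecName === "out" ? drvName : `${drvName}-${outputSpecName}`;
}
```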
The outputs of the derivation are the [store objects][store object] it is obligated to produce.
> **Note**
>
> The formal terminology here is somewhat at odds with everyday communication in the Nix community today.
> "output" in casual usage tends to refer either to the actual output store object, or to the notional output spec, depending on context.
>
> For example "hello's `dev` output" means the store object referred to by the store path `/nix/store/<hash>-hello-dev`.
> It is unusual to call this the "`hello-dev` output", even though `hello-dev` is the actual name of that store object.
## Types of output addressing
The main information contained in an output specification is how the derivation output is addressed.
In particular, the specification decides:
- whether the output is [content-addressed](./content-address.md) or [input-addressed](./input-address.md)
- if the output is content-addressed, how it is content-addressed
- if the output is content-addressed, [what its content address is](./content-address.md#fixed-content-addressing) (and thus what its [store path] is)
## Types of derivations
The sections on each type of derivation output addressing ended up discussing other attributes of the derivation besides its outputs, such as purity, scheduling, determinism, etc.
This is no coincidence: the type of a derivation is in fact one-for-one with the type of its outputs:
- A derivation that produces *xyz-addressed* outputs is an *xyz-addressing* derivation.
The rules for this are fairly concise:
- All the outputs must be of the same type / use the same addressing
- The derivation must have at least one output
- Additionally, if the outputs are fixed content-addressed, there must be exactly one output, whose specification is mapped from the name `out`.
(The name `out` is special, according to the rules described above.
Having only one output and calling its specification `out` means the single output is effectively anonymous; the store path just has the derivation name.)
(This is an arbitrary restriction that could be lifted.)
- The output is either *fixed* or *floating*, indicating whether its store path is known prior to building it.
- With fixed content-addressing it is fixed.
> A *fixed content-addressing* derivation is also called a *fixed-output derivation*, since that is the only currently-implemented form of fixed-output addressing.
- With input-addressing it is also fixed, since the output path is computed from the derivation in advance.
- With floating content-addressing it is floating.
> Thus, historically with Nix, with no experimental features enabled, *all* outputs are fixed.
- The derivation may be *pure* or *impure*, indicating what read access to the outside world the [builder](../index.md#builder) has.
- An input-addressing derivation *must* be pure.
> If it were impure, we would have a large problem, because an input-addressed derivation always produces outputs with the same paths.
- A content-addressing derivation may be pure or impure
- If it is impure, it may be fixed (typical), or it may be floating if the additional [`impure-derivations`][xp-feature-impure-derivations] experimental feature is enabled.
- If it is pure, it must be floating.
- Pure, fixed content-addressing derivations are not supported
> There is no use for this fourth combination.
> The sole purpose of an output's store path being fixed is to support the derivation being impure.
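The structural rules for outputs stated in this section can be collected into a small validation sketch (a simplified model; real output specs carry more information than an addressing kind):

```typescript
type OutputAddressing = "input" | "ca-fixed" | "ca-floating";

// Check the structural rules: at least one output, uniform addressing,
// and fixed content-addressed derivations have exactly one output
// named "out".
function checkOutputs(outputs: Map<string, OutputAddressing>): void {
  if (outputs.size === 0) {
    throw new Error("a derivation must have at least one output");
  }
  const kinds = new Set(outputs.values());
  if (kinds.size !== 1) {
    throw new Error("all outputs must use the same addressing");
  }
  if (kinds.has("ca-fixed") && (outputs.size !== 1 || !outputs.has("out"))) {
    throw new Error("fixed outputs: exactly one output, named 'out'");
  }
}
```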
[xp-feature-ca-derivations]: @docroot@/development/experimental-features.md#xp-feature-ca-derivations
[xp-feature-git-hashing]: @docroot@/development/experimental-features.md#xp-feature-git-hashing
[xp-feature-impure-derivations]: @docroot@/development/experimental-features.md#xp-feature-impure-derivations

View file

@ -0,0 +1,31 @@
# Input-addressing derivation outputs
[input addressing]: #input-addressing
"Input addressing" means addressing the store object by the *way it was made*, rather than by *what it is*.
That is to say, an input-addressed output's store path is a function not of the output itself, but of the derivation that produced it.
Even if two store paths have the same contents, if they are produced in different ways, and one is input-addressed, then they will have different store paths, and are thus guaranteed not to be the same store object.
<!---
### Modulo fixed-output derivations
**TODO hash derivation modulo.**
So how do we compute the hash part of the output path of a derivation?
This is done by the function `hashDrv`, shown in Figure 5.10.
It distinguishes between two cases.
If the derivation is a fixed-output derivation, then it computes a hash over just the `outputHash` attributes.
If the derivation is not a fixed-output derivation, we replace each element in the derivation's `inputDrvs` with the result of a call to `hashDrv` for that element.
(The derivation at each store path in `inputDrvs` is converted from its on-disk ATerm representation back to a `StoreDrv` by the function `parseDrv`.) In essence, `hashDrv` partitions store derivations into equivalence classes, and for hashing purpose it replaces each store path in a derivation graph with its equivalence class.
The recursion in Figure 5.10 is inefficient:
it will call itself once for each path by which a subderivation can be reached, i.e., `O(V k)` times for a derivation graph with `V` derivations and with out-degree of at most `k`.
In the actual implementation, memoisation is used to reduce this to `O(V + E)` complexity for a graph with E edges.
-->
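The `hashDrv` recursion with memoisation can be sketched as follows. The field names and serialization here are simplified stand-ins for illustration, not Nix's actual on-disk format:

```python
# Illustrative sketch of the `hashDrv` recursion described above, with
# memoisation so each derivation is hashed once (O(V + E) overall).
# Field names ("outputHash", "inputDrvs", "body") are hypothetical stand-ins.
import hashlib

def hash_drv(drv, parse_drv, memo=None):
    if memo is None:
        memo = {}
    if drv.get("outputHash") is not None:
        # Fixed-output derivation: hash over just the output-hash attributes.
        data = drv["outputHash"]
    else:
        # Replace each input derivation path with the hash of its own
        # hashDrv result, parsing the derivation found at that path.
        parts = []
        for path in sorted(drv.get("inputDrvs", [])):
            if path not in memo:
                memo[path] = hash_drv(parse_drv(path), parse_drv, memo)
            parts.append(memo[path])
        data = ",".join(parts) + "|" + drv.get("body", "")
    return hashlib.sha256(data.encode()).hexdigest()
```

The memo table keyed by store path is what collapses the `O(V k)` recursion into `O(V + E)`, as each subderivation is hashed at most once.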
[xp-feature-ca-derivations]: @docroot@/development/experimental-features.md#xp-feature-ca-derivations
[xp-feature-git-hashing]: @docroot@/development/experimental-features.md#xp-feature-git-hashing
[xp-feature-impure-derivations]: @docroot@/development/experimental-features.md#xp-feature-impure-derivations


@ -24,13 +24,17 @@ For the full specification of the algorithms involved, see the [specification of
### File System Objects ### File System Objects
With all currently supported store object content addressing methods, the file system object is always [content-addressed][fso-ca] first, and then that hash is incorporated into content address computation for the store object. With all currently-supported store object content-addressing methods, the file system object is always [content-addressed][fso-ca] first, and then that hash is incorporated into content address computation for the store object.
### References ### References
#### References to other store objects
With all currently supported store object content addressing methods, With all currently supported store object content addressing methods,
other objects are referred to by their regular (string-encoded-) [store paths][Store Path]. other objects are referred to by their regular (string-encoded-) [store paths][Store Path].
#### Self-references
Self-references however cannot be referred to by their path, because we are in the midst of describing how to compute that path! Self-references however cannot be referred to by their path, because we are in the midst of describing how to compute that path!
> The alternative would require finding as hash function fixed point, i.e. the solution to an equation in the form > The alternative would require finding as hash function fixed point, i.e. the solution to an equation in the form
@ -40,7 +44,28 @@ Self-references however cannot be referred to by their path, because we are in t
> which is computationally infeasible. > which is computationally infeasible.
> As far as we know, this is equivalent to finding a hash collision. > As far as we know, this is equivalent to finding a hash collision.
Instead we just have a "has self reference" boolean, which will end up affecting the digest. Instead we have a "has self-reference" boolean, which ends up affecting the digest:
In all currently-supported store object content-addressing methods, when hashing the file system object data, any occurrence of the store object's own store path in the digested data is replaced with a [sentinel value](https://en.wikipedia.org/wiki/Sentinel_value).
The hashes of these modified input streams are used instead.
When validating the content address of a store object after the fact, the above process works as written.
However, when first creating the store object we don't know the store object's store path, as explained just above.
We therefore, strictly speaking, do not know which value we will be replacing with the sentinel value in the inputs to the hash functions.
What happens instead is that the provisional store object (the data from which we wish to create a store object) is paired with a provisional "scratch" store path, presumably chosen when the data was created.
That provisional store path is what is replaced with the sentinel value, rather than the final store path, which we do not yet know.
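A minimal sketch of this substitution, assuming a made-up sentinel value (Nix's real encoding differs):

```python
# Sketch of hashing data that contains self-references: every occurrence of
# the provisional store path is replaced by a sentinel before digesting.
import hashlib

SENTINEL = b"\x00" * 32  # hypothetical sentinel, not Nix's actual encoding

def hash_with_self_references(data: bytes, provisional_path: bytes) -> str:
    normalized = data.replace(provisional_path, SENTINEL)
    return hashlib.sha256(normalized).hexdigest()

# Two provisional objects that differ only in the embedded provisional path
# normalize to the same digest.
d1 = hash_with_self_references(b"prefix /nix/store/aaa-foo suffix", b"/nix/store/aaa-foo")
d2 = hash_with_self_references(b"prefix /nix/store/bbb-foo suffix", b"/nix/store/bbb-foo")
assert d1 == d2
```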
> **Design note**
>
> It is an informal property of content-addressed store objects that the choice of provisional store path should not matter.
> In other words, if a provisional store object is prepared in the same way except for the choice of provisional store path, the provisional data need not be identical.
> But, after the sentinel value is substituted in place of each provisional store object's provisional store path, the final so-normalized data *should* be identical.
>
> If, conversely, the data after this normalization process is still different, we'll compute a different content-address.
> The method of preparing the provisional self-referencing data has then *failed* to be deterministic: it has *leaked* the choice of provisional store path (a choice which is supposed to be arbitrary) into the final store object.
>
> This property is informal because, at this stage, we have only described store objects, which have no formal notion of their origin.
> Without such a formal notion, there is nothing to formally accuse of being insufficiently deterministic.
> When we cover [derivations](@docroot@/store/derivation/index.md), we will have a chance to make this a formal property, not of content-addressed store objects themselves, but of derivations that *produce* content-addressed store objects.
### Name and Store Directory ### Name and Store Directory
@ -63,7 +88,7 @@ References are not supported: store objects with flat hashing *and* references c
This also uses the corresponding [Flat](../file-system-object/content-address.md#serial-flat) method of file system object content addressing. This also uses the corresponding [Flat](../file-system-object/content-address.md#serial-flat) method of file system object content addressing.
References to other store objects are supported, but self references are not. References to other store objects are supported, but self-references are not.
This is the only store-object content-addressing method that is not named identically with a corresponding file system object method. This is the only store-object content-addressing method that is not named identically with a corresponding file system object method.
It is somewhat obscure, mainly used for "drv files" It is somewhat obscure, mainly used for "drv files"
@ -74,7 +99,7 @@ Prefer another method if possible.
This uses the corresponding [Nix Archive](../file-system-object/content-address.md#serial-nix-archive) method of file system object content addressing. This uses the corresponding [Nix Archive](../file-system-object/content-address.md#serial-nix-archive) method of file system object content addressing.
References (to other store objects and self references alike) are supported so long as the hash algorithm is SHA-256, but not (neither kind) otherwise. References (to other store objects and self-references alike) are supported so long as the hash algorithm is SHA-256, but not (neither kind) otherwise.
### Git { #method-git } ### Git { #method-git }

flake.nix

@ -34,9 +34,7 @@
officialRelease = true; officialRelease = true;
linux32BitSystems = [ linux32BitSystems = [ ];
# "i686-linux"
];
linux64BitSystems = [ linux64BitSystems = [
"x86_64-linux" "x86_64-linux"
"aarch64-linux" "aarch64-linux"
@ -55,7 +53,6 @@
# Disabled because of https://github.com/NixOS/nixpkgs/issues/344423 # Disabled because of https://github.com/NixOS/nixpkgs/issues/344423
# "x86_64-unknown-netbsd" # "x86_64-unknown-netbsd"
"x86_64-unknown-freebsd" "x86_64-unknown-freebsd"
#"x86_64-w64-mingw32"
]; ];
stdenvs = [ stdenvs = [
@ -82,14 +79,7 @@
forAllCrossSystems = lib.genAttrs crossSystems; forAllCrossSystems = lib.genAttrs crossSystems;
forAllStdenvs = forAllStdenvs = lib.genAttrs stdenvs;
f:
lib.listToAttrs (
map (stdenvName: {
name = "${stdenvName}Packages";
value = f stdenvName;
}) stdenvs
);
# We don't apply flake-parts to the whole flake so that non-development attributes # We don't apply flake-parts to the whole flake so that non-development attributes
# load without fetching any development inputs. # load without fetching any development inputs.
@ -108,42 +98,38 @@
system: system:
let let
make-pkgs = make-pkgs =
crossSystem: stdenv: crossSystem:
import nixpkgs { forAllStdenvs (
localSystem = { stdenv:
inherit system; import nixpkgs {
}; localSystem = {
crossSystem = inherit system;
if crossSystem == null then };
null crossSystem =
else if crossSystem == null then
{ null
config = crossSystem; else
} {
// lib.optionalAttrs (crossSystem == "x86_64-unknown-freebsd13") { config = crossSystem;
useLLVM = true; }
}; // lib.optionalAttrs (crossSystem == "x86_64-unknown-freebsd13") {
overlays = [ useLLVM = true;
(overlayFor (p: p.${stdenv})) };
]; overlays = [
}; (overlayFor (pkgs: pkgs.${stdenv}))
stdenvs = forAllStdenvs (make-pkgs null); ];
native = stdenvs.stdenvPackages; }
);
in in
{ rec {
inherit stdenvs native; nativeForStdenv = make-pkgs null;
static = native.pkgsStatic; crossForStdenv = forAllCrossSystems make-pkgs;
llvm = native.pkgsLLVM; # Alias for convenience
cross = forAllCrossSystems (crossSystem: make-pkgs crossSystem "stdenv"); native = nativeForStdenv.stdenv;
cross = forAllCrossSystems (crossSystem: crossForStdenv.${crossSystem}.stdenv);
} }
); );
binaryTarball =
nix: pkgs:
pkgs.callPackage ./scripts/binary-tarball.nix {
inherit nix;
};
overlayFor = overlayFor =
getStdenv: final: prev: getStdenv: final: prev:
let let
@ -213,7 +199,6 @@
hydraJobs = import ./packaging/hydra.nix { hydraJobs = import ./packaging/hydra.nix {
inherit inherit
inputs inputs
binaryTarball
forAllCrossSystems forAllCrossSystems
forAllSystems forAllSystems
lib lib
@ -228,7 +213,6 @@
system: system:
{ {
installerScriptForGHA = self.hydraJobs.installerScriptForGHA.${system}; installerScriptForGHA = self.hydraJobs.installerScriptForGHA.${system};
#installTests = self.hydraJobs.installTests.${system};
nixpkgsLibTests = self.hydraJobs.tests.nixpkgsLibTests.${system}; nixpkgsLibTests = self.hydraJobs.tests.nixpkgsLibTests.${system};
rl-next = rl-next =
let let
@ -283,7 +267,7 @@
# TODO: enable static builds for darwin, blocked on: # TODO: enable static builds for darwin, blocked on:
# https://github.com/NixOS/nixpkgs/issues/320448 # https://github.com/NixOS/nixpkgs/issues/320448
# TODO: disabled to speed up GHA CI. # TODO: disabled to speed up GHA CI.
#"static-" = nixpkgsFor.${system}.static; #"static-" = nixpkgsFor.${system}.native.pkgsStatic;
} }
) )
( (
@ -401,8 +385,6 @@
{ {
# These attributes go right into `packages.<system>`. # These attributes go right into `packages.<system>`.
"${pkgName}" = nixpkgsFor.${system}.native.nixComponents.${pkgName}; "${pkgName}" = nixpkgsFor.${system}.native.nixComponents.${pkgName};
#"${pkgName}-static" = nixpkgsFor.${system}.static.nixComponents.${pkgName};
#"${pkgName}-llvm" = nixpkgsFor.${system}.llvm.nixComponents.${pkgName};
} }
// lib.optionalAttrs supportsCross ( // lib.optionalAttrs supportsCross (
flatMapAttrs (lib.genAttrs crossSystems (_: { })) ( flatMapAttrs (lib.genAttrs crossSystems (_: { })) (
@ -420,7 +402,7 @@
{ {
# These attributes go right into `packages.<system>`. # These attributes go right into `packages.<system>`.
"${pkgName}-${stdenvName}" = "${pkgName}-${stdenvName}" =
nixpkgsFor.${system}.stdenvs."${stdenvName}Packages".nixComponents.${pkgName}; nixpkgsFor.${system}.nativeForStdenv.${stdenvName}.nixComponents.${pkgName};
} }
) )
) )
@ -455,40 +437,13 @@
forAllStdenvs ( forAllStdenvs (
stdenvName: stdenvName:
makeShell { makeShell {
pkgs = nixpkgsFor.${system}.stdenvs."${stdenvName}Packages"; pkgs = nixpkgsFor.${system}.nativeForStdenv.${stdenvName};
} }
) )
) )
/*
// lib.optionalAttrs (!nixpkgsFor.${system}.native.stdenv.isDarwin) (
prefixAttrs "static" (
forAllStdenvs (
stdenvName:
makeShell {
pkgs = nixpkgsFor.${system}.stdenvs."${stdenvName}Packages".pkgsStatic;
}
)
)
// prefixAttrs "llvm" (
forAllStdenvs (
stdenvName:
makeShell {
pkgs = nixpkgsFor.${system}.stdenvs."${stdenvName}Packages".pkgsLLVM;
}
)
)
// prefixAttrs "cross" (
forAllCrossSystems (
crossSystem:
makeShell {
pkgs = nixpkgsFor.${system}.cross.${crossSystem};
}
)
)
)
*/
// { // {
default = self.devShells.${system}.native-stdenvPackages; native = self.devShells.${system}.native-stdenv;
default = self.devShells.${system}.native;
} }
); );
}; };


@ -132,5 +132,18 @@
"140354451+myclevorname@users.noreply.github.com": "myclevorname", "140354451+myclevorname@users.noreply.github.com": "myclevorname",
"bonniot@gmail.com": "dbdr", "bonniot@gmail.com": "dbdr",
"jack@wilsdon.me": "jackwilsdon", "jack@wilsdon.me": "jackwilsdon",
"143541718+WxNzEMof@users.noreply.github.com": "the-sun-will-rise-tomorrow" "143541718+WxNzEMof@users.noreply.github.com": "the-sun-will-rise-tomorrow",
"fabianm88@gmail.com": "B4dM4n",
"silvan.mosberger@moduscreate.com": "infinisil",
"leandro.reina@ororatech.com": "kip93",
"else@someonex.net": "SomeoneSerge",
"aiden@aidenfoxivey.com": "aidenfoxivey",
"maxoscarhearnden@gmail.com": "MaxHearnden",
"silvanshade@users.noreply.github.com": "silvanshade",
"illia.bobyr@gmail.com": "ilya-bobyr",
"65963536+etherswangel@users.noreply.github.com": "stevalkr",
"thebenmachine+git@gmail.com": "bmillwood",
"leandro@kip93.net": "kip93",
"hello@briancamacho.me": "b-camacho",
"bcamacho@anduril.com": "bcamacho2"
} }


@ -118,5 +118,16 @@
"wh0": null, "wh0": null,
"mupdt": "Matej Urbas", "mupdt": "Matej Urbas",
"momeemt": "Mutsuha Asada", "momeemt": "Mutsuha Asada",
"dwt": "\u202erekc\u00e4H nitraM\u202e" "dwt": "\u202erekc\u00e4H nitraM\u202e",
"aidenfoxivey": "Aiden Fox Ivey",
"ilya-bobyr": "Illia Bobyr",
"B4dM4n": "Fabian M\u00f6ller",
"silvanshade": null,
"bcamacho2": null,
"bmillwood": "Ben Millwood",
"stevalkr": "Steve Walker",
"SomeoneSerge": "Someone",
"b-camacho": "Brian Camacho",
"MaxHearnden": null,
"kip93": "Leandro Emmanuel Reina Kiperman"
} }


@ -37,6 +37,34 @@
fi fi
''}"; ''}";
}; };
nixfmt-rfc-style = {
enable = true;
excludes = [
# Invalid
''^tests/functional/lang/parse-.*\.nix$''
# Formatting-sensitive
''^tests/functional/lang/eval-okay-curpos\.nix$''
''^tests/functional/lang/.*comment.*\.nix$''
''^tests/functional/lang/.*newline.*\.nix$''
''^tests/functional/lang/.*eol.*\.nix$''
# Syntax tests
''^tests/functional/shell.shebang\.nix$''
''^tests/functional/lang/eval-okay-ind-string\.nix$''
# Not supported by nixfmt
''^tests/functional/lang/eval-okay-deprecate-cursed-or\.nix$''
''^tests/functional/lang/eval-okay-attrs5\.nix$''
# More syntax tests
# These tests, or parts of them, should have been parse-* test cases.
''^tests/functional/lang/eval-fail-eol-2\.nix$''
''^tests/functional/lang/eval-fail-path-slash\.nix$''
''^tests/functional/lang/eval-fail-toJSON-non-utf-8\.nix$''
''^tests/functional/lang/eval-fail-set\.nix$''
];
};
clang-format = { clang-format = {
enable = true; enable = true;
# https://github.com/cachix/git-hooks.nix/pull/532 # https://github.com/cachix/git-hooks.nix/pull/532
@ -99,7 +127,6 @@
''^src/libexpr/nixexpr\.cc$'' ''^src/libexpr/nixexpr\.cc$''
''^src/libexpr/nixexpr\.hh$'' ''^src/libexpr/nixexpr\.hh$''
''^src/libexpr/parser-state\.hh$'' ''^src/libexpr/parser-state\.hh$''
''^src/libexpr/pos-table\.hh$''
''^src/libexpr/primops\.cc$'' ''^src/libexpr/primops\.cc$''
''^src/libexpr/primops\.hh$'' ''^src/libexpr/primops\.hh$''
''^src/libexpr/primops/context\.cc$'' ''^src/libexpr/primops/context\.cc$''
@ -369,7 +396,6 @@
''^src/libutil/types\.hh$'' ''^src/libutil/types\.hh$''
''^src/libutil/unix/file-descriptor\.cc$'' ''^src/libutil/unix/file-descriptor\.cc$''
''^src/libutil/unix/file-path\.cc$'' ''^src/libutil/unix/file-path\.cc$''
''^src/libutil/unix/monitor-fd\.hh$''
''^src/libutil/unix/processes\.cc$'' ''^src/libutil/unix/processes\.cc$''
''^src/libutil/unix/signals-impl\.hh$'' ''^src/libutil/unix/signals-impl\.hh$''
''^src/libutil/unix/signals\.cc$'' ''^src/libutil/unix/signals\.cc$''
@ -666,7 +692,6 @@
''^src/libutil-tests/data/git/check-data\.sh$'' ''^src/libutil-tests/data/git/check-data\.sh$''
]; ];
}; };
# TODO: nixfmt, https://github.com/NixOS/nixfmt/issues/153
}; };
}; };
}; };


@ -144,12 +144,10 @@ release:
Make a pull request and auto-merge it. Make a pull request and auto-merge it.
* Create a milestone for the next release, move all unresolved issues
from the previous milestone, and close the previous milestone. Set
the date for the next milestone 6 weeks from now.
* Create a backport label. * Create a backport label.
* Add the new backport label to `.mergify.yml`.
* Post an [announcement on Discourse](https://discourse.nixos.org/c/announcements/8), including the contents of * Post an [announcement on Discourse](https://discourse.nixos.org/c/announcements/8), including the contents of
`rl-$VERSION.md`. `rl-$VERSION.md`.


@ -42,7 +42,7 @@ my $flakeUrl = $evalInfo->{flake};
my $flakeInfo = decode_json(`nix flake metadata --json "$flakeUrl"` or die) if $flakeUrl; my $flakeInfo = decode_json(`nix flake metadata --json "$flakeUrl"` or die) if $flakeUrl;
my $nixRev = ($flakeInfo ? $flakeInfo->{revision} : $evalInfo->{jobsetevalinputs}->{nix}->{revision}) or die; my $nixRev = ($flakeInfo ? $flakeInfo->{revision} : $evalInfo->{jobsetevalinputs}->{nix}->{revision}) or die;
my $buildInfo = decode_json(fetch("$evalUrl/job/build.nix.x86_64-linux", 'application/json')); my $buildInfo = decode_json(fetch("$evalUrl/job/build.nix-everything.x86_64-linux", 'application/json'));
#print Dumper($buildInfo); #print Dumper($buildInfo);
my $releaseName = $buildInfo->{nixname}; my $releaseName = $buildInfo->{nixname};
@ -91,7 +91,7 @@ sub getStorePath {
sub copyManual { sub copyManual {
my $manual; my $manual;
eval { eval {
$manual = getStorePath("build.nix.x86_64-linux", "doc"); $manual = getStorePath("manual");
}; };
if ($@) { if ($@) {
warn "$@"; warn "$@";
@ -240,12 +240,12 @@ if ($haveDocker) {
# Upload nix-fallback-paths.nix. # Upload nix-fallback-paths.nix.
write_file("$tmpDir/fallback-paths.nix", write_file("$tmpDir/fallback-paths.nix",
"{\n" . "{\n" .
" x86_64-linux = \"" . getStorePath("build.nix.x86_64-linux") . "\";\n" . " x86_64-linux = \"" . getStorePath("build.nix-everything.x86_64-linux") . "\";\n" .
" i686-linux = \"" . getStorePath("build.nix.i686-linux") . "\";\n" . " i686-linux = \"" . getStorePath("build.nix-everything.i686-linux") . "\";\n" .
" aarch64-linux = \"" . getStorePath("build.nix.aarch64-linux") . "\";\n" . " aarch64-linux = \"" . getStorePath("build.nix-everything.aarch64-linux") . "\";\n" .
" riscv64-linux = \"" . getStorePath("buildCross.nix.riscv64-unknown-linux-gnu.x86_64-linux") . "\";\n" . " riscv64-linux = \"" . getStorePath("buildCross.nix-everything.riscv64-unknown-linux-gnu.x86_64-linux") . "\";\n" .
" x86_64-darwin = \"" . getStorePath("build.nix.x86_64-darwin") . "\";\n" . " x86_64-darwin = \"" . getStorePath("build.nix-everything.x86_64-darwin") . "\";\n" .
" aarch64-darwin = \"" . getStorePath("build.nix.aarch64-darwin") . "\";\n" . " aarch64-darwin = \"" . getStorePath("build.nix-everything.aarch64-darwin") . "\";\n" .
"}\n"); "}\n");
# Upload release files to S3. # Upload release files to S3.


@ -0,0 +1,6 @@
if host_machine.system() == 'windows'
# libexpr's primops creates a large object
# Without the following flag, we'll get errors when cross-compiling to mingw32:
# Fatal error: can't write 66 bytes to section .text of src/libexpr/libnixexpr.dll.p/primops.cc.obj: 'file too big'
add_project_arguments([ '-Wa,-mbig-obj' ], language: 'cpp')
endif


@ -26,18 +26,18 @@ in
runCommand "nix-binary-tarball-${version}" env '' runCommand "nix-binary-tarball-${version}" env ''
cp ${installerClosureInfo}/registration $TMPDIR/reginfo cp ${installerClosureInfo}/registration $TMPDIR/reginfo
cp ${./create-darwin-volume.sh} $TMPDIR/create-darwin-volume.sh cp ${../scripts/create-darwin-volume.sh} $TMPDIR/create-darwin-volume.sh
substitute ${./install-nix-from-tarball.sh} $TMPDIR/install \ substitute ${../scripts/install-nix-from-tarball.sh} $TMPDIR/install \
--subst-var-by nix ${nix} \ --subst-var-by nix ${nix} \
--subst-var-by cacert ${cacert} --subst-var-by cacert ${cacert}
substitute ${./install-darwin-multi-user.sh} $TMPDIR/install-darwin-multi-user.sh \ substitute ${../scripts/install-darwin-multi-user.sh} $TMPDIR/install-darwin-multi-user.sh \
--subst-var-by nix ${nix} \ --subst-var-by nix ${nix} \
--subst-var-by cacert ${cacert} --subst-var-by cacert ${cacert}
substitute ${./install-systemd-multi-user.sh} $TMPDIR/install-systemd-multi-user.sh \ substitute ${../scripts/install-systemd-multi-user.sh} $TMPDIR/install-systemd-multi-user.sh \
--subst-var-by nix ${nix} \ --subst-var-by nix ${nix} \
--subst-var-by cacert ${cacert} --subst-var-by cacert ${cacert}
substitute ${./install-multi-user.sh} $TMPDIR/install-multi-user \ substitute ${../scripts/install-multi-user.sh} $TMPDIR/install-multi-user \
--subst-var-by nix ${nix} \ --subst-var-by nix ${nix} \
--subst-var-by cacert ${cacert} --subst-var-by cacert ${cacert}


@ -1,4 +1,7 @@
{ lib, devFlake }: {
lib,
devFlake,
}:
{ pkgs }: { pkgs }:
@ -113,6 +116,7 @@ pkgs.nixComponents.nix-util.overrideAttrs (
pkgs.buildPackages.changelog-d pkgs.buildPackages.changelog-d
modular.pre-commit.settings.package modular.pre-commit.settings.package
(pkgs.writeScriptBin "pre-commit-hooks-install" modular.pre-commit.settings.installationScript) (pkgs.writeScriptBin "pre-commit-hooks-install" modular.pre-commit.settings.installationScript)
pkgs.buildPackages.nixfmt-rfc-style
] ]
# TODO: Remove the darwin check once # TODO: Remove the darwin check once
# https://github.com/NixOS/nixpkgs/pull/291814 is available # https://github.com/NixOS/nixpkgs/pull/291814 is available


@ -192,7 +192,7 @@ stdenv.mkDerivation (finalAttrs: {
devPaths = lib.mapAttrsToList (_k: lib.getDev) finalAttrs.finalPackage.libs; devPaths = lib.mapAttrsToList (_k: lib.getDev) finalAttrs.finalPackage.libs;
in in
'' ''
mkdir -p $out $dev $doc $man mkdir -p $out $dev
# Merged outputs # Merged outputs
lndir ${nix-cli} $out lndir ${nix-cli} $out
@ -201,8 +201,8 @@ stdenv.mkDerivation (finalAttrs: {
done done
# Forwarded outputs # Forwarded outputs
ln -s ${nix-manual} $doc ln -sT ${nix-manual} $doc
ln -s ${nix-manual.man} $man ln -sT ${nix-manual.man} $man
''; '';
passthru = { passthru = {


@ -1,6 +1,5 @@
{ {
inputs, inputs,
binaryTarball,
forAllCrossSystems, forAllCrossSystems,
forAllSystems, forAllSystems,
lib, lib,
@ -14,7 +13,7 @@ let
installScriptFor = installScriptFor =
tarballs: tarballs:
nixpkgsFor.x86_64-linux.native.callPackage ../scripts/installer.nix { nixpkgsFor.x86_64-linux.native.callPackage ./installer {
inherit tarballs; inherit tarballs;
}; };
@ -65,65 +64,6 @@ in
system: self.devShells.${system}.default.inputDerivation system: self.devShells.${system}.default.inputDerivation
)) [ "i686-linux" ]; )) [ "i686-linux" ];
/*
buildStatic = forAllPackages (
pkgName:
lib.genAttrs linux64BitSystems (system: nixpkgsFor.${system}.static.nixComponents.${pkgName})
);
buildCross = forAllPackages (
pkgName:
# Hack to avoid non-evaling package
(
if pkgName == "nix-functional-tests" then
lib.flip builtins.removeAttrs [ "x86_64-w64-mingw32" ]
else
lib.id
)
(
forAllCrossSystems (
crossSystem:
lib.genAttrs [ "x86_64-linux" ] (
system: nixpkgsFor.${system}.cross.${crossSystem}.nixComponents.${pkgName}
)
)
)
);
buildNoGc =
let
components = forAllSystems (
system:
nixpkgsFor.${system}.native.nixComponents.overrideScope (
self: super: {
nix-expr = super.nix-expr.override { enableGC = false; };
}
)
);
in
forAllPackages (pkgName: forAllSystems (system: components.${system}.${pkgName}));
buildNoTests = forAllSystems (system: nixpkgsFor.${system}.native.nixComponents.nix-cli);
# Toggles some settings for better coverage. Windows needs these
# library combinations, and Debian build Nix with GNU readline too.
buildReadlineNoMarkdown =
let
components = forAllSystems (
system:
nixpkgsFor.${system}.native.nixComponents.overrideScope (
self: super: {
nix-cmd = super.nix-cmd.override {
enableMarkdown = false;
readlineFlavor = "readline";
};
}
)
);
in
forAllPackages (pkgName: forAllSystems (system: components.${system}.${pkgName}));
*/
# Perl bindings for various platforms. # Perl bindings for various platforms.
perlBindings = forAllSystems (system: nixpkgsFor.${system}.native.nixComponents.nix-perl-bindings); perlBindings = forAllSystems (system: nixpkgsFor.${system}.native.nixComponents.nix-perl-bindings);
@ -131,40 +71,12 @@ in
# with the closure of 'nix' package, and the second half of # with the closure of 'nix' package, and the second half of
# the installation script. # the installation script.
binaryTarball = forAllSystems ( binaryTarball = forAllSystems (
system: binaryTarball nixpkgsFor.${system}.native.nix nixpkgsFor.${system}.native system: nixpkgsFor.${system}.native.callPackage ./binary-tarball.nix { }
); );
/*
binaryTarballCross = lib.genAttrs [ "x86_64-linux" ] (
system:
forAllCrossSystems (
crossSystem:
binaryTarball nixpkgsFor.${system}.cross.${crossSystem}.nix
nixpkgsFor.${system}.cross.${crossSystem}
)
);
# The first half of the installation script. This is uploaded
# to https://nixos.org/nix/install. It downloads the binary
# tarball for the user's system and calls the second half of the
# installation script.
installerScript = installScriptFor [
# Native
self.hydraJobs.binaryTarball."x86_64-linux"
self.hydraJobs.binaryTarball."i686-linux"
self.hydraJobs.binaryTarball."aarch64-linux"
self.hydraJobs.binaryTarball."x86_64-darwin"
self.hydraJobs.binaryTarball."aarch64-darwin"
# Cross
self.hydraJobs.binaryTarballCross."x86_64-linux"."armv6l-unknown-linux-gnueabihf"
self.hydraJobs.binaryTarballCross."x86_64-linux"."armv7l-unknown-linux-gnueabihf"
self.hydraJobs.binaryTarballCross."x86_64-linux"."riscv64-unknown-linux-gnu"
];
*/
installerScriptForGHA = forAllSystems ( installerScriptForGHA = forAllSystems (
system: system:
nixpkgsFor.${system}.native.callPackage ../scripts/installer.nix { nixpkgsFor.${system}.native.callPackage ./installer {
tarballs = [ self.hydraJobs.binaryTarball.${system} ]; tarballs = [ self.hydraJobs.binaryTarball.${system} ];
} }
); );
@ -190,12 +102,8 @@ in
# System tests. # System tests.
tests = tests =
import ../tests/nixos { import ../tests/nixos {
inherit inherit lib nixpkgs nixpkgsFor;
lib inherit (self.inputs) nixpkgs-23-11;
nixpkgs
nixpkgsFor
self
;
} }
// { // {
@ -230,27 +138,4 @@ in
pkgs = nixpkgsFor.x86_64-linux.native; pkgs = nixpkgsFor.x86_64-linux.native;
nixpkgs = nixpkgs-regression; nixpkgs = nixpkgs-regression;
}; };
/*
installTests = forAllSystems (
system:
let
pkgs = nixpkgsFor.${system}.native;
in
pkgs.runCommand "install-tests" {
againstSelf = testNixVersions pkgs pkgs.nix;
againstCurrentLatest =
# FIXME: temporarily disable this on macOS because of #3605.
if system == "x86_64-linux" then testNixVersions pkgs pkgs.nixVersions.latest else null;
# Disabled because the latest stable version doesn't handle
# `NIX_DAEMON_SOCKET_PATH` which is required for the tests to work
# againstLatestStable = testNixVersions pkgs pkgs.nixStable;
} "touch $out"
);
installerTests = import ../tests/installer {
binaryTarballs = self.hydraJobs.binaryTarball;
inherit nixpkgsFor;
};
*/
} }


@ -1,3 +1,13 @@
# Only execute this file once per shell.
if test -z "$HOME" || \
test -n "$__ETC_PROFILE_NIX_SOURCED"
exit
end
set --global __ETC_PROFILE_NIX_SOURCED 1
# Local helpers
function add_path --argument-names new_path function add_path --argument-names new_path
if type -q fish_add_path if type -q fish_add_path
# fish 3.2.0 or newer # fish 3.2.0 or newer
@ -10,48 +20,51 @@ function add_path --argument-names new_path
end end
end end
# Only execute this file once per shell. # Main configuration
if test -n "$__ETC_PROFILE_NIX_SOURCED"
exit
end
set __ETC_PROFILE_NIX_SOURCED 1 # Set up the per-user profile.
set --local NIX_LINK $HOME/.nix-profile
# Set up environment.
# This part should be kept in sync with nixpkgs:nixos/modules/programs/environment.nix
set --export NIX_PROFILES "@localstatedir@/nix/profiles/default $HOME/.nix-profile" set --export NIX_PROFILES "@localstatedir@/nix/profiles/default $HOME/.nix-profile"
# Populate bash completions, .desktop files, etc # Populate bash completions, .desktop files, etc
if test -z "$XDG_DATA_DIRS" if test -z "$XDG_DATA_DIRS"
# According to XDG spec the default is /usr/local/share:/usr/share, don't set something that prevents that default # According to XDG spec the default is /usr/local/share:/usr/share, don't set something that prevents that default
set --export XDG_DATA_DIRS "/usr/local/share:/usr/share:/nix/var/nix/profiles/default/share" set --export XDG_DATA_DIRS "/usr/local/share:/usr/share:$NIX_LINK/share:/nix/var/nix/profiles/default/share"
else else
set --export XDG_DATA_DIRS "$XDG_DATA_DIRS:/nix/var/nix/profiles/default/share" set --export XDG_DATA_DIRS "$XDG_DATA_DIRS:$NIX_LINK/share:/nix/var/nix/profiles/default/share"
end end
# Set $NIX_SSL_CERT_FILE so that Nixpkgs applications like curl work. # Set $NIX_SSL_CERT_FILE so that Nixpkgs applications like curl work.
if test -n "$NIX_SSL_CERT_FILE" if test -n "$NIX_SSL_CERT_FILE"
: # Allow users to override the NIX_SSL_CERT_FILE : # Allow users to override the NIX_SSL_CERT_FILE
else if test -e /etc/ssl/certs/ca-certificates.crt # NixOS, Ubuntu, Debian, Gentoo, Arch else if test -e /etc/ssl/certs/ca-certificates.crt # NixOS, Ubuntu, Debian, Gentoo, Arch
     set --export NIX_SSL_CERT_FILE /etc/ssl/certs/ca-certificates.crt
 else if test -e /etc/ssl/ca-bundle.pem # openSUSE Tumbleweed
     set --export NIX_SSL_CERT_FILE /etc/ssl/ca-bundle.pem
 else if test -e /etc/ssl/certs/ca-bundle.crt # Old NixOS
     set --export NIX_SSL_CERT_FILE /etc/ssl/certs/ca-bundle.crt
 else if test -e /etc/pki/tls/certs/ca-bundle.crt # Fedora, CentOS
     set --export NIX_SSL_CERT_FILE /etc/pki/tls/certs/ca-bundle.crt
 else if test -e "$NIX_LINK/etc/ssl/certs/ca-bundle.crt" # fall back to cacert in Nix profile
     set --export NIX_SSL_CERT_FILE "$NIX_LINK/etc/ssl/certs/ca-bundle.crt"
 else if test -e "$NIX_LINK/etc/ca-bundle.crt" # old cacert in Nix profile
     set --export NIX_SSL_CERT_FILE "$NIX_LINK/etc/ca-bundle.crt"
 else
     # Fall back to what is in the nix profiles, favouring whatever is defined last.
     for i in (string split ' ' $NIX_PROFILES)
         if test -e "$i/etc/ssl/certs/ca-bundle.crt"
             set --export NIX_SSL_CERT_FILE "$i/etc/ssl/certs/ca-bundle.crt"
         end
     end
 end

 add_path "@localstatedir@/nix/profiles/default/bin"
-add_path "$HOME/.nix-profile/bin"
+add_path "$NIX_LINK/bin"

+# Cleanup
 functions -e add_path
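The `NIX_SSL_CERT_FILE` logic above tries each well-known CA bundle location in order and keeps the first one that exists. A minimal C++ sketch of that selection rule (`pickCaBundle` is a hypothetical helper for illustration, not part of Nix):

```cpp
#include <filesystem>
#include <optional>
#include <string>
#include <vector>

// Return the first candidate path that exists on disk, mirroring the
// NIX_SSL_CERT_FILE fallback chain in the profile scripts.
std::optional<std::string> pickCaBundle(const std::vector<std::string> & candidates)
{
    for (const auto & path : candidates)
        if (std::filesystem::exists(path))
            return path;
    return std::nullopt;
}
```

Order matters: distro-wide bundles are preferred over the cacert package in the Nix profile, which is only a fallback.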


@@ -1,3 +1,13 @@
+# Only execute this file once per shell.
+if test -z "$HOME" || test -z "$USER" || \
+    test -n "$__ETC_PROFILE_NIX_SOURCED"
+    exit
+end
+set --global __ETC_PROFILE_NIX_SOURCED 1
+
+# Local helpers
+
 function add_path --argument-names new_path
     if type -q fish_add_path
         # fish 3.2.0 or newer
@@ -10,50 +20,50 @@ function add_path --argument-names new_path
     end
 end

-if test -n "$HOME" && test -n "$USER"
-    # Set up the per-user profile.
-    set NIX_LINK $HOME/.nix-profile
+# Main configuration
+
+# Set up the per-user profile.
+set --local NIX_LINK $HOME/.nix-profile

 # Set up environment.
 # This part should be kept in sync with nixpkgs:nixos/modules/programs/environment.nix
 set --export NIX_PROFILES "@localstatedir@/nix/profiles/default $HOME/.nix-profile"

 # Populate bash completions, .desktop files, etc
 if test -z "$XDG_DATA_DIRS"
     # According to XDG spec the default is /usr/local/share:/usr/share, don't set something that prevents that default
     set --export XDG_DATA_DIRS "/usr/local/share:/usr/share:$NIX_LINK/share:/nix/var/nix/profiles/default/share"
 else
     set --export XDG_DATA_DIRS "$XDG_DATA_DIRS:$NIX_LINK/share:/nix/var/nix/profiles/default/share"
 end

 # Set $NIX_SSL_CERT_FILE so that Nixpkgs applications like curl work.
 if test -n "$NIX_SSL_CERT_FILE"
     : # Allow users to override the NIX_SSL_CERT_FILE
 else if test -e /etc/ssl/certs/ca-certificates.crt # NixOS, Ubuntu, Debian, Gentoo, Arch
     set --export NIX_SSL_CERT_FILE /etc/ssl/certs/ca-certificates.crt
 else if test -e /etc/ssl/ca-bundle.pem # openSUSE Tumbleweed
     set --export NIX_SSL_CERT_FILE /etc/ssl/ca-bundle.pem
 else if test -e /etc/ssl/certs/ca-bundle.crt # Old NixOS
     set --export NIX_SSL_CERT_FILE /etc/ssl/certs/ca-bundle.crt
 else if test -e /etc/pki/tls/certs/ca-bundle.crt # Fedora, CentOS
     set --export NIX_SSL_CERT_FILE /etc/pki/tls/certs/ca-bundle.crt
 else if test -e "$NIX_LINK/etc/ssl/certs/ca-bundle.crt" # fall back to cacert in Nix profile
     set --export NIX_SSL_CERT_FILE "$NIX_LINK/etc/ssl/certs/ca-bundle.crt"
 else if test -e "$NIX_LINK/etc/ca-bundle.crt" # old cacert in Nix profile
     set --export NIX_SSL_CERT_FILE "$NIX_LINK/etc/ca-bundle.crt"
 end

 # Only use MANPATH if it is already set. In general `man` will just simply
 # pick up `.nix-profile/share/man` because is it close to `.nix-profile/bin`
 # which is in the $PATH. For more info, run `manpath -d`.
 if set --query MANPATH
     set --export --prepend --path MANPATH "$NIX_LINK/share/man"
 end

 add_path "$NIX_LINK/bin"
-    set --erase NIX_LINK
-end

+# Cleanup
 functions -e add_path


@@ -51,7 +51,7 @@ static bool allSupportedLocally(Store & store, const std::set<std::string>& requ
 static int main_build_remote(int argc, char * * argv)
 {
     {
-        logger = makeJSONLogger(*logger);
+        logger = makeJSONLogger(getStandardError());

         /* Ensure we don't get any SSH passphrase or host key popups. */
         unsetenv("DISPLAY");


@@ -347,7 +347,7 @@ struct MixEnvironment : virtual Args
     void setEnviron();
 };

-void completeFlakeInputPath(
+void completeFlakeInputAttrPath(
     AddCompletions & completions,
     ref<EvalState> evalState,
     const std::vector<FlakeRef> & flakeRefs,


@@ -33,8 +33,10 @@ EvalSettings evalSettings {
             // FIXME `parseFlakeRef` should take a `std::string_view`.
             auto flakeRef = parseFlakeRef(fetchSettings, std::string { rest }, {}, true, false);
             debug("fetching flake search path element '%s''", rest);
-            auto storePath = flakeRef.resolve(state.store).fetchTree(state.store).first;
-            return state.rootPath(state.store->toRealPath(storePath));
+            auto [accessor, lockedRef] = flakeRef.resolve(state.store).lazyFetch(state.store);
+            auto storePath = nix::fetchToStore(*state.store, SourcePath(accessor), FetchMode::Copy, lockedRef.input.getName());
+            state.allowPath(storePath);
+            return state.storePath(storePath);
         },
     },
 },
@@ -176,13 +178,15 @@ SourcePath lookupFileArg(EvalState & state, std::string_view s, const Path * bas
             state.fetchSettings,
             EvalSettings::resolvePseudoUrl(s));
         auto storePath = fetchToStore(*state.store, SourcePath(accessor), FetchMode::Copy);
-        return state.rootPath(CanonPath(state.store->toRealPath(storePath)));
+        return state.storePath(storePath);
     }

     else if (hasPrefix(s, "flake:")) {
         auto flakeRef = parseFlakeRef(fetchSettings, std::string(s.substr(6)), {}, true, false);
-        auto storePath = flakeRef.resolve(state.store).fetchTree(state.store).first;
-        return state.rootPath(CanonPath(state.store->toRealPath(storePath)));
+        auto [accessor, lockedRef] = flakeRef.resolve(state.store).lazyFetch(state.store);
+        auto storePath = nix::fetchToStore(*state.store, SourcePath(accessor), FetchMode::Copy, lockedRef.input.getName());
+        state.allowPath(storePath);
+        return state.storePath(storePath);
     }

     else if (s.size() > 2 && s.at(0) == '<' && s.at(s.size() - 1) == '>') {


@@ -33,7 +33,7 @@ namespace nix {

 namespace fs { using namespace std::filesystem; }

-void completeFlakeInputPath(
+void completeFlakeInputAttrPath(
     AddCompletions & completions,
     ref<EvalState> evalState,
     const std::vector<FlakeRef> & flakeRefs,
@@ -117,10 +117,10 @@ MixFlakeOptions::MixFlakeOptions()
         .labels = {"input-path"},
         .handler = {[&](std::string s) {
             warn("'--update-input' is a deprecated alias for 'flake update' and will be removed in a future version.");
-            lockFlags.inputUpdates.insert(flake::parseInputPath(s));
+            lockFlags.inputUpdates.insert(flake::parseInputAttrPath(s));
         }},
         .completer = {[&](AddCompletions & completions, size_t, std::string_view prefix) {
-            completeFlakeInputPath(completions, getEvalState(), getFlakeRefsForCompletion(), prefix);
+            completeFlakeInputAttrPath(completions, getEvalState(), getFlakeRefsForCompletion(), prefix);
         }}
     });

@@ -129,15 +129,15 @@ MixFlakeOptions::MixFlakeOptions()
         .description = "Override a specific flake input (e.g. `dwarffs/nixpkgs`). This implies `--no-write-lock-file`.",
         .category = category,
         .labels = {"input-path", "flake-url"},
-        .handler = {[&](std::string inputPath, std::string flakeRef) {
+        .handler = {[&](std::string inputAttrPath, std::string flakeRef) {
             lockFlags.writeLockFile = false;
             lockFlags.inputOverrides.insert_or_assign(
-                flake::parseInputPath(inputPath),
+                flake::parseInputAttrPath(inputAttrPath),
                 parseFlakeRef(fetchSettings, flakeRef, absPath(getCommandBaseDir()), true));
         }},
         .completer = {[&](AddCompletions & completions, size_t n, std::string_view prefix) {
             if (n == 0) {
-                completeFlakeInputPath(completions, getEvalState(), getFlakeRefsForCompletion(), prefix);
+                completeFlakeInputAttrPath(completions, getEvalState(), getFlakeRefsForCompletion(), prefix);
             } else if (n == 1) {
                 completeFlakeRef(completions, getEvalState()->store, prefix);
             }


@@ -50,7 +50,7 @@ Args::Flag hashAlgo(std::string && longName, HashAlgorithm * ha)
 {
     return Args::Flag {
         .longName = std::move(longName),
-        .description = "Hash algorithm (`md5`, `sha1`, `sha256`, or `sha512`).",
+        .description = "Hash algorithm (`blake3`, `md5`, `sha1`, `sha256`, or `sha512`).",
         .labels = {"hash-algo"},
         .handler = {[ha](std::string s) {
             *ha = parseHashAlgo(s);
@@ -63,7 +63,7 @@ Args::Flag hashAlgoOpt(std::string && longName, std::optional<HashAlgorithm> * o
 {
     return Args::Flag {
         .longName = std::move(longName),
-        .description = "Hash algorithm (`md5`, `sha1`, `sha256`, or `sha512`). Can be omitted for SRI hashes.",
+        .description = "Hash algorithm (`blake3`, `md5`, `sha1`, `sha256`, or `sha512`). Can be omitted for SRI hashes.",
         .labels = {"hash-algo"},
         .handler = {[oha](std::string s) {
             *oha = std::optional<HashAlgorithm>{parseHashAlgo(s)};
@@ -120,7 +120,7 @@ Args::Flag contentAddressMethod(ContentAddressMethod * method)
       - [`text`](@docroot@/store/store-object/content-address.md#method-text):
         Like `flat`, but used for
-        [derivations](@docroot@/glossary.md#store-derivation) serialized in store object and
+        [derivations](@docroot@/glossary.md#gloss-store-derivation) serialized in store object and
         [`builtins.toFile`](@docroot@/language/builtins.html#builtins-toFile).
         For advanced use-cases only;
         for regular usage prefer `nar` and `flat`.


@@ -101,6 +101,8 @@ struct NixRepl
         Value & v,
         unsigned int maxDepth = std::numeric_limits<unsigned int>::max())
     {
+        // Hide the progress bar during printing because it might interfere
+        auto suspension = logger->suspend();
         ::nix::printValue(*state, str, v, PrintOptions {
             .ansiColors = true,
             .force = true,
@@ -126,7 +128,7 @@ NixRepl::NixRepl(const LookupPath & lookupPath, nix::ref<Store> store, ref<EvalS
     : AbstractNixRepl(state)
     , debugTraceIndex(0)
     , getValues(getValues)
-    , staticEnv(new StaticEnv(nullptr, state->staticBaseEnv.get()))
+    , staticEnv(new StaticEnv(nullptr, state->staticBaseEnv))
     , runNixPtr{runNix}
     , interacter(make_unique<ReadlineLikeInteracter>(getDataDir() + "/repl-history"))
 {
@@ -138,16 +140,13 @@ static std::ostream & showDebugTrace(std::ostream & out, const PosTable & positi
     out << ANSI_RED "error: " << ANSI_NORMAL;
     out << dt.hint.str() << "\n";

-    // prefer direct pos, but if noPos then try the expr.
-    auto pos = dt.pos
-        ? dt.pos
-        : positions[dt.expr.getPos() ? dt.expr.getPos() : noPos];
+    auto pos = dt.getPos(positions);

     if (pos) {
-        out << *pos;
-        if (auto loc = pos->getCodeLines()) {
+        out << pos;
+        if (auto loc = pos.getCodeLines()) {
             out << "\n";
-            printCodeLines(out, "", *pos, *loc);
+            printCodeLines(out, "", pos, *loc);
             out << "\n";
         }
     }
@@ -177,18 +176,20 @@ ReplExitStatus NixRepl::mainLoop()
     while (true) {
         // Hide the progress bar while waiting for user input, so that it won't interfere.
-        logger->pause();
-        // When continuing input from previous lines, don't print a prompt, just align to the same
-        // number of chars as the prompt.
-        if (!interacter->getLine(input, input.empty() ? ReplPromptType::ReplPrompt : ReplPromptType::ContinuationPrompt)) {
-            // Ctrl-D should exit the debugger.
-            state->debugStop = false;
-            logger->cout("");
-            // TODO: Should Ctrl-D exit just the current debugger session or
-            // the entire program?
-            return ReplExitStatus::QuitAll;
+        {
+            auto suspension = logger->suspend();
+            // When continuing input from previous lines, don't print a prompt, just align to the same
+            // number of chars as the prompt.
+            if (!interacter->getLine(input, input.empty() ? ReplPromptType::ReplPrompt : ReplPromptType::ContinuationPrompt)) {
+                // Ctrl-D should exit the debugger.
+                state->debugStop = false;
+                logger->cout("");
+                // TODO: Should Ctrl-D exit just the current debugger session or
+                // the entire program?
+                return ReplExitStatus::QuitAll;
+            }
+            // `suspension` resumes the logger
         }
-        logger->resume();
         try {
             switch (processLine(input)) {
                 case ProcessLineResult::Quit:
@@ -583,6 +584,7 @@ ProcessLineResult NixRepl::processLine(std::string line)
     else if (command == ":p" || command == ":print") {
         Value v;
         evalString(arg, v);
+        auto suspension = logger->suspend();
         if (v.type() == nString) {
             std::cout << v.string_view();
         } else {
@@ -691,6 +693,7 @@ ProcessLineResult NixRepl::processLine(std::string line)
     } else {
         Value v;
         evalString(line, v);
+        auto suspension = logger->suspend();
         printValue(std::cout, v, 1);
         std::cout << std::endl;
     }
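The change above replaces paired `logger->pause()`/`logger->resume()` calls with a scope-bound `logger->suspend()` token, so the logger is resumed on every exit path (including the early `return`). A generic sketch of that RAII pattern, under the assumption that `suspend()` returns a guard object like this (the names here are illustrative, not Nix's actual types):

```cpp
#include <functional>
#include <utility>

// Scope-bound suspension: the constructor pauses some activity and the
// destructor resumes it, so early returns and exceptions both restore it.
class Suspension {
    std::function<void()> resume_;
public:
    Suspension(std::function<void()> pause, std::function<void()> resume)
        : resume_(std::move(resume))
    {
        pause();
    }
    ~Suspension()
    {
        if (resume_)
            resume_();
    }
    Suspension(const Suspension &) = delete;
    Suspension & operator=(const Suspension &) = delete;
};
```

Compared with manual pause/resume calls, the guard cannot be forgotten on a new code path, which is exactly the bug class the diff removes.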


@@ -1152,7 +1152,7 @@ namespace nix {

     ASSERT_TRACE1("hashString \"foo\" \"content\"",
                   UsageError,
-                  HintFmt("unknown hash algorithm '%s', expect 'md5', 'sha1', 'sha256', or 'sha512'", "foo"));
+                  HintFmt("unknown hash algorithm '%s', expect 'blake3', 'md5', 'sha1', 'sha256', or 'sha512'", "foo"));

     ASSERT_TRACE2("hashString \"sha256\" {}",
                   TypeError,


@@ -172,7 +172,7 @@ TEST_F(nix_api_expr_test, nix_expr_realise_context_bad_build)

 TEST_F(nix_api_expr_test, nix_expr_realise_context)
 {
-    // TODO (ca-derivations): add a content-addressed derivation output, which produces a placeholder
+    // TODO (ca-derivations): add a content-addressing derivation output, which produces a placeholder
     auto expr = R"(
         ''
             a derivation output: ${


@@ -28,20 +28,15 @@ namespace nix {
     };

     class CaptureLogging {
-        Logger * oldLogger;
-        std::unique_ptr<CaptureLogger> tempLogger;
+        std::unique_ptr<Logger> oldLogger;
     public:
-        CaptureLogging() : tempLogger(std::make_unique<CaptureLogger>()) {
-            oldLogger = logger;
-            logger = tempLogger.get();
+        CaptureLogging() {
+            oldLogger = std::move(logger);
+            logger = std::make_unique<CaptureLogger>();
         }

         ~CaptureLogging() {
-            logger = oldLogger;
-        }
-
-        std::string get() const {
-            return tempLogger->get();
+            logger = std::move(oldLogger);
         }
     };

@@ -113,7 +108,7 @@ namespace nix {
         CaptureLogging l;
         auto v = eval("builtins.trace \"test string 123\" 123");
         ASSERT_THAT(v, IsIntEq(123));
-        auto text = l.get();
+        auto text = (dynamic_cast<CaptureLogger *>(logger.get()))->get();
         ASSERT_NE(text.find("test string 123"), std::string::npos);
     }
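The test helper above changes because the global `logger` is now a `std::unique_ptr<Logger>` rather than a raw pointer: the old logger must be moved out on construction and moved back on destruction. A self-contained sketch of the same swap-and-restore guard (the `Logger`/`CaptureLogger` classes here are stand-ins, not Nix's real logging types):

```cpp
#include <memory>
#include <string>
#include <utility>

struct Logger {
    virtual ~Logger() = default;
    virtual void log(const std::string &) {}
};

struct CaptureLogger : Logger {
    std::string captured;
    void log(const std::string & s) override { captured += s; }
};

// Hypothetical global, standing in for nix::logger.
std::unique_ptr<Logger> logger = std::make_unique<Logger>();

// Swap in a capturing logger for the guard's lifetime, then restore.
class CaptureLogging {
    std::unique_ptr<Logger> oldLogger;
public:
    CaptureLogging()
    {
        oldLogger = std::move(logger);
        logger = std::make_unique<CaptureLogger>();
    }
    ~CaptureLogging()
    {
        logger = std::move(oldLogger);
    }
};
```

Because ownership lives in the `unique_ptr`, callers that want the captured text must reach it through the global (as the updated test does with `dynamic_cast`), since the guard no longer keeps a second pointer to it.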


@@ -23,7 +23,7 @@ let
   resolveInput =
     inputSpec: if builtins.isList inputSpec then getInputByPath lockFile.root inputSpec else inputSpec;

-  # Follow an input path (e.g. ["dwarffs" "nixpkgs"]) from the
+  # Follow an input attrpath (e.g. ["dwarffs" "nixpkgs"]) from the
   # root node, returning the final node.
   getInputByPath =
     nodeName: path:


@@ -45,7 +45,7 @@ EvalErrorBuilder<T> & EvalErrorBuilder<T>::withFrame(const Env & env, const Expr
     // TODO: check compatibility with nested debugger calls.
     // TODO: What side-effects??
     error.state.debugTraces.push_front(DebugTrace{
-        .pos = error.state.positions[expr.getPos()],
+        .pos = expr.getPos(),
         .expr = expr,
         .env = env,
         .hint = HintFmt("Fake frame for debugging purposes"),


@@ -146,7 +146,7 @@ inline void EvalState::forceList(Value & v, const PosIdx pos, std::string_view e
 [[gnu::always_inline]]
 inline CallDepth EvalState::addCallDepth(const PosIdx pos) {
     if (callDepth > settings.maxCallDepth)
-        error<EvalError>("stack overflow; max-call-depth exceeded").atPos(pos).debugThrow();
+        error<EvalBaseError>("stack overflow; max-call-depth exceeded").atPos(pos).debugThrow();

     return CallDepth(callDepth);
 };


@@ -2,7 +2,6 @@
 ///@file

 #include "config.hh"
-#include "ref.hh"
 #include "source-path.hh"

 namespace nix {


@@ -246,15 +246,42 @@ EvalState::EvalState(
     , repair(NoRepair)
     , emptyBindings(0)
     , rootFS(
-        settings.restrictEval || settings.pureEval
-        ? ref<SourceAccessor>(AllowListSourceAccessor::create(getFSSourceAccessor(), {},
-            [&settings](const CanonPath & path) -> RestrictedPathError {
-                auto modeInformation = settings.pureEval
-                    ? "in pure evaluation mode (use '--impure' to override)"
-                    : "in restricted mode";
-                throw RestrictedPathError("access to absolute path '%1%' is forbidden %2%", path, modeInformation);
-            }))
-        : getFSSourceAccessor())
+        ({
+            /* In pure eval mode, we provide a filesystem that only
+               contains the Nix store.
+
+               If we have a chroot store and pure eval is not enabled,
+               use a union accessor to make the chroot store available
+               at its logical location while still having the
+               underlying directory available. This is necessary for
+               instance if we're evaluating a file from the physical
+               /nix/store while using a chroot store. */
+            auto accessor = getFSSourceAccessor();
+
+            auto realStoreDir = dirOf(store->toRealPath(StorePath::dummy));
+            if (settings.pureEval || store->storeDir != realStoreDir) {
+                auto storeFS = makeMountedSourceAccessor(
+                    {
+                        {CanonPath::root, makeEmptySourceAccessor()},
+                        {CanonPath(store->storeDir), makeFSSourceAccessor(realStoreDir)}
+                    });
+                accessor = settings.pureEval
+                    ? storeFS
+                    : makeUnionSourceAccessor({accessor, storeFS});
+            }
+
+            /* Apply access control if needed. */
+            if (settings.restrictEval || settings.pureEval)
+                accessor = AllowListSourceAccessor::create(accessor, {},
+                    [&settings](const CanonPath & path) -> RestrictedPathError {
+                        auto modeInformation = settings.pureEval
+                            ? "in pure evaluation mode (use '--impure' to override)"
+                            : "in restricted mode";
+                        throw RestrictedPathError("access to absolute path '%1%' is forbidden %2%", path, modeInformation);
+                    });
+
+            accessor;
+        }))
     , corepkgsFS(make_ref<MemorySourceAccessor>())
     , internalFS(make_ref<MemorySourceAccessor>())
     , derivationInternal{corepkgsFS->addFile(
@@ -344,7 +371,7 @@ void EvalState::allowPath(const Path & path)
 void EvalState::allowPath(const StorePath & storePath)
 {
     if (auto rootFS2 = rootFS.dynamic_pointer_cast<AllowListSourceAccessor>())
-        rootFS2->allowPrefix(CanonPath(store->toRealPath(storePath)));
+        rootFS2->allowPrefix(CanonPath(store->printStorePath(storePath)));
 }

 void EvalState::allowClosure(const StorePath & storePath)
@@ -422,16 +449,6 @@ void EvalState::checkURI(const std::string & uri)
 }

-Path EvalState::toRealPath(const Path & path, const NixStringContext & context)
-{
-    // FIXME: check whether 'path' is in 'context'.
-    return
-        !context.empty() && store->isInStore(path)
-        ? store->toRealPath(path)
-        : path;
-}
-
 Value * EvalState::addConstant(const std::string & name, Value & v, Constant info)
 {
     Value * v2 = allocValue();
@@ -754,18 +771,26 @@ void EvalState::runDebugRepl(const Error * error, const Env & env, const Expr &
     if (!debugRepl || inDebugger)
         return;

-    auto dts =
-        error && expr.getPos()
-        ? std::make_unique<DebugTraceStacker>(
-            *this,
-            DebugTrace {
-                .pos = error->info().pos ? error->info().pos : positions[expr.getPos()],
+    auto dts = [&]() -> std::unique_ptr<DebugTraceStacker> {
+        if (error && expr.getPos()) {
+            auto trace = DebugTrace{
+                .pos = [&]() -> std::variant<Pos, PosIdx> {
+                    if (error->info().pos) {
+                        if (auto * pos = error->info().pos.get())
+                            return *pos;
+                        return noPos;
+                    }
+                    return expr.getPos();
+                }(),
                 .expr = expr,
                 .env = env,
                 .hint = error->info().msg,
-                .isError = true
-            })
-        : nullptr;
+                .isError = true};
+
+            return std::make_unique<DebugTraceStacker>(*this, std::move(trace));
+        }
+        return nullptr;
+    }();

     if (error)
     {
@@ -810,7 +835,7 @@ static std::unique_ptr<DebugTraceStacker> makeDebugTraceStacker(
     EvalState & state,
     Expr & expr,
     Env & env,
-    std::shared_ptr<Pos> && pos,
+    std::variant<Pos, PosIdx> pos,
     const Args & ... formatArgs)
 {
     return std::make_unique<DebugTraceStacker>(state,
@@ -1087,7 +1112,7 @@ void EvalState::evalFile(const SourcePath & path, Value & v, bool mustBeTrivial)
             *this,
             *e,
             this->baseEnv,
-            e->getPos() ? std::make_shared<Pos>(positions[e->getPos()]) : nullptr,
+            e->getPos(),
             "while evaluating the file '%1%':", resolvedPath.to_string())
         : nullptr;
@@ -1313,9 +1338,7 @@ void ExprLet::eval(EvalState & state, Env & env, Value & v)
             state,
             *this,
             env2,
-            getPos()
-                ? std::make_shared<Pos>(state.positions[getPos()])
-                : nullptr,
+            getPos(),
             "while evaluating a '%1%' expression",
             "let"
         )
@@ -1384,7 +1407,7 @@ void ExprSelect::eval(EvalState & state, Env & env, Value & v)
             state,
             *this,
             env,
-            state.positions[getPos()],
+            getPos(),
             "while evaluating the attribute '%1%'",
             showAttrPath(state, env, attrPath))
         : nullptr;
@@ -1585,7 +1608,7 @@ void EvalState::callFunction(Value & fun, std::span<Value *> args, Value & vRes,
         try {
             auto dts = debugRepl
                 ? makeDebugTraceStacker(
-                    *this, *lambda.body, env2, positions[lambda.pos],
+                    *this, *lambda.body, env2, lambda.pos,
                     "while calling %s",
                     lambda.name
                         ? concatStrings("'", symbols[lambda.name], "'")
@@ -1720,9 +1743,7 @@ void ExprCall::eval(EvalState & state, Env & env, Value & v)
             state,
             *this,
             env,
-            getPos()
-                ? std::make_shared<Pos>(state.positions[getPos()])
-                : nullptr,
+            getPos(),
             "while calling a function"
         )
         : nullptr;
@@ -2051,7 +2072,7 @@ void ExprConcatStrings::eval(EvalState & state, Env & env, Value & v)
     else if (firstType == nPath) {
         if (!context.empty())
             state.error<EvalError>("a string that refers to a store path cannot be appended to a path").atPos(pos).withFrame(env, *this).debugThrow();
-        v.mkPath(state.rootPath(CanonPath(canonPath(str()))));
+        v.mkPath(state.rootPath(CanonPath(str())));
     } else
         v.mkStringMove(c_str(), context);
 }
@@ -2106,7 +2127,7 @@ void EvalState::forceValueDeep(Value & v)
                 try {
                     // If the value is a thunk, we're evaling. Otherwise no trace necessary.
                     auto dts = debugRepl && i.value->isThunk()
-                        ? makeDebugTraceStacker(*this, *i.value->payload.thunk.expr, *i.value->payload.thunk.env, positions[i.pos],
+                        ? makeDebugTraceStacker(*this, *i.value->payload.thunk.expr, *i.value->payload.thunk.env, i.pos,
                             "while evaluating the attribute '%1%'", symbols[i.name])
                         : nullptr;
@@ -2432,7 +2453,7 @@ SourcePath EvalState::coerceToPath(const PosIdx pos, Value & v, NixStringContext
     auto path = coerceToString(pos, v, context, errorCtx, false, false, true).toOwned();
     if (path == "" || path[0] != '/')
         error<EvalError>("string '%1%' doesn't represent an absolute path", path).withTrace(pos, errorCtx).debugThrow();
-    return rootPath(CanonPath(path));
+    return rootPath(path);
 }
@@ -3086,7 +3107,7 @@ std::optional<SourcePath> EvalState::resolveLookupPathPath(const LookupPath::Pat
                 fetchSettings,
                 EvalSettings::resolvePseudoUrl(value));
             auto storePath = fetchToStore(*store, SourcePath(accessor), FetchMode::Copy);
-            return finish(rootPath(store->toRealPath(storePath)));
+            return finish(this->storePath(storePath));
         } catch (Error & e) {
             logWarning({
                 .msg = HintFmt("Nix search path entry '%1%' cannot be downloaded, ignoring", value)
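The `rootFS` change above mounts the real store directory at the store's logical location (`makeMountedSourceAccessor`), so path lookups dispatch to the accessor whose mount prefix matches. A self-contained sketch of that prefix-dispatch idea, with plain string labels standing in for Nix's `SourceAccessor` objects (`resolveMount` is illustrative only):

```cpp
#include <map>
#include <optional>
#include <string>

// Resolve a path against a mount table by longest matching prefix,
// analogous to a mounted source accessor dispatching to sub-accessors.
std::optional<std::string> resolveMount(
    const std::map<std::string, std::string> & mounts,
    const std::string & path)
{
    std::optional<std::string> best;
    std::size_t bestLen = 0;
    for (const auto & [prefix, label] : mounts) {
        // A prefix matches at a path-component boundary (or is the root).
        bool matches = path.compare(0, prefix.size(), prefix) == 0
            && (path.size() == prefix.size() || path[prefix.size()] == '/' || prefix == "/");
        if (matches && prefix.size() >= bestLen) {
            best = label;
            bestLen = prefix.size();
        }
    }
    return best;
}
```

With `{"/": empty, "/nix/store": real store}`, a lookup of `/nix/store/...` reaches the real store while everything else hits the empty accessor, which is the pure-eval behavior; non-pure chroot stores instead union this table with the host filesystem.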


@@ -171,11 +171,28 @@ struct RegexCache;
 std::shared_ptr<RegexCache> makeRegexCache();

 struct DebugTrace {
-    std::shared_ptr<Pos> pos;
+    /* WARNING: Converting PosIdx -> Pos should be done with extra care. This is
+       due to the fact that operator[] of PosTable is incredibly expensive. */
+    std::variant<Pos, PosIdx> pos;
     const Expr & expr;
     const Env & env;
     HintFmt hint;
     bool isError;
+
+    Pos getPos(const PosTable & table) const
+    {
+        return std::visit(
+            overloaded{
+                [&](PosIdx idx) {
+                    // Prefer direct pos, but if noPos then try the expr.
+                    if (!idx)
+                        idx = expr.getPos();
+                    return table[idx];
+                },
+                [&](Pos pos) { return pos; },
+            },
+            pos);
+    }
 };

 class EvalState : public std::enable_shared_from_this<EvalState>
@@ -389,6 +406,15 @@ public:
      */
     SourcePath rootPath(PathView path);

+    /**
+     * Return a `SourcePath` that refers to `path` in the store.
+     *
+     * For now, this has to also be within the root filesystem for
+     * backwards compat, but for Windows and maybe also pure eval, we'll
+     * probably want to do something different.
+     */
+    SourcePath storePath(const StorePath & path);
+
     /**
      * Allow access to a path.
      */
@@ -412,17 +438,6 @@ public:
     void checkURI(const std::string & uri);

-    /**
-     * When using a diverted store and 'path' is in the Nix store, map
-     * 'path' to the diverted location (e.g. /nix/store/foo is mapped
-     * to /home/alice/my-nix/nix/store/foo). However, this is only
-     * done if the context is not empty, since otherwise we're
-     * probably trying to read from the actual /nix/store. This is
-     * intended to distinguish between import-from-derivation and
-     * sources stored in the actual /nix/store.
-     */
-    Path toRealPath(const Path & path, const NixStringContext & context);
-
     /**
      * Parse a Nix expression from the specified file.
      */
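`DebugTrace` now stores a `std::variant<Pos, PosIdx>` and only converts `PosIdx` to `Pos` on demand, since `PosTable::operator[]` is expensive. The `overloaded` helper it uses is the standard visitor-aggregation idiom; a self-contained sketch (redefined here for illustration, with a toy variant in place of `Pos`/`PosIdx`):

```cpp
#include <string>
#include <variant>

// Aggregate several lambdas into a single visitor for std::visit.
template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>; // C++17 deduction guide

// Toy analogue of DebugTrace::getPos: an int plays the role of a cheap
// PosIdx, a std::string that of an already-resolved Pos.
std::string describe(const std::variant<int, std::string> & v)
{
    return std::visit(
        overloaded{
            [](int) { return std::string("index"); },
            [](const std::string &) { return std::string("resolved"); },
        },
        v);
}
```

Each alternative gets its own lambda, and the compiler rejects a visitor that fails to cover every alternative, which is why the pattern is well suited to "resolve lazily or pass through" conversions like this one.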


@@ -24,6 +24,7 @@ deps_public_maybe_subproject = [
   dependency('nix-fetchers'),
 ]
 subdir('nix-meson-build-support/subprojects')
+subdir('nix-meson-build-support/big-objs')

 boost = dependency(
   'boost',
@@ -171,8 +172,6 @@ headers = [config_h] + files(
   # internal: 'lexer-helpers.hh',
   'nixexpr.hh',
   'parser-state.hh',
-  'pos-idx.hh',
-  'pos-table.hh',
   'primops.hh',
   'print-ambiguous.hh',
   'print-options.hh',


@@ -310,7 +310,7 @@ void ExprVar::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> &
     const StaticEnv * curEnv;
     Level level;
     int withLevel = -1;
-    for (curEnv = env.get(), level = 0; curEnv; curEnv = curEnv->up, level++) {
+    for (curEnv = env.get(), level = 0; curEnv; curEnv = curEnv->up.get(), level++) {
         if (curEnv->isWith) {
             if (withLevel == -1) withLevel = level;
         } else {
@@ -331,7 +331,7 @@ void ExprVar::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> &
                 "undefined variable '%1%'",
                 es.symbols[name]
             ).atPos(pos).debugThrow();
-        for (auto * e = env.get(); e && !fromWith; e = e->up)
+        for (auto * e = env.get(); e && !fromWith; e = e->up.get())
             fromWith = e->isWith;
         this->level = withLevel;
     }
@@ -379,7 +379,7 @@ std::shared_ptr<const StaticEnv> ExprAttrs::bindInheritSources(
     // and displacement, and nothing else is allowed to access it. ideally we'd
     // not even *have* an expr that grabs anything from this env since it's fully
     // invisible, but the evaluator does not allow for this yet.
-    auto inner = std::make_shared<StaticEnv>(nullptr, env.get(), 0);
+    auto inner = std::make_shared<StaticEnv>(nullptr, env, 0);
     for (auto from : *inheritFromExprs)
         from->bindVars(es, env);
@@ -393,7 +393,7 @@ void ExprAttrs::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv>
     if (recursive) {
         auto newEnv = [&] () -> std::shared_ptr<const StaticEnv> {
-            auto newEnv = std::make_shared<StaticEnv>(nullptr, env.get(), attrs.size());
+            auto newEnv = std::make_shared<StaticEnv>(nullptr, env, attrs.size());

             Displacement displ = 0;
             for (auto & i : attrs)
@@ -440,7 +440,7 @@ void ExprLambda::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv>
     es.exprEnvs.insert(std::make_pair(this, env));

     auto newEnv = std::make_shared<StaticEnv>(
-        nullptr, env.get(),
+        nullptr, env,
         (hasFormals() ? formals->formals.size() : 0) +
         (!arg ? 0 : 1));
@@ -474,7 +474,7 @@ void ExprCall::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> &
 void ExprLet::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> & env)
 {
     auto newEnv = [&] () -> std::shared_ptr<const StaticEnv> {
-        auto newEnv = std::make_shared<StaticEnv>(nullptr, env.get(), attrs->attrs.size());
+        auto newEnv = std::make_shared<StaticEnv>(nullptr, env, attrs->attrs.size());

         Displacement displ = 0;
         for (auto & i : attrs->attrs)
@@ -500,7 +500,7 @@ void ExprWith::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> &
     es.exprEnvs.insert(std::make_pair(this, env));

     parentWith = nullptr;
-    for (auto * e = env.get(); e && !parentWith; e = e->up)
+    for (auto * e = env.get(); e && !parentWith; e = e->up.get())
         parentWith = e->isWith;

     /* Does this `with' have an enclosing `with'? If so, record its
@@ -509,14 +509,14 @@ void ExprWith::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> &
     const StaticEnv * curEnv;
     Level level;
     prevWith = 0;
-    for (curEnv = env.get(), level = 1; curEnv; curEnv = curEnv->up, level++)
+    for (curEnv = env.get(), level = 1; curEnv; curEnv = curEnv->up.get(), level++)
         if (curEnv->isWith) {
             prevWith = level;
             break;
         }

     attrs->bindVars(es, env);
-    auto newEnv = std::make_shared<StaticEnv>(this, env.get());
+    auto newEnv = std::make_shared<StaticEnv>(this, env);
     body->bindVars(es, newEnv);
 }
@@ -601,41 +601,6 @@ void ExprLambda::setDocComment(DocComment docComment) {
     }
 };

-/* Position table. */
-
-Pos PosTable::operator[](PosIdx p) const
-{
-    auto origin = resolve(p);
-    if (!origin)
-        return {};
-
-    const auto offset = origin->offsetOf(p);
-
-    Pos result{0, 0, origin->origin};
-    auto lines = this->lines.lock();
-    auto linesForInput = (*lines)[origin->offset];
-
-    if (linesForInput.empty()) {
-        auto source = result.getSource().value_or("");
-        const char * begin = source.data();
-        for (Pos::LinesIterator it(source), end; it != end; it++)
-            linesForInput.push_back(it->data() - begin);
-        if (linesForInput.empty())
-            linesForInput.push_back(0);
-    }
-
-    // as above: the first line starts at byte 0 and is always present
-    auto lineStartOffset = std::prev(
-        std::upper_bound(linesForInput.begin(), linesForInput.end(), offset));
-
-    result.line = 1 + (lineStartOffset - linesForInput.begin());
-    result.column = 1 + (offset - *lineStartOffset);
-
-    return result;
-}
-
 /* Symbol table. */

 size_t SymbolTable::totalSize() const

View file

@@ -480,13 +480,16 @@ extern ExprBlackHole eBlackHole;
 struct StaticEnv
 {
     ExprWith * isWith;
-    const StaticEnv * up;
+    std::shared_ptr<const StaticEnv> up;

     // Note: these must be in sorted order.
     typedef std::vector<std::pair<Symbol, Displacement>> Vars;
     Vars vars;

-    StaticEnv(ExprWith * isWith, const StaticEnv * up, size_t expectedSize = 0) : isWith(isWith), up(up) {
+    StaticEnv(ExprWith * isWith, std::shared_ptr<const StaticEnv> up, size_t expectedSize = 0)
+        : isWith(isWith)
+        , up(std::move(up))
+    {
         vars.reserve(expectedSize);
     };

View file

@@ -359,11 +359,18 @@ string_parts_interpolated
 path_start
   : PATH {
-      Path path(absPath(std::string_view{$1.p, $1.l}, state->basePath.path.abs()));
+      std::string_view literal({$1.p, $1.l});
+      Path path(absPath(literal, state->basePath.path.abs()));
       /* add back in the trailing '/' to the first segment */
-      if ($1.p[$1.l-1] == '/' && $1.l > 1)
-          path += "/";
-      $$ = new ExprPath(ref<SourceAccessor>(state->rootFS), std::move(path));
+      if (literal.size() > 1 && literal.back() == '/')
+          path += '/';
+      $$ =
+          /* Absolute paths are always interpreted relative to the
+             root filesystem accessor, rather than the accessor of the
+             current Nix expression. */
+          literal.front() == '/'
+              ? new ExprPath(state->rootFS, std::move(path))
+              : new ExprPath(state->basePath.accessor, std::move(path));
   }
   | HPATH {
       if (state->settings.pureEval) {

View file

@@ -1,3 +1,4 @@
+#include "store-api.hh"
 #include "eval.hh"

 namespace nix {
@@ -12,4 +13,9 @@ SourcePath EvalState::rootPath(PathView path)
     return {rootFS, CanonPath(absPath(path))};
 }

+SourcePath EvalState::storePath(const StorePath & path)
+{
+    return {rootFS, CanonPath{store->printStorePath(path)}};
+}
+
 }

View file

@@ -145,8 +145,7 @@ static SourcePath realisePath(EvalState & state, const PosIdx pos, Value & v, st
     try {
         if (!context.empty() && path.accessor == state.rootFS) {
             auto rewrites = state.realiseContext(context);
-            auto realPath = state.toRealPath(rewriteStrings(path.path.abs(), rewrites), context);
-            path = {path.accessor, CanonPath(realPath)};
+            path = {path.accessor, CanonPath(rewriteStrings(path.path.abs(), rewrites))};
         }
         return resolveSymlinks ? path.resolveSymlinks(*resolveSymlinks) : path;
     } catch (Error & e) {
@@ -239,7 +238,7 @@ static void scopedImport(EvalState & state, const PosIdx pos, SourcePath & path,
     Env * env = &state.allocEnv(vScope->attrs()->size());
     env->up = &state.baseEnv;

-    auto staticEnv = std::make_shared<StaticEnv>(nullptr, state.staticBaseEnv.get(), vScope->attrs()->size());
+    auto staticEnv = std::make_shared<StaticEnv>(nullptr, state.staticBaseEnv, vScope->attrs()->size());

     unsigned int displ = 0;
     for (auto & attr : *vScope->attrs()) {
@@ -1595,9 +1594,13 @@ static RegisterPrimOp primop_placeholder({
     .name = "placeholder",
     .args = {"output"},
     .doc = R"(
-      Return a placeholder string for the specified *output* that will be
-      substituted by the corresponding output path at build time. Typical
-      outputs would be `"out"`, `"bin"` or `"dev"`.
+      Return an
+      [output placeholder string](@docroot@/store/derivation/index.md#output-placeholder)
+      for the specified *output* that will be substituted by the corresponding
+      [output path](@docroot@/glossary.md#gloss-output-path)
+      at build time.
+
+      Typical outputs would be `"out"`, `"bin"` or `"dev"`.
     )",
     .fun = prim_placeholder,
 });
@@ -2135,12 +2138,15 @@ static RegisterPrimOp primop_outputOf({
     .name = "__outputOf",
     .args = {"derivation-reference", "output-name"},
     .doc = R"(
-      Return the output path of a derivation, literally or using a placeholder if needed.
+      Return the output path of a derivation, literally or using an
+      [input placeholder string](@docroot@/store/derivation/index.md#input-placeholder)
+      if needed.

       If the derivation has a statically-known output path (i.e. the derivation output is input-addressed, or fixed content-addressed), the output path will just be returned.
-      But if the derivation is content-addressed or if the derivation is itself not-statically produced (i.e. is the output of another derivation), a placeholder will be returned instead.
+      But if the derivation is content-addressed or if the derivation is itself not-statically produced (i.e. is the output of another derivation), an input placeholder will be returned instead.

-      *`derivation reference`* must be a string that may contain a regular store path to a derivation, or may be a placeholder reference. If the derivation is produced by a derivation, you must explicitly select `drv.outPath`.
+      *`derivation reference`* must be a string that may contain a regular store path to a derivation, or may be an input placeholder reference.
+      If the derivation is produced by a derivation, you must explicitly select `drv.outPath`.

       This primop can be chained arbitrarily deeply.
       For instance,
@@ -2150,9 +2156,9 @@ static RegisterPrimOp primop_outputOf({
         "out"
       ```
-      will return a placeholder for the output of the output of `myDrv`.
+      will return an input placeholder for the output of the output of `myDrv`.

-      This primop corresponds to the `^` sigil for derivable paths, e.g. as part of installable syntax on the command line.
+      This primop corresponds to the `^` sigil for [deriving paths](@docroot@/glossary.md#gloss-deriving-paths), e.g. as part of installable syntax on the command line.
     )",
     .fun = prim_outputOf,
     .experimentalFeature = Xp::DynamicDerivations,
@@ -2472,21 +2478,11 @@ static void addPath(
     const NixStringContext & context)
 {
     try {
-        StorePathSet refs;
-
         if (path.accessor == state.rootFS && state.store->isInStore(path.path.abs())) {
             // FIXME: handle CA derivation outputs (where path needs to
             // be rewritten to the actual output).
             auto rewrites = state.realiseContext(context);
-            path = {state.rootFS, CanonPath(state.toRealPath(rewriteStrings(path.path.abs(), rewrites), context))};
-            try {
-                auto [storePath, subPath] = state.store->toStorePath(path.path.abs());
-                // FIXME: we should scanForReferences on the path before adding it
-                refs = state.store->queryPathInfo(storePath)->references;
-                path = {state.rootFS, CanonPath(state.store->toRealPath(storePath) + subPath)};
-            } catch (Error &) { // FIXME: should be InvalidPathError
-            }
+            path = {path.accessor, CanonPath(rewriteStrings(path.path.abs(), rewrites))};
         }

         std::unique_ptr<PathFilter> filter;

View file

@@ -90,24 +90,26 @@ static void fetchTree(
     fetchers::Input input { state.fetchSettings };
     NixStringContext context;
     std::optional<std::string> type;
+    auto fetcher = params.isFetchGit ? "fetchGit" : "fetchTree";
     if (params.isFetchGit) type = "git";

     state.forceValue(*args[0], pos);

     if (args[0]->type() == nAttrs) {
-        state.forceAttrs(*args[0], pos, "while evaluating the argument passed to builtins.fetchTree");
+        state.forceAttrs(*args[0], pos, fmt("while evaluating the argument passed to '%s'", fetcher));

         fetchers::Attrs attrs;

         if (auto aType = args[0]->attrs()->get(state.sType)) {
             if (type)
                 state.error<EvalError>(
-                    "unexpected attribute 'type'"
+                    "unexpected argument 'type'"
                 ).atPos(pos).debugThrow();
-            type = state.forceStringNoCtx(*aType->value, aType->pos, "while evaluating the `type` attribute passed to builtins.fetchTree");
+            type = state.forceStringNoCtx(*aType->value, aType->pos,
+                fmt("while evaluating the `type` argument passed to '%s'", fetcher));
         } else if (!type)
             state.error<EvalError>(
-                "attribute 'type' is missing in call to 'fetchTree'"
+                "argument 'type' is missing in call to '%s'", fetcher
             ).atPos(pos).debugThrow();

         attrs.emplace("type", type.value());
@@ -127,9 +129,8 @@ static void fetchTree(
             else if (attr.value->type() == nInt) {
                 auto intValue = attr.value->integer().value;

-                if (intValue < 0) {
-                    state.error<EvalError>("negative value given for fetchTree attr %1%: %2%", state.symbols[attr.name], intValue).atPos(pos).debugThrow();
-                }
+                if (intValue < 0)
+                    state.error<EvalError>("negative value given for '%s' argument '%s': %d", fetcher, state.symbols[attr.name], intValue).atPos(pos).debugThrow();

                 attrs.emplace(state.symbols[attr.name], uint64_t(intValue));
             } else if (state.symbols[attr.name] == "publicKeys") {
@@ -137,8 +138,8 @@ static void fetchTree(
                 attrs.emplace(state.symbols[attr.name], printValueAsJSON(state, true, *attr.value, pos, context).dump());
             }
             else
-                state.error<TypeError>("fetchTree argument '%s' is %s while a string, Boolean or integer is expected",
-                    state.symbols[attr.name], showType(*attr.value)).debugThrow();
+                state.error<TypeError>("argument '%s' to '%s' is %s while a string, Boolean or integer is expected",
+                    state.symbols[attr.name], fetcher, showType(*attr.value)).debugThrow();
         }

         if (params.isFetchGit && !attrs.contains("exportIgnore") && (!attrs.contains("submodules") || !*fetchers::maybeGetBoolAttr(attrs, "submodules"))) {
@@ -153,14 +154,14 @@ static void fetchTree(
         if (!params.allowNameArgument)
             if (auto nameIter = attrs.find("name"); nameIter != attrs.end())
                 state.error<EvalError>(
-                    "attribute 'name' isn't supported in call to 'fetchTree'"
+                    "argument 'name' isn't supported in call to '%s'", fetcher
                 ).atPos(pos).debugThrow();

         input = fetchers::Input::fromAttrs(state.fetchSettings, std::move(attrs));
     } else {
         auto url = state.coerceToString(pos, *args[0], context,
-            "while evaluating the first argument passed to the fetcher",
+            fmt("while evaluating the first argument passed to '%s'", fetcher),
             false, false).toOwned();

         if (params.isFetchGit) {
             fetchers::Attrs attrs;
@@ -178,15 +179,16 @@ static void fetchTree(
     if (!state.settings.pureEval && !input.isDirect())
         input = lookupInRegistries(state.store, input).first;

-    if (state.settings.pureEval && !input.isConsideredLocked(state.fetchSettings)) {
-        auto fetcher = "fetchTree";
-        if (params.isFetchGit)
-            fetcher = "fetchGit";
-
-        state.error<EvalError>(
-            "in pure evaluation mode, '%s' will not fetch unlocked input '%s'",
-            fetcher, input.to_string()
-        ).atPos(pos).debugThrow();
+    if (state.settings.pureEval && !input.isLocked()) {
+        if (input.getNarHash())
+            warn(
+                "Input '%s' is unlocked (e.g. lacks a Git revision) but does have a NAR hash. "
+                "This is deprecated since such inputs are verifiable but may not be reproducible.",
+                input.to_string());
+        else
+            state.error<EvalError>(
+                "in pure evaluation mode, '%s' will not fetch unlocked input '%s'",
+                fetcher, input.to_string()).atPos(pos).debugThrow();
     }

     state.checkURI(input.toURLString());
@@ -361,6 +363,12 @@ static RegisterPrimOp primop_fetchTree({
         Default: `false`

+      - `lfs` (Bool, optional)
+
+        Fetch any [Git LFS](https://git-lfs.com/) files.
+
+        Default: `false`
+
       - `allRefs` (Bool, optional)

         By default, this has no effect. This becomes relevant only once `shallow` cloning is disabled.
@@ -683,6 +691,13 @@ static RegisterPrimOp primop_fetchGit({
         Make a shallow clone when fetching the Git tree.
         When this is enabled, the options `ref` and `allRefs` have no effect anymore.

+      - `lfs` (default: `false`)
+
+        A boolean that when `true` specifies that [Git LFS] files should be fetched.
+
+        [Git LFS]: https://git-lfs.com/
+
       - `allRefs`

         Whether to fetch all references (eg. branches and tags) of the repository.

View file

@@ -0,0 +1,97 @@
+#include <gtest/gtest.h>
+
+#include "fetchers.hh"
+#include "fetch-settings.hh"
+#include "json-utils.hh"
+#include <nlohmann/json.hpp>
+#include "tests/characterization.hh"
+
+namespace nix::fetchers {
+
+using nlohmann::json;
+
+class AccessKeysTest : public ::testing::Test
+{
+protected:
+
+public:
+    void SetUp() override {}
+    void TearDown() override {}
+};
+
+TEST_F(AccessKeysTest, singleOrgGitHub)
+{
+    fetchers::Settings fetchSettings = fetchers::Settings{};
+    fetchSettings.accessTokens.get().insert({"github.com/a", "token"});
+    auto i = Input::fromURL(fetchSettings, "github:a/b");
+
+    auto token = i.scheme->getAccessToken(fetchSettings, "github.com", "github.com/a/b");
+    ASSERT_EQ(token, "token");
+}
+
+TEST_F(AccessKeysTest, nonMatches)
+{
+    fetchers::Settings fetchSettings = fetchers::Settings{};
+    fetchSettings.accessTokens.get().insert({"github.com", "token"});
+    auto i = Input::fromURL(fetchSettings, "gitlab:github.com/evil");
+
+    auto token = i.scheme->getAccessToken(fetchSettings, "gitlab.com", "gitlab.com/github.com/evil");
+    ASSERT_EQ(token, std::nullopt);
+}
+
+TEST_F(AccessKeysTest, noPartialMatches)
+{
+    fetchers::Settings fetchSettings = fetchers::Settings{};
+    fetchSettings.accessTokens.get().insert({"github.com/partial", "token"});
+    auto i = Input::fromURL(fetchSettings, "github:partial-match/repo");
+
+    auto token = i.scheme->getAccessToken(fetchSettings, "github.com", "github.com/partial-match");
+    ASSERT_EQ(token, std::nullopt);
+}
+
+TEST_F(AccessKeysTest, repoGitHub)
+{
+    fetchers::Settings fetchSettings = fetchers::Settings{};
+    fetchSettings.accessTokens.get().insert({"github.com", "token"});
+    fetchSettings.accessTokens.get().insert({"github.com/a/b", "another_token"});
+    fetchSettings.accessTokens.get().insert({"github.com/a/c", "yet_another_token"});
+    auto i = Input::fromURL(fetchSettings, "github:a/a");
+
+    auto token = i.scheme->getAccessToken(fetchSettings, "github.com", "github.com/a/a");
+    ASSERT_EQ(token, "token");
+
+    token = i.scheme->getAccessToken(fetchSettings, "github.com", "github.com/a/b");
+    ASSERT_EQ(token, "another_token");
+
+    token = i.scheme->getAccessToken(fetchSettings, "github.com", "github.com/a/c");
+    ASSERT_EQ(token, "yet_another_token");
+}
+
+TEST_F(AccessKeysTest, multipleGitLab)
+{
+    fetchers::Settings fetchSettings = fetchers::Settings{};
+    fetchSettings.accessTokens.get().insert({"gitlab.com", "token"});
+    fetchSettings.accessTokens.get().insert({"gitlab.com/a/b", "another_token"});
+    auto i = Input::fromURL(fetchSettings, "gitlab:a/b");
+
+    auto token = i.scheme->getAccessToken(fetchSettings, "gitlab.com", "gitlab.com/a/b");
+    ASSERT_EQ(token, "another_token");
+
+    token = i.scheme->getAccessToken(fetchSettings, "gitlab.com", "gitlab.com/a/c");
+    ASSERT_EQ(token, "token");
+}
+
+TEST_F(AccessKeysTest, multipleSourceHut)
+{
+    fetchers::Settings fetchSettings = fetchers::Settings{};
+    fetchSettings.accessTokens.get().insert({"git.sr.ht", "token"});
+    fetchSettings.accessTokens.get().insert({"git.sr.ht/~a/b", "another_token"});
+    auto i = Input::fromURL(fetchSettings, "sourcehut:a/b");
+
+    auto token = i.scheme->getAccessToken(fetchSettings, "git.sr.ht", "git.sr.ht/~a/b");
+    ASSERT_EQ(token, "another_token");
+
+    token = i.scheme->getAccessToken(fetchSettings, "git.sr.ht", "git.sr.ht/~a/c");
+    ASSERT_EQ(token, "token");
+}
+
+}

View file

@@ -7,13 +7,18 @@
 #include <gtest/gtest.h>

 #include "fs-sink.hh"
 #include "serialise.hh"
+#include "git-lfs-fetch.hh"

 namespace nix {

+namespace fs {
+using namespace std::filesystem;
+}
+
 class GitUtilsTest : public ::testing::Test
 {
     // We use a single repository for all tests.
-    Path tmpDir;
+    fs::path tmpDir;
     std::unique_ptr<AutoDelete> delTmpDir;

 public:
@@ -25,7 +30,7 @@ public:
         // Create the repo with libgit2
         git_libgit2_init();
         git_repository * repo = nullptr;
-        auto r = git_repository_init(&repo, tmpDir.c_str(), 0);
+        auto r = git_repository_init(&repo, tmpDir.string().c_str(), 0);
         ASSERT_EQ(r, 0);
         git_repository_free(repo);
     }
@@ -41,6 +46,11 @@ public:
     {
         return GitRepo::openRepo(tmpDir, true, false);
     }
+
+    std::string getRepoName() const
+    {
+        return tmpDir.filename().string();
+    }
 };

 void writeString(CreateRegularFileSink & fileSink, std::string contents, bool executable)
@@ -78,7 +88,7 @@ TEST_F(GitUtilsTest, sink_basic)
     // sink->createHardlink("foo-1.1/links/foo-2", CanonPath("foo-1.1/hello"));

     auto result = repo->dereferenceSingletonDirectory(sink->flush());
-    auto accessor = repo->getAccessor(result, false);
+    auto accessor = repo->getAccessor(result, false, getRepoName());
     auto entries = accessor->readDirectory(CanonPath::root);
     ASSERT_EQ(entries.size(), 5);
     ASSERT_EQ(accessor->readFile(CanonPath("hello")), "hello world");

View file

@@ -31,6 +31,9 @@ deps_private += rapidcheck
 gtest = dependency('gtest', main : true)
 deps_private += gtest

+libgit2 = dependency('libgit2')
+deps_private += libgit2
+
 add_project_arguments(
   # TODO(Qyriad): Yes this is how the autoconf+Make system did it.
   # It would be nice for our headers to be idempotent instead.
@@ -43,6 +46,8 @@ add_project_arguments(
 subdir('nix-meson-build-support/common')

 sources = files(
+  'access-tokens.cc',
+  'git-utils.cc',
   'public-key.cc',
 )

View file

@@ -7,6 +7,7 @@
   nix-fetchers,
   nix-store-test-support,

+  libgit2,
   rapidcheck,
   gtest,
   runCommand,
@@ -42,6 +43,7 @@ mkMesonExecutable (finalAttrs: {
     nix-store-test-support
     rapidcheck
     gtest
+    libgit2
   ];

   mesonFlags = [

View file

@@ -23,9 +23,11 @@ struct Settings : public Config
           Access tokens are specified as a string made up of
           space-separated `host=token` values. The specific token
           used is selected by matching the `host` portion against the
-          "host" specification of the input. The actual use of the
-          `token` value is determined by the type of resource being
-          accessed:
+          "host" specification of the input. The `host` portion may
+          contain a path element which will match against the prefix
+          URL for the input (e.g. `github.com/org=token`). The actual use
+          of the `token` value is determined by the type of resource
+          being accessed:

           * Github: the token value is the OAUTH-TOKEN string obtained
             as the Personal Access Token from the Github server (see

View file

@@ -155,12 +155,6 @@ bool Input::isLocked() const
     return scheme && scheme->isLocked(*this);
 }

-bool Input::isConsideredLocked(
-    const Settings & settings) const
-{
-    return isLocked() || (settings.allowDirtyLocks && getNarHash());
-}
-
 bool Input::isFinal() const
 {
     return maybeGetBoolAttr(attrs, "__final").value_or(false);
@@ -192,6 +186,7 @@ bool Input::contains(const Input & other) const
     return false;
 }

+// FIXME: remove
 std::pair<StorePath, Input> Input::fetchToStore(ref<Store> store) const
 {
     if (!scheme)
@@ -206,10 +201,6 @@ std::pair<StorePath, Input> Input::fetchToStore(ref<Store> store) const
     auto narHash = store->queryPathInfo(storePath)->narHash;
     result.attrs.insert_or_assign("narHash", narHash.to_string(HashFormat::SRI, true));

-    // FIXME: we would like to mark inputs as final in
-    // getAccessorUnchecked(), but then we can't add
-    // narHash. Or maybe narHash should be excluded from the
-    // concept of "final" inputs?
     result.attrs.insert_or_assign("__final", Explicit<bool>(true));

     assert(result.isFinal());
@@ -290,6 +281,8 @@ std::pair<ref<SourceAccessor>, Input> Input::getAccessor(ref<Store> store) const
     try {
         auto [accessor, result] = getAccessorUnchecked(store);

+        result.attrs.insert_or_assign("__final", Explicit<bool>(true));
+
         checkLocks(*this, result);

         return {accessor, std::move(result)};
View file

@@ -90,15 +90,6 @@ public:
      */
     bool isLocked() const;

-    /**
-     * Return whether the input is either locked, or, if
-     * `allow-dirty-locks` is enabled, it has a NAR hash. In the
-     * latter case, we can verify the input but we may not be able to
-     * fetch it from anywhere.
-     */
-    bool isConsideredLocked(
-        const Settings & settings) const;
-
     /**
      * Only for relative path flakes, i.e. 'path:./foo', returns the
      * relative path, i.e. './foo'.
@@ -273,6 +264,9 @@ struct InputScheme
     virtual std::optional<std::string> isRelative(const Input & input) const
     { return std::nullopt; }
+
+    virtual std::optional<std::string> getAccessToken(const fetchers::Settings & settings, const std::string & host, const std::string & url) const
+    { return {}; }
 };

 void registerInputScheme(std::shared_ptr<InputScheme> && fetcher);

View file

@ -0,0 +1,279 @@
#include "git-lfs-fetch.hh"
#include "git-utils.hh"
#include "filetransfer.hh"
#include "processes.hh"
#include "url.hh"
#include "users.hh"
#include "hash.hh"
#include <git2/attr.h>
#include <git2/config.h>
#include <git2/errors.h>
#include <git2/remote.h>
#include <nlohmann/json.hpp>
namespace nix::lfs {
// if authHeader is "", downloadToSink assumes no auth is expected
static void downloadToSink(
const std::string & url,
const std::string & authHeader,
// FIXME: passing a StringSink is superfluous, we may as well
// return a string. Or use an abstract Sink for streaming.
StringSink & sink,
std::string sha256Expected,
size_t sizeExpected)
{
FileTransferRequest request(url);
Headers headers;
if (!authHeader.empty())
headers.push_back({"Authorization", authHeader});
request.headers = headers;
getFileTransfer()->download(std::move(request), sink);
auto sizeActual = sink.s.length();
if (sizeExpected != sizeActual)
throw Error("size mismatch while fetching %s: expected %d but got %d", url, sizeExpected, sizeActual);
auto sha256Actual = hashString(HashAlgorithm::SHA256, sink.s).to_string(HashFormat::Base16, false);
if (sha256Actual != sha256Expected)
throw Error(
"hash mismatch while fetching %s: expected sha256:%s but got sha256:%s", url, sha256Expected, sha256Actual);
}
static std::string getLfsApiToken(const ParsedURL & url)
{
auto [status, output] = runProgram(RunOptions{
.program = "ssh",
.args = {*url.authority, "git-lfs-authenticate", url.path, "download"},
});
if (output.empty())
throw Error(
"git-lfs-authenticate: no output (cmd: ssh %s git-lfs-authenticate %s download)",
url.authority.value_or(""),
url.path);
auto queryResp = nlohmann::json::parse(output);
if (!queryResp.contains("header"))
throw Error("no header in git-lfs-authenticate response");
if (!queryResp["header"].contains("Authorization"))
throw Error("no Authorization in git-lfs-authenticate response");
return queryResp["header"]["Authorization"].get<std::string>();
}
typedef std::unique_ptr<git_config, Deleter<git_config_free>> GitConfig;
typedef std::unique_ptr<git_config_entry, Deleter<git_config_entry_free>> GitConfigEntry;
static std::string getLfsEndpointUrl(git_repository * repo)
{
GitConfig config;
if (git_repository_config(Setter(config), repo)) {
GitConfigEntry entry;
if (!git_config_get_entry(Setter(entry), config.get(), "lfs.url")) {
auto value = std::string(entry->value);
if (!value.empty()) {
debug("Found explicit lfs.url value: %s", value);
return value;
}
}
}
git_remote * remote = nullptr;
if (git_remote_lookup(&remote, repo, "origin"))
return "";
const char * url_c_str = git_remote_url(remote);
if (!url_c_str)
return "";
return std::string(url_c_str);
}
static std::optional<Pointer> parseLfsPointer(std::string_view content, std::string_view filename)
{
// https://github.com/git-lfs/git-lfs/blob/2ef4108/docs/spec.md
//
// example git-lfs pointer file:
// version https://git-lfs.github.com/spec/v1
// oid sha256:f5e02aa71e67f41d79023a128ca35bad86cf7b6656967bfe0884b3a3c4325eaf
// size 10000000
// (ending \n)
if (!content.starts_with("version ")) {
// Invalid pointer file
return std::nullopt;
}
if (!content.starts_with("version https://git-lfs.github.com/spec/v1")) {
        // There may be new spec versions in the future; for now only v1 exists
debug("Invalid version found on potential lfs pointer file, skipping");
return std::nullopt;
}
std::string oid;
std::string size;
for (auto & line : tokenizeString<Strings>(content, "\n")) {
if (line.starts_with("version ")) {
continue;
}
if (line.starts_with("oid sha256:")) {
oid = line.substr(11); // skip "oid sha256:"
continue;
}
if (line.starts_with("size ")) {
size = line.substr(5); // skip "size "
continue;
}
debug("Custom extension '%s' found, ignoring", line);
}
if (oid.length() != 64 || !std::all_of(oid.begin(), oid.end(), ::isxdigit)) {
debug("Invalid sha256 %s, skipping", oid);
return std::nullopt;
}
if (size.length() == 0 || !std::all_of(size.begin(), size.end(), ::isdigit)) {
debug("Invalid size %s, skipping", size);
return std::nullopt;
}
return std::make_optional(Pointer{oid, std::stoul(size)});
}
Fetch::Fetch(git_repository * repo, git_oid rev)
{
this->repo = repo;
this->rev = rev;
const auto remoteUrl = lfs::getLfsEndpointUrl(repo);
this->url = nix::parseURL(nix::fixGitURL(remoteUrl)).canonicalise();
}
bool Fetch::shouldFetch(const CanonPath & path) const
{
const char * attr = nullptr;
git_attr_options opts = GIT_ATTR_OPTIONS_INIT;
opts.attr_commit_id = this->rev;
opts.flags = GIT_ATTR_CHECK_INCLUDE_COMMIT | GIT_ATTR_CHECK_NO_SYSTEM;
if (git_attr_get_ext(&attr, (git_repository *) (this->repo), &opts, path.rel_c_str(), "filter"))
throw Error("cannot get git-lfs attribute: %s", git_error_last()->message);
debug("Git filter for '%s' is '%s'", path, attr ? attr : "null");
return attr != nullptr && !std::string(attr).compare("lfs");
}
static nlohmann::json pointerToPayload(const std::vector<Pointer> & items)
{
nlohmann::json jArray = nlohmann::json::array();
for (const auto & pointer : items)
jArray.push_back({{"oid", pointer.oid}, {"size", pointer.size}});
return jArray;
}
std::vector<nlohmann::json> Fetch::fetchUrls(const std::vector<Pointer> & pointers) const
{
ParsedURL httpUrl(url);
httpUrl.scheme = url.scheme == "ssh" ? "https" : url.scheme;
FileTransferRequest request(httpUrl.to_string() + "/info/lfs/objects/batch");
request.post = true;
Headers headers;
if (this->url.scheme == "ssh")
headers.push_back({"Authorization", lfs::getLfsApiToken(this->url)});
headers.push_back({"Content-Type", "application/vnd.git-lfs+json"});
headers.push_back({"Accept", "application/vnd.git-lfs+json"});
request.headers = headers;
nlohmann::json oidList = pointerToPayload(pointers);
nlohmann::json data = {{"operation", "download"}};
data["objects"] = oidList;
request.data = data.dump();
FileTransferResult result = getFileTransfer()->upload(request);
auto responseString = result.data;
std::vector<nlohmann::json> objects;
// example resp here:
// {"objects":[{"oid":"f5e02aa71e67f41d79023a128ca35bad86cf7b6656967bfe0884b3a3c4325eaf","size":10000000,"actions":{"download":{"href":"https://gitlab.com/b-camacho/test-lfs.git/gitlab-lfs/objects/f5e02aa71e67f41d79023a128ca35bad86cf7b6656967bfe0884b3a3c4325eaf","header":{"Authorization":"Basic
// Yi1jYW1hY2hvOmV5SjBlWEFpT2lKS1YxUWlMQ0poYkdjaU9pSklVekkxTmlKOS5leUprWVhSaElqcDdJbUZqZEc5eUlqb2lZaTFqWVcxaFkyaHZJbjBzSW1wMGFTSTZJbUptTURZNFpXVTFMVEprWmpVdE5HWm1ZUzFpWWpRMExUSXpNVEV3WVRReU1qWmtaaUlzSW1saGRDSTZNVGN4TkRZeE16ZzBOU3dpYm1KbUlqb3hOekUwTmpFek9EUXdMQ0psZUhBaU9qRTNNVFEyTWpFd05EVjkuZk9yMDNkYjBWSTFXQzFZaTBKRmJUNnJTTHJPZlBwVW9lYllkT0NQZlJ4QQ=="}}},"authenticated":true}]}
try {
auto resp = nlohmann::json::parse(responseString);
if (resp.contains("objects"))
objects.insert(objects.end(), resp["objects"].begin(), resp["objects"].end());
else
throw Error("response does not contain 'objects'");
return objects;
} catch (const nlohmann::json::parse_error & e) {
printMsg(lvlTalkative, "Full response: '%1%'", responseString);
throw Error("response did not parse as json: %s", e.what());
}
}
void Fetch::fetch(
const std::string & content,
const CanonPath & pointerFilePath,
StringSink & sink,
std::function<void(uint64_t)> sizeCallback) const
{
debug("trying to fetch '%s' using git-lfs", pointerFilePath);
if (content.length() >= 1024) {
warn("encountered file '%s' that should have been a git-lfs pointer, but is too large", pointerFilePath);
sizeCallback(content.length());
sink(content);
return;
}
const auto pointer = parseLfsPointer(content, pointerFilePath.rel());
if (pointer == std::nullopt) {
warn("encountered file '%s' that should have been a git-lfs pointer, but is invalid", pointerFilePath);
sizeCallback(content.length());
sink(content);
return;
}
Path cacheDir = getCacheDir() + "/git-lfs";
std::string key = hashString(HashAlgorithm::SHA256, pointerFilePath.rel()).to_string(HashFormat::Base16, false)
+ "/" + pointer->oid;
Path cachePath = cacheDir + "/" + key;
if (pathExists(cachePath)) {
debug("using cache entry %s -> %s", key, cachePath);
sink(readFile(cachePath));
return;
}
debug("did not find cache entry for %s", key);
std::vector<Pointer> pointers;
pointers.push_back(pointer.value());
const auto objUrls = fetchUrls(pointers);
const auto obj = objUrls[0];
try {
std::string sha256 = obj.at("oid"); // oid is also the sha256
std::string ourl = obj.at("actions").at("download").at("href");
std::string authHeader = "";
if (obj.at("actions").at("download").contains("header")
&& obj.at("actions").at("download").at("header").contains("Authorization")) {
authHeader = obj["actions"]["download"]["header"]["Authorization"];
}
const uint64_t size = obj.at("size");
sizeCallback(size);
downloadToSink(ourl, authHeader, sink, sha256, size);
debug("creating cache entry %s -> %s", key, cachePath);
if (!pathExists(dirOf(cachePath)))
createDirs(dirOf(cachePath));
writeFile(cachePath, sink.s);
debug("%s fetched with git-lfs", pointerFilePath);
} catch (const nlohmann::json::out_of_range & e) {
throw Error("bad json from /info/lfs/objects/batch: %s %s", obj, e.what());
}
}
} // namespace nix::lfs


@@ -0,0 +1,43 @@
#include "canon-path.hh"
#include "serialise.hh"
#include "url.hh"
#include <git2/repository.h>
#include <nlohmann/json_fwd.hpp>
namespace nix::lfs {
/**
* git-lfs pointer
* @see https://github.com/git-lfs/git-lfs/blob/2ef4108/docs/spec.md
*/
struct Pointer
{
    std::string oid;  // git-lfs managed object id; sent to the LFS server for downloads
    size_t size;      // in bytes
};
struct Fetch
{
// Reference to the repository
const git_repository * repo;
// Git commit being fetched
git_oid rev;
// derived from git remote url
nix::ParsedURL url;
Fetch(git_repository * repo, git_oid rev);
bool shouldFetch(const CanonPath & path) const;
void fetch(
const std::string & content,
const CanonPath & pointerFilePath,
StringSink & sink,
std::function<void(uint64_t)> sizeCallback) const;
std::vector<nlohmann::json> fetchUrls(const std::vector<Pointer> & pointers) const;
};
} // namespace nix::lfs


@@ -1,4 +1,5 @@
 #include "git-utils.hh"
+#include "git-lfs-fetch.hh"
 #include "cache.hh"
 #include "finally.hh"
 #include "processes.hh"
@@ -60,14 +61,6 @@ namespace nix {
 struct GitSourceAccessor;
 
-// Some wrapper types that ensure that the git_*_free functions get called.
-template<auto del>
-struct Deleter
-{
-    template <typename T>
-    void operator()(T * p) const { del(p); };
-};
-
 typedef std::unique_ptr<git_repository, Deleter<git_repository_free>> Repository;
 typedef std::unique_ptr<git_tree_entry, Deleter<git_tree_entry_free>> TreeEntry;
 typedef std::unique_ptr<git_tree, Deleter<git_tree_free>> Tree;
@@ -85,20 +78,6 @@ typedef std::unique_ptr<git_odb, Deleter<git_odb_free>> ObjectDb;
 typedef std::unique_ptr<git_packbuilder, Deleter<git_packbuilder_free>> PackBuilder;
 typedef std::unique_ptr<git_indexer, Deleter<git_indexer_free>> Indexer;
 
-// A helper to ensure that we don't leak objects returned by libgit2.
-template<typename T>
-struct Setter
-{
-    T & t;
-    typename T::pointer p = nullptr;
-
-    Setter(T & t) : t(t) { }
-
-    ~Setter() { if (p) t = T(p); }
-
-    operator typename T::pointer * () { return &p; }
-};
-
 Hash toHash(const git_oid & oid)
 {
 #ifdef GIT_EXPERIMENTAL_SHA256
@@ -506,12 +485,15 @@ struct GitRepoImpl : GitRepo, std::enable_shared_from_this<GitRepoImpl>
     /**
      * A 'GitSourceAccessor' with no regard for export-ignore or any other transformations.
     */
-    ref<GitSourceAccessor> getRawAccessor(const Hash & rev);
+    ref<GitSourceAccessor> getRawAccessor(
+        const Hash & rev,
+        bool smudgeLfs = false);
 
     ref<SourceAccessor> getAccessor(
         const Hash & rev,
         bool exportIgnore,
-        std::string displayPrefix) override;
+        std::string displayPrefix,
+        bool smudgeLfs = false) override;
 
     ref<SourceAccessor> getAccessor(const WorkdirInfo & wd, bool exportIgnore, MakeNotAllowedError e) override;
@@ -670,24 +652,40 @@ ref<GitRepo> GitRepo::openRepo(const std::filesystem::path & path, bool create,
 /**
  * Raw git tree input accessor.
  */
 struct GitSourceAccessor : SourceAccessor
 {
     ref<GitRepoImpl> repo;
     Object root;
+    std::optional<lfs::Fetch> lfsFetch = std::nullopt;
 
-    GitSourceAccessor(ref<GitRepoImpl> repo_, const Hash & rev)
+    GitSourceAccessor(ref<GitRepoImpl> repo_, const Hash & rev, bool smudgeLfs)
         : repo(repo_)
         , root(peelToTreeOrBlob(lookupObject(*repo, hashToOID(rev)).get()))
     {
+        if (smudgeLfs)
+            lfsFetch = std::make_optional(lfs::Fetch(*repo, hashToOID(rev)));
     }
 
     std::string readBlob(const CanonPath & path, bool symlink)
     {
-        auto blob = getBlob(path, symlink);
+        const auto blob = getBlob(path, symlink);
 
-        auto data = std::string_view((const char *) git_blob_rawcontent(blob.get()), git_blob_rawsize(blob.get()));
+        if (lfsFetch) {
+            if (lfsFetch->shouldFetch(path)) {
+                StringSink s;
+                try {
+                    auto contents = std::string((const char *) git_blob_rawcontent(blob.get()), git_blob_rawsize(blob.get()));
+                    lfsFetch->fetch(contents, path, s, [&s](uint64_t size){ s.s.reserve(size); });
+                } catch (Error & e) {
+                    e.addTrace({}, "while smudging git-lfs file '%s'", path);
+                    throw;
+                }
+                return s.s;
+            }
+        }
 
-        return std::string(data);
+        return std::string((const char *) git_blob_rawcontent(blob.get()), git_blob_rawsize(blob.get()));
    }
 
     std::string readFile(const CanonPath & path) override
@@ -1191,19 +1189,22 @@ struct GitFileSystemObjectSinkImpl : GitFileSystemObjectSink
     }
 };
 
-ref<GitSourceAccessor> GitRepoImpl::getRawAccessor(const Hash & rev)
+ref<GitSourceAccessor> GitRepoImpl::getRawAccessor(
+    const Hash & rev,
+    bool smudgeLfs)
 {
     auto self = ref<GitRepoImpl>(shared_from_this());
-    return make_ref<GitSourceAccessor>(self, rev);
+    return make_ref<GitSourceAccessor>(self, rev, smudgeLfs);
 }
 
 ref<SourceAccessor> GitRepoImpl::getAccessor(
     const Hash & rev,
     bool exportIgnore,
-    std::string displayPrefix)
+    std::string displayPrefix,
+    bool smudgeLfs)
 {
     auto self = ref<GitRepoImpl>(shared_from_this());
-    ref<GitSourceAccessor> rawGitAccessor = getRawAccessor(rev);
+    ref<GitSourceAccessor> rawGitAccessor = getRawAccessor(rev, smudgeLfs);
     rawGitAccessor->setPathDisplay(std::move(displayPrefix));
     if (exportIgnore)
         return make_ref<GitExportIgnoreSourceAccessor>(self, rawGitAccessor, rev);


@@ -89,7 +89,8 @@ struct GitRepo
     virtual ref<SourceAccessor> getAccessor(
         const Hash & rev,
         bool exportIgnore,
-        std::string displayPrefix) = 0;
+        std::string displayPrefix,
+        bool smudgeLfs = false) = 0;
 
     virtual ref<SourceAccessor> getAccessor(const WorkdirInfo & wd, bool exportIgnore, MakeNotAllowedError makeNotAllowedError) = 0;
@@ -126,4 +127,26 @@ struct GitRepo
 
 ref<GitRepo> getTarballCache();
 
+// A helper to ensure that the `git_*_free` functions get called.
+template<auto del>
+struct Deleter
+{
+    template <typename T>
+    void operator()(T * p) const { del(p); };
+};
+
+// A helper to ensure that we don't leak objects returned by libgit2.
+template<typename T>
+struct Setter
+{
+    T & t;
+    typename T::pointer p = nullptr;
+
+    Setter(T & t) : t(t) { }
+
+    ~Setter() { if (p) t = T(p); }
+
+    operator typename T::pointer * () { return &p; }
+};
+
 }


@@ -9,7 +9,6 @@
 #include "pathlocks.hh"
 #include "processes.hh"
 #include "git.hh"
-#include "mounted-source-accessor.hh"
 #include "git-utils.hh"
 #include "logging.hh"
 #include "finally.hh"
@@ -185,7 +184,7 @@ struct GitInputScheme : InputScheme
         for (auto & [name, value] : url.query) {
             if (name == "rev" || name == "ref" || name == "keytype" || name == "publicKey" || name == "publicKeys")
                 attrs.emplace(name, value);
-            else if (name == "shallow" || name == "submodules" || name == "exportIgnore" || name == "allRefs" || name == "verifyCommit")
+            else if (name == "shallow" || name == "submodules" || name == "lfs" || name == "exportIgnore" || name == "allRefs" || name == "verifyCommit")
                 attrs.emplace(name, Explicit<bool> { value == "1" });
             else
                 url2.query.emplace(name, value);
@@ -210,6 +209,7 @@ struct GitInputScheme : InputScheme
             "rev",
             "shallow",
             "submodules",
+            "lfs",
             "exportIgnore",
             "lastModified",
             "revCount",
@@ -262,6 +262,8 @@ struct GitInputScheme : InputScheme
         if (auto ref = input.getRef()) url.query.insert_or_assign("ref", *ref);
         if (getShallowAttr(input))
             url.query.insert_or_assign("shallow", "1");
+        if (getLfsAttr(input))
+            url.query.insert_or_assign("lfs", "1");
         if (getSubmodulesAttr(input))
             url.query.insert_or_assign("submodules", "1");
         if (maybeGetBoolAttr(input.attrs, "exportIgnore").value_or(false))
@@ -349,8 +351,7 @@ struct GitInputScheme : InputScheme
         if (commitMsg) {
             // Pause the logger to allow for user input (such as a gpg passphrase) in `git commit`
-            logger->pause();
-            Finally restoreLogger([]() { logger->resume(); });
+            auto suspension = logger->suspend();
             runProgram("git", true,
                 { "-C", repoPath->string(), "--git-dir", repoInfo.gitDir, "commit", std::string(path.rel()), "-F", "-" },
                 *commitMsg);
@@ -411,6 +412,11 @@ struct GitInputScheme : InputScheme
         return maybeGetBoolAttr(input.attrs, "submodules").value_or(false);
     }
 
+    bool getLfsAttr(const Input & input) const
+    {
+        return maybeGetBoolAttr(input.attrs, "lfs").value_or(false);
+    }
+
     bool getExportIgnoreAttr(const Input & input) const
     {
         return maybeGetBoolAttr(input.attrs, "exportIgnore").value_or(false);
@@ -678,7 +684,8 @@ struct GitInputScheme : InputScheme
             verifyCommit(input, repo);
 
         bool exportIgnore = getExportIgnoreAttr(input);
-        auto accessor = repo->getAccessor(rev, exportIgnore, "«" + input.to_string() + "»");
+        bool smudgeLfs = getLfsAttr(input);
+        auto accessor = repo->getAccessor(rev, exportIgnore, "«" + input.to_string() + "»", smudgeLfs);
 
         /* If the repo has submodules, fetch them and return a mounted
            input accessor consisting of the accessor for the top-level
@@ -698,6 +705,7 @@ struct GitInputScheme : InputScheme
                 attrs.insert_or_assign("rev", submoduleRev.gitRev());
                 attrs.insert_or_assign("exportIgnore", Explicit<bool>{ exportIgnore });
                 attrs.insert_or_assign("submodules", Explicit<bool>{ true });
+                attrs.insert_or_assign("lfs", Explicit<bool>{ smudgeLfs });
                 attrs.insert_or_assign("allRefs", Explicit<bool>{ true });
                 auto submoduleInput = fetchers::Input::fromAttrs(*input.settings, std::move(attrs));
                 auto [submoduleAccessor, submoduleInput2] =
@@ -838,7 +846,7 @@ struct GitInputScheme : InputScheme
     {
         auto makeFingerprint = [&](const Hash & rev)
         {
-            return rev.gitRev() + (getSubmodulesAttr(input) ? ";s" : "") + (getExportIgnoreAttr(input) ? ";e" : "");
+            return rev.gitRev() + (getSubmodulesAttr(input) ? ";s" : "") + (getExportIgnoreAttr(input) ? ";e" : "") + (getLfsAttr(input) ? ";l" : "");
        };
 
        if (auto rev = input.getRev())

@@ -172,9 +172,30 @@ struct GitArchiveInputScheme : InputScheme
         return input;
     }
 
-    std::optional<std::string> getAccessToken(const fetchers::Settings & settings, const std::string & host) const
+    // Search for the longest possible match starting from the beginning and ending at either the end or a path segment.
+    std::optional<std::string> getAccessToken(const fetchers::Settings & settings, const std::string & host, const std::string & url) const override
     {
         auto tokens = settings.accessTokens.get();
+        std::string answer;
+        size_t answer_match_len = 0;
+        if (!url.empty()) {
+            for (auto & token : tokens) {
+                auto first = url.find(token.first);
+                if (
+                    first != std::string::npos
+                    && token.first.length() > answer_match_len
+                    && first == 0
+                    && url.substr(0, token.first.length()) == token.first
+                    && (url.length() == token.first.length() || url[token.first.length()] == '/')
+                )
+                {
+                    answer = token.second;
+                    answer_match_len = token.first.length();
+                }
+            }
+            if (!answer.empty())
+                return answer;
+        }
         if (auto token = get(tokens, host))
             return *token;
         return {};
@@ -182,10 +203,22 @@ struct GitArchiveInputScheme : InputScheme
 
     Headers makeHeadersWithAuthTokens(
         const fetchers::Settings & settings,
-        const std::string & host) const
+        const std::string & host,
+        const Input & input) const
+    {
+        auto owner = getStrAttr(input.attrs, "owner");
+        auto repo = getStrAttr(input.attrs, "repo");
+        auto hostAndPath = fmt("%s/%s/%s", host, owner, repo);
+        return makeHeadersWithAuthTokens(settings, host, hostAndPath);
+    }
+
+    Headers makeHeadersWithAuthTokens(
+        const fetchers::Settings & settings,
+        const std::string & host,
+        const std::string & hostAndPath) const
     {
         Headers headers;
-        auto accessToken = getAccessToken(settings, host);
+        auto accessToken = getAccessToken(settings, host, hostAndPath);
         if (accessToken) {
             auto hdr = accessHeaderFromToken(*accessToken);
             if (hdr)
@@ -361,7 +394,7 @@ struct GitHubInputScheme : GitArchiveInputScheme
               : "https://%s/api/v3/repos/%s/%s/commits/%s",
             host, getOwner(input), getRepo(input), *input.getRef());
 
-        Headers headers = makeHeadersWithAuthTokens(*input.settings, host);
+        Headers headers = makeHeadersWithAuthTokens(*input.settings, host, input);
 
         auto json = nlohmann::json::parse(
             readFile(
@@ -378,7 +411,7 @@ struct GitHubInputScheme : GitArchiveInputScheme
     {
         auto host = getHost(input);
-        Headers headers = makeHeadersWithAuthTokens(*input.settings, host);
+        Headers headers = makeHeadersWithAuthTokens(*input.settings, host, input);
 
         // If we have no auth headers then we default to the public archive
         // urls so we do not run into rate limits.
@@ -435,7 +468,7 @@ struct GitLabInputScheme : GitArchiveInputScheme
         auto url = fmt("https://%s/api/v4/projects/%s%%2F%s/repository/commits?ref_name=%s",
             host, getStrAttr(input.attrs, "owner"), getStrAttr(input.attrs, "repo"), *input.getRef());
 
-        Headers headers = makeHeadersWithAuthTokens(*input.settings, host);
+        Headers headers = makeHeadersWithAuthTokens(*input.settings, host, input);
 
         auto json = nlohmann::json::parse(
             readFile(
@@ -465,7 +498,7 @@ struct GitLabInputScheme : GitArchiveInputScheme
             host, getStrAttr(input.attrs, "owner"), getStrAttr(input.attrs, "repo"),
             input.getRev()->to_string(HashFormat::Base16, false));
 
-        Headers headers = makeHeadersWithAuthTokens(*input.settings, host);
+        Headers headers = makeHeadersWithAuthTokens(*input.settings, host, input);
 
         return DownloadUrl { url, headers };
     }
@@ -505,7 +538,7 @@ struct SourceHutInputScheme : GitArchiveInputScheme
         auto base_url = fmt("https://%s/%s/%s",
             host, getStrAttr(input.attrs, "owner"), getStrAttr(input.attrs, "repo"));
 
-        Headers headers = makeHeadersWithAuthTokens(*input.settings, host);
+        Headers headers = makeHeadersWithAuthTokens(*input.settings, host, input);
 
         std::string refUri;
         if (ref == "HEAD") {
@@ -552,7 +585,7 @@ struct SourceHutInputScheme : GitArchiveInputScheme
             host, getStrAttr(input.attrs, "owner"), getStrAttr(input.attrs, "repo"),
             input.getRev()->to_string(HashFormat::Base16, false));
 
-        Headers headers = makeHeadersWithAuthTokens(*input.settings, host);
+        Headers headers = makeHeadersWithAuthTokens(*input.settings, host, input);
 
         return DownloadUrl { url, headers };
     }


@@ -14,7 +14,7 @@ cxx = meson.get_compiler('cpp')
 
 subdir('nix-meson-build-support/deps-lists')
 
-configdata = configuration_data()
+configuration_data()
 
 deps_private_maybe_subproject = [
 ]
@@ -48,12 +48,12 @@ sources = files(
   'fetch-to-store.cc',
   'fetchers.cc',
   'filtering-source-accessor.cc',
+  'git-lfs-fetch.cc',
   'git-utils.cc',
   'git.cc',
   'github.cc',
   'indirect.cc',
   'mercurial.cc',
-  'mounted-source-accessor.cc',
   'path.cc',
   'registry.cc',
   'store-path-accessor.cc',
@@ -69,8 +69,8 @@ headers = files(
   'fetch-to-store.hh',
   'fetchers.hh',
   'filtering-source-accessor.hh',
+  'git-lfs-fetch.hh',
   'git-utils.hh',
-  'mounted-source-accessor.hh',
   'registry.hh',
   'store-path-accessor.hh',
   'tarball.hh',


@@ -1,9 +0,0 @@
-#pragma once
-
-#include "source-accessor.hh"
-
-namespace nix {
-
-ref<SourceAccessor> makeMountedSourceAccessor(std::map<CanonPath, ref<SourceAccessor>> mounts);
-
-}


@@ -125,7 +125,7 @@ struct PathInputScheme : InputScheme
 
         auto absPath = getAbsPath(input);
 
-        Activity act(*logger, lvlTalkative, actUnknown, fmt("copying '%s' to the store", absPath));
+        Activity act(*logger, lvlTalkative, actUnknown, fmt("copying %s to the store", absPath));
 
         // FIXME: check whether access to 'path' is allowed.
 
         auto storePath = store->maybeParseStorePath(absPath.string());


@ -12,6 +12,7 @@
#include "flake/settings.hh" #include "flake/settings.hh"
#include "value-to-json.hh" #include "value-to-json.hh"
#include "local-fs-store.hh" #include "local-fs-store.hh"
#include "fetch-to-store.hh"
#include <nlohmann/json.hpp> #include <nlohmann/json.hpp>
@ -24,7 +25,7 @@ namespace flake {
struct FetchedFlake struct FetchedFlake
{ {
FlakeRef lockedRef; FlakeRef lockedRef;
StorePath storePath; ref<SourceAccessor> accessor;
}; };
typedef std::map<FlakeRef, FetchedFlake> FlakeCache; typedef std::map<FlakeRef, FetchedFlake> FlakeCache;
@ -40,7 +41,7 @@ static std::optional<FetchedFlake> lookupInFlakeCache(
return i->second; return i->second;
} }
static std::tuple<StorePath, FlakeRef, FlakeRef> fetchOrSubstituteTree( static std::tuple<ref<SourceAccessor>, FlakeRef, FlakeRef> fetchOrSubstituteTree(
EvalState & state, EvalState & state,
const FlakeRef & originalRef, const FlakeRef & originalRef,
bool useRegistries, bool useRegistries,
@ -51,8 +52,8 @@ static std::tuple<StorePath, FlakeRef, FlakeRef> fetchOrSubstituteTree(
if (!fetched) { if (!fetched) {
if (originalRef.input.isDirect()) { if (originalRef.input.isDirect()) {
auto [storePath, lockedRef] = originalRef.fetchTree(state.store); auto [accessor, lockedRef] = originalRef.lazyFetch(state.store);
fetched.emplace(FetchedFlake{.lockedRef = lockedRef, .storePath = storePath}); fetched.emplace(FetchedFlake{.lockedRef = lockedRef, .accessor = accessor});
} else { } else {
if (useRegistries) { if (useRegistries) {
resolvedRef = originalRef.resolve( resolvedRef = originalRef.resolve(
@ -64,8 +65,8 @@ static std::tuple<StorePath, FlakeRef, FlakeRef> fetchOrSubstituteTree(
}); });
fetched = lookupInFlakeCache(flakeCache, originalRef); fetched = lookupInFlakeCache(flakeCache, originalRef);
if (!fetched) { if (!fetched) {
auto [storePath, lockedRef] = resolvedRef.fetchTree(state.store); auto [accessor, lockedRef] = resolvedRef.lazyFetch(state.store);
fetched.emplace(FetchedFlake{.lockedRef = lockedRef, .storePath = storePath}); fetched.emplace(FetchedFlake{.lockedRef = lockedRef, .accessor = accessor});
} }
flakeCache.insert_or_assign(resolvedRef, *fetched); flakeCache.insert_or_assign(resolvedRef, *fetched);
} }
@ -76,14 +77,27 @@ static std::tuple<StorePath, FlakeRef, FlakeRef> fetchOrSubstituteTree(
flakeCache.insert_or_assign(originalRef, *fetched); flakeCache.insert_or_assign(originalRef, *fetched);
} }
debug("got tree '%s' from '%s'", debug("got tree '%s' from '%s'", fetched->accessor, fetched->lockedRef);
state.store->printStorePath(fetched->storePath), fetched->lockedRef);
state.allowPath(fetched->storePath); return {fetched->accessor, resolvedRef, fetched->lockedRef};
}
assert(!originalRef.input.getNarHash() || fetched->storePath == originalRef.input.computeStorePath(*state.store)); static StorePath copyInputToStore(
EvalState & state,
fetchers::Input & input,
const fetchers::Input & originalInput,
ref<SourceAccessor> accessor)
{
auto storePath = fetchToStore(*state.store, accessor, FetchMode::Copy, input.getName());
return {fetched->storePath, resolvedRef, fetched->lockedRef}; state.allowPath(storePath);
auto narHash = state.store->queryPathInfo(storePath)->narHash;
input.attrs.insert_or_assign("narHash", narHash.to_string(HashFormat::SRI, true));
assert(!originalInput.getNarHash() || storePath == originalInput.computeStorePath(*state.store));
return storePath;
} }
static void forceTrivialValue(EvalState & state, Value & value, const PosIdx pos) static void forceTrivialValue(EvalState & state, Value & value, const PosIdx pos)
@ -101,19 +115,54 @@ static void expectType(EvalState & state, ValueType type,
showType(type), showType(value.type()), state.positions[pos]); showType(type), showType(value.type()), state.positions[pos]);
} }
static std::map<FlakeId, FlakeInput> parseFlakeInputs( static std::pair<std::map<FlakeId, FlakeInput>, fetchers::Attrs> parseFlakeInputs(
EvalState & state, EvalState & state,
Value * value, Value * value,
const PosIdx pos, const PosIdx pos,
const InputPath & lockRootPath, const InputAttrPath & lockRootAttrPath,
const SourcePath & flakeDir); const SourcePath & flakeDir,
bool allowSelf);
+static void parseFlakeInputAttr(
+    EvalState & state,
+    const Attr & attr,
+    fetchers::Attrs & attrs)
+{
+    // Allow selecting a subset of enum values
+    #pragma GCC diagnostic push
+    #pragma GCC diagnostic ignored "-Wswitch-enum"
+    switch (attr.value->type()) {
+        case nString:
+            attrs.emplace(state.symbols[attr.name], attr.value->c_str());
+            break;
+        case nBool:
+            attrs.emplace(state.symbols[attr.name], Explicit<bool> { attr.value->boolean() });
+            break;
+        case nInt: {
+            auto intValue = attr.value->integer().value;
+            if (intValue < 0)
+                state.error<EvalError>("negative value given for flake input attribute %1%: %2%", state.symbols[attr.name], intValue).debugThrow();
+            attrs.emplace(state.symbols[attr.name], uint64_t(intValue));
+            break;
+        }
+        default:
+            if (attr.name == state.symbols.create("publicKeys")) {
+                experimentalFeatureSettings.require(Xp::VerifiedFetches);
+                NixStringContext emptyContext = {};
+                attrs.emplace(state.symbols[attr.name], printValueAsJSON(state, true, *attr.value, attr.pos, emptyContext).dump());
+            } else
+                state.error<TypeError>("flake input attribute '%s' is %s while a string, Boolean, or integer is expected",
+                    state.symbols[attr.name], showType(*attr.value)).debugThrow();
+    }
+    #pragma GCC diagnostic pop
+}
 static FlakeInput parseFlakeInput(
     EvalState & state,
     std::string_view inputName,
     Value * value,
     const PosIdx pos,
-    const InputPath & lockRootPath,
+    const InputAttrPath & lockRootAttrPath,
     const SourcePath & flakeDir)
 {
     expectType(state, nAttrs, *value, pos);
@@ -137,7 +186,7 @@ static FlakeInput parseFlakeInput(
             else if (attr.value->type() == nPath) {
                 auto path = attr.value->path();
                 if (path.accessor != flakeDir.accessor)
-                    throw Error("input path '%s' at %s must be in the same source tree as %s",
+                    throw Error("input attribute path '%s' at %s must be in the same source tree as %s",
                         path, state.positions[attr.pos], flakeDir);
                 url = "path:" + flakeDir.path.makeRelative(path.path);
             }
@@ -149,44 +198,14 @@ static FlakeInput parseFlakeInput(
                 expectType(state, nBool, *attr.value, attr.pos);
                 input.isFlake = attr.value->boolean();
             } else if (attr.name == sInputs) {
-                input.overrides = parseFlakeInputs(state, attr.value, attr.pos, lockRootPath, flakeDir);
+                input.overrides = parseFlakeInputs(state, attr.value, attr.pos, lockRootAttrPath, flakeDir, false).first;
             } else if (attr.name == sFollows) {
                 expectType(state, nString, *attr.value, attr.pos);
-                auto follows(parseInputPath(attr.value->c_str()));
-                follows.insert(follows.begin(), lockRootPath.begin(), lockRootPath.end());
+                auto follows(parseInputAttrPath(attr.value->c_str()));
+                follows.insert(follows.begin(), lockRootAttrPath.begin(), lockRootAttrPath.end());
                 input.follows = follows;
-            } else {
-                // Allow selecting a subset of enum values
-                #pragma GCC diagnostic push
-                #pragma GCC diagnostic ignored "-Wswitch-enum"
-                switch (attr.value->type()) {
-                    case nString:
-                        attrs.emplace(state.symbols[attr.name], attr.value->c_str());
-                        break;
-                    case nBool:
-                        attrs.emplace(state.symbols[attr.name], Explicit<bool> { attr.value->boolean() });
-                        break;
-                    case nInt: {
-                        auto intValue = attr.value->integer().value;
-                        if (intValue < 0) {
-                            state.error<EvalError>("negative value given for flake input attribute %1%: %2%", state.symbols[attr.name], intValue).debugThrow();
-                        }
-                        attrs.emplace(state.symbols[attr.name], uint64_t(intValue));
-                        break;
-                    }
-                    default:
-                        if (attr.name == state.symbols.create("publicKeys")) {
-                            experimentalFeatureSettings.require(Xp::VerifiedFetches);
-                            NixStringContext emptyContext = {};
-                            attrs.emplace(state.symbols[attr.name], printValueAsJSON(state, true, *attr.value, pos, emptyContext).dump());
-                        } else
-                            state.error<TypeError>("flake input attribute '%s' is %s while a string, Boolean, or integer is expected",
-                                state.symbols[attr.name], showType(*attr.value)).debugThrow();
-                }
-                #pragma GCC diagnostic pop
-            }
+            } else
+                parseFlakeInputAttr(state, attr, attrs);
         } catch (Error & e) {
             e.addTrace(
                 state.positions[attr.pos],
@@ -216,28 +235,39 @@ static FlakeInput parseFlakeInput(
     return input;
 }
-static std::map<FlakeId, FlakeInput> parseFlakeInputs(
+static std::pair<std::map<FlakeId, FlakeInput>, fetchers::Attrs> parseFlakeInputs(
     EvalState & state,
     Value * value,
     const PosIdx pos,
-    const InputPath & lockRootPath,
-    const SourcePath & flakeDir)
+    const InputAttrPath & lockRootAttrPath,
+    const SourcePath & flakeDir,
+    bool allowSelf)
 {
     std::map<FlakeId, FlakeInput> inputs;
+    fetchers::Attrs selfAttrs;

     expectType(state, nAttrs, *value, pos);

     for (auto & inputAttr : *value->attrs()) {
-        inputs.emplace(state.symbols[inputAttr.name],
-            parseFlakeInput(state,
-                state.symbols[inputAttr.name],
-                inputAttr.value,
-                inputAttr.pos,
-                lockRootPath,
-                flakeDir));
+        auto inputName = state.symbols[inputAttr.name];
+        if (inputName == "self") {
+            if (!allowSelf)
+                throw Error("'self' input attribute not allowed at %s", state.positions[inputAttr.pos]);
+            expectType(state, nAttrs, *inputAttr.value, inputAttr.pos);
+            for (auto & attr : *inputAttr.value->attrs())
+                parseFlakeInputAttr(state, attr, selfAttrs);
+        } else {
+            inputs.emplace(inputName,
+                parseFlakeInput(state,
+                    inputName,
+                    inputAttr.value,
+                    inputAttr.pos,
+                    lockRootAttrPath,
+                    flakeDir));
+        }
     }

-    return inputs;
+    return {inputs, selfAttrs};
 }

 static Flake readFlake(
@@ -246,7 +276,7 @@ static Flake readFlake(
     const FlakeRef & resolvedRef,
     const FlakeRef & lockedRef,
     const SourcePath & rootDir,
-    const InputPath & lockRootPath)
+    const InputAttrPath & lockRootAttrPath)
 {
     auto flakeDir = rootDir / CanonPath(resolvedRef.subdir);
     auto flakePath = flakeDir / "flake.nix";
@@ -269,8 +299,11 @@ static Flake readFlake(
     auto sInputs = state.symbols.create("inputs");

-    if (auto inputs = vInfo.attrs()->get(sInputs))
-        flake.inputs = parseFlakeInputs(state, inputs->value, inputs->pos, lockRootPath, flakeDir);
+    if (auto inputs = vInfo.attrs()->get(sInputs)) {
+        auto [flakeInputs, selfAttrs] = parseFlakeInputs(state, inputs->value, inputs->pos, lockRootAttrPath, flakeDir, true);
+        flake.inputs = std::move(flakeInputs);
+        flake.selfAttrs = std::move(selfAttrs);
+    }
     auto sOutputs = state.symbols.create("outputs");
@@ -301,10 +334,10 @@ static Flake readFlake(
                     state.symbols[setting.name],
                     std::string(state.forceStringNoCtx(*setting.value, setting.pos, "")));
             else if (setting.value->type() == nPath) {
-                NixStringContext emptyContext = {};
+                auto storePath = fetchToStore(*state.store, setting.value->path(), FetchMode::Copy);
                 flake.config.settings.emplace(
                     state.symbols[setting.name],
-                    state.coerceToString(setting.pos, *setting.value, emptyContext, "", false, true, true).toOwned());
+                    state.store->printStorePath(storePath));
             }
             else if (setting.value->type() == nInt)
                 flake.config.settings.emplace(
@@ -342,17 +375,55 @@ static Flake readFlake(
     return flake;
 }
+static FlakeRef applySelfAttrs(
+    const FlakeRef & ref,
+    const Flake & flake)
+{
+    auto newRef(ref);
+    std::set<std::string> allowedAttrs{"submodules", "lfs"};
+    for (auto & attr : flake.selfAttrs) {
+        if (!allowedAttrs.contains(attr.first))
+            throw Error("flake 'self' attribute '%s' is not supported", attr.first);
+        newRef.input.attrs.insert_or_assign(attr.first, attr.second);
+    }
+    return newRef;
+}
 static Flake getFlake(
     EvalState & state,
     const FlakeRef & originalRef,
     bool useRegistries,
     FlakeCache & flakeCache,
-    const InputPath & lockRootPath)
+    const InputAttrPath & lockRootAttrPath)
 {
-    auto [storePath, resolvedRef, lockedRef] = fetchOrSubstituteTree(
+    // Fetch a lazy tree first.
+    auto [accessor, resolvedRef, lockedRef] = fetchOrSubstituteTree(
         state, originalRef, useRegistries, flakeCache);

-    return readFlake(state, originalRef, resolvedRef, lockedRef, state.rootPath(state.store->toRealPath(storePath)), lockRootPath);
+    // Parse/eval flake.nix to get at the input.self attributes.
+    auto flake = readFlake(state, originalRef, resolvedRef, lockedRef, {accessor}, lockRootAttrPath);
+
+    // Re-fetch the tree if necessary.
+    auto newLockedRef = applySelfAttrs(lockedRef, flake);
+
+    if (lockedRef != newLockedRef) {
+        debug("refetching input '%s' due to self attribute", newLockedRef);
+        // FIXME: need to remove attrs that are invalidated by the changed input attrs, such as 'narHash'.
+        newLockedRef.input.attrs.erase("narHash");
+        auto [accessor2, resolvedRef2, lockedRef2] = fetchOrSubstituteTree(
+            state, newLockedRef, false, flakeCache);
+        accessor = accessor2;
+        lockedRef = lockedRef2;
+    }
+
+    // Copy the tree to the store.
+    auto storePath = copyInputToStore(state, lockedRef.input, originalRef.input, accessor);
+
+    // Re-parse flake.nix from the store.
+    return readFlake(state, originalRef, resolvedRef, lockedRef, state.storePath(storePath), lockRootAttrPath);
 }

 Flake getFlake(EvalState & state, const FlakeRef & originalRef, bool useRegistries)
@@ -405,12 +476,12 @@ LockedFlake lockFlake(
     {
         FlakeInput input;
         SourcePath sourcePath;
-        std::optional<InputPath> parentInputPath; // FIXME: rename to inputPathPrefix?
+        std::optional<InputAttrPath> parentInputAttrPath; // FIXME: rename to inputAttrPathPrefix?
     };

-    std::map<InputPath, OverrideTarget> overrides;
-    std::set<InputPath> explicitCliOverrides;
-    std::set<InputPath> overridesUsed, updatesUsed;
+    std::map<InputAttrPath, OverrideTarget> overrides;
+    std::set<InputAttrPath> explicitCliOverrides;
+    std::set<InputAttrPath> overridesUsed, updatesUsed;
     std::map<ref<Node>, SourcePath> nodePaths;

     for (auto & i : lockFlags.inputOverrides) {
@@ -434,9 +505,9 @@ LockedFlake lockFlake(
     std::function<void(
         const FlakeInputs & flakeInputs,
         ref<Node> node,
-        const InputPath & inputPathPrefix,
+        const InputAttrPath & inputAttrPathPrefix,
         std::shared_ptr<const Node> oldNode,
-        const InputPath & followsPrefix,
+        const InputAttrPath & followsPrefix,
         const SourcePath & sourcePath,
         bool trustLock)>
         computeLocks;
@@ -448,7 +519,7 @@ LockedFlake lockFlake(
         /* The node whose locks are to be updated.*/
         ref<Node> node,
         /* The path to this node in the lock file graph. */
-        const InputPath & inputPathPrefix,
+        const InputAttrPath & inputAttrPathPrefix,
         /* The old node, if any, from which locks can be
            copied. */
         std::shared_ptr<const Node> oldNode,
@@ -456,59 +527,59 @@ LockedFlake lockFlake(
            interpreted. When a node is initially locked, it's
            relative to the node's flake; when it's already locked,
            it's relative to the root of the lock file. */
-        const InputPath & followsPrefix,
+        const InputAttrPath & followsPrefix,
         /* The source path of this node's flake. */
         const SourcePath & sourcePath,
         bool trustLock)
     {
-        debug("computing lock file node '%s'", printInputPath(inputPathPrefix));
+        debug("computing lock file node '%s'", printInputAttrPath(inputAttrPathPrefix));

         /* Get the overrides (i.e. attributes of the form
            'inputs.nixops.inputs.nixpkgs.url = ...'). */
         for (auto & [id, input] : flakeInputs) {
             for (auto & [idOverride, inputOverride] : input.overrides) {
-                auto inputPath(inputPathPrefix);
-                inputPath.push_back(id);
-                inputPath.push_back(idOverride);
-                overrides.emplace(inputPath,
+                auto inputAttrPath(inputAttrPathPrefix);
+                inputAttrPath.push_back(id);
+                inputAttrPath.push_back(idOverride);
+                overrides.emplace(inputAttrPath,
                     OverrideTarget {
                         .input = inputOverride,
                         .sourcePath = sourcePath,
-                        .parentInputPath = inputPathPrefix
+                        .parentInputAttrPath = inputAttrPathPrefix
                     });
             }
         }

         /* Check whether this input has overrides for a
            non-existent input. */
-        for (auto [inputPath, inputOverride] : overrides) {
-            auto inputPath2(inputPath);
-            auto follow = inputPath2.back();
-            inputPath2.pop_back();
-            if (inputPath2 == inputPathPrefix && !flakeInputs.count(follow))
+        for (auto [inputAttrPath, inputOverride] : overrides) {
+            auto inputAttrPath2(inputAttrPath);
+            auto follow = inputAttrPath2.back();
+            inputAttrPath2.pop_back();
+            if (inputAttrPath2 == inputAttrPathPrefix && !flakeInputs.count(follow))
                 warn(
                     "input '%s' has an override for a non-existent input '%s'",
-                    printInputPath(inputPathPrefix), follow);
+                    printInputAttrPath(inputAttrPathPrefix), follow);
         }

         /* Go over the flake inputs, resolve/fetch them if
            necessary (i.e. if they're new or the flakeref changed
            from what's in the lock file). */
         for (auto & [id, input2] : flakeInputs) {
-            auto inputPath(inputPathPrefix);
-            inputPath.push_back(id);
-            auto inputPathS = printInputPath(inputPath);
-            debug("computing input '%s'", inputPathS);
+            auto inputAttrPath(inputAttrPathPrefix);
+            inputAttrPath.push_back(id);
+            auto inputAttrPathS = printInputAttrPath(inputAttrPath);
+            debug("computing input '%s'", inputAttrPathS);

             try {

                 /* Do we have an override for this input from one of the
                    ancestors? */
-                auto i = overrides.find(inputPath);
+                auto i = overrides.find(inputAttrPath);
                 bool hasOverride = i != overrides.end();
-                bool hasCliOverride = explicitCliOverrides.contains(inputPath);
+                bool hasCliOverride = explicitCliOverrides.contains(inputAttrPath);
                 if (hasOverride)
-                    overridesUsed.insert(inputPath);
+                    overridesUsed.insert(inputAttrPath);
                 auto input = hasOverride ? i->second.input : input2;

                 /* Resolve relative 'path:' inputs relative to
@@ -523,11 +594,11 @@ LockedFlake lockFlake(
                 /* Resolve 'follows' later (since it may refer to an input
                    path we haven't processed yet. */
                 if (input.follows) {
-                    InputPath target;
+                    InputAttrPath target;

                     target.insert(target.end(), input.follows->begin(), input.follows->end());

-                    debug("input '%s' follows '%s'", inputPathS, printInputPath(target));
+                    debug("input '%s' follows '%s'", inputAttrPathS, printInputAttrPath(target));
                     node->inputs.insert_or_assign(id, target);
                     continue;
                 }
@@ -536,7 +607,7 @@ LockedFlake lockFlake(
                 auto overridenParentPath =
                     input.ref->input.isRelative()
-                    ? std::optional<InputPath>(hasOverride ? i->second.parentInputPath : inputPathPrefix)
+                    ? std::optional<InputAttrPath>(hasOverride ? i->second.parentInputAttrPath : inputAttrPathPrefix)
                     : std::nullopt;

                 auto resolveRelativePath = [&]() -> std::optional<SourcePath>
@@ -555,9 +626,9 @@ LockedFlake lockFlake(
                 auto getInputFlake = [&](const FlakeRef & ref)
                 {
                     if (auto resolvedPath = resolveRelativePath()) {
-                        return readFlake(state, ref, ref, ref, *resolvedPath, inputPath);
+                        return readFlake(state, ref, ref, ref, *resolvedPath, inputAttrPath);
                     } else {
-                        return getFlake(state, ref, useRegistries, flakeCache, inputPath);
+                        return getFlake(state, ref, useRegistries, flakeCache, inputAttrPath);
                     }
                 };
@@ -565,19 +636,19 @@ LockedFlake lockFlake(
                    And the input is not in updateInputs? */
                 std::shared_ptr<LockedNode> oldLock;

-                updatesUsed.insert(inputPath);
-                if (oldNode && !lockFlags.inputUpdates.count(inputPath))
+                updatesUsed.insert(inputAttrPath);
+                if (oldNode && !lockFlags.inputUpdates.count(inputAttrPath))
                     if (auto oldLock2 = get(oldNode->inputs, id))
                         if (auto oldLock3 = std::get_if<0>(&*oldLock2))
                             oldLock = *oldLock3;

                 if (oldLock
                     && oldLock->originalRef == *input.ref
-                    && oldLock->parentPath == overridenParentPath
+                    && oldLock->parentInputAttrPath == overridenParentPath
                     && !hasCliOverride)
                 {
-                    debug("keeping existing input '%s'", inputPathS);
+                    debug("keeping existing input '%s'", inputAttrPathS);

                     /* Copy the input from the old lock since its flakeref
                        didn't change and there is no override from a
@@ -586,18 +657,18 @@ LockedFlake lockFlake(
                         oldLock->lockedRef,
                         oldLock->originalRef,
                         oldLock->isFlake,
-                        oldLock->parentPath);
+                        oldLock->parentInputAttrPath);

                     node->inputs.insert_or_assign(id, childNode);

                     /* If we have this input in updateInputs, then we
                        must fetch the flake to update it. */
-                    auto lb = lockFlags.inputUpdates.lower_bound(inputPath);
+                    auto lb = lockFlags.inputUpdates.lower_bound(inputAttrPath);

                     auto mustRefetch =
                         lb != lockFlags.inputUpdates.end()
-                        && lb->size() > inputPath.size()
-                        && std::equal(inputPath.begin(), inputPath.end(), lb->begin());
+                        && lb->size() > inputAttrPath.size()
+                        && std::equal(inputAttrPath.begin(), inputAttrPath.end(), lb->begin());

                     FlakeInputs fakeInputs;
@@ -616,7 +687,7 @@ LockedFlake lockFlake(
                             if (!trustLock) {
                                 // It is possible that the flake has changed,
                                 // so we must confirm all the follows that are in the lock file are also in the flake.
-                                auto overridePath(inputPath);
+                                auto overridePath(inputAttrPath);
                                 overridePath.push_back(i.first);
                                 auto o = overrides.find(overridePath);
                                 // If the override disappeared, we have to refetch the flake,
@@ -640,21 +711,21 @@ LockedFlake lockFlake(
                     if (mustRefetch) {
                         auto inputFlake = getInputFlake(oldLock->lockedRef);
                         nodePaths.emplace(childNode, inputFlake.path.parent());
-                        computeLocks(inputFlake.inputs, childNode, inputPath, oldLock, followsPrefix,
+                        computeLocks(inputFlake.inputs, childNode, inputAttrPath, oldLock, followsPrefix,
                             inputFlake.path, false);
                     } else {
-                        computeLocks(fakeInputs, childNode, inputPath, oldLock, followsPrefix, sourcePath, true);
+                        computeLocks(fakeInputs, childNode, inputAttrPath, oldLock, followsPrefix, sourcePath, true);
                     }

                 } else {
                     /* We need to create a new lock file entry. So fetch
                        this input. */
-                    debug("creating new input '%s'", inputPathS);
+                    debug("creating new input '%s'", inputAttrPathS);

                     if (!lockFlags.allowUnlocked
                         && !input.ref->input.isLocked()
                         && !input.ref->input.isRelative())
-                        throw Error("cannot update unlocked flake input '%s' in pure mode", inputPathS);
+                        throw Error("cannot update unlocked flake input '%s' in pure mode", inputAttrPathS);

                     /* Note: in case of an --override-input, we use
                        the *original* ref (input2.ref) for the
@@ -663,7 +734,7 @@ LockedFlake lockFlake(
                        nuked the next time we update the lock
                        file. That is, overrides are sticky unless you
                        use --no-write-lock-file. */
-                    auto ref = (input2.ref && explicitCliOverrides.contains(inputPath)) ? *input2.ref : *input.ref;
+                    auto ref = (input2.ref && explicitCliOverrides.contains(inputAttrPath)) ? *input2.ref : *input.ref;

                     if (input.isFlake) {
                         auto inputFlake = getInputFlake(*input.ref);
@@ -689,11 +760,11 @@ LockedFlake lockFlake(
                            own lock file. */
                         nodePaths.emplace(childNode, inputFlake.path.parent());
                         computeLocks(
-                            inputFlake.inputs, childNode, inputPath,
+                            inputFlake.inputs, childNode, inputAttrPath,
                             oldLock
                                 ? std::dynamic_pointer_cast<const Node>(oldLock)
                                 : readLockFile(state.fetchSettings, inputFlake.lockFilePath()).root.get_ptr(),
-                            oldLock ? followsPrefix : inputPath,
+                            oldLock ? followsPrefix : inputAttrPath,
                             inputFlake.path,
                             false);
                     }
@@ -705,9 +776,13 @@ LockedFlake lockFlake(
                             if (auto resolvedPath = resolveRelativePath()) {
                                 return {*resolvedPath, *input.ref};
                             } else {
-                                auto [storePath, resolvedRef, lockedRef] = fetchOrSubstituteTree(
+                                auto [accessor, resolvedRef, lockedRef] = fetchOrSubstituteTree(
                                     state, *input.ref, useRegistries, flakeCache);
-                                return {state.rootPath(state.store->toRealPath(storePath)), lockedRef};
+                                // FIXME: allow input to be lazy.
+                                auto storePath = copyInputToStore(state, lockedRef.input, input.ref->input, accessor);
+                                return {state.storePath(storePath), lockedRef};
                             }
                         }();
@@ -720,7 +795,7 @@ LockedFlake lockFlake(
                 }

             } catch (Error & e) {
-                e.addTrace({}, "while updating the flake input '%s'", inputPathS);
+                e.addTrace({}, "while updating the flake input '%s'", inputAttrPathS);
                 throw;
             }
         }
@@ -740,11 +815,11 @@ LockedFlake lockFlake(
         for (auto & i : lockFlags.inputOverrides)
             if (!overridesUsed.count(i.first))
                 warn("the flag '--override-input %s %s' does not match any input",
-                    printInputPath(i.first), i.second);
+                    printInputAttrPath(i.first), i.second);
         for (auto & i : lockFlags.inputUpdates)
             if (!updatesUsed.count(i))
-                warn("'%s' does not match any input of this flake", printInputPath(i));
+                warn("'%s' does not match any input of this flake", printInputAttrPath(i));

         /* Check 'follows' inputs. */
         newLockFile.check();
@@ -844,21 +919,6 @@ LockedFlake lockFlake(
         }
     }
-std::pair<StorePath, Path> sourcePathToStorePath(
-    ref<Store> store,
-    const SourcePath & _path)
-{
-    auto path = _path.path.abs();
-
-    if (auto store2 = store.dynamic_pointer_cast<LocalFSStore>()) {
-        auto realStoreDir = store2->getRealStoreDir();
-        if (isInDir(path, realStoreDir))
-            path = store2->storeDir + path.substr(realStoreDir.size());
-    }
-
-    return store->toStorePath(path);
-}
-
 void callFlake(EvalState & state,
     const LockedFlake & lockedFlake,
     Value & vRes)
@@ -874,7 +934,7 @@ void callFlake(EvalState & state,

         auto lockedNode = node.dynamic_pointer_cast<const LockedNode>();

-        auto [storePath, subdir] = sourcePathToStorePath(state.store, sourcePath);
+        auto [storePath, subdir] = state.store->toStorePath(sourcePath.path.abs());

         emitTreeAttrs(
             state,

View file

@@ -57,7 +57,7 @@ struct FlakeInput
      * false = (fetched) static source path
      */
     bool isFlake = true;
-    std::optional<InputPath> follows;
+    std::optional<InputAttrPath> follows;
     FlakeInputs overrides;
 };
@@ -79,24 +79,37 @@ struct Flake
      * The original flake specification (by the user)
      */
     FlakeRef originalRef;
     /**
      * registry references and caching resolved to the specific underlying flake
      */
     FlakeRef resolvedRef;
     /**
      * the specific local store result of invoking the fetcher
      */
     FlakeRef lockedRef;
     /**
      * The path of `flake.nix`.
      */
     SourcePath path;
     /**
-     * pretend that 'lockedRef' is dirty
+     * Pretend that `lockedRef` is dirty.
      */
     bool forceDirty = false;
     std::optional<std::string> description;
     FlakeInputs inputs;
+    /**
+     * Attributes to be retroactively applied to the `self` input
+     * (such as `submodules = true`).
+     */
+    fetchers::Attrs selfAttrs;
     /**
      * 'nixConfig' attribute
      */
@@ -201,13 +214,13 @@ struct LockFlags
     /**
      * Flake inputs to be overridden.
      */
-    std::map<InputPath, FlakeRef> inputOverrides;
+    std::map<InputAttrPath, FlakeRef> inputOverrides;

     /**
      * Flake inputs to be updated. This means that any existing lock
      * for those inputs will be ignored.
      */
-    std::set<InputPath> inputUpdates;
+    std::set<InputAttrPath> inputUpdates;
 };

 LockedFlake lockFlake(
@@ -221,16 +234,6 @@ void callFlake(
     const LockedFlake & lockedFlake,
     Value & v);

-/**
- * Map a `SourcePath` to the corresponding store path. This is a
- * temporary hack to support chroot stores while we don't have full
- * lazy trees. FIXME: Remove this once we can pass a sourcePath rather
- * than a storePath to call-flake.nix.
- */
-std::pair<StorePath, Path> sourcePathToStorePath(
-    ref<Store> store,
-    const SourcePath & path);
-
 }

 void emitTreeAttrs(

View file

@@ -107,7 +107,7 @@ std::pair<FlakeRef, std::string> parsePathFlakeRefWithFragment(
            to 'baseDir'). If so, search upward to the root of the
            repo (i.e. the directory containing .git). */

-        path = absPath(path, baseDir);
+        path = absPath(path, baseDir, true);

         if (isFlake) {
@@ -283,10 +283,10 @@ FlakeRef FlakeRef::fromAttrs(
         fetchers::maybeGetStrAttr(attrs, "dir").value_or(""));
 }

-std::pair<StorePath, FlakeRef> FlakeRef::fetchTree(ref<Store> store) const
+std::pair<ref<SourceAccessor>, FlakeRef> FlakeRef::lazyFetch(ref<Store> store) const
 {
-    auto [storePath, lockedInput] = input.fetchToStore(store);
-    return {std::move(storePath), FlakeRef(std::move(lockedInput), subdir)};
+    auto [accessor, lockedInput] = input.getAccessor(store);
+    return {accessor, FlakeRef(std::move(lockedInput), subdir)};
 }

 std::tuple<FlakeRef, std::string, ExtendedOutputsSpec> parseFlakeRefWithFragmentAndExtendedOutputsSpec(

View file

@@ -71,7 +71,7 @@ struct FlakeRef
         const fetchers::Settings & fetchSettings,
         const fetchers::Attrs & attrs);

-    std::pair<StorePath, FlakeRef> fetchTree(ref<Store> store) const;
+    std::pair<ref<SourceAccessor>, FlakeRef> lazyFetch(ref<Store> store) const;
 };

 std::ostream & operator << (std::ostream & str, const FlakeRef & flakeRef);

View file

@@ -1,7 +1,10 @@
 #include <unordered_set>

+#include "fetch-settings.hh"
+#include "flake/settings.hh"
 #include "lockfile.hh"
 #include "store-api.hh"
+#include "strings.hh"

 #include <algorithm>
 #include <iomanip>
@@ -9,8 +12,6 @@
 #include <iterator>
 #include <nlohmann/json.hpp>

-#include "strings.hh"
-#include "flake/settings.hh"

 namespace nix::flake {
@@ -43,11 +44,18 @@ LockedNode::LockedNode(
     : lockedRef(getFlakeRef(fetchSettings, json, "locked", "info")) // FIXME: remove "info"
     , originalRef(getFlakeRef(fetchSettings, json, "original", nullptr))
     , isFlake(json.find("flake") != json.end() ? (bool) json["flake"] : true)
-    , parentPath(json.find("parent") != json.end() ? (std::optional<InputPath>) json["parent"] : std::nullopt)
+    , parentInputAttrPath(json.find("parent") != json.end() ? (std::optional<InputAttrPath>) json["parent"] : std::nullopt)
 {
-    if (!lockedRef.input.isConsideredLocked(fetchSettings) && !lockedRef.input.isRelative())
-        throw Error("Lock file contains unlocked input '%s'. Use '--allow-dirty-locks' to accept this lock file.",
-            fetchers::attrsToJSON(lockedRef.input.toAttrs()));
+    if (!lockedRef.input.isLocked() && !lockedRef.input.isRelative()) {
+        if (lockedRef.input.getNarHash())
+            warn(
+                "Lock file entry '%s' is unlocked (e.g. lacks a Git revision) but does have a NAR hash. "
+                "This is deprecated since such inputs are verifiable but may not be reproducible.",
+                lockedRef.to_string());
+        else
+            throw Error("Lock file contains unlocked input '%s'. Use '--allow-dirty-locks' to accept this lock file.",
+                fetchers::attrsToJSON(lockedRef.input.toAttrs()));
+    }

     // For backward compatibility, lock file entries are implicitly final.
     assert(!lockedRef.input.attrs.contains("__final"));
@@ -59,7 +67,7 @@ StorePath LockedNode::computeStorePath(Store & store) const
     return lockedRef.input.computeStorePath(store);
 }

-static std::shared_ptr<Node> doFind(const ref<Node> & root, const InputPath & path, std::vector<InputPath> & visited)
+static std::shared_ptr<Node> doFind(const ref<Node> & root, const InputAttrPath & path, std::vector<InputAttrPath> & visited)
 {
     auto pos = root;
@@ -67,8 +75,8 @@ static std::shared_ptr<Node> doFind(const ref<Node> & root, const InputPath & pa
         if (found != visited.end()) {
             std::vector<std::string> cycle;
-            std::transform(found, visited.cend(), std::back_inserter(cycle), printInputPath);
-            cycle.push_back(printInputPath(path));
+            std::transform(found, visited.cend(), std::back_inserter(cycle), printInputAttrPath);
+            cycle.push_back(printInputAttrPath(path));
             throw Error("follow cycle detected: [%s]", concatStringsSep(" -> ", cycle));
         }
         visited.push_back(path);
@@ -90,9 +98,9 @@ static std::shared_ptr<Node> doFind(const ref<Node> & root, const InputPath & pa
     return pos;
 }

-std::shared_ptr<Node> LockFile::findInput(const InputPath & path)
+std::shared_ptr<Node> LockFile::findInput(const InputAttrPath & path)
 {
-    std::vector<InputPath> visited;
+    std::vector<InputAttrPath> visited;
     return doFind(root, path, visited);
 }
@ -115,7 +123,7 @@ LockFile::LockFile(
if (jsonNode.find("inputs") == jsonNode.end()) return; if (jsonNode.find("inputs") == jsonNode.end()) return;
for (auto & i : jsonNode["inputs"].items()) { for (auto & i : jsonNode["inputs"].items()) {
if (i.value().is_array()) { // FIXME: remove, obsolete if (i.value().is_array()) { // FIXME: remove, obsolete
InputPath path; InputAttrPath path;
for (auto & j : i.value()) for (auto & j : i.value())
path.push_back(j); path.push_back(j);
node.inputs.insert_or_assign(i.key(), path); node.inputs.insert_or_assign(i.key(), path);
@ -203,8 +211,8 @@ std::pair<nlohmann::json, LockFile::KeyMap> LockFile::toJSON() const
n["locked"].erase("__final"); n["locked"].erase("__final");
if (!lockedNode->isFlake) if (!lockedNode->isFlake)
n["flake"] = false; n["flake"] = false;
if (lockedNode->parentPath) if (lockedNode->parentInputAttrPath)
n["parent"] = *lockedNode->parentPath; n["parent"] = *lockedNode->parentInputAttrPath;
} }
nodes[key] = std::move(n); nodes[key] = std::move(n);
@ -248,11 +256,20 @@ std::optional<FlakeRef> LockFile::isUnlocked(const fetchers::Settings & fetchSet
visit(root); visit(root);
/* Return whether the input is either locked, or, if
`allow-dirty-locks` is enabled, it has a NAR hash. In the
latter case, we can verify the input but we may not be able to
fetch it from anywhere. */
auto isConsideredLocked = [&](const fetchers::Input & input)
{
return input.isLocked() || (fetchSettings.allowDirtyLocks && input.getNarHash());
};
for (auto & i : nodes) { for (auto & i : nodes) {
if (i == ref<const Node>(root)) continue; if (i == ref<const Node>(root)) continue;
auto node = i.dynamic_pointer_cast<const LockedNode>(); auto node = i.dynamic_pointer_cast<const LockedNode>();
if (node if (node
&& (!node->lockedRef.input.isConsideredLocked(fetchSettings) && (!isConsideredLocked(node->lockedRef.input)
|| !node->lockedRef.input.isFinal()) || !node->lockedRef.input.isFinal())
&& !node->lockedRef.input.isRelative()) && !node->lockedRef.input.isRelative())
return node->lockedRef; return node->lockedRef;
@ -267,36 +284,36 @@ bool LockFile::operator ==(const LockFile & other) const
return toJSON().first == other.toJSON().first; return toJSON().first == other.toJSON().first;
} }
InputPath parseInputPath(std::string_view s) InputAttrPath parseInputAttrPath(std::string_view s)
{ {
InputPath path; InputAttrPath path;
for (auto & elem : tokenizeString<std::vector<std::string>>(s, "/")) { for (auto & elem : tokenizeString<std::vector<std::string>>(s, "/")) {
if (!std::regex_match(elem, flakeIdRegex)) if (!std::regex_match(elem, flakeIdRegex))
throw UsageError("invalid flake input path element '%s'", elem); throw UsageError("invalid flake input attribute path element '%s'", elem);
path.push_back(elem); path.push_back(elem);
} }
return path; return path;
} }
std::map<InputPath, Node::Edge> LockFile::getAllInputs() const std::map<InputAttrPath, Node::Edge> LockFile::getAllInputs() const
{ {
std::set<ref<Node>> done; std::set<ref<Node>> done;
std::map<InputPath, Node::Edge> res; std::map<InputAttrPath, Node::Edge> res;
std::function<void(const InputPath & prefix, ref<Node> node)> recurse; std::function<void(const InputAttrPath & prefix, ref<Node> node)> recurse;
recurse = [&](const InputPath & prefix, ref<Node> node) recurse = [&](const InputAttrPath & prefix, ref<Node> node)
{ {
if (!done.insert(node).second) return; if (!done.insert(node).second) return;
for (auto &[id, input] : node->inputs) { for (auto &[id, input] : node->inputs) {
auto inputPath(prefix); auto inputAttrPath(prefix);
inputPath.push_back(id); inputAttrPath.push_back(id);
res.emplace(inputPath, input); res.emplace(inputAttrPath, input);
if (auto child = std::get_if<0>(&input)) if (auto child = std::get_if<0>(&input))
recurse(inputPath, *child); recurse(inputAttrPath, *child);
} }
}; };
@ -320,7 +337,7 @@ std::ostream & operator <<(std::ostream & stream, const Node::Edge & edge)
if (auto node = std::get_if<0>(&edge)) if (auto node = std::get_if<0>(&edge))
stream << describe((*node)->lockedRef); stream << describe((*node)->lockedRef);
else if (auto follows = std::get_if<1>(&edge)) else if (auto follows = std::get_if<1>(&edge))
stream << fmt("follows '%s'", printInputPath(*follows)); stream << fmt("follows '%s'", printInputAttrPath(*follows));
return stream; return stream;
} }
@ -347,15 +364,15 @@ std::string LockFile::diff(const LockFile & oldLocks, const LockFile & newLocks)
while (i != oldFlat.end() || j != newFlat.end()) { while (i != oldFlat.end() || j != newFlat.end()) {
if (j != newFlat.end() && (i == oldFlat.end() || i->first > j->first)) { if (j != newFlat.end() && (i == oldFlat.end() || i->first > j->first)) {
res += fmt("" ANSI_GREEN "Added input '%s':" ANSI_NORMAL "\n %s\n", res += fmt("" ANSI_GREEN "Added input '%s':" ANSI_NORMAL "\n %s\n",
printInputPath(j->first), j->second); printInputAttrPath(j->first), j->second);
++j; ++j;
} else if (i != oldFlat.end() && (j == newFlat.end() || i->first < j->first)) { } else if (i != oldFlat.end() && (j == newFlat.end() || i->first < j->first)) {
res += fmt("" ANSI_RED "Removed input '%s'" ANSI_NORMAL "\n", printInputPath(i->first)); res += fmt("" ANSI_RED "Removed input '%s'" ANSI_NORMAL "\n", printInputAttrPath(i->first));
++i; ++i;
} else { } else {
if (!equals(i->second, j->second)) { if (!equals(i->second, j->second)) {
res += fmt("" ANSI_BOLD "Updated input '%s':" ANSI_NORMAL "\n %s\n → %s\n", res += fmt("" ANSI_BOLD "Updated input '%s':" ANSI_NORMAL "\n %s\n → %s\n",
printInputPath(i->first), printInputAttrPath(i->first),
i->second, i->second,
j->second); j->second);
} }
@ -371,19 +388,19 @@ void LockFile::check()
{ {
auto inputs = getAllInputs(); auto inputs = getAllInputs();
for (auto & [inputPath, input] : inputs) { for (auto & [inputAttrPath, input] : inputs) {
if (auto follows = std::get_if<1>(&input)) { if (auto follows = std::get_if<1>(&input)) {
if (!follows->empty() && !findInput(*follows)) if (!follows->empty() && !findInput(*follows))
throw Error("input '%s' follows a non-existent input '%s'", throw Error("input '%s' follows a non-existent input '%s'",
printInputPath(inputPath), printInputAttrPath(inputAttrPath),
printInputPath(*follows)); printInputAttrPath(*follows));
} }
} }
} }
void check(); void check();
std::string printInputPath(const InputPath & path) std::string printInputAttrPath(const InputAttrPath & path)
{ {
return concatStringsSep("/", path); return concatStringsSep("/", path);
} }

View file

@@ -12,7 +12,7 @@ class StorePath;
 
 namespace nix::flake {
 
-typedef std::vector<FlakeId> InputPath;
+typedef std::vector<FlakeId> InputAttrPath;
 
 struct LockedNode;
@@ -23,7 +23,7 @@ struct LockedNode;
  */
 struct Node : std::enable_shared_from_this<Node>
 {
-    typedef std::variant<ref<LockedNode>, InputPath> Edge;
+    typedef std::variant<ref<LockedNode>, InputAttrPath> Edge;
 
     std::map<FlakeId, Edge> inputs;
@@ -40,17 +40,17 @@ struct LockedNode : Node
     /* The node relative to which relative source paths
        (e.g. 'path:../foo') are interpreted. */
-    std::optional<InputPath> parentPath;
+    std::optional<InputAttrPath> parentInputAttrPath;
 
     LockedNode(
         const FlakeRef & lockedRef,
         const FlakeRef & originalRef,
         bool isFlake = true,
-        std::optional<InputPath> parentPath = {})
-        : lockedRef(lockedRef)
-        , originalRef(originalRef)
+        std::optional<InputAttrPath> parentInputAttrPath = {})
+        : lockedRef(std::move(lockedRef))
+        , originalRef(std::move(originalRef))
         , isFlake(isFlake)
-        , parentPath(parentPath)
+        , parentInputAttrPath(std::move(parentInputAttrPath))
     { }
 
     LockedNode(
@@ -83,9 +83,9 @@ struct LockFile
     bool operator ==(const LockFile & other) const;
 
-    std::shared_ptr<Node> findInput(const InputPath & path);
+    std::shared_ptr<Node> findInput(const InputAttrPath & path);
 
-    std::map<InputPath, Node::Edge> getAllInputs() const;
+    std::map<InputAttrPath, Node::Edge> getAllInputs() const;
 
     static std::string diff(const LockFile & oldLocks, const LockFile & newLocks);
@@ -97,8 +97,8 @@ struct LockFile
 std::ostream & operator <<(std::ostream & stream, const LockFile & lockFile);
 
-InputPath parseInputPath(std::string_view s);
+InputAttrPath parseInputAttrPath(std::string_view s);
 
-std::string printInputPath(const InputPath & path);
+std::string printInputAttrPath(const InputAttrPath & path);
 
 }

View file

@@ -6,7 +6,8 @@ namespace nix {
 
 LogFormat defaultLogFormat = LogFormat::raw;
 
-LogFormat parseLogFormat(const std::string & logFormatStr) {
+LogFormat parseLogFormat(const std::string & logFormatStr)
+{
     if (logFormatStr == "raw" || getEnv("NIX_GET_COMPLETIONS"))
         return LogFormat::raw;
     else if (logFormatStr == "raw-with-logs")
@@ -20,14 +21,15 @@ LogFormat parseLogFormat(const std::string & logFormatStr) {
     throw Error("option 'log-format' has an invalid value '%s'", logFormatStr);
 }
 
-Logger * makeDefaultLogger() {
+std::unique_ptr<Logger> makeDefaultLogger()
+{
     switch (defaultLogFormat) {
     case LogFormat::raw:
         return makeSimpleLogger(false);
     case LogFormat::rawWithLogs:
         return makeSimpleLogger(true);
     case LogFormat::internalJSON:
-        return makeJSONLogger(*makeSimpleLogger(true));
+        return makeJSONLogger(getStandardError());
     case LogFormat::bar:
         return makeProgressBar();
     case LogFormat::barWithLogs: {
@@ -40,16 +42,14 @@ Logger * makeDefaultLogger() {
     }
 }
 
-void setLogFormat(const std::string & logFormatStr) {
+void setLogFormat(const std::string & logFormatStr)
+{
     setLogFormat(parseLogFormat(logFormatStr));
 }
 
-void setLogFormat(const LogFormat & logFormat) {
+void setLogFormat(const LogFormat & logFormat)
+{
     defaultLogFormat = logFormat;
-    createDefaultLogger();
-}
-
-void createDefaultLogger() {
     logger = makeDefaultLogger();
 }

View file

@@ -16,6 +16,4 @@ enum class LogFormat {
 void setLogFormat(const std::string & logFormatStr);
 void setLogFormat(const LogFormat & logFormat);
 
-void createDefaultLogger();
-
 }

View file

@@ -73,8 +73,13 @@ private:
     uint64_t corruptedPaths = 0, untrustedPaths = 0;
 
     bool active = true;
-    bool paused = false;
+    size_t suspensions = 0;
     bool haveUpdate = true;
+
+    bool isPaused() const
+    {
+        return suspensions > 0;
+    }
 };
 
 /** Helps avoid unnecessary redraws, see `redraw()` */
@@ -117,29 +122,43 @@ public:
     {
         {
             auto state(state_.lock());
-            if (!state->active) return;
-            state->active = false;
-            writeToStderr("\r\e[K");
-            updateCV.notify_one();
-            quitCV.notify_one();
+            if (state->active) {
+                state->active = false;
+                writeToStderr("\r\e[K");
+                updateCV.notify_one();
+                quitCV.notify_one();
+            }
         }
-        updateThread.join();
+        if (updateThread.joinable())
+            updateThread.join();
     }
 
     void pause() override {
         auto state (state_.lock());
-        state->paused = true;
+        state->suspensions++;
+        if (state->suspensions > 1) {
+            // already paused
+            return;
+        }
         if (state->active)
             writeToStderr("\r\e[K");
     }
 
     void resume() override {
         auto state (state_.lock());
-        state->paused = false;
-        if (state->active)
-            writeToStderr("\r\e[K");
-        state->haveUpdate = true;
-        updateCV.notify_one();
+        if (state->suspensions == 0) {
+            log(lvlError, "nix::ProgressBar: resume() called without a matching preceding pause(). This is a bug.");
+            return;
+        } else {
+            state->suspensions--;
+        }
+        if (state->suspensions == 0) {
+            if (state->active)
+                writeToStderr("\r\e[K");
+            state->haveUpdate = true;
+            updateCV.notify_one();
+        }
     }
 
     bool isVerbose() override
@@ -381,7 +400,7 @@ public:
         auto nextWakeup = std::chrono::milliseconds::max();
 
         state.haveUpdate = false;
-        if (state.paused || !state.active) return nextWakeup;
+        if (state.isPaused() || !state.active) return nextWakeup;
 
         std::string line;
@@ -553,21 +572,9 @@ public:
     }
 };
 
-Logger * makeProgressBar()
+std::unique_ptr<Logger> makeProgressBar()
 {
-    return new ProgressBar(isTTY());
-}
-
-void startProgressBar()
-{
-    logger = makeProgressBar();
-}
-
-void stopProgressBar()
-{
-    auto progressBar = dynamic_cast<ProgressBar *>(logger);
-    if (progressBar) progressBar->stop();
+    return std::make_unique<ProgressBar>(isTTY());
 }
 
 }
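The new `pause()`/`resume()` above replaces a boolean flag with a suspension count so that nested pauses balance correctly: the bar is hidden on the first `pause()`, redrawn only when the last matching `resume()` arrives, and an unbalanced `resume()` is reported rather than underflowing the counter. A minimal sketch of just that counting scheme (illustrative names, not the `ProgressBar` API):

```cpp
#include <cassert>
#include <cstddef>

// Suspension counting as used by the refactored progress bar:
// pause() increments, resume() decrements, and the bar counts as
// paused while the count is non-zero.
struct SuspendCount {
    std::size_t suspensions = 0;

    void pause() { suspensions++; }

    // Returns false on an unbalanced resume() instead of underflowing,
    // mirroring the "this is a bug" error path above.
    bool resume() {
        if (suspensions == 0) return false;
        suspensions--;
        return true;
    }

    bool isPaused() const { return suspensions > 0; }
};
```

With two nested pauses, only the second resume makes `isPaused()` false again, which is exactly why the redraw in `resume()` is guarded by `state->suspensions == 0`.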

View file

@@ -5,10 +5,6 @@
 
 namespace nix {
 
-Logger * makeProgressBar();
-
-void startProgressBar();
-
-void stopProgressBar();
+std::unique_ptr<Logger> makeProgressBar();
 
 }

View file

@@ -361,7 +361,7 @@ RunPager::RunPager()
     if (!pager) pager = getenv("PAGER");
     if (pager && ((std::string) pager == "" || (std::string) pager == "cat")) return;
 
-    stopProgressBar();
+    logger->stop();
 
     Pipe toPager;
     toPager.create();

View file

@@ -3,13 +3,15 @@
 
 #include "experimental-features.hh"
 #include "derivations.hh"
-#include "derivations.hh"
-#include "tests/libstore.hh"
-#include "tests/characterization.hh"
+#include "derivation-options.hh"
 #include "parsed-derivations.hh"
 #include "types.hh"
 #include "json-utils.hh"
 
+#include "tests/libstore.hh"
+#include "tests/characterization.hh"
+
 namespace nix {
 
 using nlohmann::json;
@@ -80,21 +82,30 @@ TEST_F(DerivationAdvancedAttrsTest, Derivation_advancedAttributes_defaults)
         auto drvPath = writeDerivation(*store, got, NoRepair, true);
 
         ParsedDerivation parsedDrv(drvPath, got);
+        DerivationOptions options = DerivationOptions::fromParsedDerivation(parsedDrv);
 
-        EXPECT_EQ(parsedDrv.getStringAttr("__sandboxProfile").value_or(""), "");
-        EXPECT_EQ(parsedDrv.getBoolAttr("__noChroot"), false);
-        EXPECT_EQ(parsedDrv.getStringsAttr("__impureHostDeps").value_or(Strings()), Strings());
-        EXPECT_EQ(parsedDrv.getStringsAttr("impureEnvVars").value_or(Strings()), Strings());
-        EXPECT_EQ(parsedDrv.getBoolAttr("__darwinAllowLocalNetworking"), false);
-        EXPECT_EQ(parsedDrv.getStringsAttr("allowedReferences"), std::nullopt);
-        EXPECT_EQ(parsedDrv.getStringsAttr("allowedRequisites"), std::nullopt);
-        EXPECT_EQ(parsedDrv.getStringsAttr("disallowedReferences"), std::nullopt);
-        EXPECT_EQ(parsedDrv.getStringsAttr("disallowedRequisites"), std::nullopt);
-        EXPECT_EQ(parsedDrv.getRequiredSystemFeatures(), StringSet());
-        EXPECT_EQ(parsedDrv.canBuildLocally(*store), false);
-        EXPECT_EQ(parsedDrv.willBuildLocally(*store), false);
-        EXPECT_EQ(parsedDrv.substitutesAllowed(), true);
-        EXPECT_EQ(parsedDrv.useUidRange(), false);
+        EXPECT_TRUE(!parsedDrv.hasStructuredAttrs());
+
+        EXPECT_EQ(options.additionalSandboxProfile, "");
+        EXPECT_EQ(options.noChroot, false);
+        EXPECT_EQ(options.impureHostDeps, StringSet{});
+        EXPECT_EQ(options.impureEnvVars, StringSet{});
+        EXPECT_EQ(options.allowLocalNetworking, false);
+
+        {
+            auto * checksForAllOutputs_ = std::get_if<0>(&options.outputChecks);
+            ASSERT_TRUE(checksForAllOutputs_ != nullptr);
+            auto & checksForAllOutputs = *checksForAllOutputs_;
+
+            EXPECT_EQ(checksForAllOutputs.allowedReferences, std::nullopt);
+            EXPECT_EQ(checksForAllOutputs.allowedRequisites, std::nullopt);
+            EXPECT_EQ(checksForAllOutputs.disallowedReferences, StringSet{});
+            EXPECT_EQ(checksForAllOutputs.disallowedRequisites, StringSet{});
+        }
+
+        EXPECT_EQ(options.getRequiredSystemFeatures(got), StringSet());
+        EXPECT_EQ(options.canBuildLocally(*store, got), false);
+        EXPECT_EQ(options.willBuildLocally(*store, got), false);
+        EXPECT_EQ(options.substitutesAllowed(), true);
+        EXPECT_EQ(options.useUidRange(got), false);
     });
 };
@@ -106,29 +117,36 @@ TEST_F(DerivationAdvancedAttrsTest, Derivation_advancedAttributes)
         auto drvPath = writeDerivation(*store, got, NoRepair, true);
 
         ParsedDerivation parsedDrv(drvPath, got);
+        DerivationOptions options = DerivationOptions::fromParsedDerivation(parsedDrv);
 
         StringSet systemFeatures{"rainbow", "uid-range"};
 
-        EXPECT_EQ(parsedDrv.getStringAttr("__sandboxProfile").value_or(""), "sandcastle");
-        EXPECT_EQ(parsedDrv.getBoolAttr("__noChroot"), true);
-        EXPECT_EQ(parsedDrv.getStringsAttr("__impureHostDeps").value_or(Strings()), Strings{"/usr/bin/ditto"});
-        EXPECT_EQ(parsedDrv.getStringsAttr("impureEnvVars").value_or(Strings()), Strings{"UNICORN"});
-        EXPECT_EQ(parsedDrv.getBoolAttr("__darwinAllowLocalNetworking"), true);
-        EXPECT_EQ(
-            parsedDrv.getStringsAttr("allowedReferences"), Strings{"/nix/store/3c08bzb71z4wiag719ipjxr277653ynp-foo"});
-        EXPECT_EQ(
-            parsedDrv.getStringsAttr("allowedRequisites"), Strings{"/nix/store/3c08bzb71z4wiag719ipjxr277653ynp-foo"});
-        EXPECT_EQ(
-            parsedDrv.getStringsAttr("disallowedReferences"),
-            Strings{"/nix/store/7rhsm8i393hm1wcsmph782awg1hi2f7x-bar"});
-        EXPECT_EQ(
-            parsedDrv.getStringsAttr("disallowedRequisites"),
-            Strings{"/nix/store/7rhsm8i393hm1wcsmph782awg1hi2f7x-bar"});
-        EXPECT_EQ(parsedDrv.getRequiredSystemFeatures(), systemFeatures);
-        EXPECT_EQ(parsedDrv.canBuildLocally(*store), false);
-        EXPECT_EQ(parsedDrv.willBuildLocally(*store), false);
-        EXPECT_EQ(parsedDrv.substitutesAllowed(), false);
-        EXPECT_EQ(parsedDrv.useUidRange(), true);
+        EXPECT_TRUE(!parsedDrv.hasStructuredAttrs());
+
+        EXPECT_EQ(options.additionalSandboxProfile, "sandcastle");
+        EXPECT_EQ(options.noChroot, true);
+        EXPECT_EQ(options.impureHostDeps, StringSet{"/usr/bin/ditto"});
+        EXPECT_EQ(options.impureEnvVars, StringSet{"UNICORN"});
+        EXPECT_EQ(options.allowLocalNetworking, true);
+
+        {
+            auto * checksForAllOutputs_ = std::get_if<0>(&options.outputChecks);
+            ASSERT_TRUE(checksForAllOutputs_ != nullptr);
+            auto & checksForAllOutputs = *checksForAllOutputs_;
+
+            EXPECT_EQ(
+                checksForAllOutputs.allowedReferences, StringSet{"/nix/store/3c08bzb71z4wiag719ipjxr277653ynp-foo"});
+            EXPECT_EQ(
+                checksForAllOutputs.allowedRequisites, StringSet{"/nix/store/3c08bzb71z4wiag719ipjxr277653ynp-foo"});
+            EXPECT_EQ(
+                checksForAllOutputs.disallowedReferences, StringSet{"/nix/store/7rhsm8i393hm1wcsmph782awg1hi2f7x-bar"});
+            EXPECT_EQ(
+                checksForAllOutputs.disallowedRequisites, StringSet{"/nix/store/7rhsm8i393hm1wcsmph782awg1hi2f7x-bar"});
+        }
+
+        EXPECT_EQ(options.getRequiredSystemFeatures(got), systemFeatures);
+        EXPECT_EQ(options.canBuildLocally(*store, got), false);
+        EXPECT_EQ(options.willBuildLocally(*store, got), false);
+        EXPECT_EQ(options.substitutesAllowed(), false);
+        EXPECT_EQ(options.useUidRange(got), true);
     });
 };
@@ -140,27 +158,29 @@ TEST_F(DerivationAdvancedAttrsTest, Derivation_advancedAttributes_structuredAttr
        auto drvPath = writeDerivation(*store, got, NoRepair, true);
 
        ParsedDerivation parsedDrv(drvPath, got);
+       DerivationOptions options = DerivationOptions::fromParsedDerivation(parsedDrv);
 
-       EXPECT_EQ(parsedDrv.getStringAttr("__sandboxProfile").value_or(""), "");
-       EXPECT_EQ(parsedDrv.getBoolAttr("__noChroot"), false);
-       EXPECT_EQ(parsedDrv.getStringsAttr("__impureHostDeps").value_or(Strings()), Strings());
-       EXPECT_EQ(parsedDrv.getStringsAttr("impureEnvVars").value_or(Strings()), Strings());
-       EXPECT_EQ(parsedDrv.getBoolAttr("__darwinAllowLocalNetworking"), false);
+       EXPECT_TRUE(parsedDrv.hasStructuredAttrs());
+
+       EXPECT_EQ(options.additionalSandboxProfile, "");
+       EXPECT_EQ(options.noChroot, false);
+       EXPECT_EQ(options.impureHostDeps, StringSet{});
+       EXPECT_EQ(options.impureEnvVars, StringSet{});
+       EXPECT_EQ(options.allowLocalNetworking, false);
 
        {
-           auto structuredAttrs_ = parsedDrv.getStructuredAttrs();
-           ASSERT_TRUE(structuredAttrs_);
-           auto & structuredAttrs = *structuredAttrs_;
-
-           auto outputChecks_ = get(structuredAttrs, "outputChecks");
-           ASSERT_FALSE(outputChecks_);
+           auto * checksPerOutput_ = std::get_if<1>(&options.outputChecks);
+           ASSERT_TRUE(checksPerOutput_ != nullptr);
+           auto & checksPerOutput = *checksPerOutput_;
+
+           EXPECT_EQ(checksPerOutput.size(), 0);
        }
 
-       EXPECT_EQ(parsedDrv.getRequiredSystemFeatures(), StringSet());
-       EXPECT_EQ(parsedDrv.canBuildLocally(*store), false);
-       EXPECT_EQ(parsedDrv.willBuildLocally(*store), false);
-       EXPECT_EQ(parsedDrv.substitutesAllowed(), true);
-       EXPECT_EQ(parsedDrv.useUidRange(), false);
+       EXPECT_EQ(options.getRequiredSystemFeatures(got), StringSet());
+       EXPECT_EQ(options.canBuildLocally(*store, got), false);
+       EXPECT_EQ(options.willBuildLocally(*store, got), false);
+       EXPECT_EQ(options.substitutesAllowed(), true);
+       EXPECT_EQ(options.useUidRange(got), false);
    });
 };
@@ -172,62 +192,52 @@ TEST_F(DerivationAdvancedAttrsTest, Derivation_advancedAttributes_structuredAttr
        auto drvPath = writeDerivation(*store, got, NoRepair, true);
 
        ParsedDerivation parsedDrv(drvPath, got);
+       DerivationOptions options = DerivationOptions::fromParsedDerivation(parsedDrv);
 
        StringSet systemFeatures{"rainbow", "uid-range"};
 
-       EXPECT_EQ(parsedDrv.getStringAttr("__sandboxProfile").value_or(""), "sandcastle");
-       EXPECT_EQ(parsedDrv.getBoolAttr("__noChroot"), true);
-       EXPECT_EQ(parsedDrv.getStringsAttr("__impureHostDeps").value_or(Strings()), Strings{"/usr/bin/ditto"});
-       EXPECT_EQ(parsedDrv.getStringsAttr("impureEnvVars").value_or(Strings()), Strings{"UNICORN"});
-       EXPECT_EQ(parsedDrv.getBoolAttr("__darwinAllowLocalNetworking"), true);
+       EXPECT_TRUE(parsedDrv.hasStructuredAttrs());
+
+       EXPECT_EQ(options.additionalSandboxProfile, "sandcastle");
+       EXPECT_EQ(options.noChroot, true);
+       EXPECT_EQ(options.impureHostDeps, StringSet{"/usr/bin/ditto"});
+       EXPECT_EQ(options.impureEnvVars, StringSet{"UNICORN"});
+       EXPECT_EQ(options.allowLocalNetworking, true);
 
        {
-           auto structuredAttrs_ = parsedDrv.getStructuredAttrs();
-           ASSERT_TRUE(structuredAttrs_);
-           auto & structuredAttrs = *structuredAttrs_;
-
-           auto outputChecks_ = get(structuredAttrs, "outputChecks");
-           ASSERT_TRUE(outputChecks_);
-           auto & outputChecks = *outputChecks_;
-
            {
-               auto output_ = get(outputChecks, "out");
+               auto output_ = get(std::get<1>(options.outputChecks), "out");
                ASSERT_TRUE(output_);
                auto & output = *output_;
-               EXPECT_EQ(
-                   get(output, "allowedReferences")->get<Strings>(),
-                   Strings{"/nix/store/3c08bzb71z4wiag719ipjxr277653ynp-foo"});
-               EXPECT_EQ(
-                   get(output, "allowedRequisites")->get<Strings>(),
-                   Strings{"/nix/store/3c08bzb71z4wiag719ipjxr277653ynp-foo"});
+
+               EXPECT_EQ(output.allowedReferences, StringSet{"/nix/store/3c08bzb71z4wiag719ipjxr277653ynp-foo"});
+               EXPECT_EQ(output.allowedRequisites, StringSet{"/nix/store/3c08bzb71z4wiag719ipjxr277653ynp-foo"});
           }
 
           {
-               auto output_ = get(outputChecks, "bin");
+               auto output_ = get(std::get<1>(options.outputChecks), "bin");
                ASSERT_TRUE(output_);
                auto & output = *output_;
-               EXPECT_EQ(
-                   get(output, "disallowedReferences")->get<Strings>(),
-                   Strings{"/nix/store/7rhsm8i393hm1wcsmph782awg1hi2f7x-bar"});
-               EXPECT_EQ(
-                   get(output, "disallowedRequisites")->get<Strings>(),
-                   Strings{"/nix/store/7rhsm8i393hm1wcsmph782awg1hi2f7x-bar"});
+
+               EXPECT_EQ(output.disallowedReferences, StringSet{"/nix/store/7rhsm8i393hm1wcsmph782awg1hi2f7x-bar"});
+               EXPECT_EQ(output.disallowedRequisites, StringSet{"/nix/store/7rhsm8i393hm1wcsmph782awg1hi2f7x-bar"});
           }
 
           {
-               auto output_ = get(outputChecks, "dev");
+               auto output_ = get(std::get<1>(options.outputChecks), "dev");
                ASSERT_TRUE(output_);
                auto & output = *output_;
-               EXPECT_EQ(get(output, "maxSize")->get<uint64_t>(), 789);
-               EXPECT_EQ(get(output, "maxClosureSize")->get<uint64_t>(), 5909);
+
+               EXPECT_EQ(output.maxSize, 789);
+               EXPECT_EQ(output.maxClosureSize, 5909);
           }
        }
 
-       EXPECT_EQ(parsedDrv.getRequiredSystemFeatures(), systemFeatures);
-       EXPECT_EQ(parsedDrv.canBuildLocally(*store), false);
-       EXPECT_EQ(parsedDrv.willBuildLocally(*store), false);
-       EXPECT_EQ(parsedDrv.substitutesAllowed(), false);
-       EXPECT_EQ(parsedDrv.useUidRange(), true);
+       EXPECT_EQ(options.getRequiredSystemFeatures(got), systemFeatures);
+       EXPECT_EQ(options.canBuildLocally(*store, got), false);
+       EXPECT_EQ(options.willBuildLocally(*store, got), false);
+       EXPECT_EQ(options.substitutesAllowed(), false);
+       EXPECT_EQ(options.useUidRange(got), true);
    });
 };
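The rewritten tests read `options.outputChecks` through `std::get_if<0>` / `std::get_if<1>`, i.e. the output checks are a `std::variant` holding either one set of checks applied to all outputs or a per-output map. A minimal sketch of inspecting such a variant (illustrative types only; the real `DerivationOptions::OutputChecks` fields differ):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <variant>

// Either one Checks value for every output, or a map from output
// name to its own Checks — the two alternatives the tests probe.
struct Checks { bool allowAll = true; };
using PerOutput = std::map<std::string, Checks>;
using OutputChecks = std::variant<Checks, PerOutput>;

// std::get_if returns nullptr when the variant holds the other
// alternative, so this distinguishes the two cases without throwing.
bool forAllOutputs(const OutputChecks & oc) {
    return std::get_if<Checks>(&oc) != nullptr;
}
```

Non-structured-attrs derivations land in the "checks for all outputs" alternative (`std::get_if<0>` succeeds), while structured attrs with an `outputChecks` map land in the per-output alternative, which is exactly the branching the four tests assert.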

View file

@ -36,14 +36,6 @@
namespace nix { namespace nix {
Goal::Co DerivationGoal::init() {
if (useDerivation) {
co_return getDerivation();
} else {
co_return haveDerivation();
}
}
DerivationGoal::DerivationGoal(const StorePath & drvPath, DerivationGoal::DerivationGoal(const StorePath & drvPath,
const OutputsSpec & wantedOutputs, Worker & worker, BuildMode buildMode) const OutputsSpec & wantedOutputs, Worker & worker, BuildMode buildMode)
: Goal(worker, DerivedPath::Built { .drvPath = makeConstantStorePathRef(drvPath), .outputs = wantedOutputs }) : Goal(worker, DerivedPath::Built { .drvPath = makeConstantStorePathRef(drvPath), .outputs = wantedOutputs })
@ -141,50 +133,44 @@ void DerivationGoal::addWantedOutputs(const OutputsSpec & outputs)
} }
Goal::Co DerivationGoal::getDerivation() Goal::Co DerivationGoal::init() {
{
trace("init"); trace("init");
/* The first thing to do is to make sure that the derivation if (useDerivation) {
exists. If it doesn't, it may be created through a /* The first thing to do is to make sure that the derivation
substitute. */ exists. If it doesn't, it may be created through a
if (buildMode == bmNormal && worker.evalStore.isValidPath(drvPath)) { substitute. */
co_return loadDerivation();
}
addWaitee(upcast_goal(worker.makePathSubstitutionGoal(drvPath))); if (buildMode != bmNormal || !worker.evalStore.isValidPath(drvPath)) {
addWaitee(upcast_goal(worker.makePathSubstitutionGoal(drvPath)));
co_await Suspend{}; co_await Suspend{};
co_return loadDerivation();
}
Goal::Co DerivationGoal::loadDerivation()
{
trace("loading derivation");
if (nrFailed != 0) {
co_return done(BuildResult::MiscFailure, {}, Error("cannot build missing derivation '%s'", worker.store.printStorePath(drvPath)));
}
/* `drvPath' should already be a root, but let's be on the safe
side: if the user forgot to make it a root, we wouldn't want
things being garbage collected while we're busy. */
worker.evalStore.addTempRoot(drvPath);
/* Get the derivation. It is probably in the eval store, but it might be inthe main store:
- Resolved derivation are resolved against main store realisations, and so must be stored there.
- Dynamic derivations are built, and so are found in the main store.
*/
for (auto * drvStore : { &worker.evalStore, &worker.store }) {
if (drvStore->isValidPath(drvPath)) {
drv = std::make_unique<Derivation>(drvStore->readDerivation(drvPath));
break;
} }
trace("loading derivation");
if (nrFailed != 0) {
co_return done(BuildResult::MiscFailure, {}, Error("cannot build missing derivation '%s'", worker.store.printStorePath(drvPath)));
}
/* `drvPath' should already be a root, but let's be on the safe
side: if the user forgot to make it a root, we wouldn't want
things being garbage collected while we're busy. */
worker.evalStore.addTempRoot(drvPath);
/* Get the derivation. It is probably in the eval store, but it might be inthe main store:
- Resolved derivation are resolved against main store realisations, and so must be stored there.
- Dynamic derivations are built, and so are found in the main store.
*/
for (auto * drvStore : { &worker.evalStore, &worker.store }) {
if (drvStore->isValidPath(drvPath)) {
drv = std::make_unique<Derivation>(drvStore->readDerivation(drvPath));
break;
}
}
assert(drv);
} }
assert(drv);
co_return haveDerivation(); co_return haveDerivation();
} }
@ -195,58 +181,64 @@ Goal::Co DerivationGoal::haveDerivation()
trace("have derivation"); trace("have derivation");
parsedDrv = std::make_unique<ParsedDerivation>(drvPath, *drv); parsedDrv = std::make_unique<ParsedDerivation>(drvPath, *drv);
drvOptions = std::make_unique<DerivationOptions>(DerivationOptions::fromParsedDerivation(*parsedDrv));
    if (!drv->type().hasKnownOutputPaths())
        experimentalFeatureSettings.require(Xp::CaDerivations);

    for (auto & i : drv->outputsAndOptPaths(worker.store))
        if (i.second.second)
            worker.store.addTempRoot(*i.second.second);

    {
        bool impure = drv->type().isImpure();

        if (impure) experimentalFeatureSettings.require(Xp::ImpureDerivations);

        auto outputHashes = staticOutputHashes(worker.evalStore, *drv);
        for (auto & [outputName, outputHash] : outputHashes) {
            InitialOutput v{
                .wanted = true, // Will be refined later
                .outputHash = outputHash
            };

            /* TODO we might want to also allow randomizing the paths
               for regular CA derivations, e.g. for sake of checking
               determinism. */
            if (impure) {
                v.known = InitialOutputStatus {
                    .path = StorePath::random(outputPathName(drv->name, outputName)),
                    .status = PathStatus::Absent,
                };
            }

            initialOutputs.insert({
                outputName,
                std::move(v),
            });
        }

        if (impure) {
            /* We don't yet have any safe way to cache an impure derivation at
               this step. */
            co_return gaveUpOnSubstitution();
        }
    }

    {
        /* Check which output paths are not already valid. */
        auto [allValid, validOutputs] = checkPathValidity();

        /* If they are all valid, then we're done. */
        if (allValid && buildMode == bmNormal) {
            co_return done(BuildResult::AlreadyValid, std::move(validOutputs));
        }
    }
    /* We are first going to try to create the invalid output paths
       through substitutes.  If that doesn't work, we'll build
       them. */
    if (settings.useSubstitutes && drvOptions->substitutesAllowed())
        for (auto & [outputName, status] : initialOutputs) {
            if (!status.wanted) continue;
            if (!status.known)

@@ -268,12 +260,7 @@ Goal::Co DerivationGoal::haveDerivation()
        }

    if (!waitees.empty()) co_await Suspend{}; /* to prevent hang (no wake-up event) */

    trace("all outputs substituted (maybe)");

    assert(!drv->type().isImpure());
@@ -399,84 +386,7 @@ Goal::Co DerivationGoal::gaveUpOnSubstitution()
        }

    if (!waitees.empty()) co_await Suspend{}; /* to prevent hang (no wake-up event) */

    trace("all inputs realised");
    if (nrFailed != 0) {
@@ -718,7 +628,7 @@ Goal::Co DerivationGoal::tryToBuild()
       `preferLocalBuild' set.  Also, check and repair modes are only
       supported for local builds. */
    bool buildLocally =
        (buildMode != bmNormal || drvOptions->willBuildLocally(worker.store, *drv))
        && settings.maxBuildJobs.get() != 0;

    if (!buildLocally) {
@@ -766,6 +676,73 @@ Goal::Co DerivationGoal::tryLocalBuild() {
}
Goal::Co DerivationGoal::repairClosure()
{
assert(!drv->type().isImpure());
/* If we're repairing, we now know that our own outputs are valid.
Now check whether the other paths in the outputs closure are
good. If not, then start derivation goals for the derivations
that produced those outputs. */
/* Get the output closure. */
auto outputs = queryDerivationOutputMap();
StorePathSet outputClosure;
for (auto & i : outputs) {
if (!wantedOutputs.contains(i.first)) continue;
worker.store.computeFSClosure(i.second, outputClosure);
}
/* Filter out our own outputs (which we have already checked). */
for (auto & i : outputs)
outputClosure.erase(i.second);
/* Get all dependencies of this derivation so that we know which
derivation is responsible for which path in the output
closure. */
StorePathSet inputClosure;
if (useDerivation) worker.store.computeFSClosure(drvPath, inputClosure);
std::map<StorePath, StorePath> outputsToDrv;
for (auto & i : inputClosure)
if (i.isDerivation()) {
auto depOutputs = worker.store.queryPartialDerivationOutputMap(i, &worker.evalStore);
for (auto & j : depOutputs)
if (j.second)
outputsToDrv.insert_or_assign(*j.second, i);
}
/* Check each path (slow!). */
for (auto & i : outputClosure) {
if (worker.pathContentsGood(i)) continue;
printError(
"found corrupted or missing path '%s' in the output closure of '%s'",
worker.store.printStorePath(i), worker.store.printStorePath(drvPath));
auto drvPath2 = outputsToDrv.find(i);
if (drvPath2 == outputsToDrv.end())
addWaitee(upcast_goal(worker.makePathSubstitutionGoal(i, Repair)));
else
addWaitee(worker.makeGoal(
DerivedPath::Built {
.drvPath = makeConstantStorePathRef(drvPath2->second),
.outputs = OutputsSpec::All { },
},
bmRepair));
}
if (waitees.empty()) {
co_return done(BuildResult::AlreadyValid, assertPathValidity());
} else {
co_await Suspend{};
trace("closure repaired");
if (nrFailed > 0)
throw Error("some paths in the output closure of derivation '%s' could not be repaired",
worker.store.printStorePath(drvPath));
co_return done(BuildResult::AlreadyValid, assertPathValidity());
}
}
static void chmod_(const Path & path, mode_t mode)
{
    if (chmod(path.c_str(), mode) == -1)
@@ -1145,7 +1122,7 @@ HookReply DerivationGoal::tryBuildHook()
            << (worker.getNrLocalBuilds() < settings.maxBuildJobs ? 1 : 0)
            << drv->platform
            << worker.store.printStorePath(drvPath)
            << drvOptions->getRequiredSystemFeatures(*drv);
        worker.hook->sink.flush();

    /* Read the first line of input, which should be a word indicating
@@ -1247,7 +1224,7 @@ SingleDrvOutputs DerivationGoal::registerOutputs()
       to do anything here.

       We can only early return when the outputs are known a priori. For
       floating content-addressing derivations this isn't the case.
    */
    return assertPathValidity();
}

View file

@@ -2,6 +2,7 @@
///@file

#include "parsed-derivations.hh"
#include "derivation-options.hh"
#ifndef _WIN32
# include "user-lock.hh"
#endif
@@ -80,7 +81,7 @@ struct DerivationGoal : public Goal
    /**
     * Mapping from input derivations + output names to actual store
     * paths. This is filled in by waiteeDone() as each dependency
     * finishes, before `trace("all inputs realised")` is reached.
     */
    std::map<std::pair<StorePath, std::string>, StorePath> inputDrvOutputs;
@@ -143,6 +144,7 @@ struct DerivationGoal : public Goal
    std::unique_ptr<Derivation> drv;
    std::unique_ptr<ParsedDerivation> parsedDrv;
std::unique_ptr<DerivationOptions> drvOptions;
    /**
     * The remainder is state held during the build.

@@ -233,13 +235,8 @@ struct DerivationGoal : public Goal
     * The states.
     */
    Co init() override;
    Co haveDerivation();
    Co gaveUpOnSubstitution();
    Co tryToBuild();
    virtual Co tryLocalBuild();
    Co buildDone();

View file

@@ -1,4 +1,4 @@
-- Extension of the sql schema for content-addressing derivations.
-- Won't be loaded unless the experimental feature `ca-derivations`
-- is enabled

Some files were not shown because too many files have changed in this diff.