diff --git a/.github/ISSUE_TEMPLATE/missing_documentation.md b/.github/ISSUE_TEMPLATE/missing_documentation.md
index 942d7a971..be3f6af97 100644
--- a/.github/ISSUE_TEMPLATE/missing_documentation.md
+++ b/.github/ISSUE_TEMPLATE/missing_documentation.md
@@ -11,6 +11,10 @@ assignees: ''
+## Proposal
+
+
+
## Checklist
@@ -22,10 +26,6 @@ assignees: ''
[source]: https://github.com/NixOS/nix/tree/master/doc/manual/src
[open documentation issues and pull requests]: https://github.com/NixOS/nix/labels/documentation
-## Proposal
-
-
-
## Priorities
Add :+1: to [issues you find important](https://github.com/NixOS/nix/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
diff --git a/.github/workflows/backport.yml b/.github/workflows/backport.yml
index 37966bab2..816474ed5 100644
--- a/.github/workflows/backport.yml
+++ b/.github/workflows/backport.yml
@@ -21,7 +21,7 @@ jobs:
fetch-depth: 0
- name: Create backport PRs
# should be kept in sync with `version`
- uses: zeebe-io/backport-action@v1.3.0
+ uses: zeebe-io/backport-action@v1.3.1
with:
# Config README: https://github.com/zeebe-io/backport-action#backport-action
github_token: ${{ secrets.GITHUB_TOKEN }}
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 0f1f6d43f..c3a17d106 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -11,6 +11,7 @@ jobs:
tests:
needs: [check_secrets]
strategy:
+ fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest]
runs-on: ${{ matrix.os }}
@@ -19,7 +20,7 @@ jobs:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- - uses: cachix/install-nix-action@v21
+ - uses: cachix/install-nix-action@v22
with:
# The sandbox would otherwise be disabled by default on Darwin
extra_nix_config: "sandbox = true"
@@ -61,7 +62,7 @@ jobs:
with:
fetch-depth: 0
- run: echo CACHIX_NAME="$(echo $GITHUB_REPOSITORY-install-tests | tr "[A-Z]/" "[a-z]-")" >> $GITHUB_ENV
- - uses: cachix/install-nix-action@v21
+ - uses: cachix/install-nix-action@v22
with:
install_url: https://releases.nixos.org/nix/nix-2.13.3/install
- uses: cachix/cachix-action@v12
@@ -76,13 +77,14 @@ jobs:
needs: [installer, check_secrets]
if: github.event_name == 'push' && needs.check_secrets.outputs.cachix == 'true'
strategy:
+ fail-fast: false
matrix:
os: [ubuntu-latest, macos-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v3
- run: echo CACHIX_NAME="$(echo $GITHUB_REPOSITORY-install-tests | tr "[A-Z]/" "[a-z]-")" >> $GITHUB_ENV
- - uses: cachix/install-nix-action@v21
+ - uses: cachix/install-nix-action@v22
with:
install_url: '${{needs.installer.outputs.installerURL}}'
install_options: "--tarball-url-prefix https://${{ env.CACHIX_NAME }}.cachix.org/serve"
@@ -109,7 +111,7 @@ jobs:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- - uses: cachix/install-nix-action@v21
+ - uses: cachix/install-nix-action@v22
with:
install_url: https://releases.nixos.org/nix/nix-2.13.3/install
- run: echo CACHIX_NAME="$(echo $GITHUB_REPOSITORY-install-tests | tr "[A-Z]/" "[a-z]-")" >> $GITHUB_ENV
diff --git a/.gitignore b/.gitignore
index 7ae1071d0..29d9106ae 100644
--- a/.gitignore
+++ b/.gitignore
@@ -89,6 +89,7 @@ perl/Makefile.config
/tests/ca/config.nix
/tests/dyn-drv/config.nix
/tests/repl-result-out
+/tests/test-libstoreconsumer/test-libstoreconsumer
# /tests/lang/
/tests/lang/*.out
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 57a949906..4a72a8eac 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -5,7 +5,6 @@ We appreciate your support.
Reading and following these guidelines will help us make the contribution process easy and effective for everyone involved.
-
## Report a bug
1. Check on the [GitHub issue tracker](https://github.com/NixOS/nix/issues) if your bug was already reported.
@@ -30,6 +29,8 @@ Check out the [security policy](https://github.com/NixOS/nix/security/policy).
You can use [labels](https://github.com/NixOS/nix/labels) to filter for relevant topics.
2. Search for related issues that cover what you're going to work on. It could help to mention there that you will work on the issue.
+
+ Issues labeled ["good first issue"](https://github.com/NixOS/nix/labels/good-first-issue) should be relatively easy to fix and are likely to get merged quickly.
Pull requests addressing issues labeled ["idea approved"](https://github.com/NixOS/nix/labels/idea%20approved) are especially welcomed by maintainers and will receive prioritised review.
3. Check the [Nix reference manual](https://nixos.org/manual/nix/unstable/contributing/hacking.html) for information on building Nix and running its tests.
diff --git a/Makefile b/Makefile
index 3ce5cc5e3..f8c6b32a7 100644
--- a/Makefile
+++ b/Makefile
@@ -28,6 +28,7 @@ makefiles += \
src/libexpr/tests/local.mk \
tests/local.mk \
tests/overlay-local-store/local.mk \
+ tests/test-libstoreconsumer/local.mk \
tests/plugins/local.mk
else
makefiles += \
diff --git a/doc/manual/src/SUMMARY.md.in b/doc/manual/src/SUMMARY.md.in
index 69c721b57..13d2e4d15 100644
--- a/doc/manual/src/SUMMARY.md.in
+++ b/doc/manual/src/SUMMARY.md.in
@@ -97,7 +97,10 @@
- [manifest.json](command-ref/files/manifest.json.md)
- [Channels](command-ref/files/channels.md)
- [Default Nix expression](command-ref/files/default-nix-expression.md)
-- [Architecture](architecture/architecture.md)
+- [Architecture and Design](architecture/architecture.md)
+ - [File System Object](architecture/file-system-object.md)
+- [Protocols](protocols/protocols.md)
+ - [Serving Tarball Flakes](protocols/tarball-fetcher.md)
- [Glossary](glossary.md)
- [Contributing](contributing/contributing.md)
- [Hacking](contributing/hacking.md)
diff --git a/doc/manual/src/architecture/architecture.md b/doc/manual/src/architecture/architecture.md
index e51958052..9e969972e 100644
--- a/doc/manual/src/architecture/architecture.md
+++ b/doc/manual/src/architecture/architecture.md
@@ -7,11 +7,11 @@ It should help users understand why Nix behaves as it does, and it should help d
Nix consists of [hierarchical layers].
-[hierarchical layers]: https://en.m.wikipedia.org/wiki/Multitier_architecture#Layers
+[hierarchical layers]: https://en.wikipedia.org/wiki/Multitier_architecture#Layers
The following [concept map] shows its main components (rectangles), the objects they operate on (rounded rectangles), and their interactions (connecting phrases):
-[concept map]: https://en.m.wikipedia.org/wiki/Concept_map
+[concept map]: https://en.wikipedia.org/wiki/Concept_map
```
@@ -76,7 +76,7 @@ The result of a build task can be input to another build task.
The following [data flow diagram] shows a build plan for illustration.
Build inputs used as instructions to a build task are marked accordingly:
-[data flow diagram]: https://en.m.wikipedia.org/wiki/Data-flow_diagram
+[data flow diagram]: https://en.wikipedia.org/wiki/Data-flow_diagram
```
+--------------------------------------------------------------------+
diff --git a/doc/manual/src/architecture/file-system-object.md b/doc/manual/src/architecture/file-system-object.md
new file mode 100644
index 000000000..42f047260
--- /dev/null
+++ b/doc/manual/src/architecture/file-system-object.md
@@ -0,0 +1,64 @@
+# File System Object
+
+Nix uses a simplified model of the file system, which consists of file system objects.
+Every file system object is one of the following:
+
+ - File
+
+ - A possibly empty sequence of bytes for contents
+ - A single boolean representing the [executable](https://en.m.wikipedia.org/wiki/File-system_permissions#Permissions) permission
+
+ - Directory
+
+ Mapping of names to child file system objects
+
+ - [Symbolic link](https://en.m.wikipedia.org/wiki/Symbolic_link)
+
+ An arbitrary string.
+ Nix does not assign any semantics to symbolic links.
+
+File system objects and their children form a tree.
+A bare file or symlink can be a root file system object.
+
+Nix does not encode any other file system notions such as [hard links](https://en.m.wikipedia.org/wiki/Hard_link), [permissions](https://en.m.wikipedia.org/wiki/File-system_permissions), timestamps, or other metadata.
+
+## Examples of file system objects
+
+A plain file:
+
+```
+50 B, executable: false
+```
+
+An executable file:
+
+```
+122 KB, executable: true
+```
+
+A symlink:
+
+```
+-> /usr/bin/sh
+```
+
+A directory with contents:
+
+```
+├── bin
+│ └── hello: 35 KB, executable: true
+└── share
+ ├── info
+ │ └── hello.info: 36 KB, executable: false
+ └── man
+ └── man1
+ └── hello.1.gz: 790 B, executable: false
+```
+
+A directory that contains a symlink and other directories:
+
+```
+├── bin -> share/go/bin
+├── nix-support/
+└── share/
+```
diff --git a/doc/manual/src/command-ref/nix-channel.md b/doc/manual/src/command-ref/nix-channel.md
index a210583ae..025f758e7 100644
--- a/doc/manual/src/command-ref/nix-channel.md
+++ b/doc/manual/src/command-ref/nix-channel.md
@@ -4,7 +4,7 @@
# Synopsis
-`nix-channel` {`--add` url [*name*] | `--remove` *name* | `--list` | `--update` [*names…*] | `--rollback` [*generation*] }
+`nix-channel` {`--add` url [*name*] | `--remove` *name* | `--list` | `--update` [*names…*] | `--list-generations` | `--rollback` [*generation*] }
# Description
@@ -39,6 +39,15 @@ This command has the following operations:
for `nix-env` operations (by symlinking them from the directory
`~/.nix-defexpr`).
+ - `--list-generations`\
+   Prints a list of all the currently existing generations for the
+ channel profile.
+
+ Works the same way as
+ ```
+ nix-env --profile /nix/var/nix/profiles/per-user/$USER/channels --list-generations
+ ```
+
- `--rollback` \[*generation*\]\
Reverts the previous call to `nix-channel
--update`. Optionally, you can specify a specific channel generation
diff --git a/doc/manual/src/command-ref/nix-collect-garbage.md b/doc/manual/src/command-ref/nix-collect-garbage.md
index 51db5fc67..a679ceaf7 100644
--- a/doc/manual/src/command-ref/nix-collect-garbage.md
+++ b/doc/manual/src/command-ref/nix-collect-garbage.md
@@ -1,6 +1,6 @@
# Name
-`nix-collect-garbage` - delete unreachable store paths
+`nix-collect-garbage` - delete unreachable [store objects]
# Synopsis
@@ -8,17 +8,57 @@
# Description
-The command `nix-collect-garbage` is mostly an alias of [`nix-store
---gc`](@docroot@/command-ref/nix-store/gc.md), that is, it deletes all
-unreachable paths in the Nix store to clean up your system. However,
-it provides two additional options: `-d` (`--delete-old`), which
-deletes all old generations of all profiles in `/nix/var/nix/profiles`
-by invoking `nix-env --delete-generations old` on all profiles (of
-course, this makes rollbacks to previous configurations impossible);
-and `--delete-older-than` *period*, where period is a value such as
-`30d`, which deletes all generations older than the specified number
-of days in all profiles in `/nix/var/nix/profiles` (except for the
-generations that were active at that point in time).
+The command `nix-collect-garbage` is mostly an alias of [`nix-store --gc`](@docroot@/command-ref/nix-store/gc.md).
+That is, it deletes all unreachable [store objects] in the Nix store to clean up your system.
+
+However, it provides two additional options,
+[`--delete-old`](#opt-delete-old) and [`--delete-older-than`](#opt-delete-older-than),
+which also delete old [profiles], allowing potentially more [store objects] to be deleted because profiles are also garbage collection roots.
+These options are the equivalent of running
+[`nix-env --delete-generations`](@docroot@/command-ref/nix-env/delete-generations.md)
+with various arguments on multiple profiles,
+prior to running `nix-collect-garbage` (or just `nix-store --gc`) without any flags.
+
+> **Note**
+>
+> Deleting previous configurations makes rollbacks to them impossible.
+
+These flags should be used with care, because they potentially delete generations of profiles used by other users on the system.
+
+## Locations searched for profiles
+
+`nix-collect-garbage` cannot know about all profiles; that information doesn't exist.
+Instead, it looks in a few locations, and acts on all profiles it finds there:
+
+1. The default profile locations as specified in the [profiles] section of the manual.
+
+2. > **NOTE**
+ >
+ > Not stable; subject to change
+ >
+   > Do not rely on this functionality; it just exists for migration purposes and may change in the future.
+ > These deprecated paths remain a private implementation detail of Nix.
+
+ `$NIX_STATE_DIR/profiles` and `$NIX_STATE_DIR/profiles/per-user`.
+
+ With the exception of `$NIX_STATE_DIR/profiles/per-user/root` and `$NIX_STATE_DIR/profiles/default`, these directories are no longer used by other commands.
+  `nix-collect-garbage` looks there anyway in order to clean up profiles from older versions of Nix.
+
+# Options
+
+These options are for deleting old [profiles] prior to deleting unreachable [store objects].
+
+- [`--delete-old`](#opt-delete-old) / `-d`\
+ Delete all old generations of profiles.
+
+ This is the equivalent of invoking `nix-env --delete-generations old` on each found profile.
+
+- [`--delete-older-than`](#opt-delete-older-than) *period*\
+ Delete all generations of profiles older than the specified amount (except for the generations that were active at that point in time).
+ *period* is a value such as `30d`, which would mean 30 days.
+
+  This is the equivalent of invoking [`nix-env --delete-generations <period>`](@docroot@/command-ref/nix-env/delete-generations.md#generations-days) on each found profile.
+ See the documentation of that command for additional information about the *period* argument.
{{#include ./opt-common.md}}
@@ -32,3 +72,6 @@ generations of each profile, do
```console
$ nix-collect-garbage -d
```
+
+[profiles]: @docroot@/command-ref/files/profiles.md
+[store objects]: @docroot@/glossary.md#gloss-store-object
diff --git a/doc/manual/src/command-ref/nix-env/delete-generations.md b/doc/manual/src/command-ref/nix-env/delete-generations.md
index 92cb7f0d9..d828a5b9e 100644
--- a/doc/manual/src/command-ref/nix-env/delete-generations.md
+++ b/doc/manual/src/command-ref/nix-env/delete-generations.md
@@ -9,14 +9,39 @@
# Description
This operation deletes the specified generations of the current profile.
-The generations can be a list of generation numbers, the special value
-`old` to delete all non-current generations, a value such as `30d` to
-delete all generations older than the specified number of days (except
-for the generation that was active at that point in time), or a value
-such as `+5` to keep the last `5` generations ignoring any newer than
-current, e.g., if `30` is the current generation `+5` will delete
-generation `25` and all older generations. Periodically deleting old
-generations is important to make garbage collection effective.
+
+*generations* can be one of the following:
+
+- `<number>...`:\
+ A list of generation numbers, each one a separate command-line argument.
+
+ Delete exactly the profile generations given by their generation number.
+ Deleting the current generation is not allowed.
+
+- The special value `old`
+
+ Delete all generations older than the current one.
+
+- `<days>d`:\
+ The last *days* days
+
+ *Example*: `30d`
+
+ Delete all generations older than *days* days.
+ The generation that was active at that point in time is excluded, and will not be deleted.
+
+- `+<count>`:\
+ The last *count* generations up to the present
+
+ *Example*: `+5`
+
+ Keep the last *count* generations, along with any newer than current.
+
+Periodically deleting old generations is important to make garbage collection
+effective.
+This is because profiles are also garbage collection roots: any [store object] reachable from a profile is "alive" and ineligible for deletion.
+
+[store object]: @docroot@/glossary.md#gloss-store-object
{{#include ./opt-common.md}}
@@ -28,19 +53,35 @@ generations is important to make garbage collection effective.
# Examples
+## Delete explicit generation numbers
+
```console
$ nix-env --delete-generations 3 4 8
```
+Delete the generations numbered 3, 4, and 8, so long as the current active generation is not any of those.
+
+## Keep most-recent by count
+
```console
$ nix-env --delete-generations +5
```
+Suppose `30` is the current generation, and we currently have generations numbered `20` through `32`.
+
+Then this command will delete generations `20` through `25` (`<= 30 - 5`),
+and keep generations `26` through `32` (`> 30 - 5`).
+
+## Keep most-recent in days
+
```console
$ nix-env --delete-generations 30d
```
+This command will delete all generations older than 30 days, except for the generation that was active 30 days ago (if it currently exists).
+
+## Delete all older
+
```console
$ nix-env --profile other_profile --delete-generations old
```
-
diff --git a/doc/manual/src/contributing/hacking.md b/doc/manual/src/contributing/hacking.md
index b954a2167..c57d45138 100644
--- a/doc/manual/src/contributing/hacking.md
+++ b/doc/manual/src/contributing/hacking.md
@@ -378,7 +378,7 @@ rm $(git ls-files doc/manual/ -o | grep -F '.md') && rmdir doc/manual/src/comman
[`mdbook-linkcheck`] does not implement checking [URI fragments] yet.
[`mdbook-linkcheck`]: https://github.com/Michael-F-Bryan/mdbook-linkcheck
-[URI fragments]: https://en.m.wikipedia.org/wiki/URI_fragment
+[URI fragments]: https://en.wikipedia.org/wiki/URI_fragment
#### `@docroot@` variable
diff --git a/doc/manual/src/glossary.md b/doc/manual/src/glossary.md
index e142bd415..ac0bb3c2f 100644
--- a/doc/manual/src/glossary.md
+++ b/doc/manual/src/glossary.md
@@ -85,12 +85,17 @@
[store path]: #gloss-store-path
+ - [file system object]{#gloss-file-system-object}\
+ The Nix data model for representing simplified file system data.
+
+ See [File System Object](@docroot@/architecture/file-system-object.md) for details.
+
+ [file system object]: #gloss-file-system-object
+
- [store object]{#gloss-store-object}\
- A file that is an immediate child of the Nix store directory. These
- can be regular files, but also entire directory trees. Store objects
- can be sources (objects copied from outside of the store),
- derivation outputs (objects produced by running a build task), or
- derivations (files describing a build task).
+
+ A store object consists of a [file system object], [reference]s to other store objects, and other metadata.
+ It can be referred to by a [store path].
[store object]: #gloss-store-object
@@ -112,9 +117,10 @@
from some server.
- [substituter]{#gloss-substituter}\
- A *substituter* is an additional store from which Nix will
- copy store objects it doesn't have. For details, see the
- [`substituters` option](./command-ref/conf-file.md#conf-substituters).
+ An additional [store]{#gloss-store} from which Nix can obtain store objects instead of building them.
+    Often the substituter is a [binary cache](#gloss-binary-cache), but any store can serve as a substituter.
+
+ See the [`substituters` configuration option](./command-ref/conf-file.md#conf-substituters) for details.
[substituter]: #gloss-substituter
diff --git a/doc/manual/src/installation/prerequisites-source.md b/doc/manual/src/installation/prerequisites-source.md
index 5a708f11b..d4babf1ea 100644
--- a/doc/manual/src/installation/prerequisites-source.md
+++ b/doc/manual/src/installation/prerequisites-source.md
@@ -10,7 +10,7 @@
- Bash Shell. The `./configure` script relies on bashisms, so Bash is
required.
- - A version of GCC or Clang that supports C++17.
+ - A version of GCC or Clang that supports C++20.
- `pkg-config` to locate dependencies. If your distribution does not
provide it, you can get it from
diff --git a/doc/manual/src/language/index.md b/doc/manual/src/language/index.md
index 3eabe1a02..29950a52d 100644
--- a/doc/manual/src/language/index.md
+++ b/doc/manual/src/language/index.md
@@ -1,12 +1,11 @@
# Nix Language
-The Nix language is
+The Nix language is designed for conveniently creating and composing *derivations* – precise descriptions of how contents of existing files are used to derive new files.
+It is:
- *domain-specific*
- It only exists for the Nix package manager:
- to describe packages and configurations as well as their variants and compositions.
- It is not intended for general purpose use.
+ It comes with [built-in functions](@docroot@/language/builtins.md) to integrate with the Nix store, which manages files and performs the derivations declared in the Nix language.
- *declarative*
@@ -25,7 +24,7 @@ The Nix language is
- *lazy*
- Expressions are only evaluated when their value is needed.
+ Values are only computed when they are needed.
- *dynamically typed*
diff --git a/doc/manual/src/protocols/protocols.md b/doc/manual/src/protocols/protocols.md
new file mode 100644
index 000000000..d6bf1d809
--- /dev/null
+++ b/doc/manual/src/protocols/protocols.md
@@ -0,0 +1,4 @@
+# Protocols
+
+This chapter documents various developer-facing interfaces provided by
+Nix.
diff --git a/doc/manual/src/protocols/tarball-fetcher.md b/doc/manual/src/protocols/tarball-fetcher.md
new file mode 100644
index 000000000..0d3212303
--- /dev/null
+++ b/doc/manual/src/protocols/tarball-fetcher.md
@@ -0,0 +1,42 @@
+# Lockable HTTP Tarball Protocol
+
+Tarball flakes can be served as regular tarballs via HTTP or the file
+system (for `file://` URLs). Unless the server implements the Lockable
+HTTP Tarball protocol, it is the responsibility of the user to make sure that
+the URL always produces the same tarball contents.
+
+An HTTP server can return an "immutable" HTTP URL appropriate for lock
+files. This allows users to specify a tarball flake input in
+`flake.nix` that requests the latest version of a flake
+(e.g. `https://example.org/hello/latest.tar.gz`), while `flake.lock`
+will record a URL whose contents will not change
+(e.g. `https://example.org/hello/<revision>.tar.gz`). To do so, the
+server must return an [HTTP `Link` header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Link) with the `rel` attribute set to
+`immutable`, as follows:
+
+```
+Link: <flakeref>; rel="immutable"
+```
+
+(Note the required `<` and `>` characters around *flakeref*.)
+
+*flakeref* must be a tarball flakeref. It can contain flake attributes
+such as `narHash`, `rev` and `revCount`. If `narHash` is included, its
+value must be the NAR hash of the unpacked tarball (as computed via
+`nix hash path`). Nix checks the contents of the returned tarball
+against the `narHash` attribute. The `rev` and `revCount` attributes
+are useful when the tarball flake is a mirror of a fetcher type that
+has those attributes, such as Git or GitHub. They are not checked by
+Nix.
+
+```
+Link: <https://example.org/hello/<revision>.tar.gz
+  ?rev=<revision>
+  &revCount=<revCount>
+  &narHash=<narHash>>; rel="immutable"
+```
+
+(The linebreaks in this example are for clarity and must not be included in the actual response.)
+
+For tarball flakes, the value of the `lastModified` flake attribute is
+defined as the timestamp of the newest file inside the tarball.
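
The header format described above can be recognised with a single regular expression; the change to `src/libstore/filetransfer.cc` further down in this patch uses essentially the pattern shown here. As a standalone sketch (a hypothetical helper, not code from the patch), a client could extract the flakeref from the header value like this:

```cpp
#include <optional>
#include <regex>
#include <string>

// Extract the flakeref from a header value of the form
//   <flakeref>; rel="immutable"
// Returns std::nullopt if the value does not have that shape.
std::optional<std::string> parseImmutableLink(const std::string & value)
{
    static const std::regex linkRegex(
        "<([^>]*)>; rel=\"immutable\"",
        std::regex::extended | std::regex::icase);
    std::smatch match;
    if (std::regex_match(value, match, linkRegex))
        return match.str(1); // the flakeref between '<' and '>'
    return std::nullopt;
}
```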
diff --git a/doc/manual/src/release-notes/rl-next.md b/doc/manual/src/release-notes/rl-next.md
index 78ae99f4b..bde9057c6 100644
--- a/doc/manual/src/release-notes/rl-next.md
+++ b/doc/manual/src/release-notes/rl-next.md
@@ -1,2 +1,3 @@
# Release X.Y (202?-??-??)
+- [`nix-channel`](../command-ref/nix-channel.md) now supports a `--list-generations` subcommand
diff --git a/flake.nix b/flake.nix
index a4ee80b32..bdbf54169 100644
--- a/flake.nix
+++ b/flake.nix
@@ -590,6 +590,8 @@
tests.sourcehutFlakes = runNixOSTestFor "x86_64-linux" ./tests/nixos/sourcehut-flakes.nix;
+ tests.tarballFlakes = runNixOSTestFor "x86_64-linux" ./tests/nixos/tarball-flakes.nix;
+
tests.containers = runNixOSTestFor "x86_64-linux" ./tests/nixos/containers/containers.nix;
tests.setuid = lib.genAttrs
diff --git a/maintainers/README.md b/maintainers/README.md
index d13349438..0d520cb0c 100644
--- a/maintainers/README.md
+++ b/maintainers/README.md
@@ -117,6 +117,7 @@ Pull requests in this column are reviewed together during work meetings.
This is both for spreading implementation knowledge and for establishing common values in code reviews.
When the overall direction is agreed upon, even when further changes are required, the pull request is assigned to one team member.
+If significant changes are requested or reviewers cannot come to a conclusion in reasonable time, the pull request is [marked as draft](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/changing-the-stage-of-a-pull-request#converting-a-pull-request-to-a-draft).
### Assigned
diff --git a/scripts/install-darwin-multi-user.sh b/scripts/install-darwin-multi-user.sh
index 5111a5dde..0326d3415 100644
--- a/scripts/install-darwin-multi-user.sh
+++ b/scripts/install-darwin-multi-user.sh
@@ -100,7 +100,7 @@ poly_extra_try_me_commands() {
poly_configure_nix_daemon_service() {
task "Setting up the nix-daemon LaunchDaemon"
_sudo "to set up the nix-daemon as a LaunchDaemon" \
- /bin/cp -f "/nix/var/nix/profiles/default$NIX_DAEMON_DEST" "$NIX_DAEMON_DEST"
+ /usr/bin/install -m -rw-r--r-- "/nix/var/nix/profiles/default$NIX_DAEMON_DEST" "$NIX_DAEMON_DEST"
_sudo "to load the LaunchDaemon plist for nix-daemon" \
launchctl load /Library/LaunchDaemons/org.nixos.nix-daemon.plist
diff --git a/scripts/install-multi-user.sh b/scripts/install-multi-user.sh
index c11783158..656769d84 100644
--- a/scripts/install-multi-user.sh
+++ b/scripts/install-multi-user.sh
@@ -700,6 +700,10 @@ EOF
}
welcome_to_nix() {
+ local -r NIX_UID_RANGES="${NIX_FIRST_BUILD_UID}..$((NIX_FIRST_BUILD_UID + NIX_USER_COUNT - 1))"
+ local -r RANGE_TEXT=$(echo -ne "${BLUE}(uids [${NIX_UID_RANGES}])${ESC}")
+ local -r GROUP_TEXT=$(echo -ne "${BLUE}(gid ${NIX_BUILD_GROUP_ID})${ESC}")
+
ok "Welcome to the Multi-User Nix Installation"
cat <(store),
- profile2, storePath));
+ createGeneration(*store, profile2, storePath));
}
void MixProfile::updateProfile(const BuiltPaths & buildables)
diff --git a/src/libcmd/common-eval-args.cc b/src/libcmd/common-eval-args.cc
index ff3abd534..7f97364a1 100644
--- a/src/libcmd/common-eval-args.cc
+++ b/src/libcmd/common-eval-args.cc
@@ -165,7 +165,7 @@ SourcePath lookupFileArg(EvalState & state, std::string_view s)
{
if (EvalSettings::isPseudoUrl(s)) {
auto storePath = fetchers::downloadTarball(
- state.store, EvalSettings::resolvePseudoUrl(s), "source", false).first.storePath;
+ state.store, EvalSettings::resolvePseudoUrl(s), "source", false).tree.storePath;
return state.rootPath(CanonPath(state.store->toRealPath(storePath)));
}
diff --git a/src/libcmd/installables.cc b/src/libcmd/installables.cc
index a2b882355..10b077fb5 100644
--- a/src/libcmd/installables.cc
+++ b/src/libcmd/installables.cc
@@ -701,7 +701,7 @@ RawInstallablesCommand::RawInstallablesCommand()
{
addFlag({
.longName = "stdin",
- .description = "Read installables from the standard input.",
+ .description = "Read installables from the standard input. No default installable applied.",
.handler = {&readFromStdIn, true}
});
@@ -730,9 +730,9 @@ void RawInstallablesCommand::run(ref<Store> store)
while (std::cin >> word) {
rawInstallables.emplace_back(std::move(word));
}
+ } else {
+ applyDefaultInstallables(rawInstallables);
}
-
- applyDefaultInstallables(rawInstallables);
run(store, std::move(rawInstallables));
}
diff --git a/src/libexpr/eval.hh b/src/libexpr/eval.hh
index d6f4560a5..8e41bdbd0 100644
--- a/src/libexpr/eval.hh
+++ b/src/libexpr/eval.hh
@@ -741,7 +741,8 @@ struct EvalSettings : Config
If set to `true`, the Nix evaluator will not allow access to any
files outside of the Nix search path (as set via the `NIX_PATH`
environment variable or the `-I` option), or to URIs outside of
- `allowed-uri`. The default is `false`.
+ [`allowed-uris`](../command-ref/conf-file.md#conf-allowed-uris).
+ The default is `false`.
)"};
    Setting<bool> pureEval{this, false, "pure-eval",
diff --git a/src/libexpr/parser.y b/src/libexpr/parser.y
index 4d981712a..3b545fd84 100644
--- a/src/libexpr/parser.y
+++ b/src/libexpr/parser.y
@@ -793,7 +793,7 @@ std::pair<bool, std::string> EvalState::resolveSearchPathElem(const SearchPathElem & elem)
if (EvalSettings::isPseudoUrl(elem.second)) {
try {
auto storePath = fetchers::downloadTarball(
- store, EvalSettings::resolvePseudoUrl(elem.second), "source", false).first.storePath;
+ store, EvalSettings::resolvePseudoUrl(elem.second), "source", false).tree.storePath;
res = { true, store->toRealPath(storePath) };
} catch (FileTransferError & e) {
logWarning({
diff --git a/src/libexpr/primops.cc b/src/libexpr/primops.cc
index 42efca4e7..5b2f7e8b7 100644
--- a/src/libexpr/primops.cc
+++ b/src/libexpr/primops.cc
@@ -6,7 +6,7 @@
#include "globals.hh"
#include "json-to-value.hh"
#include "names.hh"
-#include "references.hh"
+#include "path-references.hh"
#include "store-api.hh"
#include "util.hh"
#include "value-to-json.hh"
@@ -4058,18 +4058,6 @@ static RegisterPrimOp primop_splitVersion({
RegisterPrimOp::PrimOps * RegisterPrimOp::primOps;
-RegisterPrimOp::RegisterPrimOp(std::string name, size_t arity, PrimOpFun fun)
-{
- if (!primOps) primOps = new PrimOps;
- primOps->push_back({
- .name = name,
- .args = {},
- .arity = arity,
- .fun = fun,
- });
-}
-
-
RegisterPrimOp::RegisterPrimOp(Info && info)
{
if (!primOps) primOps = new PrimOps;
diff --git a/src/libexpr/primops.hh b/src/libexpr/primops.hh
index 4ae73fe1f..73b7b866c 100644
--- a/src/libexpr/primops.hh
+++ b/src/libexpr/primops.hh
@@ -28,11 +28,6 @@ struct RegisterPrimOp
* will get called during EvalState initialization, so there
* may be primops not yet added and builtins is not yet sorted.
*/
- RegisterPrimOp(
- std::string name,
- size_t arity,
- PrimOpFun fun);
-
RegisterPrimOp(Info && info);
};
diff --git a/src/libexpr/primops/context.cc b/src/libexpr/primops/context.cc
index 07bf400cf..8b3468009 100644
--- a/src/libexpr/primops/context.cc
+++ b/src/libexpr/primops/context.cc
@@ -12,7 +12,11 @@ static void prim_unsafeDiscardStringContext(EvalState & state, const PosIdx pos,
v.mkString(*s);
}
-static RegisterPrimOp primop_unsafeDiscardStringContext("__unsafeDiscardStringContext", 1, prim_unsafeDiscardStringContext);
+static RegisterPrimOp primop_unsafeDiscardStringContext({
+ .name = "__unsafeDiscardStringContext",
+ .arity = 1,
+ .fun = prim_unsafeDiscardStringContext
+});
static void prim_hasContext(EvalState & state, const PosIdx pos, Value * * args, Value & v)
@@ -22,7 +26,16 @@ static void prim_hasContext(EvalState & state, const PosIdx pos, Value * * args,
v.mkBool(!context.empty());
}
-static RegisterPrimOp primop_hasContext("__hasContext", 1, prim_hasContext);
+static RegisterPrimOp primop_hasContext({
+ .name = "__hasContext",
+ .args = {"s"},
+ .doc = R"(
+ Return `true` if string *s* has a non-empty context. The
+ context can be obtained with
+ [`getContext`](#builtins-getContext).
+ )",
+ .fun = prim_hasContext
+});
/* Sometimes we want to pass a derivation path (i.e. pkg.drvPath) to a
@@ -51,7 +64,11 @@ static void prim_unsafeDiscardOutputDependency(EvalState & state, const PosIdx p
v.mkString(*s, context2);
}
-static RegisterPrimOp primop_unsafeDiscardOutputDependency("__unsafeDiscardOutputDependency", 1, prim_unsafeDiscardOutputDependency);
+static RegisterPrimOp primop_unsafeDiscardOutputDependency({
+ .name = "__unsafeDiscardOutputDependency",
+ .arity = 1,
+ .fun = prim_unsafeDiscardOutputDependency
+});
/* Extract the context of a string as a structured Nix value.
@@ -119,7 +136,30 @@ static void prim_getContext(EvalState & state, const PosIdx pos, Value * * args,
v.mkAttrs(attrs);
}
-static RegisterPrimOp primop_getContext("__getContext", 1, prim_getContext);
+static RegisterPrimOp primop_getContext({
+ .name = "__getContext",
+ .args = {"s"},
+ .doc = R"(
+ Return the string context of *s*.
+
+ The string context tracks references to derivations within a string.
+ It is represented as an attribute set of [store derivation](@docroot@/glossary.md#gloss-store-derivation) paths mapping to output names.
+
+ Using [string interpolation](@docroot@/language/string-interpolation.md) on a derivation will add that derivation to the string context.
+ For example,
+
+ ```nix
+ builtins.getContext "${derivation { name = "a"; builder = "b"; system = "c"; }}"
+ ```
+
+ evaluates to
+
+ ```
+ { "/nix/store/arhvjaf6zmlyn8vh8fgn55rpwnxq0n7l-a.drv" = { outputs = [ "out" ]; }; }
+ ```
+ )",
+ .fun = prim_getContext
+});
/* Append the given context to a given string.
@@ -192,6 +232,10 @@ static void prim_appendContext(EvalState & state, const PosIdx pos, Value * * ar
v.mkString(orig, context);
}
-static RegisterPrimOp primop_appendContext("__appendContext", 2, prim_appendContext);
+static RegisterPrimOp primop_appendContext({
+ .name = "__appendContext",
+ .arity = 2,
+ .fun = prim_appendContext
+});
}
diff --git a/src/libexpr/primops/fetchMercurial.cc b/src/libexpr/primops/fetchMercurial.cc
index 2c0d98e74..322692b52 100644
--- a/src/libexpr/primops/fetchMercurial.cc
+++ b/src/libexpr/primops/fetchMercurial.cc
@@ -88,6 +88,10 @@ static void prim_fetchMercurial(EvalState & state, const PosIdx pos, Value * * a
state.allowPath(tree.storePath);
}
-static RegisterPrimOp r_fetchMercurial("fetchMercurial", 1, prim_fetchMercurial);
+static RegisterPrimOp r_fetchMercurial({
+ .name = "fetchMercurial",
+ .arity = 1,
+ .fun = prim_fetchMercurial
+});
}
diff --git a/src/libexpr/primops/fetchTree.cc b/src/libexpr/primops/fetchTree.cc
index fe880aaa8..1d23ef53b 100644
--- a/src/libexpr/primops/fetchTree.cc
+++ b/src/libexpr/primops/fetchTree.cc
@@ -194,7 +194,11 @@ static void prim_fetchTree(EvalState & state, const PosIdx pos, Value * * args,
}
// FIXME: document
-static RegisterPrimOp primop_fetchTree("fetchTree", 1, prim_fetchTree);
+static RegisterPrimOp primop_fetchTree({
+ .name = "fetchTree",
+ .arity = 1,
+ .fun = prim_fetchTree
+});
static void fetch(EvalState & state, const PosIdx pos, Value * * args, Value & v,
const std::string & who, bool unpack, std::string name)
@@ -262,7 +266,7 @@ static void fetch(EvalState & state, const PosIdx pos, Value * * args, Value & v
// https://github.com/NixOS/nix/issues/4313
auto storePath =
unpack
- ? fetchers::downloadTarball(state.store, *url, name, (bool) expectedHash).first.storePath
+ ? fetchers::downloadTarball(state.store, *url, name, (bool) expectedHash).tree.storePath
: fetchers::downloadFile(state.store, *url, name, (bool) expectedHash).storePath;
if (expectedHash) {
diff --git a/src/libexpr/primops/fromTOML.cc b/src/libexpr/primops/fromTOML.cc
index e2a8b3c3a..2f4d4022e 100644
--- a/src/libexpr/primops/fromTOML.cc
+++ b/src/libexpr/primops/fromTOML.cc
@@ -90,6 +90,24 @@ static void prim_fromTOML(EvalState & state, const PosIdx pos, Value * * args, V
}
}
-static RegisterPrimOp primop_fromTOML("fromTOML", 1, prim_fromTOML);
+static RegisterPrimOp primop_fromTOML({
+ .name = "fromTOML",
+ .args = {"e"},
+ .doc = R"(
+ Convert a TOML string to a Nix value. For example,
+
+ ```nix
+ builtins.fromTOML ''
+ x=1
+ s="a"
+ [table]
+ y=2
+ ''
+ ```
+
+ returns the value `{ s = "a"; table = { y = 2; }; x = 1; }`.
+ )",
+ .fun = prim_fromTOML
+});
}
diff --git a/src/libfetchers/attrs.hh b/src/libfetchers/attrs.hh
index 1a14bb023..9f885a793 100644
--- a/src/libfetchers/attrs.hh
+++ b/src/libfetchers/attrs.hh
@@ -2,6 +2,7 @@
///@file
#include "types.hh"
+#include "hash.hh"
#include
diff --git a/src/libfetchers/fetchers.cc b/src/libfetchers/fetchers.cc
index 91db3a9eb..2860c1ceb 100644
--- a/src/libfetchers/fetchers.cc
+++ b/src/libfetchers/fetchers.cc
@@ -159,6 +159,12 @@ std::pair<Tree, Input> Input::fetch(ref<Store> store) const
input.to_string(), *prevLastModified);
}
+ if (auto prevRev = getRev()) {
+ if (input.getRev() != prevRev)
+ throw Error("'rev' attribute mismatch in input '%s', expected %s",
+ input.to_string(), prevRev->gitRev());
+ }
+
if (auto prevRevCount = getRevCount()) {
if (input.getRevCount() != prevRevCount)
throw Error("'revCount' attribute mismatch in input '%s', expected %d",
diff --git a/src/libfetchers/fetchers.hh b/src/libfetchers/fetchers.hh
index 498ad7e4d..d0738f619 100644
--- a/src/libfetchers/fetchers.hh
+++ b/src/libfetchers/fetchers.hh
@@ -158,6 +158,7 @@ struct DownloadFileResult
StorePath storePath;
std::string etag;
std::string effectiveUrl;
+    std::optional<std::string> immutableUrl;
};
DownloadFileResult downloadFile(
@@ -167,7 +168,14 @@ DownloadFileResult downloadFile(
bool locked,
const Headers & headers = {});
-std::pair<Tree, time_t> downloadTarball(
+struct DownloadTarballResult
+{
+ Tree tree;
+ time_t lastModified;
+    std::optional<std::string> immutableUrl;
+};
+
+DownloadTarballResult downloadTarball(
    ref<Store> store,
const std::string & url,
const std::string & name,
diff --git a/src/libfetchers/github.cc b/src/libfetchers/github.cc
index 6c1d573ce..80598e7f8 100644
--- a/src/libfetchers/github.cc
+++ b/src/libfetchers/github.cc
@@ -207,21 +207,21 @@ struct GitArchiveInputScheme : InputScheme
auto url = getDownloadUrl(input);
- auto [tree, lastModified] = downloadTarball(store, url.url, input.getName(), true, url.headers);
+ auto result = downloadTarball(store, url.url, input.getName(), true, url.headers);
- input.attrs.insert_or_assign("lastModified", uint64_t(lastModified));
+ input.attrs.insert_or_assign("lastModified", uint64_t(result.lastModified));
getCache()->add(
store,
lockedAttrs,
{
{"rev", rev->gitRev()},
- {"lastModified", uint64_t(lastModified)}
+ {"lastModified", uint64_t(result.lastModified)}
},
- tree.storePath,
+ result.tree.storePath,
true);
- return {std::move(tree.storePath), input};
+ return {result.tree.storePath, input};
}
};
diff --git a/src/libfetchers/tarball.cc b/src/libfetchers/tarball.cc
index 96fe5faca..e42aca6db 100644
--- a/src/libfetchers/tarball.cc
+++ b/src/libfetchers/tarball.cc
@@ -32,7 +32,8 @@ DownloadFileResult downloadFile(
return {
.storePath = std::move(cached->storePath),
.etag = getStrAttr(cached->infoAttrs, "etag"),
- .effectiveUrl = getStrAttr(cached->infoAttrs, "url")
+ .effectiveUrl = getStrAttr(cached->infoAttrs, "url"),
+ .immutableUrl = maybeGetStrAttr(cached->infoAttrs, "immutableUrl"),
};
};
@@ -55,12 +56,14 @@ DownloadFileResult downloadFile(
}
// FIXME: write to temporary file.
-
Attrs infoAttrs({
{"etag", res.etag},
{"url", res.effectiveUri},
});
+ if (res.immutableUrl)
+ infoAttrs.emplace("immutableUrl", *res.immutableUrl);
+
std::optional storePath;
if (res.cached) {
@@ -111,10 +114,11 @@ DownloadFileResult downloadFile(
.storePath = std::move(*storePath),
.etag = res.etag,
.effectiveUrl = res.effectiveUri,
+ .immutableUrl = res.immutableUrl,
};
}
-std::pair<Tree, time_t> downloadTarball(
+DownloadTarballResult downloadTarball(
    ref<Store> store,
const std::string & url,
const std::string & name,
@@ -131,8 +135,9 @@ std::pair downloadTarball(
if (cached && !cached->expired)
return {
- Tree { .actualPath = store->toRealPath(cached->storePath), .storePath = std::move(cached->storePath) },
- getIntAttr(cached->infoAttrs, "lastModified")
+ .tree = Tree { .actualPath = store->toRealPath(cached->storePath), .storePath = std::move(cached->storePath) },
+ .lastModified = (time_t) getIntAttr(cached->infoAttrs, "lastModified"),
+ .immutableUrl = maybeGetStrAttr(cached->infoAttrs, "immutableUrl"),
};
auto res = downloadFile(store, url, name, locked, headers);
@@ -160,6 +165,9 @@ std::pair downloadTarball(
{"etag", res.etag},
});
+ if (res.immutableUrl)
+ infoAttrs.emplace("immutableUrl", *res.immutableUrl);
+
getCache()->add(
store,
inAttrs,
@@ -168,8 +176,9 @@ std::pair downloadTarball(
locked);
return {
- Tree { .actualPath = store->toRealPath(*unpackedStorePath), .storePath = std::move(*unpackedStorePath) },
- lastModified,
+ .tree = Tree { .actualPath = store->toRealPath(*unpackedStorePath), .storePath = std::move(*unpackedStorePath) },
+ .lastModified = lastModified,
+ .immutableUrl = res.immutableUrl,
};
}
@@ -189,21 +198,33 @@ struct CurlInputScheme : InputScheme
virtual bool isValidURL(const ParsedURL & url) const = 0;
-    std::optional<Input> inputFromURL(const ParsedURL & url) const override
+    std::optional<Input> inputFromURL(const ParsedURL & _url) const override
{
- if (!isValidURL(url))
+ if (!isValidURL(_url))
return std::nullopt;
Input input;
- auto urlWithoutApplicationScheme = url;
- urlWithoutApplicationScheme.scheme = parseUrlScheme(url.scheme).transport;
+ auto url = _url;
+
+ url.scheme = parseUrlScheme(url.scheme).transport;
- input.attrs.insert_or_assign("type", inputType());
- input.attrs.insert_or_assign("url", urlWithoutApplicationScheme.to_string());
auto narHash = url.query.find("narHash");
if (narHash != url.query.end())
input.attrs.insert_or_assign("narHash", narHash->second);
+
+ if (auto i = get(url.query, "rev"))
+ input.attrs.insert_or_assign("rev", *i);
+
+ if (auto i = get(url.query, "revCount"))
+ if (auto n = string2Int(*i))
+ input.attrs.insert_or_assign("revCount", *n);
+
+ url.query.erase("rev");
+ url.query.erase("revCount");
+
+ input.attrs.insert_or_assign("type", inputType());
+ input.attrs.insert_or_assign("url", url.to_string());
return input;
}
@@ -212,7 +233,8 @@ struct CurlInputScheme : InputScheme
auto type = maybeGetStrAttr(attrs, "type");
if (type != inputType()) return {};
-        std::set<std::string> allowedNames = {"type", "url", "narHash", "name", "unpack"};
+ // FIXME: some of these only apply to TarballInputScheme.
+        std::set<std::string> allowedNames = {"type", "url", "narHash", "name", "unpack", "rev", "revCount"};
for (auto & [name, value] : attrs)
if (!allowedNames.count(name))
throw Error("unsupported %s input attribute '%s'", *type, name);
@@ -275,10 +297,22 @@ struct TarballInputScheme : CurlInputScheme
: hasTarballExtension(url.path));
}
-    std::pair<StorePath, Input> fetch(ref<Store> store, const Input & input) override
+    std::pair<StorePath, Input> fetch(ref<Store> store, const Input & _input) override
{
- auto tree = downloadTarball(store, getStrAttr(input.attrs, "url"), input.getName(), false).first;
- return {std::move(tree.storePath), input};
+ Input input(_input);
+ auto url = getStrAttr(input.attrs, "url");
+ auto result = downloadTarball(store, url, input.getName(), false);
+
+ if (result.immutableUrl) {
+ auto immutableInput = Input::fromURL(*result.immutableUrl);
+ // FIXME: would be nice to support arbitrary flakerefs
+ // here, e.g. git flakes.
+ if (immutableInput.getType() != "tarball")
+ throw Error("tarball 'Link' headers that redirect to non-tarball URLs are not supported");
+ input = immutableInput;
+ }
+
+ return {result.tree.storePath, std::move(input)};
}
};
diff --git a/src/libstore/build/local-derivation-goal.cc b/src/libstore/build/local-derivation-goal.cc
index 9f685cb70..aacd9f717 100644
--- a/src/libstore/build/local-derivation-goal.cc
+++ b/src/libstore/build/local-derivation-goal.cc
@@ -4,7 +4,7 @@
#include "worker.hh"
#include "builtins.hh"
#include "builtins/buildenv.hh"
-#include "references.hh"
+#include "path-references.hh"
#include "finally.hh"
#include "util.hh"
#include "archive.hh"
@@ -2389,18 +2389,21 @@ SingleDrvOutputs LocalDerivationGoal::registerOutputs()
continue;
auto references = *referencesOpt;
- auto rewriteOutput = [&]() {
+ auto rewriteOutput = [&](const StringMap & rewrites) {
/* Apply hash rewriting if necessary. */
- if (!outputRewrites.empty()) {
+ if (!rewrites.empty()) {
debug("rewriting hashes in '%1%'; cross fingers", actualPath);
- /* FIXME: this is in-memory. */
- StringSink sink;
- dumpPath(actualPath, sink);
+ /* FIXME: Is this actually streaming? */
+ auto source = sinkToSource([&](Sink & nextSink) {
+ RewritingSink rsink(rewrites, nextSink);
+ dumpPath(actualPath, rsink);
+ rsink.flush();
+ });
+ Path tmpPath = actualPath + ".tmp";
+ restorePath(tmpPath, *source);
deletePath(actualPath);
- sink.s = rewriteStrings(sink.s, outputRewrites);
- StringSource source(sink.s);
- restorePath(actualPath, source);
+ movePath(tmpPath, actualPath);
/* FIXME: set proper permissions in restorePath() so
we don't have to do another traversal. */
@@ -2449,7 +2452,7 @@ SingleDrvOutputs LocalDerivationGoal::registerOutputs()
"since recursive hashing is not enabled (one of outputHashMode={flat,text} is true)",
actualPath);
}
- rewriteOutput();
+ rewriteOutput(outputRewrites);
/* FIXME optimize and deduplicate with addToStore */
std::string oldHashPart { scratchPath->hashPart() };
HashModuloSink caSink { outputHash.hashType, oldHashPart };
@@ -2487,16 +2490,14 @@ SingleDrvOutputs LocalDerivationGoal::registerOutputs()
Hash::dummy,
};
if (*scratchPath != newInfo0.path) {
- // Also rewrite the output path
- auto source = sinkToSource([&](Sink & nextSink) {
- RewritingSink rsink2(oldHashPart, std::string(newInfo0.path.hashPart()), nextSink);
- dumpPath(actualPath, rsink2);
- rsink2.flush();
- });
- Path tmpPath = actualPath + ".tmp";
- restorePath(tmpPath, *source);
- deletePath(actualPath);
- movePath(tmpPath, actualPath);
+ // If the path has some self-references, we need to rewrite
+ // them.
+ // (note that this doesn't invalidate the ca hash we calculated
+ // above because it's computed *modulo the self-references*, so
+ // it already takes this rewrite into account).
+ rewriteOutput(
+ StringMap{{oldHashPart,
+ std::string(newInfo0.path.hashPart())}});
}
HashResult narHashAndSize = hashPath(htSHA256, actualPath);
@@ -2518,7 +2519,7 @@ SingleDrvOutputs LocalDerivationGoal::registerOutputs()
outputRewrites.insert_or_assign(
std::string { scratchPath->hashPart() },
std::string { requiredFinalPath.hashPart() });
- rewriteOutput();
+ rewriteOutput(outputRewrites);
auto narHashAndSize = hashPath(htSHA256, actualPath);
ValidPathInfo newInfo0 { requiredFinalPath, narHashAndSize.first };
newInfo0.narSize = narHashAndSize.second;
diff --git a/src/libstore/filetransfer.cc b/src/libstore/filetransfer.cc
index 2346accbe..38b691279 100644
--- a/src/libstore/filetransfer.cc
+++ b/src/libstore/filetransfer.cc
@@ -186,9 +186,9 @@ struct curlFileTransfer : public FileTransfer
size_t realSize = size * nmemb;
std::string line((char *) contents, realSize);
printMsg(lvlVomit, "got header for '%s': %s", request.uri, trim(line));
+
static std::regex statusLine("HTTP/[^ ]+ +[0-9]+(.*)", std::regex::extended | std::regex::icase);
- std::smatch match;
- if (std::regex_match(line, match, statusLine)) {
+ if (std::smatch match; std::regex_match(line, match, statusLine)) {
result.etag = "";
result.data.clear();
result.bodySize = 0;
@@ -196,9 +196,11 @@ struct curlFileTransfer : public FileTransfer
acceptRanges = false;
encoding = "";
} else {
+
auto i = line.find(':');
if (i != std::string::npos) {
std::string name = toLower(trim(line.substr(0, i)));
+
if (name == "etag") {
result.etag = trim(line.substr(i + 1));
/* Hack to work around a GitHub bug: it sends
@@ -212,10 +214,22 @@ struct curlFileTransfer : public FileTransfer
debug("shutting down on 200 HTTP response with expected ETag");
return 0;
}
- } else if (name == "content-encoding")
+ }
+
+ else if (name == "content-encoding")
encoding = trim(line.substr(i + 1));
+
else if (name == "accept-ranges" && toLower(trim(line.substr(i + 1))) == "bytes")
acceptRanges = true;
+
+ else if (name == "link" || name == "x-amz-meta-link") {
+ auto value = trim(line.substr(i + 1));
+ static std::regex linkRegex("<([^>]*)>; rel=\"immutable\"", std::regex::extended | std::regex::icase);
+ if (std::smatch match; std::regex_match(value, match, linkRegex))
+ result.immutableUrl = match.str(1);
+ else
+ debug("got invalid link header '%s'", value);
+ }
}
}
return realSize;
@@ -345,7 +359,7 @@ struct curlFileTransfer : public FileTransfer
{
auto httpStatus = getHTTPStatus();
- char * effectiveUriCStr;
+ char * effectiveUriCStr = nullptr;
curl_easy_getinfo(req, CURLINFO_EFFECTIVE_URL, &effectiveUriCStr);
if (effectiveUriCStr)
result.effectiveUri = effectiveUriCStr;
diff --git a/src/libstore/filetransfer.hh b/src/libstore/filetransfer.hh
index 378c6ff78..a3b0dde1f 100644
--- a/src/libstore/filetransfer.hh
+++ b/src/libstore/filetransfer.hh
@@ -80,6 +80,10 @@ struct FileTransferResult
std::string effectiveUri;
std::string data;
uint64_t bodySize = 0;
+ /* An "immutable" URL for this resource (i.e. one whose contents
+       will never change), as returned by the `Link: <url>;
+ rel="immutable"` header. */
+    std::optional<std::string> immutableUrl;
};
class Store;
diff --git a/src/libstore/globals.cc b/src/libstore/globals.cc
index 32e9a6ea9..d53377239 100644
--- a/src/libstore/globals.cc
+++ b/src/libstore/globals.cc
@@ -77,7 +77,30 @@ Settings::Settings()
    allowedImpureHostPrefixes = tokenizeString<StringSet>("/System/Library /usr/lib /dev /bin/sh");
#endif
- buildHook = getSelfExe().value_or("nix") + " __build-remote";
+ /* Set the build hook location
+
+ For builds we perform a self-invocation, so Nix has to be self-aware.
+ That is, it has to know where it is installed. We don't think it's sentient.
+
+ Normally, nix is installed according to `nixBinDir`, which is set at compile time,
+ but can be overridden. This makes for a great default that works even if this
+ code is linked as a library into some other program whose main is not aware
+ that it might need to be a build remote hook.
+
+ However, it may not have been installed at all. For example, if it's a static build,
+ there's a good chance that it has been moved out of its installation directory.
+ That makes `nixBinDir` useless. Instead, we'll query the OS for the path to the
+ current executable, using `getSelfExe()`.
+
+ As a last resort, we resort to `PATH`. Hopefully we find a `nix` there that's compatible.
+ If you're porting Nix to a new platform, that might be good enough for a while, but
+ you'll want to improve `getSelfExe()` to work on your platform.
+ */
+ std::string nixExePath = nixBinDir + "/nix";
+ if (!pathExists(nixExePath)) {
+ nixExePath = getSelfExe().value_or("nix");
+ }
+ buildHook = nixExePath + " __build-remote";
}
void loadConfFile()
diff --git a/src/libstore/globals.hh b/src/libstore/globals.hh
index 19fb96448..d41677d32 100644
--- a/src/libstore/globals.hh
+++ b/src/libstore/globals.hh
@@ -710,20 +710,19 @@ public:
Strings{"https://cache.nixos.org/"},
"substituters",
R"(
- A list of [URLs of Nix stores](@docroot@/command-ref/new-cli/nix3-help-stores.md#store-url-format)
- to be used as substituters, separated by whitespace.
- Substituters are tried based on their Priority value, which each substituter can set
- independently. Lower value means higher priority.
- The default is `https://cache.nixos.org`, with a Priority of 40.
+ A list of [URLs of Nix stores](@docroot@/command-ref/new-cli/nix3-help-stores.md#store-url-format) to be used as substituters, separated by whitespace.
+      A substituter is an additional [store](@docroot@/glossary.md#gloss-store) from which Nix can obtain [store objects](@docroot@/glossary.md#gloss-store-object) instead of building them.
- At least one of the following conditions must be met for Nix to use
- a substituter:
+ Substituters are tried based on their priority value, which each substituter can set independently.
+ Lower value means higher priority.
+ The default is `https://cache.nixos.org`, which has a priority of 40.
+
+ At least one of the following conditions must be met for Nix to use a substituter:
- the substituter is in the [`trusted-substituters`](#conf-trusted-substituters) list
- the user calling Nix is in the [`trusted-users`](#conf-trusted-users) list
- In addition, each store path should be trusted as described
- in [`trusted-public-keys`](#conf-trusted-public-keys)
+ In addition, each store path should be trusted as described in [`trusted-public-keys`](#conf-trusted-public-keys)
)",
{"binary-caches"}};
diff --git a/src/libstore/local-store.hh b/src/libstore/local-store.hh
index a36ae162a..ae548cca9 100644
--- a/src/libstore/local-store.hh
+++ b/src/libstore/local-store.hh
@@ -52,14 +52,15 @@ struct LocalStoreConfig : virtual LocalFSStoreConfig
R"(
Allow this store to be opened when its [database](@docroot@/glossary.md#gloss-nix-database) is on a read-only filesystem.
- Normally Nix will attempt to open the store database in read-write mode, even for querying (when write access is not needed).
- This causes it to fail if the database is on a read-only filesystem.
+ Normally Nix will attempt to open the store database in read-write mode, even for querying (when write access is not needed), causing it to fail if the database is on a read-only filesystem.
Enable read-only mode to disable locking and open the SQLite database with the [`immutable` parameter](https://www.sqlite.org/c3ref/open.html) set.
- **Warning**
- Do not use this unless the filesystem is read-only.
- Using it when the filesystem is writable can cause incorrect query results or corruption errors if the database is changed by another process.
+ > **Warning**
+ > Do not use this unless the filesystem is read-only.
+ >
+ > Using it when the filesystem is writable can cause incorrect query results or corruption errors if the database is changed by another process.
+ > While the filesystem the database resides on might appear to be read-only, consider whether another user or system might have write access to it.
)"};
const std::string name() override { return "Local Store"; }
diff --git a/src/libstore/path-references.cc b/src/libstore/path-references.cc
new file mode 100644
index 000000000..33cf66ce3
--- /dev/null
+++ b/src/libstore/path-references.cc
@@ -0,0 +1,73 @@
+#include "path-references.hh"
+#include "hash.hh"
+#include "util.hh"
+#include "archive.hh"
+
+#include