diff --git a/doc/manual/src/advanced-topics/diff-hook.md b/doc/manual/src/advanced-topics/diff-hook.md index 4a742c160..207aad3b8 100644 --- a/doc/manual/src/advanced-topics/diff-hook.md +++ b/doc/manual/src/advanced-topics/diff-hook.md @@ -48,13 +48,13 @@ If the build passes and is deterministic, Nix will exit with a status code of 0: ```console -$ nix-build ./deterministic.nix -A stable +$ nix-build ./deterministic.nix --attr stable this derivation will be built: /nix/store/z98fasz2jqy9gs0xbvdj939p27jwda38-stable.drv building '/nix/store/z98fasz2jqy9gs0xbvdj939p27jwda38-stable.drv'... /nix/store/yyxlzw3vqaas7wfp04g0b1xg51f2czgq-stable -$ nix-build ./deterministic.nix -A stable --check +$ nix-build ./deterministic.nix --attr stable --check checking outputs of '/nix/store/z98fasz2jqy9gs0xbvdj939p27jwda38-stable.drv'... /nix/store/yyxlzw3vqaas7wfp04g0b1xg51f2czgq-stable ``` @@ -63,13 +63,13 @@ If the build is not deterministic, Nix will exit with a status code of 1: ```console -$ nix-build ./deterministic.nix -A unstable +$ nix-build ./deterministic.nix --attr unstable this derivation will be built: /nix/store/cgl13lbj1w368r5z8gywipl1ifli7dhk-unstable.drv building '/nix/store/cgl13lbj1w368r5z8gywipl1ifli7dhk-unstable.drv'... /nix/store/krpqk0l9ib0ibi1d2w52z293zw455cap-unstable -$ nix-build ./deterministic.nix -A unstable --check +$ nix-build ./deterministic.nix --attr unstable --check checking outputs of '/nix/store/cgl13lbj1w368r5z8gywipl1ifli7dhk-unstable.drv'... error: derivation '/nix/store/cgl13lbj1w368r5z8gywipl1ifli7dhk-unstable.drv' may not be deterministic: output '/nix/store/krpqk0l9ib0ibi1d2w52z293zw455cap-unstable' differs @@ -89,7 +89,7 @@ Using `--check` with `--keep-failed` will cause Nix to keep the second build's output in a special, `.check` path: ```console -$ nix-build ./deterministic.nix -A unstable --check --keep-failed +$ nix-build ./deterministic.nix --attr unstable --check --keep-failed checking outputs of '/nix/store/cgl13lbj1w368r5z8gywipl1ifli7dhk-unstable.drv'... note: keeping build directory '/tmp/nix-build-unstable.drv-0' error: derivation '/nix/store/cgl13lbj1w368r5z8gywipl1ifli7dhk-unstable.drv' may diff --git a/doc/manual/src/advanced-topics/post-build-hook.md b/doc/manual/src/advanced-topics/post-build-hook.md index 1479cc3a4..a251dec48 100644 --- a/doc/manual/src/advanced-topics/post-build-hook.md +++ b/doc/manual/src/advanced-topics/post-build-hook.md @@ -90,7 +90,7 @@ Then, restart the `nix-daemon`. Build any derivation, for example: ```console -$ nix-build -E '(import {}).writeText "example" (builtins.toString builtins.currentTime)' +$ nix-build --expr '(import {}).writeText "example" (builtins.toString builtins.currentTime)' this derivation will be built: /nix/store/s4pnfbkalzy5qz57qs6yybna8wylkig6-example.drv building '/nix/store/s4pnfbkalzy5qz57qs6yybna8wylkig6-example.drv'... diff --git a/doc/manual/src/command-ref/env-common.md b/doc/manual/src/command-ref/env-common.md index bf00be84f..b4a9bb2a9 100644 --- a/doc/manual/src/command-ref/env-common.md +++ b/doc/manual/src/command-ref/env-common.md @@ -71,9 +71,12 @@ Most Nix commands interpret the following environment variables: Settings are separated by the newline character. - [`NIX_USER_CONF_FILES`](#env-NIX_USER_CONF_FILES)\ - Overrides the location of the user Nix configuration files to load - from (defaults to the XDG spec locations). The variable is treated - as a list separated by the `:` token. + Overrides the location of the Nix user configuration files to load from. 
+
+  The defaults are the locations according to the [XDG Base Directory Specification].
+  See the [XDG Base Directories](#xdg-base-directories) sub-section for details.
+
+  The variable is treated as a list separated by the `:` token.
 
 - [`TMPDIR`](#env-TMPDIR)\
   Use the specified directory to store temporary files. In particular,
@@ -103,15 +106,19 @@ Most Nix commands interpret the following environment variables:
   384 MiB. Setting it to a low value reduces memory consumption, but
   will increase runtime due to the overhead of garbage collection.
 
-## XDG Base Directory
+## XDG Base Directories
 
-New Nix commands conform to the [XDG Base Directory Specification], and use the following environment variables to determine locations of various state and configuration files:
+Nix follows the [XDG Base Directory Specification].
+
+For backwards compatibility, Nix commands will follow the standard only when [`use-xdg-base-directories`] is enabled.
+[New Nix commands](@docroot@/command-ref/new-cli/nix.md) (experimental) conform to the standard by default.
+
+The following environment variables are used to determine locations of various state and configuration files:
 
 - [`XDG_CONFIG_HOME`]{#env-XDG_CONFIG_HOME} (default `~/.config`)
 - [`XDG_STATE_HOME`]{#env-XDG_STATE_HOME} (default `~/.local/state`)
 - [`XDG_CACHE_HOME`]{#env-XDG_CACHE_HOME} (default `~/.cache`)
 
-Classic Nix commands can also be made to follow this standard using the [`use-xdg-base-directories`] configuration option.
 
 [XDG Base Directory Specification]: https://specifications.freedesktop.org/basedir-spec/basedir-spec-latest.html
-[`use-xdg-base-directories`]: @docroot@/command-ref/conf-file.md#conf-use-xdg-base-directories
\ No newline at end of file
+[`use-xdg-base-directories`]: @docroot@/command-ref/conf-file.md#conf-use-xdg-base-directories
diff --git a/doc/manual/src/command-ref/nix-build.md b/doc/manual/src/command-ref/nix-build.md
index 44de4cf53..f70bbd7f2 100644
--- a/doc/manual/src/command-ref/nix-build.md
+++ b/doc/manual/src/command-ref/nix-build.md
@@ -76,7 +76,7 @@ except for `--arg` and `--attr` / `-A` which are passed to `nix-instantiate`.
 # Examples
 
 ```console
-$ nix-build '<nixpkgs>' -A firefox
+$ nix-build '<nixpkgs>' --attr firefox
 store derivation is /nix/store/qybprl8sz2lc...-firefox-1.5.0.7.drv
 /nix/store/d18hyl92g30l...-firefox-1.5.0.7
 
@@ -91,7 +91,7 @@ If a derivation has multiple outputs, `nix-build` will build the
 default (first) output. You can also build all outputs:
 
 ```console
-$ nix-build '<nixpkgs>' -A openssl.all
+$ nix-build '<nixpkgs>' --attr openssl.all
 ```
 
 This will create a symlink for each output named `result-outputname`.
@@ -101,7 +101,7 @@ outputs `out`, `bin` and `man`, `nix-build` will create symlinks
 specific output:
 
 ```console
-$ nix-build '<nixpkgs>' -A openssl.man
+$ nix-build '<nixpkgs>' --attr openssl.man
 ```
 
 This will create a symlink `result-man`.
@@ -109,7 +109,7 @@ This will create a symlink `result-man`.
 Build a Nix expression given on the command line:
 
 ```console
-$ nix-build -E 'with import <nixpkgs> { }; runCommand "foo" { } "echo bar > $out"'
+$ nix-build --expr 'with import <nixpkgs> { }; runCommand "foo" { } "echo bar > $out"'
 $ cat ./result
 bar
 ```
@@ -118,5 +118,5 @@ Build the GNU Hello package from the latest revision of the master
 branch of Nixpkgs:
 
 ```console
-$ nix-build https://github.com/NixOS/nixpkgs/archive/master.tar.gz -A hello
+$ nix-build https://github.com/NixOS/nixpkgs/archive/master.tar.gz --attr hello
 ```
diff --git a/doc/manual/src/command-ref/nix-channel.md b/doc/manual/src/command-ref/nix-channel.md
index 72d3e422b..a210583ae 100644
--- a/doc/manual/src/command-ref/nix-channel.md
+++ b/doc/manual/src/command-ref/nix-channel.md
@@ -52,6 +52,12 @@ The list of subscribed channels is stored in `~/.nix-channels`.
 
 {{#include ./env-common.md}}
 
+# Files
+
+`nix-channel` operates on the following files.
+
+{{#include ./files/channels.md}}
+
 # Examples
 
 To subscribe to the Nixpkgs channel and install the GNU Hello package:
@@ -59,18 +65,18 @@ To subscribe to the Nixpkgs channel and install the GNU Hello package:
 ```console
 $ nix-channel --add https://nixos.org/channels/nixpkgs-unstable
 $ nix-channel --update
-$ nix-env -iA nixpkgs.hello
+$ nix-env --install --attr nixpkgs.hello
 ```
 
 You can revert channel updates using `--rollback`:
 
 ```console
-$ nix-instantiate --eval -E '(import <nixpkgs> {}).lib.version'
+$ nix-instantiate --eval --expr '(import <nixpkgs> {}).lib.version'
 "14.04.527.0e935f1"
 
 $ nix-channel --rollback
 switching from generation 483 to 482
 
-$ nix-instantiate --eval -E '(import <nixpkgs> {}).lib.version'
+$ nix-instantiate --eval --expr '(import <nixpkgs> {}).lib.version'
 "14.04.526.dbadfad"
 ```
diff --git a/doc/manual/src/command-ref/nix-copy-closure.md b/doc/manual/src/command-ref/nix-copy-closure.md
index 0801e8246..fbf6828da 100644
--- a/doc/manual/src/command-ref/nix-copy-closure.md
+++ b/doc/manual/src/command-ref/nix-copy-closure.md
@@ -87,5 +87,5 @@ environment:
 ```console
 $ nix-copy-closure --from alice@itchy.labs \
   /nix/store/0dj0503hjxy5mbwlafv1rsbdiyx1gkdy-subversion-1.4.4
-$ nix-env -i /nix/store/0dj0503hjxy5mbwlafv1rsbdiyx1gkdy-subversion-1.4.4
+$ nix-env --install /nix/store/0dj0503hjxy5mbwlafv1rsbdiyx1gkdy-subversion-1.4.4
 ```
diff --git a/doc/manual/src/command-ref/nix-env.md b/doc/manual/src/command-ref/nix-env.md
index b4a3dce49..941723216 100644
--- a/doc/manual/src/command-ref/nix-env.md
+++ b/doc/manual/src/command-ref/nix-env.md
@@ -49,7 +49,7 @@ These pages can be viewed offline:
 
 # Selectors
 
-Several commands, such as `nix-env -q` and `nix-env -i`, take a list of
+Several commands, such as `nix-env --query` and `nix-env --install`, take a list of
 arguments that specify the packages on which to operate. These are
 extended regular expressions that must match the entire name of the
 package. (For details on regular expressions, see **regex**(7).) The match is
@@ -83,6 +83,8 @@ match. Here are some examples:
 
 # Files
 
+`nix-env` operates on the following files.
+ {{#include ./files/default-nix-expression.md}} {{#include ./files/profiles.md}} diff --git a/doc/manual/src/command-ref/nix-env/delete-generations.md b/doc/manual/src/command-ref/nix-env/delete-generations.md index 6f0af5384..92cb7f0d9 100644 --- a/doc/manual/src/command-ref/nix-env/delete-generations.md +++ b/doc/manual/src/command-ref/nix-env/delete-generations.md @@ -41,6 +41,6 @@ $ nix-env --delete-generations 30d ``` ```console -$ nix-env -p other_profile --delete-generations old +$ nix-env --profile other_profile --delete-generations old ``` diff --git a/doc/manual/src/command-ref/nix-env/install.md b/doc/manual/src/command-ref/nix-env/install.md index d754accfe..ad179cbc7 100644 --- a/doc/manual/src/command-ref/nix-env/install.md +++ b/doc/manual/src/command-ref/nix-env/install.md @@ -36,7 +36,7 @@ a number of possible ways: then the derivation with the highest version will be installed. You can force the installation of multiple derivations with the same - name by being specific about the versions. For instance, `nix-env -i + name by being specific about the versions. For instance, `nix-env --install gcc-3.3.6 gcc-4.1.1` will install both version of GCC (and will probably cause a user environment conflict\!). @@ -44,7 +44,7 @@ a number of possible ways: paths* that select attributes from the top-level Nix expression. This is faster than using derivation names and unambiguous. To find out the attribute paths of available - packages, use `nix-env -qaP`. + packages, use `nix-env --query --available --attr-path `. - If `--from-profile` *path* is given, *args* is a set of names denoting installed store paths in the profile *path*. This is an @@ -87,7 +87,7 @@ a number of possible ways: - `--remove-all` / `-r`\ Remove all previously installed packages first. This is equivalent - to running `nix-env -e '.*'` first, except that everything happens + to running `nix-env --uninstall '.*'` first, except that everything happens in a single transaction. {{#include ./opt-common.md}} @@ -103,9 +103,9 @@ a number of possible ways: To install a package using a specific attribute path from the active Nix expression: ```console -$ nix-env -iA gcc40mips +$ nix-env --install --attr gcc40mips installing `gcc-4.0.2' -$ nix-env -iA xorg.xorgserver +$ nix-env --install --attr xorg.xorgserver installing `xorg-server-1.2.0' ``` @@ -133,32 +133,32 @@ installing `gcc-3.3.2' To install all derivations in the Nix expression `foo.nix`: ```console -$ nix-env -f ~/foo.nix -i '.*' +$ nix-env --file ~/foo.nix --install '.*' ``` To copy the store path with symbolic name `gcc` from another profile: ```console -$ nix-env -i --from-profile /nix/var/nix/profiles/foo gcc +$ nix-env --install --from-profile /nix/var/nix/profiles/foo gcc ``` To install a specific [store derivation] (typically created by `nix-instantiate`): ```console -$ nix-env -i /nix/store/fibjb1bfbpm5mrsxc4mh2d8n37sxh91i-gcc-3.4.3.drv +$ nix-env --install /nix/store/fibjb1bfbpm5mrsxc4mh2d8n37sxh91i-gcc-3.4.3.drv ``` To install a specific output path: ```console -$ nix-env -i /nix/store/y3cgx0xj1p4iv9x0pnnmdhr8iyg741vk-gcc-3.4.3 +$ nix-env --install /nix/store/y3cgx0xj1p4iv9x0pnnmdhr8iyg741vk-gcc-3.4.3 ``` To install from a Nix expression specified on the command-line: ```console -$ nix-env -f ./foo.nix -i -E \ +$ nix-env --file ./foo.nix --install --expr \ 'f: (f {system = "i686-linux";}).subversionWithJava' ``` @@ -170,7 +170,7 @@ function defined in `./foo.nix`. 
A dry-run tells you which paths will be downloaded or built from source: ```console -$ nix-env -f '' -iA hello --dry-run +$ nix-env --file '' --install --attr hello --dry-run (dry run; not doing anything) installing ‘hello-2.10’ this path will be fetched (0.04 MiB download, 0.19 MiB unpacked): @@ -182,6 +182,6 @@ To install Firefox from the latest revision in the Nixpkgs/NixOS 14.12 channel: ```console -$ nix-env -f https://github.com/NixOS/nixpkgs/archive/nixos-14.12.tar.gz -iA firefox +$ nix-env --file https://github.com/NixOS/nixpkgs/archive/nixos-14.12.tar.gz --install --attr firefox ``` diff --git a/doc/manual/src/command-ref/nix-env/query.md b/doc/manual/src/command-ref/nix-env/query.md index 18f0ee210..c9b4d8513 100644 --- a/doc/manual/src/command-ref/nix-env/query.md +++ b/doc/manual/src/command-ref/nix-env/query.md @@ -137,7 +137,7 @@ derivation is shown unless `--no-name` is specified. To show installed packages: ```console -$ nix-env -q +$ nix-env --query bison-1.875c docbook-xml-4.2 firefox-1.0.4 @@ -149,7 +149,7 @@ ORBit2-2.8.3 To show available packages: ```console -$ nix-env -qa +$ nix-env --query --available firefox-1.0.7 GConf-2.4.0.1 MPlayer-1.0pre7 @@ -160,7 +160,7 @@ ORBit2-2.8.3 To show the status of available packages: ```console -$ nix-env -qas +$ nix-env --query --available --status -P- firefox-1.0.7 (not installed but present) --S GConf-2.4.0.1 (not present, but there is a substitute for fast installation) --S MPlayer-1.0pre3 (i.e., this is not the installed MPlayer, even though the version is the same!) @@ -171,14 +171,14 @@ IP- ORBit2-2.8.3 (installed and by definition present) To show available packages in the Nix expression `foo.nix`: ```console -$ nix-env -f ./foo.nix -qa +$ nix-env --file ./foo.nix --query --available foo-1.2.3 ``` To compare installed versions to what’s available: ```console -$ nix-env -qc +$ nix-env --query --compare-versions ... acrobat-reader-7.0 - ? (package is not available at all) autoconf-2.59 = 2.59 (same version) @@ -189,7 +189,7 @@ firefox-1.0.4 < 1.0.7 (a more recent version is available) To show all packages with “`zip`” in the name: ```console -$ nix-env -qa '.*zip.*' +$ nix-env --query --available '.*zip.*' bzip2-1.0.6 gzip-1.6 zip-3.0 @@ -199,7 +199,7 @@ zip-3.0 To show all packages with “`firefox`” or “`chromium`” in the name: ```console -$ nix-env -qa '.*(firefox|chromium).*' +$ nix-env --query --available '.*(firefox|chromium).*' chromium-37.0.2062.94 chromium-beta-38.0.2125.24 firefox-32.0.3 @@ -210,6 +210,6 @@ firefox-with-plugins-13.0.1 To show all packages in the latest revision of the Nixpkgs repository: ```console -$ nix-env -f https://github.com/NixOS/nixpkgs/archive/master.tar.gz -qa +$ nix-env --file https://github.com/NixOS/nixpkgs/archive/master.tar.gz --query --available ``` diff --git a/doc/manual/src/command-ref/nix-env/set-flag.md b/doc/manual/src/command-ref/nix-env/set-flag.md index 63f0a0ff9..e04b22a91 100644 --- a/doc/manual/src/command-ref/nix-env/set-flag.md +++ b/doc/manual/src/command-ref/nix-env/set-flag.md @@ -46,16 +46,16 @@ To prevent the currently installed Firefox from being upgraded: $ nix-env --set-flag keep true firefox ``` -After this, `nix-env -u` will ignore Firefox. +After this, `nix-env --upgrade ` will ignore Firefox. 
To disable the currently installed Firefox, then install a new Firefox while the old remains part of the profile: ```console -$ nix-env -q +$ nix-env --query firefox-2.0.0.9 (the current one) -$ nix-env --preserve-installed -i firefox-2.0.0.11 +$ nix-env --preserve-installed --install firefox-2.0.0.11 installing `firefox-2.0.0.11' building path(s) `/nix/store/myy0y59q3ig70dgq37jqwg1j0rsapzsl-user-environment' collision between `/nix/store/...-firefox-2.0.0.11/bin/firefox' @@ -65,10 +65,10 @@ collision between `/nix/store/...-firefox-2.0.0.11/bin/firefox' $ nix-env --set-flag active false firefox setting flag on `firefox-2.0.0.9' -$ nix-env --preserve-installed -i firefox-2.0.0.11 +$ nix-env --preserve-installed --install firefox-2.0.0.11 installing `firefox-2.0.0.11' -$ nix-env -q +$ nix-env --query firefox-2.0.0.11 (the enabled one) firefox-2.0.0.9 (the disabled one) ``` diff --git a/doc/manual/src/command-ref/nix-env/set.md b/doc/manual/src/command-ref/nix-env/set.md index c1cf75739..b9950eeab 100644 --- a/doc/manual/src/command-ref/nix-env/set.md +++ b/doc/manual/src/command-ref/nix-env/set.md @@ -25,6 +25,6 @@ The following updates a profile such that its current generation will contain just Firefox: ```console -$ nix-env -p /nix/var/nix/profiles/browser --set firefox +$ nix-env --profile /nix/var/nix/profiles/browser --set firefox ``` diff --git a/doc/manual/src/command-ref/nix-env/switch-generation.md b/doc/manual/src/command-ref/nix-env/switch-generation.md index e550325c4..38cf0534d 100644 --- a/doc/manual/src/command-ref/nix-env/switch-generation.md +++ b/doc/manual/src/command-ref/nix-env/switch-generation.md @@ -27,7 +27,7 @@ Switching will fail if the specified generation does not exist. # Examples ```console -$ nix-env -G 42 +$ nix-env --switch-generation 42 switching from generation 50 to 42 ``` diff --git a/doc/manual/src/command-ref/nix-env/switch-profile.md b/doc/manual/src/command-ref/nix-env/switch-profile.md index b389e4140..5ae2fdced 100644 --- a/doc/manual/src/command-ref/nix-env/switch-profile.md +++ b/doc/manual/src/command-ref/nix-env/switch-profile.md @@ -22,5 +22,5 @@ the symlink `~/.nix-profile` is made to point to *path*. # Examples ```console -$ nix-env -S ~/my-profile +$ nix-env --switch-profile ~/my-profile ``` diff --git a/doc/manual/src/command-ref/nix-env/uninstall.md b/doc/manual/src/command-ref/nix-env/uninstall.md index e9ec8a15e..734cc7675 100644 --- a/doc/manual/src/command-ref/nix-env/uninstall.md +++ b/doc/manual/src/command-ref/nix-env/uninstall.md @@ -24,5 +24,5 @@ designated by the symbolic names *drvnames* are removed. ```console $ nix-env --uninstall gcc -$ nix-env -e '.*' (remove everything) +$ nix-env --uninstall '.*' (remove everything) ``` diff --git a/doc/manual/src/command-ref/nix-env/upgrade.md b/doc/manual/src/command-ref/nix-env/upgrade.md index f88ffcbee..322dfbda2 100644 --- a/doc/manual/src/command-ref/nix-env/upgrade.md +++ b/doc/manual/src/command-ref/nix-env/upgrade.md @@ -76,21 +76,21 @@ version is installed. 
# Examples ```console -$ nix-env --upgrade -A nixpkgs.gcc +$ nix-env --upgrade --attr nixpkgs.gcc upgrading `gcc-3.3.1' to `gcc-3.4' ``` When there are no updates available, nothing will happen: ```console -$ nix-env --upgrade -A nixpkgs.pan +$ nix-env --upgrade --attr nixpkgs.pan ``` Using `-A` is preferred when possible, as it is faster and unambiguous but it is also possible to upgrade to a specific version by matching the derivation name: ```console -$ nix-env -u gcc-3.3.2 --always +$ nix-env --upgrade gcc-3.3.2 --always upgrading `gcc-3.4' to `gcc-3.3.2' ``` @@ -98,7 +98,7 @@ To try to upgrade everything (matching packages based on the part of the derivation name without version): ```console -$ nix-env -u +$ nix-env --upgrade upgrading `hello-2.1.2' to `hello-2.1.3' upgrading `mozilla-1.2' to `mozilla-1.4' ``` diff --git a/doc/manual/src/command-ref/nix-instantiate.md b/doc/manual/src/command-ref/nix-instantiate.md index e55fb2afd..e1b4a3e80 100644 --- a/doc/manual/src/command-ref/nix-instantiate.md +++ b/doc/manual/src/command-ref/nix-instantiate.md @@ -88,7 +88,7 @@ Instantiate [store derivation]s from a Nix expression, and build them using `nix $ nix-instantiate test.nix (instantiate) /nix/store/cigxbmvy6dzix98dxxh9b6shg7ar5bvs-perl-BerkeleyDB-0.26.drv -$ nix-store -r $(nix-instantiate test.nix) (build) +$ nix-store --realise $(nix-instantiate test.nix) (build) ... /nix/store/qhqk4n8ci095g3sdp93x7rgwyh9rdvgk-perl-BerkeleyDB-0.26 (output path) @@ -100,30 +100,30 @@ dr-xr-xr-x 2 eelco users 4096 1970-01-01 01:00 lib You can also give a Nix expression on the command line: ```console -$ nix-instantiate -E 'with import { }; hello' +$ nix-instantiate --expr 'with import { }; hello' /nix/store/j8s4zyv75a724q38cb0r87rlczaiag4y-hello-2.8.drv ``` This is equivalent to: ```console -$ nix-instantiate '' -A hello +$ nix-instantiate '' --attr hello ``` Parsing and evaluating Nix expressions: ```console -$ nix-instantiate --parse -E '1 + 2' +$ nix-instantiate --parse --expr '1 + 2' 1 + 2 ``` ```console -$ nix-instantiate --eval -E '1 + 2' +$ nix-instantiate --eval --expr '1 + 2' 3 ``` ```console -$ nix-instantiate --eval --xml -E '1 + 2' +$ nix-instantiate --eval --xml --expr '1 + 2' @@ -133,7 +133,7 @@ $ nix-instantiate --eval --xml -E '1 + 2' The difference between non-strict and strict evaluation: ```console -$ nix-instantiate --eval --xml -E 'rec { x = "foo"; y = x; }' +$ nix-instantiate --eval --xml --expr 'rec { x = "foo"; y = x; }' ... @@ -148,7 +148,7 @@ Note that `y` is left unevaluated (the XML representation doesn’t attempt to show non-normal forms). ```console -$ nix-instantiate --eval --xml --strict -E 'rec { x = "foo"; y = x; }' +$ nix-instantiate --eval --xml --strict --expr 'rec { x = "foo"; y = x; }' ... diff --git a/doc/manual/src/command-ref/nix-shell.md b/doc/manual/src/command-ref/nix-shell.md index 576e5ba0b..195b72be5 100644 --- a/doc/manual/src/command-ref/nix-shell.md +++ b/doc/manual/src/command-ref/nix-shell.md @@ -89,7 +89,7 @@ All options not listed here are passed to `nix-store - `--packages` / `-p` *packages*…\ Set up an environment in which the specified packages are present. The command line arguments are interpreted as attribute names inside - the Nix Packages collection. Thus, `nix-shell -p libjpeg openjdk` + the Nix Packages collection. Thus, `nix-shell --packages libjpeg openjdk` will start a shell in which the packages denoted by the attribute names `libjpeg` and `openjdk` are present. 
@@ -118,7 +118,7 @@ To build the dependencies of the package Pan, and start an interactive shell in which to build it: ```console -$ nix-shell '' -A pan +$ nix-shell '' --attr pan [nix-shell]$ eval ${unpackPhase:-unpackPhase} [nix-shell]$ cd $sourceRoot [nix-shell]$ eval ${patchPhase:-patchPhase} @@ -137,7 +137,7 @@ To clear the environment first, and do some additional automatic initialisation of the interactive shell: ```console -$ nix-shell '' -A pan --pure \ +$ nix-shell '' --attr pan --pure \ --command 'export NIX_DEBUG=1; export NIX_CORES=8; return' ``` @@ -146,13 +146,13 @@ Nix expressions can also be given on the command line using the `-E` and packages `sqlite` and `libX11`: ```console -$ nix-shell -E 'with import { }; runCommand "dummy" { buildInputs = [ sqlite xorg.libX11 ]; } ""' +$ nix-shell --expr 'with import { }; runCommand "dummy" { buildInputs = [ sqlite xorg.libX11 ]; } ""' ``` A shorter way to do the same is: ```console -$ nix-shell -p sqlite xorg.libX11 +$ nix-shell --packages sqlite xorg.libX11 [nix-shell]$ echo $NIX_LDFLAGS … -L/nix/store/j1zg5v…-sqlite-3.8.0.2/lib -L/nix/store/0gmcz9…-libX11-1.6.1/lib … ``` @@ -162,7 +162,7 @@ the `buildInputs = [ ... ]` shown above, not only package names. So the following is also legal: ```console -$ nix-shell -p sqlite 'git.override { withManual = false; }' +$ nix-shell --packages sqlite 'git.override { withManual = false; }' ``` The `-p` flag looks up Nixpkgs in the Nix search path. You can override @@ -171,7 +171,7 @@ gives you a shell containing the Pan package from a specific revision of Nixpkgs: ```console -$ nix-shell -p pan -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/8a3eea054838b55aca962c3fbde9c83c102b8bf2.tar.gz +$ nix-shell --packages pan -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/8a3eea054838b55aca962c3fbde9c83c102b8bf2.tar.gz [nix-shell:~]$ pan --version Pan 0.139 @@ -185,7 +185,7 @@ done by starting the script with the following lines: ```bash #! /usr/bin/env nix-shell -#! nix-shell -i real-interpreter -p packages +#! nix-shell -i real-interpreter --packages packages ``` where *real-interpreter* is the “real” script interpreter that will be @@ -202,7 +202,7 @@ For example, here is a Python script that depends on Python and the ```python #! /usr/bin/env nix-shell -#! nix-shell -i python -p python pythonPackages.prettytable +#! nix-shell -i python --packages python pythonPackages.prettytable import prettytable @@ -217,7 +217,7 @@ requires Perl and the `HTML::TokeParser::Simple` and `LWP` packages: ```perl #! /usr/bin/env nix-shell -#! nix-shell -i perl -p perl perlPackages.HTMLTokeParserSimple perlPackages.LWP +#! nix-shell -i perl --packages perl perlPackages.HTMLTokeParserSimple perlPackages.LWP use HTML::TokeParser::Simple; @@ -235,7 +235,7 @@ package like Terraform: ```bash #! /usr/bin/env nix-shell -#! nix-shell -i bash -p "terraform.withPlugins (plugins: [ plugins.openstack ])" +#! nix-shell -i bash --packages "terraform.withPlugins (plugins: [ plugins.openstack ])" terraform apply ``` @@ -251,7 +251,7 @@ branch): ```haskell #! /usr/bin/env nix-shell -#! nix-shell -i runghc -p "haskellPackages.ghcWithPackages (ps: [ps.download-curl ps.tagsoup])" +#! nix-shell -i runghc --packages "haskellPackages.ghcWithPackages (ps: [ps.download-curl ps.tagsoup])" #! 
nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/nixos-20.03.tar.gz import Network.Curl.Download diff --git a/doc/manual/src/command-ref/nix-store/dump.md b/doc/manual/src/command-ref/nix-store/dump.md index 62656d599..c2f3c42ef 100644 --- a/doc/manual/src/command-ref/nix-store/dump.md +++ b/doc/manual/src/command-ref/nix-store/dump.md @@ -23,7 +23,7 @@ produce the same NAR archive. For instance, directory entries are always sorted so that the actual on-disk order doesn’t influence the result. This means that the cryptographic hash of a NAR dump of a path is usable as a fingerprint of the contents of the path. Indeed, -the hashes of store paths stored in Nix’s database (see `nix-store -q +the hashes of store paths stored in Nix’s database (see `nix-store --query --hash`) are SHA-256 hashes of the NAR dump of each store path. NAR archives support filenames of unlimited length and 64-bit file diff --git a/doc/manual/src/command-ref/nix-store/export.md b/doc/manual/src/command-ref/nix-store/export.md index aeea38636..1bc46f53b 100644 --- a/doc/manual/src/command-ref/nix-store/export.md +++ b/doc/manual/src/command-ref/nix-store/export.md @@ -31,7 +31,7 @@ To copy a whole closure, do something like: ```console -$ nix-store --export $(nix-store -qR paths) > out +$ nix-store --export $(nix-store --query --requisites paths) > out ``` To import the whole closure again, run: diff --git a/doc/manual/src/command-ref/nix-store/opt-common.md b/doc/manual/src/command-ref/nix-store/opt-common.md index bf6566555..dd9a6bf21 100644 --- a/doc/manual/src/command-ref/nix-store/opt-common.md +++ b/doc/manual/src/command-ref/nix-store/opt-common.md @@ -11,7 +11,7 @@ The following options are allowed for all `nix-store` operations, but may not al be created in `/nix/var/nix/gcroots/auto/`. For instance, ```console - $ nix-store --add-root /home/eelco/bla/result -r ... + $ nix-store --add-root /home/eelco/bla/result --realise ... $ ls -l /nix/var/nix/gcroots/auto lrwxrwxrwx 1 ... 2005-03-13 21:10 dn54lcypm8f8... -> /home/eelco/bla/result diff --git a/doc/manual/src/command-ref/nix-store/query.md b/doc/manual/src/command-ref/nix-store/query.md index 9f7dbd3e8..cd45a4932 100644 --- a/doc/manual/src/command-ref/nix-store/query.md +++ b/doc/manual/src/command-ref/nix-store/query.md @@ -145,7 +145,7 @@ Print the closure (runtime dependencies) of the `svn` program in the current user environment: ```console -$ nix-store -qR $(which svn) +$ nix-store --query --requisites $(which svn) /nix/store/5mbglq5ldqld8sj57273aljwkfvj22mc-subversion-1.1.4 /nix/store/9lz9yc6zgmc0vlqmn2ipcpkjlmbi51vv-glibc-2.3.4 ... 
@@ -154,7 +154,7 @@ $ nix-store -qR $(which svn) Print the build-time dependencies of `svn`: ```console -$ nix-store -qR $(nix-store -qd $(which svn)) +$ nix-store --query --requisites $(nix-store --query --deriver $(which svn)) /nix/store/02iizgn86m42q905rddvg4ja975bk2i4-grep-2.5.1.tar.bz2.drv /nix/store/07a2bzxmzwz5hp58nf03pahrv2ygwgs3-gcc-wrapper.sh /nix/store/0ma7c9wsbaxahwwl04gbw3fcd806ski4-glibc-2.3.4.drv @@ -168,7 +168,7 @@ the derivation (`-qd`), not the closure of the output path that contains Show the build-time dependencies as a tree: ```console -$ nix-store -q --tree $(nix-store -qd $(which svn)) +$ nix-store --query --tree $(nix-store --query --deriver $(which svn)) /nix/store/7i5082kfb6yjbqdbiwdhhza0am2xvh6c-subversion-1.1.4.drv +---/nix/store/d8afh10z72n8l1cr5w42366abiblgn54-builder.sh +---/nix/store/fmzxmpjx2lh849ph0l36snfj9zdibw67-bash-3.0.drv @@ -180,7 +180,7 @@ $ nix-store -q --tree $(nix-store -qd $(which svn)) Show all paths that depend on the same OpenSSL library as `svn`: ```console -$ nix-store -q --referrers $(nix-store -q --binding openssl $(nix-store -qd $(which svn))) +$ nix-store --query --referrers $(nix-store --query --binding openssl $(nix-store --query --deriver $(which svn))) /nix/store/23ny9l9wixx21632y2wi4p585qhva1q8-sylpheed-1.0.0 /nix/store/5mbglq5ldqld8sj57273aljwkfvj22mc-subversion-1.1.4 /nix/store/dpmvp969yhdqs7lm2r1a3gng7pyq6vy4-subversion-1.1.3 @@ -191,7 +191,7 @@ Show all paths that directly or indirectly depend on the Glibc (C library) used by `svn`: ```console -$ nix-store -q --referrers-closure $(ldd $(which svn) | grep /libc.so | awk '{print $3}') +$ nix-store --query --referrers-closure $(ldd $(which svn) | grep /libc.so | awk '{print $3}') /nix/store/034a6h4vpz9kds5r6kzb9lhh81mscw43-libgnomeprintui-2.8.2 /nix/store/15l3yi0d45prm7a82pcrknxdh6nzmxza-gawk-3.1.4 ... @@ -204,7 +204,7 @@ Make a picture of the runtime dependency graph of the current user environment: ```console -$ nix-store -q --graph ~/.nix-profile | dot -Tps > graph.ps +$ nix-store --query --graph ~/.nix-profile | dot -Tps > graph.ps $ gv graph.ps ``` @@ -212,7 +212,7 @@ Show every garbage collector root that points to a store path that depends on `svn`: ```console -$ nix-store -q --roots $(which svn) +$ nix-store --query --roots $(which svn) /nix/var/nix/profiles/default-81-link /nix/var/nix/profiles/default-82-link /home/eelco/.local/state/nix/profiles/profile-97-link diff --git a/doc/manual/src/command-ref/nix-store/read-log.md b/doc/manual/src/command-ref/nix-store/read-log.md index 4a88e9382..d1ff17891 100644 --- a/doc/manual/src/command-ref/nix-store/read-log.md +++ b/doc/manual/src/command-ref/nix-store/read-log.md @@ -27,7 +27,7 @@ substitute, then the log is unavailable. 
# Example ```console -$ nix-store -l $(which ktorrent) +$ nix-store --read-log $(which ktorrent) building /nix/store/dhc73pvzpnzxhdgpimsd9sw39di66ph1-ktorrent-2.2.1 unpacking sources unpacking source archive /nix/store/p8n1jpqs27mgkjw07pb5269717nzf5f8-ktorrent-2.2.1.tar.gz diff --git a/doc/manual/src/command-ref/nix-store/realise.md b/doc/manual/src/command-ref/nix-store/realise.md index f61a20100..6b50d2145 100644 --- a/doc/manual/src/command-ref/nix-store/realise.md +++ b/doc/manual/src/command-ref/nix-store/realise.md @@ -99,7 +99,7 @@ This operation is typically used to build [store derivation]s produced by [store derivation]: @docroot@/glossary.md#gloss-store-derivation ```console -$ nix-store -r $(nix-instantiate ./test.nix) +$ nix-store --realise $(nix-instantiate ./test.nix) /nix/store/31axcgrlbfsxzmfff1gyj1bf62hvkby2-aterm-2.3.1 ``` @@ -108,7 +108,7 @@ This is essentially what [`nix-build`](@docroot@/command-ref/nix-build.md) does. To test whether a previously-built derivation is deterministic: ```console -$ nix-build '' -A hello --check -K +$ nix-build '' --attr hello --check -K ``` Use [`nix-store --read-log`](./read-log.md) to show the stderr and stdout of a build: diff --git a/doc/manual/src/command-ref/nix-store/verify-path.md b/doc/manual/src/command-ref/nix-store/verify-path.md index 59ffe92a3..927201599 100644 --- a/doc/manual/src/command-ref/nix-store/verify-path.md +++ b/doc/manual/src/command-ref/nix-store/verify-path.md @@ -24,6 +24,6 @@ path has changed, and 1 otherwise. To verify the integrity of the `svn` command and all its dependencies: ```console -$ nix-store --verify-path $(nix-store -qR $(which svn)) +$ nix-store --verify-path $(nix-store --query --requisites $(which svn)) ``` diff --git a/doc/manual/src/command-ref/opt-common.md b/doc/manual/src/command-ref/opt-common.md index 7a012250d..54c0a1d0d 100644 --- a/doc/manual/src/command-ref/opt-common.md +++ b/doc/manual/src/command-ref/opt-common.md @@ -162,11 +162,11 @@ Most Nix commands accept the following command-line options: }: ... ``` - So if you call this Nix expression (e.g., when you do `nix-env -iA + So if you call this Nix expression (e.g., when you do `nix-env --install --attr pkgname`), the function will be called automatically using the value [`builtins.currentSystem`](@docroot@/language/builtins.md) for the `system` argument. You can override this using `--arg`, e.g., - `nix-env -iA pkgname --arg system \"i686-freebsd\"`. (Note that + `nix-env --install --attr pkgname --arg system \"i686-freebsd\"`. (Note that since the argument is a Nix string literal, you have to escape the quotes.) @@ -199,7 +199,7 @@ Most Nix commands accept the following command-line options: For `nix-shell`, this option is commonly used to give you a shell in which you can build the packages returned by the expression. If you want to get a shell which contain the *built* packages ready for - use, give your expression to the `nix-shell -p` convenience flag + use, give your expression to the `nix-shell --packages ` convenience flag instead. 
- [`-I`](#opt-I) *path*\ diff --git a/doc/manual/src/contributing/hacking.md b/doc/manual/src/contributing/hacking.md index ca69f076a..b954a2167 100644 --- a/doc/manual/src/contributing/hacking.md +++ b/doc/manual/src/contributing/hacking.md @@ -77,7 +77,7 @@ $ nix-shell To get a shell with one of the other [supported compilation environments](#compilation-environments): ```console -$ nix-shell -A devShells.x86_64-linux.native-clang11StdenvPackages +$ nix-shell --attr devShells.x86_64-linux.native-clang11StdenvPackages ``` > **Note** @@ -139,7 +139,7 @@ $ nix build .#packages.aarch64-linux.default for flake-enabled Nix, or ```console -$ nix-build -A packages.aarch64-linux.default +$ nix-build --attr packages.aarch64-linux.default ``` for classic Nix. @@ -166,7 +166,7 @@ $ nix build .#nix-ccacheStdenv for flake-enabled Nix, or ```console -$ nix-build -A nix-ccacheStdenv +$ nix-build --attr nix-ccacheStdenv ``` for classic Nix. diff --git a/doc/manual/src/glossary.md b/doc/manual/src/glossary.md index eeb19ad50..e142bd415 100644 --- a/doc/manual/src/glossary.md +++ b/doc/manual/src/glossary.md @@ -101,11 +101,8 @@ derivation. - [output-addressed store object]{#gloss-output-addressed-store-object}\ - A store object whose store path hashes its content. This - includes derivations, the outputs of - [content-addressed derivations](#gloss-content-addressed-derivation), - and the outputs of - [fixed-output derivations](#gloss-fixed-output-derivation). + A [store object] whose [store path] is determined by its contents. + This includes derivations, the outputs of [content-addressed derivations](#gloss-content-addressed-derivation), and the outputs of [fixed-output derivations](#gloss-fixed-output-derivation). - [substitute]{#gloss-substitute}\ A substitute is a command invocation stored in the [Nix database] that @@ -163,7 +160,7 @@ build-time dependencies, while the closure of its output path is equivalent to its runtime dependencies. For correct deployment it is necessary to deploy whole closures, since otherwise at runtime - files could be missing. The command `nix-store -qR` prints out + files could be missing. The command `nix-store --query --requisites ` prints out closures of store paths. As an example, if the [store object] at path `P` contains a [reference] diff --git a/doc/manual/src/installation/upgrading.md b/doc/manual/src/installation/upgrading.md index 24efc4681..6d09f54d8 100644 --- a/doc/manual/src/installation/upgrading.md +++ b/doc/manual/src/installation/upgrading.md @@ -2,13 +2,13 @@ Multi-user Nix users on macOS can upgrade Nix by running: `sudo -i sh -c 'nix-channel --update && -nix-env -iA nixpkgs.nix && +nix-env --install --attr nixpkgs.nix && launchctl remove org.nixos.nix-daemon && launchctl load /Library/LaunchDaemons/org.nixos.nix-daemon.plist'` Single-user installations of Nix should run this: `nix-channel --update; -nix-env -iA nixpkgs.nix nixpkgs.cacert` +nix-env --install --attr nixpkgs.nix nixpkgs.cacert` Multi-user Nix users on Linux should run this with sudo: `nix-channel ---update; nix-env -iA nixpkgs.nix nixpkgs.cacert; systemctl +--update; nix-env --install --attr nixpkgs.nix nixpkgs.cacert; systemctl daemon-reload; systemctl restart nix-daemon` diff --git a/doc/manual/src/introduction.md b/doc/manual/src/introduction.md index b54346db8..76489bc1b 100644 --- a/doc/manual/src/introduction.md +++ b/doc/manual/src/introduction.md @@ -76,7 +76,7 @@ there after an upgrade. 
This means that you can _roll back_ to the old version: ```console -$ nix-env --upgrade -A nixpkgs.some-package +$ nix-env --upgrade --attr nixpkgs.some-package $ nix-env --rollback ``` @@ -122,7 +122,7 @@ Nix expressions generally describe how to build a package from source, so an installation action like ```console -$ nix-env --install -A nixpkgs.firefox +$ nix-env --install --attr nixpkgs.firefox ``` _could_ cause quite a bit of build activity, as not only Firefox but @@ -158,7 +158,7 @@ Pan newsreader, as described by [its Nix expression](https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/networking/newsreaders/pan/default.nix): ```console -$ nix-shell '' -A pan +$ nix-shell '' --attr pan ``` You’re then dropped into a shell where you can edit, build and test diff --git a/doc/manual/src/package-management/basic-package-mgmt.md b/doc/manual/src/package-management/basic-package-mgmt.md index 5f1d7a89c..6b86e763e 100644 --- a/doc/manual/src/package-management/basic-package-mgmt.md +++ b/doc/manual/src/package-management/basic-package-mgmt.md @@ -47,7 +47,7 @@ $ nix-channel --update You can view the set of available packages in Nixpkgs: ```console -$ nix-env -qaP +$ nix-env --query --available --attr-path nixpkgs.aterm aterm-2.2 nixpkgs.bash bash-3.0 nixpkgs.binutils binutils-2.15 @@ -65,7 +65,7 @@ If you downloaded Nixpkgs yourself, or if you checked it out from GitHub, then you need to pass the path to your Nixpkgs tree using the `-f` flag: ```console -$ nix-env -qaPf /path/to/nixpkgs +$ nix-env --query --available --attr-path --file /path/to/nixpkgs aterm aterm-2.2 bash bash-3.0 … @@ -77,7 +77,7 @@ Nixpkgs. You can filter the packages by name: ```console -$ nix-env -qaP firefox +$ nix-env --query --available --attr-path firefox nixpkgs.firefox-esr firefox-91.3.0esr nixpkgs.firefox firefox-94.0.1 ``` @@ -85,7 +85,7 @@ nixpkgs.firefox firefox-94.0.1 and using regular expressions: ```console -$ nix-env -qaP 'firefox.*' +$ nix-env --query --available --attr-path 'firefox.*' ``` It is also possible to see the *status* of available packages, i.e., @@ -93,7 +93,7 @@ whether they are installed into the user environment and/or present in the system: ```console -$ nix-env -qaPs +$ nix-env --query --available --attr-path --status … -PS nixpkgs.bash bash-3.0 --S nixpkgs.binutils binutils-2.15 @@ -110,10 +110,10 @@ which is Nix’s mechanism for doing binary deployment. It just means that Nix knows that it can fetch a pre-built package from somewhere (typically a network server) instead of building it locally. -You can install a package using `nix-env -iA`. For instance, +You can install a package using `nix-env --install --attr `. For instance, ```console -$ nix-env -iA nixpkgs.subversion +$ nix-env --install --attr nixpkgs.subversion ``` will install the package called `subversion` from `nixpkgs` channel (which is, of course, the @@ -143,14 +143,14 @@ instead of the attribute path, as `nix-env` does not record which attribute was used for installing: ```console -$ nix-env -e subversion +$ nix-env --uninstall subversion ``` Upgrading to a new version is just as easy. 
If you have a new release of Nix Packages, you can do: ```console -$ nix-env -uA nixpkgs.subversion +$ nix-env --upgrade --attr nixpkgs.subversion ``` This will *only* upgrade Subversion if there is a “newer” version in the @@ -163,15 +163,15 @@ whatever version is in the Nix expressions, use `-i` instead of `-u`; You can also upgrade all packages for which there are newer versions: ```console -$ nix-env -u +$ nix-env --upgrade ``` Sometimes it’s useful to be able to ask what `nix-env` would do, without actually doing it. For instance, to find out what packages would be -upgraded by `nix-env -u`, you can do +upgraded by `nix-env --upgrade `, you can do ```console -$ nix-env -u --dry-run +$ nix-env --upgrade --dry-run (dry run; not doing anything) upgrading `libxslt-1.1.0' to `libxslt-1.1.10' upgrading `graphviz-1.10' to `graphviz-1.12' diff --git a/doc/manual/src/package-management/binary-cache-substituter.md b/doc/manual/src/package-management/binary-cache-substituter.md index 5befad9f8..855eaf470 100644 --- a/doc/manual/src/package-management/binary-cache-substituter.md +++ b/doc/manual/src/package-management/binary-cache-substituter.md @@ -9,7 +9,7 @@ The daemon that handles binary cache requests via HTTP, `nix-serve`, is not part of the Nix distribution, but you can install it from Nixpkgs: ```console -$ nix-env -iA nixpkgs.nix-serve +$ nix-env --install --attr nixpkgs.nix-serve ``` You can then start the server, listening for HTTP connections on @@ -35,7 +35,7 @@ On the client side, you can tell Nix to use your binary cache using `--substituters`, e.g.: ```console -$ nix-env -iA nixpkgs.firefox --substituters http://avalon:8080/ +$ nix-env --install --attr nixpkgs.firefox --substituters http://avalon:8080/ ``` The option `substituters` tells Nix to use this binary cache in diff --git a/doc/manual/src/package-management/channels.md b/doc/manual/src/package-management/channels.md index 93c8b41a6..8e4da180b 100644 --- a/doc/manual/src/package-management/channels.md +++ b/doc/manual/src/package-management/channels.md @@ -43,7 +43,7 @@ operations (via the symlink `~/.nix-defexpr/channels`). Consequently, you can then say ```console -$ nix-env -u +$ nix-env --upgrade ``` to upgrade all packages in your profile to the latest versions available diff --git a/doc/manual/src/package-management/copy-closure.md b/doc/manual/src/package-management/copy-closure.md index d3fac4d76..14326298b 100644 --- a/doc/manual/src/package-management/copy-closure.md +++ b/doc/manual/src/package-management/copy-closure.md @@ -15,7 +15,7 @@ With `nix-store path (that is, the path and all its dependencies) to a file, and then unpack that file into another Nix store. For example, - $ nix-store --export $(nix-store -qR $(type -p firefox)) > firefox.closure + $ nix-store --export $(nix-store --query --requisites $(type -p firefox)) > firefox.closure writes the closure of Firefox to a file. You can then copy this file to another machine and install the closure: @@ -27,7 +27,7 @@ store are ignored. It is also possible to pipe the export into another command, e.g. 
to copy and install a closure directly to/on another machine: - $ nix-store --export $(nix-store -qR $(type -p firefox)) | bzip2 | \ + $ nix-store --export $(nix-store --query --requisites $(type -p firefox)) | bzip2 | \ ssh alice@itchy.example.org "bunzip2 | nix-store --import" However, `nix-copy-closure` is generally more efficient because it only diff --git a/doc/manual/src/package-management/profiles.md b/doc/manual/src/package-management/profiles.md index d1a2580d4..1d9e672a8 100644 --- a/doc/manual/src/package-management/profiles.md +++ b/doc/manual/src/package-management/profiles.md @@ -39,7 +39,7 @@ just Subversion 1.1.2 (arrows in the figure indicate symlinks). This would be what we would obtain if we had done ```console -$ nix-env -iA nixpkgs.subversion +$ nix-env --install --attr nixpkgs.subversion ``` on a set of Nix expressions that contained Subversion 1.1.2. @@ -54,7 +54,7 @@ environment is generated based on the current one. For instance, generation 43 was created from generation 42 when we did ```console -$ nix-env -iA nixpkgs.subversion nixpkgs.firefox +$ nix-env --install --attr nixpkgs.subversion nixpkgs.firefox ``` on a set of Nix expressions that contained Firefox and a new version of @@ -127,7 +127,7 @@ All `nix-env` operations work on the profile pointed to by (abbreviation `-p`): ```console -$ nix-env -p /nix/var/nix/profiles/other-profile -iA nixpkgs.subversion +$ nix-env --profile /nix/var/nix/profiles/other-profile --install --attr nixpkgs.subversion ``` This will *not* change the `~/.nix-profile` symlink. diff --git a/doc/manual/src/package-management/ssh-substituter.md b/doc/manual/src/package-management/ssh-substituter.md index c59933f61..7014c3cc8 100644 --- a/doc/manual/src/package-management/ssh-substituter.md +++ b/doc/manual/src/package-management/ssh-substituter.md @@ -6,7 +6,7 @@ automatically fetching any store paths in Firefox’s closure if they are available on the server `avalon`: ```console -$ nix-env -iA nixpkgs.firefox --substituters ssh://alice@avalon +$ nix-env --install --attr nixpkgs.firefox --substituters ssh://alice@avalon ``` This works similar to the binary cache substituter that Nix usually @@ -25,7 +25,7 @@ You can also copy the closure of some store path, without installing it into your profile, e.g. 
```console -$ nix-store -r /nix/store/m85bxg…-firefox-34.0.5 --substituters +$ nix-store --realise /nix/store/m85bxg…-firefox-34.0.5 --substituters ssh://alice@avalon ``` diff --git a/docker.nix b/docker.nix index 52199af66..bd16b71cd 100644 --- a/docker.nix +++ b/docker.nix @@ -190,6 +190,12 @@ let cp -a ${rootEnv}/* $out/ ln -s ${manifest} $out/manifest.nix ''; + flake-registry-path = if (flake-registry == null) then + null + else if (builtins.readFileType (toString flake-registry)) == "directory" then + "${flake-registry}/flake-registry.json" + else + flake-registry; in pkgs.runCommand "base-system" { @@ -202,7 +208,7 @@ let ]; allowSubstitutes = false; preferLocalBuild = true; - } '' + } ('' env set -x mkdir -p $out/etc @@ -249,15 +255,15 @@ let ln -s ${pkgs.coreutils}/bin/env $out/usr/bin/env ln -s ${pkgs.bashInteractive}/bin/bash $out/bin/sh - '' + (lib.optionalString (flake-registry != null) '' + '' + (lib.optionalString (flake-registry-path != null) '' nixCacheDir="/root/.cache/nix" mkdir -p $out$nixCacheDir globalFlakeRegistryPath="$nixCacheDir/flake-registry.json" - ln -s ${flake-registry}/flake-registry.json $out$globalFlakeRegistryPath + ln -s ${flake-registry-path} $out$globalFlakeRegistryPath mkdir -p $out/nix/var/nix/gcroots/auto rootName=$(${pkgs.nix}/bin/nix --extra-experimental-features nix-command hash file --type sha1 --base32 <(echo -n $globalFlakeRegistryPath)) ln -s $globalFlakeRegistryPath $out/nix/var/nix/gcroots/auto/$rootName - ''); + '')); in pkgs.dockerTools.buildLayeredImageWithNixDb { diff --git a/src/libexpr/eval.cc b/src/libexpr/eval.cc index 740a5e677..585670e69 100644 --- a/src/libexpr/eval.cc +++ b/src/libexpr/eval.cc @@ -4,6 +4,7 @@ #include "util.hh" #include "store-api.hh" #include "derivations.hh" +#include "downstream-placeholder.hh" #include "globals.hh" #include "eval-inline.hh" #include "filetransfer.hh" @@ -1058,7 +1059,7 @@ void EvalState::mkOutputString( ? store->printStorePath(*std::move(optOutputPath)) /* Downstream we would substitute this for an actual path once we build the floating CA derivation */ - : downstreamPlaceholder(*store, drvPath, outputName), + : DownstreamPlaceholder::unknownCaOutput(drvPath, outputName).render(), NixStringContext { NixStringContextElem::Built { .drvPath = drvPath, @@ -2380,7 +2381,7 @@ DerivedPath EvalState::coerceToDerivedPath(const PosIdx pos, Value & v, std::str // This is testing for the case of CA derivations auto sExpected = optOutputPath ? store->printStorePath(*optOutputPath) - : downstreamPlaceholder(*store, b.drvPath, output); + : DownstreamPlaceholder::unknownCaOutput(b.drvPath, output).render(); if (s != sExpected) error( "string '%s' has context with the output '%s' from derivation '%s', but the string is not the right placeholder for this derivation output. It should be '%s'", diff --git a/src/libexpr/eval.hh b/src/libexpr/eval.hh index a90ff34c0..62b380929 100644 --- a/src/libexpr/eval.hh +++ b/src/libexpr/eval.hh @@ -483,7 +483,7 @@ public: * Coerce to `DerivedPath`. * * Must be a string which is either a literal store path or a - * "placeholder (see `downstreamPlaceholder()`). + * "placeholder (see `DownstreamPlaceholder`). * * Even more importantly, the string context must be exactly one * element, which is either a `NixStringContextElem::Opaque` or @@ -622,7 +622,7 @@ public: * @param optOutputPath Optional output path for that string. Must * be passed if and only if output store object is input-addressed. 
* Will be printed to form string if passed, otherwise a placeholder - * will be used (see `downstreamPlaceholder()`). + * will be used (see `DownstreamPlaceholder`). */ void mkOutputString( Value & value, diff --git a/src/libexpr/primops.cc b/src/libexpr/primops.cc index 6fbd66389..cfae1e5f8 100644 --- a/src/libexpr/primops.cc +++ b/src/libexpr/primops.cc @@ -1,5 +1,6 @@ #include "archive.hh" #include "derivations.hh" +#include "downstream-placeholder.hh" #include "eval-inline.hh" #include "eval.hh" #include "globals.hh" @@ -87,7 +88,7 @@ StringMap EvalState::realiseContext(const NixStringContext & context) auto outputs = resolveDerivedPath(*store, drv); for (auto & [outputName, outputPath] : outputs) { res.insert_or_assign( - downstreamPlaceholder(*store, drv.drvPath, outputName), + DownstreamPlaceholder::unknownCaOutput(drv.drvPath, outputName).render(), store->printStorePath(outputPath) ); } diff --git a/src/libfetchers/git.cc b/src/libfetchers/git.cc index 1da8c9609..47282f6c4 100644 --- a/src/libfetchers/git.cc +++ b/src/libfetchers/git.cc @@ -62,6 +62,7 @@ std::optional readHead(const Path & path) .program = "git", // FIXME: use 'HEAD' to avoid returning all refs .args = {"ls-remote", "--symref", path}, + .isInteractive = true, }); if (status != 0) return std::nullopt; @@ -350,7 +351,7 @@ struct GitInputScheme : InputScheme args.push_back(destDir); - runProgram("git", true, args); + runProgram("git", true, args, {}, true); } std::optional getSourcePath(const Input & input) override @@ -555,7 +556,7 @@ struct GitInputScheme : InputScheme : ref == "HEAD" ? *ref : "refs/heads/" + *ref; - runProgram("git", true, { "-C", repoDir, "--git-dir", gitDir, "fetch", "--quiet", "--force", "--", actualUrl, fmt("%s:%s", fetchRef, fetchRef) }); + runProgram("git", true, { "-C", repoDir, "--git-dir", gitDir, "fetch", "--quiet", "--force", "--", actualUrl, fmt("%s:%s", fetchRef, fetchRef) }, {}, true); } catch (Error & e) { if (!pathExists(localRefFile)) throw; warn("could not update local clone of Git repository '%s'; continuing with the most recent version", actualUrl); @@ -622,7 +623,7 @@ struct GitInputScheme : InputScheme // everything to ensure we get the rev. Activity act(*logger, lvlTalkative, actUnknown, fmt("making temporary clone of '%s'", repoDir)); runProgram("git", true, { "-C", tmpDir, "fetch", "--quiet", "--force", - "--update-head-ok", "--", repoDir, "refs/*:refs/*" }); + "--update-head-ok", "--", repoDir, "refs/*:refs/*" }, {}, true); } runProgram("git", true, { "-C", tmpDir, "checkout", "--quiet", input.getRev()->gitRev() }); @@ -649,7 +650,7 @@ struct GitInputScheme : InputScheme { Activity act(*logger, lvlTalkative, actUnknown, fmt("fetching submodules of '%s'", actualUrl)); - runProgram("git", true, { "-C", tmpDir, "submodule", "--quiet", "update", "--init", "--recursive" }); + runProgram("git", true, { "-C", tmpDir, "submodule", "--quiet", "update", "--init", "--recursive" }, {}, true); } filter = isNotDotGitDirectory; diff --git a/src/libstore/build/derivation-goal.cc b/src/libstore/build/derivation-goal.cc index 5b1c923cd..df7d21e54 100644 --- a/src/libstore/build/derivation-goal.cc +++ b/src/libstore/build/derivation-goal.cc @@ -1152,7 +1152,7 @@ HookReply DerivationGoal::tryBuildHook() /* Tell the hook all the inputs that have to be copied to the remote system. */ - worker_proto::write(worker.store, hook->sink, inputPaths); + workerProtoWrite(worker.store, hook->sink, inputPaths); /* Tell the hooks the missing outputs that have to be copied back from the remote system. 
        */
@@ -1163,7 +1163,7 @@ HookReply DerivationGoal::tryBuildHook()
             if (buildMode != bmCheck && status.known && status.known->isValid()) continue;
             missingOutputs.insert(outputName);
         }
-        worker_proto::write(worker.store, hook->sink, missingOutputs);
+        workerProtoWrite(worker.store, hook->sink, missingOutputs);
     }
 
     hook->sink = FdSink();
diff --git a/src/libstore/build/entry-points.cc b/src/libstore/build/entry-points.cc
index 74eae0692..edd6cb6d2 100644
--- a/src/libstore/build/entry-points.cc
+++ b/src/libstore/build/entry-points.cc
@@ -110,7 +110,7 @@ void Store::ensurePath(const StorePath & path)
 }
 
 
-void LocalStore::repairPath(const StorePath & path)
+void Store::repairPath(const StorePath & path)
 {
     Worker worker(*this, *this);
     GoalPtr goal = worker.makePathSubstitutionGoal(path, Repair);
diff --git a/src/libstore/build/local-derivation-goal.cc b/src/libstore/build/local-derivation-goal.cc
index 9f21a711a..6fbb81c85 100644
--- a/src/libstore/build/local-derivation-goal.cc
+++ b/src/libstore/build/local-derivation-goal.cc
@@ -1771,6 +1771,8 @@ void LocalDerivationGoal::runChild()
             for (auto & path : { "/etc/resolv.conf", "/etc/services", "/etc/hosts" })
                 if (pathExists(path))
                     ss.push_back(path);
+
+            dirsInChroot.emplace(settings.caFile, "/etc/ssl/certs/ca-certificates.crt");
         }
 
         for (auto & i : ss) dirsInChroot.emplace(i, i);
diff --git a/src/libstore/daemon.cc b/src/libstore/daemon.cc
index 5083497a9..b6dd83684 100644
--- a/src/libstore/daemon.cc
+++ b/src/libstore/daemon.cc
@@ -263,7 +263,7 @@ static std::vector<DerivedPath> readDerivedPaths(Store & store, unsigned int cli
 {
     std::vector<DerivedPath> reqs;
     if (GET_PROTOCOL_MINOR(clientVersion) >= 30) {
-        reqs = worker_proto::read(store, from, Phantom<std::vector<DerivedPath>> {});
+        reqs = WorkerProto<std::vector<DerivedPath>>::read(store, from);
     } else {
         for (auto & s : readStrings<Strings>(from))
             reqs.push_back(parsePathWithOutputs(store, s).toDerivedPath());
@@ -287,7 +287,7 @@ static void performOp(TunnelLogger * logger, ref<Store> store,
     }
 
     case wopQueryValidPaths: {
-        auto paths = worker_proto::read(*store, from, Phantom<StorePathSet> {});
+        auto paths = WorkerProto<StorePathSet>::read(*store, from);
 
         SubstituteFlag substitute = NoSubstitute;
         if (GET_PROTOCOL_MINOR(clientVersion) >= 27) {
@@ -300,7 +300,7 @@ static void performOp(TunnelLogger * logger, ref<Store> store,
         }
         auto res = store->queryValidPaths(paths, substitute);
         logger->stopWork();
-        worker_proto::write(*store, to, res);
+        workerProtoWrite(*store, to, res);
         break;
     }
 
@@ -316,11 +316,11 @@ static void performOp(TunnelLogger * logger, ref<Store> store,
     }
 
     case wopQuerySubstitutablePaths: {
-        auto paths = worker_proto::read(*store, from, Phantom<StorePathSet> {});
+        auto paths = WorkerProto<StorePathSet>::read(*store, from);
         logger->startWork();
         auto res = store->querySubstitutablePaths(paths);
         logger->stopWork();
-        worker_proto::write(*store, to, res);
+        workerProtoWrite(*store, to, res);
         break;
     }
 
@@ -349,7 +349,7 @@ static void performOp(TunnelLogger * logger, ref<Store> store,
             paths = store->queryValidDerivers(path);
         else paths = store->queryDerivationOutputs(path);
         logger->stopWork();
-        worker_proto::write(*store, to, paths);
+        workerProtoWrite(*store, to, paths);
         break;
     }
 
@@ -367,7 +367,7 @@ static void performOp(TunnelLogger * logger, ref<Store> store,
         logger->startWork();
         auto outputs = store->queryPartialDerivationOutputMap(path);
         logger->stopWork();
-        worker_proto::write(*store, to, outputs);
+        workerProtoWrite(*store, to, outputs);
         break;
     }
 
@@ -393,7 +393,7 @@ static void performOp(TunnelLogger * logger, ref<Store> store,
         if (GET_PROTOCOL_MINOR(clientVersion) >= 25) {
             auto name = readString(from);
             auto camStr = readString(from);
-            auto refs = worker_proto::read(*store, from, Phantom<StorePathSet> {});
worker_proto::read(*store, from, Phantom {}); + auto refs = WorkerProto::read(*store, from); bool repairBool; from >> repairBool; auto repair = RepairFlag{repairBool}; @@ -495,7 +495,7 @@ static void performOp(TunnelLogger * logger, ref store, case wopAddTextToStore: { std::string suffix = readString(from); std::string s = readString(from); - auto refs = worker_proto::read(*store, from, Phantom {}); + auto refs = WorkerProto::read(*store, from); logger->startWork(); auto path = store->addTextToStore(suffix, s, refs, NoRepair); logger->stopWork(); @@ -567,7 +567,7 @@ static void performOp(TunnelLogger * logger, ref store, auto results = store->buildPathsWithResults(drvs, mode); logger->stopWork(); - worker_proto::write(*store, to, results); + workerProtoWrite(*store, to, results); break; } @@ -644,7 +644,7 @@ static void performOp(TunnelLogger * logger, ref store, DrvOutputs builtOutputs; for (auto & [output, realisation] : res.builtOutputs) builtOutputs.insert_or_assign(realisation.id, realisation); - worker_proto::write(*store, to, builtOutputs); + workerProtoWrite(*store, to, builtOutputs); } break; } @@ -709,7 +709,7 @@ static void performOp(TunnelLogger * logger, ref store, case wopCollectGarbage: { GCOptions options; options.action = (GCOptions::GCAction) readInt(from); - options.pathsToDelete = worker_proto::read(*store, from, Phantom {}); + options.pathsToDelete = WorkerProto::read(*store, from); from >> options.ignoreLiveness >> options.maxFreed; // obsolete fields readInt(from); @@ -779,7 +779,7 @@ static void performOp(TunnelLogger * logger, ref store, else { to << 1 << (i->second.deriver ? store->printStorePath(*i->second.deriver) : ""); - worker_proto::write(*store, to, i->second.references); + workerProtoWrite(*store, to, i->second.references); to << i->second.downloadSize << i->second.narSize; } @@ -790,11 +790,11 @@ static void performOp(TunnelLogger * logger, ref store, SubstitutablePathInfos infos; StorePathCAMap pathsMap = {}; if (GET_PROTOCOL_MINOR(clientVersion) < 22) { - auto paths = worker_proto::read(*store, from, Phantom {}); + auto paths = WorkerProto::read(*store, from); for (auto & path : paths) pathsMap.emplace(path, std::nullopt); } else - pathsMap = worker_proto::read(*store, from, Phantom {}); + pathsMap = WorkerProto::read(*store, from); logger->startWork(); store->querySubstitutablePathInfos(pathsMap, infos); logger->stopWork(); @@ -802,7 +802,7 @@ static void performOp(TunnelLogger * logger, ref store, for (auto & i : infos) { to << store->printStorePath(i.first) << (i.second.deriver ? 
store->printStorePath(*i.second.deriver) : ""); - worker_proto::write(*store, to, i.second.references); + workerProtoWrite(*store, to, i.second.references); to << i.second.downloadSize << i.second.narSize; } break; @@ -812,7 +812,7 @@ static void performOp(TunnelLogger * logger, ref store, logger->startWork(); auto paths = store->queryAllValidPaths(); logger->stopWork(); - worker_proto::write(*store, to, paths); + workerProtoWrite(*store, to, paths); break; } @@ -884,7 +884,7 @@ static void performOp(TunnelLogger * logger, ref store, ValidPathInfo info { path, narHash }; if (deriver != "") info.deriver = store->parseStorePath(deriver); - info.references = worker_proto::read(*store, from, Phantom {}); + info.references = WorkerProto::read(*store, from); from >> info.registrationTime >> info.narSize >> info.ultimate; info.sigs = readStrings(from); info.ca = ContentAddress::parseOpt(readString(from)); @@ -935,9 +935,9 @@ static void performOp(TunnelLogger * logger, ref store, uint64_t downloadSize, narSize; store->queryMissing(targets, willBuild, willSubstitute, unknown, downloadSize, narSize); logger->stopWork(); - worker_proto::write(*store, to, willBuild); - worker_proto::write(*store, to, willSubstitute); - worker_proto::write(*store, to, unknown); + workerProtoWrite(*store, to, willBuild); + workerProtoWrite(*store, to, willSubstitute); + workerProtoWrite(*store, to, unknown); to << downloadSize << narSize; break; } @@ -950,7 +950,7 @@ static void performOp(TunnelLogger * logger, ref store, store->registerDrvOutput(Realisation{ .id = outputId, .outPath = outputPath}); } else { - auto realisation = worker_proto::read(*store, from, Phantom()); + auto realisation = WorkerProto::read(*store, from); store->registerDrvOutput(realisation); } logger->stopWork(); @@ -965,11 +965,11 @@ static void performOp(TunnelLogger * logger, ref store, if (GET_PROTOCOL_MINOR(clientVersion) < 31) { std::set outPaths; if (info) outPaths.insert(info->outPath); - worker_proto::write(*store, to, outPaths); + workerProtoWrite(*store, to, outPaths); } else { std::set realisations; if (info) realisations.insert(*info); - worker_proto::write(*store, to, realisations); + workerProtoWrite(*store, to, realisations); } break; } @@ -1045,7 +1045,7 @@ void processConnection( auto temp = trusted ? store->isTrustedClient() : std::optional { NotTrusted }; - worker_proto::write(*store, to, temp); + workerProtoWrite(*store, to, temp); } /* Send startup error messages to the client. 
*/ diff --git a/src/libstore/derivations.cc b/src/libstore/derivations.cc index d56dc727b..ccb165d68 100644 --- a/src/libstore/derivations.cc +++ b/src/libstore/derivations.cc @@ -1,4 +1,5 @@ #include "derivations.hh" +#include "downstream-placeholder.hh" #include "store-api.hh" #include "globals.hh" #include "util.hh" @@ -748,7 +749,7 @@ Source & readDerivation(Source & in, const Store & store, BasicDerivation & drv, drv.outputs.emplace(std::move(name), std::move(output)); } - drv.inputSrcs = worker_proto::read(store, in, Phantom {}); + drv.inputSrcs = WorkerProto::read(store, in); in >> drv.platform >> drv.builder; drv.args = readStrings(in); @@ -796,7 +797,7 @@ void writeDerivation(Sink & out, const Store & store, const BasicDerivation & dr }, }, i.second.raw()); } - worker_proto::write(store, out, drv.inputSrcs); + workerProtoWrite(store, out, drv.inputSrcs); out << drv.platform << drv.builder << drv.args; out << drv.env.size(); for (auto & i : drv.env) @@ -810,13 +811,7 @@ std::string hashPlaceholder(const std::string_view outputName) return "/" + hashString(htSHA256, concatStrings("nix-output:", outputName)).to_string(Base32, false); } -std::string downstreamPlaceholder(const Store & store, const StorePath & drvPath, std::string_view outputName) -{ - auto drvNameWithExtension = drvPath.name(); - auto drvName = drvNameWithExtension.substr(0, drvNameWithExtension.size() - 4); - auto clearText = "nix-upstream-output:" + std::string { drvPath.hashPart() } + ":" + outputPathName(drvName, outputName); - return "/" + hashString(htSHA256, clearText).to_string(Base32, false); -} + static void rewriteDerivation(Store & store, BasicDerivation & drv, const StringMap & rewrites) @@ -880,7 +875,7 @@ std::optional Derivation::tryResolve( for (auto & outputName : inputOutputs) { if (auto actualPath = get(inputDrvOutputs, { inputDrv, outputName })) { inputRewrites.emplace( - downstreamPlaceholder(store, inputDrv, outputName), + DownstreamPlaceholder::unknownCaOutput(inputDrv, outputName).render(), store.printStorePath(*actualPath)); resolved.inputSrcs.insert(*actualPath); } else { diff --git a/src/libstore/derivations.hh b/src/libstore/derivations.hh index 1e2143f31..fa79f77fd 100644 --- a/src/libstore/derivations.hh +++ b/src/libstore/derivations.hh @@ -6,6 +6,7 @@ #include "hash.hh" #include "content-address.hh" #include "repair-flag.hh" +#include "derived-path.hh" #include "sync.hh" #include "comparator.hh" @@ -495,17 +496,6 @@ void writeDerivation(Sink & out, const Store & store, const BasicDerivation & dr */ std::string hashPlaceholder(const std::string_view outputName); -/** - * This creates an opaque and almost certainly unique string - * deterministically from a derivation path and output name. - * - * It is used as a placeholder to allow derivations to refer to - * content-addressed paths whose content --- and thus the path - * themselves --- isn't yet known. This occurs when a derivation has a - * dependency which is a CA derivation. 
- */
-std::string downstreamPlaceholder(const Store & store, const StorePath & drvPath, std::string_view outputName);
-
 extern const Hash impureOutputHash;
 
 }
diff --git a/src/libstore/downstream-placeholder.cc b/src/libstore/downstream-placeholder.cc
new file mode 100644
index 000000000..1752738f2
--- /dev/null
+++ b/src/libstore/downstream-placeholder.cc
@@ -0,0 +1,39 @@
+#include "downstream-placeholder.hh"
+#include "derivations.hh"
+
+namespace nix {
+
+std::string DownstreamPlaceholder::render() const
+{
+    return "/" + hash.to_string(Base32, false);
+}
+
+
+DownstreamPlaceholder DownstreamPlaceholder::unknownCaOutput(
+    const StorePath & drvPath,
+    std::string_view outputName)
+{
+    auto drvNameWithExtension = drvPath.name();
+    auto drvName = drvNameWithExtension.substr(0, drvNameWithExtension.size() - 4);
+    auto clearText = "nix-upstream-output:" + std::string { drvPath.hashPart() } + ":" + outputPathName(drvName, outputName);
+    return DownstreamPlaceholder {
+        hashString(htSHA256, clearText)
+    };
+}
+
+DownstreamPlaceholder DownstreamPlaceholder::unknownDerivation(
+    const DownstreamPlaceholder & placeholder,
+    std::string_view outputName,
+    const ExperimentalFeatureSettings & xpSettings)
+{
+    xpSettings.require(Xp::DynamicDerivations);
+    auto compressed = compressHash(placeholder.hash, 20);
+    auto clearText = "nix-computed-output:"
+        + compressed.to_string(Base32, false)
+        + ":" + std::string { outputName };
+    return DownstreamPlaceholder {
+        hashString(htSHA256, clearText)
+    };
+}
+
+}
diff --git a/src/libstore/downstream-placeholder.hh b/src/libstore/downstream-placeholder.hh
new file mode 100644
index 000000000..f0c0dee77
--- /dev/null
+++ b/src/libstore/downstream-placeholder.hh
@@ -0,0 +1,75 @@
+#pragma once
+///@file
+
+#include "hash.hh"
+#include "path.hh"
+
+namespace nix {
+
+/**
+ * Downstream Placeholders are opaque and almost certainly unique values
+ * used to allow derivations to refer to store objects which are yet to
+ * be built and for which we do not yet have store paths.
+ *
+ * They correspond to `DerivedPaths` that are not `DerivedPath::Opaque`,
+ * except for the cases involving input addressing or fixed outputs
+ * where we do know a store path for the derivation output in advance.
+ *
+ * Unlike `DerivationPath`, however, `DownstreamPlaceholder` is
+ * purposefully opaque and obfuscated. This is so they are hard to
+ * create by accident, and so substituting them (once we know what the
+ * path to the store object is) is unlikely to capture other stuff it
+ * shouldn't.
+ *
+ * We use them with `Derivation`: the `render()` method is called to
+ * render an opaque string which can be used in the derivation, and the
+ * resolving logic can substitute those strings for store paths when
+ * resolving `Derivation.inputDrvs` to `BasicDerivation.inputSrcs`.
+ */
+class DownstreamPlaceholder
+{
+    /**
+     * `DownstreamPlaceholder` is just a newtype of `Hash`.
+     * This is its only field.
+     */
+    Hash hash;
+
+    /**
+     * Newtype constructor
+     */
+    DownstreamPlaceholder(Hash hash) : hash(hash) { }
+
+public:
+    /**
+     * This creates an opaque and almost certainly unique string
+     * deterministically from the placeholder.
+     */
+    std::string render() const;
+
+    /**
+     * Create a placeholder for an unknown output of a content-addressed
+     * derivation.
+     *
+     * The derivation itself is known (we have a store path for it), but
+     * the output doesn't yet have a known store path.
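+     *
+     * A rough usage sketch (`someCaDrvPath` stands in for the store path
+     * of some content-addressed derivation):
+     *
+     *     auto placeholder = DownstreamPlaceholder::unknownCaOutput(someCaDrvPath, "out");
+     *     auto rendered = placeholder.render(); // opaque string of the form "/<base32 hash>"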
+     */
+    static DownstreamPlaceholder unknownCaOutput(
+        const StorePath & drvPath,
+        std::string_view outputName);
+
+    /**
+     * Create a placeholder for the output of an unknown derivation.
+     *
+     * The derivation is not yet known because it is a dynamic
+     * derivation --- it is itself an output of another derivation ---
+     * and we just have (another) placeholder for it.
+     *
+     * @param xpSettings Stop-gap to avoid globals during unit tests.
+     */
+    static DownstreamPlaceholder unknownDerivation(
+        const DownstreamPlaceholder & drvPlaceholder,
+        std::string_view outputName,
+        const ExperimentalFeatureSettings & xpSettings = experimentalFeatureSettings);
+};
+
+}
diff --git a/src/libstore/export-import.cc b/src/libstore/export-import.cc
index 4eb838b68..5ea263a86 100644
--- a/src/libstore/export-import.cc
+++ b/src/libstore/export-import.cc
@@ -45,7 +45,7 @@ void Store::exportPath(const StorePath & path, Sink & sink)
         teeSink
             << exportMagic
             << printStorePath(path);
-        worker_proto::write(*this, teeSink, info->references);
+        workerProtoWrite(*this, teeSink, info->references);
         teeSink
             << (info->deriver ? printStorePath(*info->deriver) : "")
             << 0;
@@ -73,7 +73,7 @@ StorePaths Store::importPaths(Source & source, CheckSigsFlag checkSigs)
 
         //Activity act(*logger, lvlInfo, "importing path '%s'", info.path);
 
-        auto references = worker_proto::read(*this, source, Phantom<StorePathSet> {});
+        auto references = WorkerProto<StorePathSet>::read(*this, source);
         auto deriver = readString(source);
 
         auto narHash = hashString(htSHA256, saved.s);
diff --git a/src/libstore/gc.cc b/src/libstore/gc.cc
index 0038ec802..3c9544017 100644
--- a/src/libstore/gc.cc
+++ b/src/libstore/gc.cc
@@ -110,6 +110,11 @@ void LocalStore::createTempRootsFile()
 
 void LocalStore::addTempRoot(const StorePath & path)
 {
+    if (readOnly) {
+        debug("Read-only store doesn't support creating lock files for temp roots, but nothing can be deleted anyways.");
+        return;
+    }
+
     createTempRootsFile();
 
     /* Open/create the global GC lock file. */
diff --git a/src/libstore/legacy-ssh-store.cc b/src/libstore/legacy-ssh-store.cc
index 2012584e0..2b7bebe9d 100644
--- a/src/libstore/legacy-ssh-store.cc
+++ b/src/libstore/legacy-ssh-store.cc
@@ -146,7 +146,7 @@ struct LegacySSHStore : public virtual LegacySSHStoreConfig, public virtual Stor
             auto deriver = readString(conn->from);
             if (deriver != "")
                 info->deriver = parseStorePath(deriver);
-            info->references = worker_proto::read(*this, conn->from, Phantom<StorePathSet> {});
+            info->references = WorkerProto<StorePathSet>::read(*this, conn->from);
             readLongLong(conn->from); // download size
             info->narSize = readLongLong(conn->from);
 
@@ -180,7 +180,7 @@ struct LegacySSHStore : public virtual LegacySSHStoreConfig, public virtual Stor
                 << printStorePath(info.path)
                 << (info.deriver ? printStorePath(*info.deriver) : "")
                 << info.narHash.to_string(Base16, false);
-            worker_proto::write(*this, conn->to, info.references);
+            workerProtoWrite(*this, conn->to, info.references);
             conn->to
                 << info.registrationTime
                 << info.narSize
@@ -209,7 +209,7 @@ struct LegacySSHStore : public virtual LegacySSHStoreConfig, public virtual Stor
         conn->to
             << exportMagic
             << printStorePath(info.path);
-        worker_proto::write(*this, conn->to, info.references);
+        workerProtoWrite(*this, conn->to, info.references);
         conn->to
             << (info.deriver ?
printStorePath(*info.deriver) : "") << 0 @@ -294,7 +294,7 @@ public: if (GET_PROTOCOL_MINOR(conn->remoteVersion) >= 3) conn->from >> status.timesBuilt >> status.isNonDeterministic >> status.startTime >> status.stopTime; if (GET_PROTOCOL_MINOR(conn->remoteVersion) >= 6) { - auto builtOutputs = worker_proto::read(*this, conn->from, Phantom {}); + auto builtOutputs = WorkerProto::read(*this, conn->from); for (auto && [output, realisation] : builtOutputs) status.builtOutputs.insert_or_assign( std::move(output.outputName), @@ -344,6 +344,17 @@ public: virtual ref getFSAccessor() override { unsupported("getFSAccessor"); } + /** + * The default instance would schedule the work on the client side, but + * for consistency with `buildPaths` and `buildDerivation` it should happen + * on the remote side. + * + * We make this fail for now so we can add implement this properly later + * without it being a breaking change. + */ + void repairPath(const StorePath & path) override + { unsupported("repairPath"); } + void computeFSClosure(const StorePathSet & paths, StorePathSet & out, bool flipDirection = false, bool includeOutputs = false, bool includeDerivers = false) override @@ -358,10 +369,10 @@ public: conn->to << cmdQueryClosure << includeOutputs; - worker_proto::write(*this, conn->to, paths); + workerProtoWrite(*this, conn->to, paths); conn->to.flush(); - for (auto & i : worker_proto::read(*this, conn->from, Phantom {})) + for (auto & i : WorkerProto::read(*this, conn->from)) out.insert(i); } @@ -374,10 +385,10 @@ public: << cmdQueryValidPaths << false // lock << maybeSubstitute; - worker_proto::write(*this, conn->to, paths); + workerProtoWrite(*this, conn->to, paths); conn->to.flush(); - return worker_proto::read(*this, conn->from, Phantom {}); + return WorkerProto::read(*this, conn->from); } void connect() override diff --git a/src/libstore/local-store.cc b/src/libstore/local-store.cc index 7fb312c37..4188d413f 100644 --- a/src/libstore/local-store.cc +++ b/src/libstore/local-store.cc @@ -202,10 +202,16 @@ LocalStore::LocalStore(const Params & params) createSymlink(profilesDir, gcRootsDir + "/profiles"); } + if (readOnly) { + experimentalFeatureSettings.require(Xp::ReadOnlyLocalStore); + } + for (auto & perUserDir : {profilesDir + "/per-user", gcRootsDir + "/per-user"}) { createDirs(perUserDir); - if (chmod(perUserDir.c_str(), 0755) == -1) - throw SysError("could not set permissions on '%s' to 755", perUserDir); + if (!readOnly) { + if (chmod(perUserDir.c_str(), 0755) == -1) + throw SysError("could not set permissions on '%s' to 755", perUserDir); + } } /* Optionally, create directories and set permissions for a @@ -269,10 +275,12 @@ LocalStore::LocalStore(const Params & params) /* Acquire the big fat lock in shared mode to make sure that no schema upgrade is in progress. */ - Path globalLockPath = dbDir + "/big-lock"; - globalLock = openLockFile(globalLockPath.c_str(), true); + if (!readOnly) { + Path globalLockPath = dbDir + "/big-lock"; + globalLock = openLockFile(globalLockPath.c_str(), true); + } - if (!lockFile(globalLock.get(), ltRead, false)) { + if (!readOnly && !lockFile(globalLock.get(), ltRead, false)) { printInfo("waiting for the big Nix store lock..."); lockFile(globalLock.get(), ltRead, true); } @@ -280,6 +288,14 @@ LocalStore::LocalStore(const Params & params) /* Check the current database schema and if necessary do an upgrade. 
*/ int curSchema = getSchema(); + if (readOnly && curSchema < nixSchemaVersion) { + debug("current schema version: %d", curSchema); + debug("supported schema version: %d", nixSchemaVersion); + throw Error(curSchema == 0 ? + "database does not exist, and cannot be created in read-only mode" : + "database schema needs migrating, but this cannot be done in read-only mode"); + } + if (curSchema > nixSchemaVersion) throw Error("current Nix store schema is version %1%, but I only support %2%", curSchema, nixSchemaVersion); @@ -344,7 +360,11 @@ LocalStore::LocalStore(const Params & params) else openDB(*state, false); if (experimentalFeatureSettings.isEnabled(Xp::CaDerivations)) { - migrateCASchema(state->db, dbDir + "/ca-schema", globalLock); + if (!readOnly) { + migrateCASchema(state->db, dbDir + "/ca-schema", globalLock); + } else { + throw Error("need to migrate to CA schema, but this cannot be done in read-only mode"); + } } /* Prepare SQL statements. */ @@ -475,13 +495,20 @@ int LocalStore::getSchema() void LocalStore::openDB(State & state, bool create) { - if (access(dbDir.c_str(), R_OK | W_OK)) + if (create && readOnly) { + throw Error("unable to create database while in read-only mode"); + } + + if (access(dbDir.c_str(), R_OK | (readOnly ? 0 : W_OK))) throw SysError("Nix database directory '%1%' is not writable", dbDir); /* Open the Nix database. */ std::string dbPath = dbDir + "/db.sqlite"; auto & db(state.db); - state.db = SQLite(dbPath, create); + auto openMode = readOnly ? SQLiteOpenMode::Immutable + : create ? SQLiteOpenMode::Normal + : SQLiteOpenMode::NoCreate; + state.db = SQLite(dbPath, openMode); #ifdef __CYGWIN__ /* The cygwin version of sqlite3 has a patch which calls diff --git a/src/libstore/local-store.hh b/src/libstore/local-store.hh index b61ec3590..46a660c68 100644 --- a/src/libstore/local-store.hh +++ b/src/libstore/local-store.hh @@ -46,6 +46,22 @@ struct LocalStoreConfig : virtual LocalFSStoreConfig "require-sigs", "Whether store paths copied into this store should have a trusted signature."}; + Setting readOnly{(StoreConfig*) this, + false, + "read-only", + R"( + Allow this store to be opened when its database is on a read-only filesystem. + + Normally Nix will attempt to open the store database in read-write mode, even + for querying (when write access is not needed). This causes it to fail if the + database is on a read-only filesystem. + + Enable read-only mode to disable locking and open the SQLite database with the + **immutable** parameter set. Do not use this unless the filesystem is read-only. + Using it when the filesystem is writable can cause incorrect query results or + corruption errors if the database is changed by another process. 
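+
+      For example, assuming the `read-only-local-store` experimental feature is
+      enabled, a query-only command could use such a store via a URI like
+      `local?read-only=true`.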
+ )"}; + const std::string name() override { return "Local Store"; } std::string doc() override; @@ -240,8 +256,6 @@ public: void vacuumDB(); - void repairPath(const StorePath & path) override; - void addSignatures(const StorePath & storePath, const StringSet & sigs) override; /** diff --git a/src/libstore/path-info.cc b/src/libstore/path-info.cc index e60d7abe0..97b72faa3 100644 --- a/src/libstore/path-info.cc +++ b/src/libstore/path-info.cc @@ -1,5 +1,6 @@ #include "path-info.hh" #include "worker-protocol.hh" +#include "store-api.hh" namespace nix { @@ -131,7 +132,7 @@ ValidPathInfo ValidPathInfo::read(Source & source, const Store & store, unsigned auto narHash = Hash::parseAny(readString(source), htSHA256); ValidPathInfo info(path, narHash); if (deriver != "") info.deriver = store.parseStorePath(deriver); - info.references = worker_proto::read(store, source, Phantom {}); + info.references = WorkerProto::read(store, source); source >> info.registrationTime >> info.narSize; if (format >= 16) { source >> info.ultimate; @@ -152,7 +153,7 @@ void ValidPathInfo::write( sink << store.printStorePath(path); sink << (deriver ? store.printStorePath(*deriver) : "") << narHash.to_string(Base16, false); - worker_proto::write(store, sink, references); + workerProtoWrite(store, sink, references); sink << registrationTime << narSize; if (format >= 16) { sink << ultimate diff --git a/src/libstore/remote-store.cc b/src/libstore/remote-store.cc index 0ed17a6ce..c3dfb5979 100644 --- a/src/libstore/remote-store.cc +++ b/src/libstore/remote-store.cc @@ -18,189 +18,6 @@ namespace nix { -namespace worker_proto { - -std::string read(const Store & store, Source & from, Phantom _) -{ - return readString(from); -} - -void write(const Store & store, Sink & out, const std::string & str) -{ - out << str; -} - - -StorePath read(const Store & store, Source & from, Phantom _) -{ - return store.parseStorePath(readString(from)); -} - -void write(const Store & store, Sink & out, const StorePath & storePath) -{ - out << store.printStorePath(storePath); -} - - -std::optional read(const Store & store, Source & from, Phantom> _) -{ - auto temp = readNum(from); - switch (temp) { - case 0: - return std::nullopt; - case 1: - return { Trusted }; - case 2: - return { NotTrusted }; - default: - throw Error("Invalid trusted status from remote"); - } -} - -void write(const Store & store, Sink & out, const std::optional & optTrusted) -{ - if (!optTrusted) - out << (uint8_t)0; - else { - switch (*optTrusted) { - case Trusted: - out << (uint8_t)1; - break; - case NotTrusted: - out << (uint8_t)2; - break; - default: - assert(false); - }; - } -} - - -ContentAddress read(const Store & store, Source & from, Phantom _) -{ - return ContentAddress::parse(readString(from)); -} - -void write(const Store & store, Sink & out, const ContentAddress & ca) -{ - out << renderContentAddress(ca); -} - - -DerivedPath read(const Store & store, Source & from, Phantom _) -{ - auto s = readString(from); - return DerivedPath::parseLegacy(store, s); -} - -void write(const Store & store, Sink & out, const DerivedPath & req) -{ - out << req.to_string_legacy(store); -} - - -Realisation read(const Store & store, Source & from, Phantom _) -{ - std::string rawInput = readString(from); - return Realisation::fromJSON( - nlohmann::json::parse(rawInput), - "remote-protocol" - ); -} - -void write(const Store & store, Sink & out, const Realisation & realisation) -{ - out << realisation.toJSON().dump(); -} - - -DrvOutput read(const Store & store, Source & from, Phantom _) -{ - 
return DrvOutput::parse(readString(from)); -} - -void write(const Store & store, Sink & out, const DrvOutput & drvOutput) -{ - out << drvOutput.to_string(); -} - - -KeyedBuildResult read(const Store & store, Source & from, Phantom _) -{ - auto path = worker_proto::read(store, from, Phantom {}); - auto br = worker_proto::read(store, from, Phantom {}); - return KeyedBuildResult { - std::move(br), - /* .path = */ std::move(path), - }; -} - -void write(const Store & store, Sink & to, const KeyedBuildResult & res) -{ - worker_proto::write(store, to, res.path); - worker_proto::write(store, to, static_cast(res)); -} - - -BuildResult read(const Store & store, Source & from, Phantom _) -{ - BuildResult res; - res.status = (BuildResult::Status) readInt(from); - from - >> res.errorMsg - >> res.timesBuilt - >> res.isNonDeterministic - >> res.startTime - >> res.stopTime; - auto builtOutputs = worker_proto::read(store, from, Phantom {}); - for (auto && [output, realisation] : builtOutputs) - res.builtOutputs.insert_or_assign( - std::move(output.outputName), - std::move(realisation)); - return res; -} - -void write(const Store & store, Sink & to, const BuildResult & res) -{ - to - << res.status - << res.errorMsg - << res.timesBuilt - << res.isNonDeterministic - << res.startTime - << res.stopTime; - DrvOutputs builtOutputs; - for (auto & [output, realisation] : res.builtOutputs) - builtOutputs.insert_or_assign(realisation.id, realisation); - worker_proto::write(store, to, builtOutputs); -} - - -std::optional read(const Store & store, Source & from, Phantom> _) -{ - auto s = readString(from); - return s == "" ? std::optional {} : store.parseStorePath(s); -} - -void write(const Store & store, Sink & out, const std::optional & storePathOpt) -{ - out << (storePathOpt ? store.printStorePath(*storePathOpt) : ""); -} - - -std::optional read(const Store & store, Source & from, Phantom> _) -{ - return ContentAddress::parseOpt(readString(from)); -} - -void write(const Store & store, Sink & out, const std::optional & caOpt) -{ - out << (caOpt ? renderContentAddress(*caOpt) : ""); -} - -} - - /* TODO: Separate these store impls into different files, give them better names */ RemoteStore::RemoteStore(const Params & params) : RemoteStoreConfig(params) @@ -283,7 +100,7 @@ void RemoteStore::initConnection(Connection & conn) } if (GET_PROTOCOL_MINOR(conn.daemonVersion) >= 35) { - conn.remoteTrustsUs = worker_proto::read(*this, conn.from, Phantom> {}); + conn.remoteTrustsUs = WorkerProto>::read(*this, conn.from); } else { // We don't know the answer; protocol to old. conn.remoteTrustsUs = std::nullopt; @@ -410,12 +227,12 @@ StorePathSet RemoteStore::queryValidPaths(const StorePathSet & paths, Substitute return res; } else { conn->to << wopQueryValidPaths; - worker_proto::write(*this, conn->to, paths); + workerProtoWrite(*this, conn->to, paths); if (GET_PROTOCOL_MINOR(conn->daemonVersion) >= 27) { conn->to << (settings.buildersUseSubstitutes ? 
1 : 0); } conn.processStderr(); - return worker_proto::read(*this, conn->from, Phantom {}); + return WorkerProto::read(*this, conn->from); } } @@ -425,7 +242,7 @@ StorePathSet RemoteStore::queryAllValidPaths() auto conn(getConnection()); conn->to << wopQueryAllValidPaths; conn.processStderr(); - return worker_proto::read(*this, conn->from, Phantom {}); + return WorkerProto::read(*this, conn->from); } @@ -442,9 +259,9 @@ StorePathSet RemoteStore::querySubstitutablePaths(const StorePathSet & paths) return res; } else { conn->to << wopQuerySubstitutablePaths; - worker_proto::write(*this, conn->to, paths); + workerProtoWrite(*this, conn->to, paths); conn.processStderr(); - return worker_proto::read(*this, conn->from, Phantom {}); + return WorkerProto::read(*this, conn->from); } } @@ -466,7 +283,7 @@ void RemoteStore::querySubstitutablePathInfos(const StorePathCAMap & pathsMap, S auto deriver = readString(conn->from); if (deriver != "") info.deriver = parseStorePath(deriver); - info.references = worker_proto::read(*this, conn->from, Phantom {}); + info.references = WorkerProto::read(*this, conn->from); info.downloadSize = readLongLong(conn->from); info.narSize = readLongLong(conn->from); infos.insert_or_assign(i.first, std::move(info)); @@ -479,9 +296,9 @@ void RemoteStore::querySubstitutablePathInfos(const StorePathCAMap & pathsMap, S StorePathSet paths; for (auto & path : pathsMap) paths.insert(path.first); - worker_proto::write(*this, conn->to, paths); + workerProtoWrite(*this, conn->to, paths); } else - worker_proto::write(*this, conn->to, pathsMap); + workerProtoWrite(*this, conn->to, pathsMap); conn.processStderr(); size_t count = readNum(conn->from); for (size_t n = 0; n < count; n++) { @@ -489,7 +306,7 @@ void RemoteStore::querySubstitutablePathInfos(const StorePathCAMap & pathsMap, S auto deriver = readString(conn->from); if (deriver != "") info.deriver = parseStorePath(deriver); - info.references = worker_proto::read(*this, conn->from, Phantom {}); + info.references = WorkerProto::read(*this, conn->from); info.downloadSize = readLongLong(conn->from); info.narSize = readLongLong(conn->from); } @@ -532,7 +349,7 @@ void RemoteStore::queryReferrers(const StorePath & path, auto conn(getConnection()); conn->to << wopQueryReferrers << printStorePath(path); conn.processStderr(); - for (auto & i : worker_proto::read(*this, conn->from, Phantom {})) + for (auto & i : WorkerProto::read(*this, conn->from)) referrers.insert(i); } @@ -542,7 +359,7 @@ StorePathSet RemoteStore::queryValidDerivers(const StorePath & path) auto conn(getConnection()); conn->to << wopQueryValidDerivers << printStorePath(path); conn.processStderr(); - return worker_proto::read(*this, conn->from, Phantom {}); + return WorkerProto::read(*this, conn->from); } @@ -554,7 +371,7 @@ StorePathSet RemoteStore::queryDerivationOutputs(const StorePath & path) auto conn(getConnection()); conn->to << wopQueryDerivationOutputs << printStorePath(path); conn.processStderr(); - return worker_proto::read(*this, conn->from, Phantom {}); + return WorkerProto::read(*this, conn->from); } @@ -564,7 +381,7 @@ std::map> RemoteStore::queryPartialDerivat auto conn(getConnection()); conn->to << wopQueryDerivationOutputMap << printStorePath(path); conn.processStderr(); - return worker_proto::read(*this, conn->from, Phantom>> {}); + return WorkerProto>>::read(*this, conn->from); } else { // Fallback for old daemon versions. 
// For floating-CA derivations (and their co-dependencies) this is an @@ -610,7 +427,7 @@ ref RemoteStore::addCAToStore( << wopAddToStore << name << caMethod.render(hashType); - worker_proto::write(*this, conn->to, references); + workerProtoWrite(*this, conn->to, references); conn->to << repair; // The dump source may invoke the store, so we need to make some room. @@ -635,7 +452,7 @@ ref RemoteStore::addCAToStore( name, printHashType(hashType)); std::string s = dump.drain(); conn->to << wopAddTextToStore << name << s; - worker_proto::write(*this, conn->to, references); + workerProtoWrite(*this, conn->to, references); conn.processStderr(); }, [&](const FileIngestionMethod & fim) -> void { @@ -701,7 +518,7 @@ void RemoteStore::addToStore(const ValidPathInfo & info, Source & source, sink << exportMagic << printStorePath(info.path); - worker_proto::write(*this, sink, info.references); + workerProtoWrite(*this, sink, info.references); sink << (info.deriver ? printStorePath(*info.deriver) : "") << 0 // == no legacy signature @@ -711,7 +528,7 @@ void RemoteStore::addToStore(const ValidPathInfo & info, Source & source, conn.processStderr(0, source2.get()); - auto importedPaths = worker_proto::read(*this, conn->from, Phantom {}); + auto importedPaths = WorkerProto::read(*this, conn->from); assert(importedPaths.size() <= 1); } @@ -720,7 +537,7 @@ void RemoteStore::addToStore(const ValidPathInfo & info, Source & source, << printStorePath(info.path) << (info.deriver ? printStorePath(*info.deriver) : "") << info.narHash.to_string(Base16, false); - worker_proto::write(*this, conn->to, info.references); + workerProtoWrite(*this, conn->to, info.references); conn->to << info.registrationTime << info.narSize << info.ultimate << info.sigs << renderContentAddress(info.ca) << repair << !checkSigs; @@ -793,7 +610,7 @@ void RemoteStore::registerDrvOutput(const Realisation & info) conn->to << info.id.to_string(); conn->to << std::string(info.outPath.to_string()); } else { - worker_proto::write(*this, conn->to, info); + workerProtoWrite(*this, conn->to, info); } conn.processStderr(); } @@ -815,14 +632,14 @@ void RemoteStore::queryRealisationUncached(const DrvOutput & id, auto real = [&]() -> std::shared_ptr { if (GET_PROTOCOL_MINOR(conn->daemonVersion) < 31) { - auto outPaths = worker_proto::read( - *this, conn->from, Phantom> {}); + auto outPaths = WorkerProto>::read( + *this, conn->from); if (outPaths.empty()) return nullptr; return std::make_shared(Realisation { .id = id, .outPath = *outPaths.begin() }); } else { - auto realisations = worker_proto::read( - *this, conn->from, Phantom> {}); + auto realisations = WorkerProto>::read( + *this, conn->from); if (realisations.empty()) return nullptr; return std::make_shared(*realisations.begin()); @@ -836,7 +653,7 @@ void RemoteStore::queryRealisationUncached(const DrvOutput & id, static void writeDerivedPaths(RemoteStore & store, ConnectionHandle & conn, const std::vector & reqs) { if (GET_PROTOCOL_MINOR(conn->daemonVersion) >= 30) { - worker_proto::write(store, conn->to, reqs); + workerProtoWrite(store, conn->to, reqs); } else { Strings ss; for (auto & p : reqs) { @@ -906,7 +723,7 @@ std::vector RemoteStore::buildPathsWithResults( writeDerivedPaths(*this, conn, paths); conn->to << buildMode; conn.processStderr(); - return worker_proto::read(*this, conn->from, Phantom> {}); + return WorkerProto>::read(*this, conn->from); } else { // Avoid deadlock. 
conn_.reset(); @@ -989,7 +806,7 @@ BuildResult RemoteStore::buildDerivation(const StorePath & drvPath, const BasicD conn->from >> res.timesBuilt >> res.isNonDeterministic >> res.startTime >> res.stopTime; } if (GET_PROTOCOL_MINOR(conn->daemonVersion) >= 28) { - auto builtOutputs = worker_proto::read(*this, conn->from, Phantom {}); + auto builtOutputs = WorkerProto::read(*this, conn->from); for (auto && [output, realisation] : builtOutputs) res.builtOutputs.insert_or_assign( std::move(output.outputName), @@ -1048,7 +865,7 @@ void RemoteStore::collectGarbage(const GCOptions & options, GCResults & results) conn->to << wopCollectGarbage << options.action; - worker_proto::write(*this, conn->to, options.pathsToDelete); + workerProtoWrite(*this, conn->to, options.pathsToDelete); conn->to << options.ignoreLiveness << options.maxFreed /* removed options */ @@ -1107,9 +924,9 @@ void RemoteStore::queryMissing(const std::vector & targets, conn->to << wopQueryMissing; writeDerivedPaths(*this, conn, targets); conn.processStderr(); - willBuild = worker_proto::read(*this, conn->from, Phantom {}); - willSubstitute = worker_proto::read(*this, conn->from, Phantom {}); - unknown = worker_proto::read(*this, conn->from, Phantom {}); + willBuild = WorkerProto::read(*this, conn->from); + willSubstitute = WorkerProto::read(*this, conn->from); + unknown = WorkerProto::read(*this, conn->from); conn->from >> downloadSize >> narSize; return; } diff --git a/src/libstore/remote-store.hh b/src/libstore/remote-store.hh index 82e4656ab..4f3971bfd 100644 --- a/src/libstore/remote-store.hh +++ b/src/libstore/remote-store.hh @@ -137,6 +137,17 @@ public: bool verifyStore(bool checkContents, RepairFlag repair) override; + /** + * The default instance would schedule the work on the client side, but + * for consistency with `buildPaths` and `buildDerivation` it should happen + * on the remote side. + * + * We make this fail for now so we can add implement this properly later + * without it being a breaking change. + */ + void repairPath(const StorePath & path) override + { unsupported("repairPath"); } + void addSignatures(const StorePath & storePath, const StringSet & sigs) override; void queryMissing(const std::vector & targets, diff --git a/src/libstore/sqlite.cc b/src/libstore/sqlite.cc index df334c23c..7c8decb74 100644 --- a/src/libstore/sqlite.cc +++ b/src/libstore/sqlite.cc @@ -1,6 +1,7 @@ #include "sqlite.hh" #include "globals.hh" #include "util.hh" +#include "url.hh" #include @@ -50,15 +51,17 @@ static void traceSQL(void * x, const char * sql) notice("SQL<[%1%]>", sql); }; -SQLite::SQLite(const Path & path, bool create) +SQLite::SQLite(const Path & path, SQLiteOpenMode mode) { // useSQLiteWAL also indicates what virtual file system we need. Using // `unix-dotfile` is needed on NFS file systems and on Windows' Subsystem // for Linux (WSL) where useSQLiteWAL should be false by default. const char *vfs = settings.useSQLiteWAL ? 0 : "unix-dotfile"; - int flags = SQLITE_OPEN_READWRITE; - if (create) flags |= SQLITE_OPEN_CREATE; - int ret = sqlite3_open_v2(path.c_str(), &db, flags, vfs); + bool immutable = mode == SQLiteOpenMode::Immutable; + int flags = immutable ? SQLITE_OPEN_READONLY : SQLITE_OPEN_READWRITE; + if (mode == SQLiteOpenMode::Normal) flags |= SQLITE_OPEN_CREATE; + auto uri = "file:" + percentEncode(path) + "?immutable=" + (immutable ? 
"1" : "0"); + int ret = sqlite3_open_v2(uri.c_str(), &db, SQLITE_OPEN_URI | flags, vfs); if (ret != SQLITE_OK) { const char * err = sqlite3_errstr(ret); throw Error("cannot open SQLite database '%s': %s", path, err); diff --git a/src/libstore/sqlite.hh b/src/libstore/sqlite.hh index 6e14852cb..b0e84f8ed 100644 --- a/src/libstore/sqlite.hh +++ b/src/libstore/sqlite.hh @@ -11,6 +11,27 @@ struct sqlite3_stmt; namespace nix { +enum class SQLiteOpenMode { + /** + * Open the database in read-write mode. + * If the database does not exist, it will be created. + */ + Normal, + /** + * Open the database in read-write mode. + * Fails with an error if the database does not exist. + */ + NoCreate, + /** + * Open the database in immutable mode. + * In addition to the database being read-only, + * no wal or journal files will be created by sqlite. + * Use this mode if the database is on a read-only filesystem. + * Fails with an error if the database does not exist. + */ + Immutable +}; + /** * RAII wrapper to close a SQLite database automatically. */ @@ -18,7 +39,7 @@ struct SQLite { sqlite3 * db = 0; SQLite() { } - SQLite(const Path & path, bool create = true); + SQLite(const Path & path, SQLiteOpenMode mode = SQLiteOpenMode::Normal); SQLite(const SQLite & from) = delete; SQLite& operator = (const SQLite & from) = delete; SQLite& operator = (SQLite && from) { db = from.db; from.db = 0; return *this; } diff --git a/src/libstore/ssh.cc b/src/libstore/ssh.cc index 6f6deda51..fae99d75b 100644 --- a/src/libstore/ssh.cc +++ b/src/libstore/ssh.cc @@ -41,6 +41,11 @@ void SSHMaster::addCommonSSHOpts(Strings & args) args.push_back("-oLocalCommand=echo started"); } +bool SSHMaster::isMasterRunning() { + auto res = runProgram(RunOptions {.program = "ssh", .args = {"-O", "check", host}, .mergeStderrToStdout = true}); + return res.first == 0; +} + std::unique_ptr SSHMaster::startCommand(const std::string & command) { Path socketPath = startMaster(); @@ -97,7 +102,7 @@ std::unique_ptr SSHMaster::startCommand(const std::string // Wait for the SSH connection to be established, // So that we don't overwrite the password prompt with our progress bar. 
- if (!fakeSSH && !useMaster) { + if (!fakeSSH && !useMaster && !isMasterRunning()) { std::string reply; try { reply = readLine(out.readSide.get()); @@ -133,6 +138,8 @@ Path SSHMaster::startMaster() logger->pause(); Finally cleanup = [&]() { logger->resume(); }; + bool wasMasterRunning = isMasterRunning(); + state->sshMaster = startProcess([&]() { restoreProcessContext(); @@ -152,13 +159,15 @@ Path SSHMaster::startMaster() out.writeSide = -1; - std::string reply; - try { - reply = readLine(out.readSide.get()); - } catch (EndOfFile & e) { } + if (!wasMasterRunning) { + std::string reply; + try { + reply = readLine(out.readSide.get()); + } catch (EndOfFile & e) { } - if (reply != "started") - throw Error("failed to start SSH master connection to '%s'", host); + if (reply != "started") + throw Error("failed to start SSH master connection to '%s'", host); + } return state->socketPath; } diff --git a/src/libstore/ssh.hh b/src/libstore/ssh.hh index c86a8a986..94b952af9 100644 --- a/src/libstore/ssh.hh +++ b/src/libstore/ssh.hh @@ -28,6 +28,7 @@ private: Sync state_; void addCommonSSHOpts(Strings & args); + bool isMasterRunning(); public: diff --git a/src/libstore/store-api.hh b/src/libstore/store-api.hh index bad610014..2ecbe2708 100644 --- a/src/libstore/store-api.hh +++ b/src/libstore/store-api.hh @@ -679,8 +679,7 @@ public: * Repair the contents of the given path by redownloading it using * a substituter (if available). */ - virtual void repairPath(const StorePath & path) - { unsupported("repairPath"); } + virtual void repairPath(const StorePath & path); /** * Add signatures to the specified store path. The signatures are diff --git a/src/libstore/tests/downstream-placeholder.cc b/src/libstore/tests/downstream-placeholder.cc new file mode 100644 index 000000000..ec3e1000f --- /dev/null +++ b/src/libstore/tests/downstream-placeholder.cc @@ -0,0 +1,33 @@ +#include + +#include "downstream-placeholder.hh" + +namespace nix { + +TEST(DownstreamPlaceholder, unknownCaOutput) { + ASSERT_EQ( + DownstreamPlaceholder::unknownCaOutput( + StorePath { "g1w7hy3qg1w7hy3qg1w7hy3qg1w7hy3q-foo.drv" }, + "out").render(), + "/0c6rn30q4frawknapgwq386zq358m8r6msvywcvc89n6m5p2dgbz"); +} + +TEST(DownstreamPlaceholder, unknownDerivation) { + /** + * We set these in tests rather than the regular globals so we don't have + * to worry about race conditions if the tests run concurrently. 
+ */ + ExperimentalFeatureSettings mockXpSettings; + mockXpSettings.set("experimental-features", "dynamic-derivations ca-derivations"); + + ASSERT_EQ( + DownstreamPlaceholder::unknownDerivation( + DownstreamPlaceholder::unknownCaOutput( + StorePath { "g1w7hy3qg1w7hy3qg1w7hy3qg1w7hy3q-foo.drv.drv" }, + "out"), + "out", + mockXpSettings).render(), + "/0gn6agqxjyyalf0dpihgyf49xq5hqxgw100f0wydnj6yqrhqsb3w"); +} + +} diff --git a/src/libstore/worker-protocol.cc b/src/libstore/worker-protocol.cc new file mode 100644 index 000000000..51bb12026 --- /dev/null +++ b/src/libstore/worker-protocol.cc @@ -0,0 +1,192 @@ +#include "serialise.hh" +#include "util.hh" +#include "path-with-outputs.hh" +#include "store-api.hh" +#include "build-result.hh" +#include "worker-protocol.hh" +#include "archive.hh" +#include "derivations.hh" + +#include + +namespace nix { + +std::string WorkerProto::read(const Store & store, Source & from) +{ + return readString(from); +} + +void WorkerProto::write(const Store & store, Sink & out, const std::string & str) +{ + out << str; +} + + +StorePath WorkerProto::read(const Store & store, Source & from) +{ + return store.parseStorePath(readString(from)); +} + +void WorkerProto::write(const Store & store, Sink & out, const StorePath & storePath) +{ + out << store.printStorePath(storePath); +} + + +std::optional WorkerProto>::read(const Store & store, Source & from) +{ + auto temp = readNum(from); + switch (temp) { + case 0: + return std::nullopt; + case 1: + return { Trusted }; + case 2: + return { NotTrusted }; + default: + throw Error("Invalid trusted status from remote"); + } +} + +void WorkerProto>::write(const Store & store, Sink & out, const std::optional & optTrusted) +{ + if (!optTrusted) + out << (uint8_t)0; + else { + switch (*optTrusted) { + case Trusted: + out << (uint8_t)1; + break; + case NotTrusted: + out << (uint8_t)2; + break; + default: + assert(false); + }; + } +} + + +ContentAddress WorkerProto::read(const Store & store, Source & from) +{ + return ContentAddress::parse(readString(from)); +} + +void WorkerProto::write(const Store & store, Sink & out, const ContentAddress & ca) +{ + out << renderContentAddress(ca); +} + + +DerivedPath WorkerProto::read(const Store & store, Source & from) +{ + auto s = readString(from); + return DerivedPath::parseLegacy(store, s); +} + +void WorkerProto::write(const Store & store, Sink & out, const DerivedPath & req) +{ + out << req.to_string_legacy(store); +} + + +Realisation WorkerProto::read(const Store & store, Source & from) +{ + std::string rawInput = readString(from); + return Realisation::fromJSON( + nlohmann::json::parse(rawInput), + "remote-protocol" + ); +} + +void WorkerProto::write(const Store & store, Sink & out, const Realisation & realisation) +{ + out << realisation.toJSON().dump(); +} + + +DrvOutput WorkerProto::read(const Store & store, Source & from) +{ + return DrvOutput::parse(readString(from)); +} + +void WorkerProto::write(const Store & store, Sink & out, const DrvOutput & drvOutput) +{ + out << drvOutput.to_string(); +} + + +KeyedBuildResult WorkerProto::read(const Store & store, Source & from) +{ + auto path = WorkerProto::read(store, from); + auto br = WorkerProto::read(store, from); + return KeyedBuildResult { + std::move(br), + /* .path = */ std::move(path), + }; +} + +void WorkerProto::write(const Store & store, Sink & to, const KeyedBuildResult & res) +{ + workerProtoWrite(store, to, res.path); + workerProtoWrite(store, to, static_cast(res)); +} + + +BuildResult WorkerProto::read(const Store & 
store, Source & from) +{ + BuildResult res; + res.status = (BuildResult::Status) readInt(from); + from + >> res.errorMsg + >> res.timesBuilt + >> res.isNonDeterministic + >> res.startTime + >> res.stopTime; + auto builtOutputs = WorkerProto::read(store, from); + for (auto && [output, realisation] : builtOutputs) + res.builtOutputs.insert_or_assign( + std::move(output.outputName), + std::move(realisation)); + return res; +} + +void WorkerProto::write(const Store & store, Sink & to, const BuildResult & res) +{ + to + << res.status + << res.errorMsg + << res.timesBuilt + << res.isNonDeterministic + << res.startTime + << res.stopTime; + DrvOutputs builtOutputs; + for (auto & [output, realisation] : res.builtOutputs) + builtOutputs.insert_or_assign(realisation.id, realisation); + workerProtoWrite(store, to, builtOutputs); +} + + +std::optional WorkerProto>::read(const Store & store, Source & from) +{ + auto s = readString(from); + return s == "" ? std::optional {} : store.parseStorePath(s); +} + +void WorkerProto>::write(const Store & store, Sink & out, const std::optional & storePathOpt) +{ + out << (storePathOpt ? store.printStorePath(*storePathOpt) : ""); +} + + +std::optional WorkerProto>::read(const Store & store, Source & from) +{ + return ContentAddress::parseOpt(readString(from)); +} + +void WorkerProto>::write(const Store & store, Sink & out, const std::optional & caOpt) +{ + out << (caOpt ? renderContentAddress(*caOpt) : ""); +} + +} diff --git a/src/libstore/worker-protocol.hh b/src/libstore/worker-protocol.hh index 34b2fc17b..f06332d17 100644 --- a/src/libstore/worker-protocol.hh +++ b/src/libstore/worker-protocol.hh @@ -1,7 +1,6 @@ #pragma once ///@file -#include "store-api.hh" #include "serialise.hh" namespace nix { @@ -79,41 +78,81 @@ typedef enum { class Store; struct Source; +// items being serialized +struct DerivedPath; +struct DrvOutput; +struct Realisation; +struct BuildResult; +struct KeyedBuildResult; +enum TrustedFlag : bool; + + /** - * Used to guide overloading + * Data type for canonical pairs of serializers for the worker protocol. * * See https://en.cppreference.com/w/cpp/language/adl for the broader * concept of what is going on here. */ template -struct Phantom {}; +struct WorkerProto { + static T read(const Store & store, Source & from); + static void write(const Store & store, Sink & out, const T & t); +}; +/** + * Wrapper function around `WorkerProto::write` that allows us to + * infer the type instead of having to write it down explicitly. + */ +template +void workerProtoWrite(const Store & store, Sink & out, const T & t) +{ + WorkerProto::write(store, out, t); +} -namespace worker_proto { -/* FIXME maybe move more stuff inside here */ +/** + * Declare a canonical serializer pair for the worker protocol. + * + * We specialize the struct merely to indicate that we are implementing + * the function for the given type. + * + * Some sort of `template<...>` must be used with the caller for this to + * be legal specialization syntax. See below for what that looks like in + * practice. 
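+ *
+ * As a rough sketch, a serializer for a hypothetical item type `MyType`
+ * would be declared as
+ *
+ *     template<>
+ *     MAKE_WORKER_PROTO(MyType);
+ *
+ * after which callers can read with `WorkerProto<MyType>::read(store, from)`
+ * and write with `workerProtoWrite(store, out, myValue)`.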
+ */ +#define MAKE_WORKER_PROTO(T) \ + struct WorkerProto< T > { \ + static T read(const Store & store, Source & from); \ + static void write(const Store & store, Sink & out, const T & t); \ + }; -#define MAKE_WORKER_PROTO(TEMPLATE, T) \ - TEMPLATE T read(const Store & store, Source & from, Phantom< T > _); \ - TEMPLATE void write(const Store & store, Sink & out, const T & str) +template<> +MAKE_WORKER_PROTO(std::string); +template<> +MAKE_WORKER_PROTO(StorePath); +template<> +MAKE_WORKER_PROTO(ContentAddress); +template<> +MAKE_WORKER_PROTO(DerivedPath); +template<> +MAKE_WORKER_PROTO(Realisation); +template<> +MAKE_WORKER_PROTO(DrvOutput); +template<> +MAKE_WORKER_PROTO(BuildResult); +template<> +MAKE_WORKER_PROTO(KeyedBuildResult); +template<> +MAKE_WORKER_PROTO(std::optional); -MAKE_WORKER_PROTO(, std::string); -MAKE_WORKER_PROTO(, StorePath); -MAKE_WORKER_PROTO(, ContentAddress); -MAKE_WORKER_PROTO(, DerivedPath); -MAKE_WORKER_PROTO(, Realisation); -MAKE_WORKER_PROTO(, DrvOutput); -MAKE_WORKER_PROTO(, BuildResult); -MAKE_WORKER_PROTO(, KeyedBuildResult); -MAKE_WORKER_PROTO(, std::optional); +template +MAKE_WORKER_PROTO(std::vector); +template +MAKE_WORKER_PROTO(std::set); -MAKE_WORKER_PROTO(template, std::vector); -MAKE_WORKER_PROTO(template, std::set); - -#define X_ template -#define Y_ std::map -MAKE_WORKER_PROTO(X_, Y_); +template +#define X_ std::map +MAKE_WORKER_PROTO(X_); #undef X_ -#undef Y_ /** * These use the empty string for the null case, relying on the fact @@ -129,72 +168,72 @@ MAKE_WORKER_PROTO(X_, Y_); * worker protocol harder to implement in other languages where such * specializations may not be allowed. */ -MAKE_WORKER_PROTO(, std::optional); -MAKE_WORKER_PROTO(, std::optional); +template<> +MAKE_WORKER_PROTO(std::optional); +template<> +MAKE_WORKER_PROTO(std::optional); template -std::vector read(const Store & store, Source & from, Phantom> _) +std::vector WorkerProto>::read(const Store & store, Source & from) { std::vector resSet; auto size = readNum(from); while (size--) { - resSet.push_back(read(store, from, Phantom {})); + resSet.push_back(WorkerProto::read(store, from)); } return resSet; } template -void write(const Store & store, Sink & out, const std::vector & resSet) +void WorkerProto>::write(const Store & store, Sink & out, const std::vector & resSet) { out << resSet.size(); for (auto & key : resSet) { - write(store, out, key); + WorkerProto::write(store, out, key); } } template -std::set read(const Store & store, Source & from, Phantom> _) +std::set WorkerProto>::read(const Store & store, Source & from) { std::set resSet; auto size = readNum(from); while (size--) { - resSet.insert(read(store, from, Phantom {})); + resSet.insert(WorkerProto::read(store, from)); } return resSet; } template -void write(const Store & store, Sink & out, const std::set & resSet) +void WorkerProto>::write(const Store & store, Sink & out, const std::set & resSet) { out << resSet.size(); for (auto & key : resSet) { - write(store, out, key); + WorkerProto::write(store, out, key); } } template -std::map read(const Store & store, Source & from, Phantom> _) +std::map WorkerProto>::read(const Store & store, Source & from) { std::map resMap; auto size = readNum(from); while (size--) { - auto k = read(store, from, Phantom {}); - auto v = read(store, from, Phantom {}); + auto k = WorkerProto::read(store, from); + auto v = WorkerProto::read(store, from); resMap.insert_or_assign(std::move(k), std::move(v)); } return resMap; } template -void write(const Store & store, Sink & out, const 
std::map & resMap)
+void WorkerProto<std::map<K, V>>::write(const Store & store, Sink & out, const std::map<K, V> & resMap)
 {
     out << resMap.size();
     for (auto & i : resMap) {
-        write(store, out, i.first);
-        write(store, out, i.second);
+        WorkerProto<K>::write(store, out, i.first);
+        WorkerProto<V>::write(store, out, i.second);
     }
 }
 
 }
-
-}
diff --git a/src/libutil/experimental-features.cc b/src/libutil/experimental-features.cc
index ad0ec0427..0ce676af3 100644
--- a/src/libutil/experimental-features.cc
+++ b/src/libutil/experimental-features.cc
@@ -12,7 +12,7 @@ struct ExperimentalFeatureDetails
     std::string_view description;
 };
 
-constexpr std::array xpFeatureDetails = {{
+constexpr std::array xpFeatureDetails = {{
     {
         .tag = Xp::CaDerivations,
         .name = "ca-derivations",
@@ -207,6 +207,25 @@ constexpr std::array xpFeatureDetails = {{
 
               - "text hashing" derivation outputs, so we can build .drv
                 files.
+
+              - dependencies in derivations on the outputs of
+                derivations that are themselves derivations outputs.
+        )",
+    },
+    {
+        .tag = Xp::ReadOnlyLocalStore,
+        .name = "read-only-local-store",
+        .description = R"(
+            Allow the use of the `read-only` parameter in local store URIs.
+
+            Set this parameter to `true` to allow stores with databases on read-only
+            filesystems to be opened for querying; ordinarily Nix will refuse to do this.
+
+            Enabling this setting disables the locking required for safe concurrent
+            access, so you should be certain that the database will not be changed.
+            While the filesystem the database resides on might be read-only to this
+            process, consider whether another user, process, or system, might have
+            write access to it.
         )",
     },
 }};
diff --git a/src/libutil/experimental-features.hh b/src/libutil/experimental-features.hh
index 409100592..19d425b6d 100644
--- a/src/libutil/experimental-features.hh
+++ b/src/libutil/experimental-features.hh
@@ -30,6 +30,7 @@ enum struct ExperimentalFeature
     DiscardReferences,
     DaemonTrustOverride,
     DynamicDerivations,
+    ReadOnlyLocalStore,
 };
 
 /**
diff --git a/src/libutil/util.cc b/src/libutil/util.cc
index 21d1c8dcd..3a8309149 100644
--- a/src/libutil/util.cc
+++ b/src/libutil/util.cc
@@ -1141,9 +1141,9 @@ std::vector stringsToCharPtrs(const Strings & ss)
 }
 
 std::string runProgram(Path program, bool searchPath, const Strings & args,
-    const std::optional<std::string> & input)
+    const std::optional<std::string> & input, bool isInteractive)
 {
-    auto res = runProgram(RunOptions {.program = program, .searchPath = searchPath, .args = args, .input = input});
+    auto res = runProgram(RunOptions {.program = program, .searchPath = searchPath, .args = args, .input = input, .isInteractive = isInteractive});
 
     if (!statusOk(res.first))
         throw ExecError(res.first, "program '%1%' %2%", program, statusToString(res.first));
@@ -1193,6 +1193,16 @@ void runProgram2(const RunOptions & options)
     // case), so we can't use it if we alter the environment
     processOptions.allowVfork = !options.environment;
 
+    std::optional<Finally<std::function<void()>>> resumeLoggerDefer;
+    if (options.isInteractive) {
+        logger->pause();
+        resumeLoggerDefer.emplace(
+            []() {
+                logger->resume();
+            }
+        );
+    }
+
     /* Fork. */
     Pid pid = startProcess([&]() {
         if (options.environment)
diff --git a/src/libutil/util.hh b/src/libutil/util.hh
index 040fed68f..a7907cd14 100644
--- a/src/libutil/util.hh
+++ b/src/libutil/util.hh
@@ -415,7 +415,7 @@ pid_t startProcess(std::function fun, const ProcessOptions & options = P
  */
 std::string runProgram(Path program, bool searchPath = false,
     const Strings & args = Strings(),
-    const std::optional<std::string> & input = {});
+    const std::optional<std::string> & input = {}, bool isInteractive = false);
 
 struct RunOptions
 {
@@ -430,6 +430,7 @@ struct RunOptions
     Source * standardIn = nullptr;
     Sink * standardOut = nullptr;
     bool mergeStderrToStdout = false;
+    bool isInteractive = false;
 };
 
 std::pair<int, std::string> runProgram(RunOptions && options);
diff --git a/src/nix-collect-garbage/nix-collect-garbage.cc b/src/nix-collect-garbage/nix-collect-garbage.cc
index 3cc57af4e..cb1f42e35 100644
--- a/src/nix-collect-garbage/nix-collect-garbage.cc
+++ b/src/nix-collect-garbage/nix-collect-garbage.cc
@@ -77,7 +77,12 @@ static int main_nix_collect_garbage(int argc, char * * argv)
         return true;
     });
 
-    if (removeOld) removeOldGenerations(profilesDir());
+    if (removeOld) {
+        std::set<Path> dirsToClean = {
+            profilesDir(), settings.nixStateDir + "/profiles", dirOf(getDefaultProfile())};
+        for (auto & dir : dirsToClean)
+            removeOldGenerations(dir);
+    }
 
     // Run the actual garbage collector.
     if (!dryRun) {
diff --git a/src/nix-store/nix-store.cc b/src/nix-store/nix-store.cc
index 40f30eb63..61c189efb 100644
--- a/src/nix-store/nix-store.cc
+++ b/src/nix-store/nix-store.cc
@@ -849,7 +849,7 @@ static void opServe(Strings opFlags, Strings opArgs)
         case cmdQueryValidPaths: {
             bool lock = readInt(in);
             bool substitute = readInt(in);
-            auto paths = worker_proto::read(*store, in, Phantom<StorePathSet> {});
+            auto paths = WorkerProto<StorePathSet>::read(*store, in);
             if (lock && writeAllowed)
                 for (auto & path : paths)
                     store->addTempRoot(path);
@@ -858,19 +858,19 @@ static void opServe(Strings opFlags, Strings opArgs)
                 store->substitutePaths(paths);
             }
 
-            worker_proto::write(*store, out, store->queryValidPaths(paths));
+            workerProtoWrite(*store, out, store->queryValidPaths(paths));
             break;
         }
 
         case cmdQueryPathInfos: {
-            auto paths = worker_proto::read(*store, in, Phantom<StorePathSet> {});
+            auto paths = WorkerProto<StorePathSet>::read(*store, in);
             // !!! Maybe we want a queryPathInfos?
             for (auto & i : paths) {
                 try {
                     auto info = store->queryPathInfo(i);
                     out << store->printStorePath(info->path)
                         << (info->deriver ? store->printStorePath(*info->deriver) : "");
-                    worker_proto::write(*store, out, info->references);
+                    workerProtoWrite(*store, out, info->references);
                     // !!! Maybe we want compression?
                     out << info->narSize // downloadSize
                         << info->narSize;
@@ -898,7 +898,7 @@ static void opServe(Strings opFlags, Strings opArgs)
 
         case cmdExportPaths: {
             readInt(in); // obsolete
-            store->exportPaths(worker_proto::read(*store, in, Phantom<StorePathSet> {}), out);
+            store->exportPaths(WorkerProto<StorePathSet>::read(*store, in), out);
             break;
         }
 
@@ -944,7 +944,7 @@ static void opServe(Strings opFlags, Strings opArgs)
                 DrvOutputs builtOutputs;
                 for (auto & [output, realisation] : status.builtOutputs)
                     builtOutputs.insert_or_assign(realisation.id, realisation);
-                worker_proto::write(*store, out, builtOutputs);
+                workerProtoWrite(*store, out, builtOutputs);
             }
 
             break;
@@ -953,9 +953,9 @@ static void opServe(Strings opFlags, Strings opArgs)
         case cmdQueryClosure: {
            bool includeOutputs = readInt(in);
            StorePathSet closure;
-            store->computeFSClosure(worker_proto::read(*store, in, Phantom<StorePathSet> {}),
+            store->computeFSClosure(WorkerProto<StorePathSet>::read(*store, in),
                 closure, false, includeOutputs);
-            worker_proto::write(*store, out, closure);
+            workerProtoWrite(*store, out, closure);
             break;
         }
 
@@ -970,7 +970,7 @@ static void opServe(Strings opFlags, Strings opArgs)
             };
             if (deriver != "")
                 info.deriver = store->parseStorePath(deriver);
-            info.references = worker_proto::read(*store, in, Phantom<StorePathSet> {});
+            info.references = WorkerProto<StorePathSet>::read(*store, in);
             in >> info.registrationTime >> info.narSize >> info.ultimate;
             info.sigs = readStrings(in);
             info.ca = ContentAddress::parseOpt(readString(in));
diff --git a/src/nix/app.cc b/src/nix/app.cc
index fd4569bb4..e678b54f0 100644
--- a/src/nix/app.cc
+++ b/src/nix/app.cc
@@ -7,6 +7,7 @@
 #include "names.hh"
 #include "command.hh"
 #include "derivations.hh"
+#include "downstream-placeholder.hh"
 
 namespace nix {
 
@@ -23,7 +24,7 @@ StringPairs resolveRewrites(
         if (auto drvDep = std::get_if(&dep.path))
             for (auto & [ outputName, outputPath ] : drvDep->outputs)
                 res.emplace(
-                    downstreamPlaceholder(store, drvDep->drvPath, outputName),
+                    DownstreamPlaceholder::unknownCaOutput(drvDep->drvPath, outputName).render(),
                     store.printStorePath(outputPath)
                 );
     return res;
diff --git a/src/nix/build.md b/src/nix/build.md
index ee414dc86..0fbb39cc3 100644
--- a/src/nix/build.md
+++ b/src/nix/build.md
@@ -44,7 +44,7 @@ R""(
   `release.nix`:
 
   ```console
-  # nix build -f release.nix build.x86_64-linux
+  # nix build --file release.nix build.x86_64-linux
   ```
 
 * Build a NixOS system configuration from a flake, and make a profile
diff --git a/src/nix/copy.md b/src/nix/copy.md
index 25e0ddadc..199006436 100644
--- a/src/nix/copy.md
+++ b/src/nix/copy.md
@@ -15,7 +15,7 @@ R""(
   SSH:
 
   ```console
-  # nix copy -s --to ssh://server /run/current-system
+  # nix copy --substitute-on-destination --to ssh://server /run/current-system
   ```
 
 The `-s` flag causes the remote machine to try to substitute missing
diff --git a/src/nix/develop.md b/src/nix/develop.md
index c49b39669..1b5a8aeba 100644
--- a/src/nix/develop.md
+++ b/src/nix/develop.md
@@ -69,7 +69,7 @@ R""(
 * Run a series of script commands:
 
   ```console
-  # nix develop --command bash -c "mkdir build && cmake .. && make"
+  # nix develop --command bash --command "mkdir build && cmake .. && make"
   ```
 
 # Description
diff --git a/src/nix/eval.md b/src/nix/eval.md
index 3b510737a..48d5aa597 100644
--- a/src/nix/eval.md
+++ b/src/nix/eval.md
@@ -18,7 +18,7 @@ R""(
 * Evaluate a Nix expression from a file:
 
   ```console
-  # nix eval -f ./my-nixpkgs hello.name
+  # nix eval --file ./my-nixpkgs hello.name
   ```
 
 * Get the current version of the `nixpkgs` flake:
diff --git a/src/nix/flake-check.md b/src/nix/flake-check.md
index 07031c909..c8307f8d8 100644
--- a/src/nix/flake-check.md
+++ b/src/nix/flake-check.md
@@ -68,6 +68,6 @@ The following flake output attributes must be
 In addition, the `hydraJobs` output is evaluated in the same way as
 Hydra's `hydra-eval-jobs` (i.e. as a arbitrarily deeply nested
 attribute set of derivations). Similarly, the
-`legacyPackages`.*system* output is evaluated like `nix-env -qa`.
+`legacyPackages`.*system* output is evaluated like `nix-env --query --available `.
 
 )""
diff --git a/src/nix/nar-ls.md b/src/nix/nar-ls.md
index d373f9715..5a03c5d82 100644
--- a/src/nix/nar-ls.md
+++ b/src/nix/nar-ls.md
@@ -5,7 +5,7 @@ R""(
 * To list a specific file in a NAR:
 
   ```console
-  # nix nar ls -l ./hello.nar /bin/hello
+  # nix nar ls --long ./hello.nar /bin/hello
   -r-xr-xr-x                    38184 hello
   ```
 
@@ -13,7 +13,7 @@ R""(
   format:
 
   ```console
-  # nix nar ls --json -R ./hello.nar /bin
+  # nix nar ls --json --recursive ./hello.nar /bin
   {"type":"directory","entries":{"hello":{"type":"regular","size":38184,"executable":true,"narOffset":400}}}
   ```
 
diff --git a/src/nix/nix.md b/src/nix/nix.md
index 1ef6c7fcd..8a850ae83 100644
--- a/src/nix/nix.md
+++ b/src/nix/nix.md
@@ -197,7 +197,7 @@ operate are determined as follows:
   of all outputs of the `glibc` package in the binary cache:
 
   ```console
-  # nix path-info -S --eval-store auto --store https://cache.nixos.org 'nixpkgs#glibc^*'
+  # nix path-info --closure-size --eval-store auto --store https://cache.nixos.org 'nixpkgs#glibc^*'
   /nix/store/g02b1lpbddhymmcjb923kf0l7s9nww58-glibc-2.33-123    33208200
   /nix/store/851dp95qqiisjifi639r0zzg5l465ny4-glibc-2.33-123-bin    36142896
   /nix/store/kdgs3q6r7xdff1p7a9hnjr43xw2404z7-glibc-2.33-123-debug    155787312
@@ -208,7 +208,7 @@ operate are determined as follows:
   and likewise, using a store path to a "drv" file to specify the derivation:
 
   ```console
-  # nix path-info -S '/nix/store/gzaflydcr6sb3567hap9q6srzx8ggdgg-glibc-2.33-78.drv^*'
+  # nix path-info --closure-size '/nix/store/gzaflydcr6sb3567hap9q6srzx8ggdgg-glibc-2.33-78.drv^*'
   …
   ```
 
 * If you didn't specify the desired outputs, but the derivation has an
diff --git a/src/nix/path-info.md b/src/nix/path-info.md
index 6ad23a02e..2dda866d0 100644
--- a/src/nix/path-info.md
+++ b/src/nix/path-info.md
@@ -13,7 +13,7 @@ R""(
   closure, sorted by size:
 
   ```console
-  # nix path-info -rS /run/current-system | sort -nk2
+  # nix path-info --recursive --closure-size /run/current-system | sort -nk2
   /nix/store/hl5xwp9kdrd1zkm0idm3kkby9q66z404-empty    96
   /nix/store/27324qvqhnxj3rncazmxc4mwy79kz8ha-nameservers    112
   …
@@ -25,7 +25,7 @@ R""(
   readable sizes:
 
   ```console
-  # nix path-info -rsSh nixpkgs#rustc
+  # nix path-info --recursive --size --closure-size --human-readable nixpkgs#rustc
   /nix/store/01rrgsg5zk3cds0xgdsq40zpk6g51dz9-ncurses-6.2-dev    386.7K   69.1M
   /nix/store/0q783wnvixpqz6dxjp16nw296avgczam-libpfm-4.11.0    5.9M   37.4M
   …
@@ -34,7 +34,7 @@ R""(
 * Check the existence of a path in a binary cache:
 
   ```console
-  # nix path-info -r /nix/store/blzxgyvrk32ki6xga10phr4sby2xf25q-geeqie-1.5.1 --store https://cache.nixos.org/
+  # nix path-info --recursive /nix/store/blzxgyvrk32ki6xga10phr4sby2xf25q-geeqie-1.5.1 --store https://cache.nixos.org/
   path '/nix/store/blzxgyvrk32ki6xga10phr4sby2xf25q-geeqie-1.5.1' is not valid
   ```
 
@@ -57,7 +57,7 @@ R""(
   size:
 
   ```console
-  # nix path-info --json --all -S \
+  # nix path-info --json --all --closure-size \
     | jq 'map(select(.closureSize > 1e9)) | sort_by(.closureSize) | map([.path, .closureSize])'
   [
     …,
diff --git a/src/nix/profile.cc b/src/nix/profile.cc
index fd63b3519..7cea616d2 100644
--- a/src/nix/profile.cc
+++ b/src/nix/profile.cc
@@ -31,6 +31,11 @@ struct ProfileElementSource
             std::tuple(originalRef.to_string(), attrPath, outputs) <
             std::tuple(other.originalRef.to_string(), other.attrPath, other.outputs);
     }
+
+    std::string to_string() const
+    {
+        return fmt("%s#%s%s", originalRef, attrPath, outputs.to_string());
+    }
 };
 
 const int defaultPriority = 5;
@@ -42,16 +47,30 @@ struct ProfileElement
     bool active = true;
     int priority = defaultPriority;
 
-    std::string describe() const
+    std::string identifier() const
     {
         if (source)
-            return fmt("%s#%s%s", source->originalRef, source->attrPath, source->outputs.to_string());
+            return source->to_string();
         StringSet names;
         for (auto & path : storePaths)
             names.insert(DrvName(path.name()).name);
         return concatStringsSep(", ", names);
     }
 
+    /**
+     * Return a string representing an installable corresponding to the current
+     * element, either a flakeref or a plain store path
+     */
+    std::set<std::string> toInstallables(Store & store)
+    {
+        if (source)
+            return {source->to_string()};
+        StringSet rawPaths;
+        for (auto & path : storePaths)
+            rawPaths.insert(store.printStorePath(path));
+        return rawPaths;
+    }
+
     std::string versions() const
     {
         StringSet versions;
@@ -62,7 +81,7 @@ struct ProfileElement
 
     bool operator < (const ProfileElement & other) const
     {
-        return std::tuple(describe(), storePaths) < std::tuple(other.describe(), other.storePaths);
+        return std::tuple(identifier(), storePaths) < std::tuple(other.identifier(), other.storePaths);
     }
 
     void updateStorePaths(
@@ -237,13 +256,13 @@ struct ProfileManifest
         bool changes = false;
 
         while (i != prevElems.end() || j != curElems.end()) {
-            if (j != curElems.end() && (i == prevElems.end() || i->describe() > j->describe())) {
-                logger->cout("%s%s: ∅ -> %s", indent, j->describe(), j->versions());
+            if (j != curElems.end() && (i == prevElems.end() || i->identifier() > j->identifier())) {
+                logger->cout("%s%s: ∅ -> %s", indent, j->identifier(), j->versions());
                 changes = true;
                 ++j;
             }
-            else if (i != prevElems.end() && (j == curElems.end() || i->describe() < j->describe())) {
-                logger->cout("%s%s: %s -> ∅", indent, i->describe(), i->versions());
+            else if (i != prevElems.end() && (j == curElems.end() || i->identifier() < j->identifier())) {
+                logger->cout("%s%s: %s -> ∅", indent, i->identifier(), i->versions());
                 changes = true;
                 ++i;
             }
@@ -251,7 +270,7 @@ struct ProfileManifest
                 auto v1 = i->versions();
                 auto v2 = j->versions();
                 if (v1 != v2) {
-                    logger->cout("%s%s: %s -> %s", indent, i->describe(), v1, v2);
+                    logger->cout("%s%s: %s -> %s", indent, i->identifier(), v1, v2);
                     changes = true;
                 }
                 ++i;
@@ -363,10 +382,10 @@ struct CmdProfileInstall : InstallablesCommand, MixDefaultProfile
                 auto profileElement = *it;
                 for (auto & storePath : profileElement.storePaths) {
                     if (conflictError.fileA.starts_with(store->printStorePath(storePath))) {
-                        return std::pair(conflictError.fileA, profileElement.source->originalRef);
+                        return std::pair(conflictError.fileA, profileElement.toInstallables(*store));
                     }
                     if (conflictError.fileB.starts_with(store->printStorePath(storePath))) {
-                        return std::pair(conflictError.fileB, profileElement.source->originalRef);
+                        return std::pair(conflictError.fileB, profileElement.toInstallables(*store));
                     }
                 }
             }
@@ -375,9 +394,9 @@ struct CmdProfileInstall : InstallablesCommand, MixDefaultProfile
             // There are 2 conflicting files. We need to find out which one is from the already installed package and
             // which one is the package that is the new package that is being installed.
             // The first matching package is the one that was already installed (original).
-            auto [originalConflictingFilePath, originalConflictingRef] = findRefByFilePath(manifest.elements.begin(), manifest.elements.end());
+            auto [originalConflictingFilePath, originalConflictingRefs] = findRefByFilePath(manifest.elements.begin(), manifest.elements.end());
             // The last matching package is the one that was going to be installed (new).
-            auto [newConflictingFilePath, newConflictingRef] = findRefByFilePath(manifest.elements.rbegin(), manifest.elements.rend());
+            auto [newConflictingFilePath, newConflictingRefs] = findRefByFilePath(manifest.elements.rbegin(), manifest.elements.rend());
 
             throw Error(
                 "An existing package already provides the following file:\n"
@@ -403,8 +422,8 @@ struct CmdProfileInstall : InstallablesCommand, MixDefaultProfile
                 "  nix profile install %4% --priority %7%\n",
                 originalConflictingFilePath,
                 newConflictingFilePath,
-                originalConflictingRef.to_string(),
-                newConflictingRef.to_string(),
+                concatStringsSep(" ", originalConflictingRefs),
+                concatStringsSep(" ", newConflictingRefs),
                 conflictError.priority,
                 conflictError.priority - 1,
                 conflictError.priority + 1
@@ -491,7 +510,7 @@ struct CmdProfileRemove : virtual EvalCommand, MixDefaultProfile, MixProfileElem
             if (!matches(*store, element, i, matchers)) {
                 newManifest.elements.push_back(std::move(element));
             } else {
-                notice("removing '%s'", element.describe());
+                notice("removing '%s'", element.identifier());
             }
         }
 
diff --git a/src/nix/search.md b/src/nix/search.md
index 4caa90654..0c5d22549 100644
--- a/src/nix/search.md
+++ b/src/nix/search.md
@@ -52,12 +52,12 @@ R""(
 * Search for packages containing `neovim` but hide ones containing either `gui` or `python`:
 
   ```console
-  # nix search nixpkgs neovim -e 'python|gui'
+  # nix search nixpkgs neovim --exclude 'python|gui'
   ```
 
   or
 
   ```console
-  # nix search nixpkgs neovim -e 'python' -e 'gui'
+  # nix search nixpkgs neovim --exclude 'python' --exclude 'gui'
   ```
 
 # Description
diff --git a/src/nix/shell.md b/src/nix/shell.md
index 13a389103..1668104b1 100644
--- a/src/nix/shell.md
+++ b/src/nix/shell.md
@@ -19,26 +19,26 @@ R""(
 * Run GNU Hello:
 
   ```console
-  # nix shell nixpkgs#hello -c hello --greeting 'Hi everybody!'
+  # nix shell nixpkgs#hello --command hello --greeting 'Hi everybody!'
   Hi everybody!
   ```
 
 * Run multiple commands in a shell environment:
 
   ```console
-  # nix shell nixpkgs#gnumake -c sh -c "cd src && make"
+  # nix shell nixpkgs#gnumake --command sh --command "cd src && make"
   ```
 
 * Run GNU Hello in a chroot store:
 
   ```console
-  # nix shell --store ~/my-nix nixpkgs#hello -c hello
+  # nix shell --store ~/my-nix nixpkgs#hello --command hello
   ```
 
 * Start a shell providing GNU Hello in a chroot store:
 
   ```console
-  # nix shell --store ~/my-nix nixpkgs#hello nixpkgs#bashInteractive -c bash
+  # nix shell --store ~/my-nix nixpkgs#hello nixpkgs#bashInteractive --command bash
   ```
 
 Note that it's necessary to specify `bash` explicitly because your
diff --git a/src/nix/store-ls.md b/src/nix/store-ls.md
index 836efce42..14c4627c9 100644
--- a/src/nix/store-ls.md
+++ b/src/nix/store-ls.md
@@ -5,7 +5,7 @@ R""(
 * To list the contents of a store path in a binary cache:
 
   ```console
-  # nix store ls --store https://cache.nixos.org/ -lR /nix/store/0i2jd68mp5g6h2sa5k9c85rb80sn8hi9-hello-2.10
+  # nix store ls --store https://cache.nixos.org/ --long --recursive /nix/store/0i2jd68mp5g6h2sa5k9c85rb80sn8hi9-hello-2.10
   dr-xr-xr-x                    0 ./bin
   -r-xr-xr-x                38184 ./bin/hello
   dr-xr-xr-x                    0 ./share
 
@@ -15,7 +15,7 @@ R""(
 * To show information about a specific file in a binary cache:
 
   ```console
-  # nix store ls --store https://cache.nixos.org/ -l /nix/store/0i2jd68mp5g6h2sa5k9c85rb80sn8hi9-hello-2.10/bin/hello
+  # nix store ls --store https://cache.nixos.org/ --long /nix/store/0i2jd68mp5g6h2sa5k9c85rb80sn8hi9-hello-2.10/bin/hello
   -r-xr-xr-x                38184 hello
   ```
 
diff --git a/src/nix/upgrade-nix.md b/src/nix/upgrade-nix.md
index 08757aebd..cce88c397 100644
--- a/src/nix/upgrade-nix.md
+++ b/src/nix/upgrade-nix.md
@@ -11,7 +11,7 @@ R""(
 * Upgrade Nix in a specific profile:
 
   ```console
-  # nix upgrade-nix -p ~alice/.local/state/nix/profiles/profile
+  # nix upgrade-nix --profile ~alice/.local/state/nix/profiles/profile
   ```
 
 # Description
diff --git a/src/nix/verify.md b/src/nix/verify.md
index cc1122c02..e1d55eab4 100644
--- a/src/nix/verify.md
+++ b/src/nix/verify.md
@@ -12,7 +12,7 @@ R""(
   signatures:
 
   ```console
-  # nix store verify -r -n2 --no-contents $(type -p firefox)
+  # nix store verify --recursive --sigs-needed 2 --no-contents $(type -p firefox)
   ```
 
 * Verify a store path in the binary cache `https://cache.nixos.org/`:
diff --git a/tests/gc.sh b/tests/gc.sh
index 98d6cb032..95669e25c 100644
--- a/tests/gc.sh
+++ b/tests/gc.sh
@@ -52,9 +52,7 @@ rmdir $NIX_STORE_DIR/.links
 rmdir $NIX_STORE_DIR
 
 ## Test `nix-collect-garbage -d`
-# `nix-env` doesn't work with CA derivations, so let's ignore that bit if we're
-# using them
-if [[ -z "${NIX_TESTS_CA_BY_DEFAULT:-}" ]]; then
+testCollectGarbageD () {
     clearProfiles
     # Run two `nix-env` commands, should create two generations of
     # the profile
@@ -66,4 +64,17 @@ if [[ -z "${NIX_TESTS_CA_BY_DEFAULT:-}" ]]; then
     # left
     nix-collect-garbage -d
     [[ $(nix-env --list-generations | wc -l) -eq 1 ]]
+}
+# `nix-env` doesn't work with CA derivations, so let's ignore that bit if we're
+# using them
+if [[ -z "${NIX_TESTS_CA_BY_DEFAULT:-}" ]]; then
+    testCollectGarbageD
+
+    # Run the same test, but forcing the profiles at their legacy location under
+    # /nix/var/nix.
+    #
+    # Regression test for #8294
+    rm ~/.nix-profile
+    ln -s $NIX_STATE_DIR/profiles/per-user/me ~/.nix-profile
+    testCollectGarbageD
 fi
diff --git a/tests/local.mk b/tests/local.mk
index 9cb81e1f0..45d78b5cd 100644
--- a/tests/local.mk
+++ b/tests/local.mk
@@ -135,7 +135,8 @@ nix_tests = \
   flakes/show.sh \
   impure-derivations.sh \
   path-from-hash-part.sh \
-  toString-path.sh
+  toString-path.sh \
+  read-only-store.sh
 
 ifeq ($(HAVE_LIBCPUID), 1)
     nix_tests += compute-levels.sh
diff --git a/tests/nix-profile.sh b/tests/nix-profile.sh
index 4ef5b484a..9da3f802b 100644
--- a/tests/nix-profile.sh
+++ b/tests/nix-profile.sh
@@ -157,17 +157,17 @@ error: An existing package already provides the following file:
 
        To remove the existing package:
 
-         nix profile remove path:${flake1Dir}
+         nix profile remove path:${flake1Dir}#packages.${system}.default
 
        The new package can also be installed next to the existing one by assigning a different priority.
        The conflicting packages have a priority of 5.
        To prioritise the new package:
 
-         nix profile install path:${flake2Dir} --priority 4
+         nix profile install path:${flake2Dir}#packages.${system}.default --priority 4
 
        To prioritise the existing package:
 
-         nix profile install path:${flake2Dir} --priority 6
+         nix profile install path:${flake2Dir}#packages.${system}.default --priority 6
 EOF
 )
 [[ $($TEST_HOME/.nix-profile/bin/hello) = "Hello World" ]]
@@ -177,3 +177,10 @@ nix profile install $flake2Dir --priority 0
 [[ $($TEST_HOME/.nix-profile/bin/hello) = "Hello World2" ]]
 # nix profile install $flake1Dir --priority 100
 # [[ $($TEST_HOME/.nix-profile/bin/hello) = "Hello World" ]]
+
+# Ensure that conflicts are handled properly even when the installables aren't
+# flake references.
+# Regression test for https://github.com/NixOS/nix/issues/8284
+clearProfiles
+nix profile install $(nix build $flake1Dir --no-link --print-out-paths)
+expect 1 nix profile install --impure --expr "(builtins.getFlake ''$flake2Dir'').packages.$system.default"
diff --git a/tests/nixos/nix-copy.nix b/tests/nixos/nix-copy.nix
index ee8b77100..16c477bf9 100644
--- a/tests/nixos/nix-copy.nix
+++ b/tests/nixos/nix-copy.nix
@@ -23,6 +23,12 @@ in {
       nix.settings.substituters = lib.mkForce [ ];
       nix.settings.experimental-features = [ "nix-command" ];
       services.getty.autologinUser = "root";
+      programs.ssh.extraConfig = ''
+        Host *
+          ControlMaster auto
+          ControlPath ~/.ssh/master-%h:%r@%n:%p
+          ControlPersist 15m
+      '';
     };
 
   server =
@@ -62,6 +68,10 @@ in {
       client.wait_for_text("done")
       server.succeed("nix-store --check-validity ${pkgA}")
 
+      # Check that ControlMaster is working
+      client.send_chars("nix copy --to ssh://server ${pkgA} >&2; echo done\n")
+      client.wait_for_text("done")
+
       client.copy_from_host("key", "/root/.ssh/id_ed25519")
       client.succeed("chmod 600 /root/.ssh/id_ed25519")
 
diff --git a/tests/read-only-store.sh b/tests/read-only-store.sh
new file mode 100644
index 000000000..2f89b849f
--- /dev/null
+++ b/tests/read-only-store.sh
@@ -0,0 +1,40 @@
+source common.sh
+
+enableFeatures "read-only-local-store"
+
+clearStore
+
+happy () {
+    # We can do a read-only query just fine with a read-only store
+    nix --store local?read-only=true path-info $dummyPath
+
+    # We can "write" an already-present store-path a read-only store, because no IO is actually required
+    nix-store --store local?read-only=true --add dummy
+}
+## Testing read-only mode without forcing the underlying store to actually be read-only
+
+# Make sure the command fails when the store doesn't already have a database
+expectStderr 1 nix-store --store local?read-only=true --add dummy | grepQuiet "database does not exist, and cannot be created in read-only mode"
+
+# Make sure the store actually has a current-database, with at least one store object
+dummyPath=$(nix-store --add dummy)
+
+# Try again and make sure we fail when adding a item not already in the store
+expectStderr 1 nix-store --store local?read-only=true --add eval.nix | grepQuiet "attempt to write a readonly database"
+
+# Test a few operations that should work with the read-only store in its current state
+happy
+
+## Testing read-only mode with an underlying store that is actually read-only
+
+# Ensure store is actually read-only
+chmod -R -w $TEST_ROOT/store
+chmod -R -w $TEST_ROOT/var
+
+# Make sure we fail on add operations on the read-only store
+# This is only for adding files that are not *already* in the store
+expectStderr 1 nix-store --add eval.nix | grepQuiet "error: opening lock file '$(readlink -e $TEST_ROOT)/var/nix/db/big-lock'"
+expectStderr 1 nix-store --store local?read-only=true --add eval.nix | grepQuiet "Permission denied"
+
+# Test the same operations from before should again succeed
+happy