To prevent the script from being sourced multiple times by the same
shell instance, `__ETC_PROFILE_NIX_SOURCED` needs to be set with the
`--global` flag.
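As a minimal sketch of the guard (only the variable name is from the
original; the setup body is elided), `--global` matters because the
scope of a newly created fish variable depends on the context the
script is sourced from, e.g. it would be function-scoped if sourced
from inside a function and the guard would not persist:

```fish
# Only run the profile setup once per shell instance.
if not set --query __ETC_PROFILE_NIX_SOURCED
    # --global keeps the variable visible for the whole shell session,
    # regardless of the sourcing context.
    set --global __ETC_PROFILE_NIX_SOURCED 1
    # ... PATH and environment setup goes here ...
end
```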
Both files are almost identical, and the style differences make it
harder to see what actually differs and to keep them in sync when
required.
`nix-profile.fish` and part of `nix-profile-daemon.fish` use 4-space
indentation, which is also what the fish shell documentation uses.
This reformats a chunk of `nix-profile-daemon.fish` from 2-space to
4-space indentation for consistency.
- Multiple choices of stdenv are handled more consistently, especially
  for the dev shells, which were previously not handled correctly.
- Some stray Nix code was moved into the `packaging` directory.
nix-env can read priorities from a derivation's meta attributes, but
this only works when installing a Nix expression.
nix-env can also install bare store paths, but meta attributes are not
readable in that case. This means that a store path cannot be
installed with a specific priority.
Some cases where it is advantageous to install a store path: on a
remote host following a `nix copy`, or any time you want to save some
evaluation time and already happen to know the store path.
This PR addresses that shortcoming by adding a `--priority` flag to
`nix-env --install`.
Since ff8e2fe84e, `path:` URLs on the CLI are interpreted as relative
to the user's current directory, not the path of the flake we're
overriding.
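For illustration (hypothetical invocation; `--override-input` takes a
flake reference), a relative `path:` URL now resolves against the
shell's current directory:

```console
$ cd ~/src/myproject
$ nix build .#hello --override-input nixpkgs path:../nixpkgs
# path:../nixpkgs now means ~/src/nixpkgs, regardless of where the
# flake being overridden lives.
```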
During garbage collection we cache several things -- a set of known-dead
paths, a set of known-alive paths, and a map of paths to their derivers.
Currently they use STL maps and sets, which are ordered structures that
typically are backed by binary trees. Since we are putting pseudorandom
paths into these and looking them up by exact key, we don't need the
ordering, and we're paying a nontrivial cost per insertion.
The existing tree-based containers require O(n) memory with per-node
pointer overhead and have O(log n) insertion and lookup time.
We could instead use unordered maps and sets, which are typically
backed by hash tables. These also require O(n) memory but have O(1)
average-case insertion and lookup time.
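A generic sketch of the container swap (illustrative names and
`std::string` keys stand in for Nix's actual types):

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>

// Hash-based caches: store paths are pseudorandom and only ever looked
// up by exact key, so ordering buys nothing and costs a tree traversal.
struct GcCaches
{
    std::unordered_set<std::string> dead;   // known-dead paths
    std::unordered_set<std::string> alive;  // known-alive paths
    std::unordered_map<std::string, std::string> derivers; // path -> deriver
};

bool isKnownDead(const GcCaches & caches, const std::string & path)
{
    return caches.dead.count(path) != 0;
}
```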
On my system this appears to result in a dramatic speedup -- prior to
this patch I was able to delete 400k paths out of 9.5 million over the
course of 34.5 hours. After this patch the same result took 89 minutes.
This result should NOT be taken at face value because the two runs
aren't really comparable; in particular the first started when I had 9.5
million store paths and the second started with 7.8 million, so we are
deleting a different set of paths starting from a much cleaner
filesystem. But I do think it's indicative.
Related: https://github.com/NixOS/nix/issues/9581