When diagnosing infinite recursion, references to a null `Env` can be
formed. This happens only when `ExprBlackHole` is evaluated, which
always leads to an `InfiniteRecursionError`.
UBSAN log for one such case:
```
../src/libexpr/eval-inline.hh:94:31: runtime error: reference binding to null pointer of type 'Env'
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior ../src/libexpr/eval-inline.hh:94:31 in
```
As a prelude to making "or" work like a normal variable, emit a warning
any time the "fn or" production is used in a context that will change
how it is parsed when that production is refactored.
In detail: in the future, OR_KW will be moved to expr_simple, and the
cursed ExprCall production that is currently part of the expr_select
nonterminal will be generated "normally" in expr_app instead. Any
productions that accept an expr_select will be affected, except for the
expr_app nonterminal itself (because, while expr_app has a production
accepting a bare expr_select, its other production will continue to
accept "fn or" expressions). So all we need to do is emit an appropriate
warning when an expr_select representing a cursed ExprCall is accepted
in one of those productions without first going through expr_app.
As the warning message describes, users can suppress the warning by
wrapping their problematic "fn or" expressions in parentheses. For
example, "f g or" can be made future-proof by rewriting it as
"f (g or)"; similarly "[ x y or ]" can be rewritten as "[ x (y or) ]",
etc. The parentheses preserve the current grouping behavior, as in the
future "f g or" will be parsed as "(f g) or", just like
"f g anything-else" is grouped. (Mechanically, this suppresses the
warning because the problem ExprCalls go through the
"expr_app : expr_select" production, which resets the cursed status on
the ExprCall.)
This also bans various ways of sneaking negative numbers from the
language into unsuspecting builtins, as was exposed while auditing the
consequences of changing the Nix language integer type to a newtype.
It's unlikely that this change comprehensively ensures correctness when
passing integers out of the Nix language and we should probably add a
checked-narrowing function or something similar, but that's out of scope
for the immediate change.
During the development of this I found a few fun facts about the
language:
- You could overflow integers by converting from unsigned JSON values.
- You could overflow unsigned integers by converting negative numbers
into them when going into Nix config, into fetchTree, and into flake
inputs.
The flake inputs and Nix config cannot actually be tested properly
since they both ban thunks; however, we put in checks anyway because
it's possible these could somehow be used for such shenanigans some
other way.
Note that Lix has banned Nix language integer overflows since the very
first public beta, but threw a SIGILL about them because we run with
`-fsanitize=signed-integer-overflow -fsanitize-undefined-trap-on-error` in
production builds. Since the Nix language uses signed integers, overflow
was simply undefined behaviour, and since we defined that to trap, it
did.
Trapping on overflow was bad UX, and we didn't even entirely notice
that we had done this until it was reported as a bug a couple of months
later (which is, to be fair, that flag working as intended). The
behaviour has had enough production exposure that, aside from code that
is IMHO buggy (and which is, in any case, not in nixpkgs) such as
https://git.lix.systems/lix-project/lix/issues/445, we don't think
anyone doing anything reasonable actually depends on wrapping overflow.
Even for weird use cases such as doing funny bit crimes, it doesn't make
sense IMO to have wrapping behaviour, since two's complement arithmetic
overflow behaviour is so *aggressively* not what you want for *any* kind
of mathematics/algorithms. The Nix language exists for package
management, a domain where bit crimes are already only dubiously in
scope to begin with, and it makes far more sense in that domain for
integers to never lose precision, either by throwing errors when they
would, or by being arbitrary-precision.
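For illustration only, throwing on overflow rather than trapping can be
built on the compilers' checked-arithmetic builtins; a minimal sketch,
not Lix's actual implementation:
```
#include <cstdint>
#include <stdexcept>

// Add two Nix integers, throwing on overflow instead of trapping via
// -fsanitize-undefined-trap-on-error.
int64_t addNixInts(int64_t a, int64_t b)
{
    int64_t result;
    // GCC/Clang builtin; returns true if the addition overflowed.
    if (__builtin_add_overflow(a, b, &result))
        throw std::overflow_error("integer overflow in addition");
    return result;
}
```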
Fixes: https://github.com/NixOS/nix/issues/10968
Original-CL: https://gerrit.lix.systems/c/lix/+/1596
Change-Id: I51f253840c4af2ea5422b8a420aa5fafbf8fae75
After the removal of the InputAccessor::fetchToStore() method, the
only remaining functionality in InputAccessor was `fingerprint` and
`getLastModified()`, and there is no reason to keep those in a
separate class.
we now keep not a table of all positions, but a table of all origins and
their sizes. position indices are now direct pointers into the virtual
concatenation of all parsed contents. this slightly reduces memory usage
and time spent in the parser, at the cost of not being able to report
positions if the total input size exceeds 4GiB. this limit is not unique
to nix though, rustc and clang also limit their input to 4GiB (although
at least clang refuses to process inputs that are larger, we will not).
this new 4GiB limit probably will not cause any problems for quite a
while, all of nixpkgs together is less than 100MiB in size and already
needs over 700MiB of memory and multiple seconds just to parse. 4GiB
worth of input will easily take multiple minutes and over 30GiB of
memory without even evaluating anything. if problems *do* arise we can
probably recover the old table-based system by adding some tracking to
Pos::Origin (or increasing the size of PosIdx outright), but for the
time being this looks like more complexity than it's worth.
since we now need to read the entire input again to determine the
line/column of a position we'll make unsafeGetAttrPos slightly lazy:
mostly the set it returns is only used to determine the file of origin
of an attribute, not its exact location. the thunks do not add
measurable runtime overhead.
notably this change is necessary to allow changing the parser since
apparently nothing supports nix's very idiosyncratic line ending choice
of "anything goes", making it very hard to calculate line/column
positions in the parser (while byte offsets are very easy).
desugaring inherit-from to syntactic duplication of the source expr also
duplicates side effects of the source expr (such as trace calls) and
expensive computations (such as derivationStrict).
this also has the effect of sorting let bindings lexicographically
rather than by symbol creation order as was previously done, giving a
better canonicalization in the process.
While preparing PRs like #9753, I've had to change error messages in
dozens of code paths. It would be nice if instead of
EvalError("expected 'boolean' but found '%1%'", showType(v))
we could write
TypeError(v, "boolean")
or similar. Then, changing the error message could be a mechanical
refactor with the compiler pointing out places the constructor needs to
be changed, rather than the error-prone process of grepping through the
codebase. Structured errors would also help prevent the "same" error
from having multiple slightly different messages, and could be a first
step towards error codes / an error index.
This PR reworks the exception infrastructure in `libexpr` to
support exception types with different constructor signatures than
`BaseError`. Actually refactoring the exceptions to use structured data
will come in a future PR (this one is big enough already, as it has to
touch every exception in `libexpr`).
The core design is in `eval-error.hh`. Generally, errors like this:
state.error("'%s' is not a string", getAttrPathStr())
.debugThrow<TypeError>()
are transformed like this:
state.error<TypeError>("'%s' is not a string", getAttrPathStr())
.debugThrow()
The type annotation has moved from `ErrorBuilder::debugThrow` to
`EvalState::error`.
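A rough sketch of the new shape, using simplified stand-ins for the
real classes in `eval-error.hh`:
```
#include <string>
#include <utility>

// Simplified stand-ins, just to show the shape of the API.
struct TypeError { std::string msg; };

template<class T>
struct ErrorBuilder
{
    T error;
    [[noreturn]] void debugThrow() { throw std::move(error); }
};

struct EvalState
{
    // The error type is a template parameter of error() rather than of
    // debugThrow(), so each exception type can have its own
    // constructor signature.
    template<class ErrorType, typename... Args>
    ErrorBuilder<ErrorType> error(Args &&... args)
    {
        return ErrorBuilder<ErrorType>{ErrorType{std::forward<Args>(args)...}};
    }
};
```
With these stand-ins, the new call shape from the example above,
`state.error<TypeError>(...).debugThrow()`, compiles as written.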
these symbols are used a *lot*, so it makes sense to cache them. this
mostly increases clarity of the code (however clear one may wish to call
the parser desugaring here), but it also provides a small performance
benefit.
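a sketch of the pattern, assuming libexpr's `SymbolTable::create` and
illustrative symbol names:
```
// intern frequently used symbols once at EvalState construction
// instead of looking them up on every use (member names illustrative).
struct StaticSymbols
{
    Symbol body, overrides, outPath;

    explicit StaticSymbols(SymbolTable & symbols)
        : body(symbols.create("body"))
        , overrides(symbols.create("overrides"))
        , outPath(symbols.create("outPath"))
    { }
};
```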
Also move `SourcePath` into `libutil`.
These changes allow `error.hh` and `error.cc` to access source path and
position information, which we can use to produce better error messages
(for example, we could consider omitting filenames when two or more
consecutive stack frames originate from the same file).
This avoids a Value allocation for empty list constants. During a `nix
search nixpkgs`, about 82% of all thunked lists are empty, so this
removes about 3 million Value allocations.
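The change is roughly shaped like this sketch, assuming an
`EvalState`-owned `vEmptyList` allocated once (details approximate):
```
// Since lists are immutable, every empty list constant can share one
// Value owned by EvalState instead of thunking a fresh one each time.
Value * ExprList::maybeThunk(EvalState & state, Env & env)
{
    if (elems.empty())
        return &state.vEmptyList; // initialized once: vEmptyList.mkList(0)
    return Expr::maybeThunk(state, env);
}
```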
Performance comparison on `nix search github:NixOS/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 --no-eval-cache`:
maximum RSS: median = 3845432.0000 mean = 3845432.0000 stddev = 0.0000 min = 3845432.0000 max = 3845432.0000 [rejected?, p=0.00000, Δ=-70084.00000±0.00000]
soft page faults: median = 965395.0000 mean = 965394.6667 stddev = 1.1181 min = 965392.0000 max = 965396.0000 [rejected?, p=0.00000, Δ=-17929.77778±38.59610]
system CPU time: median = 1.8029 mean = 1.7702 stddev = 0.0621 min = 1.6749 max = 1.8417 [rejected, p=0.00064, Δ=-0.12873±0.09905]
user CPU time: median = 14.1022 mean = 14.0633 stddev = 0.1869 min = 13.8118 max = 14.3190 [not rejected, p=0.03006, Δ=-0.18248±0.24928]
elapsed time: median = 15.8205 mean = 15.8618 stddev = 0.2312 min = 15.5033 max = 16.1670 [not rejected, p=0.00558, Δ=-0.28963±0.29434]
since `up` and `values` are both pointer-aligned the type field will
also be pointer-aligned, wasting 48 bits of space on most machines. we
can get away with removing the type field altogether by encoding some
information into the `with` expr that created the env to begin with,
reducing the GC load for the absolutely massive amount of single-entry
envs we create for lambdas. this reduces memory usage of system eval by
quite a bit (reducing heap size of our system eval from 8.4GB to 8.23GB)
and gives similar savings in eval time.
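a simplified sketch of the layout change (the real encoding differs in
detail):
```
// before: every Env carried a type tag that, due to pointer alignment,
// cost a full word. after: the tag is gone and `with` information is
// recorded on the ExprWith at parse time instead.
struct Env
{
    Env * up;
    Value * values[0]; // flexible array member, as used in libexpr
};

struct ExprWith
{
    size_t prevWith;       // levels up to the enclosing `with` env, 0 if none
    ExprWith * parentWith; // enclosing `with` expr, if any (sketch)
};
```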
running `nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'`
before:
Time (mean ± σ): 5.576 s ± 0.003 s [User: 5.197 s, System: 0.378 s]
Range (min … max): 5.572 s … 5.581 s 10 runs
after:
Time (mean ± σ): 5.408 s ± 0.002 s [User: 5.019 s, System: 0.388 s]
Range (min … max): 5.405 s … 5.411 s 10 runs
this also reduces forceValue code size and removes the need for
hideInDiagnostics. coopting thunk forcing like this has the additional
benefit of clarifying how these errors can happen in the first place.
checking for isBlackhole in the forceValue hot path is rather more
expensive than necessary, and with a little bit of trickery we can move
such handling into the isApp case. small performance benefit, but under
some circumstances we've seen 2% improvement as well.
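a miniature model of the trick (not the actual lix value layout):
```
#include <stdexcept>

// a black hole is represented as an application whose function is a
// sentinel value, so the hot thunk path never has to test for black
// holes; the error surfaces through the ordinary app handling instead.
enum class Tag { Thunk, App, Done };

struct Value
{
    Tag tag;
    Value * fn = nullptr; // function, when tag == Tag::App
    static Value blackHoleFn;

    void mkBlackHole() { tag = Tag::App; fn = &blackHoleFn; }
};

Value Value::blackHoleFn{Tag::Done};

void force(Value & v)
{
    switch (v.tag) {
    case Tag::Thunk:
        v.mkBlackHole(); // mark before evaluating, then evaluate...
        break;
    case Tag::App:
        if (v.fn == &Value::blackHoleFn)
            throw std::runtime_error("infinite recursion encountered");
        // ...otherwise perform the application
        break;
    case Tag::Done:
        break; // already in weak head normal form
    }
}
```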
〉 nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
before:
Time (mean ± σ): 4.429 s ± 0.002 s [User: 3.929 s, System: 0.500 s]
Range (min … max): 4.427 s … 4.433 s 10 runs
after:
Time (mean ± σ): 4.396 s ± 0.002 s [User: 3.894 s, System: 0.501 s]
Range (min … max): 4.393 s … 4.399 s 10 runs
This includes position information in more places, making debugging
easier.
Before:
```
$ nix-instantiate --show-trace --eval tests/functional/lang/eval-fail-using-set-as-attr-name.nix
error:
… while evaluating an attribute name
at «none»:0: (source not available)
error: value is a set while a string was expected
```
After:
```
error:
… while evaluating an attribute name
at /pwd/lang/eval-fail-using-set-as-attr-name.nix:5:10:
4| in
5| attr.${key}
| ^
6|
error: value is a set while a string was expected
```
* Finish converting existing comments for internal API docs
99% of this was just reformatting existing comments. Only two exceptions:
- Expanded upon `BuildResult::status` compat note
- Split up file-level `symbol-table.hh` doc comments to get
per-definition docs
Also fixed a few whitespace goofs, turning leading tabs to spaces and
removing trailing spaces.
Picking up from #8133
* Fix two things from comments
* Use triple-backtick not indent for `dumpPath`
* Convert GNU-style `\`..'` quotes to markdown style in API docs
This will render correctly.
We are looking for `*$` because it indicates that the value was
constructed with `new` but never released. Dereferencing gives a
shallow copy, so deleting the object as a whole might create a dangling
pointer; that's why we move out of it, so that we only delete empty
containers, plus the nice perf boost.
This makes the position object used in exceptions abstract, with a
method getSource() to get the source code of the file in which the
error originated. This is needed for lazy trees because source files
don't necessarily exist in the filesystem, and we don't want to make
libutil depend on the InputAccessor type in libfetchers.
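Roughly the shape of the abstraction, simplified from the actual class:
```
#include <cstdint>
#include <optional>
#include <string>

// The error machinery in libutil only sees this interface; libexpr and
// libfetchers provide concrete subclasses that know how to load the
// source text (e.g. from an input accessor instead of the filesystem).
struct AbstractPos
{
    uint32_t line = 0, column = 0;

    // Return the source code of the originating file, if available.
    virtual std::optional<std::string> getSource() const = 0;

    virtual ~AbstractPos() = default;
};
```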