Mateusz Guzik
da62ed4f1a
cache: drop write-only tvp_seqc vars
2020-09-08 16:06:46 +00:00
Mateusz Guzik
84ecea90b7
cache: don't update timestamps on found entry
2020-08-27 06:31:55 +00:00
Mateusz Guzik
5f08d440b0
cache: assorted clean ups
...
In particular remove spurious comments, duplicate assertions and the
inconsistently done KTR support.
2020-08-27 06:31:27 +00:00
Mateusz Guzik
12441fcbe2
cache: ncp = NULL early to account for sdt probes in failure path
...
CID: 1432106
2020-08-27 06:30:40 +00:00
Mateusz Guzik
1e9a0b391d
cache: relock on failure in cache_zap_locked_vnode
...
This gets rid of the bogus scheme of yielding in the hope that the blocking
thread will make progress.
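The relock pattern can be illustrated in userspace. This is a minimal sketch with pthread stand-ins, not the kernel code; `struct two_locks` and `lock_both` are hypothetical names. On trylock failure the outer lock is dropped, the thread blocks on the inner lock, then reacquires the outer one, and the caller is told to revalidate any state it cached while holding only the outer lock:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the two locks involved. */
struct two_locks {
	pthread_mutex_t outer;	/* e.g. a namecache lock */
	pthread_mutex_t inner;	/* e.g. the vnode interlock */
};

/*
 * Called with the outer lock held; returns with both locks held.
 * *relocked tells the caller whether the outer lock was transiently
 * dropped, in which case previously observed state must be revalidated.
 */
void
lock_both(struct two_locks *tl, bool *relocked)
{
	*relocked = false;
	if (pthread_mutex_trylock(&tl->inner) == 0)
		return;
	/* Contended: drop the outer lock and block instead of yielding. */
	pthread_mutex_unlock(&tl->outer);
	pthread_mutex_lock(&tl->inner);
	pthread_mutex_lock(&tl->outer);
	*relocked = true;
}
```

Blocking on the inner lock guarantees forward progress once its owner releases it, which yield-and-retry cannot.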
2020-08-26 12:54:18 +00:00
Mateusz Guzik
075f58f231
cache: stop null checking in cache_free
2020-08-26 12:53:16 +00:00
Mateusz Guzik
66fa11c898
cache: make it mandatory to request both timestamps or neither
2020-08-26 12:52:54 +00:00
Mateusz Guzik
eef63775b6
cache: convert bucketlocks to a mutex
...
By now bucket locks are almost never taken for anything but writing and
converting to mutex simplifies the code.
2020-08-26 12:52:17 +00:00
Mateusz Guzik
32f3d0821c
cache: only evict negative entries on CREATE when ISLASTCN is set
2020-08-26 12:50:57 +00:00
Mateusz Guzik
935e15187c
cache: decouple smr and locked lookup in the slowpath
...
Tested by: pho
2020-08-26 12:50:10 +00:00
Mateusz Guzik
d3476daddc
cache: factor dotdot lookup out of cache_lookup
...
Tested by: pho
2020-08-26 12:49:39 +00:00
Mateusz Guzik
f9cdb0775e
cache: remove leftover assert in vn_fullpath_any_smr
...
It is only valid when !slash_prefixed. For slash_prefixed the length
is properly accounted for later.
Reported by: markj (syzkaller)
2020-08-24 18:23:58 +00:00
Mateusz Guzik
e35406c8f7
cache: lockless reverse lookup
...
This enables fully scalable operation for getcwd and significantly improves
realpath.
For example:
PATH_CUSTOM=/usr/src ./getcwd_processes -t 104
before: 1550851
after: 380135380
Tested by: pho
2020-08-24 09:00:57 +00:00
Mateusz Guzik
feabaaf995
cache: drop the always curthread argument from reverse lookup routines
...
Note VOP_VPTOCNP keeps getting it as temporary compatibility for zfs.
Tested by: pho
2020-08-24 08:57:02 +00:00
Mateusz Guzik
f0696c5e4b
cache: perform reverse lookup using v_cache_dd if possible
...
Tested by: pho
2020-08-24 08:55:55 +00:00
Mateusz Guzik
ce575cd0e2
cache: populate v_cache_dd for non-VDIR entries
...
This makes v_cache_dd a bit of a misnomer; that may be addressed later.
Tested by: pho
2020-08-24 08:55:04 +00:00
Mateusz Guzik
1e448a1558
cache: stronger vnode asserts in cache_enter_time
2020-08-22 16:58:34 +00:00
Mateusz Guzik
760a430bb3
vfs: add a work around for vp_crossmp bug to realpath
...
The actual bug is not yet addressed as it will get much easier after other
problems are addressed (most notably rename contract).
The only affected in-tree consumer is realpath. Everyone else happens to be
performing lookups within a mount point, with the side effect of ni_dvp being
set to the mount point's root vnode in the worst case.
Reported by: pho
2020-08-22 06:56:04 +00:00
Mateusz Guzik
17838b5869
cache: don't use cache_purge_negative when renaming
...
It avoidably scans (and evicts) unrelated entries. Instead take
advantage of passed componentname and perform a hash lookup
for the exact one.
Sample data below come from a buildworld, with cache_purge_negative extended
to count both scanned and evicted entries on each call.
At most it has to evict 1.
evicted
value ------------- Distribution ------------- count
-1 | 0
0 |@@@@@@@@@@@@@@@ 19506
1 |@@@@@ 5820
2 |@@@@@@ 7751
4 |@@@@@ 6506
8 |@@@@@ 5996
16 |@@@ 4029
32 |@ 1489
64 | 193
128 | 109
256 | 56
512 | 16
1024 | 7
2048 | 3
4096 | 1
8192 | 1
16384 | 0
scanned
value ------------- Distribution ------------- count
-1 | 0
0 |@@ 2456
1 |@ 1496
2 |@@ 2728
4 |@@@ 4171
8 |@@@@ 5122
16 |@@@@ 5335
32 |@@@@@ 6279
64 |@@@@ 5671
128 |@@@@ 4558
256 |@@ 3123
512 |@@ 2790
1024 |@@ 2449
2048 |@@ 3021
4096 |@ 1398
8192 |@ 886
16384 | 0
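The exact-match approach can be sketched in userspace C. This is a toy illustration, not the kernel code; `hash_pair`, `cache_insert` and `cache_remove_exact` are made-up names. The point is that rename hashes the (directory, name) pair and probes a single bucket, instead of scanning (and evicting) unrelated negative entries:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NBUCKETS 64

struct entry {
	struct entry *next;
	const void *parent;	/* directory the entry hangs off */
	char name[32];
};

static struct entry *buckets[NBUCKETS];

/* FNV-1a over the parent pointer bytes and the component name. */
static uint32_t
hash_pair(const void *parent, const char *name)
{
	const unsigned char *p = (const unsigned char *)&parent;
	uint32_t h = 2166136261u;
	size_t i;

	for (i = 0; i < sizeof(parent); i++)
		h = (h ^ p[i]) * 16777619u;
	for (; *name != '\0'; name++)
		h = (h ^ (unsigned char)*name) * 16777619u;
	return (h);
}

void
cache_insert(struct entry *e)
{
	uint32_t b = hash_pair(e->parent, e->name) % NBUCKETS;

	e->next = buckets[b];
	buckets[b] = e;
}

/* Evict at most the single matching entry: O(one bucket), not O(all). */
int
cache_remove_exact(const void *parent, const char *name)
{
	uint32_t b = hash_pair(parent, name) % NBUCKETS;
	struct entry **pp;

	for (pp = &buckets[b]; *pp != NULL; pp = &(*pp)->next) {
		if ((*pp)->parent == parent && strcmp((*pp)->name, name) == 0) {
			*pp = (*pp)->next;
			return (1);
		}
	}
	return (0);
}
```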
2020-08-20 10:06:50 +00:00
Mateusz Guzik
39f8815070
cache: add cache_rename, a dedicated helper to use for renames
...
While here make both tmpfs and ufs use it.
No functional changes.
2020-08-20 10:05:46 +00:00
Mateusz Guzik
16be9f9956
cache: reimplement cache_lookup_nomakeentry as cache_remove_cnp
...
This in particular removes unused arguments.
2020-08-20 10:05:19 +00:00
Mateusz Guzik
6c55d6e030
cache: when adding an already existing entry assert on a complete match
2020-08-19 15:08:14 +00:00
Mateusz Guzik
7c75f14f5b
cache: tidy up the comment above cache_prehash
2020-08-19 15:07:28 +00:00
Mateusz Guzik
3c5d2ed71f
cache: add NOCAPCHECK to the list of supported flags for lockless lookup
...
It is de facto supported in that lockless lookup does not do any capability
checks.
2020-08-16 18:33:24 +00:00
Mateusz Guzik
8ab4becab0
vfs: use namei_zone for getcwd allocations
...
instead of malloc.
Note that this should probably be wrapped with a dedicated API; other
vn_getcwd callers did not get converted.
2020-08-16 18:21:21 +00:00
Mateusz Guzik
5e79447d60
cache: let SAVESTART passthrough
...
The flag is only passed for non-LOOKUP ops and those fall back to the slowpath.
2020-08-10 12:28:56 +00:00
Mateusz Guzik
bb48255cf5
cache: resize struct namecache to a multiple of alignment
...
For example, struct namecache on amd64 is 100 bytes, but it has to occupy
104. Use the extra bytes to support longer names.
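The sizing idea can be sketched as follows. `CACHE_ZONE_ALIGN` and `usable_name_bytes` are made-up names and the numbers are illustrative: the allocator rounds the object size up to the zone alignment anyway, so the slack between sizeof and the rounded size can hold extra name bytes for free:

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_ZONE_ALIGN 8	/* assumed zone alignment */

/* Round x up to a power-of-2 boundary y. */
#define roundup2(x, y)	(((x) + ((y) - 1)) & ~((size_t)(y) - 1))

/*
 * Bytes available for the name: the declared budget plus the padding
 * the allocator would otherwise waste.
 */
size_t
usable_name_bytes(size_t struct_size, size_t base_name_bytes)
{
	size_t alloc_size = roundup2(struct_size, CACHE_ZONE_ALIGN);

	return (base_name_bytes + (alloc_size - struct_size));
}
```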
2020-08-10 12:05:55 +00:00
Mateusz Guzik
8b62cebea7
cache: remove unused variables from cache_fplookup_parse
2020-08-10 11:51:56 +00:00
Mateusz Guzik
03337743db
vfs: clean MNTK_FPLOOKUP if MNT_UNION is set
...
Elides checking it during lookup.
2020-08-10 11:51:21 +00:00
Mateusz Guzik
c571b99545
cache: strlcpy -> memcpy
2020-08-10 10:40:14 +00:00
Mateusz Guzik
3ba0e51703
vfs: partially support file create/delete/rename in lockless lookup
...
Perform the lookup until the last 2 elements and fall back to the slowpath.
Tested by: pho
Sponsored by: The FreeBSD Foundation
2020-08-10 10:35:18 +00:00
Mateusz Guzik
21d5af2b30
vfs: drop the thread argument from vfs_fplookup_vexec
...
It is guaranteed to be curthread.
Tested by: pho
Sponsored by: The FreeBSD Foundation
2020-08-10 10:34:22 +00:00
Mateusz Guzik
e910c93eea
cache: add more predicts for failing conditions
2020-08-06 04:20:14 +00:00
Mateusz Guzik
95888901f7
cache: plug uninitialized variable use
...
CID: 1431128
2020-08-06 04:19:47 +00:00
Mateusz Guzik
e1b1971c05
cache: don't ignore size passed to nchinittbl
2020-08-05 09:38:02 +00:00
Mateusz Guzik
2b86f9d6d0
cache: convert the hash from LIST to SLIST
...
This reduces struct namecache by sizeof(void *).
The downside is that we have to find the previous element (if any) when
removing an entry, but since we normally don't expect collisions it should be
fine.
Note this adds cache_get_hash calls which can be eliminated.
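Both the space saving and the removal cost show up directly in the &lt;sys/queue.h&gt; macros themselves (a userspace sketch; the struct names are made up):

```c
#include <assert.h>
#include <stddef.h>
#include <sys/queue.h>

/* SLIST linkage: one forward pointer per entry. */
struct ncp {
	SLIST_ENTRY(ncp) nc_hash;
	int id;
};
SLIST_HEAD(nchead, ncp);

/*
 * The LIST variant of the same entry, for size comparison: LIST_ENTRY
 * carries an extra back pointer, costing sizeof(void *) per entry.
 */
struct ncp_list {
	LIST_ENTRY(ncp_list) nc_hash;
	int id;
};
```

SLIST_REMOVE has to walk the chain from the head to locate the victim's predecessor, which is the trade-off the commit accepts for short, mostly collision-free chains.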
2020-08-05 09:25:59 +00:00
Mateusz Guzik
cf8ac0de81
cache: reduce zone alignment to 8 bytes
...
It used to be the sizeof of the given struct to accommodate 32-bit mips
doing 64-bit loads, but the same can be achieved by requiring just
64-bit alignment.
While here reorder struct namecache so that most commonly used fields
are closer.
2020-08-05 09:24:38 +00:00
Mateusz Guzik
d61ce7ef50
cache: convert ncneghash into a macro
...
It is a read-only var with value known at compilation time.
2020-08-05 09:24:00 +00:00
Mateusz Guzik
2840f07d4f
cache: cleanup lockless entry point
...
- remove spurious bzero
- assert ni_lcf, it has to be set by namei by this point
2020-08-05 07:32:26 +00:00
Mateusz Guzik
8ccf01e0e2
cache: stop messing with cn_lkflags
...
See r363882.
2020-08-05 07:30:57 +00:00
Mateusz Guzik
27c4618df5
cache: stop messing with cn_flags
...
This removes flag setting/unsetting carried over from regular lookup.
Flags still get set for compatibility when falling back.
Note .. and . handling can get partially folded together.
2020-08-05 07:30:17 +00:00
Mateusz Guzik
db99ec5656
vfs: support lockless dotdot lookup
...
Tested by: pho
2020-08-04 23:07:42 +00:00
Mateusz Guzik
b403aa126e
cache: add NCF_WIP flag
...
This allows making half-constructed entries visible to the lockless lookup,
which can now check for either the "not yet fully constructed" or the
"no longer valid" state.
This will be used for .. lookup.
2020-08-04 23:07:00 +00:00
Mateusz Guzik
6e10434c02
cache: add cache_purge_vgone
...
cache_purge locklessly checks whether the vnode at hand has any namecache
entries. This can race with a concurrent purge which managed to remove
the last entry, but may not be done touching the vnode.
Make sure we observe the relevant vnode lock as not taken before proceeding
with vgone.
Paired with the fact that doomed vnodes cannot receive entries, this restores
the invariant that there are no namecache-related writing users past cache_purge
in vgone.
Reported by: pho
2020-08-04 23:04:29 +00:00
Mateusz Guzik
1164f7a566
cache: factor away failed vexec handling
2020-08-04 19:55:26 +00:00
Mateusz Guzik
0439b00ea8
cache: assorted tidy ups
2020-08-04 19:55:00 +00:00
Mateusz Guzik
18bd02e2ce
cache: factor away lockless dot lookup and add missing stat + sdt probe
2020-08-04 19:54:37 +00:00
Mateusz Guzik
17a66c7087
vfs: add vfs_op_thread_enter/exit _crit variants
...
and employ them in the namecache. Eliminates all spurious checks for preemption.
2020-08-04 19:54:10 +00:00
Mateusz Guzik
0311b05fec
cache: add missing numcache decrement on insertion failure
2020-08-04 19:52:52 +00:00
Mateusz Guzik
7ad2f1105e
vfs: store precomputed namecache hash in the vnode
...
This significantly speeds up path lookup. On Cascade Lake, access(2) on ufs
on /usr/obj/usr/src/amd64.amd64/sys/GENERIC/vnode_if.c, ops/s:
before: 2535298
after: 2797621
Over +10%.
The reversed order of computation here does not seem to matter for hash
distribution.
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D25921
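The mechanism can be sketched with FNV-1a in userspace (an illustration only; the field and function names merely mimic the kernel's): the directory vnode's contribution to the hash is computed once at vnode init and stored, and each lookup continues from that seed with just the name bytes:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define FNV1A_BASIS 2166136261u
#define FNV1A_PRIME 16777619u

/* FNV-1a, continuable from an arbitrary seed. */
static uint32_t
fnv1a(const void *buf, size_t len, uint32_t seed)
{
	const unsigned char *p = buf;
	uint32_t h = seed;

	while (len-- > 0)
		h = (h ^ *p++) * FNV1A_PRIME;
	return (h);
}

struct vnode {
	uint32_t v_nchash;	/* hash of the vnode pointer, precomputed */
};

/* Done once, when the vnode is set up. */
void
vnode_init_hash(struct vnode *vp)
{
	vp->v_nchash = fnv1a(&vp, sizeof(vp), FNV1A_BASIS);
}

/*
 * Per-lookup work: only the name bytes remain to be hashed, continuing
 * from the stored seed. Note the vnode is mixed in first, the reverse
 * of the usual name-then-vnode order.
 */
uint32_t
cache_get_hash(const char *name, size_t len, const struct vnode *dvp)
{
	return (fnv1a(name, len, dvp->v_nchash));
}
```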
2020-08-02 20:02:06 +00:00