Diffstat (limited to 'patches/server/0977-Rewrite-chunk-system.patch')
-rw-r--r--  patches/server/0977-Rewrite-chunk-system.patch  21900
1 file changed, 21900 insertions, 0 deletions
diff --git a/patches/server/0977-Rewrite-chunk-system.patch b/patches/server/0977-Rewrite-chunk-system.patch
new file mode 100644
index 0000000000..eea1dbc7f3
--- /dev/null
+++ b/patches/server/0977-Rewrite-chunk-system.patch
@@ -0,0 +1,21900 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Spottedleaf <[email protected]>
+Date: Thu, 11 Mar 2021 02:32:30 -0800
+Subject: [PATCH] Rewrite chunk system
+
+Rebased patches:
+
+New player chunk loader system
+
+Make ChunkStatus.EMPTY not rely on the main thread for completion
+
+In order to do this, we need to push the POI consistency checks
+to a later status. Since FULL is the only other status that
+uses the main thread, it can go there.
+
+The consistency checks are only really for when a desync occurs,
+so delaying the check only matters when the chunk data
+has desync'd. As long as the desync is resolved before the
+chunk is fully loaded (i.e. before setBlock can occur on
+a chunk), it should not matter.
+
+This change is primarily due to behavioural changes
+in the chunk task queue brought by region threading,
+which splits the queue into separate regions. As a result,
+for a sync load to complete, the region owning the chunk
+must drain and execute the task while ticking. However,
+that is not always possible in region threading. Thus, removing
+the main thread reliance allows the chunk to progress without
+requiring a tick thread. Specifically, this allows far sync
+loads (outside of a specific region's bounds) to occur without
+issue - namely with structure searching.
+
+Increase parallelism for neighbour writing chunk statuses
+
+Namely, everything after FEATURES. By creating a dependency
+chain indicating what chunks are in use, we can safely
+schedule completely independent tasks in parallel. This
+will allow the chunk system to scale beyond 10 threads
+per world.
+
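+As a rough illustration of the dependency idea (a hypothetical
+sketch, none of these names are from this patch; assumes external
+synchronization such as the scheduling lock):
+
+    // chunks claimed by in-flight status tasks
+    final LongOpenHashSet inUse = new LongOpenHashSet();
+
+    // a neighbour-writing task may only run when its whole area is unclaimed
+    boolean tryClaim(final int chunkX, final int chunkZ, final int radius) {
+        for (int dz = -radius; dz <= radius; ++dz) {
+            for (int dx = -radius; dx <= radius; ++dx) {
+                if (inUse.contains(CoordinateUtils.getChunkKey(chunkX + dx, chunkZ + dz))) {
+                    return false; // overlaps an in-flight task, cannot schedule yet
+                }
+            }
+        }
+        for (int dz = -radius; dz <= radius; ++dz) {
+            for (int dx = -radius; dx <= radius; ++dx) {
+                inUse.add(CoordinateUtils.getChunkKey(chunkX + dx, chunkZ + dz));
+            }
+        }
+        return true;
+    }
+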
+Properly cancel chunk load tasks that were not scheduled
+
+Since the chunk load task was not scheduled, the entity/poi load
+task fields will not be set, but the task complete counter
+will not be adjusted. Thus, the chunk load task will not complete.
+
+To resolve this, detect when the entity/poi tasks were not scheduled
+and decrement the task complete counter in such cases.
+
+Mark POI/Entity load tasks as completed before releasing scheduling lock
+
+It must be marked as completed during that lock hold since the
+waiters field is set to null. Thus, any other thread attempting
+a cancellation will fail to remove from waiters. Also, any
+other thread attempting to cancel may set the completed field
+to true which would cause accept() to fail as well.
+
+Completion was always designed to happen while holding the
+scheduling lock to prevent these race conditions. The code
+was originally set up to complete while not holding the
+scheduling lock, to avoid invoking callbacks while holding the
+lock; however, the access to the completion field was not
+considered.
+
+Resolve this by marking the callback as completed during the
+lock, but invoking the accept() function after releasing
+the lock. This prevents any cancellation attempts from being
+blocked, and allows the current thread to complete the callback
+without any issues.
+
+Cache whether region files do not exist
+
+The repeated I/O of creating the directory for the regionfile
+or for checking if the file exists can be heavy
+when pushing chunk generation extremely hard - as each chunk gen
+request may effectively go through to the I/O thread.
+
+Use coordinate-based locking to increase chunk system parallelism
+
+A significant overhead in Folia comes from the chunk system's
+locks, the ticket lock and the scheduling lock. The public
+test server, which had ~330 players, had significant performance
+problems with these locks: ~80% of the time spent ticking
+was _waiting_ for the locks to free. Given that it used
+around 15 cores total at peak, this is a complete and utter loss
+of potential.
+
+To address this issue, I have replaced the ticket lock and scheduling
+lock with two ReentrantAreaLocks. The ReentrantAreaLock takes a
+shift, which is used internally to group positions into sections.
+This grouping is necessary, as the possible radius of area that
+needs to be acquired for any given lock usage is up to 64. As such,
+the shift is critical to reduce the number of areas required to lock
+for any lock operation. Currently, it is set to a shift of 6, which
+is identical to the ticket level propagation shift (and, it must be
+at least the ticket level propagation shift AND the region shift).
+
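+A rough usage sketch (variable names are illustrative, not from
+this patch):
+
+    // a shift of 6 groups coordinates into 64x64 sections
+    final ReentrantAreaLock ticketLock = new ReentrantAreaLock(6);
+
+    final ReentrantAreaLock.Node node = ticketLock.lock(chunkX, chunkZ, 8);
+    try {
+        // safe to mutate chunk system state for any chunk within radius 8
+    } finally {
+        ticketLock.unlock(node);
+    }
+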
+The chunk system locking changes required a complete rewrite of the
+chunk system tick, chunk system unload, and chunk system ticket level
+propagation - as all of the previous logic only works with a single
+global lock.
+
+This does introduce two other section shifts: the lock shift, and the
+ticket shift. The lock shift is simply what shift the area locks use,
+and the ticket shift represents the size of the ticket sections.
+Currently, these values are just set to the region shift for simplicity.
+However, they are not arbitrary: the lock shift must be at least the size
+of the ticket shift and must be at least the size of the region shift.
+The ticket shift must also be >= the ceil(log2(max ticket level source)).
+
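+Expressed as illustrative sanity checks (the constant names and the
+ceilLog2 helper are hypothetical, not from this patch):
+
+    assert LOCK_SHIFT >= TICKET_SHIFT && LOCK_SHIFT >= REGION_SHIFT;
+    assert TICKET_SHIFT >= ceilLog2(MAX_TICKET_LEVEL_SOURCE); // ceil(log2(value))
+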
+The chunk system's ticket propagator is now global state, instead of
+region state. This cleans up the logic for ticket levels significantly,
+and removes usage of the region lock in this area, but it also means
+that the addition of a ticket no longer creates a region. To alleviate
+the side effects of this change, the global tick thread now processes
+ticket level updates for each world every tick to guarantee eventual
+ticket level processing. The chunk system also provides a hook to
+process ticket level changes in a given _section_, so that the
+region queue can guarantee that, after adding its reference counter,
+the region section is created/exists/won't be destroyed.
+
+The ticket propagator operates by updating the sources in a single ticket
+section, and propagating the updates to its 1 radius neighbours. This
+allows the ticket updates to occur in parallel or selectively (see above).
+Currently, the process ticket level update function operates by
+polling from a concurrent queue of sections to update and simply
+invoking the single section update logic. This allows the function
+to operate completely in parallel, provided the queue is ordered correctly.
+Additionally, this limits the area used in the ticket/scheduling lock
+when processing updates, which should massively increase parallelism compared
+to before.
+
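+A minimal sketch of that polling loop (all names are hypothetical):
+
+    Long sectionKey;
+    while ((sectionKey = dirtySections.poll()) != null) { // concurrent queue
+        final int sectionX = CoordinateUtils.getChunkX(sectionKey.longValue());
+        final int sectionZ = CoordinateUtils.getChunkZ(sectionKey.longValue());
+        // acquire only this section and its 1-radius neighbours (chunk coords)
+        final ReentrantAreaLock.Node node = ticketLock.lock(
+            (sectionX - 1) << sectionShift, (sectionZ - 1) << sectionShift,
+            (sectionX + 1) << sectionShift, (sectionZ + 1) << sectionShift);
+        try {
+            updateSingleSection(sectionX, sectionZ);
+        } finally {
+            ticketLock.unlock(node);
+        }
+    }
+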
+The chunk system ticket addition for expirable ticket types has been modified
+to no longer track exact tick deadlines, as this relies on what region the
+ticket is in. Instead, the chunk system tracks a map of
+lock section -> (chunk coordinate -> expire ticket count) and every ticket
+has been changed to have a removeDelay count that is decremented each tick.
+Each region searches its own sections to find tickets to try to expire.
+
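+A sketch of the bookkeeping (all names are hypothetical):
+
+    // lock section key -> (chunk key -> number of expirable tickets)
+    final Long2ObjectOpenHashMap<Long2LongOpenHashMap> expireCounts =
+        new Long2ObjectOpenHashMap<>();
+    // each expirable ticket stores a removeDelay decremented once per tick;
+    // a region scans only the sections it owns for tickets reaching zero
+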
+Chunk system unloading has been modified to track unloads by lock section.
+The ordering is determined by which section a chunk resides in.
+The unload process now removes from unload sections and processes
+the full unload stages (1, 2, 3) before moving to the next section, if possible.
+This allows the unload logic to only hold one lock section at a time for
+each lock, which is a massive parallelism increase.
+
+In stress testing, these changes lowered the locking overhead to only 5%
+from ~70%, which completely fixes the original problem as described.
+
+== AT ==
+public net.minecraft.server.level.ChunkHolder pos
+public net.minecraft.server.level.ChunkMap overworldDataStorage
+public-f net.minecraft.world.level.chunk.storage.RegionFileStorage
+public net.minecraft.server.level.ChunkMap getPoiManager()Lnet/minecraft/world/entity/ai/village/poi/PoiManager;
+
+diff --git a/src/main/java/ca/spottedleaf/concurrentutil/lock/ReentrantAreaLock.java b/src/main/java/ca/spottedleaf/concurrentutil/lock/ReentrantAreaLock.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..4fd9a0cd8f1e6ae1a97e963dc7731a80bc6fac5b
+--- /dev/null
++++ b/src/main/java/ca/spottedleaf/concurrentutil/lock/ReentrantAreaLock.java
+@@ -0,0 +1,395 @@
++package ca.spottedleaf.concurrentutil.lock;
++
++import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
++import it.unimi.dsi.fastutil.HashCommon;
++import java.util.ArrayList;
++import java.util.List;
++import java.util.concurrent.ConcurrentHashMap;
++import java.util.concurrent.locks.LockSupport;
++
++public final class ReentrantAreaLock {
++
++ public final int coordinateShift;
++
++ // aggressive load factor to reduce contention
++ private final ConcurrentHashMap<Coordinate, Node> nodes = new ConcurrentHashMap<>(128, 0.2f);
++
++ public ReentrantAreaLock(final int coordinateShift) {
++ this.coordinateShift = coordinateShift;
++ }
++
++ public boolean isHeldByCurrentThread(final int x, final int z) {
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int sectionX = x >> shift;
++ final int sectionZ = z >> shift;
++
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++ final Node node = this.nodes.get(coordinate);
++
++ return node != null && node.thread == currThread;
++ }
++
++ public boolean isHeldByCurrentThread(final int centerX, final int centerZ, final int radius) {
++ return this.isHeldByCurrentThread(centerX - radius, centerZ - radius, centerX + radius, centerZ + radius);
++ }
++
++ public boolean isHeldByCurrentThread(final int fromX, final int fromZ, final int toX, final int toZ) {
++ if (fromX > toX || fromZ > toZ) {
++ throw new IllegalArgumentException();
++ }
++
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int fromSectionX = fromX >> shift;
++ final int fromSectionZ = fromZ >> shift;
++ final int toSectionX = toX >> shift;
++ final int toSectionZ = toZ >> shift;
++
++ for (int currZ = fromSectionZ; currZ <= toSectionZ; ++currZ) {
++ for (int currX = fromSectionX; currX <= toSectionX; ++currX) {
++ final Coordinate coordinate = new Coordinate(Coordinate.key(currX, currZ));
++
++ final Node node = this.nodes.get(coordinate);
++
++ if (node == null || node.thread != currThread) {
++ return false;
++ }
++ }
++ }
++
++ return true;
++ }
++
++ public Node tryLock(final int x, final int z) {
++ return this.tryLock(x, z, x, z);
++ }
++
++ public Node tryLock(final int centerX, final int centerZ, final int radius) {
++ return this.tryLock(centerX - radius, centerZ - radius, centerX + radius, centerZ + radius);
++ }
++
++ public Node tryLock(final int fromX, final int fromZ, final int toX, final int toZ) {
++ if (fromX > toX || fromZ > toZ) {
++ throw new IllegalArgumentException();
++ }
++
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int fromSectionX = fromX >> shift;
++ final int fromSectionZ = fromZ >> shift;
++ final int toSectionX = toX >> shift;
++ final int toSectionZ = toZ >> shift;
++
++ final List<Coordinate> areaAffected = new ArrayList<>();
++
++ final Node ret = new Node(this, areaAffected, currThread);
++
++ boolean failed = false;
++
++ // try to fast acquire area
++ for (int currZ = fromSectionZ; currZ <= toSectionZ; ++currZ) {
++ for (int currX = fromSectionX; currX <= toSectionX; ++currX) {
++ final Coordinate coordinate = new Coordinate(Coordinate.key(currX, currZ));
++
++ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
++
++ if (prev == null) {
++ areaAffected.add(coordinate);
++ continue;
++ }
++
++ if (prev.thread != currThread) {
++ failed = true;
++ break;
++ }
++ }
++ }
++
++ if (!failed) {
++ return ret;
++ }
++
++ // failed, undo logic
++ if (!areaAffected.isEmpty()) {
++ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
++ final Coordinate key = areaAffected.get(i);
++
++ if (this.nodes.remove(key) != ret) {
++ throw new IllegalStateException();
++ }
++ }
++
++ areaAffected.clear();
++
++ // since we inserted, we need to drain waiters
++ Thread unpark;
++ while ((unpark = ret.pollOrBlockAdds()) != null) {
++ LockSupport.unpark(unpark);
++ }
++ }
++
++ return null;
++ }
++
++ public Node lock(final int x, final int z) {
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int sectionX = x >> shift;
++ final int sectionZ = z >> shift;
++
++ final List<Coordinate> areaAffected = new ArrayList<>(1);
++
++ final Node ret = new Node(this, areaAffected, currThread);
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++
++ for (long failures = 0L;;) {
++ final Node park;
++
++ // try to fast acquire area
++ {
++ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
++
++ if (prev == null) {
++ areaAffected.add(coordinate);
++ return ret;
++ } else if (prev.thread != currThread) {
++ park = prev;
++ } else {
++ // only one node we would want to acquire, and it's owned by this thread already
++ return ret;
++ }
++ }
++
++ ++failures;
++
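++ // after repeated failures, enqueue on the owner's waiter queue and park; unlock() drains the queue and unparks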
++ if (failures > 128L && park.add(currThread)) {
++ LockSupport.park();
++ } else {
++ // high contention, spin wait
++ if (failures < 128L) {
++ for (long i = 0; i < failures; ++i) {
++ Thread.onSpinWait();
++ }
++ failures = failures << 1;
++ } else if (failures < 1_200L) {
++ LockSupport.parkNanos(1_000L);
++ failures = failures + 1L;
++ } else { // scale 0.1ms (100us) per failure
++ Thread.yield();
++ LockSupport.parkNanos(100_000L * failures);
++ failures = failures + 1L;
++ }
++ }
++ }
++ }
++
++ public Node lock(final int centerX, final int centerZ, final int radius) {
++ return this.lock(centerX - radius, centerZ - radius, centerX + radius, centerZ + radius);
++ }
++
++ public Node lock(final int fromX, final int fromZ, final int toX, final int toZ) {
++ if (fromX > toX || fromZ > toZ) {
++ throw new IllegalArgumentException();
++ }
++
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int fromSectionX = fromX >> shift;
++ final int fromSectionZ = fromZ >> shift;
++ final int toSectionX = toX >> shift;
++ final int toSectionZ = toZ >> shift;
++
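++ // the whole requested area lies within a single section, so use the cheaper single-coordinate lock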
++ if (((fromSectionX ^ toSectionX) | (fromSectionZ ^ toSectionZ)) == 0) {
++ return this.lock(fromX, fromZ);
++ }
++
++ final List<Coordinate> areaAffected = new ArrayList<>();
++
++ final Node ret = new Node(this, areaAffected, currThread);
++
++ for (long failures = 0L;;) {
++ Node park = null;
++ boolean addedToArea = false;
++ boolean alreadyOwned = false;
++ boolean allOwned = true;
++
++ // try to fast acquire area
++ for (int currZ = fromSectionZ; currZ <= toSectionZ; ++currZ) {
++ for (int currX = fromSectionX; currX <= toSectionX; ++currX) {
++ final Coordinate coordinate = new Coordinate(Coordinate.key(currX, currZ));
++
++ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
++
++ if (prev == null) {
++ addedToArea = true;
++ allOwned = false;
++ areaAffected.add(coordinate);
++ continue;
++ }
++
++ if (prev.thread != currThread) {
++ park = prev;
++ alreadyOwned = true;
++ break;
++ }
++ }
++ }
++
++ if (park == null) {
++ if (alreadyOwned && !allOwned) {
++ throw new IllegalStateException("Improper lock usage: Should never acquire intersecting areas");
++ }
++ return ret;
++ }
++
++ // failed, undo logic
++ if (addedToArea) {
++ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
++ final Coordinate key = areaAffected.get(i);
++
++ if (this.nodes.remove(key) != ret) {
++ throw new IllegalStateException();
++ }
++ }
++
++ areaAffected.clear();
++
++ // since we inserted, we need to drain waiters
++ Thread unpark;
++ while ((unpark = ret.pollOrBlockAdds()) != null) {
++ LockSupport.unpark(unpark);
++ }
++ }
++
++ ++failures;
++
++ if (failures > 128L && park.add(currThread)) {
++ LockSupport.park(park);
++ } else {
++ // high contention, spin wait
++ if (failures < 128L) {
++ for (long i = 0; i < failures; ++i) {
++ Thread.onSpinWait();
++ }
++ failures = failures << 1;
++ } else if (failures < 1_200L) {
++ LockSupport.parkNanos(1_000L);
++ failures = failures + 1L;
++ } else { // scale 0.1ms (100us) per failure
++ Thread.yield();
++ LockSupport.parkNanos(100_000L * failures);
++ failures = failures + 1L;
++ }
++ }
++
++ if (addedToArea) {
++ // try again, so we need to allow adds so that other threads can properly block on us
++ ret.allowAdds();
++ }
++ }
++ }
++
++ public void unlock(final Node node) {
++ if (node.lock != this) {
++ throw new IllegalStateException("Unlock target lock mismatch");
++ }
++
++ final List<Coordinate> areaAffected = node.areaAffected;
++
++ if (areaAffected.isEmpty()) {
++ // here we are not in the node map, and so do not need to remove from the node map or unblock any waiters
++ return;
++ }
++
++ // remove from node map; allowing other threads to lock
++ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
++ final Coordinate coordinate = areaAffected.get(i);
++ if (this.nodes.remove(coordinate) != node) {
++ throw new IllegalStateException();
++ }
++ }
++
++ Thread unpark;
++ while ((unpark = node.pollOrBlockAdds()) != null) {
++ LockSupport.unpark(unpark);
++ }
++ }
++
++ public static final class Node extends MultiThreadedQueue<Thread> {
++
++ private final ReentrantAreaLock lock;
++ private final List<Coordinate> areaAffected;
++ private final Thread thread;
++ //private final Throwable WHO_CREATED_MY_ASS = new Throwable();
++
++ private Node(final ReentrantAreaLock lock, final List<Coordinate> areaAffected, final Thread thread) {
++ this.lock = lock;
++ this.areaAffected = areaAffected;
++ this.thread = thread;
++ }
++
++ @Override
++ public String toString() {
++ return "Node{" +
++ "areaAffected=" + this.areaAffected +
++ ", thread=" + this.thread +
++ '}';
++ }
++ }
++
++ private static final class Coordinate implements Comparable<Coordinate> {
++
++ public final long key;
++
++ public Coordinate(final long key) {
++ this.key = key;
++ }
++
++ public Coordinate(final int x, final int z) {
++ this.key = key(x, z);
++ }
++
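++ // pack the coordinates into a single long key: z in the high 32 bits, x in the low 32 bits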
++ public static long key(final int x, final int z) {
++ return ((long)z << 32) | (x & 0xFFFFFFFFL);
++ }
++
++ public static int x(final long key) {
++ return (int)key;
++ }
++
++ public static int z(final long key) {
++ return (int)(key >>> 32);
++ }
++
++ @Override
++ public int hashCode() {
++ return (int)HashCommon.mix(this.key);
++ }
++
++ @Override
++ public boolean equals(final Object obj) {
++ if (this == obj) {
++ return true;
++ }
++
++ if (!(obj instanceof Coordinate other)) {
++ return false;
++ }
++
++ return this.key == other.key;
++ }
++
++ // This class is intended for HashMap/ConcurrentHashMap usage, which treeify bin nodes if the chain
++ // grows too large. Implementing compareTo helps those maps build comparison-based trees.
++ @Override
++ public int compareTo(final Coordinate other) {
++ return Long.compare(this.key, other.key);
++ }
++
++ @Override
++ public String toString() {
++ return "[" + x(this.key) + "," + z(this.key) + "]";
++ }
++ }
++}
+diff --git a/src/main/java/ca/spottedleaf/concurrentutil/lock/SyncReentrantAreaLock.java b/src/main/java/ca/spottedleaf/concurrentutil/lock/SyncReentrantAreaLock.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..64b5803d002b2968841a5ddee987f98b72964e87
+--- /dev/null
++++ b/src/main/java/ca/spottedleaf/concurrentutil/lock/SyncReentrantAreaLock.java
+@@ -0,0 +1,217 @@
++package ca.spottedleaf.concurrentutil.lock;
++
++import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
++import it.unimi.dsi.fastutil.longs.Long2ReferenceOpenHashMap;
++import it.unimi.dsi.fastutil.longs.LongArrayList;
++import java.util.concurrent.locks.LockSupport;
++
++// not concurrent, unlike ReentrantAreaLock
++// no incorrect lock usage detection (acquiring intersecting areas)
++// this class is nothing more than a performance reference for ReentrantAreaLock
++public final class SyncReentrantAreaLock {
++
++ private final int coordinateShift;
++
++ // aggressive load factor to reduce contention
++ private final Long2ReferenceOpenHashMap<Node> nodes = new Long2ReferenceOpenHashMap<>(128, 0.2f);
++
++ public SyncReentrantAreaLock(final int coordinateShift) {
++ this.coordinateShift = coordinateShift;
++ }
++
++ private static long key(final int x, final int z) {
++ return ((long)z << 32) | (x & 0xFFFFFFFFL);
++ }
++
++ public Node lock(final int x, final int z) {
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int sectionX = x >> shift;
++ final int sectionZ = z >> shift;
++
++ final LongArrayList areaAffected = new LongArrayList();
++
++ final Node ret = new Node(this, areaAffected, currThread);
++
++ final long coordinate = key(sectionX, sectionZ);
++
++ for (long failures = 0L;;) {
++ final Node park;
++
++ synchronized (this) {
++ // try to fast acquire area
++ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
++
++ if (prev == null) {
++ areaAffected.add(coordinate);
++ return ret;
++ } else if (prev.thread != currThread) {
++ park = prev;
++ } else {
++ // only one node we would want to acquire, and it's owned by this thread already
++ return ret;
++ }
++ }
++
++ ++failures;
++
++ if (failures > 128L && park.add(currThread)) {
++ LockSupport.park();
++ } else {
++ // high contention, spin wait
++ if (failures < 128L) {
++ for (long i = 0; i < failures; ++i) {
++ Thread.onSpinWait();
++ }
++ failures = failures << 1;
++ } else if (failures < 1_200L) {
++ LockSupport.parkNanos(1_000L);
++ failures = failures + 1L;
++ } else { // scale 0.1ms (100us) per failure
++ Thread.yield();
++ LockSupport.parkNanos(100_000L * failures);
++ failures = failures + 1L;
++ }
++ }
++ }
++ }
++
++ public Node lock(final int centerX, final int centerZ, final int radius) {
++ return this.lock(centerX - radius, centerZ - radius, centerX + radius, centerZ + radius);
++ }
++
++ public Node lock(final int fromX, final int fromZ, final int toX, final int toZ) {
++ if (fromX > toX || fromZ > toZ) {
++ throw new IllegalArgumentException();
++ }
++
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int fromSectionX = fromX >> shift;
++ final int fromSectionZ = fromZ >> shift;
++ final int toSectionX = toX >> shift;
++ final int toSectionZ = toZ >> shift;
++
++ final LongArrayList areaAffected = new LongArrayList();
++
++ final Node ret = new Node(this, areaAffected, currThread);
++
++ for (long failures = 0L;;) {
++ Node park = null;
++ boolean addedToArea = false;
++
++ synchronized (this) {
++ // try to fast acquire area
++ for (int currZ = fromSectionZ; currZ <= toSectionZ; ++currZ) {
++ for (int currX = fromSectionX; currX <= toSectionX; ++currX) {
++ final long coordinate = key(currX, currZ);
++
++ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
++
++ if (prev == null) {
++ addedToArea = true;
++ areaAffected.add(coordinate);
++ continue;
++ }
++
++ if (prev.thread != currThread) {
++ park = prev;
++ break;
++ }
++ }
++ }
++
++ if (park == null) {
++ return ret;
++ }
++
++ // failed, undo logic
++ if (!areaAffected.isEmpty()) {
++ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
++ final long key = areaAffected.getLong(i);
++
++ if (!this.nodes.remove(key, ret)) {
++ throw new IllegalStateException();
++ }
++ }
++ }
++ }
++
++ if (addedToArea) {
++ areaAffected.clear();
++ // since we inserted, we need to drain waiters
++ Thread unpark;
++ while ((unpark = ret.pollOrBlockAdds()) != null) {
++ LockSupport.unpark(unpark);
++ }
++ }
++
++ ++failures;
++
++ if (failures > 128L && park.add(currThread)) {
++ LockSupport.park();
++ } else {
++ // high contention, spin wait
++ if (failures < 128L) {
++ for (long i = 0; i < failures; ++i) {
++ Thread.onSpinWait();
++ }
++ failures = failures << 1;
++ } else if (failures < 1_200L) {
++ LockSupport.parkNanos(1_000L);
++ failures = failures + 1L;
++ } else { // scale 0.1ms (100us) per failure
++ Thread.yield();
++ LockSupport.parkNanos(100_000L * failures);
++ failures = failures + 1L;
++ }
++ }
++
++ if (addedToArea) {
++ // try again, so we need to allow adds so that other threads can properly block on us
++ ret.allowAdds();
++ }
++ }
++ }
++
++ public void unlock(final Node node) {
++ if (node.lock != this) {
++ throw new IllegalStateException("Unlock target lock mismatch");
++ }
++
++ final LongArrayList areaAffected = node.areaAffected;
++
++ if (areaAffected.isEmpty()) {
++ // here we are not in the node map, and so do not need to remove from the node map or unblock any waiters
++ return;
++ }
++
++ // remove from node map; allowing other threads to lock
++ synchronized (this) {
++ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
++ final long coordinate = areaAffected.getLong(i);
++ if (!this.nodes.remove(coordinate, node)) {
++ throw new IllegalStateException();
++ }
++ }
++ }
++
++ Thread unpark;
++ while ((unpark = node.pollOrBlockAdds()) != null) {
++ LockSupport.unpark(unpark);
++ }
++ }
++
++ public static final class Node extends MultiThreadedQueue<Thread> {
++
++ private final SyncReentrantAreaLock lock;
++ private final LongArrayList areaAffected;
++ private final Thread thread;
++
++ private Node(final SyncReentrantAreaLock lock, final LongArrayList areaAffected, final Thread thread) {
++ this.lock = lock;
++ this.areaAffected = areaAffected;
++ this.thread = thread;
++ }
++ }
++}
+diff --git a/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java b/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java
+index e0338db4d6fa359029ed5edeacc3646aa98701f5..c03dbb4a74d00d794be4139f0f7c4b5ff1b01d38 100644
+--- a/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java
++++ b/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java
+@@ -41,14 +41,14 @@ public final class StarLightInterface {
+ protected final ArrayDeque<SkyStarLightEngine> cachedSkyPropagators;
+ protected final ArrayDeque<BlockStarLightEngine> cachedBlockPropagators;
+
+- protected final LightQueue lightQueue = new LightQueue(this);
++ public final io.papermc.paper.chunk.system.light.LightQueue lightQueue; // Paper - replace light queue
+
+ protected final LayerLightEventListener skyReader;
+ protected final LayerLightEventListener blockReader;
+ protected final boolean isClientSide;
+
+- protected final int minSection;
+- protected final int maxSection;
++ public final int minSection; // Paper - public
++ public final int maxSection; // Paper - public
+ protected final int minLightSection;
+ protected final int maxLightSection;
+
+@@ -182,6 +182,7 @@ public final class StarLightInterface {
+ StarLightInterface.this.sectionChange(pos, notReady);
+ }
+ };
++ this.lightQueue = new io.papermc.paper.chunk.system.light.LightQueue(this); // Paper - replace light queue
+ }
+
+ public boolean hasSkyLight() {
+@@ -333,7 +334,7 @@ public final class StarLightInterface {
+ return this.lightAccess;
+ }
+
+- protected final SkyStarLightEngine getSkyLightEngine() {
++ public final SkyStarLightEngine getSkyLightEngine() { // Paper - public
+ if (this.cachedSkyPropagators == null) {
+ return null;
+ }
+@@ -348,7 +349,7 @@ public final class StarLightInterface {
+ return ret;
+ }
+
+- protected final void releaseSkyLightEngine(final SkyStarLightEngine engine) {
++ public final void releaseSkyLightEngine(final SkyStarLightEngine engine) { // Paper - public
+ if (this.cachedSkyPropagators == null) {
+ return;
+ }
+@@ -357,7 +358,7 @@ public final class StarLightInterface {
+ }
+ }
+
+- protected final BlockStarLightEngine getBlockLightEngine() {
++ public final BlockStarLightEngine getBlockLightEngine() { // Paper - public
+ if (this.cachedBlockPropagators == null) {
+ return null;
+ }
+@@ -372,7 +373,7 @@ public final class StarLightInterface {
+ return ret;
+ }
+
+- protected final void releaseBlockLightEngine(final BlockStarLightEngine engine) {
++ public final void releaseBlockLightEngine(final BlockStarLightEngine engine) { // Paper - public
+ if (this.cachedBlockPropagators == null) {
+ return;
+ }
+@@ -381,7 +382,7 @@ public final class StarLightInterface {
+ }
+ }
+
+- public LightQueue.ChunkTasks blockChange(final BlockPos pos) {
++ public io.papermc.paper.chunk.system.light.LightQueue.ChunkTasks blockChange(final BlockPos pos) { // Paper - rewrite chunk system
+ if (this.world == null || pos.getY() < WorldUtil.getMinBlockY(this.world) || pos.getY() > WorldUtil.getMaxBlockY(this.world)) { // empty world
+ return null;
+ }
+@@ -389,7 +390,7 @@ public final class StarLightInterface {
+ return this.lightQueue.queueBlockChange(pos);
+ }
+
+- public LightQueue.ChunkTasks sectionChange(final SectionPos pos, final boolean newEmptyValue) {
++ public io.papermc.paper.chunk.system.light.LightQueue.ChunkTasks sectionChange(final SectionPos pos, final boolean newEmptyValue) { // Paper - rewrite chunk system
+ if (this.world == null) { // empty world
+ return null;
+ }
+@@ -519,57 +520,15 @@ public final class StarLightInterface {
+ }
+
+ public void scheduleChunkLight(final ChunkPos pos, final Runnable run) {
+- this.lightQueue.queueChunkLighting(pos, run);
++ throw new UnsupportedOperationException("No longer implemented, use the new lightQueue field to queue tasks"); // Paper - replace light queue
+ }
+
+ public void removeChunkTasks(final ChunkPos pos) {
+- this.lightQueue.removeChunk(pos);
++ throw new UnsupportedOperationException("No longer implemented, use the new lightQueue field to queue tasks"); // Paper - replace light queue
+ }
+
+ public void propagateChanges() {
+- if (this.lightQueue.isEmpty()) {
+- return;
+- }
+-
+- final SkyStarLightEngine skyEngine = this.getSkyLightEngine();
+- final BlockStarLightEngine blockEngine = this.getBlockLightEngine();
+-
+- try {
+- LightQueue.ChunkTasks task;
+- while ((task = this.lightQueue.removeFirstTask()) != null) {
+- if (task.lightTasks != null) {
+- for (final Runnable run : task.lightTasks) {
+- run.run();
+- }
+- }
+-
+- final long coordinate = task.chunkCoordinate;
+- final int chunkX = CoordinateUtils.getChunkX(coordinate);
+- final int chunkZ = CoordinateUtils.getChunkZ(coordinate);
+-
+- final Set<BlockPos> positions = task.changedPositions;
+- final Boolean[] sectionChanges = task.changedSectionSet;
+-
+- if (skyEngine != null && (!positions.isEmpty() || sectionChanges != null)) {
+- skyEngine.blocksChangedInChunk(this.lightAccess, chunkX, chunkZ, positions, sectionChanges);
+- }
+- if (blockEngine != null && (!positions.isEmpty() || sectionChanges != null)) {
+- blockEngine.blocksChangedInChunk(this.lightAccess, chunkX, chunkZ, positions, sectionChanges);
+- }
+-
+- if (skyEngine != null && task.queuedEdgeChecksSky != null) {
+- skyEngine.checkChunkEdges(this.lightAccess, chunkX, chunkZ, task.queuedEdgeChecksSky);
+- }
+- if (blockEngine != null && task.queuedEdgeChecksBlock != null) {
+- blockEngine.checkChunkEdges(this.lightAccess, chunkX, chunkZ, task.queuedEdgeChecksBlock);
+- }
+-
+- task.onComplete.complete(null);
+- }
+- } finally {
+- this.releaseSkyLightEngine(skyEngine);
+- this.releaseBlockLightEngine(blockEngine);
+- }
++ throw new UnsupportedOperationException("No longer implemented, task draining is now performed by the light thread"); // Paper - replace light queue
+ }
+
+ public static final class LightQueue {
+diff --git a/src/main/java/co/aikar/timings/WorldTimingsHandler.java b/src/main/java/co/aikar/timings/WorldTimingsHandler.java
+index 2f0d9b953802dee821cfde82d22b0567cce8ee91..22687667ec69a954261e55e59261286ac1b8b8cd 100644
+--- a/src/main/java/co/aikar/timings/WorldTimingsHandler.java
++++ b/src/main/java/co/aikar/timings/WorldTimingsHandler.java
+@@ -59,6 +59,16 @@ public class WorldTimingsHandler {
+
+ public final Timing miscMobSpawning;
+
++ public final Timing poiUnload;
++ public final Timing chunkUnload;
++ public final Timing poiSaveDataSerialization;
++ public final Timing chunkSave;
++ public final Timing chunkSaveDataSerialization;
++ public final Timing chunkSaveIOWait;
++ public final Timing chunkUnloadPrepareSave;
++ public final Timing chunkUnloadPOISerialization;
++ public final Timing chunkUnloadDataSave;
++
+ public WorldTimingsHandler(Level server) {
+ String name = ((PrimaryLevelData) server.getLevelData()).getLevelName() + " - ";
+
+@@ -112,6 +122,16 @@ public class WorldTimingsHandler {
+
+
+ miscMobSpawning = Timings.ofSafe(name + "Mob spawning - Misc");
++
++ poiUnload = Timings.ofSafe(name + "Chunk unload - POI");
++ chunkUnload = Timings.ofSafe(name + "Chunk unload - Chunk");
++ poiSaveDataSerialization = Timings.ofSafe(name + "Chunk save - POI Data serialization");
++ chunkSave = Timings.ofSafe(name + "Chunk save - Chunk");
++ chunkSaveDataSerialization = Timings.ofSafe(name + "Chunk save - Chunk Data serialization");
++ chunkSaveIOWait = Timings.ofSafe(name + "Chunk save - Chunk IO Wait");
++ chunkUnloadPrepareSave = Timings.ofSafe(name + "Chunk unload - Async Save Prepare");
++ chunkUnloadPOISerialization = Timings.ofSafe(name + "Chunk unload - POI Data Serialization");
++ chunkUnloadDataSave = Timings.ofSafe(name + "Chunk unload - Data Serialization");
+ }
+
+ public static Timing getTickList(ServerLevel worldserver, String timingsType) {
+diff --git a/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java b/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java
+index cff2f04409fab9abca87ceec85a551e1d59f9e7d..e3f56908cc8a9c3f4580def50fcfdc61bd495a71 100644
+--- a/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java
++++ b/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java
+@@ -32,192 +32,41 @@ public final class ChunkSystem {
+ }
+
+ public static void scheduleChunkTask(final ServerLevel level, final int chunkX, final int chunkZ, final Runnable run, final PrioritisedExecutor.Priority priority) {
+- level.chunkSource.mainThreadProcessor.execute(run);
++ level.chunkTaskScheduler.scheduleChunkTask(chunkX, chunkZ, run, priority); // Paper - rewrite chunk system
+ }
+
+ public static void scheduleChunkLoad(final ServerLevel level, final int chunkX, final int chunkZ, final boolean gen,
+ final ChunkStatus toStatus, final boolean addTicket, final PrioritisedExecutor.Priority priority,
+ final Consumer<ChunkAccess> onComplete) {
+- if (gen) {
+- scheduleChunkLoad(level, chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
+- return;
+- }
+- scheduleChunkLoad(level, chunkX, chunkZ, ChunkStatus.EMPTY, addTicket, priority, (final ChunkAccess chunk) -> {
+- if (chunk == null) {
+- onComplete.accept(null);
+- } else {
+- if (chunk.getStatus().isOrAfter(toStatus)) {
+- scheduleChunkLoad(level, chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
+- } else {
+- onComplete.accept(null);
+- }
+- }
+- });
++ level.chunkTaskScheduler.scheduleChunkLoad(chunkX, chunkZ, gen, toStatus, addTicket, priority, onComplete); // Paper - rewrite chunk system
+ }
+
+- static final TicketType<Long> CHUNK_LOAD = TicketType.create("chunk_load", Long::compareTo);
+-
+- private static long chunkLoadCounter = 0L;
++ // Paper - rewrite chunk system
+ public static void scheduleChunkLoad(final ServerLevel level, final int chunkX, final int chunkZ, final ChunkStatus toStatus,
+ final boolean addTicket, final PrioritisedExecutor.Priority priority, final Consumer<ChunkAccess> onComplete) {
+- if (!Bukkit.isPrimaryThread()) {
+- scheduleChunkTask(level, chunkX, chunkZ, () -> {
+- scheduleChunkLoad(level, chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
+- }, priority);
+- return;
+- }
+-
+- final int minLevel = 33 + ChunkStatus.getDistance(toStatus);
+- final Long chunkReference = addTicket ? Long.valueOf(++chunkLoadCounter) : null;
+- final ChunkPos chunkPos = new ChunkPos(chunkX, chunkZ);
+-
+- if (addTicket) {
+- level.chunkSource.addTicketAtLevel(CHUNK_LOAD, chunkPos, minLevel, chunkReference);
+- }
+- level.chunkSource.runDistanceManagerUpdates();
+-
+- final Consumer<ChunkAccess> loadCallback = (final ChunkAccess chunk) -> {
+- try {
+- if (onComplete != null) {
+- onComplete.accept(chunk);
+- }
+- } catch (final ThreadDeath death) {
+- throw death;
+- } catch (final Throwable thr) {
+- LOGGER.error("Exception handling chunk load callback", thr);
+- SneakyThrow.sneaky(thr);
+- } finally {
+- if (addTicket) {
+- level.chunkSource.addTicketAtLevel(TicketType.UNKNOWN, chunkPos, minLevel, chunkPos);
+- level.chunkSource.removeTicketAtLevel(CHUNK_LOAD, chunkPos, minLevel, chunkReference);
+- }
+- }
+- };
+-
+- final ChunkHolder holder = level.chunkSource.chunkMap.updatingChunkMap.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
+-
+- if (holder == null || holder.getTicketLevel() > minLevel) {
+- loadCallback.accept(null);
+- return;
+- }
+-
+- final CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> loadFuture = holder.getOrScheduleFuture(toStatus, level.chunkSource.chunkMap);
+-
+- if (loadFuture.isDone()) {
+- loadCallback.accept(loadFuture.join().left().orElse(null));
+- return;
+- }
+-
+- loadFuture.whenCompleteAsync((final Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure> either, final Throwable thr) -> {
+- if (thr != null) {
+- loadCallback.accept(null);
+- return;
+- }
+- loadCallback.accept(either.left().orElse(null));
+- }, (final Runnable r) -> {
+- scheduleChunkTask(level, chunkX, chunkZ, r, PrioritisedExecutor.Priority.HIGHEST);
+- });
++ level.chunkTaskScheduler.scheduleChunkLoad(chunkX, chunkZ, toStatus, addTicket, priority, onComplete); // Paper - rewrite chunk system
+ }
+
+ public static void scheduleTickingState(final ServerLevel level, final int chunkX, final int chunkZ,
+ final FullChunkStatus toStatus, final boolean addTicket,
+ final PrioritisedExecutor.Priority priority, final Consumer<LevelChunk> onComplete) {
+- // This method goes unused until the chunk system rewrite
+- if (toStatus == FullChunkStatus.INACCESSIBLE) {
+- throw new IllegalArgumentException("Cannot wait for INACCESSIBLE status");
+- }
+-
+- if (!Bukkit.isPrimaryThread()) {
+- scheduleChunkTask(level, chunkX, chunkZ, () -> {
+- scheduleTickingState(level, chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
+- }, priority);
+- return;
+- }
+-
+- final int minLevel = 33 - (toStatus.ordinal() - 1);
+- final int radius = toStatus.ordinal() - 1;
+- final Long chunkReference = addTicket ? Long.valueOf(++chunkLoadCounter) : null;
+- final ChunkPos chunkPos = new ChunkPos(chunkX, chunkZ);
+-
+- if (addTicket) {
+- level.chunkSource.addTicketAtLevel(CHUNK_LOAD, chunkPos, minLevel, chunkReference);
+- }
+- level.chunkSource.runDistanceManagerUpdates();
+-
+- final Consumer<LevelChunk> loadCallback = (final LevelChunk chunk) -> {
+- try {
+- if (onComplete != null) {
+- onComplete.accept(chunk);
+- }
+- } catch (final ThreadDeath death) {
+- throw death;
+- } catch (final Throwable thr) {
+- LOGGER.error("Exception handling chunk load callback", thr);
+- SneakyThrow.sneaky(thr);
+- } finally {
+- if (addTicket) {
+- level.chunkSource.addTicketAtLevel(TicketType.UNKNOWN, chunkPos, minLevel, chunkPos);
+- level.chunkSource.removeTicketAtLevel(CHUNK_LOAD, chunkPos, minLevel, chunkReference);
+- }
+- }
+- };
+-
+- final ChunkHolder holder = level.chunkSource.chunkMap.updatingChunkMap.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
+-
+- if (holder == null || holder.getTicketLevel() > minLevel) {
+- loadCallback.accept(null);
+- return;
+- }
+-
+- final CompletableFuture<Either<LevelChunk, ChunkHolder.ChunkLoadingFailure>> tickingState;
+- switch (toStatus) {
+- case FULL: {
+- tickingState = holder.getFullChunkFuture();
+- break;
+- }
+- case BLOCK_TICKING: {
+- tickingState = holder.getTickingChunkFuture();
+- break;
+- }
+- case ENTITY_TICKING: {
+- tickingState = holder.getEntityTickingChunkFuture();
+- break;
+- }
+- default: {
+- throw new IllegalStateException("Cannot reach here");
+- }
+- }
+-
+- if (tickingState.isDone()) {
+- loadCallback.accept(tickingState.join().left().orElse(null));
+- return;
+- }
+-
+- tickingState.whenCompleteAsync((final Either<LevelChunk, ChunkHolder.ChunkLoadingFailure> either, final Throwable thr) -> {
+- if (thr != null) {
+- loadCallback.accept(null);
+- return;
+- }
+- loadCallback.accept(either.left().orElse(null));
+- }, (final Runnable r) -> {
+- scheduleChunkTask(level, chunkX, chunkZ, r, PrioritisedExecutor.Priority.HIGHEST);
+- });
++ level.chunkTaskScheduler.scheduleTickingState(chunkX, chunkZ, toStatus, addTicket, priority, onComplete); // Paper - rewrite chunk system
+ }
+
+ public static List<ChunkHolder> getVisibleChunkHolders(final ServerLevel level) {
+- return new ArrayList<>(level.chunkSource.chunkMap.visibleChunkMap.values());
++ return level.chunkTaskScheduler.chunkHolderManager.getOldChunkHolders(); // Paper - rewrite chunk system
+ }
+
+ public static List<ChunkHolder> getUpdatingChunkHolders(final ServerLevel level) {
+- return new ArrayList<>(level.chunkSource.chunkMap.updatingChunkMap.values());
++ return level.chunkTaskScheduler.chunkHolderManager.getOldChunkHolders(); // Paper - rewrite chunk system
+ }
+
+ public static int getVisibleChunkHolderCount(final ServerLevel level) {
+- return level.chunkSource.chunkMap.visibleChunkMap.size();
++ return level.chunkTaskScheduler.chunkHolderManager.size(); // Paper - rewrite chunk system
+ }
+
+ public static int getUpdatingChunkHolderCount(final ServerLevel level) {
+- return level.chunkSource.chunkMap.updatingChunkMap.size();
++ return level.chunkTaskScheduler.chunkHolderManager.size(); // Paper - rewrite chunk system
+ }
+
+ public static boolean hasAnyChunkHolders(final ServerLevel level) {
+@@ -244,26 +93,32 @@ public final class ChunkSystem {
+
+ public static void onChunkBorder(final LevelChunk chunk, final ChunkHolder holder) {
+ chunk.playerChunk = holder;
++ chunk.chunkStatus = net.minecraft.server.level.FullChunkStatus.FULL;
+ }
+
+ public static void onChunkNotBorder(final LevelChunk chunk, final ChunkHolder holder) {
+-
++ chunk.chunkStatus = net.minecraft.server.level.FullChunkStatus.INACCESSIBLE;
+ }
+
+ public static void onChunkTicking(final LevelChunk chunk, final ChunkHolder holder) {
+ chunk.level.getChunkSource().tickingChunks.add(chunk);
++ chunk.chunkStatus = net.minecraft.server.level.FullChunkStatus.BLOCK_TICKING;
++ chunk.level.chunkSource.chunkMap.tickingGenerated.incrementAndGet();
+ }
+
+ public static void onChunkNotTicking(final LevelChunk chunk, final ChunkHolder holder) {
+ chunk.level.getChunkSource().tickingChunks.remove(chunk);
++ chunk.chunkStatus = net.minecraft.server.level.FullChunkStatus.FULL;
+ }
+
+ public static void onChunkEntityTicking(final LevelChunk chunk, final ChunkHolder holder) {
+ chunk.level.getChunkSource().entityTickingChunks.add(chunk);
++ chunk.chunkStatus = net.minecraft.server.level.FullChunkStatus.ENTITY_TICKING;
+ }
+
+ public static void onChunkNotEntityTicking(final LevelChunk chunk, final ChunkHolder holder) {
+ chunk.level.getChunkSource().entityTickingChunks.remove(chunk);
++ chunk.chunkStatus = net.minecraft.server.level.FullChunkStatus.BLOCK_TICKING;
+ }
+
+ public static ChunkHolder getUnloadingChunkHolder(final ServerLevel level, final int chunkX, final int chunkZ) {
+@@ -271,23 +126,15 @@ public final class ChunkSystem {
+ }
+
+ public static int getSendViewDistance(final ServerPlayer player) {
+- return getLoadViewDistance(player);
++ return io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.getAPISendViewDistance(player);
+ }
+
+ public static int getLoadViewDistance(final ServerPlayer player) {
+- final ServerLevel level = player.serverLevel();
+- if (level == null) {
+- return Bukkit.getViewDistance();
+- }
+- return level.chunkSource.chunkMap.getPlayerViewDistance(player);
++ return io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.getLoadViewDistance(player);
+ }
+
+ public static int getTickViewDistance(final ServerPlayer player) {
+- final ServerLevel level = player.serverLevel();
+- if (level == null) {
+- return Bukkit.getSimulationDistance();
+- }
+- return level.chunkSource.chunkMap.distanceManager.simulationDistance;
++ return io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.getAPITickViewDistance(player);
+ }
+
+ private ChunkSystem() {
+diff --git a/src/main/java/io/papermc/paper/chunk/system/RegionizedPlayerChunkLoader.java b/src/main/java/io/papermc/paper/chunk/system/RegionizedPlayerChunkLoader.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..ee58c67cb2bd78159cce19ec75f13dc6168a0e7a
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/RegionizedPlayerChunkLoader.java
+@@ -0,0 +1,1375 @@
++package io.papermc.paper.chunk.system;
++
++import ca.spottedleaf.concurrentutil.collection.SRSWLinkedQueue;
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
++import io.papermc.paper.chunk.system.scheduling.ChunkHolderManager;
++import io.papermc.paper.configuration.GlobalConfiguration;
++import io.papermc.paper.util.CoordinateUtils;
++import io.papermc.paper.util.TickThread;
++import io.papermc.paper.util.player.SingleUserAreaMap;
++import it.unimi.dsi.fastutil.longs.Long2ByteOpenHashMap;
++import it.unimi.dsi.fastutil.longs.LongArrayList;
++import it.unimi.dsi.fastutil.longs.LongComparator;
++import it.unimi.dsi.fastutil.longs.LongHeapPriorityQueue;
++import it.unimi.dsi.fastutil.longs.LongIterator;
++import it.unimi.dsi.fastutil.longs.LongLinkedOpenHashSet;
++import it.unimi.dsi.fastutil.longs.LongOpenHashSet;
++import net.minecraft.network.protocol.Packet;
++import net.minecraft.network.protocol.game.ClientboundSetChunkCacheCenterPacket;
++import net.minecraft.network.protocol.game.ClientboundSetChunkCacheRadiusPacket;
++import net.minecraft.network.protocol.game.ClientboundSetSimulationDistancePacket;
++import net.minecraft.server.level.ChunkTrackingView;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.server.level.ServerPlayer;
++import net.minecraft.server.level.TicketType;
++import net.minecraft.server.network.PlayerChunkSender;
++import net.minecraft.world.level.ChunkPos;
++import net.minecraft.world.level.GameRules;
++import net.minecraft.world.level.chunk.ChunkAccess;
++import net.minecraft.world.level.chunk.status.ChunkStatus;
++import net.minecraft.world.level.chunk.LevelChunk;
++import net.minecraft.world.level.levelgen.BelowZeroRetrogen;
++import org.bukkit.craftbukkit.entity.CraftPlayer;
++import org.bukkit.entity.Player;
++import java.lang.invoke.VarHandle;
++import java.util.ArrayDeque;
++import java.util.Arrays;
++import java.util.Objects;
++import java.util.concurrent.TimeUnit;
++import java.util.concurrent.atomic.AtomicLong;
++
++public class RegionizedPlayerChunkLoader {
++
++ // for a given radius, this list is expected to contain the set of chunks ordered
++ // by manhattan distance
++ private static final long[][] SEARCH_RADIUS_ITERATION_LIST = new long[64+2+1][];
++ static {
++ for (int i = 0; i < SEARCH_RADIUS_ITERATION_LIST.length; ++i) {
++ // a BFS around -x, -z, +x, +z will give increasing manhattan distance
++ SEARCH_RADIUS_ITERATION_LIST[i] = generateBFSOrder(i);
++ }
++ }
++
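++ // mirrors the first-quadrant (+x, +z) entries into the other three quadrants in-place,
++ // preserving the manhattan distance ordering of the input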
++ private static void expandQuadrants(final CustomLongArray input, final int size) {
++ final int len = input.size();
++ final long[] array = input.elements();
++
++ int writeIndex = size - 1;
++ for (int i = len - 1; i >= 0; --i) {
++ final long key = array[i];
++ final int chunkX = CoordinateUtils.getChunkX(key);
++ final int chunkZ = CoordinateUtils.getChunkZ(key);
++
++ if ((chunkX | chunkZ) < 0 || (i != 0 && chunkX == 0 && chunkZ == 0)) {
++ throw new IllegalStateException();
++ }
++
++ // Q4
++ if (chunkZ != 0) {
++ array[writeIndex--] = CoordinateUtils.getChunkKey(chunkX, -chunkZ);
++ }
++ // Q3
++ if (chunkX != 0 && chunkZ != 0) {
++ array[writeIndex--] = CoordinateUtils.getChunkKey(-chunkX, -chunkZ);
++ }
++ // Q2
++ if (chunkX != 0) {
++ array[writeIndex--] = CoordinateUtils.getChunkKey(-chunkX, chunkZ);
++ }
++
++ array[writeIndex--] = key;
++ }
++
++ input.forceSize(size);
++
++ if (writeIndex != -1) {
++ throw new IllegalStateException();
++ }
++ }
++
++ private static long[] generateBFSOrder(final int radius) {
++ // by using only the first quadrant, we can reduce the total element size by 4 when spreading
++ final CustomLongArray[] byDistance = makeQ1BFS(radius);
++
++ // to increase generation parallelism, we want to space the chunks out so that they are not nearby when generating
++ // this also means we are minimising locality
++ // but, we need to maintain sorted order by manhattan distance
++
++ // per manhattan distance we transform the chunk list so that each element is maximally spaced out from each other
++ for (int i = 0, len = byDistance.length; i < len; ++i) {
++ final CustomLongArray points = byDistance[i];
++ final int expectedSize = getDistanceSize(i, radius);
++
++ final CustomLongArray spread = spread(points, expectedSize);
++ // add in Q2, Q3, Q4
++ expandQuadrants(spread, expectedSize);
++
++ byDistance[i] = spread;
++ }
++
++ // now, rebuild the list so that it still maintains manhattan distance order
++ final CustomLongArray ret = new CustomLongArray((2 * radius + 1) * (2 * radius + 1));
++
++ for (final CustomLongArray dist : byDistance) {
++ ret.addAll(dist);
++ }
++
++ return ret.elements();
++ }
++
++ public static final TicketType<Long> REGION_PLAYER_TICKET = TicketType.create("region_player_ticket", Long::compareTo);
++
++ public static final int MIN_VIEW_DISTANCE = 2;
++ public static final int MAX_VIEW_DISTANCE = 32;
++
++ public static final int TICK_TICKET_LEVEL = 31;
++ public static final int GENERATED_TICKET_LEVEL = 33 + ChunkStatus.getDistance(ChunkStatus.FULL);
++ public static final int LOADED_TICKET_LEVEL = 33 + ChunkStatus.getDistance(ChunkStatus.EMPTY);
++
++ public static final record ViewDistances(
++ int tickViewDistance,
++ int loadViewDistance,
++ int sendViewDistance
++ ) {
++ public ViewDistances setTickViewDistance(final int distance) {
++ return new ViewDistances(distance, this.loadViewDistance, this.sendViewDistance);
++ }
++
++ public ViewDistances setLoadViewDistance(final int distance) {
++ return new ViewDistances(this.tickViewDistance, distance, this.sendViewDistance);
++ }
++
++ public ViewDistances setSendViewDistance(final int distance) {
++ return new ViewDistances(this.tickViewDistance, this.loadViewDistance, distance);
++ }
++ }
++
++ public static int getAPITickViewDistance(final Player player) {
++ return getAPITickViewDistance(((CraftPlayer)player).getHandle());
++ }
++
++ public static int getAPITickViewDistance(final ServerPlayer player) {
++ final ServerLevel level = (ServerLevel)player.level();
++ final PlayerChunkLoaderData data = player.chunkLoader;
++ if (data == null) {
++ return level.playerChunkLoader.getAPITickDistance();
++ }
++ return data.lastTickDistance;
++ }
++
++ public static int getAPIViewDistance(final Player player) {
++ return getAPIViewDistance(((CraftPlayer)player).getHandle());
++ }
++
++ public static int getAPIViewDistance(final ServerPlayer player) {
++ final ServerLevel level = (ServerLevel)player.level();
++ final PlayerChunkLoaderData data = player.chunkLoader;
++ if (data == null) {
++ return level.playerChunkLoader.getAPIViewDistance();
++ }
++ // load distance = api view distance + 1
++ return data.lastLoadDistance - 1;
++ }
++
++ public static int getLoadViewDistance(final ServerPlayer player) {
++ final ServerLevel level = (ServerLevel)player.level();
++ final PlayerChunkLoaderData data = player.chunkLoader;
++ if (data == null) {
++ return level.playerChunkLoader.getAPIViewDistance();
++ }
++ // load distance = api view distance + 1
++ return data.lastLoadDistance - 1;
++ }
++
++ public static int getAPISendViewDistance(final Player player) {
++ return getAPISendViewDistance(((CraftPlayer)player).getHandle());
++ }
++
++ public static int getAPISendViewDistance(final ServerPlayer player) {
++ final ServerLevel level = (ServerLevel)player.level();
++ final PlayerChunkLoaderData data = player.chunkLoader;
++ if (data == null) {
++ return level.playerChunkLoader.getAPISendViewDistance();
++ }
++ return data.lastSendDistance;
++ }
++
++ private final ServerLevel world;
++
++ public RegionizedPlayerChunkLoader(final ServerLevel world) {
++ this.world = world;
++ }
++
++ public void addPlayer(final ServerPlayer player) {
++ TickThread.ensureTickThread(player, "Cannot add player to player chunk loader async");
++ if (!player.isRealPlayer) {
++ return;
++ }
++
++ if (player.chunkLoader != null) {
++ throw new IllegalStateException("Player is already added to player chunk loader");
++ }
++
++ final PlayerChunkLoaderData loader = new PlayerChunkLoaderData(this.world, player);
++
++ player.chunkLoader = loader;
++ loader.add();
++ }
++
++ public void updatePlayer(final ServerPlayer player) {
++ final PlayerChunkLoaderData loader = player.chunkLoader;
++ if (loader != null) {
++ loader.update();
++ }
++ }
++
++ public void removePlayer(final ServerPlayer player) {
++ TickThread.ensureTickThread(player, "Cannot remove player from player chunk loader async");
++ if (!player.isRealPlayer) {
++ return;
++ }
++
++ final PlayerChunkLoaderData loader = player.chunkLoader;
++
++ if (loader == null) {
++ return;
++ }
++
++ loader.remove();
++ player.chunkLoader = null;
++ }
++
++ public void setSendDistance(final int distance) {
++ this.world.setSendViewDistance(distance);
++ }
++
++ public void setLoadDistance(final int distance) {
++ this.world.setLoadViewDistance(distance);
++ }
++
++ public void setTickDistance(final int distance) {
++ this.world.setTickViewDistance(distance);
++ }
++
++ // Note: follow the player chunk loader so everything stays consistent...
++ public int getAPITickDistance() {
++ final ViewDistances distances = this.world.getViewDistances();
++ final int tickViewDistance = PlayerChunkLoaderData.getTickDistance(-1, distances.tickViewDistance);
++ return tickViewDistance;
++ }
++
++ public int getAPIViewDistance() {
++ final ViewDistances distances = this.world.getViewDistances();
++ final int tickViewDistance = PlayerChunkLoaderData.getTickDistance(-1, distances.tickViewDistance);
++ final int loadDistance = PlayerChunkLoaderData.getLoadViewDistance(tickViewDistance, -1, distances.loadViewDistance);
++
++ // loadDistance = api view distance + 1
++ return loadDistance - 1;
++ }
++
++ public int getAPISendViewDistance() {
++ final ViewDistances distances = this.world.getViewDistances();
++ final int tickViewDistance = PlayerChunkLoaderData.getTickDistance(-1, distances.tickViewDistance);
++ final int loadDistance = PlayerChunkLoaderData.getLoadViewDistance(tickViewDistance, -1, distances.loadViewDistance);
++ final int sendViewDistance = PlayerChunkLoaderData.getSendViewDistance(
++ loadDistance, -1, -1, distances.sendViewDistance
++ );
++
++ return sendViewDistance;
++ }
++
++ public boolean isChunkSent(final ServerPlayer player, final int chunkX, final int chunkZ, final boolean borderOnly) {
++ return borderOnly ? this.isChunkSentBorderOnly(player, chunkX, chunkZ) : this.isChunkSent(player, chunkX, chunkZ);
++ }
++
++ public boolean isChunkSent(final ServerPlayer player, final int chunkX, final int chunkZ) {
++ final PlayerChunkLoaderData loader = player.chunkLoader;
++ if (loader == null) {
++ return false;
++ }
++
++ return loader.sentChunks.contains(CoordinateUtils.getChunkKey(chunkX, chunkZ));
++ }
++
++ public boolean isChunkSentBorderOnly(final ServerPlayer player, final int chunkX, final int chunkZ) {
++ final PlayerChunkLoaderData loader = player.chunkLoader;
++ if (loader == null) {
++ return false;
++ }
++
++ for (int dz = -1; dz <= 1; ++dz) {
++ for (int dx = -1; dx <= 1; ++dx) {
++ if (!loader.sentChunks.contains(CoordinateUtils.getChunkKey(dx + chunkX, dz + chunkZ))) {
++ return true;
++ }
++ }
++ }
++
++ return false;
++ }
++
++ public void tick() {
++ TickThread.ensureTickThread("Cannot tick player chunk loader async");
++ long currTime = System.nanoTime();
++ for (final ServerPlayer player : new java.util.ArrayList<>(this.world.players())) {
++ final PlayerChunkLoaderData loader = player.chunkLoader;
++ if (loader == null || loader.world != this.world) {
++ // not our problem anymore
++ continue;
++ }
++ loader.update(); // can't invoke plugin logic
++ loader.updateQueues(currTime);
++ }
++ }
++
++ public static final class PlayerChunkLoaderData {
++
++ private static final AtomicLong ID_GENERATOR = new AtomicLong();
++ private final long id = ID_GENERATOR.incrementAndGet();
++ private final Long idBoxed = Long.valueOf(this.id);
++
++ private static final long MAX_RATE = 10_000L;
++
++ private final ServerPlayer player;
++ private final ServerLevel world;
++
++ private int lastChunkX = Integer.MIN_VALUE;
++ private int lastChunkZ = Integer.MIN_VALUE;
++
++ private int lastSendDistance = Integer.MIN_VALUE;
++ private int lastLoadDistance = Integer.MIN_VALUE;
++ private int lastTickDistance = Integer.MIN_VALUE;
++
++ private int lastSentChunkCenterX = Integer.MIN_VALUE;
++ private int lastSentChunkCenterZ = Integer.MIN_VALUE;
++
++ private int lastSentChunkRadius = Integer.MIN_VALUE;
++ private int lastSentSimulationDistance = Integer.MIN_VALUE;
++
++ private boolean canGenerateChunks = true;
++
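++ // ticket add/remove operations are queued here and applied as one batch in
++ // flushDelayedTicketOps() via a single performTicketUpdates call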
++ private final ArrayDeque<ChunkHolderManager.TicketOperation<?, ?>> delayedTicketOps = new ArrayDeque<>();
++ private final LongOpenHashSet sentChunks = new LongOpenHashSet();
++
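++ // per-chunk ticket stage state machine: NONE -> LOADING -> LOADED ->
++ // GENERATING -> GENERATED -> TICK; TICKET_STAGE_TO_LEVEL maps each stage
++ // to the ticket level this loader currently holds for the chunk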
++ private static final byte CHUNK_TICKET_STAGE_NONE = 0;
++ private static final byte CHUNK_TICKET_STAGE_LOADING = 1;
++ private static final byte CHUNK_TICKET_STAGE_LOADED = 2;
++ private static final byte CHUNK_TICKET_STAGE_GENERATING = 3;
++ private static final byte CHUNK_TICKET_STAGE_GENERATED = 4;
++ private static final byte CHUNK_TICKET_STAGE_TICK = 5;
++ private static final int[] TICKET_STAGE_TO_LEVEL = new int[] {
++ ChunkHolderManager.MAX_TICKET_LEVEL + 1,
++ LOADED_TICKET_LEVEL,
++ LOADED_TICKET_LEVEL,
++ GENERATED_TICKET_LEVEL,
++ GENERATED_TICKET_LEVEL,
++ TICK_TICKET_LEVEL
++ };
++ private final Long2ByteOpenHashMap chunkTicketStage = new Long2ByteOpenHashMap();
++ {
++ this.chunkTicketStage.defaultReturnValue(CHUNK_TICKET_STAGE_NONE);
++ }
++
++ // rate limiting
++ private final AllocatingRateLimiter chunkSendLimiter = new AllocatingRateLimiter();
++ private final AllocatingRateLimiter chunkLoadTicketLimiter = new AllocatingRateLimiter();
++ private final AllocatingRateLimiter chunkGenerateTicketLimiter = new AllocatingRateLimiter();
++
++ // queues
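++ // each queue below is ordered by Manhattan distance from the player's last
++ // chunk position (see CLOSEST_MANHATTAN_DIST), so the chunks closest to the
++ // player are always loaded/generated/sent first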
++ private final LongComparator CLOSEST_MANHATTAN_DIST = (final long c1, final long c2) -> {
++ final int c1x = CoordinateUtils.getChunkX(c1);
++ final int c1z = CoordinateUtils.getChunkZ(c1);
++
++ final int c2x = CoordinateUtils.getChunkX(c2);
++ final int c2z = CoordinateUtils.getChunkZ(c2);
++
++ final int centerX = PlayerChunkLoaderData.this.lastChunkX;
++ final int centerZ = PlayerChunkLoaderData.this.lastChunkZ;
++
++ return Integer.compare(
++ Math.abs(c1x - centerX) + Math.abs(c1z - centerZ),
++ Math.abs(c2x - centerX) + Math.abs(c2z - centerZ)
++ );
++ };
++ private final LongHeapPriorityQueue sendQueue = new LongHeapPriorityQueue(CLOSEST_MANHATTAN_DIST);
++ private final LongHeapPriorityQueue tickingQueue = new LongHeapPriorityQueue(CLOSEST_MANHATTAN_DIST);
++ private final LongHeapPriorityQueue generatingQueue = new LongHeapPriorityQueue(CLOSEST_MANHATTAN_DIST);
++ private final LongHeapPriorityQueue genQueue = new LongHeapPriorityQueue(CLOSEST_MANHATTAN_DIST);
++ private final LongHeapPriorityQueue loadingQueue = new LongHeapPriorityQueue(CLOSEST_MANHATTAN_DIST);
++ private final LongHeapPriorityQueue loadQueue = new LongHeapPriorityQueue(CLOSEST_MANHATTAN_DIST);
++
++ private volatile boolean removed;
++
++ public PlayerChunkLoaderData(final ServerLevel world, final ServerPlayer player) {
++ this.world = world;
++ this.player = player;
++ }
++
++ private void flushDelayedTicketOps() {
++ if (this.delayedTicketOps.isEmpty()) {
++ return;
++ }
++ this.world.chunkTaskScheduler.chunkHolderManager.performTicketUpdates(this.delayedTicketOps);
++ this.delayedTicketOps.clear();
++ }
++
++ private void pushDelayedTicketOp(final ChunkHolderManager.TicketOperation<?, ?> op) {
++ this.delayedTicketOps.addLast(op);
++ }
++
++ private void sendChunk(final int chunkX, final int chunkZ) {
++ if (this.sentChunks.add(CoordinateUtils.getChunkKey(chunkX, chunkZ))) {
++ PlayerChunkSender.sendChunk(this.player.connection, this.world, this.world.getChunkIfLoaded(chunkX, chunkZ));
++ return;
++ }
++ throw new IllegalStateException("Chunk (" + chunkX + "," + chunkZ + ") was already sent");
++ }
++
++ private void sendUnloadChunk(final int chunkX, final int chunkZ) {
++ if (!this.sentChunks.remove(CoordinateUtils.getChunkKey(chunkX, chunkZ))) {
++ return;
++ }
++ this.sendUnloadChunkRaw(chunkX, chunkZ);
++ }
++
++ private void sendUnloadChunkRaw(final int chunkX, final int chunkZ) {
++ PlayerChunkSender.dropChunkStatic(this.player, new ChunkPos(chunkX, chunkZ));
++ }
++
++ private final SingleUserAreaMap<PlayerChunkLoaderData> broadcastMap = new SingleUserAreaMap<>(this) {
++ @Override
++ protected void addCallback(final PlayerChunkLoaderData parameter, final int chunkX, final int chunkZ) {
++ // do nothing, we only care about remove
++ }
++
++ @Override
++ protected void removeCallback(final PlayerChunkLoaderData parameter, final int chunkX, final int chunkZ) {
++ parameter.sendUnloadChunk(chunkX, chunkZ);
++ }
++ };
++ private final SingleUserAreaMap<PlayerChunkLoaderData> loadTicketCleanup = new SingleUserAreaMap<>(this) {
++ @Override
++ protected void addCallback(final PlayerChunkLoaderData parameter, final int chunkX, final int chunkZ) {
++ // do nothing, we only care about remove
++ }
++
++ @Override
++ protected void removeCallback(final PlayerChunkLoaderData parameter, final int chunkX, final int chunkZ) {
++ final long chunk = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++ final byte ticketStage = parameter.chunkTicketStage.remove(chunk);
++ final int level = TICKET_STAGE_TO_LEVEL[ticketStage];
++ if (level > ChunkHolderManager.MAX_TICKET_LEVEL) {
++ return;
++ }
++
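++ // swap the player ticket for a temporary UNKNOWN ticket at the same level,
++ // so that the chunk's level is kept until tick() instead of dropping
++ // immediately (mirrors the tickMap handling below)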
++ parameter.pushDelayedTicketOp(ChunkHolderManager.TicketOperation.addAndRemove(
++ chunk,
++ TicketType.UNKNOWN, level, new ChunkPos(chunkX, chunkZ),
++ REGION_PLAYER_TICKET, level, parameter.idBoxed
++ ));
++ }
++ };
++ private final SingleUserAreaMap<PlayerChunkLoaderData> tickMap = new SingleUserAreaMap<>(this) {
++ @Override
++ protected void addCallback(final PlayerChunkLoaderData parameter, final int chunkX, final int chunkZ) {
++ // do nothing, we will detect ticking chunks when we try to load them
++ }
++
++ @Override
++ protected void removeCallback(final PlayerChunkLoaderData parameter, final int chunkX, final int chunkZ) {
++ final long chunk = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++ // note: by the time this is called, the tick cleanup should have run - so, if the chunk is
++ // still at the tick stage, it was deemed in range for loading. Thus, we need to move it to generated
++ if (!parameter.chunkTicketStage.replace(chunk, CHUNK_TICKET_STAGE_TICK, CHUNK_TICKET_STAGE_GENERATED)) {
++ return;
++ }
++
++ // Since we are possibly downgrading the ticket level, we add an unknown ticket so that
++ // the level is kept until tick().
++ parameter.pushDelayedTicketOp(ChunkHolderManager.TicketOperation.addAndRemove(
++ chunk,
++ TicketType.UNKNOWN, TICK_TICKET_LEVEL, new ChunkPos(chunkX, chunkZ),
++ REGION_PLAYER_TICKET, TICK_TICKET_LEVEL, parameter.idBoxed
++ ));
++ // keep chunk at new generated level
++ parameter.pushDelayedTicketOp(ChunkHolderManager.TicketOperation.addOp(
++ chunk,
++ REGION_PLAYER_TICKET, GENERATED_TICKET_LEVEL, parameter.idBoxed
++ ));
++ }
++ };
++
++ private static boolean wantChunkLoaded(final int centerX, final int centerZ, final int chunkX, final int chunkZ,
++ final int sendRadius) {
++ // expect sendRadius to be = 1 + target viewable radius
++ return ChunkTrackingView.isWithinDistance(centerX, centerZ, sendRadius, chunkX, chunkZ, true);
++ }
++
++ private static int getClientViewDistance(final ServerPlayer player) {
++ final Integer vd = player.requestedViewDistance();
++ return vd == null ? -1 : Math.max(0, vd.intValue());
++ }
++
++ private static int getTickDistance(final int playerTickViewDistance, final int worldTickViewDistance) {
++ return playerTickViewDistance < 0 ? worldTickViewDistance : playerTickViewDistance;
++ }
++
++ private static int getLoadViewDistance(final int tickViewDistance, final int playerLoadViewDistance,
++ final int worldLoadViewDistance) {
++ return Math.max(tickViewDistance + 1, playerLoadViewDistance < 0 ? worldLoadViewDistance : playerLoadViewDistance);
++ }
++
++ private static int getSendViewDistance(final int loadViewDistance, final int clientViewDistance,
++ final int playerSendViewDistance, final int worldSendViewDistance) {
++ // resolution order: per-player override, then the client's own view distance
++ // (when auto-config is enabled), then the world default, else load view - 1
++ final int sendViewDistance;
++ if (playerSendViewDistance >= 0) {
++ sendViewDistance = playerSendViewDistance;
++ } else if (GlobalConfiguration.get().chunkLoadingAdvanced.autoConfigSendDistance && clientViewDistance >= 0) {
++ sendViewDistance = clientViewDistance + 1;
++ } else {
++ sendViewDistance = worldSendViewDistance < 0 ? (loadViewDistance - 1) : worldSendViewDistance;
++ }
++ return Math.min(loadViewDistance - 1, sendViewDistance);
++ }
++
++ private Packet<?> updateClientChunkRadius(final int radius) {
++ this.lastSentChunkRadius = radius;
++ return new ClientboundSetChunkCacheRadiusPacket(radius);
++ }
++
++ private Packet<?> updateClientSimulationDistance(final int distance) {
++ this.lastSentSimulationDistance = distance;
++ return new ClientboundSetSimulationDistancePacket(distance);
++ }
++
++ private Packet<?> updateClientChunkCenter(final int chunkX, final int chunkZ) {
++ this.lastSentChunkCenterX = chunkX;
++ this.lastSentChunkCenterZ = chunkZ;
++ return new ClientboundSetChunkCacheCenterPacket(chunkX, chunkZ);
++ }
++
++ private boolean canPlayerGenerateChunks() {
++ return !this.player.isSpectator() || this.world.getGameRules().getBoolean(GameRules.RULE_SPECTATORSGENERATECHUNKS);
++ }
++
++ private double getMaxChunkLoadRate() {
++ final double configRate = GlobalConfiguration.get().chunkLoadingBasic.playerMaxChunkLoadRate;
++
++ return configRate < 0.0 || configRate > (double)MAX_RATE ? (double)MAX_RATE : Math.max(1.0, configRate);
++ }
++
++ private double getMaxChunkGenRate() {
++ final double configRate = GlobalConfiguration.get().chunkLoadingBasic.playerMaxChunkGenerateRate;
++
++ return configRate < 0.0 || configRate > (double)MAX_RATE ? (double)MAX_RATE : Math.max(1.0, configRate);
++ }
++
++ private double getMaxChunkSendRate() {
++ final double configRate = GlobalConfiguration.get().chunkLoadingBasic.playerMaxChunkSendRate;
++
++ return configRate < 0.0 || configRate > (double)MAX_RATE ? (double)MAX_RATE : Math.max(1.0, configRate);
++ }
++
++ private long getMaxChunkLoads() {
++ final long radiusChunks = (2L * this.lastLoadDistance + 1L) * (2L * this.lastLoadDistance + 1L);
++ long configLimit = GlobalConfiguration.get().chunkLoadingAdvanced.playerMaxConcurrentChunkLoads;
++ if (configLimit == 0L) {
++ // by default, only allow 1/5th of the chunks in the view distance to be concurrently active
++ configLimit = Math.max(5L, radiusChunks / 5L);
++ } else if (configLimit < 0L) {
++ configLimit = Integer.MAX_VALUE;
++ } // else: use the value configured
++ configLimit = configLimit - this.loadingQueue.size();
++
++ return configLimit;
++ }
++
++ private long getMaxChunkGenerates() {
++ final long radiusChunks = (2L * this.lastLoadDistance + 1L) * (2L * this.lastLoadDistance + 1L);
++ long configLimit = GlobalConfiguration.get().chunkLoadingAdvanced.playerMaxConcurrentChunkGenerates;
++ if (configLimit == 0L) {
++ // by default, only allow 1/5th of the chunks in the view distance to be concurrently active
++ configLimit = Math.max(5L, radiusChunks / 5L);
++ } else if (configLimit < 0L) {
++ configLimit = Integer.MAX_VALUE;
++ } // else: use the value configured
++ configLimit = configLimit - this.generatingQueue.size();
++
++ return configLimit;
++ }
++
++ private boolean wantChunkSent(final int chunkX, final int chunkZ) {
++ final int dx = this.lastChunkX - chunkX;
++ final int dz = this.lastChunkZ - chunkZ;
++ return (Math.max(Math.abs(dx), Math.abs(dz)) <= (this.lastSendDistance + 1)) && wantChunkLoaded(
++ this.lastChunkX, this.lastChunkZ, chunkX, chunkZ, this.lastSendDistance
++ );
++ }
++
++ private boolean wantChunkTicked(final int chunkX, final int chunkZ) {
++ final int dx = this.lastChunkX - chunkX;
++ final int dz = this.lastChunkZ - chunkZ;
++ return Math.max(Math.abs(dx), Math.abs(dz)) <= this.lastTickDistance;
++ }
++
++ void updateQueues(final long time) {
++ TickThread.ensureTickThread(this.player, "Cannot tick player chunk loader async");
++ if (this.removed) {
++ throw new IllegalStateException("Ticking removed player chunk loader");
++ }
++ // update rate limits
++ final double loadRate = this.getMaxChunkLoadRate();
++ final double genRate = this.getMaxChunkGenRate();
++ final double sendRate = this.getMaxChunkSendRate();
++
++ this.chunkLoadTicketLimiter.tickAllocation(time, loadRate, loadRate);
++ this.chunkGenerateTicketLimiter.tickAllocation(time, genRate, genRate);
++ this.chunkSendLimiter.tickAllocation(time, sendRate, sendRate);
++
++ // try to progress chunk loads
++ while (!this.loadingQueue.isEmpty()) {
++ final long pendingLoadChunk = this.loadingQueue.firstLong();
++ final int pendingChunkX = CoordinateUtils.getChunkX(pendingLoadChunk);
++ final int pendingChunkZ = CoordinateUtils.getChunkZ(pendingLoadChunk);
++ final ChunkAccess pending = this.world.chunkSource.getChunkAtImmediately(pendingChunkX, pendingChunkZ);
++ if (pending == null) {
++ // nothing to do here
++ break;
++ }
++ // chunk has loaded, so we can take it out of the queue
++ this.loadingQueue.dequeueLong();
++
++ // try to move to generate queue
++ final byte prev = this.chunkTicketStage.put(pendingLoadChunk, CHUNK_TICKET_STAGE_LOADED);
++ if (prev != CHUNK_TICKET_STAGE_LOADING) {
++ throw new IllegalStateException("Previous state should be " + CHUNK_TICKET_STAGE_LOADING + ", not " + prev);
++ }
++
++ if (this.canGenerateChunks || this.isLoadedChunkGeneratable(pending)) {
++ this.genQueue.enqueue(pendingLoadChunk);
++ } // else: don't want to generate, so just leave it loaded
++ }
++
++ // try to push more chunk loads
++ final long maxLoads = Math.max(0L, Math.min(MAX_RATE, Math.min(this.loadQueue.size(), this.getMaxChunkLoads())));
++ final int maxLoadsThisTick = (int)this.chunkLoadTicketLimiter.takeAllocation(time, loadRate, maxLoads);
++ if (maxLoadsThisTick > 0) {
++ final LongArrayList chunks = new LongArrayList(maxLoadsThisTick);
++ for (int i = 0; i < maxLoadsThisTick; ++i) {
++ final long chunk = this.loadQueue.dequeueLong();
++ final byte prev = this.chunkTicketStage.put(chunk, CHUNK_TICKET_STAGE_LOADING);
++ if (prev != CHUNK_TICKET_STAGE_NONE) {
++ throw new IllegalStateException("Previous state should be " + CHUNK_TICKET_STAGE_NONE + ", not " + prev);
++ }
++ this.pushDelayedTicketOp(
++ ChunkHolderManager.TicketOperation.addOp(
++ chunk,
++ REGION_PLAYER_TICKET, LOADED_TICKET_LEVEL, this.idBoxed
++ )
++ );
++ chunks.add(chunk);
++ this.loadingQueue.enqueue(chunk);
++ }
++
++ // here we need to flush tickets, as scheduleChunkLoad requires tickets to be propagated with addTicket = false
++ this.flushDelayedTicketOps();
++ // we only need to call scheduleChunkLoad because the loaded ticket level is not enough to start the chunk
++ // load - only generate ticket levels start anything, but they start generation...
++ // propagate levels
++ // Note: this CAN call plugin logic, so it is VITAL that our bookkeeping logic is completely done by the time this is invoked
++ this.world.chunkTaskScheduler.chunkHolderManager.processTicketUpdates();
++
++ if (this.removed) {
++ // process ticket updates may invoke plugin logic, which may remove this player
++ return;
++ }
++
++ for (int i = 0; i < maxLoadsThisTick; ++i) {
++ final long queuedLoadChunk = chunks.getLong(i);
++ final int queuedChunkX = CoordinateUtils.getChunkX(queuedLoadChunk);
++ final int queuedChunkZ = CoordinateUtils.getChunkZ(queuedLoadChunk);
++ this.world.chunkTaskScheduler.scheduleChunkLoad(
++ queuedChunkX, queuedChunkZ, ChunkStatus.EMPTY, false, PrioritisedExecutor.Priority.NORMAL, null
++ );
++ if (this.removed) {
++ return;
++ }
++ }
++ }
++
++ // try to progress chunk generations
++ while (!this.generatingQueue.isEmpty()) {
++ final long pendingGenChunk = this.generatingQueue.firstLong();
++ final int pendingChunkX = CoordinateUtils.getChunkX(pendingGenChunk);
++ final int pendingChunkZ = CoordinateUtils.getChunkZ(pendingGenChunk);
++ final LevelChunk pending = this.world.chunkSource.getChunkAtIfLoadedMainThreadNoCache(pendingChunkX, pendingChunkZ);
++ if (pending == null) {
++ // nothing to do here
++ break;
++ }
++
++ // chunk has generated, so we can take it out of the queue
++ this.generatingQueue.dequeueLong();
++
++ final byte prev = this.chunkTicketStage.put(pendingGenChunk, CHUNK_TICKET_STAGE_GENERATED);
++ if (prev != CHUNK_TICKET_STAGE_GENERATING) {
++ throw new IllegalStateException("Previous state should be " + CHUNK_TICKET_STAGE_GENERATING + ", not " + prev);
++ }
++
++ // try to move to send queue
++ if (this.wantChunkSent(pendingChunkX, pendingChunkZ)) {
++ this.sendQueue.enqueue(pendingGenChunk);
++ }
++ // try to move to tick queue
++ if (this.wantChunkTicked(pendingChunkX, pendingChunkZ)) {
++ this.tickingQueue.enqueue(pendingGenChunk);
++ }
++ }
++
++ // try to push more chunk generations
++ final long maxGens = Math.max(0L, Math.min(MAX_RATE, Math.min(this.genQueue.size(), this.getMaxChunkGenerates())));
++ final int maxGensThisTick = (int)this.chunkGenerateTicketLimiter.takeAllocation(time, genRate, maxGens);
++ int ratedGensThisTick = 0;
++ while (!this.genQueue.isEmpty()) {
++ final long chunkKey = this.genQueue.firstLong();
++ final int chunkX = CoordinateUtils.getChunkX(chunkKey);
++ final int chunkZ = CoordinateUtils.getChunkZ(chunkKey);
++ final ChunkAccess chunk = this.world.chunkSource.getChunkAtImmediately(chunkX, chunkZ);
++ if (chunk.getStatus() != ChunkStatus.FULL) {
++ // only rate limit actual generations
++ if ((ratedGensThisTick + 1) > maxGensThisTick) {
++ break;
++ }
++ ++ratedGensThisTick;
++ }
++
++ this.genQueue.dequeueLong();
++
++ final byte prev = this.chunkTicketStage.put(chunkKey, CHUNK_TICKET_STAGE_GENERATING);
++ if (prev != CHUNK_TICKET_STAGE_LOADED) {
++ throw new IllegalStateException("Previous state should be " + CHUNK_TICKET_STAGE_LOADED + ", not " + prev);
++ }
++ this.pushDelayedTicketOp(
++ ChunkHolderManager.TicketOperation.addAndRemove(
++ chunkKey,
++ REGION_PLAYER_TICKET, GENERATED_TICKET_LEVEL, this.idBoxed,
++ REGION_PLAYER_TICKET, LOADED_TICKET_LEVEL, this.idBoxed
++ )
++ );
++ this.generatingQueue.enqueue(chunkKey);
++ }
++
++ // try to pull ticking chunks
++ tick_check_outer:
++ while (!this.tickingQueue.isEmpty()) {
++ final long pendingTicking = this.tickingQueue.firstLong();
++ final int pendingChunkX = CoordinateUtils.getChunkX(pendingTicking);
++ final int pendingChunkZ = CoordinateUtils.getChunkZ(pendingTicking);
++
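++ // only promote a chunk to ticking once every neighbour within radius 2
++ // (the surrounding 5x5 square) is at least generated, so that ticking
++ // logic can safely touch neighbouring chunk data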
++ final int tickingReq = 2;
++ for (int dz = -tickingReq; dz <= tickingReq; ++dz) {
++ for (int dx = -tickingReq; dx <= tickingReq; ++dx) {
++ if ((dx | dz) == 0) {
++ continue;
++ }
++ final long neighbour = CoordinateUtils.getChunkKey(dx + pendingChunkX, dz + pendingChunkZ);
++ final byte stage = this.chunkTicketStage.get(neighbour);
++ if (stage != CHUNK_TICKET_STAGE_GENERATED && stage != CHUNK_TICKET_STAGE_TICK) {
++ break tick_check_outer;
++ }
++ }
++ }
++ // only gets here if all neighbours were marked as generated or ticking themselves
++ this.tickingQueue.dequeueLong();
++ this.pushDelayedTicketOp(
++ ChunkHolderManager.TicketOperation.addAndRemove(
++ pendingTicking,
++ REGION_PLAYER_TICKET, TICK_TICKET_LEVEL, this.idBoxed,
++ REGION_PLAYER_TICKET, GENERATED_TICKET_LEVEL, this.idBoxed
++ )
++ );
++ // there is no queue to add after ticking
++ final byte prev = this.chunkTicketStage.put(pendingTicking, CHUNK_TICKET_STAGE_TICK);
++ if (prev != CHUNK_TICKET_STAGE_GENERATED) {
++ throw new IllegalStateException("Previous state should be " + CHUNK_TICKET_STAGE_GENERATED + ", not " + prev);
++ }
++ }
++
++ // try to pull sending chunks
++ final long maxSends = Math.max(0L, Math.min(MAX_RATE, Integer.MAX_VALUE)); // no logic to track concurrent sends
++ final int maxSendsThisTick = Math.min((int)this.chunkSendLimiter.takeAllocation(time, sendRate, maxSends), this.sendQueue.size());
++ // we do not return sends that we took from the allocation back because we want to limit the max send rate, not target it
++ for (int i = 0; i < maxSendsThisTick; ++i) {
++ final long pendingSend = this.sendQueue.firstLong();
++ final int pendingSendX = CoordinateUtils.getChunkX(pendingSend);
++ final int pendingSendZ = CoordinateUtils.getChunkZ(pendingSend);
++ final LevelChunk chunk = this.world.chunkSource.getChunkAtIfLoadedMainThreadNoCache(pendingSendX, pendingSendZ);
++ if (!chunk.areNeighboursLoaded(1) || !TickThread.isTickThreadFor(this.world, pendingSendX, pendingSendZ)) {
++ // nothing to do
++ // the target chunk may not be owned by this region, but this should be resolved in the future
++ break;
++ }
++ if (!chunk.isPostProcessingDone) {
++ // not yet post-processed, need to do this so that tile entities can properly be sent to clients
++ chunk.postProcessGeneration();
++ // check if there was any recursive action
++ if (this.removed || this.sendQueue.isEmpty() || this.sendQueue.firstLong() != pendingSend) {
++ return;
++ } // else: good to dequeue and send, fall through
++ }
++ this.sendQueue.dequeueLong();
++
++ this.sendChunk(pendingSendX, pendingSendZ);
++ if (this.removed) {
++ // sendChunk may invoke plugin logic
++ return;
++ }
++ }
++
++ this.flushDelayedTicketOps();
++ // we assume propagate ticket levels happens after this call
++ }
++
++ void add() {
++ TickThread.ensureTickThread(this.player, "Cannot add player asynchronously");
++ if (this.removed) {
++ throw new IllegalStateException("Adding removed player chunk loader");
++ }
++ final ViewDistances playerDistances = this.player.getViewDistances();
++ final ViewDistances worldDistances = this.world.getViewDistances();
++ final int chunkX = this.player.chunkPosition().x;
++ final int chunkZ = this.player.chunkPosition().z;
++
++ final int tickViewDistance = getTickDistance(playerDistances.tickViewDistance, worldDistances.tickViewDistance);
++ // load view cannot be less than tick view + 1
++ final int loadViewDistance = getLoadViewDistance(tickViewDistance, playerDistances.loadViewDistance, worldDistances.loadViewDistance);
++ // send view cannot be greater than load view
++ final int clientViewDistance = getClientViewDistance(this.player);
++ final int sendViewDistance = getSendViewDistance(loadViewDistance, clientViewDistance, playerDistances.sendViewDistance, worldDistances.sendViewDistance);
++
++ // send view distances
++ this.player.connection.send(this.updateClientChunkRadius(sendViewDistance));
++ this.player.connection.send(this.updateClientSimulationDistance(tickViewDistance));
++
++ // add to distance maps
++ this.broadcastMap.add(chunkX, chunkZ, sendViewDistance + 1);
++ this.loadTicketCleanup.add(chunkX, chunkZ, loadViewDistance + 1);
++ this.tickMap.add(chunkX, chunkZ, tickViewDistance);
++
++ // update chunk center
++ this.player.connection.send(this.updateClientChunkCenter(chunkX, chunkZ));
++
++ // now we can update
++ this.update();
++ }
++
++ private boolean isLoadedChunkGeneratable(final int chunkX, final int chunkZ) {
++ return this.isLoadedChunkGeneratable(this.world.chunkSource.getChunkAtImmediately(chunkX, chunkZ));
++ }
++
++ private boolean isLoadedChunkGeneratable(final ChunkAccess chunkAccess) {
++ final BelowZeroRetrogen belowZeroRetrogen;
++ // see PortalForcer#findPortalAround
++ return chunkAccess != null && (
++ chunkAccess.getStatus() == ChunkStatus.FULL ||
++ ((belowZeroRetrogen = chunkAccess.getBelowZeroRetrogen()) != null && belowZeroRetrogen.targetStatus().isOrAfter(ChunkStatus.SPAWN))
++ );
++ }
++
++ void update() {
++ TickThread.ensureTickThread(this.player, "Cannot update player asynchronously");
++ if (this.removed) {
++ throw new IllegalStateException("Updating removed player chunk loader");
++ }
++ final ViewDistances playerDistances = this.player.getViewDistances();
++ final ViewDistances worldDistances = this.world.getViewDistances();
++
++ final int tickViewDistance = getTickDistance(playerDistances.tickViewDistance, worldDistances.tickViewDistance);
++ // load view cannot be less than tick view + 1
++ final int loadViewDistance = getLoadViewDistance(tickViewDistance, playerDistances.loadViewDistance, worldDistances.loadViewDistance);
++ // send view cannot be greater than load view
++ final int clientViewDistance = getClientViewDistance(this.player);
++ final int sendViewDistance = getSendViewDistance(loadViewDistance, clientViewDistance, playerDistances.sendViewDistance, worldDistances.sendViewDistance);
++
++ final ChunkPos playerPos = this.player.chunkPosition();
++ final boolean canGenerateChunks = this.canPlayerGenerateChunks();
++ final int currentChunkX = playerPos.x;
++ final int currentChunkZ = playerPos.z;
++
++ final int prevChunkX = this.lastChunkX;
++ final int prevChunkZ = this.lastChunkZ;
++
++ if (
++ // has view distance stayed the same?
++ sendViewDistance == this.lastSendDistance
++ && loadViewDistance == this.lastLoadDistance
++ && tickViewDistance == this.lastTickDistance
++
++ // has our chunk stayed the same?
++ && prevChunkX == currentChunkX
++ && prevChunkZ == currentChunkZ
++
++ // can we still generate chunks?
++ && this.canGenerateChunks == canGenerateChunks
++ ) {
++ // nothing we care about changed, so we're not re-calculating
++ return;
++ }
++
++ // update distance maps
++ this.broadcastMap.update(currentChunkX, currentChunkZ, sendViewDistance + 1);
++ this.loadTicketCleanup.update(currentChunkX, currentChunkZ, loadViewDistance + 1);
++ this.tickMap.update(currentChunkX, currentChunkZ, tickViewDistance);
++ if (sendViewDistance > loadViewDistance || tickViewDistance > loadViewDistance) {
++ throw new IllegalStateException();
++ }
++
++ // update VDs for client
++ // this should be after the distance map updates, as they will send unload packets
++ if (this.lastSentChunkRadius != sendViewDistance) {
++ this.player.connection.send(this.updateClientChunkRadius(sendViewDistance));
++ }
++ if (this.lastSentSimulationDistance != tickViewDistance) {
++ this.player.connection.send(this.updateClientSimulationDistance(tickViewDistance));
++ }
++
++ this.sendQueue.clear();
++ this.tickingQueue.clear();
++ this.generatingQueue.clear();
++ this.genQueue.clear();
++ this.loadingQueue.clear();
++ this.loadQueue.clear();
++
++ this.lastChunkX = currentChunkX;
++ this.lastChunkZ = currentChunkZ;
++ this.lastSendDistance = sendViewDistance;
++ this.lastLoadDistance = loadViewDistance;
++ this.lastTickDistance = tickViewDistance;
++ this.canGenerateChunks = canGenerateChunks;
++
++ // +1 since we need to load chunks +1 around the load view distance...
++ final long[] toIterate = SEARCH_RADIUS_ITERATION_LIST[loadViewDistance + 1];
++ // the iteration order is by increasing manhattan distance - so, we do NOT need to
++ // sort anything in the queue!
++ for (final long deltaChunk : toIterate) {
++ final int dx = CoordinateUtils.getChunkX(deltaChunk);
++ final int dz = CoordinateUtils.getChunkZ(deltaChunk);
++ final int chunkX = dx + currentChunkX;
++ final int chunkZ = dz + currentChunkZ;
++ final long chunk = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++ final int squareDistance = Math.max(Math.abs(dx), Math.abs(dz));
++ final int manhattanDistance = Math.abs(dx) + Math.abs(dz);
++
++ // since chunk sending is not by radius alone, we need an extra check here to account for
++ // everything <= sendDistance
++ // Note: Vanilla may want to send chunks outside the send view distance, so we do need
++ // the dist <= view check
++ final boolean sendChunk = (squareDistance <= (sendViewDistance + 1))
++ && wantChunkLoaded(currentChunkX, currentChunkZ, chunkX, chunkZ, sendViewDistance);
++ final boolean sentChunk = sendChunk ? this.sentChunks.contains(chunk) : this.sentChunks.remove(chunk);
++
++ if (!sendChunk && sentChunk) {
++ // have sent the chunk, but don't want it anymore
++ // unload it now
++ this.sendUnloadChunkRaw(chunkX, chunkZ);
++ }
++
++ final byte stage = this.chunkTicketStage.get(chunk);
++ switch (stage) {
++ case CHUNK_TICKET_STAGE_NONE: {
++ // we want the chunk to be at least loaded
++ this.loadQueue.enqueue(chunk);
++ break;
++ }
++ case CHUNK_TICKET_STAGE_LOADING: {
++ this.loadingQueue.enqueue(chunk);
++ break;
++ }
++ case CHUNK_TICKET_STAGE_LOADED: {
++ if (canGenerateChunks || this.isLoadedChunkGeneratable(chunkX, chunkZ)) {
++ this.genQueue.enqueue(chunk);
++ }
++ break;
++ }
++ case CHUNK_TICKET_STAGE_GENERATING: {
++ this.generatingQueue.enqueue(chunk);
++ break;
++ }
++ case CHUNK_TICKET_STAGE_GENERATED: {
++ if (sendChunk && !sentChunk) {
++ this.sendQueue.enqueue(chunk);
++ }
++ if (squareDistance <= tickViewDistance) {
++ this.tickingQueue.enqueue(chunk);
++ }
++ break;
++ }
++ case CHUNK_TICKET_STAGE_TICK: {
++ if (sendChunk && !sentChunk) {
++ this.sendQueue.enqueue(chunk);
++ }
++ break;
++ }
++ default: {
++ throw new IllegalStateException("Unknown stage: " + stage);
++ }
++ }
++ }
++
++ // update the chunk center
++ // this must be done last so that the client does not ignore any of our unload chunk packets above
++ if (this.lastSentChunkCenterX != currentChunkX || this.lastSentChunkCenterZ != currentChunkZ) {
++ this.player.connection.send(this.updateClientChunkCenter(currentChunkX, currentChunkZ));
++ }
++
++ this.flushDelayedTicketOps();
++ }
++
++ void remove() {
++ TickThread.ensureTickThread(this.player, "Cannot add player asynchronously");
++ if (this.removed) {
++ throw new IllegalStateException("Removing removed player chunk loader");
++ }
++ this.removed = true;
++ // sends the chunk unload packets
++ this.broadcastMap.remove();
++ // cleans up loading/generating tickets
++ this.loadTicketCleanup.remove();
++ // cleans up ticking tickets
++ this.tickMap.remove();
++
++ // purge queues
++ this.sendQueue.clear();
++ this.tickingQueue.clear();
++ this.generatingQueue.clear();
++ this.genQueue.clear();
++ this.loadingQueue.clear();
++ this.loadQueue.clear();
++
++ // flush ticket changes
++ this.flushDelayedTicketOps();
++
++ // now all tickets should be removed, which is all of our external state
++ }
++ }
++
++ // TODO rebase into util patch
++ private static final class AllocatingRateLimiter {
++
++ // max difference granularity in ns
++ private static final long MAX_GRANULARITY = TimeUnit.SECONDS.toNanos(1L);
++
++ private double allocation;
++ private long lastAllocationUpdate;
++ private double takeCarry;
++ private long lastTakeUpdate;
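++ // design: tickAllocation() accrues budget at `rate` units/s (clamped so that
++ // budget + carry never exceeds maxAllocation), while takeAllocation() hands
++ // out whole units and carries the fractional remainder in takeCarry, so the
++ // long-run throughput matches the configured rate. For example, at
++ // rate = 40/s and a 25ms interval, each tick accrues ~1.0 of budget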
++
++ // rate in units/s, and time in ns
++ public void tickAllocation(final long time, final double rate, final double maxAllocation) {
++ final long diff = Math.min(MAX_GRANULARITY, time - this.lastAllocationUpdate);
++ this.lastAllocationUpdate = time;
++
++ this.allocation = Math.min(maxAllocation - this.takeCarry, this.allocation + rate * (diff*1.0E-9D));
++ }
++
++ // rate in units/s, and time in ns
++ public long takeAllocation(final long time, final double rate, final long maxTake) {
++ if (maxTake < 1L) {
++ return 0L;
++ }
++
++ double ret = this.takeCarry;
++ final long diff = Math.min(MAX_GRANULARITY, time - this.lastTakeUpdate);
++ this.lastTakeUpdate = time;
++
++ // note: abs(takeCarry) <= 1.0
++ final double take = Math.min(Math.min((double)maxTake - this.takeCarry, this.allocation), rate * (diff*1.0E-9));
++
++ ret += take;
++ this.allocation -= take;
++
++ final long retInteger = (long)Math.floor(ret);
++ this.takeCarry = ret - (double)retInteger;
++
++ return retInteger;
++ }
++ }
++
++ static final class CountedSRSWLinkedQueue<E> {
++
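++ // single-reader/single-writer: the adding thread is the only writer of
++ // countAdded and the polling thread the only writer of countRemoved, so
++ // plain reads + release writes suffice; size() reads countRemoved (acquire)
++ // before countAdded, so it may briefly over-count but never goes negative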
++ private final SRSWLinkedQueue<E> queue = new SRSWLinkedQueue<>();
++ private volatile long countAdded;
++ private volatile long countRemoved;
++
++ private static final VarHandle COUNT_ADDED_HANDLE = ConcurrentUtil.getVarHandle(CountedSRSWLinkedQueue.class, "countAdded", long.class);
++ private static final VarHandle COUNT_REMOVED_HANDLE = ConcurrentUtil.getVarHandle(CountedSRSWLinkedQueue.class, "countRemoved", long.class);
++
++ private long getCountAddedPlain() {
++ return (long)COUNT_ADDED_HANDLE.get(this);
++ }
++
++ private long getCountAddedAcquire() {
++ return (long)COUNT_ADDED_HANDLE.getAcquire(this);
++ }
++
++ private void setCountAddedRelease(final long to) {
++ COUNT_ADDED_HANDLE.setRelease(this, to);
++ }
++
++ private long getCountRemovedPlain() {
++ return (long)COUNT_REMOVED_HANDLE.get(this);
++ }
++
++ private long getCountRemovedAcquire() {
++ return (long)COUNT_REMOVED_HANDLE.getAcquire(this);
++ }
++
++ private void setCountRemovedRelease(final long to) {
++ COUNT_REMOVED_HANDLE.setRelease(this, to);
++ }
++
++ public void add(final E element) {
++ this.setCountAddedRelease(this.getCountAddedPlain() + 1L);
++ this.queue.addLast(element);
++ }
++
++ public E poll() {
++ final E ret = this.queue.poll();
++ if (ret != null) {
++ this.setCountRemovedRelease(this.getCountRemovedPlain() + 1L);
++ }
++
++ return ret;
++ }
++
++ public long size() {
++ final long removed = this.getCountRemovedAcquire();
++ final long added = this.getCountAddedAcquire();
++
++ return added - removed;
++ }
++ }
++
++ private static class CustomLongArray extends LongArrayList {
++
++ public CustomLongArray() {
++ super();
++ }
++
++ public CustomLongArray(final int expected) {
++ super(expected);
++ }
++
++ public boolean addAll(final CustomLongArray list) {
++ this.addElements(this.size, list.a, 0, list.size);
++ return list.size != 0;
++ }
++
++ public void addUnchecked(final long value) {
++ this.a[this.size++] = value;
++ }
++
++ public void forceSize(final int to) {
++ this.size = to;
++ }
++
++ @Override
++ public int hashCode() {
++ long h = 1L;
++
++ Objects.checkFromToIndex(0, this.size, this.a.length);
++
++ for (int i = 0; i < this.size; ++i) {
++ h = it.unimi.dsi.fastutil.HashCommon.mix(h + this.a[i]);
++ }
++
++ return (int)h;
++ }
++
++ @Override
++ public boolean equals(final Object o) {
++ if (o == this) {
++ return true;
++ }
++
++ if (!(o instanceof CustomLongArray other)) {
++ return false;
++ }
++
++ return this.size == other.size && Arrays.equals(this.a, 0, this.size, other.a, 0, this.size);
++ }
++ }
++
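++ // sizes of Manhattan-distance rings clipped to the square |x|,|z| <= max:
++ // getDistanceSize counts every chunk with |x|+|z| == radius inside the square,
++ // getQ1DistanceSize counts only the first quadrant (x >= 0, z >= 0).
++ // worked example: radius = 3, max = 2 -> diff = 1, giving 4*(2 - 0) = 8 chunks:
++ // (+-1, +-2) and (+-2, +-1)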
++ private static int getDistanceSize(final int radius, final int max) {
++ if (radius == 0) {
++ return 1;
++ }
++ final int diff = radius - max;
++ if (diff <= 0) {
++ return 4*radius;
++ }
++ return 4*(max - Math.max(0, diff - 1));
++ }
++
++ private static int getQ1DistanceSize(final int radius, final int max) {
++ if (radius == 0) {
++ return 1;
++ }
++ final int diff = radius - max;
++ if (diff <= 0) {
++ return radius+1;
++ }
++ return max - diff + 1;
++ }
++
++ private static final class BasicFIFOLQueue {
++
++ private final long[] values;
++ private int head, tail;
++
++ public BasicFIFOLQueue(final int cap) {
++ if (cap <= 1) {
++ throw new IllegalArgumentException();
++ }
++ this.values = new long[cap];
++ }
++
++ public boolean isEmpty() {
++ return this.head == this.tail;
++ }
++
++ public long removeFirst() {
++ if (this.head == this.tail) {
++ throw new IllegalStateException();
++ }
++
++ final long ret = this.values[this.head];
++
++ ++this.head;
++ if (this.head == this.values.length) {
++ this.head = 0;
++ }
++
++ return ret;
++ }
++
++ public void addLast(final long value) {
++ this.values[this.tail++] = value;
++
++ // wrap before the overflow check, otherwise a queue that fills up exactly
++ // at the end of the array would wrap tail onto head and silently read as empty
++ if (this.tail == this.values.length) {
++ this.tail = 0;
++ }
++
++ if (this.tail == this.head) {
++ throw new IllegalStateException();
++ }
++ }
++ }
++
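++ // builds, for the first quadrant (x >= 0, z >= 0), one array of chunk-key
++ // offsets per Manhattan distance in [0, 2*radius]; the BFS from the origin
++ // discovers each ring in order, so the arrays come out naturally sorted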
++ private static CustomLongArray[] makeQ1BFS(final int radius) {
++ final CustomLongArray[] ret = new CustomLongArray[2 * radius + 1];
++ final BasicFIFOLQueue queue = new BasicFIFOLQueue(Math.max(1, 4 * radius) + 1);
++ final LongOpenHashSet seen = new LongOpenHashSet((radius + 1) * (radius + 1));
++
++ seen.add(CoordinateUtils.getChunkKey(0, 0));
++ queue.addLast(CoordinateUtils.getChunkKey(0, 0));
++ while (!queue.isEmpty()) {
++ final long chunk = queue.removeFirst();
++ final int chunkX = CoordinateUtils.getChunkX(chunk);
++ final int chunkZ = CoordinateUtils.getChunkZ(chunk);
++
++ final int index = Math.abs(chunkX) + Math.abs(chunkZ);
++ final CustomLongArray list = ret[index];
++ if (list != null) {
++ list.addUnchecked(chunk);
++ } else {
++ (ret[index] = new CustomLongArray(getQ1DistanceSize(index, radius))).addUnchecked(chunk);
++ }
++
++ for (int i = 0; i < 4; ++i) {
++ // 0 -> -1, 0
++ // 1 -> 0, -1
++ // 2 -> 1, 0
++ // 3 -> 0, 1
++
++ final int signInv = -(i >>> 1); // 2/3 -> -(1), 0/1 -> -(0)
++ // note: -n = (~n) + 1
++ // (n ^ signInv) - signInv = signInv == 0 ? ((n ^ 0) - 0 = n) : ((n ^ -1) - (-1) = ~n + 1)
++
++ final int axis = i & 1; // 0/2 -> 0, 1/3 -> 1
++ final int dx = ((axis - 1) ^ signInv) - signInv; // 0 -> -1, 1 -> 0
++ final int dz = (-axis ^ signInv) - signInv; // 0 -> 0, 1 -> -1
++
++ final int neighbourX = chunkX + dx;
++ final int neighbourZ = chunkZ + dz;
++ final long neighbour = CoordinateUtils.getChunkKey(neighbourX, neighbourZ);
++
++ if ((neighbourX | neighbourZ) < 0 || Math.max(Math.abs(neighbourX), Math.abs(neighbourZ)) > radius) {
++ // don't enqueue chunks outside the first quadrant or beyond the radius
++ continue;
++ }
++
++ if (!seen.add(neighbour)) {
++ continue;
++ }
++
++ queue.addLast(neighbour);
++ }
++ }
++
++ return ret;
++ }
++
++ // doesn't appear worth optimising this function now, even though it's ~70% of the call time
++ private static CustomLongArray spread(final CustomLongArray input, final int size) {
++ final LongLinkedOpenHashSet notAdded = new LongLinkedOpenHashSet(input);
++ final CustomLongArray added = new CustomLongArray(size);
++
++ while (!notAdded.isEmpty()) {
++ if (added.isEmpty()) {
++ added.addUnchecked(notAdded.removeLastLong());
++ continue;
++ }
++
++ long maxChunk = -1L;
++ int maxDist = 0;
++
++ // select the chunk from the not yet added set that has the largest minimum distance from
++ // the current set of added chunks
++
++ for (final LongIterator iterator = notAdded.iterator(); iterator.hasNext();) {
++ final long chunkKey = iterator.nextLong();
++ final int chunkX = CoordinateUtils.getChunkX(chunkKey);
++ final int chunkZ = CoordinateUtils.getChunkZ(chunkKey);
++
++ int minDist = Integer.MAX_VALUE;
++
++ final int len = added.size();
++ final long[] addedArr = added.elements();
++ Objects.checkFromToIndex(0, len, addedArr.length);
++ for (int i = 0; i < len; ++i) {
++ final long addedKey = addedArr[i];
++ final int addedX = CoordinateUtils.getChunkX(addedKey);
++ final int addedZ = CoordinateUtils.getChunkZ(addedKey);
++
++ // here we use square distance because chunk generation uses neighbours in a square radius
++ final int dist = Math.max(Math.abs(addedX - chunkX), Math.abs(addedZ - chunkZ));
++
++ minDist = Math.min(dist, minDist);
++ }
++
++ if (minDist > maxDist) {
++ maxDist = minDist;
++ maxChunk = chunkKey;
++ }
++ }
++
++ // move the selected chunk from the not added set to the added set
++
++ if (!notAdded.remove(maxChunk)) {
++ throw new IllegalStateException();
++ }
++
++ added.addUnchecked(maxChunk);
++ }
++
++ return added;
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/entity/EntityLookup.java b/src/main/java/io/papermc/paper/chunk/system/entity/EntityLookup.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..15ee41452992714108efe53b708b5a4e1da7c1ff
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/entity/EntityLookup.java
+@@ -0,0 +1,902 @@
++package io.papermc.paper.chunk.system.entity;
++
++import com.destroystokyo.paper.util.maplist.EntityList;
++import com.mojang.logging.LogUtils;
++import io.papermc.paper.util.CoordinateUtils;
++import io.papermc.paper.util.TickThread;
++import io.papermc.paper.util.WorldUtil;
++import io.papermc.paper.world.ChunkEntitySlices;
++import it.unimi.dsi.fastutil.ints.Int2ReferenceOpenHashMap;
++import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;
++import it.unimi.dsi.fastutil.objects.Object2ReferenceOpenHashMap;
++import net.minecraft.core.BlockPos;
++import io.papermc.paper.chunk.system.ChunkSystem;
++import net.minecraft.server.level.ChunkHolder;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.util.AbortableIterationConsumer;
++import net.minecraft.util.Mth;
++import net.minecraft.world.entity.Entity;
++import net.minecraft.world.entity.EntityType;
++import net.minecraft.world.level.ChunkPos;
++import net.minecraft.world.level.entity.EntityInLevelCallback;
++import net.minecraft.world.level.entity.EntityTypeTest;
++import net.minecraft.world.level.entity.LevelCallback;
++import net.minecraft.server.level.FullChunkStatus;
++import net.minecraft.world.level.entity.LevelEntityGetter;
++import net.minecraft.world.level.entity.Visibility;
++import net.minecraft.world.phys.AABB;
++import net.minecraft.world.phys.Vec3;
++import org.jetbrains.annotations.NotNull;
++import org.jetbrains.annotations.Nullable;
++import org.slf4j.Logger;
++import java.util.ArrayList;
++import java.util.Arrays;
++import java.util.Iterator;
++import java.util.List;
++import java.util.NoSuchElementException;
++import java.util.UUID;
++import java.util.concurrent.locks.StampedLock;
++import java.util.function.Consumer;
++import java.util.function.Predicate;
++
++public final class EntityLookup implements LevelEntityGetter<Entity> {
++
++ private static final Logger LOGGER = LogUtils.getClassLogger();
++
++ protected static final int REGION_SHIFT = 5;
++ protected static final int REGION_MASK = (1 << REGION_SHIFT) - 1;
++ protected static final int REGION_SIZE = 1 << REGION_SHIFT;
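++ // entity slices are grouped into regions of 32x32 chunks (1 << REGION_SHIFT),
++ // so range queries touch one map entry per region instead of one per chunk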
++
++ public final ServerLevel world;
++
++ private final StampedLock stateLock = new StampedLock();
++ protected final Long2ObjectOpenHashMap<ChunkSlicesRegion> regions = new Long2ObjectOpenHashMap<>(128, 0.5f);
++
++ private final int minSection; // inclusive
++ private final int maxSection; // inclusive
++ private final LevelCallback<Entity> worldCallback;
++
++ private final StampedLock entityByLock = new StampedLock();
++ private final Int2ReferenceOpenHashMap<Entity> entityById = new Int2ReferenceOpenHashMap<>();
++ private final Object2ReferenceOpenHashMap<UUID, Entity> entityByUUID = new Object2ReferenceOpenHashMap<>();
++ private final EntityList accessibleEntities = new EntityList();
++
++ public EntityLookup(final ServerLevel world, final LevelCallback<Entity> worldCallback) {
++ this.world = world;
++ this.minSection = WorldUtil.getMinSection(world);
++ this.maxSection = WorldUtil.getMaxSection(world);
++ this.worldCallback = worldCallback;
++ }
++
++ private static Entity maskNonAccessible(final Entity entity) {
++ if (entity == null) {
++ return null;
++ }
++ final Visibility visibility = EntityLookup.getEntityStatus(entity);
++ return visibility.isAccessible() ? entity : null;
++ }
++
++ @Nullable
++ @Override
++ public Entity get(final int id) {
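++ // optimistic read: a concurrent writer may leave the map internally
++ // inconsistent while we read it, which can throw - swallow anything
++ // non-fatal and fall back to the full read lock below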
++ final long attempt = this.entityByLock.tryOptimisticRead();
++ if (attempt != 0L) {
++ try {
++ final Entity ret = this.entityById.get(id);
++
++ if (this.entityByLock.validate(attempt)) {
++ return maskNonAccessible(ret);
++ }
++ } catch (final Error error) {
++ throw error;
++ } catch (final Throwable thr) {
++ // ignore
++ }
++ }
++
++ this.entityByLock.readLock();
++ try {
++ return maskNonAccessible(this.entityById.get(id));
++ } finally {
++ this.entityByLock.tryUnlockRead();
++ }
++ }
++
++ @Nullable
++ @Override
++ public Entity get(final UUID id) {
++ final long attempt = this.entityByLock.tryOptimisticRead();
++ if (attempt != 0L) {
++ try {
++ final Entity ret = this.entityByUUID.get(id);
++
++ if (this.entityByLock.validate(attempt)) {
++ return maskNonAccessible(ret);
++ }
++ } catch (final Error error) {
++ throw error;
++ } catch (final Throwable thr) {
++ // ignore
++ }
++ }
++
++ this.entityByLock.readLock();
++ try {
++ return maskNonAccessible(this.entityByUUID.get(id));
++ } finally {
++ this.entityByLock.tryUnlockRead();
++ }
++ }
++
++ public boolean hasEntity(final UUID uuid) {
++ return this.get(uuid) != null;
++ }
++
++ public String getDebugInfo() {
++ return "count_id:" + this.entityById.size() + ",count_uuid:" + this.entityByUUID.size() + ",region_count:" + this.regions.size();
++ }
++
++ static final class ArrayIterable<T> implements Iterable<T> {
++
++ private final T[] array;
++ private final int off;
++ private final int length;
++
++ public ArrayIterable(final T[] array, final int off, final int length) {
++ this.array = array;
++ this.off = off;
++ this.length = length;
++ if (length > array.length) {
++ throw new IllegalArgumentException("Length must be no greater-than the array length");
++ }
++ }
++
++ @NotNull
++ @Override
++ public Iterator<T> iterator() {
++ return new ArrayIterator<>(this.array, this.off, this.length);
++ }
++
++ static final class ArrayIterator<T> implements Iterator<T> {
++
++ private final T[] array;
++ private int off;
++ private final int length;
++
++ public ArrayIterator(final T[] array, final int off, final int length) {
++ this.array = array;
++ this.off = off;
++ this.length = length;
++ }
++
++ @Override
++ public boolean hasNext() {
++ return this.off < this.length;
++ }
++
++ @Override
++ public T next() {
++ if (this.off >= this.length) {
++ throw new NoSuchElementException();
++ }
++ return this.array[this.off++];
++ }
++
++ @Override
++ public void remove() {
++ throw new UnsupportedOperationException();
++ }
++ }
++ }
++
++ @Override
++ public Iterable<Entity> getAll() {
++ return new ArrayIterable<>(this.accessibleEntities.getRawData(), 0, this.accessibleEntities.size());
++ }
++
++ public Entity[] getAllCopy() {
++ return Arrays.copyOf(this.accessibleEntities.getRawData(), this.accessibleEntities.size(), Entity[].class);
++ }
++
++ @Override
++ public <U extends Entity> void get(final EntityTypeTest<Entity, U> filter, final AbortableIterationConsumer<U> action) {
++ final Int2ReferenceOpenHashMap<Entity> entityCopy;
++
++ this.entityByLock.readLock();
++ try {
++ entityCopy = this.entityById.clone();
++ } finally {
++ this.entityByLock.tryUnlockRead();
++ }
++ for (final Entity entity : entityCopy.values()) {
++ final Visibility visibility = EntityLookup.getEntityStatus(entity);
++ if (!visibility.isAccessible()) {
++ continue;
++ }
++ final U casted = filter.tryCast(entity);
++ if (casted != null && action.accept(casted).shouldAbort()) {
++ break;
++ }
++ }
++ }
++
++ @Override
++ public void get(final AABB box, final Consumer<Entity> action) {
++ List<Entity> entities = new ArrayList<>();
++ this.getEntitiesWithoutDragonParts(null, box, entities, null);
++ for (int i = 0, len = entities.size(); i < len; ++i) {
++ action.accept(entities.get(i));
++ }
++ }
++
++ @Override
++ public <U extends Entity> void get(final EntityTypeTest<Entity, U> filter, final AABB box, final AbortableIterationConsumer<U> action) {
++ List<Entity> entities = new ArrayList<>();
++ this.getEntitiesWithoutDragonParts(null, box, entities, null);
++ for (int i = 0, len = entities.size(); i < len; ++i) {
++ final U casted = filter.tryCast(entities.get(i));
++ if (casted != null && action.accept(casted).shouldAbort()) {
++ break;
++ }
++ }
++ }
++
++ public void entityStatusChange(final Entity entity, final ChunkEntitySlices slices, final Visibility oldVisibility, final Visibility newVisibility, final boolean moved,
++ final boolean created, final boolean destroyed) {
++ TickThread.ensureTickThread(entity, "Entity status change must only happen on the main thread");
++
++ if (entity.updatingSectionStatus) {
++ // recursive status update
++ LOGGER.error("Cannot recursively update entity chunk status for entity " + entity, new Throwable());
++ return;
++ }
++
++ final boolean entityStatusUpdateBefore = slices != null && slices.startPreventingStatusUpdates();
++
++ if (entityStatusUpdateBefore) {
++ LOGGER.error("Cannot update chunk status for entity " + entity + " since entity chunk (" + slices.chunkX + "," + slices.chunkZ + ") is receiving update", new Throwable());
++ return;
++ }
++
++ try {
++ final Boolean ticketBlockBefore = this.world.chunkTaskScheduler.chunkHolderManager.blockTicketUpdates();
++ try {
++ entity.updatingSectionStatus = true;
++ try {
++ if (created) {
++ EntityLookup.this.worldCallback.onCreated(entity);
++ }
++
++ if (oldVisibility == newVisibility) {
++ if (moved && newVisibility.isAccessible()) {
++ EntityLookup.this.worldCallback.onSectionChange(entity);
++ }
++ return;
++ }
++
++ if (newVisibility.ordinal() > oldVisibility.ordinal()) {
++ // status upgrade
++ if (!oldVisibility.isAccessible() && newVisibility.isAccessible()) {
++ this.accessibleEntities.add(entity);
++ EntityLookup.this.worldCallback.onTrackingStart(entity);
++ }
++
++ if (!oldVisibility.isTicking() && newVisibility.isTicking()) {
++ EntityLookup.this.worldCallback.onTickingStart(entity);
++ }
++ } else {
++ // status downgrade
++ if (oldVisibility.isTicking() && !newVisibility.isTicking()) {
++ EntityLookup.this.worldCallback.onTickingEnd(entity);
++ }
++
++ if (oldVisibility.isAccessible() && !newVisibility.isAccessible()) {
++ this.accessibleEntities.remove(entity);
++ EntityLookup.this.worldCallback.onTrackingEnd(entity);
++ }
++ }
++
++ if (moved && newVisibility.isAccessible()) {
++ EntityLookup.this.worldCallback.onSectionChange(entity);
++ }
++
++ if (destroyed) {
++ EntityLookup.this.worldCallback.onDestroyed(entity);
++ }
++ } finally {
++ entity.updatingSectionStatus = false;
++ }
++ } finally {
++ this.world.chunkTaskScheduler.chunkHolderManager.unblockTicketUpdates(ticketBlockBefore);
++ }
++ } finally {
++ if (slices != null) {
++ slices.stopPreventingStatusUpdates(false);
++ }
++ }
++ }
++
++ public void chunkStatusChange(final int x, final int z, final FullChunkStatus newStatus) {
++ this.getChunk(x, z).updateStatus(newStatus, this);
++ }
++
++ public void addLegacyChunkEntities(final List<Entity> entities, final ChunkPos forChunk) {
++ this.addEntityChunk(entities, forChunk, true);
++ }
++
++ public void addEntityChunkEntities(final List<Entity> entities, final ChunkPos forChunk) {
++ this.addEntityChunk(entities, forChunk, true);
++ }
++
++ public void addWorldGenChunkEntities(final List<Entity> entities, final ChunkPos forChunk) {
++ this.addEntityChunk(entities, forChunk, false);
++ }
++
++ private void addRecursivelySafe(final Entity root, final boolean fromDisk) {
++ if (!this.addEntity(root, fromDisk)) {
++ // possible we are a passenger, and so should dismount from any valid entity in the world
++ root.stopRiding(true);
++ return;
++ }
++ for (final Entity passenger : root.getPassengers()) {
++ this.addRecursivelySafe(passenger, fromDisk);
++ }
++ }
++
++ private void addEntityChunk(final List<Entity> entities, final ChunkPos forChunk, final boolean fromDisk) {
++ for (int i = 0, len = entities.size(); i < len; ++i) {
++ final Entity entity = entities.get(i);
++ if (entity.isPassenger()) {
++ continue;
++ }
++
++ if (!entity.chunkPosition().equals(forChunk)) {
++ LOGGER.warn("Root entity " + entity + " is outside of serialized chunk " + forChunk);
++ // can't set removed here, as we may not own the chunk position
++ // skip the entity
++ continue;
++ }
++
++ final Vec3 rootPosition = entity.position();
++
++ // always adjust positions before adding passengers in case plugins access the entity, and so that
++ // they are added to the right entity chunk
++ for (final Entity passenger : entity.getIndirectPassengers()) {
++ if (!passenger.chunkPosition().equals(forChunk)) {
++ passenger.setPosRaw(rootPosition.x, rootPosition.y, rootPosition.z, true);
++ }
++ }
++
++ this.addRecursivelySafe(entity, fromDisk);
++ }
++ }
++
++ public boolean addNewEntity(final Entity entity) {
++ return this.addEntity(entity, false);
++ }
++
++ public static Visibility getEntityStatus(final Entity entity) {
++ if (entity.isAlwaysTicking()) {
++ return Visibility.TICKING;
++ }
++ final FullChunkStatus entityStatus = entity.chunkStatus;
++ return Visibility.fromFullChunkStatus(entityStatus == null ? FullChunkStatus.INACCESSIBLE : entityStatus);
++ }
++
++ private boolean addEntity(final Entity entity, final boolean fromDisk) {
++ final BlockPos pos = entity.blockPosition();
++ final int sectionX = pos.getX() >> 4;
++ final int sectionY = Mth.clamp(pos.getY() >> 4, this.minSection, this.maxSection);
++ final int sectionZ = pos.getZ() >> 4;
++ TickThread.ensureTickThread(this.world, sectionX, sectionZ, "Cannot add entity off-main thread");
++
++ if (entity.isRemoved()) {
++ LOGGER.warn("Refusing to add removed entity: " + entity);
++ return false;
++ }
++
++ if (entity.updatingSectionStatus) {
++ LOGGER.warn("Entity " + entity + " is currently prevented from being added/removed to world since it is processing section status updates", new Throwable());
++ return false;
++ }
++
++ if (fromDisk) {
++ ChunkSystem.onEntityPreAdd(this.world, entity);
++ if (entity.isRemoved()) {
++ // removed from checkDupeUUID call
++ return false;
++ }
++ }
++
++ this.entityByLock.writeLock();
++ try {
++ if (this.entityById.containsKey(entity.getId())) {
++ LOGGER.warn("Entity id already exists: " + entity.getId() + ", mapped to " + this.entityById.get(entity.getId()) + ", can't add " + entity);
++ return false;
++ }
++ if (this.entityByUUID.containsKey(entity.getUUID())) {
++ LOGGER.warn("Entity uuid already exists: " + entity.getUUID() + ", mapped to " + this.entityByUUID.get(entity.getUUID()) + ", can't add " + entity);
++ return false;
++ }
++ this.entityById.put(entity.getId(), entity);
++ this.entityByUUID.put(entity.getUUID(), entity);
++ } finally {
++ this.entityByLock.tryUnlockWrite();
++ }
++
++ entity.sectionX = sectionX;
++ entity.sectionY = sectionY;
++ entity.sectionZ = sectionZ;
++ final ChunkEntitySlices slices = this.getOrCreateChunk(sectionX, sectionZ);
++ if (!slices.addEntity(entity, sectionY)) {
++ LOGGER.warn("Entity " + entity + " added to world '" + this.world.getWorld().getName() + "', but was already contained in entity chunk (" + sectionX + "," + sectionZ + ")");
++ }
++
++ entity.setLevelCallback(new EntityCallback(entity));
++
++ this.entityStatusChange(entity, slices, Visibility.HIDDEN, getEntityStatus(entity), false, !fromDisk, false);
++
++ return true;
++ }
++
++ public boolean canRemoveEntity(final Entity entity) {
++ if (entity.updatingSectionStatus) {
++ return false;
++ }
++
++ final int sectionX = entity.sectionX;
++ final int sectionZ = entity.sectionZ;
++ final ChunkEntitySlices slices = this.getChunk(sectionX, sectionZ);
++ return slices == null || !slices.isPreventingStatusUpdates();
++ }
++
++ private void removeEntity(final Entity entity) {
++ final int sectionX = entity.sectionX;
++ final int sectionY = entity.sectionY;
++ final int sectionZ = entity.sectionZ;
++ TickThread.ensureTickThread(this.world, sectionX, sectionZ, "Cannot remove entity off-main");
++ if (!entity.isRemoved()) {
++ throw new IllegalStateException("Only call Entity#setRemoved to remove an entity");
++ }
++ final ChunkEntitySlices slices = this.getChunk(sectionX, sectionZ);
++ // all entities should be in a chunk
++ if (slices == null) {
++ LOGGER.warn("Cannot remove entity " + entity + " from null entity slices (" + sectionX + "," + sectionZ + ")");
++ } else {
++ if (slices.isPreventingStatusUpdates()) {
++ throw new IllegalStateException("Attempting to remove entity " + entity + " from entity slices (" + sectionX + "," + sectionZ + ") that is receiving status updates");
++ }
++ if (!slices.removeEntity(entity, sectionY)) {
++ LOGGER.warn("Failed to remove entity " + entity + " from entity slices (" + sectionX + "," + sectionZ + ")");
++ }
++ }
++ entity.sectionX = entity.sectionY = entity.sectionZ = Integer.MIN_VALUE;
++
++ this.entityByLock.writeLock();
++ try {
++ if (!this.entityById.remove(entity.getId(), entity)) {
++ LOGGER.warn("Failed to remove entity " + entity + " by id, current entity mapped: " + this.entityById.get(entity.getId()));
++ }
++ if (!this.entityByUUID.remove(entity.getUUID(), entity)) {
++ LOGGER.warn("Failed to remove entity " + entity + " by uuid, current entity mapped: " + this.entityByUUID.get(entity.getUUID()));
++ }
++ } finally {
++ this.entityByLock.tryUnlockWrite();
++ }
++ }
++
++ private ChunkEntitySlices moveEntity(final Entity entity) {
++ // ensure we own the entity
++ TickThread.ensureTickThread(entity, "Cannot move entity off-main");
++
++ final BlockPos newPos = entity.blockPosition();
++ final int newSectionX = newPos.getX() >> 4;
++ final int newSectionY = Mth.clamp(newPos.getY() >> 4, this.minSection, this.maxSection);
++ final int newSectionZ = newPos.getZ() >> 4;
++
++ if (newSectionX == entity.sectionX && newSectionY == entity.sectionY && newSectionZ == entity.sectionZ) {
++ return null;
++ }
++
++ // ensure the new section is owned by this tick thread
++ TickThread.ensureTickThread(this.world, newSectionX, newSectionZ, "Cannot move entity off-main");
++
++ // ensure the old section is owned by this tick thread
++ TickThread.ensureTickThread(this.world, entity.sectionX, entity.sectionZ, "Cannot move entity off-main");
++
++ final ChunkEntitySlices old = this.getChunk(entity.sectionX, entity.sectionZ);
++ final ChunkEntitySlices slices = this.getOrCreateChunk(newSectionX, newSectionZ);
++
++ if (!old.removeEntity(entity, entity.sectionY)) {
++ LOGGER.warn("Could not remove entity " + entity + " from its old chunk section (" + entity.sectionX + "," + entity.sectionY + "," + entity.sectionZ + ") since it was not contained in the section");
++ }
++
++ if (!slices.addEntity(entity, newSectionY)) {
++ LOGGER.warn("Could not add entity " + entity + " to its new chunk section (" + newSectionX + "," + newSectionY + "," + newSectionZ + ") as it is already contained in the section");
++ }
++
++ entity.sectionX = newSectionX;
++ entity.sectionY = newSectionY;
++ entity.sectionZ = newSectionZ;
++
++ return slices;
++ }
++
++ public void getEntitiesWithoutDragonParts(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
++ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
++ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
++ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
++ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
++
++ final int minRegionX = minChunkX >> REGION_SHIFT;
++ final int minRegionZ = minChunkZ >> REGION_SHIFT;
++ final int maxRegionX = maxChunkX >> REGION_SHIFT;
++ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
++
++ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
++ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
++ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
++
++ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
++ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
++
++ if (region == null) {
++ continue;
++ }
++
++ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
++ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
++
++ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
++ for (int currX = minX; currX <= maxX; ++currX) {
++ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
++ if (chunk == null || !chunk.status.isOrAfter(FullChunkStatus.FULL)) {
++ continue;
++ }
++
++ chunk.getEntitiesWithoutDragonParts(except, box, into, predicate);
++ }
++ }
++ }
++ }
++ }
++
++ public void getEntities(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
++ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
++ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
++ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
++ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
++
++ final int minRegionX = minChunkX >> REGION_SHIFT;
++ final int minRegionZ = minChunkZ >> REGION_SHIFT;
++ final int maxRegionX = maxChunkX >> REGION_SHIFT;
++ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
++
++ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
++ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
++ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
++
++ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
++ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
++
++ if (region == null) {
++ continue;
++ }
++
++ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
++ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
++
++ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
++ for (int currX = minX; currX <= maxX; ++currX) {
++ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
++ if (chunk == null || !chunk.status.isOrAfter(FullChunkStatus.FULL)) {
++ continue;
++ }
++
++ chunk.getEntities(except, box, into, predicate);
++ }
++ }
++ }
++ }
++ }
++
++ public void getHardCollidingEntities(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
++ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
++ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
++ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
++ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
++
++ final int minRegionX = minChunkX >> REGION_SHIFT;
++ final int minRegionZ = minChunkZ >> REGION_SHIFT;
++ final int maxRegionX = maxChunkX >> REGION_SHIFT;
++ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
++
++ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
++ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
++ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
++
++ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
++ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
++
++ if (region == null) {
++ continue;
++ }
++
++ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
++ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
++
++ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
++ for (int currX = minX; currX <= maxX; ++currX) {
++ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
++ if (chunk == null || !chunk.status.isOrAfter(FullChunkStatus.FULL)) {
++ continue;
++ }
++
++ chunk.getHardCollidingEntities(except, box, into, predicate);
++ }
++ }
++ }
++ }
++ }
++
++ public <T extends Entity> void getEntities(final EntityType<?> type, final AABB box, final List<? super T> into,
++ final Predicate<? super T> predicate) {
++ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
++ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
++ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
++ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
++
++ final int minRegionX = minChunkX >> REGION_SHIFT;
++ final int minRegionZ = minChunkZ >> REGION_SHIFT;
++ final int maxRegionX = maxChunkX >> REGION_SHIFT;
++ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
++
++ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
++ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
++ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
++
++ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
++ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
++
++ if (region == null) {
++ continue;
++ }
++
++ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
++ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
++
++ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
++ for (int currX = minX; currX <= maxX; ++currX) {
++ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
++ if (chunk == null || !chunk.status.isOrAfter(FullChunkStatus.FULL)) {
++ continue;
++ }
++
++ chunk.getEntities(type, box, (List)into, (Predicate)predicate);
++ }
++ }
++ }
++ }
++ }
++
++ public <T extends Entity> void getEntities(final Class<? extends T> clazz, final Entity except, final AABB box, final List<? super T> into,
++ final Predicate<? super T> predicate) {
++ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
++ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
++ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
++ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
++
++ final int minRegionX = minChunkX >> REGION_SHIFT;
++ final int minRegionZ = minChunkZ >> REGION_SHIFT;
++ final int maxRegionX = maxChunkX >> REGION_SHIFT;
++ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
++
++ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
++ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
++ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
++
++ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
++ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
++
++ if (region == null) {
++ continue;
++ }
++
++ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
++ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
++
++ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
++ for (int currX = minX; currX <= maxX; ++currX) {
++ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
++ if (chunk == null || !chunk.status.isOrAfter(FullChunkStatus.FULL)) {
++ continue;
++ }
++
++ chunk.getEntities(clazz, except, box, into, predicate);
++ }
++ }
++ }
++ }
++ }
++
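++    // Shared traversal note for the five getEntities* / getHardCollidingEntities
++    // queries above: the AABB is padded by 2 blocks, converted to chunk
++    // coordinates, and then walked one ChunkSlicesRegion at a time so a single
++    // region lookup covers REGION_SIZE x REGION_SIZE chunks. A minimal sketch of
++    // the index math, which assumes REGION_SIZE == (1 << REGION_SHIFT) as the
++    // slices array layout requires:
++    //
++    //   final int regionX = chunkX >> REGION_SHIFT;  // which region
++    //   final int relX    = chunkX & REGION_MASK;    // offset inside the region
++    //   final int index   = relX | ((chunkZ & REGION_MASK) << REGION_SHIFT);
++    //
++    // Consecutive x values within one region therefore map to consecutive slots.
++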
++ public void entitySectionLoad(final int chunkX, final int chunkZ, final ChunkEntitySlices slices) {
++ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot load in entity section off-main");
++ synchronized (this) {
++ final ChunkEntitySlices curr = this.getChunk(chunkX, chunkZ);
++ if (curr != null) {
++ this.removeChunk(chunkX, chunkZ);
++
++ curr.mergeInto(slices);
++
++ this.addChunk(chunkX, chunkZ, slices);
++ } else {
++ this.addChunk(chunkX, chunkZ, slices);
++ }
++ }
++ }
++
++ public void entitySectionUnload(final int chunkX, final int chunkZ) {
++ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot unload entity section off-main");
++ this.removeChunk(chunkX, chunkZ);
++ }
++
++ public ChunkEntitySlices getChunk(final int chunkX, final int chunkZ) {
++ final ChunkSlicesRegion region = this.getRegion(chunkX >> REGION_SHIFT, chunkZ >> REGION_SHIFT);
++ if (region == null) {
++ return null;
++ }
++
++ return region.get((chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT));
++ }
++
++ public ChunkEntitySlices getOrCreateChunk(final int chunkX, final int chunkZ) {
++ final ChunkSlicesRegion region = this.getRegion(chunkX >> REGION_SHIFT, chunkZ >> REGION_SHIFT);
++ ChunkEntitySlices ret;
++ if (region == null || (ret = region.get((chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT))) == null) {
++ // loadInEntityChunk will call addChunk for us
++ return this.world.chunkTaskScheduler.chunkHolderManager.getOrCreateEntityChunk(chunkX, chunkZ, true);
++ }
++
++ return ret;
++ }
++
++ public ChunkSlicesRegion getRegion(final int regionX, final int regionZ) {
++ final long key = CoordinateUtils.getChunkKey(regionX, regionZ);
++ final long attempt = this.stateLock.tryOptimisticRead();
++ if (attempt != 0L) {
++ try {
++ final ChunkSlicesRegion ret = this.regions.get(key);
++
++ if (this.stateLock.validate(attempt)) {
++ return ret;
++ }
++ } catch (final Error error) {
++ throw error;
++ } catch (final Throwable thr) {
++ // ignore
++ }
++ }
++
++ this.stateLock.readLock();
++ try {
++ return this.regions.get(key);
++ } finally {
++ this.stateLock.tryUnlockRead();
++ }
++ }
++
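++    // getRegion() above uses an optimistic-read pattern: read under a validation
++    // stamp first, and only take the full read lock if validation fails. The
++    // blanket Throwable catch is intentional - an optimistic read may observe
++    // this.regions mid-mutation, so an exception there is treated exactly like a
++    // failed validation. A hedged sketch of the same shape using the JDK's
++    // java.util.concurrent.locks.StampedLock (this class's stateLock exposes an
++    // equivalent tryOptimisticRead/validate API):
++    //
++    //   final long stamp = lock.tryOptimisticRead();
++    //   V value = map.get(key); // may observe torn state
++    //   if (stamp == 0L || !lock.validate(stamp)) {
++    //       final long read = lock.readLock();
++    //       try { value = map.get(key); } finally { lock.unlockRead(read); }
++    //   }
++    //
++    // Production code should also guard the optimistic get against exceptions,
++    // as getRegion() does.
++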
++ private synchronized void removeChunk(final int chunkX, final int chunkZ) {
++ final long key = CoordinateUtils.getChunkKey(chunkX >> REGION_SHIFT, chunkZ >> REGION_SHIFT);
++ final int relIndex = (chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT);
++
++ final ChunkSlicesRegion region = this.regions.get(key);
++ final int remaining = region.remove(relIndex);
++
++ if (remaining == 0) {
++ this.stateLock.writeLock();
++ try {
++ this.regions.remove(key);
++ } finally {
++ this.stateLock.tryUnlockWrite();
++ }
++ }
++ }
++
++ public synchronized void addChunk(final int chunkX, final int chunkZ, final ChunkEntitySlices slices) {
++ final long key = CoordinateUtils.getChunkKey(chunkX >> REGION_SHIFT, chunkZ >> REGION_SHIFT);
++ final int relIndex = (chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT);
++
++ ChunkSlicesRegion region = this.regions.get(key);
++ if (region != null) {
++ region.add(relIndex, slices);
++ } else {
++ region = new ChunkSlicesRegion();
++ region.add(relIndex, slices);
++ this.stateLock.writeLock();
++ try {
++ this.regions.put(key, region);
++ } finally {
++ this.stateLock.tryUnlockWrite();
++ }
++ }
++ }
++
++ public static final class ChunkSlicesRegion {
++
++ protected final ChunkEntitySlices[] slices = new ChunkEntitySlices[REGION_SIZE * REGION_SIZE];
++ protected int sliceCount;
++
++ public ChunkEntitySlices get(final int index) {
++ return this.slices[index];
++ }
++
++ public int remove(final int index) {
++ final ChunkEntitySlices slices = this.slices[index];
++ if (slices == null) {
++ throw new IllegalStateException();
++ }
++
++ this.slices[index] = null;
++
++ return --this.sliceCount;
++ }
++
++ public void add(final int index, final ChunkEntitySlices slices) {
++ final ChunkEntitySlices curr = this.slices[index];
++ if (curr != null) {
++ throw new IllegalStateException();
++ }
++
++ this.slices[index] = slices;
++
++ ++this.sliceCount;
++ }
++ }
++
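++    // Design note: ChunkSlicesRegion.remove() reports the remaining slice count
++    // so removeChunk() above can evict a region from the map the moment its last
++    // chunk is removed, keeping the regions table proportional to loaded chunks.
++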
++ private final class EntityCallback implements EntityInLevelCallback {
++
++ public final Entity entity;
++
++ public EntityCallback(final Entity entity) {
++ this.entity = entity;
++ }
++
++ @Override
++ public void onMove() {
++ final Entity entity = this.entity;
++ final Visibility oldVisibility = getEntityStatus(entity);
++ final ChunkEntitySlices newSlices = EntityLookup.this.moveEntity(this.entity);
++ if (newSlices == null) {
++ // no new section, so didn't change sections
++ return;
++ }
++ final Visibility newVisibility = getEntityStatus(entity);
++
++ EntityLookup.this.entityStatusChange(entity, newSlices, oldVisibility, newVisibility, true, false, false);
++ }
++
++ @Override
++ public void onRemove(final Entity.RemovalReason reason) {
++ final Entity entity = this.entity;
++ TickThread.ensureTickThread(entity, "Cannot remove entity off-main"); // Paper - rewrite chunk system
++ final Visibility tickingState = EntityLookup.getEntityStatus(entity);
++
++ EntityLookup.this.removeEntity(entity);
++
++ EntityLookup.this.entityStatusChange(entity, null, tickingState, Visibility.HIDDEN, false, false, reason.shouldDestroy());
++
++ this.entity.setLevelCallback(NoOpCallback.INSTANCE);
++ }
++ }
++
++ private static final class NoOpCallback implements EntityInLevelCallback {
++
++ public static final NoOpCallback INSTANCE = new NoOpCallback();
++
++ @Override
++ public void onMove() {}
++
++ @Override
++ public void onRemove(final Entity.RemovalReason reason) {}
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/io/RegionFileIOThread.java b/src/main/java/io/papermc/paper/chunk/system/io/RegionFileIOThread.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..2934f0cf0ef09c84739312b00186c2ef0019a165
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/io/RegionFileIOThread.java
+@@ -0,0 +1,1343 @@
++package io.papermc.paper.chunk.system.io;
++
++import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
++import ca.spottedleaf.concurrentutil.executor.Cancellable;
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedQueueExecutorThread;
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadedTaskQueue;
++import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
++import com.mojang.logging.LogUtils;
++import io.papermc.paper.util.CoordinateUtils;
++import io.papermc.paper.util.TickThread;
++import it.unimi.dsi.fastutil.HashCommon;
++import net.minecraft.nbt.CompoundTag;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.world.level.ChunkPos;
++import net.minecraft.world.level.chunk.storage.RegionFile;
++import net.minecraft.world.level.chunk.storage.RegionFileStorage;
++import org.slf4j.Logger;
++import java.io.IOException;
++import java.lang.invoke.VarHandle;
++import java.util.concurrent.CompletableFuture;
++import java.util.concurrent.CompletionException;
++import java.util.concurrent.ConcurrentHashMap;
++import java.util.concurrent.atomic.AtomicInteger;
++import java.util.function.BiConsumer;
++import java.util.function.BiFunction;
++import java.util.function.Consumer;
++import java.util.function.Function;
++
++/**
++ * Prioritised RegionFile I/O executor, responsible for all RegionFile access.
++ * <p>
++ * All functions provided are MT-Safe; however, certain ordering constraints are recommended:
++ * <ul>
++ * <li>
++ * Chunk saves may not occur for unloaded chunks.
++ * </li>
++ * <li>
++ * Tasks must be scheduled on the chunk scheduler thread.
++ * </li>
++ * </ul>
++ * By following these constraints, no chunk data loss should occur with the exception of underlying I/O problems.
++ */
++public final class RegionFileIOThread extends PrioritisedQueueExecutorThread {
++
++ private static final Logger LOGGER = LogUtils.getClassLogger();
++
++ /**
++ * The kinds of region files controlled by the region file thread. Add more when needed, and ensure
++ * getControllerFor is updated.
++ */
++ public static enum RegionFileType {
++ CHUNK_DATA,
++ POI_DATA,
++ ENTITY_DATA;
++ }
++
++ protected static final RegionFileType[] CACHED_REGIONFILE_TYPES = RegionFileType.values();
++
++ private ChunkDataController getControllerFor(final ServerLevel world, final RegionFileType type) {
++ return switch (type) {
++ case CHUNK_DATA -> world.chunkDataControllerNew;
++ case POI_DATA -> world.poiDataControllerNew;
++ case ENTITY_DATA -> world.entityDataControllerNew;
++ default -> throw new IllegalStateException("Unknown controller type " + type);
++ };
++ }
++
++ /**
++ * Collects regionfile data for a certain chunk.
++ */
++ public static final class RegionFileData {
++
++ private final boolean[] hasResult = new boolean[CACHED_REGIONFILE_TYPES.length];
++ private final CompoundTag[] data = new CompoundTag[CACHED_REGIONFILE_TYPES.length];
++ private final Throwable[] throwables = new Throwable[CACHED_REGIONFILE_TYPES.length];
++
++ /**
++ * Sets the result associated with the specified regionfile type. Note that
++ * results can only be set once per regionfile type.
++ *
++ * @param type The regionfile type.
++ * @param data The result to set.
++ */
++ public void setData(final RegionFileType type, final CompoundTag data) {
++ final int index = type.ordinal();
++
++ if (this.hasResult[index]) {
++ throw new IllegalArgumentException("Result already exists for type " + type);
++ }
++ this.hasResult[index] = true;
++ this.data[index] = data;
++ }
++
++ /**
++ * Sets the result associated with the specified regionfile type. Note that
++ * results can only be set once per regionfile type.
++ *
++ * @param type The regionfile type.
++ * @param throwable The result to set.
++ */
++ public void setThrowable(final RegionFileType type, final Throwable throwable) {
++ final int index = type.ordinal();
++
++ if (this.hasResult[index]) {
++ throw new IllegalArgumentException("Result already exists for type " + type);
++ }
++ this.hasResult[index] = true;
++ this.throwables[index] = throwable;
++ }
++
++ /**
++ * Returns whether there is a result for the specified regionfile type.
++ *
++ * @param type Specified regionfile type.
++ *
++ * @return Whether a result exists for {@code type}.
++ */
++ public boolean hasResult(final RegionFileType type) {
++ return this.hasResult[type.ordinal()];
++ }
++
++ /**
++ * Returns the data result for the regionfile type.
++ *
++ * @param type Specified regionfile type.
++ *
++ * @throws IllegalArgumentException If the result has not been set for {@code type}.
++ * @return The data result for the specified type. If the result is a {@code Throwable},
++ * then returns {@code null}.
++ */
++ public CompoundTag getData(final RegionFileType type) {
++ final int index = type.ordinal();
++
++ if (!this.hasResult[index]) {
++ throw new IllegalArgumentException("Result does not exist for type " + type);
++ }
++
++ return this.data[index];
++ }
++
++ /**
++ * Returns the throwable result for the regionfile type.
++ *
++ * @param type Specified regionfile type.
++ *
++ * @throws IllegalArgumentException If the result has not been set for {@code type}.
++         * @return The throwable result for the specified type. If the result is a {@code CompoundTag},
++ * then returns {@code null}.
++ */
++ public Throwable getThrowable(final RegionFileType type) {
++ final int index = type.ordinal();
++
++ if (!this.hasResult[index]) {
++ throw new IllegalArgumentException("Result does not exist for type " + type);
++ }
++
++ return this.throwables[index];
++ }
++ }
++
++ private static final Object INIT_LOCK = new Object();
++
++ static RegionFileIOThread[] threads;
++
++ /* needs to be consistent given a set of parameters */
++ static RegionFileIOThread selectThread(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
++ if (threads == null) {
++ throw new IllegalStateException("Threads not initialised");
++ }
++
++ final int regionX = chunkX >> 5;
++ final int regionZ = chunkZ >> 5;
++ final int typeOffset = type.ordinal();
++
++ return threads[(System.identityHashCode(world) + regionX + regionZ + typeOffset) % threads.length];
++ }
++
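++    // Because selectThread() is a pure function of (world identity, region,
++    // type), every read and write for a given regionfile is funneled to the same
++    // thread and is therefore totally ordered without any additional locking.
++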
++ /**
++     * Shuts down the I/O executor(s). Waits for all tasks to complete if specified.
++ * Tasks queued during this call might not be accepted, and tasks queued after will not be accepted.
++ *
++ * @param wait Whether to wait until all tasks have completed.
++ */
++ public static void close(final boolean wait) {
++ for (int i = 0, len = threads.length; i < len; ++i) {
++ threads[i].close(false, true);
++ }
++ if (wait) {
++ RegionFileIOThread.flush();
++ }
++ }
++
++ public static long[] getExecutedTasks() {
++ final long[] ret = new long[threads.length];
++ for (int i = 0, len = threads.length; i < len; ++i) {
++ ret[i] = threads[i].getTotalTasksExecuted();
++ }
++
++ return ret;
++ }
++
++ public static long[] getTasksScheduled() {
++ final long[] ret = new long[threads.length];
++ for (int i = 0, len = threads.length; i < len; ++i) {
++ ret[i] = threads[i].getTotalTasksScheduled();
++ }
++ return ret;
++ }
++
++ public static void flush() {
++ for (int i = 0, len = threads.length; i < len; ++i) {
++ threads[i].waitUntilAllExecuted();
++ }
++ }
++
++ public static void partialFlush(final int totalTasksRemaining) {
++ long failures = 1L; // start out at 0.25ms
++
++ for (;;) {
++ final long[] executed = getExecutedTasks();
++ final long[] scheduled = getTasksScheduled();
++
++ long sum = 0;
++ for (int i = 0; i < executed.length; ++i) {
++ sum += scheduled[i] - executed[i];
++ }
++
++ if (sum <= totalTasksRemaining) {
++ break;
++ }
++
++            failures = ConcurrentUtil.linearLongBackoff(failures, 250_000L, 5_000_000L); // 250us step, 5ms max
++ }
++ }
++
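++    // partialFlush() is a soft flush: instead of draining everything like
++    // flush(), it spins with a linearly growing backoff until the global backlog
++    // (scheduled minus executed, summed over all threads) drops to the threshold.
++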
++ /**
++ * Inits the executor with the specified number of threads.
++ *
++ * @param threads Specified number of threads.
++ */
++ public static void init(final int threads) {
++ synchronized (INIT_LOCK) {
++ if (RegionFileIOThread.threads != null) {
++ throw new IllegalStateException("Already initialised threads");
++ }
++
++ RegionFileIOThread.threads = new RegionFileIOThread[threads];
++
++ for (int i = 0; i < threads; ++i) {
++ RegionFileIOThread.threads[i] = new RegionFileIOThread(i);
++ RegionFileIOThread.threads[i].start();
++ }
++ }
++ }
++
++ private RegionFileIOThread(final int threadNumber) {
++ super(new PrioritisedThreadedTaskQueue(), (int)(1.0e6)); // 1.0ms spinwait time
++ this.setName("RegionFile I/O Thread #" + threadNumber);
++ this.setPriority(Thread.NORM_PRIORITY - 2); // we keep priority close to normal because threads can wait on us
++ this.setUncaughtExceptionHandler((final Thread thread, final Throwable thr) -> {
++ LOGGER.error("Uncaught exception thrown from I/O thread, report this! Thread: " + thread.getName(), thr);
++ });
++ }
++
++ /**
++ * Returns whether the current thread is a regionfile I/O executor.
++ * @return Whether the current thread is a regionfile I/O executor.
++ */
++ public static boolean isRegionFileThread() {
++ return Thread.currentThread() instanceof RegionFileIOThread;
++ }
++
++ /**
++     * Returns the priority associated with blocking I/O based on the current thread. The goal is to prevent
++     * dumb plugins from taking priority away from threads we consider crucial.
++     * @return The priority to use with blocking I/O on the current thread.
++ */
++ public static PrioritisedExecutor.Priority getIOBlockingPriorityForCurrentThread() {
++ if (TickThread.isTickThread()) {
++ return PrioritisedExecutor.Priority.BLOCKING;
++ }
++ return PrioritisedExecutor.Priority.HIGHEST;
++ }
++
++ /**
++ * Returns the current {@code CompoundTag} pending for write for the specified chunk and regionfile type.
++ * Note that this does not copy the result, so do not modify the result returned.
++ *
++ * @param world Specified world.
++ * @param chunkX Specified chunk x.
++ * @param chunkZ Specified chunk z.
++ * @param type Specified regionfile type.
++ *
++     * @return The compound tag associated with the specified chunk. {@code null} if no write was pending, or if the pending write is {@code null}.
++ */
++ public static CompoundTag getPendingWrite(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
++ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
++ return thread.getPendingWriteInternal(world, chunkX, chunkZ, type);
++ }
++
++ CompoundTag getPendingWriteInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
++ final ChunkDataController taskController = this.getControllerFor(world, type);
++        final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
++
++ if (task == null) {
++ return null;
++ }
++
++ final CompoundTag ret = task.inProgressWrite;
++
++ return ret == ChunkDataTask.NOTHING_TO_WRITE ? null : ret;
++ }
++
++ /**
++ * Returns the priority for the specified regionfile type for the specified chunk.
++ * @param world Specified world.
++ * @param chunkX Specified chunk x.
++ * @param chunkZ Specified chunk z.
++ * @param type Specified regionfile type.
++ * @return The priority for the chunk
++ */
++ public static PrioritisedExecutor.Priority getPriority(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
++ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
++ return thread.getPriorityInternal(world, chunkX, chunkZ, type);
++ }
++
++ PrioritisedExecutor.Priority getPriorityInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
++ final ChunkDataController taskController = this.getControllerFor(world, type);
++        final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
++
++ if (task == null) {
++ return PrioritisedExecutor.Priority.COMPLETING;
++ }
++
++ return task.prioritisedTask.getPriority();
++ }
++
++ /**
++ * Sets the priority for all regionfile types for the specified chunk. Note that great care should
++ * be taken using this method, as there can be multiple tasks tied to the same chunk that want different
++ * priorities.
++ *
++ * @param world Specified world.
++ * @param chunkX Specified chunk x.
++ * @param chunkZ Specified chunk z.
++ * @param priority New priority.
++ *
++ * @see #raisePriority(ServerLevel, int, int, Priority)
++ * @see #raisePriority(ServerLevel, int, int, RegionFileType, Priority)
++ * @see #lowerPriority(ServerLevel, int, int, Priority)
++ * @see #lowerPriority(ServerLevel, int, int, RegionFileType, Priority)
++ */
++ public static void setPriority(final ServerLevel world, final int chunkX, final int chunkZ,
++ final PrioritisedExecutor.Priority priority) {
++ for (final RegionFileType type : CACHED_REGIONFILE_TYPES) {
++ RegionFileIOThread.setPriority(world, chunkX, chunkZ, type, priority);
++ }
++ }
++
++ /**
++ * Sets the priority for the specified regionfile type for the specified chunk. Note that great care should
++ * be taken using this method, as there can be multiple tasks tied to the same chunk that want different
++ * priorities.
++ *
++ * @param world Specified world.
++ * @param chunkX Specified chunk x.
++ * @param chunkZ Specified chunk z.
++ * @param type Specified regionfile type.
++ * @param priority New priority.
++ *
++ * @see #raisePriority(ServerLevel, int, int, Priority)
++ * @see #raisePriority(ServerLevel, int, int, RegionFileType, Priority)
++ * @see #lowerPriority(ServerLevel, int, int, Priority)
++ * @see #lowerPriority(ServerLevel, int, int, RegionFileType, Priority)
++ */
++ public static void setPriority(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
++ final PrioritisedExecutor.Priority priority) {
++ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
++ thread.setPriorityInternal(world, chunkX, chunkZ, type, priority);
++ }
++
++ void setPriorityInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
++ final PrioritisedExecutor.Priority priority) {
++ final ChunkDataController taskController = this.getControllerFor(world, type);
++        final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
++
++ if (task != null) {
++ task.prioritisedTask.setPriority(priority);
++ }
++ }
++
++ /**
++ * Raises the priority for all regionfile types for the specified chunk.
++ *
++ * @param world Specified world.
++ * @param chunkX Specified chunk x.
++ * @param chunkZ Specified chunk z.
++ * @param priority New priority.
++ *
++ * @see #setPriority(ServerLevel, int, int, Priority)
++ * @see #setPriority(ServerLevel, int, int, RegionFileType, Priority)
++ * @see #lowerPriority(ServerLevel, int, int, Priority)
++ * @see #lowerPriority(ServerLevel, int, int, RegionFileType, Priority)
++ */
++ public static void raisePriority(final ServerLevel world, final int chunkX, final int chunkZ,
++ final PrioritisedExecutor.Priority priority) {
++ for (final RegionFileType type : CACHED_REGIONFILE_TYPES) {
++ RegionFileIOThread.raisePriority(world, chunkX, chunkZ, type, priority);
++ }
++ }
++
++ /**
++ * Raises the priority for the specified regionfile type for the specified chunk.
++ *
++ * @param world Specified world.
++ * @param chunkX Specified chunk x.
++ * @param chunkZ Specified chunk z.
++ * @param type Specified regionfile type.
++ * @param priority New priority.
++ *
++ * @see #setPriority(ServerLevel, int, int, Priority)
++ * @see #setPriority(ServerLevel, int, int, RegionFileType, Priority)
++ * @see #lowerPriority(ServerLevel, int, int, Priority)
++ * @see #lowerPriority(ServerLevel, int, int, RegionFileType, Priority)
++ */
++ public static void raisePriority(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
++ final PrioritisedExecutor.Priority priority) {
++ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
++ thread.raisePriorityInternal(world, chunkX, chunkZ, type, priority);
++ }
++
++ void raisePriorityInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
++ final PrioritisedExecutor.Priority priority) {
++ final ChunkDataController taskController = this.getControllerFor(world, type);
++        final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
++
++ if (task != null) {
++ task.prioritisedTask.raisePriority(priority);
++ }
++ }
++
++ /**
++ * Lowers the priority for all regionfile types for the specified chunk.
++ *
++ * @param world Specified world.
++ * @param chunkX Specified chunk x.
++ * @param chunkZ Specified chunk z.
++ * @param priority New priority.
++ *
++ * @see #raisePriority(ServerLevel, int, int, Priority)
++ * @see #raisePriority(ServerLevel, int, int, RegionFileType, Priority)
++ * @see #setPriority(ServerLevel, int, int, Priority)
++ * @see #setPriority(ServerLevel, int, int, RegionFileType, Priority)
++ */
++ public static void lowerPriority(final ServerLevel world, final int chunkX, final int chunkZ,
++ final PrioritisedExecutor.Priority priority) {
++ for (final RegionFileType type : CACHED_REGIONFILE_TYPES) {
++ RegionFileIOThread.lowerPriority(world, chunkX, chunkZ, type, priority);
++ }
++ }
++
++ /**
++ * Lowers the priority for the specified regionfile type for the specified chunk.
++ *
++ * @param world Specified world.
++ * @param chunkX Specified chunk x.
++ * @param chunkZ Specified chunk z.
++ * @param type Specified regionfile type.
++ * @param priority New priority.
++ *
++ * @see #raisePriority(ServerLevel, int, int, Priority)
++ * @see #raisePriority(ServerLevel, int, int, RegionFileType, Priority)
++ * @see #setPriority(ServerLevel, int, int, Priority)
++ * @see #setPriority(ServerLevel, int, int, RegionFileType, Priority)
++ */
++ public static void lowerPriority(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
++ final PrioritisedExecutor.Priority priority) {
++ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
++ thread.lowerPriorityInternal(world, chunkX, chunkZ, type, priority);
++ }
++
++ void lowerPriorityInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
++ final PrioritisedExecutor.Priority priority) {
++ final ChunkDataController taskController = this.getControllerFor(world, type);
++        final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
++
++ if (task != null) {
++ task.prioritisedTask.lowerPriority(priority);
++ }
++ }
++
++ /**
++ * Schedules the chunk data to be written asynchronously.
++ * <p>
++ * Impl notes:
++ * <ul>
++ * <li>
++     * This function presumes that a chunk load for the coordinates is not initiated while this function is executing (any time after is OK). This means
++ * saves must be scheduled before a chunk is unloaded.
++ * </li>
++ * <li>
++ * Writes may be called concurrently, although only the "later" write will go through.
++ * </li>
++ * </ul>
++ *
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param data Chunk's data
++ * @param type The regionfile type to write to.
++ *
++ * @throws IllegalStateException If the file io thread has shutdown.
++ */
++ public static void scheduleSave(final ServerLevel world, final int chunkX, final int chunkZ, final CompoundTag data,
++ final RegionFileType type) {
++ RegionFileIOThread.scheduleSave(world, chunkX, chunkZ, data, type, PrioritisedExecutor.Priority.NORMAL);
++ }
++
++ /**
++ * Schedules the chunk data to be written asynchronously.
++ * <p>
++ * Impl notes:
++ * <ul>
++ * <li>
++     * This function presumes that a chunk load for the coordinates is not initiated while this function is executing (any time after is OK). This means
++ * saves must be scheduled before a chunk is unloaded.
++ * </li>
++ * <li>
++ * Writes may be called concurrently, although only the "later" write will go through.
++ * </li>
++ * </ul>
++ *
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param data Chunk's data
++ * @param type The regionfile type to write to.
++ * @param priority The minimum priority to schedule at.
++ *
++ * @throws IllegalStateException If the file io thread has shutdown.
++ */
++ public static void scheduleSave(final ServerLevel world, final int chunkX, final int chunkZ, final CompoundTag data,
++ final RegionFileType type, final PrioritisedExecutor.Priority priority) {
++ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
++ thread.scheduleSaveInternal(world, chunkX, chunkZ, data, type, priority);
++ }
++
++ void scheduleSaveInternal(final ServerLevel world, final int chunkX, final int chunkZ, final CompoundTag data,
++ final RegionFileType type, final PrioritisedExecutor.Priority priority) {
++ final ChunkDataController taskController = this.getControllerFor(world, type);
++
++ final boolean[] created = new boolean[1];
++ final ChunkCoordinate key = new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ));
++ final ChunkDataTask task = taskController.tasks.compute(key, (final ChunkCoordinate keyInMap, final ChunkDataTask taskRunning) -> {
++ if (taskRunning == null || taskRunning.failedWrite) {
++ // no task is scheduled or the previous write failed - meaning we need to overwrite it
++
++ // create task
++ final ChunkDataTask newTask = new ChunkDataTask(world, chunkX, chunkZ, taskController, RegionFileIOThread.this, priority);
++ newTask.inProgressWrite = data;
++ created[0] = true;
++
++ return newTask;
++ }
++
++ taskRunning.inProgressWrite = data;
++
++ return taskRunning;
++ });
++
++ if (created[0]) {
++ task.prioritisedTask.queue();
++ } else {
++ task.prioritisedTask.raisePriority(priority);
++ }
++ }
++
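++    // Usage sketch (illustrative only; the coordinates and NBT are placeholders):
++    // a caller that has just serialised a chunk hands the data off and returns
++    // immediately. The compute() above guarantees at most one queued task per
++    // chunk - a later save simply replaces the pending inProgressWrite.
++    //
++    //   RegionFileIOThread.scheduleSave(world, chunkX, chunkZ, nbt,
++    //       RegionFileType.CHUNK_DATA);
++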
++ /**
++ * Schedules a load to be executed asynchronously. This task will load all regionfile types, and then call
++ * {@code onComplete}. This is a bulk load operation, see {@link #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)}
++ * for single load.
++ * <p>
++ * Impl notes:
++ * <ul>
++ * <li>
++ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
++ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
++ * data is undefined behaviour, and can cause deadlock.
++ * </li>
++ * </ul>
++ *
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param onComplete Consumer to execute once this task has completed
++ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
++ * of this call.
++ *
++ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
++ *
++ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)
++ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)
++ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, RegionFileType...)
++ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, Priority, RegionFileType...)
++ */
++ public static Cancellable loadAllChunkData(final ServerLevel world, final int chunkX, final int chunkZ,
++ final Consumer<RegionFileData> onComplete, final boolean intendingToBlock) {
++ return RegionFileIOThread.loadAllChunkData(world, chunkX, chunkZ, onComplete, intendingToBlock, PrioritisedExecutor.Priority.NORMAL);
++ }
++
++ /**
++ * Schedules a load to be executed asynchronously. This task will load all regionfile types, and then call
++ * {@code onComplete}. This is a bulk load operation, see {@link #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)}
++ * for single load.
++ * <p>
++ * Impl notes:
++ * <ul>
++ * <li>
++ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
++ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
++ * data is undefined behaviour, and can cause deadlock.
++ * </li>
++ * </ul>
++ *
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param onComplete Consumer to execute once this task has completed
++ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
++ * of this call.
++ * @param priority The minimum priority to load the data at.
++ *
++ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
++ *
++ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)
++ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)
++ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, RegionFileType...)
++ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, Priority, RegionFileType...)
++ */
++ public static Cancellable loadAllChunkData(final ServerLevel world, final int chunkX, final int chunkZ,
++ final Consumer<RegionFileData> onComplete, final boolean intendingToBlock,
++ final PrioritisedExecutor.Priority priority) {
++ return RegionFileIOThread.loadChunkData(world, chunkX, chunkZ, onComplete, intendingToBlock, priority, CACHED_REGIONFILE_TYPES);
++ }
++
++ /**
++ * Schedules a load to be executed asynchronously. This task will load data for the specified regionfile type(s), and
++ * then call {@code onComplete}. This is a bulk load operation, see {@link #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)}
++ * for single load.
++ * <p>
++ * Impl notes:
++ * <ul>
++ * <li>
++ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
++ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
++ * data is undefined behaviour, and can cause deadlock.
++ * </li>
++ * </ul>
++ *
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param onComplete Consumer to execute once this task has completed
++ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
++ * of this call.
++ * @param types The regionfile type(s) to load.
++ *
++ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
++ *
++ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)
++ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)
++ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean)
++ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean, Priority)
++ */
++ public static Cancellable loadChunkData(final ServerLevel world, final int chunkX, final int chunkZ,
++ final Consumer<RegionFileData> onComplete, final boolean intendingToBlock,
++ final RegionFileType... types) {
++ return RegionFileIOThread.loadChunkData(world, chunkX, chunkZ, onComplete, intendingToBlock, PrioritisedExecutor.Priority.NORMAL, types);
++ }
++
++ /**
++ * Schedules a load to be executed asynchronously. This task will load data for the specified regionfile type(s), and
++ * then call {@code onComplete}. This is a bulk load operation, see {@link #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)}
++ * for single load.
++ * <p>
++ * Impl notes:
++ * <ul>
++ * <li>
++ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
++ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
++ * data is undefined behaviour, and can cause deadlock.
++ * </li>
++ * </ul>
++ *
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param onComplete Consumer to execute once this task has completed
++ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
++ * of this call.
++ * @param types The regionfile type(s) to load.
++ * @param priority The minimum priority to load the data at.
++ *
++ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
++ *
++ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)
++ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)
++ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean)
++ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean, Priority)
++ */
++ public static Cancellable loadChunkData(final ServerLevel world, final int chunkX, final int chunkZ,
++ final Consumer<RegionFileData> onComplete, final boolean intendingToBlock,
++ final PrioritisedExecutor.Priority priority, final RegionFileType... types) {
++ if (types == null) {
++ throw new NullPointerException("Types cannot be null");
++ }
++ if (types.length == 0) {
++ throw new IllegalArgumentException("Types cannot be empty");
++ }
++
++ final RegionFileData ret = new RegionFileData();
++
++ final Cancellable[] reads = new CancellableRead[types.length];
++ final AtomicInteger completions = new AtomicInteger();
++ final int expectedCompletions = types.length;
++
++ for (int i = 0; i < expectedCompletions; ++i) {
++ final RegionFileType type = types[i];
++ reads[i] = RegionFileIOThread.loadDataAsync(world, chunkX, chunkZ, type,
++ (final CompoundTag data, final Throwable throwable) -> {
++ if (throwable != null) {
++ ret.setThrowable(type, throwable);
++ } else {
++ ret.setData(type, data);
++ }
++
++ if (completions.incrementAndGet() == expectedCompletions) {
++ onComplete.accept(ret);
++ }
++ }, intendingToBlock, priority);
++ }
++
++ return new CancellableReads(reads);
++ }
++
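++    // Usage sketch (illustrative only): load chunk and POI data together; the
++    // shared AtomicInteger above fires onComplete exactly once, when the last of
++    // the two reads finishes. Callers should check getThrowable() before
++    // trusting getData(), since a failed read stores a Throwable and getData()
++    // then returns null.
++    //
++    //   RegionFileIOThread.loadChunkData(world, chunkX, chunkZ, (data) -> {
++    //       final CompoundTag chunkTag = data.getData(RegionFileType.CHUNK_DATA);
++    //       final CompoundTag poiTag   = data.getData(RegionFileType.POI_DATA);
++    //       // ... hand off to the chunk load task ...
++    //   }, false, RegionFileType.CHUNK_DATA, RegionFileType.POI_DATA);
++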
++ /**
++ * Schedules a load to be executed asynchronously. This task will load the specified regionfile type, and then call
++ * {@code onComplete}.
++ * <p>
++ * Impl notes:
++ * <ul>
++ * <li>
++ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
++ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
++ * data is undefined behaviour, and can cause deadlock.
++ * </li>
++ * </ul>
++ *
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param onComplete Consumer to execute once this task has completed
++ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
++ * of this call.
++ *
++ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
++ *
++ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, RegionFileType...)
++ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, Priority, RegionFileType...)
++ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean)
++ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean, Priority)
++ */
++ public static Cancellable loadDataAsync(final ServerLevel world, final int chunkX, final int chunkZ,
++ final RegionFileType type, final BiConsumer<CompoundTag, Throwable> onComplete,
++ final boolean intendingToBlock) {
++ return RegionFileIOThread.loadDataAsync(world, chunkX, chunkZ, type, onComplete, intendingToBlock, PrioritisedExecutor.Priority.NORMAL);
++ }
++
++ /**
++ * Schedules a load to be executed asynchronously. This task will load the specified regionfile type, and then call
++ * {@code onComplete}.
++ * <p>
++ * Impl notes:
++ * <ul>
++ * <li>
++ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
++ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
++ * data is undefined behaviour, and can cause deadlock.
++ * </li>
++ * </ul>
++ *
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param onComplete Consumer to execute once this task has completed
++ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
++ * of this call.
++ * @param priority Minimum priority to load the data at.
++ *
++ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
++ *
++ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, RegionFileType...)
++ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, Priority, RegionFileType...)
++ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean)
++ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean, Priority)
++ */
++ public static Cancellable loadDataAsync(final ServerLevel world, final int chunkX, final int chunkZ,
++ final RegionFileType type, final BiConsumer<CompoundTag, Throwable> onComplete,
++ final boolean intendingToBlock, final PrioritisedExecutor.Priority priority) {
++ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
++ return thread.loadDataAsyncInternal(world, chunkX, chunkZ, type, onComplete, intendingToBlock, priority);
++ }
++
++ private static Boolean doesRegionFileExist(final int chunkX, final int chunkZ, final boolean intendingToBlock,
++ final ChunkDataController taskController) {
++ final ChunkPos chunkPos = new ChunkPos(chunkX, chunkZ);
++ if (intendingToBlock) {
++ return taskController.computeForRegionFile(chunkX, chunkZ, true, (final RegionFile file) -> {
++ if (file == null) { // null if no regionfile exists
++ return Boolean.FALSE;
++ }
++
++ return file.hasChunk(chunkPos) ? Boolean.TRUE : Boolean.FALSE;
++ });
++ } else {
++ // first check if the region file for sure does not exist
++ if (taskController.doesRegionFileNotExist(chunkX, chunkZ)) {
++ return Boolean.FALSE;
++ } // else: it either exists or is not known, fall back to checking the loaded region file
++
++ return taskController.computeForRegionFileIfLoaded(chunkX, chunkZ, (final RegionFile file) -> {
++ if (file == null) { // null if not loaded
++ // not sure at this point, let the I/O thread figure it out
++ return Boolean.TRUE;
++ }
++
++ return file.hasChunk(chunkPos) ? Boolean.TRUE : Boolean.FALSE;
++ });
++ }
++ }
++
++ Cancellable loadDataAsyncInternal(final ServerLevel world, final int chunkX, final int chunkZ,
++ final RegionFileType type, final BiConsumer<CompoundTag, Throwable> onComplete,
++ final boolean intendingToBlock, final PrioritisedExecutor.Priority priority) {
++ final ChunkDataController taskController = this.getControllerFor(world, type);
++
++ final ImmediateCallbackCompletion callbackInfo = new ImmediateCallbackCompletion();
++
++ final ChunkCoordinate key = new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ));
++ final BiFunction<ChunkCoordinate, ChunkDataTask, ChunkDataTask> compute = (final ChunkCoordinate keyInMap, final ChunkDataTask running) -> {
++ if (running == null) {
++ // not scheduled
++
++ if (callbackInfo.regionFileCalculation == null) {
++ // caller will compute this outside of compute(), to avoid holding the bin lock
++ callbackInfo.needsRegionFileTest = true;
++ return null;
++ }
++
++ if (callbackInfo.regionFileCalculation == Boolean.FALSE) {
++ // not on disk
++ callbackInfo.data = null;
++ callbackInfo.throwable = null;
++ callbackInfo.completeNow = true;
++ return null;
++ }
++
++ // set up task
++ final ChunkDataTask newTask = new ChunkDataTask(
++ world, chunkX, chunkZ, taskController, RegionFileIOThread.this, priority
++ );
++ newTask.inProgressRead = new RegionFileIOThread.InProgressRead();
++ newTask.inProgressRead.waiters.add(onComplete);
++
++ callbackInfo.tasksNeedsScheduling = true;
++ return newTask;
++ }
++
++ final CompoundTag pendingWrite = running.inProgressWrite;
++
++ if (pendingWrite == ChunkDataTask.NOTHING_TO_WRITE) {
++ // need to add to waiters here, because the regionfile thread will use compute() to lock and check for cancellations
++ if (!running.inProgressRead.addToWaiters(onComplete)) {
++ callbackInfo.data = running.inProgressRead.value;
++ callbackInfo.throwable = running.inProgressRead.throwable;
++ callbackInfo.completeNow = true;
++ }
++ return running;
++ }
++ // using the result sync here - don't bump priority
++
++ // at this stage we have to use the in progress write's data to avoid an order issue
++ callbackInfo.data = pendingWrite;
++ callbackInfo.throwable = null;
++ callbackInfo.completeNow = true;
++ return running;
++ };
++
++ ChunkDataTask curr = taskController.tasks.get(key);
++ if (curr == null) {
++ callbackInfo.regionFileCalculation = doesRegionFileExist(chunkX, chunkZ, intendingToBlock, taskController);
++ }
++ ChunkDataTask ret = taskController.tasks.compute(key, compute);
++ if (callbackInfo.needsRegionFileTest) {
++            // the task existed when we peeked at the map above, but it had been removed
++            // by the time compute() ran - so perform the regionfile existence check now
++            callbackInfo.regionFileCalculation = doesRegionFileExist(chunkX, chunkZ, intendingToBlock, taskController);
++            // with regionFileCalculation set, compute() can make progress this time
++            ret = taskController.tasks.compute(key, compute);
++ }
++
++ // needs to be scheduled
++ if (callbackInfo.tasksNeedsScheduling) {
++ ret.prioritisedTask.queue();
++ } else if (callbackInfo.completeNow) {
++ try {
++ onComplete.accept(callbackInfo.data, callbackInfo.throwable);
++ } catch (final ThreadDeath thr) {
++ throw thr;
++ } catch (final Throwable thr) {
++ LOGGER.error("Callback " + ConcurrentUtil.genericToString(onComplete) + " synchronously failed to handle chunk data for task " + ret.toString(), thr);
++ }
++ } else {
++ // we're waiting on a task we didn't schedule, so raise its priority to what we want
++ ret.prioritisedTask.raisePriority(priority);
++ }
++
++ return new CancellableRead(onComplete, ret);
++ }
++
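++    // To summarise the compute() dance above, a load resolves in one of three
++    // ways: (1) no task exists and the regionfile probe proves the chunk cannot
++    // be on disk, so the callback completes immediately with null; (2) no task
++    // exists but the data may be on disk, so a read task is created and queued;
++    // (3) a task already exists, in which case we either join its in-progress
++    // read as a waiter or complete immediately with its pending write - the
++    // pending write is by definition the newest version of the data.
++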
++ /**
++ * Schedules a load task to be executed asynchronously, and blocks on that task.
++ *
++ * @param world Chunk's world
++ * @param chunkX Chunk's x coordinate
++ * @param chunkZ Chunk's z coordinate
++ * @param type Regionfile type
++ * @param priority Minimum priority to load the data at.
++ *
++ * @return The chunk data for the chunk. Note that a {@code null} result means the chunk or regionfile does not exist on disk.
++ *
++ * @throws IOException If the load fails for any reason
++ */
++ public static CompoundTag loadData(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
++ final PrioritisedExecutor.Priority priority) throws IOException {
++ final CompletableFuture<CompoundTag> ret = new CompletableFuture<>();
++
++ RegionFileIOThread.loadDataAsync(world, chunkX, chunkZ, type, (final CompoundTag compound, final Throwable thr) -> {
++ if (thr != null) {
++ ret.completeExceptionally(thr);
++ } else {
++ ret.complete(compound);
++ }
++ }, true, priority);
++
++ try {
++ return ret.join();
++ } catch (final CompletionException ex) {
++ throw new IOException(ex);
++ }
++ }
++
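++    // Usage sketch (illustrative only): a synchronous load from a thread that is
++    // about to block on the result, paired with the blocking-aware priority
++    // helper defined above. Remember that a null return means "not on disk".
++    //
++    //   final CompoundTag tag = RegionFileIOThread.loadData(world, chunkX, chunkZ,
++    //       RegionFileType.ENTITY_DATA,
++    //       RegionFileIOThread.getIOBlockingPriorityForCurrentThread()); // throws IOException on failure
++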
++ private static final class ImmediateCallbackCompletion {
++
++ public CompoundTag data;
++ public Throwable throwable;
++ public boolean completeNow;
++ public boolean tasksNeedsScheduling;
++ public boolean needsRegionFileTest;
++ public Boolean regionFileCalculation;
++
++ }
++
++ static final class CancellableRead implements Cancellable {
++
++ private BiConsumer<CompoundTag, Throwable> callback;
++ private RegionFileIOThread.ChunkDataTask task;
++
++ CancellableRead(final BiConsumer<CompoundTag, Throwable> callback, final RegionFileIOThread.ChunkDataTask task) {
++ this.callback = callback;
++ this.task = task;
++ }
++
++ @Override
++ public boolean cancel() {
++ final BiConsumer<CompoundTag, Throwable> callback = this.callback;
++ final RegionFileIOThread.ChunkDataTask task = this.task;
++
++ if (callback == null || task == null) {
++ return false;
++ }
++
++ this.callback = null;
++ this.task = null;
++
++ final RegionFileIOThread.InProgressRead read = task.inProgressRead;
++
++            // read can be null if no read was scheduled (i.e. no regionfile existed, or the chunk was not present in it)
++ return (read != null && read.waiters.remove(callback));
++ }
++ }
++
++ static final class CancellableReads implements Cancellable {
++
++ private Cancellable[] reads;
++
++ protected static final VarHandle READS_HANDLE = ConcurrentUtil.getVarHandle(CancellableReads.class, "reads", Cancellable[].class);
++
++ CancellableReads(final Cancellable[] reads) {
++ this.reads = reads;
++ }
++
++ @Override
++ public boolean cancel() {
++ final Cancellable[] reads = (Cancellable[])READS_HANDLE.getAndSet((CancellableReads)this, (Cancellable[])null);
++
++ if (reads == null) {
++ return false;
++ }
++
++ boolean ret = false;
++
++ for (final Cancellable read : reads) {
++ ret |= read.cancel();
++ }
++
++ return ret;
++ }
++ }
++
++ static final class InProgressRead {
++
++ private static final Logger LOGGER = LogUtils.getClassLogger();
++
++ CompoundTag value;
++ Throwable throwable;
++ final MultiThreadedQueue<BiConsumer<CompoundTag, Throwable>> waiters = new MultiThreadedQueue<>();
++
++        // returns false if already completed (in which case the callback is not invoked), true if the callback was added
++ boolean addToWaiters(final BiConsumer<CompoundTag, Throwable> callback) {
++ return this.waiters.add(callback);
++ }
++
++ void complete(final RegionFileIOThread.ChunkDataTask task, final CompoundTag value, final Throwable throwable) {
++ this.value = value;
++ this.throwable = throwable;
++
++ BiConsumer<CompoundTag, Throwable> consumer;
++ while ((consumer = this.waiters.pollOrBlockAdds()) != null) {
++ try {
++ consumer.accept(value, throwable);
++ } catch (final ThreadDeath thr) {
++ throw thr;
++ } catch (final Throwable thr) {
++ LOGGER.error("Callback " + ConcurrentUtil.genericToString(consumer) + " failed to handle chunk data for task " + task.toString(), thr);
++ }
++ }
++ }
++ }
++
++ /**
++ * Class exists to replace {@link Long} usages as keys inside non-fastutil hashtables. The hash for some Long {@code x}
++ * is defined as {@code (x >>> 32) ^ x}. Chunk keys as long values are defined as {@code ((chunkX & 0xFFFFFFFFL) | (chunkZ << 32))},
++ * which means the hashcode as a Long value will be {@code chunkX ^ chunkZ}. Given that most chunks are created within a radius around players,
++ * this will lead to many hash collisions. This class therefore uses a better hashing algorithm so that usage of
++ * non-fastutil collections is not degraded.
++ */
++ public static final class ChunkCoordinate implements Comparable<ChunkCoordinate> {
++
++ public final long key;
++
++ public ChunkCoordinate(final long key) {
++ this.key = key;
++ }
++
++ @Override
++ public int hashCode() {
++ return (int)HashCommon.mix(this.key);
++ }
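++
++ // Worked example of the collision described in the class javadoc (illustrative values):
++ // chunks (3,5) and (5,3) have distinct keys, yet Long#hashCode yields 3 ^ 5 == 6 for both,
++ // and every diagonal chunk (n,n) collapses to n ^ n == 0. HashCommon#mix instead runs the
++ // full 64-bit key through a finaliser, spreading such keys across distinct buckets.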
++
++ @Override
++ public boolean equals(final Object obj) {
++ if (this == obj) {
++ return true;
++ }
++
++ if (!(obj instanceof ChunkCoordinate)) {
++ return false;
++ }
++
++ final ChunkCoordinate other = (ChunkCoordinate)obj;
++
++ return this.key == other.key;
++ }
++
++ // This class is intended for HashMap/ConcurrentHashMap usage, which do treeify bin nodes if the chain
++ // is too large. So we should implement compareTo to help.
++ @Override
++ public int compareTo(final RegionFileIOThread.ChunkCoordinate other) {
++ return Long.compare(this.key, other.key);
++ }
++
++ @Override
++ public String toString() {
++ return new ChunkPos(this.key).toString();
++ }
++ }
++
++ public static abstract class ChunkDataController {
++
++ // ConcurrentHashMap synchronizes per chain (bin), so use a large capacity and a low load factor to reduce the chance of tasks' hashes colliding.
++ protected final ConcurrentHashMap<ChunkCoordinate, ChunkDataTask> tasks = new ConcurrentHashMap<>(8192, 0.10f);
++
++ public final RegionFileType type;
++
++ public ChunkDataController(final RegionFileType type) {
++ this.type = type;
++ }
++
++ public abstract RegionFileStorage getCache();
++
++ public abstract void writeData(final int chunkX, final int chunkZ, final CompoundTag compound) throws IOException;
++
++ public abstract CompoundTag readData(final int chunkX, final int chunkZ) throws IOException;
++
++ public boolean hasTasks() {
++ return !this.tasks.isEmpty();
++ }
++
++ public boolean doesRegionFileNotExist(final int chunkX, final int chunkZ) {
++ return this.getCache().doesRegionFileNotExistNoIO(new ChunkPos(chunkX, chunkZ));
++ }
++
++ public <T> T computeForRegionFile(final int chunkX, final int chunkZ, final boolean existingOnly, final Function<RegionFile, T> function) {
++ final RegionFileStorage cache = this.getCache();
++ final RegionFile regionFile;
++ synchronized (cache) {
++ try {
++ regionFile = cache.getRegionFile(new ChunkPos(chunkX, chunkZ), existingOnly, true);
++ } catch (final IOException ex) {
++ throw new RuntimeException(ex);
++ }
++ }
++
++ try {
++ return function.apply(regionFile);
++ } finally {
++ if (regionFile != null) {
++ regionFile.fileLock.unlock();
++ }
++ }
++ }
++
++ public <T> T computeForRegionFileIfLoaded(final int chunkX, final int chunkZ, final Function<RegionFile, T> function) {
++ final RegionFileStorage cache = this.getCache();
++ final RegionFile regionFile;
++
++ synchronized (cache) {
++ regionFile = cache.getRegionFileIfLoaded(new ChunkPos(chunkX, chunkZ));
++ if (regionFile != null) {
++ regionFile.fileLock.lock();
++ }
++ }
++
++ try {
++ return function.apply(regionFile);
++ } finally {
++ if (regionFile != null) {
++ regionFile.fileLock.unlock();
++ }
++ }
++ }
++ }
++
++ static final class ChunkDataTask implements Runnable {
++
++ protected static final CompoundTag NOTHING_TO_WRITE = new CompoundTag();
++
++ private static final Logger LOGGER = LogUtils.getClassLogger();
++
++ RegionFileIOThread.InProgressRead inProgressRead;
++ volatile CompoundTag inProgressWrite = NOTHING_TO_WRITE; // only needs to be acquire/release
++
++ boolean failedWrite;
++
++ final ServerLevel world;
++ final int chunkX;
++ final int chunkZ;
++ final RegionFileIOThread.ChunkDataController taskController;
++
++ final PrioritisedExecutor.PrioritisedTask prioritisedTask;
++
++ /*
++ * IO thread will perform reads before writes for a given chunk x and z
++ *
++ * How reads/writes are scheduled:
++ *
++ * If read is scheduled while scheduling write, take no special action and just schedule write
++ * If read is scheduled while scheduling read and no write is scheduled, chain the read task
++ *
++ *
++ * If write is scheduled while scheduling read, use the pending write data and return immediately (so no read is scheduled)
++ * If write is scheduled while scheduling write (ignore read in progress), overwrite the in-progress write data
++ *
++ * This allows the reads and writes to act as if they occur synchronously to the thread scheduling them; however,
++ * it fails to properly propagate write failures, since later writes overwrite earlier ones
++ */
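++
++ /*
++ * Illustrative timeline of the rules above (hypothetical threads T1, T2, T3):
++ *
++ * T1 schedules write W1 -> the task now holds W1 as the pending write
++ * T2 schedules a read -> sees the pending write and completes immediately with W1, no disk read
++ * T3 schedules write W2 -> replaces the pending W1; only W2 ever reaches disk, so a failure
++ * that writing W1 would have produced is lost (the caveat noted above)
++ */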
++
++ public ChunkDataTask(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileIOThread.ChunkDataController taskController,
++ final PrioritisedExecutor executor, final PrioritisedExecutor.Priority priority) {
++ this.world = world;
++ this.chunkX = chunkX;
++ this.chunkZ = chunkZ;
++ this.taskController = taskController;
++ this.prioritisedTask = executor.createTask(this, priority);
++ }
++
++ @Override
++ public String toString() {
++ return "Task for world: '" + this.world.getWorld().getName() + "' at (" + this.chunkX + "," + this.chunkZ +
++ ") type: " + this.taskController.type.name() + ", hash: " + this.hashCode();
++ }
++
++ @Override
++ public void run() {
++ final RegionFileIOThread.InProgressRead read = this.inProgressRead;
++ final ChunkCoordinate chunkKey = new ChunkCoordinate(CoordinateUtils.getChunkKey(this.chunkX, this.chunkZ));
++
++ if (read != null) {
++ final boolean[] canRead = new boolean[] { true };
++
++ if (read.waiters.isEmpty()) {
++ // cancelled read? go to task controller to confirm
++ final ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final ChunkCoordinate keyInMap, final ChunkDataTask valueInMap) -> {
++ if (valueInMap == null) {
++ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
++ }
++ if (valueInMap != ChunkDataTask.this) {
++ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
++ }
++
++ if (!read.waiters.isEmpty()) { // as per usual IntelliJ is unable to figure out that there are concurrent accesses.
++ return valueInMap;
++ } else {
++ canRead[0] = false;
++ }
++
++ return valueInMap.inProgressWrite == NOTHING_TO_WRITE ? null : valueInMap;
++ });
++
++ if (inMap == null) {
++ // read is cancelled - and no write pending, so we're done
++ return;
++ }
++ // if there is a write in progress, we don't actually have to worry about waiters gaining new entries -
++ // the readers will just use the in progress write, so the value in canRead is good to use without
++ // further synchronisation.
++ }
++
++ if (canRead[0]) {
++ CompoundTag compound = null;
++ Throwable throwable = null;
++
++ try {
++ compound = this.taskController.readData(this.chunkX, this.chunkZ);
++ } catch (final ThreadDeath thr) {
++ throw thr;
++ } catch (final Throwable thr) {
++ throwable = thr;
++ LOGGER.error("Failed to read chunk data for task: " + this.toString(), thr);
++ }
++ read.complete(this, compound, throwable);
++ }
++ }
++
++ CompoundTag write = this.inProgressWrite;
++
++ if (write == NOTHING_TO_WRITE) {
++ final ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final ChunkCoordinate keyInMap, final ChunkDataTask valueInMap) -> {
++ if (valueInMap == null) {
++ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
++ }
++ if (valueInMap != ChunkDataTask.this) {
++ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
++ }
++ return valueInMap.inProgressWrite == NOTHING_TO_WRITE ? null : valueInMap;
++ });
++
++ if (inMap == null) {
++ return; // the compute above removed this task from the map, indicating we're done
++ } // else: inProgressWrite changed, so now we have something to write
++ }
++
++ for (;;) {
++ write = this.inProgressWrite;
++ final CompoundTag dataWritten = write;
++
++ boolean failedWrite = false;
++
++ try {
++ this.taskController.writeData(this.chunkX, this.chunkZ, write);
++ } catch (final ThreadDeath thr) {
++ throw thr;
++ } catch (final Throwable thr) {
++ if (thr instanceof RegionFileStorage.RegionFileSizeException) {
++ final int maxSize = RegionFile.MAX_CHUNK_SIZE / (1024 * 1024);
++ LOGGER.error("Chunk at (" + this.chunkX + "," + this.chunkZ + ") in '" + this.world.getWorld().getName() + "' exceeds max size of " + maxSize + "MiB, it has been deleted from disk.");
++ } else {
++ failedWrite = thr instanceof IOException;
++ LOGGER.error("Failed to write chunk data for task: " + this.toString(), thr);
++ }
++ }
++
++ final boolean finalFailWrite = failedWrite;
++ final boolean[] done = new boolean[] { false };
++
++ this.taskController.tasks.compute(chunkKey, (final ChunkCoordinate keyInMap, final ChunkDataTask valueInMap) -> {
++ if (valueInMap == null) {
++ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
++ }
++ if (valueInMap != ChunkDataTask.this) {
++ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
++ }
++ if (valueInMap.inProgressWrite == dataWritten) {
++ valueInMap.failedWrite = finalFailWrite;
++ done[0] = true;
++ // keep the data in map if we failed the write so we can try to prevent data loss
++ return finalFailWrite ? valueInMap : null;
++ }
++ // different data than expected, means we need to retry write
++ return valueInMap;
++ });
++
++ if (done[0]) {
++ return;
++ }
++
++ // fetch & write new data
++ continue;
++ }
++ }
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/light/LightQueue.java b/src/main/java/io/papermc/paper/chunk/system/light/LightQueue.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..69e9944358951bd69ff5e8b3482da1a5e4476209
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/light/LightQueue.java
+@@ -0,0 +1,283 @@
++package io.papermc.paper.chunk.system.light;
++
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.starlight.common.light.BlockStarLightEngine;
++import ca.spottedleaf.starlight.common.light.SkyStarLightEngine;
++import ca.spottedleaf.starlight.common.light.StarLightInterface;
++import io.papermc.paper.util.CoordinateUtils;
++import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;
++import it.unimi.dsi.fastutil.shorts.ShortCollection;
++import it.unimi.dsi.fastutil.shorts.ShortOpenHashSet;
++import net.minecraft.core.BlockPos;
++import net.minecraft.core.SectionPos;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.world.level.ChunkPos;
++import net.minecraft.world.level.chunk.status.ChunkStatus;
++import java.util.ArrayList;
++import java.util.HashSet;
++import java.util.List;
++import java.util.Set;
++import java.util.concurrent.CompletableFuture;
++import java.util.function.BooleanSupplier;
++
++public final class LightQueue {
++
++ protected final Long2ObjectOpenHashMap<ChunkTasks> chunkTasks = new Long2ObjectOpenHashMap<>();
++ protected final StarLightInterface manager;
++ protected final ServerLevel world;
++
++ public LightQueue(final StarLightInterface manager) {
++ this.manager = manager;
++ this.world = ((ServerLevel)manager.getWorld());
++ }
++
++ public void lowerPriority(final int chunkX, final int chunkZ, final PrioritisedExecutor.Priority priority) {
++ final ChunkTasks task;
++ synchronized (this) {
++ task = this.chunkTasks.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
++ }
++ if (task != null) {
++ task.lowerPriority(priority);
++ }
++ }
++
++ public void setPriority(final int chunkX, final int chunkZ, final PrioritisedExecutor.Priority priority) {
++ final ChunkTasks task;
++ synchronized (this) {
++ task = this.chunkTasks.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
++ }
++ if (task != null) {
++ task.setPriority(priority);
++ }
++ }
++
++ public void raisePriority(final int chunkX, final int chunkZ, final PrioritisedExecutor.Priority priority) {
++ final ChunkTasks task;
++ synchronized (this) {
++ task = this.chunkTasks.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
++ }
++ if (task != null) {
++ task.raisePriority(priority);
++ }
++ }
++
++ public PrioritisedExecutor.Priority getPriority(final int chunkX, final int chunkZ) {
++ final ChunkTasks task;
++ synchronized (this) {
++ task = this.chunkTasks.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
++ }
++ if (task != null) {
++ return task.getPriority();
++ }
++
++ return PrioritisedExecutor.Priority.COMPLETING;
++ }
++
++ public boolean isEmpty() {
++ synchronized (this) {
++ return this.chunkTasks.isEmpty();
++ }
++ }
++
++ public ChunkTasks queueBlockChange(final BlockPos pos) {
++ final ChunkTasks tasks;
++ synchronized (this) {
++ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
++ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this);
++ });
++ tasks.changedPositions.add(pos.immutable());
++ }
++
++ tasks.schedule();
++
++ return tasks;
++ }
++
++ public ChunkTasks queueSectionChange(final SectionPos pos, final boolean newEmptyValue) {
++ final ChunkTasks tasks;
++ synchronized (this) {
++ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
++ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this);
++ });
++
++ if (tasks.changedSectionSet == null) {
++ tasks.changedSectionSet = new Boolean[this.manager.maxSection - this.manager.minSection + 1];
++ }
++ tasks.changedSectionSet[pos.getY() - this.manager.minSection] = Boolean.valueOf(newEmptyValue);
++ }
++
++ tasks.schedule();
++
++ return tasks;
++ }
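++
++ // For illustration: changedSectionSet is a tri-state array indexed by (sectionY - minSection).
++ // Assuming minSection == -4 (a typical 1.18+ world), a section change at section y = 0 sets
++ // index 4 to the new empty value (TRUE/FALSE), while null entries mean "section unchanged".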
++
++ public ChunkTasks queueChunkLightTask(final ChunkPos pos, final BooleanSupplier lightTask, final PrioritisedExecutor.Priority priority) {
++ final ChunkTasks tasks;
++ synchronized (this) {
++ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
++ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this, priority);
++ });
++ if (tasks.lightTasks == null) {
++ tasks.lightTasks = new ArrayList<>();
++ }
++ tasks.lightTasks.add(lightTask);
++ }
++
++ tasks.schedule();
++
++ return tasks;
++ }
++
++ public ChunkTasks queueChunkSkylightEdgeCheck(final SectionPos pos, final ShortCollection sections) {
++ final ChunkTasks tasks;
++ synchronized (this) {
++ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
++ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this);
++ });
++
++ ShortOpenHashSet queuedEdges = tasks.queuedEdgeChecksSky;
++ if (queuedEdges == null) {
++ queuedEdges = tasks.queuedEdgeChecksSky = new ShortOpenHashSet();
++ }
++ queuedEdges.addAll(sections);
++ }
++
++ tasks.schedule();
++
++ return tasks;
++ }
++
++ public ChunkTasks queueChunkBlocklightEdgeCheck(final SectionPos pos, final ShortCollection sections) {
++ final ChunkTasks tasks;
++
++ synchronized (this) {
++ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
++ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this);
++ });
++
++ ShortOpenHashSet queuedEdges = tasks.queuedEdgeChecksBlock;
++ if (queuedEdges == null) {
++ queuedEdges = tasks.queuedEdgeChecksBlock = new ShortOpenHashSet();
++ }
++ queuedEdges.addAll(sections);
++ }
++
++ tasks.schedule();
++
++ return tasks;
++ }
++
++ public void removeChunk(final ChunkPos pos) {
++ final ChunkTasks tasks;
++ synchronized (this) {
++ tasks = this.chunkTasks.remove(CoordinateUtils.getChunkKey(pos));
++ }
++ if (tasks != null && tasks.cancel()) {
++ tasks.onComplete.complete(null);
++ }
++ }
++
++ public static final class ChunkTasks implements Runnable {
++
++ public final CompletableFuture<Void> onComplete = new CompletableFuture<>();
++ public boolean isTicketAdded;
++ public final long chunkCoordinate;
++
++ private final StarLightInterface lightEngine;
++ private final LightQueue queue;
++ private final PrioritisedExecutor.PrioritisedTask task;
++ private final Set<BlockPos> changedPositions = new HashSet<>();
++ private Boolean[] changedSectionSet;
++ private ShortOpenHashSet queuedEdgeChecksSky;
++ private ShortOpenHashSet queuedEdgeChecksBlock;
++ private List<BooleanSupplier> lightTasks;
++
++ public ChunkTasks(final long chunkCoordinate, final StarLightInterface lightEngine, final LightQueue queue) {
++ this(chunkCoordinate, lightEngine, queue, PrioritisedExecutor.Priority.NORMAL);
++ }
++
++ public ChunkTasks(final long chunkCoordinate, final StarLightInterface lightEngine, final LightQueue queue,
++ final PrioritisedExecutor.Priority priority) {
++ this.chunkCoordinate = chunkCoordinate;
++ this.lightEngine = lightEngine;
++ this.queue = queue;
++ this.task = queue.world.chunkTaskScheduler.radiusAwareScheduler.createTask(
++ CoordinateUtils.getChunkX(chunkCoordinate), CoordinateUtils.getChunkZ(chunkCoordinate),
++ ChunkStatus.LIGHT.writeRadius, this, priority
++ );
++ }
++
++ public void schedule() {
++ this.task.queue();
++ }
++
++ public boolean cancel() {
++ return this.task.cancel();
++ }
++
++ public PrioritisedExecutor.Priority getPriority() {
++ return this.task.getPriority();
++ }
++
++ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
++ this.task.lowerPriority(priority);
++ }
++
++ public void setPriority(final PrioritisedExecutor.Priority priority) {
++ this.task.setPriority(priority);
++ }
++
++ public void raisePriority(final PrioritisedExecutor.Priority priority) {
++ this.task.raisePriority(priority);
++ }
++
++ @Override
++ public void run() {
++ synchronized (this.queue) {
++ this.queue.chunkTasks.remove(this.chunkCoordinate);
++ }
++
++ boolean litChunk = false;
++ if (this.lightTasks != null) {
++ for (final BooleanSupplier run : this.lightTasks) {
++ if (run.getAsBoolean()) {
++ litChunk = true;
++ break;
++ }
++ }
++ }
++
++ final SkyStarLightEngine skyEngine = this.lightEngine.getSkyLightEngine();
++ final BlockStarLightEngine blockEngine = this.lightEngine.getBlockLightEngine();
++ try {
++ final long coordinate = this.chunkCoordinate;
++ final int chunkX = CoordinateUtils.getChunkX(coordinate);
++ final int chunkZ = CoordinateUtils.getChunkZ(coordinate);
++
++ final Set<BlockPos> positions = this.changedPositions;
++ final Boolean[] sectionChanges = this.changedSectionSet;
++
++ if (!litChunk) {
++ if (skyEngine != null && (!positions.isEmpty() || sectionChanges != null)) {
++ skyEngine.blocksChangedInChunk(this.lightEngine.getLightAccess(), chunkX, chunkZ, positions, sectionChanges);
++ }
++ if (blockEngine != null && (!positions.isEmpty() || sectionChanges != null)) {
++ blockEngine.blocksChangedInChunk(this.lightEngine.getLightAccess(), chunkX, chunkZ, positions, sectionChanges);
++ }
++
++ if (skyEngine != null && this.queuedEdgeChecksSky != null) {
++ skyEngine.checkChunkEdges(this.lightEngine.getLightAccess(), chunkX, chunkZ, this.queuedEdgeChecksSky);
++ }
++ if (blockEngine != null && this.queuedEdgeChecksBlock != null) {
++ blockEngine.checkChunkEdges(this.lightEngine.getLightAccess(), chunkX, chunkZ, this.queuedEdgeChecksBlock);
++ }
++ }
++
++ this.onComplete.complete(null);
++ } finally {
++ this.lightEngine.releaseSkyLightEngine(skyEngine);
++ this.lightEngine.releaseBlockLightEngine(blockEngine);
++ }
++ }
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/poi/PoiChunk.java b/src/main/java/io/papermc/paper/chunk/system/poi/PoiChunk.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..d72041aa814ff179e6e29a45dcd359a91d426d47
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/poi/PoiChunk.java
+@@ -0,0 +1,213 @@
++package io.papermc.paper.chunk.system.poi;
++
++import com.mojang.logging.LogUtils;
++import com.mojang.serialization.Codec;
++import com.mojang.serialization.DataResult;
++import io.papermc.paper.util.CoordinateUtils;
++import io.papermc.paper.util.TickThread;
++import io.papermc.paper.util.WorldUtil;
++import net.minecraft.SharedConstants;
++import net.minecraft.nbt.CompoundTag;
++import net.minecraft.nbt.NbtOps;
++import net.minecraft.nbt.Tag;
++import net.minecraft.resources.RegistryOps;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.world.entity.ai.village.poi.PoiManager;
++import net.minecraft.world.entity.ai.village.poi.PoiSection;
++import org.slf4j.Logger;
++
++import java.util.Optional;
++
++public final class PoiChunk {
++
++ private static final Logger LOGGER = LogUtils.getClassLogger();
++
++ public final ServerLevel world;
++ public final int chunkX;
++ public final int chunkZ;
++ public final int minSection;
++ public final int maxSection;
++
++ protected final PoiSection[] sections;
++
++ private boolean isDirty;
++ private boolean loaded;
++
++ public PoiChunk(final ServerLevel world, final int chunkX, final int chunkZ, final int minSection, final int maxSection) {
++ this(world, chunkX, chunkZ, minSection, maxSection, new PoiSection[maxSection - minSection + 1]);
++ }
++
++ public PoiChunk(final ServerLevel world, final int chunkX, final int chunkZ, final int minSection, final int maxSection, final PoiSection[] sections) {
++ this.world = world;
++ this.chunkX = chunkX;
++ this.chunkZ = chunkZ;
++ this.minSection = minSection;
++ this.maxSection = maxSection;
++ this.sections = sections;
++ if (this.sections.length != (maxSection - minSection + 1)) {
++ throw new IllegalStateException("Incorrect length used, expected " + (maxSection - minSection + 1) + ", got " + this.sections.length);
++ }
++ }
++
++ public void load() {
++ TickThread.ensureTickThread(this.world, this.chunkX, this.chunkZ, "Loading in poi chunk off-main");
++ if (this.loaded) {
++ return;
++ }
++ this.loaded = true;
++ this.world.chunkSource.getPoiManager().loadInPoiChunk(this);
++ }
++
++ public boolean isLoaded() {
++ return this.loaded;
++ }
++
++ public boolean isEmpty() {
++ for (final PoiSection section : this.sections) {
++ if (section != null && !section.isEmpty()) {
++ return false;
++ }
++ }
++
++ return true;
++ }
++
++ public PoiSection getOrCreateSection(final int chunkY) {
++ if (chunkY >= this.minSection && chunkY <= this.maxSection) {
++ final int idx = chunkY - this.minSection;
++ final PoiSection ret = this.sections[idx];
++ if (ret != null) {
++ return ret;
++ }
++
++ final PoiManager poiManager = this.world.getPoiManager();
++ final long key = CoordinateUtils.getChunkSectionKey(this.chunkX, chunkY, this.chunkZ);
++
++ return this.sections[idx] = new PoiSection(() -> {
++ poiManager.setDirty(key);
++ });
++ }
++ throw new IllegalArgumentException("chunkY is out of bounds, chunkY: " + chunkY + " outside [" + this.minSection + "," + this.maxSection + "]");
++ }
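++
++ // Index math above, for illustration: assuming minSection == -4 and maxSection == 19 (a typical
++ // 1.18+ overworld), the sections array has length 24 and chunkY == 0 maps to index 4.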
++
++ public PoiSection getSection(final int chunkY) {
++ if (chunkY >= this.minSection && chunkY <= this.maxSection) {
++ return this.sections[chunkY - this.minSection];
++ }
++ return null;
++ }
++
++ public Optional<PoiSection> getSectionForVanilla(final int chunkY) {
++ if (chunkY >= this.minSection && chunkY <= this.maxSection) {
++ final PoiSection ret = this.sections[chunkY - this.minSection];
++ return ret == null ? Optional.empty() : ret.noAllocateOptional;
++ }
++ return Optional.empty();
++ }
++
++ public boolean isDirty() {
++ return this.isDirty;
++ }
++
++ public void setDirty(final boolean dirty) {
++ this.isDirty = dirty;
++ }
++
++ // returns null if empty
++ public CompoundTag save() {
++ final RegistryOps<Tag> registryOps = RegistryOps.create(NbtOps.INSTANCE, world.getPoiManager().registryAccess);
++
++ final CompoundTag ret = new CompoundTag();
++ final CompoundTag sections = new CompoundTag();
++ ret.put("Sections", sections);
++
++ ret.putInt("DataVersion", SharedConstants.getCurrentVersion().getDataVersion().getVersion());
++
++ final ServerLevel world = this.world;
++ final PoiManager poiManager = world.getPoiManager();
++ final int chunkX = this.chunkX;
++ final int chunkZ = this.chunkZ;
++
++ for (int sectionY = this.minSection; sectionY <= this.maxSection; ++sectionY) {
++ final PoiSection chunk = this.sections[sectionY - this.minSection];
++ if (chunk == null || chunk.isEmpty()) {
++ continue;
++ }
++
++ final long key = CoordinateUtils.getChunkSectionKey(chunkX, sectionY, chunkZ);
++ // codecs are honestly such a fucking disaster. What the fuck is this trash?
++ final Codec<PoiSection> codec = PoiSection.codec(() -> {
++ poiManager.setDirty(key);
++ });
++
++ final DataResult<Tag> serializedResult = codec.encodeStart(registryOps, chunk);
++ final int finalSectionY = sectionY;
++ final Tag serialized = serializedResult.resultOrPartial((final String description) -> {
++ LOGGER.error("Failed to serialize poi chunk for world: " + world.getWorld().getName() + ", chunk: (" + chunkX + "," + finalSectionY + "," + chunkZ + "); description: " + description);
++ }).orElse(null);
++ if (serialized == null) {
++ // failed, should be logged from the resultOrPartial
++ continue;
++ }
++
++ sections.put(Integer.toString(sectionY), serialized);
++ }
++
++ return sections.isEmpty() ? null : ret;
++ }
++
++ public static PoiChunk empty(final ServerLevel world, final int chunkX, final int chunkZ) {
++ final PoiChunk ret = new PoiChunk(world, chunkX, chunkZ, WorldUtil.getMinSection(world), WorldUtil.getMaxSection(world));
++ ret.loaded = true;
++ return ret;
++ }
++
++ public static PoiChunk parse(final ServerLevel world, final int chunkX, final int chunkZ, final CompoundTag data) {
++ final PoiChunk ret = empty(world, chunkX, chunkZ);
++
++ final RegistryOps<Tag> registryOps = RegistryOps.create(NbtOps.INSTANCE, world.getPoiManager().registryAccess);
++
++ final CompoundTag sections = data.getCompound("Sections");
++
++ if (sections.isEmpty()) {
++ // nothing to parse
++ return ret;
++ }
++
++ final PoiManager poiManager = world.getPoiManager();
++
++ boolean readAnything = false;
++
++ for (int sectionY = ret.minSection; sectionY <= ret.maxSection; ++sectionY) {
++ final String key = Integer.toString(sectionY);
++ if (!sections.contains(key)) {
++ continue;
++ }
++
++ final long coordinateKey = CoordinateUtils.getChunkSectionKey(chunkX, sectionY, chunkZ);
++ // codecs are honestly such a fucking disaster. What the fuck is this trash?
++ final Codec<PoiSection> codec = PoiSection.codec(() -> {
++ poiManager.setDirty(coordinateKey);
++ });
++
++ final CompoundTag section = sections.getCompound(key);
++ final DataResult<PoiSection> deserializeResult = codec.parse(registryOps, section);
++ final int finalSectionY = sectionY;
++ final PoiSection deserialized = deserializeResult.resultOrPartial((final String description) -> {
++ LOGGER.error("Failed to deserialize poi chunk for world: " + world.getWorld().getName() + ", chunk: (" + chunkX + "," + finalSectionY + "," + chunkZ + "); description: " + description);
++ }).orElse(null);
++
++ if (deserialized == null || deserialized.isEmpty()) {
++ // completely empty, no point in storing this
++ continue;
++ }
++
++ readAnything = true;
++ ret.sections[sectionY - ret.minSection] = deserialized;
++ }
++
++ ret.loaded = !readAnything; // Set loaded to false if we read anything to ensure proper callbacks to PoiManager are made on #load
++
++ return ret;
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkFullTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkFullTask.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..c307b084f59f7bb94dc02f25bbcd3e01e01d2306
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkFullTask.java
+@@ -0,0 +1,131 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
++import com.mojang.logging.LogUtils;
++import io.papermc.paper.chunk.system.poi.PoiChunk;
++import net.minecraft.server.level.ChunkMap;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.world.level.chunk.ChunkAccess;
++import net.minecraft.world.level.chunk.status.ChunkStatus;
++import net.minecraft.world.level.chunk.ImposterProtoChunk;
++import net.minecraft.world.level.chunk.LevelChunk;
++import net.minecraft.world.level.chunk.ProtoChunk;
++import org.slf4j.Logger;
++import java.lang.invoke.VarHandle;
++
++public final class ChunkFullTask extends ChunkProgressionTask implements Runnable {
++
++ private static final Logger LOGGER = LogUtils.getClassLogger();
++
++ protected final NewChunkHolder chunkHolder;
++ protected final ChunkAccess fromChunk;
++ protected final PrioritisedExecutor.PrioritisedTask convertToFullTask;
++
++ public ChunkFullTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX, final int chunkZ,
++ final NewChunkHolder chunkHolder, final ChunkAccess fromChunk, final PrioritisedExecutor.Priority priority) {
++ super(scheduler, world, chunkX, chunkZ);
++ this.chunkHolder = chunkHolder;
++ this.fromChunk = fromChunk;
++ this.convertToFullTask = scheduler.createChunkTask(chunkX, chunkZ, this, priority);
++ }
++
++ @Override
++ public ChunkStatus getTargetStatus() {
++ return ChunkStatus.FULL;
++ }
++
++ @Override
++ public void run() {
++ // See Vanilla protoChunkToFullChunk for what this function should be doing
++ final LevelChunk chunk;
++ try {
++ // moved from the load from nbt stage into here
++ final PoiChunk poiChunk = this.chunkHolder.getPoiChunk();
++ if (poiChunk == null) {
++ LOGGER.error("Expected poi chunk to be loaded with chunk for task " + this.toString());
++ } else {
++ poiChunk.load();
++ this.world.getPoiManager().checkConsistency(this.fromChunk);
++ }
++
++ if (this.fromChunk instanceof ImposterProtoChunk wrappedFull) {
++ chunk = wrappedFull.getWrapped();
++ } else {
++ final ServerLevel world = this.world;
++ final ProtoChunk protoChunk = (ProtoChunk)this.fromChunk;
++ chunk = new LevelChunk(this.world, protoChunk, (final LevelChunk unused) -> {
++ ChunkMap.postLoadProtoChunk(world, protoChunk.getEntities(), protoChunk.getPos()); // Paper - rewrite chunk system
++ });
++ }
++
++ chunk.setChunkHolder(this.scheduler.chunkHolderManager.getChunkHolder(this.chunkX, this.chunkZ)); // replaces setFullStatus
++ chunk.runPostLoad();
++ // Unlike Vanilla, we load the entity chunk here, as we load the NBT in empty status (unlike Vanilla)
++ // This brings entity addition back in line with older versions of the game
++ // Since we load the NBT in the empty status, this will never block for I/O
++ this.world.chunkTaskScheduler.chunkHolderManager.getOrCreateEntityChunk(this.chunkX, this.chunkZ, false);
++
++ // we don't need the entitiesInLevel trash, this system doesn't double run callbacks
++ chunk.setLoaded(true);
++ chunk.registerAllBlockEntitiesAfterLevelLoad();
++ chunk.registerTickContainerInLevel(this.world);
++ } catch (final Throwable throwable) {
++ this.complete(null, throwable);
++ return;
++ }
++ this.complete(chunk, null);
++ }
++
++ protected volatile boolean scheduled;
++ protected static final VarHandle SCHEDULED_HANDLE = ConcurrentUtil.getVarHandle(ChunkFullTask.class, "scheduled", boolean.class);
++
++ @Override
++ public boolean isScheduled() {
++ return this.scheduled;
++ }
++
++ @Override
++ public void schedule() {
++ if ((boolean)SCHEDULED_HANDLE.getAndSet((ChunkFullTask)this, true)) {
++ throw new IllegalStateException("Cannot double call schedule()");
++ }
++ this.convertToFullTask.queue();
++ }
++
++ @Override
++ public void cancel() {
++ if (this.convertToFullTask.cancel()) {
++ this.complete(null, null);
++ }
++ }
++
++ @Override
++ public PrioritisedExecutor.Priority getPriority() {
++ return this.convertToFullTask.getPriority();
++ }
++
++ @Override
++ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++ this.convertToFullTask.lowerPriority(priority);
++ }
++
++ @Override
++ public void setPriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++ this.convertToFullTask.setPriority(priority);
++ }
++
++ @Override
++ public void raisePriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++ this.convertToFullTask.raisePriority(priority);
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..5b446e6ac151f99f64f0c442d0b40b5e251bc4c4
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java
+@@ -0,0 +1,1500 @@
++package io.papermc.paper.chunk.system.scheduling;
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock;
++import ca.spottedleaf.concurrentutil.map.SWMRLong2ObjectHashTable;
++import com.google.common.collect.ImmutableList;
++import com.google.gson.JsonArray;
++import com.google.gson.JsonObject;
++import com.mojang.logging.LogUtils;
++import io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader;
++import io.papermc.paper.chunk.system.io.RegionFileIOThread;
++import io.papermc.paper.chunk.system.poi.PoiChunk;
++import io.papermc.paper.threadedregions.TickRegions;
++import io.papermc.paper.util.CoordinateUtils;
++import io.papermc.paper.util.TickThread;
++import io.papermc.paper.world.ChunkEntitySlices;
++import it.unimi.dsi.fastutil.longs.Long2ByteLinkedOpenHashMap;
++import it.unimi.dsi.fastutil.longs.Long2ByteMap;
++import it.unimi.dsi.fastutil.longs.Long2IntLinkedOpenHashMap;
++import it.unimi.dsi.fastutil.longs.Long2IntMap;
++import it.unimi.dsi.fastutil.longs.Long2IntOpenHashMap;
++import it.unimi.dsi.fastutil.longs.Long2ObjectMap;
++import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;
++import it.unimi.dsi.fastutil.longs.LongArrayList;
++import it.unimi.dsi.fastutil.longs.LongIterator;
++import it.unimi.dsi.fastutil.objects.ObjectRBTreeSet;
++import net.minecraft.nbt.CompoundTag;
++import io.papermc.paper.chunk.system.ChunkSystem;
++import net.minecraft.server.MinecraftServer;
++import net.minecraft.server.level.ChunkHolder;
++import net.minecraft.server.level.ChunkLevel;
++import net.minecraft.server.level.FullChunkStatus;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.server.level.Ticket;
++import net.minecraft.server.level.TicketType;
++import net.minecraft.util.SortedArraySet;
++import net.minecraft.util.Unit;
++import net.minecraft.world.level.ChunkPos;
++import org.bukkit.plugin.Plugin;
++import org.slf4j.Logger;
++import java.io.IOException;
++import java.text.DecimalFormat;
++import java.util.ArrayDeque;
++import java.util.ArrayList;
++import java.util.Collection;
++import java.util.Collections;
++import java.util.Iterator;
++import java.util.List;
++import java.util.concurrent.ConcurrentHashMap;
++import java.util.concurrent.TimeUnit;
++import java.util.concurrent.atomic.AtomicBoolean;
++import java.util.concurrent.atomic.AtomicLong;
++import java.util.concurrent.atomic.AtomicReference;
++import java.util.concurrent.locks.LockSupport;
++import java.util.function.Predicate;
++
++public final class ChunkHolderManager {
++
++ private static final Logger LOGGER = LogUtils.getClassLogger();
++
++ public static final int FULL_LOADED_TICKET_LEVEL = 33;
++ public static final int BLOCK_TICKING_TICKET_LEVEL = 32;
++ public static final int ENTITY_TICKING_TICKET_LEVEL = 31;
++ public static final int MAX_TICKET_LEVEL = ChunkLevel.MAX_LEVEL; // inclusive
++
++ private static final long NO_TIMEOUT_MARKER = Long.MIN_VALUE;
++ private static final long PROBE_MARKER = Long.MIN_VALUE + 1;
++ public final ReentrantAreaLock ticketLockArea;
++
++ private final ConcurrentHashMap<RegionFileIOThread.ChunkCoordinate, SortedArraySet<Ticket<?>>> tickets = new ConcurrentHashMap<>();
++ private final ConcurrentHashMap<RegionFileIOThread.ChunkCoordinate, Long2IntOpenHashMap> sectionToChunkToExpireCount = new ConcurrentHashMap<>();
++ final ChunkQueue unloadQueue;
++
++ public boolean processTicketUpdates(final int posX, final int posZ) {
++ final int ticketShift = ThreadedTicketLevelPropagator.SECTION_SHIFT;
++ final int ticketMask = (1 << ticketShift) - 1;
++ final List<ChunkProgressionTask> scheduledTasks = new ArrayList<>();
++ final List<NewChunkHolder> changedFullStatus = new ArrayList<>();
++ final boolean ret;
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(
++ ((posX >> ticketShift) - 1) << ticketShift,
++ ((posZ >> ticketShift) - 1) << ticketShift,
++ (((posX >> ticketShift) + 1) << ticketShift) | ticketMask,
++ (((posZ >> ticketShift) + 1) << ticketShift) | ticketMask
++ );
++ try {
++ ret = this.processTicketUpdatesNoLock(posX >> ticketShift, posZ >> ticketShift, scheduledTasks, changedFullStatus);
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++
++ this.addChangedStatuses(changedFullStatus);
++
++ for (int i = 0, len = scheduledTasks.size(); i < len; ++i) {
++ scheduledTasks.get(i).schedule();
++ }
++
++ return ret;
++ }
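++
++ // For illustration, assuming SECTION_SHIFT == 4: posX == posZ == 37 falls into ticket section
++ // (2,2), so the lock call above covers chunk coordinates 16..63 on both axes, i.e. the owning
++ // section plus one neighbour section on each side, preventing racing updates at the border.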
++
++ private boolean processTicketUpdatesNoLock(final int sectionX, final int sectionZ, final List<ChunkProgressionTask> scheduledTasks,
++ final List<NewChunkHolder> changedFullStatus) {
++ return this.ticketLevelPropagator.performUpdate(
++ sectionX, sectionZ, this.taskScheduler.schedulingLockArea, scheduledTasks, changedFullStatus
++ );
++ }
++
++ private final SWMRLong2ObjectHashTable<NewChunkHolder> chunkHolders = new SWMRLong2ObjectHashTable<>(16384, 0.25f);
++ // what a disaster of a name
++ // this is a map of removal tick to a map of chunks and the number of tickets a chunk has that are to expire that tick
++ private final Long2ObjectOpenHashMap<Long2IntOpenHashMap> removeTickToChunkExpireTicketCount = new Long2ObjectOpenHashMap<>();
++ private final ServerLevel world;
++ private final ChunkTaskScheduler taskScheduler;
++ private long currentTick;
++
++ private final ArrayDeque<NewChunkHolder> pendingFullLoadUpdate = new ArrayDeque<>();
++ private final ObjectRBTreeSet<NewChunkHolder> autoSaveQueue = new ObjectRBTreeSet<>((final NewChunkHolder c1, final NewChunkHolder c2) -> {
++ if (c1 == c2) {
++ return 0;
++ }
++
++ final int saveTickCompare = Long.compare(c1.lastAutoSave, c2.lastAutoSave);
++
++ if (saveTickCompare != 0) {
++ return saveTickCompare;
++ }
++
++ final long coord1 = CoordinateUtils.getChunkKey(c1.chunkX, c1.chunkZ);
++ final long coord2 = CoordinateUtils.getChunkKey(c2.chunkX, c2.chunkZ);
++
++ if (coord1 == coord2) {
++ throw new IllegalStateException("Duplicate chunkholder in auto save queue");
++ }
++
++ return Long.compare(coord1, coord2);
++ });
++
++ public ChunkHolderManager(final ServerLevel world, final ChunkTaskScheduler taskScheduler) {
++ this.world = world;
++ this.taskScheduler = taskScheduler;
++ this.ticketLockArea = new ReentrantAreaLock(taskScheduler.getChunkSystemLockShift());
++ this.unloadQueue = new ChunkQueue(world.getRegionChunkShift());
++ }
++
++ private final AtomicLong statusUpgradeId = new AtomicLong();
++
++ long getNextStatusUpgradeId() {
++ return this.statusUpgradeId.incrementAndGet();
++ }
++
++ public List<ChunkHolder> getOldChunkHolders() {
++ final List<NewChunkHolder> holders = this.getChunkHolders();
++ final List<ChunkHolder> ret = new ArrayList<>(holders.size());
++ for (final NewChunkHolder holder : holders) {
++ ret.add(holder.vanillaChunkHolder);
++ }
++ return ret;
++ }
++
++ public List<NewChunkHolder> getChunkHolders() {
++ final List<NewChunkHolder> ret = new ArrayList<>(this.chunkHolders.size());
++ this.chunkHolders.forEachValue(ret::add);
++ return ret;
++ }
++
++ public int size() {
++ return this.chunkHolders.size();
++ }
++
++ public void close(final boolean save, final boolean halt) {
++ TickThread.ensureTickThread("Closing world off-main");
++ if (halt) {
++ LOGGER.info("Waiting 60s for chunk system to halt for world '" + this.world.getWorld().getName() + "'");
++ if (!this.taskScheduler.halt(true, TimeUnit.SECONDS.toNanos(60L))) {
++ LOGGER.warn("Failed to halt world generation/loading tasks for world '" + this.world.getWorld().getName() + "'");
++ } else {
++ LOGGER.info("Halted chunk system for world '" + this.world.getWorld().getName() + "'");
++ }
++ }
++
++ if (save) {
++ this.saveAllChunks(true, true, true);
++ }
++
++ if (this.world.chunkDataControllerNew.hasTasks() || this.world.entityDataControllerNew.hasTasks() || this.world.poiDataControllerNew.hasTasks()) {
++ RegionFileIOThread.flush();
++ }
++
++ // kill regionfile cache
++ try {
++ this.world.chunkDataControllerNew.getCache().close();
++ } catch (final IOException ex) {
++ LOGGER.error("Failed to close chunk regionfile cache for world '" + this.world.getWorld().getName() + "'", ex);
++ }
++ try {
++ this.world.entityDataControllerNew.getCache().close();
++ } catch (final IOException ex) {
++ LOGGER.error("Failed to close entity regionfile cache for world '" + this.world.getWorld().getName() + "'", ex);
++ }
++ try {
++ this.world.poiDataControllerNew.getCache().close();
++ } catch (final IOException ex) {
++ LOGGER.error("Failed to close poi regionfile cache for world '" + this.world.getWorld().getName() + "'", ex);
++ }
++ }
++
++ void ensureInAutosave(final NewChunkHolder holder) {
++ if (!this.autoSaveQueue.contains(holder)) {
++ holder.lastAutoSave = MinecraftServer.currentTick;
++ this.autoSaveQueue.add(holder);
++ }
++ }
++
++ public void autoSave() {
++ final List<NewChunkHolder> reschedule = new ArrayList<>();
++ final long currentTick = MinecraftServer.currentTickLong;
++ final long maxSaveTime = currentTick - this.world.paperConfig().chunks.autoSaveInterval.value();
++ for (int autoSaved = 0; autoSaved < this.world.paperConfig().chunks.maxAutoSaveChunksPerTick && !this.autoSaveQueue.isEmpty();) {
++ final NewChunkHolder holder = this.autoSaveQueue.first();
++
++ if (holder.lastAutoSave > maxSaveTime) {
++ break;
++ }
++
++ this.autoSaveQueue.remove(holder);
++
++ holder.lastAutoSave = currentTick;
++ if (holder.save(false, false) != null) {
++ ++autoSaved;
++ }
++
++ if (holder.getChunkStatus().isOrAfter(FullChunkStatus.FULL)) {
++ reschedule.add(holder);
++ }
++ }
++
++ for (final NewChunkHolder holder : reschedule) {
++ if (holder.getChunkStatus().isOrAfter(FullChunkStatus.FULL)) {
++ this.autoSaveQueue.add(holder);
++ }
++ }
++ }
++
++ public void saveAllChunks(final boolean flush, final boolean shutdown, final boolean logProgress) {
++ final List<NewChunkHolder> holders = this.getChunkHolders();
++
++ if (logProgress) {
++ LOGGER.info("Saving all chunkholders for world '" + this.world.getWorld().getName() + "'");
++ }
++
++ final DecimalFormat format = new DecimalFormat("#0.00");
++
++ int saved = 0;
++
++ long start = System.nanoTime();
++ long lastLog = start;
++ boolean needsFlush = false;
++ final int flushInterval = 50;
++
++ int savedChunk = 0;
++ int savedEntity = 0;
++ int savedPoi = 0;
++
++ for (int i = 0, len = holders.size(); i < len; ++i) {
++ final NewChunkHolder holder = holders.get(i);
++ try {
++ final NewChunkHolder.SaveStat saveStat = holder.save(shutdown, false);
++ if (saveStat != null) {
++ ++saved;
++ needsFlush = flush;
++ if (saveStat.savedChunk()) {
++ ++savedChunk;
++ }
++ if (saveStat.savedEntityChunk()) {
++ ++savedEntity;
++ }
++ if (saveStat.savedPoiChunk()) {
++ ++savedPoi;
++ }
++ }
++ } catch (final ThreadDeath thr) {
++ throw thr;
++ } catch (final Throwable thr) {
++ LOGGER.error("Failed to save chunk (" + holder.chunkX + "," + holder.chunkZ + ") in world '" + this.world.getWorld().getName() + "'", thr);
++ }
++ if (needsFlush && (saved % flushInterval) == 0) {
++ needsFlush = false;
++ RegionFileIOThread.partialFlush(flushInterval / 2);
++ }
++ if (logProgress) {
++ final long currTime = System.nanoTime();
++ if ((currTime - lastLog) > TimeUnit.SECONDS.toNanos(10L)) {
++ lastLog = currTime;
++ LOGGER.info("Saved " + saved + " chunks (" + format.format((double)(i+1)/(double)len * 100.0) + "%) in world '" + this.world.getWorld().getName() + "'");
++ }
++ }
++ }
++ if (flush) {
++ RegionFileIOThread.flush();
++ if (this.world.paperConfig().chunks.flushRegionsOnSave) {
++ try {
++ this.world.chunkSource.chunkMap.regionFileCache.flush();
++ } catch (IOException ex) {
++ LOGGER.error("Exception when flushing regions in world {}", this.world.getWorld().getName(), ex);
++ }
++ }
++ }
++ if (logProgress) {
++ LOGGER.info("Saved " + savedChunk + " block chunks, " + savedEntity + " entity chunks, " + savedPoi + " poi chunks in world '" + this.world.getWorld().getName() + "' in " + format.format(1.0E-9 * (System.nanoTime() - start)) + "s");
++ }
++ }
++
++ protected final ThreadedTicketLevelPropagator ticketLevelPropagator = new ThreadedTicketLevelPropagator() {
++ @Override
++ protected void processLevelUpdates(final Long2ByteLinkedOpenHashMap updates) {
++ // first the necessary chunkholders must be created, so just update the ticket levels
++ for (final Iterator<Long2ByteMap.Entry> iterator = updates.long2ByteEntrySet().fastIterator(); iterator.hasNext();) {
++ final Long2ByteMap.Entry entry = iterator.next();
++ final long key = entry.getLongKey();
++ final int newLevel = convertBetweenTicketLevels((int)entry.getByteValue());
++
++ NewChunkHolder current = ChunkHolderManager.this.chunkHolders.get(key);
++ if (current == null && newLevel > MAX_TICKET_LEVEL) {
++ // not loaded and it shouldn't be loaded!
++ iterator.remove();
++ continue;
++ }
++
++ final int currentLevel = current == null ? MAX_TICKET_LEVEL + 1 : current.getCurrentTicketLevel();
++ if (currentLevel == newLevel) {
++ // nothing to do
++ iterator.remove();
++ continue;
++ }
++
++ if (current == null) {
++ // must create
++ current = ChunkHolderManager.this.createChunkHolder(key);
++ synchronized (ChunkHolderManager.this.chunkHolders) {
++ ChunkHolderManager.this.chunkHolders.put(key, current);
++ }
++ current.updateTicketLevel(newLevel);
++ } else {
++ current.updateTicketLevel(newLevel);
++ }
++ }
++ }
++
++ @Override
++ protected void processSchedulingUpdates(final Long2ByteLinkedOpenHashMap updates, final List<ChunkProgressionTask> scheduledTasks,
++ final List<NewChunkHolder> changedFullStatus) {
++ final List<ChunkProgressionTask> prev = CURRENT_TICKET_UPDATE_SCHEDULING.get();
++ CURRENT_TICKET_UPDATE_SCHEDULING.set(scheduledTasks);
++ try {
++ for (final LongIterator iterator = updates.keySet().iterator(); iterator.hasNext();) {
++ final long key = iterator.nextLong();
++ final NewChunkHolder current = ChunkHolderManager.this.chunkHolders.get(key);
++
++ if (current == null) {
++ throw new IllegalStateException("Expected chunk holder to be created");
++ }
++
++ current.processTicketLevelUpdate(scheduledTasks, changedFullStatus);
++ }
++ } finally {
++ CURRENT_TICKET_UPDATE_SCHEDULING.set(prev);
++ }
++ }
++ };
++ // function for converting between ticket levels and propagator levels and vice versa
++ // the problem is the ticket level propagator will propagate from a set source down to zero, whereas Mojang expects
++ // levels to propagate from a set value up to a maximum value. So we need to convert the levels we put into the propagator
++ // and the levels we get out of the propagator
++
++ public static int convertBetweenTicketLevels(final int level) {
++ return ChunkLevel.MAX_LEVEL - level + 1;
++ }
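++
++ // the conversion is its own inverse: convertBetweenTicketLevels(convertBetweenTicketLevels(x))
++ // == MAX_LEVEL - (MAX_LEVEL - x + 1) + 1 == x, so the same function maps ticket levels into the
++ // propagator and propagator levels back out. For example, a ticket at MAX_LEVEL maps to propagator
++ // level 1, and MAX_LEVEL + 1 (unloaded) maps to 0, the bottom of the propagator's range.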
++
++ public String getTicketDebugString(final long coordinate) {
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(CoordinateUtils.getChunkX(coordinate), CoordinateUtils.getChunkZ(coordinate));
++ try {
++ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(new RegionFileIOThread.ChunkCoordinate(coordinate));
++
++ return tickets != null ? tickets.first().toString() : "no_ticket";
++ } finally {
++ if (ticketLock != null) {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++ }
++ }
++
++ public Long2ObjectOpenHashMap<SortedArraySet<Ticket<?>>> getTicketsCopy() {
++ final Long2ObjectOpenHashMap<SortedArraySet<Ticket<?>>> ret = new Long2ObjectOpenHashMap<>();
++ final Long2ObjectOpenHashMap<List<RegionFileIOThread.ChunkCoordinate>> sections = new Long2ObjectOpenHashMap<>();
++ final int sectionShift = this.taskScheduler.getChunkSystemLockShift();
++ for (final RegionFileIOThread.ChunkCoordinate coord : this.tickets.keySet()) {
++ sections.computeIfAbsent(
++ CoordinateUtils.getChunkKey(
++ CoordinateUtils.getChunkX(coord.key) >> sectionShift,
++ CoordinateUtils.getChunkZ(coord.key) >> sectionShift
++ ),
++ (final long keyInMap) -> {
++ return new ArrayList<>();
++ }
++ ).add(coord);
++ }
++
++ for (final Iterator<Long2ObjectMap.Entry<List<RegionFileIOThread.ChunkCoordinate>>> iterator = sections.long2ObjectEntrySet().fastIterator();
++ iterator.hasNext();) {
++ final Long2ObjectMap.Entry<List<RegionFileIOThread.ChunkCoordinate>> entry = iterator.next();
++ final long sectionKey = entry.getLongKey();
++ final List<RegionFileIOThread.ChunkCoordinate> coordinates = entry.getValue();
++
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(
++ CoordinateUtils.getChunkX(sectionKey) << sectionShift,
++ CoordinateUtils.getChunkZ(sectionKey) << sectionShift
++ );
++ try {
++ for (final RegionFileIOThread.ChunkCoordinate coord : coordinates) {
++ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(coord);
++ if (tickets == null) {
++ // removed before we acquired lock
++ continue;
++ }
++ ret.put(coord.key, new SortedArraySet<>(tickets));
++ }
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++ }
++
++ return ret;
++ }
++
++ public Collection<Plugin> getPluginChunkTickets(int x, int z) {
++ ImmutableList.Builder<Plugin> ret;
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(x, z);
++ try {
++ final long coordinate = CoordinateUtils.getChunkKey(x, z);
++ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(new RegionFileIOThread.ChunkCoordinate(coordinate));
++
++ if (tickets == null) {
++ return Collections.emptyList();
++ }
++
++ ret = ImmutableList.builder();
++ for (Ticket<?> ticket : tickets) {
++ if (ticket.getType() == TicketType.PLUGIN_TICKET) {
++ ret.add((Plugin)ticket.key);
++ }
++ }
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++
++ return ret.build();
++ }
++
++ protected final void updateTicketLevel(final long coordinate, final int ticketLevel) {
++ if (ticketLevel > ChunkLevel.MAX_LEVEL) {
++ this.ticketLevelPropagator.removeSource(CoordinateUtils.getChunkX(coordinate), CoordinateUtils.getChunkZ(coordinate));
++ } else {
++ this.ticketLevelPropagator.setSource(CoordinateUtils.getChunkX(coordinate), CoordinateUtils.getChunkZ(coordinate), convertBetweenTicketLevels(ticketLevel));
++ }
++ }
++
++ private static int getTicketLevelAt(SortedArraySet<Ticket<?>> tickets) {
++ return !tickets.isEmpty() ? tickets.first().getTicketLevel() : MAX_TICKET_LEVEL + 1;
++ }
++
++ public <T> boolean addTicketAtLevel(final TicketType<T> type, final ChunkPos chunkPos, final int level,
++ final T identifier) {
++ return this.addTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkPos), level, identifier);
++ }
++
++ public <T> boolean addTicketAtLevel(final TicketType<T> type, final int chunkX, final int chunkZ, final int level,
++ final T identifier) {
++ return this.addTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkX, chunkZ), level, identifier);
++ }
++
++ private void addExpireCount(final int chunkX, final int chunkZ) {
++ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++
++ final int sectionShift = this.world.getRegionChunkShift();
++ final RegionFileIOThread.ChunkCoordinate sectionKey = new RegionFileIOThread.ChunkCoordinate(CoordinateUtils.getChunkKey(
++ chunkX >> sectionShift,
++ chunkZ >> sectionShift
++ ));
++
++ this.sectionToChunkToExpireCount.computeIfAbsent(sectionKey, (final RegionFileIOThread.ChunkCoordinate keyInMap) -> {
++ return new Long2IntOpenHashMap();
++ }).addTo(chunkKey, 1);
++ }
++
++ private void removeExpireCount(final int chunkX, final int chunkZ) {
++ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++
++ final int sectionShift = this.world.getRegionChunkShift();
++ final RegionFileIOThread.ChunkCoordinate sectionKey = new RegionFileIOThread.ChunkCoordinate(CoordinateUtils.getChunkKey(
++ chunkX >> sectionShift,
++ chunkZ >> sectionShift
++ ));
++
++ final Long2IntOpenHashMap removeCounts = this.sectionToChunkToExpireCount.get(sectionKey);
++ final int prevCount = removeCounts.addTo(chunkKey, -1);
++
++ if (prevCount == 1) {
++ removeCounts.remove(chunkKey);
++ if (removeCounts.isEmpty()) {
++ this.sectionToChunkToExpireCount.remove(sectionKey);
++ }
++ }
++ }
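++
++ // Bookkeeping sketch (hypothetical values, assuming a region chunk shift of 4): two timed
++ // tickets on chunk (10,3) yield sectionToChunkToExpireCount = { section(0,0) -> { (10,3) -> 2 } }.
++ // Removing one ticket drops the count to 1; removing the last deletes the inner entry, and an
++ // empty inner map removes the section entry entirely, as the method above does.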
++
++ // supposed to return true if the ticket was added and did not replace another,
++ // but we always return false if the ticket cannot be added
++ public <T> boolean addTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier) {
++ return this.addTicketAtLevel(type, chunk, level, identifier, true);
++ }
++
++ <T> boolean addTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier, final boolean lock) {
++ final long removeDelay = type.timeout <= 0 ? NO_TIMEOUT_MARKER : type.timeout;
++ if (level > MAX_TICKET_LEVEL) {
++ return false;
++ }
++
++ final int chunkX = CoordinateUtils.getChunkX(chunk);
++ final int chunkZ = CoordinateUtils.getChunkZ(chunk);
++ final RegionFileIOThread.ChunkCoordinate chunkCoord = new RegionFileIOThread.ChunkCoordinate(chunk);
++ final Ticket<T> ticket = new Ticket<>(type, level, identifier, removeDelay);
++
++ final ReentrantAreaLock.Node ticketLock = lock ? this.ticketLockArea.lock(chunkX, chunkZ) : null;
++ try {
++ final SortedArraySet<Ticket<?>> ticketsAtChunk = this.tickets.computeIfAbsent(chunkCoord, (final RegionFileIOThread.ChunkCoordinate keyInMap) -> {
++ return SortedArraySet.create(4);
++ });
++
++ final int levelBefore = getTicketLevelAt(ticketsAtChunk);
++ final Ticket<T> current = (Ticket<T>)ticketsAtChunk.replace(ticket);
++ final int levelAfter = getTicketLevelAt(ticketsAtChunk);
++
++ if (current != ticket) {
++ final long oldRemoveDelay = current.removeDelay;
++ if (removeDelay != oldRemoveDelay) {
++ if (oldRemoveDelay != NO_TIMEOUT_MARKER && removeDelay == NO_TIMEOUT_MARKER) {
++ this.removeExpireCount(chunkX, chunkZ);
++ } else if (oldRemoveDelay == NO_TIMEOUT_MARKER) {
++ // since old != new, we have that NO_TIMEOUT_MARKER != new
++ this.addExpireCount(chunkX, chunkZ);
++ }
++ }
++ } else {
++ if (removeDelay != NO_TIMEOUT_MARKER) {
++ this.addExpireCount(chunkX, chunkZ);
++ }
++ }
++
++ if (levelBefore != levelAfter) {
++ this.updateTicketLevel(chunk, levelAfter);
++ }
++
++ return current == ticket;
++ } finally {
++ if (ticketLock != null) {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++ }
++ }
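++
++ // Example (illustrative; "manager" stands for this ChunkHolderManager): keep a chunk
++ // entity-ticking on behalf of a plugin, then release it later:
++ //
++ // manager.addTicketAtLevel(TicketType.PLUGIN_TICKET, chunkX, chunkZ, ENTITY_TICKING_TICKET_LEVEL, plugin);
++ // // ... later ...
++ // manager.removeTicketAtLevel(TicketType.PLUGIN_TICKET, chunkX, chunkZ, ENTITY_TICKING_TICKET_LEVEL, plugin);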
++
++ public <T> boolean removeTicketAtLevel(final TicketType<T> type, final ChunkPos chunkPos, final int level, final T identifier) {
++ return this.removeTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkPos), level, identifier);
++ }
++
++ public <T> boolean removeTicketAtLevel(final TicketType<T> type, final int chunkX, final int chunkZ, final int level, final T identifier) {
++ return this.removeTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkX, chunkZ), level, identifier);
++ }
++
++ public <T> boolean removeTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier) {
++ return this.removeTicketAtLevel(type, chunk, level, identifier, true);
++ }
++
++ <T> boolean removeTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier, final boolean lock) {
++ if (level > MAX_TICKET_LEVEL) {
++ return false;
++ }
++
++ final int chunkX = CoordinateUtils.getChunkX(chunk);
++ final int chunkZ = CoordinateUtils.getChunkZ(chunk);
++ final RegionFileIOThread.ChunkCoordinate chunkCoord = new RegionFileIOThread.ChunkCoordinate(chunk);
++ final Ticket<T> probe = new Ticket<>(type, level, identifier, PROBE_MARKER);
++
++ final ReentrantAreaLock.Node ticketLock = lock ? this.ticketLockArea.lock(chunkX, chunkZ) : null;
++ try {
++ final SortedArraySet<Ticket<?>> ticketsAtChunk = this.tickets.get(chunkCoord);
++ if (ticketsAtChunk == null) {
++ return false;
++ }
++
++ final int oldLevel = getTicketLevelAt(ticketsAtChunk);
++ final Ticket<T> ticket = (Ticket<T>)ticketsAtChunk.removeAndGet(probe);
++
++ if (ticket == null) {
++ return false;
++ }
++
++ final int newLevel = getTicketLevelAt(ticketsAtChunk);
++ // we should not change the ticket levels while the target region may be ticking
++ if (oldLevel != newLevel) {
++ // Delay unload chunk patch originally by Aikar, updated to 1.20 by jpenilla
++ // these days, the patch is mostly useful to keep chunks ticking when players teleport
++ // so that their pets can teleport with them as well.
++ final long delayTimeout = this.world.paperConfig().chunks.delayChunkUnloadsBy.ticks();
++ final TicketType<ChunkPos> toAdd;
++ final long timeout;
++ if (type == RegionizedPlayerChunkLoader.REGION_PLAYER_TICKET && delayTimeout > 0) {
++ toAdd = TicketType.DELAY_UNLOAD;
++ timeout = delayTimeout;
++ } else {
++ toAdd = TicketType.UNKNOWN;
++ // always expect UNKNOWN's timeout to be >= 1, but clamp just in case
++ timeout = Math.max(1, toAdd.timeout);
++ }
++ final Ticket<ChunkPos> unknownTicket = new Ticket<>(toAdd, level, new ChunkPos(chunk), timeout);
++ if (ticketsAtChunk.add(unknownTicket)) {
++ this.addExpireCount(chunkX, chunkZ);
++ } else {
++ throw new IllegalStateException("Should have been able to add " + unknownTicket + " to " + ticketsAtChunk);
++ }
++ }
++
++ final long removeDelay = ticket.removeDelay;
++ if (removeDelay != NO_TIMEOUT_MARKER) {
++ this.removeExpireCount(chunkX, chunkZ);
++ }
++
++ return true;
++ } finally {
++ if (ticketLock != null) {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++ }
++ }
++
++ // atomic with respect to all add/remove/addandremove ticket calls for the given chunk
++ public <T, V> void addAndRemoveTickets(final long chunk, final TicketType<T> addType, final int addLevel, final T addIdentifier,
++ final TicketType<V> removeType, final int removeLevel, final V removeIdentifier) {
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(CoordinateUtils.getChunkX(chunk), CoordinateUtils.getChunkZ(chunk));
++ try {
++ this.addTicketAtLevel(addType, chunk, addLevel, addIdentifier, false);
++ this.removeTicketAtLevel(removeType, chunk, removeLevel, removeIdentifier, false);
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++ }
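++
++ // Illustrative usage (a sketch; getOrCreateEntityChunk below is a real caller):
++ // atomically swap a temporary load ticket for a timed UNKNOWN ticket:
++ // this.addAndRemoveTickets(chunkKey,
++ // TicketType.UNKNOWN, MAX_TICKET_LEVEL, new ChunkPos(chunkX, chunkZ),
++ // TicketType.ENTITY_LOAD, MAX_TICKET_LEVEL, entityLoadId
++ // );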
++
++ // atomic with respect to all add/remove/addandremove ticket calls for the given chunk
++ public <T, V> boolean addIfRemovedTicket(final long chunk, final TicketType<T> addType, final int addLevel, final T addIdentifier,
++ final TicketType<V> removeType, final int removeLevel, final V removeIdentifier) {
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(CoordinateUtils.getChunkX(chunk), CoordinateUtils.getChunkZ(chunk));
++ try {
++ if (this.removeTicketAtLevel(removeType, chunk, removeLevel, removeIdentifier, false)) {
++ this.addTicketAtLevel(addType, chunk, addLevel, addIdentifier, false);
++ return true;
++ }
++ return false;
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++ }
++
++ public <T> void removeAllTicketsFor(final TicketType<T> ticketType, final int ticketLevel, final T ticketIdentifier) {
++ if (ticketLevel > MAX_TICKET_LEVEL) {
++ return;
++ }
++
++ final Long2ObjectOpenHashMap<List<RegionFileIOThread.ChunkCoordinate>> sections = new Long2ObjectOpenHashMap<>();
++ final int sectionShift = this.taskScheduler.getChunkSystemLockShift();
++ for (final RegionFileIOThread.ChunkCoordinate coord : this.tickets.keySet()) {
++ sections.computeIfAbsent(
++ CoordinateUtils.getChunkKey(
++ CoordinateUtils.getChunkX(coord.key) >> sectionShift,
++ CoordinateUtils.getChunkZ(coord.key) >> sectionShift
++ ),
++ (final long keyInMap) -> {
++ return new ArrayList<>();
++ }
++ ).add(coord);
++ }
++
++ for (final Iterator<Long2ObjectMap.Entry<List<RegionFileIOThread.ChunkCoordinate>>> iterator = sections.long2ObjectEntrySet().fastIterator();
++ iterator.hasNext();) {
++ final Long2ObjectMap.Entry<List<RegionFileIOThread.ChunkCoordinate>> entry = iterator.next();
++ final long sectionKey = entry.getLongKey();
++ final List<RegionFileIOThread.ChunkCoordinate> coordinates = entry.getValue();
++
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(
++ CoordinateUtils.getChunkX(sectionKey) << sectionShift,
++ CoordinateUtils.getChunkZ(sectionKey) << sectionShift
++ );
++ try {
++ for (final RegionFileIOThread.ChunkCoordinate coord : coordinates) {
++ this.removeTicketAtLevel(ticketType, coord.key, ticketLevel, ticketIdentifier, false);
++ }
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++ }
++ }
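++
++ // Note: the tickets are grouped by lock section above so that each section's area
++ // lock is acquired exactly once, rather than once per chunk coordinate.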
++
++ public void tick() {
++ final int sectionShift = this.world.getRegionChunkShift();
++
++ final Predicate<Ticket<?>> expireNow = (final Ticket<?> ticket) -> {
++ if (ticket.removeDelay == NO_TIMEOUT_MARKER) {
++ return false;
++ }
++ return --ticket.removeDelay <= 0L;
++ };
++
++ for (final Iterator<RegionFileIOThread.ChunkCoordinate> iterator = this.sectionToChunkToExpireCount.keySet().iterator(); iterator.hasNext();) {
++ final RegionFileIOThread.ChunkCoordinate section = iterator.next();
++ final long sectionKey = section.key;
++
++ if (!this.sectionToChunkToExpireCount.containsKey(section)) {
++ // removed concurrently
++ continue;
++ }
++
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(
++ CoordinateUtils.getChunkX(sectionKey) << sectionShift,
++ CoordinateUtils.getChunkZ(sectionKey) << sectionShift
++ );
++
++ try {
++ final Long2IntOpenHashMap chunkToExpireCount = this.sectionToChunkToExpireCount.get(section);
++ if (chunkToExpireCount == null) {
++ // lost to some race
++ continue;
++ }
++
++ for (final Iterator<Long2IntMap.Entry> iterator1 = chunkToExpireCount.long2IntEntrySet().fastIterator(); iterator1.hasNext();) {
++ final Long2IntMap.Entry entry = iterator1.next();
++
++ final long chunkKey = entry.getLongKey();
++ final int expireCount = entry.getIntValue();
++
++ final RegionFileIOThread.ChunkCoordinate chunk = new RegionFileIOThread.ChunkCoordinate(chunkKey);
++
++ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(chunk);
++ final int levelBefore = getTicketLevelAt(tickets);
++
++ final int sizeBefore = tickets.size();
++ tickets.removeIf(expireNow);
++ final int sizeAfter = tickets.size();
++ final int levelAfter = getTicketLevelAt(tickets);
++
++ if (tickets.isEmpty()) {
++ this.tickets.remove(chunk);
++ }
++ if (levelBefore != levelAfter) {
++ this.updateTicketLevel(chunkKey, levelAfter);
++ }
++
++ final int newExpireCount = expireCount - (sizeBefore - sizeAfter);
++
++ if (newExpireCount == expireCount) {
++ continue;
++ }
++
++ if (newExpireCount != 0) {
++ entry.setValue(newExpireCount);
++ } else {
++ iterator1.remove();
++ }
++ }
++
++ if (chunkToExpireCount.isEmpty()) {
++ this.sectionToChunkToExpireCount.remove(section);
++ }
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++ }
++
++ this.processTicketUpdates();
++ }
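++
++ // Note: tick() decrements each timed ticket's removeDelay by one per invocation,
++ // so it is expected to run once per tick for ticket timeouts to expire on schedule.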
++
++ public NewChunkHolder getChunkHolder(final int chunkX, final int chunkZ) {
++ return this.chunkHolders.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
++ }
++
++ public NewChunkHolder getChunkHolder(final long position) {
++ return this.chunkHolders.get(position);
++ }
++
++ public void raisePriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
++ final NewChunkHolder chunkHolder = this.getChunkHolder(x, z);
++ if (chunkHolder != null) {
++ chunkHolder.raisePriority(priority);
++ }
++ }
++
++ public void setPriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
++ final NewChunkHolder chunkHolder = this.getChunkHolder(x, z);
++ if (chunkHolder != null) {
++ chunkHolder.setPriority(priority);
++ }
++ }
++
++ public void lowerPriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
++ final NewChunkHolder chunkHolder = this.getChunkHolder(x, z);
++ if (chunkHolder != null) {
++ chunkHolder.lowerPriority(priority);
++ }
++ }
++
++ private NewChunkHolder createChunkHolder(final long position) {
++ final NewChunkHolder ret = new NewChunkHolder(this.world, CoordinateUtils.getChunkX(position), CoordinateUtils.getChunkZ(position), this.taskScheduler);
++
++ ChunkSystem.onChunkHolderCreate(this.world, ret.vanillaChunkHolder);
++ ret.vanillaChunkHolder.onChunkAdd();
++
++ return ret;
++ }
++
++ // because this function creates the chunk holder without a ticket, it is the caller's responsibility to ensure
++ // the chunk holder eventually unloads. this should only be used to avoid using processTicketUpdates to create chunkholders,
++ // as processTicketUpdates may call plugin logic; in every other case a ticket is appropriate
++ private NewChunkHolder getOrCreateChunkHolder(final int chunkX, final int chunkZ) {
++ return this.getOrCreateChunkHolder(CoordinateUtils.getChunkKey(chunkX, chunkZ));
++ }
++
++ private NewChunkHolder getOrCreateChunkHolder(final long position) {
++ final int chunkX = CoordinateUtils.getChunkX(position);
++ final int chunkZ = CoordinateUtils.getChunkZ(position);
++
++ if (!this.ticketLockArea.isHeldByCurrentThread(chunkX, chunkZ)) {
++ throw new IllegalStateException("Must hold ticket level update lock!");
++ }
++ if (!this.taskScheduler.schedulingLockArea.isHeldByCurrentThread(chunkX, chunkZ)) {
++ throw new IllegalStateException("Must hold scheduler lock!!");
++ }
++
++ // we could just acquire these locks, but...
++ // must own the locks because the caller needs to ensure that no unload can occur AFTER this function returns
++
++ NewChunkHolder current = this.chunkHolders.get(position);
++ if (current != null) {
++ return current;
++ }
++
++ current = this.createChunkHolder(position);
++ synchronized (this.chunkHolders) {
++ this.chunkHolders.put(position, current);
++ }
++
++ return current;
++ }
++
++ private final AtomicLong entityLoadCounter = new AtomicLong();
++
++ public ChunkEntitySlices getOrCreateEntityChunk(final int chunkX, final int chunkZ, final boolean transientChunk) {
++ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot create entity chunk off-main");
++ ChunkEntitySlices ret;
++
++ NewChunkHolder current = this.getChunkHolder(chunkX, chunkZ);
++ if (current != null && (ret = current.getEntityChunk()) != null && (transientChunk || !ret.isTransient())) {
++ return ret;
++ }
++
++ final AtomicBoolean isCompleted = new AtomicBoolean();
++ final Thread waiter = Thread.currentThread();
++ final Long entityLoadId = Long.valueOf(this.entityLoadCounter.getAndIncrement());
++ NewChunkHolder.GenericDataLoadTaskCallback loadTask = null;
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(chunkX, chunkZ);
++ try {
++ this.addTicketAtLevel(TicketType.ENTITY_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, entityLoadId);
++ final ReentrantAreaLock.Node schedulingLock = this.taskScheduler.schedulingLockArea.lock(chunkX, chunkZ);
++ try {
++ current = this.getOrCreateChunkHolder(chunkX, chunkZ);
++ if ((ret = current.getEntityChunk()) != null && (transientChunk || !ret.isTransient())) {
++ this.removeTicketAtLevel(TicketType.ENTITY_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, entityLoadId);
++ return ret;
++ }
++
++ if (current.isEntityChunkNBTLoaded()) {
++ isCompleted.setPlain(true);
++ } else {
++ loadTask = current.getOrLoadEntityData((final GenericDataLoadTask.TaskResult<CompoundTag, Throwable> result) -> {
++ if (!transientChunk) {
++ isCompleted.set(true);
++ LockSupport.unpark(waiter);
++ }
++ });
++ final ChunkLoadTask.EntityDataLoadTask entityLoad = current.getEntityDataLoadTask();
++
++ if (entityLoad != null && !transientChunk) {
++ entityLoad.raisePriority(PrioritisedExecutor.Priority.BLOCKING);
++ }
++ }
++ } finally {
++ this.taskScheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++
++ if (loadTask != null) {
++ loadTask.schedule();
++ }
++
++ if (!transientChunk) {
++ // Note: no need to busy wait on the chunk queue, entity load will complete off-main
++ boolean interrupted = false;
++ while (!isCompleted.get()) {
++ interrupted |= Thread.interrupted();
++ LockSupport.park();
++ }
++
++ if (interrupted) {
++ Thread.currentThread().interrupt();
++ }
++ }
++
++ // now that the entity data is loaded, we can load it into the world
++
++ ret = current.loadInEntityChunk(transientChunk);
++
++ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++ this.addAndRemoveTickets(chunkKey,
++ TicketType.UNKNOWN, MAX_TICKET_LEVEL, new ChunkPos(chunkX, chunkZ),
++ TicketType.ENTITY_LOAD, MAX_TICKET_LEVEL, entityLoadId
++ );
++
++ return ret;
++ }
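++
++ // Note on the wait protocol above: the calling thread parks until the entity data
++ // load callback unparks it (the load completes off-main), re-checking isCompleted
++ // to guard against spurious wakeups. Once the data is loaded in, the temporary
++ // ENTITY_LOAD ticket is atomically swapped for a timed UNKNOWN ticket, which
++ // eventually expires and allows the chunk to unload normally.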
++
++ public PoiChunk getPoiChunkIfLoaded(final int chunkX, final int chunkZ, final boolean checkLoadInCallback) {
++ final NewChunkHolder holder = this.getChunkHolder(chunkX, chunkZ);
++ if (holder != null) {
++ final PoiChunk ret = holder.getPoiChunk();
++ return ret == null || (checkLoadInCallback && !ret.isLoaded()) ? null : ret;
++ }
++ return null;
++ }
++
++ private final AtomicLong poiLoadCounter = new AtomicLong();
++
++ public PoiChunk loadPoiChunk(final int chunkX, final int chunkZ) {
++ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot create poi chunk off-main");
++ PoiChunk ret;
++
++ NewChunkHolder current = this.getChunkHolder(chunkX, chunkZ);
++ if (current != null && (ret = current.getPoiChunk()) != null) {
++ if (!ret.isLoaded()) {
++ ret.load();
++ }
++ return ret;
++ }
++
++ final AtomicReference<PoiChunk> completed = new AtomicReference<>();
++ final AtomicBoolean isCompleted = new AtomicBoolean();
++ final Thread waiter = Thread.currentThread();
++ final Long poiLoadId = Long.valueOf(this.poiLoadCounter.getAndIncrement());
++ NewChunkHolder.GenericDataLoadTaskCallback loadTask = null;
++ final ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(chunkX, chunkZ); // Folia - use area based lock to reduce contention
++ try {
++ // Folia - use area based lock to reduce contention
++ this.addTicketAtLevel(TicketType.POI_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, poiLoadId);
++ final ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock.Node schedulingLock = this.taskScheduler.schedulingLockArea.lock(chunkX, chunkZ); // Folia - use area based lock to reduce contention
++ try {
++ current = this.getOrCreateChunkHolder(chunkX, chunkZ);
++ if (current.isPoiChunkLoaded()) {
++ this.removeTicketAtLevel(TicketType.POI_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, poiLoadId);
++ return current.getPoiChunk();
++ }
++
++ loadTask = current.getOrLoadPoiData((final GenericDataLoadTask.TaskResult<PoiChunk, Throwable> result) -> {
++ completed.setPlain(result.left());
++ isCompleted.set(true);
++ LockSupport.unpark(waiter);
++ });
++ final ChunkLoadTask.PoiDataLoadTask poiLoad = current.getPoiDataLoadTask();
++
++ if (poiLoad != null) {
++ poiLoad.raisePriority(PrioritisedExecutor.Priority.BLOCKING);
++ }
++ } finally {
++ this.taskScheduler.schedulingLockArea.unlock(schedulingLock); // Folia - use area based lock to reduce contention
++ }
++ } finally {
++ this.ticketLockArea.unlock(ticketLock); // Folia - use area based lock to reduce contention
++ }
++
++ if (loadTask != null) {
++ loadTask.schedule();
++ }
++
++ // Note: no need to busy wait on the chunk queue, poi load will complete off-main
++
++ boolean interrupted = false;
++ while (!isCompleted.get()) {
++ interrupted |= Thread.interrupted();
++ LockSupport.park();
++ }
++
++ if (interrupted) {
++ Thread.currentThread().interrupt();
++ }
++
++ ret = completed.getPlain();
++
++ ret.load();
++
++ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++ this.addAndRemoveTickets(chunkKey,
++ TicketType.UNKNOWN, MAX_TICKET_LEVEL, new ChunkPos(chunkX, chunkZ),
++ TicketType.POI_LOAD, MAX_TICKET_LEVEL, poiLoadId
++ );
++
++ return ret;
++ }
++
++ void addChangedStatuses(final List<NewChunkHolder> changedFullStatus) {
++ if (changedFullStatus.isEmpty()) {
++ return;
++ }
++ if (!TickThread.isTickThread()) {
++ this.taskScheduler.scheduleChunkTask(() -> {
++ final ArrayDeque<NewChunkHolder> pendingFullLoadUpdate = ChunkHolderManager.this.pendingFullLoadUpdate;
++ for (int i = 0, len = changedFullStatus.size(); i < len; ++i) {
++ pendingFullLoadUpdate.add(changedFullStatus.get(i));
++ }
++
++ ChunkHolderManager.this.processPendingFullUpdate();
++ }, PrioritisedExecutor.Priority.HIGHEST);
++ } else {
++ final ArrayDeque<NewChunkHolder> pendingFullLoadUpdate = this.pendingFullLoadUpdate;
++ for (int i = 0, len = changedFullStatus.size(); i < len; ++i) {
++ pendingFullLoadUpdate.add(changedFullStatus.get(i));
++ }
++ }
++ }
++
++ private void removeChunkHolder(final NewChunkHolder holder) {
++ holder.killed = true;
++ holder.vanillaChunkHolder.onChunkRemove();
++ this.autoSaveQueue.remove(holder);
++ ChunkSystem.onChunkHolderDelete(this.world, holder.vanillaChunkHolder);
++ synchronized (this.chunkHolders) {
++ this.chunkHolders.remove(CoordinateUtils.getChunkKey(holder.chunkX, holder.chunkZ));
++ }
++ }
++
++ // note: never call while inside the chunk system, this will absolutely break everything
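++ // Unloads run in three stages per section: stage 1 collects holders and calls
++ // unloadStage1 while holding the ticket and scheduling locks; stage 2 runs the
++ // expensive unload logic without those locks (ticket updates are blocked instead);
++ // stage 3 re-acquires the locks and either removes the holder or adds an
++ // UNLOAD_COOLDOWN ticket to delay the next attempt.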
++ public void processUnloads() {
++ TickThread.ensureTickThread("Cannot unload chunks off-main");
++
++ if (BLOCK_TICKET_UPDATES.get() == Boolean.TRUE) {
++ throw new IllegalStateException("Cannot unload chunks recursively");
++ }
++ final int sectionShift = this.unloadQueue.coordinateShift; // sectionShift <= lock shift
++ final List<ChunkQueue.SectionToUnload> unloadSectionsForRegion = this.unloadQueue.retrieveForAllRegions();
++ int unloadCountTentative = 0;
++ for (final ChunkQueue.SectionToUnload sectionRef : unloadSectionsForRegion) {
++ final ChunkQueue.UnloadSection section
++ = this.unloadQueue.getSectionUnsynchronized(sectionRef.sectionX(), sectionRef.sectionZ());
++
++ if (section == null) {
++ // removed concurrently
++ continue;
++ }
++
++ // technically, reading the size field here is racy, so the value may be incorrect.
++ // We assume the error cumulatively averages out over many ticks; if it did not,
++ // chunks could unload too slowly, or never unload at all.
++ unloadCountTentative += section.chunks.size();
++ }
++
++ if (unloadCountTentative <= 0) {
++ // no work to do
++ return;
++ }
++
++ // Note: the old behaviour of processing ticket updates while holding the lock has been dropped here, as it was racy.
++ // However, we still need to process updates here so that any ticket added and synchronised before this call is not missed.
++ this.processTicketUpdates();
++
++ final int toUnloadCount = Math.max(50, (int)(unloadCountTentative * 0.05));
++ int processedCount = 0;
++
++ for (final ChunkQueue.SectionToUnload sectionRef : unloadSectionsForRegion) {
++ final List<NewChunkHolder> stage1 = new ArrayList<>();
++ final List<NewChunkHolder.UnloadState> stage2 = new ArrayList<>();
++
++ final int sectionLowerX = sectionRef.sectionX() << sectionShift;
++ final int sectionLowerZ = sectionRef.sectionZ() << sectionShift;
++
++ // stage 1: set up for stage 2 while holding critical locks
++ ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(sectionLowerX, sectionLowerZ);
++ try {
++ final ReentrantAreaLock.Node scheduleLock = this.taskScheduler.schedulingLockArea.lock(sectionLowerX, sectionLowerZ);
++ try {
++ final ChunkQueue.UnloadSection section
++ = this.unloadQueue.getSectionUnsynchronized(sectionRef.sectionX(), sectionRef.sectionZ());
++
++ if (section == null) {
++ // removed concurrently
++ continue;
++ }
++
++ // collect the holders to run stage 1 on
++ final int sectionCount = section.chunks.size();
++
++ if ((sectionCount + processedCount) <= toUnloadCount) {
++ // we can just drain the entire section
++
++ for (final LongIterator iterator = section.chunks.iterator(); iterator.hasNext();) {
++ final NewChunkHolder holder = this.chunkHolders.get(iterator.nextLong());
++ if (holder == null) {
++ throw new IllegalStateException();
++ }
++ stage1.add(holder);
++ }
++
++ // remove section
++ this.unloadQueue.removeSection(sectionRef.sectionX(), sectionRef.sectionZ());
++ } else {
++ // processedCount + len = toUnloadCount
++ // we cannot drain the entire section
++ for (int i = 0, len = toUnloadCount - processedCount; i < len; ++i) {
++ final NewChunkHolder holder = this.chunkHolders.get(section.chunks.removeFirstLong());
++ if (holder == null) {
++ throw new IllegalStateException();
++ }
++ stage1.add(holder);
++ }
++ }
++
++ // run stage 1
++ for (int i = 0, len = stage1.size(); i < len; ++i) {
++ final NewChunkHolder chunkHolder = stage1.get(i);
++ if (chunkHolder.isSafeToUnload() != null) {
++ LOGGER.error("Chunkholder " + chunkHolder + " is not safe to unload but is inside the unload queue?");
++ continue;
++ }
++ final NewChunkHolder.UnloadState state = chunkHolder.unloadStage1();
++ if (state == null) {
++ // can unload immediately
++ this.removeChunkHolder(chunkHolder);
++ continue;
++ }
++ stage2.add(state);
++ }
++ } finally {
++ this.taskScheduler.schedulingLockArea.unlock(scheduleLock);
++ }
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++
++ // stage 2: invoke expensive unload logic, designed to run without locks thanks to stage 1
++ final List<NewChunkHolder> stage3 = new ArrayList<>(stage2.size());
++
++ final Boolean before = this.blockTicketUpdates();
++ try {
++ for (int i = 0, len = stage2.size(); i < len; ++i) {
++ final NewChunkHolder.UnloadState state = stage2.get(i);
++ final NewChunkHolder holder = state.holder();
++
++ holder.unloadStage2(state);
++ stage3.add(holder);
++ }
++ } finally {
++ this.unblockTicketUpdates(before);
++ }
++
++ // stage 3: actually attempt to remove the chunk holders
++ ticketLock = this.ticketLockArea.lock(sectionLowerX, sectionLowerZ);
++ try {
++ final ReentrantAreaLock.Node scheduleLock = this.taskScheduler.schedulingLockArea.lock(sectionLowerX, sectionLowerZ);
++ try {
++ for (int i = 0, len = stage3.size(); i < len; ++i) {
++ final NewChunkHolder holder = stage3.get(i);
++
++ if (holder.unloadStage3()) {
++ this.removeChunkHolder(holder);
++ } else {
++ // add cooldown so the next unload check is not immediately next tick
++ this.addTicketAtLevel(TicketType.UNLOAD_COOLDOWN, CoordinateUtils.getChunkKey(holder.chunkX, holder.chunkZ), MAX_TICKET_LEVEL, Unit.INSTANCE, false);
++ }
++ }
++ } finally {
++ this.taskScheduler.schedulingLockArea.unlock(scheduleLock);
++ }
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++
++ processedCount += stage1.size();
++
++ if (processedCount >= toUnloadCount) {
++ break;
++ }
++ }
++ }
++
++ public enum TicketOperationType {
++ ADD, REMOVE, ADD_IF_REMOVED, ADD_AND_REMOVE
++ }
++
++ public static record TicketOperation<T, V>(
++ TicketOperationType op, long chunkCoord,
++ TicketType<T> ticketType, int ticketLevel, T identifier,
++ TicketType<V> ticketType2, int ticketLevel2, V identifier2
++ ) {
++
++ private TicketOperation(TicketOperationType op, long chunkCoord,
++ TicketType<T> ticketType, int ticketLevel, T identifier) {
++ this(op, chunkCoord, ticketType, ticketLevel, identifier, null, 0, null);
++ }
++
++ public static <T> TicketOperation<T, T> addOp(final ChunkPos chunk, final TicketType<T> type, final int ticketLevel, final T identifier) {
++ return addOp(CoordinateUtils.getChunkKey(chunk), type, ticketLevel, identifier);
++ }
++
++ public static <T> TicketOperation<T, T> addOp(final int chunkX, final int chunkZ, final TicketType<T> type, final int ticketLevel, final T identifier) {
++ return addOp(CoordinateUtils.getChunkKey(chunkX, chunkZ), type, ticketLevel, identifier);
++ }
++
++ public static <T> TicketOperation<T, T> addOp(final long chunk, final TicketType<T> type, final int ticketLevel, final T identifier) {
++ return new TicketOperation<>(TicketOperationType.ADD, chunk, type, ticketLevel, identifier);
++ }
++
++ public static <T> TicketOperation<T, T> removeOp(final ChunkPos chunk, final TicketType<T> type, final int ticketLevel, final T identifier) {
++ return removeOp(CoordinateUtils.getChunkKey(chunk), type, ticketLevel, identifier);
++ }
++
++ public static <T> TicketOperation<T, T> removeOp(final int chunkX, final int chunkZ, final TicketType<T> type, final int ticketLevel, final T identifier) {
++ return removeOp(CoordinateUtils.getChunkKey(chunkX, chunkZ), type, ticketLevel, identifier);
++ }
++
++ public static <T> TicketOperation<T, T> removeOp(final long chunk, final TicketType<T> type, final int ticketLevel, final T identifier) {
++ return new TicketOperation<>(TicketOperationType.REMOVE, chunk, type, ticketLevel, identifier);
++ }
++
++ public static <T, V> TicketOperation<T, V> addIfRemovedOp(final long chunk,
++ final TicketType<T> addType, final int addLevel, final T addIdentifier,
++ final TicketType<V> removeType, final int removeLevel, final V removeIdentifier) {
++ return new TicketOperation<>(
++ TicketOperationType.ADD_IF_REMOVED, chunk, addType, addLevel, addIdentifier,
++ removeType, removeLevel, removeIdentifier
++ );
++ }
++
++ public static <T, V> TicketOperation<T, V> addAndRemove(final long chunk,
++ final TicketType<T> addType, final int addLevel, final T addIdentifier,
++ final TicketType<V> removeType, final int removeLevel, final V removeIdentifier) {
++ return new TicketOperation<>(
++ TicketOperationType.ADD_AND_REMOVE, chunk, addType, addLevel, addIdentifier,
++ removeType, removeLevel, removeIdentifier
++ );
++ }
++ }
++
++ private boolean processTicketOp(TicketOperation operation) {
++ boolean ret = false;
++ switch (operation.op) {
++ case ADD: {
++ ret |= this.addTicketAtLevel(operation.ticketType, operation.chunkCoord, operation.ticketLevel, operation.identifier);
++ break;
++ }
++ case REMOVE: {
++ ret |= this.removeTicketAtLevel(operation.ticketType, operation.chunkCoord, operation.ticketLevel, operation.identifier);
++ break;
++ }
++ case ADD_IF_REMOVED: {
++ ret |= this.addIfRemovedTicket(
++ operation.chunkCoord,
++ operation.ticketType, operation.ticketLevel, operation.identifier,
++ operation.ticketType2, operation.ticketLevel2, operation.identifier2
++ );
++ break;
++ }
++ case ADD_AND_REMOVE: {
++ ret = true;
++ this.addAndRemoveTickets(
++ operation.chunkCoord,
++ operation.ticketType, operation.ticketLevel, operation.identifier,
++ operation.ticketType2, operation.ticketLevel2, operation.identifier2
++ );
++ break;
++ }
++ }
++
++ return ret;
++ }
++
++ public void performTicketUpdates(final Collection<TicketOperation<?, ?>> operations) {
++ for (final TicketOperation<?, ?> operation : operations) {
++ this.processTicketOp(operation);
++ }
++ }
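++
++ // Illustrative usage (a sketch): batching ticket changes through one call:
++ // this.performTicketUpdates(List.of(
++ // TicketOperation.addOp(chunkX, chunkZ, TicketType.UNKNOWN, MAX_TICKET_LEVEL, new ChunkPos(chunkX, chunkZ))
++ // ));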
++
++ private final ThreadLocal<Boolean> BLOCK_TICKET_UPDATES = ThreadLocal.withInitial(() -> {
++ return Boolean.FALSE;
++ });
++
++ public Boolean blockTicketUpdates() {
++ final Boolean ret = BLOCK_TICKET_UPDATES.get();
++ BLOCK_TICKET_UPDATES.set(Boolean.TRUE);
++ return ret;
++ }
++
++ public void unblockTicketUpdates(final Boolean before) {
++ BLOCK_TICKET_UPDATES.set(before);
++ }
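++
++ // Typical usage (a sketch; processUnloads above follows this pattern):
++ // final Boolean before = this.blockTicketUpdates();
++ // try {
++ // // ... work that must not trigger ticket updates ...
++ // } finally {
++ // this.unblockTicketUpdates(before);
++ // }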
++
++ public boolean processTicketUpdates() {
++ return this.processTicketUpdates(true, true, null);
++ }
++
++ private static final ThreadLocal<List<ChunkProgressionTask>> CURRENT_TICKET_UPDATE_SCHEDULING = new ThreadLocal<>();
++
++ static List<ChunkProgressionTask> getCurrentTicketUpdateScheduling() {
++ return CURRENT_TICKET_UPDATE_SCHEDULING.get();
++ }
++
++ private boolean processTicketUpdates(final boolean checkLocks, final boolean processFullUpdates, List<ChunkProgressionTask> scheduledTasks) {
++ TickThread.ensureTickThread("Cannot process ticket levels off-main");
++ if (BLOCK_TICKET_UPDATES.get() == Boolean.TRUE) {
++ throw new IllegalStateException("Cannot update ticket level while unloading chunks or updating entity manager");
++ }
++
++ List<NewChunkHolder> changedFullStatus = null;
++
++ final boolean isTickThread = TickThread.isTickThread();
++
++ boolean ret = false;
++ final boolean canProcessFullUpdates = processFullUpdates & isTickThread;
++ final boolean canProcessScheduling = scheduledTasks == null;
++
++ if (this.ticketLevelPropagator.hasPendingUpdates()) {
++ if (scheduledTasks == null) {
++ scheduledTasks = new ArrayList<>();
++ }
++ changedFullStatus = new ArrayList<>();
++
++ ret |= this.ticketLevelPropagator.performUpdates(
++ this.ticketLockArea, this.taskScheduler.schedulingLockArea,
++ scheduledTasks, changedFullStatus
++ );
++ }
++
++ if (changedFullStatus != null) {
++ this.addChangedStatuses(changedFullStatus);
++ }
++
++ if (canProcessScheduling && scheduledTasks != null) {
++ for (int i = 0, len = scheduledTasks.size(); i < len; ++i) {
++ scheduledTasks.get(i).schedule();
++ }
++ }
++
++ if (canProcessFullUpdates) {
++ ret |= this.processPendingFullUpdate();
++ }
++
++ return ret;
++ }
++
++ // only call on tick thread
++ protected final boolean processPendingFullUpdate() {
++ final ArrayDeque<NewChunkHolder> pendingFullLoadUpdate = this.pendingFullLoadUpdate;
++
++ boolean ret = false;
++
++ List<NewChunkHolder> changedFullStatus = new ArrayList<>();
++
++ NewChunkHolder holder;
++ while ((holder = pendingFullLoadUpdate.poll()) != null) {
++ ret |= holder.handleFullStatusChange(changedFullStatus);
++
++ if (!changedFullStatus.isEmpty()) {
++ for (int i = 0, len = changedFullStatus.size(); i < len; ++i) {
++ pendingFullLoadUpdate.add(changedFullStatus.get(i));
++ }
++ changedFullStatus.clear();
++ }
++ }
++
++ return ret;
++ }
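++
++ // Note: handleFullStatusChange may change the full status of neighbouring holders;
++ // those are appended back onto the queue above, so the loop keeps draining until
++ // no further cascading status changes occur.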
++
++ public JsonObject getDebugJsonForWatchdog() {
++ return this.getDebugJsonNoLock();
++ }
++
++ private JsonObject getDebugJsonNoLock() {
++ final JsonObject ret = new JsonObject();
++ ret.addProperty("current_tick", Long.valueOf(this.currentTick));
++
++ final JsonArray unloadQueue = new JsonArray();
++ ret.add("unload_queue", unloadQueue);
++ ret.addProperty("lock_shift", Integer.valueOf(this.taskScheduler.getChunkSystemLockShift()));
++ ret.addProperty("ticket_shift", Integer.valueOf(ThreadedTicketLevelPropagator.SECTION_SHIFT));
++ ret.addProperty("region_shift", Integer.valueOf(this.world.getRegionChunkShift()));
++ for (final ChunkQueue.SectionToUnload section : this.unloadQueue.retrieveForAllRegions()) {
++ final JsonObject sectionJson = new JsonObject();
++ unloadQueue.add(sectionJson);
++ sectionJson.addProperty("sectionX", section.sectionX());
++ sectionJson.addProperty("sectionZ", section.sectionX());
++ sectionJson.addProperty("order", section.order());
++
++ final JsonArray coordinates = new JsonArray();
++ sectionJson.add("coordinates", coordinates);
++
++ final ChunkQueue.UnloadSection actualSection = this.unloadQueue.getSectionUnsynchronized(section.sectionX(), section.sectionZ());
++ for (final LongIterator iterator = actualSection.chunks.iterator(); iterator.hasNext();) {
++ final long coordinate = iterator.nextLong();
++
++ final JsonObject coordinateJson = new JsonObject();
++ coordinates.add(coordinateJson);
++
++ coordinateJson.addProperty("chunkX", Integer.valueOf(CoordinateUtils.getChunkX(coordinate)));
++ coordinateJson.addProperty("chunkZ", Integer.valueOf(CoordinateUtils.getChunkZ(coordinate)));
++ }
++ }
++
++ final JsonArray holders = new JsonArray();
++ ret.add("chunkholders", holders);
++
++ for (final NewChunkHolder holder : this.getChunkHolders()) {
++ holders.add(holder.getDebugJson());
++ }
++
++ // TODO
++ /*
++ final JsonArray removeTickToChunkExpireTicketCount = new JsonArray();
++ ret.add("remove_tick_to_chunk_expire_ticket_count", removeTickToChunkExpireTicketCount);
++
++ for (final Long2ObjectMap.Entry<Long2IntOpenHashMap> tickEntry : this.removeTickToChunkExpireTicketCount.long2ObjectEntrySet()) {
++ final long tick = tickEntry.getLongKey();
++ final Long2IntOpenHashMap coordinateToCount = tickEntry.getValue();
++
++ final JsonObject tickJson = new JsonObject();
++ removeTickToChunkExpireTicketCount.add(tickJson);
++
++ tickJson.addProperty("tick", Long.valueOf(tick));
++
++ final JsonArray tickEntries = new JsonArray();
++ tickJson.add("entries", tickEntries);
++
++ for (final Long2IntMap.Entry entry : coordinateToCount.long2IntEntrySet()) {
++ final long coordinate = entry.getLongKey();
++ final int count = entry.getIntValue();
++
++ final JsonObject entryJson = new JsonObject();
++ tickEntries.add(entryJson);
++
++ entryJson.addProperty("chunkX", Long.valueOf(CoordinateUtils.getChunkX(coordinate)));
++ entryJson.addProperty("chunkZ", Long.valueOf(CoordinateUtils.getChunkZ(coordinate)));
++ entryJson.addProperty("count", Integer.valueOf(count));
++ }
++ }
++
++ final JsonArray allTicketsJson = new JsonArray();
++ ret.add("tickets", allTicketsJson);
++
++ for (final Long2ObjectMap.Entry<SortedArraySet<Ticket<?>>> coordinateTickets : this.tickets.long2ObjectEntrySet()) {
++ final long coordinate = coordinateTickets.getLongKey();
++ final SortedArraySet<Ticket<?>> tickets = coordinateTickets.getValue();
++
++ final JsonObject coordinateJson = new JsonObject();
++ allTicketsJson.add(coordinateJson);
++
++ coordinateJson.addProperty("chunkX", Long.valueOf(CoordinateUtils.getChunkX(coordinate)));
++ coordinateJson.addProperty("chunkZ", Long.valueOf(CoordinateUtils.getChunkZ(coordinate)));
++
++ final JsonArray ticketsSerialized = new JsonArray();
++ coordinateJson.add("tickets", ticketsSerialized);
++
++ for (final Ticket<?> ticket : tickets) {
++ final JsonObject ticketSerialized = new JsonObject();
++ ticketsSerialized.add(ticketSerialized);
++
++ ticketSerialized.addProperty("type", ticket.getType().toString());
++ ticketSerialized.addProperty("level", Integer.valueOf(ticket.getTicketLevel()));
++ ticketSerialized.addProperty("identifier", Objects.toString(ticket.key));
++ ticketSerialized.addProperty("remove_tick", Long.valueOf(ticket.removalTick));
++ }
++ }
++ */
++
++ return ret;
++ }
++
++ public JsonObject getDebugJson() {
++ return this.getDebugJsonNoLock(); // Folia - use area based lock to reduce contention
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLightTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLightTask.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..86e618586d2ad9d843ad761b7336bb3073ed4c23
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLightTask.java
+@@ -0,0 +1,181 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.starlight.common.light.StarLightEngine;
++import ca.spottedleaf.starlight.common.light.StarLightInterface;
++import io.papermc.paper.chunk.system.light.LightQueue;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.world.level.ChunkPos;
++import net.minecraft.world.level.chunk.ChunkAccess;
++import net.minecraft.world.level.chunk.status.ChunkStatus;
++import net.minecraft.world.level.chunk.ProtoChunk;
++import org.apache.logging.log4j.LogManager;
++import org.apache.logging.log4j.Logger;
++import java.util.function.BooleanSupplier;
++
++public final class ChunkLightTask extends ChunkProgressionTask {
++
++ private static final Logger LOGGER = LogManager.getLogger();
++
++ protected final ChunkAccess fromChunk;
++
++ private final LightTaskPriorityHolder priorityHolder;
++
++ public ChunkLightTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX, final int chunkZ,
++ final ChunkAccess chunk, final PrioritisedExecutor.Priority priority) {
++ super(scheduler, world, chunkX, chunkZ);
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++ this.priorityHolder = new LightTaskPriorityHolder(priority, this);
++ this.fromChunk = chunk;
++ }
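++
++ // Note: unlike other progression tasks, light tasks are queued onto the Starlight
++ // LightQueue (see LightTaskPriorityHolder below), so all scheduling and priority
++ // operations delegate to that queue rather than to a generic executor.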
++
++ @Override
++ public boolean isScheduled() {
++ return this.priorityHolder.isScheduled();
++ }
++
++ @Override
++ public ChunkStatus getTargetStatus() {
++ return ChunkStatus.LIGHT;
++ }
++
++ @Override
++ public void schedule() {
++ this.priorityHolder.schedule();
++ }
++
++ @Override
++ public void cancel() {
++ this.priorityHolder.cancel();
++ }
++
++ @Override
++ public PrioritisedExecutor.Priority getPriority() {
++ return this.priorityHolder.getPriority();
++ }
++
++ @Override
++ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
++ this.priorityHolder.raisePriority(priority);
++ }
++
++ @Override
++ public void setPriority(final PrioritisedExecutor.Priority priority) {
++ this.priorityHolder.setPriority(priority);
++ }
++
++ @Override
++ public void raisePriority(final PrioritisedExecutor.Priority priority) {
++ this.priorityHolder.raisePriority(priority);
++ }
++
++ private static final class LightTaskPriorityHolder extends PriorityHolder {
++
++ protected final ChunkLightTask task;
++
++ protected LightTaskPriorityHolder(final PrioritisedExecutor.Priority priority, final ChunkLightTask task) {
++ super(priority);
++ this.task = task;
++ }
++
++ @Override
++ protected void cancelScheduled() {
++ final ChunkLightTask task = this.task;
++ task.complete(null, null);
++ }
++
++ @Override
++ protected PrioritisedExecutor.Priority getScheduledPriority() {
++ final ChunkLightTask task = this.task;
++ return task.world.getChunkSource().getLightEngine().theLightEngine.lightQueue.getPriority(task.chunkX, task.chunkZ);
++ }
++
++ @Override
++ protected void scheduleTask(final PrioritisedExecutor.Priority priority) {
++ final ChunkLightTask task = this.task;
++ final StarLightInterface starLightInterface = task.world.getChunkSource().getLightEngine().theLightEngine;
++ final LightQueue lightQueue = starLightInterface.lightQueue;
++ lightQueue.queueChunkLightTask(new ChunkPos(task.chunkX, task.chunkZ), new LightTask(starLightInterface, task), priority);
++ lightQueue.setPriority(task.chunkX, task.chunkZ, priority);
++ }
++
++ @Override
++ protected void lowerPriorityScheduled(final PrioritisedExecutor.Priority priority) {
++ final ChunkLightTask task = this.task;
++ final StarLightInterface starLightInterface = task.world.getChunkSource().getLightEngine().theLightEngine;
++ final LightQueue lightQueue = starLightInterface.lightQueue;
++ lightQueue.lowerPriority(task.chunkX, task.chunkZ, priority);
++ }
++
++ @Override
++ protected void setPriorityScheduled(final PrioritisedExecutor.Priority priority) {
++ final ChunkLightTask task = this.task;
++ final StarLightInterface starLightInterface = task.world.getChunkSource().getLightEngine().theLightEngine;
++ final LightQueue lightQueue = starLightInterface.lightQueue;
++ lightQueue.setPriority(task.chunkX, task.chunkZ, priority);
++ }
++
++ @Override
++ protected void raisePriorityScheduled(final PrioritisedExecutor.Priority priority) {
++ final ChunkLightTask task = this.task;
++ final StarLightInterface starLightInterface = task.world.getChunkSource().getLightEngine().theLightEngine;
++ final LightQueue lightQueue = starLightInterface.lightQueue;
++ lightQueue.raisePriority(task.chunkX, task.chunkZ, priority);
++ }
++ }
++
++ private static final class LightTask implements BooleanSupplier {
++
++ protected final StarLightInterface lightEngine;
++ protected final ChunkLightTask task;
++
++ public LightTask(final StarLightInterface lightEngine, final ChunkLightTask task) {
++ this.lightEngine = lightEngine;
++ this.task = task;
++ }
++
++ @Override
++ public boolean getAsBoolean() {
++ final ChunkLightTask task = this.task;
++ // executed on light thread
++ if (!task.priorityHolder.markExecuting()) {
++ // cancelled
++ return false;
++ }
++
++ try {
++ final Boolean[] emptySections = StarLightEngine.getEmptySectionsForChunk(task.fromChunk);
++
++ if (task.fromChunk.isLightCorrect() && task.fromChunk.getStatus().isOrAfter(ChunkStatus.LIGHT)) {
++ this.lightEngine.forceLoadInChunk(task.fromChunk, emptySections);
++ this.lightEngine.checkChunkEdges(task.chunkX, task.chunkZ);
++ } else {
++ task.fromChunk.setLightCorrect(false);
++ this.lightEngine.lightChunk(task.fromChunk, emptySections);
++ task.fromChunk.setLightCorrect(true);
++ }
++ // we need to advance status
++ if (task.fromChunk instanceof ProtoChunk chunk && chunk.getStatus() == ChunkStatus.LIGHT.getParent()) {
++ chunk.setStatus(ChunkStatus.LIGHT);
++ }
++ } catch (final Throwable thr) {
++ if (!(thr instanceof ThreadDeath)) {
++ LOGGER.fatal("Failed to light chunk " + task.fromChunk.getPos().toString() + " in world '" + this.lightEngine.getWorld().getWorld().getName() + "'", thr);
++ }
++
++ task.complete(null, thr);
++
++ if (thr instanceof ThreadDeath) {
++ throw (ThreadDeath)thr;
++ }
++
++ return true;
++ }
++
++ task.complete(task.fromChunk, null);
++ return true;
++ }
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLoadTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLoadTask.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..5ff994d5af24b0bdd7b3a16e245b2c4100bef3f0
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLoadTask.java
+@@ -0,0 +1,484 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock;
++import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
++import ca.spottedleaf.dataconverter.minecraft.MCDataConverter;
++import ca.spottedleaf.dataconverter.minecraft.datatypes.MCTypeRegistry;
++import com.mojang.logging.LogUtils;
++import io.papermc.paper.chunk.system.io.RegionFileIOThread;
++import io.papermc.paper.chunk.system.poi.PoiChunk;
++import net.minecraft.SharedConstants;
++import net.minecraft.core.registries.Registries;
++import net.minecraft.nbt.CompoundTag;
++import net.minecraft.server.level.ChunkMap;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.world.level.ChunkPos;
++import net.minecraft.world.level.chunk.ChunkAccess;
++import net.minecraft.world.level.chunk.status.ChunkStatus;
++import net.minecraft.world.level.chunk.ProtoChunk;
++import net.minecraft.world.level.chunk.UpgradeData;
++import net.minecraft.world.level.chunk.storage.ChunkSerializer;
++import net.minecraft.world.level.chunk.storage.EntityStorage;
++import net.minecraft.world.level.levelgen.blending.BlendingData;
++import org.slf4j.Logger;
++import java.lang.invoke.VarHandle;
++import java.util.Map;
++import java.util.concurrent.atomic.AtomicInteger;
++import java.util.function.Consumer;
++
++public final class ChunkLoadTask extends ChunkProgressionTask {
++
++ private static final Logger LOGGER = LogUtils.getClassLogger();
++
++ private final NewChunkHolder chunkHolder;
++ private final ChunkDataLoadTask loadTask;
++
++ private volatile boolean cancelled;
++ private NewChunkHolder.GenericDataLoadTaskCallback entityLoadTask;
++ private NewChunkHolder.GenericDataLoadTaskCallback poiLoadTask;
++ private GenericDataLoadTask.TaskResult<ChunkAccess, Throwable> loadResult;
++ private final AtomicInteger taskCountToComplete = new AtomicInteger(3); // one for poi, one for entity, and one for chunk data
++
++ protected ChunkLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX, final int chunkZ,
++ final NewChunkHolder chunkHolder, final PrioritisedExecutor.Priority priority) {
++ super(scheduler, world, chunkX, chunkZ);
++ this.chunkHolder = chunkHolder;
++ this.loadTask = new ChunkDataLoadTask(scheduler, world, chunkX, chunkZ, priority);
++ this.loadTask.addCallback((final GenericDataLoadTask.TaskResult<ChunkAccess, Throwable> result) -> {
++ ChunkLoadTask.this.loadResult = result; // must be before getAndDecrement
++ ChunkLoadTask.this.tryCompleteLoad();
++ });
++ }
++
++ private void tryCompleteLoad() {
++ if (this.taskCountToComplete.decrementAndGet() == 0) {
++ final GenericDataLoadTask.TaskResult<ChunkAccess, Throwable> result = this.cancelled ? null : this.loadResult; // only after the getAndDecrement
++ ChunkLoadTask.this.complete(result == null ? null : result.left(), result == null ? null : result.right());
++ }
++ }
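++
++ // Completion protocol: taskCountToComplete starts at 3 (poi, entity, chunk data).
++ // Each subtask completion calls tryCompleteLoad(); when a subtask was never
++ // scheduled, schedule()/cancel() perform the matching decrement instead. The final
++ // decrement publishes loadResult (or null when cancelled) via complete().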
++
++ @Override
++ public ChunkStatus getTargetStatus() {
++ return ChunkStatus.EMPTY;
++ }
++
++ private boolean scheduled;
++
++ @Override
++ public boolean isScheduled() {
++ return this.scheduled;
++ }
++
++ @Override
++ public void schedule() {
++ final NewChunkHolder.GenericDataLoadTaskCallback entityLoadTask;
++ final NewChunkHolder.GenericDataLoadTaskCallback poiLoadTask;
++
++ final Consumer<GenericDataLoadTask.TaskResult<?, ?>> scheduleLoadTask = (final GenericDataLoadTask.TaskResult<?, ?> result) -> {
++ ChunkLoadTask.this.tryCompleteLoad();
++ };
++
++ // NOTE: it is IMPOSSIBLE for getOrLoadEntityData/getOrLoadPoiData to complete synchronously, because
++ // they must schedule a task to off main or to on main to complete
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
++ try {
++ if (this.scheduled) {
++ throw new IllegalStateException("schedule() called twice");
++ }
++ this.scheduled = true;
++ if (this.cancelled) {
++ return;
++ }
++ if (!this.chunkHolder.isEntityChunkNBTLoaded()) {
++ entityLoadTask = this.chunkHolder.getOrLoadEntityData((Consumer)scheduleLoadTask);
++ } else {
++ entityLoadTask = null;
++ this.taskCountToComplete.getAndDecrement(); // we know the chunk load is not done here, as it is not scheduled
++ }
++
++ if (!this.chunkHolder.isPoiChunkLoaded()) {
++ poiLoadTask = this.chunkHolder.getOrLoadPoiData((Consumer)scheduleLoadTask);
++ } else {
++ poiLoadTask = null;
++ this.taskCountToComplete.getAndDecrement(); // we know the chunk load is not done here, as it is not scheduled
++ }
++
++ this.entityLoadTask = entityLoadTask;
++ this.poiLoadTask = poiLoadTask;
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++
++ if (entityLoadTask != null) {
++ entityLoadTask.schedule();
++ }
++
++ if (poiLoadTask != null) {
++ poiLoadTask.schedule();
++ }
++
++ this.loadTask.schedule(false);
++ }
++
++ @Override
++ public void cancel() {
++ // must be before load task access, so we can synchronise with the writes to the fields
++ final boolean scheduled;
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
++ try {
++ // must read the field here, as it may be written concurrently later -
++ // we need to know if we scheduled _before_ cancellation
++ scheduled = this.scheduled;
++ this.cancelled = true;
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++
++ /*
++ Note: The entityLoadTask/poiLoadTask do not complete when cancelled,
++ so we need to manually try to complete in those cases
++ It is also important to note that we set the cancelled field first, just in case
++ the chunk load task attempts to complete with a non-null value
++ */
++
++ if (scheduled) {
++ // since we scheduled, we need to cancel the tasks
++ if (this.entityLoadTask != null) {
++ if (this.entityLoadTask.cancel()) {
++ this.tryCompleteLoad();
++ }
++ }
++ if (this.poiLoadTask != null) {
++ if (this.poiLoadTask.cancel()) {
++ this.tryCompleteLoad();
++ }
++ }
++ } else {
++ // since nothing was scheduled, we need to decrement the task count here ourselves
++
++ // for entity load task
++ this.tryCompleteLoad();
++
++ // for poi load task
++ this.tryCompleteLoad();
++ }
++ this.loadTask.cancel();
++ }
++
++ @Override
++ public PrioritisedExecutor.Priority getPriority() {
++ return this.loadTask.getPriority();
++ }
++
++ @Override
++ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
++ final EntityDataLoadTask entityLoad = this.chunkHolder.getEntityDataLoadTask();
++ if (entityLoad != null) {
++ entityLoad.lowerPriority(priority);
++ }
++
++ final PoiDataLoadTask poiLoad = this.chunkHolder.getPoiDataLoadTask();
++
++ if (poiLoad != null) {
++ poiLoad.lowerPriority(priority);
++ }
++
++ this.loadTask.lowerPriority(priority);
++ }
++
++ @Override
++ public void setPriority(final PrioritisedExecutor.Priority priority) {
++ final EntityDataLoadTask entityLoad = this.chunkHolder.getEntityDataLoadTask();
++ if (entityLoad != null) {
++ entityLoad.setPriority(priority);
++ }
++
++ final PoiDataLoadTask poiLoad = this.chunkHolder.getPoiDataLoadTask();
++
++ if (poiLoad != null) {
++ poiLoad.setPriority(priority);
++ }
++
++ this.loadTask.setPriority(priority);
++ }
++
++ @Override
++ public void raisePriority(final PrioritisedExecutor.Priority priority) {
++ final EntityDataLoadTask entityLoad = this.chunkHolder.getEntityDataLoadTask();
++ if (entityLoad != null) {
++ entityLoad.raisePriority(priority);
++ }
++
++ final PoiDataLoadTask poiLoad = this.chunkHolder.getPoiDataLoadTask();
++
++ if (poiLoad != null) {
++ poiLoad.raisePriority(priority);
++ }
++
++ this.loadTask.raisePriority(priority);
++ }
++
++ protected static abstract class CallbackDataLoadTask<OnMain, FinalCompletion> extends GenericDataLoadTask<OnMain, FinalCompletion> {
++
++ private TaskResult<FinalCompletion, Throwable> result;
++ private final MultiThreadedQueue<Consumer<TaskResult<FinalCompletion, Throwable>>> waiters = new MultiThreadedQueue<>();
++
++ protected volatile boolean completed;
++ protected static final VarHandle COMPLETED_HANDLE = ConcurrentUtil.getVarHandle(CallbackDataLoadTask.class, "completed", boolean.class);
++
++ protected CallbackDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
++ final int chunkZ, final RegionFileIOThread.RegionFileType type,
++ final PrioritisedExecutor.Priority priority) {
++ super(scheduler, world, chunkX, chunkZ, type, priority);
++ }
++
++ public void addCallback(final Consumer<TaskResult<FinalCompletion, Throwable>> consumer) {
++ if (!this.waiters.add(consumer)) {
++ try {
++ consumer.accept(this.result);
++ } catch (final Throwable throwable) {
++ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
++ "Consumer", ChunkTaskScheduler.stringIfNull(consumer),
++ "Completed throwable", ChunkTaskScheduler.stringIfNull(this.result.right())
++ ), throwable);
++ if (throwable instanceof ThreadDeath) {
++ throw (ThreadDeath)throwable;
++ }
++ }
++ }
++ }
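++
++ // Note: waiters.add is assumed to fail only after pollOrBlockAdds() has shut the
++ // queue in onComplete below, in which case the task already completed and the
++ // consumer is invoked inline on the calling thread with the stored result.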
++
++ @Override
++ protected void onComplete(final TaskResult<FinalCompletion, Throwable> result) {
++ if ((boolean)COMPLETED_HANDLE.getAndSet((CallbackDataLoadTask)this, (boolean)true)) {
++ throw new IllegalStateException("Already completed");
++ }
++ this.result = result;
++ Consumer<TaskResult<FinalCompletion, Throwable>> consumer;
++ while ((consumer = this.waiters.pollOrBlockAdds()) != null) {
++ try {
++ consumer.accept(result);
++ } catch (final Throwable throwable) {
++ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
++ "Consumer", ChunkTaskScheduler.stringIfNull(consumer),
++ "Completed throwable", ChunkTaskScheduler.stringIfNull(result.right())
++ ), throwable);
++ if (throwable instanceof ThreadDeath) {
++ throw (ThreadDeath)throwable;
++ }
++ return;
++ }
++ }
++ }
++ }
++
++ public static final class ChunkDataLoadTask extends CallbackDataLoadTask<ChunkAccess, ChunkAccess> {
++ protected ChunkDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
++ final int chunkZ, final PrioritisedExecutor.Priority priority) {
++ super(scheduler, world, chunkX, chunkZ, RegionFileIOThread.RegionFileType.CHUNK_DATA, priority);
++ }
++
++ @Override
++ protected boolean hasOffMain() {
++ return true;
++ }
++
++ @Override
++ protected boolean hasOnMain() {
++ return false;
++ }
++
++ @Override
++ protected PrioritisedExecutor.PrioritisedTask createOffMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
++ return this.scheduler.loadExecutor.createTask(run, priority);
++ }
++
++ @Override
++ protected PrioritisedExecutor.PrioritisedTask createOnMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
++ throw new UnsupportedOperationException();
++ }
++
++ @Override
++ protected TaskResult<ChunkAccess, Throwable> completeOnMainOffMain(final ChunkAccess data, final Throwable throwable) {
++ throw new UnsupportedOperationException();
++ }
++
++ private ProtoChunk getEmptyChunk() {
++ return new ProtoChunk(
++ new ChunkPos(this.chunkX, this.chunkZ), UpgradeData.EMPTY, this.world,
++ this.world.registryAccess().registryOrThrow(Registries.BIOME), (BlendingData)null
++ );
++ }
++
++ @Override
++ protected TaskResult<ChunkAccess, Throwable> runOffMain(final CompoundTag data, final Throwable throwable) {
++ if (throwable != null) {
++ LOGGER.error("Failed to load chunk data for task: " + this.toString() + ", chunk data will be lost", throwable);
++ return new TaskResult<>(this.getEmptyChunk(), null);
++ }
++
++ if (data == null) {
++ return new TaskResult<>(this.getEmptyChunk(), null);
++ }
++
++ // need to convert data, and then deserialize it
++
++ try {
++ final ChunkPos chunkPos = new ChunkPos(this.chunkX, this.chunkZ);
++ final ChunkMap chunkMap = this.world.getChunkSource().chunkMap;
++ // run converters
++ // note: upgradeChunkTag copies the data already
++ final CompoundTag converted = chunkMap.upgradeChunkTag(
++ this.world.getTypeKey(), chunkMap.overworldDataStorage, data, chunkMap.generator.getTypeNameForDataFixer(),
++ chunkPos, this.world
++ );
++ // deserialize
++ final ChunkSerializer.InProgressChunkHolder chunkHolder = ChunkSerializer.readInProgressChunkHolder(
++ this.world, chunkMap.getPoiManager(), chunkPos, converted
++ );
++
++ return new TaskResult<>(chunkHolder.protoChunk, null);
++ } catch (final ThreadDeath death) {
++ throw death;
++ } catch (final Throwable thr2) {
++ LOGGER.error("Failed to parse chunk data for task: " + this.toString() + ", chunk data will be lost", thr2);
++ return new TaskResult<>(this.getEmptyChunk(), null);
++ }
++ }
++
++ @Override
++ protected TaskResult<ChunkAccess, Throwable> runOnMain(final ChunkAccess data, final Throwable throwable) {
++ throw new UnsupportedOperationException();
++ }
++ }
++
++ public static final class PoiDataLoadTask extends CallbackDataLoadTask<PoiChunk, PoiChunk> {
++ public PoiDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
++ final int chunkZ, final PrioritisedExecutor.Priority priority) {
++ super(scheduler, world, chunkX, chunkZ, RegionFileIOThread.RegionFileType.POI_DATA, priority);
++ }
++
++ @Override
++ protected boolean hasOffMain() {
++ return true;
++ }
++
++ @Override
++ protected boolean hasOnMain() {
++ return false;
++ }
++
++ @Override
++ protected PrioritisedExecutor.PrioritisedTask createOffMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
++ return this.scheduler.loadExecutor.createTask(run, priority);
++ }
++
++ @Override
++ protected PrioritisedExecutor.PrioritisedTask createOnMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
++ throw new UnsupportedOperationException();
++ }
++
++ @Override
++ protected TaskResult<PoiChunk, Throwable> completeOnMainOffMain(final PoiChunk data, final Throwable throwable) {
++ throw new UnsupportedOperationException();
++ }
++
++ @Override
++ protected TaskResult<PoiChunk, Throwable> runOffMain(CompoundTag data, final Throwable throwable) {
++ if (throwable != null) {
++ LOGGER.error("Failed to load poi data for task: " + this.toString() + ", poi data will be lost", throwable);
++ return new TaskResult<>(PoiChunk.empty(this.world, this.chunkX, this.chunkZ), null);
++ }
++
++ if (data == null || data.isEmpty()) {
++ // nothing to do
++ return new TaskResult<>(PoiChunk.empty(this.world, this.chunkX, this.chunkZ), null);
++ }
++
++ try {
++ data = data.copy(); // coming from the I/O thread, so we need to copy
++ // run converters
++ final int dataVersion = !data.contains(SharedConstants.DATA_VERSION_TAG, net.minecraft.nbt.Tag.TAG_ANY_NUMERIC) ? 1945 : data.getInt(SharedConstants.DATA_VERSION_TAG);
++ final CompoundTag converted = MCDataConverter.convertTag(
++ MCTypeRegistry.POI_CHUNK, data, dataVersion, SharedConstants.getCurrentVersion().getDataVersion().getVersion()
++ );
++
++ // now we need to parse it
++ return new TaskResult<>(PoiChunk.parse(this.world, this.chunkX, this.chunkZ, converted), null);
++ } catch (final ThreadDeath death) {
++ throw death;
++ } catch (final Throwable thr2) {
++ LOGGER.error("Failed to run parse poi data for task: " + this.toString() + ", poi data will be lost", thr2);
++ return new TaskResult<>(PoiChunk.empty(this.world, this.chunkX, this.chunkZ), null);
++ }
++ }
++
++ @Override
++ protected TaskResult<PoiChunk, Throwable> runOnMain(final PoiChunk data, final Throwable throwable) {
++ throw new UnsupportedOperationException();
++ }
++ }
++
++ public static final class EntityDataLoadTask extends CallbackDataLoadTask<CompoundTag, CompoundTag> {
++
++ public EntityDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
++ final int chunkZ, final PrioritisedExecutor.Priority priority) {
++ super(scheduler, world, chunkX, chunkZ, RegionFileIOThread.RegionFileType.ENTITY_DATA, priority);
++ }
++
++ @Override
++ protected boolean hasOffMain() {
++ return true;
++ }
++
++ @Override
++ protected boolean hasOnMain() {
++ return false;
++ }
++
++ @Override
++ protected PrioritisedExecutor.PrioritisedTask createOffMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
++ return this.scheduler.loadExecutor.createTask(run, priority);
++ }
++
++ @Override
++ protected PrioritisedExecutor.PrioritisedTask createOnMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
++ throw new UnsupportedOperationException();
++ }
++
++ @Override
++ protected TaskResult<CompoundTag, Throwable> completeOnMainOffMain(final CompoundTag data, final Throwable throwable) {
++ throw new UnsupportedOperationException();
++ }
++
++ @Override
++ protected TaskResult<CompoundTag, Throwable> runOffMain(final CompoundTag data, final Throwable throwable) {
++ if (throwable != null) {
++ LOGGER.error("Failed to load entity data for task: " + this.toString() + ", entity data will be lost", throwable);
++ return new TaskResult<>(null, null);
++ }
++
++ if (data == null || data.isEmpty()) {
++ // nothing to do
++ return new TaskResult<>(null, null);
++ }
++
++ try {
++ // note: data comes from the I/O thread, so we need to copy it
++ return new TaskResult<>(EntityStorage.upgradeChunkTag(data.copy()), null);
++ } catch (final ThreadDeath death) {
++ throw death;
++ } catch (final Throwable thr2) {
++ LOGGER.error("Failed to run converters for entity data for task: " + this.toString() + ", entity data will be lost", thr2);
++ return new TaskResult<>(null, thr2);
++ }
++ }
++
++ @Override
++ protected TaskResult<CompoundTag, Throwable> runOnMain(final CompoundTag data, final Throwable throwable) {
++ throw new UnsupportedOperationException();
++ }
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkProgressionTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkProgressionTask.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..b2341328bb22f08836ef18785dc27393a36ce8d6
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkProgressionTask.java
+@@ -0,0 +1,105 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.world.level.chunk.ChunkAccess;
++import net.minecraft.world.level.chunk.status.ChunkStatus;
++import java.lang.invoke.VarHandle;
++import java.util.Map;
++import java.util.function.BiConsumer;
++
++public abstract class ChunkProgressionTask {
++
++ private final MultiThreadedQueue<BiConsumer<ChunkAccess, Throwable>> waiters = new MultiThreadedQueue<>();
++ private ChunkAccess completedChunk;
++ private Throwable completedThrowable;
++
++ protected final ChunkTaskScheduler scheduler;
++ protected final ServerLevel world;
++ protected final int chunkX;
++ protected final int chunkZ;
++
++ protected volatile boolean completed;
++ protected static final VarHandle COMPLETED_HANDLE = ConcurrentUtil.getVarHandle(ChunkProgressionTask.class, "completed", boolean.class);
++
++ protected ChunkProgressionTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX, final int chunkZ) {
++ this.scheduler = scheduler;
++ this.world = world;
++ this.chunkX = chunkX;
++ this.chunkZ = chunkZ;
++ }
++
++ // Used only for debug json
++ public abstract boolean isScheduled();
++
++ // Note: It is the responsibility of the task to set the chunk's status once it has completed
++ public abstract ChunkStatus getTargetStatus();
++
++ /* Only executed once */
++ /* Implementations must be prepared to handle cases where cancel() is called before schedule() */
++ public abstract void schedule();
++
++ /* May be called multiple times */
++ public abstract void cancel();
++
++ public abstract PrioritisedExecutor.Priority getPriority();
++
++ /* Schedule lock is always held for the priority update calls */
++
++ public abstract void lowerPriority(final PrioritisedExecutor.Priority priority);
++
++ public abstract void setPriority(final PrioritisedExecutor.Priority priority);
++
++ public abstract void raisePriority(final PrioritisedExecutor.Priority priority);
++
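++ // Completion protocol: complete0() drains this queue via pollOrBlockAdds(), which also
++ // prevents further add() calls from succeeding; thus, a failed add() below means the task
++ // has already completed and completedChunk/completedThrowable are safe to read.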
++ public final void onComplete(final BiConsumer<ChunkAccess, Throwable> onComplete) {
++ if (!this.waiters.add(onComplete)) {
++ try {
++ onComplete.accept(this.completedChunk, this.completedThrowable);
++ } catch (final Throwable throwable) {
++ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
++ "Consumer", ChunkTaskScheduler.stringIfNull(onComplete),
++ "Completed throwable", ChunkTaskScheduler.stringIfNull(this.completedThrowable)
++ ), throwable);
++ if (throwable instanceof ThreadDeath) {
++ throw (ThreadDeath)throwable;
++ }
++ }
++ }
++ }
++
++ protected final void complete(final ChunkAccess chunk, final Throwable throwable) {
++ try {
++ this.complete0(chunk, throwable);
++ } catch (final Throwable thr2) {
++ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
++ "Completed throwable", ChunkTaskScheduler.stringIfNull(throwable)
++ ), thr2);
++ if (thr2 instanceof ThreadDeath) {
++ throw (ThreadDeath)thr2;
++ }
++ }
++ }
++
++ private void complete0(final ChunkAccess chunk, final Throwable throwable) {
++ if ((boolean)COMPLETED_HANDLE.getAndSet((ChunkProgressionTask)this, (boolean)true)) {
++ throw new IllegalStateException("Already completed");
++ }
++ this.completedChunk = chunk;
++ this.completedThrowable = throwable;
++
++ BiConsumer<ChunkAccess, Throwable> consumer;
++ while ((consumer = this.waiters.pollOrBlockAdds()) != null) {
++ consumer.accept(chunk, throwable);
++ }
++ }
++
++ @Override
++ public String toString() {
++ return "ChunkProgressionTask{class: " + this.getClass().getName() + ", for world: " + this.world.getWorld().getName() +
++ ", chunk: (" + this.chunkX + "," + this.chunkZ + "), hashcode: " + System.identityHashCode(this) + ", priority: " + this.getPriority() +
++ ", status: " + this.getTargetStatus().toString() + ", scheduled: " + this.isScheduled() + "}";
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkQueue.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkQueue.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..4cc1b3ba6d093a9683dbd8b7fe76106ae391e019
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkQueue.java
+@@ -0,0 +1,160 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import it.unimi.dsi.fastutil.HashCommon;
++import it.unimi.dsi.fastutil.longs.LongLinkedOpenHashSet;
++import java.util.ArrayList;
++import java.util.List;
++import java.util.Map;
++import java.util.concurrent.ConcurrentHashMap;
++import java.util.concurrent.atomic.AtomicLong;
++
++public final class ChunkQueue {
++
++ public final int coordinateShift;
++ private final AtomicLong orderGenerator = new AtomicLong();
++ private final ConcurrentHashMap<Coordinate, UnloadSection> unloadSections = new ConcurrentHashMap<>();
++
++ /*
++ * Note: write operations do not occur in parallel for any given section.
++ * Note: coordinateShift <= region shift in order for retrieveForCurrentRegion() to function correctly
++ */
++
++ public ChunkQueue(final int coordinateShift) {
++ this.coordinateShift = coordinateShift;
++ }
++
++ public static record SectionToUnload(int sectionX, int sectionZ, Coordinate coord, long order, int count) {}
++
++ public List<SectionToUnload> retrieveForAllRegions() {
++ final List<SectionToUnload> ret = new ArrayList<>();
++
++ for (final Map.Entry<Coordinate, UnloadSection> entry : this.unloadSections.entrySet()) {
++ final Coordinate coord = entry.getKey();
++ final long key = coord.key;
++ final UnloadSection section = entry.getValue();
++ final int sectionX = Coordinate.x(key);
++ final int sectionZ = Coordinate.z(key);
++
++ ret.add(new SectionToUnload(sectionX, sectionZ, coord, section.order, section.chunks.size()));
++ }
++
++ ret.sort((final SectionToUnload s1, final SectionToUnload s2) -> {
++ return Long.compare(s1.order, s2.order);
++ });
++
++ return ret;
++ }
++
++ public UnloadSection getSectionUnsynchronized(final int sectionX, final int sectionZ) {
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++ return this.unloadSections.get(coordinate);
++ }
++
++ public UnloadSection removeSection(final int sectionX, final int sectionZ) {
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++ return this.unloadSections.remove(coordinate);
++ }
++
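++ // Section coordinates are chunk coordinates arithmetically shifted down by coordinateShift;
++ // e.g. with a shift of 5, chunk (33, -1) falls in section (1, -1), since 33 >> 5 == 1 and
++ // -1 >> 5 == -1 (the arithmetic shift rounds toward negative infinity).
++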
++ // write operation
++ public boolean addChunk(final int chunkX, final int chunkZ) {
++ final int shift = this.coordinateShift;
++ final int sectionX = chunkX >> shift;
++ final int sectionZ = chunkZ >> shift;
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++ final long chunkKey = Coordinate.key(chunkX, chunkZ);
++
++ UnloadSection section = this.unloadSections.get(coordinate);
++ if (section == null) {
++ section = new UnloadSection(this.orderGenerator.getAndIncrement());
++ // write operations do not occur in parallel for a given section
++ this.unloadSections.put(coordinate, section);
++ }
++
++ return section.chunks.add(chunkKey);
++ }
++
++ // write operation
++ public boolean removeChunk(final int chunkX, final int chunkZ) {
++ final int shift = this.coordinateShift;
++ final int sectionX = chunkX >> shift;
++ final int sectionZ = chunkZ >> shift;
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++ final long chunkKey = Coordinate.key(chunkX, chunkZ);
++
++ final UnloadSection section = this.unloadSections.get(coordinate);
++
++ if (section == null) {
++ return false;
++ }
++
++ if (!section.chunks.remove(chunkKey)) {
++ return false;
++ }
++
++ if (section.chunks.isEmpty()) {
++ this.unloadSections.remove(coordinate);
++ }
++
++ return true;
++ }
++
++ public static final class UnloadSection {
++
++ public final long order;
++ public final LongLinkedOpenHashSet chunks = new LongLinkedOpenHashSet();
++
++ public UnloadSection(final long order) {
++ this.order = order;
++ }
++ }
++
++ private static final class Coordinate implements Comparable<Coordinate> {
++
++ public final long key;
++
++ public Coordinate(final long key) {
++ this.key = key;
++ }
++
++ public Coordinate(final int x, final int z) {
++ this.key = key(x, z);
++ }
++
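++ // Example: key(-2, 3) == 0x00000003FFFFFFFEL; unpacking gives x(key(-2, 3)) == -2 (the int
++ // cast keeps the low 32 bits, preserving the sign) and z(key(-2, 3)) == 3 (the high 32 bits).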
++ public static long key(final int x, final int z) {
++ return ((long)z << 32) | (x & 0xFFFFFFFFL);
++ }
++
++ public static int x(final long key) {
++ return (int)key;
++ }
++
++ public static int z(final long key) {
++ return (int)(key >>> 32);
++ }
++
++ @Override
++ public int hashCode() {
++ return (int)HashCommon.mix(this.key);
++ }
++
++ @Override
++ public boolean equals(final Object obj) {
++ if (this == obj) {
++ return true;
++ }
++
++ if (!(obj instanceof Coordinate other)) {
++ return false;
++ }
++
++ return this.key == other.key;
++ }
++
++ // This class is intended for HashMap/ConcurrentHashMap usage; both treeify bins when a
++ // chain grows too large, so we implement compareTo to help that treeification.
++ @Override
++ public int compareTo(final Coordinate other) {
++ return Long.compare(this.key, other.key);
++ }
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkTaskScheduler.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkTaskScheduler.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..049e20407033073b06fcdeb46c38485f4926d778
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkTaskScheduler.java
+@@ -0,0 +1,883 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadPool;
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadedTaskQueue;
++import ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock;
++import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
++import com.mojang.logging.LogUtils;
++import io.papermc.paper.chunk.system.io.RegionFileIOThread;
++import io.papermc.paper.chunk.system.scheduling.queue.RadiusAwarePrioritisedExecutor;
++import io.papermc.paper.configuration.GlobalConfiguration;
++import io.papermc.paper.util.CoordinateUtils;
++import io.papermc.paper.util.TickThread;
++import java.util.function.BooleanSupplier;
++import net.minecraft.CrashReport;
++import net.minecraft.CrashReportCategory;
++import net.minecraft.ReportedException;
++import io.papermc.paper.util.MCUtil;
++import net.minecraft.server.MinecraftServer;
++import net.minecraft.server.level.ChunkMap;
++import net.minecraft.server.level.FullChunkStatus;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.server.level.TicketType;
++import net.minecraft.world.level.ChunkPos;
++import net.minecraft.world.level.chunk.ChunkAccess;
++import net.minecraft.world.level.chunk.status.ChunkStatus;
++import net.minecraft.world.level.chunk.LevelChunk;
++import org.bukkit.Bukkit;
++import org.slf4j.Logger;
++import java.io.File;
++import java.util.ArrayDeque;
++import java.util.ArrayList;
++import java.util.Arrays;
++import java.util.Collections;
++import java.util.List;
++import java.util.Map;
++import java.util.Objects;
++import java.util.concurrent.atomic.AtomicBoolean;
++import java.util.concurrent.atomic.AtomicLong;
++import java.util.function.Consumer;
++
++public final class ChunkTaskScheduler {
++
++ private static final Logger LOGGER = LogUtils.getClassLogger();
++
++ static int newChunkSystemIOThreads;
++ static int newChunkSystemWorkerThreads;
++ static int newChunkSystemGenParallelism;
++ static int newChunkSystemLoadParallelism;
++
++ public static ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadPool workerThreads;
++
++ private static boolean initialised = false;
++
++ public static void init(final GlobalConfiguration.ChunkSystem config) {
++ if (initialised) {
++ return;
++ }
++ initialised = true;
++ newChunkSystemIOThreads = config.ioThreads;
++ newChunkSystemWorkerThreads = config.workerThreads;
++ if (newChunkSystemIOThreads < 0) {
++ newChunkSystemIOThreads = 1;
++ } else {
++ newChunkSystemIOThreads = Math.max(1, newChunkSystemIOThreads);
++ }
++ int defaultWorkerThreads = Runtime.getRuntime().availableProcessors() / 2;
++ if (defaultWorkerThreads <= 4) {
++ defaultWorkerThreads = defaultWorkerThreads <= 3 ? 1 : 2;
++ } else {
++ defaultWorkerThreads = defaultWorkerThreads / 2;
++ }
++ defaultWorkerThreads = Integer.getInteger("Paper.WorkerThreadCount", Integer.valueOf(defaultWorkerThreads));
++
++ if (newChunkSystemWorkerThreads < 0) {
++ newChunkSystemWorkerThreads = defaultWorkerThreads;
++ } else {
++ newChunkSystemWorkerThreads = Math.max(1, newChunkSystemWorkerThreads);
++ }
++
++ String newChunkSystemGenParallelism = config.genParallelism;
++ if (newChunkSystemGenParallelism.equalsIgnoreCase("default")) {
++ newChunkSystemGenParallelism = "true";
++ }
++ boolean useParallelGen;
++ if (newChunkSystemGenParallelism.equalsIgnoreCase("on") || newChunkSystemGenParallelism.equalsIgnoreCase("enabled")
++ || newChunkSystemGenParallelism.equalsIgnoreCase("true")) {
++ useParallelGen = true;
++ } else if (newChunkSystemGenParallelism.equalsIgnoreCase("off") || newChunkSystemGenParallelism.equalsIgnoreCase("disabled")
++ || newChunkSystemGenParallelism.equalsIgnoreCase("false")) {
++ useParallelGen = false;
++ } else {
++ throw new IllegalStateException("Invalid option for gen-parallelism: must be one of [on, off, enabled, disabled, true, false, default]");
++ }
++
++ ChunkTaskScheduler.newChunkSystemGenParallelism = useParallelGen ? newChunkSystemWorkerThreads : 1;
++ ChunkTaskScheduler.newChunkSystemLoadParallelism = newChunkSystemWorkerThreads;
++
++ RegionFileIOThread.init(newChunkSystemIOThreads);
++ workerThreads = new ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadPool(
++ "Paper Chunk System Worker Pool", newChunkSystemWorkerThreads,
++ (final Thread thread, final Integer id) -> {
++ thread.setPriority(Thread.NORM_PRIORITY - 2);
++ thread.setName("Tuinity Chunk System Worker #" + id.intValue());
++ thread.setUncaughtExceptionHandler(io.papermc.paper.chunk.system.scheduling.NewChunkHolder.CHUNKSYSTEM_UNCAUGHT_EXCEPTION_HANDLER);
++ }, (long)(20.0e6)); // 20ms
++
++ LOGGER.info("Chunk system is using " + newChunkSystemIOThreads + " I/O threads, " + newChunkSystemWorkerThreads + " worker threads, and gen parallelism of " + ChunkTaskScheduler.newChunkSystemGenParallelism + " threads");
++ }
++
++ public final ServerLevel world;
++ public final PrioritisedThreadPool workers;
++ public final RadiusAwarePrioritisedExecutor radiusAwareScheduler;
++ public final PrioritisedThreadPool.PrioritisedPoolExecutor parallelGenExecutor;
++ private final PrioritisedThreadPool.PrioritisedPoolExecutor radiusAwareGenExecutor;
++ public final PrioritisedThreadPool.PrioritisedPoolExecutor loadExecutor;
++
++ private final PrioritisedThreadedTaskQueue mainThreadExecutor = new PrioritisedThreadedTaskQueue();
++
++ public final ChunkHolderManager chunkHolderManager;
++
++ static {
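++ // writeRadius is how far outside its own chunk a status may write; the radius-aware
++ // scheduler uses it to keep the write areas of concurrently executing generation
++ // tasks disjoint (see RadiusAwarePrioritisedExecutor).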
++ ChunkStatus.EMPTY.writeRadius = 0;
++ ChunkStatus.STRUCTURE_STARTS.writeRadius = 0;
++ ChunkStatus.STRUCTURE_REFERENCES.writeRadius = 0;
++ ChunkStatus.BIOMES.writeRadius = 0;
++ ChunkStatus.NOISE.writeRadius = 0;
++ ChunkStatus.SURFACE.writeRadius = 0;
++ ChunkStatus.CARVERS.writeRadius = 0;
++ ChunkStatus.FEATURES.writeRadius = 1;
++ ChunkStatus.INITIALIZE_LIGHT.writeRadius = 0;
++ ChunkStatus.LIGHT.writeRadius = 2;
++ ChunkStatus.SPAWN.writeRadius = 0;
++ ChunkStatus.FULL.writeRadius = 0;
++
++ /*
++ It's important that the neighbour read radius is taken into account. If _any_ later status uses some chunk as
++ a neighbour, it must also be safe while that neighbour is being generated, i.e. for any status later than FEATURES,
++ for a status to be parallel safe it must not read the block data from its neighbours.
++ */
++ final List<ChunkStatus> parallelCapableStatus = Arrays.asList(
++ // No-op executor.
++ ChunkStatus.EMPTY,
++
++ // This is parallel capable, as CB has fixed the concurrency issue with stronghold generations.
++ // Does not touch neighbour chunks.
++ ChunkStatus.STRUCTURE_STARTS,
++
++ // Surprisingly this is parallel capable. It is simply reading the already-created structure starts
++ // into the structure references for the chunk. So while it reads from its neighbours, its neighbours
++ // will not change, even if executed in parallel.
++ ChunkStatus.STRUCTURE_REFERENCES,
++
++ // Safe. Mojang runs it in parallel as well.
++ ChunkStatus.BIOMES,
++
++ // Safe. Mojang runs it in parallel as well.
++ ChunkStatus.NOISE,
++
++ // Parallel safe. Only touches the target chunk. Biome retrieval is now noise based, which is
++ // completely thread-safe.
++ ChunkStatus.SURFACE,
++
++ // No global state is modified in the carvers. It only touches the specified chunk. So it is parallel safe.
++ ChunkStatus.CARVERS,
++
++ // FEATURES is not parallel safe. It writes to neighbours.
++
++ // no-op executor
++ ChunkStatus.INITIALIZE_LIGHT
++
++ // LIGHT is not parallel safe. It also doesn't run on the generation executor, so no point.
++
++ // SPAWN only writes to the specified chunk, and its state is not read by later statuses,
++ // which makes it look parallel safe. (It may also look unsafe because it writes to a
++ // worldgenregion, but the region size is always 0 - see the task margin.)
++ // However, if the neighbouring FEATURES chunk is unloaded and then fails to load in again
++ // (for whatever reason), FEATURES would write to this chunk - and since SPAWN reads blocks
++ // from its own chunk, it is not safe to execute SPAWN in parallel.
++ // SPAWN
++
++ // FULL is executed on main.
++ );
++
++ for (final ChunkStatus status : parallelCapableStatus) {
++ status.isParallelCapable = true;
++ }
++ }
++
++ private static final int[] ACCESS_RADIUS_TABLE = new int[ChunkStatus.getStatusList().size()];
++ private static final int[] MAX_ACCESS_RADIUS_TABLE = new int[ACCESS_RADIUS_TABLE.length];
++ static {
++ Arrays.fill(ACCESS_RADIUS_TABLE, -1);
++ }
++
++ private static int getAccessRadius0(final ChunkStatus genStatus) {
++ if (genStatus == ChunkStatus.EMPTY) {
++ return 0;
++ }
++
++ final int radius = Math.max(genStatus.loadRange, genStatus.getRange());
++ int maxRange = radius;
++
++ for (int dist = 1; dist <= radius; ++dist) {
++ final ChunkStatus requiredNeighbourStatus = ChunkMap.getDependencyStatus(genStatus, dist);
++ final int rad = ACCESS_RADIUS_TABLE[requiredNeighbourStatus.getIndex()];
++ if (rad == -1) {
++ throw new IllegalStateException();
++ }
++
++ maxRange = Math.max(maxRange, dist + rad);
++ }
++
++ return maxRange;
++ }
++
++ private static int maxAccessRadius;
++
++ static {
++ final List<ChunkStatus> statuses = ChunkStatus.getStatusList();
++ for (int i = 0, len = statuses.size(); i < len; ++i) {
++ ACCESS_RADIUS_TABLE[i] = getAccessRadius0(statuses.get(i));
++ }
++ int max = 0;
++ for (int i = 0, len = statuses.size(); i < len; ++i) {
++ MAX_ACCESS_RADIUS_TABLE[i] = max = Math.max(ACCESS_RADIUS_TABLE[i], max);
++ }
++ maxAccessRadius = max;
++ }
++
++ public static int getMaxAccessRadius() {
++ return maxAccessRadius;
++ }
++
++ public static int getAccessRadius(final ChunkStatus genStatus) {
++ return ACCESS_RADIUS_TABLE[genStatus.getIndex()];
++ }
++
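++ // Each full status past FULL requires its neighbours within one more chunk of radius,
++ // e.g. ENTITY_TICKING (ordinal 3) contributes (3 - 1) == 2 extra radius on top of
++ // FULL's access radius.
++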
++ public static int getAccessRadius(final FullChunkStatus status) {
++ return (status.ordinal() - 1) + getAccessRadius(ChunkStatus.FULL);
++ }
++
++ final ReentrantAreaLock schedulingLockArea;
++ private final int lockShift;
++
++ public final int getChunkSystemLockShift() {
++ return this.lockShift;
++ }
++
++ public ChunkTaskScheduler(final ServerLevel world, final PrioritisedThreadPool workers) {
++ this.world = world;
++ this.workers = workers;
++ // must be >= region shift (in paper, doesn't exist) and must be >= ticket propagator section shift
++ // it must be >= region shift since the regioniser assumes ticket updates do not occur in parallel for the region sections
++ // it must be >= ticket propagator section shift so that the ticket propagator can assume that owning a position implies owning
++ // the entire section
++ // we just take the max, as we want the smallest shift that satisfies these properties
++ this.lockShift = Math.max(world.getRegionChunkShift(), ThreadedTicketLevelPropagator.SECTION_SHIFT);
++ this.schedulingLockArea = new ReentrantAreaLock(this.getChunkSystemLockShift());
++
++ final String worldName = world.getWorld().getName();
++ this.parallelGenExecutor = workers.createExecutor("Chunk parallel generation executor for world '" + worldName + "'", Math.max(1, newChunkSystemGenParallelism));
++ this.radiusAwareGenExecutor =
++ newChunkSystemGenParallelism <= 1 ? this.parallelGenExecutor : workers.createExecutor("Chunk radius aware generator for world '" + worldName + "'", newChunkSystemGenParallelism);
++ this.loadExecutor = workers.createExecutor("Chunk load executor for world '" + worldName + "'", newChunkSystemLoadParallelism);
++ this.radiusAwareScheduler = new RadiusAwarePrioritisedExecutor(this.radiusAwareGenExecutor, Math.max(1, newChunkSystemGenParallelism));
++ this.chunkHolderManager = new ChunkHolderManager(world, this);
++ }
++
++ private final AtomicBoolean failedChunkSystem = new AtomicBoolean();
++
++ public static Object stringIfNull(final Object obj) {
++ return obj == null ? "null" : obj;
++ }
++
++ public void unrecoverableChunkSystemFailure(final int chunkX, final int chunkZ, final Map<String, Object> objectsOfInterest, final Throwable thr) {
++ final NewChunkHolder holder = this.chunkHolderManager.getChunkHolder(chunkX, chunkZ);
++ LOGGER.error("Chunk system error at chunk (" + chunkX + "," + chunkZ + "), holder: " + holder + ", exception:", new Throwable(thr));
++
++ if (this.failedChunkSystem.getAndSet(true)) {
++ return;
++ }
++
++ final ReportedException reportedException = thr instanceof ReportedException ? (ReportedException)thr : new ReportedException(new CrashReport("Chunk system error", thr));
++
++ CrashReportCategory crashReportCategory = reportedException.getReport().addCategory("Chunk system details");
++ crashReportCategory.setDetail("Chunk coordinate", new ChunkPos(chunkX, chunkZ).toString());
++ crashReportCategory.setDetail("ChunkHolder", Objects.toString(holder));
++ crashReportCategory.setDetail("unrecoverableChunkSystemFailure caller thread", Thread.currentThread().getName());
++
++ crashReportCategory = reportedException.getReport().addCategory("Chunk System Objects of Interest");
++ for (final Map.Entry<String, Object> entry : objectsOfInterest.entrySet()) {
++ if (entry.getValue() instanceof Throwable thrObject) {
++ crashReportCategory.setDetailError(Objects.toString(entry.getKey()), thrObject);
++ } else {
++ crashReportCategory.setDetail(Objects.toString(entry.getKey()), Objects.toString(entry.getValue()));
++ }
++ }
++
++ final Runnable crash = () -> {
++ throw new RuntimeException("Chunk system crash propagated from unrecoverableChunkSystemFailure", reportedException);
++ };
++
++ // this may not be good enough, specifically thanks to plugins swallowing exceptions
++ this.scheduleChunkTask(chunkX, chunkZ, crash, PrioritisedExecutor.Priority.BLOCKING);
++ // so, make the main thread pick it up
++ MinecraftServer.chunkSystemCrash = new RuntimeException("Chunk system crash propagated from unrecoverableChunkSystemFailure", reportedException);
++ }
++
++ public boolean executeMainThreadTask() {
++ TickThread.ensureTickThread("Cannot execute main thread task off-main");
++ return this.mainThreadExecutor.executeTask();
++ }
++
++ public void raisePriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
++ this.chunkHolderManager.raisePriority(x, z, priority);
++ }
++
++ public void setPriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
++ this.chunkHolderManager.setPriority(x, z, priority);
++ }
++
++ public void lowerPriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
++ this.chunkHolderManager.lowerPriority(x, z, priority);
++ }
++
++ private final AtomicLong chunkLoadCounter = new AtomicLong();
++
++ public void scheduleTickingState(final int chunkX, final int chunkZ, final FullChunkStatus toStatus,
++ final boolean addTicket, final PrioritisedExecutor.Priority priority,
++ final Consumer<LevelChunk> onComplete) {
++ if (!TickThread.isTickThread()) {
++ this.scheduleChunkTask(chunkX, chunkZ, () -> {
++ ChunkTaskScheduler.this.scheduleTickingState(chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
++ }, priority);
++ return;
++ }
++ final int accessRadius = getAccessRadius(toStatus);
++ if (this.chunkHolderManager.ticketLockArea.isHeldByCurrentThread(chunkX, chunkZ, accessRadius)) {
++ throw new IllegalStateException("Cannot schedule chunk load during ticket level update");
++ }
++ if (this.schedulingLockArea.isHeldByCurrentThread(chunkX, chunkZ, accessRadius)) {
++ throw new IllegalStateException("Cannot schedule chunk loading recursively");
++ }
++
++ if (toStatus == FullChunkStatus.INACCESSIBLE) {
++ throw new IllegalArgumentException("Cannot wait for INACCESSIBLE status");
++ }
++
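++ // ticket level 33 corresponds to a FULL chunk; each step up the full status chain
++ // (FULL -> BLOCK_TICKING -> ENTITY_TICKING) requires a level one lower, e.g.
++ // ENTITY_TICKING (ordinal 3) maps to level 33 - 2 = 31.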
++ final int minLevel = 33 - (toStatus.ordinal() - 1);
++ final Long chunkReference = addTicket ? Long.valueOf(this.chunkLoadCounter.getAndIncrement()) : null;
++ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++
++ if (addTicket) {
++ this.chunkHolderManager.addTicketAtLevel(TicketType.CHUNK_LOAD, chunkKey, minLevel, chunkReference);
++ this.chunkHolderManager.processTicketUpdates();
++ }
++
++ final Consumer<LevelChunk> loadCallback = (final LevelChunk chunk) -> {
++ try {
++ if (onComplete != null) {
++ onComplete.accept(chunk);
++ }
++ } finally {
++ if (addTicket) {
++ ChunkTaskScheduler.this.chunkHolderManager.addAndRemoveTickets(chunkKey,
++ TicketType.UNKNOWN, minLevel, new ChunkPos(chunkKey),
++ TicketType.CHUNK_LOAD, minLevel, chunkReference
++ );
++ }
++ }
++ };
++
++ final boolean scheduled;
++ final LevelChunk chunk;
++ final ReentrantAreaLock.Node ticketLock = this.chunkHolderManager.ticketLockArea.lock(chunkX, chunkZ, accessRadius);
++ try {
++ final ReentrantAreaLock.Node schedulingLock = this.schedulingLockArea.lock(chunkX, chunkZ, accessRadius);
++ try {
++ final NewChunkHolder chunkHolder = this.chunkHolderManager.getChunkHolder(chunkKey);
++ if (chunkHolder == null || chunkHolder.getTicketLevel() > minLevel) {
++ scheduled = false;
++ chunk = null;
++ } else {
++ final FullChunkStatus currStatus = chunkHolder.getChunkStatus();
++ if (currStatus.isOrAfter(toStatus)) {
++ scheduled = false;
++ chunk = (LevelChunk)chunkHolder.getCurrentChunk();
++ } else {
++ scheduled = true;
++ chunk = null;
++
++ final int radius = toStatus.ordinal() - 1; // 0 -> FULL, 1 -> BLOCK_TICKING, 2 -> ENTITY_TICKING
++ for (int dz = -radius; dz <= radius; ++dz) {
++ for (int dx = -radius; dx <= radius; ++dx) {
++ final NewChunkHolder neighbour =
++ (dx | dz) == 0 ? chunkHolder : this.chunkHolderManager.getChunkHolder(dx + chunkX, dz + chunkZ);
++ if (neighbour != null) {
++ neighbour.raisePriority(priority);
++ }
++ }
++ }
++
++ // ticket level should schedule for us
++ chunkHolder.addFullStatusConsumer(toStatus, loadCallback);
++ }
++ }
++ } finally {
++ this.schedulingLockArea.unlock(schedulingLock);
++ }
++ } finally {
++ this.chunkHolderManager.ticketLockArea.unlock(ticketLock);
++ }
++
++ if (!scheduled) {
++ // couldn't schedule
++ try {
++ loadCallback.accept(chunk);
++ } catch (final ThreadDeath thr) {
++ throw thr;
++ } catch (final Throwable thr) {
++ LOGGER.error("Failed to process chunk full status callback", thr);
++ }
++ }
++ }
++
++ public void scheduleChunkLoad(final int chunkX, final int chunkZ, final boolean gen, final ChunkStatus toStatus, final boolean addTicket,
++ final PrioritisedExecutor.Priority priority, final Consumer<ChunkAccess> onComplete) {
++ if (gen) {
++ this.scheduleChunkLoad(chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
++ return;
++ }
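++ // not generating: load only up to EMPTY (i.e. just read whatever is on disk), then
++ // upgrade to toStatus only if the loaded data had already generated to that status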
++ this.scheduleChunkLoad(chunkX, chunkZ, ChunkStatus.EMPTY, addTicket, priority, (final ChunkAccess chunk) -> {
++ if (chunk == null) {
++ onComplete.accept(null);
++ } else {
++ if (chunk.getStatus().isOrAfter(toStatus)) {
++ this.scheduleChunkLoad(chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
++ } else {
++ onComplete.accept(null);
++ }
++ }
++ });
++ }
++
++ // only appropriate to use with ServerLevel#syncLoadNonFull
++ public boolean beginChunkLoadForNonFullSync(final int chunkX, final int chunkZ, final ChunkStatus toStatus,
++ final PrioritisedExecutor.Priority priority) {
++ final int accessRadius = getAccessRadius(toStatus);
++ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++ final int minLevel = 33 + ChunkStatus.getDistance(toStatus);
++ final List<ChunkProgressionTask> tasks = new ArrayList<>();
++ final ReentrantAreaLock.Node ticketLock = this.chunkHolderManager.ticketLockArea.lock(chunkX, chunkZ, accessRadius);
++ try {
++ final ReentrantAreaLock.Node schedulingLock = this.schedulingLockArea.lock(chunkX, chunkZ, accessRadius);
++ try {
++ final NewChunkHolder chunkHolder = this.chunkHolderManager.getChunkHolder(chunkKey);
++ if (chunkHolder == null || chunkHolder.getTicketLevel() > minLevel) {
++ return false;
++ } else {
++ final ChunkStatus genStatus = chunkHolder.getCurrentGenStatus();
++ if (genStatus != null && genStatus.isOrAfter(toStatus)) {
++ return true;
++ } else {
++ chunkHolder.raisePriority(priority);
++
++ if (!chunkHolder.upgradeGenTarget(toStatus)) {
++ this.schedule(chunkX, chunkZ, toStatus, chunkHolder, tasks);
++ }
++ }
++ }
++ } finally {
++ this.schedulingLockArea.unlock(schedulingLock);
++ }
++ } finally {
++ this.chunkHolderManager.ticketLockArea.unlock(ticketLock);
++ }
++
++ for (int i = 0, len = tasks.size(); i < len; ++i) {
++ tasks.get(i).schedule();
++ }
++
++ return true;
++ }
++
++ public void scheduleChunkLoad(final int chunkX, final int chunkZ, final ChunkStatus toStatus, final boolean addTicket,
++ final PrioritisedExecutor.Priority priority, final Consumer<ChunkAccess> onComplete) {
++ if (!TickThread.isTickThread()) {
++ this.scheduleChunkTask(chunkX, chunkZ, () -> {
++ ChunkTaskScheduler.this.scheduleChunkLoad(chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
++ }, priority);
++ return;
++ }
++ final int accessRadius = getAccessRadius(toStatus);
++ if (this.chunkHolderManager.ticketLockArea.isHeldByCurrentThread(chunkX, chunkZ, accessRadius)) {
++ throw new IllegalStateException("Cannot schedule chunk load during ticket level update");
++ }
++ if (this.schedulingLockArea.isHeldByCurrentThread(chunkX, chunkZ, accessRadius)) {
++ throw new IllegalStateException("Cannot schedule chunk loading recursively");
++ }
++
++ if (toStatus == ChunkStatus.FULL) {
++ this.scheduleTickingState(chunkX, chunkZ, FullChunkStatus.FULL, addTicket, priority, (Consumer)onComplete);
++ return;
++ }
++
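++ // generation statuses map to ticket levels above the FULL level (33), offset by how far
++ // the target status is from FULL in the generation chain (ChunkStatus.getDistance)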
++ final int minLevel = 33 + ChunkStatus.getDistance(toStatus);
++ final Long chunkReference = addTicket ? Long.valueOf(this.chunkLoadCounter.getAndIncrement()) : null;
++ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++
++ if (addTicket) {
++ this.chunkHolderManager.addTicketAtLevel(TicketType.CHUNK_LOAD, chunkKey, minLevel, chunkReference);
++ this.chunkHolderManager.processTicketUpdates();
++ }
++
++ final Consumer<ChunkAccess> loadCallback = (final ChunkAccess chunk) -> {
++ try {
++ if (onComplete != null) {
++ onComplete.accept(chunk);
++ }
++ } finally {
++ if (addTicket) {
++ ChunkTaskScheduler.this.chunkHolderManager.addAndRemoveTickets(chunkKey,
++ TicketType.UNKNOWN, minLevel, new ChunkPos(chunkKey),
++ TicketType.CHUNK_LOAD, minLevel, chunkReference
++ );
++ }
++ }
++ };
++
++ final List<ChunkProgressionTask> tasks = new ArrayList<>();
++
++ final boolean scheduled;
++ final ChunkAccess chunk;
++ final ReentrantAreaLock.Node ticketLock = this.chunkHolderManager.ticketLockArea.lock(chunkX, chunkZ, accessRadius);
++ try {
++ final ReentrantAreaLock.Node schedulingLock = this.schedulingLockArea.lock(chunkX, chunkZ, accessRadius);
++ try {
++ final NewChunkHolder chunkHolder = this.chunkHolderManager.getChunkHolder(chunkKey);
++ if (chunkHolder == null || chunkHolder.getTicketLevel() > minLevel) {
++ scheduled = false;
++ chunk = null;
++ } else {
++ final ChunkStatus genStatus = chunkHolder.getCurrentGenStatus();
++ if (genStatus != null && genStatus.isOrAfter(toStatus)) {
++ scheduled = false;
++ chunk = chunkHolder.getCurrentChunk();
++ } else {
++ scheduled = true;
++ chunk = null;
++ chunkHolder.raisePriority(priority);
++
++ if (!chunkHolder.upgradeGenTarget(toStatus)) {
++ this.schedule(chunkX, chunkZ, toStatus, chunkHolder, tasks);
++ }
++ chunkHolder.addStatusConsumer(toStatus, loadCallback);
++ }
++ }
++ } finally {
++ this.schedulingLockArea.unlock(schedulingLock);
++ }
++ } finally {
++ this.chunkHolderManager.ticketLockArea.unlock(ticketLock);
++ }
++
++ for (int i = 0, len = tasks.size(); i < len; ++i) {
++ tasks.get(i).schedule();
++ }
++
++ if (!scheduled) {
++ // couldn't schedule
++ try {
++ loadCallback.accept(chunk);
++ } catch (final ThreadDeath thr) {
++ throw thr;
++ } catch (final Throwable thr) {
++ LOGGER.error("Failed to process chunk status callback", thr);
++ }
++ }
++ }
++
++ private ChunkProgressionTask createTask(final int chunkX, final int chunkZ, final ChunkAccess chunk,
++ final NewChunkHolder chunkHolder, final List<ChunkAccess> neighbours,
++ final ChunkStatus toStatus, final PrioritisedExecutor.Priority initialPriority) {
++ if (toStatus == ChunkStatus.EMPTY) {
++ return new ChunkLoadTask(this, this.world, chunkX, chunkZ, chunkHolder, initialPriority);
++ }
++ if (toStatus == ChunkStatus.LIGHT) {
++ return new ChunkLightTask(this, this.world, chunkX, chunkZ, chunk, initialPriority);
++ }
++ if (toStatus == ChunkStatus.FULL) {
++ return new ChunkFullTask(this, this.world, chunkX, chunkZ, chunkHolder, chunk, initialPriority);
++ }
++
++ return new ChunkUpgradeGenericStatusTask(this, this.world, chunkX, chunkZ, chunk, neighbours, toStatus, initialPriority);
++ }
++
++ ChunkProgressionTask schedule(final int chunkX, final int chunkZ, final ChunkStatus targetStatus, final NewChunkHolder chunkHolder,
++ final List<ChunkProgressionTask> allTasks) {
++ return this.schedule(chunkX, chunkZ, targetStatus, chunkHolder, allTasks, chunkHolder.getEffectivePriority());
++ }
++
++ // returns the new task scheduled for the _specified_ chunk
++ // note: this must hold the scheduling lock
++ // minPriority is only used to pass the priority through to neighbours, as priority calculation has not yet been done
++ // schedule will ignore the generation target, so it should be checked by the caller to ensure the target is not regressed!
++ private ChunkProgressionTask schedule(final int chunkX, final int chunkZ, final ChunkStatus targetStatus,
++ final NewChunkHolder chunkHolder, final List<ChunkProgressionTask> allTasks,
++ final PrioritisedExecutor.Priority minPriority) {
++ if (!this.schedulingLockArea.isHeldByCurrentThread(chunkX, chunkZ, getAccessRadius(targetStatus))) {
++ throw new IllegalStateException("Not holding scheduling lock");
++ }
++
++ if (chunkHolder.hasGenerationTask()) {
++ chunkHolder.upgradeGenTarget(targetStatus);
++ return null;
++ }
++
++ final PrioritisedExecutor.Priority requestedPriority = PrioritisedExecutor.Priority.max(minPriority, chunkHolder.getEffectivePriority());
++ final ChunkStatus currentGenStatus = chunkHolder.getCurrentGenStatus();
++ final ChunkAccess chunk = chunkHolder.getCurrentChunk();
++
++ if (currentGenStatus == null) {
++ // not yet loaded
++ final ChunkProgressionTask task = this.createTask(
++ chunkX, chunkZ, chunk, chunkHolder, Collections.emptyList(), ChunkStatus.EMPTY, requestedPriority
++ );
++
++ allTasks.add(task);
++
++ final List<NewChunkHolder> chunkHolderNeighbours = new ArrayList<>(1);
++ chunkHolderNeighbours.add(chunkHolder);
++
++ chunkHolder.setGenerationTarget(targetStatus);
++ chunkHolder.setGenerationTask(task, ChunkStatus.EMPTY, chunkHolderNeighbours);
++
++ return task;
++ }
++
++ if (currentGenStatus.isOrAfter(targetStatus)) {
++ // nothing to do
++ return null;
++ }
++
++ // we know for sure now that we want to schedule _something_, so set the target
++ chunkHolder.setGenerationTarget(targetStatus);
++
++ final ChunkStatus chunkRealStatus = chunk.getStatus();
++ final ChunkStatus toStatus = currentGenStatus.getNextStatus();
++
++ // if this chunk has already generated up to or past the specified status, then we don't
++ // need the neighbours AT ALL.
++ final int neighbourReadRadius = chunkRealStatus.isOrAfter(toStatus) ? toStatus.loadRange : toStatus.getRange();
++
++ boolean unGeneratedNeighbours = false;
++
++ // copied from MCUtil.getSpiralOutChunks
++ for (int r = 1; r <= neighbourReadRadius; r++) {
++ int x = -r;
++ int z = r;
++
++ // Iterates the edge of half of the box; then negates for other half.
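++ // e.g. for r == 1 this visits the 8 ring neighbours in opposite pairs:
++ // (-1,1)&(1,-1), (0,1)&(0,-1), (1,1)&(-1,-1), (1,0)&(-1,0)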
++ while (x <= r && z > -r) {
++ final int radius = Math.max(Math.abs(x), Math.abs(z));
++ final ChunkStatus requiredNeighbourStatus = ChunkMap.getDependencyStatus(toStatus, radius);
++
++ unGeneratedNeighbours |= this.checkNeighbour(
++ chunkX + x, chunkZ + z, requiredNeighbourStatus, chunkHolder, allTasks, requestedPriority
++ );
++ unGeneratedNeighbours |= this.checkNeighbour(
++ chunkX - x, chunkZ - z, requiredNeighbourStatus, chunkHolder, allTasks, requestedPriority
++ );
++
++ if (x < r) {
++ x++;
++ } else {
++ z--;
++ }
++ }
++ }
++
++ if (unGeneratedNeighbours) {
++ // can't schedule, but neighbour completion will schedule for us when they're ALL done
++
++ // propagate our priority to neighbours
++ chunkHolder.recalculateNeighbourPriorities();
++ return null;
++ }
++
++ // need to gather neighbours
++
++ final List<ChunkAccess> neighbours;
++ final List<NewChunkHolder> chunkHolderNeighbours;
++ if (neighbourReadRadius <= 0) {
++ neighbours = new ArrayList<>(1);
++ chunkHolderNeighbours = new ArrayList<>(1);
++ neighbours.add(chunk);
++ chunkHolderNeighbours.add(chunkHolder);
++ } else {
++ // the iteration order is _very_ important, as all generation statuses expect a certain order such that:
++ // chunkAtRelative = neighbours.get(relX + relZ * (2 * radius + 1))
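++ // (where relX/relZ are the 0-based grid offsets, i.e. dx + radius and dz + radius in
++ // the loop below)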
++ neighbours = new ArrayList<>((2 * neighbourReadRadius + 1) * (2 * neighbourReadRadius + 1));
++ chunkHolderNeighbours = new ArrayList<>((2 * neighbourReadRadius + 1) * (2 * neighbourReadRadius + 1));
++ for (int dz = -neighbourReadRadius; dz <= neighbourReadRadius; ++dz) {
++ for (int dx = -neighbourReadRadius; dx <= neighbourReadRadius; ++dx) {
++ final NewChunkHolder holder = (dx | dz) == 0 ? chunkHolder : this.chunkHolderManager.getChunkHolder(dx + chunkX, dz + chunkZ);
++ neighbours.add(holder.getChunkForNeighbourAccess());
++ chunkHolderNeighbours.add(holder);
++ }
++ }
++ }
++
++ final ChunkProgressionTask task = this.createTask(chunkX, chunkZ, chunk, chunkHolder, neighbours, toStatus, chunkHolder.getEffectivePriority());
++ allTasks.add(task);
++
++ chunkHolder.setGenerationTask(task, toStatus, chunkHolderNeighbours);
++
++ return task;
++ }
++
++ // returns true if the neighbour is not at the required status, false otherwise
++ private boolean checkNeighbour(final int chunkX, final int chunkZ, final ChunkStatus requiredStatus, final NewChunkHolder center,
++ final List<ChunkProgressionTask> tasks, final PrioritisedExecutor.Priority minPriority) {
++ final NewChunkHolder chunkHolder = this.chunkHolderManager.getChunkHolder(chunkX, chunkZ);
++
++ if (chunkHolder == null) {
++ throw new IllegalStateException("Missing chunkholder when required");
++ }
++
++ final ChunkStatus holderStatus = chunkHolder.getCurrentGenStatus();
++ if (holderStatus != null && holderStatus.isOrAfter(requiredStatus)) {
++ return false;
++ }
++
++ if (chunkHolder.hasFailedGeneration()) {
++ return true;
++ }
++
++ center.addGenerationBlockingNeighbour(chunkHolder);
++ chunkHolder.addWaitingNeighbour(center, requiredStatus);
++
++ if (chunkHolder.upgradeGenTarget(requiredStatus)) {
++ return true;
++ }
++
++ // not at status required, so we need to schedule its generation
++ this.schedule(
++ chunkX, chunkZ, requiredStatus, chunkHolder, tasks, minPriority
++ );
++
++ return true;
++ }
++
++ /**
++ * @deprecated Chunk tasks must be tied to coordinates in the future
++ */
++ @Deprecated
++ public PrioritisedExecutor.PrioritisedTask scheduleChunkTask(final Runnable run) {
++ return this.scheduleChunkTask(run, PrioritisedExecutor.Priority.NORMAL);
++ }
++
++ /**
++ * @deprecated Chunk tasks must be tied to coordinates in the future
++ */
++ @Deprecated
++ public PrioritisedExecutor.PrioritisedTask scheduleChunkTask(final Runnable run, final PrioritisedExecutor.Priority priority) {
++ return this.mainThreadExecutor.queueRunnable(run, priority);
++ }
++
++ public PrioritisedExecutor.PrioritisedTask createChunkTask(final int chunkX, final int chunkZ, final Runnable run) {
++ return this.createChunkTask(chunkX, chunkZ, run, PrioritisedExecutor.Priority.NORMAL);
++ }
++
++ public PrioritisedExecutor.PrioritisedTask createChunkTask(final int chunkX, final int chunkZ, final Runnable run,
++ final PrioritisedExecutor.Priority priority) {
++ return this.mainThreadExecutor.createTask(run, priority);
++ }
++
++ public PrioritisedExecutor.PrioritisedTask scheduleChunkTask(final int chunkX, final int chunkZ, final Runnable run) {
++ return this.mainThreadExecutor.queueRunnable(run);
++ }
++
++ public PrioritisedExecutor.PrioritisedTask scheduleChunkTask(final int chunkX, final int chunkZ, final Runnable run,
++ final PrioritisedExecutor.Priority priority) {
++ return this.mainThreadExecutor.queueRunnable(run, priority);
++ }
++
++ public void executeTasksUntil(final BooleanSupplier exit) {
++ if (Bukkit.isPrimaryThread()) {
++ this.mainThreadExecutor.executeConditionally(exit);
++ } else {
++ long counter = 1L;
++ while (!exit.getAsBoolean()) {
++ counter = ConcurrentUtil.linearLongBackoff(counter, 100_000L, 5_000_000L); // 100us, 5ms
++ }
++ }
++ }
++
++ public boolean halt(final boolean sync, final long maxWaitNS) {
++ this.radiusAwareGenExecutor.halt();
++ this.parallelGenExecutor.halt();
++ this.loadExecutor.halt();
++ final long time = System.nanoTime();
++ if (sync) {
++ for (long failures = 9L;; failures = ConcurrentUtil.linearLongBackoff(failures, 500_000L, 50_000_000L)) {
++ if (
++ !this.radiusAwareGenExecutor.isActive() &&
++ !this.parallelGenExecutor.isActive() &&
++ !this.loadExecutor.isActive()
++ ) {
++ return true;
++ }
++ if ((System.nanoTime() - time) >= maxWaitNS) {
++ return false;
++ }
++ }
++ }
++
++ return true;
++ }
++
++ public static final ArrayDeque<ChunkInfo> WAITING_CHUNKS = new ArrayDeque<>(); // stack
++
++ public static final class ChunkInfo {
++
++ public final int chunkX;
++ public final int chunkZ;
++ public final ServerLevel world;
++
++ public ChunkInfo(final int chunkX, final int chunkZ, final ServerLevel world) {
++ this.chunkX = chunkX;
++ this.chunkZ = chunkZ;
++ this.world = world;
++ }
++
++ @Override
++ public String toString() {
++ return "[( " + this.chunkX + "," + this.chunkZ + ") in '" + this.world.getWorld().getName() + "']";
++ }
++ }
++
++ public static void pushChunkWait(final ServerLevel world, final int chunkX, final int chunkZ) {
++ synchronized (WAITING_CHUNKS) {
++ WAITING_CHUNKS.push(new ChunkInfo(chunkX, chunkZ, world));
++ }
++ }
++
++ public static void popChunkWait() {
++ synchronized (WAITING_CHUNKS) {
++ WAITING_CHUNKS.pop();
++ }
++ }
++
++ public static ChunkInfo[] getChunkInfos() {
++ synchronized (WAITING_CHUNKS) {
++ return WAITING_CHUNKS.toArray(new ChunkInfo[0]);
++ }
++ }
++
++ public static void dumpAllChunkLoadInfo(final boolean longPrint) {
++ final ChunkInfo[] chunkInfos = getChunkInfos();
++ if (chunkInfos.length > 0) {
++ LOGGER.error("Chunk wait task info below: ");
++ for (final ChunkInfo chunkInfo : chunkInfos) {
++ final NewChunkHolder holder = chunkInfo.world.chunkTaskScheduler.chunkHolderManager.getChunkHolder(chunkInfo.chunkX, chunkInfo.chunkZ);
++ LOGGER.error("Chunk wait: " + chunkInfo);
++ LOGGER.error("Chunk holder: " + holder);
++ }
++
++ if (longPrint) {
++ final File file = new File(new File(new File("."), "debug"), "chunks-watchdog.txt");
++ LOGGER.error("Writing chunk information dump to " + file);
++ try {
++ MCUtil.dumpChunks(file, true);
++ LOGGER.error("Successfully written chunk information!");
++ } catch (final Throwable thr) {
++ MinecraftServer.LOGGER.warn("Failed to dump chunk information to file " + file.toString(), thr);
++ }
++ }
++ }
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkUpgradeGenericStatusTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkUpgradeGenericStatusTask.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..bd0d0c4436f357392e13d9efd4412886385a6924
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkUpgradeGenericStatusTask.java
+@@ -0,0 +1,214 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
++import com.mojang.logging.LogUtils;
++import net.minecraft.server.level.ChunkMap;
++import net.minecraft.server.level.ChunkResult;
++import net.minecraft.server.level.ServerChunkCache;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.world.level.chunk.ChunkAccess;
++import net.minecraft.world.level.chunk.status.ChunkStatus;
++import net.minecraft.world.level.chunk.ProtoChunk;
++import net.minecraft.world.level.chunk.status.WorldGenContext;
++import org.slf4j.Logger;
++import java.lang.invoke.VarHandle;
++import java.util.List;
++import java.util.Map;
++import java.util.concurrent.CompletableFuture;
++
++public final class ChunkUpgradeGenericStatusTask extends ChunkProgressionTask implements Runnable {
++
++ private static final Logger LOGGER = LogUtils.getClassLogger();
++
++ protected final ChunkAccess fromChunk;
++ protected final ChunkStatus fromStatus;
++ protected final ChunkStatus toStatus;
++ protected final List<ChunkAccess> neighbours;
++
++ protected final PrioritisedExecutor.PrioritisedTask generateTask;
++
++ public ChunkUpgradeGenericStatusTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
++ final int chunkZ, final ChunkAccess chunk, final List<ChunkAccess> neighbours,
++ final ChunkStatus toStatus, final PrioritisedExecutor.Priority priority) {
++ super(scheduler, world, chunkX, chunkZ);
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++ this.fromChunk = chunk;
++ this.fromStatus = chunk.getStatus();
++ this.toStatus = toStatus;
++ this.neighbours = neighbours;
++ if (this.toStatus.isParallelCapable) {
++ this.generateTask = this.scheduler.parallelGenExecutor.createTask(this, priority);
++ } else {
++ this.generateTask = this.scheduler.radiusAwareScheduler.createTask(chunkX, chunkZ, this.toStatus.writeRadius, this, priority);
++ }
++ }
++
++ @Override
++ public ChunkStatus getTargetStatus() {
++ return this.toStatus;
++ }
++
++ private boolean isEmptyTask() {
++ // must use fromStatus here to avoid any race condition with run() overwriting the status
++ final boolean generation = !this.fromStatus.isOrAfter(this.toStatus);
++ return (generation && this.toStatus.isEmptyGenStatus()) || (!generation && this.toStatus.isEmptyLoadStatus());
++ }
++
++ @Override
++ public void run() {
++ final ChunkAccess chunk = this.fromChunk;
++
++ final ServerChunkCache serverChunkCache = this.world.chunkSource;
++ final ChunkMap chunkMap = serverChunkCache.chunkMap;
++
++ final CompletableFuture<ChunkAccess> completeFuture;
++
++ final boolean generation;
++ boolean completing = false;
++
++ // note: should optimise the case where the chunk does not need to execute the status, because
++ // schedule() calls this synchronously if it will run through that path
++
++ final WorldGenContext ctx = new WorldGenContext(
++ this.world,
++ chunkMap.generator,
++ chunkMap.getWorldGenContext().structureManager(),
++ serverChunkCache.getLightEngine()
++ );
++ try {
++ generation = !chunk.getStatus().isOrAfter(this.toStatus);
++ if (generation) {
++ if (this.toStatus.isEmptyGenStatus()) {
++ if (chunk instanceof ProtoChunk) {
++ ((ProtoChunk)chunk).setStatus(this.toStatus);
++ }
++ completing = true;
++ this.complete(chunk, null);
++ return;
++ }
++ completeFuture = this.toStatus.generate(ctx, Runnable::run, null, this.neighbours)
++ .whenComplete((final ChunkAccess either, final Throwable throwable) -> {
++ if (either instanceof ProtoChunk proto) {
++ proto.setStatus(ChunkUpgradeGenericStatusTask.this.toStatus);
++ }
++ }
++ );
++ } else {
++ if (this.toStatus.isEmptyLoadStatus()) {
++ completing = true;
++ this.complete(chunk, null);
++ return;
++ }
++ completeFuture = this.toStatus.load(ctx, null, chunk);
++ }
++ } catch (final Throwable throwable) {
++ if (!completing) {
++ this.complete(null, throwable);
++
++ if (throwable instanceof ThreadDeath) {
++ throw (ThreadDeath)throwable;
++ }
++ return;
++ }
++
++ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
++ "Target status", ChunkTaskScheduler.stringIfNull(this.toStatus),
++ "From status", ChunkTaskScheduler.stringIfNull(this.fromStatus),
++ "Generation task", this
++ ), throwable);
++
++ if (!(throwable instanceof ThreadDeath)) {
++ LOGGER.error("Failed to complete status for chunk: status:" + this.toStatus + ", chunk: (" + this.chunkX + "," + this.chunkZ + "), world: " + this.world.getWorld().getName(), throwable);
++ } else {
++ // ensure the chunk system can respond, then die
++ throw (ThreadDeath)throwable;
++ }
++ return;
++ }
++
++ if (!completeFuture.isDone() && !this.toStatus.warnedAboutNoImmediateComplete.getAndSet(true)) {
++ LOGGER.warn("Future status not complete after scheduling: " + this.toStatus.toString() + ", generate: " + generation);
++ }
++
++ final ChunkAccess newChunk;
++
++ try {
++ newChunk = completeFuture.join();
++ } catch (final Throwable throwable) {
++ this.complete(null, throwable);
++ // ensure the chunk system can respond, then die
++ if (throwable instanceof ThreadDeath) {
++ throw (ThreadDeath)throwable;
++ }
++ return;
++ }
++
++ if (newChunk == null) {
++ this.complete(null, new IllegalStateException("Chunk for status: " + ChunkUpgradeGenericStatusTask.this.toStatus.toString() + ", generation: " + generation + " should not be null! Future: " + completeFuture).fillInStackTrace());
++ return;
++ }
++
++ this.complete(newChunk, null);
++ }
++
++ protected volatile boolean scheduled;
++ protected static final VarHandle SCHEDULED_HANDLE = ConcurrentUtil.getVarHandle(ChunkUpgradeGenericStatusTask.class, "scheduled", boolean.class);
++
++ @Override
++ public boolean isScheduled() {
++ return this.scheduled;
++ }
++
++ @Override
++ public void schedule() {
++ if ((boolean)SCHEDULED_HANDLE.getAndSet((ChunkUpgradeGenericStatusTask)this, true)) {
++ throw new IllegalStateException("Cannot double call schedule()");
++ }
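++ // no-op statuses complete synchronously: cancel the (never yet queued) executor task
++ // and run the completion inline; if the cancel is lost, a concurrent cancel() has
++ // already completed this task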
++ if (this.isEmptyTask()) {
++ if (this.generateTask.cancel()) {
++ this.run();
++ }
++ } else {
++ this.generateTask.queue();
++ }
++ }
++
++ @Override
++ public void cancel() {
++ if (this.generateTask.cancel()) {
++ this.complete(null, null);
++ }
++ }
++
++ @Override
++ public PrioritisedExecutor.Priority getPriority() {
++ return this.generateTask.getPriority();
++ }
++
++ @Override
++ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++ this.generateTask.lowerPriority(priority);
++ }
++
++ @Override
++ public void setPriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++ this.generateTask.setPriority(priority);
++ }
++
++ @Override
++ public void raisePriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++ this.generateTask.raisePriority(priority);
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/GenericDataLoadTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/GenericDataLoadTask.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..396d72c00e47cf1669ae20dc839c1c961b1f262a
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/GenericDataLoadTask.java
+@@ -0,0 +1,746 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import ca.spottedleaf.concurrentutil.completable.Completable;
++import ca.spottedleaf.concurrentutil.executor.Cancellable;
++import ca.spottedleaf.concurrentutil.executor.standard.DelayedPrioritisedTask;
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
++import com.mojang.logging.LogUtils;
++import io.papermc.paper.chunk.system.io.RegionFileIOThread;
++import net.minecraft.nbt.CompoundTag;
++import net.minecraft.server.level.ServerLevel;
++import org.slf4j.Logger;
++import java.lang.invoke.VarHandle;
++import java.util.Map;
++import java.util.concurrent.atomic.AtomicBoolean;
++import java.util.concurrent.atomic.AtomicLong;
++import java.util.function.BiConsumer;
++
++public abstract class GenericDataLoadTask<OnMain,FinalCompletion> {
++
++ private static final Logger LOGGER = LogUtils.getClassLogger();
++
++ protected static final CompoundTag CANCELLED_DATA = new CompoundTag();
++
++ // reference count is the upper 32 bits
++ protected final AtomicLong stageAndReferenceCount = new AtomicLong(STAGE_NOT_STARTED);
++
++ protected static final long STAGE_MASK = 0xFFFFFFFFL;
++ protected static final long STAGE_CANCELLED = 0xFFFFFFFFL;
++ protected static final long STAGE_NOT_STARTED = 0L;
++ protected static final long STAGE_LOADING = 1L;
++ protected static final long STAGE_PROCESSING = 2L;
++ protected static final long STAGE_COMPLETED = 3L;
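++ // illustrative sketch: with two schedule() references held while loading, the field packs to
++ // (2L << 32) | STAGE_LOADING; each cancel() releases one reference, and only the caller that
++ // observes the final reference may transition the stage to STAGE_CANCELLED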
++
++ // for loading data off disk
++ protected final LoadDataFromDiskTask loadDataFromDiskTask;
++ // processing off-main
++ protected final PrioritisedExecutor.PrioritisedTask processOffMain;
++ // processing on-main
++ protected final PrioritisedExecutor.PrioritisedTask processOnMain;
++
++ protected final ChunkTaskScheduler scheduler;
++ protected final ServerLevel world;
++ protected final int chunkX;
++ protected final int chunkZ;
++ protected final RegionFileIOThread.RegionFileType type;
++
++ public GenericDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
++ final int chunkZ, final RegionFileIOThread.RegionFileType type,
++ final PrioritisedExecutor.Priority priority) {
++ this.scheduler = scheduler;
++ this.world = world;
++ this.chunkX = chunkX;
++ this.chunkZ = chunkZ;
++ this.type = type;
++
++ final ProcessOnMainTask mainTask;
++ if (this.hasOnMain()) {
++ mainTask = new ProcessOnMainTask();
++ this.processOnMain = this.createOnMain(mainTask, priority);
++ } else {
++ mainTask = null;
++ this.processOnMain = null;
++ }
++
++ final ProcessOffMainTask offMainTask;
++ if (this.hasOffMain()) {
++ offMainTask = new ProcessOffMainTask(mainTask);
++ this.processOffMain = this.createOffMain(offMainTask, priority);
++ } else {
++ offMainTask = null;
++ this.processOffMain = null;
++ }
++
++ if (this.processOffMain == null && this.processOnMain == null) {
++ throw new IllegalStateException("Illegal class implementation: " + this.getClass().getName() + ", should be able to schedule at least one task!");
++ }
++
++ this.loadDataFromDiskTask = new LoadDataFromDiskTask(world, chunkX, chunkZ, type, new DataLoadCallback(offMainTask, mainTask), priority);
++ }
++
++ public static final record TaskResult<L, R>(L left, R right) {}
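++ // convention used throughout: right holds the failure throwable (if any), left holds the
++ // (possibly null) success value; a null TaskResult itself indicates cancellation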
++
++ protected abstract boolean hasOffMain();
++
++ protected abstract boolean hasOnMain();
++
++ protected abstract PrioritisedExecutor.PrioritisedTask createOffMain(final Runnable run, final PrioritisedExecutor.Priority priority);
++
++ protected abstract PrioritisedExecutor.PrioritisedTask createOnMain(final Runnable run, final PrioritisedExecutor.Priority priority);
++
++ protected abstract TaskResult<OnMain, Throwable> runOffMain(final CompoundTag data, final Throwable throwable);
++
++ protected abstract TaskResult<FinalCompletion, Throwable> runOnMain(final OnMain data, final Throwable throwable);
++
++ protected abstract void onComplete(final TaskResult<FinalCompletion,Throwable> result);
++
++ protected abstract TaskResult<FinalCompletion, Throwable> completeOnMainOffMain(final OnMain data, final Throwable throwable);
++
++ @Override
++ public String toString() {
++ return "GenericDataLoadTask{class: " + this.getClass().getName() + ", world: " + this.world.getWorld().getName() +
++ ", chunk: (" + this.chunkX + "," + this.chunkZ + "), hashcode: " + System.identityHashCode(this) + ", priority: " + this.getPriority() +
++ ", type: " + this.type.toString() + "}";
++ }
++
++ public PrioritisedExecutor.Priority getPriority() {
++ if (this.processOnMain != null) {
++ return this.processOnMain.getPriority();
++ } else {
++ return this.processOffMain.getPriority();
++ }
++ }
++
++ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
++ // can't lower I/O tasks, we don't know what they affect
++ if (this.processOffMain != null) {
++ this.processOffMain.lowerPriority(priority);
++ }
++ if (this.processOnMain != null) {
++ this.processOnMain.lowerPriority(priority);
++ }
++ }
++
++ public void setPriority(final PrioritisedExecutor.Priority priority) {
++ // can't lower the I/O task's priority - we don't know what else it affects - so only raise it
++ this.loadDataFromDiskTask.raisePriority(priority);
++ if (this.processOffMain != null) {
++ this.processOffMain.setPriority(priority);
++ }
++ if (this.processOnMain != null) {
++ this.processOnMain.setPriority(priority);
++ }
++ }
++
++ public void raisePriority(final PrioritisedExecutor.Priority priority) {
++ // can't lower the I/O task's priority - we don't know what else it affects - so only raise it
++ this.loadDataFromDiskTask.raisePriority(priority);
++ if (this.processOffMain != null) {
++ this.processOffMain.raisePriority(priority);
++ }
++ if (this.processOnMain != null) {
++ this.processOnMain.raisePriority(priority);
++ }
++ }
++
++ // returns whether scheduleNow() needs to be called
++ public boolean schedule(final boolean delay) {
++ if (this.stageAndReferenceCount.get() != STAGE_NOT_STARTED ||
++ !this.stageAndReferenceCount.compareAndSet(STAGE_NOT_STARTED, (1L << 32) | STAGE_LOADING)) {
++ // try and increment reference count
++ int failures = 0;
++ for (long curr = this.stageAndReferenceCount.get();;) {
++ if ((curr & STAGE_MASK) == STAGE_CANCELLED || (curr & STAGE_MASK) == STAGE_COMPLETED) {
++ // cancelled or completed, nothing to do here
++ return false;
++ }
++
++ if (curr == (curr = this.stageAndReferenceCount.compareAndExchange(curr, curr + (1L << 32)))) {
++ // successful
++ return false;
++ }
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ if (!delay) {
++ this.scheduleNow();
++ return false;
++ }
++ return true;
++ }
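++ // usage sketch of the delayed form (as done by NewChunkHolder#completeEntityLoad):
++ // final boolean scheduleNow = task.schedule(true);
++ // /* ... release the scheduling lock ... */
++ // if (scheduleNow) {
++ // task.scheduleNow();
++ // }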
++
++ public void scheduleNow() {
++ this.loadDataFromDiskTask.schedule(); // will schedule the rest
++ }
++
++ // assumes the current stage cannot be completed
++ // returns false if cancelled, returns true if can proceed
++ private boolean advanceStage(final long expect, final long to) {
++ int failures = 0;
++ for (long curr = this.stageAndReferenceCount.get();;) {
++ if ((curr & STAGE_MASK) != expect) {
++ // must be cancelled
++ return false;
++ }
++
++ final long newVal = (curr & ~STAGE_MASK) | to;
++ if (curr == (curr = this.stageAndReferenceCount.compareAndExchange(curr, newVal))) {
++ return true;
++ }
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
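++ // note: cancel() is reference counted - each successful schedule() call adds one reference,
++ // and each cancel() releases one; only the release of the final reference (before execution
++ // has started) actually transitions the stage to STAGE_CANCELLED and fires onComplete(null)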
++ public boolean cancel() {
++ int failures = 0;
++ for (long curr = this.stageAndReferenceCount.get();;) {
++ if ((curr & STAGE_MASK) == STAGE_COMPLETED || (curr & STAGE_MASK) == STAGE_CANCELLED) {
++ return false;
++ }
++
++ if ((curr & STAGE_MASK) == STAGE_NOT_STARTED || (curr & ~STAGE_MASK) == (1L << 32)) {
++ // no other references, so we can cancel
++ final long newVal = STAGE_CANCELLED;
++ if (curr == (curr = this.stageAndReferenceCount.compareAndExchange(curr, newVal))) {
++ this.loadDataFromDiskTask.cancel();
++ if (this.processOffMain != null) {
++ this.processOffMain.cancel();
++ }
++ if (this.processOnMain != null) {
++ this.processOnMain.cancel();
++ }
++ this.onComplete(null);
++ return true;
++ }
++ } else {
++ if ((curr & ~STAGE_MASK) == (0L << 32)) {
++ throw new IllegalStateException("Reference count cannot be zero here");
++ }
++ // just decrease the reference count
++ final long newVal = curr - (1L << 32);
++ if (curr == (curr = this.stageAndReferenceCount.compareAndExchange(curr, newVal))) {
++ return false;
++ }
++ }
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ protected final class DataLoadCallback implements BiConsumer<CompoundTag, Throwable> {
++
++ protected final ProcessOffMainTask offMainTask;
++ protected final ProcessOnMainTask onMainTask;
++
++ public DataLoadCallback(final ProcessOffMainTask offMainTask, final ProcessOnMainTask onMainTask) {
++ this.offMainTask = offMainTask;
++ this.onMainTask = onMainTask;
++ }
++
++ @Override
++ public void accept(final CompoundTag compoundTag, final Throwable throwable) {
++ if (GenericDataLoadTask.this.stageAndReferenceCount.get() == STAGE_CANCELLED) {
++ // don't try to schedule further
++ return;
++ }
++
++ try {
++ if (compoundTag == CANCELLED_DATA) {
++ // cancelled - except that isn't possible here, since the stage was already checked above
++ LOGGER.error("Data callback says cancelled, but stage does not?");
++ return;
++ }
++
++ // get off of the regionfile callback ASAP, no clue what locks are held right now...
++ if (GenericDataLoadTask.this.processOffMain != null) {
++ this.offMainTask.data = compoundTag;
++ this.offMainTask.throwable = throwable;
++ GenericDataLoadTask.this.processOffMain.queue();
++ return;
++ } else {
++ // no off-main task, so go straight to main
++ this.onMainTask.data = (OnMain)compoundTag;
++ this.onMainTask.throwable = throwable;
++ GenericDataLoadTask.this.processOnMain.queue();
++ }
++ } catch (final ThreadDeath death) {
++ throw death;
++ } catch (final Throwable thr2) {
++ LOGGER.error("Failed I/O callback for task: " + GenericDataLoadTask.this.toString(), thr2);
++ GenericDataLoadTask.this.scheduler.unrecoverableChunkSystemFailure(
++ GenericDataLoadTask.this.chunkX, GenericDataLoadTask.this.chunkZ, Map.of(
++ "Callback throwable", ChunkTaskScheduler.stringIfNull(throwable)
++ ), thr2);
++ }
++ }
++ }
++
++ protected final class ProcessOffMainTask implements Runnable {
++
++ protected CompoundTag data;
++ protected Throwable throwable;
++ protected final ProcessOnMainTask schedule;
++
++ public ProcessOffMainTask(final ProcessOnMainTask schedule) {
++ this.schedule = schedule;
++ }
++
++ @Override
++ public void run() {
++ if (!GenericDataLoadTask.this.advanceStage(STAGE_LOADING, this.schedule == null ? STAGE_COMPLETED : STAGE_PROCESSING)) {
++ // cancelled
++ return;
++ }
++ final TaskResult<OnMain, Throwable> newData = GenericDataLoadTask.this.runOffMain(this.data, this.throwable);
++
++ if (GenericDataLoadTask.this.stageAndReferenceCount.get() == STAGE_CANCELLED) {
++ // don't try to schedule further
++ return;
++ }
++
++ if (this.schedule != null) {
++ final TaskResult<FinalCompletion, Throwable> syncComplete = GenericDataLoadTask.this.completeOnMainOffMain(newData.left, newData.right);
++
++ if (syncComplete != null) {
++ if (GenericDataLoadTask.this.advanceStage(STAGE_PROCESSING, STAGE_COMPLETED)) {
++ GenericDataLoadTask.this.onComplete(syncComplete);
++ } // else: cancelled
++ return;
++ }
++
++ this.schedule.data = newData.left;
++ this.schedule.throwable = newData.right;
++
++ GenericDataLoadTask.this.processOnMain.queue();
++ } else {
++ GenericDataLoadTask.this.onComplete((TaskResult<FinalCompletion, Throwable>)newData);
++ }
++ }
++ }
++
++ protected final class ProcessOnMainTask implements Runnable {
++
++ protected OnMain data;
++ protected Throwable throwable;
++
++ @Override
++ public void run() {
++ if (!GenericDataLoadTask.this.advanceStage(STAGE_PROCESSING, STAGE_COMPLETED)) {
++ // cancelled
++ return;
++ }
++ final TaskResult<FinalCompletion, Throwable> result = GenericDataLoadTask.this.runOnMain(this.data, this.throwable);
++
++ GenericDataLoadTask.this.onComplete(result);
++ }
++ }
++
++ public static final class LoadDataFromDiskTask {
++
++ protected volatile int priority;
++ protected static final VarHandle PRIORITY_HANDLE = ConcurrentUtil.getVarHandle(LoadDataFromDiskTask.class, "priority", int.class);
++
++ protected static final int PRIORITY_EXECUTED = Integer.MIN_VALUE >>> 0;
++ protected static final int PRIORITY_LOAD_SCHEDULED = Integer.MIN_VALUE >>> 1;
++ protected static final int PRIORITY_UNLOAD_SCHEDULED = Integer.MIN_VALUE >>> 2;
++
++ protected static final int PRIORITY_FLAGS = ~Character.MAX_VALUE;
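++ // layout sketch: the low bits hold the priority value itself, while the flag bits above
++ // Character.MAX_VALUE track lifecycle state; (curr & ~PRIORITY_FLAGS) extracts the plain
++ // priority, and (curr & PRIORITY_EXECUTED) != 0 tests whether the task completed or cancelled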
++
++ protected final int getPriorityVolatile() {
++ return (int)PRIORITY_HANDLE.getVolatile((LoadDataFromDiskTask)this);
++ }
++
++ protected final int compareAndExchangePriorityVolatile(final int expect, final int update) {
++ return (int)PRIORITY_HANDLE.compareAndExchange((LoadDataFromDiskTask)this, (int)expect, (int)update);
++ }
++
++ protected final int getAndOrPriorityVolatile(final int val) {
++ return (int)PRIORITY_HANDLE.getAndBitwiseOr((LoadDataFromDiskTask)this, (int)val);
++ }
++
++ protected final void setPriorityPlain(final int val) {
++ PRIORITY_HANDLE.set((LoadDataFromDiskTask)this, (int)val);
++ }
++
++ private final ServerLevel world;
++ private final int chunkX;
++ private final int chunkZ;
++
++ private final RegionFileIOThread.RegionFileType type;
++ private Cancellable dataLoadTask;
++ private Cancellable dataUnloadCancellable;
++ private DelayedPrioritisedTask dataUnloadTask;
++
++ private final BiConsumer<CompoundTag, Throwable> onComplete;
++
++ // note: onComplete must be caller-sensitive, as it may be invoked synchronously from within
++ // schedule() - and the calling thread may be holding a priority lock at that point.
++ public LoadDataFromDiskTask(final ServerLevel world, final int chunkX, final int chunkZ,
++ final RegionFileIOThread.RegionFileType type,
++ final BiConsumer<CompoundTag, Throwable> onComplete,
++ final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++ this.world = world;
++ this.chunkX = chunkX;
++ this.chunkZ = chunkZ;
++ this.type = type;
++ this.onComplete = onComplete;
++ this.setPriorityPlain(priority.priority);
++ }
++
++ private void complete(final CompoundTag data, final Throwable throwable) {
++ try {
++ this.onComplete.accept(data, throwable);
++ } catch (final Throwable thr2) {
++ this.world.chunkTaskScheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
++ "Completed throwable", ChunkTaskScheduler.stringIfNull(throwable),
++ "Regionfile type", ChunkTaskScheduler.stringIfNull(this.type)
++ ), thr2);
++ if (thr2 instanceof ThreadDeath) {
++ throw (ThreadDeath)thr2;
++ }
++ }
++ }
++
++ protected boolean markExecuting() {
++ return (this.getAndOrPriorityVolatile(PRIORITY_EXECUTED) & PRIORITY_EXECUTED) == 0;
++ }
++
++ protected boolean isMarkedExecuted() {
++ return (this.getPriorityVolatile() & PRIORITY_EXECUTED) != 0;
++ }
++
++ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++
++ int failures = 0;
++ for (int curr = this.getPriorityVolatile();;) {
++ if ((curr & PRIORITY_EXECUTED) != 0) {
++ // cancelled or executed
++ return;
++ }
++
++ if ((curr & PRIORITY_LOAD_SCHEDULED) != 0) {
++ RegionFileIOThread.lowerPriority(this.world, this.chunkX, this.chunkZ, this.type, priority);
++ return;
++ }
++
++ if ((curr & PRIORITY_UNLOAD_SCHEDULED) != 0) {
++ if (this.dataUnloadTask != null) {
++ this.dataUnloadTask.lowerPriority(priority);
++ }
++ // no return - we need to propagate priority
++ }
++
++ if (!priority.isHigherPriority(curr & ~PRIORITY_FLAGS)) {
++ return;
++ }
++
++ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority.priority | (curr & PRIORITY_FLAGS)))) {
++ return;
++ }
++
++ // failed, retry
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ public void setPriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++
++ int failures = 0;
++ for (int curr = this.getPriorityVolatile();;) {
++ if ((curr & PRIORITY_EXECUTED) != 0) {
++ // cancelled or executed
++ return;
++ }
++
++ if ((curr & PRIORITY_LOAD_SCHEDULED) != 0) {
++ RegionFileIOThread.setPriority(this.world, this.chunkX, this.chunkZ, this.type, priority);
++ return;
++ }
++
++ if ((curr & PRIORITY_UNLOAD_SCHEDULED) != 0) {
++ if (this.dataUnloadTask != null) {
++ this.dataUnloadTask.setPriority(priority);
++ }
++ // no return - we need to propagate priority
++ }
++
++ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority.priority | (curr & PRIORITY_FLAGS)))) {
++ return;
++ }
++
++ // failed, retry
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ public void raisePriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++
++ int failures = 0;
++ for (int curr = this.getPriorityVolatile();;) {
++ if ((curr & PRIORITY_EXECUTED) != 0) {
++ // cancelled or executed
++ return;
++ }
++
++ if ((curr & PRIORITY_LOAD_SCHEDULED) != 0) {
++ RegionFileIOThread.raisePriority(this.world, this.chunkX, this.chunkZ, this.type, priority);
++ return;
++ }
++
++ if ((curr & PRIORITY_UNLOAD_SCHEDULED) != 0) {
++ if (this.dataUnloadTask != null) {
++ this.dataUnloadTask.raisePriority(priority);
++ }
++ // no return - we need to propagate priority
++ }
++
++ if (!priority.isLowerPriority(curr & ~PRIORITY_FLAGS)) {
++ return;
++ }
++
++ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority.priority | (curr & PRIORITY_FLAGS)))) {
++ return;
++ }
++
++ // failed, retry
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ public void cancel() {
++ if ((this.getAndOrPriorityVolatile(PRIORITY_EXECUTED) & PRIORITY_EXECUTED) != 0) {
++ // cancelled or executed already
++ return;
++ }
++
++ // OK if we miss the field read: the task cannot complete while the cancelled bit is set,
++ // and the code that writes dataLoadTask re-checks the cancelled bit
++ if (this.dataUnloadCancellable != null) {
++ this.dataUnloadCancellable.cancel();
++ }
++
++ if (this.dataLoadTask != null) {
++ this.dataLoadTask.cancel();
++ }
++
++ this.complete(CANCELLED_DATA, null);
++ }
++
++ private final AtomicBoolean scheduled = new AtomicBoolean();
++
++ public void schedule() {
++ if (this.scheduled.getAndSet(true)) {
++ throw new IllegalStateException("schedule() called twice");
++ }
++ int priority = this.getPriorityVolatile();
++
++ if ((priority & PRIORITY_EXECUTED) != 0) {
++ // cancelled
++ return;
++ }
++
++ final BiConsumer<CompoundTag, Throwable> consumer = (final CompoundTag data, final Throwable thr) -> {
++ // because cancelScheduled() cannot actually stop this task from executing in every case, we need
++ // to mark complete here to ensure we do not double complete
++ if (LoadDataFromDiskTask.this.markExecuting()) {
++ LoadDataFromDiskTask.this.complete(data, thr);
++ } // else: cancelled
++ };
++
++ final PrioritisedExecutor.Priority initialPriority = PrioritisedExecutor.Priority.getPriority(priority);
++ boolean scheduledUnload = false;
++
++ final NewChunkHolder holder = this.world.chunkTaskScheduler.chunkHolderManager.getChunkHolder(this.chunkX, this.chunkZ);
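++ // if the chunk is mid-unload, the most recent data may exist only in the pending unload
++ // task rather than on disk, so try to hook the unload completable before falling back to
++ // a disk read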
++ if (holder != null) {
++ final BiConsumer<CompoundTag, Throwable> unloadConsumer = (final CompoundTag data, final Throwable thr) -> {
++ if (data != null) {
++ consumer.accept(data, null);
++ } else {
++ // need to schedule task
++ LoadDataFromDiskTask.this.schedule(false, consumer, PrioritisedExecutor.Priority.getPriority(LoadDataFromDiskTask.this.getPriorityVolatile() & ~PRIORITY_FLAGS));
++ }
++ };
++ Cancellable unloadCancellable = null;
++ CompoundTag syncComplete = null;
++ final NewChunkHolder.UnloadTask unloadTask = holder.getUnloadTask(this.type); // can be null if no task exists
++ final Completable<CompoundTag> unloadCompletable = unloadTask == null ? null : unloadTask.completable();
++ if (unloadCompletable != null) {
++ unloadCancellable = unloadCompletable.addAsynchronousWaiter(unloadConsumer);
++ if (unloadCancellable == null) {
++ syncComplete = unloadCompletable.getResult();
++ }
++ }
++
++ if (syncComplete != null) {
++ consumer.accept(syncComplete, null);
++ return;
++ }
++
++ if (unloadCancellable != null) {
++ scheduledUnload = true;
++ this.dataUnloadCancellable = unloadCancellable;
++ this.dataUnloadTask = unloadTask.task();
++ }
++ }
++
++ this.schedule(scheduledUnload, consumer, initialPriority);
++ }
++
++ private void schedule(final boolean scheduledUnload, final BiConsumer<CompoundTag, Throwable> consumer, final PrioritisedExecutor.Priority initialPriority) {
++ int priority = this.getPriorityVolatile();
++
++ if ((priority & PRIORITY_EXECUTED) != 0) {
++ // cancelled
++ return;
++ }
++
++ if (!scheduledUnload) {
++ this.dataLoadTask = RegionFileIOThread.loadDataAsync(
++ this.world, this.chunkX, this.chunkZ, this.type, consumer,
++ initialPriority.isHigherPriority(PrioritisedExecutor.Priority.NORMAL), initialPriority
++ );
++ }
++
++ int failures = 0;
++ for (;;) {
++ if (priority == (priority = this.compareAndExchangePriorityVolatile(priority, priority | (scheduledUnload ? PRIORITY_UNLOAD_SCHEDULED : PRIORITY_LOAD_SCHEDULED)))) {
++ return;
++ }
++
++ if ((priority & PRIORITY_EXECUTED) != 0) {
++ // cancelled or executed
++ if (this.dataUnloadCancellable != null) {
++ this.dataUnloadCancellable.cancel();
++ }
++
++ if (this.dataLoadTask != null) {
++ this.dataLoadTask.cancel();
++ }
++ return;
++ }
++
++ if (scheduledUnload) {
++ if (this.dataUnloadTask != null) {
++ this.dataUnloadTask.setPriority(PrioritisedExecutor.Priority.getPriority(priority & ~PRIORITY_FLAGS));
++ }
++ } else {
++ RegionFileIOThread.setPriority(this.world, this.chunkX, this.chunkZ, this.type, PrioritisedExecutor.Priority.getPriority(priority & ~PRIORITY_FLAGS));
++ }
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ /*
++ private static final class LoadDataPriorityHolder extends PriorityHolder {
++
++ protected final LoadDataFromDiskTask task;
++
++ protected LoadDataPriorityHolder(final PrioritisedExecutor.Priority priority, final LoadDataFromDiskTask task) {
++ super(priority);
++ this.task = task;
++ }
++
++ @Override
++ protected void cancelScheduled() {
++ final Cancellable dataLoadTask = this.task.dataLoadTask;
++ if (dataLoadTask != null) {
++ // OK if we miss the field read, the task cannot complete if the cancelled bit is set and
++ // the write to dataLoadTask will check for the cancelled bit
++ this.task.dataLoadTask.cancel();
++ }
++ this.task.complete(CANCELLED_DATA, null);
++ }
++
++ @Override
++ protected PrioritisedExecutor.Priority getScheduledPriority() {
++ final LoadDataFromDiskTask task = this.task;
++ return RegionFileIOThread.getPriority(task.world, task.chunkX, task.chunkZ, task.type);
++ }
++
++ @Override
++ protected void scheduleTask(final PrioritisedExecutor.Priority priority) {
++ final LoadDataFromDiskTask task = this.task;
++ final BiConsumer<CompoundTag, Throwable> consumer = (final CompoundTag data, final Throwable thr) -> {
++ // because cancelScheduled() cannot actually stop this task from executing in every case, we need
++ // to mark complete here to ensure we do not double complete
++ if (LoadDataPriorityHolder.this.markExecuting()) {
++ LoadDataPriorityHolder.this.task.complete(data, thr);
++ } // else: cancelled
++ };
++ task.dataLoadTask = RegionFileIOThread.loadDataAsync(
++ task.world, task.chunkX, task.chunkZ, task.type, consumer,
++ priority.isHigherPriority(PrioritisedExecutor.Priority.NORMAL), priority
++ );
++ if (this.isMarkedExecuted()) {
++ // if we are marked as completed, it could be:
++ // 1. we were cancelled
++ // 2. the consumer was completed
++ // in the 2nd case, cancel() does nothing
++ // in the 1st case, we ensure cancel() is called as it is possible for the cancelling thread
++ // to miss the field write here
++ task.dataLoadTask.cancel();
++ }
++ }
++
++ @Override
++ protected void lowerPriorityScheduled(final PrioritisedExecutor.Priority priority) {
++ final LoadDataFromDiskTask task = this.task;
++ RegionFileIOThread.lowerPriority(task.world, task.chunkX, task.chunkZ, task.type, priority);
++ }
++
++ @Override
++ protected void setPriorityScheduled(final PrioritisedExecutor.Priority priority) {
++ final LoadDataFromDiskTask task = this.task;
++ RegionFileIOThread.setPriority(task.world, task.chunkX, task.chunkZ, task.type, priority);
++ }
++
++ @Override
++ protected void raisePriorityScheduled(final PrioritisedExecutor.Priority priority) {
++ final LoadDataFromDiskTask task = this.task;
++ RegionFileIOThread.raisePriority(task.world, task.chunkX, task.chunkZ, task.type, priority);
++ }
++ }
++ */
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/NewChunkHolder.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/NewChunkHolder.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..56b07a3306e5735816c8d89601b519cb0db6379a
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/NewChunkHolder.java
+@@ -0,0 +1,2106 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import ca.spottedleaf.concurrentutil.completable.Completable;
++import ca.spottedleaf.concurrentutil.executor.Cancellable;
++import ca.spottedleaf.concurrentutil.executor.standard.DelayedPrioritisedTask;
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock;
++import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
++import com.google.gson.JsonArray;
++import com.google.gson.JsonElement;
++import com.google.gson.JsonObject;
++import com.google.gson.JsonPrimitive;
++import com.mojang.logging.LogUtils;
++import io.papermc.paper.chunk.system.io.RegionFileIOThread;
++import io.papermc.paper.chunk.system.poi.PoiChunk;
++import io.papermc.paper.util.CoordinateUtils;
++import io.papermc.paper.util.TickThread;
++import io.papermc.paper.util.WorldUtil;
++import io.papermc.paper.world.ChunkEntitySlices;
++import it.unimi.dsi.fastutil.objects.Reference2ObjectLinkedOpenHashMap;
++import it.unimi.dsi.fastutil.objects.Reference2ObjectMap;
++import it.unimi.dsi.fastutil.objects.Reference2ObjectOpenHashMap;
++import it.unimi.dsi.fastutil.objects.ReferenceLinkedOpenHashSet;
++import net.minecraft.nbt.CompoundTag;
++import net.minecraft.server.level.ChunkHolder;
++import net.minecraft.server.level.ChunkLevel;
++import net.minecraft.server.level.FullChunkStatus;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.server.level.TicketType;
++import net.minecraft.world.entity.Entity;
++import net.minecraft.world.level.ChunkPos;
++import net.minecraft.world.level.chunk.ChunkAccess;
++import net.minecraft.world.level.chunk.status.ChunkStatus;
++import net.minecraft.world.level.chunk.ImposterProtoChunk;
++import net.minecraft.world.level.chunk.LevelChunk;
++import net.minecraft.world.level.chunk.storage.ChunkSerializer;
++import net.minecraft.world.level.chunk.storage.EntityStorage;
++import org.slf4j.Logger;
++import java.lang.invoke.VarHandle;
++import java.util.ArrayList;
++import java.util.Iterator;
++import java.util.List;
++import java.util.Map;
++import java.util.Objects;
++import java.util.concurrent.atomic.AtomicBoolean;
++import java.util.function.Consumer;
++
++public final class NewChunkHolder {
++
++ private static final Logger LOGGER = LogUtils.getClassLogger();
++
++ public static final Thread.UncaughtExceptionHandler CHUNKSYSTEM_UNCAUGHT_EXCEPTION_HANDLER = new Thread.UncaughtExceptionHandler() {
++ @Override
++ public void uncaughtException(final Thread thread, final Throwable throwable) {
++ if (!(throwable instanceof ThreadDeath)) {
++ LOGGER.error("Uncaught exception in thread " + thread.getName(), throwable);
++ }
++ }
++ };
++
++ public final ServerLevel world;
++ public final int chunkX;
++ public final int chunkZ;
++
++ public final ChunkTaskScheduler scheduler;
++
++ // load/unload state
++
++ // chunk data state
++
++ private ChunkEntitySlices entityChunk;
++ // entity chunk that is loaded, but not yet deserialized
++ private CompoundTag pendingEntityChunk;
++
++ ChunkEntitySlices loadInEntityChunk(final boolean transientChunk) {
++ TickThread.ensureTickThread(this.world, this.chunkX, this.chunkZ, "Cannot sync load entity data off-main");
++ final CompoundTag entityChunk;
++ final ChunkEntitySlices ret;
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
++ try {
++ if (this.entityChunk != null && (transientChunk || !this.entityChunk.isTransient())) {
++ return this.entityChunk;
++ }
++ final CompoundTag pendingEntityChunk = this.pendingEntityChunk;
++ if (!transientChunk && pendingEntityChunk == null) {
++ throw new IllegalStateException("Must load entity data from disk before loading in the entity chunk!");
++ }
++
++ if (this.entityChunk == null) {
++ ret = this.entityChunk = new ChunkEntitySlices(
++ this.world, this.chunkX, this.chunkZ, this.getChunkStatus(),
++ WorldUtil.getMinSection(this.world), WorldUtil.getMaxSection(this.world)
++ );
++
++ ret.setTransient(transientChunk);
++
++ this.world.getEntityLookup().entitySectionLoad(this.chunkX, this.chunkZ, ret);
++ } else {
++ // transientChunk = false here
++ ret = this.entityChunk;
++ this.entityChunk.setTransient(false);
++ }
++
++ if (!transientChunk) {
++ this.pendingEntityChunk = null;
++ entityChunk = pendingEntityChunk == EMPTY_ENTITY_CHUNK ? null : pendingEntityChunk;
++ } else {
++ entityChunk = null;
++ }
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++
++ if (!transientChunk) {
++ if (entityChunk != null) {
++ final List<Entity> entities = EntityStorage.readEntities(this.world, entityChunk);
++
++ this.world.getEntityLookup().addEntityChunkEntities(entities, new ChunkPos(this.chunkX, this.chunkZ));
++ }
++ }
++
++ return ret;
++ }
++
++ // needed to distinguish whether the entity chunk has been read from disk but is empty or whether it has _not_
++ // been read from disk
++ private static final CompoundTag EMPTY_ENTITY_CHUNK = new CompoundTag();
++
++ private ChunkLoadTask.EntityDataLoadTask entityDataLoadTask;
++ // note: if entityDataLoadTask is cancelled, but on its completion entityDataLoadTaskWaiters.size() != 0,
++ // then the task is rescheduled
++ private List<GenericDataLoadTaskCallback> entityDataLoadTaskWaiters;
++
++ public ChunkLoadTask.EntityDataLoadTask getEntityDataLoadTask() {
++ return this.entityDataLoadTask;
++ }
++
++ // must hold schedule lock for the two below functions
++
++ // returns only if the data has been loaded from disk, DOES NOT relate to whether it has been deserialized
++ // or added into the world (or even into entityChunk)
++ public boolean isEntityChunkNBTLoaded() {
++ return (this.entityChunk != null && !this.entityChunk.isTransient()) || this.pendingEntityChunk != null;
++ }
++
++ private void completeEntityLoad(final GenericDataLoadTask.TaskResult<CompoundTag, Throwable> result) {
++ final List<GenericDataLoadTaskCallback> completeWaiters;
++ ChunkLoadTask.EntityDataLoadTask entityDataLoadTask = null;
++ boolean scheduleEntityTask = false;
++ ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
++ try {
++ final List<GenericDataLoadTaskCallback> waiters = this.entityDataLoadTaskWaiters;
++ this.entityDataLoadTask = null;
++ if (result != null) {
++ this.entityDataLoadTaskWaiters = null;
++ this.pendingEntityChunk = result.left() == null ? EMPTY_ENTITY_CHUNK : result.left();
++ if (result.right() != null) {
++ LOGGER.error("Unhandled entity data load exception, data data will be lost: ", result.right());
++ }
++
++ for (final GenericDataLoadTaskCallback callback : waiters) {
++ callback.markCompleted();
++ }
++
++ completeWaiters = waiters;
++ } else {
++ // cancelled
++ completeWaiters = null;
++
++ // need to re-schedule?
++ if (waiters.isEmpty()) {
++ this.entityDataLoadTaskWaiters = null;
++ // no tasks to schedule _for_
++ } else {
++ entityDataLoadTask = this.entityDataLoadTask = new ChunkLoadTask.EntityDataLoadTask(
++ this.scheduler, this.world, this.chunkX, this.chunkZ, this.getEffectivePriority()
++ );
++ entityDataLoadTask.addCallback(this::completeEntityLoad);
++ // need one schedule() per waiter
++ for (final GenericDataLoadTaskCallback callback : waiters) {
++ scheduleEntityTask |= entityDataLoadTask.schedule(true);
++ }
++ }
++ }
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++
++ if (scheduleEntityTask) {
++ entityDataLoadTask.scheduleNow();
++ }
++
++ // avoid holding the scheduling lock while completing
++ if (completeWaiters != null) {
++ for (final GenericDataLoadTaskCallback callback : completeWaiters) {
++ callback.acceptCompleted(result);
++ }
++ }
++
++ schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
++ try {
++ this.checkUnload();
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++ }
++
++ // note: the consumer is guaranteed not to be invoked while the caller holds the schedule lock;
++ // completion is delivered only after the scheduling lock has been released (see completeEntityLoad)
++ public GenericDataLoadTaskCallback getOrLoadEntityData(final Consumer<GenericDataLoadTask.TaskResult<CompoundTag, Throwable>> consumer) {
++ if (this.isEntityChunkNBTLoaded()) {
++ throw new IllegalStateException("Cannot load entity data, it is already loaded");
++ }
++ // why not just acquire the lock? because the caller NEEDS to call isEntityChunkNBTLoaded before this!
++ if (!this.scheduler.schedulingLockArea.isHeldByCurrentThread(this.chunkX, this.chunkZ)) {
++ throw new IllegalStateException("Must hold scheduling lock");
++ }
++
++ final GenericDataLoadTaskCallback ret = new EntityDataLoadTaskCallback((Consumer)consumer, this);
++
++ if (this.entityDataLoadTask == null) {
++ this.entityDataLoadTask = new ChunkLoadTask.EntityDataLoadTask(
++ this.scheduler, this.world, this.chunkX, this.chunkZ, this.getEffectivePriority()
++ );
++ this.entityDataLoadTask.addCallback(this::completeEntityLoad);
++ this.entityDataLoadTaskWaiters = new ArrayList<>();
++ }
++ this.entityDataLoadTaskWaiters.add(ret);
++ if (this.entityDataLoadTask.schedule(true)) {
++ ret.schedule = this.entityDataLoadTask;
++ }
++ this.checkUnload();
++
++ return ret;
++ }
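++ // usage sketch (assumed caller pattern): call under the scheduling lock, then schedule the
++ // returned callback only after releasing it:
++ // final GenericDataLoadTaskCallback callback = holder.getOrLoadEntityData(consumer);
++ // /* ... release the scheduling lock ... */
++ // callback.schedule();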
++
++ private static final class EntityDataLoadTaskCallback extends GenericDataLoadTaskCallback {
++
++ public EntityDataLoadTaskCallback(final Consumer<GenericDataLoadTask.TaskResult<?, Throwable>> consumer, final NewChunkHolder chunkHolder) {
++ super(consumer, chunkHolder);
++ }
++
++ @Override
++ void internalCancel() {
++ this.chunkHolder.entityDataLoadTaskWaiters.remove(this);
++ this.chunkHolder.entityDataLoadTask.cancel();
++ }
++ }
++
++ private PoiChunk poiChunk;
++
++ private ChunkLoadTask.PoiDataLoadTask poiDataLoadTask;
++ // note: if poiDataLoadTask is cancelled, but on its completion poiDataLoadTaskWaiters.size() != 0,
++ // then the task is rescheduled
++ private List<GenericDataLoadTaskCallback> poiDataLoadTaskWaiters;
++
++ public ChunkLoadTask.PoiDataLoadTask getPoiDataLoadTask() {
++ return this.poiDataLoadTask;
++ }
++
++ // must hold schedule lock for the two below functions
++
++ public boolean isPoiChunkLoaded() {
++ return this.poiChunk != null;
++ }
++
++ private void completePoiLoad(final GenericDataLoadTask.TaskResult<PoiChunk, Throwable> result) {
++ final List<GenericDataLoadTaskCallback> completeWaiters;
++ ChunkLoadTask.PoiDataLoadTask poiDataLoadTask = null;
++ boolean schedulePoiTask = false;
++ ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
++ try {
++ final List<GenericDataLoadTaskCallback> waiters = this.poiDataLoadTaskWaiters;
++ this.poiDataLoadTask = null;
++ if (result != null) {
++ this.poiDataLoadTaskWaiters = null;
++ this.poiChunk = result.left();
++ if (result.right() != null) {
++ LOGGER.error("Unhandled poi load exception, poi data will be lost: ", result.right());
++ }
++
++ for (final GenericDataLoadTaskCallback callback : waiters) {
++ callback.markCompleted();
++ }
++
++ completeWaiters = waiters;
++ } else {
++ // cancelled
++ completeWaiters = null;
++
++ // need to re-schedule?
++ if (waiters.isEmpty()) {
++ this.poiDataLoadTaskWaiters = null;
++ // no tasks to schedule _for_
++ } else {
++ poiDataLoadTask = this.poiDataLoadTask = new ChunkLoadTask.PoiDataLoadTask(
++ this.scheduler, this.world, this.chunkX, this.chunkZ, this.getEffectivePriority()
++ );
++ poiDataLoadTask.addCallback(this::completePoiLoad);
++ // need one schedule() per waiter
++ for (final GenericDataLoadTaskCallback callback : waiters) {
++ schedulePoiTask |= poiDataLoadTask.schedule(true);
++ }
++ }
++ }
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++
++ if (schedulePoiTask) {
++ poiDataLoadTask.scheduleNow();
++ }
++
++ // avoid holding the scheduling lock while completing
++ if (completeWaiters != null) {
++ for (final GenericDataLoadTaskCallback callback : completeWaiters) {
++ callback.acceptCompleted(result);
++ }
++ }
++ schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
++ try {
++ this.checkUnload();
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++ }
++
++ // note: the consumer is guaranteed not to be invoked while the caller holds the schedule lock;
++ // completion is delivered only after the scheduling lock has been released (see completePoiLoad)
++ public GenericDataLoadTaskCallback getOrLoadPoiData(final Consumer<GenericDataLoadTask.TaskResult<PoiChunk, Throwable>> consumer) {
++ if (this.isPoiChunkLoaded()) {
++ throw new IllegalStateException("Cannot load poi data, it is already loaded");
++ }
++ // why not just acquire the lock? because the caller NEEDS to call isPoiChunkLoaded before this!
++ if (!this.scheduler.schedulingLockArea.isHeldByCurrentThread(this.chunkX, this.chunkZ)) {
++ throw new IllegalStateException("Must hold scheduling lock");
++ }
++
++ final GenericDataLoadTaskCallback ret = new PoiDataLoadTaskCallback((Consumer)consumer, this);
++
++ if (this.poiDataLoadTask == null) {
++ this.poiDataLoadTask = new ChunkLoadTask.PoiDataLoadTask(
++ this.scheduler, this.world, this.chunkX, this.chunkZ, this.getEffectivePriority()
++ );
++ this.poiDataLoadTask.addCallback(this::completePoiLoad);
++ this.poiDataLoadTaskWaiters = new ArrayList<>();
++ }
++ this.poiDataLoadTaskWaiters.add(ret);
++ if (this.poiDataLoadTask.schedule(true)) {
++ ret.schedule = this.poiDataLoadTask;
++ }
++ this.checkUnload();
++
++ return ret;
++ }
++
++ private static final class PoiDataLoadTaskCallback extends GenericDataLoadTaskCallback {
++
++ public PoiDataLoadTaskCallback(final Consumer<GenericDataLoadTask.TaskResult<?, Throwable>> consumer, final NewChunkHolder chunkHolder) {
++ super(consumer, chunkHolder);
++ }
++
++ @Override
++ void internalCancel() {
++ this.chunkHolder.poiDataLoadTaskWaiters.remove(this);
++ this.chunkHolder.poiDataLoadTask.cancel();
++ }
++ }
++
++ public static abstract class GenericDataLoadTaskCallback implements Cancellable {
++
++ protected final Consumer<GenericDataLoadTask.TaskResult<?, Throwable>> consumer;
++ protected final NewChunkHolder chunkHolder;
++ protected boolean completed;
++ protected GenericDataLoadTask<?, ?> schedule;
++ protected final AtomicBoolean scheduled = new AtomicBoolean();
++
++ public GenericDataLoadTaskCallback(final Consumer<GenericDataLoadTask.TaskResult<?, Throwable>> consumer,
++ final NewChunkHolder chunkHolder) {
++ this.consumer = consumer;
++ this.chunkHolder = chunkHolder;
++ }
++
++ public void schedule() {
++ if (this.scheduled.getAndSet(true)) {
++ throw new IllegalStateException("Double calling schedule()");
++ }
++ if (this.schedule != null) {
++ this.schedule.scheduleNow();
++ this.schedule = null;
++ }
++ }
++
++ boolean isCompleted() {
++ return this.completed;
++ }
++
++ // must hold scheduling lock
++ private boolean setCompleted() {
++ if (this.completed) {
++ return false;
++ }
++ return this.completed = true;
++ }
++
++ // must hold scheduling lock
++ void markCompleted() {
++ if (this.completed) {
++ throw new IllegalStateException("May not be completed here");
++ }
++ this.completed = true;
++ }
++
++ void acceptCompleted(final GenericDataLoadTask.TaskResult<?, Throwable> result) {
++ if (result != null) {
++ if (this.completed) {
++ this.consumer.accept(result);
++ } else {
++ throw new IllegalStateException("Cannot be uncompleted at this point");
++ }
++ } else {
++ throw new NullPointerException("Result cannot be null (cancelled)");
++ }
++ }
++
++ // holds scheduling lock
++ abstract void internalCancel();
++
++ @Override
++ public boolean cancel() {
++ final NewChunkHolder holder = this.chunkHolder; // Folia - use area based lock to reduce contention
++ final ReentrantAreaLock.Node schedulingLock = holder.scheduler.schedulingLockArea.lock(holder.chunkX, holder.chunkZ);
++ try {
++ if (!this.completed) {
++ this.completed = true;
++ this.internalCancel();
++ return true;
++ }
++ return false;
++ } finally {
++ holder.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++ }
++ }
++
++ private ChunkAccess currentChunk;
++
++ // generation status state
++
++ /**
++ * Current status the chunk has been brought up to by the chunk system. null indicates no work at all
++ */
++ private ChunkStatus currentGenStatus;
++
++ // This allows unsynchronised access to the chunk and last gen status
++ private volatile ChunkCompletion lastChunkCompletion;
++
++ public ChunkCompletion getLastChunkCompletion() {
++ return this.lastChunkCompletion;
++ }
++
++ public static final record ChunkCompletion(ChunkAccess chunk, ChunkStatus genStatus) {};
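++ // illustrative sketch of the intended lock-free read (an assumed usage, not taken verbatim
++ // from this patch): snapshot the volatile field once and use the pair together:
++ // final ChunkCompletion completion = holder.getLastChunkCompletion();
++ // if (completion != null && completion.genStatus().isOrAfter(target)) {
++ // return completion.chunk();
++ // }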
++
++ /**
++ * The target final chunk status the chunk system will bring the chunk to.
++ */
++ private ChunkStatus requestedGenStatus;
++
++ private ChunkProgressionTask generationTask;
++ private ChunkStatus generationTaskStatus;
++
++ /**
++ * contains the neighbours that this chunk generation is blocking on
++ */
++ protected final ReferenceLinkedOpenHashSet<NewChunkHolder> neighboursBlockingGenTask = new ReferenceLinkedOpenHashSet<>(4);
++
++ /**
++ * map of ChunkHolder -> Required Status for this chunk
++ */
++ protected final Reference2ObjectLinkedOpenHashMap<NewChunkHolder, ChunkStatus> neighboursWaitingForUs = new Reference2ObjectLinkedOpenHashMap<>();
++
++ public void addGenerationBlockingNeighbour(final NewChunkHolder neighbour) {
++ this.neighboursBlockingGenTask.add(neighbour);
++ }
++
++ public void addWaitingNeighbour(final NewChunkHolder neighbour, final ChunkStatus requiredStatus) {
++ final boolean wasEmpty = this.neighboursWaitingForUs.isEmpty();
++ this.neighboursWaitingForUs.put(neighbour, requiredStatus);
++ if (wasEmpty) {
++ this.checkUnload();
++ }
++ }
++
++ // priority state
++
++ // the target priority for this chunk to generate at
++ // TODO this will screw over scheduling at lower priorities for neighbours, fix
++ private PrioritisedExecutor.Priority priority = PrioritisedExecutor.Priority.NORMAL;
++ private boolean priorityLocked;
++
++ // the priority neighbouring chunks have requested this chunk generate at
++ private PrioritisedExecutor.Priority neighbourRequestedPriority = PrioritisedExecutor.Priority.IDLE;
++
++ public PrioritisedExecutor.Priority getEffectivePriority() {
++ return PrioritisedExecutor.Priority.max(this.priority, this.neighbourRequestedPriority);
++ }
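++ // i.e. a chunk is scheduled at the strongest of its own requested priority and the highest
++ // priority requested by any neighbour waiting on it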
++
++ protected void recalculateNeighbourRequestedPriority() {
++ if (this.neighboursWaitingForUs.isEmpty()) {
++ this.neighbourRequestedPriority = PrioritisedExecutor.Priority.IDLE;
++ return;
++ }
++
++ PrioritisedExecutor.Priority max = PrioritisedExecutor.Priority.IDLE;
++
++ for (final NewChunkHolder holder : this.neighboursWaitingForUs.keySet()) {
++ final PrioritisedExecutor.Priority neighbourPriority = holder.getEffectivePriority();
++ if (neighbourPriority.isHigherPriority(max)) {
++ max = neighbourPriority;
++ }
++ }
++
++ final PrioritisedExecutor.Priority current = this.getEffectivePriority();
++ this.neighbourRequestedPriority = max;
++ final PrioritisedExecutor.Priority next = this.getEffectivePriority();
++
++ if (current == next) {
++ return;
++ }
++
++ // our effective priority has changed, so change our task
++ if (this.generationTask != null) {
++ this.generationTask.setPriority(next);
++ }
++
++ // now propagate this to our neighbours
++ this.recalculateNeighbourPriorities();
++ }
++
++ public void recalculateNeighbourPriorities() {
++ for (final NewChunkHolder holder : this.neighboursBlockingGenTask) {
++ holder.recalculateNeighbourRequestedPriority();
++ }
++ }
++
++ // must hold scheduling lock
++ public void raisePriority(final PrioritisedExecutor.Priority priority) {
++ if (this.priority != null && this.priority.isHigherOrEqualPriority(priority)) {
++ return;
++ }
++ this.setPriority(priority);
++ }
++
++ private void lockPriority() {
++ this.priority = PrioritisedExecutor.Priority.NORMAL;
++ this.priorityLocked = true;
++ }
++
++ // must hold scheduling lock
++ public void setPriority(final PrioritisedExecutor.Priority priority) {
++ if (this.priorityLocked) {
++ return;
++ }
++ final PrioritisedExecutor.Priority old = this.getEffectivePriority();
++ this.priority = priority;
++ final PrioritisedExecutor.Priority newPriority = this.getEffectivePriority();
++
++ if (old != newPriority) {
++ if (this.generationTask != null) {
++ this.generationTask.setPriority(newPriority);
++ }
++ }
++
++ this.recalculateNeighbourPriorities();
++ }
++
++ // must hold scheduling lock
++ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
++ if (this.priority != null && this.priority.isLowerOrEqualPriority(priority)) {
++ return;
++ }
++ this.setPriority(priority);
++ }
++
++ // error handling state
++ private ChunkStatus failedGenStatus;
++ private Throwable genTaskException;
++ private Thread genTaskFailedThread;
++
++ private boolean failedLightUpdate;
++
++ public void failedLightUpdate() {
++ this.failedLightUpdate = true;
++ }
++
++ public boolean hasFailedGeneration() {
++ return this.genTaskException != null;
++ }
++
++ // ticket level state
++ private int oldTicketLevel = ChunkLevel.MAX_LEVEL + 1;
++ private int currentTicketLevel = ChunkLevel.MAX_LEVEL + 1;
++
++ public int getTicketLevel() {
++ return this.currentTicketLevel;
++ }
++
++ public final ChunkHolder vanillaChunkHolder;
++
++ public NewChunkHolder(final ServerLevel world, final int chunkX, final int chunkZ, final ChunkTaskScheduler scheduler) {
++ this.world = world;
++ this.chunkX = chunkX;
++ this.chunkZ = chunkZ;
++ this.scheduler = scheduler;
++ this.vanillaChunkHolder = new ChunkHolder(new ChunkPos(chunkX, chunkZ), world, world.getLightEngine(), world.chunkSource.chunkMap, this);
++ }
++
++ protected ImposterProtoChunk wrappedChunkForNeighbour;
++
++ // holds scheduling lock
++ public ChunkAccess getChunkForNeighbourAccess() {
++ // Vanilla overrides the status futures with an imposter chunk to prevent writes to full chunks
++ // But we don't store per-status futures, so we need this hack
++ if (this.wrappedChunkForNeighbour != null) {
++ return this.wrappedChunkForNeighbour;
++ }
++ final ChunkAccess ret = this.currentChunk;
++ return ret instanceof LevelChunk fullChunk ? this.wrappedChunkForNeighbour = new ImposterProtoChunk(fullChunk, false) : ret;
++ }
++
++ public ChunkAccess getCurrentChunk() {
++ return this.currentChunk;
++ }
++
++ int getCurrentTicketLevel() {
++ return this.currentTicketLevel;
++ }
++
++ void updateTicketLevel(final int toLevel) {
++ this.currentTicketLevel = toLevel;
++ }
++
++ private int totalNeighboursUsingThisChunk = 0;
++
++ // holds schedule lock
++ public void addNeighbourUsingChunk() {
++ final int now = ++this.totalNeighboursUsingThisChunk;
++
++ if (now == 1) {
++ this.checkUnload();
++ }
++ }
++
++ // holds schedule lock
++ public void removeNeighbourUsingChunk() {
++ final int now = --this.totalNeighboursUsingThisChunk;
++
++ if (now == 0) {
++ this.checkUnload();
++ }
++
++ if (now < 0) {
++ throw new IllegalStateException("Neighbours using this chunk cannot be negative");
++ }
++ }
++
++ // must hold scheduling lock
++ // returns string reason for why chunk should remain loaded, null otherwise
++ public final String isSafeToUnload() {
++ // is ticket level below threshold?
++ if (this.oldTicketLevel <= ChunkHolderManager.MAX_TICKET_LEVEL) {
++ return "ticket_level";
++ }
++
++ // are we being used by another chunk for generation?
++ if (this.totalNeighboursUsingThisChunk != 0) {
++ return "neighbours_generating";
++ }
++
++ // are we going to be used by another chunk for generation?
++ if (!this.neighboursWaitingForUs.isEmpty()) {
++ return "neighbours_waiting";
++ }
++
++ // chunk must be marked inaccessible (i.e. unloaded as far as plugins are concerned)
++ if (this.getChunkStatus() != FullChunkStatus.INACCESSIBLE) {
++ return "fullchunkstatus";
++ }
++
++ // are we currently generating anything, or have requested generation?
++ if (this.generationTask != null) {
++ return "generating";
++ }
++ if (this.requestedGenStatus != null) {
++ return "requested_generation";
++ }
++
++ // entity data requested?
++ if (this.entityDataLoadTask != null) {
++ return "entity_data_requested";
++ }
++
++ // poi data requested?
++ if (this.poiDataLoadTask != null) {
++ return "poi_data_requested";
++ }
++
++ // are we pending serialization?
++ if (this.entityDataUnload != null) {
++ return "entity_serialization";
++ }
++ if (this.poiDataUnload != null) {
++ return "poi_serialization";
++ }
++ if (this.chunkDataUnload != null) {
++ return "chunk_serialization";
++ }
++
++ // Note: light tasks do not need a check, as they add a ticket.
++
++ // nothing is using this chunk, so it should be unloaded
++ return null;
++ }
++
++ /** Unloaded from chunk map */
++ boolean killed;
++
++ // must hold scheduling lock
++ private void checkUnload() {
++ if (this.killed) {
++ return;
++ }
++ if (this.isSafeToUnload() == null) {
++ // ensure in unload queue
++ this.scheduler.chunkHolderManager.unloadQueue.addChunk(this.chunkX, this.chunkZ);
++ } else {
++ // ensure not in unload queue
++ this.scheduler.chunkHolderManager.unloadQueue.removeChunk(this.chunkX, this.chunkZ);
++ }
++ }
++
++ static final record UnloadState(NewChunkHolder holder, ChunkAccess chunk, ChunkEntitySlices entityChunk, PoiChunk poiChunk) {};
++
++ // note: these are completed with null both when no write occurred and when a null write
++ // occurred, so waiters cannot distinguish the two cases
++ private UnloadTask chunkDataUnload;
++ private UnloadTask entityDataUnload;
++ private UnloadTask poiDataUnload;
++
++ public static final record UnloadTask(Completable<CompoundTag> completable, DelayedPrioritisedTask task) {}
++
++ public UnloadTask getUnloadTask(final RegionFileIOThread.RegionFileType type) {
++ switch (type) {
++ case CHUNK_DATA:
++ return this.chunkDataUnload;
++ case ENTITY_DATA:
++ return this.entityDataUnload;
++ case POI_DATA:
++ return this.poiDataUnload;
++ default:
++ throw new IllegalStateException("Unknown regionfile type " + type);
++ }
++ }
++
++ private UnloadState unloadState;
++
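++ // unload protocol summary: stage 1 runs under the schedule lock and detaches the chunk,
++ // entity and poi state while creating the pending unload tasks; stage 2 runs outside the
++ // lock and performs the actual saves and completable callbacks; stage 3 re-acquires the
++ // lock and verifies nothing was loaded back in before the holder is actually removed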
++ // holds schedule lock
++ UnloadState unloadStage1() {
++ // because we hold the scheduling lock, we cannot actually unload anything
++ // so we need to null this chunk's state
++ ChunkAccess chunk = this.currentChunk;
++ ChunkEntitySlices entityChunk = this.entityChunk;
++ PoiChunk poiChunk = this.poiChunk;
++ // chunk state
++ this.currentChunk = null;
++ this.currentGenStatus = null;
++ this.wrappedChunkForNeighbour = null;
++ this.lastChunkCompletion = null;
++ // entity chunk state
++ this.entityChunk = null;
++ this.pendingEntityChunk = null;
++
++ // poi chunk state
++ this.poiChunk = null;
++
++ // priority state
++ this.priorityLocked = false;
++
++ if (chunk != null) {
++ this.chunkDataUnload = new UnloadTask(new Completable<>(), new DelayedPrioritisedTask(PrioritisedExecutor.Priority.NORMAL));
++ }
++ if (poiChunk != null) {
++ this.poiDataUnload = new UnloadTask(new Completable<>(), null);
++ }
++ if (entityChunk != null) {
++ this.entityDataUnload = new UnloadTask(new Completable<>(), null);
++ }
++
++ return this.unloadState = (chunk != null || entityChunk != null || poiChunk != null) ? new UnloadState(this, chunk, entityChunk, poiChunk) : null;
++ }
++
++ // data is null if failed or does not need to be saved
++ void completeAsyncChunkDataSave(final CompoundTag data) {
++ if (data != null) {
++ RegionFileIOThread.scheduleSave(this.world, this.chunkX, this.chunkZ, data, RegionFileIOThread.RegionFileType.CHUNK_DATA);
++ }
++ this.chunkDataUnload.completable().complete(data);
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
++ try {
++ // can only write to these fields while holding the schedule lock
++ this.chunkDataUnload = null;
++ this.checkUnload();
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++ }
++
++ void unloadStage2(final UnloadState state) {
++ this.unloadState = null;
++ final ChunkAccess chunk = state.chunk();
++ final ChunkEntitySlices entityChunk = state.entityChunk();
++ final PoiChunk poiChunk = state.poiChunk();
++
++ final boolean shouldLevelChunkNotSave = (chunk instanceof LevelChunk levelChunk && levelChunk.mustNotSave);
++
++ // unload chunk data
++ if (chunk != null) {
++ if (chunk instanceof LevelChunk levelChunk) {
++ levelChunk.setLoaded(false);
++ }
++
++ if (!shouldLevelChunkNotSave) {
++ this.saveChunk(chunk, true);
++ } else {
++ this.completeAsyncChunkDataSave(null);
++ }
++
++ if (chunk instanceof LevelChunk levelChunk) {
++ this.world.unload(levelChunk);
++ }
++ }
++
++ // unload entity data
++ if (entityChunk != null) {
++ this.saveEntities(entityChunk, true);
++ // yes this is a hack to pass the compound tag through...
++ final CompoundTag lastEntityUnload = this.lastEntityUnload;
++ this.lastEntityUnload = null;
++
++ if (entityChunk.unload()) {
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
++ try {
++ entityChunk.setTransient(true);
++ this.entityChunk = entityChunk;
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++ } else {
++ this.world.getEntityLookup().entitySectionUnload(this.chunkX, this.chunkZ);
++ }
++ // we need to delay the callback until after determining transience, otherwise a potential loader could
++ // set entityChunk before we do
++ this.entityDataUnload.completable().complete(lastEntityUnload);
++ }
++
++ // unload poi data
++ if (poiChunk != null) {
++ if (poiChunk.isDirty() && !shouldLevelChunkNotSave) {
++ this.savePOI(poiChunk, true);
++ } else {
++ this.poiDataUnload.completable().complete(null);
++ }
++
++ if (poiChunk.isLoaded()) {
++ this.world.getPoiManager().onUnload(CoordinateUtils.getChunkKey(this.chunkX, this.chunkZ));
++ }
++ }
++ }
++
++ boolean unloadStage3() {
++ // can only write to these while holding the schedule lock, and we instantly complete them in stage2
++ this.poiDataUnload = null;
++ this.entityDataUnload = null;
++
++ // we need to check if anything has been loaded in the meantime (or if we have transient entities)
++ if (this.entityChunk != null || this.poiChunk != null || this.currentChunk != null) {
++ return false;
++ }
++
++ return this.isSafeToUnload() == null;
++ }
++
++ private void cancelGenTask() {
++ if (this.generationTask != null) {
++ this.generationTask.cancel();
++ } else {
++ // otherwise, we are blocking on neighbours, so remove them
++ if (!this.neighboursBlockingGenTask.isEmpty()) {
++ for (final NewChunkHolder neighbour : this.neighboursBlockingGenTask) {
++ if (neighbour.neighboursWaitingForUs.remove(this) == null) {
++ throw new IllegalStateException("Corrupt state");
++ }
++ if (neighbour.neighboursWaitingForUs.isEmpty()) {
++ neighbour.checkUnload();
++ }
++ }
++ this.neighboursBlockingGenTask.clear();
++ this.checkUnload();
++ }
++ }
++ }
++
++ // holds: ticket level update lock
++ // holds: schedule lock
++ public void processTicketLevelUpdate(final List<ChunkProgressionTask> scheduledTasks, final List<NewChunkHolder> changedLoadStatus) {
++ final int oldLevel = this.oldTicketLevel;
++ final int newLevel = this.currentTicketLevel;
++
++ if (oldLevel == newLevel) {
++ return;
++ }
++
++ this.oldTicketLevel = newLevel;
++
++ final FullChunkStatus oldState = ChunkLevel.fullStatus(oldLevel);
++ final FullChunkStatus newState = ChunkLevel.fullStatus(newLevel);
++ final boolean oldUnloaded = oldLevel > ChunkHolderManager.MAX_TICKET_LEVEL;
++ final boolean newUnloaded = newLevel > ChunkHolderManager.MAX_TICKET_LEVEL;
++
++ final ChunkStatus maxGenerationStatusOld = ChunkLevel.generationStatus(oldLevel);
++ final ChunkStatus maxGenerationStatusNew = ChunkLevel.generationStatus(newLevel);
++
++ // check for cancellations from downgrading ticket level
++ if (this.requestedGenStatus != null && !newState.isOrAfter(FullChunkStatus.FULL) && newLevel > oldLevel) {
++ // note: cancel() may invoke onChunkGenComplete synchronously here
++ if (newUnloaded) {
++ // need to cancel all tasks
++ // note: requested status must be set to null here before cancellation, to indicate to the
++ // completion logic that we do not want rescheduling to occur
++ this.requestedGenStatus = null;
++ this.cancelGenTask();
++ } else {
++ final ChunkStatus toCancel = maxGenerationStatusNew.getNextStatus();
++ final ChunkStatus currentRequestedStatus = this.requestedGenStatus;
++
++ if (currentRequestedStatus.isOrAfter(toCancel)) {
++ // we do have to cancel something here
++ // clamp requested status to the maximum
++ if (this.currentGenStatus != null && this.currentGenStatus.isOrAfter(maxGenerationStatusNew)) {
++ // already generated to status, so we must cancel
++ this.requestedGenStatus = null;
++ this.cancelGenTask();
++ } else {
++ // not generated to status, so we may have to cancel
++ // note: gen task is always 1 status above current gen status if not null
++ this.requestedGenStatus = maxGenerationStatusNew;
++ if (this.generationTaskStatus != null && this.generationTaskStatus.isOrAfter(toCancel)) {
++ // TODO is this even possible? I don't think so
++ throw new IllegalStateException("Generation task status is at or above the cancellation target");
++ }
++ }
++ }
++ }
++ }
++
++ if (newState != oldState) {
++ if (newState.isOrAfter(oldState)) {
++ // status upgrade
++ if (!oldState.isOrAfter(FullChunkStatus.FULL) && newState.isOrAfter(FullChunkStatus.FULL)) {
++ // may need to schedule full load
++ if (this.currentGenStatus != ChunkStatus.FULL) {
++ if (this.requestedGenStatus != null) {
++ this.requestedGenStatus = ChunkStatus.FULL;
++ } else {
++ this.scheduler.schedule(
++ this.chunkX, this.chunkZ, ChunkStatus.FULL, this, scheduledTasks
++ );
++ }
++ } else {
++ // now we are fully loaded
++ this.queueBorderFullStatus(true, changedLoadStatus);
++ }
++ }
++ } else {
++ // status downgrade
++ if (!newState.isOrAfter(FullChunkStatus.ENTITY_TICKING) && oldState.isOrAfter(FullChunkStatus.ENTITY_TICKING)) {
++ this.completeFullStatusConsumers(FullChunkStatus.ENTITY_TICKING, null);
++ }
++
++ if (!newState.isOrAfter(FullChunkStatus.BLOCK_TICKING) && oldState.isOrAfter(FullChunkStatus.BLOCK_TICKING)) {
++ this.completeFullStatusConsumers(FullChunkStatus.BLOCK_TICKING, null);
++ }
++
++ if (!newState.isOrAfter(FullChunkStatus.FULL) && oldState.isOrAfter(FullChunkStatus.FULL)) {
++ this.completeFullStatusConsumers(FullChunkStatus.FULL, null);
++ }
++ }
++ }
++
++ if (oldState != newState) {
++ if (this.onTicketUpdate(oldState, newState)) {
++ changedLoadStatus.add(this);
++ }
++ }
++
++ if (oldUnloaded != newUnloaded) {
++ this.checkUnload();
++ }
++ }
++
++ /*
++ For full chunks, vanilla just loads chunks around it up to FEATURES, 1 radius
++
++ For ticking chunks, it updates the persistent entity manager (soon to be completely nuked by EntitySliceManager, which
++ will also need to be updated, but with far fewer implications)
++ It also shoves the scheduled block ticks into the tick scheduler
++
++ For entity ticking chunks, updates the entity manager (see above)
++ */
++
++ static final int NEIGHBOUR_RADIUS = 2;
++ private long fullNeighbourChunksLoadedBitset;
++
++ private static int getFullNeighbourIndex(final int relativeX, final int relativeZ) {
++ // index = (relativeX + NEIGHBOUR_RADIUS) + (relativeZ + NEIGHBOUR_RADIUS) * (NEIGHBOUR_RADIUS * 2 + 1)
++ // optimised variant of the above by moving some of the ops to compile time
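++ // worked example (NEIGHBOUR_RADIUS = 2, i.e. a 5x5 grid): (0, 0) -> 12, (-2, -2) -> 0, (2, 2) -> 24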
++ return relativeX + (relativeZ * (NEIGHBOUR_RADIUS * 2 + 1)) + (NEIGHBOUR_RADIUS + NEIGHBOUR_RADIUS * ((NEIGHBOUR_RADIUS * 2 + 1)));
++ }
++ public final boolean isNeighbourFullLoaded(final int relativeX, final int relativeZ) {
++ return (this.fullNeighbourChunksLoadedBitset & (1L << getFullNeighbourIndex(relativeX, relativeZ))) != 0;
++ }
++
++ // returns true if this chunk changed full status
++ public final boolean setNeighbourFullLoaded(final int relativeX, final int relativeZ) {
++ final long before = this.fullNeighbourChunksLoadedBitset;
++ final int index = getFullNeighbourIndex(relativeX, relativeZ);
++ this.fullNeighbourChunksLoadedBitset |= (1L << index);
++ return this.onNeighbourChange(before, this.fullNeighbourChunksLoadedBitset);
++ }
++
++ // returns true if this chunk changed full status
++ public final boolean setNeighbourFullUnloaded(final int relativeX, final int relativeZ) {
++ final long before = this.fullNeighbourChunksLoadedBitset;
++ final int index = getFullNeighbourIndex(relativeX, relativeZ);
++ this.fullNeighbourChunksLoadedBitset &= ~(1L << index);
++ return this.onNeighbourChange(before, this.fullNeighbourChunksLoadedBitset);
++ }
++
++ public static boolean areNeighboursFullLoaded(final long bitset, final int radius) {
++ // index = relativeX + (relativeZ * (NEIGHBOUR_RADIUS * 2 + 1)) + (NEIGHBOUR_RADIUS + NEIGHBOUR_RADIUS * ((NEIGHBOUR_RADIUS * 2 + 1)))
++ switch (radius) {
++ case 0: {
++ return (bitset & (1L << getFullNeighbourIndex(0, 0))) != 0L;
++ }
++ case 1: {
++ long mask = 0L;
++ for (int dx = -1; dx <= 1; ++dx) {
++ for (int dz = -1; dz <= 1; ++dz) {
++ mask |= (1L << getFullNeighbourIndex(dx, dz));
++ }
++ }
++ return (bitset & mask) == mask;
++ }
++ case 2: {
++ long mask = 0L;
++ for (int dx = -2; dx <= 2; ++dx) {
++ for (int dz = -2; dz <= 2; ++dz) {
++ mask |= (1L << getFullNeighbourIndex(dx, dz));
++ }
++ }
++ return (bitset & mask) == mask;
++ }
++
++ default: {
++ throw new IllegalArgumentException("Radius not recognized: " + radius);
++ }
++ }
++ }
++
++ // upper 32 bits are pending status, lower 32 bits are current status
++ private volatile long chunkStatus;
++ private static final long PENDING_STATUS_MASK = Long.MIN_VALUE >> 31;
++ private static final FullChunkStatus[] CHUNK_STATUS_BY_ID = FullChunkStatus.values();
++ private static final VarHandle CHUNK_STATUS_HANDLE = ConcurrentUtil.getVarHandle(NewChunkHolder.class, "chunkStatus", long.class);
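++ // encoding sketch, assuming the ordinal order INACCESSIBLE < FULL < BLOCK_TICKING < ENTITY_TICKING
++ // implied by the isOrAfter checks below: current = FULL (1), pending = BLOCK_TICKING (2) encodes as
++ // (2L << 32) | 1L = 0x0000_0002_0000_0001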
++
++ public static FullChunkStatus getCurrentChunkStatus(final long encoded) {
++ return CHUNK_STATUS_BY_ID[(int)encoded];
++ }
++
++ public static FullChunkStatus getPendingChunkStatus(final long encoded) {
++ return CHUNK_STATUS_BY_ID[(int)(encoded >>> 32)];
++ }
++
++ public FullChunkStatus getChunkStatus() {
++ return getCurrentChunkStatus(((long)CHUNK_STATUS_HANDLE.getVolatile((NewChunkHolder)this)));
++ }
++
++ public boolean isEntityTickingReady() {
++ return this.getChunkStatus().isOrAfter(FullChunkStatus.ENTITY_TICKING);
++ }
++
++ public boolean isTickingReady() {
++ return this.getChunkStatus().isOrAfter(FullChunkStatus.BLOCK_TICKING);
++ }
++
++ public boolean isFullChunkReady() {
++ return this.getChunkStatus().isOrAfter(FullChunkStatus.FULL);
++ }
++
++ private static FullChunkStatus getStatusForBitset(final long bitset) {
++ if (areNeighboursFullLoaded(bitset, 2)) {
++ return FullChunkStatus.ENTITY_TICKING;
++ } else if (areNeighboursFullLoaded(bitset, 1)) {
++ return FullChunkStatus.BLOCK_TICKING;
++ } else if (areNeighboursFullLoaded(bitset, 0)) {
++ return FullChunkStatus.FULL;
++ } else {
++ return FullChunkStatus.INACCESSIBLE;
++ }
++ }
++
++ // note: only while updating ticket level, so holds ticket update lock + scheduling lock
++ protected final boolean onTicketUpdate(final FullChunkStatus oldState, final FullChunkStatus newState) {
++ if (oldState == newState) {
++ return false;
++ }
++
++ // preserve border request after full status complete, as it does not set anything in the bitset
++ FullChunkStatus byNeighbours = getStatusForBitset(this.fullNeighbourChunksLoadedBitset);
++ if (byNeighbours == FullChunkStatus.INACCESSIBLE && newState.isOrAfter(FullChunkStatus.FULL) && this.currentGenStatus == ChunkStatus.FULL) {
++ byNeighbours = FullChunkStatus.FULL;
++ }
++
++ final FullChunkStatus toSet;
++
++ if (newState.isOrAfter(byNeighbours)) {
++ // must clamp to neighbours level, even though we have the ticket level
++ toSet = byNeighbours;
++ } else {
++ // must clamp to ticket level, even though we have the neighbours
++ toSet = newState;
++ }
++
++ long curr = (long)CHUNK_STATUS_HANDLE.getVolatile((NewChunkHolder)this);
++
++ if (curr == ((long)toSet.ordinal() | ((long)toSet.ordinal() << 32))) {
++ // nothing to do
++ return false;
++ }
++
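++ // CAS retry idiom used throughout this class: compareAndExchange returns the witness value, so
++ // curr == (curr = compareAndExchange(...)) both detects success and refreshes curr for the next
++ // attempt, applying linear backoff under contention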
++ int failures = 0;
++ for (;;) {
++ final long update = (curr & ~PENDING_STATUS_MASK) | ((long)toSet.ordinal() << 32);
++ if (curr == (curr = (long)CHUNK_STATUS_HANDLE.compareAndExchange((NewChunkHolder)this, curr, update))) {
++ return true;
++ }
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ protected final boolean onNeighbourChange(final long bitsetBefore, final long bitsetAfter) {
++ FullChunkStatus oldState = getStatusForBitset(bitsetBefore);
++ FullChunkStatus newState = getStatusForBitset(bitsetAfter);
++ final FullChunkStatus currStateTicketLevel = ChunkLevel.fullStatus(this.oldTicketLevel);
++ if (oldState.isOrAfter(currStateTicketLevel)) {
++ oldState = currStateTicketLevel;
++ }
++ if (newState.isOrAfter(currStateTicketLevel)) {
++ newState = currStateTicketLevel;
++ }
++ // preserve border request after full status complete, as it does not set anything in the bitset
++ if (newState == FullChunkStatus.INACCESSIBLE && currStateTicketLevel.isOrAfter(FullChunkStatus.FULL) && this.currentGenStatus == ChunkStatus.FULL) {
++ newState = FullChunkStatus.FULL;
++ }
++
++ if (oldState == newState) {
++ return false;
++ }
++
++ int failures = 0;
++ for (long curr = (long)CHUNK_STATUS_HANDLE.getVolatile((NewChunkHolder)this);;) {
++ final long update = (curr & ~PENDING_STATUS_MASK) | ((long)newState.ordinal() << 32);
++ if (curr == (curr = (long)CHUNK_STATUS_HANDLE.compareAndExchange((NewChunkHolder)this, curr, update))) {
++ return true;
++ }
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ private boolean queueBorderFullStatus(final boolean loaded, final List<NewChunkHolder> changedFullStatus) {
++ final FullChunkStatus toStatus = loaded ? FullChunkStatus.FULL : FullChunkStatus.INACCESSIBLE;
++
++ int failures = 0;
++ for (long curr = (long)CHUNK_STATUS_HANDLE.getVolatile((NewChunkHolder)this);;) {
++ final FullChunkStatus currPending = getPendingChunkStatus(curr);
++ if (loaded && currPending != FullChunkStatus.INACCESSIBLE) {
++ throw new IllegalStateException("Expected " + FullChunkStatus.INACCESSIBLE + " for pending, but got " + currPending);
++ }
++
++ final long update = (curr & ~PENDING_STATUS_MASK) | ((long)toStatus.ordinal() << 32);
++ if (curr == (curr = (long)CHUNK_STATUS_HANDLE.compareAndExchange((NewChunkHolder)this, curr, update))) {
++ if ((int)(update) != (int)(update >>> 32)) {
++ changedFullStatus.add(this);
++ return true;
++ }
++ return false;
++ }
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ // only call on main thread, must hold ticket level and scheduling lock
++ private void onFullChunkLoadChange(final boolean loaded, final List<NewChunkHolder> changedFullStatus) {
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ, NEIGHBOUR_RADIUS);
++ try {
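++ // from the perspective of a neighbour at offset (dx, dz), this chunk sits at offset (-dx, -dz),
++ // hence the negated coordinates passed below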
++ for (int dz = -NEIGHBOUR_RADIUS; dz <= NEIGHBOUR_RADIUS; ++dz) {
++ for (int dx = -NEIGHBOUR_RADIUS; dx <= NEIGHBOUR_RADIUS; ++dx) {
++ final NewChunkHolder holder = (dx | dz) == 0 ? this : this.scheduler.chunkHolderManager.getChunkHolder(dx + this.chunkX, dz + this.chunkZ);
++ if (loaded) {
++ if (holder.setNeighbourFullLoaded(-dx, -dz)) {
++ changedFullStatus.add(holder);
++ }
++ } else {
++ if (holder != null && holder.setNeighbourFullUnloaded(-dx, -dz)) {
++ changedFullStatus.add(holder);
++ }
++ }
++ }
++ }
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++ }
++
++ private FullChunkStatus updateCurrentState(final FullChunkStatus to) {
++ int failures = 0;
++ for (long curr = (long)CHUNK_STATUS_HANDLE.getVolatile((NewChunkHolder)this);;) {
++ final long update = (curr & PENDING_STATUS_MASK) | (long)to.ordinal();
++ if (curr == (curr = (long)CHUNK_STATUS_HANDLE.compareAndExchange((NewChunkHolder)this, curr, update))) {
++ return getPendingChunkStatus(curr);
++ }
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ private void changeEntityChunkStatus(final FullChunkStatus toStatus) {
++ this.world.getEntityLookup().chunkStatusChange(this.chunkX, this.chunkZ, toStatus);
++ }
++
++ private boolean processingFullStatus = false;
++
++ // only to be called on the main thread, no locks need to be held
++ public boolean handleFullStatusChange(final List<NewChunkHolder> changedFullStatus) {
++ TickThread.ensureTickThread(this.world, this.chunkX, this.chunkZ, "Cannot update full status off-main");
++
++ boolean ret = false;
++
++ if (this.processingFullStatus) {
++ // we cannot process updates recursively
++ return ret;
++ }
++
++ // note: use opaque reads for chunk status read since we need it to be atomic
++
++ // test if anything changed
++ long statusCheck = (long)CHUNK_STATUS_HANDLE.getOpaque((NewChunkHolder)this);
++ if ((int)statusCheck == (int)(statusCheck >>> 32)) {
++ // nothing changed
++ return ret;
++ }
++
++ final ChunkTaskScheduler scheduler = this.scheduler;
++ final ChunkHolderManager holderManager = scheduler.chunkHolderManager;
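++ // pin the current ticket level with a STATUS_UPGRADE ticket so that it cannot drop (and trigger an
++ // unload) while the state transitions below run; the ticket is removed in the finally block below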
++ final int ticketKeep;
++ final Long ticketId = Long.valueOf(holderManager.getNextStatusUpgradeId());
++ final ReentrantAreaLock.Node ticketLock = holderManager.ticketLockArea.lock(this.chunkX, this.chunkZ);
++ try {
++ ticketKeep = this.currentTicketLevel;
++ statusCheck = (long)CHUNK_STATUS_HANDLE.getOpaque((NewChunkHolder)this);
++ // handle race condition where ticket level and target status is updated concurrently
++ if ((int)statusCheck == (int)(statusCheck >>> 32)) {
++ // nothing changed
++ return ret;
++ }
++ holderManager.addTicketAtLevel(TicketType.STATUS_UPGRADE, CoordinateUtils.getChunkKey(this.chunkX, this.chunkZ), ticketKeep, ticketId, false);
++ } finally {
++ holderManager.ticketLockArea.unlock(ticketLock);
++ }
++
++ this.processingFullStatus = true;
++ try {
++ for (;;) {
++ final long currStateEncoded = (long)CHUNK_STATUS_HANDLE.getOpaque((NewChunkHolder)this);
++ final FullChunkStatus currState = getCurrentChunkStatus(currStateEncoded);
++ FullChunkStatus nextState = getPendingChunkStatus(currStateEncoded);
++ if (currState == nextState) {
++ if (nextState == FullChunkStatus.INACCESSIBLE) {
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
++ try {
++ this.checkUnload();
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++ }
++ break;
++ }
++
++ // chunks cannot downgrade state while status is pending a change
++ final LevelChunk chunk = (LevelChunk)this.currentChunk;
++
++ // Note: we assume that only load/unload contain plugin logic
++ // plugin logic is anything stupid enough to possibly change the chunk status while it is already
++ // being changed (i.e. during load it is possible it will try to set to full ticking)
++ // in order to allow this change, we also need this plugin logic to be contained strictly after all
++ // of the chunk system load callbacks are invoked
++ if (nextState.isOrAfter(currState)) {
++ // state upgrade
++ if (!currState.isOrAfter(FullChunkStatus.FULL) && nextState.isOrAfter(FullChunkStatus.FULL)) {
++ nextState = this.updateCurrentState(FullChunkStatus.FULL);
++ holderManager.ensureInAutosave(this);
++ chunk.pushChunkIntoLoadedMap();
++ this.changeEntityChunkStatus(FullChunkStatus.FULL);
++ chunk.onChunkLoad(this);
++ this.onFullChunkLoadChange(true, changedFullStatus);
++ this.completeFullStatusConsumers(FullChunkStatus.FULL, chunk);
++ }
++
++ if (!currState.isOrAfter(FullChunkStatus.BLOCK_TICKING) && nextState.isOrAfter(FullChunkStatus.BLOCK_TICKING)) {
++ nextState = this.updateCurrentState(FullChunkStatus.BLOCK_TICKING);
++ this.changeEntityChunkStatus(FullChunkStatus.BLOCK_TICKING);
++ chunk.onChunkTicking(this);
++ this.completeFullStatusConsumers(FullChunkStatus.BLOCK_TICKING, chunk);
++ }
++
++ if (!currState.isOrAfter(FullChunkStatus.ENTITY_TICKING) && nextState.isOrAfter(FullChunkStatus.ENTITY_TICKING)) {
++ nextState = this.updateCurrentState(FullChunkStatus.ENTITY_TICKING);
++ this.changeEntityChunkStatus(FullChunkStatus.ENTITY_TICKING);
++ chunk.onChunkEntityTicking(this);
++ this.completeFullStatusConsumers(FullChunkStatus.ENTITY_TICKING, chunk);
++ }
++ } else {
++ if (currState.isOrAfter(FullChunkStatus.ENTITY_TICKING) && !nextState.isOrAfter(FullChunkStatus.ENTITY_TICKING)) {
++ this.changeEntityChunkStatus(FullChunkStatus.BLOCK_TICKING);
++ chunk.onChunkNotEntityTicking(this);
++ nextState = this.updateCurrentState(FullChunkStatus.BLOCK_TICKING);
++ }
++
++ if (currState.isOrAfter(FullChunkStatus.BLOCK_TICKING) && !nextState.isOrAfter(FullChunkStatus.BLOCK_TICKING)) {
++ this.changeEntityChunkStatus(FullChunkStatus.FULL);
++ chunk.onChunkNotTicking(this);
++ nextState = this.updateCurrentState(FullChunkStatus.FULL);
++ }
++
++ if (currState.isOrAfter(FullChunkStatus.FULL) && !nextState.isOrAfter(FullChunkStatus.FULL)) {
++ this.onFullChunkLoadChange(false, changedFullStatus);
++ this.changeEntityChunkStatus(FullChunkStatus.INACCESSIBLE);
++ chunk.onChunkUnload(this);
++ nextState = this.updateCurrentState(FullChunkStatus.INACCESSIBLE);
++ }
++ }
++
++ ret = true;
++ }
++ } finally {
++ this.processingFullStatus = false;
++ holderManager.removeTicketAtLevel(TicketType.STATUS_UPGRADE, this.chunkX, this.chunkZ, ticketKeep, ticketId);
++ }
++
++ return ret;
++ }
++
++ // note: must hold scheduling lock
++ // returns true if an existing gen request/task could be upgraded to the target status (i.e. the caller does not need to schedule)
++ boolean upgradeGenTarget(final ChunkStatus toStatus) {
++ if (toStatus == null) {
++ throw new NullPointerException("toStatus cannot be null");
++ }
++ if (this.requestedGenStatus == null && this.generationTask == null) {
++ return false;
++ }
++ if (this.requestedGenStatus == null || !this.requestedGenStatus.isOrAfter(toStatus)) {
++ this.requestedGenStatus = toStatus;
++ }
++ return true;
++ }
++
++ public void setGenerationTarget(final ChunkStatus toStatus) {
++ this.requestedGenStatus = toStatus;
++ }
++
++ public boolean hasGenerationTask() {
++ return this.generationTask != null;
++ }
++
++ public ChunkStatus getCurrentGenStatus() {
++ return this.currentGenStatus;
++ }
++
++ public ChunkStatus getRequestedGenStatus() {
++ return this.requestedGenStatus;
++ }
++
++ private final Reference2ObjectOpenHashMap<ChunkStatus, List<Consumer<ChunkAccess>>> statusWaiters = new Reference2ObjectOpenHashMap<>();
++
++ void addStatusConsumer(final ChunkStatus status, final Consumer<ChunkAccess> consumer) {
++ this.statusWaiters.computeIfAbsent(status, (final ChunkStatus keyInMap) -> {
++ return new ArrayList<>(4);
++ }).add(consumer);
++ }
++
++ private void completeStatusConsumers(ChunkStatus status, final ChunkAccess chunk) {
++ // need to tell future statuses to complete if cancelled
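++ // (termination relies on the final status returning itself from getNextStatus(), which the
++ // status != (status = status.getNextStatus()) test below detects)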
++ do {
++ this.completeStatusConsumers0(status, chunk);
++ } while (chunk == null && status != (status = status.getNextStatus()));
++ }
++
++ private void completeStatusConsumers0(final ChunkStatus status, final ChunkAccess chunk) {
++ final List<Consumer<ChunkAccess>> consumers;
++ consumers = this.statusWaiters.remove(status);
++
++ if (consumers == null) {
++ return;
++ }
++
++ // must be scheduled to main, we do not trust the callback to not do anything stupid
++ this.scheduler.scheduleChunkTask(this.chunkX, this.chunkZ, () -> {
++ for (final Consumer<ChunkAccess> consumer : consumers) {
++ try {
++ consumer.accept(chunk);
++ } catch (final ThreadDeath thr) {
++ throw thr;
++ } catch (final Throwable thr) {
++ LOGGER.error("Failed to process chunk status callback", thr);
++ }
++ }
++ }, PrioritisedExecutor.Priority.HIGHEST);
++ }
++
++ private final Reference2ObjectOpenHashMap<FullChunkStatus, List<Consumer<LevelChunk>>> fullStatusWaiters = new Reference2ObjectOpenHashMap<>();
++
++ void addFullStatusConsumer(final FullChunkStatus status, final Consumer<LevelChunk> consumer) {
++ this.fullStatusWaiters.computeIfAbsent(status, (final FullChunkStatus keyInMap) -> {
++ return new ArrayList<>(4);
++ }).add(consumer);
++ }
++
++ private void completeFullStatusConsumers(FullChunkStatus status, final LevelChunk chunk) {
++ // need to tell future statuses to complete if cancelled
++ final FullChunkStatus max = CHUNK_STATUS_BY_ID[CHUNK_STATUS_BY_ID.length - 1];
++
++ for (;;) {
++ this.completeFullStatusConsumers0(status, chunk);
++ if (chunk != null || status == max) {
++ break;
++ }
++ status = CHUNK_STATUS_BY_ID[status.ordinal() + 1];
++ }
++ }
++
++ private void completeFullStatusConsumers0(final FullChunkStatus status, final LevelChunk chunk) {
++ final List<Consumer<LevelChunk>> consumers;
++ consumers = this.fullStatusWaiters.remove(status);
++
++ if (consumers == null) {
++ return;
++ }
++
++ // must be scheduled to main, we do not trust the callback to not do anything stupid
++ this.scheduler.scheduleChunkTask(this.chunkX, this.chunkZ, () -> {
++ for (final Consumer<LevelChunk> consumer : consumers) {
++ try {
++ consumer.accept(chunk);
++ } catch (final ThreadDeath thr) {
++ throw thr;
++ } catch (final Throwable thr) {
++ LOGGER.error("Failed to process chunk status callback", thr);
++ }
++ }
++ }, PrioritisedExecutor.Priority.HIGHEST);
++ }
++
++ // note: must hold scheduling lock
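++ // invoked when a generation task completes (newChunk == null indicates cancellation); unblocks any
++ // neighbours waiting on this chunk and reschedules if a higher status is still requested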
++ private void onChunkGenComplete(final ChunkAccess newChunk, final ChunkStatus newStatus,
++ final List<ChunkProgressionTask> scheduleList, final List<NewChunkHolder> changedLoadStatus) {
++ if (!this.neighboursBlockingGenTask.isEmpty()) {
++ throw new IllegalStateException("Cannot have neighbours blocking this gen task");
++ }
++ if (newChunk != null || (this.requestedGenStatus == null || !this.requestedGenStatus.isOrAfter(newStatus))) {
++ this.completeStatusConsumers(newStatus, newChunk);
++ }
++ // done now, clear state (must be done before scheduling new tasks)
++ this.generationTask = null;
++ this.generationTaskStatus = null;
++ if (newChunk == null) {
++ // task was cancelled
++ // should be careful as this could be called while holding the schedule lock and/or inside the
++ // ticket level update
++ // while a task may be cancelled, it is possible for it to be later re-scheduled
++ // however, because generationTask is only set to null on _completion_, the scheduler leaves
++ // the rescheduling logic to us here
++ final ChunkStatus requestedGenStatus = this.requestedGenStatus;
++ this.requestedGenStatus = null;
++ if (requestedGenStatus != null) {
++ // it looks like it has been requested, so we must reschedule
++ if (!this.neighboursWaitingForUs.isEmpty()) {
++ for (final Iterator<Reference2ObjectMap.Entry<NewChunkHolder, ChunkStatus>> iterator = this.neighboursWaitingForUs.reference2ObjectEntrySet().fastIterator(); iterator.hasNext();) {
++ final Reference2ObjectMap.Entry<NewChunkHolder, ChunkStatus> entry = iterator.next();
++
++ final NewChunkHolder chunkHolder = entry.getKey();
++ final ChunkStatus toStatus = entry.getValue();
++
++ if (!requestedGenStatus.isOrAfter(toStatus)) {
++ // if we were cancelled, we are responsible for removing the waiter
++ if (!chunkHolder.neighboursBlockingGenTask.remove(this)) {
++ throw new IllegalStateException("Corrupt state");
++ }
++ if (chunkHolder.neighboursBlockingGenTask.isEmpty()) {
++ chunkHolder.checkUnload();
++ }
++ iterator.remove();
++ continue;
++ }
++ }
++ }
++
++ // note: only after generationTask -> null, generationTaskStatus -> null, and requestedGenStatus -> null
++ this.scheduler.schedule(
++ this.chunkX, this.chunkZ, requestedGenStatus, this, scheduleList
++ );
++
++ // return, can't do anything further
++ return;
++ }
++
++ if (!this.neighboursWaitingForUs.isEmpty()) {
++ for (final NewChunkHolder chunkHolder : this.neighboursWaitingForUs.keySet()) {
++ if (!chunkHolder.neighboursBlockingGenTask.remove(this)) {
++ throw new IllegalStateException("Corrupt state");
++ }
++ if (chunkHolder.neighboursBlockingGenTask.isEmpty()) {
++ chunkHolder.checkUnload();
++ }
++ }
++ this.neighboursWaitingForUs.clear();
++ }
++ // reset priority, we have nothing left to generate to
++ this.setPriority(PrioritisedExecutor.Priority.NORMAL);
++ this.checkUnload();
++ return;
++ }
++
++ this.currentChunk = newChunk;
++ this.currentGenStatus = newStatus;
++ this.lastChunkCompletion = new ChunkCompletion(newChunk, newStatus);
++
++ final ChunkStatus requestedGenStatus = this.requestedGenStatus;
++
++ List<NewChunkHolder> needsScheduling = null;
++ boolean recalculatePriority = false;
++ for (final Iterator<Reference2ObjectMap.Entry<NewChunkHolder, ChunkStatus>> iterator
++ = this.neighboursWaitingForUs.reference2ObjectEntrySet().fastIterator(); iterator.hasNext();) {
++ final Reference2ObjectMap.Entry<NewChunkHolder, ChunkStatus> entry = iterator.next();
++ final NewChunkHolder neighbour = entry.getKey();
++ final ChunkStatus requiredStatus = entry.getValue();
++
++ if (!newStatus.isOrAfter(requiredStatus)) {
++ if (requestedGenStatus == null || !requestedGenStatus.isOrAfter(requiredStatus)) {
++ // if we're cancelled, still need to clear this map
++ if (!neighbour.neighboursBlockingGenTask.remove(this)) {
++ throw new IllegalStateException("Neighbour is not waiting for us?");
++ }
++ if (neighbour.neighboursBlockingGenTask.isEmpty()) {
++ neighbour.checkUnload();
++ }
++
++ iterator.remove();
++ }
++ continue;
++ }
++
++ // doesn't matter what isCancelled is here, we need to schedule if we can
++
++ recalculatePriority = true;
++ if (!neighbour.neighboursBlockingGenTask.remove(this)) {
++ throw new IllegalStateException("Neighbour is not waiting for us?");
++ }
++
++ if (neighbour.neighboursBlockingGenTask.isEmpty()) {
++ if (neighbour.requestedGenStatus != null) {
++ if (needsScheduling == null) {
++ needsScheduling = new ArrayList<>();
++ }
++ needsScheduling.add(neighbour);
++ } else {
++ neighbour.checkUnload();
++ }
++ }
++
++ // remove last; access to entry will throw if removed
++ iterator.remove();
++ }
++
++ if (newStatus == ChunkStatus.FULL) {
++ this.lockPriority();
++ // must use oldTicketLevel, we hold the schedule lock but not the ticket level lock
++ // however, schedule lock needs to be held for ticket level callback, so we're fine here
++ if (ChunkLevel.fullStatus(this.oldTicketLevel).isOrAfter(FullChunkStatus.FULL)) {
++ this.queueBorderFullStatus(true, changedLoadStatus);
++ }
++ }
++
++ if (recalculatePriority) {
++ this.recalculateNeighbourRequestedPriority();
++ }
++
++ if (requestedGenStatus != null && !newStatus.isOrAfter(requestedGenStatus)) {
++ this.scheduleNeighbours(needsScheduling, scheduleList);
++
++ // we need to schedule more tasks now
++ this.scheduler.schedule(
++ this.chunkX, this.chunkZ, requestedGenStatus, this, scheduleList
++ );
++ } else {
++ // we're done now
++ if (requestedGenStatus != null) {
++ this.requestedGenStatus = null;
++ }
++ // reached final stage, so stop scheduling now
++ this.setPriority(PrioritisedExecutor.Priority.NORMAL);
++ this.checkUnload();
++
++ this.scheduleNeighbours(needsScheduling, scheduleList);
++ }
++ }
++
++ private void scheduleNeighbours(final List<NewChunkHolder> needsScheduling, final List<ChunkProgressionTask> scheduleList) {
++ if (needsScheduling != null) {
++ for (int i = 0, len = needsScheduling.size(); i < len; ++i) {
++ final NewChunkHolder neighbour = needsScheduling.get(i);
++
++ this.scheduler.schedule(
++ neighbour.chunkX, neighbour.chunkZ, neighbour.requestedGenStatus, neighbour, scheduleList
++ );
++ }
++ }
++ }
++
++ public void setGenerationTask(final ChunkProgressionTask generationTask, final ChunkStatus taskStatus,
++ final List<NewChunkHolder> neighbours) {
++ if (this.generationTask != null || (this.currentGenStatus != null && this.currentGenStatus.isOrAfter(taskStatus))) {
++ throw new IllegalStateException("Currently generating or provided task is trying to generate to a level we are already at!");
++ }
++ if (this.requestedGenStatus == null || !this.requestedGenStatus.isOrAfter(taskStatus)) {
++ throw new IllegalStateException("Cannot schedule generation task when not requested");
++ }
++ this.generationTask = generationTask;
++ this.generationTaskStatus = taskStatus;
++
++ for (int i = 0, len = neighbours.size(); i < len; ++i) {
++ neighbours.get(i).addNeighbourUsingChunk();
++ }
++
++ this.checkUnload();
++
++ generationTask.onComplete((final ChunkAccess access, final Throwable thr) -> {
++ if (generationTask != this.generationTask) {
++ throw new IllegalStateException(
++ "Cannot complete generation task '" + generationTask + "' because we are waiting on '" + this.generationTask + "' instead!"
++ );
++ }
++ if (thr != null) {
++ if (this.genTaskException != null) {
++ // first one is probably the TRUE problem
++ return;
++ }
++ // don't set the generation task to null: scheduling will then not attempt to create another task, and
++ // any further scheduling usage of this chunk is automatically blocked, since it would wait forever for
++ // the failed task to complete
++ this.genTaskException = thr;
++ this.failedGenStatus = taskStatus;
++ this.genTaskFailedThread = Thread.currentThread();
++
++ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
++ "Generation task", ChunkTaskScheduler.stringIfNull(generationTask),
++ "Task to status", ChunkTaskScheduler.stringIfNull(taskStatus)
++ ), thr);
++ return;
++ }
++
++ final boolean scheduleTasks;
++ List<ChunkProgressionTask> tasks = ChunkHolderManager.getCurrentTicketUpdateScheduling();
++ if (tasks == null) {
++ scheduleTasks = true;
++ tasks = new ArrayList<>();
++ } else {
++ scheduleTasks = false;
++ // we are currently updating ticket levels, so we already hold the schedule lock
++ // this means we have to leave the ticket level update to handle the scheduling
++ }
++ final List<NewChunkHolder> changedLoadStatus = new ArrayList<>();
++ // theoretically, we could schedule a chunk at the max radius which performs another max radius access. So we need to double the radius.
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ, 2 * ChunkTaskScheduler.getMaxAccessRadius());
++ try {
++ for (int i = 0, len = neighbours.size(); i < len; ++i) {
++ neighbours.get(i).removeNeighbourUsingChunk();
++ }
++ this.onChunkGenComplete(access, taskStatus, tasks, changedLoadStatus);
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
++ }
++ this.scheduler.chunkHolderManager.addChangedStatuses(changedLoadStatus);
++
++ if (scheduleTasks) {
++ // can't hold the lock while scheduling, so we have to build the tasks and then schedule after
++ for (int i = 0, len = tasks.size(); i < len; ++i) {
++ tasks.get(i).schedule();
++ }
++ }
++ });
++ }
++
++ public PoiChunk getPoiChunk() {
++ return this.poiChunk;
++ }
++
++ public ChunkEntitySlices getEntityChunk() {
++ return this.entityChunk;
++ }
++
++ public long lastAutoSave;
++
++ public static final record SaveStat(boolean savedChunk, boolean savedEntityChunk, boolean savedPoiChunk) {}
++
++ public SaveStat save(final boolean shutdown, final boolean unloading) {
++ TickThread.ensureTickThread(this.world, this.chunkX, this.chunkZ, "Cannot save data off-main");
++
++ ChunkAccess chunk = this.getCurrentChunk();
++ PoiChunk poi = this.getPoiChunk();
++ ChunkEntitySlices entities = this.getEntityChunk();
++ boolean executedUnloadTask = false;
++
++ if (shutdown) {
++ // make sure that the async unloads complete
++ if (this.unloadState != null) {
++ // must have errored during unload
++ chunk = this.unloadState.chunk();
++ poi = this.unloadState.poiChunk();
++ entities = this.unloadState.entityChunk();
++ }
++ final UnloadTask chunkUnloadTask = this.chunkDataUnload;
++ final DelayedPrioritisedTask chunkDataUnloadTask = chunkUnloadTask == null ? null : chunkUnloadTask.task();
++ if (chunkDataUnloadTask != null) {
++ final PrioritisedExecutor.PrioritisedTask unloadTask = chunkDataUnloadTask.getTask();
++ if (unloadTask != null) {
++ executedUnloadTask = unloadTask.execute();
++ }
++ }
++ }
++
++ boolean canSaveChunk = !(chunk instanceof LevelChunk levelChunk && levelChunk.mustNotSave) &&
++ (chunk != null && ((shutdown || chunk instanceof LevelChunk) && chunk.isUnsaved()));
++ boolean canSavePOI = !(chunk instanceof LevelChunk levelChunk && levelChunk.mustNotSave) && (poi != null && poi.isDirty());
++ boolean canSaveEntities = entities != null;
++
++ try (co.aikar.timings.Timing ignored = this.world.timings.chunkSave.startTiming()) { // Paper
++ if (canSaveChunk) {
++ canSaveChunk = this.saveChunk(chunk, unloading);
++ }
++ if (canSavePOI) {
++ canSavePOI = this.savePOI(poi, unloading);
++ }
++ if (canSaveEntities) {
++ // on shutdown, we need to force transient entity chunks to save
++ canSaveEntities = this.saveEntities(entities, unloading || shutdown);
++ if (unloading || shutdown) {
++ this.lastEntityUnload = null;
++ }
++ }
++ }
++
++ return executedUnloadTask | canSaveChunk | canSaveEntities | canSavePOI ? new SaveStat(executedUnloadTask || canSaveChunk, canSaveEntities, canSavePOI) : null;
++ }
++
++ static final class AsyncChunkSerializeTask implements Runnable {
++
++ private final ServerLevel world;
++ private final ChunkAccess chunk;
++ private final ChunkSerializer.AsyncSaveData asyncSaveData;
++ private final NewChunkHolder toComplete;
++
++ public AsyncChunkSerializeTask(final ServerLevel world, final ChunkAccess chunk, final ChunkSerializer.AsyncSaveData asyncSaveData,
++ final NewChunkHolder toComplete) {
++ this.world = world;
++ this.chunk = chunk;
++ this.asyncSaveData = asyncSaveData;
++ this.toComplete = toComplete;
++ }
++
++ @Override
++ public void run() {
++ final CompoundTag toSerialize;
++ try {
++ toSerialize = ChunkSerializer.saveChunk(this.world, this.chunk, this.asyncSaveData);
++ } catch (final ThreadDeath death) {
++ throw death;
++ } catch (final Throwable throwable) {
++ LOGGER.error("Failed to asynchronously save chunk " + this.chunk.getPos() + " for world '" + this.world.getWorld().getName() + "', falling back to synchronous save", throwable);
++ this.world.chunkTaskScheduler.scheduleChunkTask(this.chunk.locX, this.chunk.locZ, () -> {
++ final CompoundTag synchronousSave;
++ try {
++ synchronousSave = ChunkSerializer.saveChunk(AsyncChunkSerializeTask.this.world, AsyncChunkSerializeTask.this.chunk, AsyncChunkSerializeTask.this.asyncSaveData);
++ } catch (final ThreadDeath death) {
++ throw death;
++ } catch (final Throwable throwable2) {
++ LOGGER.error("Failed to synchronously save chunk " + AsyncChunkSerializeTask.this.chunk.getPos() + " for world '" + AsyncChunkSerializeTask.this.world.getWorld().getName() + "', chunk data will be lost", throwable2);
++ AsyncChunkSerializeTask.this.toComplete.completeAsyncChunkDataSave(null);
++ return;
++ }
++
++ AsyncChunkSerializeTask.this.toComplete.completeAsyncChunkDataSave(synchronousSave);
++ LOGGER.info("Successfully serialized chunk " + AsyncChunkSerializeTask.this.chunk.getPos() + " for world '" + AsyncChunkSerializeTask.this.world.getWorld().getName() + "' synchronously");
++ }, PrioritisedExecutor.Priority.HIGHEST);
++ return;
++ }
++ this.toComplete.completeAsyncChunkDataSave(toSerialize);
++ }
++
++ @Override
++ public String toString() {
++ return "AsyncChunkSerializeTask{" +
++ "chunk={pos=" + this.chunk.getPos() + ",world=\"" + this.world.getWorld().getName() + "\"}" +
++ "}";
++ }
++ }
++
++ private boolean saveChunk(final ChunkAccess chunk, final boolean unloading) {
++ if (!chunk.isUnsaved()) {
++ if (unloading) {
++ this.completeAsyncChunkDataSave(null);
++ }
++ return false;
++ }
++ boolean completing = false;
++ try {
++ if (unloading) {
++ try {
++ final ChunkSerializer.AsyncSaveData asyncSaveData = ChunkSerializer.getAsyncSaveData(this.world, chunk);
++
++ final PrioritisedExecutor.PrioritisedTask task = this.scheduler.loadExecutor.createTask(new AsyncChunkSerializeTask(this.world, chunk, asyncSaveData, this));
++
++ this.chunkDataUnload.task().setTask(task);
++
++ task.queue();
++
++ chunk.setUnsaved(false);
++
++ return true;
++ } catch (final ThreadDeath death) {
++ throw death;
++ } catch (final Throwable thr) {
++ LOGGER.error("Failed to prepare async chunk data (" + this.chunkX + "," + this.chunkZ + ") in world '" + this.world.getWorld().getName() + "', falling back to synchronous save", thr);
++ // fall through to synchronous save
++ }
++ }
++
++ final CompoundTag save = ChunkSerializer.saveChunk(this.world, chunk, null);
++
++ if (unloading) {
++ completing = true;
++ this.completeAsyncChunkDataSave(save);
++ LOGGER.info("Successfully serialized chunk data (" + this.chunkX + "," + this.chunkZ + ") in world '" + this.world.getWorld().getName() + "' synchronously");
++ } else {
++ RegionFileIOThread.scheduleSave(this.world, this.chunkX, this.chunkZ, save, RegionFileIOThread.RegionFileType.CHUNK_DATA);
++ }
++ chunk.setUnsaved(false);
++ } catch (final ThreadDeath death) {
++ throw death;
++ } catch (final Throwable thr) {
++ LOGGER.error("Failed to save chunk data (" + this.chunkX + "," + this.chunkZ + ") in world '" + this.world.getWorld().getName() + "'");
++ if (unloading && !completing) {
++ this.completeAsyncChunkDataSave(null);
++ }
++ }
++
++ return true;
++ }
++
++ private boolean lastEntitySaveNull;
++ private CompoundTag lastEntityUnload;
++ private boolean saveEntities(final ChunkEntitySlices entities, final boolean unloading) {
++ try {
++ CompoundTag mergeFrom = null;
++ if (entities.isTransient()) {
++ if (!unloading) {
++ // if we're a transient chunk, we cannot save until unloading because otherwise a double save will
++ // result in double adding the entities
++ return false;
++ }
++ try {
++ mergeFrom = RegionFileIOThread.loadData(this.world, this.chunkX, this.chunkZ, RegionFileIOThread.RegionFileType.ENTITY_DATA, PrioritisedExecutor.Priority.BLOCKING);
++ } catch (final Exception ex) {
++ LOGGER.error("Cannot merge transient entities for chunk (" + this.chunkX + "," + this.chunkZ + ") in world '" + this.world.getWorld().getName() + "', data on disk will be replaced", ex);
++ }
++ }
++
++ final CompoundTag save = entities.save();
++ if (mergeFrom != null) {
++ if (save == null) {
++ // don't override the data on disk with nothing
++ return false;
++ } else {
++ EntityStorage.copyEntities(mergeFrom, save);
++ }
++ }
++ if (save == null && this.lastEntitySaveNull) {
++ return false;
++ }
++
++ RegionFileIOThread.scheduleSave(this.world, this.chunkX, this.chunkZ, save, RegionFileIOThread.RegionFileType.ENTITY_DATA);
++ this.lastEntitySaveNull = save == null;
++ if (unloading) {
++ this.lastEntityUnload = save;
++ }
++ } catch (final ThreadDeath death) {
++ throw death;
++ } catch (final Throwable thr) {
++ LOGGER.error("Failed to save entity data (" + this.chunkX + "," + this.chunkZ + ") in world '" + this.world.getWorld().getName() + "'");
++ }
++
++ return true;
++ }
++
++ private boolean lastPoiSaveNull;
++ private boolean savePOI(final PoiChunk poi, final boolean unloading) {
++ try {
++ final CompoundTag save = poi.save();
++ poi.setDirty(false);
++ if (save == null && this.lastPoiSaveNull) {
++ if (unloading) {
++ this.poiDataUnload.completable().complete(null);
++ }
++ return false;
++ }
++
++ RegionFileIOThread.scheduleSave(this.world, this.chunkX, this.chunkZ, save, RegionFileIOThread.RegionFileType.POI_DATA);
++ this.lastPoiSaveNull = save == null;
++ if (unloading) {
++ this.poiDataUnload.completable().complete(save);
++ }
++ } catch (final ThreadDeath death) {
++ throw death;
++ } catch (final Throwable thr) {
++ LOGGER.error("Failed to save poi data (" + this.chunkX + "," + this.chunkZ + ") in world '" + this.world.getWorld().getName() + "'");
++ }
++
++ return true;
++ }
++
++ @Override
++ public String toString() {
++ final ChunkCompletion lastCompletion = this.lastChunkCompletion;
++ final ChunkEntitySlices entityChunk = this.entityChunk;
++ final long chunkStatus = this.chunkStatus;
++ final int fullChunkStatus = (int)chunkStatus;
++ final int pendingChunkStatus = (int)(chunkStatus >>> 32);
++ final FullChunkStatus currentFullStatus = fullChunkStatus < 0 || fullChunkStatus >= CHUNK_STATUS_BY_ID.length ? null : CHUNK_STATUS_BY_ID[fullChunkStatus];
++ final FullChunkStatus pendingFullStatus = pendingChunkStatus < 0 || pendingChunkStatus >= CHUNK_STATUS_BY_ID.length ? null : CHUNK_STATUS_BY_ID[pendingChunkStatus];
++ return "NewChunkHolder{" +
++ "world=" + this.world.getWorld().getName() +
++ ", chunkX=" + this.chunkX +
++ ", chunkZ=" + this.chunkZ +
++ ", entityChunkFromDisk=" + (entityChunk != null && !entityChunk.isTransient()) +
++ ", lastChunkCompletion={chunk_class=" + (lastCompletion == null || lastCompletion.chunk() == null ? "null" : lastCompletion.chunk().getClass().getName()) + ",status=" + (lastCompletion == null ? "null" : lastCompletion.genStatus()) + "}" +
++ ", currentGenStatus=" + this.currentGenStatus +
++ ", requestedGenStatus=" + this.requestedGenStatus +
++ ", generationTask=" + this.generationTask +
++ ", generationTaskStatus=" + this.generationTaskStatus +
++ ", priority=" + this.priority +
++ ", priorityLocked=" + this.priorityLocked +
++ ", neighbourRequestedPriority=" + this.neighbourRequestedPriority +
++ ", effective_priority=" + this.getEffectivePriority() +
++ ", oldTicketLevel=" + this.oldTicketLevel +
++ ", currentTicketLevel=" + this.currentTicketLevel +
++ ", totalNeighboursUsingThisChunk=" + this.totalNeighboursUsingThisChunk +
++ ", fullNeighbourChunksLoadedBitset=" + this.fullNeighbourChunksLoadedBitset +
++ ", chunkStatusRaw=" + chunkStatus +
++ ", currentChunkStatus=" + currentFullStatus +
++ ", pendingChunkStatus=" + pendingFullStatus +
++ ", is_unload_safe=" + this.isSafeToUnload() +
++ ", killed=" + this.killed +
++ '}';
++ }
++
++ private static JsonElement serializeCompletable(final Completable<?> completable) {
++ if (completable == null) {
++ return new JsonPrimitive("null");
++ }
++
++ final JsonObject ret = new JsonObject();
++ final boolean isCompleted = completable.isCompleted();
++ ret.addProperty("completed", Boolean.valueOf(isCompleted));
++
++ if (isCompleted) {
++ ret.addProperty("completed_exceptionally", Boolean.valueOf(completable.getThrowable() != null));
++ }
++
++ return ret;
++ }
++
++ // holds ticket and scheduling lock
++ public JsonObject getDebugJson() {
++ final JsonObject ret = new JsonObject();
++
++ final ChunkCompletion lastCompletion = this.lastChunkCompletion;
++ final ChunkEntitySlices slices = this.entityChunk;
++ final PoiChunk poiChunk = this.poiChunk;
++
++ ret.addProperty("chunkX", Integer.valueOf(this.chunkX));
++ ret.addProperty("chunkZ", Integer.valueOf(this.chunkZ));
++ ret.addProperty("entity_chunk", slices == null ? "null" : "transient=" + slices.isTransient());
++ ret.addProperty("poi_chunk", "null=" + (poiChunk == null));
++ ret.addProperty("completed_chunk_class", lastCompletion == null ? "null" : lastCompletion.chunk().getClass().getName());
++ ret.addProperty("completed_gen_status", lastCompletion == null ? "null" : lastCompletion.genStatus().toString());
++ ret.addProperty("priority", Objects.toString(this.priority));
++ ret.addProperty("neighbour_requested_priority", Objects.toString(this.neighbourRequestedPriority));
++ ret.addProperty("generation_task", Objects.toString(this.generationTask));
++ ret.addProperty("is_safe_unload", Objects.toString(this.isSafeToUnload()));
++ ret.addProperty("old_ticket_level", Integer.valueOf(this.oldTicketLevel));
++ ret.addProperty("current_ticket_level", Integer.valueOf(this.currentTicketLevel));
++ ret.addProperty("neighbours_using_chunk", Integer.valueOf(this.totalNeighboursUsingThisChunk));
++
++ final JsonObject neighbourWaitState = new JsonObject();
++ ret.add("neighbour_state", neighbourWaitState);
++
++ final JsonArray blockingGenNeighbours = new JsonArray();
++ neighbourWaitState.add("blocking_gen_task", blockingGenNeighbours);
++ for (final NewChunkHolder blockingGenNeighbour : this.neighboursBlockingGenTask) {
++ final JsonObject neighbour = new JsonObject();
++ blockingGenNeighbours.add(neighbour);
++
++ neighbour.addProperty("chunkX", Integer.valueOf(blockingGenNeighbour.chunkX));
++ neighbour.addProperty("chunkZ", Integer.valueOf(blockingGenNeighbour.chunkZ));
++ }
++
++ final JsonArray neighboursWaitingForUs = new JsonArray();
++ neighbourWaitState.add("neighbours_waiting_on_us", neighboursWaitingForUs);
++ for (final Reference2ObjectMap.Entry<NewChunkHolder, ChunkStatus> entry : this.neighboursWaitingForUs.reference2ObjectEntrySet()) {
++ final NewChunkHolder holder = entry.getKey();
++ final ChunkStatus status = entry.getValue();
++
++ final JsonObject neighbour = new JsonObject();
++ neighboursWaitingForUs.add(neighbour);
++
++ neighbour.addProperty("chunkX", Integer.valueOf(holder.chunkX));
++ neighbour.addProperty("chunkZ", Integer.valueOf(holder.chunkZ));
++ neighbour.addProperty("waiting_for", Objects.toString(status));
++ }
++
++ ret.addProperty("fullchunkstatus", Objects.toString(this.getChunkStatus()));
++ ret.addProperty("fullchunkstatus_raw", Long.valueOf(this.chunkStatus));
++ ret.addProperty("generation_task", Objects.toString(this.generationTask));
++ ret.addProperty("requested_generation", Objects.toString(this.requestedGenStatus));
++ ret.addProperty("has_entity_load_task", Boolean.valueOf(this.entityDataLoadTask != null));
++ ret.addProperty("has_poi_load_task", Boolean.valueOf(this.poiDataLoadTask != null));
++
++ final UnloadTask entityDataUnload = this.entityDataUnload;
++ final UnloadTask poiDataUnload = this.poiDataUnload;
++ final UnloadTask chunkDataUnload = this.chunkDataUnload;
++
++ ret.add("entity_unload_completable", serializeCompletable(entityDataUnload == null ? null : entityDataUnload.completable()));
++ ret.add("poi_unload_completable", serializeCompletable(poiDataUnload == null ? null : poiDataUnload.completable()));
++ ret.add("chunk_unload_completable", serializeCompletable(chunkDataUnload == null ? null : chunkDataUnload.completable()));
++
++ final DelayedPrioritisedTask unloadTask = chunkDataUnload == null ? null : chunkDataUnload.task();
++ if (unloadTask == null) {
++ ret.addProperty("unload_task_priority", "null");
++ ret.addProperty("unload_task_priority_raw", "null");
++ } else {
++ ret.addProperty("unload_task_priority", Objects.toString(unloadTask.getPriority()));
++ ret.addProperty("unload_task_priority_raw", Integer.valueOf(unloadTask.getPriorityInternal()));
++ }
++
++ ret.addProperty("killed", Boolean.valueOf(this.killed));
++
++ return ret;
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/PriorityHolder.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/PriorityHolder.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..b4c56bf12dc8dd17452210ece4fd67411cc6b2fd
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/PriorityHolder.java
+@@ -0,0 +1,215 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
++import java.lang.invoke.VarHandle;
++
++public abstract class PriorityHolder {
++
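++ // tracks a task's priority through its lifecycle (unscheduled -> scheduled -> executed/cancelled);
++ // before schedule() is called the priority lives in this holder's field, afterwards priority changes
++ // are forwarded to the concrete task via the *Scheduled methods (summary comment; inferred from the
++ // logic below)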
++ protected volatile int priority;
++ protected static final VarHandle PRIORITY_HANDLE = ConcurrentUtil.getVarHandle(PriorityHolder.class, "priority", int.class);
++
++ protected static final int PRIORITY_SCHEDULED = Integer.MIN_VALUE >>> 0;
++ protected static final int PRIORITY_EXECUTED = Integer.MIN_VALUE >>> 1;
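++ // the two high bits of the priority field are state flags, the low bits hold the raw priority value:
++ // bit 31 (PRIORITY_SCHEDULED): scheduleTask() has been invoked
++ // bit 30 (PRIORITY_EXECUTED): the task has executed or been cancelled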
++
++ protected final int getPriorityVolatile() {
++ return (int)PRIORITY_HANDLE.getVolatile((PriorityHolder)this);
++ }
++
++ protected final int compareAndExchangePriorityVolatile(final int expect, final int update) {
++ return (int)PRIORITY_HANDLE.compareAndExchange((PriorityHolder)this, (int)expect, (int)update);
++ }
++
++ protected final int getAndOrPriorityVolatile(final int val) {
++ return (int)PRIORITY_HANDLE.getAndBitwiseOr((PriorityHolder)this, (int)val);
++ }
++
++ protected final void setPriorityPlain(final int val) {
++ PRIORITY_HANDLE.set((PriorityHolder)this, (int)val);
++ }
++
++ protected PriorityHolder(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++ this.setPriorityPlain(priority.priority);
++ }
++
++ // used only for debug json
++ public boolean isScheduled() {
++ return (this.getPriorityVolatile() & PRIORITY_SCHEDULED) != 0;
++ }
++
++ // returns false if cancelled
++ protected boolean markExecuting() {
++ return (this.getAndOrPriorityVolatile(PRIORITY_EXECUTED) & PRIORITY_EXECUTED) == 0;
++ }
++
++ protected boolean isMarkedExecuted() {
++ return (this.getPriorityVolatile() & PRIORITY_EXECUTED) != 0;
++ }
++
++ public void cancel() {
++ if ((this.getAndOrPriorityVolatile(PRIORITY_EXECUTED) & PRIORITY_EXECUTED) != 0) {
++ // already executed or cancelled
++ return;
++ }
++ this.cancelScheduled();
++ }
++
++ public void schedule() {
++ int priority = this.getPriorityVolatile();
++
++ if ((priority & PRIORITY_SCHEDULED) != 0) {
++ throw new IllegalStateException("schedule() called twice");
++ }
++
++ if ((priority & PRIORITY_EXECUTED) != 0) {
++ // cancelled
++ return;
++ }
++
++ this.scheduleTask(PrioritisedExecutor.Priority.getPriority(priority));
++
++ int failures = 0;
++ for (;;) {
++ if (priority == (priority = this.compareAndExchangePriorityVolatile(priority, priority | PRIORITY_SCHEDULED))) {
++ return;
++ }
++
++ if ((priority & PRIORITY_SCHEDULED) != 0) {
++ throw new IllegalStateException("schedule() called twice");
++ }
++
++ if ((priority & PRIORITY_EXECUTED) != 0) {
++ // cancelled or executed
++ return;
++ }
++
++ this.setPriorityScheduled(PrioritisedExecutor.Priority.getPriority(priority));
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ public final PrioritisedExecutor.Priority getPriority() {
++ final int ret = this.getPriorityVolatile();
++ if ((ret & PRIORITY_EXECUTED) != 0) {
++ return PrioritisedExecutor.Priority.COMPLETING;
++ }
++ if ((ret & PRIORITY_SCHEDULED) != 0) {
++ return this.getScheduledPriority();
++ }
++ return PrioritisedExecutor.Priority.getPriority(ret);
++ }
++
++ public final void lowerPriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++
++ int failures = 0;
++ for (int curr = this.getPriorityVolatile();;) {
++ if ((curr & PRIORITY_EXECUTED) != 0) {
++ return;
++ }
++
++ if ((curr & PRIORITY_SCHEDULED) != 0) {
++ this.lowerPriorityScheduled(priority);
++ return;
++ }
++
++ if (!priority.isLowerPriority(curr)) {
++ return;
++ }
++
++ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority.priority))) {
++ return;
++ }
++
++ // failed, retry
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ public final void setPriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++
++ int failures = 0;
++ for (int curr = this.getPriorityVolatile();;) {
++ if ((curr & PRIORITY_EXECUTED) != 0) {
++ return;
++ }
++
++ if ((curr & PRIORITY_SCHEDULED) != 0) {
++ this.setPriorityScheduled(priority);
++ return;
++ }
++
++ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority.priority))) {
++ return;
++ }
++
++ // failed, retry
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ public final void raisePriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++
++ int failures = 0;
++ for (int curr = this.getPriorityVolatile();;) {
++ if ((curr & PRIORITY_EXECUTED) != 0) {
++ return;
++ }
++
++ if ((curr & PRIORITY_SCHEDULED) != 0) {
++ this.raisePriorityScheduled(priority);
++ return;
++ }
++
++ if (!priority.isHigherPriority(curr)) {
++ return;
++ }
++
++ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority.priority))) {
++ return;
++ }
++
++ // failed, retry
++
++ ++failures;
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++ }
++ }
++
++ protected abstract void cancelScheduled();
++
++ protected abstract PrioritisedExecutor.Priority getScheduledPriority();
++
++ protected abstract void scheduleTask(final PrioritisedExecutor.Priority priority);
++
++ protected abstract void lowerPriorityScheduled(final PrioritisedExecutor.Priority priority);
++
++ protected abstract void setPriorityScheduled(final PrioritisedExecutor.Priority priority);
++
++ protected abstract void raisePriorityScheduled(final PrioritisedExecutor.Priority priority);
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ThreadedTicketLevelPropagator.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ThreadedTicketLevelPropagator.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..287240ed3b440f2f5733c368416e4276f626405d
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ThreadedTicketLevelPropagator.java
+@@ -0,0 +1,1477 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
++import ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock;
++import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
++import it.unimi.dsi.fastutil.HashCommon;
++import it.unimi.dsi.fastutil.longs.Long2ByteLinkedOpenHashMap;
++import it.unimi.dsi.fastutil.shorts.Short2ByteLinkedOpenHashMap;
++import it.unimi.dsi.fastutil.shorts.Short2ByteMap;
++import it.unimi.dsi.fastutil.shorts.ShortOpenHashSet;
++import java.lang.invoke.VarHandle;
++import java.util.ArrayDeque;
++import java.util.Arrays;
++import java.util.Iterator;
++import java.util.List;
++import java.util.concurrent.ConcurrentHashMap;
++import java.util.concurrent.locks.LockSupport;
++
++public abstract class ThreadedTicketLevelPropagator {
++
++ // sections are 64 in length
++ public static final int SECTION_SHIFT = 6;
++ public static final int SECTION_SIZE = 1 << SECTION_SHIFT;
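++ // a section covers a 64x64 grid of positions; e.g. posX = 130 maps to sectionX = 130 >> 6 = 2 with
++ // local offset 130 & 63 = 2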
++ private static final int LEVEL_BITS = SECTION_SHIFT;
++ private static final int LEVEL_COUNT = 1 << LEVEL_BITS;
++ private static final int MIN_SOURCE_LEVEL = 1;
++ // we limit the max source to 62 because the depropagation code _must_ attempt to depropagate
++ // a 1 level to 0; and if a source was 63 then it may cross more than 2 sections in depropagation
++ private static final int MAX_SOURCE_LEVEL = 62;
++
++ private final UpdateQueue updateQueue;
++ private final ConcurrentHashMap<Coordinate, Section> sections = new ConcurrentHashMap<>();
++
++ public ThreadedTicketLevelPropagator() {
++ this.updateQueue = new UpdateQueue();
++ }
++
++ // must hold ticket lock for:
++ // (posX & ~(SECTION_SIZE - 1), posZ & ~(SECTION_SIZE - 1)) to (posX | (SECTION_SIZE - 1), posZ | (SECTION_SIZE - 1))
++ public void setSource(final int posX, final int posZ, final int to) {
++ if (to < 1 || to > MAX_SOURCE_LEVEL) {
++ throw new IllegalArgumentException("Source: " + to);
++ }
++
++ final int sectionX = posX >> SECTION_SHIFT;
++ final int sectionZ = posZ >> SECTION_SHIFT;
++
++ final Coordinate coordinate = new Coordinate(sectionX, sectionZ);
++ Section section = this.sections.get(coordinate);
++ if (section == null) {
++ if (null != this.sections.putIfAbsent(coordinate, section = new Section(sectionX, sectionZ))) {
++ throw new IllegalStateException("Race condition while creating new section");
++ }
++ }
++
++ final int localIdx = (posX & (SECTION_SIZE - 1)) | ((posZ & (SECTION_SIZE - 1)) << SECTION_SHIFT);
++ final short sLocalIdx = (short)localIdx;
++
++ final short sourceAndLevel = section.levels[localIdx];
++ final int currentSource = (sourceAndLevel >>> 8) & 0xFF;
++
++ if (currentSource == to) {
++ // nothing to do
++ // make sure to kill the current update, if any
++ section.queuedSources.replace(sLocalIdx, (byte)to);
++ return;
++ }
++
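++        // only queue a section update on the first queued source change (the empty -> non-empty transition);
++        // subsequent changes merge into queuedSources and are drained by the single queued update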
++ if (section.queuedSources.put(sLocalIdx, (byte)to) == Section.NO_QUEUED_UPDATE && section.queuedSources.size() == 1) {
++ this.queueSectionUpdate(section);
++ }
++ }
++
++ // must hold ticket lock for:
++ // (posX & ~(SECTION_SIZE - 1), posZ & ~(SECTION_SIZE - 1)) to (posX | (SECTION_SIZE - 1), posZ | (SECTION_SIZE - 1))
++ public void removeSource(final int posX, final int posZ) {
++ final int sectionX = posX >> SECTION_SHIFT;
++ final int sectionZ = posZ >> SECTION_SHIFT;
++
++ final Coordinate coordinate = new Coordinate(sectionX, sectionZ);
++ final Section section = this.sections.get(coordinate);
++
++ if (section == null) {
++ return;
++ }
++
++ final int localIdx = (posX & (SECTION_SIZE - 1)) | ((posZ & (SECTION_SIZE - 1)) << SECTION_SHIFT);
++ final short sLocalIdx = (short)localIdx;
++
++ final int currentSource = (section.levels[localIdx] >>> 8) & 0xFF;
++
++ if (currentSource == 0) {
++ // we use replace here so that we do not possibly multi-queue a section for an update
++ section.queuedSources.replace(sLocalIdx, (byte)0);
++ return;
++ }
++
++ if (section.queuedSources.put(sLocalIdx, (byte)0) == Section.NO_QUEUED_UPDATE && section.queuedSources.size() == 1) {
++ this.queueSectionUpdate(section);
++ }
++ }
++
++ private void queueSectionUpdate(final Section section) {
++ this.updateQueue.append(new UpdateQueue.UpdateQueueNode(section, null));
++ }
++
++ public boolean hasPendingUpdates() {
++ return !this.updateQueue.isEmpty();
++ }
++
++ // holds ticket lock for every chunk section represented by any position in the key set
++ // updates is modifiable and passed to processSchedulingUpdates after this call
++ protected abstract void processLevelUpdates(final Long2ByteLinkedOpenHashMap updates);
++
++ // holds ticket lock for every chunk section represented by any position in the key set
++ // holds scheduling lock in max access radius for every position held by the ticket lock
++ // updates is cleared after this call
++ protected abstract void processSchedulingUpdates(final Long2ByteLinkedOpenHashMap updates, final List<ChunkProgressionTask> scheduledTasks,
++ final List<NewChunkHolder> changedFullStatus);
++
++ // must hold ticket lock for every position in the sections in one radius around sectionX,sectionZ
++ public boolean performUpdate(final int sectionX, final int sectionZ, final ReentrantAreaLock schedulingLock,
++ final List<ChunkProgressionTask> scheduledTasks, final List<NewChunkHolder> changedFullStatus) {
++ if (!this.hasPendingUpdates()) {
++ return false;
++ }
++
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++ final Section section = this.sections.get(coordinate);
++
++ if (section == null || section.queuedSources.isEmpty()) {
++ // no section or no updates
++ return false;
++ }
++
++ final Propagator propagator = Propagator.acquirePropagator();
++ final boolean ret = this.performUpdate(section, null, propagator,
++ null, schedulingLock, scheduledTasks, changedFullStatus
++ );
++ Propagator.returnPropagator(propagator);
++ return ret;
++ }
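++
++    // usage sketch (assumed caller state, for illustration): with the ticket lock already held for the
++    // surrounding sections, a caller can process this section's queued source changes directly:
++    //   propagator.performUpdate(secX, secZ, schedulingLock, scheduledTasks, changedFullStatus);
++    // rather than waiting on the global update queue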
++
++ private boolean performUpdate(final Section section, final UpdateQueue.UpdateQueueNode node, final Propagator propagator,
++ final ReentrantAreaLock ticketLock, final ReentrantAreaLock schedulingLock,
++ final List<ChunkProgressionTask> scheduledTasks, final List<NewChunkHolder> changedFullStatus) {
++ final int sectionX = section.sectionX;
++ final int sectionZ = section.sectionZ;
++
++ final int rad1MinX = (sectionX - 1) << SECTION_SHIFT;
++ final int rad1MinZ = (sectionZ - 1) << SECTION_SHIFT;
++ final int rad1MaxX = ((sectionX + 1) << SECTION_SHIFT) | (SECTION_SIZE - 1);
++ final int rad1MaxZ = ((sectionZ + 1) << SECTION_SHIFT) | (SECTION_SIZE - 1);
++
++ // set up encode offset first as we need to queue level changes _before_
++ propagator.setupEncodeOffset(sectionX, sectionZ);
++
++ final int coordinateOffset = propagator.coordinateOffset;
++
++ final ReentrantAreaLock.Node ticketNode = ticketLock == null ? null : ticketLock.lock(rad1MinX, rad1MinZ, rad1MaxX, rad1MaxZ);
++ final boolean ret;
++ try {
++ // first, check if this update was stolen
++ if (section != this.sections.get(new Coordinate(sectionX, sectionZ))) {
++ // occurs when a stolen update deletes this section
++ // it is possible that another update is scheduled, but that one will have the correct section
++ if (node != null) {
++ this.updateQueue.remove(node);
++ }
++ return false;
++ }
++
++ final int oldSourceSize = section.sources.size();
++
++ // process pending sources
++ for (final Iterator<Short2ByteMap.Entry> iterator = section.queuedSources.short2ByteEntrySet().fastIterator(); iterator.hasNext();) {
++ final Short2ByteMap.Entry entry = iterator.next();
++ final int pos = (int)entry.getShortKey();
++ final int posX = (pos & (SECTION_SIZE - 1)) | (sectionX << SECTION_SHIFT);
++ final int posZ = ((pos >> SECTION_SHIFT) & (SECTION_SIZE - 1)) | (sectionZ << SECTION_SHIFT);
++ final int newSource = (int)entry.getByteValue();
++
++ final short currentEncoded = section.levels[pos];
++ final int currLevel = currentEncoded & 0xFF;
++ final int prevSource = (currentEncoded >>> 8) & 0xFF;
++
++ if (prevSource == newSource) {
++ // nothing changed
++ continue;
++ }
++
++ if ((prevSource < currLevel && newSource <= currLevel) || newSource == currLevel) {
++ // just update the source, don't need to propagate change
++ section.levels[pos] = (short)(currLevel | (newSource << 8));
++ // level is unchanged, don't add to changed positions
++ } else {
++ // set current level and current source to new source
++ section.levels[pos] = (short)(newSource | (newSource << 8));
++ // must add to updated positions in case this is final
++ propagator.updatedPositions.put(Coordinate.key(posX, posZ), (byte)newSource);
++ if (newSource != 0) {
++ // queue increase with new source level
++ propagator.appendToIncreaseQueue(
++ ((long)(posX + (posZ << Propagator.COORDINATE_BITS) + coordinateOffset) & ((1L << (Propagator.COORDINATE_BITS + Propagator.COORDINATE_BITS)) - 1)) |
++ ((newSource & (LEVEL_COUNT - 1L)) << (Propagator.COORDINATE_BITS + Propagator.COORDINATE_BITS)) |
++ (Propagator.ALL_DIRECTIONS_BITSET << (Propagator.COORDINATE_BITS + Propagator.COORDINATE_BITS + LEVEL_BITS))
++ );
++ }
++ // queue decrease with previous level
++ if (newSource < currLevel) {
++ propagator.appendToDecreaseQueue(
++ ((long)(posX + (posZ << Propagator.COORDINATE_BITS) + coordinateOffset) & ((1L << (Propagator.COORDINATE_BITS + Propagator.COORDINATE_BITS)) - 1)) |
++ ((currLevel & (LEVEL_COUNT - 1L)) << (Propagator.COORDINATE_BITS + Propagator.COORDINATE_BITS)) |
++ (Propagator.ALL_DIRECTIONS_BITSET << (Propagator.COORDINATE_BITS + Propagator.COORDINATE_BITS + LEVEL_BITS))
++ );
++ }
++ }
++
++ if (newSource == 0) {
++ // prevSource != newSource, so we are removing this source
++ section.sources.remove((short)pos);
++ } else if (prevSource == 0) {
++ // prevSource != newSource, so we are adding this source
++ section.sources.add((short)pos);
++ }
++ }
++
++ section.queuedSources.clear();
++
++ final int newSourceSize = section.sources.size();
++
++ if (oldSourceSize == 0 && newSourceSize != 0) {
++ // need to make sure the sections in 1 radius are initialised
++ for (int dz = -1; dz <= 1; ++dz) {
++ for (int dx = -1; dx <= 1; ++dx) {
++ if ((dx | dz) == 0) {
++ continue;
++ }
++ final int offX = dx + sectionX;
++ final int offZ = dz + sectionZ;
++ final Coordinate coordinate = new Coordinate(offX, offZ);
++ final Section neighbour = this.sections.computeIfAbsent(coordinate, (final Coordinate keyInMap) -> {
++ return new Section(Coordinate.x(keyInMap.key), Coordinate.z(keyInMap.key));
++ });
++
++ // increase ref count
++ ++neighbour.oneRadNeighboursWithSources;
++ if (neighbour.oneRadNeighboursWithSources <= 0 || neighbour.oneRadNeighboursWithSources > 8) {
++ throw new IllegalStateException(Integer.toString(neighbour.oneRadNeighboursWithSources));
++ }
++ }
++ }
++ }
++
++ if (propagator.hasUpdates()) {
++ propagator.setupCaches(this, sectionX, sectionZ, 1);
++ propagator.performDecrease();
++ // don't need try-finally, as any exception will cause the propagator to not be returned
++ propagator.destroyCaches();
++ }
++
++ if (newSourceSize == 0) {
++ final boolean decrementRef = oldSourceSize != 0;
++ // check for section de-init
++ for (int dz = -1; dz <= 1; ++dz) {
++ for (int dx = -1; dx <= 1; ++dx) {
++ final int offX = dx + sectionX;
++ final int offZ = dz + sectionZ;
++ final Coordinate coordinate = new Coordinate(offX, offZ);
++ final Section neighbour = this.sections.get(coordinate);
++
++ if (neighbour == null) {
++ if (oldSourceSize == 0 && (dx | dz) != 0) {
++                            // since we don't have sources, this section is allowed to be null
++ continue;
++ }
++                        throw new IllegalStateException("Missing initialised neighbour section at (" + offX + "," + offZ + ")");
++ }
++
++ if (decrementRef && (dx | dz) != 0) {
++ // decrease ref count, but only for neighbours
++ --neighbour.oneRadNeighboursWithSources;
++ }
++
++ // we need to check the current section for de-init as well
++ if (neighbour.oneRadNeighboursWithSources == 0) {
++ if (neighbour.queuedSources.isEmpty() && neighbour.sources.isEmpty()) {
++ // need to de-init
++ this.sections.remove(coordinate);
++ } // else: neighbour is queued for an update, and it will de-init itself
++ } else if (neighbour.oneRadNeighboursWithSources < 0 || neighbour.oneRadNeighboursWithSources > 8) {
++ throw new IllegalStateException(Integer.toString(neighbour.oneRadNeighboursWithSources));
++ }
++ }
++ }
++ }
++
++ ret = !propagator.updatedPositions.isEmpty();
++
++ if (ret) {
++ this.processLevelUpdates(propagator.updatedPositions);
++
++ if (!propagator.updatedPositions.isEmpty()) {
++ // now we can actually update the ticket levels in the chunk holders
++ final int maxScheduleRadius = 2 * ChunkTaskScheduler.getMaxAccessRadius();
++
++ // allow the chunkholders to process ticket level updates without needing to acquire the schedule lock every time
++ final ReentrantAreaLock.Node schedulingNode = schedulingLock.lock(
++ rad1MinX - maxScheduleRadius, rad1MinZ - maxScheduleRadius,
++ rad1MaxX + maxScheduleRadius, rad1MaxZ + maxScheduleRadius
++ );
++ try {
++ this.processSchedulingUpdates(propagator.updatedPositions, scheduledTasks, changedFullStatus);
++ } finally {
++ schedulingLock.unlock(schedulingNode);
++ }
++ }
++
++ propagator.updatedPositions.clear();
++ }
++ } finally {
++ if (ticketLock != null) {
++ ticketLock.unlock(ticketNode);
++ }
++ }
++
++ // finished
++ if (node != null) {
++ this.updateQueue.remove(node);
++ }
++
++ return ret;
++ }
++
++ public boolean performUpdates(final ReentrantAreaLock ticketLock, final ReentrantAreaLock schedulingLock,
++ final List<ChunkProgressionTask> scheduledTasks, final List<NewChunkHolder> changedFullStatus) {
++ if (this.updateQueue.isEmpty()) {
++ return false;
++ }
++
++ final long maxOrder = this.updateQueue.getLastOrder();
++
++ boolean updated = false;
++ Propagator propagator = null;
++
++ for (;;) {
++ final UpdateQueue.UpdateQueueNode toUpdate = this.updateQueue.acquireNextToUpdate(maxOrder);
++ if (toUpdate == null) {
++ this.updateQueue.awaitFirst(maxOrder);
++
++ if (!this.updateQueue.hasRemainingUpdates(maxOrder)) {
++ if (propagator != null) {
++ Propagator.returnPropagator(propagator);
++ }
++ return updated;
++ }
++
++ continue;
++ }
++
++ if (propagator == null) {
++ propagator = Propagator.acquirePropagator();
++ }
++
++ updated |= this.performUpdate(toUpdate.section, toUpdate, propagator, ticketLock, schedulingLock, scheduledTasks, changedFullStatus);
++ }
++ }
++
++ private static final class UpdateQueue {
++
++ private volatile UpdateQueueNode head;
++ private volatile UpdateQueueNode tail;
++ private volatile UpdateQueueNode lastUpdating;
++
++ protected static final VarHandle HEAD_HANDLE = ConcurrentUtil.getVarHandle(UpdateQueue.class, "head", UpdateQueueNode.class);
++ protected static final VarHandle TAIL_HANDLE = ConcurrentUtil.getVarHandle(UpdateQueue.class, "tail", UpdateQueueNode.class);
++ protected static final VarHandle LAST_UPDATING = ConcurrentUtil.getVarHandle(UpdateQueue.class, "lastUpdating", UpdateQueueNode.class);
++
++ /* head */
++
++ protected final void setHeadPlain(final UpdateQueueNode newHead) {
++ HEAD_HANDLE.set(this, newHead);
++ }
++
++ protected final void setHeadOpaque(final UpdateQueueNode newHead) {
++ HEAD_HANDLE.setOpaque(this, newHead);
++ }
++
++ protected final UpdateQueueNode getHeadPlain() {
++ return (UpdateQueueNode)HEAD_HANDLE.get(this);
++ }
++
++ protected final UpdateQueueNode getHeadOpaque() {
++ return (UpdateQueueNode)HEAD_HANDLE.getOpaque(this);
++ }
++
++ protected final UpdateQueueNode getHeadAcquire() {
++ return (UpdateQueueNode)HEAD_HANDLE.getAcquire(this);
++ }
++
++ /* tail */
++
++ protected final void setTailPlain(final UpdateQueueNode newTail) {
++ TAIL_HANDLE.set(this, newTail);
++ }
++
++ protected final void setTailOpaque(final UpdateQueueNode newTail) {
++ TAIL_HANDLE.setOpaque(this, newTail);
++ }
++
++ protected final UpdateQueueNode getTailPlain() {
++ return (UpdateQueueNode)TAIL_HANDLE.get(this);
++ }
++
++ protected final UpdateQueueNode getTailOpaque() {
++ return (UpdateQueueNode)TAIL_HANDLE.getOpaque(this);
++ }
++
++ /* lastUpdating */
++
++ protected final UpdateQueueNode getLastUpdatingVolatile() {
++ return (UpdateQueueNode)LAST_UPDATING.getVolatile(this);
++ }
++
++ protected final UpdateQueueNode compareAndExchangeLastUpdatingVolatile(final UpdateQueueNode expect, final UpdateQueueNode update) {
++ return (UpdateQueueNode)LAST_UPDATING.compareAndExchange(this, expect, update);
++ }
++
++ public UpdateQueue() {
++ final UpdateQueueNode dummy = new UpdateQueueNode(null, null);
++ dummy.order = -1L;
++ dummy.preventAdds();
++
++ this.setHeadPlain(dummy);
++ this.setTailPlain(dummy);
++ }
++
++ public boolean isEmpty() {
++ return this.peek() == null;
++ }
++
++ public boolean hasRemainingUpdates(final long maxUpdate) {
++ final UpdateQueueNode node = this.peek();
++ return node != null && node.order <= maxUpdate;
++ }
++
++ public long getLastOrder() {
++ for (UpdateQueueNode tail = this.getTailOpaque(), curr = tail;;) {
++ final UpdateQueueNode next = curr.getNextVolatile();
++ if (next == null) {
++ // try to update stale tail
++ if (this.getTailOpaque() == tail && curr != tail) {
++ this.setTailOpaque(curr);
++ }
++ return curr.order;
++ }
++ curr = next;
++ }
++ }
++
++ public UpdateQueueNode acquireNextToUpdate(final long maxOrder) {
++ int failures = 0;
++ for (UpdateQueueNode prev = this.getLastUpdatingVolatile();;) {
++ UpdateQueueNode next = prev == null ? this.peek() : prev.next;
++
++ if (next == null || next.order > maxOrder) {
++ return null;
++ }
++
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++
++ if (prev == (prev = this.compareAndExchangeLastUpdatingVolatile(prev, next))) {
++ return next;
++ }
++
++ ++failures;
++ }
++ }
++
++ public void awaitFirst(final long maxOrder) {
++ final UpdateQueueNode earliest = this.peek();
++ if (earliest == null || earliest.order > maxOrder) {
++ return;
++ }
++
++ final Thread currThread = Thread.currentThread();
++            // we do not use add-blocking; instead, the nullability of the section acts as the completion flag
++            // remove() does not begin to poll from the wait queue until the section is null'd,
++            // so provided we check the nullability before parking, there is no ordering of these operations
++            // such that remove() finishes polling from the wait queue while the section is still non-null
++ earliest.add(currThread);
++
++ // wait until completed
++ while (earliest.getSectionVolatile() != null) {
++ LockSupport.park();
++ }
++ }
++
++ public UpdateQueueNode peek() {
++ for (UpdateQueueNode head = this.getHeadOpaque(), curr = head;;) {
++ final UpdateQueueNode next = curr.getNextVolatile();
++ final Section element = curr.getSectionVolatile(); /* Likely in sync */
++
++ if (element != null) {
++ if (this.getHeadOpaque() == head && curr != head) {
++ this.setHeadOpaque(curr);
++ }
++ return curr;
++ }
++
++ if (next == null) {
++ if (this.getHeadOpaque() == head && curr != head) {
++ this.setHeadOpaque(curr);
++ }
++ return null;
++ }
++ curr = next;
++ }
++ }
++
++ public void remove(final UpdateQueueNode node) {
++ // mark as removed
++ node.setSectionVolatile(null);
++
++ // use peek to advance head
++ this.peek();
++
++ // unpark any waiters / block the wait queue
++ Thread unpark;
++ while ((unpark = node.poll()) != null) {
++ LockSupport.unpark(unpark);
++ }
++ }
++
++ public void append(final UpdateQueueNode node) {
++ int failures = 0;
++
++ for (UpdateQueueNode currTail = this.getTailOpaque(), curr = currTail;;) {
++ /* It has been experimentally shown that placing the read before the backoff results in significantly greater performance */
++ /* It is likely due to a cache miss caused by another write to the next field */
++ final UpdateQueueNode next = curr.getNextVolatile();
++
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++
++ if (next == null) {
++ node.order = curr.order + 1L;
++ final UpdateQueueNode compared = curr.compareExchangeNextVolatile(null, node);
++
++ if (compared == null) {
++ /* Added */
++ /* Avoid CASing on tail more than we need to */
++ /* CAS to avoid setting an out-of-date tail */
++ if (this.getTailOpaque() == currTail) {
++ this.setTailOpaque(node);
++ }
++ return;
++ }
++
++ ++failures;
++ curr = compared;
++ continue;
++ }
++
++ if (curr == currTail) {
++ /* Tail is likely not up-to-date */
++ curr = next;
++ } else {
++ /* Try to update to tail */
++ if (currTail == (currTail = this.getTailOpaque())) {
++ curr = next;
++ } else {
++ curr = currTail;
++ }
++ }
++ }
++ }
++
++ // each node also represents a set of waiters, represented by the MTQ
++ // if the queue is add-blocked, then the update is complete
++ private static final class UpdateQueueNode extends MultiThreadedQueue<Thread> {
++ private long order;
++ private Section section;
++ private volatile UpdateQueueNode next;
++
++ protected static final VarHandle SECTION_HANDLE = ConcurrentUtil.getVarHandle(UpdateQueueNode.class, "section", Section.class);
++ protected static final VarHandle NEXT_HANDLE = ConcurrentUtil.getVarHandle(UpdateQueueNode.class, "next", UpdateQueueNode.class);
++
++ public UpdateQueueNode(final Section section, final UpdateQueueNode next) {
++ SECTION_HANDLE.set(this, section);
++ NEXT_HANDLE.set(this, next);
++ }
++
++ /* section */
++
++ protected final Section getSectionPlain() {
++ return (Section)SECTION_HANDLE.get(this);
++ }
++
++ protected final Section getSectionVolatile() {
++ return (Section)SECTION_HANDLE.getVolatile(this);
++ }
++
++ protected final void setSectionPlain(final Section update) {
++ SECTION_HANDLE.set(this, update);
++ }
++
++ protected final void setSectionOpaque(final Section update) {
++ SECTION_HANDLE.setOpaque(this, update);
++ }
++
++ protected final void setSectionVolatile(final Section update) {
++ SECTION_HANDLE.setVolatile(this, update);
++ }
++
++ protected final Section getAndSetSectionVolatile(final Section update) {
++ return (Section)SECTION_HANDLE.getAndSet(this, update);
++ }
++
++ protected final Section compareExchangeSectionVolatile(final Section expect, final Section update) {
++ return (Section)SECTION_HANDLE.compareAndExchange(this, expect, update);
++ }
++
++ /* next */
++
++ protected final UpdateQueueNode getNextPlain() {
++ return (UpdateQueueNode)NEXT_HANDLE.get(this);
++ }
++
++ protected final UpdateQueueNode getNextOpaque() {
++ return (UpdateQueueNode)NEXT_HANDLE.getOpaque(this);
++ }
++
++ protected final UpdateQueueNode getNextAcquire() {
++ return (UpdateQueueNode)NEXT_HANDLE.getAcquire(this);
++ }
++
++ protected final UpdateQueueNode getNextVolatile() {
++ return (UpdateQueueNode)NEXT_HANDLE.getVolatile(this);
++ }
++
++ protected final void setNextPlain(final UpdateQueueNode next) {
++ NEXT_HANDLE.set(this, next);
++ }
++
++ protected final void setNextVolatile(final UpdateQueueNode next) {
++ NEXT_HANDLE.setVolatile(this, next);
++ }
++
++ protected final UpdateQueueNode compareExchangeNextVolatile(final UpdateQueueNode expect, final UpdateQueueNode set) {
++ return (UpdateQueueNode)NEXT_HANDLE.compareAndExchange(this, expect, set);
++ }
++ }
++ }
++
++ private static final class Section {
++
++ // upper 8 bits: sources, lower 8 bits: level
++ // if we REALLY wanted to get crazy, we could make the increase propagator use MethodHandles#byteArrayViewVarHandle
++ // to read and write the lower 8 bits of this array directly rather than reading, updating the bits, then writing back.
++ private final short[] levels = new short[SECTION_SIZE * SECTION_SIZE];
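++        // e.g. (value assumed for illustration): a stored short of 0x2D1E decodes to source = 0x2D (45), level = 0x1E (30)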
++ // set of local positions that represent sources
++ private final ShortOpenHashSet sources = new ShortOpenHashSet();
++ // map of local index to new source level
++        // the source level _cannot_ be updated in the backing storage immediately, since the update
++        // must first be drained and propagated by performUpdate while the required locks are held
++ private static final byte NO_QUEUED_UPDATE = (byte)-1;
++ private final Short2ByteLinkedOpenHashMap queuedSources = new Short2ByteLinkedOpenHashMap();
++ {
++ this.queuedSources.defaultReturnValue(NO_QUEUED_UPDATE);
++ }
++ private int oneRadNeighboursWithSources = 0;
++
++ public final int sectionX;
++ public final int sectionZ;
++
++ public Section(final int sectionX, final int sectionZ) {
++ this.sectionX = sectionX;
++ this.sectionZ = sectionZ;
++ }
++
++ public boolean isZero() {
++ for (final short val : this.levels) {
++ if (val != 0) {
++ return false;
++ }
++ }
++ return true;
++ }
++
++ @Override
++ public String toString() {
++ final StringBuilder ret = new StringBuilder();
++
++ for (int x = 0; x < SECTION_SIZE; ++x) {
++ ret.append("levels x=").append(x).append("\n");
++ for (int z = 0; z < SECTION_SIZE; ++z) {
++ final short v = this.levels[x | (z << SECTION_SHIFT)];
++ ret.append(v & 0xFF).append(".");
++ }
++ ret.append("\n");
++ ret.append("sources x=").append(x).append("\n");
++ for (int z = 0; z < SECTION_SIZE; ++z) {
++ final short v = this.levels[x | (z << SECTION_SHIFT)];
++ ret.append((v >>> 8) & 0xFF).append(".");
++ }
++ ret.append("\n\n");
++ }
++
++ return ret.toString();
++ }
++ }
++
++ private static final class Propagator {
++
++ private static final ArrayDeque<Propagator> CACHED_PROPAGATORS = new ArrayDeque<>();
++ private static final int MAX_PROPAGATORS = Runtime.getRuntime().availableProcessors() * 2;
++
++ private static Propagator acquirePropagator() {
++ synchronized (CACHED_PROPAGATORS) {
++ final Propagator ret = CACHED_PROPAGATORS.pollFirst();
++ if (ret != null) {
++ return ret;
++ }
++ }
++ return new Propagator();
++ }
++
++ private static void returnPropagator(final Propagator propagator) {
++ synchronized (CACHED_PROPAGATORS) {
++ if (CACHED_PROPAGATORS.size() < MAX_PROPAGATORS) {
++ CACHED_PROPAGATORS.add(propagator);
++ }
++ }
++ }
++
++ private static final int SECTION_RADIUS = 2;
++ private static final int SECTION_CACHE_WIDTH = 2 * SECTION_RADIUS + 1;
++ // minimum number of bits to represent [0, SECTION_SIZE * SECTION_CACHE_WIDTH)
++ private static final int COORDINATE_BITS = 9;
++ private static final int COORDINATE_SIZE = 1 << COORDINATE_BITS;
++ static {
++ if ((SECTION_SIZE * SECTION_CACHE_WIDTH) > (1 << COORDINATE_BITS)) {
++ throw new IllegalStateException("Adjust COORDINATE_BITS");
++ }
++ }
++ // index = x + (z * SECTION_CACHE_WIDTH)
++ // (this requires x >= 0 and z >= 0)
++ private final Section[] sections = new Section[SECTION_CACHE_WIDTH * SECTION_CACHE_WIDTH];
++
++ private int encodeOffsetX;
++ private int encodeOffsetZ;
++
++ private int coordinateOffset;
++
++ private int encodeSectionOffsetX;
++ private int encodeSectionOffsetZ;
++
++ private int sectionIndexOffset;
++
++ public final boolean hasUpdates() {
++ return this.decreaseQueueInitialLength != 0 || this.increaseQueueInitialLength != 0;
++ }
++
++ protected final void setupEncodeOffset(final int centerSectionX, final int centerSectionZ) {
++ final int maxCoordinate = (SECTION_RADIUS * SECTION_SIZE - 1);
++ // must have that encoded >= 0
++ // coordinates can range from [-maxCoordinate + centerSection*SECTION_SIZE, maxCoordinate + centerSection*SECTION_SIZE]
++ // we want a range of [0, maxCoordinate*2]
++ // so, 0 = -maxCoordinate + centerSection*SECTION_SIZE + offset
++ this.encodeOffsetX = maxCoordinate - (centerSectionX << SECTION_SHIFT);
++ this.encodeOffsetZ = maxCoordinate - (centerSectionZ << SECTION_SHIFT);
++
++ // encoded coordinates range from [0, SECTION_SIZE * SECTION_CACHE_WIDTH)
++ // coordinate index = (x + encodeOffsetX) + ((z + encodeOffsetZ) << COORDINATE_BITS)
++ this.coordinateOffset = this.encodeOffsetX + (this.encodeOffsetZ << COORDINATE_BITS);
++
++ // need encoded values to be >= 0
++ // so, 0 = (-SECTION_RADIUS + centerSectionX) + encodeOffset
++ this.encodeSectionOffsetX = SECTION_RADIUS - centerSectionX;
++ this.encodeSectionOffsetZ = SECTION_RADIUS - centerSectionZ;
++
++ // section index = (secX + encodeSectionOffsetX) + ((secZ + encodeSectionOffsetZ) * SECTION_CACHE_WIDTH)
++ this.sectionIndexOffset = this.encodeSectionOffsetX + (this.encodeSectionOffsetZ * SECTION_CACHE_WIDTH);
++ }
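++
++        // worked example (values assumed for illustration): centerSectionX = 10 gives
++        //   encodeOffsetX = 127 - (10 << SECTION_SHIFT) = -513
++        // propagation can reach x positions in [513, 767], so encoded x = posX + encodeOffsetX
++        // lands in [0, 254], comfortably inside [0, SECTION_SIZE * SECTION_CACHE_WIDTH)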
++
++ // must hold ticket lock for (centerSectionX,centerSectionZ) in radius rad
++ // must call setupEncodeOffset
++ protected final void setupCaches(final ThreadedTicketLevelPropagator propagator,
++ final int centerSectionX, final int centerSectionZ,
++ final int rad) {
++ for (int dz = -rad; dz <= rad; ++dz) {
++ for (int dx = -rad; dx <= rad; ++dx) {
++ final int sectionX = centerSectionX + dx;
++ final int sectionZ = centerSectionZ + dz;
++ final Coordinate coordinate = new Coordinate(sectionX, sectionZ);
++ final Section section = propagator.sections.get(coordinate);
++
++ if (section == null) {
++ throw new IllegalStateException("Section at " + coordinate + " should not be null");
++ }
++
++ this.setSectionInCache(sectionX, sectionZ, section);
++ }
++ }
++ }
++
++ protected final void setSectionInCache(final int sectionX, final int sectionZ, final Section section) {
++ this.sections[sectionX + SECTION_CACHE_WIDTH*sectionZ + this.sectionIndexOffset] = section;
++ }
++
++ protected final Section getSection(final int sectionX, final int sectionZ) {
++ return this.sections[sectionX + SECTION_CACHE_WIDTH*sectionZ + this.sectionIndexOffset];
++ }
++
++ protected final int getLevel(final int posX, final int posZ) {
++ final Section section = this.sections[(posX >> SECTION_SHIFT) + SECTION_CACHE_WIDTH*(posZ >> SECTION_SHIFT) + this.sectionIndexOffset];
++ if (section != null) {
++ return (int)section.levels[(posX & (SECTION_SIZE - 1)) | ((posZ & (SECTION_SIZE - 1)) << SECTION_SHIFT)] & 0xFF;
++ }
++
++ return 0;
++ }
++
++ protected final void setLevel(final int posX, final int posZ, final int to) {
++ final Section section = this.sections[(posX >> SECTION_SHIFT) + SECTION_CACHE_WIDTH*(posZ >> SECTION_SHIFT) + this.sectionIndexOffset];
++ if (section != null) {
++ final int index = (posX & (SECTION_SIZE - 1)) | ((posZ & (SECTION_SIZE - 1)) << SECTION_SHIFT);
++ final short level = section.levels[index];
++ section.levels[index] = (short)((level & ~0xFF) | (to & 0xFF));
++ this.updatedPositions.put(Coordinate.key(posX, posZ), (byte)to);
++ }
++ }
++
++ protected final void destroyCaches() {
++ Arrays.fill(this.sections, null);
++ }
++
++ // contains:
++ // lower (COORDINATE_BITS(9) + COORDINATE_BITS(9) = 18) bits encoded position: (x | (z << COORDINATE_BITS))
++ // next LEVEL_BITS (6) bits: propagated level [0, 63]
++ // propagation directions bitset (16 bits):
++ protected static final long ALL_DIRECTIONS_BITSET = (
++ // z = -1
++ (1L << ((1 - 1) | ((1 - 1) << 2))) |
++ (1L << ((1 + 0) | ((1 - 1) << 2))) |
++ (1L << ((1 + 1) | ((1 - 1) << 2))) |
++
++ // z = 0
++ (1L << ((1 - 1) | ((1 + 0) << 2))) |
++ //(1L << ((1 + 0) | ((1 + 0) << 2))) | // exclude (0,0)
++ (1L << ((1 + 1) | ((1 + 0) << 2))) |
++
++ // z = 1
++ (1L << ((1 - 1) | ((1 + 1) << 2))) |
++ (1L << ((1 + 0) | ((1 + 1) << 2))) |
++ (1L << ((1 + 1) | ((1 + 1) << 2)))
++ );
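++        // queue entry layout, as a worked example (values assumed for illustration): an entry packs
++        //   (encodedPos & 0x3FFFF) | ((long)level << 18) | (directionBitset << 24)
++        // since COORDINATE_BITS + COORDINATE_BITS = 18 and LEVEL_BITS = 6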
++
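++        // debug helper (unused in production): decodes a direction bitset and prints each (xOff, zOff) offset it encodes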
++ private void ex(int bitset) {
++ for (int i = 0, len = Integer.bitCount(bitset); i < len; ++i) {
++ final int set = Integer.numberOfTrailingZeros(bitset);
++ final int tailingBit = (-bitset) & bitset;
++ // XOR to remove the trailing bit
++ bitset ^= tailingBit;
++
++ // the encoded value set is (x_val) | (z_val << 2), totaling 4 bits
++ // thus, the bitset is 16 bits wide where each one represents a direction to propagate and the
++ // index of the set bit is the encoded value
++ // the encoded coordinate has 3 valid states:
++ // 0b00 (0) -> -1
++ // 0b01 (1) -> 0
++ // 0b10 (2) -> 1
++ // the decode operation then is val - 1, and the encode operation is val + 1
++ final int xOff = (set & 3) - 1;
++ final int zOff = ((set >>> 2) & 3) - 1;
++ System.out.println("Encoded: (" + xOff + "," + zOff + ")");
++ }
++ }
++
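++        // debug helper (unused in production): validates that a shifted direction bitset only encodes
++        // offsets in [-1, 1] excluding (0,0)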
++ private void ch(long bs, int shift) {
++ int bitset = (int)(bs >>> shift);
++ for (int i = 0, len = Integer.bitCount(bitset); i < len; ++i) {
++ final int set = Integer.numberOfTrailingZeros(bitset);
++ final int tailingBit = (-bitset) & bitset;
++ // XOR to remove the trailing bit
++ bitset ^= tailingBit;
++
++ // the encoded value set is (x_val) | (z_val << 2), totaling 4 bits
++ // thus, the bitset is 16 bits wide where each one represents a direction to propagate and the
++ // index of the set bit is the encoded value
++ // the encoded coordinate has 3 valid states:
++ // 0b00 (0) -> -1
++ // 0b01 (1) -> 0
++ // 0b10 (2) -> 1
++ // the decode operation then is val - 1, and the encode operation is val + 1
++ final int xOff = (set & 3) - 1;
++ final int zOff = ((set >>> 2) & 3) - 1;
++ if (Math.abs(xOff) > 1 || Math.abs(zOff) > 1 || (xOff | zOff) == 0) {
++ throw new IllegalStateException();
++ }
++ }
++ }
++
++ // whether the increase propagator needs to write the propagated level to the position, used to avoid cascading
++ // updates for sources
++ protected static final long FLAG_WRITE_LEVEL = Long.MIN_VALUE >>> 1;
++ // whether the propagation needs to check if its current level is equal to the expected level
++ // used only in increase propagation
++ protected static final long FLAG_RECHECK_LEVEL = Long.MIN_VALUE >>> 0;
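++        // note: FLAG_RECHECK_LEVEL occupies bit 63 and FLAG_WRITE_LEVEL bit 62, well above the 40 bits used
++        // by the payload (18 position + 6 level + 16 direction bits), so the flags never collide with it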
++
++ protected long[] increaseQueue = new long[SECTION_SIZE * SECTION_SIZE * 2];
++ protected int increaseQueueInitialLength;
++ protected long[] decreaseQueue = new long[SECTION_SIZE * SECTION_SIZE * 2];
++ protected int decreaseQueueInitialLength;
++
++ protected final Long2ByteLinkedOpenHashMap updatedPositions = new Long2ByteLinkedOpenHashMap();
++
++ protected final long[] resizeIncreaseQueue() {
++ return this.increaseQueue = Arrays.copyOf(this.increaseQueue, this.increaseQueue.length * 2);
++ }
++
++ protected final long[] resizeDecreaseQueue() {
++ return this.decreaseQueue = Arrays.copyOf(this.decreaseQueue, this.decreaseQueue.length * 2);
++ }
++
++        protected final void appendToIncreaseQueue(final long value) {
++            final int idx = this.increaseQueueInitialLength++;
++            long[] queue = this.increaseQueue;
++            if (idx >= queue.length) {
++                queue = this.resizeIncreaseQueue();
++            }
++            queue[idx] = value;
++        }
++
++        protected final void appendToDecreaseQueue(final long value) {
++            final int idx = this.decreaseQueueInitialLength++;
++            long[] queue = this.decreaseQueue;
++            if (idx >= queue.length) {
++                queue = this.resizeDecreaseQueue();
++            }
++            queue[idx] = value;
++        }
++
++ protected final void performIncrease() {
++ long[] queue = this.increaseQueue;
++ int queueReadIndex = 0;
++ int queueLength = this.increaseQueueInitialLength;
++ this.increaseQueueInitialLength = 0;
++ final int decodeOffsetX = -this.encodeOffsetX;
++ final int decodeOffsetZ = -this.encodeOffsetZ;
++ final int encodeOffset = this.coordinateOffset;
++ final int sectionOffset = this.sectionIndexOffset;
++
++ final Long2ByteLinkedOpenHashMap updatedPositions = this.updatedPositions;
++
++ while (queueReadIndex < queueLength) {
++ final long queueValue = queue[queueReadIndex++];
++
++ final int posX = ((int)queueValue & (COORDINATE_SIZE - 1)) + decodeOffsetX;
++ final int posZ = (((int)queueValue >>> COORDINATE_BITS) & (COORDINATE_SIZE - 1)) + decodeOffsetZ;
++ final int propagatedLevel = ((int)queueValue >>> (COORDINATE_BITS + COORDINATE_BITS)) & (LEVEL_COUNT - 1);
++ // note: the above code requires coordinate bits * 2 < 32
++ // bitset is 16 bits
++ int propagateDirectionBitset = (int)(queueValue >>> (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)) & ((1 << 16) - 1);
++
++ if ((queueValue & FLAG_RECHECK_LEVEL) != 0L) {
++ if (this.getLevel(posX, posZ) != propagatedLevel) {
++ // not at the level we expect, so something changed.
++ continue;
++ }
++ } else if ((queueValue & FLAG_WRITE_LEVEL) != 0L) {
++ // these are used to restore sources after a propagation decrease
++ this.setLevel(posX, posZ, propagatedLevel);
++ }
++
++ // this bitset represents the values that we have not propagated to
++ // this bitset lets us determine what directions the neighbours we set should propagate to, in most cases
++ // significantly reducing the total number of ops
++ // since we propagate in a 1 radius, we need a 2 radius bitset to hold all possible values we would possibly need
++ // but if we use only 5x5 bits, then we need to use div/mod to retrieve coordinates from the bitset, so instead
++ // we use an 8x8 bitset and luckily that can be fit into only one long value (64 bits)
++ // to make things easy, we use positions [0, 4] in the bitset, with current position being 2
++ // index = x | (z << 3)
++
++ // to start, we eliminate everything 1 radius from the current position as the previous propagator
++ // must guarantee that either we propagate everything in 1 radius or we partially propagate for 1 radius
++ // but the rest not propagated are already handled
++ long currentPropagation = ~(
++ // z = -1
++ (1L << ((2 - 1) | ((2 - 1) << 3))) |
++ (1L << ((2 + 0) | ((2 - 1) << 3))) |
++ (1L << ((2 + 1) | ((2 - 1) << 3))) |
++
++ // z = 0
++ (1L << ((2 - 1) | ((2 + 0) << 3))) |
++ (1L << ((2 + 0) | ((2 + 0) << 3))) |
++ (1L << ((2 + 1) | ((2 + 0) << 3))) |
++
++ // z = 1
++ (1L << ((2 - 1) | ((2 + 1) << 3))) |
++ (1L << ((2 + 0) | ((2 + 1) << 3))) |
++ (1L << ((2 + 1) | ((2 + 1) << 3)))
++ );
++
++ final int toPropagate = propagatedLevel - 1;
++
++ // we could use while (propagateDirectionBitset != 0), but it's not a predictable branch. By counting
++ // the bits, the cpu loop predictor should perfectly predict the loop.
++ for (int l = 0, len = Integer.bitCount(propagateDirectionBitset); l < len; ++l) {
++ final int set = Integer.numberOfTrailingZeros(propagateDirectionBitset);
++ final int tailingBit = (-propagateDirectionBitset) & propagateDirectionBitset;
++ propagateDirectionBitset ^= tailingBit;
++
++ // pDecode is from [0, 2], and 1 must be subtracted to fully decode the offset
++ // it has been split to save some cycles via parallelism
++ final int pDecodeX = (set & 3);
++ final int pDecodeZ = ((set >>> 2) & 3);
++
++ // re-ordered -1 on the position decode into pos - 1 to occur in parallel with determining pDecodeX
++ final int offX = (posX - 1) + pDecodeX;
++ final int offZ = (posZ - 1) + pDecodeZ;
++
++ final int sectionIndex = (offX >> SECTION_SHIFT) + ((offZ >> SECTION_SHIFT) * SECTION_CACHE_WIDTH) + sectionOffset;
++ final int localIndex = (offX & (SECTION_SIZE - 1)) | ((offZ & (SECTION_SIZE - 1)) << SECTION_SHIFT);
++
++ // to retrieve a set of bits from a long value: (n_bitmask << (nstartidx)) & bitset
++ // bitset idx = x | (z << 3)
++
++ // read three bits, so we need 7L
++ // note that generally: off - pos = (pos - 1) + pDecode - pos = pDecode - 1
++ // nstartidx1 = x rel -1 for z rel -1
++ // = (offX - posX - 1 + 2) | ((offZ - posZ - 1 + 2) << 3)
++ // = (pDecodeX - 1 - 1 + 2) | ((pDecodeZ - 1 - 1 + 2) << 3)
++ // = pDecodeX | (pDecodeZ << 3) = start
++ final int start = pDecodeX | (pDecodeZ << 3);
++ final long bitsetLine1 = currentPropagation & (7L << (start));
++
++ // nstartidx2 = x rel -1 for z rel 0 = line after line1, so we can just add 8 (row length of bitset)
++ final long bitsetLine2 = currentPropagation & (7L << (start + 8));
++
++                // nstartidx3 = x rel -1 for z rel 1 = line after line2, so we can just add 8 (row length of bitset)
++ final long bitsetLine3 = currentPropagation & (7L << (start + (8 + 8)));
++
++ // remove ("take") lines from bitset
++ currentPropagation ^= (bitsetLine1 | bitsetLine2 | bitsetLine3);
++
++ // now try to propagate
++ final Section section = this.sections[sectionIndex];
++
++ // lower 8 bits are current level, next upper 7 bits are source level, next 1 bit is updated source flag
++ final short currentStoredLevel = section.levels[localIndex];
++ final int currentLevel = currentStoredLevel & 0xFF;
++
++ if (currentLevel >= toPropagate) {
++ continue; // already at the level we want
++ }
++
++ // update level
++ section.levels[localIndex] = (short)((currentStoredLevel & ~0xFF) | (toPropagate & 0xFF));
++ updatedPositions.putAndMoveToLast(Coordinate.key(offX, offZ), (byte)toPropagate);
++
++ // queue next
++ if (toPropagate > 1) {
++ // now combine into one bitset to pass to child
++ // the child bitset is 4x4, so we just shift each line by 4
++ // add the propagation bitset offset to each line to make it easy to OR it into the propagation queue value
++ final long childPropagation =
++ ((bitsetLine1 >>> (start)) << (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)) | // z = -1
++ ((bitsetLine2 >>> (start + 8)) << (4 + COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)) | // z = 0
++ ((bitsetLine3 >>> (start + (8 + 8))) << (4 + 4 + COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)); // z = 1
++
++ // don't queue update if toPropagate cannot propagate anything to neighbours
++ // (for increase, propagating 0 to neighbours is useless)
++ if (queueLength >= queue.length) {
++ queue = this.resizeIncreaseQueue();
++ }
++ queue[queueLength++] =
++ ((long)(offX + (offZ << COORDINATE_BITS) + encodeOffset) & ((1L << (COORDINATE_BITS + COORDINATE_BITS)) - 1)) |
++ ((toPropagate & (LEVEL_COUNT - 1L)) << (COORDINATE_BITS + COORDINATE_BITS)) |
++ childPropagation; //(ALL_DIRECTIONS_BITSET << (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS));
++ continue;
++ }
++ continue;
++ }
++ }
++ }
++
++ protected final void performDecrease() {
++ long[] queue = this.decreaseQueue;
++ long[] increaseQueue = this.increaseQueue;
++ int queueReadIndex = 0;
++ int queueLength = this.decreaseQueueInitialLength;
++ this.decreaseQueueInitialLength = 0;
++ int increaseQueueLength = this.increaseQueueInitialLength;
++ final int decodeOffsetX = -this.encodeOffsetX;
++ final int decodeOffsetZ = -this.encodeOffsetZ;
++ final int encodeOffset = this.coordinateOffset;
++ final int sectionOffset = this.sectionIndexOffset;
++
++ final Long2ByteLinkedOpenHashMap updatedPositions = this.updatedPositions;
++
++ while (queueReadIndex < queueLength) {
++ final long queueValue = queue[queueReadIndex++];
++
++ final int posX = ((int)queueValue & (COORDINATE_SIZE - 1)) + decodeOffsetX;
++ final int posZ = (((int)queueValue >>> COORDINATE_BITS) & (COORDINATE_SIZE - 1)) + decodeOffsetZ;
++ final int propagatedLevel = ((int)queueValue >>> (COORDINATE_BITS + COORDINATE_BITS)) & (LEVEL_COUNT - 1);
++ // note: the above code requires coordinate bits * 2 < 32
++ // bitset is 16 bits
++ int propagateDirectionBitset = (int)(queueValue >>> (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)) & ((1 << 16) - 1);
++
++ // this bitset represents the values that we have not propagated to
++ // this bitset lets us determine what directions the neighbours we set should propagate to, in most cases
++ // significantly reducing the total number of ops
++ // since we propagate in a 1 radius, we need a 2 radius bitset to hold all possible values we would possibly need
++ // but if we use only 5x5 bits, then we need to use div/mod to retrieve coordinates from the bitset, so instead
++ // we use an 8x8 bitset and luckily that can be fit into only one long value (64 bits)
++ // to make things easy, we use positions [0, 4] in the bitset, with current position being 2
++ // index = x | (z << 3)
++
++ // to start, we eliminate everything 1 radius from the current position as the previous propagator
++ // must guarantee that either we propagate everything in 1 radius or we partially propagate for 1 radius
++ // but the rest not propagated are already handled
++ long currentPropagation = ~(
++ // z = -1
++ (1L << ((2 - 1) | ((2 - 1) << 3))) |
++ (1L << ((2 + 0) | ((2 - 1) << 3))) |
++ (1L << ((2 + 1) | ((2 - 1) << 3))) |
++
++ // z = 0
++ (1L << ((2 - 1) | ((2 + 0) << 3))) |
++ (1L << ((2 + 0) | ((2 + 0) << 3))) |
++ (1L << ((2 + 1) | ((2 + 0) << 3))) |
++
++ // z = 1
++ (1L << ((2 - 1) | ((2 + 1) << 3))) |
++ (1L << ((2 + 0) | ((2 + 1) << 3))) |
++ (1L << ((2 + 1) | ((2 + 1) << 3)))
++ );
++
++ final int toPropagate = propagatedLevel - 1;
++
++ // we could use while (propagateDirectionBitset != 0), but it's not a predictable branch. By counting
++ // the bits, the cpu loop predictor should perfectly predict the loop.
++ for (int l = 0, len = Integer.bitCount(propagateDirectionBitset); l < len; ++l) {
++ final int set = Integer.numberOfTrailingZeros(propagateDirectionBitset);
++ final int tailingBit = (-propagateDirectionBitset) & propagateDirectionBitset;
++ propagateDirectionBitset ^= tailingBit;
++
++ // pDecode is from [0, 2], and 1 must be subtracted to fully decode the offset
++ // it has been split to save some cycles via parallelism
++ final int pDecodeX = (set & 3);
++ final int pDecodeZ = ((set >>> 2) & 3);
++
++ // re-ordered -1 on the position decode into pos - 1 to occur in parallel with determining pDecodeX
++ final int offX = (posX - 1) + pDecodeX;
++ final int offZ = (posZ - 1) + pDecodeZ;
++
++ final int sectionIndex = (offX >> SECTION_SHIFT) + ((offZ >> SECTION_SHIFT) * SECTION_CACHE_WIDTH) + sectionOffset;
++ final int localIndex = (offX & (SECTION_SIZE - 1)) | ((offZ & (SECTION_SIZE - 1)) << SECTION_SHIFT);
++
++ // to retrieve a set of bits from a long value: (n_bitmask << (nstartidx)) & bitset
++ // bitset idx = x | (z << 3)
++
++ // read three bits, so we need 7L
++ // note that generally: off - pos = (pos - 1) + pDecode - pos = pDecode - 1
++ // nstartidx1 = x rel -1 for z rel -1
++ // = (offX - posX - 1 + 2) | ((offZ - posZ - 1 + 2) << 3)
++ // = (pDecodeX - 1 - 1 + 2) | ((pDecodeZ - 1 - 1 + 2) << 3)
++ // = pDecodeX | (pDecodeZ << 3) = start
++ final int start = pDecodeX | (pDecodeZ << 3);
++ final long bitsetLine1 = currentPropagation & (7L << (start));
++
++ // nstartidx2 = x rel -1 for z rel 0 = line after line1, so we can just add 8 (row length of bitset)
++ final long bitsetLine2 = currentPropagation & (7L << (start + 8));
++
++                // nstartidx3 = x rel -1 for z rel 1 = line after line2, so we can just add 8 (row length of bitset)
++ final long bitsetLine3 = currentPropagation & (7L << (start + (8 + 8)));
++
++ // now try to propagate
++ final Section section = this.sections[sectionIndex];
++
++ // lower 8 bits are current level, next upper 7 bits are source level, next 1 bit is updated source flag
++ final short currentStoredLevel = section.levels[localIndex];
++ final int currentLevel = currentStoredLevel & 0xFF;
++ final int sourceLevel = (currentStoredLevel >>> 8) & 0xFF;
++
++ if (currentLevel == 0) {
++ continue; // already at the level we want
++ }
++
++ if (currentLevel > toPropagate) {
++ // it looks like another source propagated here, so re-propagate it
++ if (increaseQueueLength >= increaseQueue.length) {
++ increaseQueue = this.resizeIncreaseQueue();
++ }
++ increaseQueue[increaseQueueLength++] =
++ ((long)(offX + (offZ << COORDINATE_BITS) + encodeOffset) & ((1L << (COORDINATE_BITS + COORDINATE_BITS)) - 1)) |
++ ((currentLevel & (LEVEL_COUNT - 1L)) << (COORDINATE_BITS + COORDINATE_BITS)) |
++ (FLAG_RECHECK_LEVEL | (ALL_DIRECTIONS_BITSET << (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)));
++ continue;
++ }
++
++ // remove ("take") lines from bitset
++ // can't do this during decrease, TODO WHY?
++ //currentPropagation ^= (bitsetLine1 | bitsetLine2 | bitsetLine3);
++
++ // update level
++ section.levels[localIndex] = (short)((currentStoredLevel & ~0xFF));
++ updatedPositions.putAndMoveToLast(Coordinate.key(offX, offZ), (byte)0);
++
++ if (sourceLevel != 0) {
++ // re-propagate source
++ // note: do not set recheck level, or else the propagation will fail
++ if (increaseQueueLength >= increaseQueue.length) {
++ increaseQueue = this.resizeIncreaseQueue();
++ }
++ increaseQueue[increaseQueueLength++] =
++ ((long)(offX + (offZ << COORDINATE_BITS) + encodeOffset) & ((1L << (COORDINATE_BITS + COORDINATE_BITS)) - 1)) |
++ ((sourceLevel & (LEVEL_COUNT - 1L)) << (COORDINATE_BITS + COORDINATE_BITS)) |
++ (FLAG_WRITE_LEVEL | (ALL_DIRECTIONS_BITSET << (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)));
++ }
++
++ // queue next
++                // note: toPropagate > 0 here, since toPropagate >= currentLevel and currentLevel > 0
++ // now combine into one bitset to pass to child
++ // the child bitset is 4x4, so we just shift each line by 4
++ // add the propagation bitset offset to each line to make it easy to OR it into the propagation queue value
++ final long childPropagation =
++ ((bitsetLine1 >>> (start)) << (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)) | // z = -1
++ ((bitsetLine2 >>> (start + 8)) << (4 + COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)) | // z = 0
++ ((bitsetLine3 >>> (start + (8 + 8))) << (4 + 4 + COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)); // z = 1
++
++                // unlike increase, the next decrease is always queued: toPropagate > 0 here (see note above),
++                // so the removal must continue to propagate outwards
++ if (queueLength >= queue.length) {
++ queue = this.resizeDecreaseQueue();
++ }
++ queue[queueLength++] =
++ ((long)(offX + (offZ << COORDINATE_BITS) + encodeOffset) & ((1L << (COORDINATE_BITS + COORDINATE_BITS)) - 1)) |
++ ((toPropagate & (LEVEL_COUNT - 1L)) << (COORDINATE_BITS + COORDINATE_BITS)) |
++ (ALL_DIRECTIONS_BITSET << (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)); //childPropagation;
++ continue;
++ }
++ }
++
++ // propagate sources we clobbered
++ this.increaseQueueInitialLength = increaseQueueLength;
++ this.performIncrease();
++ }
++ }
++
++ private static final class Coordinate implements Comparable<Coordinate> {
++
++ public final long key;
++
++ public Coordinate(final long key) {
++ this.key = key;
++ }
++
++ public Coordinate(final int x, final int z) {
++ this.key = key(x, z);
++ }
++
++ public static long key(final int x, final int z) {
++ return ((long)z << 32) | (x & 0xFFFFFFFFL);
++ }
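++
++        // e.g. (values assumed for illustration): key(3, -1) == 0xFFFFFFFF00000003L; x(key) == 3, z(key) == -1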
++
++ public static int x(final long key) {
++ return (int)key;
++ }
++
++ public static int z(final long key) {
++ return (int)(key >>> 32);
++ }
++
++ @Override
++ public int hashCode() {
++ return (int)HashCommon.mix(this.key);
++ }
++
++ @Override
++ public boolean equals(final Object obj) {
++ if (this == obj) {
++ return true;
++ }
++
++ if (!(obj instanceof Coordinate other)) {
++ return false;
++ }
++
++ return this.key == other.key;
++ }
++
++        // This class is intended for HashMap/ConcurrentHashMap usage, which treeify bins when a chain
++        // grows too large. Implementing compareTo lets the tree bins order keys instead of relying on hash ties.
++ @Override
++ public int compareTo(final Coordinate other) {
++ return Long.compare(this.key, other.key);
++ }
++
++ @Override
++ public String toString() {
++ return "[" + x(this.key) + "," + z(this.key) + "]";
++ }
++ }
++
++ /*
++ private static final java.util.Random random = new java.util.Random(4L);
++ private static final List<io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void>> walkers =
++ new java.util.ArrayList<>();
++ static final int PLAYERS = 0;
++ static final int RAD_BLOCKS = 10000;
++ static final int RAD = RAD_BLOCKS >> 4;
++ static final int RAD_BIG_BLOCKS = 100_000;
++ static final int RAD_BIG = RAD_BIG_BLOCKS >> 4;
++ static final int VD = 4;
++ static final int BIG_PLAYERS = 50;
++ static final double WALK_CHANCE = 0.10;
++ static final double TP_CHANCE = 0.01;
++ static final int TP_BACK_PLAYERS = 200;
++ static final double TP_BACK_CHANCE = 0.25;
++ static final double TP_STEAL_CHANCE = 0.25;
++ private static final List<io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void>> tpBack =
++ new java.util.ArrayList<>();
++
++ public static void main(final String[] args) {
++ final ReentrantAreaLock ticketLock = new ReentrantAreaLock(SECTION_SHIFT);
++ final ReentrantAreaLock schedulingLock = new ReentrantAreaLock(SECTION_SHIFT);
++ final Long2ByteLinkedOpenHashMap levelMap = new Long2ByteLinkedOpenHashMap();
++ final Long2ByteLinkedOpenHashMap refMap = new Long2ByteLinkedOpenHashMap();
++ final io.papermc.paper.util.misc.Delayed8WayDistancePropagator2D ref = new io.papermc.paper.util.misc.Delayed8WayDistancePropagator2D((final long coordinate, final byte oldLevel, final byte newLevel) -> {
++ if (newLevel == 0) {
++ refMap.remove(coordinate);
++ } else {
++ refMap.put(coordinate, newLevel);
++ }
++ });
++ final ThreadedTicketLevelPropagator propagator = new ThreadedTicketLevelPropagator() {
++ @Override
++ protected void processLevelUpdates(Long2ByteLinkedOpenHashMap updates) {
++ for (final long key : updates.keySet()) {
++ final byte val = updates.get(key);
++ if (val == 0) {
++ levelMap.remove(key);
++ } else {
++ levelMap.put(key, val);
++ }
++ }
++ }
++
++ @Override
++ protected void processSchedulingUpdates(Long2ByteLinkedOpenHashMap updates, List<ChunkProgressionTask> scheduledTasks, List<NewChunkHolder> changedFullStatus) {}
++ };
++
++ for (;;) {
++ if (walkers.isEmpty() && tpBack.isEmpty()) {
++ for (int i = 0; i < PLAYERS; ++i) {
++ int rad = i < BIG_PLAYERS ? RAD_BIG : RAD;
++ int posX = random.nextInt(-rad, rad + 1);
++ int posZ = random.nextInt(-rad, rad + 1);
++
++ io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void> map = new io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<>(null) {
++ @Override
++ protected void addCallback(Void parameter, int chunkX, int chunkZ) {
++ int src = 45 - 31 + 1;
++ ref.setSource(chunkX, chunkZ, src);
++ propagator.setSource(chunkX, chunkZ, src);
++ }
++
++ @Override
++ protected void removeCallback(Void parameter, int chunkX, int chunkZ) {
++ ref.removeSource(chunkX, chunkZ);
++ propagator.removeSource(chunkX, chunkZ);
++ }
++ };
++
++ map.add(posX, posZ, VD);
++
++ walkers.add(map);
++ }
++ for (int i = 0; i < TP_BACK_PLAYERS; ++i) {
++ int rad = RAD_BIG;
++ int posX = random.nextInt(-rad, rad + 1);
++ int posZ = random.nextInt(-rad, rad + 1);
++
++ io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void> map = new io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<>(null) {
++ @Override
++ protected void addCallback(Void parameter, int chunkX, int chunkZ) {
++ int src = 45 - 31 + 1;
++ ref.setSource(chunkX, chunkZ, src);
++ propagator.setSource(chunkX, chunkZ, src);
++ }
++
++ @Override
++ protected void removeCallback(Void parameter, int chunkX, int chunkZ) {
++ ref.removeSource(chunkX, chunkZ);
++ propagator.removeSource(chunkX, chunkZ);
++ }
++ };
++
++ map.add(posX, posZ, random.nextInt(1, 63));
++
++ tpBack.add(map);
++ }
++ } else {
++ for (int i = 0; i < PLAYERS; ++i) {
++ if (random.nextDouble() > WALK_CHANCE) {
++ continue;
++ }
++
++ io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void> map = walkers.get(i);
++
++ int updateX = random.nextInt(-1, 2);
++ int updateZ = random.nextInt(-1, 2);
++
++ map.update(map.lastChunkX + updateX, map.lastChunkZ + updateZ, VD);
++ }
++
++ for (int i = 0; i < PLAYERS; ++i) {
++ if (random.nextDouble() > TP_CHANCE) {
++ continue;
++ }
++
++ int rad = i < BIG_PLAYERS ? RAD_BIG : RAD;
++ int posX = random.nextInt(-rad, rad + 1);
++ int posZ = random.nextInt(-rad, rad + 1);
++
++ io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void> map = walkers.get(i);
++
++ map.update(posX, posZ, VD);
++ }
++
++ for (int i = 0; i < TP_BACK_PLAYERS; ++i) {
++ if (random.nextDouble() > TP_BACK_CHANCE) {
++ continue;
++ }
++
++ io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void> map = tpBack.get(i);
++
++ map.update(-map.lastChunkX, -map.lastChunkZ, random.nextInt(1, 63));
++
++ if (random.nextDouble() > TP_STEAL_CHANCE) {
++ propagator.performUpdate(
++ map.lastChunkX >> SECTION_SHIFT, map.lastChunkZ >> SECTION_SHIFT, schedulingLock, null, null
++ );
++ propagator.performUpdate(
++ (-map.lastChunkX >> SECTION_SHIFT), (-map.lastChunkZ >> SECTION_SHIFT), schedulingLock, null, null
++ );
++ }
++ }
++ }
++
++ ref.propagateUpdates();
++ propagator.performUpdates(ticketLock, schedulingLock, null, null);
++
++ if (!refMap.equals(levelMap)) {
++ throw new IllegalStateException("Error!");
++ }
++ }
++ }
++ */
++}
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/queue/RadiusAwarePrioritisedExecutor.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/queue/RadiusAwarePrioritisedExecutor.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..f7b0e2564ac4bd2db1d2b2bdc230c9f52f8a21b7
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/queue/RadiusAwarePrioritisedExecutor.java
+@@ -0,0 +1,667 @@
++package io.papermc.paper.chunk.system.scheduling.queue;
++
++import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import io.papermc.paper.util.CoordinateUtils;
++import it.unimi.dsi.fastutil.longs.Long2ReferenceOpenHashMap;
++import it.unimi.dsi.fastutil.objects.ReferenceOpenHashSet;
++import java.util.ArrayList;
++import java.util.Comparator;
++import java.util.List;
++import java.util.PriorityQueue;
++
++public class RadiusAwarePrioritisedExecutor {
++
++ private static final Comparator<DependencyNode> DEPENDENCY_NODE_COMPARATOR = (final DependencyNode t1, final DependencyNode t2) -> {
++ return Long.compare(t1.id, t2.id);
++ };
++
++ private final DependencyTree[] queues = new DependencyTree[PrioritisedExecutor.Priority.TOTAL_SCHEDULABLE_PRIORITIES];
++ private static final int NO_TASKS_QUEUED = -1;
++ private int selectedQueue = NO_TASKS_QUEUED;
++ private boolean canQueueTasks = true;
++
++ public RadiusAwarePrioritisedExecutor(final PrioritisedExecutor executor, final int maxToSchedule) {
++ for (int i = 0; i < this.queues.length; ++i) {
++ this.queues[i] = new DependencyTree(this, executor, maxToSchedule, i);
++ }
++ }
++
++ private boolean canQueueTasks() {
++ return this.canQueueTasks;
++ }
++
++ private List<PrioritisedExecutor.PrioritisedTask> treeFinished() {
++ this.canQueueTasks = true;
++ for (int priority = 0; priority < this.queues.length; ++priority) {
++ final DependencyTree queue = this.queues[priority];
++ if (queue.hasWaitingTasks()) {
++ final List<PrioritisedExecutor.PrioritisedTask> ret = queue.tryPushTasks();
++
++ if (ret == null || ret.isEmpty()) {
++ // this happens when the tasks in the wait queue were purged
++ // in this case, the queue was actually empty, we just had to purge it
++ // if we set the selected queue without scheduling any tasks, the queue will never be unselected
++ // as that requires a scheduled task completing...
++ continue;
++ }
++
++ this.selectedQueue = priority;
++ return ret;
++ }
++ }
++
++ this.selectedQueue = NO_TASKS_QUEUED;
++
++ return null;
++ }
++
++ private List<PrioritisedExecutor.PrioritisedTask> queue(final Task task, final PrioritisedExecutor.Priority priority) {
++ final int priorityId = priority.priority;
++ final DependencyTree queue = this.queues[priorityId];
++
++ final DependencyNode node = new DependencyNode(task, queue);
++
++ if (task.dependencyNode != null) {
++ throw new IllegalStateException();
++ }
++ task.dependencyNode = node;
++
++ queue.pushNode(node);
++
++ if (this.selectedQueue == NO_TASKS_QUEUED) {
++ this.canQueueTasks = true;
++ this.selectedQueue = priorityId;
++ return queue.tryPushTasks();
++ }
++
++ if (!this.canQueueTasks) {
++ return null;
++ }
++
++ if (PrioritisedExecutor.Priority.isHigherPriority(priorityId, this.selectedQueue)) {
++ // prevent the lower priority tree from queueing more tasks
++ this.canQueueTasks = false;
++ return null;
++ }
++
++ // priorityId != selectedQueue: lower priority, don't care - treeFinished will pick it up
++ return priorityId == this.selectedQueue ? queue.tryPushTasks() : null;
++ }
++
++ public PrioritisedExecutor.PrioritisedTask createTask(final int chunkX, final int chunkZ, final int radius,
++ final Runnable run, final PrioritisedExecutor.Priority priority) {
++ if (radius < 0) {
++            throw new IllegalArgumentException("Radius must be >= 0: " + radius);
++ }
++ return new Task(this, chunkX, chunkZ, radius, run, priority);
++ }
++
++ public PrioritisedExecutor.PrioritisedTask createTask(final int chunkX, final int chunkZ, final int radius,
++ final Runnable run) {
++ return this.createTask(chunkX, chunkZ, radius, run, PrioritisedExecutor.Priority.NORMAL);
++ }
++
++ public PrioritisedExecutor.PrioritisedTask queueTask(final int chunkX, final int chunkZ, final int radius,
++ final Runnable run, final PrioritisedExecutor.Priority priority) {
++ final PrioritisedExecutor.PrioritisedTask ret = this.createTask(chunkX, chunkZ, radius, run, priority);
++
++ ret.queue();
++
++ return ret;
++ }
++
++ public PrioritisedExecutor.PrioritisedTask queueTask(final int chunkX, final int chunkZ, final int radius,
++ final Runnable run) {
++ final PrioritisedExecutor.PrioritisedTask ret = this.createTask(chunkX, chunkZ, radius, run);
++
++ ret.queue();
++
++ return ret;
++ }
++
++ public PrioritisedExecutor.PrioritisedTask createInfiniteRadiusTask(final Runnable run, final PrioritisedExecutor.Priority priority) {
++ return new Task(this, 0, 0, -1, run, priority);
++ }
++
++ public PrioritisedExecutor.PrioritisedTask createInfiniteRadiusTask(final Runnable run) {
++ return this.createInfiniteRadiusTask(run, PrioritisedExecutor.Priority.NORMAL);
++ }
++
++ public PrioritisedExecutor.PrioritisedTask queueInfiniteRadiusTask(final Runnable run, final PrioritisedExecutor.Priority priority) {
++ final PrioritisedExecutor.PrioritisedTask ret = this.createInfiniteRadiusTask(run, priority);
++
++ ret.queue();
++
++ return ret;
++ }
++
++ public PrioritisedExecutor.PrioritisedTask queueInfiniteRadiusTask(final Runnable run) {
++ final PrioritisedExecutor.PrioritisedTask ret = this.createInfiniteRadiusTask(run, PrioritisedExecutor.Priority.NORMAL);
++
++ ret.queue();
++
++ return ret;
++ }
++
++ // all accesses must be synchronised by the radius aware object
++ private static final class DependencyTree {
++
++ private final RadiusAwarePrioritisedExecutor scheduler;
++ private final PrioritisedExecutor executor;
++ private final int maxToSchedule;
++ private final int treeIndex;
++
++ private int currentlyExecuting;
++ private long idGenerator;
++
++ private final PriorityQueue<DependencyNode> awaiting = new PriorityQueue<>(DEPENDENCY_NODE_COMPARATOR);
++
++ private final PriorityQueue<DependencyNode> infiniteRadius = new PriorityQueue<>(DEPENDENCY_NODE_COMPARATOR);
++ private boolean isInfiniteRadiusScheduled;
++
++ private final Long2ReferenceOpenHashMap<DependencyNode> nodeByPosition = new Long2ReferenceOpenHashMap<>();
++
++ public DependencyTree(final RadiusAwarePrioritisedExecutor scheduler, final PrioritisedExecutor executor,
++ final int maxToSchedule, final int treeIndex) {
++ this.scheduler = scheduler;
++ this.executor = executor;
++ this.maxToSchedule = maxToSchedule;
++ this.treeIndex = treeIndex;
++ }
++
++ public boolean hasWaitingTasks() {
++ return !this.awaiting.isEmpty() || !this.infiniteRadius.isEmpty();
++ }
++
++ private long nextId() {
++ return this.idGenerator++;
++ }
++
++ private boolean isExecutingAnyTasks() {
++ return this.currentlyExecuting != 0;
++ }
++
++ private void pushNode(final DependencyNode node) {
++ if (!node.task.isFiniteRadius()) {
++ this.infiniteRadius.add(node);
++ return;
++ }
++
++ // set up dependency for node
++ final Task task = node.task;
++
++ final int centerX = task.chunkX;
++ final int centerZ = task.chunkZ;
++ final int radius = task.radius;
++
++ final int minX = centerX - radius;
++ final int maxX = centerX + radius;
++
++ final int minZ = centerZ - radius;
++ final int maxZ = centerZ + radius;
++
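++            // claim every chunk position in the square; any node previously claiming a position
++            // becomes a parent of this node, and this node registers itself as that node's child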
++ ReferenceOpenHashSet<DependencyNode> parents = null;
++ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
++ for (int currX = minX; currX <= maxX; ++currX) {
++ final DependencyNode dependency = this.nodeByPosition.put(CoordinateUtils.getChunkKey(currX, currZ), node);
++ if (dependency != null) {
++ if (parents == null) {
++ parents = new ReferenceOpenHashSet<>();
++ }
++ if (parents.add(dependency)) {
++ // added a dependency, so we need to add as a child to the dependency
++ if (dependency.children == null) {
++ dependency.children = new ArrayList<>();
++ }
++ dependency.children.add(node);
++ }
++ }
++ }
++ }
++
++ if (parents == null) {
++ // no dependencies, add straight to awaiting
++ this.awaiting.add(node);
++ } else {
++ node.parents = parents.size();
++ // we will be added to awaiting once we have no parents
++ }
++ }
++
++ // called only when a node is returned after being executed
++ private List<PrioritisedExecutor.PrioritisedTask> returnNode(final DependencyNode node) {
++ final Task task = node.task;
++
++ // now that the task is completed, we can push its children to the awaiting queue
++ this.pushChildren(node);
++
++ if (task.isFiniteRadius()) {
++ // remove from dependency map
++ this.removeNodeFromMap(node);
++ } else {
++ // mark as no longer executing infinite radius
++ if (!this.isInfiniteRadiusScheduled) {
++ throw new IllegalStateException();
++ }
++ this.isInfiniteRadiusScheduled = false;
++ }
++
++ // decrement executing count, we are done executing this task
++ --this.currentlyExecuting;
++
++ if (this.currentlyExecuting == 0) {
++ return this.scheduler.treeFinished();
++ }
++
++ return this.scheduler.canQueueTasks() ? this.tryPushTasks() : null;
++ }
++
++ private List<PrioritisedExecutor.PrioritisedTask> tryPushTasks() {
++ // tasks are not queued, but only created here - we do hold the lock for the map
++ List<PrioritisedExecutor.PrioritisedTask> ret = null;
++ PrioritisedExecutor.PrioritisedTask pushedTask;
++ while ((pushedTask = this.tryPushTask()) != null) {
++ if (ret == null) {
++ ret = new ArrayList<>();
++ }
++ ret.add(pushedTask);
++ }
++
++ return ret;
++ }
++
++ private void removeNodeFromMap(final DependencyNode node) {
++ final Task task = node.task;
++
++ final int centerX = task.chunkX;
++ final int centerZ = task.chunkZ;
++ final int radius = task.radius;
++
++ final int minX = centerX - radius;
++ final int maxX = centerX + radius;
++
++ final int minZ = centerZ - radius;
++ final int maxZ = centerZ + radius;
++
++ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
++ for (int currX = minX; currX <= maxX; ++currX) {
++ this.nodeByPosition.remove(CoordinateUtils.getChunkKey(currX, currZ), node);
++ }
++ }
++ }
++
++ private void pushChildren(final DependencyNode node) {
++ // add all the children that we can into awaiting
++ final List<DependencyNode> children = node.children;
++ if (children != null) {
++ for (int i = 0, len = children.size(); i < len; ++i) {
++ final DependencyNode child = children.get(i);
++ int newParents = --child.parents;
++ if (newParents == 0) {
++                    // no unfinished dependencies remain, so the child can be pushed to awaiting
++ // even if the child is purged, we need to push it so that its children will be pushed
++ this.awaiting.add(child);
++ } else if (newParents < 0) {
++ throw new IllegalStateException();
++ }
++ }
++ }
++ }
++
++ private DependencyNode pollAwaiting() {
++ final DependencyNode ret = this.awaiting.poll();
++ if (ret == null) {
++ return ret;
++ }
++
++ if (ret.parents != 0) {
++ throw new IllegalStateException();
++ }
++
++ if (ret.purged) {
++ // need to manually remove from state here
++ this.pushChildren(ret);
++ this.removeNodeFromMap(ret);
++ } // else: delay children push until the task has finished
++
++ return ret;
++ }
++
++ private DependencyNode pollInfinite() {
++ return this.infiniteRadius.poll();
++ }
++
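++        // selects the next node to execute, if any: the oldest non-purged head of the finite and
++        // infinite queues wins (by submission id), and an infinite-radius task is only scheduled
++        // once no other task is executing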
++ public PrioritisedExecutor.PrioritisedTask tryPushTask() {
++ if (this.currentlyExecuting >= this.maxToSchedule || this.isInfiniteRadiusScheduled) {
++ return null;
++ }
++
++ DependencyNode firstInfinite;
++ while ((firstInfinite = this.infiniteRadius.peek()) != null && firstInfinite.purged) {
++ this.pollInfinite();
++ }
++
++ DependencyNode firstAwaiting;
++ while ((firstAwaiting = this.awaiting.peek()) != null && firstAwaiting.purged) {
++ this.pollAwaiting();
++ }
++
++ if (firstInfinite == null && firstAwaiting == null) {
++ return null;
++ }
++
++ // firstAwaiting compared to firstInfinite
++ final int compare;
++
++ if (firstAwaiting == null) {
++            // no awaiting task, so we choose the first infinite task (infinite < awaiting)
++ compare = 1;
++ } else if (firstInfinite == null) {
++            // no infinite task, so we choose the first awaiting task (awaiting < infinite)
++ compare = -1;
++ } else {
++ compare = DEPENDENCY_NODE_COMPARATOR.compare(firstAwaiting, firstInfinite);
++ }
++
++ if (compare >= 0) {
++ if (this.currentlyExecuting != 0) {
++ // don't queue infinite task while other tasks are executing in parallel
++ return null;
++ }
++ ++this.currentlyExecuting;
++ this.pollInfinite();
++ this.isInfiniteRadiusScheduled = true;
++ return firstInfinite.task.pushTask(this.executor);
++ } else {
++ ++this.currentlyExecuting;
++ this.pollAwaiting();
++ return firstAwaiting.task.pushTask(this.executor);
++ }
++ }
++ }
++
++ private static final class DependencyNode {
++
++ private final Task task;
++ private final DependencyTree tree;
++
++ // dependency tree fields
++ // (must hold lock on the scheduler to use)
++        // null is the same as empty; we only use null so that the list is not allocated unless needed
++ private List<DependencyNode> children;
++ // 0 indicates that this task is considered "awaiting"
++ private int parents;
++ // false -> scheduled and not cancelled
++ // true -> scheduled but cancelled
++ private boolean purged;
++ private final long id;
++
++ public DependencyNode(final Task task, final DependencyTree tree) {
++ this.task = task;
++ this.id = tree.nextId();
++ this.tree = tree;
++ }
++ }
++
++ private static final class Task implements PrioritisedExecutor.PrioritisedTask, Runnable {
++
++ // task specific fields
++ private final RadiusAwarePrioritisedExecutor scheduler;
++ private final int chunkX;
++ private final int chunkZ;
++ private final int radius;
++ private Runnable run;
++ private PrioritisedExecutor.Priority priority;
++
++ private DependencyNode dependencyNode;
++ private PrioritisedExecutor.PrioritisedTask queuedTask;
++
++ private Task(final RadiusAwarePrioritisedExecutor scheduler, final int chunkX, final int chunkZ, final int radius,
++ final Runnable run, final PrioritisedExecutor.Priority priority) {
++ this.scheduler = scheduler;
++ this.chunkX = chunkX;
++ this.chunkZ = chunkZ;
++ this.radius = radius;
++ this.run = run;
++ this.priority = priority;
++ }
++
++ private boolean isFiniteRadius() {
++ return this.radius >= 0;
++ }
++
++ private PrioritisedExecutor.PrioritisedTask pushTask(final PrioritisedExecutor executor) {
++ return this.queuedTask = executor.createTask(this, this.priority);
++ }
++
++ private void executeTask() {
++ final Runnable run = this.run;
++ this.run = null;
++ run.run();
++ }
++
++ private static void scheduleTasks(final List<PrioritisedExecutor.PrioritisedTask> toSchedule) {
++ if (toSchedule != null) {
++ for (int i = 0, len = toSchedule.size(); i < len; ++i) {
++ toSchedule.get(i).queue();
++ }
++ }
++ }
++
++ private void returnNode() {
++ final List<PrioritisedExecutor.PrioritisedTask> toSchedule;
++ synchronized (this.scheduler) {
++ final DependencyNode node = this.dependencyNode;
++ this.dependencyNode = null;
++ toSchedule = node.tree.returnNode(node);
++ }
++
++ scheduleTasks(toSchedule);
++ }
++
++ @Override
++ public void run() {
++ final Runnable run = this.run;
++ this.run = null;
++ try {
++ run.run();
++ } finally {
++ this.returnNode();
++ }
++ }
++
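++        // enters this task into the radius-aware scheduler; returns false if the task was
++        // already queued or has already completed/been cancelled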
++ @Override
++ public boolean queue() {
++ final List<PrioritisedExecutor.PrioritisedTask> toSchedule;
++ synchronized (this.scheduler) {
++ if (this.queuedTask != null || this.dependencyNode != null || this.priority == PrioritisedExecutor.Priority.COMPLETING) {
++ return false;
++ }
++
++ toSchedule = this.scheduler.queue(this, this.priority);
++ }
++
++ scheduleTasks(toSchedule);
++ return true;
++ }
++
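++        // if the task has not yet been pushed to the underlying executor, cancelling only purges
++        // its dependency node; otherwise cancellation is delegated to the executor's task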
++ @Override
++ public boolean cancel() {
++ final PrioritisedExecutor.PrioritisedTask task;
++ synchronized (this.scheduler) {
++ if ((task = this.queuedTask) == null) {
++ if (this.priority == PrioritisedExecutor.Priority.COMPLETING) {
++ return false;
++ }
++
++ this.priority = PrioritisedExecutor.Priority.COMPLETING;
++ if (this.dependencyNode != null) {
++ this.dependencyNode.purged = true;
++ this.dependencyNode = null;
++ }
++
++ return true;
++ }
++ }
++
++ if (task.cancel()) {
++ // must manually return the node
++ this.run = null;
++ this.returnNode();
++ return true;
++ }
++ return false;
++ }
++
++ @Override
++ public boolean execute() {
++ final PrioritisedExecutor.PrioritisedTask task;
++ synchronized (this.scheduler) {
++ if ((task = this.queuedTask) == null) {
++ if (this.priority == PrioritisedExecutor.Priority.COMPLETING) {
++ return false;
++ }
++
++ this.priority = PrioritisedExecutor.Priority.COMPLETING;
++ if (this.dependencyNode != null) {
++ this.dependencyNode.purged = true;
++ this.dependencyNode = null;
++ }
++ // fall through to execution logic
++ }
++ }
++
++ if (task != null) {
++ // will run the return node logic automatically
++ return task.execute();
++ } else {
++ // don't run node removal/insertion logic, we aren't actually removed from the dependency tree
++ this.executeTask();
++ return true;
++ }
++ }
++
++ @Override
++ public PrioritisedExecutor.Priority getPriority() {
++ final PrioritisedExecutor.PrioritisedTask task;
++ synchronized (this.scheduler) {
++ if ((task = this.queuedTask) == null) {
++ return this.priority;
++ }
++ }
++
++ return task.getPriority();
++ }
++
++ @Override
++ public boolean setPriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++
++ final PrioritisedExecutor.PrioritisedTask task;
++ List<PrioritisedExecutor.PrioritisedTask> toSchedule = null;
++ synchronized (this.scheduler) {
++ if ((task = this.queuedTask) == null) {
++ if (this.priority == PrioritisedExecutor.Priority.COMPLETING) {
++ return false;
++ }
++
++ if (this.priority == priority) {
++ return true;
++ }
++
++ this.priority = priority;
++ if (this.dependencyNode != null) {
++ // need to re-insert node
++ this.dependencyNode.purged = true;
++ this.dependencyNode = null;
++ toSchedule = this.scheduler.queue(this, priority);
++ }
++ }
++ }
++
++ if (task != null) {
++ return task.setPriority(priority);
++ }
++
++ scheduleTasks(toSchedule);
++
++ return true;
++ }
++
++ @Override
++ public boolean raisePriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++
++ final PrioritisedExecutor.PrioritisedTask task;
++ List<PrioritisedExecutor.PrioritisedTask> toSchedule = null;
++ synchronized (this.scheduler) {
++ if ((task = this.queuedTask) == null) {
++ if (this.priority == PrioritisedExecutor.Priority.COMPLETING) {
++ return false;
++ }
++
++ if (this.priority.isHigherOrEqualPriority(priority)) {
++ return true;
++ }
++
++ this.priority = priority;
++ if (this.dependencyNode != null) {
++ // need to re-insert node
++ this.dependencyNode.purged = true;
++ this.dependencyNode = null;
++ toSchedule = this.scheduler.queue(this, priority);
++ }
++ }
++ }
++
++ if (task != null) {
++ return task.raisePriority(priority);
++ }
++
++ scheduleTasks(toSchedule);
++
++ return true;
++ }
++
++ @Override
++ public boolean lowerPriority(final PrioritisedExecutor.Priority priority) {
++ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
++ throw new IllegalArgumentException("Invalid priority " + priority);
++ }
++
++ final PrioritisedExecutor.PrioritisedTask task;
++ List<PrioritisedExecutor.PrioritisedTask> toSchedule = null;
++ synchronized (this.scheduler) {
++ if ((task = this.queuedTask) == null) {
++ if (this.priority == PrioritisedExecutor.Priority.COMPLETING) {
++ return false;
++ }
++
++ if (this.priority.isLowerOrEqualPriority(priority)) {
++ return true;
++ }
++
++ this.priority = priority;
++ if (this.dependencyNode != null) {
++ // need to re-insert node
++ this.dependencyNode.purged = true;
++ this.dependencyNode = null;
++ toSchedule = this.scheduler.queue(this, priority);
++ }
++ }
++ }
++
++ if (task != null) {
++ return task.lowerPriority(priority);
++ }
++
++ scheduleTasks(toSchedule);
++
++ return true;
++ }
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/command/PaperCommand.java b/src/main/java/io/papermc/paper/command/PaperCommand.java
+index e47fb2aa5e885162cae5cbfc9f33ff7864bf538e..b68b37274f22c2a89d723aec4d1c6be813eef73c 100644
+--- a/src/main/java/io/papermc/paper/command/PaperCommand.java
++++ b/src/main/java/io/papermc/paper/command/PaperCommand.java
+@@ -43,6 +43,7 @@ public final class PaperCommand extends Command {
+ commands.put(Set.of("mobcaps", "playermobcaps"), new MobcapsCommand());
+ commands.put(Set.of("dumplisteners"), new DumpListenersCommand());
+ commands.put(Set.of("fixlight"), new FixLightCommand());
++ commands.put(Set.of("debug", "chunkinfo", "holderinfo"), new ChunkDebugCommand());
+
+ return commands.entrySet().stream()
+ .flatMap(entry -> entry.getKey().stream().map(s -> Map.entry(s, entry.getValue())))
+diff --git a/src/main/java/io/papermc/paper/command/subcommands/ChunkDebugCommand.java b/src/main/java/io/papermc/paper/command/subcommands/ChunkDebugCommand.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..962d3cae6340fc11607b59355e291629618f289c
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/command/subcommands/ChunkDebugCommand.java
+@@ -0,0 +1,265 @@
++package io.papermc.paper.command.subcommands;
++
++import io.papermc.paper.command.CommandUtil;
++import io.papermc.paper.command.PaperSubcommand;
++import java.io.File;
++import java.time.LocalDateTime;
++import java.time.format.DateTimeFormatter;
++import java.util.ArrayList;
++import java.util.Collections;
++import java.util.List;
++import java.util.Locale;
++import io.papermc.paper.util.MCUtil;
++import net.minecraft.server.MinecraftServer;
++import net.minecraft.server.level.ChunkHolder;
++import net.minecraft.server.level.FullChunkStatus;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.world.level.chunk.ChunkAccess;
++import net.minecraft.world.level.chunk.ImposterProtoChunk;
++import net.minecraft.world.level.chunk.LevelChunk;
++import net.minecraft.world.level.chunk.ProtoChunk;
++import org.bukkit.Bukkit;
++import org.bukkit.command.CommandSender;
++import org.bukkit.craftbukkit.CraftWorld;
++import org.checkerframework.checker.nullness.qual.NonNull;
++import org.checkerframework.checker.nullness.qual.Nullable;
++import org.checkerframework.framework.qual.DefaultQualifier;
++
++import static net.kyori.adventure.text.Component.text;
++import static net.kyori.adventure.text.format.NamedTextColor.BLUE;
++import static net.kyori.adventure.text.format.NamedTextColor.DARK_AQUA;
++import static net.kyori.adventure.text.format.NamedTextColor.GREEN;
++import static net.kyori.adventure.text.format.NamedTextColor.RED;
++
++@DefaultQualifier(NonNull.class)
++public final class ChunkDebugCommand implements PaperSubcommand {
++ @Override
++ public boolean execute(final CommandSender sender, final String subCommand, final String[] args) {
++ switch (subCommand) {
++ case "debug" -> this.doDebug(sender, args);
++ case "chunkinfo" -> this.doChunkInfo(sender, args);
++ case "holderinfo" -> this.doHolderInfo(sender, args);
++ }
++ return true;
++ }
++
++ @Override
++ public List<String> tabComplete(final CommandSender sender, final String subCommand, final String[] args) {
++ switch (subCommand) {
++ case "debug" -> {
++ if (args.length == 1) {
++ return CommandUtil.getListMatchingLast(sender, args, "help", "chunks");
++ }
++ }
++            case "holderinfo", "chunkinfo" -> {
++                List<String> worldNames = new ArrayList<>();
++                worldNames.add("*");
++                for (org.bukkit.World world : Bukkit.getWorlds()) {
++                    worldNames.add(world.getName());
++                }
++                if (args.length == 1) {
++                    return CommandUtil.getListMatchingLast(sender, args, worldNames);
++                }
++            }
++ }
++ return Collections.emptyList();
++ }
++
++ private void doChunkInfo(final CommandSender sender, final String[] args) {
++ List<org.bukkit.World> worlds;
++ if (args.length < 1 || args[0].equals("*")) {
++ worlds = Bukkit.getWorlds();
++ } else {
++ worlds = new ArrayList<>(args.length);
++ for (final String arg : args) {
++ org.bukkit.@Nullable World world = Bukkit.getWorld(arg);
++ if (world == null) {
++ sender.sendMessage(text("World '" + arg + "' is invalid", RED));
++ return;
++ }
++ worlds.add(world);
++ }
++ }
++
++ int accumulatedTotal = 0;
++ int accumulatedInactive = 0;
++ int accumulatedBorder = 0;
++ int accumulatedTicking = 0;
++ int accumulatedEntityTicking = 0;
++
++ for (final org.bukkit.World bukkitWorld : worlds) {
++ final ServerLevel world = ((CraftWorld) bukkitWorld).getHandle();
++
++ int total = 0;
++ int inactive = 0;
++ int full = 0;
++ int blockTicking = 0;
++ int entityTicking = 0;
++
++ for (final ChunkHolder chunk : io.papermc.paper.chunk.system.ChunkSystem.getVisibleChunkHolders(world)) {
++ if (chunk.getFullChunkNowUnchecked() == null) {
++ continue;
++ }
++
++ ++total;
++
++ FullChunkStatus state = chunk.getFullStatus();
++
++ switch (state) {
++ case INACCESSIBLE -> ++inactive;
++ case FULL -> ++full;
++ case BLOCK_TICKING -> ++blockTicking;
++ case ENTITY_TICKING -> ++entityTicking;
++ }
++ }
++
++ accumulatedTotal += total;
++ accumulatedInactive += inactive;
++ accumulatedBorder += full;
++ accumulatedTicking += blockTicking;
++ accumulatedEntityTicking += entityTicking;
++
++ sender.sendMessage(text().append(text("Chunks in ", BLUE), text(bukkitWorld.getName(), GREEN), text(":")));
++ sender.sendMessage(text().color(DARK_AQUA).append(
++ text("Total: ", BLUE), text(total),
++ text(" Inactive: ", BLUE), text(inactive),
++ text(" Full: ", BLUE), text(full),
++ text(" Block Ticking: ", BLUE), text(blockTicking),
++ text(" Entity Ticking: ", BLUE), text(entityTicking)
++ ));
++ }
++ if (worlds.size() > 1) {
++ sender.sendMessage(text().append(text("Chunks in ", BLUE), text("all listed worlds", GREEN), text(":", DARK_AQUA)));
++ sender.sendMessage(text().color(DARK_AQUA).append(
++ text("Total: ", BLUE), text(accumulatedTotal),
++ text(" Inactive: ", BLUE), text(accumulatedInactive),
++ text(" Full: ", BLUE), text(accumulatedBorder),
++ text(" Block Ticking: ", BLUE), text(accumulatedTicking),
++ text(" Entity Ticking: ", BLUE), text(accumulatedEntityTicking)
++ ));
++ }
++ }
++
++ private void doHolderInfo(final CommandSender sender, final String[] args) {
++ List<org.bukkit.World> worlds;
++ if (args.length < 1 || args[0].equals("*")) {
++ worlds = Bukkit.getWorlds();
++ } else {
++ worlds = new ArrayList<>(args.length);
++ for (final String arg : args) {
++ org.bukkit.@Nullable World world = Bukkit.getWorld(arg);
++ if (world == null) {
++ sender.sendMessage(text("World '" + arg + "' is invalid", RED));
++ return;
++ }
++ worlds.add(world);
++ }
++ }
++
++ int accumulatedTotal = 0;
++ int accumulatedCanUnload = 0;
++ int accumulatedNull = 0;
++ int accumulatedReadOnly = 0;
++ int accumulatedProtoChunk = 0;
++ int accumulatedFullChunk = 0;
++
++ for (final org.bukkit.World bukkitWorld : worlds) {
++ final ServerLevel world = ((CraftWorld) bukkitWorld).getHandle();
++
++ int total = 0;
++ int canUnload = 0;
++ int nullChunks = 0;
++ int readOnly = 0;
++ int protoChunk = 0;
++ int fullChunk = 0;
++
++ for (final ChunkHolder chunk : world.chunkTaskScheduler.chunkHolderManager.getOldChunkHolders()) { // Paper - change updating chunks map
++ final ChunkAccess lastChunk = chunk.getAvailableChunkNow();
++
++ ++total;
++
++ if (lastChunk == null) {
++ ++nullChunks;
++ } else if (lastChunk instanceof ImposterProtoChunk) {
++ ++readOnly;
++ } else if (lastChunk instanceof ProtoChunk) {
++ ++protoChunk;
++ } else if (lastChunk instanceof LevelChunk) {
++ ++fullChunk;
++ }
++
++ if (chunk.newChunkHolder.isSafeToUnload() == null) {
++ ++canUnload;
++ }
++ }
++
++ accumulatedTotal += total;
++ accumulatedCanUnload += canUnload;
++ accumulatedNull += nullChunks;
++ accumulatedReadOnly += readOnly;
++ accumulatedProtoChunk += protoChunk;
++ accumulatedFullChunk += fullChunk;
++
++ sender.sendMessage(text().append(text("Chunks in ", BLUE), text(bukkitWorld.getName(), GREEN), text(":")));
++ sender.sendMessage(text().color(DARK_AQUA).append(
++ text("Total: ", BLUE), text(total),
++ text(" Unloadable: ", BLUE), text(canUnload),
++ text(" Null: ", BLUE), text(nullChunks),
++ text(" ReadOnly: ", BLUE), text(readOnly),
++ text(" Proto: ", BLUE), text(protoChunk),
++ text(" Full: ", BLUE), text(fullChunk)
++ ));
++ }
++ if (worlds.size() > 1) {
++ sender.sendMessage(text().append(text("Chunks in ", BLUE), text("all listed worlds", GREEN), text(":", DARK_AQUA)));
++ sender.sendMessage(text().color(DARK_AQUA).append(
++ text("Total: ", BLUE), text(accumulatedTotal),
++ text(" Unloadable: ", BLUE), text(accumulatedCanUnload),
++ text(" Null: ", BLUE), text(accumulatedNull),
++ text(" ReadOnly: ", BLUE), text(accumulatedReadOnly),
++ text(" Proto: ", BLUE), text(accumulatedProtoChunk),
++ text(" Full: ", BLUE), text(accumulatedFullChunk)
++ ));
++ }
++ }
++
++ private void doDebug(final CommandSender sender, final String[] args) {
++ if (args.length < 1) {
++ sender.sendMessage(text("Use /paper debug [chunks] help for more information on a specific command", RED));
++ return;
++ }
++
++ final String debugType = args[0].toLowerCase(Locale.ENGLISH);
++ switch (debugType) {
++ case "chunks" -> {
++ if (args.length >= 2 && args[1].toLowerCase(Locale.ENGLISH).equals("help")) {
++ sender.sendMessage(text("Use /paper debug chunks [world] to dump loaded chunk information to a file", RED));
++ break;
++ }
++ File file = new File(new File(new File("."), "debug"),
++ "chunks-" + DateTimeFormatter.ofPattern("yyyy-MM-dd_HH.mm.ss").format(LocalDateTime.now()) + ".txt");
++ sender.sendMessage(text("Writing chunk information dump to " + file, GREEN));
++ try {
++ MCUtil.dumpChunks(file, false);
++                    sender.sendMessage(text("Successfully wrote chunk information!", GREEN));
++ } catch (Throwable thr) {
++ MinecraftServer.LOGGER.warn("Failed to dump chunk information to file " + file.toString(), thr);
++ sender.sendMessage(text("Failed to dump chunk information, see console", RED));
++ }
++ }
++ // "help" & default
++ default -> sender.sendMessage(text("Use /paper debug [chunks] help for more information on a specific command", RED));
++ }
++ }
++
++}
+diff --git a/src/main/java/io/papermc/paper/configuration/GlobalConfiguration.java b/src/main/java/io/papermc/paper/configuration/GlobalConfiguration.java
+index 4de88f74182bb596c6b5ad0351cc0dacefd0ce96..2874bc3001c4e7d9191e47ba512c5a68369c21f1 100644
+--- a/src/main/java/io/papermc/paper/configuration/GlobalConfiguration.java
++++ b/src/main/java/io/papermc/paper/configuration/GlobalConfiguration.java
+@@ -29,6 +29,45 @@ public class GlobalConfiguration extends ConfigurationPart {
+ public static GlobalConfiguration get() {
+ return instance;
+ }
++
++ public ChunkLoadingBasic chunkLoadingBasic;
++
++ public class ChunkLoadingBasic extends ConfigurationPart {
++ @Comment("The maximum rate in chunks per second that the server will send to any individual player. Set to -1 to disable this limit.")
++ public double playerMaxChunkSendRate = 75.0;
++
++ @Comment(
++ "The maximum rate at which chunks will load for any individual player. " +
++            "Note that this setting also affects chunk generations, since a chunk load is always first issued to test if a " +
++ "chunk is already generated. Set to -1 to disable this limit."
++ )
++ public double playerMaxChunkLoadRate = 100.0;
++
++ @Comment("The maximum rate at which chunks will generate for any individual player. Set to -1 to disable this limit.")
++ public double playerMaxChunkGenerateRate = -1.0;
++ }
++
++ public ChunkLoadingAdvanced chunkLoadingAdvanced;
++
++ public class ChunkLoadingAdvanced extends ConfigurationPart {
++ @Comment(
++            "Set to true if the server will match the chunk send radius that clients have configured " +
++            "in their view distance settings, if the client's view distance is less than the server's send distance."
++ )
++ public boolean autoConfigSendDistance = true;
++
++ @Comment(
++            "Specifies the maximum number of concurrent chunk loads that an individual player can have. " +
++ "Set to 0 to let the server configure it automatically per player, or set it to -1 to disable the limit."
++ )
++ public int playerMaxConcurrentChunkLoads = 0;
++
++ @Comment(
++            "Specifies the maximum number of concurrent chunk generations that an individual player can have. " +
++ "Set to 0 to let the server configure it automatically per player, or set it to -1 to disable the limit."
++ )
++ public int playerMaxConcurrentChunkGenerates = 0;
++ }
+ static void set(GlobalConfiguration instance) {
+ GlobalConfiguration.instance = instance;
+ }
+@@ -130,21 +169,6 @@ public class GlobalConfiguration extends ConfigurationPart {
+ public int incomingPacketThreshold = 300;
+ }
+
+- public ChunkLoading chunkLoading;
+-
+- public class ChunkLoading extends ConfigurationPart {
+- public int minLoadRadius = 2;
+- public int maxConcurrentSends = 2;
+- public boolean autoconfigSendDistance = true;
+- public double targetPlayerChunkSendRate = 100.0;
+- public double globalMaxChunkSendRate = -1.0;
+- public boolean enableFrustumPriority = false;
+- public double globalMaxChunkLoadRate = -1.0;
+- public double playerMaxConcurrentLoads = 20.0;
+- public double globalMaxConcurrentLoads = 500.0;
+- public double playerMaxChunkLoadRate = -1.0;
+- }
+-
+ public UnsupportedSettings unsupportedSettings;
+
+ public class UnsupportedSettings extends ConfigurationPart {
+@@ -201,7 +225,7 @@ public class GlobalConfiguration extends ConfigurationPart {
+
+ @PostProcess
+ private void postProcess() {
+- //io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler.init(this);
++ io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler.init(this);
+ }
+ }
+
+diff --git a/src/main/java/io/papermc/paper/threadedregions/TickRegions.java b/src/main/java/io/papermc/paper/threadedregions/TickRegions.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..d5d39e9c1f326e91010237b0db80d527ac52f4d6
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/threadedregions/TickRegions.java
+@@ -0,0 +1,9 @@
++package io.papermc.paper.threadedregions;
++
++// placeholder class for Folia
++public class TickRegions {
++
++ public static int getRegionChunkShift() {
++ return 4;
++ }
++}
+diff --git a/src/main/java/io/papermc/paper/util/IntervalledCounter.java b/src/main/java/io/papermc/paper/util/IntervalledCounter.java
+index cea9c098ade00ee87b8efc8164ab72f5279758f0..197224e31175252d8438a8df585bbb65f2288d7f 100644
+--- a/src/main/java/io/papermc/paper/util/IntervalledCounter.java
++++ b/src/main/java/io/papermc/paper/util/IntervalledCounter.java
+@@ -2,6 +2,8 @@ package io.papermc.paper.util;
+
+ public final class IntervalledCounter {
+
++ private static final int INITIAL_SIZE = 8;
++
+ protected long[] times;
+ protected long[] counts;
+ protected final long interval;
+@@ -11,8 +13,8 @@ public final class IntervalledCounter {
+ protected int tail; // exclusive
+
+ public IntervalledCounter(final long interval) {
+- this.times = new long[8];
+- this.counts = new long[8];
++ this.times = new long[INITIAL_SIZE];
++ this.counts = new long[INITIAL_SIZE];
+ this.interval = interval;
+ }
+
+@@ -67,13 +69,13 @@ public final class IntervalledCounter {
+ this.tail = nextTail;
+ }
+
+- public void updateAndAdd(final int count) {
++ public void updateAndAdd(final long count) {
+ final long currTime = System.nanoTime();
+ this.updateCurrentTime(currTime);
+ this.addTime(currTime, count);
+ }
+
+- public void updateAndAdd(final int count, final long currTime) {
++ public void updateAndAdd(final long count, final long currTime) {
+ this.updateCurrentTime(currTime);
+ this.addTime(currTime, count);
+ }
+@@ -93,9 +95,13 @@ public final class IntervalledCounter {
+ this.tail = size;
+
+ if (tail >= head) {
++ // sequentially ordered from [head, tail)
+ System.arraycopy(oldElements, head, newElements, 0, size);
+ System.arraycopy(oldCounts, head, newCounts, 0, size);
+ } else {
++ // ordered from [head, length)
++ // then followed by [0, tail)
++
+ System.arraycopy(oldElements, head, newElements, 0, oldElements.length - head);
+ System.arraycopy(oldElements, 0, newElements, oldElements.length - head, tail);
+
+@@ -106,10 +112,18 @@ public final class IntervalledCounter {
+
+ // returns in units per second
+ public double getRate() {
+- return this.size() / (this.interval * 1.0e-9);
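++        // uses the summed counts rather than the number of samples, since a single sample
++        // may record more than one unit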
++ return (double)this.sum / ((double)this.interval * 1.0E-9);
++ }
++
++ public long getInterval() {
++ return this.interval;
+ }
+
+- public long size() {
++ public long getSum() {
+ return this.sum;
+ }
++
++ public int totalDataPoints() {
++ return this.tail >= this.head ? (this.tail - this.head) : (this.tail + (this.counts.length - this.head));
++ }
+ }
+diff --git a/src/main/java/io/papermc/paper/util/MCUtil.java b/src/main/java/io/papermc/paper/util/MCUtil.java
+index c95a0af32178fe24448a5ae7a229c86ec883e8de..1d6b3fe2ce240af4ede61588795456b046eee6c9 100644
+--- a/src/main/java/io/papermc/paper/util/MCUtil.java
++++ b/src/main/java/io/papermc/paper/util/MCUtil.java
+@@ -7,17 +7,30 @@ import com.google.common.util.concurrent.ThreadFactoryBuilder;
+ import io.papermc.paper.math.BlockPosition;
+ import io.papermc.paper.math.FinePosition;
+ import io.papermc.paper.math.Position;
++import com.google.gson.JsonArray;
++import com.google.gson.JsonObject;
++import com.google.gson.internal.Streams;
++import com.google.gson.stream.JsonWriter;
++import com.mojang.datafixers.util.Either;
+ import it.unimi.dsi.fastutil.objects.ObjectRBTreeSet;
+ import java.lang.ref.Cleaner;
++import it.unimi.dsi.fastutil.objects.ReferenceArrayList;
+ import net.minecraft.core.BlockPos;
+ import net.minecraft.core.Direction;
+ import net.minecraft.core.Vec3i;
+ import net.minecraft.server.MinecraftServer;
++import net.minecraft.server.level.ChunkHolder;
++import net.minecraft.server.level.ChunkMap;
++import net.minecraft.server.level.DistanceManager;
+ import net.minecraft.server.level.ServerLevel;
++import net.minecraft.server.level.ServerPlayer;
++import net.minecraft.server.level.Ticket;
+ import net.minecraft.world.entity.Entity;
+ import net.minecraft.world.level.ChunkPos;
+ import net.minecraft.world.level.ClipContext;
+ import net.minecraft.world.level.Level;
++import net.minecraft.world.level.chunk.ChunkAccess;
++import net.minecraft.world.level.chunk.status.ChunkStatus;
+ import net.minecraft.world.phys.Vec3;
+ import org.apache.commons.lang.exception.ExceptionUtils;
+ import com.mojang.authlib.GameProfile;
+@@ -30,8 +43,11 @@ import org.spigotmc.AsyncCatcher;
+
+ import javax.annotation.Nonnull;
+ import javax.annotation.Nullable;
++import java.io.*;
++import java.nio.charset.StandardCharsets;
+ import java.util.List;
+ import java.util.Queue;
++import java.util.Set;
+ import java.util.concurrent.CompletableFuture;
+ import java.util.concurrent.ExecutionException;
+ import java.util.concurrent.LinkedBlockingQueue;
+@@ -532,6 +548,98 @@ public final class MCUtil {
+ }
+ }
+
++ public static ChunkStatus getChunkStatus(ChunkHolder chunk) {
++ return chunk.getChunkHolderStatus();
++ }
++
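++    // serialises player positions, chunk wait infos and each world's chunk system debug state
++    // into an indented JSON file (the watchdog flag selects the watchdog-safe debug dump)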
++ public static void dumpChunks(File file, boolean watchdog) throws IOException {
++ file.getParentFile().mkdirs();
++ file.createNewFile();
++ ReferenceArrayList<org.bukkit.World> worlds = new ReferenceArrayList<>(org.bukkit.Bukkit.getWorlds());
++ ReferenceArrayList<org.bukkit.World> loadedWorlds = new ReferenceArrayList<>(worlds);
++ JsonObject data = new JsonObject();
++
++ data.addProperty("server-version", org.bukkit.Bukkit.getVersion());
++ data.addProperty("data-version", 1);
++
++ {
++ JsonArray players = new JsonArray();
++ data.add("all-players", players);
++ List<ServerPlayer> playerList = MinecraftServer.getServer().getPlayerList().players;
++ for (ServerPlayer player : playerList) {
++ JsonObject playerData = new JsonObject();
++ players.add(playerData);
++
++ Level playerWorld = player.level();
++                org.bukkit.World craftWorld = playerWorld == null ? null : playerWorld.getWorld();
++ Entity.RemovalReason removalReason = player.getRemovalReason();
++
++ playerData.addProperty("name", player.getScoreboardName());
++ playerData.addProperty("x", player.getX());
++ playerData.addProperty("y", player.getY());
++ playerData.addProperty("z", player.getZ());
++                playerData.addProperty("world", craftWorld == null ? "null world" : craftWorld.getName());
++ playerData.addProperty("removalReason", removalReason == null ? "null" : removalReason.name());
++
++                if (craftWorld != null && !worlds.contains(craftWorld)) {
++ worlds.add(craftWorld);
++ }
++ }
++ }
++
++ JsonArray chunkWaitInformation = new JsonArray();
++ data.add("chunk-wait-infos", chunkWaitInformation);
++
++ for (io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler.ChunkInfo chunkInfo : io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler.getChunkInfos()) {
++ chunkWaitInformation.add(chunkInfo.toString());
++ }
++
++ JsonArray worldsData = new JsonArray();
++
++ for (org.bukkit.World bukkitWorld : worlds) {
++ JsonObject worldData = new JsonObject();
++
++ ServerLevel world = ((org.bukkit.craftbukkit.CraftWorld)bukkitWorld).getHandle();
++ List<ServerPlayer> players = world.players();
++
++ worldData.addProperty("is-loaded", loadedWorlds.contains(bukkitWorld));
++ worldData.addProperty("name", world.getWorld().getName());
++ worldData.addProperty("view-distance", world.getWorld().getViewDistance()); // Paper - replace chunk loader system
++ worldData.addProperty("tick-view-distance", world.getWorld().getSimulationDistance()); // Paper - replace chunk loader system
++
++ JsonArray playersData = new JsonArray();
++
++ for (ServerPlayer player : players) {
++ JsonObject playerData = new JsonObject();
++
++ playerData.addProperty("name", player.getScoreboardName());
++ playerData.addProperty("x", player.getX());
++ playerData.addProperty("y", player.getY());
++ playerData.addProperty("z", player.getZ());
++
++ playersData.add(playerData);
++ }
++
++ worldData.add("players", playersData);
++ worldData.add("chunk-data", watchdog ? world.chunkTaskScheduler.chunkHolderManager.getDebugJsonForWatchdog() : world.chunkTaskScheduler.chunkHolderManager.getDebugJson());
++ worldsData.add(worldData);
++ }
++
++ data.add("worlds", worldsData);
++
++ StringWriter stringWriter = new StringWriter();
++ JsonWriter jsonWriter = new JsonWriter(stringWriter);
++ jsonWriter.setIndent(" ");
++ jsonWriter.setLenient(false);
++ Streams.write(data, jsonWriter);
++
++ String fileData = stringWriter.toString();
++
++ try (PrintStream out = new PrintStream(new FileOutputStream(file), false, StandardCharsets.UTF_8)) {
++ out.print(fileData);
++ }
++ }
++
+ public static int getTicketLevelFor(net.minecraft.world.level.chunk.status.ChunkStatus status) {
+ return net.minecraft.server.level.ChunkMap.MAX_VIEW_DISTANCE + net.minecraft.world.level.chunk.status.ChunkStatus.getDistance(status);
+ }
+diff --git a/src/main/java/io/papermc/paper/util/TickThread.java b/src/main/java/io/papermc/paper/util/TickThread.java
+index 73e83d56a340f0c7dcb8ff737d621003e72c6de4..bdaf062f9b66ceab303a0807eca301342886a8ea 100644
+--- a/src/main/java/io/papermc/paper/util/TickThread.java
++++ b/src/main/java/io/papermc/paper/util/TickThread.java
+@@ -1,12 +1,20 @@
+ package io.papermc.paper.util;
+
++import net.minecraft.core.BlockPos;
+ import net.minecraft.server.MinecraftServer;
+ import net.minecraft.server.level.ServerLevel;
++import net.minecraft.server.level.ServerPlayer;
++import net.minecraft.server.network.ServerGamePacketListenerImpl;
++import net.minecraft.util.Mth;
+ import net.minecraft.world.entity.Entity;
++import net.minecraft.world.level.ChunkPos;
++import net.minecraft.world.level.Level;
++import net.minecraft.world.phys.AABB;
++import net.minecraft.world.phys.Vec3;
+ import org.bukkit.Bukkit;
+ import java.util.concurrent.atomic.AtomicInteger;
+
+-public final class TickThread extends Thread {
++public class TickThread extends Thread {
+
+ public static final boolean STRICT_THREAD_CHECKS = Boolean.getBoolean("paper.strict-thread-checks");
+
+@@ -16,6 +24,10 @@ public final class TickThread extends Thread {
+ }
+ }
+
++ /**
++ * @deprecated
++ */
++ @Deprecated
+ public static void softEnsureTickThread(final String reason) {
+ if (!STRICT_THREAD_CHECKS) {
+ return;
+@@ -23,6 +35,10 @@ public final class TickThread extends Thread {
+ ensureTickThread(reason);
+ }
+
++ /**
++ * @deprecated
++ */
++ @Deprecated
+ public static void ensureTickThread(final String reason) {
+ if (!isTickThread()) {
+ MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable());
+@@ -30,6 +46,20 @@ public final class TickThread extends Thread {
+ }
+ }
+
++ public static void ensureTickThread(final ServerLevel world, final BlockPos pos, final String reason) {
++ if (!isTickThreadFor(world, pos)) {
++ MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable());
++ throw new IllegalStateException(reason);
++ }
++ }
++
++ public static void ensureTickThread(final ServerLevel world, final ChunkPos pos, final String reason) {
++ if (!isTickThreadFor(world, pos)) {
++ MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable());
++ throw new IllegalStateException(reason);
++ }
++ }
++
+ public static void ensureTickThread(final ServerLevel world, final int chunkX, final int chunkZ, final String reason) {
+ if (!isTickThreadFor(world, chunkX, chunkZ)) {
+ MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable());
+@@ -44,6 +74,20 @@ public final class TickThread extends Thread {
+ }
+ }
+
++ public static void ensureTickThread(final ServerLevel world, final AABB aabb, final String reason) {
++ if (!isTickThreadFor(world, aabb)) {
++ MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable());
++ throw new IllegalStateException(reason);
++ }
++ }
++
++ public static void ensureTickThread(final ServerLevel world, final double blockX, final double blockZ, final String reason) {
++ if (!isTickThreadFor(world, blockX, blockZ)) {
++ MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable());
++ throw new IllegalStateException(reason);
++ }
++ }
++
+ public final int id; /* We don't override getId as the spec requires that it be unique (with respect to all other threads) */
+
+ private static final AtomicInteger ID_GENERATOR = new AtomicInteger();
+@@ -66,13 +110,45 @@ public final class TickThread extends Thread {
+ }
+
+ public static boolean isTickThread() {
+- return Bukkit.isPrimaryThread();
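++        // any TickThread instance counts as a tick thread, not just the primary server thread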
++ return Thread.currentThread() instanceof TickThread;
++ }
++
++ public static boolean isShutdownThread() {
++ return false;
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final BlockPos pos) {
++ return isTickThread();
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final ChunkPos pos) {
++ return isTickThread();
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final Vec3 pos) {
++ return isTickThread();
+ }
+
+ public static boolean isTickThreadFor(final ServerLevel world, final int chunkX, final int chunkZ) {
+ return isTickThread();
+ }
+
++ public static boolean isTickThreadFor(final ServerLevel world, final AABB aabb) {
++ return isTickThread();
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final double blockX, final double blockZ) {
++ return isTickThread();
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final Vec3 position, final Vec3 deltaMovement, final int buffer) {
++ return isTickThread();
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final int fromChunkX, final int fromChunkZ, final int toChunkX, final int toChunkZ) {
++ return isTickThread();
++ }
++
+ public static boolean isTickThreadFor(final ServerLevel world, final int chunkX, final int chunkZ, final int radius) {
+ return isTickThread();
+ }
+diff --git a/src/main/java/io/papermc/paper/world/ChunkEntitySlices.java b/src/main/java/io/papermc/paper/world/ChunkEntitySlices.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..c78cbec447032de9fe69748591bef6be300160ed
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/world/ChunkEntitySlices.java
+@@ -0,0 +1,607 @@
++package io.papermc.paper.world;
++
++import com.destroystokyo.paper.util.maplist.EntityList;
++import io.papermc.paper.chunk.system.entity.EntityLookup;
++import io.papermc.paper.util.TickThread;
++import it.unimi.dsi.fastutil.objects.Reference2ObjectMap;
++import it.unimi.dsi.fastutil.objects.Reference2ObjectOpenHashMap;
++import net.minecraft.nbt.CompoundTag;
++import net.minecraft.server.level.ChunkHolder;
++import net.minecraft.server.level.FullChunkStatus;
++import net.minecraft.server.level.ServerLevel;
++import net.minecraft.util.Mth;
++import net.minecraft.world.entity.Entity;
++import net.minecraft.world.entity.EntityType;
++import net.minecraft.world.entity.boss.EnderDragonPart;
++import net.minecraft.world.entity.boss.enderdragon.EnderDragon;
++import net.minecraft.world.level.ChunkPos;
++import net.minecraft.world.level.chunk.storage.EntityStorage;
++import net.minecraft.world.level.entity.Visibility;
++import net.minecraft.world.phys.AABB;
++import org.bukkit.craftbukkit.event.CraftEventFactory;
++import java.util.ArrayList;
++import java.util.Arrays;
++import java.util.Iterator;
++import java.util.List;
++import java.util.function.Predicate;
++import org.bukkit.event.entity.EntityRemoveEvent;
++
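++/**
++ * Stores the entities of a single chunk bucketed by chunk section, so that bounding-box queries
++ * only visit the sections the box can intersect. Secondary per-section indices are kept for
++ * hard-colliding entities and, lazily, for entities of a requested class.
++ */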
++public final class ChunkEntitySlices {
++
++ protected final int minSection;
++ protected final int maxSection;
++ public final int chunkX;
++ public final int chunkZ;
++ protected final ServerLevel world;
++
++ protected final EntityCollectionBySection allEntities;
++ protected final EntityCollectionBySection hardCollidingEntities;
++ protected final Reference2ObjectOpenHashMap<Class<? extends Entity>, EntityCollectionBySection> entitiesByClass;
++ protected final EntityList entities = new EntityList();
++
++ public FullChunkStatus status;
++
++ protected boolean isTransient;
++
++ public boolean isTransient() {
++ return this.isTransient;
++ }
++
++ public void setTransient(final boolean value) {
++ this.isTransient = value;
++ }
++
++ // TODO implement container search optimisations
++
++ public ChunkEntitySlices(final ServerLevel world, final int chunkX, final int chunkZ, final FullChunkStatus status,
++ final int minSection, final int maxSection) { // inclusive, inclusive
++ this.minSection = minSection;
++ this.maxSection = maxSection;
++ this.chunkX = chunkX;
++ this.chunkZ = chunkZ;
++ this.world = world;
++
++ this.allEntities = new EntityCollectionBySection(this);
++ this.hardCollidingEntities = new EntityCollectionBySection(this);
++ this.entitiesByClass = new Reference2ObjectOpenHashMap<>();
++
++ this.status = status;
++ }
++
++ // Paper start - optimise CraftChunk#getEntities
++ public org.bukkit.entity.Entity[] getChunkEntities() {
++ List<org.bukkit.entity.Entity> ret = new java.util.ArrayList<>();
++ final Entity[] entities = this.entities.getRawData();
++ for (int i = 0, size = Math.min(entities.length, this.entities.size()); i < size; ++i) {
++ final Entity entity = entities[i];
++ if (entity == null) {
++ continue;
++ }
++ final org.bukkit.entity.Entity bukkit = entity.getBukkitEntity();
++ if (bukkit != null && bukkit.isValid()) {
++ ret.add(bukkit);
++ }
++ }
++
++ return ret.toArray(new org.bukkit.entity.Entity[0]);
++ }
++
++ public CompoundTag save() {
++ final int len = this.entities.size();
++ if (len == 0) {
++ return null;
++ }
++
++ final Entity[] rawData = this.entities.getRawData();
++ final List<Entity> collectedEntities = new ArrayList<>(len);
++ for (int i = 0; i < len; ++i) {
++ final Entity entity = rawData[i];
++ if (entity.shouldBeSaved()) {
++ collectedEntities.add(entity);
++ }
++ }
++
++ if (collectedEntities.isEmpty()) {
++ return null;
++ }
++
++ return EntityStorage.saveEntityChunk(collectedEntities, new ChunkPos(this.chunkX, this.chunkZ), this.world);
++ }
++
++ // returns true if this chunk has transient entities remaining
++ public boolean unload() {
++ final int len = this.entities.size();
++ final Entity[] collectedEntities = Arrays.copyOf(this.entities.getRawData(), len);
++
++ for (int i = 0; i < len; ++i) {
++ final Entity entity = collectedEntities[i];
++ if (entity.isRemoved()) {
++ // removed by us below
++ continue;
++ }
++ if (entity.shouldBeSaved()) {
++ entity.setRemoved(Entity.RemovalReason.UNLOADED_TO_CHUNK, EntityRemoveEvent.Cause.UNLOAD);
++ if (entity.isVehicle()) {
++ // we cannot assume that these entities are contained within this chunk, because entities can
++ // desync - so we need to remove them all
++ for (final Entity passenger : entity.getIndirectPassengers()) {
++ passenger.setRemoved(Entity.RemovalReason.UNLOADED_TO_CHUNK, EntityRemoveEvent.Cause.UNLOAD);
++ }
++ }
++ }
++ }
++
++ return this.entities.size() != 0;
++ }
++
++ private List<Entity> getAllEntities() {
++ final int len = this.entities.size();
++ if (len == 0) {
++ return new ArrayList<>();
++ }
++
++ final Entity[] rawData = this.entities.getRawData();
++ final List<Entity> collectedEntities = new ArrayList<>(len);
++ for (int i = 0; i < len; ++i) {
++ collectedEntities.add(rawData[i]);
++ }
++
++ return collectedEntities;
++ }
++
++ public void callEntitiesLoadEvent() {
++ CraftEventFactory.callEntitiesLoadEvent(this.world, new ChunkPos(this.chunkX, this.chunkZ), this.getAllEntities());
++ }
++
++ public void callEntitiesUnloadEvent() {
++ CraftEventFactory.callEntitiesUnloadEvent(this.world, new ChunkPos(this.chunkX, this.chunkZ), this.getAllEntities());
++ }
++ // Paper end - optimise CraftChunk#getEntities
++
++ public boolean isEmpty() {
++ return this.entities.size() == 0;
++ }
++
++ public void mergeInto(final ChunkEntitySlices slices) {
++ final Entity[] entities = this.entities.getRawData();
++ for (int i = 0, size = Math.min(entities.length, this.entities.size()); i < size; ++i) {
++ final Entity entity = entities[i];
++ slices.addEntity(entity, entity.sectionY);
++ }
++ }
++
++ private boolean preventStatusUpdates;
++ public boolean startPreventingStatusUpdates() {
++ final boolean ret = this.preventStatusUpdates;
++ this.preventStatusUpdates = true;
++ return ret;
++ }
++
++ public boolean isPreventingStatusUpdates() {
++ return this.preventStatusUpdates;
++ }
++
++ public void stopPreventingStatusUpdates(final boolean prev) {
++ this.preventStatusUpdates = prev;
++ }
++
++ public void updateStatus(final FullChunkStatus status, final EntityLookup lookup) {
++ this.status = status;
++
++ final Entity[] entities = this.entities.getRawData();
++
++ for (int i = 0, size = this.entities.size(); i < size; ++i) {
++ final Entity entity = entities[i];
++
++ final Visibility oldVisibility = EntityLookup.getEntityStatus(entity);
++ entity.chunkStatus = status;
++ final Visibility newVisibility = EntityLookup.getEntityStatus(entity);
++
++ lookup.entityStatusChange(entity, this, oldVisibility, newVisibility, false, false, false);
++ }
++ }
++
++ public boolean addEntity(final Entity entity, final int chunkSection) {
++ if (!this.entities.add(entity)) {
++ return false;
++ }
++ entity.chunkStatus = this.status;
++ final int sectionIndex = chunkSection - this.minSection;
++
++ this.allEntities.addEntity(entity, sectionIndex);
++
++ if (entity.hardCollides()) {
++ this.hardCollidingEntities.addEntity(entity, sectionIndex);
++ }
++
++ for (final Iterator<Reference2ObjectMap.Entry<Class<? extends Entity>, EntityCollectionBySection>> iterator =
++ this.entitiesByClass.reference2ObjectEntrySet().fastIterator(); iterator.hasNext();) {
++ final Reference2ObjectMap.Entry<Class<? extends Entity>, EntityCollectionBySection> entry = iterator.next();
++
++ if (entry.getKey().isInstance(entity)) {
++ entry.getValue().addEntity(entity, sectionIndex);
++ }
++ }
++
++ return true;
++ }
++
++ public boolean removeEntity(final Entity entity, final int chunkSection) {
++ if (!this.entities.remove(entity)) {
++ return false;
++ }
++ entity.chunkStatus = null;
++ final int sectionIndex = chunkSection - this.minSection;
++
++ this.allEntities.removeEntity(entity, sectionIndex);
++
++ if (entity.hardCollides()) {
++ this.hardCollidingEntities.removeEntity(entity, sectionIndex);
++ }
++
++ for (final Iterator<Reference2ObjectMap.Entry<Class<? extends Entity>, EntityCollectionBySection>> iterator =
++ this.entitiesByClass.reference2ObjectEntrySet().fastIterator(); iterator.hasNext();) {
++ final Reference2ObjectMap.Entry<Class<? extends Entity>, EntityCollectionBySection> entry = iterator.next();
++
++ if (entry.getKey().isInstance(entity)) {
++ entry.getValue().removeEntity(entity, sectionIndex);
++ }
++ }
++
++ return true;
++ }
++
++ public void getHardCollidingEntities(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
++ this.hardCollidingEntities.getEntities(except, box, into, predicate);
++ }
++
++ public void getEntities(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
++ this.allEntities.getEntitiesWithEnderDragonParts(except, box, into, predicate);
++ }
++
++ public void getEntitiesWithoutDragonParts(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
++ this.allEntities.getEntities(except, box, into, predicate);
++ }
++
++ public <T extends Entity> void getEntities(final EntityType<?> type, final AABB box, final List<? super T> into,
++ final Predicate<? super T> predicate) {
++ this.allEntities.getEntities(type, box, (List)into, (Predicate)predicate);
++ }
++
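++    // lazily builds the per-class index by scanning every section of the main entity index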
++ protected EntityCollectionBySection initClass(final Class<? extends Entity> clazz) {
++ final EntityCollectionBySection ret = new EntityCollectionBySection(this);
++
++ for (int sectionIndex = 0; sectionIndex < this.allEntities.entitiesBySection.length; ++sectionIndex) {
++ final BasicEntityList<Entity> sectionEntities = this.allEntities.entitiesBySection[sectionIndex];
++ if (sectionEntities == null) {
++ continue;
++ }
++
++ final Entity[] storage = sectionEntities.storage;
++
++ for (int i = 0, len = Math.min(storage.length, sectionEntities.size()); i < len; ++i) {
++ final Entity entity = storage[i];
++
++ if (clazz.isInstance(entity)) {
++ ret.addEntity(entity, sectionIndex);
++ }
++ }
++ }
++
++ return ret;
++ }
++
++ public <T extends Entity> void getEntities(final Class<? extends T> clazz, final Entity except, final AABB box, final List<? super T> into,
++ final Predicate<? super T> predicate) {
++ EntityCollectionBySection collection = this.entitiesByClass.get(clazz);
++ if (collection != null) {
++ collection.getEntitiesWithEnderDragonParts(except, clazz, box, (List)into, (Predicate)predicate);
++ } else {
++ this.entitiesByClass.putIfAbsent(clazz, collection = this.initClass(clazz));
++ collection.getEntitiesWithEnderDragonParts(except, clazz, box, (List)into, (Predicate)predicate);
++ }
++ }
++
++ protected static final class BasicEntityList<E extends Entity> {
++
++ protected static final Entity[] EMPTY = new Entity[0];
++ protected static final int DEFAULT_CAPACITY = 4;
++
++ protected E[] storage;
++ protected int size;
++
++ public BasicEntityList() {
++ this(0);
++ }
++
++ public BasicEntityList(final int cap) {
++ this.storage = (E[])(cap <= 0 ? EMPTY : new Entity[cap]);
++ }
++
++ public boolean isEmpty() {
++ return this.size == 0;
++ }
++
++ public int size() {
++ return this.size;
++ }
++
++ private void resize() {
++ if (this.storage == EMPTY) {
++ this.storage = (E[])new Entity[DEFAULT_CAPACITY];
++ } else {
++ this.storage = Arrays.copyOf(this.storage, this.storage.length * 2);
++ }
++ }
++
++ public void add(final E entity) {
++ final int idx = this.size++;
++ if (idx >= this.storage.length) {
++ this.resize();
++ }
++ this.storage[idx] = entity;
++ }
++
++ public int indexOf(final E entity) {
++ final E[] storage = this.storage;
++
++ for (int i = 0, len = Math.min(this.storage.length, this.size); i < len; ++i) {
++ if (storage[i] == entity) {
++ return i;
++ }
++ }
++
++ return -1;
++ }
++
++ public boolean remove(final E entity) {
++ final int idx = this.indexOf(entity);
++ if (idx == -1) {
++ return false;
++ }
++
++ final int size = --this.size;
++ final E[] storage = this.storage;
++ if (idx != size) {
++ System.arraycopy(storage, idx + 1, storage, idx, size - idx);
++ }
++
++ storage[size] = null;
++
++ return true;
++ }
++
++ public boolean has(final E entity) {
++ return this.indexOf(entity) != -1;
++ }
++ }
++
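++ // Buckets entities by vertical chunk section. nonEmptyBitset mirrors which
++ // entries of entitiesBySection are non-null, and count short-circuits queries
++ // when the whole collection is empty.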
++ protected static final class EntityCollectionBySection {
++
++ protected final ChunkEntitySlices manager;
++ protected final long[] nonEmptyBitset;
++ protected final BasicEntityList<Entity>[] entitiesBySection;
++ protected int count;
++
++ public EntityCollectionBySection(final ChunkEntitySlices manager) {
++ this.manager = manager;
++
++ final int sectionCount = manager.maxSection - manager.minSection + 1;
++
++ this.nonEmptyBitset = new long[(sectionCount + (Long.SIZE - 1)) >>> 6]; // (sectionCount + (Long.SIZE - 1)) / Long.SIZE
++ this.entitiesBySection = new BasicEntityList[sectionCount];
++ }
++
++ public void addEntity(final Entity entity, final int sectionIndex) {
++ BasicEntityList<Entity> list = this.entitiesBySection[sectionIndex];
++
++ if (list != null && list.has(entity)) {
++ return;
++ }
++
++ if (list == null) {
++ this.entitiesBySection[sectionIndex] = list = new BasicEntityList<>();
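++ // mark the section non-empty: word = sectionIndex / 64, bit = sectionIndex % 64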
++ this.nonEmptyBitset[sectionIndex >>> 6] |= (1L << (sectionIndex & (Long.SIZE - 1)));
++ }
++
++ list.add(entity);
++ ++this.count;
++ }
++
++ public void removeEntity(final Entity entity, final int sectionIndex) {
++ final BasicEntityList<Entity> list = this.entitiesBySection[sectionIndex];
++
++ if (list == null || !list.remove(entity)) {
++ return;
++ }
++
++ --this.count;
++
++ if (list.isEmpty()) {
++ this.entitiesBySection[sectionIndex] = null;
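++ // the bit is known to be set here, so XOR clears it (equivalent to &= ~mask)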
++ this.nonEmptyBitset[sectionIndex >>> 6] ^= (1L << (sectionIndex & (Long.SIZE - 1)));
++ }
++ }
++
++ public void getEntities(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
++ if (this.count == 0) {
++ return;
++ }
++
++ final int minSection = this.manager.minSection;
++ final int maxSection = this.manager.maxSection;
++
++ final int min = Mth.clamp(Mth.floor(box.minY - 2.0) >> 4, minSection, maxSection);
++ final int max = Mth.clamp(Mth.floor(box.maxY + 2.0) >> 4, minSection, maxSection);
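++ // the 2-block Y padding accounts for entities whose bounding boxes extend
++ // beyond the section their position was indexed into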
++
++ final BasicEntityList<Entity>[] entitiesBySection = this.entitiesBySection;
++
++ for (int section = min; section <= max; ++section) {
++ final BasicEntityList<Entity> list = entitiesBySection[section - minSection];
++
++ if (list == null) {
++ continue;
++ }
++
++ final Entity[] storage = list.storage;
++
++ for (int i = 0, len = Math.min(storage.length, list.size()); i < len; ++i) {
++ final Entity entity = storage[i];
++
++ if (entity == null || entity == except || !entity.getBoundingBox().intersects(box)) {
++ continue;
++ }
++
++ if (predicate != null && !predicate.test(entity)) {
++ continue;
++ }
++
++ into.add(entity);
++ }
++ }
++ }
++
++ public void getEntitiesWithEnderDragonParts(final Entity except, final AABB box, final List<Entity> into,
++ final Predicate<? super Entity> predicate) {
++ if (this.count == 0) {
++ return;
++ }
++
++ final int minSection = this.manager.minSection;
++ final int maxSection = this.manager.maxSection;
++
++ final int min = Mth.clamp(Mth.floor(box.minY - 2.0) >> 4, minSection, maxSection);
++ final int max = Mth.clamp(Mth.floor(box.maxY + 2.0) >> 4, minSection, maxSection);
++
++ final BasicEntityList<Entity>[] entitiesBySection = this.entitiesBySection;
++
++ for (int section = min; section <= max; ++section) {
++ final BasicEntityList<Entity> list = entitiesBySection[section - minSection];
++
++ if (list == null) {
++ continue;
++ }
++
++ final Entity[] storage = list.storage;
++
++ for (int i = 0, len = Math.min(storage.length, list.size()); i < len; ++i) {
++ final Entity entity = storage[i];
++
++ if (entity == null || entity == except || !entity.getBoundingBox().intersects(box)) {
++ continue;
++ }
++
++ if (predicate == null || predicate.test(entity)) {
++ into.add(entity);
++ } // else: continue to test the ender dragon parts
++
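++ // dragon part hitboxes are not stored in the section lists themselves, so
++ // they must be tested against the box explicitly here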
++ if (entity instanceof EnderDragon) {
++ for (final EnderDragonPart part : ((EnderDragon)entity).subEntities) {
++ if (part == except || !part.getBoundingBox().intersects(box)) {
++ continue;
++ }
++
++ if (predicate != null && !predicate.test(part)) {
++ continue;
++ }
++
++ into.add(part);
++ }
++ }
++ }
++ }
++ }
++
++ public void getEntitiesWithEnderDragonParts(final Entity except, final Class<?> clazz, final AABB box, final List<Entity> into,
++ final Predicate<? super Entity> predicate) {
++ if (this.count == 0) {
++ return;
++ }
++
++ final int minSection = this.manager.minSection;
++ final int maxSection = this.manager.maxSection;
++
++ final int min = Mth.clamp(Mth.floor(box.minY - 2.0) >> 4, minSection, maxSection);
++ final int max = Mth.clamp(Mth.floor(box.maxY + 2.0) >> 4, minSection, maxSection);
++
++ final BasicEntityList<Entity>[] entitiesBySection = this.entitiesBySection;
++
++ for (int section = min; section <= max; ++section) {
++ final BasicEntityList<Entity> list = entitiesBySection[section - minSection];
++
++ if (list == null) {
++ continue;
++ }
++
++ final Entity[] storage = list.storage;
++
++ for (int i = 0, len = Math.min(storage.length, list.size()); i < len; ++i) {
++ final Entity entity = storage[i];
++
++ if (entity == null || entity == except || !entity.getBoundingBox().intersects(box)) {
++ continue;
++ }
++
++ if (predicate == null || predicate.test(entity)) {
++ into.add(entity);
++ } // else: continue to test the ender dragon parts
++
++ if (entity instanceof EnderDragon) {
++ for (final EnderDragonPart part : ((EnderDragon)entity).subEntities) {
++ if (part == except || !part.getBoundingBox().intersects(box) || !clazz.isInstance(part)) {
++ continue;
++ }
++
++ if (predicate != null && !predicate.test(part)) {
++ continue;
++ }
++
++ into.add(part);
++ }
++ }
++ }
++ }
++ }
++
++ public <T extends Entity> void getEntities(final EntityType<?> type, final AABB box, final List<? super T> into,
++ final Predicate<? super T> predicate) {
++ if (this.count == 0) {
++ return;
++ }
++
++ final int minSection = this.manager.minSection;
++ final int maxSection = this.manager.maxSection;
++
++ final int min = Mth.clamp(Mth.floor(box.minY - 2.0) >> 4, minSection, maxSection);
++ final int max = Mth.clamp(Mth.floor(box.maxY + 2.0) >> 4, minSection, maxSection);
++
++ final BasicEntityList<Entity>[] entitiesBySection = this.entitiesBySection;
++
++ for (int section = min; section <= max; ++section) {
++ final BasicEntityList<Entity> list = entitiesBySection[section - minSection];
++
++ if (list == null) {
++ continue;
++ }
++
++ final Entity[] storage = list.storage;
++
++ for (int i = 0, len = Math.min(storage.length, list.size()); i < len; ++i) {
++ final Entity entity = storage[i];
++
++ if (entity == null || (type != null && entity.getType() != type) || !entity.getBoundingBox().intersects(box)) {
++ continue;
++ }
++
++ if (predicate != null && !predicate.test((T)entity)) {
++ continue;
++ }
++
++ into.add((T)entity);
++ }
++ }
++ }
++ }
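++
++ // Usage sketch (hypothetical caller; 'slices' and 'searcher' are assumed names):
++ // final List<Entity> found = new ArrayList<>();
++ // slices.getEntities(searcher, searcher.getBoundingBox().inflate(4.0), found, Entity::isAlive);
++ // Only the vertical sections overlapping the inflated box are scanned, and
++ // ender dragon part hitboxes are included via getEntitiesWithEnderDragonParts.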
++}
+diff --git a/src/main/java/net/minecraft/server/Main.java b/src/main/java/net/minecraft/server/Main.java
+index c33f85b570f159ab465b5a10a8044a81f2797f43..244a19ecd0234fa1d7a6ecfea20751595688605d 100644
+--- a/src/main/java/net/minecraft/server/Main.java
++++ b/src/main/java/net/minecraft/server/Main.java
+@@ -320,6 +320,7 @@ public class Main {
+
+ convertable_conversionsession.saveDataTag(iregistrycustom_dimension, savedata);
+ */
++ Class.forName(net.minecraft.world.entity.npc.VillagerTrades.class.getName()); // Paper - load this sync so it won't fail later async
+ final DedicatedServer dedicatedserver = (DedicatedServer) MinecraftServer.spin((thread) -> {
+ DedicatedServer dedicatedserver1 = new DedicatedServer(optionset, worldLoader.get(), thread, convertable_conversionsession, resourcepackrepository, worldstem, dedicatedserversettings, DataFixers.getDataFixer(), services, LoggerChunkProgressListener::createFromGameruleRadius);
+
+diff --git a/src/main/java/net/minecraft/server/MinecraftServer.java b/src/main/java/net/minecraft/server/MinecraftServer.java
+index 093a5d49d1a001b3ad5b4a880c60c0cb18874531..99e5b663b7cded44164b7b4e2ccb0f7a063b8bf9 100644
+--- a/src/main/java/net/minecraft/server/MinecraftServer.java
++++ b/src/main/java/net/minecraft/server/MinecraftServer.java
+@@ -315,7 +315,7 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
+
+ public static <S extends MinecraftServer> S spin(Function<Thread, S> serverFactory) {
+ AtomicReference<S> atomicreference = new AtomicReference();
+- Thread thread = new Thread(() -> {
++ Thread thread = new io.papermc.paper.util.TickThread(() -> { // Paper - rewrite chunk system
+ ((MinecraftServer) atomicreference.get()).runServer();
+ }, "Server thread");
+
+@@ -651,7 +651,7 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
+ this.forceDifficulty();
+ for (ServerLevel worldserver : this.getAllLevels()) {
+ this.prepareLevels(worldserver.getChunkSource().chunkMap.progressListener, worldserver);
+- worldserver.entityManager.tick(); // SPIGOT-6526: Load pending entities so they are available to the API
++ //worldserver.entityManager.tick(); // SPIGOT-6526: Load pending entities so they are available to the API // Paper - rewrite chunk system, not required to "tick" anything
+ this.server.getPluginManager().callEvent(new org.bukkit.event.world.WorldLoadEvent(worldserver.getWorld()));
+ }
+
+@@ -861,6 +861,12 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
+ public abstract boolean shouldRconBroadcast();
+
+ public boolean saveAllChunks(boolean suppressLogs, boolean flush, boolean force) {
++ // Paper start - rewrite chunk system - add close param
++ // This allows us to avoid double saving chunks by closing instead of saving then closing
++ return this.saveAllChunks(suppressLogs, flush, force, false);
++ }
++ public boolean saveAllChunks(boolean suppressLogs, boolean flush, boolean force, boolean close) {
++ // Paper end - rewrite chunk system - add close param
+ boolean flag3 = false;
+
+ for (Iterator iterator = this.getAllLevels().iterator(); iterator.hasNext(); flag3 = true) {
+@@ -869,8 +875,12 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
+ if (!suppressLogs) {
+ MinecraftServer.LOGGER.info("Saving chunks for level '{}'/{}", worldserver, worldserver.dimension().location());
+ }
+-
+- worldserver.save((ProgressListener) null, flush, worldserver.noSave && !force);
++ // Paper start - rewrite chunk system
++ worldserver.save((ProgressListener) null, flush, worldserver.noSave && !force, close);
++ if (flush) {
++ MinecraftServer.LOGGER.info("ThreadedAnvilChunkStorage ({}): All chunks are saved", worldserver.getChunkSource().chunkMap.getStorageName());
++ }
++ // Paper end - rewrite chunk system
+ }
+
+ // CraftBukkit start - moved to WorldServer.save
+@@ -889,7 +899,7 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
+ while (iterator1.hasNext()) {
+ ServerLevel worldserver2 = (ServerLevel) iterator1.next();
+
+- MinecraftServer.LOGGER.info("ThreadedAnvilChunkStorage ({}): All chunks are saved", worldserver2.getChunkSource().chunkMap.getStorageName());
++ //MinecraftServer.LOGGER.info("ThreadedAnvilChunkStorage ({}): All chunks are saved", worldserver2.getChunkSource().chunkMap.getStorageName()); // Paper - move up
+ }
+
+ MinecraftServer.LOGGER.info("ThreadedAnvilChunkStorage: All dimensions are saved");
+@@ -971,36 +981,7 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
+ }
+ }
+
+- while (this.levels.values().stream().anyMatch((worldserver1) -> {
+- return worldserver1.getChunkSource().chunkMap.hasWork();
+- })) {
+- this.nextTickTimeNanos = Util.getNanos() + TimeUtil.NANOSECONDS_PER_MILLISECOND;
+- iterator = this.getAllLevels().iterator();
+-
+- while (iterator.hasNext()) {
+- worldserver = (ServerLevel) iterator.next();
+- worldserver.getChunkSource().removeTicketsOnClosing();
+- worldserver.getChunkSource().tick(() -> {
+- return true;
+- }, false);
+- }
+-
+- this.waitUntilNextTick();
+- }
+-
+- this.saveAllChunks(false, true, false);
+- iterator = this.getAllLevels().iterator();
+-
+- while (iterator.hasNext()) {
+- worldserver = (ServerLevel) iterator.next();
+- if (worldserver != null) {
+- try {
+- worldserver.close();
+- } catch (IOException ioexception) {
+- MinecraftServer.LOGGER.error("Exception closing the level", ioexception);
+- }
+- }
+- }
++ this.saveAllChunks(false, true, false, true); // Paper - rewrite chunk system - move closing into here
+
+ this.isSaving = false;
+ this.resources.close();
+@@ -1020,6 +1001,7 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
+ }
+ // Spigot end
+
++ io.papermc.paper.chunk.system.io.RegionFileIOThread.close(true); // Paper - rewrite chunk system
+ }
+
+ public String getLocalIp() {
+@@ -1112,6 +1094,8 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
+ // Paper end
+ // Spigot End
+
++ public static volatile RuntimeException chunkSystemCrash; // Paper - rewrite chunk system
++
+ protected void runServer() {
+ try {
+ if (!this.initServer()) {
+@@ -1140,6 +1124,12 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
+ // Paper end - Add onboarding message for initial server start
+
+ while (this.running) {
++ // Paper start - rewrite chunk system
++ // guarantee that nothing can stop the server from halting if it can at least still tick
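++ // chunkSystemCrash is a static volatile written elsewhere by the chunk system
++ // on failure; rethrowing it here lets the normal crash path stop the server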
++ if (this.chunkSystemCrash != null) {
++ throw this.chunkSystemCrash;
++ }
++ // Paper end - rewrite chunk system
+ long i;
+
+ if (!this.isPaused() && this.tickRateManager.isSprinting() && this.tickRateManager.checkShouldSprintThisTick()) {
+@@ -1302,6 +1292,11 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
+ }
+
+ private boolean haveTime() {
++ // Paper start
++ if (this.forceTicks) {
++ return true;
++ }
++ // Paper end
+ // CraftBukkit start
+ if (isOversleep) return canOversleep(); // Paper - because of our changes, this logic is broken
+ return this.forceTicks || this.runningTask() || Util.getNanos() < (this.mayHaveDelayedTasks ? this.delayedTasksMaxNextTickTimeNanos : this.nextTickTimeNanos);
+@@ -1564,7 +1559,7 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
+ // Paper start - Folia scheduler API
+ ((io.papermc.paper.threadedregions.scheduler.FoliaGlobalRegionScheduler) Bukkit.getGlobalRegionScheduler()).tick();
+ getAllLevels().forEach(level -> {
+- for (final Entity entity : level.getEntities().getAll()) {
++ for (final Entity entity : level.getEntityLookup().getAllCopy()) { // Paper - rewrite chunk system
+ if (entity.isRemoved()) {
+ continue;
+ }
+@@ -2622,6 +2617,13 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
+
+ }
+
++ // Paper start - rewrite chunk system
++ @Override
++ public boolean isSameThread() {
++ return io.papermc.paper.util.TickThread.isTickThread();
++ }
++ // Paper end - rewrite chunk system
++
+ // CraftBukkit start
+ public boolean isDebugging() {
+ return false;
+diff --git a/src/main/java/net/minecraft/server/dedicated/DedicatedServer.java b/src/main/java/net/minecraft/server/dedicated/DedicatedServer.java
+index 00679b76715fde4b90a999fd11cca40d048b1349..89c2c26afc5f06c4f57716cadbebabb8854f3635 100644
+--- a/src/main/java/net/minecraft/server/dedicated/DedicatedServer.java
++++ b/src/main/java/net/minecraft/server/dedicated/DedicatedServer.java
+@@ -473,7 +473,34 @@ public class DedicatedServer extends MinecraftServer implements ServerInterface
+ return this.getProperties().allowNether;
+ }
+
++ static final java.util.concurrent.atomic.AtomicInteger ASYNC_DEBUG_CHUNKS_COUNT = new java.util.concurrent.atomic.AtomicInteger(); // Paper - rewrite chunk system
++
+ public void handleConsoleInput(String command, CommandSourceStack commandSource) {
++ // Paper start - rewrite chunk system
++ if (command.equalsIgnoreCase("paper debug chunks --async")) {
++ LOGGER.info("Scheduling async debug chunks");
++ Runnable run = () -> {
++ LOGGER.info("Async debug chunks executing");
++ io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler.dumpAllChunkLoadInfo(false);
++ CommandSender sender = MinecraftServer.getServer().console;
++ java.io.File file = new java.io.File(new java.io.File(new java.io.File("."), "debug"),
++ "chunks-" + java.time.format.DateTimeFormatter.ofPattern("yyyy-MM-dd_HH.mm.ss").format(java.time.LocalDateTime.now()) + ".txt");
++ sender.sendMessage(net.kyori.adventure.text.Component.text("Writing chunk information dump to " + file, net.kyori.adventure.text.format.NamedTextColor.GREEN));
++ try {
++ io.papermc.paper.util.MCUtil.dumpChunks(file, true);
++ sender.sendMessage(net.kyori.adventure.text.Component.text("Successfully written chunk information!", net.kyori.adventure.text.format.NamedTextColor.GREEN));
++ } catch (Throwable thr) {
++ MinecraftServer.LOGGER.warn("Failed to dump chunk information to file " + file.toString(), thr);
++ sender.sendMessage(net.kyori.adventure.text.Component.text("Failed to dump chunk information, see console", net.kyori.adventure.text.format.NamedTextColor.RED));
++ }
++ };
++ Thread t = new Thread(run);
++ t.setName("Async debug thread #" + ASYNC_DEBUG_CHUNKS_COUNT.getAndIncrement());
++ t.setDaemon(true);
++ t.start();
++ return;
++ }
++ // Paper end - rewrite chunk system
+ this.serverCommandQueue.add(new ConsoleInput(command, commandSource)); // Paper - Perf: use proper queue
+ }
+
+diff --git a/src/main/java/net/minecraft/server/level/ChunkHolder.java b/src/main/java/net/minecraft/server/level/ChunkHolder.java
+index 88729d92878f98729eb5669cce5ae5b1418865a1..13d15a135dd0373bef4a5ac9ffb56dbbf53353a0 100644
+--- a/src/main/java/net/minecraft/server/level/ChunkHolder.java
++++ b/src/main/java/net/minecraft/server/level/ChunkHolder.java
+@@ -46,17 +46,12 @@ public class ChunkHolder {
+ public static final ChunkResult<ChunkAccess> NOT_DONE_YET = ChunkResult.error("Not done yet");
+ private static final CompletableFuture<ChunkResult<LevelChunk>> UNLOADED_LEVEL_CHUNK_FUTURE = CompletableFuture.completedFuture(ChunkHolder.UNLOADED_LEVEL_CHUNK);
+ private static final List<ChunkStatus> CHUNK_STATUSES = ChunkStatus.getStatusList();
+- private final AtomicReferenceArray<CompletableFuture<ChunkResult<ChunkAccess>>> futures;
++ // Paper - rewrite chunk system
+ private final LevelHeightAccessor levelHeightAccessor;
+- private volatile CompletableFuture<ChunkResult<LevelChunk>> fullChunkFuture; private int fullChunkCreateCount; private volatile boolean isFullChunkReady; // Paper - cache chunk ticking stage
+- private volatile CompletableFuture<ChunkResult<LevelChunk>> tickingChunkFuture; private volatile boolean isTickingReady; // Paper - cache chunk ticking stage
+- private volatile CompletableFuture<ChunkResult<LevelChunk>> entityTickingChunkFuture; private volatile boolean isEntityTickingReady; // Paper - cache chunk ticking stage
+- public CompletableFuture<ChunkAccess> chunkToSave; // Paper - public
++ // Paper - rewrite chunk system
+ @Nullable
+ private final DebugBuffer<ChunkHolder.ChunkSaveDebug> chunkToSaveHistory;
+- public int oldTicketLevel;
+- private int ticketLevel;
+- private int queueLevel;
++ // Paper - rewrite chunk system
+ public final ChunkPos pos;
+ private boolean hasChangedSections;
+ private final ShortSet[] changedBlocksPerSection;
+@@ -65,11 +60,20 @@ public class ChunkHolder {
+ private final LevelLightEngine lightEngine;
+ private final ChunkHolder.LevelChangeListener onLevelChange;
+ public final ChunkHolder.PlayerProvider playerProvider;
+- private boolean wasAccessibleSinceLastSave;
+- private CompletableFuture<Void> pendingFullStateConfirmation;
+- private CompletableFuture<?> sendSync;
++ // Paper - rewrite chunk system
+
+ private final ChunkMap chunkMap; // Paper
++ // Paper start - no-tick view distance
++ public final LevelChunk getSendingChunk() {
++ // it's important that we use getChunkAtIfLoadedImmediately to mirror the chunk sending logic used
++ // in Chunk's neighbour callback
++ LevelChunk ret = this.chunkMap.level.getChunkSource().getChunkAtIfLoadedImmediately(this.pos.x, this.pos.z);
++ if (ret != null && ret.areNeighboursLoaded(1)) {
++ return ret;
++ }
++ return null;
++ }
++ // Paper end - no-tick view distance
+
+ // Paper start
+ public void onChunkAdd() {
+@@ -81,147 +85,131 @@ public class ChunkHolder {
+ }
+ // Paper end
+
+- public ChunkHolder(ChunkPos pos, int level, LevelHeightAccessor world, LevelLightEngine lightingProvider, ChunkHolder.LevelChangeListener levelUpdateListener, ChunkHolder.PlayerProvider playersWatchingChunkProvider) {
+- this.futures = new AtomicReferenceArray(ChunkHolder.CHUNK_STATUSES.size());
+- this.fullChunkFuture = ChunkHolder.UNLOADED_LEVEL_CHUNK_FUTURE;
+- this.tickingChunkFuture = ChunkHolder.UNLOADED_LEVEL_CHUNK_FUTURE;
+- this.entityTickingChunkFuture = ChunkHolder.UNLOADED_LEVEL_CHUNK_FUTURE;
+- this.chunkToSave = CompletableFuture.completedFuture(null); // CraftBukkit - decompile error
++ public final io.papermc.paper.chunk.system.scheduling.NewChunkHolder newChunkHolder; // Paper - rewrite chunk system
++
++ // Paper start - replace player chunk loader
++ private final com.destroystokyo.paper.util.maplist.ReferenceList<ServerPlayer> playersSentChunkTo = new com.destroystokyo.paper.util.maplist.ReferenceList<>();
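++ // tracks exactly which players this chunk has been sent to; addPlayer/removePlayer
++ // throw on double-send or unsent-remove so loader desyncs fail fast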
++
++ public void addPlayer(ServerPlayer player) {
++ if (!this.playersSentChunkTo.add(player)) {
++ throw new IllegalStateException("Already sent chunk " + this.pos + " in world '" + this.chunkMap.level.getWorld().getName() + "' to player " + player);
++ }
++ }
++
++ public void removePlayer(ServerPlayer player) {
++ if (!this.playersSentChunkTo.remove(player)) {
++ throw new IllegalStateException("Have not sent chunk " + this.pos + " in world '" + this.chunkMap.level.getWorld().getName() + "' to player " + player);
++ }
++ }
++
++ public boolean hasChunkBeenSent() {
++ return this.playersSentChunkTo.size() != 0;
++ }
++
++ public boolean hasBeenSent(ServerPlayer to) {
++ return this.playersSentChunkTo.contains(to);
++ }
++ // Paper end - replace player chunk loader
++ public ChunkHolder(ChunkPos pos, LevelHeightAccessor world, LevelLightEngine lightingProvider, ChunkHolder.PlayerProvider playersWatchingChunkProvider, io.papermc.paper.chunk.system.scheduling.NewChunkHolder newChunkHolder) { // Paper - rewrite chunk system
++ this.newChunkHolder = newChunkHolder; // Paper - rewrite chunk system
+ this.chunkToSaveHistory = null;
+ this.blockChangedLightSectionFilter = new BitSet();
+ this.skyChangedLightSectionFilter = new BitSet();
+- this.pendingFullStateConfirmation = CompletableFuture.completedFuture(null); // CraftBukkit - decompile error
+- this.sendSync = CompletableFuture.completedFuture(null); // CraftBukkit - decompile error
++ // Paper - rewrite chunk system
+ this.pos = pos;
+ this.levelHeightAccessor = world;
+ this.lightEngine = lightingProvider;
+- this.onLevelChange = levelUpdateListener;
++ this.onLevelChange = null; // Paper - rewrite chunk system
+ this.playerProvider = playersWatchingChunkProvider;
+- this.oldTicketLevel = ChunkLevel.MAX_LEVEL + 1;
+- this.ticketLevel = this.oldTicketLevel;
+- this.queueLevel = this.oldTicketLevel;
+- this.setTicketLevel(level);
++ // Paper - rewrite chunk system
+ this.changedBlocksPerSection = new ShortSet[world.getSectionsCount()];
+ this.chunkMap = (ChunkMap)playersWatchingChunkProvider; // Paper
+ }
+
+ // Paper start
+ public @Nullable ChunkAccess getAvailableChunkNow() {
+- // TODO can we just getStatusFuture(EMPTY)?
+- for (ChunkStatus curr = ChunkStatus.FULL, next = curr.getParent(); curr != next; curr = next, next = next.getParent()) {
+- CompletableFuture<ChunkResult<ChunkAccess>> future = this.getFutureIfPresentUnchecked(curr);
+- ChunkResult<ChunkAccess> either = future.getNow(null);
+- if (either == null || either.isSuccess()) {
+- continue;
+- }
+- return either.orElseThrow(IllegalStateException::new);
+- }
+- return null;
++ return this.newChunkHolder.getCurrentChunk(); // Paper - rewrite chunk system
+ }
+ // Paper end
+ // CraftBukkit start
+ public LevelChunk getFullChunkNow() {
+- // Note: We use the oldTicketLevel for isLoaded checks.
+- if (!ChunkLevel.fullStatus(this.oldTicketLevel).isOrAfter(FullChunkStatus.FULL)) return null;
+- return this.getFullChunkNowUnchecked();
++ // Paper start - rewrite chunk system
++ if (!this.isFullChunkReady() || !(this.getAvailableChunkNow() instanceof LevelChunk chunk)) return null; // instanceof to avoid a race condition on off-main threads
++ return chunk;
++ // Paper end - rewrite chunk system
+ }
+
+ public LevelChunk getFullChunkNowUnchecked() {
+- CompletableFuture<ChunkResult<ChunkAccess>> statusFuture = this.getFutureIfPresentUnchecked(ChunkStatus.FULL);
+- ChunkResult<ChunkAccess> either = statusFuture.getNow(null);
+- return (either == null) ? null : (LevelChunk) either.orElse(null);
++ // Paper start - rewrite chunk system
++ return this.getAvailableChunkNow() instanceof LevelChunk chunk ? chunk : null;
++ // Paper end - rewrite chunk system
+ }
+ // CraftBukkit end
+
+ public CompletableFuture<ChunkResult<ChunkAccess>> getFutureIfPresentUnchecked(ChunkStatus leastStatus) {
+- CompletableFuture<ChunkResult<ChunkAccess>> completablefuture = (CompletableFuture) this.futures.get(leastStatus.getIndex());
+-
+- return completablefuture == null ? ChunkHolder.UNLOADED_CHUNK_FUTURE : completablefuture;
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public CompletableFuture<ChunkResult<ChunkAccess>> getFutureIfPresent(ChunkStatus leastStatus) {
+- return ChunkLevel.generationStatus(this.ticketLevel).isOrAfter(leastStatus) ? this.getFutureIfPresentUnchecked(leastStatus) : ChunkHolder.UNLOADED_CHUNK_FUTURE;
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public final CompletableFuture<ChunkResult<LevelChunk>> getTickingChunkFuture() { // Paper - final for inline
+- return this.tickingChunkFuture;
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public final CompletableFuture<ChunkResult<LevelChunk>> getEntityTickingChunkFuture() { // Paper - final for inline
+- return this.entityTickingChunkFuture;
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public final CompletableFuture<ChunkResult<LevelChunk>> getFullChunkFuture() { // Paper - final for inline
+- return this.fullChunkFuture;
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ @Nullable
+ public final LevelChunk getTickingChunk() { // Paper - final for inline
+- return (LevelChunk) ((ChunkResult) this.getTickingChunkFuture().getNow(ChunkHolder.UNLOADED_LEVEL_CHUNK)).orElse(null); // CraftBukkit - decompile error
++ // Paper start - rewrite chunk system
++ if (!this.isTickingReady()) {
++ return null;
++ }
++ return (LevelChunk)this.getAvailableChunkNow();
++ // Paper end - rewrite chunk system
+ }
+
+ public CompletableFuture<?> getChunkSendSyncFuture() {
+- return this.sendSync;
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ @Nullable
+ public LevelChunk getChunkToSend() {
+- return !this.sendSync.isDone() ? null : this.getTickingChunk();
++ return this.getSendingChunk(); // Paper - rewrite chunk system
+ }
+
+ @Nullable
+ public ChunkStatus getLastAvailableStatus() {
+- for (int i = ChunkHolder.CHUNK_STATUSES.size() - 1; i >= 0; --i) {
+- ChunkStatus chunkstatus = (ChunkStatus) ChunkHolder.CHUNK_STATUSES.get(i);
+- CompletableFuture<ChunkResult<ChunkAccess>> completablefuture = this.getFutureIfPresentUnchecked(chunkstatus);
+-
+- if (((ChunkResult) completablefuture.getNow(ChunkHolder.UNLOADED_CHUNK)).isSuccess()) {
+- return chunkstatus;
+- }
+- }
+-
+- return null;
++ return this.newChunkHolder.getCurrentGenStatus(); // Paper - rewrite chunk system
+ }
+
+ // Paper start
+ public @Nullable ChunkStatus getChunkHolderStatus() {
+- for (ChunkStatus curr = ChunkStatus.FULL, next = curr.getParent(); curr != next; curr = next, next = next.getParent()) {
+- CompletableFuture<ChunkResult<ChunkAccess>> future = this.getFutureIfPresentUnchecked(curr);
+- ChunkResult<ChunkAccess> either = future.getNow(null);
+- if (either == null || !either.isSuccess()) {
+- continue;
+- }
+- return curr;
+- }
+-
+- return null;
++ return this.newChunkHolder.getCurrentGenStatus(); // Paper - rewrite chunk system
+ }
+ // Paper end
+
+ @Nullable
+ public ChunkAccess getLastAvailable() {
+- for (int i = ChunkHolder.CHUNK_STATUSES.size() - 1; i >= 0; --i) {
+- ChunkStatus chunkstatus = (ChunkStatus) ChunkHolder.CHUNK_STATUSES.get(i);
+- CompletableFuture<ChunkResult<ChunkAccess>> completablefuture = this.getFutureIfPresentUnchecked(chunkstatus);
+-
+- if (!completablefuture.isCompletedExceptionally()) {
+- ChunkAccess ichunkaccess = (ChunkAccess) ((ChunkResult) completablefuture.getNow(ChunkHolder.UNLOADED_CHUNK)).orElse((Object) null);
+-
+- if (ichunkaccess != null) {
+- return ichunkaccess;
+- }
+- }
+- }
+-
+- return null;
++ return this.newChunkHolder.getCurrentChunk(); // Paper - rewrite chunk system
+ }
+
+- public final CompletableFuture<ChunkAccess> getChunkToSave() { // Paper - final for inline
+- return this.chunkToSave;
+- }
++ // Paper - rewrite chunk system
+
+ public void blockChanged(BlockPos pos) {
+- LevelChunk chunk = this.getTickingChunk();
++ // Paper start - replace player chunk loader
++ if (this.playersSentChunkTo.size() == 0) {
++ return;
++ }
++ // Paper end - replace player chunk loader
++ LevelChunk chunk = this.getSendingChunk(); // Paper - no-tick view distance
+
+ if (chunk != null) {
+ int i = this.levelHeightAccessor.getSectionIndex(pos.getY());
+@@ -237,13 +225,13 @@ public class ChunkHolder {
+ }
+
+ public void sectionLightChanged(LightLayer lightType, int y) {
+- ChunkAccess ichunkaccess = (ChunkAccess) ((ChunkResult) this.getFutureIfPresent(ChunkStatus.INITIALIZE_LIGHT).getNow(ChunkHolder.UNLOADED_CHUNK)).orElse(null); // CraftBukkit - decompile error
++ ChunkAccess ichunkaccess = this.getAvailableChunkNow(); // Paper - rewrite chunk system
+
+ if (ichunkaccess != null) {
+ ichunkaccess.setUnsaved(true);
+- LevelChunk chunk = this.getTickingChunk();
++ LevelChunk chunk = this.getSendingChunk(); // Paper - rewrite chunk system
+
+- if (chunk != null) {
++ if (this.playersSentChunkTo.size() != 0 && chunk != null) { // Paper - replace player chunk loader
+ int j = this.lightEngine.getMinLightSection();
+ int k = this.lightEngine.getMaxLightSection();
+
+@@ -263,7 +251,7 @@ public class ChunkHolder {
+
+ // Paper start - starlight
+ public void broadcast(Packet<?> packet, boolean onChunkViewEdge) {
+- this.broadcast(this.playerProvider.getPlayers(this.pos, onChunkViewEdge), packet);
++ this.broadcast(this.getPlayers(onChunkViewEdge), packet); // Paper - rewrite chunk system
+ }
+ // Paper end - starlight
+
+@@ -273,7 +261,7 @@ public class ChunkHolder {
+ List list;
+
+ if (!this.skyChangedLightSectionFilter.isEmpty() || !this.blockChangedLightSectionFilter.isEmpty()) {
+- list = this.playerProvider.getPlayers(this.pos, true);
++ list = this.getPlayers(true); // Paper - rewrite chunk system
+ if (!list.isEmpty()) {
+ ClientboundLightUpdatePacket packetplayoutlightupdate = new ClientboundLightUpdatePacket(chunk.getPos(), this.lightEngine, this.skyChangedLightSectionFilter, this.blockChangedLightSectionFilter);
+
+@@ -285,7 +273,7 @@ public class ChunkHolder {
+ }
+
+ if (this.hasChangedSections) {
+- list = this.playerProvider.getPlayers(this.pos, false);
++ list = this.getPlayers(false); // Paper - rewrite chunk system
+
+ for (int i = 0; i < this.changedBlocksPerSection.length; ++i) {
+ ShortSet shortset = this.changedBlocksPerSection[i];
+@@ -343,75 +331,33 @@ public class ChunkHolder {
+
+ }
+
+- private void broadcast(List<ServerPlayer> players, Packet<?> packet) {
+- players.forEach((entityplayer) -> {
+- entityplayer.connection.send(packet);
+- });
+- }
+-
+- public CompletableFuture<ChunkResult<ChunkAccess>> getOrScheduleFuture(ChunkStatus targetStatus, ChunkMap chunkStorage) {
+- int i = targetStatus.getIndex();
+- CompletableFuture<ChunkResult<ChunkAccess>> completablefuture = (CompletableFuture) this.futures.get(i);
++ // Paper start - rewrite chunk system
++ public List<ServerPlayer> getPlayers(boolean onlyOnWatchDistanceEdge) {
++ List<ServerPlayer> ret = new java.util.ArrayList<>();
+
+- if (completablefuture != null) {
+- ChunkResult<ChunkAccess> chunkresult = (ChunkResult) completablefuture.getNow(ChunkHolder.NOT_DONE_YET);
+-
+- if (chunkresult == null) {
+- String s = String.valueOf(targetStatus);
+- String s1 = "value in future for status: " + s + " was incorrectly set to null at chunk: " + String.valueOf(this.pos);
+-
+- throw chunkStorage.debugFuturesAndCreateReportedException(new IllegalStateException("null value previously set for chunk status"), s1);
+- }
+-
+- if (chunkresult == ChunkHolder.NOT_DONE_YET || chunkresult.isSuccess()) {
+- return completablefuture;
++ for (int i = 0, len = this.playersSentChunkTo.size(); i < len; ++i) {
++ ServerPlayer player = this.playersSentChunkTo.getUnchecked(i);
++ if (onlyOnWatchDistanceEdge && !this.chunkMap.level.playerChunkLoader.isChunkSent(player, this.pos.x, this.pos.z, onlyOnWatchDistanceEdge)) {
++ continue;
+ }
++ ret.add(player);
+ }
+
+- if (ChunkLevel.generationStatus(this.ticketLevel).isOrAfter(targetStatus)) {
+- CompletableFuture<ChunkResult<ChunkAccess>> completablefuture1 = chunkStorage.schedule(this, targetStatus);
+-
+- this.updateChunkToSave(completablefuture1, "schedule " + String.valueOf(targetStatus));
+- this.futures.set(i, completablefuture1);
+- return completablefuture1;
+- } else {
+- return completablefuture == null ? ChunkHolder.UNLOADED_CHUNK_FUTURE : completablefuture;
+- }
++ return ret;
+ }
++ // Paper end - rewrite chunk system
+
+- protected void addSaveDependency(String thenDesc, CompletableFuture<?> then) {
+- if (this.chunkToSaveHistory != null) {
+- this.chunkToSaveHistory.push(new ChunkHolder.ChunkSaveDebug(Thread.currentThread(), then, thenDesc));
+- }
+
+- this.chunkToSave = this.chunkToSave.thenCombine(then, (ichunkaccess, object) -> {
+- return ichunkaccess;
+- });
+- }
+-
+- private void updateChunkToSave(CompletableFuture<? extends ChunkResult<? extends ChunkAccess>> then, String thenDesc) {
+- if (this.chunkToSaveHistory != null) {
+- this.chunkToSaveHistory.push(new ChunkHolder.ChunkSaveDebug(Thread.currentThread(), then, thenDesc));
+- }
+-
+- this.chunkToSave = this.chunkToSave.thenCombine(then, (ichunkaccess, chunkresult) -> {
+- return (ChunkAccess) ChunkResult.orElse(chunkresult, ichunkaccess);
++ private void broadcast(List<ServerPlayer> players, Packet<?> packet) {
++ players.forEach((entityplayer) -> {
++ entityplayer.connection.send(packet);
+ });
+ }
+
+- public void addSendDependency(CompletableFuture<?> postProcessingFuture) {
+- if (this.sendSync.isDone()) {
+- this.sendSync = postProcessingFuture;
+- } else {
+- this.sendSync = this.sendSync.thenCombine(postProcessingFuture, (object, object1) -> {
+- return null;
+- });
+- }
+-
+- }
++ // Paper - rewrite chunk system
+
+ public FullChunkStatus getFullStatus() {
+- return ChunkLevel.fullStatus(this.ticketLevel);
++ return this.newChunkHolder.getChunkStatus(); // Paper - rewrite chunk system
+ }
+
+ public final ChunkPos getPos() { // Paper - final for inline
+@@ -419,238 +365,17 @@ public class ChunkHolder {
+ }
+
+ public final int getTicketLevel() { // Paper - final for inline
+- return this.ticketLevel;
+- }
+-
+- public int getQueueLevel() {
+- return this.queueLevel;
+- }
+-
+- private void setQueueLevel(int level) {
+- this.queueLevel = level;
+- }
+-
+- public void setTicketLevel(int level) {
+- this.ticketLevel = level;
+- }
+-
+- private void scheduleFullChunkPromotion(ChunkMap playerchunkmap, CompletableFuture<ChunkResult<LevelChunk>> completablefuture, Executor executor, FullChunkStatus fullchunkstatus) {
+- this.pendingFullStateConfirmation.cancel(false);
+- CompletableFuture<Void> completablefuture1 = new CompletableFuture();
+-
+- completablefuture1.thenRunAsync(() -> {
+- playerchunkmap.onFullChunkStatusChange(this.pos, fullchunkstatus);
+- }, executor);
+- this.pendingFullStateConfirmation = completablefuture1;
+- completablefuture.thenAccept((chunkresult) -> {
+- chunkresult.ifSuccess((chunk) -> {
+- completablefuture1.complete(null); // CraftBukkit - decompile error
+- });
+- });
+- }
+-
+- private void demoteFullChunk(ChunkMap playerchunkmap, FullChunkStatus fullchunkstatus) {
+- this.pendingFullStateConfirmation.cancel(false);
+- playerchunkmap.onFullChunkStatusChange(this.pos, fullchunkstatus);
+- }
+-
+- protected void updateFutures(ChunkMap chunkStorage, Executor executor) {
+- ChunkStatus chunkstatus = ChunkLevel.generationStatus(this.oldTicketLevel);
+- ChunkStatus chunkstatus1 = ChunkLevel.generationStatus(this.ticketLevel);
+- boolean flag = ChunkLevel.isLoaded(this.oldTicketLevel);
+- boolean flag1 = ChunkLevel.isLoaded(this.ticketLevel);
+- FullChunkStatus fullchunkstatus = ChunkLevel.fullStatus(this.oldTicketLevel);
+- FullChunkStatus fullchunkstatus1 = ChunkLevel.fullStatus(this.ticketLevel);
+- // CraftBukkit start
+- // ChunkUnloadEvent: Called before the chunk is unloaded: isChunkLoaded is still true and chunk can still be modified by plugins.
+- if (fullchunkstatus.isOrAfter(FullChunkStatus.FULL) && !fullchunkstatus1.isOrAfter(FullChunkStatus.FULL)) {
+- this.getFutureIfPresentUnchecked(ChunkStatus.FULL).thenAccept((either) -> {
+- LevelChunk chunk = (LevelChunk) either.orElse(null);
+- if (chunk != null) {
+- chunkStorage.callbackExecutor.execute(() -> {
+- // Minecraft will apply the chunks tick lists to the world once the chunk got loaded, and then store the tick
+- // lists again inside the chunk once the chunk becomes inaccessible and set the chunk's needsSaving flag.
+- // These actions may however happen deferred, so we manually set the needsSaving flag already here.
+- chunk.setUnsaved(true);
+- chunk.unloadCallback();
+- });
+- }
+- }).exceptionally((throwable) -> {
+- // ensure exceptions are printed, by default this is not the case
+- MinecraftServer.LOGGER.error("Failed to schedule unload callback for chunk " + ChunkHolder.this.pos, throwable);
+- return null;
+- });
+-
+- // Run callback right away if the future was already done
+- chunkStorage.callbackExecutor.run();
+- }
+- // CraftBukkit end
+-
+- if (flag) {
+- ChunkResult<ChunkAccess> chunkresult = ChunkResult.error(() -> {
+- return "Unloaded ticket level " + String.valueOf(this.pos);
+- });
+-
+- for (int i = flag1 ? chunkstatus1.getIndex() + 1 : 0; i <= chunkstatus.getIndex(); ++i) {
+- CompletableFuture<ChunkResult<ChunkAccess>> completablefuture = (CompletableFuture) this.futures.get(i);
+-
+- if (completablefuture == null) {
+- this.futures.set(i, CompletableFuture.completedFuture(chunkresult));
+- }
+- }
+- }
+-
+- boolean flag2 = fullchunkstatus.isOrAfter(FullChunkStatus.FULL);
+- boolean flag3 = fullchunkstatus1.isOrAfter(FullChunkStatus.FULL);
+-
+- this.wasAccessibleSinceLastSave |= flag3;
+- if (!flag2 && flag3) {
+- int expectCreateCount = ++this.fullChunkCreateCount; // Paper
+- this.fullChunkFuture = chunkStorage.prepareAccessibleChunk(this);
+- this.scheduleFullChunkPromotion(chunkStorage, this.fullChunkFuture, executor, FullChunkStatus.FULL);
+- // Paper start - cache ticking ready status
+- this.fullChunkFuture.thenAccept(chunkResult -> {
+- chunkResult.ifSuccess(chunk -> {
+- if (ChunkHolder.this.fullChunkCreateCount == expectCreateCount) {
+- ChunkHolder.this.isFullChunkReady = true;
+- io.papermc.paper.chunk.system.ChunkSystem.onChunkBorder(chunk, this);
+- }
+- });
+- });
+- this.updateChunkToSave(this.fullChunkFuture, "full");
+- }
+-
+- if (flag2 && !flag3) {
+- // Paper start
+- if (this.isFullChunkReady) {
+- io.papermc.paper.chunk.system.ChunkSystem.onChunkNotBorder(this.fullChunkFuture.join().orElseThrow(IllegalStateException::new), this); // Paper
+- }
+- // Paper end
+- this.fullChunkFuture.complete(ChunkHolder.UNLOADED_LEVEL_CHUNK);
+- this.fullChunkFuture = ChunkHolder.UNLOADED_LEVEL_CHUNK_FUTURE;
+- ++this.fullChunkCreateCount; // Paper - cache ticking ready status
+- this.isFullChunkReady = false; // Paper - cache ticking ready status
+- }
+-
+- boolean flag4 = fullchunkstatus.isOrAfter(FullChunkStatus.BLOCK_TICKING);
+- boolean flag5 = fullchunkstatus1.isOrAfter(FullChunkStatus.BLOCK_TICKING);
+-
+- if (!flag4 && flag5) {
+- this.tickingChunkFuture = chunkStorage.prepareTickingChunk(this);
+- this.scheduleFullChunkPromotion(chunkStorage, this.tickingChunkFuture, executor, FullChunkStatus.BLOCK_TICKING);
+- // Paper start - cache ticking ready status
+- this.tickingChunkFuture.thenAccept(chunkResult -> {
+- chunkResult.ifSuccess(chunk -> {
+- // note: Here is a very good place to add callbacks to logic waiting on this.
+- ChunkHolder.this.isTickingReady = true;
+- io.papermc.paper.chunk.system.ChunkSystem.onChunkTicking(chunk, this);
+- });
+- });
+- // Paper end
+- this.updateChunkToSave(this.tickingChunkFuture, "ticking");
+- }
+-
+- if (flag4 && !flag5) {
+- // Paper start
+- if (this.isTickingReady) {
+- io.papermc.paper.chunk.system.ChunkSystem.onChunkNotTicking(this.tickingChunkFuture.join().orElseThrow(IllegalStateException::new), this); // Paper
+- }
+- // Paper end
+- this.tickingChunkFuture.complete(ChunkHolder.UNLOADED_LEVEL_CHUNK); this.isTickingReady = false; // Paper - cache chunk ticking stage
+- this.tickingChunkFuture = ChunkHolder.UNLOADED_LEVEL_CHUNK_FUTURE;
+- }
+-
+- boolean flag6 = fullchunkstatus.isOrAfter(FullChunkStatus.ENTITY_TICKING);
+- boolean flag7 = fullchunkstatus1.isOrAfter(FullChunkStatus.ENTITY_TICKING);
+-
+- if (!flag6 && flag7) {
+- if (this.entityTickingChunkFuture != ChunkHolder.UNLOADED_LEVEL_CHUNK_FUTURE) {
+- throw (IllegalStateException) Util.pauseInIde(new IllegalStateException());
+- }
+-
+- this.entityTickingChunkFuture = chunkStorage.prepareEntityTickingChunk(this);
+- this.scheduleFullChunkPromotion(chunkStorage, this.entityTickingChunkFuture, executor, FullChunkStatus.ENTITY_TICKING);
+- // Paper start - cache ticking ready status
+- this.entityTickingChunkFuture.thenAccept(chunkResult -> {
+- chunkResult.ifSuccess(chunk -> {
+- ChunkHolder.this.isEntityTickingReady = true;
+- io.papermc.paper.chunk.system.ChunkSystem.onChunkEntityTicking(chunk, this);
+- });
+- });
+- // Paper end
+- this.updateChunkToSave(this.entityTickingChunkFuture, "entity ticking");
+- }
+-
+- if (flag6 && !flag7) {
+- // Paper start
+- if (this.isEntityTickingReady) {
+- io.papermc.paper.chunk.system.ChunkSystem.onChunkNotEntityTicking(this.entityTickingChunkFuture.join().orElseThrow(IllegalStateException::new), this);
+- }
+- // Paper end
+- this.entityTickingChunkFuture.complete(ChunkHolder.UNLOADED_LEVEL_CHUNK); this.isEntityTickingReady = false; // Paper - cache chunk ticking stage
+- this.entityTickingChunkFuture = ChunkHolder.UNLOADED_LEVEL_CHUNK_FUTURE;
+- }
+-
+- if (!fullchunkstatus1.isOrAfter(fullchunkstatus)) {
+- this.demoteFullChunk(chunkStorage, fullchunkstatus1);
+- }
+-
+- this.onLevelChange.onLevelChange(this.pos, this::getQueueLevel, this.ticketLevel, this::setQueueLevel);
+- this.oldTicketLevel = this.ticketLevel;
+- // CraftBukkit start
+- // ChunkLoadEvent: Called after the chunk is loaded: isChunkLoaded returns true and chunk is ready to be modified by plugins.
+- if (!fullchunkstatus.isOrAfter(FullChunkStatus.FULL) && fullchunkstatus1.isOrAfter(FullChunkStatus.FULL)) {
+- this.getFutureIfPresentUnchecked(ChunkStatus.FULL).thenAccept((either) -> {
+- LevelChunk chunk = (LevelChunk) either.orElse(null);
+- if (chunk != null) {
+- chunkStorage.callbackExecutor.execute(() -> {
+- chunk.loadCallback();
+- });
+- }
+- }).exceptionally((throwable) -> {
+- // ensure exceptions are printed, by default this is not the case
+- MinecraftServer.LOGGER.error("Failed to schedule load callback for chunk " + ChunkHolder.this.pos, throwable);
+- return null;
+- });
+-
+- // Run callback right away if the future was already done
+- chunkStorage.callbackExecutor.run();
+- }
+- // CraftBukkit end
+- }
+-
+- public boolean wasAccessibleSinceLastSave() {
+- return this.wasAccessibleSinceLastSave;
++ return this.newChunkHolder.getTicketLevel(); // Paper - rewrite chunk system
+ }
+
+- public void refreshAccessibility() {
+- this.wasAccessibleSinceLastSave = ChunkLevel.fullStatus(this.ticketLevel).isOrAfter(FullChunkStatus.FULL);
+- }
++ // Paper - rewrite chunk system
+
+ public void replaceProtoChunk(ImposterProtoChunk chunk) {
+- for (int i = 0; i < this.futures.length(); ++i) {
+- CompletableFuture<ChunkResult<ChunkAccess>> completablefuture = (CompletableFuture) this.futures.get(i);
+-
+- if (completablefuture != null) {
+- ChunkAccess ichunkaccess = (ChunkAccess) ((ChunkResult) completablefuture.getNow(ChunkHolder.UNLOADED_CHUNK)).orElse((Object) null);
+-
+- if (ichunkaccess instanceof ProtoChunk) {
+- this.futures.set(i, CompletableFuture.completedFuture(ChunkResult.of(chunk)));
+- }
+- }
+- }
+-
+- this.updateChunkToSave(CompletableFuture.completedFuture(ChunkResult.of(chunk.getWrapped())), "replaceProto");
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public List<Pair<ChunkStatus, CompletableFuture<ChunkResult<ChunkAccess>>>> getAllFutures() {
+- List<Pair<ChunkStatus, CompletableFuture<ChunkResult<ChunkAccess>>>> list = new ArrayList();
+-
+- for (int i = 0; i < ChunkHolder.CHUNK_STATUSES.size(); ++i) {
+- list.add(Pair.of((ChunkStatus) ChunkHolder.CHUNK_STATUSES.get(i), (CompletableFuture) this.futures.get(i)));
+- }
+-
+- return list;
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ @FunctionalInterface
+@@ -670,15 +395,15 @@ public class ChunkHolder {
+
+ // Paper start
+ public final boolean isEntityTickingReady() {
+- return this.isEntityTickingReady;
++ return this.newChunkHolder.isEntityTickingReady(); // Paper - rewrite chunk system
+ }
+
+ public final boolean isTickingReady() {
+- return this.isTickingReady;
++ return this.newChunkHolder.isTickingReady(); // Paper - rewrite chunk system
+ }
+
+ public final boolean isFullChunkReady() {
+- return this.isFullChunkReady;
++ return this.newChunkHolder.isFullChunkReady(); // Paper - rewrite chunk system
+ }
+ // Paper end
+ }
+diff --git a/src/main/java/net/minecraft/server/level/ChunkMap.java b/src/main/java/net/minecraft/server/level/ChunkMap.java
+index d3f63185edd1db9fab3887ea3f08982435b3a23c..d6ecee1db17cb9eaeffa94b3d8dd150238fdefe5 100644
+--- a/src/main/java/net/minecraft/server/level/ChunkMap.java
++++ b/src/main/java/net/minecraft/server/level/ChunkMap.java
+@@ -122,10 +122,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ public static final int MIN_VIEW_DISTANCE = 2;
+ public static final int MAX_VIEW_DISTANCE = 32;
+ public static final int FORCED_TICKET_LEVEL = ChunkLevel.byStatus(FullChunkStatus.ENTITY_TICKING);
+- public final Long2ObjectLinkedOpenHashMap<ChunkHolder> updatingChunkMap = new Long2ObjectLinkedOpenHashMap();
+- public volatile Long2ObjectLinkedOpenHashMap<ChunkHolder> visibleChunkMap;
+- private final Long2ObjectLinkedOpenHashMap<ChunkHolder> pendingUnloads;
+- private final LongSet entitiesInLevel;
++ // Paper - rewrite chunk system
+ public final ServerLevel level;
+ private final ThreadedLevelLightEngine lightEngine;
+ public final BlockableEventLoop<Runnable> mainThreadExecutor; // Paper - public
+@@ -134,15 +131,13 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ private final ChunkGeneratorStructureState chunkGeneratorState;
+ public final Supplier<DimensionDataStorage> overworldDataStorage;
+ private final PoiManager poiManager;
+- public final LongSet toDrop;
++ // Paper - rewrite chunk system
+ private boolean modified;
+- private final ChunkTaskPriorityQueueSorter queueSorter;
+- private final ProcessorHandle<ChunkTaskPriorityQueueSorter.Message<Runnable>> worldgenMailbox;
+- private final ProcessorHandle<ChunkTaskPriorityQueueSorter.Message<Runnable>> mainThreadMailbox;
++ // Paper - rewrite chunk system
+ public final ChunkProgressListener progressListener;
+ private final ChunkStatusUpdateListener chunkStatusListener;
+ public final ChunkMap.ChunkDistanceManager distanceManager;
+- private final AtomicInteger tickingGenerated;
++ public final AtomicInteger tickingGenerated; // Paper - public
+ private final String storageName;
+ private final PlayerMap playerMap;
+ public final Int2ObjectMap<ChunkMap.TrackedEntity> entityMap;
+@@ -150,28 +145,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ private final Long2LongMap chunkSaveCooldowns;
+ private final Queue<Runnable> unloadQueue;
+ public int serverViewDistance;
+- private WorldGenContext worldGenContext;
+-
+- // CraftBukkit start - recursion-safe executor for Chunk loadCallback() and unloadCallback()
+- public final CallbackExecutor callbackExecutor = new CallbackExecutor();
+- public static final class CallbackExecutor implements java.util.concurrent.Executor, Runnable {
+-
+- private final java.util.Queue<Runnable> queue = new java.util.ArrayDeque<>();
+-
+- @Override
+- public void execute(Runnable runnable) {
+- this.queue.add(runnable);
+- }
+-
+- @Override
+- public void run() {
+- Runnable task;
+- while ((task = this.queue.poll()) != null) {
+- task.run();
+- }
+- }
+- };
+- // CraftBukkit end
++ private WorldGenContext worldGenContext; public final WorldGenContext getWorldGenContext() { return this.worldGenContext; } // Paper - rewrite chunk system
+
+ // Paper start - distance maps
+ private final com.destroystokyo.paper.util.misc.PooledLinkedHashSets<ServerPlayer> pooledLinkedPlayerHashSets = new com.destroystokyo.paper.util.misc.PooledLinkedHashSets<>();
+@@ -181,6 +155,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ int chunkZ = io.papermc.paper.util.MCUtil.getChunkCoordinate(player.getZ());
+ // Note: players need to be explicitly added to distance maps before they can be updated
+ this.nearbyPlayers.addPlayer(player);
++ this.level.playerChunkLoader.addPlayer(player); // Paper - replace chunk loader
+ }
+
+ void removePlayerFromDistanceMaps(ServerPlayer player) {
+@@ -188,6 +163,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ int chunkZ = io.papermc.paper.util.MCUtil.getChunkCoordinate(player.getZ());
+ // Note: players need to be explicitly added to distance maps before they can be updated
+ this.nearbyPlayers.removePlayer(player);
++ this.level.playerChunkLoader.removePlayer(player); // Paper - replace chunk loader
+ }
+
+ void updateMaps(ServerPlayer player) {
+@@ -195,6 +171,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ int chunkZ = io.papermc.paper.util.MCUtil.getChunkCoordinate(player.getZ());
+ // Note: players need to be explicitly added to distance maps before they can be updated
+ this.nearbyPlayers.tickPlayer(player);
++ this.level.playerChunkLoader.updatePlayer(player); // Paper - replace chunk loader
+ }
+ // Paper end
+ // Paper start
+@@ -224,17 +201,14 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }
+
+ public final ChunkHolder getUnloadingChunkHolder(int chunkX, int chunkZ) {
+- return this.pendingUnloads.get(io.papermc.paper.util.CoordinateUtils.getChunkKey(chunkX, chunkZ));
++ return null; // Paper - rewrite chunk system
+ }
+ public final io.papermc.paper.util.player.NearbyPlayers nearbyPlayers;
+ // Paper end
+
+ public ChunkMap(ServerLevel world, LevelStorageSource.LevelStorageAccess session, DataFixer dataFixer, StructureTemplateManager structureTemplateManager, Executor executor, BlockableEventLoop<Runnable> mainThreadExecutor, LightChunkGetter chunkProvider, ChunkGenerator chunkGenerator, ChunkProgressListener worldGenerationProgressListener, ChunkStatusUpdateListener chunkStatusChangeListener, Supplier<DimensionDataStorage> persistentStateManagerFactory, int viewDistance, boolean dsync) {
+ super(new RegionStorageInfo(session.getLevelId(), world.dimension(), "chunk"), session.getDimensionPath(world.dimension()).resolve("region"), dataFixer, dsync);
+- this.visibleChunkMap = this.updatingChunkMap.clone();
+- this.pendingUnloads = new Long2ObjectLinkedOpenHashMap();
+- this.entitiesInLevel = new LongOpenHashSet();
+- this.toDrop = new LongOpenHashSet();
++ // Paper - rewrite chunk system
+ this.tickingGenerated = new AtomicInteger();
+ this.playerMap = new PlayerMap();
+ this.entityMap = new Int2ObjectOpenHashMap();
+@@ -263,19 +237,17 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+
+ this.chunkGeneratorState = chunkGenerator.createState(iregistrycustom.lookupOrThrow(Registries.STRUCTURE_SET), this.randomState, j, world.spigotConfig); // Spigot
+ this.mainThreadExecutor = mainThreadExecutor;
+- ProcessorMailbox<Runnable> threadedmailbox = ProcessorMailbox.create(executor, "worldgen");
++ // Paper - rewrite chunk system
+
+ Objects.requireNonNull(mainThreadExecutor);
+- ProcessorHandle<Runnable> mailbox = ProcessorHandle.of("main", mainThreadExecutor::tell);
++ // Paper - rewrite chunk system
+
+ this.progressListener = worldGenerationProgressListener;
+ this.chunkStatusListener = chunkStatusChangeListener;
+- ProcessorMailbox<Runnable> threadedmailbox1 = ProcessorMailbox.create(executor, "light");
++ // Paper - rewrite chunk system
+
+- this.queueSorter = new ChunkTaskPriorityQueueSorter(ImmutableList.of(threadedmailbox, mailbox, threadedmailbox1), executor, Integer.MAX_VALUE);
+- this.worldgenMailbox = this.queueSorter.getProcessor(threadedmailbox, false);
+- this.mainThreadMailbox = this.queueSorter.getProcessor(mailbox, false);
+- this.lightEngine = new ThreadedLevelLightEngine(chunkProvider, this, this.level.dimensionType().hasSkyLight(), threadedmailbox1, this.queueSorter.getProcessor(threadedmailbox1, false));
++ // Paper - rewrite chunk system
++ this.lightEngine = new ThreadedLevelLightEngine(chunkProvider, this, this.level.dimensionType().hasSkyLight(), null, null); // Paper - rewrite chunk system
+ this.distanceManager = new ChunkMap.ChunkDistanceManager(executor, mainThreadExecutor);
+ this.overworldDataStorage = persistentStateManagerFactory;
+ this.poiManager = new PoiManager(new RegionStorageInfo(session.getLevelId(), world.dimension(), "poi"), path.resolve("poi"), dataFixer, dsync, iregistrycustom, world);
+@@ -333,23 +305,15 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }
+
+ boolean isChunkTracked(ServerPlayer player, int chunkX, int chunkZ) {
+- return player.getChunkTrackingView().contains(chunkX, chunkZ) && !player.connection.chunkSender.isPending(ChunkPos.asLong(chunkX, chunkZ));
++ // Paper start - rewrite player chunk loader
++ return this.level.playerChunkLoader.isChunkSent(player, chunkX, chunkZ);
++ // Paper end - rewrite player chunk loader
+ }
+
+ private boolean isChunkOnTrackedBorder(ServerPlayer player, int chunkX, int chunkZ) {
+- if (!this.isChunkTracked(player, chunkX, chunkZ)) {
+- return false;
+- } else {
+- for (int k = -1; k <= 1; ++k) {
+- for (int l = -1; l <= 1; ++l) {
+- if ((k != 0 || l != 0) && !this.isChunkTracked(player, chunkX + k, chunkZ + l)) {
+- return true;
+- }
+- }
+- }
+-
+- return false;
+- }
++ // Paper start - rewrite player chunk loader
++ return this.level.playerChunkLoader.isChunkSent(player, chunkX, chunkZ, true);
++ // Paper end - rewrite player chunk loader
+ }
+
+ protected ThreadedLevelLightEngine getLightEngine() {
+@@ -358,20 +322,22 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+
+ @Nullable
+ protected ChunkHolder getUpdatingChunkIfPresent(long pos) {
+- return (ChunkHolder) this.updatingChunkMap.get(pos);
++ // Paper start - rewrite chunk system
++ io.papermc.paper.chunk.system.scheduling.NewChunkHolder holder = this.level.chunkTaskScheduler.chunkHolderManager.getChunkHolder(pos);
++ return holder == null ? null : holder.vanillaChunkHolder;
++ // Paper end - rewrite chunk system
+ }
+
+ @Nullable
+ public ChunkHolder getVisibleChunkIfPresent(long pos) {
+- return (ChunkHolder) this.visibleChunkMap.get(pos);
++ // Paper start - rewrite chunk system
++ io.papermc.paper.chunk.system.scheduling.NewChunkHolder holder = this.level.chunkTaskScheduler.chunkHolderManager.getChunkHolder(pos);
++ return holder == null ? null : holder.vanillaChunkHolder;
++ // Paper end - rewrite chunk system
+ }
+
+ protected IntSupplier getChunkQueueLevel(long pos) {
+- return () -> {
+- ChunkHolder playerchunk = this.getVisibleChunkIfPresent(pos);
+-
+- return playerchunk == null ? ChunkTaskPriorityQueue.PRIORITY_LEVEL_COUNT - 1 : Math.min(playerchunk.getQueueLevel(), ChunkTaskPriorityQueue.PRIORITY_LEVEL_COUNT - 1);
+- };
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public String getChunkDebugData(ChunkPos chunkPos) {
+@@ -400,80 +366,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }
+
+ private CompletableFuture<ChunkResult<List<ChunkAccess>>> getChunkRangeFuture(ChunkHolder centerChunk, int margin, IntFunction<ChunkStatus> distanceToStatus) {
+- if (margin == 0) {
+- ChunkStatus chunkstatus = (ChunkStatus) distanceToStatus.apply(0);
+-
+- return centerChunk.getOrScheduleFuture(chunkstatus, this).thenApply((chunkresult) -> {
+- return chunkresult.map(List::of);
+- });
+- } else {
+- List<CompletableFuture<ChunkResult<ChunkAccess>>> list = new ArrayList();
+- List<ChunkHolder> list1 = new ArrayList();
+- ChunkPos chunkcoordintpair = centerChunk.getPos();
+- int j = chunkcoordintpair.x;
+- int k = chunkcoordintpair.z;
+-
+- for (int l = -margin; l <= margin; ++l) {
+- for (int i1 = -margin; i1 <= margin; ++i1) {
+- int j1 = Math.max(Math.abs(i1), Math.abs(l));
+- ChunkPos chunkcoordintpair1 = new ChunkPos(j + i1, k + l);
+- long k1 = chunkcoordintpair1.toLong();
+- ChunkHolder playerchunk1 = this.getUpdatingChunkIfPresent(k1);
+-
+- if (playerchunk1 == null) {
+- return CompletableFuture.completedFuture(ChunkResult.error(() -> {
+- return "Unloaded " + String.valueOf(chunkcoordintpair1);
+- }));
+- }
+-
+- ChunkStatus chunkstatus1 = (ChunkStatus) distanceToStatus.apply(j1);
+- CompletableFuture<ChunkResult<ChunkAccess>> completablefuture = playerchunk1.getOrScheduleFuture(chunkstatus1, this);
+-
+- list1.add(playerchunk1);
+- list.add(completablefuture);
+- }
+- }
+-
+- CompletableFuture<List<ChunkResult<ChunkAccess>>> completablefuture1 = Util.sequence(list);
+- CompletableFuture<ChunkResult<List<ChunkAccess>>> completablefuture2 = completablefuture1.thenApply((list2) -> {
+- List<ChunkAccess> list3 = Lists.newArrayList();
+- // CraftBukkit start - decompile error
+- int cnt = 0;
+-
+- for (Iterator iterator = list2.iterator(); iterator.hasNext(); ++cnt) {
+- final int l1 = cnt;
+- // CraftBukkit end
+- ChunkResult<ChunkAccess> chunkresult = (ChunkResult) iterator.next();
+-
+- if (chunkresult == null) {
+- throw this.debugFuturesAndCreateReportedException(new IllegalStateException("At least one of the chunk futures were null"), "n/a");
+- }
+-
+- ChunkAccess ichunkaccess = (ChunkAccess) chunkresult.orElse(null); // CraftBukkit - decompile error
+-
+- if (ichunkaccess == null) {
+- return ChunkResult.error(() -> {
+- String s = String.valueOf(new ChunkPos(j + l1 % (margin * 2 + 1), k + l1 / (margin * 2 + 1)));
+-
+- return "Unloaded " + s + " " + chunkresult.getError();
+- });
+- }
+-
+- list3.add(ichunkaccess);
+- }
+-
+- return ChunkResult.of(list3);
+- });
+- Iterator iterator = list1.iterator();
+-
+- while (iterator.hasNext()) {
+- ChunkHolder playerchunk2 = (ChunkHolder) iterator.next();
+-
+- playerchunk2.addSaveDependency("getChunkRangeFuture " + String.valueOf(chunkcoordintpair) + " " + margin, completablefuture2);
+- }
+-
+- return completablefuture2;
+- }
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public ReportedException debugFuturesAndCreateReportedException(IllegalStateException exception, String details) {
+@@ -503,263 +396,72 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }
+
+ public CompletableFuture<ChunkResult<LevelChunk>> prepareEntityTickingChunk(ChunkHolder chunk) {
+- return this.getChunkRangeFuture(chunk, 2, (i) -> {
+- return ChunkStatus.FULL;
+- }).thenApplyAsync((chunkresult) -> {
+- return chunkresult.map((list) -> {
+- return (LevelChunk) list.get(list.size() / 2);
+- });
+- }, this.mainThreadExecutor);
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ @Nullable
+ ChunkHolder updateChunkScheduling(long pos, int level, @Nullable ChunkHolder holder, int k) {
+- if (!ChunkLevel.isLoaded(k) && !ChunkLevel.isLoaded(level)) {
+- return holder;
+- } else {
+- if (holder != null) {
+- holder.setTicketLevel(level);
+- }
+-
+- if (holder != null) {
+- if (!ChunkLevel.isLoaded(level)) {
+- this.toDrop.add(pos);
+- } else {
+- this.toDrop.remove(pos);
+- }
+- }
+-
+- if (ChunkLevel.isLoaded(level) && holder == null) {
+- holder = (ChunkHolder) this.pendingUnloads.remove(pos);
+- if (holder != null) {
+- holder.setTicketLevel(level);
+- } else {
+- holder = new ChunkHolder(new ChunkPos(pos), level, this.level, this.lightEngine, this.queueSorter, this);
+- // Paper start
+- io.papermc.paper.chunk.system.ChunkSystem.onChunkHolderCreate(this.level, holder);
+- // Paper end
+- }
+-
+- // Paper start
+- holder.onChunkAdd();
+- // Paper end
+- this.updatingChunkMap.put(pos, holder);
+- this.modified = true;
+- }
+-
+- return holder;
+- }
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ @Override
+ public void close() throws IOException {
+- try {
+- this.queueSorter.close();
+- this.poiManager.close();
+- } finally {
+- super.close();
+- }
++ throw new UnsupportedOperationException("Use ServerChunkCache#close"); // Paper - rewrite chunk system
++ }
+
++ // Paper start - rewrite chunk system
++ protected void saveIncrementally() {
++ this.level.chunkTaskScheduler.chunkHolderManager.autoSave(); // Paper - rewrite chunk system
+ }
++ // Paper end - rewrite chunk system
+
+ protected void saveAllChunks(boolean flush) {
+- if (flush) {
+- List<ChunkHolder> list = io.papermc.paper.chunk.system.ChunkSystem.getVisibleChunkHolders(this.level).stream().filter(ChunkHolder::wasAccessibleSinceLastSave).peek(ChunkHolder::refreshAccessibility).toList(); // Paper
+- MutableBoolean mutableboolean = new MutableBoolean();
+-
+- do {
+- mutableboolean.setFalse();
+- list.stream().map((playerchunk) -> {
+- CompletableFuture completablefuture;
+-
+- do {
+- completablefuture = playerchunk.getChunkToSave();
+- BlockableEventLoop iasynctaskhandler = this.mainThreadExecutor;
+-
+- Objects.requireNonNull(completablefuture);
+- iasynctaskhandler.managedBlock(completablefuture::isDone);
+- } while (completablefuture != playerchunk.getChunkToSave());
+-
+- return (ChunkAccess) completablefuture.join();
+- }).filter((ichunkaccess) -> {
+- return ichunkaccess instanceof ImposterProtoChunk || ichunkaccess instanceof LevelChunk;
+- }).filter(this::save).forEach((ichunkaccess) -> {
+- mutableboolean.setTrue();
+- });
+- } while (mutableboolean.isTrue());
+-
+- this.processUnloads(() -> {
+- return true;
+- });
+- this.flushWorker();
+- } else {
+- io.papermc.paper.chunk.system.ChunkSystem.getVisibleChunkHolders(this.level).forEach(this::saveChunkIfNeeded);
+- }
+-
++ this.level.chunkTaskScheduler.chunkHolderManager.saveAllChunks(flush, false, false); // Paper - rewrite chunk system
+ }
+
+ protected void tick(BooleanSupplier shouldKeepTicking) {
+ ProfilerFiller gameprofilerfiller = this.level.getProfiler();
+
++ try (Timing ignored = this.level.timings.poiUnload.startTiming()) { // Paper
+ gameprofilerfiller.push("poi");
+ this.poiManager.tick(shouldKeepTicking);
++ } // Paper
+ gameprofilerfiller.popPush("chunk_unload");
+ if (!this.level.noSave()) {
++ try (Timing ignored = this.level.timings.chunkUnload.startTiming()) { // Paper
+ this.processUnloads(shouldKeepTicking);
++ } // Paper
+ }
+
+ gameprofilerfiller.pop();
+ }
+
+ public boolean hasWork() {
+- return this.lightEngine.hasLightWork() || !this.pendingUnloads.isEmpty() || io.papermc.paper.chunk.system.ChunkSystem.hasAnyChunkHolders(this.level) || this.poiManager.hasWork() || !this.toDrop.isEmpty() || !this.unloadQueue.isEmpty() || this.queueSorter.hasWork() || this.distanceManager.hasTickets(); // Paper
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ private void processUnloads(BooleanSupplier shouldKeepTicking) {
+- LongIterator longiterator = this.toDrop.iterator();
+-
+- for (int i = 0; longiterator.hasNext() && (shouldKeepTicking.getAsBoolean() || i < 200 || this.toDrop.size() > 2000); longiterator.remove()) {
+- long j = longiterator.nextLong();
+- ChunkHolder playerchunk = (ChunkHolder) this.updatingChunkMap.remove(j);
+-
+- if (playerchunk != null) {
+- playerchunk.onChunkRemove(); // Paper
+- this.pendingUnloads.put(j, playerchunk);
+- this.modified = true;
+- ++i;
+- this.scheduleUnload(j, playerchunk);
+- }
+- }
+-
+- int k = Math.max(0, this.unloadQueue.size() - 2000);
+-
+- Runnable runnable;
+-
+- while ((shouldKeepTicking.getAsBoolean() || k > 0) && (runnable = (Runnable) this.unloadQueue.poll()) != null) {
+- --k;
+- runnable.run();
+- }
+-
+- int l = 0;
+- Iterator<ChunkHolder> objectiterator = io.papermc.paper.chunk.system.ChunkSystem.getVisibleChunkHolders(this.level).iterator(); // Paper
+-
+- while (l < 20 && shouldKeepTicking.getAsBoolean() && objectiterator.hasNext()) {
+- if (this.saveChunkIfNeeded((ChunkHolder) objectiterator.next())) {
+- ++l;
+- }
+- }
++ this.level.chunkTaskScheduler.chunkHolderManager.processUnloads(); // Paper - rewrite chunk system
+
+ }
+
+ private void scheduleUnload(long pos, ChunkHolder holder) {
+- CompletableFuture<ChunkAccess> completablefuture = holder.getChunkToSave();
+- Consumer<ChunkAccess> consumer = (ichunkaccess) -> { // CraftBukkit - decompile error
+- CompletableFuture<ChunkAccess> completablefuture1 = holder.getChunkToSave();
+-
+- if (completablefuture1 != completablefuture) {
+- this.scheduleUnload(pos, holder);
+- } else {
+- // Paper start
+- boolean removed;
+- if ((removed = this.pendingUnloads.remove(pos, holder)) && ichunkaccess != null) {
+- io.papermc.paper.chunk.system.ChunkSystem.onChunkHolderDelete(this.level, holder);
+- // Paper end
+- if (ichunkaccess instanceof LevelChunk) {
+- ((LevelChunk) ichunkaccess).setLoaded(false);
+- }
+-
+- this.save(ichunkaccess);
+- if (this.entitiesInLevel.remove(pos) && ichunkaccess instanceof LevelChunk) {
+- LevelChunk chunk = (LevelChunk) ichunkaccess;
+-
+- this.level.unload(chunk);
+- }
+-
+- this.lightEngine.updateChunkStatus(ichunkaccess.getPos());
+- this.lightEngine.tryScheduleUpdate();
+- this.progressListener.onStatusChange(ichunkaccess.getPos(), (ChunkStatus) null);
+- this.chunkSaveCooldowns.remove(ichunkaccess.getPos().toLong());
+- } else if (removed) { // Paper start
+- io.papermc.paper.chunk.system.ChunkSystem.onChunkHolderDelete(this.level, holder);
+- } // Paper end
+-
+- }
+- };
+- Queue queue = this.unloadQueue;
+-
+- Objects.requireNonNull(this.unloadQueue);
+- completablefuture.thenAcceptAsync(consumer, queue::add).whenComplete((ovoid, throwable) -> {
+- if (throwable != null) {
+- ChunkMap.LOGGER.error("Failed to save chunk {}", holder.getPos(), throwable);
+- }
+-
+- });
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ protected boolean promoteChunkMap() {
+- if (!this.modified) {
+- return false;
+- } else {
+- this.visibleChunkMap = this.updatingChunkMap.clone();
+- this.modified = false;
+- return true;
+- }
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public CompletableFuture<ChunkResult<ChunkAccess>> schedule(ChunkHolder holder, ChunkStatus requiredStatus) {
+- ChunkPos chunkcoordintpair = holder.getPos();
+-
+- if (requiredStatus == ChunkStatus.EMPTY) {
+- return this.scheduleChunkLoad(chunkcoordintpair).thenApply(ChunkResult::of);
+- } else {
+- if (requiredStatus == ChunkStatus.LIGHT) {
+- this.distanceManager.addTicket(TicketType.LIGHT, chunkcoordintpair, ChunkLevel.byStatus(ChunkStatus.LIGHT), chunkcoordintpair);
+- }
+-
+- if (!requiredStatus.hasLoadDependencies()) {
+- ChunkAccess ichunkaccess = (ChunkAccess) ((ChunkResult) holder.getOrScheduleFuture(requiredStatus.getParent(), this).getNow(ChunkHolder.UNLOADED_CHUNK)).orElse((Object) null);
+-
+- if (ichunkaccess != null && ichunkaccess.getStatus().isOrAfter(requiredStatus)) {
+- CompletableFuture<ChunkAccess> completablefuture = requiredStatus.load(this.worldGenContext, (ichunkaccess1) -> {
+- return this.protoChunkToFullChunk(holder, ichunkaccess1);
+- }, ichunkaccess);
+-
+- this.progressListener.onStatusChange(chunkcoordintpair, requiredStatus);
+- return completablefuture.thenApply(ChunkResult::of);
+- }
+- }
+-
+- return this.scheduleChunkGeneration(holder, requiredStatus);
+- }
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ private CompletableFuture<ChunkAccess> scheduleChunkLoad(ChunkPos pos) {
+- return this.readChunk(pos).thenApply((optional) -> {
+- return optional.filter((nbttagcompound) -> {
+- boolean flag = ChunkMap.isChunkDataValid(nbttagcompound);
+-
+- if (!flag) {
+- ChunkMap.LOGGER.error("Chunk file at {} is missing level data, skipping", pos);
+- }
+-
+- return flag;
+- });
+- }).thenApplyAsync((optional) -> {
+- this.level.getProfiler().incrementCounter("chunkLoad");
+- if (optional.isPresent()) {
+- ProtoChunk protochunk = ChunkSerializer.read(this.level, this.poiManager, pos, (CompoundTag) optional.get());
+-
+- this.markPosition(pos, protochunk.getStatus().getChunkType());
+- return protochunk;
+- } else {
+- return this.createEmptyChunk(pos);
+- }
+- }, this.mainThreadExecutor).exceptionallyAsync((throwable) -> {
+- return this.handleChunkLoadFailure(throwable, pos);
+- }, this.mainThreadExecutor);
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+- private static boolean isChunkDataValid(CompoundTag nbt) {
++ public static boolean isChunkDataValid(CompoundTag nbt) { // Paper - async chunk loading
+ return nbt.contains("Status", 8);
+ }
+
+@@ -816,60 +518,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }
+
+ private CompletableFuture<ChunkResult<ChunkAccess>> scheduleChunkGeneration(ChunkHolder holder, ChunkStatus requiredStatus) {
+- ChunkPos chunkcoordintpair = holder.getPos();
+- CompletableFuture<ChunkResult<List<ChunkAccess>>> completablefuture = this.getChunkRangeFuture(holder, requiredStatus.getRange(), (i) -> {
+- return this.getDependencyStatus(requiredStatus, i);
+- });
+-
+- this.level.getProfiler().incrementCounter(() -> {
+- return "chunkGenerate " + String.valueOf(requiredStatus);
+- });
+- Executor executor = (runnable) -> {
+- this.worldgenMailbox.tell(ChunkTaskPriorityQueueSorter.message(holder, runnable));
+- };
+-
+- return completablefuture.thenComposeAsync((chunkresult) -> {
+- List<ChunkAccess> list = (List) chunkresult.orElse(null); // CraftBukkit - decompile error
+-
+- if (list == null) {
+- this.releaseLightTicket(chunkcoordintpair);
+- Objects.requireNonNull(chunkresult);
+- return CompletableFuture.completedFuture(ChunkResult.error(chunkresult::getError));
+- } else {
+- try {
+- ChunkAccess ichunkaccess = (ChunkAccess) list.get(list.size() / 2);
+- CompletableFuture completablefuture1;
+-
+- if (ichunkaccess.getStatus().isOrAfter(requiredStatus)) {
+- completablefuture1 = requiredStatus.load(this.worldGenContext, (ichunkaccess1) -> {
+- return this.protoChunkToFullChunk(holder, ichunkaccess1);
+- }, ichunkaccess);
+- } else {
+- completablefuture1 = requiredStatus.generate(this.worldGenContext, executor, (ichunkaccess1) -> {
+- return this.protoChunkToFullChunk(holder, ichunkaccess1);
+- }, list);
+- }
+-
+- this.progressListener.onStatusChange(chunkcoordintpair, requiredStatus);
+- return completablefuture1.thenApply(ChunkResult::of);
+- } catch (Exception exception) {
+- exception.getStackTrace();
+- CrashReport crashreport = CrashReport.forThrowable(exception, "Exception generating new chunk");
+- CrashReportCategory crashreportsystemdetails = crashreport.addCategory("Chunk to be generated");
+-
+- crashreportsystemdetails.setDetail("Status being generated", () -> {
+- return BuiltInRegistries.CHUNK_STATUS.getKey(requiredStatus).toString();
+- });
+- crashreportsystemdetails.setDetail("Location", (Object) String.format(Locale.ROOT, "%d,%d", chunkcoordintpair.x, chunkcoordintpair.z));
+- crashreportsystemdetails.setDetail("Position hash", (Object) ChunkPos.asLong(chunkcoordintpair.x, chunkcoordintpair.z));
+- crashreportsystemdetails.setDetail("Generator", (Object) this.generator);
+- this.mainThreadExecutor.execute(() -> {
+- throw new ReportedException(crashreport);
+- });
+- throw new ReportedException(crashreport);
+- }
+- }
+- }, executor);
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ protected void releaseLightTicket(ChunkPos pos) {
+@@ -880,7 +529,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }));
+ }
+
+- private ChunkStatus getDependencyStatus(ChunkStatus centerChunkTargetStatus, int distance) {
++ public static ChunkStatus getDependencyStatus(ChunkStatus centerChunkTargetStatus, int distance) { // Paper -> public, static
+ ChunkStatus chunkstatus1;
+
+ if (distance == 0) {
+@@ -892,7 +541,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ return chunkstatus1;
+ }
+
+- private static void postLoadProtoChunk(ServerLevel world, List<CompoundTag> nbt) {
++ public static void postLoadProtoChunk(ServerLevel world, List<CompoundTag> nbt, ChunkPos position) { // Paper - public and add chunk position parameter
+ if (!nbt.isEmpty()) {
+ // CraftBukkit start - these are spawned serialized (DefinedStructure) and we don't call an add event below at the moment due to ordering complexities
+ world.addWorldGenChunkEntities(EntityType.loadEntitiesRecursive(nbt, world).filter((entity) -> {
+@@ -908,45 +557,14 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }
+ checkDupeUUID(world, entity); // Paper - duplicate uuid resolving
+ return !needsRemoval;
+- }));
++ }), position); // Paper - rewrite chunk system
+ // CraftBukkit end
+ }
+
+ }
+
+ private CompletableFuture<ChunkAccess> protoChunkToFullChunk(ChunkHolder playerchunk, ChunkAccess ichunkaccess) {
+- return CompletableFuture.supplyAsync(() -> {
+- ChunkPos chunkcoordintpair = playerchunk.getPos();
+- ProtoChunk protochunk = (ProtoChunk) ichunkaccess;
+- LevelChunk chunk;
+-
+- if (protochunk instanceof ImposterProtoChunk) {
+- chunk = ((ImposterProtoChunk) protochunk).getWrapped();
+- } else {
+- chunk = new LevelChunk(this.level, protochunk, (chunk1) -> {
+- ChunkMap.postLoadProtoChunk(this.level, protochunk.getEntities());
+- });
+- playerchunk.replaceProtoChunk(new ImposterProtoChunk(chunk, false));
+- }
+-
+- chunk.setFullStatus(() -> {
+- return ChunkLevel.fullStatus(playerchunk.getTicketLevel());
+- });
+- chunk.runPostLoad();
+- if (this.entitiesInLevel.add(chunkcoordintpair.toLong())) {
+- chunk.setLoaded(true);
+- chunk.registerAllBlockEntitiesAfterLevelLoad();
+- chunk.registerTickContainerInLevel(this.level);
+- }
+-
+- return chunk;
+- }, (runnable) -> {
+- ProcessorHandle mailbox = this.mainThreadMailbox;
+- long i = playerchunk.getPos().toLong();
+-
+- Objects.requireNonNull(playerchunk);
+- mailbox.tell(ChunkTaskPriorityQueueSorter.message(runnable, i, playerchunk::getTicketLevel));
+- });
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ // Paper start - duplicate uuid resolving
+@@ -990,61 +608,16 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ // Paper end - duplicate uuid resolving
+
+ public CompletableFuture<ChunkResult<LevelChunk>> prepareTickingChunk(ChunkHolder holder) {
+- CompletableFuture<ChunkResult<List<ChunkAccess>>> completablefuture = this.getChunkRangeFuture(holder, 1, (i) -> {
+- return ChunkStatus.FULL;
+- });
+- CompletableFuture<ChunkResult<LevelChunk>> completablefuture1 = completablefuture.thenApplyAsync((chunkresult) -> {
+- return chunkresult.map((list) -> {
+- return (LevelChunk) list.get(list.size() / 2);
+- });
+- }, (runnable) -> {
+- this.mainThreadMailbox.tell(ChunkTaskPriorityQueueSorter.message(holder, runnable));
+- }).thenApplyAsync((chunkresult) -> {
+- return chunkresult.ifSuccess((chunk) -> {
+- chunk.postProcessGeneration();
+- this.level.startTickingChunk(chunk);
+- CompletableFuture<?> completablefuture2 = holder.getChunkSendSyncFuture();
+-
+- if (completablefuture2.isDone()) {
+- this.onChunkReadyToSend(chunk);
+- } else {
+- completablefuture2.thenAcceptAsync((object) -> {
+- this.onChunkReadyToSend(chunk);
+- }, this.mainThreadExecutor);
+- }
+-
+- });
+- }, this.mainThreadExecutor);
+-
+- completablefuture1.handle((chunkresult, throwable) -> {
+- this.tickingGenerated.getAndIncrement();
+- return null;
+- });
+- return completablefuture1;
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ private void onChunkReadyToSend(LevelChunk chunk) {
+- ChunkPos chunkcoordintpair = chunk.getPos();
+- Iterator iterator = this.playerMap.getAllPlayers().iterator();
+-
+- while (iterator.hasNext()) {
+- ServerPlayer entityplayer = (ServerPlayer) iterator.next();
+-
+- if (entityplayer.getChunkTrackingView().contains(chunkcoordintpair)) {
+- ChunkMap.markChunkPendingToSend(entityplayer, chunk);
+- }
+- }
++ throw new UnsupportedOperationException(); // Paper - rewrite player chunk loader
+
+ }
+
+ public CompletableFuture<ChunkResult<LevelChunk>> prepareAccessibleChunk(ChunkHolder holder) {
+- return this.getChunkRangeFuture(holder, 1, ChunkStatus::getStatusAroundFullChunk).thenApplyAsync((chunkresult) -> {
+- return chunkresult.map((list) -> {
+- return (LevelChunk) list.get(list.size() / 2);
+- });
+- }, (runnable) -> {
+- this.mainThreadMailbox.tell(ChunkTaskPriorityQueueSorter.message(holder, runnable));
+- });
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public int getTickingGenerated() {
+@@ -1052,96 +625,15 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }
+
+ private boolean saveChunkIfNeeded(ChunkHolder chunkHolder) {
+- if (!chunkHolder.wasAccessibleSinceLastSave()) {
+- return false;
+- } else {
+- ChunkAccess ichunkaccess = (ChunkAccess) chunkHolder.getChunkToSave().getNow(null); // CraftBukkit - decompile error
+-
+- if (!(ichunkaccess instanceof ImposterProtoChunk) && !(ichunkaccess instanceof LevelChunk)) {
+- return false;
+- } else {
+- long i = ichunkaccess.getPos().toLong();
+- long j = this.chunkSaveCooldowns.getOrDefault(i, -1L);
+- long k = System.currentTimeMillis();
+-
+- if (k < j) {
+- return false;
+- } else {
+- boolean flag = this.save(ichunkaccess);
+-
+- chunkHolder.refreshAccessibility();
+- if (flag) {
+- this.chunkSaveCooldowns.put(i, k + 10000L);
+- }
+-
+- return flag;
+- }
+- }
+- }
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public boolean save(ChunkAccess chunk) {
+- this.poiManager.flush(chunk.getPos());
+- if (!chunk.isUnsaved()) {
+- return false;
+- } else {
+- chunk.setUnsaved(false);
+- ChunkPos chunkcoordintpair = chunk.getPos();
+-
+- try {
+- ChunkStatus chunkstatus = chunk.getStatus();
+-
+- if (chunkstatus.getChunkType() != ChunkType.LEVELCHUNK) {
+- if (this.isExistingChunkFull(chunkcoordintpair)) {
+- return false;
+- }
+-
+- if (chunkstatus == ChunkStatus.EMPTY && chunk.getAllStarts().values().stream().noneMatch(StructureStart::isValid)) {
+- return false;
+- }
+- }
+-
+- this.level.getProfiler().incrementCounter("chunkSave");
+- CompoundTag nbttagcompound = ChunkSerializer.write(this.level, chunk);
+-
+- this.write(chunkcoordintpair, nbttagcompound).exceptionallyAsync((throwable) -> {
+- this.level.getServer().reportChunkSaveFailure(chunkcoordintpair);
+- return null;
+- }, this.mainThreadExecutor);
+- this.markPosition(chunkcoordintpair, chunkstatus.getChunkType());
+- return true;
+- } catch (Exception exception) {
+- ChunkMap.LOGGER.error("Failed to save chunk {},{}", new Object[]{chunkcoordintpair.x, chunkcoordintpair.z, exception});
+- this.level.getServer().reportChunkSaveFailure(chunkcoordintpair);
+- return false;
+- }
+- }
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ private boolean isExistingChunkFull(ChunkPos pos) {
+- byte b0 = this.chunkTypeCache.get(pos.toLong());
+-
+- if (b0 != 0) {
+- return b0 == 1;
+- } else {
+- CompoundTag nbttagcompound;
+-
+- try {
+- nbttagcompound = (CompoundTag) ((Optional) this.readChunk(pos).join()).orElse((Object) null);
+- if (nbttagcompound == null) {
+- this.markPositionReplaceable(pos);
+- return false;
+- }
+- } catch (Exception exception) {
+- ChunkMap.LOGGER.error("Failed to read chunk {}", pos, exception);
+- this.markPositionReplaceable(pos);
+- return false;
+- }
+-
+- ChunkType chunktype = ChunkSerializer.getChunkTypeFromTag(nbttagcompound);
+-
+- return this.markPosition(pos, chunktype) == 1;
+- }
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public void setServerViewDistance(int watchDistance) { // Paper - public
+@@ -1149,37 +641,36 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+
+ if (j != this.serverViewDistance) {
+ this.serverViewDistance = j;
+- this.distanceManager.updatePlayerTickets(this.serverViewDistance);
+- Iterator iterator = this.playerMap.getAllPlayers().iterator();
++ this.level.playerChunkLoader.setLoadDistance(this.serverViewDistance + 1); // Paper - replace player loader system
++ }
+
+- while (iterator.hasNext()) {
+- ServerPlayer entityplayer = (ServerPlayer) iterator.next();
++ }
+
+- this.updateChunkTracking(entityplayer);
+- }
+- }
++ // Paper start - replace player loader system
++ public void setTickViewDistance(int distance) {
++ this.level.playerChunkLoader.setTickDistance(distance);
++ }
+
++ public void setSendViewDistance(int distance) {
++ this.level.playerChunkLoader.setSendDistance(distance);
+ }
++ // Paper end - replace player loader system
+
+ public int getPlayerViewDistance(ServerPlayer player) { // Paper - public
+- return Mth.clamp(player.requestedViewDistance(), 2, this.serverViewDistance);
++ return io.papermc.paper.chunk.system.ChunkSystem.getSendViewDistance(player); // Paper - per player view distance
+ }
+
+ private void markChunkPendingToSend(ServerPlayer player, ChunkPos pos) {
+- LevelChunk chunk = this.getChunkToSend(pos.toLong());
+-
+- if (chunk != null) {
+- ChunkMap.markChunkPendingToSend(player, chunk);
+- }
++ throw new UnsupportedOperationException(); // Paper - per player view distance
+
+ }
+
+ private static void markChunkPendingToSend(ServerPlayer player, LevelChunk chunk) {
+- player.connection.chunkSender.markChunkPendingToSend(chunk);
++ throw new UnsupportedOperationException(); // Paper - rewrite player chunk loader
+ }
+
+ private static void dropChunk(ServerPlayer player, ChunkPos pos) {
+- player.connection.chunkSender.dropChunk(player, pos);
++ // Paper - rewrite player chunk loader
+ }
+
+ @Nullable
+@@ -1202,30 +693,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }
+
+ void dumpChunks(Writer writer) throws IOException {
+- CsvOutput csvwriter = CsvOutput.builder().addColumn("x").addColumn("z").addColumn("level").addColumn("in_memory").addColumn("status").addColumn("full_status").addColumn("accessible_ready").addColumn("ticking_ready").addColumn("entity_ticking_ready").addColumn("ticket").addColumn("spawning").addColumn("block_entity_count").addColumn("ticking_ticket").addColumn("ticking_level").addColumn("block_ticks").addColumn("fluid_ticks").build(writer);
+- TickingTracker tickingtracker = this.distanceManager.tickingTracker();
+- Iterator<ChunkHolder> objectbidirectionaliterator = io.papermc.paper.chunk.system.ChunkSystem.getVisibleChunkHolders(this.level).iterator(); // Paper
+-
+- while (objectbidirectionaliterator.hasNext()) {
+- ChunkHolder playerchunk = objectbidirectionaliterator.next(); // Paper
+- long i = playerchunk.pos.toLong(); // Paper
+- ChunkPos chunkcoordintpair = new ChunkPos(i);
+- // Paper
+- Optional<ChunkAccess> optional = Optional.ofNullable(playerchunk.getLastAvailable());
+- Optional<LevelChunk> optional1 = optional.flatMap((ichunkaccess) -> {
+- return ichunkaccess instanceof LevelChunk ? Optional.of((LevelChunk) ichunkaccess) : Optional.empty();
+- });
+-
+- // CraftBukkit - decompile error
+- csvwriter.writeRow(chunkcoordintpair.x, chunkcoordintpair.z, playerchunk.getTicketLevel(), optional.isPresent(), optional.map(ChunkAccess::getStatus).orElse(null), optional1.map(LevelChunk::getFullStatus).orElse(null), ChunkMap.printFuture(playerchunk.getFullChunkFuture()), ChunkMap.printFuture(playerchunk.getTickingChunkFuture()), ChunkMap.printFuture(playerchunk.getEntityTickingChunkFuture()), this.distanceManager.getTicketDebugString(i), this.anyPlayerCloseEnoughForSpawning(chunkcoordintpair), optional1.map((chunk) -> {
+- return chunk.getBlockEntities().size();
+- }).orElse(0), tickingtracker.getTicketDebugString(i), tickingtracker.getLevel(i), optional1.map((chunk) -> {
+- return chunk.getBlockTicks().count();
+- }).orElse(0), optional1.map((chunk) -> {
+- return chunk.getFluidTicks().count();
+- }).orElse(0));
+- }
+-
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ private static String printFuture(CompletableFuture<ChunkResult<LevelChunk>> future) {
+@@ -1240,6 +708,32 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }
+ }
+
++ // Paper start - Asynchronous chunk io
++ @Nullable
++ @Override
++ public CompoundTag readSync(ChunkPos chunkcoordintpair) throws IOException {
++ if (!io.papermc.paper.chunk.system.io.RegionFileIOThread.isRegionFileThread()) {
++ return io.papermc.paper.chunk.system.io.RegionFileIOThread.loadData(
++ this.level, chunkcoordintpair.x, chunkcoordintpair.z, io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.CHUNK_DATA,
++ io.papermc.paper.chunk.system.io.RegionFileIOThread.getIOBlockingPriorityForCurrentThread()
++ );
++ }
++ return super.readSync(chunkcoordintpair);
++ }
++
++ @Override
++ public CompletableFuture<Void> write(ChunkPos chunkcoordintpair, CompoundTag nbttagcompound) throws IOException {
++ if (!io.papermc.paper.chunk.system.io.RegionFileIOThread.isRegionFileThread()) {
++ io.papermc.paper.chunk.system.io.RegionFileIOThread.scheduleSave(
++ this.level, chunkcoordintpair.x, chunkcoordintpair.z, nbttagcompound,
++ io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.CHUNK_DATA);
++ return null;
++ }
++ super.write(chunkcoordintpair, nbttagcompound);
++ return null;
++ }
++ // Paper end
++
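
The readSync/write overrides above route all region-file IO through the dedicated RegionFileIOThread unless the caller is already that thread. A self-contained sketch of the pattern, where SingleThreadIo and the readDirect/writeDirect stand-ins are illustrative rather than Paper's actual API:

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    final class SingleThreadIo {
        private volatile Thread ioThread;
        private final ExecutorService executor = Executors.newSingleThreadExecutor(r -> {
            this.ioThread = new Thread(r, "region-io");
            this.ioThread.setDaemon(true);
            return this.ioThread;
        });

        private boolean onIoThread() {
            return Thread.currentThread() == this.ioThread;
        }

        // Read: callers on other threads block on work scheduled to the IO thread;
        // a caller already on the IO thread falls through to the direct implementation.
        byte[] readSync(long chunkKey) {
            if (!this.onIoThread()) {
                return CompletableFuture.supplyAsync(() -> this.readDirect(chunkKey), this.executor).join();
            }
            return this.readDirect(chunkKey);
        }

        // Write: fire-and-forget onto the IO thread, so save callers never block.
        void write(long chunkKey, byte[] data) {
            if (!this.onIoThread()) {
                this.executor.execute(() -> this.writeDirect(chunkKey, data));
                return;
            }
            this.writeDirect(chunkKey, data);
        }

        private byte[] readDirect(long chunkKey) { return new byte[0]; } // stand-in for region-file access
        private void writeDirect(long chunkKey, byte[] data) {}          // stand-in for region-file access
    }
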
+ private CompletableFuture<Optional<CompoundTag>> readChunk(ChunkPos chunkPos) {
+ return this.read(chunkPos).thenApplyAsync((optional) -> {
+ return optional.map((nbttagcompound) -> this.upgradeChunkTag(nbttagcompound, chunkPos)); // CraftBukkit
+@@ -1340,8 +834,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ this.distanceManager.addPlayer(SectionPos.of((EntityAccess) player), player);
+ }
+
+- player.setChunkTrackingView(ChunkTrackingView.EMPTY);
+- this.updateChunkTracking(player);
++ // Paper - handled by player chunk loader
+ this.addPlayerToDistanceMaps(player); // Paper - distance maps
+ } else {
+ SectionPos sectionposition = player.getLastSectionPos();
+@@ -1352,7 +845,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }
+
+ this.removePlayerFromDistanceMaps(player); // Paper - distance maps
+- this.applyChunkTrackingView(player, ChunkTrackingView.EMPTY);
++ // Paper - handled by player chunk loader
+ }
+
+ }
+@@ -1400,71 +893,30 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ this.playerMap.unIgnorePlayer(player);
+ }
+
+- this.updateChunkTracking(player);
++ // Paper - replaced by PlayerChunkLoader
+ }
+
+ this.updateMaps(player); // Paper - distance maps
+ }
+
+ private void updateChunkTracking(ServerPlayer player) {
+- ChunkPos chunkcoordintpair = player.chunkPosition();
+- int i = this.getPlayerViewDistance(player);
+- ChunkTrackingView chunktrackingview = player.getChunkTrackingView();
+-
+- if (chunktrackingview instanceof ChunkTrackingView.Positioned chunktrackingview_a) {
+- if (chunktrackingview_a.center().equals(chunkcoordintpair) && chunktrackingview_a.viewDistance() == i) {
+- return;
+- }
+- }
+-
+- this.applyChunkTrackingView(player, ChunkTrackingView.of(chunkcoordintpair, i));
++ throw new UnsupportedOperationException(); // Paper - replaced by PlayerChunkLoader
+ }
+
+ private void applyChunkTrackingView(ServerPlayer player, ChunkTrackingView chunkFilter) {
+- if (player.level() == this.level) {
+- ChunkTrackingView chunktrackingview1 = player.getChunkTrackingView();
+-
+- if (chunkFilter instanceof ChunkTrackingView.Positioned) {
+- label15:
+- {
+- ChunkTrackingView.Positioned chunktrackingview_a = (ChunkTrackingView.Positioned) chunkFilter;
+-
+- if (chunktrackingview1 instanceof ChunkTrackingView.Positioned) {
+- ChunkTrackingView.Positioned chunktrackingview_a1 = (ChunkTrackingView.Positioned) chunktrackingview1;
+-
+- if (chunktrackingview_a1.center().equals(chunktrackingview_a.center())) {
+- break label15;
+- }
+- }
+-
+- player.connection.send(new ClientboundSetChunkCacheCenterPacket(chunktrackingview_a.center().x, chunktrackingview_a.center().z));
+- }
+- }
+-
+- ChunkTrackingView.difference(chunktrackingview1, chunkFilter, (chunkcoordintpair) -> {
+- this.markChunkPendingToSend(player, chunkcoordintpair);
+- }, (chunkcoordintpair) -> {
+- ChunkMap.dropChunk(player, chunkcoordintpair);
+- });
+- player.setChunkTrackingView(chunkFilter);
+- }
++ throw new UnsupportedOperationException(); // Paper - replaced by PlayerChunkLoader
+ }
+
+ @Override
+ public List<ServerPlayer> getPlayers(ChunkPos chunkPos, boolean onlyOnWatchDistanceEdge) {
+- Set<ServerPlayer> set = this.playerMap.getAllPlayers();
+- Builder<ServerPlayer> builder = ImmutableList.builder();
+- Iterator iterator = set.iterator();
+-
+- while (iterator.hasNext()) {
+- ServerPlayer entityplayer = (ServerPlayer) iterator.next();
+-
+- if (onlyOnWatchDistanceEdge && this.isChunkOnTrackedBorder(entityplayer, chunkPos.x, chunkPos.z) || !onlyOnWatchDistanceEdge && this.isChunkTracked(entityplayer, chunkPos.x, chunkPos.z)) {
+- builder.add(entityplayer);
+- }
++ // Paper start - per player view distance
++ ChunkHolder holder = this.getVisibleChunkIfPresent(chunkPos.toLong());
++ if (holder == null) {
++ return new java.util.ArrayList<>();
++ } else {
++ return holder.getPlayers(onlyOnWatchDistanceEdge);
+ }
+-
+- return builder.build();
++ // Paper end - per player view distance
+ }
+
+ public void addEntity(Entity entity) {
+@@ -1535,13 +987,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }
+
+ protected void tick() {
+- Iterator iterator = this.playerMap.getAllPlayers().iterator();
+-
+- while (iterator.hasNext()) {
+- ServerPlayer entityplayer = (ServerPlayer) iterator.next();
+-
+- this.updateChunkTracking(entityplayer);
+- }
++ // Paper - replaced by PlayerChunkLoader
+
+ List<ServerPlayer> list = Lists.newArrayList();
+ List<ServerPlayer> list1 = this.level.players();
+@@ -1648,16 +1094,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+ }
+
+ public void waitForLightBeforeSending(ChunkPos centerPos, int radius) {
+- int j = radius + 1;
+-
+- ChunkPos.rangeClosed(centerPos, j).forEach((chunkcoordintpair1) -> {
+- ChunkHolder playerchunk = this.getVisibleChunkIfPresent(chunkcoordintpair1.toLong());
+-
+- if (playerchunk != null) {
+- playerchunk.addSendDependency(this.lightEngine.waitForPendingTasks(chunkcoordintpair1.x, chunkcoordintpair1.z));
+- }
+-
+- });
++ // Paper - rewrite player chunk loader
+ }
+
+ public class ChunkDistanceManager extends DistanceManager { // Paper - public
+@@ -1668,7 +1105,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+
+ @Override
+ protected boolean isChunkToRemove(long pos) {
+- return ChunkMap.this.toDrop.contains(pos);
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ @Nullable
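
Across the ChunkMap hunks above, two replacement strategies recur: paths with no remaining callers are stubbed to throw, so any unexpected caller fails loudly instead of mutating state the rewrite no longer maintains, while still-live entry points become thin delegations into the new scheduler. A condensed sketch of the idiom, with ChunkScheduler as a hypothetical stand-in for the chunkTaskScheduler.chunkHolderManager chain:

    final class ChunkMapSketch {
        interface ChunkScheduler { // hypothetical stand-in
            void saveAllChunks(boolean flush);
        }

        private final ChunkScheduler scheduler;

        ChunkMapSketch(ChunkScheduler scheduler) {
            this.scheduler = scheduler;
        }

        // Live path: keep the vanilla signature, delegate the work.
        void saveAllChunks(boolean flush) {
            this.scheduler.saveAllChunks(flush);
        }

        // Dead path: crash loudly if vanilla scheduling is somehow still reached,
        // rather than silently operating on maps the rewrite no longer updates.
        boolean promoteChunkMap() {
            throw new UnsupportedOperationException();
        }
    }
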
+diff --git a/src/main/java/net/minecraft/server/level/DistanceManager.java b/src/main/java/net/minecraft/server/level/DistanceManager.java
+index 7a48ae2ba962ff56d0abff581b51f28b48bd9aae..ed5154e41ca858f4d6b4d1c276c66831c038d2a6 100644
+--- a/src/main/java/net/minecraft/server/level/DistanceManager.java
++++ b/src/main/java/net/minecraft/server/level/DistanceManager.java
+@@ -38,65 +38,28 @@ import org.slf4j.Logger;
+
+ public abstract class DistanceManager {
+
++ // Paper start - rewrite chunk system
++ public io.papermc.paper.chunk.system.scheduling.ChunkHolderManager getChunkHolderManager() {
++ return this.chunkMap.level.chunkTaskScheduler.chunkHolderManager;
++ }
++ // Paper end - rewrite chunk system
++
+ static final Logger LOGGER = LogUtils.getLogger();
+ static final int PLAYER_TICKET_LEVEL = ChunkLevel.byStatus(FullChunkStatus.ENTITY_TICKING);
+ private static final int INITIAL_TICKET_LIST_CAPACITY = 4;
+ final Long2ObjectMap<ObjectSet<ServerPlayer>> playersPerChunk = new Long2ObjectOpenHashMap();
+- public final Long2ObjectOpenHashMap<SortedArraySet<Ticket<?>>> tickets = new Long2ObjectOpenHashMap();
+- private final DistanceManager.ChunkTicketTracker ticketTracker = new DistanceManager.ChunkTicketTracker();
++ // Paper - rewrite chunk system
+ private final DistanceManager.FixedPlayerDistanceChunkTracker naturalSpawnChunkCounter = new DistanceManager.FixedPlayerDistanceChunkTracker(8);
+- private final TickingTracker tickingTicketsTracker = new TickingTracker();
+- private final DistanceManager.PlayerTicketTracker playerTicketManager = new DistanceManager.PlayerTicketTracker(32);
+- final Set<ChunkHolder> chunksToUpdateFutures = Sets.newHashSet();
+- final ChunkTaskPriorityQueueSorter ticketThrottler;
+- final ProcessorHandle<ChunkTaskPriorityQueueSorter.Message<Runnable>> ticketThrottlerInput;
+- final ProcessorHandle<ChunkTaskPriorityQueueSorter.Release> ticketThrottlerReleaser;
+- final LongSet ticketsToRelease = new LongOpenHashSet();
+- final Executor mainThreadExecutor;
+- private long ticketTickCounter;
+- public int simulationDistance = 10;
++ // Paper - rewrite chunk system
+ private final ChunkMap chunkMap; // Paper
+
+ protected DistanceManager(Executor workerExecutor, Executor mainThreadExecutor, ChunkMap chunkMap) {
+- Objects.requireNonNull(mainThreadExecutor);
+- ProcessorHandle<Runnable> mailbox = ProcessorHandle.of("player ticket throttler", mainThreadExecutor::execute);
+- ChunkTaskPriorityQueueSorter chunktaskqueuesorter = new ChunkTaskPriorityQueueSorter(ImmutableList.of(mailbox), workerExecutor, 4);
+-
+- this.ticketThrottler = chunktaskqueuesorter;
+- this.ticketThrottlerInput = chunktaskqueuesorter.getProcessor(mailbox, true);
+- this.ticketThrottlerReleaser = chunktaskqueuesorter.getReleaseProcessor(mailbox);
+- this.mainThreadExecutor = mainThreadExecutor;
++ // Paper - rewrite chunk system
+ this.chunkMap = chunkMap; // Paper
+ }
+
+ protected void purgeStaleTickets() {
+- ++this.ticketTickCounter;
+- ObjectIterator<Entry<SortedArraySet<Ticket<?>>>> objectiterator = this.tickets.long2ObjectEntrySet().fastIterator();
+-
+- while (objectiterator.hasNext()) {
+- Entry<SortedArraySet<Ticket<?>>> entry = (Entry) objectiterator.next();
+- Iterator<Ticket<?>> iterator = ((SortedArraySet) entry.getValue()).iterator();
+- boolean flag = false;
+-
+- while (iterator.hasNext()) {
+- Ticket<?> ticket = (Ticket) iterator.next();
+-
+- if (ticket.timedOut(this.ticketTickCounter)) {
+- iterator.remove();
+- flag = true;
+- this.tickingTicketsTracker.removeTicket(entry.getLongKey(), ticket);
+- }
+- }
+-
+- if (flag) {
+- this.ticketTracker.update(entry.getLongKey(), DistanceManager.getTicketLevelAt((SortedArraySet) entry.getValue()), false);
+- }
+-
+- if (((SortedArraySet) entry.getValue()).isEmpty()) {
+- objectiterator.remove();
+- }
+- }
+-
++ this.getChunkHolderManager().tick(); // Paper - rewrite chunk system
+ }
+
+ private static int getTicketLevelAt(SortedArraySet<Ticket<?>> tickets) {
+@@ -112,108 +75,25 @@ public abstract class DistanceManager {
+ protected abstract ChunkHolder updateChunkScheduling(long pos, int level, @Nullable ChunkHolder holder, int k);
+
+ public boolean runAllUpdates(ChunkMap chunkStorage) {
+- this.naturalSpawnChunkCounter.runAllUpdates();
+- this.tickingTicketsTracker.runAllUpdates();
+- this.playerTicketManager.runAllUpdates();
+- int i = Integer.MAX_VALUE - this.ticketTracker.runDistanceUpdates(Integer.MAX_VALUE);
+- boolean flag = i != 0;
+-
+- if (flag) {
+- ;
+- }
+-
+- if (!this.chunksToUpdateFutures.isEmpty()) {
+- // CraftBukkit start
+- // Iterate pending chunk updates with protection against concurrent modification exceptions
+- java.util.Iterator<ChunkHolder> iter = this.chunksToUpdateFutures.iterator();
+- int expectedSize = this.chunksToUpdateFutures.size();
+- do {
+- ChunkHolder playerchunk = iter.next();
+- iter.remove();
+- expectedSize--;
+-
+- playerchunk.updateFutures(chunkStorage, this.mainThreadExecutor);
+-
+- // Reset iterator if set was modified using add()
+- if (this.chunksToUpdateFutures.size() != expectedSize) {
+- expectedSize = this.chunksToUpdateFutures.size();
+- iter = this.chunksToUpdateFutures.iterator();
+- }
+- } while (iter.hasNext());
+- // CraftBukkit end
+-
+- return true;
+- } else {
+- if (!this.ticketsToRelease.isEmpty()) {
+- LongIterator longiterator = this.ticketsToRelease.iterator();
+-
+- while (longiterator.hasNext()) {
+- long j = longiterator.nextLong();
+-
+- if (this.getTickets(j).stream().anyMatch((ticket) -> {
+- return ticket.getType() == TicketType.PLAYER;
+- })) {
+- ChunkHolder playerchunk = chunkStorage.getUpdatingChunkIfPresent(j);
+-
+- if (playerchunk == null) {
+- throw new IllegalStateException();
+- }
+-
+- CompletableFuture<ChunkResult<LevelChunk>> completablefuture = playerchunk.getEntityTickingChunkFuture();
+-
+- completablefuture.thenAccept((chunkresult) -> {
+- this.mainThreadExecutor.execute(() -> {
+- this.ticketThrottlerReleaser.tell(ChunkTaskPriorityQueueSorter.release(() -> {
+- }, j, false));
+- });
+- });
+- }
+- }
+-
+- this.ticketsToRelease.clear();
+- }
+-
+- return flag;
+- }
++ return this.getChunkHolderManager().processTicketUpdates(); // Paper - rewrite chunk system
+ }
+
+ boolean addTicket(long i, Ticket<?> ticket) { // CraftBukkit - void -> boolean
+- SortedArraySet<Ticket<?>> arraysetsorted = this.getTickets(i);
+- int j = DistanceManager.getTicketLevelAt(arraysetsorted);
+- Ticket<?> ticket1 = (Ticket) arraysetsorted.addOrGet(ticket);
+-
+- ticket1.setCreatedTick(this.ticketTickCounter);
+- if (ticket.getTicketLevel() < j) {
+- this.ticketTracker.update(i, ticket.getTicketLevel(), true);
+- }
+-
+- return ticket == ticket1; // CraftBukkit
++ org.spigotmc.AsyncCatcher.catchOp("ChunkMapDistance::addTicket"); // Paper
++ return this.getChunkHolderManager().addTicketAtLevel((TicketType)ticket.getType(), i, ticket.getTicketLevel(), ticket.key); // Paper - rewrite chunk system
+ }
+
+ boolean removeTicket(long i, Ticket<?> ticket) { // CraftBukkit - void -> boolean
+- SortedArraySet<Ticket<?>> arraysetsorted = this.getTickets(i);
+-
+- boolean removed = false; // CraftBukkit
+- if (arraysetsorted.remove(ticket)) {
+- removed = true; // CraftBukkit
+- }
+-
+- if (arraysetsorted.isEmpty()) {
+- this.tickets.remove(i);
+- }
+-
+- this.ticketTracker.update(i, DistanceManager.getTicketLevelAt(arraysetsorted), false);
+- return removed; // CraftBukkit
++ org.spigotmc.AsyncCatcher.catchOp("ChunkMapDistance::removeTicket"); // Paper
++ return this.getChunkHolderManager().removeTicketAtLevel((TicketType)ticket.getType(), i, ticket.getTicketLevel(), ticket.key); // Paper - rewrite chunk system
+ }
+
+ public <T> void addTicket(TicketType<T> type, ChunkPos pos, int level, T argument) {
+- this.addTicket(pos.toLong(), new Ticket<>(type, level, argument));
++ this.getChunkHolderManager().addTicketAtLevel(type, pos, level, argument); // Paper - rewrite chunk system
+ }
+
+ public <T> void removeTicket(TicketType<T> type, ChunkPos pos, int level, T argument) {
+- Ticket<T> ticket = new Ticket<>(type, level, argument);
+-
+- this.removeTicket(pos.toLong(), ticket);
++ this.getChunkHolderManager().removeTicketAtLevel(type, pos, level, argument); // Paper - rewrite chunk system
+ }
+
+ public <T> void addRegionTicket(TicketType<T> type, ChunkPos pos, int radius, T argument) {
+@@ -222,13 +102,7 @@ public abstract class DistanceManager {
+ }
+
+ public <T> boolean addRegionTicketAtDistance(TicketType<T> tickettype, ChunkPos chunkcoordintpair, int i, T t0) {
+- // CraftBukkit end
+- Ticket<T> ticket = new Ticket<>(tickettype, ChunkLevel.byStatus(FullChunkStatus.FULL) - i, t0);
+- long j = chunkcoordintpair.toLong();
+-
+- boolean added = this.addTicket(j, ticket); // CraftBukkit
+- this.tickingTicketsTracker.addTicket(j, ticket);
+- return added; // CraftBukkit
++ return this.getChunkHolderManager().addTicketAtLevel(tickettype, chunkcoordintpair, ChunkLevel.byStatus(FullChunkStatus.FULL) - i, t0); // Paper - rewrite chunk system
+ }
+
+ public <T> void removeRegionTicket(TicketType<T> type, ChunkPos pos, int radius, T argument) {
+@@ -237,31 +111,21 @@ public abstract class DistanceManager {
+ }
+
+ public <T> boolean removeRegionTicketAtDistance(TicketType<T> tickettype, ChunkPos chunkcoordintpair, int i, T t0) {
+- // CraftBukkit end
+- Ticket<T> ticket = new Ticket<>(tickettype, ChunkLevel.byStatus(FullChunkStatus.FULL) - i, t0);
+- long j = chunkcoordintpair.toLong();
+-
+- boolean removed = this.removeTicket(j, ticket); // CraftBukkit
+- this.tickingTicketsTracker.removeTicket(j, ticket);
+- return removed; // CraftBukkit
++ return this.getChunkHolderManager().removeTicketAtLevel(tickettype, chunkcoordintpair, ChunkLevel.byStatus(FullChunkStatus.FULL) - i, t0); // Paper - rewrite chunk system
+ }
+
+- private SortedArraySet<Ticket<?>> getTickets(long position) {
+- return (SortedArraySet) this.tickets.computeIfAbsent(position, (j) -> {
+- return SortedArraySet.create(4);
+- });
+- }
++ // Paper - rewrite chunk system
+
+ protected void updateChunkForced(ChunkPos pos, boolean forced) {
+- Ticket<ChunkPos> ticket = new Ticket<>(TicketType.FORCED, ChunkMap.FORCED_TICKET_LEVEL, pos);
++ Ticket<ChunkPos> ticket = new Ticket<>(TicketType.FORCED, ChunkMap.FORCED_TICKET_LEVEL, pos, 0L); // Paper - rewrite chunk system
+ long i = pos.toLong();
+
+ if (forced) {
+ this.addTicket(i, ticket);
+- this.tickingTicketsTracker.addTicket(i, ticket);
++ //this.tickingTicketsTracker.addTicket(i, ticket); // Paper - no longer used
+ } else {
+ this.removeTicket(i, ticket);
+- this.tickingTicketsTracker.removeTicket(i, ticket);
++ //this.tickingTicketsTracker.removeTicket(i, ticket); // Paper - no longer used
+ }
+
+ }
+@@ -270,12 +134,10 @@ public abstract class DistanceManager {
+ ChunkPos chunkcoordintpair = pos.chunk();
+ long i = chunkcoordintpair.toLong();
+
+- ((ObjectSet) this.playersPerChunk.computeIfAbsent(i, (j) -> {
+- return new ObjectOpenHashSet();
+- })).add(player);
++ // Paper - no longer used
+ this.naturalSpawnChunkCounter.update(i, 0, true);
+- this.playerTicketManager.update(i, 0, true);
+- this.tickingTicketsTracker.addTicket(TicketType.PLAYER, chunkcoordintpair, this.getPlayerTicketLevel(), chunkcoordintpair);
++ //this.playerTicketManager.update(i, 0, true); // Paper - no longer used
++ //this.tickingTicketsTracker.addTicket(TicketType.PLAYER, chunkcoordintpair, this.getPlayerTicketLevel(), chunkcoordintpair); // Paper - no longer used
+ }
+
+ public void removePlayer(SectionPos pos, ServerPlayer player) {
+@@ -288,40 +150,44 @@ public abstract class DistanceManager {
+ if (objectset == null || objectset.isEmpty()) { // Paper
+ this.playersPerChunk.remove(i);
+ this.naturalSpawnChunkCounter.update(i, Integer.MAX_VALUE, false);
+- this.playerTicketManager.update(i, Integer.MAX_VALUE, false);
+- this.tickingTicketsTracker.removeTicket(TicketType.PLAYER, chunkcoordintpair, this.getPlayerTicketLevel(), chunkcoordintpair);
++ //this.playerTicketManager.update(i, Integer.MAX_VALUE, false); // Paper - no longer used
++ //this.tickingTicketsTracker.removeTicket(TicketType.PLAYER, chunkcoordintpair, this.getPlayerTicketLevel(), chunkcoordintpair); // Paper - no longer used
+ }
+
+ }
+
+- private int getPlayerTicketLevel() {
+- return Math.max(0, ChunkLevel.byStatus(FullChunkStatus.ENTITY_TICKING) - this.simulationDistance);
+- }
++ // Paper - rewrite chunk system
+
+ public boolean inEntityTickingRange(long chunkPos) {
+- return ChunkLevel.isEntityTicking(this.tickingTicketsTracker.getLevel(chunkPos));
++ // Paper start - replace player chunk loader system
++ ChunkHolder holder = this.chunkMap.getVisibleChunkIfPresent(chunkPos);
++ return holder != null && holder.isEntityTickingReady();
++ // Paper end - replace player chunk loader system
+ }
+
+ public boolean inBlockTickingRange(long chunkPos) {
+- return ChunkLevel.isBlockTicking(this.tickingTicketsTracker.getLevel(chunkPos));
++ // Paper start - replace player chunk loader system
++ ChunkHolder holder = this.chunkMap.getVisibleChunkIfPresent(chunkPos);
++ return holder != null && holder.isTickingReady();
++ // Paper end - replace player chunk loader system
+ }
+
+ protected String getTicketDebugString(long pos) {
+- SortedArraySet<Ticket<?>> arraysetsorted = (SortedArraySet) this.tickets.get(pos);
+-
+- return arraysetsorted != null && !arraysetsorted.isEmpty() ? ((Ticket) arraysetsorted.first()).toString() : "no_ticket";
++ return this.getChunkHolderManager().getTicketDebugString(pos); // Paper - rewrite chunk system
+ }
+
+ protected void updatePlayerTickets(int viewDistance) {
+- this.playerTicketManager.updateViewDistance(viewDistance);
++ this.chunkMap.setServerViewDistance(viewDistance); // Paper - route to player chunk manager
+ }
+
+- public void updateSimulationDistance(int simulationDistance) {
+- if (simulationDistance != this.simulationDistance) {
+- this.simulationDistance = simulationDistance;
+- this.tickingTicketsTracker.replacePlayerTicketsLevel(this.getPlayerTicketLevel());
+- }
++ // Paper start
++ public int getSimulationDistance() {
++ return this.chunkMap.level.playerChunkLoader.getAPITickDistance();
++ }
++ // Paper end
+
++ public void updateSimulationDistance(int simulationDistance) {
++ this.chunkMap.level.playerChunkLoader.setTickDistance(simulationDistance); // Paper - route to player chunk manager
+ }
+
+ public int getNaturalSpawnChunkCount() {
+@@ -335,103 +201,26 @@ public abstract class DistanceManager {
+ }
+
+ public String getDebugStatus() {
+- return this.ticketThrottler.getDebugStatus();
++ return "No DistanceManager stats available"; // Paper - rewrite chunk system
+ }
+
+- private void dumpTickets(String path) {
+- try {
+- FileOutputStream fileoutputstream = new FileOutputStream(new File(path));
+-
+- try {
+- ObjectIterator objectiterator = this.tickets.long2ObjectEntrySet().iterator();
+-
+- while (objectiterator.hasNext()) {
+- Entry<SortedArraySet<Ticket<?>>> entry = (Entry) objectiterator.next();
+- ChunkPos chunkcoordintpair = new ChunkPos(entry.getLongKey());
+- Iterator iterator = ((SortedArraySet) entry.getValue()).iterator();
+-
+- while (iterator.hasNext()) {
+- Ticket<?> ticket = (Ticket) iterator.next();
+-
+- fileoutputstream.write((chunkcoordintpair.x + "\t" + chunkcoordintpair.z + "\t" + String.valueOf(ticket.getType()) + "\t" + ticket.getTicketLevel() + "\t\n").getBytes(StandardCharsets.UTF_8));
+- }
+- }
+- } catch (Throwable throwable) {
+- try {
+- fileoutputstream.close();
+- } catch (Throwable throwable1) {
+- throwable.addSuppressed(throwable1);
+- }
+-
+- throw throwable;
+- }
+-
+- fileoutputstream.close();
+- } catch (IOException ioexception) {
+- DistanceManager.LOGGER.error("Failed to dump tickets to {}", path, ioexception);
+- }
+-
+- }
+-
+- @VisibleForTesting
+- TickingTracker tickingTracker() {
+- return this.tickingTicketsTracker;
+- }
++ // Paper - rewrite chunk system
+
+ public void removeTicketsOnClosing() {
+- ImmutableSet<TicketType<?>> immutableset = ImmutableSet.of(TicketType.UNKNOWN, TicketType.POST_TELEPORT, TicketType.LIGHT, TicketType.FUTURE_AWAIT, TicketType.CHUNK_RELIGHT, ca.spottedleaf.starlight.common.light.StarLightInterface.CHUNK_WORK_TICKET); // Paper - add additional tickets to preserve
+- ObjectIterator<Entry<SortedArraySet<Ticket<?>>>> objectiterator = this.tickets.long2ObjectEntrySet().fastIterator();
+-
+- while (objectiterator.hasNext()) {
+- Entry<SortedArraySet<Ticket<?>>> entry = (Entry) objectiterator.next();
+- Iterator<Ticket<?>> iterator = ((SortedArraySet) entry.getValue()).iterator();
+- boolean flag = false;
+-
+- while (iterator.hasNext()) {
+- Ticket<?> ticket = (Ticket) iterator.next();
+-
+- if (!immutableset.contains(ticket.getType())) {
+- iterator.remove();
+- flag = true;
+- this.tickingTicketsTracker.removeTicket(entry.getLongKey(), ticket);
+- }
+- }
+-
+- if (flag) {
+- this.ticketTracker.update(entry.getLongKey(), DistanceManager.getTicketLevelAt((SortedArraySet) entry.getValue()), false);
+- }
+-
+- if (((SortedArraySet) entry.getValue()).isEmpty()) {
+- objectiterator.remove();
+- }
+- }
+-
++ // Paper - rewrite chunk system - no longer needed
+ }
+
+ public boolean hasTickets() {
+- return !this.tickets.isEmpty();
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ // CraftBukkit start
+ public <T> void removeAllTicketsFor(TicketType<T> ticketType, int ticketLevel, T ticketIdentifier) {
+- Ticket<T> target = new Ticket<>(ticketType, ticketLevel, ticketIdentifier);
+-
+- for (java.util.Iterator<Entry<SortedArraySet<Ticket<?>>>> iterator = this.tickets.long2ObjectEntrySet().fastIterator(); iterator.hasNext();) {
+- Entry<SortedArraySet<Ticket<?>>> entry = iterator.next();
+- SortedArraySet<Ticket<?>> tickets = entry.getValue();
+- if (tickets.remove(target)) {
+- // copied from removeTicket
+- this.ticketTracker.update(entry.getLongKey(), DistanceManager.getTicketLevelAt(tickets), false);
+-
+- // can't use entry after it's removed
+- if (tickets.isEmpty()) {
+- iterator.remove();
+- }
+- }
+- }
++ this.getChunkHolderManager().removeAllTicketsFor(ticketType, ticketLevel, ticketIdentifier); // Paper - rewrite chunk system
+ }
+ // CraftBukkit end
+
++ /* Paper - rewrite chunk system
+ private class ChunkTicketTracker extends ChunkTracker {
+
+ private static final int MAX_LEVEL = ChunkLevel.MAX_LEVEL + 1;
+@@ -478,6 +267,7 @@ public abstract class DistanceManager {
+ return this.runUpdates(distance);
+ }
+ }
++ */ // Paper - rewrite chunk system
+
+ private class FixedPlayerDistanceChunkTracker extends ChunkTracker {
+
+@@ -557,6 +347,7 @@ public abstract class DistanceManager {
+ }
+ }
+
++ /* Paper - rewrite chunk system
+ private class PlayerTicketTracker extends DistanceManager.FixedPlayerDistanceChunkTracker {
+
+ private int viewDistance = 0;
+@@ -652,4 +443,5 @@ public abstract class DistanceManager {
+ return distance <= this.viewDistance;
+ }
+ }
++ */ // Paper - rewrite chunk system
+ }
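
Every ticket mutation above now funnels into ChunkHolderManager, and purgeStaleTickets collapses to a single tick() call. A minimal single-threaded model of the state such a manager maintains, assuming per-chunk ticket lists whose minimum level drives loading and whose timed tickets expire by tick age; names and the timeout rule are illustrative, and the real manager is concurrent and propagates levels to neighbours:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    final class TicketTable {
        record Ticket(String type, int level, long createdTick, long timeoutTicks) {
            boolean timedOut(long now) {
                return this.timeoutTicks > 0 && now - this.createdTick > this.timeoutTicks;
            }
        }

        private final Map<Long, List<Ticket>> tickets = new HashMap<>();
        private long tickCounter;

        void addTicket(long chunkKey, Ticket ticket) {
            this.tickets.computeIfAbsent(chunkKey, k -> new ArrayList<>()).add(ticket);
        }

        void removeTicket(long chunkKey, Ticket ticket) {
            List<Ticket> set = this.tickets.get(chunkKey);
            if (set != null && set.remove(ticket) && set.isEmpty()) {
                this.tickets.remove(chunkKey);
            }
        }

        // Lower level = stronger ticket; the minimum decides how loaded the chunk must be.
        int getTicketLevel(long chunkKey) {
            List<Ticket> set = this.tickets.get(chunkKey);
            int min = Integer.MAX_VALUE;
            if (set != null) {
                for (Ticket t : set) min = Math.min(min, t.level());
            }
            return min;
        }

        // Equivalent of the removed purgeStaleTickets loop: drop timed-out tickets each tick.
        void tick() {
            long now = ++this.tickCounter;
            this.tickets.values().removeIf(set -> {
                set.removeIf(t -> t.timedOut(now));
                return set.isEmpty();
            });
        }
    }
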
+diff --git a/src/main/java/net/minecraft/server/level/ServerChunkCache.java b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
+index 2d9d4d06b75873f888ef4d8f5779a52706f821a8..f74efe41cd0da2f9749fc96fb9e0f7cf237ad1c6 100644
+--- a/src/main/java/net/minecraft/server/level/ServerChunkCache.java
++++ b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
+@@ -71,7 +71,7 @@ public class ServerChunkCache extends ChunkSource {
+ public final io.papermc.paper.util.maplist.IteratorSafeOrderedReferenceSet<LevelChunk> entityTickingChunks = new io.papermc.paper.util.maplist.IteratorSafeOrderedReferenceSet<>(4096, 0.75f, 4096, 0.15, true);
+ final com.destroystokyo.paper.util.concurrent.WeakSeqLock loadedChunkMapSeqLock = new com.destroystokyo.paper.util.concurrent.WeakSeqLock();
+ final it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap<LevelChunk> loadedChunkMap = new it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap<>(8192, 0.5f);
+- long chunkFutureAwaitCounter;
++ final java.util.concurrent.atomic.AtomicLong chunkFutureAwaitCounter = new java.util.concurrent.atomic.AtomicLong(); // Paper - chunk system rewrite
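++ // note: the counter may now be incremented from any thread (see loadChunksAsync), hence the AtomicLong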
+ private final LevelChunk[] lastLoadedChunks = new LevelChunk[4 * 4];
+ // Paper end
+
+@@ -195,7 +195,7 @@ public class ServerChunkCache extends ChunkSource {
+ public LevelChunk getChunkAtIfLoadedImmediately(int x, int z) {
+ long k = ChunkPos.asLong(x, z);
+
+- if (Thread.currentThread() == this.mainThread) {
++ if (io.papermc.paper.util.TickThread.isTickThread()) { // Paper - rewrite chunk system
+ return this.getChunkAtIfLoadedMainThread(x, z);
+ }
+
+@@ -247,7 +247,8 @@ public class ServerChunkCache extends ChunkSource {
+ @Nullable
+ @Override
+ public ChunkAccess getChunk(int x, int z, ChunkStatus leastStatus, boolean create) {
+- if (Thread.currentThread() != this.mainThread) {
++ final int x1 = x; final int z1 = z; // Paper - conflict on variable change
++ if (!io.papermc.paper.util.TickThread.isTickThread()) { // Paper - rewrite chunk system
+ return (ChunkAccess) CompletableFuture.supplyAsync(() -> {
+ return this.getChunk(x, z, leastStatus, create);
+ }, this.mainThreadProcessor).join();
+@@ -263,15 +264,7 @@ public class ServerChunkCache extends ChunkSource {
+ gameprofilerfiller.incrementCounter("getChunk");
+ long k = ChunkPos.asLong(x, z);
+
+- for (int l = 0; l < 4; ++l) {
+- if (k == this.lastChunkPos[l] && leastStatus == this.lastChunkStatus[l]) {
+- ChunkAccess ichunkaccess = this.lastChunk[l];
+-
+- if (ichunkaccess != null) { // CraftBukkit - the chunk can become accessible in the meantime TODO for non-null chunks it might also make sense to check that the chunk's state hasn't changed in the meantime
+- return ichunkaccess;
+- }
+- }
+- }
++ // Paper - rewrite chunk system - there are no correct callbacks to remove items from cache in the new chunk system
+
+ gameprofilerfiller.incrementCounter("getChunkCacheMiss");
+ CompletableFuture<ChunkResult<ChunkAccess>> completablefuture = this.getChunkFutureMainThread(x, z, leastStatus, create);
+@@ -279,9 +272,11 @@ public class ServerChunkCache extends ChunkSource {
+
+ Objects.requireNonNull(completablefuture);
+ if (!completablefuture.isDone()) { // Paper
++ io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler.pushChunkWait(this.level, x1, z1); // Paper - rewrite chunk system
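++ // record the chunk this thread is about to sync-wait on, so watchdog/debug dumps can report stalled sync loads; popped below once the future completes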
+ com.destroystokyo.paper.io.SyncLoadFinder.logSyncLoad(this.level, x, z); // Paper - Add debug for sync chunk loads
+ this.level.timings.syncChunkLoad.startTiming(); // Paper
+ chunkproviderserver_b.managedBlock(completablefuture::isDone);
++ io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler.popChunkWait(); // Paper - rewrite chunk system
+ this.level.timings.syncChunkLoad.stopTiming(); // Paper
+ } // Paper
+ ChunkResult<ChunkAccess> chunkresult = (ChunkResult) completablefuture.join();
+@@ -299,7 +294,7 @@ public class ServerChunkCache extends ChunkSource {
+ @Nullable
+ @Override
+ public LevelChunk getChunkNow(int chunkX, int chunkZ) {
+- if (Thread.currentThread() != this.mainThread) {
++ if (!io.papermc.paper.util.TickThread.isTickThread()) { // Paper - rewrite chunk system
+ return null;
+ } else {
+ return this.getChunkAtIfLoadedMainThread(chunkX, chunkZ); // Paper - Perf: Optimise getChunkAt calls for loaded chunks
+@@ -313,7 +308,7 @@ public class ServerChunkCache extends ChunkSource {
+ }
+
+ public CompletableFuture<ChunkResult<ChunkAccess>> getChunkFuture(int chunkX, int chunkZ, ChunkStatus leastStatus, boolean create) {
+- boolean flag1 = Thread.currentThread() == this.mainThread;
++ boolean flag1 = io.papermc.paper.util.TickThread.isTickThread(); // Paper - rewrite chunk system
+ CompletableFuture completablefuture;
+
+ if (flag1) {
+@@ -333,48 +328,54 @@ public class ServerChunkCache extends ChunkSource {
+ return completablefuture;
+ }
+
+ private CompletableFuture<ChunkResult<ChunkAccess>> getChunkFutureMainThread(int chunkX, int chunkZ, ChunkStatus leastStatus, boolean create) {
+- ChunkPos chunkcoordintpair = new ChunkPos(chunkX, chunkZ);
+- long k = chunkcoordintpair.toLong();
+- int l = ChunkLevel.byStatus(leastStatus);
+- ChunkHolder playerchunk = this.getVisibleChunkIfPresent(k);
++ // Paper start - add isUrgent - old sig left in place for dirty nms plugins
++ return getChunkFutureMainThread(chunkX, chunkZ, leastStatus, create, false);
++ }
++ private CompletableFuture<ChunkResult<ChunkAccess>> getChunkFutureMainThread(int chunkX, int chunkZ, ChunkStatus leastStatus, boolean create, boolean isUrgent) {
++ // Paper start - rewrite chunk system
++ io.papermc.paper.util.TickThread.ensureTickThread(this.level, chunkX, chunkZ, "Scheduling chunk load off-main");
++ int minLevel = ChunkLevel.byStatus(leastStatus);
++ io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder = this.level.chunkTaskScheduler.chunkHolderManager.getChunkHolder(chunkX, chunkZ);
+
+- // CraftBukkit start - don't add new ticket for currently unloading chunk
+- boolean currentlyUnloading = false;
+- if (playerchunk != null) {
+- FullChunkStatus oldChunkState = ChunkLevel.fullStatus(playerchunk.oldTicketLevel);
+- FullChunkStatus currentChunkState = ChunkLevel.fullStatus(playerchunk.getTicketLevel());
+- currentlyUnloading = (oldChunkState.isOrAfter(FullChunkStatus.FULL) && !currentChunkState.isOrAfter(FullChunkStatus.FULL));
++ boolean needsFullScheduling = leastStatus == ChunkStatus.FULL && (chunkHolder == null || !chunkHolder.getChunkStatus().isOrAfter(FullChunkStatus.FULL));
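++ // a chunk may be generated to FULL status without yet being promoted to a loaded full chunk, so FULL requests must still be scheduled in that case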
++
++ if ((chunkHolder == null || chunkHolder.getTicketLevel() > minLevel || needsFullScheduling) && !create) {
++ return ChunkHolder.UNLOADED_CHUNK_FUTURE;
+ }
+- if (create && !currentlyUnloading) {
+- // CraftBukkit end
+- this.distanceManager.addTicket(TicketType.UNKNOWN, chunkcoordintpair, l, chunkcoordintpair);
+- if (this.chunkAbsent(playerchunk, l)) {
+- ProfilerFiller gameprofilerfiller = this.level.getProfiler();
+-
+- gameprofilerfiller.push("chunkLoad");
+- this.runDistanceManagerUpdates();
+- playerchunk = this.getVisibleChunkIfPresent(k);
+- gameprofilerfiller.pop();
+- if (this.chunkAbsent(playerchunk, l)) {
+- throw (IllegalStateException) Util.pauseInIde(new IllegalStateException("No chunk holder after ticket has been added"));
++
++ io.papermc.paper.chunk.system.scheduling.NewChunkHolder.ChunkCompletion chunkCompletion = chunkHolder == null ? null : chunkHolder.getLastChunkCompletion();
++ if (needsFullScheduling || chunkCompletion == null || !chunkCompletion.genStatus().isOrAfter(leastStatus)) {
++ // schedule
++ CompletableFuture<ChunkResult<ChunkAccess>> ret = new CompletableFuture<>();
++ Consumer<ChunkAccess> complete = (ChunkAccess chunk) -> {
++ if (chunk == null) {
++ ret.complete(ChunkResult.error("Unexpected chunk unload"));
++ } else {
++ ret.complete(ChunkResult.of(chunk));
+ }
+- }
+- }
++ };
+
+- return this.chunkAbsent(playerchunk, l) ? ChunkHolder.UNLOADED_CHUNK_FUTURE : playerchunk.getOrScheduleFuture(leastStatus, this.chunkMap);
+- }
++ this.level.chunkTaskScheduler.scheduleChunkLoad(
++ chunkX, chunkZ, leastStatus, true,
++ isUrgent ? ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.BLOCKING : ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.NORMAL,
++ complete
++ );
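++ // urgent (sync) loads use BLOCKING priority so they are executed ahead of normally scheduled work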
+
+- private boolean chunkAbsent(@Nullable ChunkHolder holder, int maxLevel) {
+- return holder == null || holder.oldTicketLevel > maxLevel; // CraftBukkit using oldTicketLevel for isLoaded checks
++ return ret;
++ } else {
++ // can return now
++ return CompletableFuture.completedFuture(ChunkResult.of(chunkCompletion.chunk()));
++ }
++ // Paper end - rewrite chunk system
+ }
+
++ // Paper - rewrite chunk system
++
+ @Override
+ public boolean hasChunk(int x, int z) {
+- ChunkHolder playerchunk = this.getVisibleChunkIfPresent((new ChunkPos(x, z)).toLong());
+- int k = ChunkLevel.byStatus(ChunkStatus.FULL);
+-
+- return !this.chunkAbsent(playerchunk, k);
++ return this.getChunkAtIfLoadedImmediately(x, z) != null; // Paper - rewrite chunk system
+ }
+
+ @Nullable
+@@ -386,22 +387,13 @@ public class ServerChunkCache extends ChunkSource {
+ if (playerchunk == null) {
+ return null;
+ } else {
+- int l = ServerChunkCache.CHUNK_STATUSES.size() - 1;
+-
+- while (true) {
+- ChunkStatus chunkstatus = (ChunkStatus) ServerChunkCache.CHUNK_STATUSES.get(l);
+- ChunkAccess ichunkaccess = (ChunkAccess) ((ChunkResult) playerchunk.getFutureIfPresentUnchecked(chunkstatus).getNow(ChunkHolder.UNLOADED_CHUNK)).orElse((Object) null);
+-
+- if (ichunkaccess != null) {
+- return ichunkaccess;
+- }
+-
+- if (chunkstatus == ChunkStatus.INITIALIZE_LIGHT.getParent()) {
+- return null;
+- }
+-
+- --l;
++ // Paper start - rewrite chunk system
++ ChunkStatus status = playerchunk.getChunkHolderStatus();
++ if (status != null && !status.isOrAfter(ChunkStatus.LIGHT.getParent())) {
++ return null;
+ }
++ return playerchunk.getAvailableChunkNow();
++ // Paper end - rewrite chunk system
+ }
+ }
+
+@@ -415,15 +407,7 @@ public class ServerChunkCache extends ChunkSource {
+ }
+
+ public boolean runDistanceManagerUpdates() { // Paper - public
+- boolean flag = this.distanceManager.runAllUpdates(this.chunkMap);
+- boolean flag1 = this.chunkMap.promoteChunkMap();
+-
+- if (!flag && !flag1) {
+- return false;
+- } else {
+- this.clearCache();
+- return true;
+- }
++ return this.level.chunkTaskScheduler.chunkHolderManager.processTicketUpdates(); // Paper - rewrite chunk system
+ }
+
+ // Paper start
+@@ -433,9 +417,10 @@ public class ServerChunkCache extends ChunkSource {
+ // Paper end
+
+ public boolean isPositionTicking(long pos) {
+- ChunkHolder playerchunk = this.getVisibleChunkIfPresent(pos);
+-
+- return playerchunk == null ? false : (!this.level.shouldTickBlocksAt(pos) ? false : ((ChunkResult) playerchunk.getTickingChunkFuture().getNow(ChunkHolder.UNLOADED_LEVEL_CHUNK)).isSuccess());
++ // Paper start - replace player chunk loader system
++ ChunkHolder holder = this.chunkMap.getVisibleChunkIfPresent(pos);
++ return holder != null && holder.isTickingReady();
++ // Paper end - replace player chunk loader system
+ }
+
+ public void save(boolean flush) {
+@@ -451,17 +436,13 @@ public class ServerChunkCache extends ChunkSource {
+ this.close(true);
+ }
+
+- public void close(boolean save) throws IOException {
+- if (save) {
+- this.save(true);
+- }
+- // CraftBukkit end
+- this.lightEngine.close();
+- this.chunkMap.close();
++ public void close(boolean save) { // Paper - rewrite chunk system
++ this.level.chunkTaskScheduler.chunkHolderManager.close(save, true); // Paper - rewrite chunk system
+ }
+
+ // CraftBukkit start - modelled on below
+ public void purgeUnload() {
++ if (true) return; // Paper - tickets are removed later; this behavior isn't well accounted for by the chunk system
+ this.level.getProfiler().push("purge");
+ this.distanceManager.purgeStaleTickets();
+ this.runDistanceManagerUpdates();
+@@ -485,6 +466,7 @@ public class ServerChunkCache extends ChunkSource {
+ this.level.getProfiler().popPush("chunks");
+ if (tickChunks) {
+ this.level.timings.chunks.startTiming(); // Paper - timings
++ this.chunkMap.level.playerChunkLoader.tick(); // Paper - replace player chunk loader - this is mostly required to account for view distance changes
+ this.tickChunks();
+ this.level.timings.chunks.stopTiming(); // Paper - timings
+ this.chunkMap.tick();
+@@ -587,7 +569,12 @@ public class ServerChunkCache extends ChunkSource {
+ ChunkHolder playerchunk = this.getVisibleChunkIfPresent(pos);
+
+ if (playerchunk != null) {
+- ((ChunkResult) playerchunk.getFullChunkFuture().getNow(ChunkHolder.UNLOADED_LEVEL_CHUNK)).ifSuccess(chunkConsumer);
++ // Paper start - rewrite chunk system
++ LevelChunk chunk = playerchunk.getFullChunkNow();
++ if (chunk != null) {
++ chunkConsumer.accept(chunk);
++ }
++ // Paper end - rewrite chunk system
+ }
+
+ }
+@@ -753,17 +740,10 @@ public class ServerChunkCache extends ChunkSource {
+ @Override
+ // CraftBukkit start - process pending Chunk loadCallback() and unloadCallback() after each run task
+ public boolean pollTask() {
+- try {
+ if (ServerChunkCache.this.runDistanceManagerUpdates()) {
+ return true;
+- } else {
+- ServerChunkCache.this.lightEngine.tryScheduleUpdate();
+- return super.pollTask();
+ }
+- } finally {
+- ServerChunkCache.this.chunkMap.callbackExecutor.run();
+- }
+- // CraftBukkit end
++ return super.pollTask() | ServerChunkCache.this.level.chunkTaskScheduler.executeMainThreadTask(); // Paper - rewrite chunk system
+ }
+ }
+
+diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
+index b33bf957b1541756e3b983b87b1c83629757739a..0ccdc8d135dd3edb410fbc1d248c20a4a45b37fa 100644
+--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
++++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
+@@ -199,7 +199,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ public final PrimaryLevelData serverLevelData; // CraftBukkit - type
+ private int lastSpawnChunkRadius;
+ final EntityTickList entityTickList;
+- public final PersistentEntitySectionManager<Entity> entityManager;
++ //public final PersistentEntitySectionManager<Entity> entityManager; // Paper - rewrite chunk system
+ private final GameEventDispatcher gameEventDispatcher;
+ public boolean noSave;
+ private final SleepStatus sleepStatus;
+@@ -268,50 +268,65 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ return true;
+ }
+
+- public final void loadChunksForMoveAsync(AABB axisalignedbb, ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority,
+- java.util.function.Consumer<List<net.minecraft.world.level.chunk.ChunkAccess>> onLoad) {
+- if (Thread.currentThread() != this.thread) {
+- this.getChunkSource().mainThreadProcessor.execute(() -> {
+- this.loadChunksForMoveAsync(axisalignedbb, priority, onLoad);
+- });
+- return;
+- }
++ public final void loadChunksAsync(BlockPos pos, int radiusBlocks,
++ ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority,
++ java.util.function.Consumer<List<net.minecraft.world.level.chunk.ChunkAccess>> onLoad) {
++ loadChunksAsync(
++ (pos.getX() - radiusBlocks) >> 4,
++ (pos.getX() + radiusBlocks) >> 4,
++ (pos.getZ() - radiusBlocks) >> 4,
++ (pos.getZ() + radiusBlocks) >> 4,
++ priority, onLoad
++ );
++ }
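++ // usage sketch (hypothetical caller): load roughly a 3-chunk radius around pos at normal priority:
++ // this.loadChunksAsync(pos, 48, ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.NORMAL,
++ //     (chunks) -> { /* every chunk in the list is FULL and still ticketed at this point */ });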
++
++ public final void loadChunksAsync(BlockPos pos, int radiusBlocks,
++ net.minecraft.world.level.chunk.status.ChunkStatus chunkStatus,
++ ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority,
++ java.util.function.Consumer<List<net.minecraft.world.level.chunk.ChunkAccess>> onLoad) {
++ loadChunksAsync(
++ (pos.getX() - radiusBlocks) >> 4,
++ (pos.getX() + radiusBlocks) >> 4,
++ (pos.getZ() - radiusBlocks) >> 4,
++ (pos.getZ() + radiusBlocks) >> 4,
++ chunkStatus, priority, onLoad
++ );
++ }
++
++ public final void loadChunksAsync(int minChunkX, int maxChunkX, int minChunkZ, int maxChunkZ,
++ ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority,
++ java.util.function.Consumer<List<net.minecraft.world.level.chunk.ChunkAccess>> onLoad) {
++ this.loadChunksAsync(minChunkX, maxChunkX, minChunkZ, maxChunkZ, net.minecraft.world.level.chunk.status.ChunkStatus.FULL, priority, onLoad);
++ }
++
++ public final void loadChunksAsync(int minChunkX, int maxChunkX, int minChunkZ, int maxChunkZ,
++ net.minecraft.world.level.chunk.status.ChunkStatus chunkStatus,
++ ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority,
++ java.util.function.Consumer<List<net.minecraft.world.level.chunk.ChunkAccess>> onLoad) {
+ List<net.minecraft.world.level.chunk.ChunkAccess> ret = new java.util.ArrayList<>();
+- it.unimi.dsi.fastutil.ints.IntArrayList ticketLevels = new it.unimi.dsi.fastutil.ints.IntArrayList();
+-
+- int minBlockX = Mth.floor(axisalignedbb.minX - 1.0E-7D) - 3;
+- int maxBlockX = Mth.floor(axisalignedbb.maxX + 1.0E-7D) + 3;
+-
+- int minBlockZ = Mth.floor(axisalignedbb.minZ - 1.0E-7D) - 3;
+- int maxBlockZ = Mth.floor(axisalignedbb.maxZ + 1.0E-7D) + 3;
+-
+- int minChunkX = minBlockX >> 4;
+- int maxChunkX = maxBlockX >> 4;
+-
+- int minChunkZ = minBlockZ >> 4;
+- int maxChunkZ = maxBlockZ >> 4;
+
+ ServerChunkCache chunkProvider = this.getChunkSource();
+
+ int requiredChunks = (maxChunkX - minChunkX + 1) * (maxChunkZ - minChunkZ + 1);
+- int[] loadedChunks = new int[1];
++ java.util.concurrent.atomic.AtomicInteger loadedChunks = new java.util.concurrent.atomic.AtomicInteger();
+
+- Long holderIdentifier = Long.valueOf(chunkProvider.chunkFutureAwaitCounter++);
++ Long holderIdentifier = Long.valueOf(chunkProvider.chunkFutureAwaitCounter.getAndIncrement());
++
++ int ticketLevel = 33 + net.minecraft.world.level.chunk.status.ChunkStatus.getDistance(chunkStatus);
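++ // 33 is the ticket level of a FULL chunk; adding the status' distance yields the level required to load up to chunkStatus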
+
+ java.util.function.Consumer<net.minecraft.world.level.chunk.ChunkAccess> consumer = (net.minecraft.world.level.chunk.ChunkAccess chunk) -> {
+ if (chunk != null) {
+- int ticketLevel = Math.max(33, chunkProvider.chunkMap.getUpdatingChunkIfPresent(chunk.getPos().toLong()).getTicketLevel());
++ synchronized (ret) {
+ ret.add(chunk);
+- ticketLevels.add(ticketLevel);
++ }
+ chunkProvider.addTicketAtLevel(TicketType.FUTURE_AWAIT, chunk.getPos(), ticketLevel, holderIdentifier);
+ }
+- if (++loadedChunks[0] == requiredChunks) {
++ if (loadedChunks.incrementAndGet() == requiredChunks) {
+ try {
+ onLoad.accept(java.util.Collections.unmodifiableList(ret));
+ } finally {
+ for (int i = 0, len = ret.size(); i < len; ++i) {
+ ChunkPos chunkPos = ret.get(i).getPos();
+- int ticketLevel = ticketLevels.getInt(i);
+
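++ // swap the FUTURE_AWAIT ticket added by the load callback for an expiring UNKNOWN ticket, so the chunk can unload naturally unless the callback added its own tickets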
+ chunkProvider.addTicketAtLevel(TicketType.UNKNOWN, chunkPos, ticketLevel, chunkPos);
+ chunkProvider.removeTicketAtLevel(TicketType.FUTURE_AWAIT, chunkPos, ticketLevel, holderIdentifier);
+@@ -323,12 +338,228 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ for (int cx = minChunkX; cx <= maxChunkX; ++cx) {
+ for (int cz = minChunkZ; cz <= maxChunkZ; ++cz) {
+ io.papermc.paper.chunk.system.ChunkSystem.scheduleChunkLoad(
+- this, cx, cz, net.minecraft.world.level.chunk.status.ChunkStatus.FULL, true, priority, consumer
++ this, cx, cz, chunkStatus, true, priority, consumer
+ );
+ }
+ }
+ }
+- // Paper end
++
++ public final void loadChunksForMoveAsync(AABB axisalignedbb, ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority,
++ java.util.function.Consumer<List<net.minecraft.world.level.chunk.ChunkAccess>> onLoad) {
++
++ int minBlockX = Mth.floor(axisalignedbb.minX - 1.0E-7D) - 3;
++ int maxBlockX = Mth.floor(axisalignedbb.maxX + 1.0E-7D) + 3;
++
++ int minBlockZ = Mth.floor(axisalignedbb.minZ - 1.0E-7D) - 3;
++ int maxBlockZ = Mth.floor(axisalignedbb.maxZ + 1.0E-7D) + 3;
++
++ int minChunkX = minBlockX >> 4;
++ int maxChunkX = maxBlockX >> 4;
++
++ int minChunkZ = minBlockZ >> 4;
++ int maxChunkZ = maxBlockZ >> 4;
++
++ this.loadChunksAsync(minChunkX, maxChunkX, minChunkZ, maxChunkZ, priority, onLoad);
++ }
++
++ // Paper start - rewrite chunk system
++ public final io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler chunkTaskScheduler;
++ public final io.papermc.paper.chunk.system.io.RegionFileIOThread.ChunkDataController chunkDataControllerNew
++ = new io.papermc.paper.chunk.system.io.RegionFileIOThread.ChunkDataController(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.CHUNK_DATA) {
++
++ @Override
++ public net.minecraft.world.level.chunk.storage.RegionFileStorage getCache() {
++ return ServerLevel.this.getChunkSource().chunkMap.regionFileCache;
++ }
++
++ @Override
++ public void writeData(int chunkX, int chunkZ, net.minecraft.nbt.CompoundTag compound) throws IOException {
++ ServerLevel.this.getChunkSource().chunkMap.write(new ChunkPos(chunkX, chunkZ), compound);
++ }
++
++ @Override
++ public net.minecraft.nbt.CompoundTag readData(int chunkX, int chunkZ) throws IOException {
++ return ServerLevel.this.getChunkSource().chunkMap.readSync(new ChunkPos(chunkX, chunkZ));
++ }
++ };
++ public final io.papermc.paper.chunk.system.io.RegionFileIOThread.ChunkDataController poiDataControllerNew
++ = new io.papermc.paper.chunk.system.io.RegionFileIOThread.ChunkDataController(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.POI_DATA) {
++
++ @Override
++ public net.minecraft.world.level.chunk.storage.RegionFileStorage getCache() {
++ return ServerLevel.this.getChunkSource().chunkMap.getPoiManager();
++ }
++
++ @Override
++ public void writeData(int chunkX, int chunkZ, net.minecraft.nbt.CompoundTag compound) throws IOException {
++ ServerLevel.this.getChunkSource().chunkMap.getPoiManager().write(new ChunkPos(chunkX, chunkZ), compound);
++ }
++
++ @Override
++ public net.minecraft.nbt.CompoundTag readData(int chunkX, int chunkZ) throws IOException {
++ return ServerLevel.this.getChunkSource().chunkMap.getPoiManager().read(new ChunkPos(chunkX, chunkZ));
++ }
++ };
++ public final io.papermc.paper.chunk.system.io.RegionFileIOThread.ChunkDataController entityDataControllerNew
++ = new io.papermc.paper.chunk.system.io.RegionFileIOThread.ChunkDataController(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.ENTITY_DATA) {
++
++ @Override
++ public net.minecraft.world.level.chunk.storage.RegionFileStorage getCache() {
++ return ServerLevel.this.entityStorage;
++ }
++
++ @Override
++ public void writeData(int chunkX, int chunkZ, net.minecraft.nbt.CompoundTag compound) throws IOException {
++ ServerLevel.this.writeEntityChunk(chunkX, chunkZ, compound);
++ }
++
++ @Override
++ public net.minecraft.nbt.CompoundTag readData(int chunkX, int chunkZ) throws IOException {
++ return ServerLevel.this.readEntityChunk(chunkX, chunkZ);
++ }
++ };
++ private final EntityRegionFileStorage entityStorage;
++
++ private static final class EntityRegionFileStorage extends net.minecraft.world.level.chunk.storage.RegionFileStorage {
++
++ public EntityRegionFileStorage(RegionStorageInfo storageKey, Path directory, boolean dsync) {
++ super(storageKey, directory, dsync);
++ }
++
++ protected void write(ChunkPos pos, net.minecraft.nbt.CompoundTag nbt) throws IOException {
++ ChunkPos nbtPos = nbt == null ? null : EntityStorage.readChunkPos(nbt);
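++ // sanity check: never persist entity data whose embedded chunk position disagrees with the chunk being written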
++ if (nbtPos != null && !pos.equals(nbtPos)) {
++ throw new IllegalArgumentException(
++ "Entity chunk coordinate and serialized data do not have matching coordinates, trying to serialize coordinate " + pos.toString()
++ + " but compound says coordinate is " + nbtPos + " for world: " + this
++ );
++ }
++ super.write(pos, nbt);
++ }
++ }
++
++ private void writeEntityChunk(int chunkX, int chunkZ, net.minecraft.nbt.CompoundTag compound) throws IOException {
++ if (!io.papermc.paper.chunk.system.io.RegionFileIOThread.isRegionFileThread()) {
++ io.papermc.paper.chunk.system.io.RegionFileIOThread.scheduleSave(
++ this, chunkX, chunkZ, compound,
++ io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.ENTITY_DATA);
++ return;
++ }
++ this.entityStorage.write(new ChunkPos(chunkX, chunkZ), compound);
++ }
++
++ private net.minecraft.nbt.CompoundTag readEntityChunk(int chunkX, int chunkZ) throws IOException {
++ if (!io.papermc.paper.chunk.system.io.RegionFileIOThread.isRegionFileThread()) {
++ return io.papermc.paper.chunk.system.io.RegionFileIOThread.loadData(
++ this, chunkX, chunkZ, io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.ENTITY_DATA,
++ io.papermc.paper.chunk.system.io.RegionFileIOThread.getIOBlockingPriorityForCurrentThread()
++ );
++ }
++ return this.entityStorage.read(new ChunkPos(chunkX, chunkZ));
++ }
++
++ private final io.papermc.paper.chunk.system.entity.EntityLookup entityLookup;
++ public final io.papermc.paper.chunk.system.entity.EntityLookup getEntityLookup() {
++ return this.entityLookup;
++ }
++
++ private final java.util.concurrent.atomic.AtomicLong nonFullSyncLoadIdGenerator = new java.util.concurrent.atomic.AtomicLong();
++
++ private ChunkAccess getIfAboveStatus(int chunkX, int chunkZ, net.minecraft.world.level.chunk.status.ChunkStatus status) {
++ io.papermc.paper.chunk.system.scheduling.NewChunkHolder loaded =
++ this.chunkTaskScheduler.chunkHolderManager.getChunkHolder(chunkX, chunkZ);
++ io.papermc.paper.chunk.system.scheduling.NewChunkHolder.ChunkCompletion loadedCompletion;
++ if (loaded != null && (loadedCompletion = loaded.getLastChunkCompletion()) != null && loadedCompletion.genStatus().isOrAfter(status)) {
++ return loadedCompletion.chunk();
++ }
++
++ return null;
++ }
++
++ @Override
++ public ChunkAccess syncLoadNonFull(int chunkX, int chunkZ, net.minecraft.world.level.chunk.status.ChunkStatus status) {
++ if (status == null || status.isOrAfter(net.minecraft.world.level.chunk.status.ChunkStatus.FULL)) {
++ throw new IllegalArgumentException("Status: " + status);
++ }
++ ChunkAccess loaded = this.getIfAboveStatus(chunkX, chunkZ, status);
++ if (loaded != null) {
++ return loaded;
++ }
++
++ Long ticketId = Long.valueOf(this.nonFullSyncLoadIdGenerator.getAndIncrement());
++ int ticketLevel = 33 + net.minecraft.world.level.chunk.status.ChunkStatus.getDistance(status);
++ this.chunkTaskScheduler.chunkHolderManager.addTicketAtLevel(
++ TicketType.NON_FULL_SYNC_LOAD, chunkX, chunkZ, ticketLevel, ticketId
++ );
++ this.chunkTaskScheduler.chunkHolderManager.processTicketUpdates();
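++ // flush ticket updates so the chunk holder is created at the required level before the load is scheduled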
++
++ this.chunkTaskScheduler.beginChunkLoadForNonFullSync(chunkX, chunkZ, status, ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.BLOCKING);
++
++ // a simple spin-wait would suffice here, since this load does not require us to process tasks,
++ // but we process tasks anyway because it is a better use of the time spent waiting
++ this.chunkSource.mainThreadProcessor.managedBlock(() -> {
++ return ServerLevel.this.getIfAboveStatus(chunkX, chunkZ, status) != null;
++ });
++
++ loaded = ServerLevel.this.getIfAboveStatus(chunkX, chunkZ, status);
++ if (loaded == null) {
++ throw new IllegalStateException("Expected chunk to be loaded for status " + status);
++ }
++
++ this.chunkTaskScheduler.chunkHolderManager.removeTicketAtLevel(
++ TicketType.NON_FULL_SYNC_LOAD, chunkX, chunkZ, ticketLevel, ticketId
++ );
++
++ return loaded;
++ }
++
++ public final int getRegionChunkShift() {
++ // placeholder for folia
++ return io.papermc.paper.threadedregions.TickRegions.getRegionChunkShift();
++ }
++ // Paper end - rewrite chunk system
++
++ public final io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader playerChunkLoader = new io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader(this);
++ private final java.util.concurrent.atomic.AtomicReference<io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.ViewDistances> viewDistances = new java.util.concurrent.atomic.AtomicReference<>(new io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.ViewDistances(-1, -1, -1));
++
++ public io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.ViewDistances getViewDistances() {
++ return this.viewDistances.get();
++ }
++
++ private void updateViewDistance(final java.util.function.Function<io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.ViewDistances, io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.ViewDistances> update) {
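++ // lock-free update: retry the compare-and-set until this thread's update wins against concurrent modifications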
++ for (io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.ViewDistances curr = this.viewDistances.get();;) {
++ if (this.viewDistances.compareAndSet(curr, update.apply(curr))) {
++ return;
++ }
++ }
++ }
++
++ public void setTickViewDistance(final int distance) {
++ if ((distance < io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MIN_VIEW_DISTANCE || distance > io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MAX_VIEW_DISTANCE)) {
++ throw new IllegalArgumentException("Tick view distance must be a number between " + io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MIN_VIEW_DISTANCE + " and " + (io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MAX_VIEW_DISTANCE) + ", got: " + distance);
++ }
++ this.updateViewDistance((input) -> {
++ return input.setTickViewDistance(distance);
++ });
++ }
++
++ public void setLoadViewDistance(final int distance) {
++ if (distance != -1 && (distance < io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MIN_VIEW_DISTANCE || distance > io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MAX_VIEW_DISTANCE + 1)) {
++ throw new IllegalArgumentException("Load view distance must be a number between " + io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MIN_VIEW_DISTANCE + " and " + (io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MAX_VIEW_DISTANCE + 1) + " or -1, got: " + distance);
++ }
++ this.updateViewDistance((input) -> {
++ return input.setLoadViewDistance(distance);
++ });
++ }
++
++ public void setSendViewDistance(final int distance) {
++ if (distance != -1 && (distance < io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MIN_VIEW_DISTANCE || distance > io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MAX_VIEW_DISTANCE + 1)) {
++ throw new IllegalArgumentException("Send view distance must be a number between " + io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MIN_VIEW_DISTANCE + " and " + (io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MAX_VIEW_DISTANCE + 1) + " or -1, got: " + distance);
++ }
++ this.updateViewDistance((input) -> {
++ return input.setSendViewDistance(distance);
++ });
++ }
+
+ // Paper start - optimise getPlayerByUUID
+ @Nullable
+@@ -382,16 +613,16 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ // CraftBukkit end
+ boolean flag2 = minecraftserver.forceSynchronousWrites();
+ DataFixer datafixer = minecraftserver.getFixerUpper();
+- EntityPersistentStorage<Entity> entitypersistentstorage = new EntityStorage(new SimpleRegionStorage(new RegionStorageInfo(convertable_conversionsession.getLevelId(), resourcekey, "entities"), convertable_conversionsession.getDimensionPath(resourcekey).resolve("entities"), datafixer, flag2, DataFixTypes.ENTITY_CHUNK), this, minecraftserver);
++ this.entityStorage = new EntityRegionFileStorage(new RegionStorageInfo(convertable_conversionsession.getLevelId(), resourcekey, "entities"), convertable_conversionsession.getDimensionPath(resourcekey).resolve("entities"), flag2); // Paper - rewrite chunk system
+
+- this.entityManager = new PersistentEntitySectionManager<>(Entity.class, new ServerLevel.EntityCallbacks(), entitypersistentstorage);
++ // this.entityManager = new PersistentEntitySectionManager<>(Entity.class, new ServerLevel.EntityCallbacks(), entitypersistentstorage, this.entitySliceManager); // Paper // Paper - rewrite chunk system
+ StructureTemplateManager structuretemplatemanager = minecraftserver.getStructureManager();
+ int j = this.spigotConfig.viewDistance; // Spigot
+ int k = this.spigotConfig.simulationDistance; // Spigot
+- PersistentEntitySectionManager persistententitysectionmanager = this.entityManager;
++ //PersistentEntitySectionManager persistententitysectionmanager = this.entityManager; // Paper - rewrite chunk system
+
+- Objects.requireNonNull(this.entityManager);
+- this.chunkSource = new ServerChunkCache(this, convertable_conversionsession, datafixer, structuretemplatemanager, executor, chunkgenerator, j, k, flag2, worldloadlistener, persistententitysectionmanager::updateChunkStatus, () -> {
++ //Objects.requireNonNull(this.entityManager); // Paper - rewrite chunk system
++ this.chunkSource = new ServerChunkCache(this, convertable_conversionsession, datafixer, structuretemplatemanager, executor, chunkgenerator, j, k, flag2, worldloadlistener, null, () -> { // Paper - rewrite chunk system
+ return minecraftserver.overworld().getDataStorage();
+ });
+ this.chunkSource.getGeneratorState().ensureStructuresGenerated();
+@@ -420,6 +651,9 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ return (RandomSequences) this.getDataStorage().computeIfAbsent(RandomSequences.factory(l), "random_sequences");
+ });
+ this.getCraftServer().addWorld(this.getWorld()); // CraftBukkit
++
++ this.chunkTaskScheduler = new io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler(this, io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler.workerThreads); // Paper - rewrite chunk system
++ this.entityLookup = new io.papermc.paper.chunk.system.entity.EntityLookup(this, new EntityCallbacks()); // Paper - rewrite chunk system
+ }
+
+ // Paper start
+@@ -552,7 +786,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ gameprofilerfiller.push("checkDespawn");
+ entity.checkDespawn();
+ gameprofilerfiller.pop();
+- if (this.chunkSource.chunkMap.getDistanceManager().inEntityTickingRange(entity.chunkPosition().toLong())) {
++ if (true || this.chunkSource.chunkMap.getDistanceManager().inEntityTickingRange(entity.chunkPosition().toLong())) { // Paper - now always true if in the ticking list
+ Entity entity1 = entity.getVehicle();
+
+ if (entity1 != null) {
+@@ -577,13 +811,16 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ }
+
+ gameprofilerfiller.push("entityManagement");
+- this.entityManager.tick();
++ //this.entityManager.tick(); // Paper - rewrite chunk system
+ gameprofilerfiller.pop();
+ }
+
+ @Override
+ public boolean shouldTickBlocksAt(long chunkPos) {
+- return this.chunkSource.chunkMap.getDistanceManager().inBlockTickingRange(chunkPos);
++ // Paper start - replace player chunk loader system
++ ChunkHolder holder = this.chunkSource.chunkMap.getVisibleChunkIfPresent(chunkPos);
++ return holder != null && holder.isTickingReady();
++ // Paper end - replace player chunk loader system
+ }
+
+ protected void tickTime() {
+@@ -1060,6 +1297,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ }
+
+ public void save(@Nullable ProgressListener progressListener, boolean flush, boolean savingDisabled) {
++ // Paper start - rewrite chunk system - add close param
++ this.save(progressListener, flush, savingDisabled, false);
++ }
++ public void save(@Nullable ProgressListener progressListener, boolean flush, boolean savingDisabled, boolean close) {
++ // Paper end - rewrite chunk system - add close param
+ ServerChunkCache chunkproviderserver = this.getChunkSource();
+
+ if (!savingDisabled) {
+@@ -1075,16 +1317,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ }
+
+ timings.worldSaveChunks.startTiming(); // Paper
+- chunkproviderserver.save(flush);
++ if (!close) chunkproviderserver.save(flush); // Paper - rewrite chunk system
++ if (close) chunkproviderserver.close(true); // Paper - rewrite chunk system
+ timings.worldSaveChunks.stopTiming(); // Paper
+ }// Paper
+- if (flush) {
+- this.entityManager.saveAll();
+- } else {
+- this.entityManager.autoSave();
+- }
++ // Paper - rewrite chunk system - entity saving moved into ChunkHolder
+
+- }
++ } else if (close) { chunkproviderserver.close(false); } // Paper - rewrite chunk system
+
+ // CraftBukkit start - moved from MinecraftServer.saveChunks
+ ServerLevel worldserver1 = this;
+@@ -1220,7 +1459,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ this.removePlayerImmediately((ServerPlayer) entity, Entity.RemovalReason.DISCARDED);
+ }
+
+- this.entityManager.addNewEntity(player);
++ this.entityLookup.addNewEntity(player); // Paper - rewrite chunk system
+ }
+
+ // CraftBukkit start
+@@ -1251,7 +1490,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ }
+ // CraftBukkit end
+
+- return this.entityManager.addNewEntity(entity);
++ return this.entityLookup.addNewEntity(entity); // Paper - rewrite chunk system
+ }
+ }
+
+@@ -1263,10 +1502,10 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ public boolean tryAddFreshEntityWithPassengers(Entity entity, org.bukkit.event.entity.CreatureSpawnEvent.SpawnReason reason) {
+ // CraftBukkit end
+ Stream<UUID> stream = entity.getSelfAndPassengers().map(Entity::getUUID); // CraftBukkit - decompile error
+- PersistentEntitySectionManager persistententitysectionmanager = this.entityManager;
++ //PersistentEntitySectionManager persistententitysectionmanager = this.entityManager; // Paper - rewrite chunk system
+
+- Objects.requireNonNull(this.entityManager);
+- if (stream.anyMatch(persistententitysectionmanager::isLoaded)) {
++ //Objects.requireNonNull(this.entityManager); // Paper - rewrite chunk system
++ if (stream.anyMatch(this.entityLookup::hasEntity)) { // Paper - rewrite chunk system
+ return false;
+ } else {
+ this.addFreshEntityWithPassengers(entity, reason); // CraftBukkit
+@@ -1852,7 +2091,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ }
+ }
+
+- bufferedwriter.write(String.format(Locale.ROOT, "entities: %s\n", this.entityManager.gatherStats()));
++ bufferedwriter.write(String.format(Locale.ROOT, "entities: %s\n", this.entityLookup.getDebugInfo())); // Paper - rewrite chunk system
+ bufferedwriter.write(String.format(Locale.ROOT, "block_entity_tickers: %d\n", this.blockEntityTickers.size()));
+ bufferedwriter.write(String.format(Locale.ROOT, "block_ticks: %d\n", this.getBlockTicks().count()));
+ bufferedwriter.write(String.format(Locale.ROOT, "fluid_ticks: %d\n", this.getFluidTicks().count()));
+@@ -1901,7 +2140,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ BufferedWriter bufferedwriter2 = Files.newBufferedWriter(path1);
+
+ try {
+- playerchunkmap.dumpChunks(bufferedwriter2);
++ //playerchunkmap.dumpChunks(bufferedwriter2); // Paper - rewrite chunk system
+ } catch (Throwable throwable4) {
+ if (bufferedwriter2 != null) {
+ try {
+@@ -1922,7 +2161,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ BufferedWriter bufferedwriter3 = Files.newBufferedWriter(path2);
+
+ try {
+- this.entityManager.dumpSections(bufferedwriter3);
++ //this.entityManager.dumpSections(bufferedwriter3); // Paper - rewrite chunk system
+ } catch (Throwable throwable6) {
+ if (bufferedwriter3 != null) {
+ try {
+@@ -2064,7 +2303,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+
+ @VisibleForTesting
+ public String getWatchdogStats() {
+- return String.format(Locale.ROOT, "players: %s, entities: %s [%s], block_entities: %d [%s], block_ticks: %d, fluid_ticks: %d, chunk_source: %s", this.players.size(), this.entityManager.gatherStats(), ServerLevel.getTypeCount(this.entityManager.getEntityGetter().getAll(), (entity) -> {
++ return String.format(Locale.ROOT, "players: %s, entities: %s [%s], block_entities: %d [%s], block_ticks: %d, fluid_ticks: %d, chunk_source: %s", this.players.size(), this.entityLookup.getDebugInfo(), ServerLevel.getTypeCount(this.entityLookup.getAll(), (entity) -> { // Paper - rewrite chunk system
+ return BuiltInRegistries.ENTITY_TYPE.getKey(entity.getType()).toString();
+ }), this.blockEntityTickers.size(), ServerLevel.getTypeCount(this.blockEntityTickers, TickingBlockEntity::getType), this.getBlockTicks().count(), this.getFluidTicks().count(), this.gatherChunkSourceStats());
+ }
+@@ -2124,15 +2363,15 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ @Override
+ public LevelEntityGetter<Entity> getEntities() {
+ org.spigotmc.AsyncCatcher.catchOp("Chunk getEntities call"); // Spigot
+- return this.entityManager.getEntityGetter();
++ return this.entityLookup; // Paper - rewrite chunk system
+ }
+
+- public void addLegacyChunkEntities(Stream<Entity> entities) {
+- this.entityManager.addLegacyChunkEntities(entities);
++ public void addLegacyChunkEntities(Stream<Entity> entities, ChunkPos forChunk) { // Paper - rewrite chunk system
++ this.entityLookup.addLegacyChunkEntities(entities.toList(), forChunk); // Paper - rewrite chunk system
+ }
+
+- public void addWorldGenChunkEntities(Stream<Entity> entities) {
+- this.entityManager.addWorldGenChunkEntities(entities);
++ public void addWorldGenChunkEntities(Stream<Entity> entities, ChunkPos forChunk) { // Paper - rewrite chunk system
++ this.entityLookup.addWorldGenChunkEntities(entities.toList(), forChunk); // Paper - rewrite chunk system
+ }
+
+ public void startTickingChunk(LevelChunk chunk) {
+@@ -2152,34 +2391,49 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ @Override
+ public void close() throws IOException {
+ super.close();
+- this.entityManager.close();
++ //this.entityManager.close(); // Paper - rewrite chunk system
+ }
+
+ @Override
+ public String gatherChunkSourceStats() {
+ String s = this.chunkSource.gatherStats();
+
+- return "Chunks[S] W: " + s + " E: " + this.entityManager.gatherStats();
++ return "Chunks[S] W: " + s + " E: " + this.entityLookup.getDebugInfo(); // Paper - rewrite chunk system
+ }
+
+ public boolean areEntitiesLoaded(long chunkPos) {
+- return this.entityManager.areEntitiesLoaded(chunkPos);
++ // Paper start - rewrite chunk system
++ return this.getChunkIfLoadedImmediately(ChunkPos.getX(chunkPos), ChunkPos.getZ(chunkPos)) != null;
++ // Paper end - rewrite chunk system
+ }
+
+ private boolean isPositionTickingWithEntitiesLoaded(long chunkPos) {
+- return this.areEntitiesLoaded(chunkPos) && this.chunkSource.isPositionTicking(chunkPos);
++ // Paper start - optimise ticking-ready checks
++ io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder = this.chunkTaskScheduler.chunkHolderManager.getChunkHolder(chunkPos);
++ // isTicking implies the chunk is loaded, and a loaded chunk implies its entities are loaded
++ return chunkHolder != null && chunkHolder.isTickingReady();
++ // Paper end
+ }
+
+ public boolean isPositionEntityTicking(BlockPos pos) {
+- return this.entityManager.canPositionTick(pos) && this.chunkSource.chunkMap.getDistanceManager().inEntityTickingRange(ChunkPos.asLong(pos));
++ // Paper start - rewrite chunk system
++ io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder = this.chunkTaskScheduler.chunkHolderManager.getChunkHolder(io.papermc.paper.util.CoordinateUtils.getChunkKey(pos));
++ return chunkHolder != null && chunkHolder.isEntityTickingReady();
++ // Paper end - rewrite chunk system
+ }
+
+ public boolean isNaturalSpawningAllowed(BlockPos pos) {
+- return this.entityManager.canPositionTick(pos);
++ // Paper start - rewrite chunk system
++ io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder = this.chunkTaskScheduler.chunkHolderManager.getChunkHolder(io.papermc.paper.util.CoordinateUtils.getChunkKey(pos));
++ return chunkHolder != null && chunkHolder.isEntityTickingReady();
++ // Paper end - rewrite chunk system
+ }
+
+ public boolean isNaturalSpawningAllowed(ChunkPos pos) {
+- return this.entityManager.canPositionTick(pos);
++ // Paper start - rewrite chunk system
++ io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder = this.chunkTaskScheduler.chunkHolderManager.getChunkHolder(io.papermc.paper.util.CoordinateUtils.getChunkKey(pos));
++ return chunkHolder != null && chunkHolder.isEntityTickingReady();
++ // Paper end - rewrite chunk system
+ }
+
+ @Override
+@@ -2205,7 +2459,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ CrashReportCategory crashreportsystemdetails = super.fillReportDetails(report);
+
+ crashreportsystemdetails.setDetail("Loaded entity count", () -> {
+- return String.valueOf(this.entityManager.count());
++ return String.valueOf(this.entityLookup.getAllCopy().length); // Paper
+ });
+ return crashreportsystemdetails;
+ }
+diff --git a/src/main/java/net/minecraft/server/level/ServerPlayer.java b/src/main/java/net/minecraft/server/level/ServerPlayer.java
+index 3a3c17e62244a16cbad5558d55bcf8e330997acb..683d2cc82e1ffce45d533eab0a1ee7c367af62c8 100644
+--- a/src/main/java/net/minecraft/server/level/ServerPlayer.java
++++ b/src/main/java/net/minecraft/server/level/ServerPlayer.java
+@@ -293,6 +293,50 @@ public class ServerPlayer extends Player {
+ public @Nullable String clientBrandName = null; // Paper - Brand support
+ public org.bukkit.event.player.PlayerQuitEvent.QuitReason quitReason = null; // Paper - Add API for quit reason; there are a lot of changes to do if we change all methods leading to the event
+
++ // Paper start - replace player chunk loader
++ private final java.util.concurrent.atomic.AtomicReference<io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.ViewDistances> viewDistances = new java.util.concurrent.atomic.AtomicReference<>(new io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.ViewDistances(-1, -1, -1));
++ public io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.PlayerChunkLoaderData chunkLoader;
++
++ public io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.ViewDistances getViewDistances() {
++ return this.viewDistances.get();
++ }
++
++ private void updateViewDistance(final java.util.function.Function<io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.ViewDistances, io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.ViewDistances> update) {
++ for (io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.ViewDistances curr = this.viewDistances.get();;) {
++ if (this.viewDistances.compareAndSet(curr, update.apply(curr))) {
++ return;
++ }
++ }
++ }
++
++ public void setTickViewDistance(final int distance) {
++ if ((distance < io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MIN_VIEW_DISTANCE || distance > io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MAX_VIEW_DISTANCE)) {
++ throw new IllegalArgumentException("Tick view distance must be a number between " + io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MIN_VIEW_DISTANCE + " and " + (io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MAX_VIEW_DISTANCE) + ", got: " + distance);
++ }
++ this.updateViewDistance((input) -> {
++ return input.setTickViewDistance(distance);
++ });
++ }
++
++ public void setLoadViewDistance(final int distance) {
++ if (distance != -1 && (distance < io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MIN_VIEW_DISTANCE || distance > io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MAX_VIEW_DISTANCE + 1)) {
++ throw new IllegalArgumentException("Load view distance must be a number between " + io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MIN_VIEW_DISTANCE + " and " + (io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MAX_VIEW_DISTANCE + 1) + " or -1, got: " + distance);
++ }
++ this.updateViewDistance((input) -> {
++ return input.setLoadViewDistance(distance);
++ });
++ }
++
++ public void setSendViewDistance(final int distance) {
++ if (distance != -1 && (distance < io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MIN_VIEW_DISTANCE || distance > io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MAX_VIEW_DISTANCE + 1)) {
++ throw new IllegalArgumentException("Send view distance must be a number between " + io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MIN_VIEW_DISTANCE + " and " + (io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.MAX_VIEW_DISTANCE + 1) + " or -1, got: " + distance);
++ }
++ this.updateViewDistance((input) -> {
++ return input.setSendViewDistance(distance);
++ });
++ }
++ // Paper end - replace player chunk loader
++
+ public ServerPlayer(MinecraftServer server, ServerLevel world, GameProfile profile, ClientInformation clientOptions) {
+ super(world, world.getSharedSpawnPos(), world.getSharedSpawnAngle(), profile);
+ this.chatVisibility = ChatVisiblity.FULL;
+diff --git a/src/main/java/net/minecraft/server/level/ThreadedLevelLightEngine.java b/src/main/java/net/minecraft/server/level/ThreadedLevelLightEngine.java
+index f206df06a7d8895175db31d4a840d7467ffe826f..8ef22f8f0d6da49247a90152e5cfa9ffc7f596a4 100644
+--- a/src/main/java/net/minecraft/server/level/ThreadedLevelLightEngine.java
++++ b/src/main/java/net/minecraft/server/level/ThreadedLevelLightEngine.java
+@@ -37,15 +37,12 @@ import net.minecraft.world.level.chunk.status.ChunkStatus;
+ public class ThreadedLevelLightEngine extends LevelLightEngine implements AutoCloseable {
+ public static final int DEFAULT_BATCH_SIZE = 1000;
+ private static final Logger LOGGER = LogUtils.getLogger();
+- private final ProcessorMailbox<Runnable> taskMailbox;
+- private final ObjectList<Pair<ThreadedLevelLightEngine.TaskType, Runnable>> lightTasks = new ObjectArrayList<>();
++ // Paper - rewrite chunk system
+ private final ChunkMap chunkMap;
+- private final ProcessorHandle<ChunkTaskPriorityQueueSorter.Message<Runnable>> sorterMailbox;
+- private final int taskPerBatch = 1000;
+- private final AtomicBoolean scheduled = new AtomicBoolean();
++ // Paper - rewrite chunk system
+
+ // Paper start - replace light engine impl
+- protected final ca.spottedleaf.starlight.common.light.StarLightInterface theLightEngine;
++ public final ca.spottedleaf.starlight.common.light.StarLightInterface theLightEngine;
+ public final boolean hasBlockLight;
+ public final boolean hasSkyLight;
+ // Paper end - replace light engine impl
+@@ -59,8 +56,7 @@ public class ThreadedLevelLightEngine extends LevelLightEngine implements AutoCl
+ ) {
+ super(chunkProvider, false, false); // Paper - destroy vanilla light engine state
+ this.chunkMap = chunkStorage;
+- this.sorterMailbox = executor;
+- this.taskMailbox = processor;
++ // Paper - rewrite chunk system
+ // Paper start - replace light engine impl
+ this.hasBlockLight = true;
+ this.hasSkyLight = hasBlockLight; // Nice variable name.
+@@ -104,7 +100,7 @@ public class ThreadedLevelLightEngine extends LevelLightEngine implements AutoCl
+ ++totalChunks;
+ }
+
+- this.taskMailbox.tell(() -> {
++ this.chunkMap.level.chunkTaskScheduler.radiusAwareScheduler.queueInfiniteRadiusTask(() -> { // Paper - rewrite chunk system
+ this.theLightEngine.relightChunks(chunks, (ChunkPos chunkPos) -> {
+ chunkLightCallback.accept(chunkPos);
+ ((java.util.concurrent.Executor)((ServerLevel)this.theLightEngine.getWorld()).getChunkSource().mainThreadProcessor).execute(() -> {
+@@ -121,7 +117,7 @@ public class ThreadedLevelLightEngine extends LevelLightEngine implements AutoCl
+ private final Long2IntOpenHashMap chunksBeingWorkedOn = new Long2IntOpenHashMap();
+
+ private void queueTaskForSection(final int chunkX, final int chunkY, final int chunkZ,
+- final Supplier<ca.spottedleaf.starlight.common.light.StarLightInterface.LightQueue.ChunkTasks> runnable) {
++ final Supplier<io.papermc.paper.chunk.system.light.LightQueue.ChunkTasks> runnable) { // Paper - rewrite chunk system
+ final ServerLevel world = (ServerLevel)this.theLightEngine.getWorld();
+
+ final ChunkAccess center = this.theLightEngine.getAnyChunkNow(chunkX, chunkZ);
+@@ -148,7 +144,7 @@ public class ThreadedLevelLightEngine extends LevelLightEngine implements AutoCl
+
+ final long key = CoordinateUtils.getChunkKey(chunkX, chunkZ);
+
+- final ca.spottedleaf.starlight.common.light.StarLightInterface.LightQueue.ChunkTasks updateFuture = runnable.get();
++ final io.papermc.paper.chunk.system.light.LightQueue.ChunkTasks updateFuture = runnable.get(); // Paper - rewrite chunk system
+
+ if (updateFuture == null) {
+ // not scheduled
+@@ -285,16 +281,11 @@ public class ThreadedLevelLightEngine extends LevelLightEngine implements AutoCl
+ }
+
+ private void addTask(int x, int z, ThreadedLevelLightEngine.TaskType stage, Runnable task) {
+- this.addTask(x, z, this.chunkMap.getChunkQueueLevel(ChunkPos.asLong(x, z)), stage, task);
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ private void addTask(int x, int z, IntSupplier completedLevelSupplier, ThreadedLevelLightEngine.TaskType stage, Runnable task) {
+- this.sorterMailbox.tell(ChunkTaskPriorityQueueSorter.message(() -> {
+- this.lightTasks.add(Pair.of(stage, task));
+- if (this.lightTasks.size() >= 1000) {
+- this.runUpdate();
+- }
+- }, ChunkPos.asLong(x, z), completedLevelSupplier));
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ @Override
+@@ -327,83 +318,15 @@ public class ThreadedLevelLightEngine extends LevelLightEngine implements AutoCl
+ }
+
+ public CompletableFuture<ChunkAccess> lightChunk(ChunkAccess chunk, boolean excludeBlocks) {
+- // Paper start - replace light engine impl
+- if (true) {
+- boolean lit = excludeBlocks;
+- final ChunkPos chunkPos = chunk.getPos();
+-
+- return CompletableFuture.supplyAsync(() -> {
+- final Boolean[] emptySections = StarLightEngine.getEmptySectionsForChunk(chunk);
+- if (!lit) {
+- chunk.setLightCorrect(false);
+- this.theLightEngine.lightChunk(chunk, emptySections);
+- chunk.setLightCorrect(true);
+- } else {
+- this.theLightEngine.forceLoadInChunk(chunk, emptySections);
+- // can't really force the chunk to be edged checked, as we need neighbouring chunks - but we don't have
+- // them, so if it's not loaded then i guess we can't do edge checks. later loads of the chunk should
+- // catch what we miss here.
+- this.theLightEngine.checkChunkEdges(chunkPos.x, chunkPos.z);
+- }
+-
+- this.chunkMap.releaseLightTicket(chunkPos);
+- return chunk;
+- }, (runnable) -> {
+- this.theLightEngine.scheduleChunkLight(chunkPos, runnable);
+- this.tryScheduleUpdate();
+- }).whenComplete((final ChunkAccess c, final Throwable throwable) -> {
+- if (throwable != null) {
+- LOGGER.error("Failed to light chunk " + chunkPos, throwable);
+- }
+- });
+- }
+- // Paper end - replace light engine impl
+- ChunkPos chunkPos = chunk.getPos();
+- chunk.setLightCorrect(false);
+- this.addTask(chunkPos.x, chunkPos.z, ThreadedLevelLightEngine.TaskType.PRE_UPDATE, Util.name(() -> {
+- if (!excludeBlocks) {
+- super.propagateLightSources(chunkPos);
+- }
+- }, () -> "lightChunk " + chunkPos + " " + excludeBlocks));
+- return CompletableFuture.supplyAsync(() -> {
+- chunk.setLightCorrect(true);
+- this.chunkMap.releaseLightTicket(chunkPos);
+- return chunk;
+- }, task -> this.addTask(chunkPos.x, chunkPos.z, ThreadedLevelLightEngine.TaskType.POST_UPDATE, task));
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public void tryScheduleUpdate() {
+- if (this.hasLightWork() && this.scheduled.compareAndSet(false, true)) { // Paper // Paper - rewrite light engine
+- this.taskMailbox.tell(() -> {
+- this.runUpdate();
+- this.scheduled.set(false);
+- });
+- }
++ // Paper - rewrite chunk system
+ }
+
+ private void runUpdate() {
+- int i = Math.min(this.lightTasks.size(), 1000);
+- ObjectListIterator<Pair<ThreadedLevelLightEngine.TaskType, Runnable>> objectListIterator = this.lightTasks.iterator();
+-
+- int j;
+- for (j = 0; objectListIterator.hasNext() && j < i; j++) {
+- Pair<ThreadedLevelLightEngine.TaskType, Runnable> pair = objectListIterator.next();
+- if (pair.getFirst() == ThreadedLevelLightEngine.TaskType.PRE_UPDATE) {
+- pair.getSecond().run();
+- }
+- }
+-
+- objectListIterator.back(j);
+- this.theLightEngine.propagateChanges(); // Paper - rewrite light engine
+-
+- for (int var5 = 0; objectListIterator.hasNext() && var5 < i; var5++) {
+- Pair<ThreadedLevelLightEngine.TaskType, Runnable> pair2 = objectListIterator.next();
+- if (pair2.getFirst() == ThreadedLevelLightEngine.TaskType.POST_UPDATE) {
+- pair2.getSecond().run();
+- }
+-
+- objectListIterator.remove();
+- }
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ public CompletableFuture<?> waitForPendingTasks(int x, int z) {
+diff --git a/src/main/java/net/minecraft/server/level/Ticket.java b/src/main/java/net/minecraft/server/level/Ticket.java
+index eba83b085435150e5954fd5d41dda9ce1d0601ad..e97329f867de2acbdd666925ba5d2aafa7a90574 100644
+--- a/src/main/java/net/minecraft/server/level/Ticket.java
++++ b/src/main/java/net/minecraft/server/level/Ticket.java
+@@ -6,9 +6,12 @@ public final class Ticket<T> implements Comparable<Ticket<?>> {
+ private final TicketType<T> type;
+ private final int ticketLevel;
+ public final T key;
+- private long createdTick;
++ // Paper start - rewrite chunk system
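++ // ticks remaining until the ticket is automatically removed; replaces vanilla's createdTick/timeout pair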
++ public long removeDelay;
+
+- protected Ticket(TicketType<T> type, int level, T argument) {
++ public Ticket(TicketType<T> type, int level, T argument, long removeDelay) {
++ this.removeDelay = removeDelay;
++ // Paper end - rewrite chunk system
+ this.type = type;
+ this.ticketLevel = level;
+ this.key = argument;
+@@ -41,7 +44,7 @@ public final class Ticket<T> implements Comparable<Ticket<?>> {
+
+ @Override
+ public String toString() {
+- return "Ticket[" + this.type + " " + this.ticketLevel + " (" + this.key + ")] at " + this.createdTick;
++ return "Ticket[" + this.type + " " + this.ticketLevel + " (" + this.key + ")] to die in " + this.removeDelay; // Paper - rewrite chunk system
+ }
+
+ public TicketType<T> getType() {
+@@ -53,11 +56,10 @@ public final class Ticket<T> implements Comparable<Ticket<?>> {
+ }
+
+ protected void setCreatedTick(long tickCreated) {
+- this.createdTick = tickCreated;
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ protected boolean timedOut(long currentTick) {
+- long l = this.type.timeout();
+- return l != 0L && currentTick - this.createdTick > l;
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+ }
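+
+As a point of reference, here is a minimal sketch of how a removal-delay ticket
+can be expired, assuming a hypothetical per-tick sweep (the names tickTickets
+and NO_TIMEOUT are illustrative, not the patch's actual ChunkHolderManager
+code); the point is that expiry becomes a per-ticket countdown rather than the
+vanilla comparison against a global createdTick:
+
+    static final long NO_TIMEOUT = -1L; // sentinel: tickets with a negative delay never expire
+
+    static void tickTickets(java.util.Collection<Ticket<?>> tickets) {
+        tickets.removeIf(ticket -> {
+            if (ticket.removeDelay <= NO_TIMEOUT) {
+                return false; // permanent ticket (e.g. player-held)
+            }
+            return --ticket.removeDelay <= 0L; // drop once the countdown reaches zero
+        });
+    }
+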
+diff --git a/src/main/java/net/minecraft/server/level/TicketType.java b/src/main/java/net/minecraft/server/level/TicketType.java
+index 6051e5f272838ef23276a90e21c2fc821ca155d1..658e63ebde81dc14c8ab5850fb246dc0aab25dea 100644
+--- a/src/main/java/net/minecraft/server/level/TicketType.java
++++ b/src/main/java/net/minecraft/server/level/TicketType.java
+@@ -8,6 +8,7 @@ import net.minecraft.world.level.ChunkPos;
+
+ public class TicketType<T> {
+ public static final TicketType<Long> FUTURE_AWAIT = create("future_await", Long::compareTo); // Paper
++ public static final TicketType<Long> ASYNC_LOAD = create("async_load", Long::compareTo); // Paper
+
+ private final String name;
+ private final Comparator<T> comparator;
+@@ -27,6 +28,15 @@ public class TicketType<T> {
+ public static final TicketType<Unit> PLUGIN = TicketType.create("plugin", (a, b) -> 0); // CraftBukkit
+ public static final TicketType<org.bukkit.plugin.Plugin> PLUGIN_TICKET = TicketType.create("plugin_ticket", (plugin1, plugin2) -> plugin1.getClass().getName().compareTo(plugin2.getClass().getName())); // CraftBukkit
+ public static final TicketType<Long> CHUNK_RELIGHT = create("light_update", Long::compareTo); // Paper - ensure chunks stay loaded for lighting
++ // Paper start - rewrite chunk system
++ public static final TicketType<Long> CHUNK_LOAD = create("chunk_load", Long::compareTo);
++ public static final TicketType<Long> STATUS_UPGRADE = create("status_upgrade", Long::compareTo);
++ public static final TicketType<Long> ENTITY_LOAD = create("entity_load", Long::compareTo);
++ public static final TicketType<Long> POI_LOAD = create("poi_load", Long::compareTo);
++ public static final TicketType<Unit> UNLOAD_COOLDOWN = create("unload_cooldown", (u1, u2) -> 0, 5 * 20);
++ public static final TicketType<Long> NON_FULL_SYNC_LOAD = create("non_full_sync_load", Long::compareTo);
++ public static final TicketType<ChunkPos> DELAY_UNLOAD = create("delay_unload", Comparator.comparingLong(ChunkPos::toLong), 1);
++ // Paper end - rewrite chunk system
+
+ public static <T> TicketType<T> create(String name, Comparator<T> argumentComparator) {
+ return new TicketType<>(name, argumentComparator, 0L);
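+
+For reference, the third argument to the three-argument create overload used
+above is the ticket lifetime in ticks, so UNLOAD_COOLDOWN lives for 5 * 20 =
+100 ticks (five seconds at 20 TPS) and DELAY_UNLOAD for a single tick. A
+hypothetical plugin-side ticket type in the same style would look like:
+
+    // Illustrative only - "my_plugin_hold" is not a ticket type defined by this patch.
+    public static final TicketType<Unit> MY_PLUGIN_HOLD = create("my_plugin_hold", (a, b) -> 0, 20L * 60L); // one minute
+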
+diff --git a/src/main/java/net/minecraft/server/level/WorldGenRegion.java b/src/main/java/net/minecraft/server/level/WorldGenRegion.java
+index ca4c8e256047a4af45811c3e772b5a959e2ae941..1351423a12c19a01f602a202832372a399e6a867 100644
+--- a/src/main/java/net/minecraft/server/level/WorldGenRegion.java
++++ b/src/main/java/net/minecraft/server/level/WorldGenRegion.java
+@@ -544,4 +544,21 @@ public class WorldGenRegion implements WorldGenLevel {
+ public long nextSubTickCount() {
+ return this.subTickCount.getAndIncrement();
+ }
++
++ // Paper start
++ // No-op, this class doesn't provide entity access
++ @Override
++ public List<Entity> getHardCollidingEntities(Entity except, AABB box, Predicate<? super Entity> predicate) {
++ return Collections.emptyList();
++ }
++
++ @Override
++ public void getEntities(Entity except, AABB box, Predicate<? super Entity> predicate, List<Entity> into) {}
++
++ @Override
++ public void getHardCollidingEntities(Entity except, AABB box, Predicate<? super Entity> predicate, List<Entity> into) {}
++
++ @Override
++ public <T> void getEntitiesByClass(Class<? extends T> clazz, Entity except, AABB box, List<? super T> into, Predicate<? super T> predicate) {}
++ // Paper end
+ }
+diff --git a/src/main/java/net/minecraft/server/network/PlayerChunkSender.java b/src/main/java/net/minecraft/server/network/PlayerChunkSender.java
+index c502d9b85eb68b277ae17dfea34e0475f0156647..27d0f1ed58948039004f8f1eba2f7f9609fdeec0 100644
+--- a/src/main/java/net/minecraft/server/network/PlayerChunkSender.java
++++ b/src/main/java/net/minecraft/server/network/PlayerChunkSender.java
+@@ -43,16 +43,23 @@ public class PlayerChunkSender {
+
+ public void dropChunk(ServerPlayer player, ChunkPos pos) {
+ if (!this.pendingChunks.remove(pos.toLong()) && player.isAlive()) {
++ // Paper start - rewrite player chunk loader
++ dropChunkStatic(player, pos);
++ }
++ }
++ public static void dropChunkStatic(ServerPlayer player, ChunkPos pos) {
++ player.serverLevel().chunkSource.chunkMap.getVisibleChunkIfPresent(pos.toLong()).removePlayer(player);
+ player.connection.send(new ClientboundForgetLevelChunkPacket(pos));
+ // Paper start - PlayerChunkUnloadEvent
+ if (io.papermc.paper.event.packet.PlayerChunkUnloadEvent.getHandlerList().getRegisteredListeners().length > 0) {
+ new io.papermc.paper.event.packet.PlayerChunkUnloadEvent(player.getBukkitEntity().getWorld().getChunkAt(pos.longKey), player.getBukkitEntity()).callEvent();
+ }
+ // Paper end - PlayerChunkUnloadEvent
+- }
+ }
++ // Paper end - rewrite player chunk loader
+
+ public void sendNextChunks(ServerPlayer player) {
++ if (true) return; // Paper - rewrite player chunk loader
+ if (this.unacknowledgedBatches < this.maxUnacknowledgedBatches) {
+ float f = Math.max(1.0F, this.desiredChunksPerTick);
+ this.batchQuota = Math.min(this.batchQuota + this.desiredChunksPerTick, f);
+@@ -78,7 +85,8 @@ public class PlayerChunkSender {
+ }
+ }
+
+- private static void sendChunk(ServerGamePacketListenerImpl handler, ServerLevel world, LevelChunk chunk) {
++ public static void sendChunk(ServerGamePacketListenerImpl handler, ServerLevel world, LevelChunk chunk) { // Paper - rewrite chunk loader - public
++ handler.player.serverLevel().chunkSource.chunkMap.getVisibleChunkIfPresent(chunk.getPos().toLong()).addPlayer(handler.player);
+ handler.send(new ClientboundLevelChunkWithLightPacket(chunk, world.getLightEngine(), null, null));
+ // Paper start - PlayerChunkLoadEvent
+ if (io.papermc.paper.event.packet.PlayerChunkLoadEvent.getHandlerList().getRegisteredListeners().length > 0) {
+@@ -118,6 +126,7 @@ public class PlayerChunkSender {
+ }
+
+ public void onChunkBatchReceivedByClient(float desiredBatchSize) {
++ if (true) return; // Paper - rewrite player chunk loader
+ this.unacknowledgedBatches--;
+ this.desiredChunksPerTick = Double.isNaN((double)desiredBatchSize) ? 0.01F : Mth.clamp(desiredBatchSize, 0.01F, 64.0F);
+ if (this.unacknowledgedBatches == 0) {
+diff --git a/src/main/java/net/minecraft/server/players/PlayerList.java b/src/main/java/net/minecraft/server/players/PlayerList.java
+index 0aa28caa1254137c0bae8e213bd08c9a654f5335..c4b4e5f5c9366b241686e881cda34568a57b4877 100644
+--- a/src/main/java/net/minecraft/server/players/PlayerList.java
++++ b/src/main/java/net/minecraft/server/players/PlayerList.java
+@@ -296,7 +296,7 @@ public abstract class PlayerList {
+ boolean flag2 = gamerules.getBoolean(GameRules.RULE_LIMITED_CRAFTING);
+
+ // Spigot - view distance
+- playerconnection.send(new ClientboundLoginPacket(player.getId(), worlddata.isHardcore(), this.server.levelKeys(), this.getMaxPlayers(), worldserver1.spigotConfig.viewDistance, worldserver1.spigotConfig.simulationDistance, flag1, !flag, flag2, player.createCommonSpawnInfo(worldserver1), this.server.enforceSecureProfile()));
++ playerconnection.send(new ClientboundLoginPacket(player.getId(), worlddata.isHardcore(), this.server.levelKeys(), this.getMaxPlayers(), worldserver1.getWorld().getSendViewDistance(), worldserver1.getWorld().getSimulationDistance(), flag1, !flag, flag2, player.createCommonSpawnInfo(worldserver1), this.server.enforceSecureProfile())); // Paper - replace old player chunk management
+ player.getBukkitEntity().sendSupportedChannels(); // CraftBukkit
+ playerconnection.send(new ClientboundChangeDifficultyPacket(worlddata.getDifficulty(), worlddata.isDifficultyLocked()));
+ playerconnection.send(new ClientboundPlayerAbilitiesPacket(player.getAbilities()));
+@@ -943,8 +943,8 @@ public abstract class PlayerList {
+ LevelData worlddata = worldserver2.getLevelData();
+
+ entityplayer1.connection.send(new ClientboundRespawnPacket(entityplayer1.createCommonSpawnInfo(worldserver2), (byte) i));
+- entityplayer1.connection.send(new ClientboundSetChunkCacheRadiusPacket(worldserver1.spigotConfig.viewDistance)); // Spigot
+- entityplayer1.connection.send(new ClientboundSetSimulationDistancePacket(worldserver1.spigotConfig.simulationDistance)); // Spigot
++ entityplayer1.connection.send(new ClientboundSetChunkCacheRadiusPacket(worldserver1.getWorld().getSendViewDistance())); // Spigot // Paper - replace old player chunk management
++ entityplayer1.connection.send(new ClientboundSetSimulationDistancePacket(worldserver1.getWorld().getSimulationDistance())); // Spigot // Paper - replace old player chunk management
+ entityplayer1.connection.teleport(CraftLocation.toBukkit(entityplayer1.position(), worldserver2.getWorld(), entityplayer1.getYRot(), entityplayer1.getXRot())); // CraftBukkit
+ entityplayer1.connection.send(new ClientboundSetDefaultSpawnPositionPacket(worldserver1.getSharedSpawnPos(), worldserver1.getSharedSpawnAngle()));
+ entityplayer1.connection.send(new ClientboundChangeDifficultyPacket(worlddata.getDifficulty(), worlddata.isDifficultyLocked()));
+@@ -1496,7 +1496,7 @@ public abstract class PlayerList {
+
+ public void setViewDistance(int viewDistance) {
+ this.viewDistance = viewDistance;
+- this.broadcastAll(new ClientboundSetChunkCacheRadiusPacket(viewDistance));
++ //this.broadcastAll(new ClientboundSetChunkCacheRadiusPacket(viewDistance)); // Paper - move into setViewDistance
+ Iterator iterator = this.server.getAllLevels().iterator();
+
+ while (iterator.hasNext()) {
+@@ -1511,7 +1511,7 @@ public abstract class PlayerList {
+
+ public void setSimulationDistance(int simulationDistance) {
+ this.simulationDistance = simulationDistance;
+- this.broadcastAll(new ClientboundSetSimulationDistancePacket(simulationDistance));
++ //this.broadcastAll(new ClientboundSetSimulationDistancePacket(simulationDistance)); // Paper - handled by playerchunkloader
+ Iterator iterator = this.server.getAllLevels().iterator();
+
+ while (iterator.hasNext()) {
+diff --git a/src/main/java/net/minecraft/util/SortedArraySet.java b/src/main/java/net/minecraft/util/SortedArraySet.java
+index ea72dcb064a35bc6245bc5c94d592efedd8faf41..0793dfe47e68a2b48b010aad5b12dcfa1701293a 100644
+--- a/src/main/java/net/minecraft/util/SortedArraySet.java
++++ b/src/main/java/net/minecraft/util/SortedArraySet.java
+@@ -14,6 +14,14 @@ public class SortedArraySet<T> extends AbstractSet<T> {
+ T[] contents;
+ int size;
+
++ // Paper start - rewrite chunk system
++ public SortedArraySet(final SortedArraySet<T> other) {
++ this.comparator = other.comparator;
++ this.size = other.size;
++ this.contents = Arrays.copyOf(other.contents, this.size);
++ }
++ // Paper end - rewrite chunk system
++
+ private SortedArraySet(int initialCapacity, Comparator<T> comparator) {
+ this.comparator = comparator;
+ if (initialCapacity < 0) {
+@@ -22,6 +30,41 @@ public class SortedArraySet<T> extends AbstractSet<T> {
+ this.contents = (T[])castRawArray(new Object[initialCapacity]);
+ }
+ }
++ // Paper start - optimise removeIf
++ @Override
++ public boolean removeIf(java.util.function.Predicate<? super T> filter) {
++ // the previous implementation used an iterator, which could be O(n^2) and created garbage
++ int i = 0, len = this.size;
++ T[] backingArray = this.contents;
++
++ for (;;) {
++ if (i >= len) {
++ return false;
++ }
++ if (!filter.test(backingArray[i])) {
++ ++i;
++ continue;
++ }
++ break;
++ }
++
++ // we only want to write back to backingArray if we really need to
++
++ int lastIndex = i; // this is where new elements are shifted to
++
++ for (; i < len; ++i) {
++ T curr = backingArray[i];
++ T curr = backingArray[i];
++ if (!filter.test(curr)) { // note: if test throws here, the set is left in an inconsistent state
++ backingArray[lastIndex++] = curr;
++ }
++ }
++
++ // null out the trailing slots so removed elements can be garbage collected
++ Arrays.fill(backingArray, lastIndex, len, null);
++ this.size = lastIndex;
++ return true;
++ }
++ // Paper end - optimise removeIf
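+
+A small usage illustration of the single-pass compaction above, assuming
+integers with natural ordering; surviving elements are shifted left in place,
+and the backing array is only rewritten once a removal is actually found:
+
+    SortedArraySet<Integer> set = SortedArraySet.create();
+    set.add(1); set.add(2); set.add(3); set.add(4);
+    set.removeIf(v -> v % 2 == 0); // set is now [1, 3]; no iterator was allocated
+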
+
+ public static <T extends Comparable<T>> SortedArraySet<T> create() {
+ return create(10);
+@@ -110,6 +153,31 @@ public class SortedArraySet<T> extends AbstractSet<T> {
+ }
+ }
+
++ // Paper start - rewrite chunk system
++ public T replace(T object) {
++ int i = this.findIndex(object);
++ if (i >= 0) {
++ T old = this.contents[i];
++ this.contents[i] = object;
++ return old;
++ } else {
++ this.addInternal(object, getInsertionPosition(i));
++ return object;
++ }
++ }
++
++ public T removeAndGet(T object) {
++ int i = this.findIndex(object);
++ if (i >= 0) {
++ final T ret = this.contents[i];
++ this.removeInternal(i);
++ return ret;
++ } else {
++ return null;
++ }
++ }
++ // Paper end - rewrite chunk system
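+
+Semantics note for the helpers above: replace stores the given element and
+returns the displaced equal element, or the argument itself when nothing equal
+was present; removeAndGet returns the removed element, or null when absent. A
+hypothetical merge in the spirit of deduplicating tickets while keeping the
+longer lifetime (not the patch's actual merge code):
+
+    Ticket<Long> previous = tickets.replace(incoming); // displaced equal ticket, or incoming on fresh insert
+    if (previous != incoming) {
+        incoming.removeDelay = Math.max(incoming.removeDelay, previous.removeDelay); // keep the longer delay
+    }
+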
++
+ @Override
+ public boolean remove(Object object) {
+ int i = this.findIndex((T)object);
+diff --git a/src/main/java/net/minecraft/util/worldupdate/WorldUpgrader.java b/src/main/java/net/minecraft/util/worldupdate/WorldUpgrader.java
+index 7984f17cd9c4cef8100909b6c33b3144c8096fcf..639f72618a7c22fa94effa9d0406b97fffc64cb5 100644
+--- a/src/main/java/net/minecraft/util/worldupdate/WorldUpgrader.java
++++ b/src/main/java/net/minecraft/util/worldupdate/WorldUpgrader.java
+@@ -227,7 +227,13 @@ public class WorldUpgrader {
+ this.previousWriteFuture.join();
+ }
+
++ // Paper start - async chunk io
++ try {
+ this.previousWriteFuture = storage.write(chunkPos, nbttagcompound1);
++ } catch (final IOException e) {
++ com.destroystokyo.paper.util.SneakyThrow.sneaky(e);
++ }
++ // Paper end - async chunk io
+ return true;
+ }
+ }
+diff --git a/src/main/java/net/minecraft/world/entity/Entity.java b/src/main/java/net/minecraft/world/entity/Entity.java
+index 3c1bcf8d1a07b35a8688160c9f05f792451338a3..03840f520624662d4ce3ac9f3065a01c71b5f299 100644
+--- a/src/main/java/net/minecraft/world/entity/Entity.java
++++ b/src/main/java/net/minecraft/world/entity/Entity.java
+@@ -482,6 +482,58 @@ public abstract class Entity implements SyncedDataHolder, Nameable, EntityAccess
+ }
+ // Paper end
+
++ // Paper start
++ /**
++ * Overriding this field will cause memory leaks.
++ */
++ private final boolean hardCollides;
++
++ private static final java.util.Map<Class<? extends Entity>, Boolean> cachedOverrides = java.util.Collections.synchronizedMap(new java.util.WeakHashMap<>());
++ {
++ /* // Disabled: this reflection-based override detection breaks under obfuscation remapping (reobf)
++ Boolean hardCollides = cachedOverrides.get(this.getClass());
++ if (hardCollides == null) {
++ try {
++ java.lang.reflect.Method getHardCollisionBoxEntityMethod = Entity.class.getMethod("canCollideWith", Entity.class);
++ java.lang.reflect.Method hasHardCollisionBoxMethod = Entity.class.getMethod("canBeCollidedWith");
++ if (!this.getClass().getMethod(hasHardCollisionBoxMethod.getName(), hasHardCollisionBoxMethod.getParameterTypes()).equals(hasHardCollisionBoxMethod)
++ || !this.getClass().getMethod(getHardCollisionBoxEntityMethod.getName(), getHardCollisionBoxEntityMethod.getParameterTypes()).equals(getHardCollisionBoxEntityMethod)) {
++ hardCollides = Boolean.TRUE;
++ } else {
++ hardCollides = Boolean.FALSE;
++ }
++ cachedOverrides.put(this.getClass(), hardCollides);
++ }
++ catch (ThreadDeath thr) { throw thr; }
++ catch (Throwable thr) {
++ // shouldn't happen, just explode
++ throw new RuntimeException(thr);
++ }
++ } */
++ this.hardCollides = this instanceof Boat
++ || this instanceof net.minecraft.world.entity.monster.Shulker
++ || this instanceof net.minecraft.world.entity.vehicle.AbstractMinecart
++ || this.shouldHardCollide();
++ }
++
++ // plugins can override
++ protected boolean shouldHardCollide() {
++ return false;
++ }
++
++ public final boolean hardCollides() {
++ return this.hardCollides;
++ }
++
++ public net.minecraft.server.level.FullChunkStatus chunkStatus;
++
++ public int sectionX = Integer.MIN_VALUE;
++ public int sectionY = Integer.MIN_VALUE;
++ public int sectionZ = Integer.MIN_VALUE;
++
++ public boolean updatingSectionStatus = false;
++ // Paper end
++
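+
+A minimal sketch of the extension point above, assuming a hypothetical entity
+subclass (BargeEntity is illustrative; abstract Entity members are omitted for
+brevity). Overriding shouldHardCollide() feeds the hardCollides field exactly
+once at construction, which is what lets hardCollides() stay a cheap final
+field read instead of the reflection probe in the disabled block above:
+
+    public class BargeEntity extends Entity { // abstract members omitted
+        public BargeEntity(EntityType<?> type, Level world) {
+            super(type, world);
+        }
+
+        @Override
+        protected boolean shouldHardCollide() {
+            return true; // treated like Boat/Shulker/AbstractMinecart by collision queries
+        }
+    }
+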
+ public Entity(EntityType<?> type, Level world) {
+ this.id = Entity.ENTITY_COUNTER.incrementAndGet();
+ this.passengers = ImmutableList.of();
+@@ -2608,11 +2660,11 @@ public abstract class Entity implements SyncedDataHolder, Nameable, EntityAccess
+ return InteractionResult.PASS;
+ }
+
+- public boolean canCollideWith(Entity other) {
++ public boolean canCollideWith(Entity other) { // Paper - diff on change, hard colliding entities override this - TODO CHECK ON UPDATE - AbstractMinecart/Boat override
+ return other.canBeCollidedWith() && !this.isPassengerOfSameVehicle(other);
+ }
+
+- public boolean canBeCollidedWith() {
++ public boolean canBeCollidedWith() { // Paper - diff on change, hard colliding entities override this TODO CHECK ON UPDATE - Boat/Shulker override
+ return false;
+ }
+
+@@ -4037,6 +4089,13 @@ public abstract class Entity implements SyncedDataHolder, Nameable, EntityAccess
+ }).count();
+ }
+
++ // Paper start - rewrite chunk system
++ public boolean hasAnyPlayerPassengers() {
++ // copied from below
++ if (this.passengers.isEmpty()) { return false; }
++ return this.getIndirectPassengersStream().anyMatch((entity) -> entity instanceof Player);
++ }
++ // Paper end - rewrite chunk system
+ public boolean hasExactlyOnePlayerPassenger() {
+ if (this.passengers.isEmpty()) { return false; } // Paper - Optimize indirect passenger iteration
+ return this.countPlayerPassengers() == 1;
+@@ -4387,6 +4446,12 @@ public abstract class Entity implements SyncedDataHolder, Nameable, EntityAccess
+ return;
+ }
+ // Paper end - Block invalid positions and bounding box
++ // Paper start - rewrite chunk system
++ if (this.updatingSectionStatus) {
++ LOGGER.error("Refusing to update position for entity {} to position {} since it is processing a section status update", this, new Vec3(x, y, z), new Throwable());
++ return;
++ }
++ // Paper end - rewrite chunk system
+ // Paper start - Fix MC-4
+ if (this instanceof ItemEntity) {
+ if (io.papermc.paper.configuration.GlobalConfiguration.get().misc.fixEntityPositionDesync) {
+@@ -4514,6 +4579,13 @@ public abstract class Entity implements SyncedDataHolder, Nameable, EntityAccess
+
+ @Override
+ public final void setRemoved(Entity.RemovalReason entity_removalreason, EntityRemoveEvent.Cause cause) {
++ // Paper start - rewrite chunk system
++ io.papermc.paper.util.TickThread.ensureTickThread(this, "Cannot remove entity off-main");
++ if (!((ServerLevel)this.level).getEntityLookup().canRemoveEntity(this)) {
++ LOGGER.warn("Entity " + this + " is currently prevented from being removed from the world since it is processing section status updates", new Throwable());
++ return;
++ }
++ // Paper end - rewrite chunk system
+ CraftEventFactory.callEntityRemoveEvent(this, cause);
+ // CraftBukkit end
+ final boolean alreadyRemoved = this.removalReason != null; // Paper - Folia schedulers
+@@ -4525,7 +4597,7 @@ public abstract class Entity implements SyncedDataHolder, Nameable, EntityAccess
+ this.stopRiding();
+ }
+
+- this.getPassengers().forEach(Entity::stopRiding);
++ if (entity_removalreason != RemovalReason.UNLOADED_TO_CHUNK) this.getPassengers().forEach(Entity::stopRiding); // Paper - chunk system - don't adjust passenger state when unloading, it's just not safe (and messes with our logic in entity chunk unload)
+ this.levelCallback.onRemove(entity_removalreason);
+ // Paper start - Folia schedulers
+ if (!(this instanceof ServerPlayer) && entity_removalreason != RemovalReason.CHANGED_DIMENSION && !alreadyRemoved) {
+@@ -4556,7 +4628,7 @@ public abstract class Entity implements SyncedDataHolder, Nameable, EntityAccess
+
+ @Override
+ public boolean shouldBeSaved() {
+- return this.removalReason != null && !this.removalReason.shouldSave() ? false : (this.isPassenger() ? false : !this.isVehicle() || !this.hasExactlyOnePlayerPassenger());
++ return this.removalReason != null && !this.removalReason.shouldSave() ? false : (this.isPassenger() ? false : !this.isVehicle() || !this.hasAnyPlayerPassengers()); // Paper - rewrite chunk system - it should check if the entity has ANY player passengers
+ }
+
+ @Override
+diff --git a/src/main/java/net/minecraft/world/entity/ai/village/poi/PoiManager.java b/src/main/java/net/minecraft/world/entity/ai/village/poi/PoiManager.java
+index 71d8909f35a22256406a2232d21adfd7d94dc3a5..7b52b0507cbda76aee1db954641f397bef51f94d 100644
+--- a/src/main/java/net/minecraft/world/entity/ai/village/poi/PoiManager.java
++++ b/src/main/java/net/minecraft/world/entity/ai/village/poi/PoiManager.java
+@@ -40,20 +40,40 @@ import net.minecraft.world.level.chunk.storage.SimpleRegionStorage;
+ public class PoiManager extends SectionStorage<PoiSection> {
+ public static final int MAX_VILLAGE_DISTANCE = 6;
+ public static final int VILLAGE_SECTION_SIZE = 1;
+- private final PoiManager.DistanceTracker distanceTracker;
+- private final LongSet loadedChunks = new LongOpenHashSet();
++ // Paper start - rewrite chunk system
++ // the vanilla tracker needs to be replaced because it does not support level removes
++ public final net.minecraft.server.level.ServerLevel world;
++ private final io.papermc.paper.util.misc.Delayed26WayDistancePropagator3D villageDistanceTracker = new io.papermc.paper.util.misc.Delayed26WayDistancePropagator3D();
++ static final int POI_DATA_SOURCE = 7;
++ public static int convertBetweenLevels(final int level) {
++ return POI_DATA_SOURCE - level;
++ }
++
++ protected void updateDistanceTracking(long section) {
++ if (this.isVillageCenter(section)) {
++ this.villageDistanceTracker.setSource(section, POI_DATA_SOURCE);
++ } else {
++ this.villageDistanceTracker.removeSource(section);
++ }
++ }
++ // Paper end - rewrite chunk system
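+
+A worked example of the conversion above: the replacement propagator counts
+down from the source value, so a village-center section holds level 7
+(POI_DATA_SOURCE) and a section three sections away holds level 4;
+convertBetweenLevels(4) = 7 - 4 = 3 recovers the vanilla convention (0 at the
+center, growing with distance) that sectionsToVillage's callers expect.
+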
+
+ public PoiManager(
+ RegionStorageInfo storageKey, Path directory, DataFixer dataFixer, boolean dsync, RegistryAccess registryManager, LevelHeightAccessor world
+ ) {
+ super(
++ // Paper start
++ storageKey,
++ directory,
++ dsync,
++ // Paper end
+ new SimpleRegionStorage(storageKey, directory, dataFixer, dsync, DataFixTypes.POI_CHUNK),
+ PoiSection::codec,
+ PoiSection::new,
+ registryManager,
+ world
+ );
+- this.distanceTracker = new PoiManager.DistanceTracker();
++ this.world = (net.minecraft.server.level.ServerLevel)world; // Paper - rewrite chunk system
+ }
+
+ public void add(BlockPos pos, Holder<PoiType> type) {
+@@ -187,8 +207,8 @@ public class PoiManager extends SectionStorage<PoiSection> {
+ }
+
+ public int sectionsToVillage(SectionPos pos) {
+- this.distanceTracker.runAllUpdates();
+- return this.distanceTracker.getLevel(pos.asLong());
++ this.villageDistanceTracker.propagateUpdates(); // Paper - replace distance tracking util
++ return convertBetweenLevels(this.villageDistanceTracker.getLevel(io.papermc.paper.util.CoordinateUtils.getChunkSectionKey(pos))); // Paper - replace distance tracking util
+ }
+
+ boolean isVillageCenter(long pos) {
+@@ -202,20 +222,117 @@ public class PoiManager extends SectionStorage<PoiSection> {
+
+ @Override
+ public void tick(BooleanSupplier shouldKeepTicking) {
+- super.tick(shouldKeepTicking);
+- this.distanceTracker.runAllUpdates();
++ this.villageDistanceTracker.propagateUpdates(); // Paper - rewrite chunk system
+ }
+
+ @Override
+- protected void setDirty(long pos) {
+- super.setDirty(pos);
+- this.distanceTracker.update(pos, this.distanceTracker.getLevelFromSource(pos), false);
++ public void setDirty(long pos) {
++ // Paper start - rewrite chunk system
++ int chunkX = io.papermc.paper.util.CoordinateUtils.getChunkSectionX(pos);
++ int chunkZ = io.papermc.paper.util.CoordinateUtils.getChunkSectionZ(pos);
++ io.papermc.paper.chunk.system.scheduling.ChunkHolderManager manager = this.world.chunkTaskScheduler.chunkHolderManager;
++ io.papermc.paper.chunk.system.poi.PoiChunk chunk = manager.getPoiChunkIfLoaded(chunkX, chunkZ, false);
++ if (chunk != null) {
++ chunk.setDirty(true);
++ }
++ this.updateDistanceTracking(pos);
++ // Paper end - rewrite chunk system
+ }
+
+ @Override
+ protected void onSectionLoad(long pos) {
+- this.distanceTracker.update(pos, this.distanceTracker.getLevelFromSource(pos), false);
++ this.updateDistanceTracking(pos); // Paper - move to new distance tracking util
++ }
++
++ // Paper start - rewrite chunk system
++ @Override
++ public Optional<PoiSection> get(long pos) {
++ int chunkX = io.papermc.paper.util.CoordinateUtils.getChunkSectionX(pos);
++ int chunkY = io.papermc.paper.util.CoordinateUtils.getChunkSectionY(pos);
++ int chunkZ = io.papermc.paper.util.CoordinateUtils.getChunkSectionZ(pos);
++
++ io.papermc.paper.util.TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Accessing poi chunk off-main");
++
++ io.papermc.paper.chunk.system.scheduling.ChunkHolderManager manager = this.world.chunkTaskScheduler.chunkHolderManager;
++ io.papermc.paper.chunk.system.poi.PoiChunk ret = manager.getPoiChunkIfLoaded(chunkX, chunkZ, true);
++
++ return ret == null ? Optional.empty() : ret.getSectionForVanilla(chunkY);
++ }
++
++ @Override
++ public Optional<PoiSection> getOrLoad(long pos) {
++ int chunkX = io.papermc.paper.util.CoordinateUtils.getChunkSectionX(pos);
++ int chunkY = io.papermc.paper.util.CoordinateUtils.getChunkSectionY(pos);
++ int chunkZ = io.papermc.paper.util.CoordinateUtils.getChunkSectionZ(pos);
++
++ io.papermc.paper.util.TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Accessing poi chunk off-main");
++
++ io.papermc.paper.chunk.system.scheduling.ChunkHolderManager manager = this.world.chunkTaskScheduler.chunkHolderManager;
++
++ if (chunkY >= io.papermc.paper.util.WorldUtil.getMinSection(this.world) &&
++ chunkY <= io.papermc.paper.util.WorldUtil.getMaxSection(this.world)) {
++ io.papermc.paper.chunk.system.poi.PoiChunk ret = manager.getPoiChunkIfLoaded(chunkX, chunkZ, true);
++ if (ret != null) {
++ return ret.getSectionForVanilla(chunkY);
++ } else {
++ return manager.loadPoiChunk(chunkX, chunkZ).getSectionForVanilla(chunkY);
++ }
++ }
++ // retain vanilla behavior: do not load section if out of bounds!
++ return Optional.empty();
++ }
++
++ @Override
++ protected PoiSection getOrCreate(long pos) {
++ int chunkX = io.papermc.paper.util.CoordinateUtils.getChunkSectionX(pos);
++ int chunkY = io.papermc.paper.util.CoordinateUtils.getChunkSectionY(pos);
++ int chunkZ = io.papermc.paper.util.CoordinateUtils.getChunkSectionZ(pos);
++
++ io.papermc.paper.util.TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Accessing poi chunk off-main");
++
++ io.papermc.paper.chunk.system.scheduling.ChunkHolderManager manager = this.world.chunkTaskScheduler.chunkHolderManager;
++
++ io.papermc.paper.chunk.system.poi.PoiChunk ret = manager.getPoiChunkIfLoaded(chunkX, chunkZ, true);
++ if (ret != null) {
++ return ret.getOrCreateSection(chunkY);
++ } else {
++ return manager.loadPoiChunk(chunkX, chunkZ).getOrCreateSection(chunkY);
++ }
++ }
++
++ public void onUnload(long coordinate) { // Paper - rewrite chunk system
++ int chunkX = io.papermc.paper.util.MCUtil.getCoordinateX(coordinate);
++ int chunkZ = io.papermc.paper.util.MCUtil.getCoordinateZ(coordinate);
++ io.papermc.paper.util.TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Unloading poi chunk off-main");
++ for (int section = this.levelHeightAccessor.getMinSection(); section < this.levelHeightAccessor.getMaxSection(); ++section) {
++ long sectionPos = SectionPos.asLong(chunkX, section, chunkZ);
++ this.updateDistanceTracking(sectionPos);
++ }
++ }
++
++ public void loadInPoiChunk(io.papermc.paper.chunk.system.poi.PoiChunk poiChunk) {
++ int chunkX = poiChunk.chunkX;
++ int chunkZ = poiChunk.chunkZ;
++ io.papermc.paper.util.TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Loading poi chunk off-main");
++ for (int sectionY = this.levelHeightAccessor.getMinSection(); sectionY < this.levelHeightAccessor.getMaxSection(); ++sectionY) {
++ PoiSection section = poiChunk.getSection(sectionY);
++ if (section != null && !section.isEmpty()) {
++ this.onSectionLoad(SectionPos.asLong(chunkX, sectionY, chunkZ));
++ }
++ }
++ }
++
++ public void checkConsistency(net.minecraft.world.level.chunk.ChunkAccess chunk) {
++ int chunkX = chunk.getPos().x;
++ int chunkZ = chunk.getPos().z;
++ int minY = io.papermc.paper.util.WorldUtil.getMinSection(chunk);
++ int maxY = io.papermc.paper.util.WorldUtil.getMaxSection(chunk);
++ LevelChunkSection[] sections = chunk.getSections();
++ for (int section = minY; section <= maxY; ++section) {
++ this.checkConsistencyWithBlocks(SectionPos.of(chunkX, section, chunkZ), sections[section - minY]);
++ }
+ }
++ // Paper end - rewrite chunk system
+
+ public void checkConsistencyWithBlocks(SectionPos sectionPos, LevelChunkSection chunkSection) {
+ Util.ifElse(this.getOrLoad(sectionPos.asLong()), poiSet -> poiSet.refresh(populator -> {
+@@ -251,7 +368,7 @@ public class PoiManager extends SectionStorage<PoiSection> {
+ .map(sectionPos -> Pair.of(sectionPos, this.getOrLoad(sectionPos.asLong())))
+ .filter(pair -> !pair.getSecond().map(PoiSection::isValid).orElse(false))
+ .map(pair -> pair.getFirst().chunk())
+- .filter(chunkPos -> this.loadedChunks.add(chunkPos.toLong()))
++ // Paper - rewrite chunk system
+ .forEach(chunkPos -> world.getChunk(chunkPos.x, chunkPos.z, ChunkStatus.EMPTY));
+ }
+
+@@ -265,7 +382,7 @@ public class PoiManager extends SectionStorage<PoiSection> {
+
+ @Override
+ protected int getLevelFromSource(long id) {
+- return PoiManager.this.isVillageCenter(id) ? 0 : 7;
++ return PoiManager.this.isVillageCenter(id) ? 0 : 7; // Paper - rewrite chunk system - diff on change, this specifies the source level to use for distance tracking
+ }
+
+ @Override
+@@ -287,6 +404,35 @@ public class PoiManager extends SectionStorage<PoiSection> {
+ }
+ }
+
++ // Paper start - Asynchronous chunk io
++ @javax.annotation.Nullable
++ @Override
++ public net.minecraft.nbt.CompoundTag read(ChunkPos chunkcoordintpair) throws java.io.IOException {
++ // Paper start - rewrite chunk system
++ if (!io.papermc.paper.chunk.system.io.RegionFileIOThread.isRegionFileThread()) {
++ return io.papermc.paper.chunk.system.io.RegionFileIOThread.loadData(
++ this.world, chunkcoordintpair.x, chunkcoordintpair.z, io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.POI_DATA,
++ io.papermc.paper.chunk.system.io.RegionFileIOThread.getIOBlockingPriorityForCurrentThread()
++ );
++ }
++ // Paper end - rewrite chunk system
++ return super.read(chunkcoordintpair);
++ }
++
++ @Override
++ public void write(ChunkPos chunkcoordintpair, net.minecraft.nbt.CompoundTag nbttagcompound) throws java.io.IOException {
++ // Paper start - rewrite chunk system
++ if (!io.papermc.paper.chunk.system.io.RegionFileIOThread.isRegionFileThread()) {
++ io.papermc.paper.chunk.system.io.RegionFileIOThread.scheduleSave(
++ this.world, chunkcoordintpair.x, chunkcoordintpair.z, nbttagcompound,
++ io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.POI_DATA);
++ return;
++ }
++ // Paper end - rewrite chunk system
++ super.write(chunkcoordintpair, nbttagcompound);
++ }
++ // Paper end
++
+ public static enum Occupancy {
+ HAS_SPACE(PoiRecord::hasSpace),
+ IS_OCCUPIED(PoiRecord::isOccupied),
+diff --git a/src/main/java/net/minecraft/world/entity/ai/village/poi/PoiSection.java b/src/main/java/net/minecraft/world/entity/ai/village/poi/PoiSection.java
+index 971fb29a2c3dc713cb8ab1d2eed054cc16f9c93c..5b7deae326228e482b218aeebd857a59b7434eaf 100644
+--- a/src/main/java/net/minecraft/world/entity/ai/village/poi/PoiSection.java
++++ b/src/main/java/net/minecraft/world/entity/ai/village/poi/PoiSection.java
+@@ -29,6 +29,7 @@ public class PoiSection {
+ private final Map<Holder<PoiType>, Set<PoiRecord>> byType = Maps.newHashMap();
+ private final Runnable setDirty;
+ private boolean isValid;
++ public final Optional<PoiSection> noAllocateOptional = Optional.of(this); // Paper - rewrite chunk system
+
+ public static Codec<PoiSection> codec(Runnable updateListener) {
+ return RecordCodecBuilder.<PoiSection>create(
+@@ -46,6 +47,12 @@ public class PoiSection {
+ this(updateListener, true, ImmutableList.of());
+ }
+
++ // Paper start - isEmpty
++ public boolean isEmpty() {
++ return this.isValid && this.records.isEmpty() && this.byType.isEmpty();
++ }
++ // Paper end
++
+ private PoiSection(Runnable updateListener, boolean valid, List<PoiRecord> pois) {
+ this.setDirty = updateListener;
+ this.isValid = valid;
+diff --git a/src/main/java/net/minecraft/world/level/EntityGetter.java b/src/main/java/net/minecraft/world/level/EntityGetter.java
+index bd20bea7f76a7307f1698fb2dfef37125032d166..9a28912f52824acdc80a62243b136e6f365bf567 100644
+--- a/src/main/java/net/minecraft/world/level/EntityGetter.java
++++ b/src/main/java/net/minecraft/world/level/EntityGetter.java
+@@ -19,6 +19,18 @@ import net.minecraft.world.phys.shapes.Shapes;
+ import net.minecraft.world.phys.shapes.VoxelShape;
+
+ public interface EntityGetter {
++
++ // Paper start
++ List<Entity> getHardCollidingEntities(Entity except, AABB box, Predicate<? super Entity> predicate);
++
++ void getEntities(Entity except, AABB box, Predicate<? super Entity> predicate, List<Entity> into);
++
++ void getHardCollidingEntities(Entity except, AABB box, Predicate<? super Entity> predicate, List<Entity> into);
++
++ <T> void getEntitiesByClass(Class<? extends T> clazz, Entity except, final AABB box, List<? super T> into,
++ Predicate<? super T> predicate);
++ // Paper end
++
+ List<Entity> getEntities(@Nullable Entity except, AABB box, Predicate<? super Entity> predicate);
+
+ <T extends Entity> List<T> getEntities(EntityTypeTest<Entity, T> filter, AABB box, Predicate<? super T> predicate);
+diff --git a/src/main/java/net/minecraft/world/level/Level.java b/src/main/java/net/minecraft/world/level/Level.java
+index 975fcd4b8f93cb8c602ddeb165c485214eac10a4..d3137c9e5cc42ef191ea233b0d37eafeffc6f82c 100644
+--- a/src/main/java/net/minecraft/world/level/Level.java
++++ b/src/main/java/net/minecraft/world/level/Level.java
+@@ -547,6 +547,11 @@ public abstract class Level implements LevelAccessor, AutoCloseable {
+
+ if ((i & 2) != 0 && (!this.isClientSide || (i & 4) == 0) && (this.isClientSide || chunk == null || (chunk.getFullStatus() != null && chunk.getFullStatus().isOrAfter(FullChunkStatus.BLOCK_TICKING)))) { // allow chunk to be null here as chunk.isReady() is false when we send our notification during block placement
+ this.sendBlockUpdated(blockposition, iblockdata1, iblockdata, i);
++ // Paper start - per player view distance - allow block updates for non-ticking chunks in player view distance
++ // the condition below is copied from the branch above
++ } else if ((i & 2) != 0 && (!this.isClientSide || (i & 4) == 0)) { // Paper - replace old player chunk management
++ ((ServerLevel)this).getChunkSource().blockChanged(blockposition);
++ // Paper end - per player view distance
+ }
+
+ if ((i & 1) != 0) {
+@@ -941,7 +946,7 @@ public abstract class Level implements LevelAccessor, AutoCloseable {
+ }
+ // Paper end - Perf: Optimize capturedTileEntities lookup
+ // CraftBukkit end
+- return this.isOutsideBuildHeight(blockposition) ? null : (!this.isClientSide && Thread.currentThread() != this.thread ? null : this.getChunkAt(blockposition).getBlockEntity(blockposition, LevelChunk.EntityCreationType.IMMEDIATE));
++ return this.isOutsideBuildHeight(blockposition) ? null : (!this.isClientSide && !io.papermc.paper.util.TickThread.isTickThread() ? null : this.getChunkAt(blockposition).getBlockEntity(blockposition, LevelChunk.EntityCreationType.IMMEDIATE)); // Paper - rewrite chunk system
+ }
+
+ public void setBlockEntity(BlockEntity blockEntity) {
+@@ -1032,26 +1037,7 @@ public abstract class Level implements LevelAccessor, AutoCloseable {
+ public List<Entity> getEntities(@Nullable Entity except, AABB box, Predicate<? super Entity> predicate) {
+ this.getProfiler().incrementCounter("getEntities");
+ List<Entity> list = Lists.newArrayList();
+-
+- this.getEntities().get(box, (entity1) -> {
+- if (entity1 != except && predicate.test(entity1)) {
+- list.add(entity1);
+- }
+-
+- if (entity1 instanceof EnderDragon) {
+- EnderDragonPart[] aentitycomplexpart = ((EnderDragon) entity1).getSubEntities();
+- int i = aentitycomplexpart.length;
+-
+- for (int j = 0; j < i; ++j) {
+- EnderDragonPart entitycomplexpart = aentitycomplexpart[j];
+-
+- if (entity1 != except && predicate.test(entitycomplexpart)) {
+- list.add(entitycomplexpart);
+- }
+- }
+- }
+-
+- });
++ ((ServerLevel)this).getEntityLookup().getEntities(except, box, list, predicate); // Paper - optimise this call
+ return list;
+ }
+
+@@ -1069,33 +1055,23 @@ public abstract class Level implements LevelAccessor, AutoCloseable {
+
+ public <T extends Entity> void getEntities(EntityTypeTest<Entity, T> filter, AABB box, Predicate<? super T> predicate, List<? super T> result, int limit) {
+ this.getProfiler().incrementCounter("getEntities");
+- this.getEntities().get(filter, box, (entity) -> {
+- if (predicate.test(entity)) {
+- result.add(entity);
+- if (result.size() >= limit) {
+- return AbortableIterationConsumer.Continuation.ABORT;
+- }
+- }
+-
+- if (entity instanceof EnderDragon entityenderdragon) {
+- EnderDragonPart[] aentitycomplexpart = entityenderdragon.getSubEntities();
+- int j = aentitycomplexpart.length;
+-
+- for (int k = 0; k < j; ++k) {
+- EnderDragonPart entitycomplexpart = aentitycomplexpart[k];
+- T t0 = filter.tryCast(entitycomplexpart); // CraftBukkit - decompile error
+-
+- if (t0 != null && predicate.test(t0)) {
+- result.add(t0);
+- if (result.size() >= limit) {
+- return AbortableIterationConsumer.Continuation.ABORT;
+- }
+- }
+- }
++ // Paper start - optimise this call
++ //TODO use limit
++ if (filter instanceof net.minecraft.world.entity.EntityType entityTypeTest) {
++ ((ServerLevel) this).getEntityLookup().getEntities(entityTypeTest, box, result, predicate);
++ } else {
++ Predicate<? super T> test = (obj) -> {
++ return filter.tryCast(obj) != null;
++ };
++ predicate = predicate == null ? test : test.and((Predicate) predicate);
++ Class base;
++ if (filter == null || (base = filter.getBaseClass()) == null || base == Entity.class) {
++ ((ServerLevel) this).getEntityLookup().getEntities((Entity) null, box, (List) result, (Predicate)predicate);
++ } else {
++ ((ServerLevel) this).getEntityLookup().getEntities(base, null, box, (List) result, (Predicate)predicate); // Paper - optimise this call
+ }
+-
+- return AbortableIterationConsumer.Continuation.CONTINUE;
+- });
++ }
++ // Paper end - optimise this call
+ }
+
+ @Nullable
+@@ -1385,4 +1361,45 @@ public abstract class Level implements LevelAccessor, AutoCloseable {
+ }
+ }
+ // Paper end - notify observers even if grow failed
++ // Paper start
++ //protected final io.papermc.paper.world.EntitySliceManager entitySliceManager; // Paper - rewrite chunk system
++
++ public org.bukkit.entity.Entity[] getChunkEntities(int chunkX, int chunkZ) {
++ io.papermc.paper.world.ChunkEntitySlices slices = ((ServerLevel)this).getEntityLookup().getChunk(chunkX, chunkZ);
++ if (slices == null) {
++ return new org.bukkit.entity.Entity[0];
++ }
++ return slices.getChunkEntities();
++ }
++
++ @Override
++ public List<Entity> getHardCollidingEntities(Entity except, AABB box, Predicate<? super Entity> predicate) {
++ List<Entity> ret = new java.util.ArrayList<>();
++ ((ServerLevel)this).getEntityLookup().getHardCollidingEntities(except, box, ret, predicate);
++ return ret;
++ }
++
++ @Override
++ public void getEntities(Entity except, AABB box, Predicate<? super Entity> predicate, List<Entity> into) {
++ ((ServerLevel)this).getEntityLookup().getEntities(except, box, into, predicate);
++ }
++
++ @Override
++ public void getHardCollidingEntities(Entity except, AABB box, Predicate<? super Entity> predicate, List<Entity> into) {
++ ((ServerLevel)this).getEntityLookup().getHardCollidingEntities(except, box, into, predicate);
++ }
++
++ @Override
++ public <T> void getEntitiesByClass(Class<? extends T> clazz, Entity except, final AABB box, List<? super T> into,
++ Predicate<? super T> predicate) {
++ ((ServerLevel)this).getEntityLookup().getEntities((Class)clazz, except, box, (List)into, (Predicate)predicate);
++ }
++
++ @Override
++ public <T extends Entity> List<T> getEntitiesOfClass(Class<T> entityClass, AABB box, Predicate<? super T> predicate) {
++ List<T> ret = new java.util.ArrayList<>();
++ ((ServerLevel)this).getEntityLookup().getEntities(entityClass, null, box, ret, predicate);
++ return ret;
++ }
++ // Paper end
+ }
+diff --git a/src/main/java/net/minecraft/world/level/LevelReader.java b/src/main/java/net/minecraft/world/level/LevelReader.java
+index a0ae26d6197e1069ca09982b4f8b706c55ae8491..32bfeb9aa87b43a9d2ce46dcc99dbd0ff355b412 100644
+--- a/src/main/java/net/minecraft/world/level/LevelReader.java
++++ b/src/main/java/net/minecraft/world/level/LevelReader.java
+@@ -26,6 +26,15 @@ public interface LevelReader extends BlockAndTintGetter, CollisionGetter, Signal
+ @Nullable
+ ChunkAccess getChunk(int chunkX, int chunkZ, ChunkStatus leastStatus, boolean create);
+
++ // Paper start - rewrite chunk system
++ default ChunkAccess syncLoadNonFull(int chunkX, int chunkZ, ChunkStatus status) {
++ if (status == null || status.isOrAfter(ChunkStatus.FULL)) {
++ throw new IllegalArgumentException("Status: " + status.toString());
++ }
++ return this.getChunk(chunkX, chunkZ, status, true);
++ }
++ // Paper end - rewrite chunk system
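+
+A usage note on the guard above: FULL and later statuses must go through the
+normal full-chunk scheduling path, so syncLoadNonFull rejects them up front;
+the structure-locating change in ChunkGenerator further below calls it with
+ChunkStatus.STRUCTURE_STARTS, which stays safely below FULL.
+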
++
+ @Nullable ChunkAccess getChunkIfLoadedImmediately(int x, int z); // Paper - ifLoaded api (we need this since current impl blocks if the chunk is loading)
+ @Nullable default ChunkAccess getChunkIfLoadedImmediately(BlockPos pos) { return this.getChunkIfLoadedImmediately(pos.getX() >> 4, pos.getZ() >> 4);}
+
+diff --git a/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java b/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
+index c9cd18ce79a6ee7297a8fd14f4dbe712570b3ced..927bdebdb8ae01613f0cea074b3367bd7ffe9ab1 100644
+--- a/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
++++ b/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
+@@ -120,7 +120,7 @@ public abstract class ChunkGenerator {
+ return CompletableFuture.supplyAsync(Util.wrapThreadWithTaskName("init_biomes", () -> {
+ chunk.fillBiomesFromNoise(this.biomeSource, noiseConfig.sampler());
+ return chunk;
+- }), Util.backgroundExecutor());
++ }), executor); // Paper - run with supplied executor
+ }
+
+ public abstract void applyCarvers(WorldGenRegion chunkRegion, long seed, RandomState noiseConfig, BiomeManager biomeAccess, StructureManager structureAccessor, ChunkAccess chunk, GenerationStep.Carving carverStep);
+@@ -315,7 +315,7 @@ public abstract class ChunkGenerator {
+ return Pair.of(placement.getLocatePos(pos), holder);
+ }
+
+- ChunkAccess ichunkaccess = world.getChunk(pos.x, pos.z, ChunkStatus.STRUCTURE_STARTS);
++ ChunkAccess ichunkaccess = world.syncLoadNonFull(pos.x, pos.z, ChunkStatus.STRUCTURE_STARTS); // Paper - rewrite chunk system
+
+ structurestart = structureAccessor.getStartForStructure(SectionPos.bottomOf(ichunkaccess), (Structure) holder.value(), ichunkaccess);
+ } while (structurestart == null);
+diff --git a/src/main/java/net/minecraft/world/level/chunk/LevelChunk.java b/src/main/java/net/minecraft/world/level/chunk/LevelChunk.java
+index bac191f92ea3735df19c68d5568c2c7962c8680f..5d94aee1303d9eca5f1fa9a2e033ad0d12909635 100644
+--- a/src/main/java/net/minecraft/world/level/chunk/LevelChunk.java
++++ b/src/main/java/net/minecraft/world/level/chunk/LevelChunk.java
+@@ -86,6 +86,7 @@ public class LevelChunk extends ChunkAccess {
+ private final Int2ObjectMap<GameEventListenerRegistry> gameEventListenerRegistrySections;
+ private final LevelChunkTicks<Block> blockTicks;
+ private final LevelChunkTicks<Fluid> fluidTicks;
++ public volatile FullChunkStatus chunkStatus = FullChunkStatus.INACCESSIBLE; // Paper - rewrite chunk system
+
+ public LevelChunk(Level world, ChunkPos pos) {
+ this(world, pos, UpgradeData.EMPTY, new LevelChunkTicks<>(), new LevelChunkTicks<>(), 0L, (LevelChunkSection[]) null, (LevelChunk.PostLoadProcessor) null, (BlendingData) null);
+@@ -690,9 +691,26 @@ public class LevelChunk extends ChunkAccess {
+
+ }
+
+- // CraftBukkit start
+- public void loadCallback() {
+- // Paper start - neighbour cache
++ // Paper start - new load callbacks
++ private io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder;
++ public io.papermc.paper.chunk.system.scheduling.NewChunkHolder getChunkHolder() {
++ return this.chunkHolder;
++ }
++
++ public void setChunkHolder(io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder) {
++ if (chunkHolder == null) {
++ throw new NullPointerException("Chunkholder cannot be null");
++ }
++ if (this.chunkHolder != null) {
++ throw new IllegalStateException("Already have chunkholder: " + this.chunkHolder + ", cannot replace with " + chunkHolder);
++ }
++ this.chunkHolder = chunkHolder;
++ this.playerChunk = chunkHolder.vanillaChunkHolder;
++ }
++
++ /* Note: We skip the light neighbour chunk loading done for the vanilla full chunk */
++ /* Starlight does not need these chunks for lighting purposes because of edge checks */
++ public void pushChunkIntoLoadedMap() {
+ int chunkX = this.chunkPos.x;
+ int chunkZ = this.chunkPos.z;
+ net.minecraft.server.level.ServerChunkCache chunkProvider = this.level.getChunkSource();
+@@ -707,10 +725,55 @@ public class LevelChunk extends ChunkAccess {
+ }
+ }
+ this.setNeighbourLoaded(0, 0, this);
++ this.level.getChunkSource().addLoadedChunk(this);
++ }
++
++ public void onChunkLoad(io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder) {
++ // figure out how this should interface with:
++ // the entity chunk load event // -> moved to the FULL status
++ // the chunk load event // -> stays here
++ // any entity add to world events // -> in FULL status
++ this.loadCallback();
++ io.papermc.paper.chunk.system.ChunkSystem.onChunkBorder(this, chunkHolder.vanillaChunkHolder);
++ }
++
++ public void onChunkUnload(io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder) {
++ // figure out how this should interface with:
++ // the entity chunk load event // -> moved to chunk unload to disk (not written yet)
++ // the chunk load event // -> stays here
++ // any entity add to world events // -> goes into the unload logic, it will completely explode
++ // etc later
++ this.unloadCallback();
++ io.papermc.paper.chunk.system.ChunkSystem.onChunkNotBorder(this, chunkHolder.vanillaChunkHolder);
++ }
++
++ public void onChunkTicking(io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder) {
++ this.postProcessGeneration();
++ this.level.startTickingChunk(this);
++ io.papermc.paper.chunk.system.ChunkSystem.onChunkTicking(this, chunkHolder.vanillaChunkHolder);
++ }
++
++ public void onChunkNotTicking(io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder) {
++ io.papermc.paper.chunk.system.ChunkSystem.onChunkNotTicking(this, chunkHolder.vanillaChunkHolder);
++ }
++
++ public void onChunkEntityTicking(io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder) {
++ io.papermc.paper.chunk.system.ChunkSystem.onChunkEntityTicking(this, chunkHolder.vanillaChunkHolder);
++ }
++
++ public void onChunkNotEntityTicking(io.papermc.paper.chunk.system.scheduling.NewChunkHolder chunkHolder) {
++ io.papermc.paper.chunk.system.ChunkSystem.onChunkNotEntityTicking(this, chunkHolder.vanillaChunkHolder);
++ }
++ // Paper end - new load callbacks
++
++ // CraftBukkit start
++ public void loadCallback() {
++ if (this.loadedTicketLevel) { LOGGER.error("Chunk load callback invoked twice!", new Throwable()); } // Paper
++ // Paper - rewrite chunk system - move into separate callback
+ this.loadedTicketLevel = true;
+- // Paper end - neighbour cache
++ // Paper - rewrite chunk system - move into separate callback
+ org.bukkit.Server server = this.level.getCraftServer();
+- this.level.getChunkSource().addLoadedChunk(this); // Paper
++ // Paper - rewrite chunk system - move into separate callback
+ if (server != null) {
+ /*
+ * If it's a new world, the first few chunks are generated inside
+@@ -719,6 +782,7 @@ public class LevelChunk extends ChunkAccess {
+ */
+ org.bukkit.Chunk bukkitChunk = new org.bukkit.craftbukkit.CraftChunk(this);
+ server.getPluginManager().callEvent(new org.bukkit.event.world.ChunkLoadEvent(bukkitChunk, this.needsDecoration));
++ this.chunkHolder.getEntityChunk().callEntitiesLoadEvent(); // Paper - rewrite chunk system
+
+ if (this.needsDecoration) {
+ try (co.aikar.timings.Timing ignored = this.level.timings.chunkLoadPopulate.startTiming()) { // Paper
+@@ -747,9 +811,11 @@ public class LevelChunk extends ChunkAccess {
+ }
+
+ public void unloadCallback() {
++ if (!this.loadedTicketLevel) { LOGGER.error("Chunk unload callback invoked twice!", new Throwable()); } // Paper
+ org.bukkit.Server server = this.level.getCraftServer();
++ this.chunkHolder.getEntityChunk().callEntitiesUnloadEvent(); // Paper - rewrite chunk system
+ org.bukkit.Chunk bukkitChunk = new org.bukkit.craftbukkit.CraftChunk(this);
+- org.bukkit.event.world.ChunkUnloadEvent unloadEvent = new org.bukkit.event.world.ChunkUnloadEvent(bukkitChunk, this.isUnsaved());
++ org.bukkit.event.world.ChunkUnloadEvent unloadEvent = new org.bukkit.event.world.ChunkUnloadEvent(bukkitChunk, true); // Paper - rewrite chunk system - force save to true so that mustNotSave is correctly set below
+ server.getPluginManager().callEvent(unloadEvent);
+ // note: saving can be prevented, but not forced if no saving is actually required
+ this.mustNotSave = !unloadEvent.isSaveChunk();
+@@ -771,9 +837,26 @@ public class LevelChunk extends ChunkAccess {
+ // Paper end
+ }
+
++ // Paper start - add dirty system to tick lists
++ @Override
++ public void setUnsaved(boolean needsSaving) {
++ if (!needsSaving) {
++ this.blockTicks.clearDirty();
++ this.fluidTicks.clearDirty();
++ }
++ super.setUnsaved(needsSaving);
++ }
++ // Paper end - add dirty system to tick lists
++
+ @Override
+ public boolean isUnsaved() {
+- return super.isUnsaved() && !this.mustNotSave;
++ // Paper start - add dirty system to tick lists
++ long gameTime = this.level.getLevelData().getGameTime();
++ if (this.blockTicks.isDirty(gameTime) || this.fluidTicks.isDirty(gameTime)) {
++ return true;
++ }
++ // Paper end - add dirty system to tick lists
++ return super.isUnsaved(); // Paper - rewrite chunk system - do NOT clobber the dirty flag
+ }
+ // CraftBukkit end
+
+@@ -842,7 +925,9 @@ public class LevelChunk extends ChunkAccess {
+ return this.blockEntities;
+ }
+
++ public boolean isPostProcessingDone; // Paper - replace chunk loader system
+ public void postProcessGeneration() {
++ try { // Paper - replace chunk loader system
+ ChunkPos chunkcoordintpair = this.getPos();
+
+ for (int i = 0; i < this.postProcessing.length; ++i) {
+@@ -863,6 +948,7 @@ public class LevelChunk extends ChunkAccess {
+ BlockState iblockdata1 = Block.updateFromNeighbourShapes(iblockdata, this.level, blockposition);
+
+ this.level.setBlock(blockposition, iblockdata1, 20);
++ if (iblockdata1 != iblockdata) this.level.chunkSource.blockChanged(blockposition); // Paper - replace player chunk loader - notify since we send before processing full updates
+ }
+ }
+
+@@ -880,6 +966,10 @@ public class LevelChunk extends ChunkAccess {
+
+ this.pendingBlockEntities.clear();
+ this.upgradeData.upgrade(this);
++ } finally { // Paper start - replace chunk loader system
++ this.isPostProcessingDone = true;
++ }
++ // Paper end - replace chunk loader system
+ }
+
+ @Nullable
+@@ -929,7 +1019,7 @@ public class LevelChunk extends ChunkAccess {
+ }
+
+ public FullChunkStatus getFullStatus() {
+- return this.fullStatus == null ? FullChunkStatus.FULL : (FullChunkStatus) this.fullStatus.get();
++ return this.chunkHolder == null ? FullChunkStatus.INACCESSIBLE : this.chunkHolder.getChunkStatus(); // Paper - rewrite chunk system
+ }
+
+ public void setFullStatus(Supplier<FullChunkStatus> levelTypeProvider) {
+diff --git a/src/main/java/net/minecraft/world/level/chunk/status/ChunkStatus.java b/src/main/java/net/minecraft/world/level/chunk/status/ChunkStatus.java
+index 95318092f8281d98132d1d3ceb4a5c36cf32eb05..b81c548c0e1ac53784e9c94b34b65db5f123309c 100644
+--- a/src/main/java/net/minecraft/world/level/chunk/status/ChunkStatus.java
++++ b/src/main/java/net/minecraft/world/level/chunk/status/ChunkStatus.java
+@@ -21,13 +21,15 @@ import net.minecraft.world.level.chunk.ProtoChunk;
+ import net.minecraft.world.level.levelgen.Heightmap;
+
+ public class ChunkStatus {
++ static final ChunkStatus.LoadingTask PASSTHROUGH_LOAD_TASK = (WorldGenContext context, ChunkStatus status, ToFullChunk fullChunkConverter, ChunkAccess chunk) -> CompletableFuture.completedFuture(chunk); // Paper - rewrite chunk system
++ protected static final java.util.List<ChunkStatus> statuses = new java.util.ArrayList<>(); // Paper - rewrite chunk system
+ public static final int MAX_STRUCTURE_DISTANCE = 8;
+ private static final EnumSet<Heightmap.Types> PRE_FEATURES = EnumSet.of(Heightmap.Types.OCEAN_FLOOR_WG, Heightmap.Types.WORLD_SURFACE_WG);
+ public static final EnumSet<Heightmap.Types> POST_FEATURES = EnumSet.of(
+ Heightmap.Types.OCEAN_FLOOR, Heightmap.Types.WORLD_SURFACE, Heightmap.Types.MOTION_BLOCKING, Heightmap.Types.MOTION_BLOCKING_NO_LEAVES
+ );
+ public static final ChunkStatus EMPTY = register(
+- "empty", null, -1, false, PRE_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateEmpty, ChunkStatusTasks::loadPassThrough
++ "empty", null, -1, false, PRE_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateEmpty, PASSTHROUGH_LOAD_TASK // Paper - rewrite chunk system
+ );
+ public static final ChunkStatus STRUCTURE_STARTS = register(
+ "structure_starts",
+@@ -47,22 +49,22 @@ public class ChunkStatus {
+ PRE_FEATURES,
+ ChunkType.PROTOCHUNK,
+ ChunkStatusTasks::generateStructureReferences,
+- ChunkStatusTasks::loadPassThrough
++ PASSTHROUGH_LOAD_TASK // Paper - rewrite chunk system
+ );
+ public static final ChunkStatus BIOMES = register(
+- "biomes", STRUCTURE_REFERENCES, 8, false, PRE_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateBiomes, ChunkStatusTasks::loadPassThrough
++ "biomes", STRUCTURE_REFERENCES, 8, false, PRE_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateBiomes, PASSTHROUGH_LOAD_TASK // Paper - rewrite chunk system
+ );
+ public static final ChunkStatus NOISE = register(
+- "noise", BIOMES, 8, false, PRE_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateNoise, ChunkStatusTasks::loadPassThrough
++ "noise", BIOMES, 8, false, PRE_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateNoise, PASSTHROUGH_LOAD_TASK // Paper - rewrite chunk system
+ );
+ public static final ChunkStatus SURFACE = register(
+- "surface", NOISE, 8, false, PRE_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateSurface, ChunkStatusTasks::loadPassThrough
++ "surface", NOISE, 8, false, PRE_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateSurface, PASSTHROUGH_LOAD_TASK // Paper - rewrite chunk system
+ );
+ public static final ChunkStatus CARVERS = register(
+- "carvers", SURFACE, 8, false, POST_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateCarvers, ChunkStatusTasks::loadPassThrough
++ "carvers", SURFACE, 8, false, POST_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateCarvers, PASSTHROUGH_LOAD_TASK // Paper - rewrite chunk system
+ );
+ public static final ChunkStatus FEATURES = register(
+- "features", CARVERS, 8, false, POST_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateFeatures, ChunkStatusTasks::loadPassThrough
++ "features", CARVERS, 8, false, POST_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateFeatures, PASSTHROUGH_LOAD_TASK // Paper - rewrite chunk system
+ );
+ public static final ChunkStatus INITIALIZE_LIGHT = register(
+ "initialize_light",
+@@ -78,7 +80,7 @@ public class ChunkStatus {
+ "light", INITIALIZE_LIGHT, 1, true, POST_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateLight, ChunkStatusTasks::loadLight
+ );
+ public static final ChunkStatus SPAWN = register(
+- "spawn", LIGHT, 1, false, POST_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateSpawn, ChunkStatusTasks::loadPassThrough
++ "spawn", LIGHT, 1, false, POST_FEATURES, ChunkType.PROTOCHUNK, ChunkStatusTasks::generateSpawn, PASSTHROUGH_LOAD_TASK // Paper - rewrite chunk system
+ );
+ public static final ChunkStatus FULL = register(
+ "full", SPAWN, 0, false, POST_FEATURES, ChunkType.LEVELCHUNK, ChunkStatusTasks::generateFull, ChunkStatusTasks::loadFull
+@@ -128,6 +130,27 @@ public class ChunkStatus {
+ }
+ }
+ // Paper end - starlight
++ // Paper start - rewrite chunk system
++ public boolean isParallelCapable; // Paper
++ public int writeRadius = -1;
++ public int loadRange = 0;
++
++ private ChunkStatus nextStatus;
++
++ public final ChunkStatus getNextStatus() {
++ return this.nextStatus;
++ }
++
++ public final boolean isEmptyLoadStatus() {
++ return this.loadingTask == PASSTHROUGH_LOAD_TASK;
++ }
++
++ public final boolean isEmptyGenStatus() {
++ return this == ChunkStatus.EMPTY;
++ }
++
++ public final java.util.concurrent.atomic.AtomicBoolean warnedAboutNoImmediateComplete = new java.util.concurrent.atomic.AtomicBoolean();
++ // Paper end - rewrite chunk system
+
+ private static ChunkStatus register(
+ String id,
+@@ -190,6 +213,13 @@ public class ChunkStatus {
+ this.chunkType = chunkType;
+ this.heightmapsAfter = heightMapTypes;
+ this.index = previous == null ? 0 : previous.getIndex() + 1;
++ // Paper start
++ this.nextStatus = this;
++ if (statuses.size() > 0) {
++ statuses.get(statuses.size() - 1).nextStatus = this;
++ }
++ statuses.add(this);
++ // Paper end
+ }
+
+ public int getIndex() {
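+
+// The constructor change above threads each registered status into a singly
+// linked chain: the previously registered status' nextStatus field is pointed
+// at the new one, so getNextStatus() is a plain field read instead of a list
+// lookup. A minimal sketch of the same registration pattern, with Stage as a
+// hypothetical stand-in for the real status type:
+
+import java.util.ArrayList;
+import java.util.List;
+
+final class Stage {
+    private static final List<Stage> STAGES = new ArrayList<>();
+
+    private final String name;
+    private Stage next = this; // the newest stage points at itself
+
+    Stage(final String name) {
+        this.name = name;
+        if (!STAGES.isEmpty()) {
+            // re-point the previously registered stage at this one
+            STAGES.get(STAGES.size() - 1).next = this;
+        }
+        STAGES.add(this);
+    }
+
+    Stage next() {
+        return this.next; // O(1), no index arithmetic
+    }
+}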
+diff --git a/src/main/java/net/minecraft/world/level/chunk/status/ChunkStatusTasks.java b/src/main/java/net/minecraft/world/level/chunk/status/ChunkStatusTasks.java
+index ce7f154b9dad4e78ee0189405cf57dcb3d5301b8..a5e8078b99161272b0f826b8c39e56d17588c264 100644
+--- a/src/main/java/net/minecraft/world/level/chunk/status/ChunkStatusTasks.java
++++ b/src/main/java/net/minecraft/world/level/chunk/status/ChunkStatusTasks.java
+@@ -26,8 +26,9 @@ public class ChunkStatusTasks {
+ return CompletableFuture.completedFuture(chunk);
+ }
+
+- static CompletableFuture<ChunkAccess> loadPassThrough(WorldGenContext context, ChunkStatus status, ToFullChunk fullChunkConverter, ChunkAccess chunk) {
+- return CompletableFuture.completedFuture(chunk);
++ @io.papermc.paper.annotation.DoNotUse @Deprecated(forRemoval = true) // Paper - rewrite chunk system - use ChunkStatus.PASSTHROUGH_LOAD_TASK instead
++ static CompletableFuture<ChunkAccess> loadPassThrough(WorldGenContext context, ChunkStatus status, ToFullChunk fullChunkConverter, ChunkAccess chunk) { // Paper - rewrite chunk system - diff on change
++ return CompletableFuture.completedFuture(chunk); // Paper - rewrite chunk system - diff on change
+ }
+
+ static CompletableFuture<ChunkAccess> generateStructureStarts(WorldGenContext context, ChunkStatus status, Executor executor, ToFullChunk fullChunkConverter, List<ChunkAccess> chunks, ChunkAccess chunk) {
+@@ -125,7 +126,7 @@ public class ChunkStatusTasks {
+ ((ProtoChunk) chunk).setLightEngine(lightingProvider);
+ boolean flag = ChunkStatusTasks.isLighted(chunk);
+
+- return lightingProvider.initializeLight(chunk, flag);
++ return CompletableFuture.completedFuture(chunk); // Paper - rewrite chunk system
+ }
+
+ static CompletableFuture<ChunkAccess> generateLight(WorldGenContext context, ChunkStatus status, Executor executor, ToFullChunk fullChunkConverter, List<ChunkAccess> chunks, ChunkAccess chunk) {
+@@ -139,7 +140,7 @@ public class ChunkStatusTasks {
+ private static CompletableFuture<ChunkAccess> lightChunk(ThreadedLevelLightEngine lightingProvider, ChunkAccess chunk) {
+ boolean flag = ChunkStatusTasks.isLighted(chunk);
+
+- return lightingProvider.lightChunk(chunk, flag);
++ return CompletableFuture.completedFuture(chunk); // Paper - rewrite chunk system
+ }
+
+ static CompletableFuture<ChunkAccess> generateSpawn(WorldGenContext context, ChunkStatus status, Executor executor, ToFullChunk fullChunkConverter, List<ChunkAccess> chunks, ChunkAccess chunk) {
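+
+// Swapping the loadPassThrough method reference for the shared
+// PASSTHROUGH_LOAD_TASK constant makes "does this status do any load work?"
+// an identity comparison (see isEmptyLoadStatus() above). A sketch of the
+// idea, with LoadTask as a hypothetical stand-in for the real task type:
+
+import java.util.concurrent.CompletableFuture;
+
+interface LoadTask {
+    CompletableFuture<Object> load(Object chunk);
+}
+
+final class LoadTasks {
+    // one instance shared by every status whose load step is a no-op
+    static final LoadTask PASSTHROUGH = CompletableFuture::completedFuture;
+
+    static boolean isEmptyLoad(final LoadTask task) {
+        return task == PASSTHROUGH; // reference check, mirrors isEmptyLoadStatus()
+    }
+}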
+diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/ChunkSerializer.java b/src/main/java/net/minecraft/world/level/chunk/storage/ChunkSerializer.java
+index 01d6b8683a9fa30d05b03ebfef8ee2dca4e83a56..5f85d8d82212f9a8133304dc05bf2cd39da1f9e7 100644
+--- a/src/main/java/net/minecraft/world/level/chunk/storage/ChunkSerializer.java
++++ b/src/main/java/net/minecraft/world/level/chunk/storage/ChunkSerializer.java
+@@ -112,7 +112,25 @@ public class ChunkSerializer {
+ }
+ }
+ // Paper end - guard against serializing mismatching coordinates
++ // Paper start - rewrite chunk system
++ public static final class InProgressChunkHolder {
++
++ public final ProtoChunk protoChunk;
++
++ public CompoundTag poiData;
++
++ public InProgressChunkHolder(final ProtoChunk protoChunk) {
++ this.protoChunk = protoChunk;
++ }
++ }
+ public static ProtoChunk read(ServerLevel world, PoiManager poiStorage, ChunkPos chunkPos, CompoundTag nbt) {
++ // Paper start - rewrite chunk system
++ InProgressChunkHolder holder = readInProgressChunkHolder(world, poiStorage, chunkPos, nbt);
++ return holder.protoChunk;
++ }
++
++ public static InProgressChunkHolder readInProgressChunkHolder(ServerLevel world, PoiManager poiStorage, ChunkPos chunkPos, CompoundTag nbt) {
++ // Paper end - rewrite chunk system
+ // Paper start - Do not let the server load chunks from newer versions
+ if (nbt.contains("DataVersion", net.minecraft.nbt.Tag.TAG_ANY_NUMERIC)) {
+ final int dataVersion = nbt.getInt("DataVersion");
+@@ -178,7 +196,7 @@ public class ChunkSerializer {
+ achunksection[k] = chunksection;
+ SectionPos sectionposition = SectionPos.of(chunkPos, b0);
+
+- poiStorage.checkConsistencyWithBlocks(sectionposition, chunksection);
++ // Paper - rewrite chunk system - moved to final load stage
+ }
+
+ boolean flag3 = nbttagcompound1.contains("BlockLight", 7);
+@@ -325,7 +343,7 @@ public class ChunkSerializer {
+ }
+
+ if (chunktype == ChunkType.LEVELCHUNK) {
+- return new ImposterProtoChunk((LevelChunk) object1, false);
++ return new InProgressChunkHolder(new ImposterProtoChunk((LevelChunk) object1, false)); // Paper - Async chunk loading
+ } else {
+ ProtoChunk protochunk1 = (ProtoChunk) object1;
+
+@@ -360,9 +378,41 @@ public class ChunkSerializer {
+ protochunk1.setCarvingMask(worldgenstage_features, new CarvingMask(nbttagcompound5.getLongArray(s1), ((ChunkAccess) object1).getMinBuildHeight()));
+ }
+
+- return protochunk1;
++ return new InProgressChunkHolder(protochunk1); // Paper - Async chunk loading
++ }
++ }
++
++ // Paper start - async chunk save for unload
++ public record AsyncSaveData(
++ Tag blockTickList, // non-null if we had to go to the server's tick list
++ Tag fluidTickList, // non-null if we had to go to the server's tick list
++ ListTag blockEntities,
++ long worldTime
++ ) {}
++
++ // must be called sync
++ public static AsyncSaveData getAsyncSaveData(ServerLevel world, ChunkAccess chunk) {
++ org.spigotmc.AsyncCatcher.catchOp("preparation of chunk data for async save");
++
++ final CompoundTag tickLists = new CompoundTag();
++ ChunkSerializer.saveTicks(world, tickLists, chunk.getTicksForSerialization());
++
++ ListTag blockEntitiesSerialized = new ListTag();
++ for (final BlockPos blockPos : chunk.getBlockEntitiesPos()) {
++ final CompoundTag blockEntityNbt = chunk.getBlockEntityNbtForSaving(blockPos, world.registryAccess());
++ if (blockEntityNbt != null) {
++ blockEntitiesSerialized.add(blockEntityNbt);
++ }
+ }
++
++ return new AsyncSaveData(
++ tickLists.get(BLOCK_TICKS_TAG),
++ tickLists.get(FLUID_TICKS_TAG),
++ blockEntitiesSerialized,
++ world.getGameTime()
++ );
+ }
++ // Paper end
+
+ private static void logErrors(ChunkPos chunkPos, int y, String message) {
+ ChunkSerializer.LOGGER.error("Recoverable errors when loading section [" + chunkPos.x + ", " + y + ", " + chunkPos.z + "]: " + message);
+@@ -379,6 +429,11 @@ public class ChunkSerializer {
+ // CraftBukkit end
+
+ public static CompoundTag write(ServerLevel world, ChunkAccess chunk) {
++ // Paper start
++ return saveChunk(world, chunk, null);
++ }
++ public static CompoundTag saveChunk(ServerLevel world, ChunkAccess chunk, @org.checkerframework.checker.nullness.qual.Nullable AsyncSaveData asyncsavedata) {
++ // Paper end
+ // Paper start - rewrite light impl
+ final int minSection = io.papermc.paper.util.WorldUtil.getMinLightSection(world);
+ final int maxSection = io.papermc.paper.util.WorldUtil.getMaxLightSection(world);
+@@ -391,7 +446,7 @@ public class ChunkSerializer {
+ nbttagcompound.putInt("xPos", chunkcoordintpair.x);
+ nbttagcompound.putInt("yPos", chunk.getMinSection());
+ nbttagcompound.putInt("zPos", chunkcoordintpair.z);
+- nbttagcompound.putLong("LastUpdate", world.getGameTime());
++ nbttagcompound.putLong("LastUpdate", asyncsavedata != null ? asyncsavedata.worldTime : world.getGameTime()); // Paper - async chunk unloading
+ nbttagcompound.putLong("InhabitedTime", chunk.getInhabitedTime());
+ nbttagcompound.putString("Status", BuiltInRegistries.CHUNK_STATUS.getKey(chunk.getStatus()).toString());
+ BlendingData blendingdata = chunk.getBlendingData();
+@@ -485,8 +540,17 @@ public class ChunkSerializer {
+ nbttagcompound.putBoolean("isLightOn", false); // Paper - set to false but still store, this allows us to detect --eraseCache (as eraseCache _removes_)
+ }
+
+- ListTag nbttaglist1 = new ListTag();
+- Iterator iterator = chunk.getBlockEntitiesPos().iterator();
++ // Paper start
++ ListTag nbttaglist1;
++ Iterator<BlockPos> iterator;
++ if (asyncsavedata != null) {
++ nbttaglist1 = asyncsavedata.blockEntities;
++ iterator = java.util.Collections.emptyIterator();
++ } else {
++ nbttaglist1 = new ListTag();
++ iterator = chunk.getBlockEntitiesPos().iterator();
++ }
++ // Paper end
+
+ CompoundTag nbttagcompound2;
+
+@@ -522,7 +586,14 @@ public class ChunkSerializer {
+ nbttagcompound.put("CarvingMasks", nbttagcompound2);
+ }
+
++ // Paper start
++ if (asyncsavedata != null) {
++ nbttagcompound.put(BLOCK_TICKS_TAG, asyncsavedata.blockTickList);
++ nbttagcompound.put(FLUID_TICKS_TAG, asyncsavedata.fluidTickList);
++ } else {
+ ChunkSerializer.saveTicks(world, nbttagcompound, chunk.getTicksForSerialization());
++ }
++ // Paper end
+ nbttagcompound.put("PostProcessing", ChunkSerializer.packOffsets(chunk.getPostProcessing()));
+ CompoundTag nbttagcompound3 = new CompoundTag();
+ Iterator iterator1 = chunk.getHeightmaps().iterator();
+@@ -578,7 +649,7 @@ public class ChunkSerializer {
+
+ return nbttaglist == null && nbttaglist1 == null ? null : (chunk) -> {
+ if (nbttaglist != null) {
+- world.addLegacyChunkEntities(EntityType.loadEntitiesRecursive(nbttaglist, world));
++ world.addLegacyChunkEntities(EntityType.loadEntitiesRecursive(nbttaglist, world), chunk.getPos()); // Paper - rewrite chunk system
+ }
+
+ if (nbttaglist1 != null) {
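+
+// getAsyncSaveData() above snapshots the mutable pieces of a chunk (tick
+// lists, block entity NBT, world time) while still on the main thread;
+// saveChunk() then consumes the snapshot from a save thread without touching
+// live state. A sketch of that split, with hypothetical Chunk/Snapshot types:
+
+interface Chunk {
+    Object copyTicks();          // deep copy of the pending tick lists
+    Object copyBlockEntityNbt(); // serialized block entity data
+}
+
+record Snapshot(Object ticks, Object blockEntities, long worldTime) {}
+
+final class AsyncSave {
+    // must be called on the main thread: reads live, mutable chunk state
+    static Snapshot snapshot(final Chunk chunk, final long gameTime) {
+        return new Snapshot(chunk.copyTicks(), chunk.copyBlockEntityNbt(), gameTime);
+    }
+
+    // safe from any thread: only reads the immutable snapshot
+    static String serialize(final Snapshot snap) {
+        return snap.ticks() + "/" + snap.blockEntities() + "@" + snap.worldTime();
+    }
+}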
+diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/ChunkStorage.java b/src/main/java/net/minecraft/world/level/chunk/storage/ChunkStorage.java
+index a62c90e10c0dfa4c6211a05c4071932756d7b218..554dede2ad0e45d3ee4ccc5510b7644f2e9e4250 100644
+--- a/src/main/java/net/minecraft/world/level/chunk/storage/ChunkStorage.java
++++ b/src/main/java/net/minecraft/world/level/chunk/storage/ChunkStorage.java
+@@ -31,18 +31,21 @@ import net.minecraft.world.level.storage.DimensionDataStorage;
+ public class ChunkStorage implements AutoCloseable {
+
+ public static final int LAST_MONOLYTH_STRUCTURE_DATA_VERSION = 1493;
+- private final IOWorker worker;
++ // Paper start - rewrite chunk system; async chunk IO
++ private final Object persistentDataLock = new Object();
++ public final RegionFileStorage regionFileCache;
++ // Paper end - rewrite chunk system
+ protected final DataFixer fixerUpper;
+ @Nullable
+ private volatile LegacyStructureDataHandler legacyStructureHandler;
+
+ public ChunkStorage(RegionStorageInfo storageKey, Path directory, DataFixer dataFixer, boolean dsync) {
+ this.fixerUpper = dataFixer;
+- this.worker = new IOWorker(storageKey, directory, dsync);
++ this.regionFileCache = new RegionFileStorage(storageKey, directory, dsync); // Paper - rewrite chunk system; async chunk IO
+ }
+
+ public boolean isOldChunkAround(ChunkPos chunkPos, int checkRadius) {
+- return this.worker.isOldChunkAround(chunkPos, checkRadius);
++ return true; // Paper - rewrite chunk system
+ }
+
+ // CraftBukkit start
+@@ -50,8 +53,9 @@ public class ChunkStorage implements AutoCloseable {
+ if (true) return true; // Paper - Perf: this isn't even needed anymore, light is purged updating to 1.14+, why are we holding up the conversion process reading chunk data off disk - return true, we need to set light populated to true so the converter recognizes the chunk as being "full"
+ ChunkPos pos = new ChunkPos(x, z);
+ if (cps != null) {
+- com.google.common.base.Preconditions.checkState(org.bukkit.Bukkit.isPrimaryThread(), "primary thread");
+- if (cps.hasChunk(x, z)) {
++ // Paper start - rewrite chunk system; async chunk IO
++ if (cps.getChunkAtIfCachedImmediately(x, z) != null) { // isLoaded is a ticket level check, not a chunk loaded check!
++ // Paper end - rewrite chunk system
+ return true;
+ }
+ }
+@@ -79,6 +83,7 @@ public class ChunkStorage implements AutoCloseable {
+
+ public CompoundTag upgradeChunkTag(ResourceKey<LevelStem> resourcekey, Supplier<DimensionDataStorage> supplier, CompoundTag nbttagcompound, Optional<ResourceKey<MapCodec<? extends ChunkGenerator>>> optional, ChunkPos pos, @Nullable LevelAccessor generatoraccess) {
+ // CraftBukkit end
++ nbttagcompound = nbttagcompound.copy(); // Paper - defensive copy, another thread might modify this
+ int i = ChunkStorage.getVersion(nbttagcompound);
+
+ try {
+@@ -97,9 +102,11 @@ public class ChunkStorage implements AutoCloseable {
+ if (i < 1493) {
+ ca.spottedleaf.dataconverter.minecraft.MCDataConverter.convertTag(ca.spottedleaf.dataconverter.minecraft.datatypes.MCTypeRegistry.CHUNK, nbttagcompound, i, 1493); // Paper - replace chunk converter
+ if (nbttagcompound.getCompound("Level").getBoolean("hasLegacyStructureData")) {
++ synchronized (this.persistentDataLock) { // Paper - Async chunk loading
+ LegacyStructureDataHandler persistentstructurelegacy = this.getLegacyStructureHandler(resourcekey, supplier);
+
+ nbttagcompound = persistentstructurelegacy.updateFromLegacy(nbttagcompound);
++ } // Paper - Async chunk loading
+ }
+ }
+
+@@ -139,7 +146,7 @@ public class ChunkStorage implements AutoCloseable {
+ LegacyStructureDataHandler persistentstructurelegacy = this.legacyStructureHandler;
+
+ if (persistentstructurelegacy == null) {
+- synchronized (this) {
++ synchronized (this.persistentDataLock) { // Paper - async chunk loading
+ persistentstructurelegacy = this.legacyStructureHandler;
+ if (persistentstructurelegacy == null) {
+ this.legacyStructureHandler = persistentstructurelegacy = LegacyStructureDataHandler.getLegacyStructureHandler(worldKey, (DimensionDataStorage) stateManagerGetter.get());
+@@ -165,10 +172,20 @@ public class ChunkStorage implements AutoCloseable {
+ }
+
+ public CompletableFuture<Optional<CompoundTag>> read(ChunkPos chunkPos) {
+- return this.worker.loadAsync(chunkPos);
++ // Paper start - async chunk io
++ try {
++ return CompletableFuture.completedFuture(Optional.ofNullable(this.readSync(chunkPos)));
++ } catch (Throwable thr) {
++ return CompletableFuture.failedFuture(thr);
++ }
++ }
++ @Nullable
++ public CompoundTag readSync(ChunkPos chunkPos) throws IOException {
++ return this.regionFileCache.read(chunkPos);
+ }
++ // Paper end - async chunk io
+
+- public CompletableFuture<Void> write(ChunkPos chunkPos, CompoundTag nbt) {
++ public CompletableFuture<Void> write(ChunkPos chunkPos, CompoundTag nbt) throws IOException { // Paper - rewrite chunk system; async chunk io
+ // Paper start - guard against serializing mismatching coordinates
+ if (nbt != null && !chunkPos.equals(ChunkSerializer.getChunkCoordinate(nbt))) {
+ final String world = (this instanceof net.minecraft.server.level.ChunkMap) ? ((net.minecraft.server.level.ChunkMap) this).level.getWorld().getName() : null;
+@@ -176,26 +193,39 @@ public class ChunkStorage implements AutoCloseable {
+ + " but compound says coordinate is " + ChunkSerializer.getChunkCoordinate(nbt) + (world == null ? " for an unknown world" : (" for world: " + world)));
+ }
+ // Paper end - guard against serializing mismatching coordinates
++ this.regionFileCache.write(chunkPos, nbt); // Paper - rewrite chunk system; async chunk io, move above legacy structure index
+ this.handleLegacyStructureIndex(chunkPos);
+- return this.worker.store(chunkPos, nbt);
++ // Paper - rewrite chunk system; async chunk io, move above legacy structure index
++ return null;
+ }
+
+ protected void handleLegacyStructureIndex(ChunkPos chunkPos) {
+ if (this.legacyStructureHandler != null) {
++ synchronized (this.persistentDataLock) { // Paper - rewrite chunk system; async chunk io
+ this.legacyStructureHandler.removeIndex(chunkPos.toLong());
++ } // Paper - rewrite chunk system; async chunk io
+ }
+
+ }
+
+ public void flushWorker() {
+- this.worker.synchronize(true).join();
++ io.papermc.paper.chunk.system.io.RegionFileIOThread.flush(); // Paper - rewrite chunk system
+ }
+
+ public void close() throws IOException {
+- this.worker.close();
++ this.regionFileCache.close(); // Paper - nuke IO worker
+ }
+
+ public ChunkScanAccess chunkScanner() {
+- return this.worker;
++ // Paper start - nuke IO worker
++ return ((chunkPos, streamTagVisitor) -> {
++ try {
++ this.regionFileCache.scanChunk(chunkPos, streamTagVisitor);
++ return java.util.concurrent.CompletableFuture.completedFuture(null);
++ } catch (IOException e) {
++ throw new RuntimeException(e);
++ }
++ });
++ // Paper end
+ }
+ }
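+
+// The legacy structure handler is now initialized under a dedicated lock
+// object (persistentDataLock) instead of synchronized(this), so chunk I/O
+// callers never contend on the ChunkStorage monitor itself. The shape is
+// classic double-checked locking over a volatile field; a generic sketch:
+
+import java.util.function.Supplier;
+
+final class Lazy<T> {
+    private final Object lock = new Object();
+    private volatile T value;
+
+    T get(final Supplier<T> factory) {
+        T local = this.value;       // first, unsynchronized read
+        if (local == null) {
+            synchronized (this.lock) {
+                local = this.value; // re-check under the lock
+                if (local == null) {
+                    this.value = local = factory.get(); // created exactly once
+                }
+            }
+        }
+        return local;
+    }
+}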
+diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/EntityStorage.java b/src/main/java/net/minecraft/world/level/chunk/storage/EntityStorage.java
+index 49d8a62d2b6ca6da4e02b3cec7e42c38b7781b57..9fdf8f857a5f9b231c6d0633eaba498244214f74 100644
+--- a/src/main/java/net/minecraft/world/level/chunk/storage/EntityStorage.java
++++ b/src/main/java/net/minecraft/world/level/chunk/storage/EntityStorage.java
+@@ -27,43 +27,30 @@ public class EntityStorage implements EntityPersistentStorage<Entity> {
+ private static final String ENTITIES_TAG = "Entities";
+ private static final String POSITION_TAG = "Position";
+ public final ServerLevel level;
+- private final SimpleRegionStorage simpleRegionStorage;
++ // Paper - rewrite chunk system
+ private final LongSet emptyChunks = new LongOpenHashSet();
+- public final ProcessorMailbox<Runnable> entityDeserializerQueue;
++ // Paper - rewrite chunk system
+
+ public EntityStorage(SimpleRegionStorage storage, ServerLevel world, Executor executor) {
+- this.simpleRegionStorage = storage;
++ // Paper - rewrite chunk system
+ this.level = world;
+- this.entityDeserializerQueue = ProcessorMailbox.create(executor, "entity-deserializer");
++ // Paper - rewrite chunk system
+ }
+
+ @Override
+ public CompletableFuture<ChunkEntities<Entity>> loadEntities(ChunkPos pos) {
+- return this.emptyChunks.contains(pos.toLong())
+- ? CompletableFuture.completedFuture(emptyChunk(pos))
+- : this.simpleRegionStorage.read(pos).thenApplyAsync(nbt -> {
+- if (nbt.isEmpty()) {
+- this.emptyChunks.add(pos.toLong());
+- return emptyChunk(pos);
+- } else {
+- try {
+- ChunkPos chunkPos2 = readChunkPos(nbt.get());
+- if (!Objects.equals(pos, chunkPos2)) {
+- LOGGER.error("Chunk file at {} is in the wrong location. (Expected {}, got {})", pos, pos, chunkPos2);
+- }
+- } catch (Exception var6) {
+- LOGGER.warn("Failed to parse chunk {} position info", pos, var6);
+- }
+-
+- CompoundTag compoundTag = this.simpleRegionStorage.upgradeChunkTag(nbt.get(), -1);
+- ListTag listTag = compoundTag.getList("Entities", 10);
+- List<Entity> list = EntityType.loadEntitiesRecursive(listTag, this.level).collect(ImmutableList.toImmutableList());
+- return new ChunkEntities<>(pos, list);
+- }
+- }, this.entityDeserializerQueue::tell);
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system - copy out read logic into readEntities
+ }
+
+- private static ChunkPos readChunkPos(CompoundTag chunkNbt) {
++ // Paper start - rewrite chunk system
++ public static List<Entity> readEntities(ServerLevel level, CompoundTag compoundTag) {
++ ListTag listTag = compoundTag.getList("Entities", 10);
++ List<Entity> list = EntityType.loadEntitiesRecursive(listTag, level).collect(ImmutableList.toImmutableList());
++ return list;
++ }
++ // Paper end - rewrite chunk system
++
++ public static ChunkPos readChunkPos(CompoundTag chunkNbt) { // Paper - public
+ int[] is = chunkNbt.getIntArray("Position");
+ return new ChunkPos(is[0], is[1]);
+ }
+@@ -78,38 +65,74 @@ public class EntityStorage implements EntityPersistentStorage<Entity> {
+
+ @Override
+ public void storeEntities(ChunkEntities<Entity> dataList) {
++ // Paper start - rewrite chunk system
++ if (true) {
++ throw new UnsupportedOperationException();
++ }
++ // Paper end - rewrite chunk system
+ ChunkPos chunkPos = dataList.getPos();
+ if (dataList.isEmpty()) {
+ if (this.emptyChunks.add(chunkPos.toLong())) {
+- this.simpleRegionStorage.write(chunkPos, null);
++ // Paper - rewrite chunk system - fix compile for unused field in dead code
+ }
+ } else {
+- ListTag listTag = new ListTag();
+- dataList.getEntities().forEach(entity -> {
+- CompoundTag compoundTagx = new CompoundTag();
+- if (entity.save(compoundTagx)) {
+- listTag.add(compoundTagx);
+- }
+- });
+- CompoundTag compoundTag = NbtUtils.addCurrentDataVersion(new CompoundTag());
+- compoundTag.put("Entities", listTag);
+- writeChunkPos(compoundTag, chunkPos);
+- this.simpleRegionStorage.write(chunkPos, compoundTag).exceptionally(ex -> {
+- LOGGER.error("Failed to store chunk {}", chunkPos, ex);
+- return null;
+- });
++ // Paper - move into saveEntityChunk0
+ this.emptyChunks.remove(chunkPos.toLong());
+ }
+ }
+
++ // Paper start - rewrite chunk system
++ public static void copyEntities(final CompoundTag from, final CompoundTag into) {
++ if (from == null) {
++ return;
++ }
++ final ListTag entitiesFrom = from.getList("Entities", net.minecraft.nbt.Tag.TAG_COMPOUND);
++ if (entitiesFrom == null || entitiesFrom.isEmpty()) {
++ return;
++ }
++
++ final ListTag entitiesInto = into.getList("Entities", net.minecraft.nbt.Tag.TAG_COMPOUND);
++ into.put("Entities", entitiesInto); // this is in case into doesn't have any entities
++ entitiesInto.addAll(0, entitiesFrom.copy()); // need to copy, this is coming from the save thread
++ }
++
++ public static CompoundTag saveEntityChunk(List<Entity> entities, ChunkPos chunkPos, ServerLevel level) {
++ return saveEntityChunk0(entities, chunkPos, level, false);
++ }
++ private static CompoundTag saveEntityChunk0(List<Entity> entities, ChunkPos chunkPos, ServerLevel level, boolean force) {
++ if (!force && entities.isEmpty()) {
++ return null;
++ }
++
++ ListTag listTag = new ListTag();
++ entities.forEach((entity) -> { // diff here: use entities parameter
++ CompoundTag compoundTag = new CompoundTag();
++ if (entity.save(compoundTag)) {
++ listTag.add(compoundTag);
++ }
++
++ });
++ CompoundTag compoundTag = NbtUtils.addCurrentDataVersion(new CompoundTag());
++ compoundTag.put("Entities", listTag);
++ writeChunkPos(compoundTag, chunkPos);
++ // Paper - remove worker usage
++
++ return !force && listTag.isEmpty() ? null : compoundTag;
++ }
++
++ public static CompoundTag upgradeChunkTag(CompoundTag chunkNbt) {
++ int i = NbtUtils.getDataVersion(chunkNbt, -1);
++ return ca.spottedleaf.dataconverter.minecraft.MCDataConverter.convertTag(ca.spottedleaf.dataconverter.minecraft.datatypes.MCTypeRegistry.ENTITY_CHUNK, chunkNbt, i, net.minecraft.SharedConstants.getCurrentVersion().getDataVersion().getVersion());
++ }
++ // Paper end - rewrite chunk system
++
+ @Override
+ public void flush(boolean sync) {
+- this.simpleRegionStorage.synchronize(sync).join();
+- this.entityDeserializerQueue.runAll();
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+
+ @Override
+ public void close() throws IOException {
+- this.simpleRegionStorage.close();
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system
+ }
+ }
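+
+// copyEntities() above merges the "Entities" list from an in-flight save
+// into freshly read chunk data, copying the tags because the source is still
+// owned by the save thread. The same merge over plain collections (a real
+// implementation would deep-copy each entry, as entitiesFrom.copy() does):
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+final class EntityMerge {
+    @SuppressWarnings("unchecked")
+    static void merge(final Map<String, Object> from, final Map<String, Object> into) {
+        final List<Object> entitiesFrom = (List<Object>) from.get("Entities");
+        if (entitiesFrom == null || entitiesFrom.isEmpty()) {
+            return;
+        }
+        final List<Object> entitiesInto =
+            (List<Object>) into.computeIfAbsent("Entities", k -> new ArrayList<>());
+        // prepend a copy so the save thread's list is never aliased
+        entitiesInto.addAll(0, new ArrayList<>(entitiesFrom));
+    }
+}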
+diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/RegionFile.java b/src/main/java/net/minecraft/world/level/chunk/storage/RegionFile.java
+index e858436bcf1b234d4bc6e6a117f5224d5c2d9f90..307196b2a58d4f8db3e6e3c3517a8004d4908b13 100644
+--- a/src/main/java/net/minecraft/world/level/chunk/storage/RegionFile.java
++++ b/src/main/java/net/minecraft/world/level/chunk/storage/RegionFile.java
+@@ -48,6 +48,7 @@ public class RegionFile implements AutoCloseable {
+ private final IntBuffer timestamps;
+ @VisibleForTesting
+ protected final RegionBitmap usedSectors;
++ public final java.util.concurrent.locks.ReentrantLock fileLock = new java.util.concurrent.locks.ReentrantLock(); // Paper
+
+ public RegionFile(RegionStorageInfo storageKey, Path directory, Path path, boolean dsync) throws IOException {
+ this(storageKey, directory, path, RegionFileVersion.getCompressionFormat(), dsync); // Paper - Configurable region compression format
+@@ -250,7 +251,7 @@ public class RegionFile implements AutoCloseable {
+ return (byteCount + 4096 - 1) / 4096;
+ }
+
+- public boolean doesChunkExist(ChunkPos pos) {
++ public synchronized boolean doesChunkExist(ChunkPos pos) { // Paper - synchronized
+ int i = this.getOffset(pos);
+
+ if (i == 0) {
+@@ -417,6 +418,11 @@ public class RegionFile implements AutoCloseable {
+ }
+
+ public void close() throws IOException {
++ // Paper start - Prevent regionfiles from being closed during use
++ this.fileLock.lock();
++ synchronized (this) {
++ try {
++ // Paper end
+ try {
+ this.padToFullSector();
+ } finally {
+@@ -426,6 +432,10 @@ public class RegionFile implements AutoCloseable {
+ this.file.close();
+ }
+ }
++ } finally { // Paper start - Prevent regionfiles from being closed during use
++ this.fileLock.unlock();
++ }
++ } // Paper end
+
+ }
+
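+
+// The new fileLock stops a region file from being closed while another
+// thread is mid-read or mid-write: I/O callers acquire it (optionally via
+// getRegionFile(pos, existingOnly, true)) and release it in a finally block,
+// and close() acquires the same lock before tearing the file down. A minimal
+// sketch of that lifecycle:
+
+import java.util.concurrent.locks.ReentrantLock;
+
+final class LockedFile implements AutoCloseable {
+    final ReentrantLock fileLock = new ReentrantLock();
+    private boolean closed;
+
+    void read() {
+        this.fileLock.lock();
+        try {
+            if (this.closed) throw new IllegalStateException("file already closed");
+            // ... perform the I/O while holding the lock ...
+        } finally {
+            this.fileLock.unlock();
+        }
+    }
+
+    @Override
+    public void close() {
+        this.fileLock.lock(); // blocks until in-flight users release
+        try {
+            this.closed = true; // now safe to release underlying resources
+        } finally {
+            this.fileLock.unlock();
+        }
+    }
+}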
+diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/RegionFileStorage.java b/src/main/java/net/minecraft/world/level/chunk/storage/RegionFileStorage.java
+index c4eef3aade889c69cefd873bec2d031cc54103ea..3f6955be976064eb542b5c50a9d6d74457c1833c 100644
+--- a/src/main/java/net/minecraft/world/level/chunk/storage/RegionFileStorage.java
++++ b/src/main/java/net/minecraft/world/level/chunk/storage/RegionFileStorage.java
+@@ -26,31 +26,99 @@ public class RegionFileStorage implements AutoCloseable {
+ private final Path folder;
+ private final boolean sync;
+
+- RegionFileStorage(RegionStorageInfo storageKey, Path directory, boolean dsync) {
++ // Paper start - cache regionfile does not exist state
++ static final int MAX_NON_EXISTING_CACHE = 1024 * 64;
++ private final it.unimi.dsi.fastutil.longs.LongLinkedOpenHashSet nonExistingRegionFiles = new it.unimi.dsi.fastutil.longs.LongLinkedOpenHashSet();
++ private synchronized boolean doesRegionFilePossiblyExist(long position) {
++ if (this.nonExistingRegionFiles.contains(position)) {
++ this.nonExistingRegionFiles.addAndMoveToFirst(position);
++ return false;
++ }
++ return true;
++ }
++
++ private synchronized void createRegionFile(long position) {
++ this.nonExistingRegionFiles.remove(position);
++ }
++
++ private synchronized void markNonExisting(long position) {
++ if (this.nonExistingRegionFiles.addAndMoveToFirst(position)) {
++ while (this.nonExistingRegionFiles.size() >= MAX_NON_EXISTING_CACHE) {
++ this.nonExistingRegionFiles.removeLastLong();
++ }
++ }
++ }
++
++ public synchronized boolean doesRegionFileNotExistNoIO(ChunkPos pos) {
++ long key = ChunkPos.asLong(pos.getRegionX(), pos.getRegionZ());
++ return !this.doesRegionFilePossiblyExist(key);
++ }
++ // Paper end - cache regionfile does not exist state
++
++ protected RegionFileStorage(RegionStorageInfo storageKey, Path directory, boolean dsync) { // Paper - protected constructor
+ this.folder = directory;
+ this.sync = dsync;
+ this.info = storageKey;
+ }
+
+- private RegionFile getRegionFile(ChunkPos chunkcoordintpair, boolean existingOnly) throws IOException { // CraftBukkit
+- long i = ChunkPos.asLong(chunkcoordintpair.getRegionX(), chunkcoordintpair.getRegionZ());
++ // Paper start
++ public synchronized RegionFile getRegionFileIfLoaded(ChunkPos chunkcoordintpair) {
++ return this.regionCache.getAndMoveToFirst(ChunkPos.asLong(chunkcoordintpair.getRegionX(), chunkcoordintpair.getRegionZ()));
++ }
++
++ public synchronized boolean chunkExists(ChunkPos pos) throws IOException {
++ RegionFile regionfile = getRegionFile(pos, true);
++
++ return regionfile != null ? regionfile.hasChunk(pos) : false;
++ }
++
++ public synchronized RegionFile getRegionFile(ChunkPos chunkcoordintpair, boolean existingOnly) throws IOException { // CraftBukkit
++ return this.getRegionFile(chunkcoordintpair, existingOnly, false);
++ }
++ public synchronized RegionFile getRegionFile(ChunkPos chunkcoordintpair, boolean existingOnly, boolean lock) throws IOException {
++ // Paper end
++ long i = ChunkPos.asLong(chunkcoordintpair.getRegionX(), chunkcoordintpair.getRegionZ()); final long regionPos = i; // Paper - OBFHELPER
+ RegionFile regionfile = (RegionFile) this.regionCache.getAndMoveToFirst(i);
+
+ if (regionfile != null) {
++ // Paper start
++ if (lock) {
++ // must be in this synchronized block
++ regionfile.fileLock.lock();
++ }
++ // Paper end
+ return regionfile;
+ } else {
++ // Paper start - cache regionfile does not exist state
++ if (existingOnly && !this.doesRegionFilePossiblyExist(regionPos)) {
++ return null;
++ }
++ // Paper end - cache regionfile does not exist state
+ if (this.regionCache.size() >= io.papermc.paper.configuration.GlobalConfiguration.get().misc.regionFileCacheSize) { // Paper - Sanitise RegionFileCache and make configurable
+ ((RegionFile) this.regionCache.removeLast()).close();
+ }
+
+- FileUtil.createDirectoriesSafe(this.folder);
++ // Paper - only create the directory when we may create the region file; moved down
+ Path path = this.folder;
+ int j = chunkcoordintpair.getRegionX();
+ Path path1 = path.resolve("r." + j + "." + chunkcoordintpair.getRegionZ() + ".mca");
+- if (existingOnly && !java.nio.file.Files.exists(path1)) return null; // CraftBukkit
++ if (existingOnly && !java.nio.file.Files.exists(path1)) { // Paper start - cache regionfile does not exist state
++ this.markNonExisting(regionPos);
++ return null; // CraftBukkit
++ } else {
++ this.createRegionFile(regionPos);
++ }
++ // Paper end - cache regionfile does not exist state
++ FileUtil.createDirectoriesSafe(this.folder); // Paper - only create the directory when we may create the region file; moved from above
+ RegionFile regionfile1 = new RegionFile(this.info, path1, this.folder, this.sync);
+
+ this.regionCache.putAndMoveToFirst(i, regionfile1);
++ // Paper start
++ if (lock) {
++ // must be in this synchronized block
++ regionfile1.fileLock.lock();
++ }
++ // Paper end
+ return regionfile1;
+ }
+ }
+@@ -58,11 +126,12 @@ public class RegionFileStorage implements AutoCloseable {
+ @Nullable
+ public CompoundTag read(ChunkPos pos) throws IOException {
+ // CraftBukkit start - SPIGOT-5680: There's no good reason to preemptively create files on read, save that for writing
+- RegionFile regionfile = this.getRegionFile(pos, true);
++ RegionFile regionfile = this.getRegionFile(pos, true, true); // Paper
+ if (regionfile == null) {
+ return null;
+ }
+ // CraftBukkit end
++ try { // Paper
+ DataInputStream datainputstream = regionfile.getChunkDataInputStream(pos);
+
+ CompoundTag nbttagcompound;
+@@ -99,6 +168,9 @@ public class RegionFileStorage implements AutoCloseable {
+ }
+
+ return nbttagcompound;
++ } finally { // Paper start
++ regionfile.fileLock.unlock();
++ } // Paper end
+ }
+
+ public void scanChunk(ChunkPos chunkPos, StreamTagVisitor scanner) throws IOException {
+@@ -133,7 +205,13 @@ public class RegionFileStorage implements AutoCloseable {
+ }
+
+ protected void write(ChunkPos pos, @Nullable CompoundTag nbt) throws IOException {
+- RegionFile regionfile = this.getRegionFile(pos, false); // CraftBukkit
++ // Paper start - rewrite chunk system
++ RegionFile regionfile = this.getRegionFile(pos, nbt == null, true); // CraftBukkit
++ if (nbt == null && regionfile == null) {
++ return;
++ }
++ try { // try/finally to guarantee the region file lock is released
++ // Paper end - rewrite chunk system
+ // Paper start - Chunk save reattempt
+ int attempts = 0;
+ Exception lastException = null;
+@@ -179,9 +257,14 @@ public class RegionFileStorage implements AutoCloseable {
+ net.minecraft.server.MinecraftServer.LOGGER.error("Failed to save chunk {}", pos, lastException);
+ }
+ // Paper end - Chunk save reattempt
++ // Paper start - rewrite chunk system
++ } finally {
++ regionfile.fileLock.unlock();
++ }
++ // Paper end - rewrite chunk system
+ }
+
+- public void close() throws IOException {
++ public synchronized void close() throws IOException { // Paper -> synchronized
+ ExceptionCollector<IOException> exceptionsuppressor = new ExceptionCollector<>();
+ ObjectIterator objectiterator = this.regionCache.values().iterator();
+
+@@ -198,7 +281,7 @@ public class RegionFileStorage implements AutoCloseable {
+ exceptionsuppressor.throwIfPresent();
+ }
+
+- public void flush() throws IOException {
++ public synchronized void flush() throws IOException { // Paper - synchronize
+ ObjectIterator objectiterator = this.regionCache.values().iterator();
+
+ while (objectiterator.hasNext()) {
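+
+// The non-existing-region cache above is a bounded LRU set: lookups move a
+// hit to the front, misses are recorded, and the oldest entries are evicted
+// once the size passes MAX_NON_EXISTING_CACHE, so repeated reads of untouched
+// regions skip the filesystem. The same idea with a JDK access-ordered
+// LinkedHashMap instead of fastutil:
+
+import java.util.LinkedHashMap;
+import java.util.Map;
+
+final class NegativeRegionCache {
+    private static final int LIMIT = 1024;
+
+    private final Map<Long, Boolean> missing =
+        new LinkedHashMap<>(16, 0.75f, true) { // access-order = LRU
+            @Override
+            protected boolean removeEldestEntry(final Map.Entry<Long, Boolean> eldest) {
+                return this.size() > LIMIT;
+            }
+        };
+
+    synchronized boolean possiblyExists(final long regionKey) {
+        return this.missing.get(regionKey) == null; // get() refreshes recency
+    }
+
+    synchronized void markMissing(final long regionKey) {
+        this.missing.put(regionKey, Boolean.TRUE);
+    }
+
+    synchronized void markCreated(final long regionKey) {
+        this.missing.remove(regionKey);
+    }
+}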
+diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/SectionStorage.java b/src/main/java/net/minecraft/world/level/chunk/storage/SectionStorage.java
+index 151fcbca34e02783e19fbb7b54ec4fbec2eed190..883fbe5c81e3be27007a1a0489f80ba1863e5a04 100644
+--- a/src/main/java/net/minecraft/world/level/chunk/storage/SectionStorage.java
++++ b/src/main/java/net/minecraft/world/level/chunk/storage/SectionStorage.java
+@@ -12,6 +12,7 @@ import it.unimi.dsi.fastutil.longs.Long2ObjectMap;
+ import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;
+ import it.unimi.dsi.fastutil.longs.LongLinkedOpenHashSet;
+ import java.io.IOException;
++import java.nio.file.Path;
+ import java.util.Map;
+ import java.util.Optional;
+ import java.util.concurrent.CompletableFuture;
+@@ -31,25 +32,30 @@ import net.minecraft.world.level.ChunkPos;
+ import net.minecraft.world.level.LevelHeightAccessor;
+ import org.slf4j.Logger;
+
+-public class SectionStorage<R> implements AutoCloseable {
++public class SectionStorage<R> extends RegionFileStorage implements AutoCloseable { // Paper - nuke IOWorker
+ private static final Logger LOGGER = LogUtils.getLogger();
+ private static final String SECTIONS_TAG = "Sections";
+- private final SimpleRegionStorage simpleRegionStorage;
++ // Paper - remove mojang I/O thread
+ private final Long2ObjectMap<Optional<R>> storage = new Long2ObjectOpenHashMap<>();
+ private final LongLinkedOpenHashSet dirty = new LongLinkedOpenHashSet();
+ private final Function<Runnable, Codec<R>> codec;
+ private final Function<Runnable, R> factory;
+- private final RegistryAccess registryAccess;
++ public final RegistryAccess registryAccess; // Paper - rewrite chunk system - public
+ protected final LevelHeightAccessor levelHeightAccessor;
+
+ public SectionStorage(
++ // Paper start
++ RegionStorageInfo regionStorageInfo,
++ Path path,
++ boolean dsync,
++ // Paper end
+ SimpleRegionStorage storageAccess,
+ Function<Runnable, Codec<R>> codecFactory,
+ Function<Runnable, R> factory,
+ RegistryAccess registryManager,
+ LevelHeightAccessor world
+ ) {
+- this.simpleRegionStorage = storageAccess;
++ super(regionStorageInfo, path, dsync); // Paper - remove mojang I/O thread
+ this.codec = codecFactory;
+ this.factory = factory;
+ this.registryAccess = registryManager;
+@@ -112,23 +118,21 @@ public class SectionStorage<R> implements AutoCloseable {
+ }
+
+ private void readColumn(ChunkPos pos) {
+- Optional<CompoundTag> optional = this.tryRead(pos).join();
+- RegistryOps<Tag> registryOps = this.registryAccess.createSerializationContext(NbtOps.INSTANCE);
+- this.readColumn(pos, registryOps, optional.orElse(null));
++ throw new IllegalStateException("Only chunk system can load in state, offending class:" + this.getClass().getName()); // Paper - rewrite chunk system
+ }
+
+ private CompletableFuture<Optional<CompoundTag>> tryRead(ChunkPos pos) {
+- return this.simpleRegionStorage.read(pos).exceptionally(throwable -> {
+- if (throwable instanceof IOException iOException) {
+- LOGGER.error("Error reading chunk {} data from disk", pos, iOException);
+- return Optional.empty();
+- } else {
+- throw new CompletionException(throwable);
+- }
+- });
++ // Paper start - rewrite chunk system
++ try {
++ return CompletableFuture.completedFuture(Optional.ofNullable(this.read(pos)));
++ } catch (Throwable thr) {
++ return CompletableFuture.failedFuture(thr);
++ }
++ // Paper end - rewrite chunk system
+ }
+
+ private void readColumn(ChunkPos pos, RegistryOps<Tag> ops, @Nullable CompoundTag nbt) {
++ if (true) throw new IllegalStateException("Only chunk system can load in state, offending class:" + this.getClass().getName()); // Paper - rewrite chunk system
+ if (nbt == null) {
+ for (int i = this.levelHeightAccessor.getMinSection(); i < this.levelHeightAccessor.getMaxSection(); i++) {
+ this.storage.put(getKey(pos, i), Optional.empty());
+@@ -138,7 +142,7 @@ public class SectionStorage<R> implements AutoCloseable {
+ int j = getVersion(dynamic);
+ int k = SharedConstants.getCurrentVersion().getDataVersion().getVersion();
+ boolean bl = j != k;
+- Dynamic<Tag> dynamic2 = this.simpleRegionStorage.upgradeChunkTag(dynamic, j);
++ Dynamic<Tag> dynamic2 = null; // Paper - rewrite chunk system
+ OptionalDynamic<Tag> optionalDynamic = dynamic2.get("Sections");
+
+ for (int l = this.levelHeightAccessor.getMinSection(); l < this.levelHeightAccessor.getMaxSection(); l++) {
+@@ -162,7 +166,7 @@ public class SectionStorage<R> implements AutoCloseable {
+ Dynamic<Tag> dynamic = this.writeColumn(pos, registryOps);
+ Tag tag = dynamic.getValue();
+ if (tag instanceof CompoundTag) {
+- this.simpleRegionStorage.write(pos, (CompoundTag)tag);
++ try { this.write(pos, (CompoundTag)tag); } catch (IOException ex) { SectionStorage.LOGGER.error("Error writing poi chunk data to disk for chunk " + pos, ex); } // Paper - nuke IOWorker
+ } else {
+ LOGGER.error("Expected compound tag, got {}", tag);
+ }
+@@ -212,7 +216,7 @@ public class SectionStorage<R> implements AutoCloseable {
+ }
+
+ private static int getVersion(Dynamic<?> dynamic) {
+- return dynamic.get("DataVersion").asInt(1945);
++ return dynamic.get("DataVersion").asInt(1945); // Paper - diff on change, constant used in ChunkLoadTask
+ }
+
+ public void flush(ChunkPos pos) {
+@@ -229,6 +233,6 @@ public class SectionStorage<R> implements AutoCloseable {
+
+ @Override
+ public void close() throws IOException {
+- this.simpleRegionStorage.close();
++ super.close(); // Paper - nuke I/O worker - don't call the worker
+ }
+ }
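+
+// tryRead() above keeps the old asynchronous signature but does the read
+// inline, wrapping the result (or the thrown exception) in an already
+// completed future. A generic adapter for that shape:
+
+import java.util.Optional;
+import java.util.concurrent.Callable;
+import java.util.concurrent.CompletableFuture;
+
+final class SyncAsAsync {
+    static <T> CompletableFuture<Optional<T>> readNow(final Callable<T> read) {
+        try {
+            return CompletableFuture.completedFuture(Optional.ofNullable(read.call()));
+        } catch (final Throwable thr) {
+            // callers observe the failure through the future, as before
+            return CompletableFuture.failedFuture(thr);
+        }
+    }
+}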
+diff --git a/src/main/java/net/minecraft/world/level/entity/EntityTickList.java b/src/main/java/net/minecraft/world/level/entity/EntityTickList.java
+index 74a285b8b018a9c94ccea519f1ce8b9e2ef3cb64..83a39f900551e39d5af6f17a339a386ddee4feef 100644
+--- a/src/main/java/net/minecraft/world/level/entity/EntityTickList.java
++++ b/src/main/java/net/minecraft/world/level/entity/EntityTickList.java
+@@ -9,52 +9,41 @@ import javax.annotation.Nullable;
+ import net.minecraft.world.entity.Entity;
+
+ public class EntityTickList {
+- private Int2ObjectMap<Entity> active = new Int2ObjectLinkedOpenHashMap<>();
+- private Int2ObjectMap<Entity> passive = new Int2ObjectLinkedOpenHashMap<>();
+- @Nullable
+- private Int2ObjectMap<Entity> iterated;
++ private final io.papermc.paper.util.maplist.IteratorSafeOrderedReferenceSet<Entity> entities = new io.papermc.paper.util.maplist.IteratorSafeOrderedReferenceSet<>(true); // Paper - rewrite this, always keep this updated - why would we EVER tick an entity that's not ticking?
+
+ private void ensureActiveIsNotIterated() {
+- if (this.iterated == this.active) {
+- this.passive.clear();
+-
+- for (Entry<Entity> entry : Int2ObjectMaps.fastIterable(this.active)) {
+- this.passive.put(entry.getIntKey(), entry.getValue());
+- }
+-
+- Int2ObjectMap<Entity> int2ObjectMap = this.active;
+- this.active = this.passive;
+- this.passive = int2ObjectMap;
+- }
++ // Paper - replace with better logic, do not delay removals
+ }
+
+ public void add(Entity entity) {
++ io.papermc.paper.util.TickThread.ensureTickThread("Asynchronous entity ticklist addition"); // Paper
+ this.ensureActiveIsNotIterated();
+- this.active.put(entity.getId(), entity);
++ this.entities.add(entity); // Paper - replace with better logic, do not delay removals/additions
+ }
+
+ public void remove(Entity entity) {
++ io.papermc.paper.util.TickThread.ensureTickThread("Asynchronous entity ticklist removal"); // Paper
+ this.ensureActiveIsNotIterated();
+- this.active.remove(entity.getId());
++ this.entities.remove(entity); // Paper - replace with better logic, do not delay removals/additions
+ }
+
+ public boolean contains(Entity entity) {
+- return this.active.containsKey(entity.getId());
++ return this.entities.contains(entity); // Paper - replace with better logic, do not delay removals/additions
+ }
+
+ public void forEach(Consumer<Entity> action) {
+- if (this.iterated != null) {
+- throw new UnsupportedOperationException("Only one concurrent iteration supported");
+- } else {
+- this.iterated = this.active;
+-
+- try {
+- for (Entity entity : this.active.values()) {
+- action.accept(entity);
+- }
+- } finally {
+- this.iterated = null;
++ io.papermc.paper.util.TickThread.ensureTickThread("Asynchronous entity ticklist iteration"); // Paper
++ // Paper start - replace with better logic, do not delay removals/additions
++ // To ensure nothing weird happens with dimension travelling, do not iterate over new entries...
++ // (by default, iterator() is configured to not iterate over new entries)
++ io.papermc.paper.util.maplist.IteratorSafeOrderedReferenceSet.Iterator<Entity> iterator = this.entities.iterator();
++ try {
++ while (iterator.hasNext()) {
++ action.accept(iterator.next());
+ }
++ } finally {
++ iterator.finishedIterating();
+ }
++ // Paper end - replace with better logic, do not delay removals/additions
+ }
+ }
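+
+// The rewritten tick list applies adds and removes immediately and makes
+// iteration safe against mutation from the same thread, instead of
+// double-buffering active/passive maps. A much simplified sketch of
+// snapshot-style iteration (the real IteratorSafeOrderedReferenceSet avoids
+// the per-tick copy):
+
+import java.util.ArrayList;
+import java.util.LinkedHashSet;
+import java.util.Set;
+import java.util.function.Consumer;
+
+final class TickList<E> {
+    private final Set<E> entries = new LinkedHashSet<>();
+
+    void add(final E e) { this.entries.add(e); }       // visible immediately
+    void remove(final E e) { this.entries.remove(e); } // visible immediately
+
+    void forEach(final Consumer<E> action) {
+        // iterate a snapshot: entities added during the tick are not visited
+        for (final E e : new ArrayList<>(this.entries)) {
+            if (this.entries.contains(e)) { // skip entries removed mid-tick
+                action.accept(e);
+            }
+        }
+    }
+}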
+diff --git a/src/main/java/net/minecraft/world/level/levelgen/NoiseBasedChunkGenerator.java b/src/main/java/net/minecraft/world/level/levelgen/NoiseBasedChunkGenerator.java
+index 5d15c228c044a36c67014793decb314240cf6be1..dc765b92cc90f5f370254e68bbbdfa5add7935ce 100644
+--- a/src/main/java/net/minecraft/world/level/levelgen/NoiseBasedChunkGenerator.java
++++ b/src/main/java/net/minecraft/world/level/levelgen/NoiseBasedChunkGenerator.java
+@@ -87,7 +87,7 @@ public final class NoiseBasedChunkGenerator extends ChunkGenerator {
+ return CompletableFuture.supplyAsync(Util.wrapThreadWithTaskName("init_biomes", () -> {
+ this.doCreateBiomes(blender, noiseConfig, structureAccessor, chunk);
+ return chunk;
+- }), Util.backgroundExecutor());
++ }), executor); // Paper - run with supplied executor
+ }
+
+ private void doCreateBiomes(Blender blender, RandomState noiseConfig, StructureManager structureAccessor, ChunkAccess chunk) {
+@@ -286,7 +286,7 @@ public final class NoiseBasedChunkGenerator extends ChunkGenerator {
+
+ return CompletableFuture.supplyAsync(Util.wrapThreadWithTaskName("wgen_fill_noise", () -> {
+ return this.doFill(blender, structureAccessor, noiseConfig, chunk, j, k);
+- }), Util.backgroundExecutor()).whenCompleteAsync((ichunkaccess1, throwable) -> {
++ }), executor).whenCompleteAsync((ichunkaccess1, throwable) -> { // Paper - run with supplied executor
+ Iterator iterator = set.iterator();
+
+ while (iterator.hasNext()) {
+diff --git a/src/main/java/net/minecraft/world/level/levelgen/structure/StructureCheck.java b/src/main/java/net/minecraft/world/level/levelgen/structure/StructureCheck.java
+index 609100ed7aa0b23aa5a9c6fbf6878ea320bd3a93..7068657b28a9bc175ee23f5a18defb41168f1d76 100644
+--- a/src/main/java/net/minecraft/world/level/levelgen/structure/StructureCheck.java
++++ b/src/main/java/net/minecraft/world/level/levelgen/structure/StructureCheck.java
+@@ -47,8 +47,101 @@ public class StructureCheck {
+ private final BiomeSource biomeSource;
+ private final long seed;
+ private final DataFixer fixerUpper;
+- private final Long2ObjectMap<Object2IntMap<Structure>> loadedChunks = new Long2ObjectOpenHashMap<>();
+- private final Map<Structure, Long2BooleanMap> featureChecks = new HashMap<>();
++ // Paper start - rewrite chunk system - synchronise this class
++ // additionally, make sure to purge entries from the maps so it does not leak memory
++ private static final int CHUNK_TOTAL_LIMIT = 50 * (2 * 100 + 1) * (2 * 100 + 1); // cache 50 structure lookups
++ private static final int PER_FEATURE_CHECK_LIMIT = 50 * (2 * 100 + 1) * (2 * 100 + 1); // cache 50 structure lookups
++
++ private final SynchronisedLong2ObjectMap<Object2IntMap<Structure>> loadedChunksSafe = new SynchronisedLong2ObjectMap<>(CHUNK_TOTAL_LIMIT);
++ private final java.util.concurrent.ConcurrentHashMap<Structure, SynchronisedLong2BooleanMap> featureChecksSafe = new java.util.concurrent.ConcurrentHashMap<>();
++
++ private static final class SynchronisedLong2ObjectMap<V> {
++ private final it.unimi.dsi.fastutil.longs.Long2ObjectLinkedOpenHashMap<V> map = new it.unimi.dsi.fastutil.longs.Long2ObjectLinkedOpenHashMap<>();
++ private final int limit;
++
++ public SynchronisedLong2ObjectMap(final int limit) {
++ this.limit = limit;
++ }
++
++ // must hold lock on map
++ private void purgeEntries() {
++ while (this.map.size() > this.limit) {
++ this.map.removeLast();
++ }
++ }
++
++ public V get(final long key) {
++ synchronized (this.map) {
++ return this.map.getAndMoveToFirst(key);
++ }
++ }
++
++ public V put(final long key, final V value) {
++ synchronized (this.map) {
++ final V ret = this.map.putAndMoveToFirst(key, value);
++ this.purgeEntries();
++ return ret;
++ }
++ }
++
++ public V compute(final long key, final java.util.function.BiFunction<? super Long, ? super V, ? extends V> remappingFunction) {
++ synchronized (this.map) {
++ // first, compute the value - if one is added, it will be at the last entry
++ this.map.compute(key, remappingFunction);
++ // move the entry to first, just in case it was added at last
++ final V ret = this.map.getAndMoveToFirst(key);
++ // now purge the last entries
++ this.purgeEntries();
++
++ return ret;
++ }
++ }
++ }
++
++ private static final class SynchronisedLong2BooleanMap {
++ private final it.unimi.dsi.fastutil.longs.Long2BooleanLinkedOpenHashMap map = new it.unimi.dsi.fastutil.longs.Long2BooleanLinkedOpenHashMap();
++ private final int limit;
++
++ public SynchronisedLong2BooleanMap(final int limit) {
++ this.limit = limit;
++ }
++
++ // must hold lock on map
++ private void purgeEntries() {
++ while (this.map.size() > this.limit) {
++ this.map.removeLastBoolean();
++ }
++ }
++
++ public boolean remove(final long key) {
++ synchronized (this.map) {
++ return this.map.remove(key);
++ }
++ }
++
++ // note: the ifAbsent computation deliberately runs outside the lock below, so it may
++ // run more than once for the same key under contention; the first inserted result wins
++ public boolean getOrCompute(final long key, final it.unimi.dsi.fastutil.longs.Long2BooleanFunction ifAbsent) {
++ synchronized (this.map) {
++ if (this.map.containsKey(key)) {
++ return this.map.getAndMoveToFirst(key);
++ }
++ }
++
++ final boolean put = ifAbsent.get(key);
++
++ synchronized (this.map) {
++ if (this.map.containsKey(key)) {
++ return this.map.getAndMoveToFirst(key);
++ }
++ this.map.putAndMoveToFirst(key, put);
++
++ this.purgeEntries();
++
++ return put;
++ }
++ }
++ }
++ // Paper end - rewrite chunk system - synchronise this class
+
+ public StructureCheck(
+ ChunkScanAccess chunkIoWorker,
+@@ -90,7 +183,7 @@ public class StructureCheck {
+
+ public StructureCheckResult checkStart(ChunkPos pos, Structure type, StructurePlacement placement, boolean skipReferencedStructures) {
+ long l = pos.toLong();
+- Object2IntMap<Structure> object2IntMap = this.loadedChunks.get(l);
++ Object2IntMap<Structure> object2IntMap = this.loadedChunksSafe.get(l); // Paper - rewrite chunk system - synchronise this class
+ if (object2IntMap != null) {
+ return this.checkStructureInfo(object2IntMap, type, skipReferencedStructures);
+ } else {
+@@ -100,9 +193,9 @@ public class StructureCheck {
+ } else if (!placement.applyAdditionalChunkRestrictions(pos.x, pos.z, this.seed, this.getSaltOverride(type))) { // Paper - add missing structure seed configs
+ return StructureCheckResult.START_NOT_PRESENT;
+ } else {
+- boolean bl = this.featureChecks
+- .computeIfAbsent(type, structure2 -> new Long2BooleanOpenHashMap())
+- .computeIfAbsent(l, chunkPos -> this.canCreateStructure(pos, type));
++ boolean bl = this.featureChecksSafe // Paper - rewrite chunk system - synchronise this class
++ .computeIfAbsent(type, structure2 -> new SynchronisedLong2BooleanMap(PER_FEATURE_CHECK_LIMIT)) // Paper - rewrite chunk system - synchronise this class
++ .getOrCompute(l, chunkPos -> this.canCreateStructure(pos, type)); // Paper - rewrite chunk system - synchronise this class
+ return !bl ? StructureCheckResult.START_NOT_PRESENT : StructureCheckResult.CHUNK_LOAD_NEEDED;
+ }
+ }
+@@ -228,15 +321,26 @@ public class StructureCheck {
+ }
+
+ private void storeFullResults(long pos, Object2IntMap<Structure> referencesByStructure) {
+- this.loadedChunks.put(pos, deduplicateEmptyMap(referencesByStructure));
+- this.featureChecks.values().forEach(generationPossibilityByChunkPos -> generationPossibilityByChunkPos.remove(pos));
++ // Paper start - rewrite chunk system - synchronise this class
++ this.loadedChunksSafe.put(pos, deduplicateEmptyMap(referencesByStructure));
++ // once we insert into loadedChunks, we do not need to be careful about removing every
++ // entry from this map: everything that checks this map consults loadedChunks first,
++ // so one way or another the race is benign
++ for (SynchronisedLong2BooleanMap value : this.featureChecksSafe.values()) {
++ value.remove(pos);
++ }
++ // Paper end - rewrite chunk system - synchronise this class
+ }
+
+ public void incrementReference(ChunkPos pos, Structure structure) {
+- this.loadedChunks.compute(pos.toLong(), (posx, referencesByStructure) -> {
+- if (referencesByStructure == null || referencesByStructure.isEmpty()) {
++ this.loadedChunksSafe.compute(pos.toLong(), (posx, referencesByStructure) -> { // Paper start - rewrite chunk system - synchronise this class
++ // make this COW so that we do not mutate state that may be currently in use
++ if (referencesByStructure == null) {
+ referencesByStructure = new Object2IntOpenHashMap<>();
++ } else {
++ referencesByStructure = referencesByStructure instanceof Object2IntOpenHashMap<Structure> fastClone ? fastClone.clone() : new Object2IntOpenHashMap<>(referencesByStructure);
+ }
++ // Paper end - rewrite chunk system - synchronise this class
+
+ referencesByStructure.computeInt(structure, (feature, references) -> references == null ? 1 : references + 1);
+ return referencesByStructure;
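+
+// getOrCompute() above intentionally runs the expensive structure check
+// outside the monitor: consult the cache under the lock, compute unlocked,
+// then re-check and insert under the lock. Two threads may both compute the
+// same key, but lookups never block behind a slow computation and the first
+// inserted result wins. The same shape over a plain HashMap:
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.function.LongFunction;
+
+final class ComputeOutsideLock {
+    private final Map<Long, Boolean> map = new HashMap<>();
+
+    boolean getOrCompute(final long key, final LongFunction<Boolean> ifAbsent) {
+        synchronized (this.map) {
+            final Boolean cached = this.map.get(key);
+            if (cached != null) return cached;
+        }
+        // may run concurrently on several threads for the same key
+        final boolean computed = ifAbsent.apply(key);
+        synchronized (this.map) {
+            final Boolean raced = this.map.putIfAbsent(key, computed);
+            return raced != null ? raced : computed; // first insert wins
+        }
+    }
+}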
+diff --git a/src/main/java/net/minecraft/world/ticks/LevelChunkTicks.java b/src/main/java/net/minecraft/world/ticks/LevelChunkTicks.java
+index 47c2b2da9799690291396effb9e1b06d71efc6fd..2cdd18f724296f10cd4a522d1e8196723d39cf45 100644
+--- a/src/main/java/net/minecraft/world/ticks/LevelChunkTicks.java
++++ b/src/main/java/net/minecraft/world/ticks/LevelChunkTicks.java
+@@ -26,6 +26,19 @@ public class LevelChunkTicks<T> implements SerializableTickContainer<T>, TickCon
+ @Nullable
+ private BiConsumer<LevelChunkTicks<T>, ScheduledTick<T>> onTickAdded;
+
++ // Paper start - add dirty flag
++ private boolean dirty;
++ private long lastSaved = Long.MIN_VALUE;
++
++ public boolean isDirty(final long tick) {
++ return this.dirty || (!this.tickQueue.isEmpty() && tick != this.lastSaved);
++ }
++
++ public void clearDirty() {
++ this.dirty = false;
++ }
++ // Paper end - add dirty flag
++
+ public LevelChunkTicks() {
+ }
+
+@@ -50,6 +63,7 @@ public class LevelChunkTicks<T> implements SerializableTickContainer<T>, TickCon
+ public ScheduledTick<T> poll() {
+ ScheduledTick<T> scheduledTick = this.tickQueue.poll();
+ if (scheduledTick != null) {
++ this.dirty = true; // Paper - add dirty flag
+ this.ticksPerPosition.remove(scheduledTick);
+ }
+
+@@ -59,6 +73,7 @@ public class LevelChunkTicks<T> implements SerializableTickContainer<T>, TickCon
+ @Override
+ public void schedule(ScheduledTick<T> orderedTick) {
+ if (this.ticksPerPosition.add(orderedTick)) {
++ this.dirty = true; // Paper - add dirty flag
+ this.scheduleUnchecked(orderedTick);
+ }
+ }
+@@ -81,7 +96,7 @@ public class LevelChunkTicks<T> implements SerializableTickContainer<T>, TickCon
+ while (iterator.hasNext()) {
+ ScheduledTick<T> scheduledTick = iterator.next();
+ if (predicate.test(scheduledTick)) {
+- iterator.remove();
++ iterator.remove(); this.dirty = true; // Paper - add dirty flag
+ this.ticksPerPosition.remove(scheduledTick);
+ }
+ }
+@@ -98,6 +113,7 @@ public class LevelChunkTicks<T> implements SerializableTickContainer<T>, TickCon
+
+ @Override
+ public ListTag save(long l, Function<T, String> function) {
++ this.lastSaved = l; // Paper - add dirty system to level ticks
+ ListTag listTag = new ListTag();
+ if (this.pendingTicks != null) {
+ for (SavedTick<T> savedTick : this.pendingTicks) {
+@@ -114,6 +130,11 @@ public class LevelChunkTicks<T> implements SerializableTickContainer<T>, TickCon
+
+ public void unpack(long time) {
+ if (this.pendingTicks != null) {
++ // Paper start - add dirty system to level chunk ticks
++ if (this.tickQueue.isEmpty()) {
++ this.lastSaved = time;
++ }
++ // Paper end - add dirty system to level chunk ticks
+ int i = -this.pendingTicks.size();
+
+ for (SavedTick<T> savedTick : this.pendingTicks) {
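+
+// The dirty flag above lets the save path skip serializing unchanged tick
+// containers: schedule/poll/remove set dirty, save(l) records lastSaved, and
+// isDirty(tick) also treats a non-empty queue saved at a different tick as
+// dirty, since pending ticks are serialized relative to the current time.
+// A sketch of the same bookkeeping:
+
+import java.util.ArrayDeque;
+import java.util.Deque;
+
+final class DirtyTickQueue<T> {
+    private final Deque<T> queue = new ArrayDeque<>();
+    private boolean dirty;
+    private long lastSaved = Long.MIN_VALUE;
+
+    void schedule(final T tick) {
+        this.queue.add(tick);
+        this.dirty = true; // mutation -> must be saved again
+    }
+
+    boolean isDirty(final long gameTime) {
+        return this.dirty || (!this.queue.isEmpty() && gameTime != this.lastSaved);
+    }
+
+    String save(final long gameTime) {
+        this.lastSaved = gameTime; // a save at this tick is now up to date
+        return this.queue.toString(); // stand-in for real serialization
+    }
+
+    void clearDirty() {
+        this.dirty = false;
+    }
+}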
+diff --git a/src/main/java/org/bukkit/craftbukkit/CraftChunk.java b/src/main/java/org/bukkit/craftbukkit/CraftChunk.java
+index 7dae8d91b74cc7df0745f0c121e3bea09b8d0b6d..1e2530c9e5212b6d2bdbc94817beddb4247dac73 100644
+--- a/src/main/java/org/bukkit/craftbukkit/CraftChunk.java
++++ b/src/main/java/org/bukkit/craftbukkit/CraftChunk.java
+@@ -115,7 +115,7 @@ public class CraftChunk implements Chunk {
+
+ @Override
+ public boolean isEntitiesLoaded() {
+- return this.getCraftWorld().getHandle().entityManager.areEntitiesLoaded(ChunkPos.asLong(this.x, this.z));
++ return this.getCraftWorld().getHandle().areEntitiesLoaded(io.papermc.paper.util.CoordinateUtils.getChunkKey(this.x, this.z)); // Paper - rewrite chunk system
+ }
+
+ @Override
+@@ -124,51 +124,7 @@ public class CraftChunk implements Chunk {
+ this.getWorld().getChunkAt(this.x, this.z); // Transient load for this tick
+ }
+
+- PersistentEntitySectionManager<net.minecraft.world.entity.Entity> entityManager = this.getCraftWorld().getHandle().entityManager;
+- long pair = ChunkPos.asLong(this.x, this.z);
+-
+- if (entityManager.areEntitiesLoaded(pair)) {
+- return entityManager.getEntities(new ChunkPos(this.x, this.z)).stream()
+- .map(net.minecraft.world.entity.Entity::getBukkitEntity)
+- .filter(Objects::nonNull).toArray(Entity[]::new);
+- }
+-
+- entityManager.ensureChunkQueuedForLoad(pair); // Start entity loading
+-
+- // SPIGOT-6772: Use entity mailbox and re-schedule entities if they get unloaded
+- ProcessorMailbox<Runnable> mailbox = ((EntityStorage) entityManager.permanentStorage).entityDeserializerQueue;
+- BooleanSupplier supplier = () -> {
+- // only execute inbox if our entities are not present
+- if (entityManager.areEntitiesLoaded(pair)) {
+- return true;
+- }
+-
+- if (!entityManager.isPending(pair)) {
+- // Our entities got unloaded, this should normally not happen.
+- entityManager.ensureChunkQueuedForLoad(pair); // Re-start entity loading
+- }
+-
+- // tick loading inbox, which loads the created entities to the world
+- // (if present)
+- entityManager.tick();
+- // check if our entities are loaded
+- return entityManager.areEntitiesLoaded(pair);
+- };
+-
+- // now we wait until the entities are loaded,
+- // the converting from NBT to entity object is done on the main Thread which is why we wait
+- while (!supplier.getAsBoolean()) {
+- if (mailbox.size() != 0) {
+- mailbox.run();
+- } else {
+- Thread.yield();
+- LockSupport.parkNanos("waiting for entity loading", 100000L);
+- }
+- }
+-
+- return entityManager.getEntities(new ChunkPos(this.x, this.z)).stream()
+- .map(net.minecraft.world.entity.Entity::getBukkitEntity)
+- .filter(Objects::nonNull).toArray(Entity[]::new);
++ return this.getCraftWorld().getHandle().getChunkEntities(this.x, this.z); // Paper - rewrite chunk system
+ }
+
+ @Override
+diff --git a/src/main/java/org/bukkit/craftbukkit/CraftServer.java b/src/main/java/org/bukkit/craftbukkit/CraftServer.java
+index 927c3110a64cfab665137a6f0c8b72075168f2bf..52a8eaa84a22c5cfc30a4e8a4c15d41bd58caef6 100644
+--- a/src/main/java/org/bukkit/craftbukkit/CraftServer.java
++++ b/src/main/java/org/bukkit/craftbukkit/CraftServer.java
+@@ -1402,7 +1402,6 @@ public final class CraftServer implements Server {
+ // Paper - Put world into worldlist before initing the world; move up
+
+ this.getServer().prepareLevels(internal.getChunkSource().chunkMap.progressListener, internal);
+- internal.entityManager.tick(); // SPIGOT-6526: Load pending entities so they are available to the API
+
+ this.pluginManager.callEvent(new WorldLoadEvent(internal.getWorld()));
+ return internal.getWorld();
+@@ -1447,7 +1446,7 @@ public final class CraftServer implements Server {
+ }
+
+ handle.getChunkSource().close(save);
+- handle.entityManager.close(save); // SPIGOT-6722: close entityManager
++ // handle.entityManager.close(save); // SPIGOT-6722: close entityManager // Paper - rewrite chunk system
+ handle.convertable.close();
+ } catch (Exception ex) {
+ this.getLogger().log(Level.SEVERE, null, ex);
+@@ -2483,7 +2482,7 @@ public final class CraftServer implements Server {
+
+ @Override
+ public boolean isPrimaryThread() {
+- return Thread.currentThread().equals(this.console.serverThread) || this.console.hasStopped() || !org.spigotmc.AsyncCatcher.enabled; // All bets are off if we have shut down (e.g. due to watchdog)
++ return io.papermc.paper.util.TickThread.isTickThread(); // Paper - rewrite chunk system
+ }
+
+ // Paper start - Adventure
+diff --git a/src/main/java/org/bukkit/craftbukkit/CraftWorld.java b/src/main/java/org/bukkit/craftbukkit/CraftWorld.java
+index 4b6a04e47f5d4c071607516519098fab317dcf12..01fc74e6cc8ea8808b821583afb26309587dc003 100644
+--- a/src/main/java/org/bukkit/craftbukkit/CraftWorld.java
++++ b/src/main/java/org/bukkit/craftbukkit/CraftWorld.java
+@@ -518,10 +518,14 @@ public class CraftWorld extends CraftRegionAccessor implements World {
+ ChunkHolder playerChunk = this.world.getChunkSource().chunkMap.getVisibleChunkIfPresent(ChunkPos.asLong(x, z));
+ if (playerChunk == null) return false;
+
+- playerChunk.getTickingChunkFuture().thenAccept(either -> {
+- either.ifSuccess(chunk -> {
++ // Paper start - rewrite player chunk loader
++ net.minecraft.world.level.chunk.LevelChunk chunk = playerChunk.getSendingChunk();
++ if (chunk == null) {
++ return false;
++ }
++ // Paper end - rewrite player chunk loader
+ List<ServerPlayer> playersInRange = playerChunk.playerProvider.getPlayers(playerChunk.getPos(), false);
+- if (playersInRange.isEmpty()) return;
++ if (playersInRange.isEmpty()) return true; // Paper - rewrite player chunk loader
+
+ ClientboundLevelChunkWithLightPacket refreshPacket = new ClientboundLevelChunkWithLightPacket(chunk, this.world.getLightEngine(), null, null);
+ for (ServerPlayer player : playersInRange) {
+@@ -529,8 +533,7 @@ public class CraftWorld extends CraftRegionAccessor implements World {
+
+ player.connection.send(refreshPacket);
+ }
+- });
+- });
++ // Paper - rewrite player chunk loader
+
+ return true;
+ }
+@@ -609,20 +612,7 @@ public class CraftWorld extends CraftRegionAccessor implements World {
+ @Override
+ public Collection<Plugin> getPluginChunkTickets(int x, int z) {
+ DistanceManager chunkDistanceManager = this.world.getChunkSource().chunkMap.distanceManager;
+- SortedArraySet<Ticket<?>> tickets = chunkDistanceManager.tickets.get(ChunkPos.asLong(x, z));
+-
+- if (tickets == null) {
+- return Collections.emptyList();
+- }
+-
+- ImmutableList.Builder<Plugin> ret = ImmutableList.builder();
+- for (Ticket<?> ticket : tickets) {
+- if (ticket.getType() == TicketType.PLUGIN_TICKET) {
+- ret.add((Plugin) ticket.key);
+- }
+- }
+-
+- return ret.build();
++ return chunkDistanceManager.getChunkHolderManager().getPluginChunkTickets(x, z); // Paper - rewrite chunk system
+ }
+
+ @Override
+@@ -630,7 +620,7 @@ public class CraftWorld extends CraftRegionAccessor implements World {
+ Map<Plugin, ImmutableList.Builder<Chunk>> ret = new HashMap<>();
+ DistanceManager chunkDistanceManager = this.world.getChunkSource().chunkMap.distanceManager;
+
+- for (Long2ObjectMap.Entry<SortedArraySet<Ticket<?>>> chunkTickets : chunkDistanceManager.tickets.long2ObjectEntrySet()) {
++ for (Long2ObjectMap.Entry<SortedArraySet<Ticket<?>>> chunkTickets : chunkDistanceManager.getChunkHolderManager().getTicketsCopy().long2ObjectEntrySet()) { // Paper - rewrite chunk system
+ long chunkKey = chunkTickets.getLongKey();
+ SortedArraySet<Ticket<?>> tickets = chunkTickets.getValue();
+
+@@ -1327,12 +1317,12 @@ public class CraftWorld extends CraftRegionAccessor implements World {
+
+ @Override
+ public int getViewDistance() {
+- return this.world.getChunkSource().chunkMap.serverViewDistance;
++ return this.getHandle().playerChunkLoader.getAPIViewDistance(); // Paper - replace player chunk loader
+ }
+
+ @Override
+ public int getSimulationDistance() {
+- return this.world.getChunkSource().chunkMap.getDistanceManager().simulationDistance;
++ return this.getHandle().playerChunkLoader.getAPITickDistance(); // Paper - replace player chunk loader
+ }
+
+ public BlockMetadataStore getBlockMetadata() {
+@@ -2495,17 +2485,20 @@ public class CraftWorld extends CraftRegionAccessor implements World {
+
+ @Override
+ public void setSimulationDistance(final int simulationDistance) {
+- throw new UnsupportedOperationException("Not implemented yet");
++ if (simulationDistance < 2 || simulationDistance > 32) {
++ throw new IllegalArgumentException("Simulation distance " + simulationDistance + " is out of range of [2, 32]");
++ }
++ this.getHandle().chunkSource.chunkMap.setTickViewDistance(simulationDistance);
+ }
+
+ @Override
+ public int getSendViewDistance() {
+- return this.getViewDistance();
++ return this.getHandle().playerChunkLoader.getAPISendViewDistance(); // Paper - replace player chunk loader
+ }
+
+ @Override
+ public void setSendViewDistance(final int viewDistance) {
+- throw new UnsupportedOperationException("Not implemented yet");
++ this.getHandle().chunkSource.chunkMap.setSendViewDistance(viewDistance); // Paper - replace player chunk loader
+ }
+
+ // Paper start - implement pointers
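+
+Two patterns in the CraftWorld hunks above are worth calling out. First,
+getPluginChunkTickets() now iterates getTicketsCopy() rather than the live
+ticket map: with the rewritten scheduler, tickets may be mutated off the
+caller's thread, so the API iterates a snapshot taken under the chunk
+system's lock. A minimal sketch of that pattern (types simplified, names
+illustrative):
+
+    import java.util.HashMap;
+    import java.util.Map;
+
+    public class TicketSnapshotSketch {
+        private final Map<Long, String> tickets = new HashMap<>();
+        private final Object ticketLock = new Object();
+
+        // Copy under the lock; callers may then iterate freely while
+        // other threads keep mutating the live map.
+        public Map<Long, String> getTicketsCopy() {
+            synchronized (this.ticketLock) {
+                return new HashMap<>(this.tickets);
+            }
+        }
+    }
+
+Second, refreshChunk() no longer chains off getTickingChunkFuture(): it
+asks for the sending chunk directly and returns false when none is ready,
+so callers now learn immediately whether a refresh could be sent instead
+of always being told true.
+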
+diff --git a/src/main/java/org/bukkit/craftbukkit/entity/CraftPlayer.java b/src/main/java/org/bukkit/craftbukkit/entity/CraftPlayer.java
+index b60db7df3cef33a4a6a9804104759ecaa3ae330a..6f8999df04e6ad4d4d52e87b05a187f586d60c74 100644
+--- a/src/main/java/org/bukkit/craftbukkit/entity/CraftPlayer.java
++++ b/src/main/java/org/bukkit/craftbukkit/entity/CraftPlayer.java
+@@ -3454,31 +3454,31 @@ public class CraftPlayer extends CraftHumanEntity implements Player {
+
+ @Override
+ public int getViewDistance() {
+- return io.papermc.paper.chunk.system.ChunkSystem.getLoadViewDistance(this.getHandle());
++ return io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.getAPIViewDistance(this);
+ }
+
+ @Override
+ public void setViewDistance(final int viewDistance) {
+- throw new UnsupportedOperationException("Not implemented yet");
++ this.getHandle().setLoadViewDistance(viewDistance < 0 ? viewDistance : viewDistance + 1);
+ }
+
+ @Override
+ public int getSimulationDistance() {
+- return io.papermc.paper.chunk.system.ChunkSystem.getTickViewDistance(this.getHandle());
++ return io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.getAPITickViewDistance(this);
+ }
+
+ @Override
+ public void setSimulationDistance(final int simulationDistance) {
+- throw new UnsupportedOperationException("Not implemented yet");
++ this.getHandle().setTickViewDistance(simulationDistance);
+ }
+
+ @Override
+ public int getSendViewDistance() {
+- return io.papermc.paper.chunk.system.ChunkSystem.getSendViewDistance(this.getHandle());
++ return io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.getAPISendViewDistance(this);
+ }
+
+ @Override
+ public void setSendViewDistance(final int viewDistance) {
+- throw new UnsupportedOperationException("Not implemented yet");
++ this.getHandle().setSendViewDistance(viewDistance);
+ }
+ }
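+
+Note the asymmetry in the CraftPlayer hunks above: setViewDistance()
+passes viewDistance + 1 to setLoadViewDistance(), while negative values
+(meaning "use the world default") pass through untouched. The API's
+"view distance" is the radius of chunks sent to the client; the rewritten
+player chunk loader keeps one extra ring loaded but unsent around that,
+hence the offset. A sketch of the implied conversion, with hypothetical
+names:
+
+    final class ViewDistanceSketch {
+        // API view distance -> internal load distance.
+        static int apiToLoad(int api) {
+            return api < 0 ? api : api + 1;
+        }
+
+        // Internal load distance -> API view distance, the inverse shape
+        // assumed behind the getAPIViewDistance() accessors.
+        static int loadToApi(int load) {
+            return load < 0 ? load : load - 1;
+        }
+    }
+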
+diff --git a/src/main/java/org/bukkit/craftbukkit/generator/CustomChunkGenerator.java b/src/main/java/org/bukkit/craftbukkit/generator/CustomChunkGenerator.java
+index b65710b648e31ab74204b5abd9397d9e6e26dac4..c77f722131e0e40e9de29bf8d42f9bc5d8fa2f7d 100644
+--- a/src/main/java/org/bukkit/craftbukkit/generator/CustomChunkGenerator.java
++++ b/src/main/java/org/bukkit/craftbukkit/generator/CustomChunkGenerator.java
+@@ -264,7 +264,7 @@ public class CustomChunkGenerator extends InternalChunkGenerator {
+ return ichunkaccess1;
+ };
+
+- return future == null ? CompletableFuture.supplyAsync(() -> function.apply(chunk), net.minecraft.Util.backgroundExecutor()) : future.thenApply(function);
++ return future == null ? CompletableFuture.supplyAsync(() -> function.apply(chunk), executor) : future.thenApply(function); // Paper - run with supplied executor
+ }
+
+ @Override
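+
+The CustomChunkGenerator change above is one line but load-bearing:
+custom generation steps now complete on the executor the chunk scheduler
+supplies rather than on the global background pool, presumably so the
+rewritten scheduler keeps control of where generation runs and can order
+it within its per-chunk dependency chains (the hunk's comment says only
+"run with supplied executor"). A self-contained illustration of the
+difference:
+
+    import java.util.concurrent.CompletableFuture;
+    import java.util.concurrent.Executor;
+    import java.util.concurrent.Executors;
+
+    public class SuppliedExecutorSketch {
+        public static void main(String[] args) {
+            // Stands in for the executor supplied by the chunk scheduler.
+            Executor chunkWorker = Executors.newSingleThreadExecutor(r -> {
+                Thread t = new Thread(r, "chunk-worker");
+                t.setDaemon(true);
+                return t;
+            });
+
+            // Before: supplyAsync(task, Util.backgroundExecutor()) ran on
+            // a global pool. After: the scheduler's own executor.
+            CompletableFuture<String> f = CompletableFuture.supplyAsync(
+                () -> Thread.currentThread().getName(), chunkWorker);
+            System.out.println("generation ran on: " + f.join());
+        }
+    }
+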
+diff --git a/src/main/java/org/bukkit/craftbukkit/util/DelegatedGeneratorAccess.java b/src/main/java/org/bukkit/craftbukkit/util/DelegatedGeneratorAccess.java
+index cd7f1309cf01a5f01a28aded03a36fe15adb1756..41a291d42667c38d3e5bbe47236772761e85929b 100644
+--- a/src/main/java/org/bukkit/craftbukkit/util/DelegatedGeneratorAccess.java
++++ b/src/main/java/org/bukkit/craftbukkit/util/DelegatedGeneratorAccess.java
+@@ -815,19 +815,39 @@ public abstract class DelegatedGeneratorAccess implements WorldGenLevel {
+ @Nullable
+ @Override
+ public BlockState getBlockStateIfLoaded(final BlockPos blockposition) {
+- return null;
++ return this.handle.getBlockStateIfLoaded(blockposition);
+ }
+
+ @Nullable
+ @Override
+ public FluidState getFluidIfLoaded(final BlockPos blockposition) {
+- return null;
++ return this.handle.getFluidIfLoaded(blockposition);
+ }
+
+ @Nullable
+ @Override
+ public ChunkAccess getChunkIfLoadedImmediately(final int x, final int z) {
+- return null;
++ return this.handle.getChunkIfLoadedImmediately(x, z);
++ }
++
++ @Override
++ public void getHardCollidingEntities(final Entity except, final AABB box, final Predicate<? super Entity> predicate, final List<Entity> into) {
++ this.handle.getHardCollidingEntities(except, box, predicate, into);
++ }
++
++ @Override
++ public List<Entity> getHardCollidingEntities(final Entity except, final AABB box, final Predicate<? super Entity> predicate) {
++ return this.handle.getHardCollidingEntities(except, box, predicate);
++ }
++
++ @Override
++ public void getEntities(final Entity except, final AABB box, final Predicate<? super Entity> predicate, final List<Entity> into) {
++ this.handle.getEntities(except, box, predicate, into);
++ }
++
++ @Override
++ public <T> void getEntitiesByClass(final Class<? extends T> clazz, final Entity except, final AABB box, final List<? super T> into, final Predicate<? super T> predicate) {
++ this.handle.getEntitiesByClass(clazz, except, box, into, predicate);
+ }
+ // Paper end
+ }
+diff --git a/src/main/java/org/bukkit/craftbukkit/util/DummyGeneratorAccess.java b/src/main/java/org/bukkit/craftbukkit/util/DummyGeneratorAccess.java
+index e8a73d34dbb372581b03018aade170a31c266099..210f454a840aa5564f7cbf33b83d31aa74814c84 100644
+--- a/src/main/java/org/bukkit/craftbukkit/util/DummyGeneratorAccess.java
++++ b/src/main/java/org/bukkit/craftbukkit/util/DummyGeneratorAccess.java
+@@ -268,4 +268,19 @@ public class DummyGeneratorAccess implements WorldGenLevel {
+ @Override
+ public void scheduleTick(BlockPos pos, Fluid fluid, int delay, net.minecraft.world.ticks.TickPriority priority) {}
+ // Paper end - add more methods
++ // Paper start
++ @Override
++ public List<Entity> getHardCollidingEntities(Entity except, AABB box, Predicate<? super Entity> predicate) {
++ return java.util.Collections.emptyList();
++ }
++
++ @Override
++ public void getEntities(Entity except, AABB box, Predicate<? super Entity> predicate, List<Entity> into) {}
++
++ @Override
++ public void getHardCollidingEntities(Entity except, AABB box, Predicate<? super Entity> predicate, List<Entity> into) {}
++
++ @Override
++ public <T> void getEntitiesByClass(Class<? extends T> clazz, Entity except, AABB box, List<? super T> into, Predicate<? super T> predicate) {}
++ // Paper end
+ }
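+
+The two generator-access hunks above form a pair. DelegatedGeneratorAccess
+previously stubbed getBlockStateIfLoaded(), getFluidIfLoaded() and
+getChunkIfLoadedImmediately() to return null, silently reporting every
+position as unloaded; it now forwards to the wrapped handle like the rest
+of the class. DummyGeneratorAccess, by contrast, keeps deliberate no-op
+answers because it backs contexts where no world should be touched at
+all. A self-contained sketch of the contract difference (names
+illustrative):
+
+    import java.util.Optional;
+
+    public class AccessSketch {
+        interface Access { Optional<String> getIfLoaded(long pos); }
+
+        // A pure delegate must forward "if loaded" queries; stubbing them
+        // lies to callers about what the real handle has loaded.
+        record Delegate(Access handle) implements Access {
+            public Optional<String> getIfLoaded(long pos) {
+                return this.handle.getIfLoaded(pos);
+            }
+        }
+
+        // A dummy may legitimately answer "nothing", unconditionally.
+        record Dummy() implements Access {
+            public Optional<String> getIfLoaded(long pos) {
+                return Optional.empty();
+            }
+        }
+
+        public static void main(String[] args) {
+            Access real = pos -> Optional.of("stone@" + pos);
+            System.out.println(new Delegate(real).getIfLoaded(0L)); // Optional[stone@0]
+            System.out.println(new Dummy().getIfLoaded(0L));        // Optional.empty
+        }
+    }
+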
+diff --git a/src/main/java/org/spigotmc/AsyncCatcher.java b/src/main/java/org/spigotmc/AsyncCatcher.java
+index e8e3cc48cf1c58bd8151d1f28df28781859cd0e3..2e074c16dab1ead47914070329da0398c3274048 100644
+--- a/src/main/java/org/spigotmc/AsyncCatcher.java
++++ b/src/main/java/org/spigotmc/AsyncCatcher.java
+@@ -9,7 +9,7 @@ public class AsyncCatcher
+
+ public static void catchOp(String reason)
+ {
+- if ( (AsyncCatcher.enabled || io.papermc.paper.util.TickThread.STRICT_THREAD_CHECKS) && Thread.currentThread() != MinecraftServer.getServer().serverThread ) // Paper
++ if (!(io.papermc.paper.util.TickThread.isTickThread())) // Paper
+ {
+ MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable()); // Paper
+ throw new IllegalStateException( "Asynchronous " + reason + "!" );
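+
+AsyncCatcher here, isPrimaryThread() in the CraftServer hunk above, and
+the WatchdogThread change below all hang off the same primitive: thread
+identity is now carried by the thread's class instead of an equality
+check against the single serverThread field, so any number of tick
+threads pass the check, and WatchdogThread can extend TickThread,
+presumably so the dump and shutdown work it performs on a hung server
+passes the same checks. A minimal, self-contained sketch of the pattern
+(illustrative names; Paper's real TickThread carries more state):
+
+    public class TickThreadSketch extends Thread {
+        public TickThreadSketch(Runnable task, String name) {
+            super(task, name);
+        }
+
+        // Replaces "Thread.currentThread() == server.serverThread":
+        // subclasses (e.g. a watchdog) pass the instanceof check too.
+        public static boolean isTickThread() {
+            return Thread.currentThread() instanceof TickThreadSketch;
+        }
+
+        public static void main(String[] args) throws InterruptedException {
+            Thread tick = new TickThreadSketch(
+                () -> System.out.println("tick thread passes: " + isTickThread()),
+                "Tick Thread #0");
+            tick.start();
+            tick.join();
+            System.out.println("main thread passes: " + isTickThread()); // false
+        }
+    }
+
+One behavioural consequence is visible in the CraftServer hunk: the old
+isPrimaryThread() returned true once the server had stopped ("all bets
+are off"); the TickThread check drops that escape hatch.
+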
+diff --git a/src/main/java/org/spigotmc/WatchdogThread.java b/src/main/java/org/spigotmc/WatchdogThread.java
+index ad282d34919716b75acd10426cd071da9d064a51..9e5d08f57aa448552d100ca892c211d44441ef68 100644
+--- a/src/main/java/org/spigotmc/WatchdogThread.java
++++ b/src/main/java/org/spigotmc/WatchdogThread.java
+@@ -8,7 +8,7 @@ import java.util.logging.Logger;
+ import net.minecraft.server.MinecraftServer;
+ import org.bukkit.Bukkit;
+
+-public class WatchdogThread extends Thread
++public final class WatchdogThread extends io.papermc.paper.util.TickThread // Paper - rewrite chunk system
+ {
+
+ private static WatchdogThread instance;
+@@ -115,6 +115,7 @@ public class WatchdogThread extends Thread
+ // Paper end - Different message for short timeout
+ log.log( Level.SEVERE, "------------------------------" );
+ log.log( Level.SEVERE, "Server thread dump (Look for plugins here before reporting to Paper!):" ); // Paper
++ io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler.dumpAllChunkLoadInfo(isLongTimeout); // Paper - rewrite chunk system
+ WatchdogThread.dumpThread( ManagementFactory.getThreadMXBean().getThreadInfo( MinecraftServer.getServer().serverThread.getId(), Integer.MAX_VALUE ), log );
+ log.log( Level.SEVERE, "------------------------------" );
+ //