When I run a loop across several Web Workers, the loop shares its counter variable between threads, even though those variables should be thread-local. I can't work out why.
The offending loop is in the `run` function, in Rust code that is compiled to WASM. It looks like this:
#![no_main]
#![no_std]

use core::panic::PanicInfo;
use js::*;

mod js {
    #[link(wasm_import_module = "imports")]
    extern "C" {
        pub fn abort(msgPtr: usize, filePtr: usize, line: u32, column: u32) -> !;
        pub fn _log_num(number: usize);
    }
}

#[no_mangle]
pub unsafe extern "C" fn run(worker_id: i32) {
    let worker_index = worker_id as u32 - 1;
    let chunk_start = 100 * worker_index;
    let chunk_end = chunk_start + 100; //Total pixels may not divide evenly into number of worker cores.
    for n in chunk_start as usize..chunk_end as usize {
        _log_num(n);
    }
}

#[panic_handler]
unsafe fn panic(_: &PanicInfo) -> ! { abort(0, 0, 0, 0) }
`run` is passed a thread id ranging from 1 to 3 inclusive, and prints 100 numbers, so between them the three threads should log the numbers 0 through 299, albeit in interleaved order. I expect to see 1, 2, 3… from thread 1, 101, 102, 103… from thread 2, and 201, 202, 203… from thread 3. If I run the functions sequentially, that is indeed what I see. But if I run them in parallel, the threads trip over each other, logging something like 1, 4, 7… on the first thread, 2, 6, 9… on the second, and 3, 5, 8… on the third, up to 99, where all three threads stop. Each thread behaves as if it shares `chunk_start`, `chunk_end`, and `n` with the other threads.
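As a minimal sketch of what that symptom corresponds to (plain JavaScript, no WASM involved — a simulation, not the original code): if all three workers draw from one shared slot via `Atomics.add` instead of three private locals, each one observes a disjoint, interleaved subset of the sequence, just like the 1, 4, 7… above:

```javascript
//Minimal sketch of the symptom: three "threads" sharing one counter.
//If each loop iteration reads-and-increments a single shared slot,
//every thread sees roughly every third value.
const counter = new Int32Array(new SharedArrayBuffer(4));
const seen = [[], [], []];
for (let step = 0; step < 9; step++) {
    const thread = step % 3; //pretend the scheduler round-robins the threads
    seen[thread].push(Atomics.add(counter, 0, 1)); //returns the previous value
}
console.log(seen); //[[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```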
It shouldn't do that, because `.cargo/config.toml` specifies `--shared-memory`, so the compiler should emit the appropriate locking when allocating memory:
[target.wasm32-unknown-unknown]
rustflags = [
    "-C", "target-feature=+atomics,+mutable-globals,+bulk-memory",
    "-C", "link-args=--no-entry --shared-memory --import-memory --max-memory=2130706432",
]
I know the flag is being picked up, because if I change `--shared-memory` to something else, `rust-lld` complains that it doesn't recognise it.
wasm-bindgen's parallel demo works fine, so I know this is possible. I just can't see what they've set up differently.
Perhaps it's something to do with how I'm loading the module in my web workers?
const wasmSource = fetch("sim.wasm") //kick off the request now, we're going to need it

//See message sending code for why we use multiple messages.
let messageArgQueue = [];
addEventListener("message", ({data}) => {
    messageArgQueue.push(data)
    if (messageArgQueue.length === 4) {
        self[messageArgQueue[0]].apply(0, messageArgQueue.slice(1))
    }
})

self.start = async (workerID, worldBackingBuffer, world) => {
    const wasm = await WebAssembly.instantiateStreaming(wasmSource, {
        env: { memory: worldBackingBuffer },
        imports: {
            abort: (messagePtr, locationPtr, row, column) => {
                throw new Error(`? (?:${row}:${column}, thread ${workerID})`)
            },
            _log_num: num => console.log(`thread ${workerID}: n is ${num}`),
        },
    })

    //Initialise thread-local storage, so we get separate stacks for our local variables.
    wasm.instance.exports.__wasm_init_tls(workerID-1)

    //Loop, running the Rust logging loop when the "tick" advances.
    let lastProcessedTick = 0
    while (1) {
        Atomics.wait(world.globalTick, 0, lastProcessedTick)
        lastProcessedTick = world.globalTick[0]
        wasm.instance.exports.run(workerID)
    }
}
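One detail in the worker loop worth noting (not the bug, but relevant to its correctness): `Atomics.wait` only blocks if the slot still holds the expected value, and the loop re-reads `globalTick` after waking instead of incrementing `lastProcessedTick`. A tick that lands before the worker blocks is therefore never lost, and two ticks in quick succession coalesce into one `run()` call. That semantics can be exercised in isolation — this sketch runs in Node or a worker (browsers forbid `Atomics.wait` on the main thread):

```javascript
//Atomics.wait returns immediately with "not-equal" if the value has
//already moved on, so a notify that fires before the wait isn't lost.
const slot = new Int32Array(new SharedArrayBuffer(4));
slot[0] = 1; //pretend the main thread already bumped the tick
console.log(Atomics.wait(slot, 0, 0, 10)); //"not-equal": no blocking, no lost wakeup
console.log(Atomics.wait(slot, 0, 1, 10)); //"timed-out": value matched, so we actually waited
```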
Here, `worldBackingBuffer` is the WASM module's shared memory, created on the main thread:
//Let's count to 300. We'll have three web workers, each taking ⅓rd of the task. 0-100, 100-200, 200-300...

//First, allocate some shared memory. (The original task wants to share some values around.)
const memory = new WebAssembly.Memory({
    initial: 23,
    maximum: 23,
    shared: true,
})

//Then, allocate the data views into the memory.
//This is shared memory which will get updated by the worker threads, off the main thread.
const world = {
    globalTick: new Int32Array(memory.buffer, 1200000, 1), //Current global tick. Increment to tell the workers to count up in scratchA!
}

//Load a core and send the "start" event to it.
const startAWorkerCore = coreIndex => {
    const worker = new Worker('worker/sim.mjs', {type:'module'})
    ;['start', coreIndex+1, memory, world].forEach(arg => worker.postMessage(arg)) //Marshal the "start" message across multiple postMessages because of the following bugs: 1. Must transfer memory BEFORE world. https://bugs.chromium.org/p/chromium/issues/detail?id=1421524 2. Must transfer world BEFORE memory. https://bugzilla.mozilla.org/show_bug.cgi?id=1821582
}

//Now, let's start some worker threads! They will work on different memory locations, so they don't conflict.
startAWorkerCore(0) //works fine
startAWorkerCore(1) //breaks counting - COMMENT THIS OUT TO FIX COUNTING
startAWorkerCore(2) //breaks counting - COMMENT THIS OUT TO FIX COUNTING

//Run the simulation thrice. Each thread should print a hundred numbers in order, thrice.
//For thread 1, it should print 0, then 1, then 2, etc. up to 99.
//Thread 2 should run from 100 to 199, and thread 3 200 to 299.
//But when they're run simultaneously, all three threads seem to use the same counter.
setTimeout(tick, 500)
setTimeout(tick, 700)
setTimeout(tick, 900)

function tick() {
    Atomics.add(world.globalTick, 0, 1)
    Atomics.notify(world.globalTick, 0)
}
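As a quick sanity check on the memory layout used here (my arithmetic, not from the original post): 23 pages of 64 KiB gives 1,507,328 bytes, so the one-element `Int32Array` view at byte offset 1,200,000 is 4-byte aligned and fits inside the memory:

```javascript
//Sanity-check the memory layout: 23 Wasm pages of 64 KiB each,
//with the one-element Int32Array tick counter at byte offset 1200000.
const PAGE_SIZE = 65536;
const totalBytes = 23 * PAGE_SIZE; //1507328 bytes
const tickOffset = 1200000;
console.log(totalBytes); //1507328
console.log(tickOffset % Int32Array.BYTES_PER_ELEMENT === 0); //true: the view is aligned
console.log(tickOffset + Int32Array.BYTES_PER_ELEMENT <= totalBytes); //true: the view fits
```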
But this all looks normal to me. Why am I seeing memory corruption in the Rust for-loop?
1 Answer
wasm-bindgen does some magic here — the `--start` function is replaced/injected with code that fixes up the memory. It looks like there are some open problems around it, though:
https://github.com/rustwasm/wasm-bindgen/discussions/3474
https://github.com/rustwasm/wasm-bindgen/discussions/3487
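For reference, the raw wasm-ld TLS convention (what the module follows without wasm-bindgen's injected setup) exports `__tls_size` and `__tls_align` as globals alongside `__wasm_init_tls(base)`, where `base` is a byte address in linear memory. One suspicious spot in the worker code above is that it passes `workerID - 1` — i.e. 0, 1, or 2 — as that base, so if the module has any TLS data at all, the threads' TLS regions overlap. A hedged sketch of handing each thread a distinct base (the helper, the mock exports, and the scratch address 1300000 are all my own assumptions, not from the original code):

```javascript
//Hypothetical sketch of giving each thread its own TLS region, following
//the wasm-ld convention: the linker exports __tls_size/__tls_align globals
//and a __wasm_init_tls(base) function taking a byte address in linear memory.
//TLS_SCRATCH_BASE is an assumed free region - it is NOT from the original code.
const TLS_SCRATCH_BASE = 1300000;

function initTlsFor(exports, workerIndex) {
    const size = exports.__tls_size.value;
    const align = exports.__tls_align.value;
    const stride = Math.ceil(size / align) * align || align; //round size up to alignment
    const base = TLS_SCRATCH_BASE + workerIndex * stride;
    exports.__wasm_init_tls(base); //each worker gets a non-overlapping base
    return base;
}

//Exercise the helper with mock exports (24 bytes of TLS, 16-byte alignment).
const calls = [];
const mockExports = {
    __tls_size: {value: 24},
    __tls_align: {value: 16},
    __wasm_init_tls: base => calls.push(base),
};
initTlsFor(mockExports, 0);
initTlsFor(mockExports, 1);
console.log(calls); //[1300000, 1300032] - distinct, non-overlapping bases
```

Even with distinct TLS bases, note that each instance's shadow-stack pointer still starts at the same linear-memory address unless something assigns per-thread stacks — which, as far as I can tell, is the "magic" wasm-bindgen injects and what the two linked discussions dig into.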