Interestingly enough for the C and C++ folks, compiler-specific dialects for embedded targets without a standard library are still argued for as if they were C and C++.
A lot of the stdlib, especially net and crypto, doesn't compile in TinyGo, or if it compiles, the implementation is stubs that panic with "not implemented". A few years ago I tried compiling a small terminal HTTP client app and failed at the compile stage.
Writing embedded code with an async-aware programming language is wonderful (see Rust's embassy), but I wonder how competitive this is when you need to push large quantities of data through a microcontroller. I presume this is not suitable for real-time stuff?
You can disable GC in TinyGo, so if you allocate all the necessary buffers beforehand it can have good performance with real-time characteristics. If you _need_ dynamic memory allocation then no: since you need the GC, it can't provide real-time guarantees.
Doesn't seem like those should be mutually exclusive, though the habits involved are quite opposing and I can definitely believe they're uncommon.
E.g. GC doesn't need to be precise. You could reserve CPU budget for GC, and only use that much at a time before yielding control. As long as you still free enough to not OOM, you're fine.
I've written a fair amount of code for EmbeddedGo. The garbage collector is not an issue if you avoid heap allocations in your main loop. But if you're CPU-bound, a goroutine might block others from running for quite some time. If your platform supports async preemption, you might be able to patch the goroutine scheduler with real-time capabilities.
It's really not so different! In embassy, DMA transfers and interrupts become things that you can .await on, the process is basically:
* The software starts a transaction, or triggers some event (like putting data in the fifo)
* The software task yields
* When the "fifo empty" or "dma transfer done" interrupt occurs, it wakes the task to resume
* The software task checks if it is done, and either reloads/restarts if there's more to do, or returns "done"
It's really no different from the event-driven state machines you would have written before; it's just "in-band" in the language now, and async/await gives you syntax to do it.
It does indeed produce much smaller binaries, including for macOS.
[0] https://github.com/tinygo-org/tinygo/issues/4880
[1] https://github.com/Nerzal/tinywebsocket
For WASI, check out WASI Preview 2, https://docs.wasmtime.dev/api/wasmtime_wasi/p2/index.html
https://tinygo.org/docs/reference/lang-support/
And parts of the stdlib that don't work:
https://tinygo.org/docs/reference/lang-support/stdlib/
https://code.carverauto.dev/carverauto/serviceradar/src/bran...
Hardware-level async makes sense to me. I can scope it. I can read the data sheet.
Software async, in contrast, seems difficult to characterize and reason about, so I've been intimidated by it.
Even if you don't know Rust, I'd suggest poking around at some of the examples here:
https://github.com/embassy-rs/embassy/tree/main/examples
And if you want, look into the code behind it.
https://tinygo.org/docs/reference/microcontrollers/
A few ESP32s on there.