Our lab recently released a paper where we introduce ShadowPEFT, a new Parameter-Efficient Fine-Tuning (PEFT) paradigm tailored for edge computing scenarios.
Unlike traditional approaches such as LoRA and its variants, which inject trainable parameters directly into the Transformer's weights and therefore remain tightly coupled to the backbone, ShadowPEFT enhances the frozen base model with a lightweight, centralized, pretrainable, and detachable shadow network. This shadow network runs in parallel with the base model and delivers learned corrections to each decoder layer. Because it is architecturally decoupled from the backbone, it can be trained, stored, and deployed independently, which benefits edge computing and edge-cloud collaborative computing.
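To make the idea concrete, here is a minimal Python sketch of the parallel-correction pattern described above. This is not the paper's actual architecture: the layer sizes, weight initialization, and the exact way the shadow stream couples to each decoder layer are all illustrative assumptions.

```python
# Sketch: a frozen base model whose per-layer hidden states receive additive
# corrections from a detachable "shadow" stream running in parallel.
import random

DIM = 4       # illustrative hidden size
N_LAYERS = 3  # illustrative decoder depth

def make_layer(seed):
    """A toy linear 'layer' as a DIM x DIM weight matrix."""
    rng = random.Random(seed)
    return [[rng.uniform(-0.5, 0.5) for _ in range(DIM)] for _ in range(DIM)]

def apply(w, x):
    """Matrix-vector product."""
    return [sum(w[i][j] * x[j] for j in range(DIM)) for i in range(DIM)]

base_layers = [make_layer(s) for s in range(N_LAYERS)]          # frozen backbone
shadow_layers = [make_layer(100 + s) for s in range(N_LAYERS)]  # trainable shadow

def forward(x, use_shadow=True):
    h, s = x, x
    for base_w, shadow_w in zip(base_layers, shadow_layers):
        h = apply(base_w, h)                            # frozen base layer
        if use_shadow:
            s = apply(shadow_w, s)                      # shadow's own parallel stream
            h = [hi + si for hi, si in zip(h, s)]       # correction per decoder layer
    return h

x = [1.0, 0.0, -1.0, 0.5]
print(forward(x, use_shadow=False))  # base model alone
print(forward(x, use_shadow=True))   # base + shadow corrections
```

Because the shadow stream only reads its own state and adds into the base model's hidden states, dropping the `use_shadow` branch recovers the original backbone untouched, which is what makes the module detachable.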
In the spirit of "just making shit" (because, frankly, you can these days), I decided to get back to basics. I built a lightweight RL MLP powered by WebGPU that runs directly in your browser.
The twist? I replaced the standard MLP with a Continued Fractions network. The real win here is interpretability: by applying a Taylor expansion to the continued fractions, we can actually decompose the output and see exactly which features influenced the outcome. It replaces the usual "it's just black-box magic" with actual visibility into the logic.
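For readers unfamiliar with the idea, here is a hedged Python sketch of a single continued-fraction ("ladder") unit, evaluated from the innermost rung outward. The weights, depth, and epsilon guard are illustrative assumptions, not the post's actual WebGPU implementation, and the Taylor-expansion decomposition itself is not shown.

```python
# Sketch: one continued-fraction unit f(x) = w0.x + 1/(w1.x + 1/(w2.x + ...)),
# where each rung is a linear function of the input features.

def cf_unit(x, weights):
    """Evaluate the ladder from the deepest rung out to the top."""
    def lin(w, x):
        return sum(wi * xi for wi, xi in zip(w, x))

    eps = 1e-6                       # guard against division by near-zero
    value = lin(weights[-1], x)      # innermost rung
    for w in reversed(weights[:-1]):
        value = lin(w, x) + 1.0 / (value + eps)  # climb one rung of the ladder
    return value

x = [0.5, -1.0]                                   # two input features
weights = [[0.3, 0.7], [1.2, -0.4], [0.9, 0.1]]   # three illustrative rungs
print(cf_unit(x, weights))
```

Since every rung is linear in the input, expanding the reciprocals (e.g. via a Taylor series) breaks the output into per-feature terms, which is the interpretability angle described above.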
It was a fun experiment. Feel free to clone the repo and make it your own!
Just tried tencent/HY-World-2.0, a multimodal world model that takes in text or a single image and generates editable 3D scenes.
Unlike Google's Genie and HY-World 1.5, v2.0 generates engine-ready 3D content:
- Direct import into Unreal Engine and Unity, no format wrangling
- Supports multiple 3D asset formats: Mesh, 3DGS, point cloud, etc.
- Fully editable: not a baked video, but actual geometry you can modify
- Also usable for embodied simulation environments
Basically: from "AI generates a world you can look at" to "AI generates a world you can ship."