LLaMA-rs: a Rust port of llama.cpp for fast LLaMA inference on CPU
Run LLaMA inference on CPU, with Rust (setzer22/llama-rs on GitHub).
Rust+OpenCL+AVX2 implementation of LLaMA inference code (Noeda/rllama on GitHub).
A CLI tool to get help with CLI tools (orhun/halp on GitHub).
The monster in the closet that no one wants to make eye contact with.
ChatGPT-powered Rust proc macro that generates code at compile time (retrage/gpt-macro on GitHub).
The Rust programming language provides a powerful type system that checks linearity and borrowing, allowing code to safely manipulate memory without garbage collection and making Rust ideal for developing low-level, high-assurance systems. For such system…
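As an aside (not taken from the linked article), here is a minimal sketch of the borrowing rules that teaser refers to: the compiler rejects a mutable borrow while a shared borrow is still live, which is what lets Rust manipulate memory safely without a garbage collector.

```rust
// Illustrative only: how the borrow checker enforces memory safety.
fn main() {
    let mut names = vec![String::from("ferris")];

    // Shared (immutable) borrow of the first element.
    let first = &names[0];

    // Uncommenting the next line is a compile error (E0502): `names`
    // cannot be borrowed mutably while the shared borrow `first` is
    // still in use below.
    // names.push(String::from("corro"));

    println!("first name: {first}");

    // Once the shared borrow is no longer used, mutation is allowed again.
    names.push(String::from("corro"));
    println!("count: {}", names.len());
}
```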
Since December, I and others have been working on a port of the Move programming language to LLVM. This work is sponsored by Solana, with the goal of running Move on Solana and its rbpf VM; though ultimately the work should be portable to other targets sup…
We released a new CLI v3 with a major refactor. We talk about our journey to a 100 percent Rust CLI tool.
The holy grail of being able to write an application once and deploy it to iOS, Android and the Web continues to elude us. Some solutions exist: React Native…