Deciding where your computational models will run has become more complicated with the rise of low-cost, low-power ML accelerators. In this talk, we’ll explore how to distribute computational workloads using distributed Erlang, how to benchmark these systems, the new opportunities unlocked by on-device inference at the edge, and how patterns from Nx and Nerves can be extended to new classes of devices.
- Learn how to apply Elixir tools and techniques to distribute your computational workloads across a wide variety of devices.
- Data Scientists
- Infrastructure Engineers
- Edge/IoT Device Makers