Jeff Weiss is a Senior Staff Backend Engineer at Adobe, working on the Frame.io team. Within his first three months, he delivered the “Media Intelligence” search features announced at Adobe MAX 2025, including natural language search powered by LLMs and semantic visual search via vector embeddings. With over 10 years of professional Elixir experience spanning industrial IoT for renewable energy (Panthalassa), virtual power plants (Enbala), autonomous vehicles (PolySync), and logistics (Le Tote), Jeff specializes in building resilient, high-throughput distributed systems. He holds a BS in Computer Science from Kansas State University and an MBA in International Business.
At Frame.io (Adobe), we built AI-powered search to help creative professionals find the exact shot or moment across millions of assets. This required two AI integrations: an LLM for natural language queries and vector embeddings for semantic visual search. Our LLM component uses Meta’s Llama 4 via AWS Bedrock, translating queries like “videos of people on beaches from last week” into OpenSearch DSL. For visual search, we generate vector embeddings using Adobe’s SearchCut API (CLIP-based models), enabling similarity searches without metadata.

The challenge? Video embedding takes 30+ seconds per asset. Our Oban infrastructure, already split across four instances due to database contention, couldn’t handle this load without impacting critical jobs. Our solution: a multi-stage Broadway pipeline consuming from SQS. A router pipeline handles authorization and versioning, fanning out to dedicated embedding and transcription workers. This provides independent scaling, traffic buffering, and complete isolation from our job system.

You’ll learn: practical LLM integration patterns, when to choose Broadway over Oban, and how to build observable AI pipelines in Elixir.
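To make the router stage concrete, here is a minimal sketch of what such a Broadway pipeline can look like in Elixir. The module name, queue URLs, routing rules, and the authorize/1 placeholder are illustrative assumptions, not Frame.io’s actual implementation; the real router also handles versioning and richer error handling.

```elixir
defmodule MediaSearch.RouterPipeline do
  @moduledoc """
  Hypothetical sketch of the router stage: consumes asset events from SQS,
  performs an authorization check, and fans out to dedicated embedding and
  transcription queues consumed by their own Broadway pipelines.
  """
  use Broadway

  alias Broadway.Message

  # Illustrative queue URLs; the real ones are configuration.
  @embedding_queue "https://sqs.us-east-1.amazonaws.com/000000000000/embedding-jobs"
  @transcription_queue "https://sqs.us-east-1.amazonaws.com/000000000000/transcription-jobs"

  def start_link(_opts) do
    Broadway.start_link(__MODULE__,
      name: __MODULE__,
      producer: [
        module:
          {BroadwaySQS.Producer,
           queue_url: "https://sqs.us-east-1.amazonaws.com/000000000000/asset-events"},
        concurrency: 2
      ],
      processors: [default: [concurrency: 10]]
    )
  end

  @impl true
  def handle_message(_processor, %Message{data: raw} = message, _context) do
    with {:ok, event} <- Jason.decode(raw),
         :ok <- authorize(event) do
      # Forward the event to each downstream pipeline's queue.
      for queue <- routes(event) do
        queue |> ExAws.SQS.send_message(raw) |> ExAws.request!()
      end

      message
    else
      _error -> Message.failed(message, :rejected)
    end
  end

  # Hypothetical routing: videos need transcription and embedding, images only embedding.
  defp routes(%{"type" => "video"}), do: [@embedding_queue, @transcription_queue]
  defp routes(%{"type" => "image"}), do: [@embedding_queue]
  defp routes(_event), do: []

  # Placeholder for the authorization check described in the abstract.
  defp authorize(_event), do: :ok
end
```

Because each downstream queue is consumed by its own Broadway pipeline, the embedding and transcription stages can scale and buffer traffic independently while staying fully isolated from the Oban-based job system.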
Key Takeaways:
- Practical LLM integration patterns in Elixir
- When to choose Broadway over Oban
- How to build observable AI pipelines in Elixir
Target Audience: