profiq Video: Using LangSmith in a non-LangChain codebase by Viktor Nawrath
Anke Corbin
Applications powered by large language models (LLMs) need to be more than merely functional—they need to be scalable, reliable, and capable of continuous improvement over time. To make this happen, you need tools that allow you to trace and fine-tune your app’s performance to identify inefficiencies, diagnose issues, and improve response quality. This is where LangSmith comes into play: a tool built for developers working with LLMs who need insight into how their AI models behave in production environments. LangSmith can be a game changer for AI developers.
LangSmith gives developers the ability to trace and evaluate the calls made to language models. By tracking these interactions, you can gain a clearer understanding of your app’s behavior and identify where improvements are needed, whether it’s optimizing response time or enhancing the quality of outputs.
In this video, profiq’s Viktor Nawrath takes you behind the scenes of using LangSmith. Viktor is the Software Engineering Manager at profiq, with over a decade of experience in technology and a particular interest in AI/Machine Learning. Viktor has an extensive background developing software using JavaScript, TypeScript, React, Redux, GraphQL, Python, Elixir, and AWS.
In the video, Viktor walks you through using LangSmith without LangChain. He demonstrates key features, including the basics of the dashboard, running traced calls, and managing created traces, and ultimately shows how LangSmith provides vital data for improving your AI app’s performance and getting it production-ready.
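To give a sense of the approach before you watch, here is a minimal Python sketch of tracing OpenAI calls with only the LangSmith SDK (no LangChain). The model name, prompt, and environment variable values are illustrative assumptions, not taken from the video:

```python
# Minimal sketch: tracing OpenAI calls with the LangSmith SDK only.
# Assumes the `langsmith` and `openai` packages are installed and that
# tracing is enabled via environment variables, e.g.:
#   LANGCHAIN_TRACING_V2=true
#   LANGCHAIN_API_KEY=<your LangSmith API key>
#   OPENAI_API_KEY=<your OpenAI API key>

from langsmith import traceable
from langsmith.wrappers import wrap_openai
from openai import OpenAI

# Wrapping the OpenAI client records every completion call as a trace.
client = wrap_openai(OpenAI())

@traceable  # groups the wrapped LLM call (and anything nested) into one trace
def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("What does LangSmith trace?"))
```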
Here are some important timestamps from the video:
00:00 Intro
00:30 OpenAI wrapper and traceable APIs
02:42 Running the traced call
03:35 LangSmith dashboard
04:17 Trace detail
05:46 Annotations and datasets
08:22 Manually creating traces (sketched after this list)
10:45 Final thoughts
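For the segment on manually creating traces, the LangSmith SDK also lets you build runs by hand instead of relying on the wrapper. Below is a rough sketch using the SDK’s RunTree; the run names, inputs, and outputs are invented for illustration, and the exact calls may vary between SDK versions:

```python
# Rough sketch: manually creating a trace with LangSmith's RunTree.
# Assumes a LangSmith API key is configured; all names and values are made up.
from langsmith.run_trees import RunTree

# Parent run representing the whole pipeline
pipeline = RunTree(
    name="qa-pipeline",
    run_type="chain",
    inputs={"question": "What does LangSmith trace?"},
)
pipeline.post()  # send the parent run to LangSmith

# Child run representing a single LLM call inside the pipeline
llm_run = pipeline.create_child(
    name="llm-call",
    run_type="llm",
    inputs={"prompt": "What does LangSmith trace?"},
)
llm_run.post()
llm_run.end(outputs={"completion": "Calls made to language models."})
llm_run.patch()  # update the child run with its outputs

# Close out the parent run
pipeline.end(outputs={"answer": "Calls made to language models."})
pipeline.patch()
```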
Watch the full video here:
You May Also Like:
profiq Blog: MLflow: serving LLMs and prompt engineering by Miloš Švaňa
profiq Blog: Evaluating LLMs with MLflow by Miloš Švaňa
profiq Video: Tech demo: Autonomous Agents using LLMs by Viktor Nawrath
profiq Video: Technology Demo – Intro to perf testing by Petr Vecera
LangChain: LangSmith
InfoWorld: What is LangSmith? Tracing and debugging for LLMs
YouTube Video: LangSmith Tutorial – LLM Evaluation for Beginners by Dave Ebbelaar