
Blog

This blog is intended for software system engineers, architects and managers, as well as anyone generally interested in the development, testing and integration of software systems. It is part of profiq’s community effort to share knowledge and ideas about software system integration, testing and development. In addition to this technical content, we share updates about life at profiq.


AI Developer Tools

With the rapid advancements in large language models (LLMs) over the past few years, we’ve seen a surge in AI-powered developer tools. Since code is essentially text, it’s an ideal use case for LLMs. While there are many tools available to assist with software development, let’s focus on the most impactful ones: code completion and code manipulation tools. The most well-known tool in this space is GitHub Copilot, which gained immense popularity as one of the first to hit the market. However, there are now several new…

One of our Tech Research team’s most interesting recent projects is the development of an autonomous web explorer. This tool, built on OpenAI's GPT-4o model, extends the capabilities of traditional web scrapers by performing more complex interactions, such as clicking buttons and filling out forms. This means it can navigate through dynamic states on websites—like encountering a login form with an error message about an incorrect password—by understanding and interacting with the content in context. One of the primary uses of our automated web explorer is in the creation of…
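
As a rough illustration of the general pattern (not the actual implementation of our explorer), the model can be asked to propose the next browser action as JSON, which is then executed with a browser automation library such as Playwright; the selectors, the login goal and the step limit below are made up for this sketch:

```python
# Illustrative only: an LLM proposes the next browser action as JSON,
# and Playwright executes it. Selectors and the goal are made up.
import json
from openai import OpenAI
from playwright.sync_api import sync_playwright

client = OpenAI()

def next_action(page_text: str, goal: str) -> dict:
    """Ask the model for the next action based on the visible page text."""
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "You control a web browser. Respond with JSON of the form "
                '{"action": "click"|"fill"|"done", "selector": "...", "value": "..."}'
            )},
            {"role": "user", "content": f"Goal: {goal}\n\nPage text:\n{page_text[:4000]}"},
        ],
    )
    return json.loads(response.choices[0].message.content)

with sync_playwright() as p:
    page = p.chromium.launch().new_page()
    page.goto("https://example.com/login")
    for _ in range(5):  # hard cap on the number of steps
        action = next_action(page.inner_text("body"), goal="log in as a test user")
        if action["action"] == "click":
            page.click(action["selector"])
        elif action["action"] == "fill":
            page.fill(action["selector"], action["value"])
        else:
            break
```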

profiq Video: Using LangSmith in a non-LangChain codebase by Viktor Nawrath

Applications powered by large language models (LLMs) need to be more than merely functional—they need to be scalable, reliable, and capable of continuous improvement over time. To make this happen, you need tools that allow you to trace and fine-tune your app's performance to identify inefficiencies, diagnose issues, and improve response quality. This is where LangSmith comes into play: a tool built for developers working with LLMs who need insight into how their AI models behave in production environments. It can be a game changer for AI developers. LangSmith gives…
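
As a rough illustration of the approach covered in the video, here is a minimal sketch of tracing a plain OpenAI call with the langsmith Python SDK, without LangChain (assuming the LangSmith API key and tracing environment variables are already set; the answer_question function is just an example name):

```python
# A minimal sketch: tracing a plain OpenAI call with LangSmith, no LangChain.
from langsmith import traceable
from langsmith.wrappers import wrap_openai
from openai import OpenAI

# Wrapping the client records every chat completion call as a traced run.
client = wrap_openai(OpenAI())

@traceable(name="answer_question")  # groups the LLM call under one named trace
def answer_question(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(answer_question("What kind of data does LangSmith collect about a run?"))
```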

New frameworks and tools are constantly emerging, each promising to make our lives easier and our applications faster. But every so often, a framework comes along that doesn’t just promise incremental improvements—it redefines the game. Qwik is one such framework: built by performance nerds, for performance nerds. Qwik introduces a novel architecture that eliminates the need for the traditional UI hydration process, a step that often bogs down web apps with unnecessary overhead. Instead of rehydrating the entire UI on the client side, Qwik only activates the parts of the application…

If you want to improve your product by developing new AI features built on top of large language models (LLMs), you have many options to choose from. GPT models from OpenAI are often considered the go-to solution for most use cases. But the competition in this space is heating up. Other proprietary solutions such as Gemini from Google or Claude from Anthropic are catching up in terms of quality, features, and pricing. There are also many high-quality open-weight models such as Llama-3.1 from Meta or the Mistral family from…

profiq Video: Evaluating LLMs with MLflow by Miloš Švaňa

Are you developing an application and looking to integrate large language model (LLM) features? With multiple options like GPT, Gemini, Claude, and open-source models from Hugging Face, choosing the right solution can be overwhelming. Each model offers unique strengths, from GPT's versatile text generation and Claude's detailed descriptions to the flexibility of open-source models. Integrating LLM features can significantly enhance your application by providing capabilities such as natural language understanding, text generation, and intelligent automation. To make an informed decision, it’s essential to evaluate your application's specific needs, compare model performances,…
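
As a small taste of what the video covers, here is a minimal sketch of scoring a set of pre-computed LLM answers against reference answers with mlflow.evaluate (assuming a recent MLflow release with LLM evaluation support; the toy data below is made up):

```python
# A minimal sketch: scoring pre-computed LLM answers against references.
import mlflow
import pandas as pd

eval_df = pd.DataFrame({
    "inputs": ["What is MLflow?", "What does an LLM do?"],
    "predictions": [
        "MLflow is an open-source platform for managing the ML lifecycle.",
        "An LLM generates text by predicting the next token.",
    ],
    "ground_truth": [
        "MLflow is an open-source platform for the machine learning lifecycle.",
        "A large language model generates and understands natural language text.",
    ],
})

with mlflow.start_run():
    results = mlflow.evaluate(
        data=eval_df,
        predictions="predictions",   # column with the model's answers
        targets="ground_truth",      # column with reference answers
        model_type="question-answering",
    )
    print(results.metrics)
```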

ai llm

Let’s make LLMs generate JSON!

In this article, we are going to talk about three tools that can, at least in theory, force any local LLM to produce structured output: LM Format Enforcer, Outlines, and Guidance. After a short description of each tool, we will evaluate their performance on a few test cases ranging from book recommendations to extracting information from HTML. And we have saved the best for last: we will show you how forcing LLMs to produce structured output can solve a very common problem in many businesses, namely extracting structured records from free-form text.
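
To give you an idea of what this looks like in practice, here is a minimal sketch of constrained JSON generation with Outlines and a Pydantic schema (the Outlines API has changed between releases, so treat this as an outline of the 0.x interface rather than copy-paste code; the model name and schema are just examples):

```python
# A minimal sketch of the Outlines 0.x interface: the generator can only
# emit tokens that keep the output valid against the Pydantic schema.
from pydantic import BaseModel
import outlines

class BookRecommendation(BaseModel):
    title: str
    author: str
    year: int

# Any local model supported by the transformers backend; the name is an example.
model = outlines.models.transformers("mistralai/Mistral-7B-Instruct-v0.2")

generator = outlines.generate.json(model, BookRecommendation)
book = generator("Recommend one classic science fiction book as JSON: ")

print(book)  # a BookRecommendation instance matching the schema
```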

json llm tools

From ChatGPT to Smart Agents: The Next Frontier in App Integration

It has been over a year since OpenAI introduced ChatGPT and brought the power of AI and large language models (LLMs) to the average consumer. But one could argue that the true long-term game changer is the APIs that let companies and independent hackers all over the world seamlessly integrate LLMs into their own apps. Developers are having heated discussions about how we can utilize this technology to develop truly useful apps that provide real value instead of just copying what OpenAI does. We want to contribute to this discussion by showing you how we think about developing autonomous agents at profiq. But first, a bit of background.
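
As a small, self-contained taste of the building blocks involved, here is a sketch of letting a model request a function call in your app through the OpenAI tools API (the get_open_tickets function and its schema are hypothetical, and this is not the agent design discussed in the article):

```python
# Illustrative only: the model decides whether to call a function in your app.
# The get_open_tickets tool and its schema are hypothetical.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_open_tickets",
        "description": "Return open support tickets for a customer.",
        "parameters": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Does customer 42 have any open tickets?"}],
    tools=tools,
)

# If the model chose to call the tool, parse its arguments and run your code.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    args = json.loads(tool_calls[0].function.arguments)
    print("Call", tool_calls[0].function.name, "with", args)
```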

agent ai chatgpt large language models llm openai plugin