How to develop a MuleSoft connector

After a brief introduction to the MuleSoft platform in my first blog post, I would like to share my first experience with developing a connector. I decided to develop a GitLab connector because GitLab is a great system and I could not find an existing connector for it.

First, I’m going to focus on the options you have when you decide to develop a connector. Next, I will describe how to structure the connector and which specifics you need to take care of compared to developing a standard Java program.

Development options

You basically have three options when deciding how to develop your connector: you can build it on top of a Java SDK for your app, use the SOAP approach, or build it around a RESTful API. You can read more about this in the official documentation.

GitLab has a Java SDK available, so I chose to develop the connector that way. The advantage is that I do not have to bother with low-level logic (such as how to issue a GET request) because the GitLab Java SDK solves this for me; I just call the appropriate methods from the SDK.

If there is no Java SDK available for your project, you will have to develop this layer from scratch. You can automate parts of the task, such as generating code from a YAML API specification, but describing that goes beyond the scope of this article.


Connector structure

The structure is briefly described in the MuleSoft documentation, so I will give just a quick high-level introduction here.

Most fundamentally, the connector consists of two parts: the config class and the connector class itself. The config class stores the data related to the connector instance; in my case, it is the username, the password, and the GitLab host URL. There is also an option to enter an API key, but the connector is able to obtain it automatically based on the username and password too. The config also contains a read-only API endpoint pointer, which is the gateway to all API calls; it is established after connecting to GitLab with the credentials. All the connection and disconnection logic is located in the config class as well.


@ConnectionManagement(friendlyName = "Configuration")
public class ConnectorConfig {
    private String gitlabHost = "";

    private String privateToken;

    @Connect(strategy = ConnectStrategy.SINGLE_INSTANCE)
    public void connect(@ConnectionKey final String username, @Password final String password, @Optional final Boolean ignoreCertificationErrors, @Optional final Integer requestTimeout) throws ConnectionException {
        ... // Connection logic
    }

    @Disconnect
    public void disconnect() {
        ... // Disconnection logic
    }

    @TestConnectivity
    public void testConnectivity(@ConnectionKey final String username, @Password final String password, @Optional final Boolean ignoreCertificationErrors, @Optional final Integer requestTimeout) throws ConnectionException {
        ... // Test connectivity logic
    }

    @ValidateConnection
    public boolean isConnected() {
        ... // Connection validation logic
    }
}


All the processor methods that call the GitLab API are included in the connector class. Most of the methods could be implemented with just one line by calling the appropriate SDK method, but I decided to improve the code by adding logging and exception handling. I also decided that the processor inputs will be just basic data types (Integer, String, etc.) and, in some cases, enums. This choice unifies the approach (the SDK is inconsistent: sometimes it requires the whole object, sometimes just the ID of the element) and simplifies the usage of the connector. I also often used the @FriendlyName annotation to make the connector look more consistent.

Here is an example processor:

@Processor
public GitlabMergeRequest acceptMergeRequest(@FriendlyName("Project ID") final Integer projectId, @FriendlyName("Merge Request ID") final Integer mergeRequestId, @Optional final String mergeCommitMessage) throws IOException {
    final GitlabProject project;
    final GitlabMergeRequest result;

    LOGGER.trace("Trying to load project {}...", projectId);
    try {
        project = this.config.getApiHandler().getProject(projectId);
    } catch (final IOException ex) {
        LOGGER.error("Project {} was not loaded.", projectId);
        throw ex;
    }
    if (project == null) {
        LOGGER.error("Project {} was not loaded (returned \"null\").", projectId);
        throw new IOException("Project was not loaded (returned \"null\").");
    }
    LOGGER.debug("Project {} was loaded correctly.", projectId);

    LOGGER.trace("Trying to accept merge request {}...", mergeRequestId);
    try {
        result = this.config.getApiHandler().acceptMergeRequest(project, mergeRequestId, mergeCommitMessage);
    } catch (final IOException ex) {
        LOGGER.error("Accepting merge request {} failed.", mergeRequestId);
        throw ex;
    }
    if (result == null) {
        LOGGER.error("Accepting merge request {} failed (returned \"null\").", mergeRequestId);
        throw new IOException("Accepting merge request failed (returned \"null\").");
    }
    LOGGER.debug("Merge request {} was accepted correctly.", mergeRequestId);

    return result;
}
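The load, check, and log sequence above repeats in every processor. As a side note, the repeated null check could be factored into a small helper; the class below is my own sketch for illustration, not part of the actual connector:

```java
import java.io.IOException;

// Hypothetical helper (not from the connector): turns the SDK's "null means
// failure" convention into an exception, so each processor can chain calls.
public class NullGuard {
    static <T> T requireResult(final T result, final String message) throws IOException {
        if (result == null) {
            throw new IOException(message);
        }
        return result;
    }
}
```

With such a helper, each processor body shrinks to something like `NullGuard.requireResult(api.getProject(projectId), "Project was not loaded")`, at the cost of slightly less specific log messages.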

If you would like to see the complete connector, you can find and download it on GitHub. You are welcome to use it and improve it through forks and pull requests. I am looking forward to hearing your feedback.
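For completeness, this is roughly how the connector could be used from a Mule application. The namespace, element, and attribute names below are my assumptions based on DevKit's usual naming conventions, not taken from the actual generated connector schema:

```xml
<!-- Hypothetical global configuration element (names are assumptions) -->
<gitlab:config name="GitLab_Config"
               username="${gitlab.username}"
               password="${gitlab.password}"
               gitlabHost="https://gitlab.example.com"/>

<!-- A flow invoking the acceptMergeRequest processor -->
<flow name="accept-merge-request-flow">
    <gitlab:accept-merge-request config-ref="GitLab_Config"
                                 projectId="42"
                                 mergeRequestId="7"/>
</flow>
```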


4 Responses to “How to develop a MuleSoft connector”

  1. Learn how to develop a MuleSoft connector | DevOps Home says:

    […] This post was originally written by Filip Vavera from profiq. […]

  2. Petr says:

    Why did you use Mule when here is Apache Camel?

    • Gabor Puhalla says:

      Hi, we developed this connector as part of our community efforts. We were curious about MuleSoft, we explored it and developed a connector for it. We will possibly explore and write about Apache Camel too, in the future.

  3. Morgan says:

    Hi There,

    Great write up – thanks! Just an important heads up
    An API for API’s is also a tricky sales problem, especially if it’s for an often used / well documented API.
    The implementation of your client / SDK has to work better than integrating with the SDKs of the original API themselves, and business logic needs to be predictable or manageable. Recently, I’ve been impressed with Segment, which provides an API of APIs for tracking user interactions to Mix Panel, Customer IO, and others. The trick here is that these are all similar – based off of events with data -that feed into the other systems. But even in this use case, there are tricks or annoyances, such as update/delete capabilities.
    So, what is the difference between Zapier and MuleSoft?
    Do you publish any video tutorial series on YouTube about this technology? It would definitely make it easier to understand and get started with it.
    If you don’t mind, I can connect with you via LinkedIn or Twitter to stay updated about your new posts.

    Appreciate your effort for making such useful blogs and helping the community.

    Best Regards,
