Building an MCP Server: A Developer's Guide

Wesley Ellis | July 8, 2025

At OpsLevel, we’ve just launched our open-source MCP server, enabling seamless integration of OpsLevel context with large language models. This post dives into why we did that and how you can do the same.

Note: I’m going to assume you’re working for a software company of some kind that has an API and customers, but you can follow along just as easily if you only want to peek under the hood.

What is MCP?

Model Context Protocol (MCP) is a new standard, initially published by Anthropic, that defines how applications provide context and tools to large language models. To give a concrete example, GitHub Copilot in VS Code (the client) can query data from OpsLevel (the server) regardless of which model (OpenAI’s o3 or Anthropic’s Claude 3.7 Sonnet) a developer has chosen to use.

When it comes to LLMs, context is everything, but you can’t fit everything in the context window. MCP provides a way for a client to build up a list of “tools” and provide those to a model at the beginning of a chat. For example, opslevel-mcp currently provides a components tool (list all the components, like services, at your company) and a documents tool (search developer documentation). That way, if you ask Copilot “how do I roll back my shopping cart service”, the model can get that information from OpsLevel via the MCP server.

At a low level, MCP defines a JSON-RPC-based protocol (because it’s 2025) that clients use to call methods on the MCP server, like:

  • tools/list (get the tools available)
  • tools/call (call a specific tool with optional arguments)

There are loads more of these in the spec, but most developers will interact with MCP through some kind of SDK or library.
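
To make that concrete, here’s roughly what a tools/call exchange looks like on the wire. This is a hand-written sketch based on the spec, not a captured trace, and the query argument for the documents tool is invented for illustration:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "documents",
    "arguments": { "query": "rollback shopping cart service" }
  }
}

The server replies with content the client can hand to the model:

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      { "type": "text", "text": "Shopping Cart Runbook: to roll back a deploy, ..." }
    ]
  }
}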

What we’re focused on today is the server part. How do we write one of those? But before we get into that, let’s talk about why.

How does this help users?

We talked a bit about this above, but MCP empowers LLM tools by providing structured access to crucial information and capabilities.

At OpsLevel, we think developers using AI tools to write code or understand their software systems will benefit from granting LLMs the ability to use the information in OpsLevel. Grounding LLMs with actual, up-to-date information is a good way to prevent hallucinations and provide relevant background information.

It’s also a good idea for you to own the MCP server that people use to interact with your company. Nothing stops a random developer from making a server that talks to your APIs, and there’s no guarantee they will do it well, or do it non-maliciously.

Why MCP?

Function calling / tool use is still pretty new and evolving. Currently, all of the major LLM providers have some way of exposing functions or tools that the model can use when calling their specific APIs. This works if you are the one making API calls as a developer and there are only one or two providers you need to interact with.

From a more consumer-facing perspective, OpenAI tried to do something similar with ChatGPT Plugins, but they were discontinued, or maybe replaced with GPTs. That effort was OpenAI-specific, so if you developed a plugin for them, there was no way to leverage it elsewhere. MCP is designed to be client-agnostic.

Today there are three major MCP clients people can use:

  • GitHub Copilot with Visual Studio Code
  • Claude Desktop and Claude on the web
  • Cursor, the AI code editor

You may notice one big, glaring omission there: nothing from OpenAI. While you can use OpenAI models with Copilot, the only other way to use MCP right now is with the OpenAI Agents framework. That said, OpenAI’s CEO did tweet that support in the desktop app was coming soon:

[Image: Sam sharing some public support for MCP]

And even Google is getting behind MCP...

[Image: Google is also starting to share support for MCP]

In the days since I started writing this, Anthropic announced that Claude on the web can use remote MCP servers and launched with a handful of them.

This is a rapidly evolving space, but we believe there is a good alignment of incentives between people making clients, people making models and people making servers that will drive MCP adoption.

Development Process

Library choice

Step 1 is picking a library/SDK for your MCP server. If you are using:

  • Python
  • Typescript/Javascript
  • Java / Kotlin
  • Swift
  • C#
  • Rust

then you are in luck because there are official SDKs available!

For us at OpsLevel, we wanted to use Go, since we already have a Go SDK and our initial proof of concept for this started out as a subcommand in the OpsLevel CLI. Add to that the simplicity of distributing Go binaries, and it was an easy choice. It also lets us build and publish a Docker container for users or clients that would prefer that.

After looking around, we found two libraries we could work with:

  1. https://github.com/metoro-io/mcp-golang
  2. https://github.com/mark3labs/mcp-go

Our initial work started with mcp-golang, but we switched to mcp-go after running into some JSON schema issues when connecting with VS Code as a client, and after generally finding mcp-go to be higher quality.

Transport choice

Next, we had to choose the transport we wanted to use. MCP defines two ways for clients to talk to servers:

  1. Stdio: the client communicates with the server over stdin and stdout
  2. HTTP with SSE: the client uses HTTP to talk to a server running somewhere

Stdio is a lot easier to get started with, and it’s what we chose to use. Primarily, we chose it because the HTTP transport was still very new and none of the clients supported it when we started working. That has changed recently, and I expect HTTP to become the dominant transport in the future, but we’re trying to get in on the ground floor here.
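
To make that concrete, here’s a minimal sketch of a stdio server built with mcp-go; the server name and the ping tool are invented for illustration and aren’t part of opslevel-mcp:

package main

import (
	"context"

	"github.com/mark3labs/mcp-go/mcp"
	"github.com/mark3labs/mcp-go/server"
)

func main() {
	// The name and version are reported to clients during the MCP handshake.
	s := server.NewMCPServer("example-mcp", "0.1.0")

	// Register a trivial read-only tool; the description is all the model sees.
	s.AddTool(
		mcp.NewTool("ping", mcp.WithDescription("Returns pong. Use this to check connectivity.")),
		func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
			return mcp.NewToolResultText("pong"), nil
		},
	)

	// Speak JSON-RPC over stdin/stdout; the client launches this binary itself.
	if err := server.ServeStdio(s); err != nil {
		panic(err)
	}
}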

Stdio was simple to build, and we didn’t have to worry about hosting. But it does place more of a burden on the user during setup, since they have to get the server binary. We let users brew install opslevel/tap/opslevel-mcp or use the public.ecr.aws/opslevel/mcp:latest Docker image. That works OK for developers (who are our primary customers) but less well for less technical users.
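
For example, wiring the binary into Claude Desktop means adding an entry like this to claude_desktop_config.json (a sketch: we’re assuming here that the server reads its API token from an OPSLEVEL_API_TOKEN environment variable):

{
  "mcpServers": {
    "opslevel": {
      "command": "opslevel-mcp",
      "env": { "OPSLEVEL_API_TOKEN": "<your API token>" }
    }
  }
}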

We’ll talk more about this in the security section below, but I also think the security model is a bit tighter here. The MCP client runs the server binary itself and communicates over pipes. There are no open ports to worry about, and anything that can already run commands on a user’s computer doesn’t need to bother with a random MCP server binary.

Development Env

While building our MCP server, we found that Claude Desktop and VS Code were the easiest clients to develop against. The development loop looked like:

  1. Make changes to the MCP server and rebuild
  2. Switch to MCP client and try out a query
  3. Wait… (LLMs are still pretty slow!)
  4. GOTO 1

Pro tip for Claude Desktop: Cmd+R reloads the Electron app, restarting all of the MCP server connections and speeding up the development loop after backend changes.

The MCP project also has an inspector, which was useful for debugging at a lower level.
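
If you have Node installed, pointing the inspector at a local stdio server is a one-liner, something along these lines (the binary name assumes the brew install above):

npx @modelcontextprotocol/inspector opslevel-mcp

That spins up a small web UI where you can list tools and invoke them by hand, without an LLM in the loop.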

Code Structure

Our MCP server is essentially an API client for OpsLevel. When a tool call comes in, say for users, we make an API request (using our opslevel-go SDK). In fact, most of the opslevel-mcp server consists of exposing existing queries via mcp-go. Here’s an example of the users tool:

s.AddTool(
	// The description doubles as a prompt: it tells the model when to use this tool.
	mcp.NewTool("users", mcp.WithDescription("Get all the user names, e-mail addresses and metadata for the opslevel account. Users are the people in opslevel. Only use this if you need to search all users.")),
	// The handler is a thin wrapper around our existing opslevel-go SDK call.
	func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
		resp, err := client.ListUsers(nil)
		return newToolResult(resp.Nodes, err)
	},
)

So, if you already have an SDK for your service, the programming part of making an MCP server is mostly plumbing.

The other part of making an MCP server is documentation. For the server itself and for each tool (and tool parameter), you write a description. These descriptions are important because they’re all the LLM has access to when deciding which tools to call and how. They are basically prompts, and thus benefit from prompt engineering.
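
In mcp-go, those descriptions hang off the tool and each of its parameters. Here’s a sketch of what a search tool’s definition might look like (the wording and the query parameter are invented, not opslevel-mcp’s actual definition):

mcp.NewTool("documents",
	mcp.WithDescription("Search the API and tech docs in OpsLevel. Use this when the user asks how to operate, deploy or roll back a service."),
	mcp.WithString("query",
		mcp.Required(),
		mcp.Description("Free-text search terms, e.g. 'rollback shopping cart'."),
	),
)

So the obvious question is: how do I know if the descriptions are good?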

Well, that’s the topic for the next blog post. MCP is so new that there are no existing tools for evaluating how well LLMs use an MCP server! We took a stab at writing an evaluation framework, and we’ll talk about what we did and how well it worked.

Security

So, there have been a few articles yelling about the security of MCP servers, and to be fair, there are dangers to worry about. I think there are three broad categories of things that can go wrong in this space.

1. LLM going wild

If you let an LLM run arbitrary SQL commands against your database, there’s a risk that it does something stupid, like dropping important tables in prod, or that it makes a more subtle mistake.

We chose to hedge against this by starting with read-only tools. The opslevel-mcp server can fetch and query data, but it cannot create, update or destroy anything. In the future, when we feel more confident and there is customer demand, we could expose additional capabilities to an AI. I think the protocol could do a better job of marking some tools as “dangerous” or in need of additional approval from users. GitHub’s MCP server addresses this in part by allowing users to specify a “tool list” argument if they don’t want to allow access to merging. You can also address this at the API level by giving the server an API key that doesn’t have dangerous permissions.

2. Insecure access to MCP

Some MCP servers run on user machines listening on localhost. That means anything that can make HTTP requests (like a malicious Chrome extension) could access all the information and abilities exposed by that server. The MCP specification does support authentication using OAuth 2.0/2.1, which would allow servers to process only authenticated requests, instead of all requests.

3. Malicious MCP servers

Since MCP servers provide context to the LLM “in band”, it is possible to trick LLMs using all manner of prompt injection attacks. You could lie in tool descriptions and responses to get the LLM to call a tool and then exfiltrate data available from other MCP servers. You could take over a previously trusted MCP server and run malicious code, since many MCP servers are invoked with npx some-package@latest. Being careful about which MCP servers you use, pinning dependencies, and using Docker can all help mitigate the dangers here, but ultimately users will have to trust MCP servers just like they trust command line tools and Chrome extensions. Part of the reason we chose to go with a binary and a Docker container is that it puts control over updates into the hands of the user.
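
As a small, concrete example of that last point: if your client config invokes a server with npx, pinning an exact version removes the silent-update risk (the package name and version here are made up):

npx -y some-package@1.2.3   # instead of some-package@latest

The same goes for Docker: pulling a specific tag, or better, a digest, beats :latest.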

Launching

The last part of launching our MCP server was getting the word out about it (aka marketing!). We had seen a few MCP server “directories” in our research, so we decided to make a list of all of them. Turns out there are quite a few. Here they are, in no particular order:

  • modelcontextprotocol/servers: the repo under the official modelcontextprotocol GitHub organization
  • MCP.so
  • Smithery.ai
  • cursor.directory (made by fans of the Cursor IDE)
  • Glama MCP servers
  • Awesome MCP Servers
  • Pulse MCP

Some of these involved opening pull requests; others had forms to submit. Only time will tell which of them will still be around in a year or two.

Stay tuned for part two, where we’ll cover the particular challenges we had in measuring how effectively LLMs can use our MCP server, and evals in general. In the meantime, feel free to explore our opslevel-mcp project on GitHub!
