Some tools serve a simple purpose but use complex systems to get the job done. Take an excavator—it digs holes and moves earth from one place to another. The task is easy to understand, even for a child, but lifting hundreds of kilos of dirt is no small feat. It takes several mechanisms working together, all controlled from the driver's seat, allowing the operator to drive the machine and move its arm smoothly.

The wiring and controls hide the machine's inner complexity. The driver doesn't need to know how everything works—just which lever to pull and how to move the arm with the stick. This idea shows up often when we talk about interfaces: they simplify what we need to know to use something. In this article, we'll look at interfaces—what they're for, how we use them, and what benefits they bring.

Interfaces in the Wild

Interfaces aren't just found in software—they're everywhere. Many tools follow shared standards, which act like interfaces in the real world. We don't always call them that, but the idea is the same. A good standard doesn't tell you how to build something—just what it should do and where its limits are.

The Allen wrench is a great example of a standard. It fits any bolt with a hexagonal socket, no matter what you're building or taking apart. As long as the shape matches, it works—simple as that.

This “doesn't need to care” quality keeps things simple. If an Allen wrench only worked with hex bolts made of a specific steel grade and coated with tin, it'd be limited to a niche industry—and mostly used by specialists.

Can we replace an Allen wrench? Sure—as long as the tool fits the hex head and can handle the needed torque. A motorized screwdriver with a hex bit, for example, can do the same job faster.

Standardization brings two key benefits. First, it gives users flexibility. With fewer constraints, tools become more reusable—hex bolts come in many sizes and serve many purposes, yet still work with the same wrench.

Second, it brings certainty. When something meets a standard, we can trust it to work. That means less trial and error and no wasted time on tools that don't do the job.

Some interfaces hide the complexity behind the scenes so users only deal with what matters. Take a heavy door with a lever—it might use counterweights or pneumatic systems, but all the user needs to do is pull the lever and the door opens.

A modern example is a car. Every driver uses the same basic controls—gas, brake, and steering wheel—without needing to understand how the engine works. That's abstraction: hiding the complex parts and showing only what the user needs.

We brought this concept to software engineering.

Managing Complexity in Code

Interfaces are most common in statically typed languages, where they balance strict typing with flexibility. Dynamically typed languages usually don't need them, since a function can accept any type at runtime.

Machines don't need interfaces—we do. Interfaces help us manage types, organize code, and make things easier to understand. As systems grow more complex, abstraction becomes essential. Interfaces also let different developers work independently, as long as they stick to the same contract.

What makes code complex? I use three criteria:

  1. How much context you need to hold to understand it
  2. How many steps it takes to get the job done
  3. How many possible outcomes it can have

Principles like Single Responsibility exist to reduce this complexity. When a function tries to do too much, it increases the mental load, the number of steps, and the chances of unexpected behavior.
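As a sketch of how splitting responsibilities reduces those three measures, here's a hypothetical pair of helpers (the names and rules are mine, not from any real codebase). Each one needs little context, takes one step, and has few outcomes:

```go
package main

import (
	"fmt"
	"strings"
)

// A function that validated, normalized, and saved an email in one
// body would force the reader to hold all three concerns at once.
// Splitting it keeps each piece small:

// validateEmail has exactly two outcomes: valid or an error.
func validateEmail(email string) error {
	if len(email) == 0 {
		return fmt.Errorf("email is empty")
	}
	return nil
}

// normalizeEmail performs one step with one outcome, so the
// context needed to understand it stays minimal.
func normalizeEmail(email string) string {
	return strings.ToLower(strings.TrimSpace(email))
}

func main() {
	fmt.Println(normalizeEmail("  Alice@Example.COM "))
}
```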

A classic sign of complexity is when unrelated features break each other. For instance, you switch your theme from light to dark—and suddenly you can't log out. That means theme changes are somehow affecting the logout logic. This is an example of accidental coupling. The more things talk to each other, the more things can go wrong. That's why the principle of least communication is useful—it helps keep interactions limited and behavior predictable.

The principle of least communication says a function should get only the data it needs. But in a growing codebase, this can feel inconvenient—you have to keep updating inputs as the function changes. So some people just pass the whole data structure to avoid that.

The problem? Now it's unclear what the function actually depends on. When it's time to refactor, you're stuck wondering: Can I change this field? Does it affect anything?

This makes updates riskier and harder. But with least communication, you know every input matters—no guessing required.
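Here's a minimal sketch of the difference, with a hypothetical User struct. The second signature documents its only dependency; the first hides it:

```go
package main

import "fmt"

// Hypothetical record with many fields.
type User struct {
	Name    string
	Email   string
	Address string
	Theme   string
}

// Violates least communication: the whole struct comes in, so a
// reader can't tell which fields the function actually depends on.
func greetUser(u User) string {
	return "Hello, " + u.Name
}

// Follows least communication: the signature shows the only
// dependency, so refactoring the other fields is clearly safe.
func greet(name string) string {
	return "Hello, " + name
}

func main() {
	u := User{Name: "Alice"}
	fmt.Println(greetUser(u))
	fmt.Println(greet(u.Name))
}
```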

When two teams work in parallel, they can move independently—as long as they follow the same interface. Shared assumptions lead to smooth communication, and any bugs will come from implementation, not misunderstandings.

The catch? If the interface changes, both sides need to update their work. That's why it's crucial to define interfaces early, with the full system in mind.

When I first learned about interfaces, they were introduced through polymorphism—and it just didn't click. As a beginner, I hadn't seen complex enough code to understand why adding extra layers made sense. It felt like more steps to do the same thing.

Years later, after working in several codebases, I saw the value: polymorphism lets you swap one type for another as long as it fits the interface. It doesn't matter how different the implementations are.

For example, linear regression and decision tree regression use different logic, but if they take the same input and produce the same kind of output, you can switch between them without breaking the system.
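A minimal sketch of that idea, with stand-in models (constant stubs, not real regression logic). The caller depends only on Predict, so either model can slot in:

```go
package main

import "fmt"

// Hypothetical interface both models satisfy.
type Regressor interface {
	Predict(x float64) float64
}

// Stand-in for linear regression: y = a*x + b.
type Linear struct{ A, B float64 }

func (l Linear) Predict(x float64) float64 { return l.A*x + l.B }

// Stand-in for a decision tree with a single split.
type Stump struct{ Split, Left, Right float64 }

func (s Stump) Predict(x float64) float64 {
	if x < s.Split {
		return s.Left
	}
	return s.Right
}

func main() {
	models := []Regressor{Linear{A: 2, B: 1}, Stump{Split: 0, Left: -1, Right: 1}}
	for _, m := range models {
		// Same call site, completely different implementations.
		fmt.Println(m.Predict(3))
	}
}
```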

Ownership

An interface connects two parties: the provider and the subject. The provider implements the functionality, and the subject calls it, receives results, and moves forward. Some interfaces focus on what the subject needs, others on what the provider can do, and some just define shared common behavior. Let's take a closer look at these three approaches.

Sometimes the provider creates the interface that the subject uses. Take Redis, for example—it offers an interface for caching (set, get, delete) and another for pub/sub messaging. Even though Redis provides both, it's helpful to treat them as separate interfaces. This arrangement is less common, but it's a perfectly valid way to organize things, and it's similar to how Factory Methods work.
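As a sketch of this arrangement—one provider, several narrow interfaces—here's an in-memory stand-in (the names and methods are illustrative, not from any real Redis client library):

```go
package main

import "fmt"

// Two narrow interfaces for one provider.
type Cache interface {
	Set(key, value string)
	Get(key string) (string, bool)
}

type Publisher interface {
	Publish(channel, message string)
}

// One concrete provider satisfies both interfaces, while each
// subject only sees the slice of functionality it needs.
type memoryStore struct{ data map[string]string }

func (m *memoryStore) Set(key, value string) { m.data[key] = value }

func (m *memoryStore) Get(key string) (string, bool) {
	v, ok := m.data[key]
	return v, ok
}

func (m *memoryStore) Publish(channel, message string) {
	fmt.Println("publish", channel, message)
}

func main() {
	store := &memoryStore{data: map[string]string{}}
	var c Cache = store     // caching subjects see only Set/Get
	var p Publisher = store // messaging subjects see only Publish
	c.Set("greeting", "hello")
	v, _ := c.Get("greeting")
	fmt.Println(v)
	p.Publish("events", "user-created")
}
```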

Most often, the subject defines the interface, and the provider works to meet it. The subject doesn't care how the provider does it. So, if the provider changes, the subject keeps working fine—if the interface stays the same.

For example, switching from a relational to a non-relational database only impacts the provider side, as long as the contract with the subject doesn't change. This makes change easier and sets clear boundaries.
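A minimal sketch of a subject-owned interface, with an in-memory map standing in for a real database (all names here are illustrative):

```go
package main

import "fmt"

// The subject defines what it needs from storage.
type UserStore interface {
	Save(id, name string)
	Find(id string) (string, bool)
}

// A provider backed by a map stands in for a relational database;
// a document-store provider could satisfy the same interface.
type mapStore struct{ rows map[string]string }

func (s *mapStore) Save(id, name string) { s.rows[id] = name }

func (s *mapStore) Find(id string) (string, bool) {
	n, ok := s.rows[id]
	return n, ok
}

// The subject only talks to the interface, so swapping providers
// never touches this code.
func register(store UserStore, id, name string) {
	store.Save(id, name)
}

func main() {
	store := &mapStore{rows: map[string]string{}}
	register(store, "1", "Alice")
	name, _ := store.Find("1")
	fmt.Println(name)
}
```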

One thing to note: a subject can be a provider from another perspective. Whether a component counts as subject or provider depends on the relationship you're looking at.

Finally, some interfaces don't belong strictly to a subject or provider. Take Go's Reader interface—it's used for reading from files, networks, and more. It's so common, it acts like a universal language.

Reader isn't tied to your app or a specific file; it's the middle ground where both meet. This lets your app read from many providers, and providers can serve many subjects.

Usages

Let's look at common ways interfaces are used in real code. These examples will also highlight the concepts we've discussed.

We'll cover:

  1. Go's Reader interface
  2. Factory Method
  3. Strategy Method
  4. Mock testing
  5. OSI Layer

All examples use Go, but don't worry if you're new to it. If you know basic programming like types, methods, and functions, you'll get the idea.

Go's Reader

We brought up Go's Reader interface earlier. Let's explore how it's used in real codebases. An interface in Go is defined as a collection of method signatures.

Its Read method transfers data into the byte slice you provide. The n return value is the number of bytes read. When there are no more bytes to read, it returns the error io.EOF. Many data sources in a computer can be read as byte streams—two common examples are a file and an HTTP response body.

When reading a file, you should cap how much you read, especially if the file comes from public uploads. Users might upload extremely large files that could overload or crash your application. You can allocate a buffer with a maximum number of bytes to read safely—say, 1 kilobyte.

If the file is larger than 1 kilobyte, you only read the first 1 kilobyte and ignore the rest. If the file is smaller, the read may return io.EOF, indicating the end of the data stream. The contents read so far will be stored in the buffer.

func reads(object io.Reader) ([]byte, error) {
  buffer := make([]byte, 1024)
  n, err := object.Read(buffer)
  if err != nil && err != io.EOF {
    return []byte{}, err
  }
  return buffer[:n], nil
}

The same function can handle an HTTP response body. We can reuse it because both *os.File and http.Response.Body implement the Reader interface. However, because the input is an interface, object can only access the Read method, even if the actual implementation has many other methods. While this seems restricting, it is a good thing: it aligns with our goal of minimizing complexity. By constraining the type, we can be sure this function only uses Read.

Unlike languages such as TypeScript, Java, and C#, which allow properties or fields as part of an interface, a Go interface contains only method signatures.

Strategy Method

There are many repeatable patterns that software engineers use to solve problems. One common problem is that different types perform the same task in different ways. Sounds abstract? Let's look at a concrete example.

Assume you have three equally popular vendors for transferring money, and users can choose any of them. However, the three have different requirements. The first vendor requires a transfer token from their server. The second requires embedding a unique identifier. The third requires encrypting the request using a specific method and a provided key. All three offer the same functionality but implement it differently—something quite common.

A common solution is to create a strategy function that returns an interface with a method such as Transfer(amount int, destination string). Interfaces allow different implementations to be used interchangeably, which is crucial when implementations vary but the purpose stays the same. This function takes the selected vendor as input. It might look something like this:


type TransferProvider interface {
  Transfer(amount int, destination string) error
}

func NewTransfer(vendor string) TransferProvider {
  ...
}

Each vendor must implement the Transfer method to satisfy the TransferProvider interface. The caller of Transfer doesn't need to know the internal requirements of the transfer process. That logic is encapsulated in the implementation.

What if we want to add a new vendor with different requirements? We create a new type, ensure it has a Transfer method so it satisfies the TransferProvider interface, and update NewTransfer to return the new type when selected. Callers of the interface don't need to change when a new vendor is added.
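A minimal sketch of the whole pattern, with two hypothetical vendors (real ones would carry tokens, identifiers, or encryption keys inside their Transfer implementations):

```go
package main

import "fmt"

type TransferProvider interface {
	Transfer(amount int, destination string) error
}

// Hypothetical vendor implementations.
type vendorA struct{}

func (vendorA) Transfer(amount int, destination string) error {
	fmt.Println("vendor A transfers", amount, "to", destination)
	return nil
}

type vendorB struct{}

func (vendorB) Transfer(amount int, destination string) error {
	fmt.Println("vendor B transfers", amount, "to", destination)
	return nil
}

// NewTransfer picks the implementation; callers see only the interface.
func NewTransfer(vendor string) TransferProvider {
	switch vendor {
	case "b":
		return vendorB{}
	default:
		return vendorA{}
	}
}

func main() {
	p := NewTransfer("b")
	_ = p.Transfer(100, "acct-42")
}
```

Adding a third vendor means writing one more type with a Transfer method and one more case in the switch; no caller changes.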

The strategy method abstracts the implementation details, so the caller only decides 'what' to use, not 'how' it's implemented—which is exactly what NewTransfer does. Most design pattern definitions seem abstract because they use non-concrete, often unfamiliar terminology. I started with a real-world case so you can understand what the pattern solves, and then mapped it back to the abstract definition so you can recognize it in the future.

Mock Testing

Software needs to be tested to ensure that a particular function behaves correctly given specific inputs. However, sometimes access to external services is limited—either because using the real service incurs cost, introduces latency, or is restricted by regulation or environment (e.g., production-only access).

In such cases, developers use mocking to replace real service interactions with dummy implementations. These mock services are designed to match the interface of the real ones and respond with controlled outputs based on known assumptions. The goal is to isolate the logic under test and simulate predictable behavior, allowing tests to run quickly, reliably, and without side effects.

Making a service swappable requires it to be accessed through an interface. In a real setting, data is sent to the actual service, such as an external API or database. In a mock setting, the data stays within your system—no real network or external communication occurs.

However, since both the real and the mock services implement the same interface, the rest of the system doesn't need to care which one is being used. This separation allows tests to run in isolation, while still exercising the logic that depends on the service interaction.
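A minimal sketch, with a hypothetical Mailer interface: the mock records calls instead of touching a network, and the logic under test never knows the difference:

```go
package main

import "fmt"

// The subject depends on this interface, so a mock can stand in
// for the real service in tests.
type Mailer interface {
	Send(to, body string) error
}

// A real implementation would talk to an SMTP server or an API.
// The mock just records calls so a test can inspect them.
type mockMailer struct{ sent []string }

func (m *mockMailer) Send(to, body string) error {
	m.sent = append(m.sent, to)
	return nil
}

// Logic under test: it only sees the interface.
func notify(m Mailer, users []string) error {
	for _, u := range users {
		if err := m.Send(u, "welcome"); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	mock := &mockMailer{}
	if err := notify(mock, []string{"a@x", "b@x"}); err != nil {
		panic(err)
	}
	// Two messages recorded, no network used.
	fmt.Println(len(mock.sent))
}
```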

OSI Layer

One more set of interfaces we rely on daily powers the internet. Accessible from many devices and locations, the internet depends on layered interfaces to deliver information securely and reliably. It is structured in seven layers, commonly known as the OSI model (listed here from top to bottom; the OSI numbering runs the other way, with Physical as layer 1):

  1. Application (what we see)
  2. Presentation
  3. Session
  4. Transport
  5. Network
  6. Data link
  7. Physical

I won't go into detail about each layer, but I'll highlight how some layers interact with one another.

What we see—the actual application—is at the application layer. The presentation layer converts data into a format the application can use. It doesn't care how that data is displayed or interpreted; it only ensures it's in a usable format. If the application doesn't support a file type, that's not the presentation layer's concern.

The session layer manages the data stream. It starts, maintains, and terminates connections but doesn't care what the data contains or how it's formatted—that's the presentation layer's responsibility. This separation of concerns continues across the entire model.

Layering allows us to focus on one level while preserving overall functionality. Each layer communicates with the one directly above or below it through a clearly defined interface. For example, the presentation and application layers agree on data formats. An HTTP message contains headers and a payload, with the headers and body separated by a blank line (\r\n\r\n). A JPEG file follows a standard image format, so the application layer knows to open it with a media viewer or similar tool.

Conclusion

We've learned that an interface defines and limits communication between two parties. While this may seem restrictive, it actually reduces the number of things that can go wrong. It also allows us to swap one service with another, as long as both conform to the same interface—ensuring the system remains stable.

This advantage might feel negligible in a small codebase, but in multi-repository, multi-team environments, it becomes essential. Each team can focus solely on the interface and their own implementation. Management, too, can allocate resources more confidently, knowing that coordination depends only on agreed-upon interfaces.

Although interfaces don't exist as concrete artifacts, they reduce complexity by managing how components interact. As we've seen, both software and hardware benefit from this approach. Planning system interactions early—through well-designed interfaces—can help maintain clarity and keep complexity under control.