SystemC is built on one fundamental idea: separate communication from computation. Modules do the work; channels move the data. This separation is what makes SystemC scale from a simple shift-register model all the way to a full virtual platform with multiple CPUs, memories, and interconnects.

But channels didn't arrive fully formed. They evolved — from simple synchronization primitives to hardware signal semantics to full transaction-level protocol stacks. This article traces that evolution and explains why each step existed.

1. What Is a Channel?

In SystemC, a channel is a class that implements one or more interfaces. An interface is a pure virtual base class (inheriting from sc_interface) that defines an API — a set of methods the channel must provide.

Modules connect to channels through ports. A port is typed to an interface: sc_port<my_if>. This means a module doesn't care which channel is connected — it only cares that the channel implements the required interface. Swap the channel, the module never knows.

The triad that makes it work:
Interface → defines the API contract
Channel → implements the interface
Port → connects the module to the channel via the interface

All SystemC channels inherit from one of two base classes: sc_prim_channel (primitive — no hierarchy, no processes, very fast) or sc_channel (hierarchical — can have sub-modules, ports, and processes). This distinction drives the entire evolution.
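The triad can be sketched in plain C++ with no SystemC headers (all names here are invented for illustration): an abstract interface, a channel class that implements it, and a "port" that is nothing more than a pointer typed to the interface.

```cpp
#include <cassert>

// Interface: pure-virtual API contract (stands in for sc_interface)
struct counter_if {
    virtual void increment() = 0;
    virtual int  value() const = 0;
    virtual ~counter_if() = default;
};

// Channel: concrete implementation of the interface
class counter_channel : public counter_if {
    int m_count = 0;
public:
    void increment() override { ++m_count; }
    int  value() const override { return m_count; }
};

// Module: holds a "port" typed to the interface, never to the channel
class consumer_module {
    counter_if* port = nullptr;               // stands in for sc_port<counter_if>
public:
    void bind(counter_if& ch) { port = &ch; } // elaboration-time binding
    void run() { port->increment(); }         // module only sees the API
    int  observed() const { return port->value(); }
};
```

Swapping in a different counter_if implementation requires no change to consumer_module, which is exactly the decoupling the port-interface pair buys you.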


2. Primitive Channels — The Starting Point

Before channels existed, engineers shared data between concurrent processes using raw module member variables and hand-coded events. The problem: two processes running in the same time step with no defined execution order could silently corrupt shared state. Channels were introduced to eliminate this hazard.

sc_mutex — Protecting Shared Resources

The simplest channel: a mutual exclusion lock. One process locks it; others block until it's released.

sc_mutex bus_access;

void write_thread() {
  bus_access.lock();      // block if already held
  // ... perform write ...
  bus_access.unlock();
}

// SC_METHOD variant (non-blocking)
void grab_bus_method() {
  if (bus_access.trylock() == 0) {
    // got it
    bus_access.unlock();
  }
}

Classic use: bus arbitration — multiple masters competing for a shared bus, with the mutex standing in for a real arbiter during early architectural exploration.

Limitation: sc_mutex exposes no event that fires when it becomes free, so it cannot appear in a sensitivity list. An SC_THREAD can simply call lock(), which blocks internally until the mutex is released; an SC_METHOD, which cannot block, must poll with trylock() and re-trigger itself until it succeeds.

sc_semaphore — Counting Resources

A generalization of the mutex: instead of one token, you have N tokens. A semaphore initialized with count 3 allows up to 3 simultaneous holders.

sc_semaphore read_ports(3);   // 3 concurrent reads allowed
sc_semaphore write_ports(1);  // only 1 write at a time

void read(int addr, int& data) {
  read_ports.wait();    // acquire one slot
  // ... perform read ...
  read_ports.post();    // release
}

Practical uses: multiport RAM access control, TDM timeslot allocation, token ring management. Note that sc_semaphore::wait() is implemented internally with wait(event) — it's not the same as the process-level wait().
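The counting discipline behind sc_semaphore can be sketched in plain C++. There is no simulation kernel here, so the blocking wait() is modeled by its non-blocking trywait() flavor, using the SystemC convention of returning 0 on success and -1 on failure:

```cpp
#include <cassert>

// Minimal counting-semaphore model: N tokens, acquire and release.
class counting_semaphore {
    int m_count;
public:
    explicit counting_semaphore(int n) : m_count(n) {}

    // Non-blocking acquire: 0 on success, -1 if no token is available
    // (mirrors sc_semaphore::trywait()).
    int trywait() {
        if (m_count <= 0) return -1;
        --m_count;
        return 0;
    }

    void post() { ++m_count; }               // release one token
    int  get_value() const { return m_count; }
};
```

Initialized with count 1, this degenerates into a mutex; with count N it models N interchangeable resources such as read ports or timeslots.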

sc_fifo<T> — Buffered Data Flow

The most widely used architectural-level channel. A templated first-in-first-out queue with configurable depth (default: 16 elements). It implements two interfaces: sc_fifo_in_if<T> for reading and sc_fifo_out_if<T> for writing.

sc_fifo<double> pipe(24);   // depth-24 FIFO

// Producer SC_THREAD
void producer() {
  pipe.write(3.14);            // blocks if full
}

// Consumer SC_THREAD
void consumer() {
  double val = pipe.read();    // blocks if empty
}

// Non-blocking variant
void method_consumer() {
  double val;
  if (pipe.nb_read(val)) {
    // got data
  } else {
    next_trigger(pipe.data_written_event());
  }
}

sc_fifo<T> naturally models Kahn process networks: blocking reads and writes with bounded buffers. It decouples producer and consumer rates, which is exactly what you need when modeling an image processing pipeline, a communication stack, or a bus-to-peripheral data path.
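The non-blocking side of this contract can be sketched as a bounded ring buffer in plain C++: nb_write() fails when the buffer is full and nb_read() fails when it is empty, which is the behavior sc_fifo exposes to SC_METHODs.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Bounded FIFO with non-blocking read/write, mirroring sc_fifo's nb_* calls.
template <typename T>
class bounded_fifo {
    std::vector<T> m_buf;
    std::size_t m_head = 0, m_size = 0;
public:
    explicit bounded_fifo(std::size_t depth = 16) : m_buf(depth) {}

    bool nb_write(const T& v) {                 // returns false if full
        if (m_size == m_buf.size()) return false;
        m_buf[(m_head + m_size) % m_buf.size()] = v;
        ++m_size;
        return true;
    }

    bool nb_read(T& v) {                        // returns false if empty
        if (m_size == 0) return false;
        v = m_buf[m_head];
        m_head = (m_head + 1) % m_buf.size();
        --m_size;
        return true;
    }

    std::size_t num_available() const { return m_size; }
};
```

In SystemC the failed call would be paired with next_trigger() on the data_written or data_read event; here the boolean return alone captures the protocol.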


3. The Evaluate-Update Paradigm — Hardware Fidelity

The mutex and semaphore channels are software synchronization primitives. Hardware simulation needs something more: all sinks must see a signal update at the same simulation time. This is the fundamental requirement that the shift register problem exposes.

Consider four registers in a chain. Each register is a concurrent process. In hardware, they all sample their input simultaneously on the clock edge and all update simultaneously. In naive software simulation with no ordering guarantee, a chain of assignments corrupts:

// WRONG for concurrent hardware model
Q4 = Q3;
Q3 = Q2;
Q2 = Q1;
Q1 = DATA;  // Q4 now has DATA, not the old Q3

The solution is the evaluate-update (delta-cycle) paradigm, baked into sc_signal<T>.

How Delta Cycles Work

Every sc_signal<T> maintains two storage locations: a current value (what processes read) and a new value (what processes write to). The simulation kernel runs in two phases:

1. Evaluate: run every process that is ready to run. Writes go into the signal's new value; reads return the current value.
2. Update: for every signal written during evaluate, copy the new value into the current value and, if it actually changed, notify the value-changed event.

This evaluate → update cycle is a delta cycle. Multiple delta cycles can happen at the same simulation timestamp. Time only advances when no more deltas are pending.

sc_signal<int> count_sig;

// During evaluate phase:
count_sig.write(10);            // stores 10 in "new value"
int v = count_sig.read();       // still returns OLD value!

wait(SC_ZERO_TIME);             // let update phase run

v = count_sig.read();           // NOW returns 10

This behavior surprises software engineers and delights hardware engineers. It's identical to VHDL's signal and Verilog's non-blocking assignment (<=). The current value is frozen until the update phase completes — which is exactly how flip-flops work in silicon.
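The two-slot storage is easy to model in plain C++. This sketch (not the real sc_signal implementation) separates write(), which stores into the new-value slot, from update(), which plays the role of the kernel's update phase; read() keeps returning the stale current value until update() runs:

```cpp
#include <cassert>

// Evaluate-update storage: reads see the old value until update() commits.
template <typename T>
class two_phase_signal {
    T m_cur{};             // current value: what read() returns
    T m_new{};             // new value: what write() stores
    bool m_pending = false;
public:
    void write(const T& v) { m_new = v; m_pending = true; }  // evaluate phase
    const T& read() const { return m_cur; }                  // always current value

    // Kernel update phase: commit the pending write.
    // Returns true if the value actually changed
    // (when a value-changed event would fire).
    bool update() {
        bool changed = m_pending && !(m_cur == m_new);
        if (m_pending) { m_cur = m_new; m_pending = false; }
        return changed;
    }
};
```

Chaining four of these and calling update() on all of them only after every write has been issued reproduces the correct shift-register behavior from the example above.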

sc_signal<T> API

sc_signal<bool> clk;
sc_signal<int>  data;

// Sensitivity (static)
sensitive << clk.posedge_event();
sensitive << data.value_changed_event();

// Runtime wait
wait(data.value_changed_event());

// Edge detection (bool/sc_logic specialization only)
wait(clk.posedge_event() | clk.negedge_event());
if (clk.posedge()) { /* clock rose in last delta */ }

Multi-Driver Channels: sc_signal_resolved

Standard sc_signal<T> enforces a single-writer rule: at most one process may write per delta cycle. For tri-state buses where multiple drivers compete, SystemC provides sc_signal_resolved and sc_signal_rv<W>. These allow multiple writers and apply a resolution table (0+Z=0, 1+Z=1, 0+1=X) to produce the bus value — the same logic a real tri-state buffer implements in silicon.
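The resolution rule can be sketched as a small function over the four logic states. This is plain C++ using char states '0', '1', 'Z', 'X', not the actual sc_logic resolution table, but it follows the same rules quoted above:

```cpp
#include <cassert>

// Resolve two tri-state drivers: Z yields to the other driver,
// agreement wins, and any real conflict (0 vs 1, or X anywhere)
// produces X.
char resolve(char a, char b) {
    if (a == b)   return a;    // agreement, including Z+Z = Z
    if (a == 'Z') return b;    // high-impedance yields to the active driver
    if (b == 'Z') return a;
    return 'X';                // 0 vs 1 conflict, or X involved
}
```

Folding resolve() over all drivers of a net gives the resolved bus value each delta cycle, which is essentially what sc_signal_resolved does.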


4. Ports, Interfaces, and Exports — The Connectivity Model

As designs grow, wiring channels directly becomes unmanageable. The port-interface-export system solves this at scale.

Ports and Interfaces

A port is typed to an interface: sc_port<sc_fifo_in_if<Pkt>>. The module calls methods on the port; the port forwards them to whatever channel is bound at elaboration time. This makes modules completely independent of channel implementation.

SC_MODULE(Consumer) {
  sc_port<sc_fifo_in_if<int>> in;   // typed to interface

  void run() {
    int val = in->read();            // calls channel's read()
  }
};

sc_export — Moving Channels Inside

Sometimes the channel belongs inside a module (think: a memory model that has its own internal FIFO for request queuing). sc_export<interface> exposes that interface from within:

SC_MODULE(Memory) {
  sc_export<sc_signal_in_if<int>> data_out;
  sc_signal<int> m_data;

  SC_CTOR(Memory) {
    data_out(m_data);  // bind export to internal channel
  }
};

The outside world binds to the export as if it were a port. The channel stays encapsulated. This also enables one channel to expose multiple interfaces — a channel inheriting from two interface classes can have two exports, one per interface.


5. Custom Channels — When Built-ins Aren't Enough

Built-in channels cover the common cases, but real projects routinely hit ones the standard doesn't anticipate. That's where custom channels come in, and they split into two types.

Primitive Custom Channels (sc_prim_channel)

Fast, no hierarchy, no processes. Ideal for simple signaling mechanisms. Here's a custom interrupt channel — cleaner than abusing sc_signal<bool> for events:

// Virtual inheritance: a channel implementing both interfaces
// must end up with a single sc_interface base subobject.
class eslx_interrupt_gen_if : virtual public sc_interface {
public:
  virtual void notify() = 0;
  virtual void notify(sc_time t) = 0;
};

class eslx_interrupt_evt_if : virtual public sc_interface {
public:
  virtual const sc_event& default_event() const = 0;
};

class eslx_interrupt
  : public sc_prim_channel
  , public eslx_interrupt_gen_if
  , public eslx_interrupt_evt_if
{
public:
  eslx_interrupt()
    : sc_prim_channel(sc_gen_unique_name("intr")) {}

  void notify()             { m_intr.notify(); }
  void notify(sc_time t)    { m_intr.notify(t); }
  const sc_event& default_event() const { return m_intr; }

private:
  sc_event m_intr;
  eslx_interrupt(const eslx_interrupt&) = delete;  // channels are not copyable
};

The channel can now be used in static sensitivity lists (sensitive << intr_ch) because it exposes default_event().

Hierarchical Custom Channels (sc_channel)

sc_channel is a typedef for sc_module. A hierarchical channel is literally a module that implements interfaces. It can have its own sub-modules, ports, processes, and internal state. This is the right tool for complex bus protocols.

Examples from industry: an AMBA AXI channel that handles burst transactions, handshaking, and channel arbitration internally — while exposing a simple read(addr) / write(addr, data) interface to the modules that connect to it. The protocol complexity is hidden inside the channel. Swap it for an RTL-accurate version later without touching the master or slave modules.
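The shape of such a channel can be sketched in plain C++: a narrow read/write interface on the outside, with the protocol detail hidden inside (here a simple backing store stands in for the arbitration, bursting, and handshaking a real AXI channel would implement). All names are invented for illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// The narrow API exposed to bus masters; the protocol hides behind it.
struct simple_bus_if {
    virtual void     write(uint32_t addr, uint32_t data) = 0;
    virtual uint32_t read(uint32_t addr) = 0;
    virtual ~simple_bus_if() = default;
};

// "Hierarchical channel": implements the interface and keeps all
// protocol state internal. Masters never see how transfers happen.
class simple_bus_channel : public simple_bus_if {
    std::map<uint32_t, uint32_t> m_mem;   // stands in for the slave side
public:
    void write(uint32_t addr, uint32_t data) override { m_mem[addr] = data; }
    uint32_t read(uint32_t addr) override {
        auto it = m_mem.find(addr);
        return it == m_mem.end() ? 0 : it->second;
    }
};
```

Because masters are coded against simple_bus_if only, this channel can later be replaced by a cycle-accurate model with the same interface and no master changes.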

Feature                    Primitive (sc_prim_channel)      Hierarchical (sc_channel)
Hierarchy / sub-modules    No                               Yes
Simulation processes       No                               Yes
Ports                      No                               Yes
Evaluate-update support    Yes (request_update)             No
Speed                      Fast                             Slower
Best for                   Simple signals, FIFOs, events    Complex buses, transactors

6. TLM — The Channel Becomes a Protocol

RTL-style channels (signals, FIFOs, pin-toggling) simulate accurately but slowly. For a full SoC with CPUs executing millions of instructions, you need to simulate communication at a much higher level. This is the motivation for Transaction-Level Modeling (TLM).

Instead of wiggling wires, a TLM channel transports an entire transaction — a struct describing a bus operation (address, data, command, burst length) — in a single function call.
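The idea can be sketched as a plain C++ struct and one call. The field names below are illustrative, not the actual TLM generic payload:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One transaction = one function call, instead of many pin toggles.
enum class bus_cmd { READ, WRITE };

struct bus_transaction {
    bus_cmd  command;
    uint32_t address;
    std::vector<uint8_t> data;   // payload for WRITE, filled in on READ
    unsigned burst_length;
};

// A toy target: services an entire burst in a single call.
class toy_target {
    std::vector<uint8_t> m_mem = std::vector<uint8_t>(256, 0);
public:
    void transport(bus_transaction& trans) {
        for (unsigned i = 0; i < trans.burst_length; ++i) {
            uint32_t a = trans.address + i;
            if (trans.command == bus_cmd::WRITE) m_mem[a] = trans.data[i];
            else                                 trans.data[i] = m_mem[a];
        }
    }
};
```

A pin-accurate model of the same burst would need dozens of clocked events per beat; here it costs one function call, which is where the multi-order-of-magnitude TLM speedup comes from.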

TLM 1.0 — Standardizing the Interface

The OSCI TLM 1.0 standard (integrated into SystemC) defines three interface categories:

1. Unidirectional blocking: tlm_blocking_put_if<T>, tlm_blocking_get_if<T>, tlm_blocking_peek_if<T> (the call waits until it can complete)
2. Unidirectional non-blocking: tlm_nonblocking_put_if<T>, tlm_nonblocking_get_if<T>, tlm_nonblocking_peek_if<T> (the call returns immediately with a success flag)
3. Bidirectional transport: tlm_transport_if<REQ, RSP> (one call carries a request in and brings a response back)

And the channels that implement them:

// Unidirectional FIFO
tlm_fifo<Packet> pipe(8);     // depth-8 TLM FIFO

// Request/response pair (two FIFOs)
tlm_req_rsp_channel<BusReq, BusRsp> bus_chan;

// Bidirectional transport
tlm_transport_channel<BusReq, BusRsp> xport_chan;

TLM 2.0 — The Industry Standard

TLM 2.0, now part of IEEE 1666-2011, goes further. It introduces:

1. The generic payload (tlm_generic_payload): a standard transaction type for memory-mapped buses, the key to interoperability between models from different vendors
2. Initiator and target sockets, which bundle the forward and backward interfaces into a single binding point
3. Blocking (b_transport) and non-blocking (nb_transport_fw / nb_transport_bw) transport calls for loosely-timed and approximately-timed modeling
4. The direct memory interface (DMI) and debug transport for fast backdoor memory access
5. Temporal decoupling with a quantum keeper, letting an initiator run ahead of global simulation time

With TLM 2.0, a CPU model calls socket->b_transport(payload, delay). The channel (or target) handles the protocol, timing, and routing — all hidden behind a single function call. A full ARM Cortex-A SoC simulates at hundreds of MIPS this way.


The Full Picture

The evolution of channels maps directly onto the abstraction levels of hardware design:

Channel                    Abstraction            Models
sc_mutex / sc_semaphore    Algorithmic            Resource contention, arbitration
sc_fifo<T>                 Architectural          Data flow, pipeline buffering
sc_signal<T>               RTL / Cycle-accurate   Hardware wires, flip-flop semantics
Custom primitive           Any                    Domain-specific events, adaptors
Hierarchical channel       Bus / Protocol         AMBA, AXI, PCI, transactors
TLM 1.0 channels           Transaction-level      Abstract data transfer
TLM 2.0 sockets            System-level           Full SoC virtual platforms

The channel-interface-port triad is the same across every level. What changes is the granularity of communication: from toggling a single bit on a wire, to passing a multi-kilobyte DMA transfer in one function call. That's the abstraction ladder SystemC was designed to climb — and channels are the rungs.