Metrics
This section describes the metrics used to evaluate models in NWAVE, split into software metrics (used to assess functional performance) and hardware metrics (used to estimate feasibility, cost, and constraint compliance on the H1 Neuronova neuromorphic chip).
Software Metrics
Accuracy
Accuracy measures how often the network predicts the correct class label. It is computed by comparing the model’s predicted class against the provided target labels.
Two accuracy variants are supported, depending on how output neurons are organized.
Plain Accuracy
Use case: one output neuron per class.
Import with:
from nwavesdk.metrics import accuracy
Input
spk: output spike tensor of shape (B, T, C)
where:
- B = batch size
- T = number of time steps
- C = number of classes

targets: tensor of shape (B,) containing ground-truth class indices
How it works
- Spikes are summed over time for each class
- The class with the highest spike count is selected as the prediction
Output
- A scalar accuracy value in [0, 1]
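The computation described above can be sketched as follows. This is an illustrative re-implementation of the logic, not the SDK's `accuracy` function itself:

```python
import torch

def plain_accuracy(spk: torch.Tensor, targets: torch.Tensor) -> float:
    """Illustrative sketch: spk has shape (B, T, C), targets has shape (B,)."""
    counts = spk.sum(dim=1)       # (B, C): spike count per class over time
    preds = counts.argmax(dim=1)  # class with the highest spike count
    return (preds == targets).float().mean().item()

# Toy example: batch of 2, 4 time steps, 3 classes
spk = torch.zeros(2, 4, 3)
spk[0, :, 2] = 1.0   # sample 0 fires only on class 2
spk[1, :2, 0] = 1.0  # sample 1 fires on class 0
targets = torch.tensor([2, 0])
print(plain_accuracy(spk, targets))  # -> 1.0
```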
Population Accuracy
Use case: multiple neurons represent the same class (population coding).
Import with:
from nwavesdk.metrics import accuracy_population
Input
spk: output spike tensor of shape (B, T, O)
where O = num_classes × neurons_per_class

targets: tensor of shape (B,) containing ground-truth class indices
num_classes: number of output classes
How it works
- Output neurons are grouped by class
- Spikes are summed across time and across neurons belonging to the same class
- The class with the highest total spike count is selected
Output
- A scalar accuracy value in [0, 1]
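A minimal sketch of the population decoding step, assuming output neurons are grouped contiguously by class (neurons 0 to neurons_per_class−1 for class 0, and so on); the SDK's `accuracy_population` may use a different grouping convention:

```python
import torch

def population_accuracy(spk: torch.Tensor, targets: torch.Tensor,
                        num_classes: int) -> float:
    """Illustrative sketch: spk has shape (B, T, O), O = num_classes * pop size."""
    B, T, O = spk.shape
    neurons_per_class = O // num_classes
    # Regroup as (B, T, num_classes, neurons_per_class), then sum over
    # time (dim 1) and over the population of each class (dim 3)
    counts = spk.view(B, T, num_classes, neurons_per_class).sum(dim=(1, 3))
    preds = counts.argmax(dim=1)
    return (preds == targets).float().mean().item()

# Toy example: 2 classes, 2 neurons per class
spk = torch.zeros(2, 4, 4)
spk[0, :, 0] = 1.0  # sample 0: class-0 population fires
spk[1, :, 2] = 1.0  # sample 1: class-1 population fires
targets = torch.tensor([0, 1])
print(population_accuracy(spk, targets, num_classes=2))  # -> 1.0
```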
Note
Population accuracy is recommended when deploying to hardware, as it is more robust to mismatch and quantization effects. An appropriate loss that lets the network optimize this metric is explained in the population loss section.
Hardware Metrics
Chip Consumption
get_chip_consumption estimates the average power consumption of a fully
hardware-aware network when mapped to the H1 Neuronova chip.
Import with:
from nwavesdk.metrics import get_chip_consumption
Input
model: a network composed exclusively of hardware-compatible layers
(Frontend, HWLayer, HWSynapse, FakeQuantize)

spks: list of spike tensors, one per hardware layer, each of shape (B, T, N)
dt: simulation timestep in seconds
How it works
The metric is estimated from experimental measurements on the H1 chip and accounts for both dynamic and static contributions to power consumption.
Output
- A single scalar value representing total average power consumption
Unit of measure
- Watts (W)
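To make the dynamic-plus-static decomposition concrete, here is a purely illustrative sketch. The constants below are hypothetical placeholders, not the calibrated values measured on the H1 chip, and the real `get_chip_consumption` model may have a different functional form:

```python
import torch

# Hypothetical constants for illustration only -- NOT calibrated H1 values
P_STATIC_PER_NEURON = 1e-6  # static power per neuron, in W
E_PER_SPIKE = 5e-12         # dynamic energy per spike, in J

def estimate_power(spks: list, dt: float) -> float:
    """spks: list of (B, T, N) spike tensors, one per hardware layer."""
    total = 0.0
    for spk in spks:
        B, T, N = spk.shape
        # Average spikes per second across the batch
        spikes_per_sec = spk.sum().item() / (B * T * dt)
        # Dynamic power (energy per spike x rate) plus static leakage
        total += E_PER_SPIKE * spikes_per_sec + P_STATIC_PER_NEURON * N
    return total  # watts

spk = torch.ones(1, 10, 4)  # 4 neurons all firing every step
print(estimate_power([spk], dt=1e-3))
```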
Warning
Power consumption can only be computed for models that are fully hardware-aware. Mixing software-only layers will raise an error.
Warning
Frontend layers are not included in the power estimation. The Frontend operates
on analog signals from the hardware filterbank, so its power consumption follows a
different model than the spike-based digital synapses (HWSynapse). The current
implementation skips Frontend layers and emits a warning. Frontend power estimation
is not supported yet.
Chip Deployability
is_net_deployable checks whether a model satisfies all architectural and
numerical constraints required for deployment on the H1 Neuronova chip.
Import with:
from nwavesdk.utils import is_net_deployable
Input
model: any PyTorch module in principle; in practice, a module composed of NWAVE layers
What is checked
- Correct layer ordering: Frontend as first layer, HWLayer as second layer, then alternating HWSynapse/HWLayer
- Maximum number of frontend inputs (≤ 16)
- Total number of neurons (≤ 256)
- All parameters within the allowed weight range
- Hardware sign-topology constraints on synaptic matrices
These constraints directly reflect the physical limitations of the chip.
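As an illustration, the numeric constraints above can be checked with a few lines of code. This is a partial, hypothetical sketch of the kind of test `is_net_deployable` performs, not its actual implementation, and it omits the layer-ordering and sign-topology checks:

```python
import torch

# Limits stated in the documentation; weight range is an assumed placeholder
MAX_FRONTEND_INPUTS = 16
MAX_NEURONS = 256
W_MIN, W_MAX = -1.0, 1.0  # illustrative allowed weight range

def check_numeric_constraints(n_frontend_inputs: int,
                              layer_sizes: list,
                              weight_matrices: list) -> bool:
    """Return True only if all numeric hardware limits are respected."""
    if n_frontend_inputs > MAX_FRONTEND_INPUTS:
        return False
    if sum(layer_sizes) > MAX_NEURONS:
        return False
    for W in weight_matrices:
        if W.min().item() < W_MIN or W.max().item() > W_MAX:
            return False
    return True

weights = [torch.tensor([[0.5, -0.5], [0.3, 0.9]])]
print(check_numeric_constraints(8, [16, 2], weights))   # within limits
print(check_numeric_constraints(32, [16, 2], weights))  # too many inputs
```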
Output
- True if the network is deployable
- False otherwise
Tip
When this check fails, training with hardware-aware losses can help.
Topology Coherence
coherence quantifies how well a synaptic weight matrix complies with the
hardware sign-topology constraint enforced by the H1 Neuronova chip.
This metric provides a continuous, interpretable measure of constraint satisfaction, rather than a binary pass/fail signal. It computes how aligned the signs of the weights within each group are: a value of 100% means that all weights in every group share the same sign.
Import with:
from nwavesdk.utils import coherence
Input
W: synaptic weight matrix of shape (N_in, N_out)
Output
- A scalar value in [0, 100] representing the percentage of coherent connections
Unit of measure
- Percent (%)
Note
A coherence value of 100% implies full compliance with the sign-topology constraint and guarantees that this specific constraint will not prevent hardware deployment. Lower values indicate how far the network is from being fully compliant and can be used as a training or debugging signal.
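A minimal sketch of one way to compute such a score, assuming each group is the set of incoming weights of a single output neuron (one column of W); the SDK's `coherence` may use a different grouping and aggregation:

```python
import torch

def coherence_sketch(W: torch.Tensor) -> float:
    """W: (N_in, N_out). Percentage of weights agreeing with their
    column's majority sign (illustrative grouping assumption)."""
    signs = torch.sign(W)  # -1, 0, or +1 per weight
    # Majority sign per column: sign of the column-wise sum of signs
    majority = torch.sign(signs.sum(dim=0, keepdim=True))
    agree = (signs == majority).float()
    return 100.0 * agree.mean().item()

W = torch.tensor([[0.5,  0.2],
                  [0.3, -0.1],
                  [0.8,  0.4]])
# Column 0: all positive (3/3 agree); column 1: two of three agree
print(coherence_sketch(W))  # -> ~83.33
```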
Note
This metric can also be used for LIF or hybrid networks, since it only requires the layer's weight matrix (easily accessible via model.syn_layer.weight).
Summary
- Accuracy metrics evaluate functional correctness in software
- Chip consumption estimates power usage in Watts
- Chip deployability checks strict hardware feasibility
- Topology coherence measures degree of compliance with hardware sign-topology constraints
- All hardware metrics assume a network built using NWAVE hardware abstractions