Hardware-aware Losses
These losses encode constraints and non-idealities of the Neuronova neuromorphic chip. They are designed to regularize network parameters so that trained models can be deployed on hardware.
These losses are optional and composable. They represent one practical toolkit developed at Neuronova, not a universal prescription.
Hardware Constraints Recap
The chip imposes several constraints. Some of them can be addressed during training with penalty losses that reshape the loss landscape so that optimization drives the model toward a mappable configuration. These constraints include:
- Bounded weight range: all weights must lie in the interval \([-0.9, 0.9]\)
- Structured sign topology: groups of neurons must share aligned weight signs, reflecting physical routing constraints
Weight Magnitude Loss
Function: weight_magnitude_loss
Penalizes weights whose absolute value exceeds a specified limit.
Import with:
from nwavesdk.loss import weight_magnitude_loss
Description
For each module in the model:
- All standard weights and recurrent weights are inspected
- Values outside the allowed range are softly penalized
- The penalty is normalized by the total number of parameters
This loss is size-invariant and does not bias larger models.
Mathematical Form
For each weight \(w\), only the portion of \(|w|\) that exceeds the allowed limit contributes a penalty. The final loss is the average of these penalties over all parameters.
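The document does not state the exact penalty function. One plausible squared-hinge form consistent with the "soft penalty" description above (an assumption, not confirmed by the SDK) is:

\[
\ell(w) \;=\; \max\!\left(0,\ |w| - \text{limit}\right)^{2},
\qquad
\mathcal{L}_{\text{mag}} \;=\; \frac{1}{N}\sum_{i=1}^{N} \ell(w_i)
\]

where \(N\) is the total number of parameters, which gives the size-invariance noted above.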
Arguments
- model: PyTorch module
- limit: Maximum allowed absolute weight (default: 0.9)
Returns
A scalar tensor representing the normalized penalty.
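For illustration only, here is a minimal re-implementation of the behavior described above. It assumes a squared-hinge penalty, which this document does not confirm; the actual nwavesdk implementation may differ.

```python
import torch
import torch.nn as nn

def weight_magnitude_sketch(model: nn.Module, limit: float = 0.9) -> torch.Tensor:
    """Softly penalize weights whose absolute value exceeds `limit`,
    normalized by the total parameter count."""
    total = torch.zeros(())
    count = 0
    for p in model.parameters():
        # Zero for weights inside [-limit, limit]; grows quadratically outside.
        excess = (p.abs() - limit).clamp(min=0.0)
        total = total + (excess ** 2).sum()
        count += p.numel()
    # Averaging over all parameters keeps the loss size-invariant.
    return total / max(count, 1)
```

Because the penalty is averaged over all parameters, adding layers does not inflate the loss, matching the size-invariance claim above.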
Topology Loss
Function: topology_loss
Encourages weight sign alignment according to hardware topology constraints.
from nwavesdk.loss import topology_loss
Description
Weights are grouped in non-overlapping blocks of 5 neurons. Within each group, incoming weights are encouraged to have consistent signs.
This reflects:
- Routing constraints of the chip
- Limitations on mixed-sign fan-in
The loss penalizes sign disagreement within each group.
Behavior
- Perfectly aligned signs → zero penalty
- Mixed signs → increasing penalty
- Works on both feedforward and recurrent weights
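The behavior above can be sketched as follows. This is an illustrative re-implementation, not the nwavesdk code: the group size of 5 comes from the description, while the disagreement measure (sum of magnitudes minus magnitude of the sum, which is zero exactly when all signs in a group agree) is an assumption. For brevity, only nn.Linear layers are handled; the real loss also covers recurrent weights.

```python
import torch
import torch.nn as nn

GROUP = 5  # neurons per sign-aligned group, per the topology constraint

def topology_sketch(model: nn.Module, lam: float = 1.0) -> torch.Tensor:
    """Penalize sign disagreement of incoming weights within
    non-overlapping groups of GROUP output neurons."""
    total = torch.zeros(())
    for m in model.modules():
        if isinstance(m, nn.Linear):
            w = m.weight  # shape: (out_features, in_features)
            n_groups = w.shape[0] // GROUP
            if n_groups == 0:
                continue
            g = w[: n_groups * GROUP].reshape(n_groups, GROUP, w.shape[1])
            # Aligned signs give |sum w| == sum |w|, so the difference is zero;
            # any mixed-sign group contributes a positive amount.
            disagreement = g.abs().sum(dim=1) - g.sum(dim=1).abs()
            total = total + disagreement.sum()
    return lam * total
```

Note that perfectly aligned groups contribute exactly zero, and the penalty grows smoothly as signs diverge, matching the behavior listed above.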
Arguments
- model: PyTorch module
- lam: Scaling factor for the topology regularizer
Returns
A scalar tensor proportional to the topology violation penalty.