Hardware Constraints and Non-Idealities
This section describes how the Neuronova chip deviates from an ideal neural network and how these constraints can be modeled during training using nwavesdk.
Weight Constraints
All synaptic weights are constrained to a bounded hardware range.
Weights outside this range cannot be reliably programmed on hardware. This constraint is typically enforced during training via regularization; the loss terms provided to achieve this are described in the respective loss section.
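As a minimal sketch of such a regularizer, the penalty below charges a quadratic cost to any weight outside the programmable range. The bounds [-1, 1] and the function name are illustrative placeholders, not the nwavesdk API; substitute the actual hardware limits.

```python
import numpy as np

def range_penalty(weights, w_min=-1.0, w_max=1.0):
    """Quadratic penalty on weights outside the programmable range.

    The bounds [-1, 1] are hypothetical placeholders for the actual
    hardware range.  Returns 0.0 when all weights are in range.
    """
    over = np.maximum(weights - w_max, 0.0)   # excess above the upper bound
    under = np.maximum(w_min - weights, 0.0)  # excess below the lower bound
    return float(np.sum(over**2 + under**2))
```

In training, this term would be scaled by a coefficient and added to the task loss, pushing out-of-range weights back toward the programmable interval.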
Sign Topology Constraint
Each core enforces a sign topology on its weights:
- Neurons are grouped in blocks of 5
- Incoming weights within each block must share the same sign
- Mixed-sign fan-in is penalized
An illustrative diagram of the topology constraint is shown below:
Hardware-imposed sign topology (groups of 5 neurons)
This constraint is also discussed in the respective loss section and can be enforced via topology-aware regularizers.
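One possible shape for such a topology-aware regularizer is sketched below. It is a plain NumPy illustration, not the nwavesdk implementation: output neurons are grouped in blocks of 5, and each block is charged the smaller of its positive and negative incoming weight mass, so the penalty vanishes exactly when the fan-in of every block shares one sign.

```python
import numpy as np

def sign_topology_penalty(weights, block_size=5):
    """Penalize mixed-sign fan-in within neuron blocks.

    weights: array of shape (nb_inputs, nb_outputs), with output
    neurons grouped in consecutive blocks of `block_size`.
    Hypothetical sketch; the actual loss lives in the loss section.
    """
    _, nb_out = weights.shape
    penalty = 0.0
    for start in range(0, nb_out, block_size):
        block = weights[:, start:start + block_size]
        pos = np.sum(np.clip(block, 0.0, None))   # positive mass
        neg = np.sum(np.clip(-block, 0.0, None))  # negative mass
        penalty += min(pos, neg)  # zero iff the block is single-signed
    return float(penalty)
```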
Hardware Non-Idealities
Synaptic Mismatch
Due to fabrication variability, synaptic weights exhibit random mismatch. In NWAVE, this can be modeled as additive noise applied to the effective synaptic weights.
This is controlled via the stddev parameter.
Example: Synaptic Mismatch
from nwavesdk.layers import HWSynapse
syn = HWSynapse(
    nb_inputs=64,
    nb_outputs=64,
    stddev=0.02  # enable synaptic mismatch
)
The nominal variability value, obtained from simulations, is stddev = 4.
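Conceptually, the additive-noise model amounts to perturbing the effective weights with Gaussian noise on each use. The helper below is a standalone NumPy sketch of that idea, independent of the HWSynapse internals:

```python
import numpy as np

def apply_synaptic_mismatch(weights, stddev, rng=None):
    """Model fabrication mismatch as additive Gaussian noise on the
    effective synaptic weights.  Illustrative sketch only; HWSynapse
    applies its own noise model internally."""
    rng = np.random.default_rng() if rng is None else rng
    return weights + rng.normal(0.0, stddev, size=weights.shape)
```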
Leak (Tau) Mismatch
Neuron leak currents (and therefore membrane time constants) also vary across hardware.
This effect can be modeled by enabling ileak_mismatch in hardware layers.
Example: Leak Mismatch
from nwavesdk.layers import HWLayer
layer = HWLayer(
    n_neurons=64,
    taus=20e-3,
    dt=1e-3,
    ileak_mismatch=True
)
Quantization Constraints
For faster deployment, the chip does not rely on full 32-bit floating-point precision.
Synaptic weights can be quantized to a reduced number of bits during training and inference.
Example: Reduced-Precision Synapses
from nwavesdk.layers import HWSynapse
syn = HWSynapse(
    nb_inputs=64,
    nb_outputs=64,
    quantization_bit=6  # use 6-bit quantization
)
Example: Quantized Recurrent Layer
from nwavesdk.layers import HWLayer
layer = HWLayer(
    n_neurons=64,
    taus=20e-3,
    dt=1e-3,
    layer_topology="RC",
    quantization_bit=6
)
Quantization-aware training allows models to remain accurate while being more easily deployable on hardware.
However, quantization is a soft constraint: in principle, weights of any precision can be deployed on chip, but higher precision typically requires more manual tuning of hardware parameters during the deploy phase.
Summary
- The Neuronova chip imposes structural and numerical constraints
- Non-idealities such as mismatch can be explicitly modeled
- Quantization is recommended for rapid and reliable deployment, but is optional
- NWAVE provides the necessary abstractions to handle these effects during training