We need a larger drain resistor to achieve higher gain, but more drain resistance means a lower DC bias voltage at the output node.
This is a problem because the output voltage is also the MOSFET's drain voltage, and a lower drain voltage raises the risk of pushing the FET out of saturation and into the triode region. We suggested that a current source might resolve this problem by providing high gain without degrading the bias conditions. The following diagram illustrates the improved biasing situation that results from using a current mirror instead of drain resistors.
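As a rough sanity check, a short numerical sketch (with assumed, purely illustrative values for the supply voltage, bias current, and transconductance) makes the trade-off concrete: increasing $R_D$ raises the gain magnitude $g_m R_D$ but lowers the DC drain voltage $V_{DD} - I_D R_D$.

```python
# Rough numerical sketch (all component values are assumed, not taken from
# the original discussion): a larger drain resistor raises the gain of a
# resistively loaded common-source stage but eats into the DC drain bias.

VDD = 3.3      # supply voltage, volts (assumed)
I_D = 0.5e-3   # drain bias current, amps (assumed)
g_m = 2e-3     # transconductance at this bias point, siemens (assumed)

for R_D in (1e3, 3e3, 6e3):
    gain = g_m * R_D          # small-signal gain magnitude, |A_v| ~ g_m * R_D
    V_D = VDD - I_D * R_D     # DC bias voltage at the drain (output) node
    print(f"R_D = {R_D/1e3:3.0f} kΩ -> |A_v| ≈ {gain:4.1f}, V_D ≈ {V_D:4.2f} V")
```

With these assumed numbers, raising $R_D$ from 1 kΩ to 6 kΩ multiplies the gain by six but drops the drain bias from 2.8 V to 0.3 V, right at the edge of the triode region.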
The current mirror's output transistor presents its small-signal output resistance, $r_o$, to the output node, and this resistance is far larger than any practical drain resistance, which is why the gain is so much higher. But as you can see in the circuit diagram, this large small-signal resistance does not apply to the biasing conditions: the bias voltage at the output node is determined by whatever gate-to-source voltage corresponds to $Q_3$'s drain current.
If we consider that this drain current is not particularly large and that $Q_3$'s threshold voltage is maybe 0.7 V, we can guess that the magnitude of $V_{GS}$ will not be much larger than 0.7 V. In other words, the output node's bias voltage remains close to the supply rail, even though the current mirror's large small-signal resistance gives us high gain.
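To put this concretely (assuming, purely for illustration, that $Q_3$ is the PMOS mirror transistor with its source tied to $V_{DD}$, a 0.7 V threshold magnitude, and a modest overdrive of roughly 0.2 V), the output node's bias voltage sits only one gate-to-source drop below the supply:

$V_{OUT(bias)} = V_{DD} - |V_{GS,3}| \approx V_{DD} - (0.7\ \text{V} + 0.2\ \text{V}) = V_{DD} - 0.9\ \text{V}$

With a 3.3 V supply that leaves roughly 2.4 V of headroom, far more than a drain resistor large enough to give comparable gain would allow.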
The bias voltage will be influenced by the width-to-length ratio of the current-mirror transistors. Recall that the saturation-mode relationship between gate-to-source voltage and drain current (if we ignore channel-length modulation) is the following:
$I_D = \dfrac{1}{2} \mu_n C_{ox} \dfrac {W}{L} (V_{GS} - V_{TH})^2$
We can see that a lower width-to-length ratio will cause the FET to conduct less drain current for the same $V_{GS}$. Likewise, if the drain current is held constant and the width-to-length ratio is reduced, the magnitude of $V_{GS}$ will have to increase. Theoretically, then, we could fine-tune the bias voltage by adjusting the width-to-length ratio of the current-mirror transistors.
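The square-law equation can be rearranged to show this directly. The short sketch below (with assumed process parameters, bias current, and supply voltage; $\mu C_{ox}$, $V_{TH}$, and $I_D$ are illustrative, not taken from any particular process) solves for $|V_{GS}|$ at a few width-to-length ratios and reports the resulting output-node bias:

```python
import math

# Sketch of the square-law relationship, ignoring channel-length modulation.
# Process numbers, bias current, and supply voltage are assumed for
# illustration only.

mu_Cox = 100e-6   # mobility * C_ox, A/V^2 (assumed)
V_TH = 0.7        # threshold-voltage magnitude, volts (assumed)
I_D = 0.2e-3      # drain current fixed by the mirror, amps (assumed)
VDD = 3.3         # supply voltage, volts (assumed)

for WL in (50, 20, 5):
    # Solve I_D = 0.5 * mu_Cox * (W/L) * (V_GS - V_TH)^2 for the overdrive
    V_ov = math.sqrt(2 * I_D / (mu_Cox * WL))
    V_GS = V_TH + V_ov
    V_out_bias = VDD - V_GS   # output-node bias with a PMOS mirror load
    print(f"W/L = {WL:3d} -> |V_GS| ≈ {V_GS:.2f} V, output bias ≈ {V_out_bias:.2f} V")
```

In this example, reducing the ratio from 50 to 5 raises $|V_{GS}|$ from roughly 1.0 V to about 1.6 V, pulling the output bias correspondingly lower, which is exactly the fine-tuning knob described above.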