Currently, GetVirtualTransactionSize is a useless placeholder function that
simply returns the size; its comment also misleadingly describes it as having
to do with weight.
Redefine it to be useful: for sigop-dense transactions, the virtual size
rises above the actual size, representing the 'excess resources' consumed.
Accordingly, the default value of nBytesPerSigOp is increased from 20 to 50,
a number based on actual consensus limits.
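
A minimal sketch of the intended definition, assuming the obvious max() form
(names, signature, and the constant name are illustrative, not necessarily
what this Diff implements):

    #include <algorithm>  // std::max
    #include <cstdint>

    // Hypothetical default; this Diff raises nBytesPerSigOp from 20 to 50.
    static constexpr unsigned int DEFAULT_BYTES_PER_SIGOP = 50;

    // Virtual size: the larger of the real serialized size and the
    // "sigop-equivalent" size, so that sigop-dense transactions are
    // charged for the excess validation resources they consume.
    int64_t GetVirtualTransactionSize(int64_t nSize, int64_t nSigOpCount,
                                      unsigned int nBytesPerSigOp = DEFAULT_BYTES_PER_SIGOP)
    {
        return std::max(nSize, nSigOpCount * int64_t(nBytesPerSigOp));
    }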
This is analogous to Core's mempool notion of virtual size, but it is really
quite different, since we have no segwit witness weight and Core's choice of
sigops constant is too permissive: Core's definition is
max(weight, nSigOpCost * 20)/4, i.e., max(bip141size, nSigOpCount * 20).
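
For comparison, Core's formula has roughly the following shape (a sketch
based on the description above, not a verbatim copy of Core's code; recall
that nSigOpCost is measured in weight units, with each legacy sigop costing 4):

    // Approximation of Core's policy-level virtual size with its default
    // bytes_per_sigop of 20; max(weight, cost*20)/4 == max(vsize, count*20)
    // for legacy sigops, since each legacy sigop has cost 4.
    int64_t CoreStyleVirtualSize(int64_t nWeight, int64_t nSigOpCost,
                                 unsigned int bytes_per_sigop = 20)
    {
        // Round up when dividing by the witness scale factor (4).
        return (std::max(nWeight, nSigOpCost * int64_t(bytes_per_sigop)) + 3) / 4;
    }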
(Note: as of this Diff, nothing in the codebase uses virtual size.)