How does Java’s use-site variance compare to C#’s declaration-site variance?

I am just going to address the differences between declaration-site and use-site variance, since, while C# and Java generics differ in many other ways, those differences are mostly orthogonal to variance.

First off, if I remember correctly, use-site variance is strictly more powerful than declaration-site variance (although at the cost of concision), or at least Java’s wildcards are (which are actually more powerful than use-site variance). This increased power is particularly useful for languages in which stateful constructs are used heavily, such as C# and Java (but Scala much less so, especially since its standard lists are immutable). Consider List&lt;E&gt; (or IList&lt;E&gt;). Since it has methods for both adding E’s and getting E’s, it is invariant with respect to E, and so declaration-site variance cannot be used. However, with use-site variance you can just say List&lt;+Number&gt; (in Java syntax, List&lt;? extends Number&gt;) to get the covariant subset of List and List&lt;-Number&gt; (in Java, List&lt;? super Number&gt;) to get the contravariant subset of List. In a declaration-site language, the designer of the library would have to make separate interfaces (or classes, if you allow multiple inheritance of classes) for each subset and have List extend those interfaces. If the library designer does not do this (note that C#’s IEnumerable covers only a small subset of the covariant portion of IList), then you’re out of luck and you have to resort to the same hassles you face in a language without any sort of variance.
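To make that concrete, here is a small Java sketch (the method names are mine, purely for illustration) showing how a single invariant List&lt;E&gt; yields a covariant view via `? extends` and a contravariant view via `? super`:

```java
import java.util.ArrayList;
import java.util.List;

public class VarianceDemo {
    // Covariant use site: this method only *gets* Numbers out of the list,
    // so any List of a subtype of Number is acceptable
    // (List<Integer>, List<Double>, ...).
    static double sum(List<? extends Number> nums) {
        double total = 0;
        for (Number n : nums) {
            total += n.doubleValue();
        }
        return total;
    }

    // Contravariant use site: this method only *adds* Integers to the list,
    // so any List of a supertype of Integer works
    // (List<Integer>, List<Number>, List<Object>, ...).
    static void fillWithOnes(List<? super Integer> sink, int count) {
        for (int i = 0; i < count; i++) {
            sink.add(1);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>(List.of(1, 2, 3));
        System.out.println(sum(ints));   // List<Integer> passes as List<? extends Number>

        List<Number> numbers = new ArrayList<>();
        fillWithOnes(numbers, 3);        // List<Number> passes as List<? super Integer>
        System.out.println(numbers);     // prints [1, 1, 1]
    }
}
```

Note that the library author wrote one invariant List; the caller carves out the covariant or contravariant subset at each use site.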

So those are the advantages of use-site variance over declaration-site variance. The advantage of declaration-site variance over use-site variance is basically concision for the user (provided the designer went through the effort of separating every class/interface into its covariant and contravariant portions). For something like IEnumerable or Iterator, it’s nice not to have to specify covariance every single time you use the interface. Java made this especially annoying by using a lengthy syntax (except for bivariance, for which Java’s solution, the bare ? wildcard, is basically ideal).
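A sketch of that annoyance (again with hypothetical names): Iterator is only ever read from, yet every Java use site must restate the covariance with `? extends`, whereas C#’s declaration-site `IEnumerable<out T>` says it once and for all. The bivariant case, by contrast, really is just a bare `?`:

```java
import java.util.Iterator;
import java.util.List;

public class UseSiteVerbosity {
    // Iterator is naturally covariant, but Java makes each use site say so:
    // without "? extends" this method would reject an Iterator<Integer>.
    static double total(Iterator<? extends Number> it) {
        double sum = 0;
        while (it.hasNext()) {
            sum += it.next().doubleValue();
        }
        return sum;
    }

    // Bivariance, on the other hand, is pleasantly terse: a bare wildcard
    // accepts a List of anything, since we never read or write its elements
    // at type E.
    static int count(List<?> anything) {
        return anything.size();
    }

    public static void main(String[] args) {
        List<Integer> ints = List.of(1, 2, 3);
        System.out.println(total(ints.iterator()));
        System.out.println(count(ints));
        System.out.println(count(List.of("a", "b")));
    }
}
```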

Of course, these two language features can coexist. For type parameters that are naturally covariant or contravariant (such as in IEnumerable/Iterator), declare the variance at the declaration site. For type parameters that are naturally invariant (such as in (I)List), declare what kind of variance you want each time you use it. Just don’t specify a use-site variance for type parameters that already have a declaration-site variance, as that just makes things confusing.
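In Java, which has only use-site variance, the invariant-List case works out to the familiar “PECS” idiom (producer `extends`, consumer `super`), the pattern behind methods like Collections.copy. A minimal sketch with a hypothetical method name:

```java
import java.util.ArrayList;
import java.util.List;

public class CopyDemo {
    // PECS: src *produces* T's, so it is used covariantly (? extends);
    // dst *consumes* T's, so it is used contravariantly (? super).
    static <T> void copyInto(List<? super T> dst, List<? extends T> src) {
        for (T t : src) {
            dst.add(t);
        }
    }

    public static void main(String[] args) {
        List<Integer> src = List.of(1, 2, 3);
        List<Number> dst = new ArrayList<>();
        copyInto(dst, src);       // fine: Number is a supertype of Integer
        System.out.println(dst);  // prints [1, 2, 3]
    }
}
```

The same signature with plain `List<T>` parameters would reject the `List<Number>`/`List<Integer>` pairing, even though the copy is obviously safe.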

There are other more detailed issues I haven’t gone into (such as how wildcards are actually more powerful than use-site variance), but I hope this answers your question to your satisfaction. I’ll admit I’m biased towards use-site variance, but I tried to portray the major advantages of both that have come up in my discussions with programmers and with language researchers.
