Why aren’t variable-length arrays part of the C++ standard?

(Background: I have some experience implementing C and C++ compilers.)

Variable-length arrays in C99 were basically a misstep. In order to support VLAs, C99 had to make the following concessions to common sense:

  • sizeof x is no longer always a compile-time constant; the compiler must sometimes generate code to evaluate a sizeof-expression at runtime (see the C99 sketch after this list).

  • Allowing two-dimensional VLAs (int A[x][y]) required a new syntax for declaring functions that take 2D VLAs as parameters: void foo(int n, int A[][*]).

  • Less importantly in the C++ world, but extremely important for C’s target audience of embedded-systems programmers, declaring a VLA means chomping an arbitrarily large chunk of your stack. This is a recipe for stack overflow and a crash. (Any time you declare int A[n], you’re implicitly asserting that you have 2GB of stack to spare. After all, if you know “n is definitely less than 1000 here”, then you would just declare int A[1000]. Substituting the 32-bit integer n for 1000 is an admission that you have no idea what the behavior of your program ought to be.)
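
To make the first two points concrete, here is a small C99 sketch (this is C99, not valid C++; foo and bar are just illustrative names):

#include <stdio.h>

/* C99 prototype syntax for a function taking a 2D VLA parameter: */
void foo(int n, int A[][*]);

/* In the definition, the inner bound must be spelled out: */
void foo(int n, int A[][n]) {
    printf("%zu\n", sizeof A[0]);  /* n * sizeof(int), computed at runtime */
}

void bar(int n) {
    int x[n];                      /* carves n * sizeof(int) bytes off the stack, unchecked */
    printf("%zu\n", sizeof x);     /* no longer a compile-time constant */
}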

Okay, so let’s move to talking about C++ now. In C++, we have the same strong distinction between “type system” and “value system” that C89 does… but we’ve really started to rely on it in ways that C has not. For example:

template<typename T> struct S { ... };
int A[n];
S<decltype(A)> s;  // equivalently, S<int[n]> s;

If n weren’t a compile-time constant (i.e., if A were of variably modified type), then what on earth would be the type of s? Would s’s type also be determined only at runtime?
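
To see how much weight C++ puts on that bound, here is a minimal standard-C++ sketch (the helper length is hypothetical, purely for illustration): template argument deduction reads the bound straight out of the array’s type, at compile time.

#include <cstddef>

// N is deduced from the array's type itself, with no runtime work at all.
template<typename T, std::size_t N>
constexpr std::size_t length(T (&)[N]) { return N; }

int A[10];
static_assert(length(A) == 10, "the bound 10 is a compile-time property of A's type");

If the bound existed only at runtime, there would be nothing for N to deduce from.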

What about this:

template<typename T> bool myfunc(T& t1, T& t2) { ... }
int A1[n1], A2[n2];
myfunc(A1, A2);

The compiler must generate code for some instantiation of myfunc. What should that code look like? How can we statically generate that code, if we don’t know the type of A1 at compile time?

Worse, what if it turns out at runtime that n1 != n2, so that !std::is_same<decltype(A1), decltype(A2)>()? In that case, the call to myfunc shouldn’t even compile, because template type deduction should fail! How could we possibly emulate that behavior at runtime?
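
For contrast, here is how deduction behaves today, when the bounds are ordinary compile-time constants (a sketch; myfunc’s body here is just a placeholder):

template<typename T> bool myfunc(T& t1, T& t2) { return &t1 == &t2; }  // placeholder body

int A1[3], A2[4], A3[3];

// myfunc(A1, A2);         // ill-formed: T deduces to int[3] from A1 but int[4] from A2,
//                         // so deduction fails and the call does not compile
bool ok = myfunc(A1, A3);  // OK: both arguments have type int[3], so T = int[3]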

Basically, C++ is moving in the direction of pushing more and more decisions into compile-time: template code generation, constexpr function evaluation, and so on. Meanwhile, C99 was busy pushing traditionally compile-time decisions (e.g. sizeof) into the runtime. With this in mind, does it really even make sense to expend any effort trying to integrate C99-style VLAs into C++?
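
One small illustration of that direction of travel (a sketch; square is a made-up example function): where C99 turned sizeof into a potential runtime computation, C++11’s constexpr moves even the computation of array bounds to compile time.

constexpr int square(int x) { return x * x; }

int B[square(5)];  // the compiler evaluates square(5) == 25 while compiling
static_assert(sizeof B == 25 * sizeof(int), "sizeof stays a compile-time constant in C++");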

As every other answerer has already pointed out, C++ provides lots of heap-allocation mechanisms (std::unique_ptr<int[]> A(new int[n]); or std::vector<int> A(n); being the obvious ones) when you really want to convey the idea “I have no idea how much RAM I might need.” And C++ provides a nifty exception-handling model for dealing with the inevitable situation that the amount of RAM you need is greater than the amount of RAM you have. But hopefully this answer gives you a good idea of why C99-style VLAs were not a good fit for C++ — and not really even a good fit for C99. 😉
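
A minimal sketch of those two mechanisms together with that exception-handling model (assumes C++14 for std::make_unique; process is just an illustrative name):

#include <cstddef>
#include <iostream>
#include <memory>
#include <new>
#include <vector>

void process(std::size_t n) {
    try {
        std::vector<int> A(n);                // size chosen at runtime; storage on the heap
        auto B = std::make_unique<int[]>(n);  // same idea, without vector's bookkeeping
        // ... use A and B ...
    } catch (const std::bad_alloc&) {
        // thrown when the amount of RAM you need exceeds the amount you have
        std::cerr << "not enough RAM for " << n << " ints\n";
    }
}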


For more on the topic, see N3810 “Alternatives for Array Extensions”, Bjarne Stroustrup’s October 2013 paper on VLAs. Bjarne’s POV is very different from mine; N3810 focuses more on finding a good C++ish syntax for the things, and on discouraging the use of raw arrays in C++, whereas I focused more on the implications for metaprogramming and the type system. I don’t know whether he considers the metaprogramming/type-system implications solved, solvable, or merely uninteresting.


A good blog post that hits many of these same points is “Legitimate Use of Variable Length Arrays” (Chris Wellons, 2019-10-27).
