Is calling destructor manually always a sign of bad design?

All answers describe specific cases, but there is a general answer:

You call the destructor explicitly whenever you need to destroy the object (in the C++ sense) without releasing the memory the object resides in.

This typically happens in every situation where memory allocation/deallocation is managed independently from object construction/destruction. In those cases, construction happens via placement new on an existing chunk of memory, and destruction happens via an explicit destructor call.

Here is a raw example:

#include <new>   // needed for placement new

{
  alignas(MyClass) char buffer[sizeof(MyClass)];   // raw, properly aligned storage

  {
     MyClass* p = new (buffer) MyClass;   // construct in the pre-allocated buffer
     p->dosomething();
     p->~MyClass();                       // destroy the object, keep the memory
  }
  {
     MyClass* p = new (buffer) MyClass;   // reuse the same storage
     p->dosomething();
     p->~MyClass();
  }
}

Another notable example is the default std::allocator as used by std::vector: elements are constructed inside the vector during push_back, but the memory is allocated in chunks, so it pre-exists the element construction. Hence, vector::erase must destroy the elements, but it does not necessarily deallocate the memory (especially if new push_backs are likely to happen soon…).
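To make that concrete, here is a minimal sketch of a vector-like container. The name TinyVec and its layout are made up for illustration and this is not how the real std::vector is implemented; it only shows memory being allocated in chunks while elements are constructed and destroyed individually through std::allocator_traits:

#include <cstddef>
#include <memory>
#include <utility>

// Simplified sketch (NOT the real std::vector): allocation/deallocation is
// decoupled from element construction/destruction.
template <typename T>
struct TinyVec {
    using Alloc  = std::allocator<T>;
    using Traits = std::allocator_traits<Alloc>;

    Alloc       alloc;
    T*          data = nullptr;
    std::size_t size = 0;
    std::size_t cap  = 0;

    void push_back(const T& v) {
        if (size == cap) {
            std::size_t newCap  = cap ? cap * 2 : 4;
            T*          newData = Traits::allocate(alloc, newCap);   // raw memory only
            for (std::size_t i = 0; i < size; ++i) {
                Traits::construct(alloc, newData + i, std::move(data[i]));
                Traits::destroy(alloc, data + i);                    // explicit destruction
            }
            if (data) Traits::deallocate(alloc, data, cap);
            data = newData;
            cap  = newCap;
        }
        Traits::construct(alloc, data + size, v);   // object built in pre-existing memory
        ++size;
    }

    void pop_back() {
        --size;
        Traits::destroy(alloc, data + size);        // destroy the element, keep the memory
    }

    ~TinyVec() {
        for (std::size_t i = 0; i < size; ++i)
            Traits::destroy(alloc, data + i);
        if (data) Traits::deallocate(alloc, data, cap);
    }
};

Note how pop_back destroys the last element (which, for the default allocator, boils down to an explicit destructor call) but keeps the capacity, so a following push_back can reuse the storage.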

It is “bad design” in a strict OOP sense (you should manage objects, not memory: the fact that objects require memory is an “incident”), but it is “good design” in low-level programming, or in cases where the memory is not taken from the “free store” that the default operator new draws on.

It is bad design if it happens randomly all around the code, and good design if it happens locally, inside classes specifically designed for that purpose.
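For instance, the pattern can be wrapped in one small class so the placement new and the explicit destructor call never leak into general code. The following InPlace<T> is a hypothetical, stripped-down sketch in the spirit of std::optional (copy/move handling is omitted for brevity):

#include <new>
#include <utility>

// Hypothetical wrapper: every placement-new / explicit-dtor call lives here.
template <typename T>
class InPlace {
    alignas(T) unsigned char storage[sizeof(T)];   // raw storage, never heap-allocated
    T* ptr = nullptr;                              // non-null while an object is alive
public:
    template <typename... Args>
    T& emplace(Args&&... args) {
        reset();   // destroy the current object, if any
        ptr = ::new (static_cast<void*>(storage)) T(std::forward<Args>(args)...);
        return *ptr;
    }

    void reset() {
        if (ptr) {
            ptr->~T();      // explicit destructor call; the storage stays put
            ptr = nullptr;
        }
    }

    T* operator->() { return ptr; }

    ~InPlace() { reset(); }   // copy/move semantics intentionally omitted
};

Client code only ever calls emplace and reset; the manual destructor call is an implementation detail confined to this one class.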
