There’s nothing in particular about DVCSs that makes merging easier. It’s simply cultural: a DVCS wouldn’t work at all if merging were hard, so DVCS developers invest a lot of time and effort into making merging easy. CVCS users, OTOH, are used to crappy merging, so their vendors have no incentive to make it work. (Why make something good when your users pay you equally well for something crap?)
Linus Torvalds said in one of his Git talks that when he was using CVS at Transmeta, they set aside an entire week during a development cycle for merging. And everybody just accepted this as the normal state of affairs. Nowadays, during a merge window, Linus does hundreds of merges within just a few hours.
CVCSs could have merging capabilities just as good as DVCSs, if CVCS users simply went to their vendors and said that this crap is unacceptable. But they are caught in the Blub paradox: they simply don’t know that it is unacceptable, because they have never seen a working merge system. They don’t know that there is something better out there.
And when they do try out a DVCS, they magically attribute all the goodness to the “D” part.
Theoretically, a CVCS should have better merge capabilities due to its centralized nature: the server has a global view of the entire history, unlike a DVCS, where every repository only has a tiny fragment of it.
To recap: the whole point of a DVCS is to have many decentralized repositories and to constantly merge changes back and forth between them. Without good merging, a DVCS is simply useless. A CVCS, however, can still survive with crappy merging, especially if the vendor can condition its users to avoid branching.
So, just like with everything else in software engineering, it’s a matter of effort.