The problem you describe is not handled by the database, and, in my experience, is not entirely handled by Hibernate either.
You have to take explicit steps to avoid it being a problem.
Hibernate does some of the work for you. As per the previous answer, Hibernate ensures that within a single flush the inserts, deletes and updates are ordered so that they can be applied without violating foreign-key constraints. See performExecutions(EventSource session) in the AbstractFlushingEventListener class:
Execute all SQL (and second-level cache updates) in a special order so that foreign-key constraints cannot be violated:
- Inserts, in the order they were performed
- Deletion of collection elements
- Insertion of collection elements
- Deletes, in the order they were performed
When you have unique constraints it’s very important to know this order, especially if you want to replace a one-to-many child (delete old/insert new) while both the old and the new child share the same unique key (e.g. the same email address). In this case you could update the old entry in place instead of deleting and inserting, or you could flush right after the delete and only then insert the replacement. For a more detailed example you can check this article.
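A runnable Hibernate example would need a real database and entity mappings, so below is a self-contained toy simulation of the flush ordering just described (all names are made up; this is not Hibernate code). Because inserts execute before deletes within one flush, the naive delete-old/insert-new replacement trips the unique constraint, while flushing the delete separately before inserting does not:

```java
import java.util.*;

/**
 * Toy model of one Hibernate flush: queued inserts execute before queued
 * deletes, no matter which order your code performed the operations in.
 */
public class FlushOrderDemo {

    // emailColumn plays the role of a unique-constrained column.
    static void flush(Set<String> emailColumn, List<String> inserts, List<String> deletes) {
        for (String email : inserts) {              // inserts run first
            if (!emailColumn.add(email)) {
                throw new IllegalStateException("unique constraint violated: " + email);
            }
        }
        for (String email : deletes) {              // deletes run last
            emailColumn.remove(email);
        }
    }

    public static void main(String[] args) {
        // One flush containing both the delete of the old child and the
        // insert of its replacement with the same email fails...
        Set<String> emails = new HashSet<>(List.of("john@acme.com"));
        try {
            flush(emails, List.of("john@acme.com"), List.of("john@acme.com"));
        } catch (IllegalStateException ex) {
            System.out.println("single flush: " + ex.getMessage());
        }

        // ...while flushing the delete on its own and only then inserting succeeds.
        Set<String> emails2 = new HashSet<>(List.of("john@acme.com"));
        flush(emails2, List.of(), List.of("john@acme.com")); // delete, then flush
        flush(emails2, List.of("john@acme.com"), List.of()); // insert in the next flush
        System.out.println("two flushes: ok");
    }
}
```

In real code the two simulated flushes correspond to calling session.flush() manually between the remove and the persist of the replacement child.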
Note that it does not specify the order of updates. Examining the Hibernate code leads me to think the update order depends on the order in which the entities were added to the persistence context, NOT the order in which they were updated. That might be predictable in your code, but reading the Hibernate code did not leave me confident enough to rely on that ordering.
There are three solutions I can think of:
- Try setting hibernate.order_updates to true. This should help avoid deadlocks when multiple rows in the same table are being updated, but won’t help with deadlocks across multiple tables.
- Make your transactions take a PESSIMISTIC_WRITE lock on one of the entities before doing any updates. Which entity you use will depend on your specific situation, but so long as every transaction that risks deadlocking chooses the same entity, the later transactions will simply block until the lock can be obtained instead of deadlocking mid-update.
- Write your code to catch deadlocks when they occur and retry in a sensible fashion. The component managing the deadlock retry must live outside the current transaction boundary, because the failing session must be closed and the associated transaction rolled back. In this article you can find an example of an automatic retrying AOP Aspect.
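For the first option, the property is set in your normal Hibernate configuration; a minimal fragment (the property name is real, the file it lives in depends on your setup, e.g. hibernate.properties or a Spring application.properties):

```
# order UPDATE statements by primary key within a flush
hibernate.order_updates=true
```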
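The reasoning behind the pessimistic-lock option can be illustrated without a database. In the plain-Java sketch below (ReentrantLocks standing in for row locks; nothing here is Hibernate API), two concurrent "transactions" touch the same pair of rows in opposite order, which is the classic deadlock shape, but because both acquire the same agreed-upon anchor lock first, just as both transactions would take PESSIMISTIC_WRITE on the same entity, they serialize instead of deadlocking:

```java
import java.util.concurrent.locks.ReentrantLock;

public class AnchorLockDemo {
    static final ReentrantLock anchor = new ReentrantLock(); // the consistently chosen entity
    static final ReentrantLock rowA = new ReentrantLock();
    static final ReentrantLock rowB = new ReentrantLock();

    // One "transaction": lock the anchor first, then the two rows it updates.
    static void transaction(ReentrantLock first, ReentrantLock second) {
        anchor.lock();                // like PESSIMISTIC_WRITE on the anchor entity
        try {
            first.lock();
            try {
                second.lock();        // safe: the anchor already serialized us
                second.unlock();
            } finally {
                first.unlock();
            }
        } finally {
            anchor.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> transaction(rowA, rowB));
        Thread t2 = new Thread(() -> transaction(rowB, rowA)); // opposite order
        t1.start(); t2.start();
        t1.join(); t2.join();         // without the anchor lock, this can hang forever
        System.out.println("both transactions completed");
    }
}
```

The price is reduced concurrency: every at-risk transaction queues behind the anchor lock, which is exactly the trade-off the answer describes.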
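A bare-bones sketch of the third option, without the AOP machinery from the linked article: a retry wrapper that sits outside the transaction boundary and re-runs the whole unit of work. DeadlockException here is a made-up stand-in for whatever your stack actually throws (for example Hibernate's LockAcquisitionException), and each call() is expected to open and close its own session and transaction:

```java
import java.util.concurrent.Callable;

public class DeadlockRetry {
    /** Stand-in for the deadlock exception your persistence stack throws. */
    public static class DeadlockException extends RuntimeException {}

    public static <T> T withRetry(int maxAttempts, Callable<T> transactionalWork) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return transactionalWork.call(); // runs one complete transaction
            } catch (DeadlockException e) {
                if (attempt >= maxAttempts) {
                    throw e;                     // give up after maxAttempts
                }
                // The failed session was closed and rolled back inside call();
                // loop around and run a fresh transaction.
            }
        }
    }
}
```

In a Spring application the same pattern is usually packaged as an aspect around @Transactional methods, as in the article, so the retry loop stays outside the transactional proxy.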