ROM-SQL Changesets break Sequel's deadlock retry handling

I’ve been making a lot of progress learning the ins and outs of ROM. While I’m learning to like it a lot, I’ve found a number of places where behaviour seems inconsistent between rom-core (the interfaces) and rom-sql. Several of these are changeset-related, and I look forward to seeing how that progresses (and how I can help contribute).

A recent issue I ran into involves testing concurrent database connections while committing two changesets inside a savepoint transaction. It’s unclear why MySQL deadlocks where PostgreSQL does not, but the core problem is that Sequel never gets a chance to retry the transaction when a deadlock occurs.

This example raises a ROM::SQL::Error when MySQL hits a deadlock, and Sequel does not retry even though retry_on is set, presumably because the deadlock surfaces wrapped in ROM::SQL::Error rather than as Sequel::SerializationFailure:

changesets = [ build_changeset1, build_changeset2 ]

gateway.transaction(savepoint: true, retry_on: Sequel::SerializationFailure) do
  changesets.each(&:commit)
end

# ROM::SQL::Error:
#       Mysql2::Error: Deadlock found when trying to get lock; try restarting transaction
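For context, the changesets in the example were built roughly like this (a minimal sketch; the events relation, attribute names, and helper names are placeholders rather than the project’s actual schema):

require "securerandom"

# Hypothetical helpers; in the real project the changesets come from the
# application's own relations and data.
def build_changeset1
  events.changeset(:create, [{ event_id: SecureRandom.uuid, data: "{}" }])
end

def build_changeset2
  events.changeset(:create, [{ event_id: SecureRandom.uuid, data: "{}" }])
end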

The workaround was to call multi_insert with the changeset data:

changesets = [ build_changeset1, build_changeset2 ]

gateway.transaction(savepoint: true, retry_on: Sequel::SerializationFailure) do
  changesets.each do |changeset|
    changeset.relation.multi_insert(changeset.to_a)
  end
end

If there’s a deadlock, the workaround will retry. It’s possible to output a message with Sequel’s before_retry option to indicate that a deadlock was retried.
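For example, something along these lines (a sketch; :num_retries and :before_retry are standard Sequel transaction options, and we’re assuming the gateway forwards them to Sequel the same way it forwards :retry_on):

gateway.transaction(
  savepoint: true,
  retry_on: Sequel::SerializationFailure,
  num_retries: 5, # Sequel's default; raise it if deadlocks are frequent
  before_retry: lambda { |attempt, error|
    # Sequel calls this before each retry with the attempt number and the error
    warn "Deadlock on attempt #{attempt}: #{error.message} -- retrying"
  }
) do
  changesets.each do |changeset|
    changeset.relation.multi_insert(changeset.to_a)
  end
end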

Here’s the commit that addresses the issue in the project we discovered this in: https://github.com/RailsEventStore/rails_event_store/pull/302/files

All this said, we didn’t dig any deeper to see whether the ROM/Changeset functionality we’re still using is contributing to the deadlocks. For now, the ability to retry is sufficient.

P.S. Does anyone know why PostgreSQL doesn’t deadlock when MySQL does? We’re able to use the normal ROM changeset commit for PostgreSQL and only need the workaround for MySQL.

Further investigation indicates that the Create command generates an individual INSERT statement per tuple when given multiple tuples, because it needs to return the primary key of every new record.

In our case, this almost guarantees a deadlock on a table with a unique index when inserting concurrently. Switching to the multi_insert call from the original post produces a single batch INSERT, which reduces how often the deadlock occurs but doesn’t eliminate it.
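To illustrate the difference (a sketch; events and tuples are placeholders, and the SQL comments show the general shape rather than captured output):

# Create command with multiple tuples: one INSERT per tuple, so each
# new primary key can be returned. Roughly:
#   INSERT INTO events (...) VALUES (...);  -- tuple 1
#   INSERT INTO events (...) VALUES (...);  -- tuple 2
events.command(:create, result: :many).call(tuples)

# multi_insert: a single batch statement, primary keys are not returned:
#   INSERT INTO events (...) VALUES (...), (...);
events.multi_insert(tuples)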

@flash-gordon any ideas? :sweat_smile:

I’m not an expert in MySQL, but I think the problem is caused by having unique indexes alongside a primary key constraint; a shallow googling of “mysql insert deadlock unique indexes” returns a bunch of results. It looks like there are concurrency issues caused by the way MySQL structures its tables (i.e. clustered indexes).
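I mean a table shaped roughly like this (hypothetical names, sketched with Sequel’s migration DSL):

Sequel.migration do
  change do
    create_table(:events) do
      primary_key :id                  # the clustered index in InnoDB
      column :event_id, String, null: false
      index :event_id, unique: true    # secondary unique index the concurrent inserts contend on
    end
  end
end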

Hi @flash-gordon, thanks for the input. The main thing pointing at either ROM or Sequel is that the ActiveRecord adapter, against the same schema, does not hit deadlocks under the same concurrency tests. It may be that AR retries internally, or something along those lines. For the time being, we’re just going to resort to retrying on deadlocks… it’s hard to say whether there’s anything substantial enough to investigate here, and we’ve had no luck identifying what differs between the two.