[storm] No exception raised when hitting database locking error
Steve Kieu
msh.computing at gmail.com
Sat Dec 18 00:40:18 GMT 2010
Hi there,
Thanks - yes, if I do a commit() after every object modification
(surprisingly, flush() does not help), it works, BUT it is painfully slow
with Postgres. I get reasonable performance and no problems with MySQL /
MyISAM, so for now I have stuck with that.
Since the problem shows up on both database servers when transactions are
used, I am not sure whether it is the database's fault (table-level locking
rather than row-level locking) or whether Storm does something in the
picture. I would like to take Storm out and try plain psycopg2 and MySQLdb
to see if it still happens, but I have not had time to do that yet.
Another thing: I expected Storm to raise an exception in that case, but it
does not - it just passes through the commit() call and continues execution
as normal.
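For reference, the "wrap the whole transaction and retry" pattern James
describes below can be sketched like this. This is only a sketch: the
stand-in exception class exists just to keep the snippet self-contained;
with PostgreSQL you would catch psycopg2.extensions.TransactionRollbackError
instead, and the function names are illustrative, not Storm API.

```python
# Sketch: retry the whole transaction when the database reports a
# serialization failure ("could not serialize access due to concurrent
# update"). With psycopg2 you would catch
# psycopg2.extensions.TransactionRollbackError; this stand-in class
# keeps the example self-contained.

class TransactionRollbackError(Exception):
    """Stand-in for psycopg2.extensions.TransactionRollbackError."""

def run_with_retry(do_transaction, commit, rollback, max_attempts=3):
    """Run the full transaction body, retrying on serialization errors."""
    for attempt in range(1, max_attempts + 1):
        try:
            do_transaction()   # all the reads/updates of one transaction
            commit()           # may also raise on a late-detected conflict
            return attempt
        except TransactionRollbackError:
            rollback()         # abandon the failed transaction
            if attempt == max_attempts:
                raise          # give up after repeated conflicts
```

The point is that the try/except surrounds the entire transaction body,
not just the commit() call, since the error can surface at any statement.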
Many thanks,
On Fri, Dec 17, 2010 at 11:06 PM, Mario Zito <mazito at analyte.com> wrote:
> Hello All,
>
> I may be completely wrong, but the problem may be that the DB (for some
> reason) is locking the full table rather than doing row-level locking.
> There is info at
> http://www.postgresql.org/docs/8.1/static/explicit-locking.html about
> Postgres locking.
>
> I had a similar problem some days ago, using Postgres too, and the same
> error, but in a different situation (multiple users updating the same
> object, in different server processes, but NOT at the same time), even when
> doing a commit at the end of each update transaction.
> It also happened that different processes showed different versions of the
> same data (even after the commits) when the data had been changed in a
> different process.
>
> I solved it by doing a commit() BEFORE starting all requests (even read
> requests) to clear Storm's caches. After that, everything worked fine.
> This post explains why it solved the problem:
> http://www.mail-archive.com/storm@lists.canonical.com/msg00704.html
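The commit-before-read pattern described above can be sketched as follows.
This is a sketch only: handle_request and do_work are illustrative names,
and a stub stands in for Storm's real Store object, but store.commit() is
the genuine Store call that ends the current transaction and invalidates
Storm's object cache.

```python
# Sketch of the workaround: end the current transaction before serving
# each request, so Storm's object cache is invalidated and reads fetch
# fresh data committed by other processes.
# handle_request and do_work are illustrative names, not Storm API.

def handle_request(store, do_work):
    store.commit()   # drop the stale snapshot / cached objects first
    do_work(store)   # reads now see data committed by other processes
    store.commit()   # publish this request's own changes
```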
>
> Maybe this can help you too :-)
>
> Mario
>
>
> 2010/12/17 james at jamesh.id.au <james at jamesh.id.au>
>
>> On Thu, Dec 16, 2010 at 3:25 PM, Steve Kieu <msh.computing at gmail.com>
>> wrote:
>> >
>> > Hello all,
>> >
>> > It is kind of hard to explain - basically I wrote an application that
>> > retrieves a set of objects from the db, then opens a config file and
>> > loops over its lines doing regex matching; when a line matches, it
>> > updates a field of the object. At the end of the function it calls
>> > store.commit().
>> >
>> > I run it at the same time on multiple servers, all connecting to one
>> > central database (tested with Postgres and MySQL). The sets of objects
>> > mentioned above are different sets (as they run on different servers).
>> >
>> > If I use Postgres, I often get an error - the program quits with:
>> >
>> > could not serialize access due to concurrent update
>> >
>> > I tried to wrap store.commit() in try/except, but it never reaches the
>> > except clause: it just passes store.commit() as normal and exits.
>>
>> It isn't unusual for serialisation errors to be reported prior to
>> commit: the database usually reports them as soon as it detects the
>> problem. The traceback from the error you saw probably tells you
>> where it occurred. Try wrapping your try/except block around the
>> entire transaction logic rather than just the commit() call.
>>
>> If you're using PostgreSQL, the
>> psycopg2.extensions.TransactionRollbackError exception should cover
>> the cases you're interested in.
>>
>>
>> > The same problem occurs with MySQL with InnoDB (a different error
>> > message though, something about a deadlock being detected).
>>
>> I would guess MySQL is pretty much the same: report the error as soon
>> as the problem is detected rather than letting you continue until
>> commit.
>>
>>
>> > What surprises me is why the db server refuses to work. I am positive
>> > that they are completely different sets, so the modifications should be
>> > to different rows.
>> >
>> > And Storm does not throw an exception in this case, so I cannot catch
>> > it with except and try again or do something about it.
>> >
>> > The problem is fixed if I call store.commit() after each object value
>> > update (store.flush() is not enough), but that is painfully slow.
>> >
>> >
>> >
>> > MyISAM, on the other hand, does not have that problem.
>>
>> Well MyISAM doesn't support transactions, so you shouldn't expect it
>> to report transaction serialisation errors ...
>>
>> James.
>>
>> --
>> storm mailing list
>> storm at lists.canonical.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/storm
>>
>
>
>
> --
> Mario A. Zito
> ANALYTE SRL
> Parana 457, piso 2, of. 'A'
> (C1033AAI) Buenos Aires, Argentina
> tel: (54-11) 5258-0205 int 138
> mazito at analyte.com
> www.analyte.com
>
--
Steve Kieu
Ph: +61-7-3367-3241
sip:*01161428 at sipbroker.com