Firstly, one thread selects a huge amount of data from a table. Then, before the SELECT query has finished, another thread deletes data from that table. This causes the SELECT query to fail with errorCode=1205, sqlState=40001, and a message saying a deadlock happened. I thought that the second transaction would have to wait until the first finished, or that the first transaction would return data without the changes made by the second. I can't change the order of the transactions or control them in any way (there are many users of the app, and they can do whatever they want in any order at any time). This is what the queries look like (I obfuscated them a little):

SELECT distinct P.pID, (LastNAME+' '+FirstName+' '+SurName) as NAME, 'person' as Type, CA.OrgName, PC.personemail as Email
INNER JOIN WORK PW ON (P.mainwork = PW.pwId)
INNER JOIN CONTACT PC ON (PC.pID = P.pID)
LEFT OUTER JOIN ORG CA ON (CA.orgId = PW.orgId)

> I thought that second transaction must wait until first is finished

Basically, this is not true. Your (or anyone's) database wouldn't be very efficient if only one person could do something with it at a time. A SELECT query takes shared locks on any rows it's looking at, so although it won't conflict with another SELECT query, it definitely will conflict with the exclusive locks taken by a DELETE. As the SELECT query is not updating anything, it's usually the one chosen as the deadlock victim, since it's likely the least expensive to roll back. Why it happens in the specific case you're describing is not possible to say without further information. I'll add some links on deadlock detection at the end, but I suspect that isn't the main purpose of the question.

It's difficult to know what the best approach is going to be for you without a lot more information. It is possible that you should have a look at your isolation levels. You could use READ_COMMITTED_SNAPSHOT, though this can introduce further overhead, or use READ_UNCOMMITTED / a NOLOCK hint. I'm not usually a fan of those options, as the data returned might not actually be what you want, but they may be suitable in your case. It's hard to say exactly what would be best, as it depends on a lot of variables about your system and its use. The best thing you can do in that regard is read up on isolation levels; these similar posts/sources will start you off and go into more detail: MSDN - How can a select statement result in being chosen as deadlock victim?, SQL Server Central, MSDN - Understanding Isolation Levels.

Don't rush to make a change just so you can get your queries to finish. A more appropriate avenue to explore may be the queries themselves. Why is the SELECT statement taking so long to run? Why are rows being deleted from the table so frequently? Maybe some simple query tuning will get them all to complete quicker and reduce the chance of a deadlock. Perhaps you need to do some general performance tuning on your database server (I won't go into detail on this here, as it's getting way off topic).

Just going ahead and changing the isolation level, or even just using NOLOCK, can have big impacts on your database and on exactly what results queries return, which could have business impacts. You've got to make sure your queries are fit for business purposes, rather than just 'completing at all costs'; databases are a business tool first and foremost and need to work around that (however much we might not like it)!
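To make the isolation-level options above concrete, here is a sketch of the three approaches mentioned. The database name `MyAppDb` and the `PERSON`/`WORK` table names are placeholders (the original question's query omits its FROM clause), but the statements themselves are standard SQL Server syntax:

```sql
-- Option 1: enable read-committed snapshot isolation (RCSI) for the
-- whole database. Readers see the last committed version of a row
-- instead of blocking on writers' locks; this adds tempdb version-store
-- overhead. WITH ROLLBACK IMMEDIATE kicks out other connections so the
-- ALTER can complete.
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;

-- Option 2: per-session dirty reads. The SELECT will not take shared
-- locks, but it may return rows that are later rolled back.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Option 3: per-query dirty reads via a table hint (same caveats as
-- option 2, scoped to the hinted tables only).
SELECT DISTINCT P.pID
FROM PERSON P WITH (NOLOCK)
INNER JOIN WORK PW WITH (NOLOCK) ON P.mainwork = PW.pwId;
```

RCSI (option 1) is usually the least surprising of the three, because readers still see only committed data; the NOLOCK routes trade correctness for concurrency.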
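On the deadlock-detection side, a commonly used starting point (rather than anything specific to this question) is pulling recent deadlock graphs out of the built-in system_health Extended Events session, available in SQL Server 2012 and later:

```sql
-- Retrieve recent deadlock graphs captured by the default
-- system_health Extended Events session's ring buffer.
SELECT XEventData.XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT CAST(st.target_data AS xml) AS TargetData
    FROM sys.dm_xe_session_targets st
    JOIN sys.dm_xe_sessions s
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS Data
CROSS APPLY TargetData.nodes(
    'RingBufferTarget/event[@name="xml_deadlock_report"]'
) AS XEventData(XEvent);
```

The resulting XML names the victim, the competing sessions, and the locked resources, which is usually enough to see exactly how the SELECT and the DELETE are colliding.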