Getting LINQ to SQL and LINQ to Entities to use NOLOCK
I was visiting a local company where a bunch of friends work and they asked how to get LINQ to SQL to use NOLOCK. They were explicitly asking how to get the SQL generated by LINQ to SQL to add the NOLOCK hints to the end.
However, NOLOCK (even though "everyone" has used it at some point or another) is generally considered a last resort. Queries that use NOLOCK aren't guaranteed to return correct results or, technically, to return any results at all.
SQL 2005 has snapshot-based isolation levels that prevent readers from blocking writers or writers from blocking readers without allowing dirty reads.
Now, I have said that NOLOCK has served me personally very well in the past on systems of some size, but I hear what folks who say no to NOLOCK are saying. It certainly depends on one's definition of "correct results." ;)
There are three ways to get the behavior you want. Using TransactionScope is the recommended way to affect the transaction options associated with the commands generated by either LINQ to SQL or LINQ to Entities.
LINQ to SQL also supports explicitly setting the transaction on the context, so you could get the connection from the context, open it, start a transaction, and set it on the context. This can be desirable if you think SQL 2005 is promoting transactions too often, but the preferred method is TransactionScope.
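A minimal sketch of that explicit-transaction approach (the `NorthwindDataContext` name is illustrative; this assumes the usual designer-generated context):

```csharp
// Sketch: explicitly managing the transaction on a LINQ to SQL DataContext.
// The context and table names here are illustrative, not prescriptive.
using (var northwind = new NorthwindDataContext())
{
    northwind.Connection.Open();
    using (var txn = northwind.Connection.BeginTransaction(
        System.Data.IsolationLevel.ReadUncommitted))
    {
        northwind.Transaction = txn; // the context now uses our local transaction
        var suppliers = northwind.Suppliers.ToList();
        txn.Commit(); // read-only work, but commit to end the transaction cleanly
    }
}
```

Because the transaction is created on the context's own connection, it stays a local SQL transaction and won't be promoted to a distributed one.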
ProductsNewViewData viewData = new ProductsNewViewData();
using (var t = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions
    {
        IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
    }))
{
    viewData.Suppliers = northwind.Suppliers.ToList();
    viewData.Categories = northwind.Categories.ToList();
}
Here's an example where I used it in some recent code. This TransactionScope could be hidden (layered away) in your DAL (Data Access Layer) or in your Data Context directly if you wanted it to be neater.
A second way is that you can still create and call stored procedures (sprocs) from LINQ to SQL, and those sprocs could include NOLOCK. Still, TransactionScope is a better choice for LINQ to SQL or LINQ to Entities generated SQL if you feel that your query doesn't need to lock down the table(s) it's reading from.
Note that you'll want to be aware of which statement in your LINQ to SQL actually starts talking to the database. You can set up a query ahead of time and it won't be executed, for example, until someone calls ToList() or the like. It's at that point you'll want to wrap it in the using(TransactionScope){}.
A third way: you could set it at the DataContext level (which, to be clear, would affect every generated LINQ to SQL query executed on that context) by executing the command:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
...using the connection available inside your DataContext.
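A sketch of what that looks like, using DataContext.ExecuteCommand (the table name is illustrative):

```csharp
// Sketch: change the isolation level on the context's connection directly.
// This affects every subsequent query on this context over that connection.
northwind.Connection.Open();
northwind.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
var categories = northwind.Categories.ToList(); // runs under READ UNCOMMITTED
```

Note that you open the connection yourself so the SET statement and the later queries share the same physical connection; otherwise the setting could be lost when the connection goes back to the pool.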
Thanks to David Browne and Michael Pizzo for their help on this!
About Scott
Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.
...excuse me while I go write some xQuery to make myself feel clean again.
int[] foo = new int[] {1,2,3};
when I can
var foo = new int[] {1,2,3};
Why type int[] twice?
I understand that the type information is there in the code, but I'm uncomfortable with casual use of var too. It does increase the impedance experienced when reading through code, because when I hit the declaration I've got to read further into the line before understanding what type I'm working with. It also means it won't be possible to quickly scan down the left hand side of the code to see where types are used in declarations. I know various search functions are only a few key-strokes away, but sometimes you just want to scroll up and down a short distance to check stuff out.
I don't want to see this becoming common practice - do you hear me, world? ;-)
using(new TransactionScope(...))
{
...
}
If you're not actually going to do anything with the instance anyway!!!
Steve McConnell once said that optimizing for write speed at the expense of readability is fundamentally wrong; you have to keep in mind that your code will be read an order of magnitude more times than it will be written, so it matters more to be clear and easy to read/parse than to be neat and tricky.
That said, I agree with Ryan that except for anonymous types, you should write the actual type on the LHS instead of just 'var'. Makes for a much nicer "top-down, left-right" code reading experience.
But let's not get all flamewarish on the issue, okay ;-)
public string MyRoutine()
{
return "hello";
}
public string MyOtherRoutine()
{
var thingy = MyRoutine();
return thingy;
}
That's where I tend to dislike it. If the scope is small it's cool...but for long term readability it's rough...especially, like in my example, if MyRoutine was in another class. (Intellisense is great and you can hover over both "var" and "MyRoutine()" and get the types) but it doesn't help if the code is printed in a book or on a webpage.
http://resharper.blogspot.com/2008/03/varification-using-implicitly-typed.html
The question/concern I have is in regard to tight coupling and the lack of "separation of concerns" that this seems to introduce.
When setting the transaction isolation level in the application binaries, aren't we creating a very tightly coupled scenario in which the code has too much intimate knowledge of the DB?
How is a production DBA going to troubleshoot database performance, blocking, and deadlock issues? All we'll see in the profiler is a bunch of inline SQL statements with no indication of where they live or where they are coming from.
If the isolation level has to be changed, it would require a C# code change, a full QA cycle and a binary redeploy.
Seems to me that placing this in sprocs shields the app from having to know too much about the database.
Furthermore, this makes me think of some comments Bob Beauchemin had on his blog regarding performance tuning SQL generated by LINQ. He's right: if we can't control the SQL that is generated by the API, we will likely have huge performance issues that cannot easily be fixed without a huge redesign at the application level.
Currently, as all SQL code lives in sprocs, when I see a performance issue I can turn the profiler on and very quickly determine the sproc that is the troublemaker, and then I can quickly and easily modify the SQL as needed to optimize it. Knowing all the while that there are many ways to write the same query, yielding the same results but very different performance. Sometimes I just need to change a join to use a correlated sub-query instead. Sometimes I need to tweak the method used for filtering in my WHERE clause...
While making changes to the sproc I can instantly see the performance gains by viewing the query plan and monitoring sub-tree cost, etc. If LINQ is generating SQL for me, this becomes impossible.
Am I worried about nothing here or are these issues really going to exist with the wide adoption of LINQ?
GAJ
You are very right as described LINQ to SQL does cross that line of coupling...for some development houses that might be fine, for others it's a violation of a cardinal rule...thankfully LINQ to SQL can also employ sprocs...so the best of both worlds is really available to you. So you can keep that isolated if you so wish.
I was not aware of that.
Guess I need to study up on LINQ a bit more before panicking myself into an early grave.
thanks
GAJ
If you're using optimizer hints, you're probably fixing the _result_ of your problem, rather than the _cause_ of your problem.
Dim o as New MyType()
But I really like var. It doesn't just save on typing the type twice on the same line, it keeps you from having to read the type twice. To me, it reduces noise in my code. So I'm using it in my code and there's nothing you can do to stop me. Muhaha!
Maybe one of you guys need to build a refactor tool that changes all the inferred types to declared types.
But I agree with your overall point ;)
If I am using TableAdapters for all my DB needs, what do I really gain from LINQ? In fact, am I not peppering my code with all those SQL statements, which is a very bad thing to do if I have to change, debug, etc.?