In many large-scale projects, software developers often have to work with existing SQL Server databases with predefined tables and relationships. The problem is that some predefined databases have aspects that are awkward to deal with from the software side. As a software developer, my choice of database access tool is Microsoft's Entity Framework (EF), so I was motivated to see how EF can handle this. Entity Framework 6 has a number of features that make it fairly straightforward to work with existing databases. In this article I'll detail the steps that I needed to take on the EF side in order to build a fully featured web application to work with the AdventureWorks database.
I'll actually use the AdventureWorks Lite database, which is a cut-down version of the larger AdventureWorks OLTP database. I am using Microsoft's MVC5 framework, with a proprietary package for the UI/presentation layer, which I cover in the second article. At the end, I also mention some other techniques that I didn't need for AdventureWorks, but have needed on other databases. The aim is to show how you can use EF with pre-existing databases, including ones that need direct access to T-SQL commands and/or stored procedures.

Creating the Entity Framework Classes from the existing database

Entity Framework has a well-documented approach to this, called reverse engineering.
This produces data classes with various Data Annotations to set some of the properties, such as string length and nullability (see the example below, built around the Customer table), plus a DbContext with an OnModelCreating method to set up the various relationships. This does a good job of building the classes. It is certainly very useful to have the Data Annotations, because front-end systems like MVC use these for data validation during input.
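To give a flavour, here is a hedged sketch of the sort of entity class the scaffolding produces for the Customer table; the column subset and attributes are illustrative, not the full generated file:

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

// Illustrative extract of a reverse-engineered entity class.
[Table("SalesLT.Customer")]
public partial class Customer
{
    public int CustomerID { get; set; }

    // Nullable nvarchar(8) column: a length check but no [Required]
    [StringLength(8)]
    public string Title { get; set; }

    // NOT NULL nvarchar(50) columns get both annotations
    [Required]
    [StringLength(50)]
    public string FirstName { get; set; }

    [Required]
    [StringLength(50)]
    public string LastName { get; set; }

    public Guid rowguid { get; set; }

    public DateTime ModifiedDate { get; set; }
}
```

MVC's model binder and validators pick up [Required] and [StringLength] automatically, which is why keeping the generated Data Annotations is worthwhile.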
However, I did have a couple of problems: • The default code generation template includes the `virtual` keyword on all of the relationships. This enables lazy loading, which I do not want (see section 1 below). • The table SalesOrderDetail has two keys: one is the SalesOrderHeaderID and one is an identity, SalesOrderDetailID. EF failed on a create and I needed to fix this.
(See section 3 below.) I will now describe how I fixed these issues.

1: Removing lazy loading by altering the scaffolding of the EF classes/DbContext
As I said earlier, the standard templates enable 'lazy loading'. I have been corrected in my understanding of lazy loading by some readers.
The EF documentation says that 'Lazy loading is the process whereby an entity or collection of entities is automatically loaded from the database the first time that a property referring to the entity/entities is accessed'. The problem with this is that it does not make for efficient SQL commands, as an individual SQL SELECT command is raised for each access to a virtual relationship, which is not such a good idea for performance. For that reason I do not use lazy loading, so I want to turn it off.
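The alternative is to fetch the relationship up front with .Include, so one SQL query returns the parent rows and their children together. A sketch, assuming a Customers DbSet with a SalesOrderHeaders navigation property:

```csharp
using System.Data.Entity;   // brings in the lambda overload of Include
using System.Linq;

public static class LoadingExample
{
    public static void EagerLoadCustomers()
    {
        using (var db = new AdventureWorksLt2012())
        {
            // One SELECT with a join, rather than 1 + N separate SELECTs
            var customersWithOrders = db.Customers
                .Include(c => c.SalesOrderHeaders)
                .Where(c => c.SalesOrderHeaders.Any())
                .ToList();
        }
    }
}
```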
However, if this isn't an issue for you then you can leave it in. Lazy loading can make handling relationships easier for the software, although in my second article I will show you a method that specifically selects each data column it needs, and therefore does not need lazy loading. Now, you could hand-edit each generated class to remove the 'virtual' keyword, but what happens if, or rather when, the database changes? You would then have to re-import the database and so lose all your edits, which you or your colleague might have forgotten about by then, and suddenly your whole web application slows down. No, the common rule with generated code is not to edit it. In this case the answer is to change the code that is generated during the creation of the classes and DbContext. The generation of the EF classes and DbContext is done using some t4 templates, referred to as scaffolding.
By default the reverse engineering of the database uses some internal scaffolding, but you can import the scaffolding and change it. There is a very clear explanation of how to do this using NuGet, so I'm not going to repeat it. Once you have installed the EntityFramework.CodeTemplates you will find two files, called Context.cs.t4 and EntityType.cs.t4, which control how the DbContext and each entity class respectively are built. Even if you aren't familiar with t4 (a great tool) you can understand what it does – it's a code generator, and anything not surrounded by <# … #> blocks is output as plain text. I found the word 'virtual' in the EntityType.cs.t4 and deleted it. I also removed the word 'virtual' from the Context.cs.t4 file on the declarations of the DbSets. You may want to alter the scaffolding more extensively, perhaps by adding a [Key] attribute on primary keys for some reason.
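The effect of deleting that one word is easiest to see on a generated navigation property (a sketch):

```csharp
// As scaffolded: 'virtual' lets EF substitute a runtime proxy class
// that triggers a database SELECT the first time the property is read.
public virtual ICollection<SalesOrderHeader> SalesOrderHeaders { get; set; }

// After editing EntityType.cs.t4: no proxy can be created, so the
// relationship is only filled when you explicitly ask for it to be loaded.
public ICollection<SalesOrderHeader> SalesOrderHeaders { get; set; }
```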
All is possible, but you must dig into the .t4 code in more depth. One warning about importing scaffolding – Visual Studio threw a nasty error message when I first tried to import the EntityFramework.CodeTemplates scaffolding. It took a bit of finding, but it turns out that if you have Entity Framework Power Tools installed then the two clash. If you have Entity Framework Power Tools installed then you need to disable it and restart Visual Studio before you can import/reverse engineer a database.
I hope that gets fixed, as Entity Framework Power Tools is very useful.

Cannot insert explicit value for identity column in table 'SalesOrderDetail' when IDENTITY_INSERT is set to OFF.

That confused me for a bit, as other two-key items had worked, such as CustomerAddress. I tried a few things but, as it looked like an EF error, I tried telling EF that the SalesOrderDetailID was an identity key by using the attribute [DatabaseGenerated(DatabaseGeneratedOption.Identity)]. That fixed it!
The best solution would be to edit the scaffolding again to always add that attribute to identity keys. That needed a bit of work, and the demo was two days away, so in the meantime I added the needed attribute using the MetadataType attribute and a 'buddy' class. This is a generally useful feature, so I use this example to show you how to do it in the next section.

Adding new DataAnnotations to EF Generated classes

Being able to add attributes to properties in already generated classes is a generally useful thing to do. I needed it to fix the key problem (see section 1 above), but you might want to add some DataAnnotations to help the UI/presentation layer, such as marking properties with their datatype, e.g. [DataType(DataType.Date)].
The process for doing this is given in the Example section of the documentation for the MetadataType attribute. I will show you my example of adding the missing Identity attribute.
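The fix, as a hedged sketch (the buddy-class name is my own choice; the attributes are the real System.ComponentModel.DataAnnotations ones):

```csharp
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

// Partial class in a separate file, same namespace as the generated class
[MetadataType(typeof(SalesOrderDetailMetaData))]
public partial class SalesOrderDetail { }

public class SalesOrderDetailMetaData
{
    // Same name and type as the generated property; the attribute here
    // is merged onto the real SalesOrderDetail.SalesOrderDetailID.
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int SalesOrderDetailID { get; set; }
}
```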
The process requires me to add a partial class in another file (see later for more on this) with the [MetadataType(typeof(SalesOrderDetailMetaData))] attribute, and then add the needed attribute to the SalesOrderDetailID property in a new class, sometimes called a 'buddy' class. The effect is to apply those attributes to the existing properties. That fixed my problem with EF creating new SalesOrderDetail rows properly, and I was away.

What happens when the database changes?

Having sorted the scaffolding as discussed above, just repeat step 1, 'Creating the Entity Framework Classes from the existing database'. There are a few things you need to do before, during and after the re-import.
• You should remember/copy the name of the DbContext, so you use the same name when you re-import. That way it will recompile properly without major name changes. • Because you are using the same name as the existing DbContext, you must delete the previous DbContext, otherwise the re-importing process will fail. If it's easier you can delete all the generated files, as they are replaced anyway.
That is why I suggest you put them in a separate directory with no other files added. • When re-importing, by default the process will add the connection string to your App.Config file again. I suggest you un-tick that option, otherwise you end up with lots of connection strings (a minor point, but it can be confusing).
• If you use source control (I really recommend you do) then a quick compare of the files to check what has changed is worthwhile.

Adding new properties or methods to the Entity classes

In my case I wanted to add some more properties or methods to the classes. Clearly I can't add properties that change the database – I would have to talk to the DBA to change the database definition and import the new database schema again.
However, in my case I wanted to add properties that accessed existing database properties to produce more useful output, or to add a convenience property, like HasSalesOrder. You can do this because the scaffolding produces 'partial' classes, which means I can have another file which adds to that class. To do this the new file must have the same namespace as the generated classes, and the class must be declared as public partial.
I recommend you put them in a different folder to the generated files. That way they will not be overwritten by accident when you recreate the generated files (note: the namespace must be the original namespace, not that of the new folder).
Below I give an example where I added to the Customer class. Ignore for now the IModifiedEntity interface (dealt with later in this article) and the [Computed] attribute, which I will cover in the second article. Note that you will almost certainly want to add to the DbContext class too (I did – see section 4 below).
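A sketch of that kind of partial class (the FullName helper is my own illustration; HasSalesOrder assumes the SalesOrderHeaders relationship has been loaded):

```csharp
using System.Linq;

// Must sit in the same namespace as the generated Customer class,
// even if the file lives in a different folder.
public partial class Customer : IModifiedEntity
{
    // Derived, read-only property built from existing database columns
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }

    // Convenience flag; relies on SalesOrderHeaders being loaded
    public bool HasSalesOrder
    {
        get { return SalesOrderHeaders != null && SalesOrderHeaders.Any(); }
    }
}
```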
This is also defined as a partial class, so you can use the same approach. Which leads me on to…

Dealing with properties best dealt with at the Data Layer

In the AdventureWorks database there are two properties called ModifiedDate and rowguid. In the AdventureWorks Lite database these are not generated in the database, therefore the software needs to update ModifiedDate on create or update, and set the rowguid on create. Many databases have properties like this and, if not handled by the database, they are best dealt with at the Data/Infrastructure layer.
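One way to centralise this, sketched using the IModifiedEntity interface the article describes (the override logic itself is my own illustration):

```csharp
using System;
using System.Data.Entity;

// Interface added (via partial classes) to every entity that has
// the ModifiedDate and rowguid columns
public interface IModifiedEntity
{
    Guid rowguid { get; set; }
    DateTime ModifiedDate { get; set; }
}

public partial class AdventureWorksLt2012 : DbContext
{
    public override int SaveChanges()
    {
        foreach (var entry in ChangeTracker.Entries<IModifiedEntity>())
        {
            if (entry.State == EntityState.Added)
                entry.Entity.rowguid = Guid.NewGuid();

            if (entry.State == EntityState.Added ||
                entry.State == EntityState.Modified)
                entry.Entity.ModifiedDate = DateTime.UtcNow;
        }
        return base.SaveChanges();
    }
}
```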
With EF this can be done by providing a partial class and overriding the SaveChanges() method to handle the specific issues your database needs. In the case of AdventureWorks I added an IModifiedEntity interface to each partial class that has the ModifiedDate and rowguid properties. Then I added code of this kind to the AdventureWorksLt2012 DbContext to provide the functionality required by this database.

Using SQL Stored Procedures

Some databases rely on stored procedures (SPs) for insert, update and delete of rows in a table. AdventureWorksLT2012 did not, but if you need this, EF 6 has added a neat way of linking to stored procedures. It's not trivial, but you can find documentation on how to get EF to use SPs for Insert, Update and Delete operations. Clearly if the database needs SPs for CUD (Create, Update and Delete) actions then you need to use them, and there are plenty of advantages in doing so.
In the absence of stored procedures, it is easy from the software point of view to use EF's CUD actions, and EF's CUD actions have some nice features. For instance, EF has an in-memory copy of the original values and uses this to work out what has changed. The benefit is that EF updates are efficient – you update one property and only that cell in the row is updated. The more subtle benefit is tracking changes and handling SQL security, i.e.
if you use SQL column-level security (Grant/Deny) then, if a property is unchanged, we do not trigger a security breach. This is a bit of an esoteric feature, but I have used it and it works well.

Other things you could do

This is all I had to do to get EF to work with an existing database, but there are other things I have had to use in the past.
Here is a quick run through of the other items. Using direct SQL commands: sometimes it makes sense to bypass EF and use a SQL command, and EF has all the commands to allow you to do this. The EF documentation has a page on this which gives a reasonable overview, but I recommend Julia Lerman's book, which goes into it in more detail (note: this book is very useful, but it covers an earlier version of EF so misses some of the latest commands, like the use of SPs in Insert, Update and Delete). For certain types of read, SQL makes a lot of sense. For instance, in my GenericSecurity library I need to read the current SQL security setup. I think you will agree it makes a lot of sense to do this with a direct SQL read, rather than defining multiple data classes just to build the command. Neither of these examples had parameters, but if you do need parameters then the SqlQuery and SqlCommand methods can take them, and they are checked to protect against a SQL injection attack.
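A hedged sketch of parameterised raw SQL in EF 6 (SqlQuery and ExecuteSqlCommand are the real EF 6 APIs; the table and column names are only for illustration):

```csharp
using System.Data.Entity;
using System.Data.SqlClient;
using System.Linq;

public static class RawSqlExample
{
    public static void RunQueries()
    {
        using (var db = new AdventureWorksLt2012())
        {
            // SqlQuery maps each row of the result onto the given type
            var emails = db.Database.SqlQuery<string>(
                "SELECT EmailAddress FROM SalesLT.Customer WHERE LastName = @name",
                new SqlParameter("@name", "Smith")).ToList();

            // ExecuteSqlCommand runs a non-query; returns rows affected
            int rowsAffected = db.Database.ExecuteSqlCommand(
                "UPDATE SalesLT.Customer SET Suffix = NULL WHERE CustomerID = @id",
                new SqlParameter("@id", 1));
        }
    }
}
```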
The documentation shows this. One warning on SqlCommands: once you have run a SqlCommand, EF's view of the database, some of which is held in memory, is out of date. If you are going to close/dispose of the DbContext straight away then that isn't a problem. However, if the command is followed by other EF accesses, read or write, then you should use the EF 'Reload' command to get EF back on track. See my separate post for more on this.
SQL Transaction control. When using EF to do any database updates via the SaveChanges() method, all the changes are done in one transaction, i.e. if one fails then none of the updates are committed. However, if you are using raw SQL updates, or a combination of EF and SQL updates, you may well need these to be done in one transaction. Thankfully EF version 6 introduced commands that allow you to control transactions.
I used these commands in my EF code to work with SQL security. I wanted to execute a set of SQL commands to set up SQL Security roles and grant/deny access but, if any one failed, to roll everything back. The code to execute a sequence of SQL commands, rolling back if any single command fails, is given below.
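A sketch of that rollback pattern, using EF 6's Database.BeginTransaction (the API is real EF 6; the helper class is my own illustration):

```csharp
using System.Collections.Generic;
using System.Data.Entity;

public static class SqlBatchRunner
{
    // Runs each command in order inside one transaction;
    // if any command throws, none of them are committed.
    public static void ExecuteInOneTransaction(
        DbContext db, IEnumerable<string> sqlCommands)
    {
        using (var transaction = db.Database.BeginTransaction())
        {
            try
            {
                foreach (var sql in sqlCommands)
                    db.Database.ExecuteSqlCommand(sql);
                transaction.Commit();
            }
            catch
            {
                transaction.Rollback();
                throw;
            }
        }
    }
}
```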
You can also use the same commands with a mix of SQL commands and EF commands; there is an example of that too.

Conclusion

There were a few issues to sort out, but all of them were fixable. Overall, getting EF to work with an existing database was fairly straightforward, once you know how. The problem I had with multiple keys (see section 1) was nasty, but now I, and you, know about it we can handle it in the future.
I think the AdventureWorks Lite database is complex enough to be a challenge, with lots of relationships, composite primary keys, computed columns, nullable properties, etc. Therefore getting EF to work with AdventureWorks is a good test of EF's capability to work with existing SQL databases. While the AdventureWorks Lite database did not need any raw SQL queries or stored procedures, other projects of mine have used these, and I have mentioned some of those features at the end of the article to complete the picture.
In fact, version 6 of EF added a significant number of extra features and commands to make mixed EF/SQL access very possible. The more I dig into things, the more goodies I find in EF 6. For instance, EF 6 brought in a whole raft of new features beyond the ones covered here. Have a good look around the documentation – there is a lot there. So, no need to hold back on using Entity Framework on your next project that has to work with an existing SQL database.
You can use it in a major role, as I did, or, now that you have good connection sharing, just use it for the simple CRUD cases that do not need heavy T-SQL methods. My second article carries on this theme by looking at the challenges of displaying and updating this data at the user interface end. There I talk about various methods to develop a good user experience quickly, while still keeping reasonable database performance.
Jon P Smith is a full-stack software developer and architect who focuses on Microsoft web applications, using the Entity Framework (EF) ORM on the server side with various front-end JavaScript libraries. Jon is especially interested in defining patterns and building libraries that improve the speed of development of web/database applications. As well as his articles on Simple-Talk, Jon has a number of extra articles on his own technical blog, and has produced a number of open-source projects. He is also the author of a book published by Manning.

Lazy Loading

I too found the terminology here a bit confusing, so I looked it up when I was writing the article. The linked documentation says that 'Lazy loading is the process whereby an entity or collection of entities is automatically loaded from the database the first time that a property referring to the entity/entities is accessed'.
If you look at the document at that link you will see a definition of the three loading terms that EF uses: eager, lazy and explicit loading. I had avoided lazy loading, and it seems I didn't quite understand it; however, readers like you have put me right. I still don't want to use lazy loading, because of performance issues, so I use .Include statements if I want a relationship loaded.

Lazy Loading and other reverse engineer options

Lazy loading will not directly drag in things that you did not want.
If you access certain entity properties in code (properties that belong to child entities related to a main parent entity, for example) then by definition you wanted access to them, so EF will lazily load them at the time that portion of your code is actually executed. The problem with lazy loading is that it will fetch those entities individually at separate times which, when done in some sort of code loop, can result in the famous N+1 database queries problem with EF (one initial query for the parent entity and then N queries as you go through the loop accessing properties of each related entity).
The other comment I wanted to make: the reverse engineer process that you reference was only added to the official EF tools package starting with EF 6.1. If somebody is still stuck with earlier versions of EF but needs to reverse engineer a database schema to get Code First classes then that person is stuck having to rely on some third-party Visual Studio extensions for that task.
I blogged about that issue on my own blog.

Lazy Loading and other reverse engineer options

Hi Cristian (name from your blog). Thank you for your comment, as it showed a hole in my understanding. I had avoided lazy loading after some initial problems, and therefore had not really looked into it properly.
As you say, it loads navigational properties 'when they are read', not when the main class is loaded. I have updated the article to reflect that – sorry for my error. As you say, lazy loading isn't conducive to good SQL performance, as it creates multiple SQL commands, so I don't want to use it. On the comment about EF 6.1 and reverse engineering: I have to say the main reason I could write this article is that version 6.1 of EF includes a whole raft of features that make working with existing SQL databases so much easier. I do, however, mention two other reverse engineering tools which work with pre-6.1 EF code, one of which you mention in your blog.
However, for me, the range of new features I mention at the end of the article makes EF 6.1 the preferred choice for working with databases that use SQL to the full.

RE: Great article

Hi Anonymous. Thanks for your comment. As an active developer/architect I spend a lot of time trying to work things out from the documentation, so I know what works for me.
I want to know 'why' as well as 'how', because often just knowing 'how' isn't enough. My feeling is that EF 6 has made a big difference to what you can do with existing databases, but you have to decide if it can work/help in your situation. A second, follow-on article coming out in a week's time will look at the other part of the puzzle, i.e.
using EF in a real MVC5 web application. I think the two together cover the use of EF from end to end.

RE: T4 Files

Hi RickyTad. Yes, as I explained in section 2, 'Altering the code that Reverse Engineering produces', you can download and then edit the T4 files. That section includes a link to an article from the Data Developer team, but it is mainly about how to download the templates rather than how the T4 works. I wouldn't say that the T4 code is very easy to follow, and I haven't found any documentation on it (I didn't expect to). I think a bit of experimenting will be needed, but I expect you can achieve what you are looking to do.

RE: T4 Files

Hi Jon, thank you for the reply.
I will definitely have to make some trial-and-error experiments with the T4 files. The problem is that these T4 scripts are executed only when adding a new 'ADO.NET Entity Data Model' to the project and selecting 'Code First from Database', and breakpoints set in the T4 files are not hit during code generation. Without Visual Studio debugging it is very difficult to understand what exactly happens in the T4 scripts.
Do you know any 'trick' to trigger execution and debug these T4 scripts in Visual Studio?

Auto-save database changes in the Database First approach

Hi Ammarhassan48. From your question I assume you are using a 'Model First' approach, where you create the database using the EF Designer. If you are doing that, then I don't believe there is a way of changing the edmx files without running the designer or changing the xml file. However, if you are changing your database a lot, then you might like to think about changing to the Code First/reverse engineering approach I have described in this article. See the process on the EF documentation site.
I have used and extensively. They are fairly similar in features and price.
They both offer useful performance profiling and quite basic memory profiling. DotTrace integrates with ReSharper, which is really convenient, as you can profile the performance of a unit test with one click from the IDE. However, dotTrace often seems to give spurious results (e.g. saying that a method took several years to run). I prefer the way that ANTS presents the profiling results: it shows you the source code and, to the left of each line, tells you how long it took to run.
DotTrace just has a tree view. The EQATEC profiler is quite basic and requires you to compile special instrumented versions of your assemblies, which can then be run in it. It is, however, free. Overall I prefer ANTS for performance profiling, although if you use ReSharper then the integration of dotTrace is a killer feature and means it beats ANTS in usability. The free Microsoft CLR Profiler is all you need for .NET memory profiling. 2011 Update: there is a profiler with quite a basic UI but lots of useful information, including some information on unmanaged memory, which dotTrace and ANTS lack – you might find it useful if you are doing COM interop, but I have yet to find any profiler that makes COM memory issues easy to diagnose; you usually have to break out windbg.exe. The ANTS profiler has come on in leaps and bounds in the last few years, and its memory profiler has some truly useful features which have now pushed it ahead of dotTrace as a package, in my estimation.
I'm lucky enough to have licenses for both, but if you are going to buy one.Net profiler for both performance and memory, make it ANTS. Others have covered performance profiling, but with regards to memory profiling I'm currently evaluating both the Scitech.NET Memory Profiler 3.1 and ANTS Memory Profiler 5.1 (current versions as of September 2009).
I tried the JetBrains one a year or two ago and it wasn't as good as ANTS (for memory profiling) so I haven't bothered this time. From reading the web sites it looks like it doesn't have the same memory profiling features as the other two. Both ANTS and the Scitech memory profiler have features that the other doesn't, so which is best will depend upon your preferences. Generally speaking, the Scitech one provides more detailed information while the ANTS one is really incredible at identifying the leaking object.
Overall, I prefer the ANTS one because it is so quick at identifying possible leaks. I would add that dotTrace's ability to diff memory and performance trace sessions is absolutely invaluable (ANTS may also have a memory diff feature, but I didn't see a performance diff). Being able to run a profiling session before and after a bug fix or enhancement, then compare the results, is incredibly valuable, especially with a mammoth legacy .NET application (as in my case) where performance was never a priority and where finding bottlenecks could be very tedious. Doing a before-and-after diff allows you to see the change in call count for each method and the change in duration for each method. This is helpful not only during code changes, but also if you have an application that uses a different database, say, for each client/customer. If one customer complains of slowness, you can run a profiling session using their database and compare the results with a 'fast' database to determine which operations are contributing to the slowness.
Of course there are many database-side performance tools, but sometimes it really helps to see the performance metrics from the application side (since that's closer to what the user actually sees). Bottom line: dotTrace works great, and the diff is invaluable.
