Being a frequent user of SQL Server Reporting Services for the past 6 years has made me blind to the annoyances that I had with the software when I first started using it. For example, I’ve learned to deal with not being able to easily change the order that reports appear in the Solution Explorer. I am no longer frustrated that I can’t change the order of datasets in the Report Data pane. I’ve grown accustomed to editing the report’s XML to work around these annoyances and many others.
Even though I think SSRS is a superb piece of software (no other tool I know of can generate reports so easily with as much flexibility as SSRS), there is one missing feature that still drives me nuts:
Why can’t I organize my reports in sub folders!?
Seriously. Visual Studio allows me to organize my files in sub folders in nearly all other project types: C# console apps? Check. ASP.NET MVC solutions? Check.
Why then can’t my SSRS solutions do the same thing?
If you look at the Microsoft Connect item for this issue, more than 200 people agree. It’s ridiculous this functionality isn’t built in. Not only does Visual Studio have the capability in other solution types, but reports can already be deployed to multiple different folders on the SSRS server itself, leaving the only missing link a context menu action that says “Create New Folder.” I know what I’m asking for here is a “basic” change, nothing nearly as complicated as adding an additional QUALIFY filtering clause (which would be great to have too), but that’s all the more reason this should have been fixed a long time ago!
However, there is a brief glimmer of hope. Microsoft has been releasing updates to SQL Server Data Tools somewhat regularly the past year, including bug fixes and feature requests from Connect feedback. Let’s hope they continue to get better about fixing issues like these so that everyone will be able to right click > Create New Folder in their SSRS projects sometime in the near future.
It is easy to get caught up in the daily details of life and not take the time to reflect on longer term goals and accomplishments.
Inspired by Brent Ozar and Steve Kamb, these Epic Life Quests are intended to help me reflect on my accomplishments and help me stay focused on the things that are important.
Each level contains five achievements and once all are completed I can “level up” to the next five. Follow along and let me know if you create any epic life quests of your own.
Level 2 (currently working on)
Blog weekly for 6 months straight — Last year I began blogging more than any previous year, but I didn’t always stick to a schedule. My biggest problem was I didn’t know what I wanted to write about, so choosing topics was difficult and frustrating. After looking back at which posts were the most well-received, I’ve decided to focus the first half of 2017 on mostly technical and professional development topics.
Vacation in Hawaii — Our vacations in 2016 focused on places we could reach by car so that we could save some money for a larger trip. This will be the bigger trip. After visiting Hawaii, I will have visited 36 states + Washington D.C. (airports don’t count!).
Work on mental mindfulness — practice meditation to improve focus and patience, manage stress, and be happier. I want to average at least 5 days/week for 3 months to reach this goal.
Always be reading at least one book — Although I read 40+ books in 2016, there were stretches of weeks at a time where I was not reading anything. For 6 months I don’t want to go more than 3 days without having a book picked out and available to read.
Level 1 Quests (completed before 2017)
Here are some of my achievements before I started this page on January 1, 2017.
Set up an environment for programming regularly at home — completed 2016
Recently I have been working on a project where I needed to parse XML files that were between 5 MB and 20 MB in size. Performance was critical for the project, so I wanted to make sure that I would parse these files as quickly as possible.
The two C# classes that I know of for parsing XML are XmlReader and XmlDocument. Based on my understanding of the two classes, XmlReader should perform faster in my scenario because it reads through an XML document only once, never storing more than the current node in memory. In contrast, XmlDocument stores the whole XML file in memory, which has some performance overhead.
Not knowing for certain which method I should use, I decided to write a quick performance test to measure the actual results of these two classes.
In my project, I knew what data I needed to extract from the XML up front, so I decided to configure the test in a way that mimics that requirement. If my project required me to run recursive logic on the XML document — needing a piece of information further down in the XML in order to know what pieces of information to pull earlier on from the XML — I would have set up an entirely different test.
For my test, I decided to use the Photography Stack Exchange user data dump as my sample file since it mimics the structure and file size of one of my actual project’s data files. The Stack Exchange data dumps are great sample data sets because they contain real-world data and are released under a Creative Commons license.
The C# code for my test can be found in its entirety on GitHub.
In my test I created two methods to extract the same exact data from the XML; one of the methods used XmlReader and the other XmlDocument.
The first test uses XmlReader. The XmlReader object only stores a single node in memory at a time, so in order to read through the whole document we need to use while (reader.Read()) to loop through all of the nodes. Inside the loop, we check whether each node is an element we are looking for and, if so, parse out the necessary data:
public static void XmlReaderTest(string filePath)
{
    // Create storage for the Ids of all of the rows where Reputation == 1
    List<string> singleRepRowIds = new List<string>();
    using (XmlReader reader = XmlReader.Create(filePath))
        while (reader.Read())
            if (reader.Name == "row" && reader.GetAttribute("Reputation") == "1")
                singleRepRowIds.Add(reader.GetAttribute("Id"));
}
On the other hand, the code for XmlDocument is much simpler: we load the whole XML file into memory and then write a LINQ query to find the elements of interest:
public static void XmlDocumentTest(string filePath)
{
    XmlDocument doc = new XmlDocument();
    doc.Load(filePath);
    List<string> singleRepRowIds = doc.GetElementsByTagName("row").Cast<XmlNode>()
        .Where(x => x.Attributes["Reputation"].InnerText == "1")
        .Select(x => x.Attributes["Id"].InnerText).ToList();
}
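To see that the two approaches really do extract identical data, here is a minimal, self-contained sketch along the same lines. The tiny inline XML and temp-file path are stand-ins of my own invention, not the actual Stack Exchange dump:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Xml;

public static class Demo
{
    public static bool ResultsMatch()
    {
        // Write a tiny stand-in for the Users.xml dump to a temp file.
        string path = Path.Combine(Path.GetTempPath(), "users-sample.xml");
        File.WriteAllText(path,
            "<users><row Id=\"1\" Reputation=\"1\"/><row Id=\"2\" Reputation=\"50\"/><row Id=\"3\" Reputation=\"1\"/></users>");

        // XmlReader approach: stream through the document node by node.
        var readerIds = new List<string>();
        using (XmlReader reader = XmlReader.Create(path))
            while (reader.Read())
                if (reader.NodeType == XmlNodeType.Element &&
                    reader.Name == "row" && reader.GetAttribute("Reputation") == "1")
                    readerIds.Add(reader.GetAttribute("Id"));

        // XmlDocument approach: load everything into memory, then query.
        XmlDocument doc = new XmlDocument();
        doc.Load(path);
        var docIds = doc.GetElementsByTagName("row").Cast<XmlNode>()
            .Where(x => x.Attributes["Reputation"].InnerText == "1")
            .Select(x => x.Attributes["Id"].InnerText).ToList();

        // Both approaches should find the same rows (Ids 1 and 3).
        return readerIds.SequenceEqual(docIds);
    }
}
```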
After writing these two methods and confirming that they are returning the same exact results it was time to pit them against each other. I wrote a method to run each of my two XML parsing methods above 50 times and to take the average elapsed run time of each to eliminate any outlier data:
public static double RunPerformanceTest(string filePath, Action<string> performanceTestMethod)
{
    Stopwatch sw = new Stopwatch();
    int iterations = 50;
    double elapsedMilliseconds = 0;
    // Run the method 50 times to rule out any bias.
    for (var i = 0; i < iterations; i++)
    {
        sw.Restart();
        performanceTestMethod(filePath);
        sw.Stop();
        elapsedMilliseconds += sw.ElapsedMilliseconds;
    }
    // Calculate the average elapsed seconds per run
    double averageSeconds = (elapsedMilliseconds / iterations) / 1000.0;
    return averageSeconds;
}
Results and Conclusions
Cutting to the chase, XmlReader performed faster in my test:
Now, is this ~.14 seconds of speed difference significant? In my case, it is, because I will be parsing many more elements and many more files dozens of times a day. After doing the math, I estimate I will save 45–60 seconds of parsing time for each set of XML files, which is huge in an almost-real-time system.
Would I have come to the same conclusion if blazing fast speed was not one of my requirements? No, I would probably go the XmlDocument route because the code is much cleaner and therefore easier to maintain.
And if my XML files were 50 MB, 500 MB, or 5 GB in size? I would probably still use XmlReader at that point because trying to store 5 GB of data in memory will not be pretty.
What about a scenario where I need to go backwards in my XML document? This might be a case where I would use XmlDocument, because it is more convenient to move backwards and forwards with that class. However, a hybrid approach might be my best option if the data allows it: if I can use XmlReader to get through the bulk of my content quickly and then load just certain child trees of elements into XmlDocument for easier backwards/forwards traversal, that would seem like an ideal scenario.
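A minimal sketch of that hybrid idea, assuming the same row/Reputation structure as above (the inline sample XML here is hypothetical): stream with XmlReader until an element of interest appears, then hand just that element’s subtree to an XmlDocument via ReadSubtree() for random-access inspection:

```csharp
using System;
using System.IO;
using System.Xml;

public static class HybridParser
{
    // Hypothetical in-memory sample standing in for a large file on disk.
    const string sampleXml =
        "<users>" +
        "<row Id=\"1\" Reputation=\"1\" />" +
        "<row Id=\"2\" Reputation=\"101\" />" +
        "</users>";

    public static int CountSingleRepRows()
    {
        int count = 0;
        using (XmlReader reader = XmlReader.Create(new StringReader(sampleXml)))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "row")
                {
                    // Load only this element's subtree into an XmlDocument,
                    // where we can traverse backwards and forwards freely.
                    XmlDocument doc = new XmlDocument();
                    doc.Load(reader.ReadSubtree());
                    if (doc.DocumentElement.Attributes["Reputation"].Value == "1")
                        count++;
                }
            }
        }
        return count;
    }
}
```

The memory footprint stays proportional to a single subtree rather than the whole file, which is what makes this attractive for large documents.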
In short, XmlReader was faster than XmlDocument in my scenario. The only way I could come to this conclusion, though, was by running real-world tests and measuring the performance data.
So should you use XmlReader or XmlDocument in your next project? The answer is it depends.