PerByte is a small company that accomplishes a lot of great things for our customers. Whether we're working with them on a configuration, data conversion, or implementation, they keep us pretty busy. Being busy, however, is not the same thing as being too busy.
To say "too busy" is merely to say "confused priorities."
- Jonathan Lockwood Huie
As the core of our business, the work we do for our customers is obviously very important and always commands a high priority. However, if that were our only priority, we would be negligent toward those same customers and the ones to come. We also need to invest in sharing what we've learned and in improving our process.
This blog is a first example of sharing what we've learned. It's intentionally minimal, with a focus on small but important discoveries. It'll have the occasional rant and philosophical discussion, but it leans strongly toward our core competencies (Aptify, Sitefinity, .NET, development, databases, etc.).
What would happen if we stepped back and started collecting a larger-scoped package of what we've learned?
Core is, at its simplest, a collection of open-source libraries that make developing for .NET and Aptify more efficient. Utilities, wrapper classes, components, and should-have-been-stock functionality are wrapped up in an easy-to-use format.
While still a work in progress, it's already chock-full of good bits that'll help you out immediately. It is open source, but we do ask that you request access (be sure to give us your GitHub account name).
Bringing You More
As part of our investment in improving our process, we're building a set of tools that we will make generally available to anyone using the Aptify platform.
Visual Studio, PowerShell, and Aptify will be tightly integrated to bring you templates, scripting capabilities, better testing, debugging, and more.
We have launchers, best-practice analyzers, object syncing, and a myriad of other tools planned to generally make your lives better.
These are still in development but will start becoming available in 2015. Register now and we'll notify you as development matures.
It's effective, plain, and independent. And it doesn't care about anything.
What should we do?
So I need to get the data from SQL into these objects, a problem that has been solved really well many times over.
How about ORMs?
For data access, a well designed ORM can be a huge boon to productivity and code quality. Using Entity Framework or NHibernate is like riding shotgun in a Hummer Limo. You'll have a chauffeur and you'll get there in style. You will, however, need to speak up and yell at the driver if you know a better route, assuming they listen, and good luck finding a close parking spot with that behemoth.
I'm just looking to run a couple of SQL statements and get the data. For this situation they really take the simple out of KISS. Keep it. Stupid.
How about Micro-ORMs?
Micro-ORMs are, in general, a really good fit for this type of problem. Given that I want to deal with strongly typed objects, there are PetaPoco and Dapper. If I could use dynamic types, I could use the wonderful Massive.
If an ORM is like riding in a Hummer limo, then a Micro-ORM is like riding a bike. They're efficient and unobtrusive. You can cut through shortcuts and park anywhere. A bike is great, but it's still overkill when all I need to do is see who rang the doorbell.
Dynamically emitted mapping code and a fair bit of reflection seem a bit out of place for code that's going to run once, and all at once.
Why not write code the long way?
I nearly chose this option. Shameful, I know. It's quick and dirty, but it works. Then I remembered that I've been trying to live up to one of my modern code-writing mantras: only write code that only you can write. In short, don't write boilerplate code if you can help it.
There really should be a middle ground. A Micro-Micro-ORM, perhaps. For my purposes, I have created an incredibly simple set of extension methods that lets me write something with a measure of elegance that fits my needs. There is much that could be improved, but I figured if I did all that, I'd just end up with MassiveDapperPoco.
Assuming this is our POCO:
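The original sample class isn't reproduced here, so this is a minimal stand-in; the Person name and its properties are my own invention for illustration:

```csharp
// A plain old CLR object: no base class, no attributes, no persistence
// logic. Property names line up with the query's column names.
public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```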
Let's start with the boilerplate:
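A sketch of what that boilerplate typically looks like, assuming a Person POCO with Id, FirstName, and LastName properties and a SQL Server connection string (the table and class names here are mine):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public static class PersonLoader
{
    // Classic ADO.NET: open, execute, loop, map every column by hand.
    public static List<Person> Load(string connectionString)
    {
        var people = new List<Person>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, FirstName, LastName FROM Person", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    people.Add(new Person
                    {
                        Id = reader.GetInt32(0),
                        FirstName = reader.GetString(1),
                        LastName = reader.GetString(2)
                    });
                }
            }
        }
        return people;
    }
}
```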
Nothing terribly exciting. It's smelly, but it works. The mere presence of LINQ can make our code a little prettier:
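A sketch of the shape such LINQ-over-a-reader code takes (the method and type names are mine, not the post's):

```csharp
using System.Collections.Generic;
using System.Data.Common;
using System.Linq;

public static class PersonLinqLoader
{
    // DbDataReader's enumerator hands back a DbDataRecord copy for each
    // row, which is where the hidden overhead comes from.
    public static List<Person> Load(DbDataReader reader)
    {
        return reader.Cast<DbDataRecord>()
            .Select(record => new Person
            {
                Id = record.GetInt32(0),
                FirstName = record.GetString(1),
                LastName = record.GetString(2)
            })
            .ToList();
    }
}
```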
We hid the loop in LINQ, but it's doing all sorts of extra work behind the scenes. This uses IDataReader.GetEnumerator, which copies values around a few times, instantiates a number of extra objects, and generally causes more overhead than we really need.
The code also doesn't look that much better, so the value is minimal. We can do better, of course. With a set of easily reused extension methods and some simple enumeration code, we can get to this:
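The actual extension methods aren't shown above; here's one plausible reconstruction of the idea, a thin enumerator over IDataReader plus a typed by-name getter (the names are my own):

```csharp
using System.Collections.Generic;
using System.Data;

public static class DataReaderExtensions
{
    // Enumerates rows by yielding the live reader itself: no per-row
    // record copies, no extra allocations.
    public static IEnumerable<IDataRecord> AsEnumerable(this IDataReader reader)
    {
        while (reader.Read())
            yield return reader;
    }

    // A typed, by-name getter to keep the mapping terse.
    public static T Get<T>(this IDataRecord record, string column)
    {
        return (T)record[column];
    }
}
```

With these in place, the mapping collapses to `reader.AsEnumerable().Select(r => new Person { Id = r.Get<int>("Id"), FirstName = r.Get<string>("FirstName"), LastName = r.Get<string>("LastName") }).ToList()`.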
And of course, if you use some manner of shared database context in your app, this could easily be a one-liner:
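A hypothetical shared context (the Db name and Query helper are entirely my own) that would make it a one-liner:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Hypothetical shared database context; only the Query helper matters here.
public static class Db
{
    public static string ConnectionString { get; set; }

    public static List<T> Query<T>(string sql, Func<IDataRecord, T> map)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            var results = new List<T>();
            using (var reader = command.ExecuteReader())
                while (reader.Read())
                    results.Add(map(reader));
            return results;
        }
    }
}

// The one-liner:
// var people = Db.Query("SELECT Id, FirstName, LastName FROM Person",
//     r => new Person { Id = (int)r["Id"],
//                       FirstName = (string)r["FirstName"],
//                       LastName = (string)r["LastName"] });
```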
PowerBox, BashBox, TextShell, whatever
This should be easier. .NET provides some great ways to launch a console application and capture its output, but I kept finding bad example after bad example and ended up running around like a chicken with my head cut off. Hence, this post.
To clarify what I mean by "redirecting console output", most of us have seen a typical console window:
This is what it usually looks like when you run a console application in Windows. However, while working on a side project of mine, I needed a way to integrate CDRDAO into my application. CDRDAO is an open-source console application that burns and rips CDs; I wanted to integrate it without popping up the ugly black console window, and I also needed to be able to read its output within my application. I wanted it to look something like this:
Easy, I thought. Well, sort of, but not exactly straightforward without some research.
Running a Process
Usually when starting a process, you would simply use the quick and easy Process.Start static method. However, in order to allow for more complicated operations, you'll need to instantiate the ProcessStartInfo and Process classes like so:
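For instance (the cdrdao path and arguments below are placeholders, not from the original post):

```csharp
using System.Diagnostics;

class Launcher
{
    static void Main()
    {
        // The quick way; fine when you don't need redirection:
        // Process.Start(@"C:\Tools\cdrdao\cdrdao.exe", "scanbus");

        // The explicit form we'll build on:
        var info = new ProcessStartInfo
        {
            FileName = @"C:\Tools\cdrdao\cdrdao.exe",
            Arguments = "scanbus"
        };

        var process = new Process { StartInfo = info };
        process.Start();
        process.WaitForExit();
    }
}
```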
Hiding the Window
Hiding the window is as simple as setting the CreateNoWindow property of the ProcessStartInfo class to true. Don't take a wild guess like I did and try to set the WindowStyle property to ProcessWindowStyle.Hidden, as it appears to do nothing at all.
This is where it gets a tad bit more complicated, as there are numerous things you must do to make the output redirections work. Let's just lay down the code and then we can walk through it:
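A sketch with the key settings in place (path and arguments are placeholders):

```csharp
using System;
using System.Diagnostics;

class CdrdaoRunner
{
    public void Run()
    {
        var info = new ProcessStartInfo
        {
            FileName = @"C:\Tools\cdrdao\cdrdao.exe",  // full, rooted path
            Arguments = "scanbus",
            CreateNoWindow = true,         // no console window
            UseShellExecute = false,       // required for redirection
            RedirectStandardOutput = true  // send stdout to our app
        };

        var process = new Process
        {
            StartInfo = info,
            EnableRaisingEvents = true
        };
        process.OutputDataReceived += process_OutputDataReceived;

        process.Start();
        process.BeginOutputReadLine();     // start async reads of stdout
    }

    void process_OutputDataReceived(object sender, DataReceivedEventArgs e)
    {
        if (e.Data != null)
            Console.WriteLine(e.Data);     // do whatever you like with it
    }
}
```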
The first new thing we've added here is info.UseShellExecute = false;. By default, the Process class uses the system shell to execute processes. Several things are affected by this, but most important is that you cannot redirect output when using the system shell. However, when not using the system shell (setting info.UseShellExecute to false), you must provide the full, rooted path to the EXE file, as the current working directory does not apply.
Next we have info.RedirectStandardOutput = true;, which simply tells the process that we want to redirect the output to our application. We also need to enable raising events on the Process class, and hook up an event handler to the OutputDataReceived event. This new process_OutputDataReceived event handler will allow us to do whatever we want with the output from the CDRDAO process.
Furthermore, we set the SynchronizingObject property to our form instance, which ensures the output events are raised on the UI thread rather than on a thread-pool thread.
You'd think you'd be done here, but after all that, my testing with CDRDAO wasn't producing any output in the process_OutputDataReceived event. It turns out that CDRDAO writes all of its messages to standard error instead of standard output. Therefore, it's a good idea (and you might need) to redirect and capture the standard error output as well:
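A sketch of the same setup with the three additions marked (path, arguments, and handler names remain placeholders):

```csharp
using System.Diagnostics;

var info = new ProcessStartInfo
{
    FileName = @"C:\Tools\cdrdao\cdrdao.exe",
    Arguments = "scanbus",
    CreateNoWindow = true,
    UseShellExecute = false,
    RedirectStandardOutput = true,
    RedirectStandardError = true              // new: redirect stderr too
};

var process = new Process { StartInfo = info, EnableRaisingEvents = true };
process.OutputDataReceived += process_OutputDataReceived;
process.ErrorDataReceived += process_ErrorDataReceived;  // new: hook stderr

process.Start();
process.BeginOutputReadLine();
process.BeginErrorReadLine();                 // new: begin async stderr reads
```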
The three new lines added here should be fairly obvious, as they follow the same logic as the standard output: RedirectStandardError which tells the process to redirect the output, ErrorDataReceived which hooks up the event, and BeginErrorReadLine which initializes capturing the error output.
Consuming the Output
Obviously, there are a number of things you can do in the process_ErrorDataReceived and process_OutputDataReceived events, such as parsing the output to control the logic of your application, or simply showing the output to the user. It's also important to note that you can combine those two event handlers into one without issues, and just point both OutputDataReceived and ErrorDataReceived to the same event handler method.
Then, in a Windows Forms application, you can send your output to a read-only, multiline TextBox control to display it nicely. It works best if you do it like this:
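Something along these lines, assuming a form with a read-only, multiline TextBox named outputTextBox (the form and control names are mine):

```csharp
using System;
using System.Diagnostics;
using System.Windows.Forms;

public partial class BurnForm : Form
{
    // A read-only, multiline TextBox dropped onto the form in the designer.
    private TextBox outputTextBox;

    void process_OutputDataReceived(object sender, DataReceivedEventArgs e)
    {
        // e.Data is null when the stream closes; IsDisposed guards against
        // the user closing the form before the process has exited.
        if (e.Data == null || IsDisposed)
            return;

        // With SynchronizingObject set to this form, we're already on the
        // UI thread here. AppendText also scrolls to the end for us.
        outputTextBox.AppendText(e.Data + Environment.NewLine);
    }
}
```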
Here, we simply check to make sure e.Data isn't null and that the form isn't disposed, as otherwise if the user closes the form before the process has exited, bad things will happen.
Finally, it's important to use the AppendText method of the TextBox control for performance reasons. The method will also automatically scroll to the end with each append operation.
Let us know in the comments if you run into any issues or do anything worth bragging about.
What do you do when you need to reference a third-party library? That's easy, right?
Would I have asked if it were that easy? How about when you're working in an application that allows for plug-ins or modules that aren't sandboxed or isolated in any way? (e.g. certain CMSs, ERPs, SAPs, WTFs)
Same answer right?
Not if you're worried about conflicting versions of those third-party libraries. Between code signing, breaking changes, and modules that are never updated, you're signing up for some annoyances at best.
OK, so let's throw this into the GAC. It allows side-by-side versioning.
True, but many third-party libraries never update the important part of their version number, precisely to avoid other versioning issues. So the older version and the newer version are, in fact, the same version. Makes sense, right?
I forgot to mention that the system I'm working in often runs as an ASP.NET website, which means a lot of dynamic compilation. Namespaces are critical, and unique ones are good. Now we have duplicates, and many existing pages that I can't control will likely break.
I recently had the opportunity to move all of the static content (images and documents) from one Sitefinity 4.x installation to another, the destination being Sitefinity 6.x. Telerik does provide a migration solution for Enterprise installations; however, I would imagine that the majority of Sitefinity customers are not running the enterprise version.
All of the Sitefinity static content is stored in SQL Server, not as files in the file system as you would expect. Therefore, I found it easiest to migrate the content via simple SQL scripts.
Unfortunately, there's an extreme lack of documentation on the structure of the Sitefinity database, so I built these scripts mostly through trial and error. Keep in mind that the data structure varies slightly between Sitefinity versions, so you might find an error or two to work through in the SQL. Obviously, make a backup of your database before proceeding.
As always, if you just want to grab the script and run, you can get the whole thing right here.
Ideally, before running these scripts you would ensure that both the old and new databases are in the same SQL Server instance; this certainly makes life easier. Alternatively, you can use SQL Server Linked Servers to connect to both instances at once.
Obviously, you'll need to replace the SitefinityOld and SitefinityNew database names with your own database names. If you're using linked servers, you'll need to add the instance names as well.
Sitefinity uses the sf_chunks table to store all of the binary data, so this is by far the biggest of the transfers. Here's the script to bring it over:
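In sketch form (the full script in the linked download should name columns explicitly, since the schemas differ slightly between versions; SELECT * only works when both sides match):

```sql
-- Copy all binary chunks from the old database to the new one.
INSERT INTO SitefinityNew.dbo.sf_chunks
SELECT *
FROM SitefinityOld.dbo.sf_chunks;
```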
The sf_media_content table contains most of the relevant metadata for the content. I never pinned down exactly what sf_approval_tracking_map is used for, but I got errors when I didn't include it; I assume it holds important information for the content workflow. Without further ado:
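In sketch form (again, list columns explicitly in the real script if the schemas differ):

```sql
-- Content metadata.
INSERT INTO SitefinityNew.dbo.sf_media_content
SELECT *
FROM SitefinityOld.dbo.sf_media_content;

-- Workflow/approval tracking rows the content rows depend on.
INSERT INTO SitefinityNew.dbo.sf_approval_tracking_map
SELECT *
FROM SitefinityOld.dbo.sf_approval_tracking_map;
```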
Thumbnails and URLs
The sf_media_thumbnails and sf_url_data tables contain exactly what you'd expect. The URLs are particularly important, as none of your content will work without them. Here goes:
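In sketch form:

```sql
-- Generated thumbnails for the media content.
INSERT INTO SitefinityNew.dbo.sf_media_thumbnails
SELECT *
FROM SitefinityOld.dbo.sf_media_thumbnails;

-- URL records; without these, none of the content resolves.
INSERT INTO SitefinityNew.dbo.sf_url_data
SELECT *
FROM SitefinityOld.dbo.sf_url_data;
```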
Finally, none of the content pieces in any of the custom libraries will show up without bringing the sf_libraries table over. So here we go:
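In sketch form:

```sql
-- Library definitions; custom libraries won't appear without these.
INSERT INTO SitefinityNew.dbo.sf_libraries
SELECT *
FROM SitefinityOld.dbo.sf_libraries;
```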
Update - 2014-07-07
Turns out that if you don't pull the permissions records at all, bad things happen (such as permissions not properly propagating from parents, etc.). Go figure. I had to add the following SQL to ensure that permissions work correctly:
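A sketch of the shape such a transfer takes; the permission table names below are my reconstruction, so verify them against your own schema before running anything:

```sql
-- Permission records (table names are assumptions; confirm in your DB).
INSERT INTO SitefinityNew.dbo.sf_permissions
SELECT *
FROM SitefinityOld.dbo.sf_permissions;

INSERT INTO SitefinityNew.dbo.sf_permissions_inheritance_map
SELECT *
FROM SitefinityOld.dbo.sf_permissions_inheritance_map;
```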
Did it work?
If all of the queries ran successfully, you should now have all of your static content in your new Sitefinity installation. Go ahead and log into Sitefinity and confirm that it's all there. Keep in mind that the above scripts transferred everything I needed for my particular situation, but permissions were not among them; if you need permissions, you'll need to dive in and figure that piece out yourself. There may also be some further data that I missed, but everything we were using came over just fine.
Let us know in the comments if this did or did not work for you, and if you happen to figure out how to pull the permissions, do us all a favor and post that here as well.