hash_bucket()

Archive for the ‘C#’ Category

Here’s a little gem.

Most people like to keep their unit test assemblies and code separate from the code being tested (maintainability is such a nice thing). This, however, raises the question of how to write tests for methods and classes that are marked internal.

Lots of different approaches exist, such as editing the *.csproj file by hand to include (inject) the code files containing the tests into the project being tested. This, however, is not a very smooth approach.

Thankfully there is a much easier approach using a technique called Friend Assemblies. Basically it’s a way of saying that Assembly A and Assembly B are friends, meaning that B can peek into A’s little secrets marked Internal.
Here’s how to set this up:

1. Add the assembly attribute ‘InternalsVisibleTo’ to the assembly being tested:
This goes into the file ‘AssemblyInfo.cs’, an autogenerated file in the Properties folder of your project. Note that the argument is the friend assembly’s name without the file extension:

[assembly: InternalsVisibleTo("Tests")]

2. Make sure that the assembly containing the tests references the assembly being tested, and that its output name matches the name you typed in the attribute (in this case ‘Tests’, which compiles to ‘Tests.dll’). If the test assembly is strongly named, the attribute must also include its full public key.

One good thing to know is that sometimes your assemblies fail to make friends because the output of your test assembly has not been explicitly named. In such cases you should make sure to compile the test assembly using the ‘/out:’ switch:
/out:Tests.dll
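
With that in place, code in the test assembly can call internal members directly. Here’s a minimal sketch of what that looks like; the class and test names are made up, and I’m assuming NUnit as the test framework:

// In the assembly being tested (e.g. MyLib.dll):
internal static class PathHelper
{
    internal static string Normalize(string path)
    {
        return path.Replace('\\', '/');
    }
}

// In the friend assembly (compiled as Tests.dll):
using NUnit.Framework;

[TestFixture]
public class PathHelperTests
{
    [Test]
    public void Normalize_ReplacesBackslashes()
    {
        // This compiles even though PathHelper is internal,
        // thanks to InternalsVisibleTo.
        Assert.AreEqual("a/b", PathHelper.Normalize(@"a\b"));
    }
}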

As always, MSDN has the full story here:
http://msdn.microsoft.com/en-us/library/0tke9fxk(VS.80).aspx

and here:
http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute.aspx


This article will introduce the Conditional Attribute and give an example of its usage. The technical level is kept low to make the text as accessible as possible. More info can be found in the official C# Programming guide, in particular the section called ‘Conditional’.

When it comes to debugging, the world (of IDEs) is full of tools to make life easier for us developers. The latest version of Visual Studio (2008) features a vast array of built-in debugging aids that help diagnose and keep track of a running application/assembly. This is all well and good, and what I am about to say next may sound a bit old or odd, but I for one would never do away with my trusty old output console. I have talked about this before, in a post that gave a brief walkthrough of how to open a console window from an existing, running application in windowed mode (that is, one sporting one of those fancy graphical interfaces that seem to be so in fashion lately ;) ).

I have written a small DLL that I hook up to most of my projects, through which you can ‘report’ debugging messages to one of five pre-defined ‘channels’. The report can be set to write to:

1) The Visual Studio output console (default)
2) A regular text file (you set the path)
3) A console window
4) Email
5) Wreq (my own custom project management tool)

This little tool uses reflection to keep track of what the application is up to, but you can also tell it to ‘report’ any message you choose at any time.

E.g.:

DebugReporter.Report(
    MessageType.Information,
    "[Application Name]",
    "Creating and setting Auto Save path" +
    "\nPath: " + _path.ToString());

or:

DebugReporter.ReportMethodEnter();

DebugReporter.ReportMethodLeave();
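
DebugReporter is my own little library, so you won’t find it in the framework, but to give you an idea, here is a rough sketch of how a helper like ReportMethodEnter could be implemented using System.Diagnostics.StackTrace to find the caller (the details of my actual implementation differ):

using System.Diagnostics;

public static void ReportMethodEnter()
{
    // Frame 0 is this method itself; frame 1 is the calling method.
    var caller = new StackTrace().GetFrame(1).GetMethod();
    Report(MessageType.Information,
           caller.DeclaringType.Name,
           "Entering: " + caller.Name);
}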

One thing about reporting debug info this way is that when you switch to a Release build you want all of those messages to go away automatically, so that you do not have to comment out all these calls.

One way of achieving this would be to use the #if-endif construct, such as:

#if DEBUG
DebugManagers.DebugReporter.Report(
    DebugManagers.MessageType.Information,
    "[Application Name]",
    "Creating and setting Auto Save path" +
    "\nPath: " + _path.ToString());
#endif

This pre-processor directive tells the compiler to skip the enclosed block of code when DEBUG is not defined. However, in a typical small application I usually end up with a lot (hundreds) of calls like this, and decorating them all with #if-endif just isn’t a very elegant solution. Fortunately the framework designers thought about this and gave us a much better way of dealing with the situation.

One thing you can easily discern from the above code is that all these calls access the same static method on the same static class. Thus, instead of decorating all the method calls, what if we could decorate the method itself?

In .NET there is a concept called attributes that allows you to attach custom behaviour and ‘properties’ to methods. The framework contains a lot of pre-defined attributes that deal with, for example, security, compiler options, discoverability and much more. As a developer you can also define your own attributes, but that is beyond the scope of today’s topic. (See the ‘.NET Framework Developer’s Guide’, in particular the section on ‘Writing Custom Attributes’.)

One such pre-defined attribute is the ‘Conditional’ attribute. It works pretty much like an inverse version of the #if-endif construct: it tells the compiler under what condition calls to a particular method should be compiled.

Note: This is not a pre-processor directive in the form we know from C/C++; in fact .NET has no separate pre-processing pass. The condition is evaluated by the C# compiler itself, which simply leaves out the calls when the symbol is not defined. The C# Programming Guide calls symbols like DEBUG ‘preprocessing identifiers’.
(See the ms-help page ms-help://MS.VSCC.v90/MS.MSDNQTR.v90.en/dv_csref/html/e1c4913b-74d0-421a-8a6d-c14b3f0e68fb.htm)

When using the Conditional attribute you name a compilation symbol that has to be defined for calls to the decorated method to be compiled. Naturally this can help us in the debugging scenario. One such symbol that is defined automatically when building a Debug build in Visual Studio is the DEBUG constant. Thus we can do the following:

[Conditional("DEBUG")]
public static void Report(...)
{

}

Using this attribute is what invokes the magic. Now, if the DEBUG constant is not set, any call to the Report method will simply be excluded from the build, keeping our Release build nice and neat without cluttering the code with #if-endif. To be precise, it is the call sites that are stripped from the emitted MSIL; the method body itself is still compiled, it just never gets invoked :). That is clean!

There are a few restrictions on using the Conditional attribute. In particular, the method you want to decorate must have a return type of void.

Note: This actually makes a lot of sense: since calls to the method are removed, the compiler would have a hard time accounting for all the possible usages of the return value that later code might expect…
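
A small self-contained example (the class and method names are mine) showing both the attribute in action and the void restriction:

using System;
using System.Diagnostics;

static class Log
{
    // Calls to this method are compiled only when DEBUG is defined.
    [Conditional("DEBUG")]
    public static void Write(string message)
    {
        Console.WriteLine(message);
    }

    // This would NOT compile: the Conditional attribute is not valid
    // on a method whose return type is not void.
    // [Conditional("DEBUG")]
    // public static int WriteAndCount(string message) { ... }
}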

I still see a lot of (C#) code that is full of #if-endif (almost exclusively to gate execution paths in and out of debugging), but I would highly recommend you start experimenting with the Conditional attribute instead (where applicable, of course ;).

There are still some legitimate uses for #if-endif, of course, for example when you want to skip a particular code block within a method, or when the method you call has a return type other than void. Thus you should not rely on the Conditional attribute to solve all of your conditional compilation, but keep it around as a powerful and simple technique. Also, it might give you an excuse to learn more about the power of attributes in .NET, and how to write your own custom attributes.

There’s been some buzz around the web lately about ASP.NET MVC, particularly after Mix, where it was a recurring topic amongst the more technically oriented visitors. I’ve kept an eye on it for some time, mostly because I like the MVC (Model-View-Controller) pattern a bunch, but I never took the time to actually download the extensions and play with them. For the past few weeks, though, I’ve been spending a lot of time in planes and airports, and what better way to kill some time than to code?

The MVC pattern may be old news, and I’m sure that most people with at least some application development experience have already used it at some point, though they may not have been aware of it. In software systems architecture we like to think of three distinct layers of abstraction that help us organize and manage code and functionality. Or at least we did until the number 3 started to feel old and was replaced by the much more sexy ‘n’ ;).

Those three layers are Data, Logic and UI, though they may be called by different names depending on your platform/environment/religion. What MVC does is focus on the UI layer, or the presentation layer, and split it further into one or more Controllers and Views.

Controllers are the handlers of, and responders to, UI events such as user interaction. The Controller responds to a user action and then updates the Model to reflect this action. The Model is roughly equal to the Data layer in the 3-tier architecture described above, though there are some important differences. A model is a contextually skewed data layer, in the sense that there may be several models representing the same data from various ‘viewpoints’. The Model is what gives contextual meaning to the data by ‘modelling’ it in accordance with the current domain of the application.

The View presents the Model to the user in accordance with its context, e.g. as a UI that is updated to reflect the state of the Model. The model has no knowledge of the View and the View cannot directly change the state of the Model.

This basic explanation of the MVC pattern should be obvious to most .NET developers who are used to mantras like “Applications = Code + Markup” and keywords like code-behind. The MVC pattern is more or less implemented in any pattern/architecture that says ‘Thou shalt not mix UI and logic’ or ‘Get your XML out of my code!’.

What makes ASP.NET MVC so interesting is that it brings this pattern to the ASP.NET programming model, offering a much cleaner, easier to maintain and more natural way (for app-devs, that is) to program the web. One of my friends recently rewrote his blog using it, and I can’t wait to get an excuse to use it in a live project!
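
For a taste of the programming model, here is a minimal controller sketch; the API shown is roughly what the later previews look like, and the controller, action and repository names are all my own inventions:

using System.Web.Mvc;

public class PostsController : Controller
{
    // Handles e.g. /Posts/Index: pulls data from the model layer
    // and hands it to a view for rendering.
    public ActionResult Index()
    {
        // BlogRepository is a hypothetical model-layer class.
        var posts = BlogRepository.GetRecentPosts();
        return View(posts);
    }
}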

I highly encourage you to download the Visual Studio extensions and try them out. More info can be found here:
http://www.asp.net/mvc/

I just wanted to point out, for those of you interested in working with editable FlowDocuments in the Windows Presentation Foundation (WPF), that most standard editing commands, such as adjusting alignment, setting font variations like underline, italics or bold, and building bulleted lists and other nice (albeit simple) layout tricks, are all available out of the box in WPF.

Say you have a RichTextBox and you want the user to be able to add some text, edit the “style” of this text, and also set various style properties that should be applied to all new text entered into the RTB.

The simplest way to do this is to simply slap a RichTextBox into a grid of a new Window, like so:

<Window x:Class="EditableRTB.Window1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Window1" Height="300" Width="300">
    <Grid>
        <RichTextBox x:Name="_textArea" />
    </Grid>
</Window>

(Remember that the content of a RichTextBox is actually a FlowDocument…)

Now compile and run this, add some text to the box, select it and then press the keyboard shortcut Ctrl+U. You will notice that the selected text gets underlined. You can try the same for other standard shortcuts available in, for example, WordPad, and voila! ;) most of them work right out of the box.

While this is nice, it might also be helpful for some users to have buttons to press that invoke the same commands (sometimes referred to as Usability). The nice thing about implementing such functionality is that it can all be done in XAML. Below is a sample button that invokes the EditingCommands.ToggleUnderline command.

<Button Content="Un" Command="EditingCommands.ToggleUnderline"
        CommandTarget="{Binding ElementName=_textArea}"
        CommandParameter="{x:Null}">
</Button>

If we break this down into its interesting parts you will first notice the Command attribute. This attribute takes any RoutedCommand with an associated CommandBinding. In this case we are using a Command that is available by default in WPF. There are many others associated with other types of controls and scenarios such as MediaCommands and ComponentCommands.

The next attribute is called CommandTarget. This is the control on which the Command should be executed. In this sample XAML the CommandTarget is the RichTextBox called "_textArea".

The last attribute is called CommandParameter. While some default Commands do take parameters, most do not. Thus in most cases you can safely pass a Null value for this parameter.
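
Putting the pieces together, a complete little window might look something like the sketch below; I’ve taken the liberty of adding ToggleBold and ToggleItalic buttons alongside the underline one:

<Window x:Class="EditableRTB.Window1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Window1" Height="300" Width="300">
    <DockPanel>
        <ToolBar DockPanel.Dock="Top">
            <Button Content="B" Command="EditingCommands.ToggleBold"
                    CommandTarget="{Binding ElementName=_textArea}" />
            <Button Content="I" Command="EditingCommands.ToggleItalic"
                    CommandTarget="{Binding ElementName=_textArea}" />
            <Button Content="Un" Command="EditingCommands.ToggleUnderline"
                    CommandTarget="{Binding ElementName=_textArea}" />
        </ToolBar>
        <RichTextBox x:Name="_textArea" />
    </DockPanel>
</Window>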

That’s about it. Granted, the above is slightly simplified, but if you experiment with it for a while you will find how easy it is to manipulate the contents of a FlowDocument in a RichTextBox.

You can easily check the XAML of the FlowDocument being generated in the RTB: simply add another TextBox somewhere in your window, hook up the RTB’s TextChanged event to an event handler, and have that handler do something like the following:

private void _textArea_TextChanged(object sender, TextChangedEventArgs e)
{
    this.myTextBox.Text = XamlWriter.Save(this._textArea.Document);
}

This will allow you to see what the different EditingCommands actually do to your FlowDocument in real time :) Be warned, though, that this is not a very nice solution, only a quick hack :)

Read more about the Editing Commands over at MSDN.

This post is going to explain, in simple terms, how to build an XBAP that needs to access data (in this case XML) over a network. It is also going to talk about how to communicate with a Web Service from within an XBAP. All this while avoiding the WebPermission exception AND the Environment exception. :) And no, we won’t be going full-trust.

As always, some things are simplified for the sake of clarity, if you feel there is something you’d like me to explain in more detail, please drop a comment.

Let’s get to it. First a list of truths (as of 2007/04/12):

  • An XBAP cannot communicate with a WCF Web Service*
  • An XBAP cannot access local resources on the user’s machine**
  • An XBAP cannot access arbitrary online resources***
  • An XBAP is very sensitive about its Site Of Origin
  • * – As of now, however, rumors say that with the release of .NET 3.5, this might be made possible, within certain security limitations.
  • ** – Unless the application is installed and running with modified security settings. By default an XBAP lives in the security sandbox of the Internet Zone.
  • *** – For example, you can point an Image control’s Source at an off-site URL, but you cannot (by default) access an XML stream from an arbitrary URL via, for example, XmlDocument.Load().

So with that out of the way, how do you go about connecting your XBAP to a web service? Well, it turns out that you can actually communicate with non-WCF web services, and in the world of .NET that means ASP.NET (asmx) services.

This fact has led some to suggest the technique of Bridging, but what does that mean? Well, say that you have a set of nice WCF services, or for that matter services that do not reside on the same physical machine/domain as your XBAP. In that case, what you do is build an ASP.NET service that exposes methods to go get the data from these other services and pass it back down to you, thus acting as a bridge between your XBAP and the rest of the world.
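
To make the idea concrete, here’s a rough sketch of what such a bridging asmx service could look like; the method names and the back-end call are hypothetical:

using System.Web.Services;

[WebService(Namespace = "http://yoursiteoforigin/")]
public class BridgeService : WebService
{
    // The XBAP calls this method on its site of origin; the bridge
    // then relays the request to the real (WCF or off-site) service.
    [WebMethod]
    public string GetData(string query)
    {
        // Stand-in for whatever client code talks to the back end.
        return FetchFromBackend(query);
    }

    private string FetchFromBackend(string query)
    {
        // Call the WCF / off-site service here and return its result.
        throw new System.NotImplementedException();
    }
}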

How is this possible if an XBAP cannot access arbitrary online resources? Well, it turns out that an XBAP can actually communicate with one and only one point outside its protective little bubble without security getting in your way, so to speak :), and that is that magical place called the Site of Origin. The following illustration should clarify somewhat:

[Illustration: XBAP data flow]

This needs to be explicitly enabled in the security settings for your project.

(In VS2005, right-click your project in the Solution Explorer and select Properties, click the Security tab, then the Advanced button, and make sure the “Grant the application access to its site of origin.” check-box is checked. Also, while you are there, it helps to fill out the “Debug this application as yada yada” textbox with a domain/machine name that makes sense to you. I will explain why later.)

Now that you have that out of the way, two things become possible. First of all, you can access web services hosted on the same site, and even better, you can now reference resource URIs that reside on this domain.

So say for example that you have an XML file sitting on the same webserver as the one hosting the XBAP, and you’ve granted the application access to its site of origin. This means you can do the following inside your XBAP:

using System.Xml;

XmlDocument D = new XmlDocument();
D.Load("http://yoursiteoforigin/filename.xml");
// Do something with your XML...

Not too bad :). Be warned, however, that the domain/machine name part of the site of origin must be the same as the one you set in the security settings. This can lead to some confusion during debugging when, for example, you might have set the site of origin to ‘http://machinename’ but in your project you refer to ‘http://localhost’. This will not work!

This also means that in some scenarios you can swap your bridging ASP.NET web service for a Windows Service that serves up XML data to the web server for your application to download, thus avoiding the need to open up and secure a web service.

This whole site-of-origin thing is very important for another reason as well. Say that you are developing and testing an ASP.NET web service on the same machine, and that you deploy it on your local IIS (I’m talking about IIS6, by the way). If you then go ahead and reference that service in your XBAP project, you need to make sure that the URL by which you refer to it is exactly the same as your site of origin.

Don’t let the machinename/localhost similarity fool you: the URLs that get added to your Service.wsdl file need to be exactly the same as the one you specified as the Site of Origin.

Ok, I’ll stop here for now. More on the deployment of ASP.NET services and XBAPs in another post.

Teaching

Posted on: April 11, 2007

Today I held a 3-hour introduction to building user experiences (that’s UIs, for those who don’t like buzzwords :) using the Microsoft Expression suite of applications.

Focusing mainly on Expression Blend and how to integrate it into the workflow, I took the class from understanding the relationship between code-behind files and XAML, through data-binding, control templates & styles, animations and events, to finally building a small RSS reader.

I also gave a short introduction to the .NET framework and how its various parts come together to form the basis of the Expression/Visual Studio workflow.

The whole presentation was in Japanese, and I suspect I made some interesting mistakes in both grammar and vocabulary. Sometimes translating technical terms from English to Japanese can be really tricky. A few times I managed to mix up the word for Relationship with the one for Jump-Suit, causing some very confused looks :).

A while back I architected a content management system for a client. The system is built on a combination of web services, Windows services, databases and rich web-browser client technology, and during this process I made heavy use of the built-in support for XML in Microsoft SQL Server 2005, combined with the tools available in the System.Xml namespace.

One of the problems that many developers seem to run into early on, however, is how to get the beautiful XML that the server can produce for you out to files that can be published.

Using the FOR XML syntax in T-SQL, and its various constructs, it is relatively easy to get SQL Server to produce nicely formatted output, including schema definitions, namespaces, user-defined paths and so on.

In this post I would like to just walk through a simple example that shows a very basic procedure for producing an XML representation of a set of tables, and how to get that representation out to disk. The example will use an SQL Server with a stored procedure for producing the XML, and a small C# method that takes the constructed XML and saves it to disk.

First off, lets take a look at the FOR XML syntax. (The official documentation can be found here)

To put it simply, FOR XML can be tacked onto any SELECT statement in order to format the result set not as a set of rows but as an XML structure. In its most simple form it might look something like this:

SELECT * FROM MyDatabaseTableWithPeople FOR XML AUTO

(Don’t worry about the AUTO keyword, we’ll get to that in a moment)

This query might produce a result looking something like this (assuming I have only 2 columns, a Name and a BIT that shows how friendly we are…):

<MyDatabaseTableWithPeople Name="Kalle" Friend="1" />
<MyDatabaseTableWithPeople Name="Kalle" Friend="1" />
<MyDatabaseTableWithPeople Name="Sven" Friend="0" />
<MyDatabaseTableWithPeople Name="Anna" Friend="1" />
<MyDatabaseTableWithPeople Name="Jocke" Friend="1" />

While this is nice, a lot of information is missing and the formatting looks like it might need some work. The reason we see each table row as an XML row is the AUTO keyword that we tacked onto the end of our query. FOR XML can be used in 4 different formatting ‘modes’:

  1. RAW
  2. AUTO
  3. EXPLICIT
  4. PATH

We won’t go into all of these in this post, and truth be told they are more than just ‘formatting modes’, but for this very basic introduction that will suffice.

In AUTO mode, the resulting XML will be nested based on how you construct your SELECT query; thus, in order to achieve nested XML you will need to construct nested statements. This is beyond the scope of this post, but you can read more about it here. AUTO can be nice if all you need is a minimal XML representation to be injected into another database or another XML document, but for simple scenarios where you just want to get the data out, RAW might be a better option. This is the option we will use in this example.

(A side note: AUTO is actually a pretty intelligent little fellow that will, for example, try to organize entries that share an ID in a hierarchical way. This can be very nice for real-world data, and I encourage you to research this further if you find this post interesting.)

Let’s change the AUTO keyword to RAW and rerun our query:

<row Name="Kalle" Friend="1" />
<row Name="Kalle" Friend="1" />

Looks about the same, with one immediate difference: the name of the table has been replaced with the generic element name ‘row’. This is because the RAW keyword simply formats the output of the SQL query as XML without making any guesses as to how we might want to name our XML nodes. Let’s change that ‘row’ into something nicer, a name that might be useful in parsing or data binding, and that makes sense in the context of the data:

Our SQL query now reads:

SELECT * FROM MyDatabaseTableWithPeople FOR XML RAW ('Friend')

and the result is:

<Friend Name="Kalle" Friend="1" />
<Friend Name="Kalle" Friend="1" />

Much nicer, right?

Now, the next thing we probably want to do is change the attributes into child nodes. This can easily be accomplished by adding the keyword ELEMENTS to our query, like so:

SELECT * FROM MyDatabaseTableWithPeople FOR XML RAW ('Person'), ELEMENTS

Giving us:

<Person>
  <Name>Kalle</Name>
  <Friend>1</Friend>
</Person>

Already an improvement!

We might also want to accommodate NULL values in our data, as not doing so would mean that they are simply left out of the XML altogether, which might lead to parsing errors later on. Doing so is simple: just add the keyword XSINIL after the ELEMENTS keyword, like so:

SELECT * FROM MyDatabaseTableWithPeople FOR XML RAW ('Person'), ELEMENTS XSINIL

This construct will also add an xmlns:xsi namespace declaration to each item in your XML. It might seem a bit redundant, but it makes for safer output. What actually goes on under the hood is a different discussion, and this example is meant to be simple. Google it if you want to know more.
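
For example, a Person whose Friend column is NULL would then come out flagged with xsi:nil, roughly like this:

<Person xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Name>Kalle</Name>
  <Friend xsi:nil="true" />
</Person>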

Now, before we are finished, let’s also add a root XML node to make our document complete. In a real-world scenario you probably want to add more data at other levels in the document, but for us this will suffice:

SELECT * FROM MyDatabaseTableWithPeople FOR XML RAW ('Person'), ROOT ('PeopleList'), ELEMENTS XSINIL

Adding the ROOT keyword also brings another advantage: the xmlns:xsi declaration is now only emitted on the root element, which makes for cleaner XML.
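
With the ROOT keyword in place, the final output looks roughly like this:

<PeopleList xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Person>
    <Name>Kalle</Name>
    <Friend>1</Friend>
  </Person>
  <Person>
    <Name>Sven</Name>
    <Friend>0</Friend>
  </Person>
</PeopleList>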

Ok, so now we are happy with our rudimentary little XML document. Let’s save our query in a stored procedure and write some client code to get it out of the database.
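
A sketch of what that stored procedure might look like (the procedure name is my own invention):

CREATE PROCEDURE GetPeopleAsXml
AS
BEGIN
    SELECT * FROM MyDatabaseTableWithPeople
    FOR XML RAW ('Person'), ROOT ('PeopleList'), ELEMENTS XSINIL
END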

The following code can be placed in any C# context, provided that you import the System.Xml, System.Data and System.Data.SqlClient namespaces.

// Assumes someConnectionString, nameOfYourStoredProcedure and path
// are defined elsewhere in the surrounding context.
SqlConnection conn =
    new SqlConnection(someConnectionString);
conn.Open();
string cmdText = nameOfYourStoredProcedure;

SqlCommand xmlExportCom = new SqlCommand();
xmlExportCom.CommandType = CommandType.StoredProcedure;
xmlExportCom.CommandText = cmdText;
xmlExportCom.Connection = conn;

XmlReader x = null;
try
{
    XmlDocument d = new XmlDocument();
    x = xmlExportCom.ExecuteXmlReader();
    d.Load(x);

    // Optional settings, e.g. to get indented output.
    XmlWriterSettings settings = new XmlWriterSettings();
    settings.Indent = true;

    using (XmlWriter w = XmlWriter.Create(path, settings))
    {
        d.Save(w);
    }
}
catch (Exception)
{
    // do something (log, rethrow, ...)
}
finally
{
    if (x != null) x.Close();
    xmlExportCom.Dispose();
    conn.Close();
    conn.Dispose();
}

(The code is a little bit verbose for clarity…)

So what do we have here… Let’s break it down into steps:

  1. Open a connection to the database using some connection string
  2. Store the name of your StoredProcedure in a string for easy reference
  3. Build an SqlCommand, setting the CommandType to StoredProcedure, the CommandText to the string you created in step 2, and the Connection to the connection created in step 1
  4. Instantiate an XmlDocument to hold the result of the SqlCommand
  5. Obtain an XmlReader over the result by calling the SqlCommand’s ExecuteXmlReader() method
  6. Load the XmlReader into the XmlDocument
  7. Instantiate an XmlWriter with the path (string) to where you want to save your document and (optionally) an XmlWriterSettings object that might specify, for example, indentation
  8. Save the XmlDocument using the XmlWriter
  9. Close the XmlWriter (the using block takes care of this)
  10. Close the XmlReader
  11. Dispose the SqlCommand
  12. Close and Dispose the SqlConnection.

And voila! There’s your pretty XML file, steaming fresh from the SQL database onto your disk as nicely formatted, human-readable text :)

That’s it. Please consult the links below for more info:

Constructing XML Using FOR XML

Using XML in SQL Server

System.Xml Namespace



This blog has no clear focus. It has a focus though, it's just not very clear at the moment...

Dev Env.

Visual Studio 2008 Prof / NUnit / Gallio / csUnit / STools (ExactMagic) / doxygen / dxCore / TypeMock / TestDriven.net / SequenceViz / CLRProfiler / Snoop / Reflector / Mole / FxCop / Subversion / TortoiseSVN / SlikSVN / CruiseControl.net / msbuild / nant
