Migrating a website from GoDaddy to AWS for free

Okay, so this post has been sitting on the back burner for a while because I have been busy with a new job, but I just wanted to write up a little something about migrating a website to AWS.

In my situation I had been using GoDaddy as a host for some time; however, I have never been particularly happy with either their performance or value, and I had wanted to find another host for a while.

As it happens, AWS has a free tier that makes a low-traffic website like this one practically free to host, with excellent reliability and performance. So it was a bit of a no-brainer.

I should note that as this website is generated from Markdown using Sandra.Snow, it effectively has no server-side rendering, and this guide would be suitable for similar static websites. It is definitely possible to host server-side rendered websites and APIs on AWS, but that may cost a little more and is beyond the scope of this article.

Migration process

This guide largely follows Vicky Lai's excellent guide, which itself uses an AWS guide, but I'll help you out with some of the GoDaddy particulars.

Create a website

  1. Sign up at https://aws.amazon.com/free/
  2. Create AWS Bucket(s) to store your website contents
    1. Use the guide here: https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
    2. Create buckets for your domain and each subdomain (even if they are only for URL redirection) with default settings in your region, e.g.
      1. badmonkeh.com
      2. www.badmonkeh.com
  3. Use a tool such as CloudBerry to upload your website content to your main S3 bucket (the one without a subdomain), e.g. badmonkeh.com
  4. Assign the policy to make the bucket contents publicly accessible
  5. Enable static website hosting in bucket properties for the main bucket
    1. Properties -> Static Website Hosting
    2. Use this bucket to host a website
    3. Configure default page
    4. Note the endpoint url e.g. http://badmonkeh.com.s3-website-ap-southeast-2.amazonaws.com
  6. Redirect requests for your 'www' subdomain bucket to your main S3 bucket
    1. Properties -> Static Website Hosting
    2. Redirect requests to main bucket name, e.g. badmonkeh.com
  7. (Optional) configure website / bucket logging - (I skipped this but it is a good idea)
    1. https://docs.aws.amazon.com/AmazonS3/latest/dev/LoggingWebsiteTraffic.html
  • You need an S3 bucket for every subdomain (including "www") as well as one for your domain
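The bucket policy referred to in step 4 is the standard public-read policy from the AWS walkthrough linked above; here is a minimal sketch for the example bucket (substitute your own bucket name in the Resource ARN):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::badmonkeh.com/*"
    }
  ]
}
```

Paste this under Permissions -> Bucket Policy; it allows anonymous read access to every object in the bucket, which is what static website hosting requires.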

DNS Routing

When it comes to DNS routing you have two options: you can either continue to use your domain registrar's DNS servers (e.g. GoDaddy's), or you can set up Route 53.

Using Route 53 is ideal if you plan to migrate your domain name over; however, this will incur a small fee (USD 0.50/month at the time of writing), so bear that in mind.

Using Route 53

  1. Export DNS records from your existing host, e.g. GoDaddy
    1. GoDaddy > DNS Management
    2. Advanced Features > Export Zones File (Windows)
  2. Set up DNS routing using Route 53 Hosted Zone
    1. AWS > Route 53 > Create Hosted Zone
    2. Create Hosted Zone for your domain name, e.g. badmonkeh.com
    3. Add alias for your domain and "www" subdomain, e.g. badmonkeh.com and www.badmonkeh.com
      1. Create Record Set
        1. Alias: Yes
        2. Alias Target: select s3 bucket name, e.g. badmonkeh.com (s3-website-ap-southeast-2.amazonaws.com)
  3. Migrate DNS records to Route 53
    1. Import Zone File (copy and paste)
      1. NS - Instead of transferring these, replace their values with the name server values that are provided by Amazon Route 53.
      2. SOA - Amazon Route 53 provides this record in the hosted zone with a default value.
  4. Migrate DNS records in GoDaddy
    1. Delete any subdomain records; they will be recreated later
    2. Disable domain forwarding
    3. Disable DNS record management
  • Make sure you migrate all records, such as MX records to allow custom email providers
  • You may have to delete some entries, such as "@" entries to make the file importable
  • You can always add the records line by line if there are any issues
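For reference, a few illustrative lines from an exported zone file (the mail host and values here are made up, so substitute your own); this is the sort of content you paste into Import Zone File, minus the NS and SOA records as noted above:

```
badmonkeh.com.      3600  IN  MX    10 mail.example.com.
badmonkeh.com.      3600  IN  TXT   "v=spf1 include:example.com ~all"
ftp.badmonkeh.com.  3600  IN  CNAME badmonkeh.com.
```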

Using GoDaddy

I found that migrating my domain away from GoDaddy was relatively easy and definitely cheaper, and I will write this up in a future article.

However, if you plan to keep your domain with GoDaddy you can continue to use their DNS servers; you will just need to update your DNS records.

  1. Update GoDaddy DNS record
    1. Find website's static subdomain
      1. Log-in to AWS
      2. Go to S3
      3. Select website bucket e.g. www.badmonkeh.com
      4. Go to Properties
      5. Copy Endpoint, e.g. http://www.badmonkeh.com.s3-website-ap-southeast-2.amazonaws.com
    2. Update GoDaddy DNS record
      1. Log in to GoDaddy
      2. Go to DNS Management
      3. Change CNAME with host "www"
        1. Change "Points To" from "@" to the endpoint value (without the protocol, just the hostname)
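For the example domain, the finished record should end up looking something like this (note the endpoint is the one for the "www" bucket; the CNAME must point at a bucket whose name matches the hostname, which is why a bucket per subdomain is needed):

```
Type: CNAME   Host: www   Points To: www.badmonkeh.com.s3-website-ap-southeast-2.amazonaws.com
```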


Overall the process is relatively straightforward and offers many potential benefits, but I do encourage you to do your own research to determine whether this is suitable for your requirements.

Further enhancements would be to migrate your domain names and use a CDN such as AWS's CloudFront.

Categories Software Development | Tagged aws, web hosting

Capturing a screenshot of an HTML element

Ever come across a website you want to screenshot, but the content spans two pages? Or perhaps you don't want to go through the hassle of editing and cropping the screenshot to isolate the small piece of content on the page that is useful. Well, look no further; here is a simple tip on how to capture the content of a specific element only.


In Firefox:

  1. Right-click the element on the page and choose Inspect Element (Q)
  2. Hover over the tree of elements and ensure that all of the content you want is highlighted
  3. Right-click the element and choose Screenshot Node
  4. Check your downloads folder for the saved image.



In Chrome:

  1. Right-click the element on the page and choose Inspect
  2. Hover over the tree of elements and ensure that all of the content you want is highlighted
  3. Open the command menu with Ctrl+Shift+P
  4. Type in Node and select the Capture node screenshot command
  5. Check your downloads folder for the saved image.



This has been useful to me; I hope it is to you too. If you have any other tips, hit me up in the comments.

Categories Software Development | Tagged firefox, web design

Converting MPEG to H.265

I recently took a video of a friend's performance with my DSLR and only afterwards recalled just how massive MPEG files are. The 1080p clip was only 7 minutes long but was a whopping 1.1GB! A good thing my camera does not support 4K ;).

Being time poor, I briefly fantasised about using Windows Movie Maker, which I recall as having a degree of utility, albeit with a lot of restrictions.

Unfortunately the Windows 10 replacement only supports 720p at best in the free version, which is pretty dismal considering 4K is pretty standard these days and most codecs and encoding tools are open source anyway.

It was then that I remembered ffmpeg, a really powerful open source audio/video encoder and decoder that I had used in the past to encode MPEG-1 to either DivX or Xvid.

Sure enough, ffmpeg is still going strong; it has naturally kept pace with the times and allows re-encoding into a number of modern formats.

Encoding to H.265

So now to the fun part: the encoding. I chose H.265 as it should produce about half the file size of H.264 and is its likely successor.


  1. To start download ffmpeg or you can build it from source.
  2. Re-encode it using the following command

    ffmpeg -i input.mov -c:v libx265 -preset veryslow -crf 18 -c:a aac -strict -2 -b:a 128k output.mp4

  3. Profit

The above command re-encodes your MPEG into an H.265 movie using AAC for audio at a very high quality (practically lossless). I should note that whilst this will produce a very high fidelity video, it may take a long time, e.g. on my 16-thread machine clocked at 4.5GHz, re-encoding a 7-minute video took over two hours!

However it did manage to reduce the file size from 1.1GB to 202MB without any visual difference. To further reduce the size you could increase the CRF value, e.g. setting this to 28 produced a 54MB file with very minor quality loss.
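For example, the 54MB encode mentioned above only changes the CRF value from the earlier command; the filenames here are placeholders:

```shell
ffmpeg -i input.mov -c:v libx265 -preset veryslow -crf 28 -c:a aac -strict -2 -b:a 128k output.mp4
```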

ffmpeg command arguments

So let's break down the arguments provided on the command line above a bit to better understand their impact.

-i <filename> - This simply specifies the input filename

-c:v libx265 - -c:v is an alias for -codec:v or -vcodec and specifies the video encoder to use, in this case libx265. You could specify libx264 for H.264 encoding.

-preset veryslow - This specifies the x264/x265 encoding speed; the slower it encodes, the smaller the file size. Set this to one of veryslow, slower, slow, medium (default), fast, faster, veryfast, superfast or ultrafast, depending on your patience ;).

I should note that in the example above, veryslow produced a 202MB file after 135 mins but veryfast produced a 288MB file after only 4 mins.

-crf 18 - This is the Constant Rate Factor, which targets a constant quality (with a variable bitrate) rather than a fixed file size; I chose it because I care more about quality than file size. It is a scale from 0-51, where 17/18 is practically lossless and the default is 23. Experiment with higher numbers first to see if they give you an acceptable quality, as they will also result in a smaller file and less encoding time.

-c:a aac - -c:a is an alias for -codec:a or -acodec and specifies the audio encoder to use. I should note that the higher-quality AAC encoder is not open source and is not included with the static build of ffmpeg, so you need to either build it from source or pass -strict -2 to use the internal encoder.

-b:a 128k - This specifies the audio bitrate.

If you want to know all of the options, please be sure to read the ffmpeg documentation, or the specific encoding documentation for x265 or AAC.


I hope this is as useful a reference for you as it is for me ;). Let me know in the comments if you have any tips or feedback.

Categories Photography | Tagged h265, mpeg, software, tools, video

Wix and the TFS 2013 build server, let's be friends

We recently commissioned a new build server as part of an upgrade and moved to WiX 3.9R2 as part of the process. After setting up a new build profile, my installer build would fail with the same message for each ICE action:

light.exe (0): Error executing ICE action 'ICE01'. The most common cause of this kind of ICE failure is an incorrectly registered scripting engine. See http://wixtoolset.org/documentation/error217/ for details and how to solve this problem. The following string format was not expected by the external UI message logger: "The Windows Installer Service could not be accessed. This can occur if you are running Windows in safe mode, or if the Windows Installer is not correctly installed. Contact your support personnel for assistance.".

Which produces a similar message in the server's event logs:

Product: ProductNameRemoved -- Error 1719. The Windows Installer Service could not be accessed. This can occur if you are running Windows in safe mode, or if the Windows Installer is not correctly installed. Contact your support personnel for assistance.

At this point much googling leads to a similar set of instructions: re-register msiexec, check registry permissions, etc. I tried all of these to no avail, until I finally came across an insightful post from John Robbins that has the solution:

"In order for the .WiXProj files to compile, the account running the build controller/agent must be in the local machine’s administrator group."

Therefore the solution is simply:

  1. Add the service account to the Local Administrators user group
  2. Restart the server
  3. ...
  4. Profit!
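Step 1 can also be done from an elevated command prompt on the build server; the domain and account name below are placeholders for your actual build service account:

```shell
net localgroup Administrators "YOURDOMAIN\svc_tfsbuild" /add
```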

Note: If you do not already have a service account then I strongly advise you to create one; it is a dangerous security practice to add a low-privileged built-in account like NT SERVICE as an administrator.

Categories Software Development | Tagged dev, error1719, error217, tfs, tfs2013, tfsbuild, wix, wix-3.9

DesignData makes your life easier


Okay, so I have finally jumped onto the bandwagon and decided to apply my skills to Windows Phone development - after all, I've had a Windows Phone since launch and I am a .NET developer. So really, it's about time.

I won't go into the basics of how to start coding a Windows Phone application; the Windows Dev Centre really has that covered with a good variety of simple articles. What I will say, however, is that with XAML/Silverlight and a C# back-end, coding for Windows Phone is much like a marriage between ASP.NET and WinForms, and a good one at that.

And without further ado...

On with the show

Soon enough after jumping into creating your first application, hopefully you are going to discover the MVVM design pattern and start binding your controls to properties in a ViewModel - or, regardless, binding your controls to an object of some kind.

Here's where there is a bit of a gotcha moment: as you clear the dummy values and bind the controls, you will for the most part discover that your interface is rather, well, blank. All of those lists and textboxes are not only empty but in most cases not visible at all.

If you are like me, it is at this moment you go looking in the IDE for a suitable property to add test data to, as seeing your UI in the designer can sometimes be useful. And then there is the next gotcha: there isn't one.

Microsoft in their wisdom, most likely to confuse developers, have not made it so easy for us. But have no fear there is a way, or truth be told a few, to add some test data to your design time UI.

Option #1: FallbackValue

By far the easiest way to achieve this is to specify a FallbackValue in the actual binding on the control. This will set a value on the control if the bound value is not available.

<HyperlinkButton Content="{Binding FooterText, FallbackValue='Foo Bar'}" />

However, the drawbacks of this method are that this value will also be shown at runtime if you have no bound value, and it won't do much for any control that expects a collection to be bound to it.

Option #2: Create a DesignData file

The next best option is to create a specific DesignData file that represents your ViewModel and contains data to be shown only at design time. This file will be written in XAML and allows you to set values for any of the public properties in your ViewModel, including any dependent types.

Now of course you could also use this method for any object you intend to set as the DataContext of a page or control, the method is much the same.

What you need to do in short is:

  1. Create a text file and name it with a .xaml extension.
  2. Add XAML representing your ViewModel (see below)
  3. Set the BuildAction to DesignData, and clear the CustomTool property
  4. In your control/page, ensure that the same namespace is also defined in your control/page's tag
  5. Add a property to your PhoneApplicationPage (or any Control) referencing your new xaml page

    d:DataContext="{d:DesignData Source=/DesignData/MyViewModel.xaml}"

  6. Then bind the control as you would normally.

Creating the XAML DesignData file

<local:MyViewModel xmlns:local="clr-namespace:Badmonkeh.ViewModels">
    <local:MyViewModel.Items>
        <local:MyViewModelCollection>
            <local:MyViewModelItem Name="One two three four" Colour="Blue" PercentComplete="75" ShowProgress="True"/>
        </local:MyViewModelCollection>
    </local:MyViewModel.Items>
</local:MyViewModel>

In this particular example I am representing a class named MyViewModel which has a MyViewModelCollection in a property named Items. This in turn contains a MyViewModelItem in its default property.

This is pretty standard XAML: each property of a simple type is a simple key-value pair, and each complex type must be described with a new XML element. Where a property for a complex type is not the default content property, a separate element must be created and referenced using the name of the parent class, e.g. <local:MyViewModel.Items>.

The important things to note here:

  • The declaration of the namespace corresponds to the class's full name
  • Generic lists are not supported so you will have to extend your own collection
  • Your ViewModel must have a public parameter-less constructor
  • Only properties with backing fields can be bound (see below).

A caveat and a solution

There is a caveat to this method, inherent in its design. What actually happens when you use a DesignData file is that the designer will create an instance of your class at design time, where certain objects may not be available, such as your data model or IsolatedStorage.

The easiest way around this is of course to simply use properties with backing fields, but when that is not convenient you can check whether you are in design mode by calling DesignerProperties.IsInDesignTool and use a designer-only backing field at design time.

public string Comment
{
    get
    {
        if (DesignerProperties.IsInDesignTool)
            return _designerComment;

        if (_blog == null) return null;
        return _blog.Comment;
    }
    set
    {
        if (DesignerProperties.IsInDesignTool)
            _designerComment = value;
        else
            _blog.Comment = value;
    }
}

In the example you can see that I check if we're in the designer (or Blend, for that matter) and use a backing field to avoid throwing an exception, which would disable the design data, or returning null, which would just disable the property.

You might of course wonder why go to the trouble of using a backing field when I could just return a test value in the property? Well, you're right, you can. However, this will override any value you specify in the design data XAML file, and if this class is shared with other pages, for instance if it is used by a User Control, then you won't be able to create separate design data files for different pages.


I have shown you a couple of ways to show some design-time data in your pages and help simplify the design process somewhat. The particular beauty of the second approach is that you can design your Control/Page first and really just consider the fields that you will need to bind to. At that time you just need to create a POCO class, and only after the design is finalized implement your business logic.

Regardless I hope some of this was useful to you and if not, or if there is anything else I missed, hit me up in the comments below.

Categories Software Development | Tagged .net, c#, design, DesignData, dev, silverlight, wp7, wp8, xaml

Active Records with Partial Trust, Revisited

Getting Active Record to run under medium trust is a problem I have solved once before, and after upgrading to v2.1.2 it has come back again. Essentially the problem is twofold:

  1. Ensuring all referenced Castle assemblies have the APTCA attribute applied
  2. Statically generating proxies with the APTCA attribute applied (is this still necessary?)

To begin, I need to identify which assemblies are the problem, so I created a small test project and noted the assemblies that were directly or indirectly used, using Reflector. At the same time I could go through the list and note which ones did not have the attribute applied. This led to the following list.

Referenced Assemblies Without APTCA:

  • Castle.Core (
  • Castle.Components.Validator (
  • Castle.ActiveRecord (2.1.2)

Now, in order to call these assemblies under medium trust I will have to rebuild them, applying the correct attribute. This could be quite a problem, but fortunately Active Record is an open source project. After a short search I discovered that the source is now stored at GitHub: http://github.com/castleproject

Then it was a simple case of downloading the source, labeled at the appropriate versions listed above.

This however seemed to be more trouble than it was worth, given that the code was hard to build when it was first split into individual projects (read: it required NAnt and I couldn't be bothered).

Building the projects from source was simple enough once you copy in the buildscripts\ folder from the root project 'castle'. You might have to manually reference the key to get them strongly named, but this is a simple matter.

From here all you have to do is apply the attribute as below to the CommonAssemblyInfo file, and voila you are done.

[assembly: System.Security.AllowPartiallyTrustedCallers]

If your website is just a personal site and you do not have strongly named assemblies, then this is all you need to do.

Step 2 is the harder part. Assuming your assemblies are strongly signed, ensure first that they also have the APTCA.

Categories Software Development | Tagged active records, castle, partial trust

Upgrading Castle Active Records

From (custom build) to 2.1.2

Downloaded binaries from http://sourceforge.net/projects/castleproject/files/ActiveRecord/2.1/AR%202.1.2.zip/download

I downloaded the binaries and replaced my reference libraries. To my surprise, there were no build errors.

Intrigued, I wandered over to the documentation to see what new features and syntax I could use. Some notable things I found:

You can now initialise active records by just supplying an assembly, and letting it reflect the types for you.

ActiveRecordsStarter.Initialize(Assembly assembly, IConfigurationSource source)

Warning: Try to create an Assembly exclusively for ActiveRecord types if you can. This overload will inspect all public types. If there are thousands of types, this can take a considerable amount of time.

Some validation attributes (Validators):


Generics support:

Extend the generic base class ActiveRecordBase<T>, or for inherent validation support extend ActiveRecordValidationBase<T>.

Note however that if a record is invalid and you try to save it, it will throw an error. Be sure to check its IsValid method first.

What doesn't work is my proxy class generation.

So I decided to upgrade to the NHibernate Proxy Generators (http://sourceforge.net/projects/nhcontrib/files/NHibernate.ProxyGenerators/1.0.0%20Alpha%20564/)

Here's a how to: http://nhforge.org/wikis/howtonh/pre-generate-lazy-loading-proxies.aspx

Unfortunately I had the same issue as last time: it just does not work with NHibernate 2.2.

So I decided to have a play around with the source code a little.

  • Converted the project to VS2010.
  • Deleted all in libs except for IlMerge.exe
  • Pasted new AR libs in there
  • Set all projects target to .NET 3.5

  • CastleStaticProxyFactory
    • Commented out using NHibernate.Proxy.Poco.Castle; and replaced it with NHibernate.ByteCode.Castle
    • Changed CastleLazyInitializer to LazyInitializer
  • CastleProxyFactoryFactory : IProxyFactoryFactory
    • Added a stub method for ProxyValidator
  • CastleStaticProxyFactoryFactory
    • Added a stub method for ProxyValidator
  • Had to add a reference to NHibernate.ProxyGenerators.ActiveRecord to the console project
  • Commented out the instantiation of SessionFactory - is this the right thing to do?
  • CastleProxyGenerator
    • Commented out using global::Castle.DynamicProxy; (we want to use the new ProxyGenerator v2) and added using Castle.DynamicProxy;
    • Changed ModuleScope to save a weakly named assembly only, to avoid an error: moduleScope.SaveAssembly(false); (for some reason a strongly named assembly doesn't contain the factory classes?)
    • Added compiler references: references.Add(Assembly.Load("NHibernate.ByteCode.Castle")); and also Castle.ActiveRecord
    • Altered the compilation to add the input assemblies as references when creating the StaticProxyFactory

Removed Post Build Event:

"$(SolutionDir)libs\NHibernateProxyGenerator\NPG.exe" /in:"$(TargetDir)BadMonkeh.DAO.dll" /out:"$(SolutionDir)BadMonkeh.Website\bin\BadMonkeh.DAO.Proxies.dll"
copy "$(TargetPath)" "$(SolutionDir)BadMonkeh.Website\bin\$(TargetFileName)"

Categories Software Development | Tagged active records, castle

Catching unhandled exceptions at the lowest level for all threads

When designing a responsible application it always pays to think about exception handling. Of course, it is good practice not to use "catch-all" exception blocks, as avoiding them makes it easier to find bugs within the system. However, when a user invariably finds one, you ideally don't want your application to just crash; you may wish to show them a pretty dialog, or at the very least log the exception details to help you fix it.

If you only ever use a single thread in your application then you can get away with the below:

try
{
    Application.Run(new MyApp());
}
catch (Exception ex)
{
    //handle error here
}

This happens to work well; however, what if you are using separate threads to do processing so that you don't tie up the UI? Exceptions on those threads will never be caught by this handler, and hence will crash your app.

The solution however is not that difficult. There are likely a few ways you could go about this but this one seems the easiest to me.

static void Main()
{
    // this event handler works for all threads BUT the main thread
    AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(CurrentDomain_UnhandledException);
    // this event handler ONLY works for the main thread
    Application.ThreadException += new System.Threading.ThreadExceptionEventHandler(Application_ThreadException);
    // this ensures that the handler will ALWAYS get the event and so can't be reconfigured in the app.config
    // this may or may not be applicable to you
    Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);

    Application.Run(new MyApp());
}

private static void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
    HandleUnhandledException(((Exception) e.ExceptionObject).Message);
}

private static void Application_ThreadException(object sender, ThreadExceptionEventArgs e)
{
    HandleUnhandledException(e.Exception.Message);
}

private static void HandleUnhandledException(string exceptionMessage)
{
    //log exception and show dialog here
}

Categories Software Development | Tagged c#, exception

Line highlighting in a RichTextBox

I often find myself writing little tools to perform specific purposes, as is the wont of a programmer.

However, mostly out of laziness, I tend to avoid writing command line utils and instead prefer little GUI apps. They allow me to add features that I can't have with a CLI, such as persisting settings and file selection dialogs.

Anyway, one of the easiest ways to adapt a console app is to create a Windows Forms app featuring a text box that contains the output of whatever you wish to achieve. This itself is quite easy to do; however, it is just as one-dimensional as a console app. Why? Because you are still limited to displaying text in one colour. Should you wish to clearly identify parts of your output, you are limited to formatting the position or the characters of the text, which can be quite limiting. For example, what if you wanted the errors in a process to be more visible than informational messages?

The solution is to use a RichTextBox. It allows you all of the rich text formatting of a rudimentary RTF editor, such as WordPad.

Now, you could change your input to have the appropriate RTF tags, but this is quite a heavy-handed approach and requires you to combine the format with the data. All I really want is to be able to change the colour of text as I insert it, quickly and easily. And here is a method to do just that:

    /// <summary>
    /// Appends a log message and highlights it with the specified text colour
    /// </summary>
    /// <param name="message">The message to append</param>
    /// <param name="colour">The colour to display the text in</param>
    private void AppendAndHighlightLogMessage(string message, Color colour)
    {
        int startIndex = txtLog.Text.Length;
        txtLog.AppendText(message + Environment.NewLine);
        txtLog.Select(startIndex, message.Length);
        txtLog.SelectionColor = colour;
    }

In this example we have a RichTextBox named txtLog, and we are appending a string to it and highlighting only that with the colour specified.

It is important to note that you must call the AppendText(..) method. If you just set the text using txtLog.Text += message; this will not work!

One of the nice things about this approach is that even though you are only 'highlighting' the text, the colour will remain.

Categories Software Development | Tagged RichTextBox

Creating a custom databound control with postbacks

/// <summary>
/// Creating a custom databound control with postbacks
/// --------------------------------------------------
/// This class provides you the basis for a databound custom control
/// and shows you where to place logic to ensure the data is saved in
/// the ViewState and the dynamically created child controls work
/// as intended
/// </summary>
class MyControl : Control
{
    private object datasource;
    //CF: A list we are using for example
    private RadioButtonList _List;

    //CF: Define this as whatever you want if this control is databound
    //    This object should not be persisted as you are relying on ViewState
    public object DataSource
    {
        get { return datasource; }
        set { datasource = value; }
    }

    //CF: This variable holds the number of items that were databound
    //    It is used on post back to recreate the items that are stored in the ViewState
    private int NumberOfItems
    {
        get
        {
            if (ViewState["NumberOfItems"] == null)
                return 0;
            return (int) ViewState["NumberOfItems"];
        }
        set { ViewState["NumberOfItems"] = value; }
    }

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);

        //CF: The controls must be created here otherwise the postback data will not be loaded!!!
        CreateChildControls();

        //CF: If your control requires ControlState uncomment this line
        //Page.RegisterRequiresControlState(this);
    }

    protected override void CreateChildControls()
    {
        //CF: Only create the list and its items here, it will be populated on databinding, or via viewstate
        _List = new RadioButtonList();
        for (int i = 0; i < NumberOfItems; i++)
        {
            ListItem item = new ListItem();
            _List.Items.Add(item);
        }
        Controls.Add(_List);

        //CF: Setting additional properties of the controls should be done *after*
        //    they are added to Controls collection. This will "dirty" the ViewState
        //    and ensure their value is persisted
        //    Ensure you give your dynamic controls an ID, to associate them to their
        //    ViewState data
        _List.ID = "list";
    }

    protected override void OnDataBinding(EventArgs e)
    {
        base.OnDataBinding(e);

        //CF: You need to set the number of items here
        NumberOfItems = ((IList) DataSource).Count;

        //CF: As the list was already created in OnInit() with no items, it must be recreated here
        Controls.Clear();
        CreateChildControls();

        //CF: Now we loop through each item and databind its actual value
        for (int i = 0; i < NumberOfItems; i++)
        {
            ListItem item = _List.Items[i];
            //TODO: Obtain the value from the DataSource and set the Controls properties
        }
    }

    //CF: If necessary override Render() to specify custom control rendering, otherwise leave it as it is
    //protected override void Render(HtmlTextWriter writer)
    //{
    //    //TODO: Do stuff
    //}

    //CF: If you are using control state uncomment the following 2 methods and customise them
    //    as necessary. If you need to save multiple objects it is probably best to wrap them up
    //    in a struct/class.
    //    Remember all objects must be Serializable.
    protected override void LoadControlState(object savedState)
    {
        object[] state = (object[]) savedState;
        base.LoadControlState(state[0]);
        MyObject = state[1];
    }

    protected override object SaveControlState()
    {
        object[] state = new object[2];
        state[0] = base.SaveControlState();
        state[1] = MyObject;
        return state;
    }
}

Categories Software Development | Tagged asp.net, control, postback