I don't want to get off on a rant here, but....

Technology, Programming, Complaints, etc.

Go for App Engine json and datastore tags

If you're working with Go as a backend on App Engine, you're going to want some entity properties to have a custom JSON field name while not being stored to the datastore. The silent failure case is separating the two tags with a comma, like `json:"attendees", datastore:"-"`. Instead there should be no comma, just a space: `json:"attendees" datastore:"-"`.
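
A minimal sketch, using a hypothetical Event type (the field and type names are mine, not from the original post):

```go
// Hypothetical entity: Attendees gets a custom JSON name and is
// skipped by the datastore.
type Event struct {
	Title     string   `json:"title"`
	Attendees []string `json:"attendees" datastore:"-"` // space, not comma, between tags
}
```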

Ember.js confusing errors and "magic" functionality

If in Ember.js you're getting errors like "TypeError: Cannot call method 'unchain' of undefined" when you try to navigate from one route to another, the problem may be this feature: model properties that start with a capital letter are looked up on Ember.lookup, which is basically the window object, so they act like global properties.


The confusing part is that the view builds fine, the data is there, and behavior is as expected; it only blows up when you try to leave the route. This may only happen when the property is used in an if block helper.

With Go as my JSON source, all my properties come capitalized from the server, yet only one route had this issue.

So the solution is to make the property lowercase. This can just be a computed property that aliases the uppercase name to a lowercase one; since the capital-letter lookup is a Handlebars feature, the alias doesn't impact your JS code using Ember.
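
A minimal sketch of the alias, assuming a hypothetical controller and an `Attendees` property coming back capitalized from the Go backend:

```javascript
App.EventController = Ember.ObjectController.extend({
  // Lowercase alias so templates never trigger the capital-letter global lookup.
  attendees: Ember.computed.alias('Attendees')
});
```

Templates can then say `{{#if attendees}}` instead of `{{#if Attendees}}`.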

Credit to this Stack Overflow answer, which pointed me to both the change and the solution.

MS tooling loves to break stuff in new, random, and unexpected ways

If you are getting an error that starts with "$(ReplacableToken", then some string identifier from your code, then finishes with "-Web.config Connection String_0)", it means the deploy task is maiming your Web.config while trying to customize it for the environment. This pointless behavior is on by default, which makes no sense for MS to do when no replacements have been defined for the tokens it's creating. To disable it, do the following:

Don't fall for the trick of creating more files just to make MS leave your existing files alone; just add the flag to the existing csproj file.

Prevent tokenizing connection strings

If you want to prevent your web.config connection strings from being tokenized it's pretty easy. All we need to do is add a property to the build/package/publish process. We can do that in 2 ways: edit the project file itself, or create a file with the name {ProjectName}.wpp.targets, where {ProjectName} is the name of your project. The second approach is easier so I use that. In my case it would be MvcApplication1.wpp.targets. The contents of the file are shown below...

http://blogs.msdn.com/b/webdev/archive/2010/11/10/asp-net-web-application-publish-package-tokenizing-parameters.aspx
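
The quoted post's file boils down to one MSBuild property; a minimal sketch of the PropertyGroup, which works just as well dropped into the existing csproj (the property name comes from the linked post):

```xml
<PropertyGroup>
  <!-- Stop the package/publish step from tokenizing web.config connection strings -->
  <AutoParameterizationWebConfigConnectionStrings>false</AutoParameterizationWebConfigConnectionStrings>
</PropertyGroup>
```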

mod_ssl attacking Subversion clients, demanding client certificates

Over the weekend I upgraded to Subversion 1.7.2 and Apache 2.2.21 (which contains mod_ssl 2.2.21). Everything worked great browsing the repository from a browser. Problems started as soon as the svn command line or TortoiseSVN was used: client certificate prompts all over the place; sometimes cancelling worked, sometimes it caused the attempt to fail; general annoyance and stupidity across the board.

I verified 100 times that "SSLVerifyClient none" was set, moved it to the vhost and directory levels as well, no dice. I could break browser access by setting it to require, but nothing worked to config the prompts away, so I put back the old 2.2.15 mod_ssl file and bam, everything works like a charm again. It looks like there were some recent mod_ssl changes around how "optional" at the server level interacts with "require" at a lower level... it seems this went too far for some clients. Since 2.2.21 has been in the wild for a long time, I'm guessing this only impacts the SVN HTTP library; browsers work fine, and browsers breaking would have caused a whole lot of rioting on the internet.
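
For the record, here's the directive in question, which 2.2.15 honors and 2.2.21 apparently doesn't (for SVN clients, anyway):

```apache
# Tried at server, vhost, and <Directory> scope; all three were ignored by 2.2.21.
SSLVerifyClient none
```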

You can have the Session but you have to know the secret knock

Was very perplexed by this issue: context was fine, but context.Session was null, until I found this StackOverflow post.

One better, though: if you don't need to write to context.Session, implement System.Web.SessionState.IReadOnlySessionState instead, which is probably somehow cheaper.
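
A minimal sketch with a hypothetical handler (the StackOverflow fix is the marker interface; IRequiresSessionState is the read-write flavor, IReadOnlySessionState the cheaper one):

```csharp
using System.Web;
using System.Web.SessionState;

// The marker interface is what makes context.Session non-null here.
public class ReportHandler : IHttpHandler, IReadOnlySessionState
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Session is readable now; writes aren't persisted with the read-only marker.
        var user = context.Session["user"] as string ?? "anonymous";
        context.Response.Write(user);
    }
}
```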

At least they support options

"... will offer a choice of database services, including MySQL and the NoSQL system MongoDB. It also will offer MongoDB and Redis open source systems..." from Information Week's print article "VMWare Platform Takes It Deeper Into Cloud."

At least they have fixed it in the online version, though it's published under a different title. It's always funny to me that they can be so wrong technically while being so wrong editorially as well.

Troubleshooting slow ASP pages

Since I've always wondered about this and never bothered to figure it out before....

In IIS 6 logs, the time-taken value represents the entire time that IIS was touching the request. That at the very least includes time after IIS received the request but was still waiting to hand it off to ASP/ASP.NET for processing. It may also include time spent sending the bytes back to the client, but I don't have big enough data or slow enough networks to really answer that question well.

This does make a certain amount of sense. It's a web server log, not a web framework log, but it makes trying to troubleshoot "slow" pages really, really hard. We can't run multiple workers because that breaks classic ASP Session, so everything queues up every time there's a long-running request. So given a page with a large time-taken in the log, did that page really run for a long time? Or was it sitting in a queue waiting on another page that was running slowly? Sure, the first slow page in order is probably the root cause of all the following slowness, but how does one determine that root page?

What would be ideal would be separate timequeued, timeprocessing, timenetwork type columns... maybe those exist in IIS 7.... of course that would require getting our classic ASP certified for Windows 2008.
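
The best I can offer is sorting by time-taken and eyeballing the timestamps for pile-ups; a sketch using Microsoft's Log Parser against standard W3C fields (the ex*.log file mask is a placeholder for your log directory):

```
LogParser -i:IISW3C "SELECT TOP 20 time, cs-uri-stem, time-taken FROM ex*.log ORDER BY time-taken DESC"
```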

Dear Information Week please just let me go

Since I cancelled my free subscription, Information Week can no longer afford to fact check... oh wait, they never did that before either. Their latest flub jumped off the page at me: "...Java's JSON-based..." Wait, what? Java's JSON? Pretty sure that's not the case, and a 2-second Google confirms it: "JSON (JavaScript Object Notation) is a lightweight data-interchange format..." Wow, that was hard. Of course it's not the first time, and I'm sure it won't be the last.


I've ignored the renewal emails for months, but the quality of reporting is so low it doesn't really shock me that they can't manage to end my subscription either. I guess correctness doesn't matter nearly as much as the number of eyes on the page when you're pitching to management types.

And just to get out ahead of Information Week's next misstatement: "The final choice of name caused confusion, giving the impression that the language was a spin-off of the Java programming language." So, no, Information Week, JavaScript has nothing to do with Java. Let me know when I can expect my researcher check in the mail.

SQL Server object_name still takes int value for object_id

If you're using the (not so new anymore) DMVs and pulling any bigint values like resource_associated_entity_id, you'll get an arithmetic overflow if you try to pass that value to object_name, because the function's object_id parameter is still defined as int, even in SQL Server 2008.


For example, if you're looking at sys.dm_tran_locks to see which locks can't be granted during a block, you should only pass resource_associated_entity_id when resource_type is OBJECT; otherwise pass NULL to get NULL back.
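
A minimal sketch of that guard against sys.dm_tran_locks (the column picks are mine):

```sql
-- Only OBJECT resources carry an object_id-sized value; everything else
-- gets NULL so object_name() never overflows its int parameter.
SELECT resource_type,
       request_status,
       OBJECT_NAME(CASE WHEN resource_type = 'OBJECT'
                        THEN CAST(resource_associated_entity_id AS int)
                        ELSE NULL
                   END) AS locked_object
FROM sys.dm_tran_locks;
```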

Granted, looking up the object_name of a PAGE or KEY wouldn't give back anything useful, but it would be nice if it didn't blow up the query.

Tricky SQL XML support for binary values

In setting up "Event"-based block notifications for SQL 2005/2008, I had to get the binary SQL handle out of the XML provided by the event. This seems rather simple, except you can't just supply varbinary(64) as the type to @xml.value(), because that would be too easy for an MS product. Trying it gives you back NULL instead of your binary value.

As this page tells you, you need to use an XQuery conversion, xs:hexBinary, instead. But wait, you're not off the hook yet, because xs:hexBinary doesn't understand 0x-prefixed binary, and doesn't tell you that in any way, shape, or form. Instead you get back your old buddy NULL. And that's when you notice, buried in the XQuery on that page, that it's actually chopping off the 0x if the value has it (even though the test value in the code doesn't).

So the whole chunk you end up needing looks like this:

Process.value( 'xs:hexBinary( substring((frame/@sqlhandle)[1],3))', 'varbinary(64)' )
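
And a self-contained sketch of the same conversion, using a made-up @event variable and handle value standing in for the real event XML:

```sql
DECLARE @event xml;
SET @event = N'<frame sqlhandle="0x03000500B5D0C31F"/>';

SELECT @event.value(
    'xs:hexBinary( substring((frame/@sqlhandle)[1], 3) )',  -- substring(..., 3) drops the 0x
    'varbinary(64)') AS sql_handle;
```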

Empty User Names despite authenticating successfully

Had an issue today where System.Web.HttpContext.Current.User.Identity.Name was returning String.Empty even though I was prompted to authenticate against the server when hitting the web page. After a few Googles I realized it was because the web.config was set to <authentication mode="None"/>. Changing it to "Windows" fixed the problem. I'd never understood what that web.config line did, since auth is done at the IIS level and ASP.NET shouldn't really care. I guess now I know...
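
For reference, the web.config change is just:

```xml
<system.web>
  <!-- "None" leaves User.Identity.Name empty even when IIS authenticates;
       "Windows" surfaces the IIS auth result to ASP.NET -->
  <authentication mode="Windows" />
</system.web>
```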

Telco analyst calls out the lying ISPs about bandwidth hogs

Benoit Felten, a telco analyst at Yankee Group, calls out ISPs for claiming to suffer the effects of users who hog their precious bandwidth. He points out that no ISP has ever justified the existence of this class of users, nor have they ever released data about the usage of these hogs or any other subset of their user population. Yet Time Warner Cable, amongst others, uses the "existence" of these users to justify arbitrary, and I would claim exceedingly low, bandwidth caps after which they gouge users with additional fees. It's nice to see someone close to the industry finally saying what a lot of us on the outside have been thinking and trying to shed light on for a long time.

He also throws down the gauntlet, challenging the ISPs to provide him with usage data he could analyse to test their assertion that a small number of aggressive users has such a large impact on the experience of the many. So he's not just throwing stones; he's asking the ISPs to provide real, raw, hard data to back up their assertions via independent analysis.

App Engine and Bloog not getting along

So I noticed that the home page wasn't loading, and apparently a few others noticed too: something in the new version that Google has started pushing to its servers breaks Bloog. I pushed my fix to github, which disables the Tag list on the home page for the time being, till the issue can be really fixed.

Update: Long-term fix committed in two commits to github: 1, 2
I basically took the App Engine code for the method that was causing the error and pulled it into the Bloog code, changing it as needed to get it to run there. I can now load my homepage, edit posts (as this proves), etc.

Gotchas while migrating an existing Ubuntu install to Software RAID (MDADM)

There are a lot of HOWTOs out there for migrating to or setting up software RAID on Linux. So here's my two cents on the WTF moments I encountered:


I noticed that after moving to booting off the RAID array, edits to /boot/grub/menu.lst were no longer showing up at boot, no matter what combination of grub, root, setup, or grub-install I ran. I then noticed grub-install was printing an odd message about probe issues, while still claiming it ran without errors. I ran grub-probe -v and found it was choking on my second drive, which had a still-unformatted extra partition not in any way related to the RAID arrays; when scanning that drive it printed an "unknown filesystem" message. Once I formatted that partition as ext3, grub-install no longer printed any suspicious or error-looking lines.

The other issue I had was booting off the new disk. When I'd boot with root (hd1,0), the RAID array didn't seem to come up right, and the kernel would time out waiting for /dev/md0. The directions I'd found always said to run grub and then do the following commands:
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)

I stumbled upon another set of instructions that said not to change the root between the two setups, and once I did this and rebooted, booting with root (hd1,0) worked:
root (hd0,0)
setup (hd0)
setup (hd1)

I should point out that the second menu.lst entry still references root (hd1,0); it's only on the grub command line that I left root (hd0,0) active for both setup commands.

Recovering SQL Server Cluster Resource Types using cluster.exe on the command line

At work we had an issue with a SQL cluster that mysteriously went down due to the SQL Resources having been deleted. As part of the Server team's efforts to restore functionality, the SQL Resource Types were also deleted. Among the litany of issues we had to work through to get SQL back up and running, we had to piece together how to get the Resource Types back so we could successfully set up the Resources again.

The following steps document what we needed to do; in no way do I promise this will work for you, won't break things worse, etc. You should use values from another working SQL cluster if you have one; the ones here were copied from a similarly configured SQL cluster.
  1. cluster.exe RESTYPE "SQL Server" /CREATE /DLLNAME:SQSRVRES.DLL /ISALIVE:60000 /LOOKSALIVE:5000
  2. cluster.exe RESTYPE "SQL Server Agent" /CREATE /DLLNAME:SQAGTRES.DLL /ISALIVE:60000 /LOOKSALIVE:5000
  3. Add the SQL Server Resource via the Cluster GUI including the proper dependencies etc.
  4. Add the SQL Server Agent Resource via the Cluster GUI
  5. Follow the MS documented registry hacks to get the proper information back to allow the Clustered instances to start.  NOTE: This has to be added on all nodes in the cluster individually.
  6. Verify it runs from the command line: C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn>sqlservr.exe -sMSSQLSERVER
  7. Verify it runs from the command line as the service account using runas. 
  8. The Services should now start correctly on the local machine (not in the cluster).
  9. Stop the Services and bring them up via the Cluster. 

Information Week needs to fact check instead of cashing their Intel checks

"The 5500 is really the first chip to escape from the personal computing bias of the original x86 chips. It has a memory controller built onto the chip instead of off-loaded to a separate dedicated chip, reducing latencies encountered as a VM's operating system manages the memory that its application is using."
That's all well and good ... except AMD chips have done this since 2006, so Intel is "cutting edge" by staying 3 years behind the curve. Granted, the article doesn't explicitly say "AMD has yet to do this," but it also goes to great lengths not to say "this is the first Intel chip to not suck, by including 20-year-old technology."

Makes me wonder just how much ad money comes into Information Week from Intel, considering anyone who knows anything about chip architecture would tell you this has been a huge advantage for AMD for the last several years: the CPU was so much faster than everything else in the box that you needed to do everything you could to feed it information faster. And when you're talking computers, one-speaker-at-a-time 1980s bus technologies are not usually mentioned in the same sentence as "fast."

Of course Information Week targets the people writing the checks, who know nothing about hardware, and who will tell their peons "we should buy Intel servers because they made this great technological leap that no one else has yet." Kind of like how at work we moved from Legato backup to CommVault backup (as far as I can tell the only backup product worse than Legato) when suddenly the Legato ads in Information Week stopped and the CommVault ads started up. It's always a sad day when marketing outweighs technology, but if it's good enough for Microsoft, why shouldn't everyone else do it too, right? I hope they publish my response to the editors.

Time Warner Cable backtracks from evil cap plan

The power of complaining wins yet another battle. MSNBC is reporting that, due to public and political outcry, Time Warner Cable is abandoning its efforts to introduce metered and tiered internet services. This is a big win for network neutrality, as any limits on what, how much, and who on the internet violate the basic principles it was founded on. This is like if your phone company sold you long distance service but then told you that if you make more than 20 calls a month, regardless of their duration, they would start charging you an extra dollar per call. People would never stand for it on the phone network, or road systems, but because normal people don't understand how the internet works, they assume the providers will do the right thing. History has proven time and time again that's just not the case.

Time Warner 0, Eric Massa 1

Wired has coverage of NY Democratic Congressman Eric Massa's attempts to pass legislation banning Time Warner Cable from introducing usage caps and tiered pricing for their Roadrunner internet service. TWC has been trying out these caps and pricing structures in various markets and apparently finally stepped on the right person's toes by starting to record usage data in Rochester, New York. Wired points out that while playing the "woe is us" card, TWC has been raking in the profits, with their own annual report showing that their broadband costs were down 12% in 2008 while revenues were up 11%. Makes it a little hard to justify how the power users are beating your service into the ground while you're rolling around in Scrooge McDuck's vault.


Mr. iTunes DJ woke up on the wrong side of the bed

The stupid new "iTunes DJ," which replaced "Party Shuffle," has a bad attitude when you try to listen to songs he's already got "queued up to spin," as the kids would say.


Of course, that's only if you "Add to iTunes DJ," because then you might not really want to actually play the songs you just said you wanted to play. If you pick "Play Next in" or "Play in," then the DJ knows you're serious and doesn't ask questions.

I understand building the DJ functionality on top of the existing playlist functionality, but I can only assume Party Shuffle was the same thing with a different name, which makes this a big-time annoying regression in the new release. Go go gadget testing.

Net Neutrality is not something the founders of the Internet take lightly

Tim Berners-Lee sums up Net Neutrality:

"Net Neutrality says: 'If I pay to connect to the Net with a certain quality of service, and you pay to connect with that or greater quality of service, then we can communicate at that level.'

That's all. It's up to the ISPs to make sure they interoperate so that that happens.

Net Neutrality is NOT asking for the internet for free.

Net Neutrality is NOT saying that one shouldn't pay more money for high quality of service. We always have, and we always will."

Written in 2006. Unfortunately it's now 2009, and the telcos are still trying to confuse the common consumer into thinking Net Neutrality is Google trying to trick consumers into paying Google's internet bill.