Sunday, August 2, 2009

Amazon HMAC with ColdFusion

I recently received a work request to update the calls we were making against Amazon's Web Services with a new authentication signature. This proved to be more complicated than it should have been. I thought I would share my experience, in the hope that it will help remove a headache for someone else out there.

The purpose was to "sign" your requests with a secret key so Amazon could verify that the request had not been altered in transit (these are requests transmitted via REST calls over HTTP). There have been questions about why this was needed. The biggest point of abuse I can think of comes from using these web services client-side (AJAX), or mounting a DoS against a competitor, since you can only make so many requests per minute. Of course, AJAX calls will now need to be designed so the secret key never, ever gets exposed to the client side.

Here is the "documentation" Amazon provided. They gave this Important note to us:
"You have until August 15, 2009 to authenticate requests sent to the Product Advertising API. After August 15, 2009, messages that aren't authenticated will be denied."
What they failed to mention was their "testing" schedule (aka planned outages), which was sent out (and forwarded to me) just a few days ago.
Here is the full schedule:
"These planned outages will help our developers test their signed requests implementation and also discover applications and code paths that they may have missed, giving them the opportunity to address these gaps before signed requests become mandatory on August 15."
  • Monday, Aug 3 at 2PM (ET) through Tuesday, Aug 4 at 2PM, a few requests (appx 20% of the requests) that do not implement this new method correctly will be rejected.
  • Monday Aug 10 from 2PM (ET) through 4PM, all requests that do not implement this new method correctly will be rejected.
Uhh, sorry, what? A planned production outage two weeks early?! Needless to say, this led to elevated blood pressure for many. I looked at some of the Sample REST requests, found some HMAC code already available on RIA Forge in several Amazon-related projects, and thought, "This will be easy enough."

It's not that it wasn't easy to reproduce the example they gave. That part WAS easy. The difficult / frustrating part was getting the REAL requests signed and authenticated. The good news for many is that you can simply check the status code of your original request and retry it if it fails during the first round, which lasts 24 hours, and you will likely be fine.

Here is a checklist of what I would suggest looking at if you are having problems. This comes from my own experience and is bound to help someone. As I receive feedback, I'll update accordingly.
  1. Make sure you are using your OWN secret key.
  2. Use - NOT
  3. Do NOT use a trailing slash at the end of /onca/xml?...
  4. DO NOT put line breaks in your query string. Just use the normal & to separate your params.
  5. Make sure the only line breaks you use are chr(10), like the following:
    'GET' & chr(10) & '' & chr(10) & '/onca/xml' & chr(10) & '#YourQueryStringHERE#'
For me, the big epiphany came after using the webservices URL: one type of responseGroup would work, but another would not. The error was coming from, even though I was making my calls to, just like their example.

Here is an example of the code that I used.
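As a rough illustration of the signing steps above (not the original ColdFusion), here is a minimal Python sketch: sort the query parameters by byte order, percent-encode them, build the four-line string to sign joined by chr(10), HMAC-SHA256 it with your secret key, and append the base64 signature. The host, path, parameters, and keys below are placeholders, not real endpoints or credentials.

```python
import base64
import hashlib
import hmac
import urllib.parse


def sign_request(host, path, params, secret_key):
    # Sort parameters by byte order and percent-encode per RFC 3986
    query = "&".join(
        urllib.parse.quote(k, safe="-_.~") + "=" +
        urllib.parse.quote(str(v), safe="-_.~")
        for k, v in sorted(params.items()))

    # Four lines joined by chr(10): verb, host, path, canonical query string
    string_to_sign = "\n".join(["GET", host, path, query])

    # HMAC-SHA256 with the secret key, base64-encoded, then URL-encoded
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode("ascii"),
                                   safe="-_.~")

    return "http://" + host + path + "?" + query + "&Signature=" + signature
```

Note that the signature is computed over the sorted, encoded query string, so the request you actually send must use that exact string; re-ordering or re-encoding the parameters afterward will break the match.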

Tuesday, May 19, 2009

Puzzler: Which way to the runway?

Here is a fun puzzle for those looking for a break from normalcy ...

When visiting and looking at my route via a popular map utility on my phone, I came across an interesting artifact.

I'm curious if anyone can find the shadow of this, or for bonus points, the equation needed to calculate the height of said artifact, once the shadow is found!

Monday, March 2, 2009

Power of SQL: Update Replace()

Sometimes you realize after the fact that there's an issue with the way your data is being stored. In this case, we had a field that was storing names. It was originally intended to show an admin (or a report) the person's full name without concatenating data together. Here is what was being built:
"firstname middlename lastname"
Unfortunately, when the middle name was not filled out, the field contained two spaces between the first and last names. In addition to fixing the way the data is being updated, there's an easy fix for the existing data.

update users
set fullname = replace(fullname,'  ',' ')
where fullname like '%  %'
For clarity, this finds and replaces two spaces with one space. It's easy to adapt to your specific situation, and yes, it's really that easy.
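For a runnable sketch of the same fix, here it is against an in-memory SQLite database via Python's built-in sqlite3 module (the table and sample names are invented for the demo; the UPDATE itself is the same statement as above):

```python
import sqlite3

# In-memory table standing in for the real users table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (fullname TEXT)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [("John  Smith",),     # no middle name: double space
                  ("Jane Ann Doe",)])   # middle name present: already clean

# The fix from the post: collapse double spaces to single ones
conn.execute("UPDATE users SET fullname = replace(fullname, '  ', ' ') "
             "WHERE fullname LIKE '%  %'")

rows = [r[0] for r in conn.execute("SELECT fullname FROM users ORDER BY fullname")]
# rows == ['Jane Ann Doe', 'John Smith']
```

One caveat: replace() makes a single left-to-right pass, so a value with three or more consecutive spaces would need the statement run again before it is fully collapsed.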

Wednesday, December 24, 2008

Tag Context

Update: I had a formatting issue in the code example.

I found this method handy when trying to track down the tag context of a page's execution. It throws an error, catches it, and allows you to output (or email, or post to a log) the needed information about where a page's execution is coming from.
<cftry>
    <cfthrow message="context probe">
    <cfcatch><cfset e = duplicate(cfcatch)><cfdump var="#e.tagContext#"></cfcatch>
</cftry>
One word of caution, though: in my experience, it can over time (several days of try/throw/catch on every request) lead to server instability. In my case, the catch block was sending an email, and I left it in place to monitor the fix even after the underlying issue was resolved. The error you will see is:
An exception occurred when setting up mail server parameters.
This exception was caused by: coldfusion.mail.MailSessionException
For some reason ColdFusion loses the connection to the mail server, and ANY code that tries to send an email will fail. Not good. The technique was handy and served its purpose, but be careful about how you implement it.
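The same throw-and-catch trick can be sketched outside ColdFusion. Here is a rough Python equivalent (an illustration, not the original code) using the standard traceback module to recover the chain of callers, much like dumping tagContext:

```python
import traceback


def tag_context():
    # Mimic the cfcatch trick: throw, catch, and read the call stack
    try:
        raise RuntimeError("context probe")
    except RuntimeError:
        # Drop the last frame (this function itself) so only the chain of
        # callers remains, analogous to e.tagContext
        return [(f.filename, f.lineno, f.name)
                for f in traceback.extract_stack()[:-1]]
```

Calling tag_context() from anywhere returns a list of (file, line, function) entries leading to the call site, which you can log or email, with the same caution about doing so on every request.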

Sunday, December 14, 2008

Power of SQL: Update Join Tables

This is one of the most common update queries I write when dealing with multiple tables that need data updated.
update o
set o.status = 'complete'
from orders o
join order_detail od
on o.order_id = od.order_id
where ... (criteria here)
I often see code that will do a select and then update the subsequent table with a WHERE id IN (list_of_values). Remember that (almost?) any time you see an IN clause, it can be replaced with a more efficient join.
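For anyone who wants to try the join-driven update outside SQL Server, here is a runnable sketch using Python's built-in sqlite3 module. The table and column names are invented for the demo, and since SQLite only gained UPDATE ... FROM in recent versions, a correlated EXISTS expresses the same join condition portably:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE order_detail (order_id INTEGER, shipped INTEGER);
    INSERT INTO orders VALUES (1, 'open'), (2, 'open');
    INSERT INTO order_detail VALUES (1, 1), (2, 0);
""")

# Join-driven update: only orders whose detail rows meet the criteria change
conn.execute("""
    UPDATE orders SET status = 'complete'
    WHERE EXISTS (SELECT 1 FROM order_detail od
                  WHERE od.order_id = orders.order_id
                    AND od.shipped = 1)
""")

rows = dict(conn.execute("SELECT order_id, status FROM orders"))
# rows == {1: 'complete', 2: 'open'}
```

Either form lets the database drive the update from the join criteria directly, instead of round-tripping a list of ids through application code.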

Wednesday, October 8, 2008

Power of SQL: Find and Delete Duplicates

We have all had the situation, or will most likely encounter it, where duplicate data needs to be removed from a system. For me, this came up when moving data from a flat-file legacy system into SQL. This is the solution I came up with after getting it into SQL; perhaps you have a more elegant way of dealing with this, and if so, feel free to comment. Running this entire process against ~200,000 records took less than 15 seconds on my particular database. Creating the index seems to be helpful if you are going to run several queries against the data while preparing the statement.

First, the background:
I have a table of notes. Zero or more of those notes are related to a foreign key. If there is more than one identical note for the same foreign key, the extras need to be removed.

It is extremely difficult to compare one text field to another. With up to 2GB of data, it can be a daunting task. In my situation, most of the notes were less than 8,000 characters long; only a small number were longer. For those, we could safely assume that if they did not differ in the first 8,000 characters, they did not differ at all. You may choose to add additional constraints for your situation.

Basic Table layout:
noteId (identity field)
foreignId (integer / foreign key to other data)
noteText (text field)
... other unimportant data

ALTER statements before starting: this adds a checksum and a length based on the note. FYI, I did find duplicate checksums with different lengths. If you need to be precise, I suggest using the BINARY_CHECKSUM function, which is more sensitive; but understand that a checksum CAN have a collision. That means you can get the same checksum for different data.
ALTER TABLE notes ADD len_notes AS len(CAST(notetext AS Varchar(8000)))
ALTER TABLE notes ADD cs_notes AS checksum(CAST(notetext AS Varchar(8000)))

CREATE INDEX temp_index ON notes (foreignId,cs_notes,len_notes)

Now we have added columns that represent the length of a note and the checksum of a note. If your data is changing, new records will contain NULL values and updated records will NOT recalculate the checksum, so keep that in mind. You may want to put criteria on your select statement that only considers the non-null values if your data changes, or the data that has a length <>

Select and Review part of your data that will be deleted:

SELECT TOP 100 n2.noteId, n2.cs_notes, n2.len_notes, n2.notetext
FROM notes n2
WHERE noteId IN (
    SELECT n2.noteId
    FROM notes n1
    JOIN notes n2
      ON (n1.foreignId = n2.foreignId
      AND n1.cs_notes = n2.cs_notes
      AND n1.len_notes = n2.len_notes)
    GROUP BY n2.noteId
    HAVING n2.noteId > MIN(n1.noteId)
)
ORDER BY n2.noteId
Delete your duplicates: to be clear, this preserves the lowest noteId and deletes the rows with higher ids where the checksum AND length AND foreignId all match.
DELETE FROM notes WHERE noteId IN (
    SELECT n2.noteId
    FROM notes n1
    JOIN notes n2
      ON (n1.foreignId = n2.foreignId
      AND n1.cs_notes = n2.cs_notes
      AND n1.len_notes = n2.len_notes)
    GROUP BY n2.noteId
    HAVING n2.noteId > MIN(n1.noteId)
)
Put the table back the way it was:
DROP INDEX notes.temp_index
ALTER TABLE notes DROP column [len_notes]
ALTER TABLE notes DROP column [cs_notes]
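The whole pipeline can be sketched in runnable form with Python's sqlite3 module. This is an illustration, not the original T-SQL: SQLite has no CHECKSUM(), so zlib.crc32 stands in, the "computed columns" are populated with an UPDATE, and keeping the lowest noteId per (foreignId, checksum, length) group is expressed with MIN() rather than a self-join:

```python
import sqlite3
import zlib

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes "
             "(noteId INTEGER PRIMARY KEY, foreignId INTEGER, noteText TEXT)")
conn.executemany("INSERT INTO notes (foreignId, noteText) VALUES (?, ?)",
                 [(1, "hello"),   # noteId 1
                  (1, "hello"),   # noteId 2: duplicate of 1
                  (1, "world"),   # noteId 3
                  (2, "hello")])  # noteId 4: same text, different foreignId

# Stand-in for SQL Server's CHECKSUM over the first 8000 characters
conn.create_function("cs", 1, lambda t: zlib.crc32(t[:8000].encode("utf-8")))
conn.execute("ALTER TABLE notes ADD COLUMN cs_notes INTEGER")
conn.execute("ALTER TABLE notes ADD COLUMN len_notes INTEGER")
conn.execute("UPDATE notes SET cs_notes = cs(noteText), "
             "len_notes = length(substr(noteText, 1, 8000))")

# Delete duplicates, keeping the lowest noteId per
# (foreignId, checksum, length) group
conn.execute("""
    DELETE FROM notes WHERE noteId NOT IN (
        SELECT MIN(noteId) FROM notes
        GROUP BY foreignId, cs_notes, len_notes)
""")

remaining = [r[0] for r in conn.execute("SELECT noteId FROM notes ORDER BY noteId")]
# remaining == [1, 3, 4]
```

Note that noteId 4 survives even though its text matches noteId 1, because duplicates are only removed within the same foreignId, exactly as in the original requirement.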
I hope this helped.

Tuesday, September 9, 2008

We Need YOU!

As a friend or family member, or even a stranger if you are reading this here, we are asking for your help. A relative of ours was recently diagnosed with a highly aggressive form of lymphoma at the ripe old age of 33. His doctors at Baylor are working aggressively to eliminate the cancer, and part of his treatment is a stem cell transplant to rebuild his immune system (think of it as a system reboot after that nasty blue screen).

Unfortunately, we haven't been able to find a suitable donor yet. While there are millions of donors already registered, they have not found a "good enough" match. His doctors tell us that he's most likely to find a compatible donor in the area where we grew up, which happens to be the Lincoln, NE area. Our family has set up a donor drive in conjunction with the National Marrow Donor Program to get more people on the donor registry and find a donor for him. Even if you suspect you are not a match for him, there are thousands of others who are also in his position (or will be).

We would appreciate very much if you would consider participating in the donor drive - or by passing the word along to someone who may be able to participate.
The drive will be Saturday, September 27, 2008 from 8am to noon
at the Southwood Community Center (5000 Tipperary Trail).
There is a small fee ($25) for the tissue typing. Minorities are currently free due to high demand and funding that is available.
We need an approximate head count in advance, so please send an email to donordrive @ if you're planning to come. There are some businesses and people who are donating to the cause by offering prizes for registering as a donor.

You can get additional information about the drive, the prizes, and answers to all of those questions not already covered by visiting the website that was set up. By the way, the organization heading the donation drive is a 501(c)(3), just in case there was any worry about that. If you cannot attend, you can order a "typing kit" online for $52. All costs are tax deductible either way, and they also accept monetary donations from those who do not meet the health requirements.

Again, please share this information with others who may be willing to help. An official flier is on the home page of the website that was set up. If anyone is interested in posting the information at their office or church, they can use that image, or I can send you a copy.

Again, we are grateful to anyone willing to participate. Thank you and have a great day!

If you have questions you can also contact me via Grand Central - located on the right. (Please leave a message though)