Taking On New Responsibilities
About three weeks ago our project DBA suddenly moved on to another post - and, at the same moment, took a well-deserved two-week vacation. He'd been filling in for a colleague for some weeks before that. So, coming in on a Monday morning, I realised that I was now the project DBA as well as the project database developer. Great - nothing like a Monday morning for news like that! It makes you understand the girl whose excuse for shooting people was simply "I don't like Mondays"!
Anyway, I did a quick check of the system and all seemed fine - until I got to the Job Log and spotted a couple of red "X" marks. One I knew about: it was a job I'd run a few weeks ago which had failed; I'd fixed the problem but never bothered to delete the entry (sloppy, I know!). The other, however, was the weekly full backup from the previous Monday night. That meant we had been running on nothing but nightly differentials for almost two weeks. Eeek!
On investigation I discovered that only a couple of databases (out of about a dozen) were a problem: their backups had failed for lack of available disk space. And the next full backup was due to run that Monday night! Not only that, but the SQL I was running right then was eating up what space there was, growing the transaction log like a B-movie monster!
As you can imagine, there was a certain amount of panic and feverish work to clear enough space for the log and the next backup (moving old backup files to another disk drive). Then a more measured approach took hold, and I realised that all the backups were being made by SQL Server 2005, and so weren't compressed - native backup compression didn't arrive until SQL Server 2008. Zipping them up in blocks (a full backup plus all the subsequent differentials before the next full) got us about a quarter of the hard drive back that day, and about another quarter over the next few days. I'm now waiting for a DVD burner and a spindle of discs so I can get the data off-machine.
My good fortune was that this is, really, just a development machine, and a lot of the data just doesn't change very much. Had it been a production server with a lot of users the problem would have been much more pressing.
Moral: Always check the job log, and always check where your disk space is going - it's fairly simple to fix, but big-time bad if you miss it!
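Both checks can be done straight from a query window. The objects below (msdb's job tables and the undocumented xp_fixeddrives) are real SQL Server objects, but treat this as a rough sketch rather than a finished monitoring script:

```sql
-- A quick health check: failed job steps in the last fortnight,
-- plus free space on each fixed drive.
SELECT  j.name AS job_name,
        h.step_name,
        h.run_date,        -- int, yyyymmdd
        h.run_time,        -- int, hhmmss
        h.message
FROM    msdb.dbo.sysjobs       AS j
JOIN    msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
WHERE   h.run_status = 0       -- 0 = Failed
  AND   h.run_date >= CAST(CONVERT(char(8), DATEADD(day, -14, GETDATE()), 112) AS int)
ORDER BY h.run_date DESC, h.run_time DESC;

-- Free megabytes on each fixed drive
EXEC master.dbo.xp_fixeddrives;
```

Scheduling something like the first query as a job of its own (with an alert on rows returned) would have caught this a fortnight earlier.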
No Easy Way
The other day I was handed a new request: just some ETL to .CSV files, one per department, but it'll need to happen repeatedly. OK: some stored procedures to get the data (it's not a particularly simple extract, so it's easier that way than with straight SSIS transformations), and then SSIS to get it out to the file system.
My sprocs worked pretty well - I provided the department number as an input parameter and got the data into a table named after the dept number. Great! Now for the SSIS. I can use a Foreach Loop container in the control flow to run the sproc, and then have the data flow pick up the data from the appropriate table and drop it into the correctly-named .csv file.
So, I set up the Foreach Loop with a counter (1 to 16, as it happens) and pass that into the SQL string invoking the sproc in an Execute SQL task. Great!
In the Data Flow I create everything for department 1, test it, and then set up the connection string of the flat file destination to be driven by an expression involving the number from the Foreach Loop (held in a variable declared outside the loop). Doing conversions and other work in the same expression is a nuisance, so I used a neat SSIS trick: making one variable's value a function of another variable's value, by setting EvaluateAsExpression to True on the dependent variable.
What I did was have an int variable for the loop counter, and a string variable for concatenation, defined as an expression over it:
(DT_STR, 4, 1252) @[User::intLoopCtr]
so it's always the right value. Then, for making names, I keep the template (disk & path) in another string variable, to make changes easy; creating the file name is simply concatenation.
Now for the DataSource - the table filled by the sproc. Arrrrgh! You can't make the source table dynamic, as there isn't an Expressions entry in its properties list! Time to sit back and think.
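To illustrate the concatenation (the variable names here are my inventions, not the post's), the property expression on the flat file connection manager's ConnectionString would look something like:

```
@[User::strPathTemplate] + "Dept" + @[User::strLoopCtr] + ".csv"
```

Because strLoopCtr is itself an expression over intLoopCtr, it updates automatically on each pass of the loop.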
After a few minutes' hard thinking, lunch, and a few more minutes' hard thinking, I decided to follow Rule Two of Mike's Three Rules of Programming: "If it's getting really hard, then you're doing it the wrong way".
Instead of having the sproc generate a new table name automatically, why not tell it what name to use? So I changed the sproc, adding a new parameter and adjusting some code.
Now all I have to do is tell the sproc the department number and the table name, and keep the table name always the same. That's just a matter of concatenating some strings ("exec sproc ", the string version of the counter, and the constant table name) and using the result in the Execute SQL task. The DataSource can now always look at the same table, so it's happy, and the whole thing works like magic. That's the way I like to leave things on a Friday night. It doesn't always work out like that, but you've gotta try!
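The reworked sproc might look something like the sketch below. Every name in it (GetDeptData, @DeptNo, @TableName, dbo.SourceData) is hypothetical - the post doesn't give the real ones - but the shape of the change is just the extra parameter plus some dynamic SQL:

```sql
-- Hypothetical sketch: the caller now supplies the staging table name,
-- so the SSIS Data Flow source can always read the same table.
CREATE PROCEDURE dbo.GetDeptData
    @DeptNo    int,
    @TableName sysname
AS
BEGIN
    DECLARE @sql nvarchar(max);

    -- Rebuild the staging table each run; QUOTENAME guards the injected name
    SET @sql = N'IF OBJECT_ID(N''dbo.' + @TableName + N''') IS NOT NULL
                     DROP TABLE dbo.' + QUOTENAME(@TableName) + N';
                 SELECT *
                 INTO dbo.' + QUOTENAME(@TableName) + N'
                 FROM dbo.SourceData
                 WHERE DeptNo = @DeptNo;';

    EXEC sp_executesql @sql, N'@DeptNo int', @DeptNo = @DeptNo;
END
```

The concatenated string in the Execute SQL task then comes out as something like exec dbo.GetDeptData 3, 'DeptExtract', with 'DeptExtract' never changing - which is exactly what lets the DataSource stay static.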
Moral: Always be ready to go back and alter your design - you never know when what you didn't think of at the beginning will get in the way later on.
Training
There are no knitting pictures and no food stuff this post (sorry!). I'm writing this at 3.30 am because I couldn't sleep (I crashed at five for about four hours!), and I've been watching SSWUG's DBCon videos. The conference ran over the last three days but, of course, I had to be at work, beavering away for the good of the company. The wonderful thing, though, is that all the sessions (77 in all, including things like opening remarks and keynotes) are available for on-demand viewing.
That means that, although I knew I wouldn't be able to watch them live - with the ability to chat with other viewers in real time and ask questions - I do have the opportunity to see all these great presentations whenever I can get an hour or so to myself. In fact, I have until June to watch them, and I can watch each one up to six times! How cool is that? And I can watch them in whatever order I want: for example, I just watched Chris Shaw's Growth Pains, about what happens in IT when companies grow. That was just part one of three, so I'll look at the other two tomorrow before going on to other things. My next target is the PowerShell sessions, and there's another one called Agile for DBAs and Developers that looks really interesting ...
Obviously it's too late to watch the conference live (it finished on Friday), but you may be able to buy the DVD set from SSWUG - they're available to people who paid for the conference, so I don't know for sure - and it would certainly be worth it!
In fact, with all the stuff it produces, membership of SSWUG is an amazing deal! Thanks to Stephen Wynkoop and all the people there for making so much valuable info so easily available.
Happy Easter everyone!
TTFN