Wednesday, October 23, 2013

Strange Mac Behavior

I have been having trouble with my MBP since I got back from vacation.  I could not tell what it was initially, just sluggish, but without super high disk IO or CPU.  I did not think much of it until the last two nights, when my internet at home was really slow too, from every device in the house.  I tracked the usage down to the Mac consuming all available outbound bandwidth.  Activity Monitor reported the same high (> 1 megabit/second) usage.  So then I pulled up Wireshark to see what was going on.  All the packets were destined for my company's ActiveSync server.  Sure enough, I shut down Outlook and the problem disappeared.  No high bandwidth, no sluggish response.  Start Outlook back up, bam - issues returned.  
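If you would rather not fire up Wireshark, tcpdump from a terminal can confirm where the traffic is headed. A minimal sketch, assuming the wireless interface is en1 and using a placeholder server name:

# Watch outbound packets to the suspected server.
# Interface name and host are example values, not my real ones.
sudo tcpdump -i en1 -n -q 'tcp and dst host activesync.example.com'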

Outlook was not showing any active sync tasks, so I just started cleaning up -- first my Inbox, then the Deleted Items and Drafts, to no effect.  Then I cleaned out my Sent Items and poof, the high bandwidth disappeared.  Googling shows this is a rare but well-known issue: when sending a "large" email fails, Outlook can become confused and keep attempting to send the item even after you have cancelled it.

Instead of following the typical online recommendation of deleting the entire Identity (which holds ALL of your Email and Calendar items, so a backup/restore would be required), you can simply right-click on your Sent Items and Drafts, choose Folder Properties, then click Empty (save what you want first).  That empties those folders and forces a clean resync from OWA.

So, if this happens to you, or you see ultra-high network utilization headed to your Exchange/ActiveSync server, this may be the problem.


For me, it was a 20MB PDF I tried to email...

Tuesday, January 22, 2013

Storage Server gone Green!

It has been a while, but this topic keeps coming up during casual conversations so I thought it was about time to refresh everyone on what I am doing for my storage solutions.

I have been a big consumer of digital storage since my first computer in 1985. I digitize almost everything and really do not throw much away. I had hundreds of floppies, bought my first 1 GB SCSI drive in 1995 for $1,000, and have always been pushing the boundaries with my personal document collection, photos, music, and movies. As an example, I ripped all the CDs I owned back in 1997 at 256kbps MP3. As I continued adding more CDs to my wall, I kept increasing the bit rates. I went back this summer and re-ripped the whole cabinet (about 500 CDs) into Apple Lossless format. Ditto for my DVD and BluRay collection. When you combine that with the tens of thousands of photos (original and scanned film), and every document I have written since high school in the 80's, through college, and today, I have amassed quite a lot of digital assets.

To securely store all of these bits, I wanted a filesystem explicitly designed for data integrity. The best one I know of today is ZFS. Not only does it support different levels of redundancy, it also stores a checksum for every block and can guarantee that I retrieve the data exactly as it was written. The "brain-dead" filesystems commonly used on Windows and Linux machines have no way to verify that every data block is not only readable but also matches what was written. I have had issues in the past where an fsck and/or chkdsk returned a good status, but the data content itself was corrupted.

The primary source of transient data errors is the Unrecoverable Read Error (URE), typically specified at around one error per 10^14 bits read for your normal platter drives (roughly one bad bit for every 12.5TB you read). The guys at CERN measured real-world corruption and found the effective rate to be much worse, on the order of 3*10^-7. So basically, if you have 1TB of data, expect a few of those files to have silent corruption if you are not using a checksum-based filesystem. This "bit-rot" problem grows as you store more and more data. When we had 1GB drives, it was extremely uncommon, but at 24TB it is a certainty. I have not had those same issues since moving to ZFS.

But that is all a side note to the real point of this article. How do I store all of those assets, and furthermore, how have I gone "Green" in the process? From my prior articles, you know I used a Norco 4U rack case stuffed full of 20 Seagate 1.5TB drives, with two SuperMicro MV8 8-port JBOD controllers connecting them to Solaris 10. There I created a RAIDZ2 across 16 drives (8 drives on each controller), kept 2 spare drives for the pool, and used 2 drives on the onboard SATA ports for the OS mirror. RAIDZ2 allows 2 drives to fail while still being able to read and write data in a DEGRADED state. While that system was effective, stable, and extremely fast, it also consumed quite a bit of power, and the case with all of its fans made a lot of noise pollution just to keep all those drives cool.
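For flavor, the old pool layout boils down to a couple of commands. This is only a reconstruction sketch; the device names are made-up placeholders, not the originals:

# Sketch of the old 16-drive RAIDZ2 pool with two hot spares.
# Device names (c1t0d0, etc.) are hypothetical placeholders.
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
           c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
    spare c3t0d0 c3t1d0
zpool status tank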

When I moved into my new house, I spent quite a bit on making it energy efficient. I replaced all the lights with LEDs (that deserves its own blog post about the importance of CRI in your household lights and ELV dimmers), ripped out and replaced much of the insulation, replaced the air conditioners with high SEER models, etc. So when it was time to rebuild the "storage server", I had to apply the same thinking.

But I did not want to focus exclusively on a "storage server". I had lots of other computers in the house, burning power, generating heat (in Texas where we use the A/C nearly year round), and taking up space. These were:
  • HomeSeer for Home Automation of all those new Z-Wave switches, the pool, irrigation, etc.
  • A ClearOS network server for DHCP & DNS (the reliability of the AT&T U-verse "Home Gateway" was problematic)
  • A CCTV recorder for my security cameras
  • A media server for the Xbox, PS3, and Android devices
  • An HTPC, at first running Plex, then switching to XBMC
  • A Mac Mini running iTunes for the iPhones in the house and the AppleTV
  • A BitCoin manager for playing around with virtual currency
I had been using VMware's ESXi for a while and really liked the way I could virtualize several systems into one box. However, all of the systems I had were more than compute nodes; they had specialized IO cards and adapters. It was not until Intel's Nehalem platform and VMware's DirectPath I/O that this problem could be solved. With DirectPath, I could still virtualize the computers, yet give each one exclusive access to dedicated PCI cards for its own use. That was the last hurdle, and it was gone! So, I decided to build a large(ish) server with a new i7 chip, get some new PCI Express cards, put all my eggs into one basket, and start doing my part to save the planet.

The items list below represents what is currently live and running in my main system. As you can see, I have been purchasing parts over the last few years as ideas (and budget!) came and went.
The physical space is a little tight in the case. There are 8 x 3.5” internal drive slots and 2 x 5 ¼” front bays. I used the IcyDock to mount the two SSDs, and a normal adapter for the 1.5TB VMFS volume, behind a fan speed controller in the top bays. The 1000 Watt power supply provides plenty of juice for the drives, processor, and video card. There is 16GB of RAM attached to the Intel i7 processor running at 2.8GHz. The dedicated LSI PCI Express controller card is attached to the 8x3TB SATA drives using two MiniSAS-to-SATA breakout cables and is configured as JBOD, so each drive has its own path to the controller without any switching. I am running ESXi 5.1.0 after my latest rebuild, in which I installed two SSDs for better performance; my VMs had started running slow and hitting high latency while they all shared a single 1.5TB 7200 RPM Seagate drive. ESXi now boots off an SSD along with the smaller VMs. The original 1.5TB boot drive is now datastore_hdd and holds the VMs which do not need high IO rates. I used thin provisioning on the SSD datastore and thick provisioning on the platter datastore.
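As an aside, when shuffling VMs between the datastores, you can pick the provisioning type at copy time with vmkfstools. A minimal sketch from the ESXi shell; the datastore paths and VM name are hypothetical examples:

# Clone a VMDK from the platter datastore to the SSD datastore as thin.
# Paths and VM name below are placeholders, not my real layout.
vmkfstools -i /vmfs/volumes/datastore_hdd/somevm/somevm.vmdk \
    /vmfs/volumes/datastore_ssd/somevm/somevm.vmdk -d thin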

For the storage service, I took the LSI PCI Express card and assigned it directly to the Solaris 11 VM so it could bypass the virtualization layer for IO. Then I made a raidz2 array across the 8 drives (the Solaris OS itself lives on a virtual disk on the SSD) and exported the storage via CIFS, NFS, and a little bit of iSCSI.
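The Solaris side is only a handful of commands. A rough sketch using the device names from the status output below; the dataset name is an example, and the share-property syntax differs a little between Solaris releases:

# Build the raidz2 pool on the passed-through controller and share it.
zpool create tank2 raidz2 c5t8d1 c5t9d1 c5t10d1 c5t11d1 \
    c5t12d1 c5t13d1 c5t14d1 c5t15d1
zfs create tank2/files
svcadm enable -r smb/server
zfs set sharesmb=on tank2/files    # CIFS
zfs set sharenfs=on tank2/files    # NFS
# The iSCSI piece is a zvol exported through COMSTAR (omitted here).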

$ zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  31.8G  5.36G  26.4G  16%  1.00x  ONLINE  -
tank2  21.8T  14.8T  6.99T  67%  1.00x  ONLINE  -
$ zpool status tank2
  pool: tank2
 state: ONLINE
  scan: scrub repaired 0 in 50h37m with 0 errors on Mon Oct 29 19:28:13 2012
config:

 NAME         STATE     READ WRITE CKSUM
 tank2        ONLINE       0     0     0
   raidz2-0   ONLINE       0     0     0
     c5t12d1  ONLINE       0     0     0
     c5t15d1  ONLINE       0     0     0
     c5t10d1  ONLINE       0     0     0
     c5t9d1   ONLINE       0     0     0
     c5t8d1   ONLINE       0     0     0
     c5t11d1  ONLINE       0     0     0
     c5t14d1  ONLINE       0     0     0
     c5t13d1  ONLINE       0     0     0

errors: No known data errors

With this configuration the pool reports 21.8TB, with roughly 16TB of that usable once raidz2 parity is taken out!  I run a ‘zpool scrub’ on the main array once a month to check for data errors and correct them while there is still enough redundant data.
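The monthly scrub is just a root crontab entry on the Solaris VM; the schedule below is an example:

# Scrub tank2 at 3:00 AM on the 1st of every month.
0 3 1 * * /usr/sbin/zpool scrub tank2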


The diagram to the right represents the memory allocation I have settled on. With ESXi v5, I can also page memory over-allocations to SSD. It also shows the DirectPath configuration and the USB devices attached to the VMs.

The diagram below represents how I have split the VMs across the datastores.


Now, what do I do for backups? My original storage server is still running. It wakes up once a week in the summer and does an rsync with the new storage server. In the winter (or, more precisely, when the outdoor temperature is under 60 degrees F), it runs SETI@Home full time for “intelligent heating”. For backing up the VMs, I still have to shut them down manually every once in a while and do an export. I want to be able to back them up directly (at the VM layer) while they are online, but have not dug into how I can do that from the ESXi kernel directly; scp on the ESXi host will not copy .vmdk files of running VMs. I also haven’t been able to get ESXi to attach to an iSCSI target which is hosted on a VM itself. If I could do that, then I’d only have to back up one VM regularly (the storage server VM hosting the iSCSI devices).
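The weekly sync itself is nothing fancy; something along these lines runs when the old server wakes up (the hostname and paths here are made-up examples):

# Pull the main pool down to the old server's backup pool over ssh.
# Hostname and paths are hypothetical placeholders.
/usr/local/bin/rsync -avz --delete-after \
    -e "ssh -l timk" \
    timk@newstorage:/tank2/files/ /tank1/files/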



$ zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool   696G  6.44G   690G     0%  1.00x  ONLINE  -
tank1  21.8T  10.8T  10.9T    49%  1.00x  ONLINE  -
$ zpool status tank1
  pool: tank1
 state: ONLINE
 scan: resilvered 35.6G in 1h46m with 0 errors on Thu Aug  9 19:12:41 2012
config:

        NAME         STATE     READ WRITE CKSUM
        tank1        ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            c9t0d0   ONLINE       0     0     0
            c9t1d0   ONLINE       0     0     0
            c9t2d0   ONLINE       0     0     0
            c9t3d0   ONLINE       0     0     0
            c9t4d0   ONLINE       0     0     0
            c9t5d0   ONLINE       0     0     0
            c9t6d0   ONLINE       0     0     0
            c9t7d0   ONLINE       0     0     0
            c10t0d0  ONLINE       0     0     0
            c10t1d0  ONLINE       0     0     0
            c10t2d0  ONLINE       0     0     0
            c10t3d0  ONLINE       0     0     0
            c10t4d0  ONLINE       0     0     0
            c10t5d0  ONLINE       0     0     0
            c10t6d0  ONLINE       0     0     0
            c10t7d0  ONLINE       0     0     0
        spares
          c8t0d0     AVAIL   

errors: No known data errors

Well, I am all out of time! Let me know if there are any questions or mistakes.

Notes:
  1. My ESXi server had an issue with PSOD (Purple Screen of Death) after I installed the SSDs. It would be a random world every time, but always a #PF Exception 14 with vmk_Memcpy@vmkernel at fault. After many hours of diagnosis, this turned out to be related to having a single VMFS5 datastore spanning both of the SSD drives. Also of note: I was using the host memory cache feature as well. When I rebuilt the VMFS as two separate datastores and manually balanced my VMs, all the PSODs ceased. 
  2. One of the CIFS clients was doing lots of small IO reads and writes, and the disk nearly always showed 100% overloaded. This was caused by the Windows SMB client requesting synchronous writes to the share. I turned off forced sync writes to disk on just that one share (zfs set sync=disabled tank1/scratch) and performance was fantastic; see the sketch below.
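The sync property is per-dataset, so only that one scratch share loses the safety net. A quick sketch of checking and flipping it:

# Check and change sync behavior on one dataset only.
zfs get sync tank1/scratch            # defaults to 'standard'
zfs set sync=disabled tank1/scratch   # trade write safety for speed
zfs set sync=standard tank1/scratch   # revert if you change your mind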

Wednesday, September 1, 2010

Zeke vs MacBook Pro




Tim>
so... I am working from home today.


Pete>
yes


Tim>
and Martine has a cat. let's call him Zeke because, well, that's his name and my nickname for him today is not as pleasant.

Pete>
hahahahahahaha


Tim>
Zeke likes to be the center of attention.

Pete>
of course

Tim>
Zeke likes to meow and walk on my keyboard when I am working, especially when talking business on the phone.
Today, I was running out of patience so I kept pushing Zeke out of the way, more roughly than usual.
then setting him down on the floor...
then pushing him away as he would not shut up while I was talking.
Zeke is a very vocal cat. He loves to meet people at the door and meow at them until they pet him.
Zeke, also likes to drink water. a lot of water.

Pete>
oh dear
btw: http://www.bitboost.com/pawsense/

Tim>
Apparently, Zeke was very thirsty.
I drink water too, and so I had to get up and go to the bathroom after I hung up the phone.
when I came back, there was a very nice present for me...

Pete>
*snicker*
cats always get their revenge

Tim>
he threw up all the water he just drank, his breakfast, and then more water, right smack in the middle of my MBP keyboard.
my laptop was completely covered in puke

Pete>
oh *perfect*

Tim>
I immediately got paper towels and began to soak up the puke, just leaving the chunks behind.
when I noticed that the screen had completely frozen and there was no response to keyboard or mouse.
I turned the laptop upside down and got more of the liquid mess out of the crevices.

Pete>
dear lord man

Tim>
powered it off and then continued cleaning the keys, cover, and so on.
he puked up so much, it had run over both edges, getting under the laptop.
finally, I got it cleaned up enough and pushed the power button...
nada. no magic chime, no whizzes or whirls.
sh*t
$3k cat puke brick

Pete>
lovely

Tim>
took out my compressed air bottle and began trying to get the small chunks of half digested cat food out from under each and every key.
you never realize just how many keys there are until you have to clean each one of them individually.

Pete>
I'd have disassembled the case by this point

Tim>
...and I have a meeting in 2 minutes...

Pete>
sweet. Remember, no matter how bad your day...

Tim>
so now, everything "appears" dry. the keys are no longer juicy and are responding fine.
I press the power button again.
yea! the magic chime!
but then zoomp.
power down.
MF
appears that some of the regurgitated "Fancy Feast" had seeped past the membrane and onto the motherboard.
again, more compressed air, more cleaning. same result.

Pete>
this story is epic

Tim>
argh...
but wait!!! it didn't power down this time.
just no video
I see the lights on under the keys and the backlit apple logo
Hmm... maybe I can hook up my external monitor, maybe all is NOT lost?
I dash into the garage, pull out the first VGA cable I find, run back into my office (I usually work at the dining table), and try to hook it up to my big monitor.
wrong fucking ends.
now the meeting has started, I can't see the PPT.
dash back into the garage, find a freaking VGA cable, push the monitor on its side, plug that up, plug that into the VGA dongle, push the power button...
wait for what feels like an eternity... magic chime! display comes out of power save and ... swirly clock!!! yeah!

Pete>
w00t!

Tim>
boots up, keyboard works.
trackpad mouse...not so much.

Pete>
of course

Tim>
ever try to seriously use OSX w/o a mouse?
I remembered my shortcuts. apple-space for spotlight
searched for Displays, go to the System Preferences.
I can navigate, but how do I activate a button?
go back to Keyboard Preferences,
enable Full Keyboard Access, now I can press Space and activate items.
yeah!

Pete>
yeah the disabled access stuff can help

Tim>
So, I'm in the meeting, nobody has anything to talk about, I hang up the phone to get back to my cleaning.
shut down the laptop and proceed to attempt to clean the trackpad as I had cleaned the keys.
did a pretty good job because when I rebooted, the trackpad worked, but not the buttons on the bottom.
and as a bonus, the built-in display worked too!
so I've been working today without the ability to click-drag while Zeke still sleeps in the middle of the table.

Pete>
hahahahahahahahahha

Tim>
that's it, that's how my day's been.

Pete>
sorry, rofling here

Tim>
crazy cat.
now I have to decide if I want to attempt to get the MBP trackpad repaired.
and can just imagine the face of the "Genius"...
GeniusBoy: "So, what happened?"
Me: "My fiancée's cat puked on my keyboard."
GeniusBoy: "Are you still engaged?"

Pete>
you won't be the first
*snort*

Tim>
I think this deserves a blog post.

Tuesday, March 31, 2009

Want Cheap Storage @ Home?

Okay, I've been asked several times what I use for bulk data storage at home. A lot of the tools I use for production systems are exactly the same, and saving money by using commodity hardware at the office is just as easy. (i.e., if I can create a 12TB single-fault-tolerant array at home for under $2k, then why spend $80k at the office???)

I bought a Norco RPC-4020 case ($290), which has 20x3.5" SATA drive bays with unfortunately flimsy trays, and mounted in it a cheap dual 2GHz AMD Opteron server motherboard with 4 PCI-X (133MHz/64bit) slots from eBay ($120, search "Monarch"). The eBay deal included 4GB ECC RAM and a 120GB IDE HD. Then I bought 8x1.5TB Seagate drives ($130/ea, $1060 delivered) and added my old set of 6xHitachi 500GB and 6xWD 750GB drives, plus 2xSuperMicro AOC-SAT2-MV8 8-port SATA controllers for $100 each. After a bit of "Aggie Engineering" to get the Opterons cooled properly in the case (without buying new heatsinks), I assembled the parts and was off to the races.

I downloaded and installed OpenSolaris 2008.11 for x86 on the 120GB drive, created 3 RAIDZ pools, created ZFS filesystems on them, and shared everything via CIFS to my Windows and Mac machines. If you're brave, you can stripe your raidz vdevs together into one pool, but I left them as separate pools so I can more easily upgrade the smaller drives to 2TB drives later this year (because you can't remove devices from a zpool); see the sketch below.
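For the curious, the difference is just where the raidz vdevs land; a sketch with hypothetical device names:

# Option A (what I did): independent pools, each its own raidz1.
zpool create tank1 raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0
zpool create tank2 raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Option B: one pool striped across multiple raidz1 vdevs. Bigger and
# more convenient, but you cannot remove a vdev later to upgrade it.
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
    raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0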

Total bill after incidentals, about 2 large. Total usable storage space, about 12TB. Total days of nerd fun, about 5.

Next on my project list is to rebuild my Vista Gaming PC for a Solid State disk drive boot and host an iSCSI target on the storage array. Been wanting to do that for a year now...

Notes:
  1. I do NOT recommend the 1.5TB Seagate drives. The failure rate on them is high (thanks for noticing, RAIDZ). I've RMAed two and updated the firmware to CC1H on the others (Google '1.5 TB Seagate Freeze'). They also do NOT work with the Adaptec 21610SA.
  2. Some consumer-level SATA drives don't like long cables. Even though the SATA spec allows it, when I was using a really nice 3U 12-bay external drive enclosure with 3xInfiniband connections to 2xLSI 3800X SAS controllers, I kept getting intermittent errors. That set me back 2 weeks and almost $800 to figure out. I still don't know if the cables were exactly it, or if it was a problem between the LSI 3800X and the Tyan Thunder server MB, but I had to scratch and restart on the controller. BTW, I still have those two 8-channel LSI controllers, an Adaptec 21610SA, and a Norco DS-1220 laying around.

Backup your ZFS files to Mac's HFS+ over a WAN

It makes sense to keep multiple copies of your critical files not only on different computers, but in multiple physical locations. But how do you keep them all in sync? rsync(1), of course! I debated using zfs snapshots, but that doesn't really let me access my files locally on Leopard (10.5), so I decided that keeping a replicated filesystem works best right now. But there are a few caveats and hidden obstacles. Let me cut to the chase and show you how I do it.

So I have a Solaris server at home (let's call it 'storage') with 10TB of storage on a ZFS filesystem, which I share via CIFS to all of my other computing devices. The setup boils down to:

zpool create tank1 raidz1 blah blah
zfs create -o casesensitivity=mixed -o nbmand=on tank1/files
svcadm enable -r smb/server
zfs set sharesmb=name=files tank1/files
sharemgr show -vp

I want to back up a portion of those files to my personal external USB drive at the office attached to my MacBook Pro.

I created an HFS+ partition on the Mac using 'Disk Utility', formatted it, and mounted it. I did my initial copy while attached to my local LAN using a basic rsync command (rsync -avz -e "ssh -l timk" timk@storage:/tank1/files/ /Volumes/Personal/files/).

Now, back at the office, I wanted to receive incremental updates. I went back through my history, started again with my basic command, and added --delete-after (to remove files from my external backup drive which were removed or renamed on my master copy at home). But I was seeing files which had not changed get transferred. This was not right!

2009/03/31 10:05:50 [6346] receiving file list
2009/03/31 10:06:10 [6346] 31752 files to consider
2009/03/31 10:06:10 [6350] >f+++++++ Data/Rebecca's Personal/My Documents/Personal.old/Recipes/Shrimp Etouffeé.doc
2009/03/31 10:06:11 [6350] >f+++++++ Data/Rebecca's Personal/My Documents/Personal/Recipes/Shrimp Etouffeé.doc
2009/03/31 10:06:11 [6346] *deleting Data/Rebecca's Personal/My Documents/Personal.old/Recipes/Shrimp Etouffeé.doc
2009/03/31 10:06:11 [6346] *deleting Data/Rebecca's Personal/My Documents/Personal/Recipes/Shrimp Etouffeé.doc
2009/03/31 10:06:11 [6346] ^M2009/03/31 10:06:11 [6350] sent 64 bytes received 777331 bytes 34550.89 bytes/sec
2009/03/31 10:06:11 [6350] total size is 63159427722 speedup is 81244.96


After some research, I realized it was the UTF-8 in the filename throwing off rsync, due to HFS+ storing filenames in decomposed Unicode form. By adding an --iconv=UTF8-MAC,UTF-8 option, I could force the character-set conversion of the filenames between my Mac's HFS+ and the ZFS on the storage server.
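You can see the mismatch directly: HFS+ stores 'é' as a plain 'e' plus a combining accent, while ZFS keeps whatever bytes it was given. A quick illustration, assuming your iconv build knows the UTF-8-MAC encoding:

# One precomposed 'é' (c3 a9) versus HFS+'s decomposed form (65 cc 81).
printf 'é' | xxd
printf 'é' | iconv -f UTF-8 -t UTF-8-MAC | xxd

But alas, life is never so easy: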

rsync: on remote machine: --iconv=UTF8-MAC: unknown option

Oh: OS X 10.5 ships with rsync 2.6.9, and Solaris 11 (snv_101b) also ships with 2.6.9, but the iconv option is only available in rsync 3.x (rsync.samba.org).

tim-kieschnicks-macbook-pro:~ timk$ /usr/bin/rsync --version
rsync version 2.6.9 protocol version 29
Copyright (C) 1996-2006 by Andrew Tridgell, Wayne Davison, and others.

Capabilities: 64-bit files, socketpairs, hard links, symlinks, batchfiles,
inplace, IPv6, 32-bit system inums, 64-bit internal inums

rsync comes with ABSOLUTELY NO WARRANTY. This is free software, and you
are welcome to redistribute it under certain conditions. See the GNU
General Public Licence for details.
tim-kieschnicks-macbook-pro:~ timk$

timk@stor:~$ /usr/bin/rsync --version
rsync version 2.6.9 protocol version 29
Copyright (C) 1996-2006 by Andrew Tridgell, Wayne Davison, and others.

Capabilities: 64-bit files, socketpairs, hard links, symlinks, batchfiles,
inplace, no IPv6, 64-bit system inums, 64-bit internal inums

rsync comes with ABSOLUTELY NO WARRANTY. This is free software, and you
are welcome to redistribute it under certain conditions. See the GNU
General Public Licence for details.
timk@stor:~$


So, a quick update of rsync via MacPorts (sudo port install rsync) and I was good to go on the client. On Solaris, it took a few more minutes to download the packages and dependencies from sunfreeware.com. I needed the following packages:

% wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/rsync-3.0.5-sol10-x86-local.gz
% wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/popt-1.14-sol10-x86-local.gz
% wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/libiconv-1.11-sol10-x86-local.gz
% wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/db-4.2.52.NC-sol10-intel-local.gz
% wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/libintl-3.4.0-sol10-x86-local.gz
% wget ftp://ftp.sunfreeware.com/pub/freeware/intel/10/libgcc-3.4.6-sol10-x86-local.gz

I uncompressed the whole lot, installed each package in dependency order with pkgadd -d, and then verified I was good to go.
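The whole dance amounts to a short loop; a sketch, assuming the filenames above and using pkgadd's 'all' keyword to skip the package-selection prompt:

# Unpack and install the sunfreeware packages in dependency order.
gzip -d *.gz
for p in libgcc-3.4.6 libintl-3.4.0 libiconv-1.11 db-4.2.52.NC \
    popt-1.14 rsync-3.0.5; do
    pkgadd -d ${p}-sol10-*-local all
done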

timk@stor:~$ which rsync
/usr/local/bin/rsync
timk@stor:~$ rsync --version
rsync version 3.0.5 protocol version 30
Copyright (C) 1996-2008 by Andrew Tridgell, Wayne Davison, and others.
Web site: http://rsync.samba.org/
Capabilities:
64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
socketpairs, hardlinks, symlinks, no IPv6, batchfiles, inplace,
append, ACLs, no xattrs,
iconv, no symtimes

rsync comes with ABSOLUTELY NO WARRANTY. This is free software, and you
are welcome to redistribute it under certain conditions. See the GNU
General Public Licence for details.
timk@stor:~$

tim-kieschnicks-macbook-pro:backup_stor timk$ which rsync
/opt/local/bin/rsync
tim-kieschnicks-macbook-pro:backup_stor timk$ rsync --version
rsync version 3.0.5 protocol version 30
Copyright (C) 1996-2008 by Andrew Tridgell, Wayne Davison, and others.
Web site: http://rsync.samba.org/
Capabilities:
64-bit files, 32-bit inums, 32-bit timestamps, 64-bit long ints,
socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
append, ACLs, xattrs,
iconv, symtimes, file-flags

rsync comes with ABSOLUTELY NO WARRANTY. This is free software, and you
are welcome to redistribute it under certain conditions. See the GNU
General Public Licence for details.
tim-kieschnicks-macbook-pro:backup_stor timk$



But... one more hurdle to overcome:

rsync: on remote machine: --iconv=UTF-8: unknown option
rsync error: syntax or usage error (code 1) at main.c(1318) [server=2.6.9]
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [receiver=3.0.5]


My storage server was still running the old rsync 2.6.9 on the remote side, because the remote shell invokes whatever rsync is first on the default PATH, which was still /usr/bin/rsync. No problem: I can specify the server's rsync path using --rsync-path=/usr/local/bin/rsync.

Now, finally, I can keep my files in sync!!!

#!/bin/bash

# Destination: the external HFS+ volume attached to the MacBook Pro.
tgt=/Volumes/Personal
# Prefer the LAN address; fall back to the WAN hostname if unreachable.
src_host=10.0.0.XX

# -t 5: give up after 5 seconds; -c 1: send a single ping (OS X syntax).
if ! ping -t 5 -c 1 "$src_host" > /dev/null 2>&1; then
    echo "Ping failed, using remote host."
    src_host=storage.mydomain.com
fi

# Bail out if the backup volume is not mounted.
if [ ! -d "${tgt}" ]; then
    echo "Personal volume not mounted, exiting."
    exit 1
fi

# Make sure the log directory exists before rsync tries to write to it.
logdir=$HOME/tmp/$(basename "$0")
mkdir -p "$logdir"

# -avzi: archive, verbose, compress, itemize changes
# --delete-after: remove files deleted/renamed on the master, after transfer
# --iconv: convert filenames between HFS+ (decomposed) and ZFS (UTF-8)
# --rsync-path: use the rsync 3.x binary on the server, not /usr/bin/rsync
/opt/local/bin/rsync -avzi --delete-after --progress \
    --iconv=UTF8-MAC,UTF-8 \
    --rsync-path=/usr/local/bin/rsync \
    --log-file="$logdir/files-$$.log" \
    -e "ssh -l timk" \
    "timk@${src_host}:/tank1/files/" "${tgt}/files/"