• Bootstrap Like Responsive CSS

    Several years back, I created my web-based CV. It was an awesome website and I was very proud of it. Below are a few screenshots. I told the first person I met to check it out. He whipped out his phone and went to the URL - https://bharath.lohray.com/cv. The page I saw on his phone was a huge disappointment!

  • Very Large Hard Disk Issues

    A year back, I had written about stress testing new hard disk drives. In the process of phasing out old disks, I bought a new 4TB Seagate USB 3 disk from Costco for $99. And as usual, I put it through stress tests...

    time sudo badblocks -vw /dev/sdf && pushb "badblocks Done"

    ...and here are the results.

    The last part of the command sends me a push notification on Pushbullet once the task completes, via my pushb utility.
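    In essence, pushb just POSTs a note to the Pushbullet v2 pushes API. A minimal sketch of such a utility (not my exact script; the PUSHBULLET_TOKEN variable name is illustrative) would be:

    ```python
    # pushb: send a Pushbullet note from the command line.
    # Illustrative stand-in; assumes a Pushbullet access token in the
    # PUSHBULLET_TOKEN environment variable.
    import json
    import os
    import sys
    import urllib.request

    API_URL = "https://api.pushbullet.com/v2/pushes"

    def build_push(title, body=""):
        """Build the JSON payload for a Pushbullet 'note' push."""
        return json.dumps({"type": "note", "title": title, "body": body}).encode()

    def send_push(title, body=""):
        token = os.environ["PUSHBULLET_TOKEN"]
        req = urllib.request.Request(
            API_URL,
            data=build_push(title, body),
            headers={"Access-Token": token, "Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

    if __name__ == "__main__":
        # Usage: pushb "badblocks Done" ["optional body"]
        send_push(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else "")
    ```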

    Checking for bad blocks in read-write mode
    From block 0 to 3907018582
    Testing with pattern 0xaa: done
    Reading and comparing: done
    Testing with pattern 0x55: done
    Reading and comparing: done
    Testing with pattern 0xff: done
    Reading and comparing: done
    Testing with pattern 0x00: done
    Reading and comparing: done
    Pass completed, 0 bad blocks found. (0/0/0 errors)
    real    5289m17.742s
    user    43m2.892s
    sys     83m14.560s

    So, what is the problem?

    real    5289m17.742s

    That is over three and a half days to go through the disk 8 times (badblocks -w makes four write passes and four read passes). That works out to about 11 hours to read or write the entire disk once (~2h 45min per TB). Now, imagine a 6TB or an 8TB USB 3 disk. The issue here is not the time, but the error rates. Disks tend to have 1 unrecoverable read error per 100TB read, and this figure remains the same across disk sizes.
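    The timing arithmetic, using the `real` figure reported above:

    ```python
    # Sanity-check the timing: badblocks -w makes four write passes and
    # four read passes, i.e. eight full traversals of the disk.
    total_minutes = 5289 + 17.742 / 60   # 'real' time reported above
    passes = 8
    disk_tb = 4

    per_pass_hours = total_minutes / passes / 60
    print(f"{per_pass_hours:.1f} h per full pass")        # ~11 h
    print(f"{per_pass_hours / disk_tb:.2f} h per TB")     # ~2.75 h/TB
    print(f"{total_minutes / 60 / 24:.1f} days total")    # ~3.7 days
    ```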

    Below is an extract from a Seagate datasheet. Other manufacturers have similar figures.

    Seagate Datasheet

    Now, if you are using a 1TB disk, you can expect to read the disk through about 100 times before you hit an error. For a 4TB disk, that drops to 25 full reads, and for an 8TB disk, to just 12.5.
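    The same arithmetic as a quick script, using the ~100TB-per-error figure above:

    ```python
    # Full-disk reads before an expected unrecoverable read error (URE),
    # assuming roughly one URE per 100 TB read (figure from the datasheet above).
    TB_PER_URE = 100

    for disk_tb in (1, 4, 8):
        reads = TB_PER_URE / disk_tb
        print(f"{disk_tb} TB disk: ~{reads:g} full reads per expected URE")
    ```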

    Another problem is that the failure of a single disk now puts 4TB of data at stake, as opposed to 2TB a year and a half ago. So, if you rest easy trusting your data to a single disk, you have set yourself up for an unpleasant surprise.

    So, to reiterate the conclusions from my last article -

    1. Always stress test new disks.
    2. Keep more than one backup.
    3. Vary your disk collection by age and brand.
    4. Phase out your old disks.

  • On Demand Offline Backups

    I have a few monthly digital magazine subscriptions that I have been getting for years, saved and aggregated on my hard disk. Recently, I lost a chunk of these due to a mistake at the terminal. As a precaution, I had these synced to my cloud storage space on hubic (which offers 25GB of storage space, plus 50GB with 5 referrals). I thought I had a second copy secured. However, as I logged in to the hubic web interface, I saw files vanishing right before my eyes. The hubic client syncing between my computer and the cloud was sending instructions to delete files that were no longer on the computer. By the time I could log in via SSH and terminate the client, it was too late. Unlike Dropbox, hubic does not keep deleted files around to undelete. Years of aggregated magazine subscriptions, lost in minutes.
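    The safer pattern, and what the title of this post refers to, is an on-demand backup that only ever adds or updates files at the destination, so a deletion on the source cannot propagate. A rough sketch (purely illustrative; rsync without --delete achieves much the same):

    ```python
    # On-demand backup that never deletes at the destination: a deletion
    # on the source cannot propagate, unlike a live sync client.
    import shutil
    from pathlib import Path

    def backup(src, dst):
        src, dst = Path(src), Path(dst)
        for f in src.rglob("*"):
            if f.is_file():
                target = dst / f.relative_to(src)
                # Copy only new or newer files; never remove anything.
                if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
                    target.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(f, target)
    ```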

  • Hardcoding Passwords and git

    Recently I was working on building myself a script to download my securities transactions from Robinhood and save them as a CSV. I found an unofficial Python library that was reverse engineered from the app. As I started building the script, I realized that I could not publish it to a public git repository: I had hardcoded my credentials in the code.
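    One common fix (the variable names here are illustrative, not the library's actual API) is to read credentials from the environment rather than the source, so the repository never contains them:

    ```python
    # Keep credentials out of version control by reading them from the
    # environment instead of hardcoding them. Variable names are illustrative.
    import os

    def get_credentials():
        try:
            return os.environ["RH_USERNAME"], os.environ["RH_PASSWORD"]
        except KeyError as missing:
            raise SystemExit(f"Set {missing} before running this script.")

    # username, password = get_credentials()
    # ...then pass these to the library's login call.
    ```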

  • Retrieving Historical Stock Prices

    Yahoo Open Data Tables is probably the last free source of financial information available after Google shut down its finance API a few years back. In a recent project to analyze my stock trades, I decided to cache 10-year historical prices from the Yahoo Open Data Tables.
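    The caching itself is independent of where the data comes from. A sketch of the cache-on-miss logic (the fetch function is a placeholder, not Yahoo's actual API; the cache directory name is illustrative):

    ```python
    # Cache-then-fetch: keep a local CSV per symbol so repeated runs don't
    # re-query the remote source. fetch_history is a stand-in for whatever
    # actually queries the data source; the caching logic is the point here.
    import os

    CACHE_DIR = "price_cache"

    def cached_history(symbol, fetch_history):
        """Return cached CSV text for symbol, fetching and saving on a miss."""
        os.makedirs(CACHE_DIR, exist_ok=True)
        path = os.path.join(CACHE_DIR, f"{symbol}.csv")
        if os.path.exists(path):
            with open(path) as f:
                return f.read()
        data = fetch_history(symbol)   # hits the remote source only on a miss
        with open(path, "w") as f:
            f.write(data)
        return data
    ```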
