Buffalo NAS-Central Forums
Welcome to the Linkstation Wiki community

PostPosted: Mon Jun 04, 2007 12:41 pm
Newbie
Joined: Mon Jun 04, 2007 12:27 pm
Posts: 6
Hey guys, my TeraStation Pro got the infamous E14/Can't Mount error (as referenced here: http://buffalo.nas-central.org/forums/viewtopic.php?f=15&t=2424&hilit=).

I have a PPC Mac (although I'm still waiting on a PCI SATA card to get two more SATA ports). I'm wondering what the exact procedure is to rebuild my RAID5 mounting information from an Ubuntu LiveCD. The wiki has a small walkthrough (http://www.terastation.org/wiki/Data_Recovery#Data_Recovery_Method_With_PPC_Apple_Mac), but it's not that helpful for someone like me who isn't familiar with mdadm.

My question is this: on said wiki page, it looks like all he's doing with mdadm is examining things. There's a line on the wiki where the author wrote, "My system crashed (Ubuntu runs a little hot on a Powerbook) and I lost the actual output which showed that the array was rebuilding." How did he initiate the rebuilding? Through some mdadm command? Or something else?

Also, the TS itself runs embedded Linux on a PowerPC processor. Is it possible to run the hacked firmware that gives you telnet access and do all of this on the TS itself? I tried that, but the TS doesn't seem to have a /dev/disk/ directory, as per the above-referenced wiki. I'm not that familiar with Unix... is this directory only for IDE drives (my TS has SATA drives)? If not, why can't you do this whole process on the TS remotely?

Thanks for any help, community. Buffalo tech support told me the only option was to lose all my data... a pretty stupid option considering they're in the business of redundant data storage. I should have just spent my money building a Unix box and spent a weekend on Google rather than buying their product. Grrr.


PostPosted: Mon Jun 04, 2007 1:41 pm
Moderator
Joined: Tue Jul 26, 2005 5:22 pm
Posts: 1123
Location: United Kingdom
If you are running a telnet-enabled version of the firmware, you can definitely use the mdadm commands directly on the TeraStation for the array holding the data files. You would need to manually stop processes like Samba first to ensure that no process has files open on the array, but other than that there should not be any issues. It would be worth finding the mdadm documentation on the internet and printing yourself a copy for reference.

If you create an /initrd folder on your system, you will find that the original initrd RAM disk remains mounted there after you have booted. This lets you look at the linuxrc startup script, which has some examples of the way Buffalo have used mdadm. It will also give you examples of the device names used to reference the SATA drives on the TSP.
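
A minimal sketch of that approach over a telnet session on the TeraStation itself (the Samba init script path is an assumption and may differ on your firmware; /dev/md1 is the data array as identified later in this thread):
Code:
/etc/init.d/smb stop       # stop Samba so nothing holds files open on the array (script name may differ)
mdadm --detail /dev/md1    # inspect the data array before touching it
cat /initrd/linuxrc        # Buffalo's own startup script: examples of their mdadm usage
                           # and of the SATA device names used on the TSP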

Please keep notes of anything that you do and update the wiki appropriately.


PostPosted: Tue Jun 05, 2007 6:03 pm
Newbie
Joined: Mon Jun 04, 2007 12:27 pm
Posts: 6
Thanks, I think I'll try that later today.

However, my main question was really a request for clarification about the repair process. Again, I'm not too familiar with mdadm, and I could use some help understanding what the guy on the wiki was doing, specifically the part where he writes, "My system crashed (Ubuntu runs a little hot on a Powerbook) and I lost the actual output which showed that the array was rebuilding." How did he initiate the rebuilding? Through some mdadm command?

I'll read the mdadm manual later today, but as with all Unix programs, it takes quite a bit of time to grasp the high-level ideas when you have little experience with it (especially since I'm trying to use it for one very specific purpose, and it's hard to work out how to do something that specific without a little push in the right direction). So I guess my question is: could someone post a high-level walkthrough of what you're trying to do with mdadm, and which commands do it, when recovering the TS's RAID5 array? It would make reading the manual a lot more coherent for me.


PostPosted: Thu Jun 07, 2007 4:34 am
Newbie
Joined: Mon Jun 04, 2007 12:27 pm
Posts: 6
So I've been reading up on mdadm and trying a few things. I think the problem is that my superblock might be busted.

Note that all of this is being run on the TeraStation itself using the telnet-unlocked firmware. All of the structural information is therefore as it appears on the TS, not as it would appear on a mystical PowerPC Mac with four SATA ports.

So here's what I've discovered... of the md* devices, md0 seems to be the main system (Linux kernel?) drive and md1 seems to be the device that has all my precious, precious data.

Looking at md0...
Code:
root@HAXD_HELPER:/etc# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.02
  Creation Time : Sat Jan 14 12:32:49 2006
     Raid Level : raid1
     Array Size : 385408 (376.38 MiB 394.66 MB)
    Device Size : 385408 (376.38 MiB 394.66 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Jun  6 21:26:53 2007
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : e87531ac:9fe1f96a:121f55a1:1220867e
         Events : 0.110

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       17        3      active sync   /dev/sdb1


which is good, right? When I run mdadm --examine on it, however, I get:
Code:
root@HAXD_HELPER:/etc# mdadm --examine /dev/md0
mdadm: No super block found on /dev/md0 (Expected magic a92b4efc, got 00000000)


So this is confusing, since it seems like md0 isn't working, but somehow the kernel is working... does the kernel run off just one of the hard drives (i.e. sdX1)?

---

Anyway, that's unimportant (for now?). What I really care about is the RAID data I'm trying to recover. It's a similar story with /dev/md1, which, as I understand it, is the md device that holds all my sweet, sweet data...
Code:
root@HAXD_HELPER:/etc# mdadm --examine /dev/md1      
mdadm: No super block found on /dev/md1 (Expected magic a92b4efc, got 7d7d7d7d)


root@HAXD_HELPER:/etc# mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.02
  Creation Time : Tue Dec 27 16:09:40 2005
     Raid Level : raid5
     Array Size : 1462862592 (1395.09 GiB 1497.97 GB)
    Device Size : 487620864 (465.03 GiB 499.32 GB)
   Raid Devices : 4
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Wed Jun  6 22:22:04 2007
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 37d97fb5:083ede07:8d3e9c16:0f299b85
         Events : 0.300

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3


So md1's superblock is busted. But when I --examine all the sd devices that are part of md1, they work (meaning they have valid superblocks?):
Code:
root@HAXD_HELPER:/etc# mdadm -E /dev/sd[abcd]3
/dev/sda3:
          Magic : a92b4efc
        Version : 00.90.02
           UUID : 37d97fb5:083ede07:8d3e9c16:0f299b85
  Creation Time : Tue Dec 27 16:09:40 2005
     Raid Level : raid5
   Raid Devices : 4
  Total Devices : 1
Preferred Minor : 1

    Update Time : Wed Jun  6 22:22:04 2007
          State : active
 Active Devices : 4
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2cd505c9 - correct
         Events : 0.300

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        3        0      active sync   /dev/sda3

   0     0       8        3        0      active sync   /dev/sda3
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8       51        3      active sync   /dev/sdd3
/dev/sdb3:
          Magic : a92b4efc
        Version : 00.90.02
           UUID : 37d97fb5:083ede07:8d3e9c16:0f299b85
  Creation Time : Tue Dec 27 16:09:40 2005
     Raid Level : raid5
   Raid Devices : 4
  Total Devices : 1
Preferred Minor : 1

    Update Time : Wed Jun  6 22:22:04 2007
          State : active
 Active Devices : 4
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2cd505db - correct
         Events : 0.300

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       19        1      active sync   /dev/sdb3

   0     0       8        3        0      active sync   /dev/sda3
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8       51        3      active sync   /dev/sdd3
/dev/sdc3:
          Magic : a92b4efc
        Version : 00.90.02
           UUID : 37d97fb5:083ede07:8d3e9c16:0f299b85
  Creation Time : Tue Dec 27 16:09:40 2005
     Raid Level : raid5
   Raid Devices : 4
  Total Devices : 1
Preferred Minor : 1

    Update Time : Wed Jun  6 22:22:04 2007
          State : active
 Active Devices : 4
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2cd505ed - correct
         Events : 0.300

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       35        2      active sync   /dev/sdc3

   0     0       8        3        0      active sync   /dev/sda3
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8       51        3      active sync   /dev/sdd3
/dev/sdd3:
          Magic : a92b4efc
        Version : 00.90.02
           UUID : 37d97fb5:083ede07:8d3e9c16:0f299b85
  Creation Time : Tue Dec 27 16:09:40 2005
     Raid Level : raid5
   Raid Devices : 4
  Total Devices : 1
Preferred Minor : 1

    Update Time : Wed Jun  6 22:22:04 2007
          State : active
 Active Devices : 4
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2cd505ff - correct
         Events : 0.300

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       51        3      active sync   /dev/sdd3

   0     0       8        3        0      active sync   /dev/sda3
   1     1       8       19        1      active sync   /dev/sdb3
   2     2       8       35        2      active sync   /dev/sdc3
   3     3       8       51        3      active sync   /dev/sdd3



So... it seems to me like the individual sd devices have the right superblock info, but the superblock info on the md1 device got busted. Is there any way to tell the md1 device to look at the individual sd*3 devices for its superblock? I'm not sure how to phrase this in proper RAID/mdadm terminology (or whether I even have the right idea). Any help would be greatly appreciated.

Second, I'm confused by the part of md1's --detail output which says:
Code:
...
**   Raid Devices : 4
**  Total Devices : 1
  Preferred Minor : 1
      Persistence : Superblock is persistent

      Update Time : Wed Jun  6 22:22:04 2007
            State : active, degraded
** Active Devices : 1
**Working Devices : 1

Why is only one device active in md1? All four hard drives appear to be good, right?

Google isn't being much help in learning how to recover from this specific RAID5 problem (actually, it seems quite hard to find useful RAID troubleshooting information in general)... if anyone who knows what they're doing could speak up, I would be infinitely grateful.


PostPosted: Sat Jun 09, 2007 12:50 am
Newbie
Joined: Mon Jun 04, 2007 12:27 pm
Posts: 6
So I'm still working away on this, but in the meantime I found a nifty little feature.

If you want to write a custom little message on the LCD and you have the telnet-enabling firmware, the commands are:
Code:
root@HAXD_HELPER:/initrd# miconapl -a lcd_set_buffer0 "-LINE 1 16chars--LINE 2 16chars-"
root@HAXD_HELPER:/initrd# miconapl -a lcd_disp_buffer0


It stays on for a few seconds and then gets overridden by the usual message rotation. There's probably some monitor script running in the background constantly updating the messages, but it shouldn't be that difficult to find it and, er, customize it for your own purposes. My guess is it's /usr/sbin/miconmon.
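
For anyone who wants to script it, a hedged sketch of a tiny wrapper around the two commands above (the script name is just an example; the loop simply re-sends the text so the stock message rotation doesn't overwrite it, and the 5-second interval is arbitrary):
Code:
#!/bin/sh
# lcdmsg.sh -- usage: lcdmsg.sh "-LINE 1 16chars--LINE 2 16chars-"
while true; do
    miconapl -a lcd_set_buffer0 "$1"   # load the 32-character message into the LCD buffer
    miconapl -a lcd_disp_buffer0       # display it
    sleep 5                            # re-send before the normal rotation takes over
done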

Happy scripting.


PostPosted: Sat Jun 09, 2007 8:41 am
Site Admin
Joined: Sun Jul 17, 2005 4:34 pm
Posts: 5332
Hey scripters, you might take a look at my script log2lcd.sh for ideas.


PostPosted: Sat Jun 09, 2007 3:50 pm
Newbie
Joined: Mon Jun 04, 2007 12:27 pm
Posts: 6
There ya go... yeah, use Andre's... I figured someone would have found that out by now.

Anyway, an update on my saga, saved here for the good of our collective knowledge. I got some RAID5 advice from the good folks on the linux-raid@vger.kernel.org mailing list (http://www.nabble.com/linux-raid-f12444.html).

The main thing is that "mdadm --examine" only makes sense on a component of an md/RAID array (i.e. the /dev/sd*3 devices). The errors I was seeing were basically because I didn't know what the command did and was shooting in the dark :) (thanks, Neil Brown).

It was pointed out that everything looked fine except the "total devices" and "working devices" counts. Running the commands:
mdadm --stop /dev/md1
mdadm --assemble /dev/md1 --update=summaries /dev/sd[abcd]3
would probably fix it, and indeed it did:

Code:
root@HAXD_HELPER:~# mdadm --stop /dev/md1

root@HAXD_HELPER:~# mdadm --assemble /dev/md1 --update=summaries /dev/sd[abcd]3
mdadm: /dev/md1 has been started with 4 drives.

root@HAXD_HELPER:~# mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.02
  Creation Time : Tue Dec 27 16:09:40 2005
     Raid Level : raid5
     Array Size : 1462862592 (1395.09 GiB 1497.97 GB)
    Device Size : 487620864 (465.03 GiB 499.32 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Jun  8 19:13:51 2007
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 37d97fb5:083ede07:8d3e9c16:0f299b85
         Events : 0.310

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3


I restarted it hoping all would be well. I hadn't mentioned this, but at some point early in my adventure, my E14/Can't Mount error got replaced with a generic "uh-oh, I don't know what's wrong" E13/Array Error or something like that. Doing the above, however, brought back the good ol' E14. Progress!

I said to myself, "Alright, if this guy can't mount it by himself, let me try mounting manually and see what happens." I created a /mnt/mydata, since I have no idea where it officially mounts, and ran:
Code:
root@HAXD_HELPER:~# mount -t xfs /dev/md1 /mnt/mydata
mount: wrong fs type, bad option, bad superblock on /dev/md1,
       or too many mounted file systems
       (aren't you trying to mount an extended partition,
       instead of some logical partition inside?)


Grrr. I gave up for a while and went back to just poking around the system and seeing what I could find (that's about when I found the miconapl stuff). Finally I stumbled into /usr/sbin and saw a few XFS utilities. xfs_check and xfs_repair had promising names, so I googled their man pages and their usages.

I ran xfs_check with more or less the default options. Sure enough, plenty of "uh-oh"s and "bad magic number"s came up. This was my problem! I had a corrupted filesystem on the array, not a corrupted RAID setup! I had been looking at the wrong issue this entire time!

At some point the system crashed and the box began beeping non-stop with a weird error message not of the standard "Exx/blah blah I suck" form (it had some code that began with a W and non-descriptive text that I forgot). I suspected thermal overload, so I put a fan blowing right at the unit and rebooted.

I first ran xfs_repair -n /dev/md1. The -n flag means no actual repairs are made; it just tells you what it would change. After a while I got bored, Ctrl+C'd the process, and just ran xfs_repair /dev/md1. When I woke up the next morning, I found it had done a lot, but it finally ended with "done". Many a magic number and inode was deemed bad and fixed. A few things were moved to a "lost+found" directory, so I probably lost some files, but hey, this is data recovery. As long as I get most of my stuff back, I'll be happy.
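
For reference, a minimal sketch of the check-then-repair sequence described above, run on the TeraStation with the array assembled but not mounted (device name as used throughout this thread):
Code:
xfs_check /dev/md1        # read-only consistency check
xfs_repair -n /dev/md1    # dry run: reports what would be fixed, changes nothing
xfs_repair /dev/md1       # the real repair; orphaned files may end up in lost+found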

Anyway, I rebooted so it would run whatever automatic mounting procedures it does... and success! No red LEDs or beeping or anything bad! It went into an automatic resync, but the web interface recognized the file system as XFS, which it had never done before during the crash (i.e. this is normal operating behavior!!!). The resync process takes a few hours, so I'll let you guys know then if it worked.


The main thing to take away from this thread: you don't need a mystical PowerPC Mac with four SATA ports that doesn't exist anywhere -- the TeraStation itself is such a computer! All you need is the hacked firmware; then you can run everything on the TeraStation itself. Buffalo, if you guys read this, I can understand you wanting to deny us root access so we don't f%&8 things up and make you fix it through an RMA, but if you want to do that, for the love of god, give us some access to these recovery tools. No one can make a perfect computer that works all the time -- that's why God invented xfs_repair.


PostPosted: Sun Jun 10, 2007 3:46 pm
Newbie
Joined: Mon Jun 04, 2007 12:27 pm
Posts: 6
The recovery worked*!! All files are recovered*!!

*1) The files were recovered, but a lot of directory information was lost. Files still had their rightful filenames, and if a bunch were grouped in a directory they were still grouped in a directory, but the directory names became garbage and some were randomly flattened. I.e., if my file tree looked something like this before:
Code:
mydir1/mydir1a/lorem.avi
mydir1/mydir1a/ipsum.avi
mydir1/dolor.avi
mydir2/mydir2a/amet.mp3
mydir2/mydir2b/consectetur.mp3
mydir2/adipisicing.mp3

it now became:
Code:
412864232/mydir1a/lorem.avi
412864232/mydir1a/ipsum.avi
412864232/dolor.avi
864321684/amet.mp3
132498643/consectetur.mp3
966853412/adipisicing.mp3

Note that mydir2 got 'flattened' while mydir1 didn't. It appeared to be random which folders got flattened and which didn't. This could be catastrophic if you stored programs on your TS, but since mine is mostly just a media server, it was easy enough to figure out and fix manually. Prepare for some headaches, though, if your directory structure was particularly complex.

*2) This is the important one. Note that your files are normally stored in /mnt/array1/share1, /mnt/array1/share2, etc. The results of my xfs_repair were put into a folder called /mnt/array1/lost+found, and my share1 was empty. Don't be fooled! Even though your share is empty, all your files are in the (non-shared) lost+found. You simply need to move lost+found into your share:
Code:
mv /mnt/array1/lost+found /mnt/array1/sharedfolder1



I hope this entire thread eventually helps someone recover from a similar crash. What I learned from this is that there is no set recovery procedure to follow. This thread should give you some ideas of things to try, but prepare to spend a weekend (or a week, in my case) reading man pages and searching Google.


PostPosted: Sat Aug 25, 2007 11:50 pm
Total Newbie
Joined: Sat Aug 25, 2007 7:39 pm
Posts: 1
I just used this thread to recover over 900 GB of very important USGS data from our 2 TB TeraStation Pro. Let me just say we won't be trusting Buffalo with our data again.
Thank you phyros for your research. The only thing this thread does not mention is where to get the firmware, so let me sum up what I did.

1. Upgrade the firmware to an unlocked version to enable root access. You can get it here: http://homepage.ntlworld.com/itimpi/buf ... m#OPENTERA

2. Telnet into the TeraStation, log in as "myroot", and run the command xfs_repair /dev/md1
It will chug away for about an hour, depending on the severity of the damage. When it's done it will say "done" and you will see the command prompt again. Quit out of the telnet session and turn off the TeraStation. Turn it back on and wait for it to boot and, hopefully, mount the array.

3. Try to browse to your files on the TS. If there is nothing there, that's OK!

4. Telnet into the TS again, log in as "myroot", and change directory to /mnt/array1/lost+found. Do an ls and you should see a bunch of files and folders with numbers as names... move the files and folders into your shared folder, and voila!


Maybe someday Buffalo will be smart enough to write a program to do this, since it's their device that is causing people so much grief in the first place. Once you get your data back, take your drives and build yourself a FreeNAS box (http://www.freenas.org).


PostPosted: Mon Aug 11, 2008 8:52 pm
Total Newbie
Joined: Mon Aug 11, 2008 7:40 pm
Posts: 2

TERASTATION DATA RECOVERY
By Tony Jimenez,
"tony(at)780tech(dot)com"
August 2008
Based on everything read in this thread


Having trouble with:
* Degrade Mode ?
* EM (Emergency Mode) Bootup ?
* Unreadable Partitions ?
* E14: Cannot mount RAID Array1

Read on

Long story short
In my experience, the acquisition of a PPC Mac, SATA-to-USB adapters and a USB hub is totally unnecessary (thus far, anyway).
Fortunately I was hard-headed enough to keep searching until I found this thread, which offers a much easier method of regaining access to your data. As the information is a little "scattered", I promised myself that if I was successful I would create a little write-up detailing every step.

THE ADVENTURE BEGINS
If you just want to fix the stupid thing, skip to "THE SOLUTION".

After several power outages (during which my stupid battery backup failed), I found myself with a TeraStation in DEGRADE mode and inaccessible data. It also reported DRIVE 2 as inaccessible. After I pulled drive two and verified it was banging and clicking and indeed damaged, I attempted to access my data through the TeraStation without the drive (isn't that what RAID 5 is supposed to be all about?).

Anyway, I could see the shared folder on the TeraStation, but when I attempted to access it I got an error. Argh... In frustration I set about RMAing the defective hard drive (expedited service), swearing loyalty to Western Digital and vowing to give them my first-born child.

Three days later I received the replacement drive (which reminds me, I should probably send the old one back before they send their "hit men" to get me), plugged it in and turned on the unit, only to find that it would now only boot in "EMERGENCY MODE".

Stupid &^!@!&!^#$~~~!!!!!

OK, so... I VPN into my office and download my copy of the TeraStation Pro firmware, which I always keep handy precisely for these occasions. I reflash the box and reboot. It now boots correctly and reports no errors on the drive, but it still complains about being in "Degrade Mode" and the data is still inaccessible...

I log into the web interface and tell the array to rebuild with my new drive (as I cross all fingers and toes).
I (not so patiently) wait 24 hours for the RAID to rebuild itself (which it claims it did successfully); upon rebooting, however, I am still greeted with the (E14: Cannot mount RAID Array1) message.

I desperately fight the urge to smash the thing against a concrete wall.

Deep breath .... 1, Deep breath 2 ...............

OK, so I set off to search the web for a solution to the problem. Most everyone is suggesting I use a PPC Mac, SATA-to-USB adapters, a USB hub and a recent copy of Ubuntu.

Through much meditation and patience I eventually arrive at THIS thread, which seemingly offers a rather easy and straightforward solution (I HOPE). The information is a tad scattered, but I follow through to see what happens, and vow to write a step-by-step guide if it actually works... WHICH IT DID!

THE SOLUTION

Standard disclaimer...
If you f%&8 up your TeraStation, don't come crying to me, yada yada yada...

Hopefully at this point you are so desperate for a solution that you don't care anymore.

1. Go get and install a modified version of the firmware to enable Telnet and root access.
(IMPORTANT: GET THE VERSION SPECIFIC TO YOUR TERASTATION MODEL)
DIRECT LINK TO FIRMWARE DOWNLOAD SECTION: http://homepage.ntlworld.com/itimpi/buf ... ERA_TELNET


2. Drop to a command prompt (from Windows, click START, then Run, and type CMD).
Telnet into your TeraStation and run the following command: xfs_repair /dev/md1
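
For example (the IP address is only a placeholder for your TeraStation's address; "myroot" is the login account on the telnet-enabled firmware, as mentioned earlier in this thread):
Code:
C:\> telnet 192.168.11.150
login: myroot
root@TERASTATION:~# xfs_repair /dev/md1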

You will see a series of messages as the disk check/repair progresses. Once it has finished you should see a "done" message and the command prompt again (this will usually take an hour or two, depending on the state of your data).
Type EXIT to end your telnet session.

3. Press your TeraStation's power button for a couple of seconds until you see the message "Shutting Down".
Press the power button one more time and allow the unit to boot back up (hopefully this time it WILL mount correctly).
Attempt to access your TeraStation's shared folder over the network; at the very least you should now be able to reach the share.
If you now have access to your data... voila! We are done. Otherwise, proceed to step 4.

4. Drop to a command prompt (from Windows, click START, then Run, and type CMD).
Telnet into your TeraStation and change to the following folder: /mnt/array1/lost+found
Use the mv command to move all your data back to its rightful location, along the lines of the example below.

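
A hedged sketch of the move, based on the commands shown earlier in this thread ("share1" stands in for the name of your own shared folder):
Code:
cd /mnt/array1/lost+found
ls                              # recovered files and folders, many with numeric names
mv * /mnt/array1/share1/        # move everything back into your share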

All data should now be relocated to the specified location, and you should be able to access it over your network.
One last note:
Root folders and files may now have messed-up names; you will need to identify them and rename them correctly (a small price to pay, in my opinion).


PostPosted: Tue May 26, 2009 7:47 pm
Total Newbie
Joined: Tue May 26, 2009 7:30 pm
Posts: 1
I have read every line in this thread (I think), and several Google-related pages besides.

I have telnet access to the Terastation I want to fix, but... I can't fix it.


Code:
root@TERASTATION:~# xfs_repair /dev/md1
xfs_repair: /dev/md1 contains a mounted filesystem

fatal error -- couldn't initialize XFS library


I tried unmounting /dev/md1; it says it's not mounted.


PostPosted: Sat Sep 12, 2009 11:06 am
Total Newbie
Joined: Sat Sep 12, 2009 10:57 am
Posts: 1
Quote:
TELNET into your Terastation and run the following command: xfs_repair /dev/md1


I have searched these forums for instructions on how to telnet into my TeraStation Pro (I have upgraded the firmware already), but can't find anything. As I am a COMPLETE newbie, can anyone point me to existing instructions, or tell me what to do? I have got as far as Run, CMD, Telnet, but I don't know what to do next... (I am a Mac user, so please be gentle...)


PostPosted: Sat Apr 14, 2012 8:27 pm
Total Newbie
Joined: Sat Apr 14, 2012 8:11 pm
Posts: 1
I have been through a week-long ordeal very similar to those described above. Add to the stress the fact that this NAS box (HS-DHTGL/R5) is the sole repository of many of the photos and videos of our new baby, not to mention years of vacations, helmet-cam footage, and the like.

So, I had a drive fail (likely the result of a power failure, despite the device being plugged into a UPS). Upon shutdown and an attempted reboot, I got the E04 error. Flashing the firmware was to no avail - finally, swapping out the failed drive and THEN flashing the firmware allowed it to boot. The device came up, showed the faulty drive, and the web UI allowed me to kick off the 'repair' process. It was chugging along nicely when I went to bed, and I expected to wake to a fully functional array.

Instead, what I found was the E14 "Can't Mount Array" error. MUCH searching later, I stumbled upon this thread, read it top to bottom and inside out, gained telnet access to the device with acp_commander, and thought I had it...

Alas, the xfs_repair /dev/md1 command fails, as noted in the previous post:

Code:
root@GWNAS:/# xfs_repair /dev/md1
xfs_repair: /dev/md1 contains a mounted filesystem

fatal error -- couldn't initialize XFS library


Attempting to unmount /dev/md1 simply reports that it's not mounted. :(

Any suggestions? I was SO hopeful: while definitely more complicated than just dropping in a new drive and hitting "rebuild", I was hoping the details in this thread were going to get me there - even if I had to spend ages digging and moving files out of lost+found.

Thanks in advance for any ideas,
Billy


PostPosted: Sun Apr 15, 2012 11:16 am
Moderator
Joined: Tue Jul 26, 2005 5:22 pm
Posts: 1123
Location: United Kingdom
This thread applies to the original PowerPC-based TeraStation Pro models, whereas you seem to have a later model.

This might mean that the details of the commands to use have changed from those described here, although I would expect the basic approach to remain valid.


PostPosted: Fri Feb 01, 2013 11:27 pm
Total Newbie
Joined: Mon Jan 28, 2013 3:58 am
Posts: 1
A million thanks to phyros and netcat -- you two really saved my bacon! :D

I found this thread while trying to solve an E14 error on my ARM-based TeraStation Live HS-DHTGL/R5. Like SnowWake, I wanted to apply the knowledge in this thread to my ARM-based TeraStation, but I found that there were a few differences. I therefore cobbled together info from several other wikis and forums and experimented until I achieved success.

Here's my TeraStation Live (ARM-based) E14 recovery recipe:

Prologue: having tried, on and off, to resuscitate my HS-HDTGL4, I surrendered and bought an identical unit via eBay. Here's hoping the first one had a system-board issue that the second one doesn't have!...

1. Leverage the TeraStation's existing (but disabled) telnet access capability:
visit http://buffalo.nas-central.org/index.php/Open_Stock_Firmware#Getting_Console_.28Telnet.29_Access_with_acp_commander
From the command line (I used a Mac Mini -- the procedure is the same from a PC DOS prompt), telnet to the TeraStation. I also changed the root password after login -- not terribly important, as I did not follow the additional procedures from the above website to permanently enable telnet at startup:
Code:
$ java -jar acp_commander.jar -t 10.10.10.47 -o
ACP_commander out of the nas-central.org (linkstationwiki.net) project.
Used to send ACP-commands to Buffalo linkstation(R) LS-PRO.
WARNING: This is experimental software that might brick your linkstation!
Using random connID value = 30EFA7A7BE61
Using target:   hs-dhtgl5c3/10.10.10.47
Starting authentication procedure...
Sending Discover packet...   
Found:   HS-DHTGL5C3 (/10.10.10.47)    HS-DHTGL/R5(ANNEI) (ID=00326)    mac: 00:16:01:BB:92:7A   Firmware=  2.140   Key=4658BD91
Trying to authenticate EnOneCmd...   ACP_STATE_OK
start telnetd...   OK (ACP_STATE_OK)
Reset root pwd...   Password changed.
You can now telnet to your box as user 'root' providing no / an empty password.
BuffaloTech$ telnet 10.10.10.47
Trying 10.10.10.47...
Connected to hs-dhtgl5c3.
Escape character is '^]'.

BUFFALO INC. TeraStation series HS-DHTGL/R5(ANNEI)
HS-DHTGL5C3 login: root

root@HS-DHTGL5C3:~# passwd root
Changing password for root
Enter the new password (minimum of 5, maximum of 20 characters)
Please use a combination of upper and lower case letters and numbers.
Enter new password:
Re-enter new password:
Password changed.


2. Unlike PowerPC-based units, a TSLive using RAID5 stores its media on /dev/md2.
Verify this by using mdadm --detail to inspect array structures and sizes:


No media on /dev/md0 (it's an array on 4 devices, but it's too small):
Code:
root@HS-DHTGL5C3:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Mon Dec  1 23:36:36 2008
     Raid Level : raid1
     Array Size : 297088 (290.17 MiB 304.22 MB)
...

No media on /dev/md1:
Code:
root@HS-DHTGL5C3:~# mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.03
  Creation Time : Mon Dec  1 23:36:36 2008
     Raid Level : raid1
     Array Size : 497920 (486.33 MiB 509.87 MB)
...

Bingo! /dev/md2's big enough to be my RAID5 3TB array:
Code:
root@HS-DHTGL5C3:~# mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90.03
  Creation Time : Mon Dec  1 15:15:37 2008
     Raid Level : raid5
     Array Size : 2926785600 (2791.20 GiB 2997.03 GB)
    Device Size : 975595200 (930.40 GiB 999.01 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Sun Jul  3 02:13:06 2011
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : aba3b5a9:b317491a:da4e365d:a8a28660
         Events : 0.6386152

    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8       22        1      active sync   /dev/sdb6
       2       8       38        2      active sync   /dev/sdc6
       3       8       54        3      active sync   /dev/sdd6


3. Check that the individual partitions are, in fact, intact -- thus confirming that the array's screwed up but the data's still there:
Code:
root@HS-DHTGL5C3:~# mdadm -E /dev/sd[abcd]6
/dev/sda6:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : aba3b5a9:b317491a:da4e365d:a8a28660
  Creation Time : Mon Dec  1 15:15:37 2008
     Raid Level : raid5
    Device Size : 975595200 (930.40 GiB 999.01 GB)
     Array Size : 2926785600 (2791.20 GiB 2997.03 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Sun Jul  3 02:13:06 2011
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 5d05f90f - correct
         Events : 0.6386152

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        6        0      active sync   /dev/sda6

   0     0       8        6        0      active sync   /dev/sda6
   1     1       8       22        1      active sync   /dev/sdb6
   2     2       8       38        2      active sync   /dev/sdc6
   3     3       8       54        3      active sync   /dev/sdd6
/dev/sdb6:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : aba3b5a9:b317491a:da4e365d:a8a28660
  Creation Time : Mon Dec  1 15:15:37 2008
     Raid Level : raid5
    Device Size : 975595200 (930.40 GiB 999.01 GB)
     Array Size : 2926785600 (2791.20 GiB 2997.03 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Sun Jul  3 02:13:06 2011
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 5d05f921 - correct
         Events : 0.6386152

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       22        1      active sync   /dev/sdb6

   0     0       8        6        0      active sync   /dev/sda6
   1     1       8       22        1      active sync   /dev/sdb6
   2     2       8       38        2      active sync   /dev/sdc6
   3     3       8       54        3      active sync   /dev/sdd6
/dev/sdc6:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : aba3b5a9:b317491a:da4e365d:a8a28660
  Creation Time : Mon Dec  1 15:15:37 2008
     Raid Level : raid5
    Device Size : 975595200 (930.40 GiB 999.01 GB)
     Array Size : 2926785600 (2791.20 GiB 2997.03 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Sun Jul  3 02:13:06 2011
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 5d05f933 - correct
         Events : 0.6386152

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       38        2      active sync   /dev/sdc6

   0     0       8        6        0      active sync   /dev/sda6
   1     1       8       22        1      active sync   /dev/sdb6
   2     2       8       38        2      active sync   /dev/sdc6
   3     3       8       54        3      active sync   /dev/sdd6
/dev/sdd6:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : aba3b5a9:b317491a:da4e365d:a8a28660
  Creation Time : Mon Dec  1 15:15:37 2008
     Raid Level : raid5
    Device Size : 975595200 (930.40 GiB 999.01 GB)
     Array Size : 2926785600 (2791.20 GiB 2997.03 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Sun Jul  3 02:13:06 2011
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 5d05f945 - correct
         Events : 0.6386152

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       54        3      active sync   /dev/sdd6

   0     0       8        6        0      active sync   /dev/sda6
   1     1       8       22        1      active sync   /dev/sdb6
   2     2       8       38        2      active sync   /dev/sdc6
   3     3       8       54        3      active sync   /dev/sdd6


4. Attempt xfs_repair:

4.a. Running xfs_repair -n allows me to see what xfs_repair would do, without committing...
Code:
root@HS-DHTGL5C3:~# xfs_repair -n /dev/md2
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
................................................................................
[screen full of dots]
…………………………………………………………
……………found candidate secondary superblock...
verified secondary superblock...
would write modified primary superblock
Primary superblock would have been modified.
Cannot proceed further in no_modify mode.
Exiting now.


4.b. xfs_repair as prescribed for the PowerPC machines in this thread fails:
Code:
root@HS-DHTGL5C3:~# xfs_repair /dev/md2
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
................................................................................
[screen full of dots]
……………………………………………………………
………found candidate secondary superblock...
verified secondary superblock...
writing modified primary superblock
sb root inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 256
resetting superblock root inode pointer to 256
sb realtime bitmap inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 257
resetting superblock realtime bitmap ino pointer to 257
sb realtime summary inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 258
resetting superblock realtime summary ino pointer to 258
Killed


4.c. The "dmesg" command shows why xfs_repair failed (out of memory):
Code:
root@HS-DHTGL5C3:~# dmesg
lowmem_reserve[]: 0 0 0 0
...
Out of Memory: Kill process 11965 (xfs_repair) score 3736 and children.
Out of memory: Killed process 11965 (xfs_repair).
...


4.d. Allocate more swap space:

Observe currently-available swap space:
Code:
root@HS-DHTGL5C3:~# free
              total         used         free       shared      buffers
  Mem:       126080        25864       100216            0           20
 Swap:       131064         6392       124672
Total:       257144        32256       224888

Create an additional swapfile in the only partition that has any space -- the boot partition:
Code:
root@HS-DHTGL5C3:~# dd if=/dev/zero of=/boot/.swapfile2 bs=1M count=256
256+0 records in
256+0 records out

root@HS-DHTGL5C3:~# chmod 600 /boot/.swapfile2

root@HS-DHTGL5C3:~# mkswap /boot/.swapfile2
Setting up swapspace version 1, size = 268431360 bytes

root@HS-DHTGL5C3:~# swapon /boot/.swapfile2


4.e. Now attempt standard xfs_repair and observe that it can't deal with the inaccessible metadata in the array:
Code:
root@HS-DHTGL5C3:~# xfs_repair /dev/md2
Phase 1 - find and verify superblock...
sb root inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 256
resetting superblock root inode pointer to 256
sb realtime bitmap inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 257
resetting superblock realtime bitmap ino pointer to 257
sb realtime summary inode 18446744073709551615 (NULLFSINO) inconsistent with calculated value 258
resetting superblock realtime summary ino pointer to 258
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.


4.f. Try to mount the array as advised. Alas, no joy:
Code:
root@HS-DHTGL5C3:~# mount -t xfs /dev/md2 /array1
mount: wrong fs type, bad option, bad superblock on /dev/md2,
       or too many mounted file systems


4.g. Pull out all the stops and run xfs_repair -L -- very serious risk of data loss, but I have no choice:
Code:
root@HS-DHTGL5C3:~# xfs_repair -L /dev/md2
Phase 1 - find and verify superblock...
sb root inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 256
resetting superblock root inode pointer to 256
...[lots of nasty "inconsistent...resetting messages like this]...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
bad magic # 0x0 for agf 0
bad version # 0 for agf 0
bad length 0 for agf 0, should be 1048576
...[nothin' but "bad magic" for a while]...
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
error following ag 0 unlinked list
        - process known inodes and perform inode discovery...
        - agno = 0
imap claims in-use inode 259 is free, correcting imap
...[a metric buttload of imap corrections]...
        - agno = 1
        - agno = 2
...[and lots of "agno's"]...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - clear lost+found (if it exists) ...
        - check for inodes claiming duplicate blocks...
        - agno = 0
entry "backuplog1_200910120100.txt" at block 0 offset 280 in directory inode 262 references non-existent inode 788
        clearing inode number in entry at offset 280...
...[a whole lot of clearing going on]...
...[more "agno's" and many non-existent inodes]...
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - ensuring existence of lost+found directory
        - traversing filesystem starting at / ...
rebuilding directory inode 2566914346
...[lots of rebuilding]...
Phase 7 - verify and correct link counts...
resetting inode 16778267 nlinks from 3 to 2
...[some resetting]...
done


5. Nuke the extra swap file:

Code:
root@HS-DHTGL5C3:~# swapoff /boot/.swapfile2    
root@HS-DHTGL5C3:~# rm /boot/.swapfile2


6. Discover that xfs_repair created a lost+found directory. Other forums have indicated that a lost+found directory may be the final resting place for many of my media files (they advised: if you find nothing in your array's media directory, but find many files in lost+found, copy the files from lost+found over to your array and -- once the dust settles -- start rebuilding your directories and renaming files).
Jump up and down and yell "hallelujah" upon discovering that I have no lost+found files to relocate and -- to the best of my knowledge -- all my original array files are intact, right where they used to be! :biglol: :up: :biglol: :up:
Code:
root@HS-DHTGL5C3:~# cd /lost+found
root@HS-DHTGL5C3:/lost+found# ls -al
drwxr-xr-x    2 root     root            6 Jan 29 22:57 .
drwxr-xr-x   19 root     root         4096 Jan 30 18:31 ..
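
If you do end up with files in lost+found, a hedged sketch of the copy described in step 6 (both paths are only examples -- check where your array is actually mounted, e.g. with the mount command, and use your own share's folder):
Code:
mount                                  # confirm where /dev/md2 is mounted
cp -a /lost+found/* /array1/share/     # example paths only; rebuild directory names by hand afterwards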

...and to think Buffalo reps in Buffalo Tech Support forums said this data was unsalvageable and we should just delete and re-build the array!

7. (last step): Copy all my files off the TeraStation and onto disk drives I can trust! (but with mucho backup) ;-)

