You don't understand.
If you install unfs3 with Optware, you will not be able to create a new share.
The web interface will show you the new share, but you cannot find it on your network,
because it is created in /opt/pictures and not in /mnt/array1/pictures.
Sorry to resurrect an old post, but your notes regarding /opt and shares are exactly the topic in question.
I've rewritten rc.optware so that I can enable/disable Optware, allowing shares to be managed without the risk of misplaced directories or existing ones vanishing (which cost me 3TB later). I'm also currently working on replacing the webui to make it a bit more powerful and a lot safer, but the question remains...
If I create a directory such as /mnt/disk1/some_directory and reboot, Samba (manager) goes into some kind of "recovery" mode and adds the directory as a share. Deleting the directory obviously does not remove the share, and I'm not trying to press my luck until this tidbit is figured out.
Do you know where the files/scripts are that build smb.conf, and/or which files restart services (and kill sshd) when a share is created or edited?
Basically, what I need to do is prevent the Buffalo scripts from altering anything at all, whether I'm rebooting, manipulating smb.conf (and other files), etc. I haven't dug deep enough to trip over such things just yet, but I'm guessing it will be one or more scripts within /etc/init.d being called by the webui.
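For what it's worth, a quick way to narrow that down is to grep the startup scripts for anything that mentions smb.conf. The sketch below runs against a throwaway fixture directory so it can be tried anywhere; on the actual NAS you'd point the final grep at the real /etc/init.d (the fixture script contents here are made up, not the actual Buffalo scripts):

```shell
# Build a tiny fixture tree standing in for /etc/init.d on the NAS.
fixture=$(mktemp -d)

# Hypothetical init script that regenerates smb.conf (made-up contents).
cat > "$fixture/smb.sh" <<'EOF'
#!/bin/sh
# regenerate the Samba config from the share database
share_tool --dump > /etc/samba/smb.conf
EOF

# Unrelated script that should not match.
printf '#!/bin/sh\necho hello\n' > "$fixture/hello.sh"

# The actual search: list every script that mentions smb.conf.
# On the NAS this would be: grep -rl 'smb.conf' /etc/init.d
matches=$(grep -rl 'smb.conf' "$fixture")
echo "$matches"
```

grep -l prints only the matching filenames; swapping it for -n shows the exact lines instead, which usually points straight at the regeneration step.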
Regarding "...can't create a new share..." and your example, it's actually a little more problematic than that. I cannot speak for all models, but on the LS series, when Optware is installed (regardless of the drive layout and/or RAID configuration) it goes into /mnt/disk1/.optware, and that "recovery" process also seems to monitor directories within /mnt/[drive]: it somehow throws off existing shares by saving their locations as <>pictures<>/opt<> where they were originally <>pictures<>disk1<>. I'm speculating this is because /opt gets mounted on, say, md#01, where disk1 was originally mounted at that location. If running Array1 instead, it gets even stickier; that is where the 3TB loss came into the picture, as it broke the array on both drives. (XFS isn't very useful for file recovery, either...)
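If those share records really are stored as plain text in that <>name<>device<> form (the format is copied from your example; where the file actually lives is unknown to me, so the sketch below just uses a temp-file fixture), the stale /opt entries could in principle be rewritten back with sed before anything re-reads them. A minimal sketch, assuming one record per line:

```shell
# Fixture file mimicking the record format quoted above.
records=$(mktemp)
printf '<>pictures<>/opt<>\n<>music<>disk1<>\n' > "$records"

# Rewrite any device field of /opt back to disk1, leaving other records alone.
sed 's|<>/opt<>|<>disk1<>|g' "$records" > "$records.fixed"

result=$(cat "$records.fixed")
echo "$result"
```

Obviously I'd back the file up first, and only attempt this with the Buffalo services stopped so nothing regenerates it mid-edit.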
Any pointers as to which files are running the show would be tremendously appreciated!
Hardware: Linkstation 441 and 221, running stock firmware with an altered running OS