IRC log for #gluster, 2013-11-07


All times shown according to UTC.

Time Nick Message
00:57 B21956 joined #gluster
01:07 bennyturns joined #gluster
01:08 RobertLaptop joined #gluster
01:08 glusterbot New news from newglusterbugs: [Bug 1024369] Unable to shrink volumes without dataloss <http://goo.gl/jZ350k>
01:27 harish joined #gluster
01:48 harish joined #gluster
01:50 Fresleven joined #gluster
01:56 hagarth joined #gluster
02:03 Fresleven_ joined #gluster
02:06 harish joined #gluster
02:12 mattapperson joined #gluster
02:15 Fresleven joined #gluster
02:29 mohankumar joined #gluster
02:29 mattapperson joined #gluster
02:34 mattapperson joined #gluster
02:53 rjoseph joined #gluster
02:58 kshlm joined #gluster
03:05 shubhendu joined #gluster
03:11 vpshastry joined #gluster
03:11 bharata-rao joined #gluster
03:28 johnmark_ _mattf: ping
03:39 davinder joined #gluster
03:39 sgowda joined #gluster
03:42 shylesh joined #gluster
03:43 vpshastry left #gluster
03:46 mohankumar joined #gluster
03:47 itisravi joined #gluster
03:53 kanagaraj joined #gluster
04:01 _BryanHm_ joined #gluster
04:03 shruti joined #gluster
04:03 marcoceppi joined #gluster
04:03 marcoceppi joined #gluster
04:17 davidjpeacock joined #gluster
04:20 davidjpeacock joined #gluster
04:33 sgowda joined #gluster
04:38 ndarshan joined #gluster
04:39 glusterbot New news from newglusterbugs: [Bug 1026291] quota: directory limit cross, while creating data in subdirs <http://goo.gl/hesUtT>
04:44 lalatenduM joined #gluster
04:48 ppai joined #gluster
04:49 aravindavk joined #gluster
04:50 ndarshan joined #gluster
04:53 sgowda joined #gluster
05:10 psharma joined #gluster
05:12 satheesh joined #gluster
05:13 saurabh joined #gluster
05:16 ndarshan joined #gluster
05:19 bala joined #gluster
05:20 davidjpeacock joined #gluster
05:24 RameshN joined #gluster
05:27 ababu joined #gluster
05:33 ndarshan joined #gluster
05:40 hchiramm__ joined #gluster
05:49 bulde joined #gluster
05:50 shubhendu joined #gluster
05:55 aravindavk joined #gluster
05:56 ababu joined #gluster
05:56 kanagaraj joined #gluster
05:57 bala joined #gluster
06:02 rastar joined #gluster
06:11 CheRi joined #gluster
06:18 vimal joined #gluster
06:20 davidjpeacock joined #gluster
06:35 ndarshan joined #gluster
06:37 elyograg regarding my rebalance problem that I put on the mailing list ... someone said that redhat only supports their Storage product.  Is that the case even for their consulting page? http://www.redhat.com/consulting/
06:37 glusterbot Title: Red Hat | Consulting (at www.redhat.com)
06:37 ababu joined #gluster
06:37 ngoswami joined #gluster
06:37 shubhendu joined #gluster
06:38 psharma joined #gluster
06:39 kanagaraj joined #gluster
06:39 bala joined #gluster
06:43 aravindavk joined #gluster
06:44 JoeJulian johnmark: ^
06:44 vshankar joined #gluster
06:48 DV joined #gluster
06:51 satheesh1 joined #gluster
07:03 hngkr joined #gluster
07:09 ndarshan joined #gluster
07:10 ppai joined #gluster
07:11 shri joined #gluster
07:20 davidjpeacock joined #gluster
07:28 jtux joined #gluster
07:31 rastar joined #gluster
07:33 ricky-ticky joined #gluster
07:54 rastar joined #gluster
07:58 ekuric joined #gluster
08:02 ctria joined #gluster
08:10 eseyman joined #gluster
08:11 ndevos elyograg: I think consultants can be asked to support or help out with almost anything, from my understanding it is all defined in the contract that binds the consultancy department and customer
08:12 ndevos elyograg: of course, it may be that the management would not want to support certain (competing?) products, I guess you need to get in touch with them and request some talk or information
08:20 davidjpeacock joined #gluster
08:20 hngkr joined #gluster
08:28 psharma joined #gluster
08:34 pkoro joined #gluster
08:35 vimal joined #gluster
08:40 shri joined #gluster
08:41 rjoseph joined #gluster
08:45 lalatenduM joined #gluster
08:46 ababu joined #gluster
08:51 hagarth joined #gluster
08:54 aravindavk joined #gluster
08:55 RameshN joined #gluster
08:55 hybrid512 joined #gluster
08:56 bala joined #gluster
09:00 mbukatov joined #gluster
09:00 calum_ joined #gluster
09:00 ProT-0-TypE joined #gluster
09:06 stickyboy Hmm, can I connect to sub-directories of a GlusterFS share using the FUSE client? ie, share name "data", connect to "data/some/sub/path"?
09:09 samppah stickyboy: afaik that's not possible with fuse client.. should work with nfs
09:09 samppah but i'm not completely sure about that..
09:09 stickyboy I want to give someone access to a subset of one of my shares.  Maybe NFS is the best way.
09:10 stickyboy I don't think I can create another volume
09:10 stickyboy ?
09:10 ninkotech joined #gluster
09:10 ninkotech__ joined #gluster
09:12 samppah stickyboy: volume inside volume?
09:12 stickyboy samppah: Yah. :\
09:12 stickyboy Looks like NFS is the only way.
09:12 ninkotech joined #gluster
09:13 ninkotech__ joined #gluster
09:20 davidjpeacock joined #gluster
09:20 stickyboy Hmm, now whether to use GlusterFS NFS or kernel NFS.
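A minimal sketch of the NFS route samppah suggests, reusing stickyboy's example names (volume "data", subdirectory "some/sub/path"); the export option and mount flags are assumptions to verify against your version's "gluster volume set help" and NFS client:

    # optionally restrict what the gluster NFS server will export
    gluster volume set data nfs.export-dir /some/sub/path
    # gluster's built-in NFS server speaks NFSv3, so pin the version on the client
    mount -t nfs -o vers=3 server1:/data/some/sub/path /mnt/data-sub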
09:21 DV joined #gluster
09:22 askb joined #gluster
09:24 fidevo joined #gluster
09:29 badone_ joined #gluster
09:40 aravindavk joined #gluster
09:40 glusterbot New news from newglusterbugs: [Bug 1027668] Every volume logs "cannot add a new contribution node" every ten minutes <http://goo.gl/6cEBRP>
09:41 RameshN joined #gluster
09:41 psharma joined #gluster
09:41 ababu joined #gluster
09:43 bala joined #gluster
09:46 ndevos stickyboy: dont export a fuse-mount over NFS, unless you know what you are doing, and have read /usr/share/doc/fuse-*/README.NFS
09:46 stickyboy ndevos: Yah, I wouldn't dream of it. :)
09:47 ndevos :)
09:47 mgebbe_ joined #gluster
09:47 ndevos stickyboy: and bug 892808 is a feature request for the subdir mount functionality
09:47 glusterbot Bug http://goo.gl/wpcU0 low, low, ---, aavati, NEW , [FEAT] Bring subdirectory mount option with native client
09:49 stickyboy ndevos: Awesome, I'll subscribe.
09:52 fidevo joined #gluster
09:52 social any alive dev around? I'd love to ask for bug 1024369, I can see a lot of commits that might affect it in master so I'm close to running git bisect but it still would be better if some dev just pointed out correct commits that should get backported to 3.4 as soon as possible
09:53 glusterbot Bug http://goo.gl/jZ350k unspecified, unspecified, ---, sgowda, NEW , Unable to shrink volumes without dataloss
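For reference, a generic git bisect sketch for what social describes, i.e. hunting for the master commit that changes the remove-brick behaviour; the refs, and the assumption that master no longer reproduces the bug, are placeholders:

    # labels are swapped so "first bad commit" ends up meaning "first commit with the fix"
    git bisect start
    git bisect bad master          # assumed: the reproducer passes here
    git bisect good v3.4.0         # or wherever the data loss is known to reproduce
    # at each step: build, run the remove-brick reproducer, then mark the commit
    git bisect bad                 # fix is present at this commit
    git bisect good                # bug still reproduces at this commit
    git bisect reset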
09:53 askb joined #gluster
10:01 badone_ joined #gluster
10:10 glusterbot New news from newglusterbugs: [Bug 892808] [FEAT] Bring subdirectory mount option with native client <http://goo.gl/wpcU0>
10:20 davidjpeacock joined #gluster
10:33 ricky-ticky joined #gluster
10:36 meghanam joined #gluster
10:38 satheesh joined #gluster
10:42 tjikkun_work joined #gluster
10:49 satheesh joined #gluster
10:57 shubhendu joined #gluster
11:08 diegows_ joined #gluster
11:18 jmeeuwen joined #gluster
11:19 kr1ss joined #gluster
11:20 davidjpeacock joined #gluster
11:38 kr1ss left #gluster
11:42 ababu joined #gluster
11:44 satheesh joined #gluster
11:46 shubhendu joined #gluster
11:51 kkeithley1 joined #gluster
11:54 psharma joined #gluster
11:54 Ramereth joined #gluster
11:54 edward2 joined #gluster
11:59 bulde joined #gluster
12:07 failshell joined #gluster
12:10 itisravi joined #gluster
12:12 shapemaker joined #gluster
12:13 ccha2 joined #gluster
12:14 psharma joined #gluster
12:16 ricky-ticky1 joined #gluster
12:16 nonsenso_ joined #gluster
12:17 rcheleguini joined #gluster
12:17 ppai joined #gluster
12:17 Nuxr0 joined #gluster
12:17 helmo_ joined #gluster
12:17 SteveCoo1ing joined #gluster
12:17 pkoro joined #gluster
12:20 satheesh joined #gluster
12:20 portante_ joined #gluster
12:20 mibby joined #gluster
12:20 davidjpeacock joined #gluster
12:23 CheRi joined #gluster
12:24 klaxa joined #gluster
12:26 saurabh joined #gluster
12:27 hchiramm__ joined #gluster
12:27 baoboa joined #gluster
12:30 davidjpeacock joined #gluster
12:34 DV joined #gluster
12:39 Debolaz joined #gluster
12:44 harish joined #gluster
12:50 dusmant joined #gluster
12:59 yinyin joined #gluster
13:04 rastar joined #gluster
13:15 ndarshan joined #gluster
13:15 NuxRo joined #gluster
13:40 davidbierce joined #gluster
14:04 blook joined #gluster
14:07 bennyturns joined #gluster
14:11 dbruhn joined #gluster
14:17 ndarshan joined #gluster
14:21 RedShift joined #gluster
14:34 hagarth joined #gluster
14:40 yinyin joined #gluster
14:43 rjoseph joined #gluster
14:47 ndarshan joined #gluster
14:49 satheesh1 joined #gluster
14:50 davinder joined #gluster
14:51 bugs_ joined #gluster
15:01 dbruhn Gah, this is maddening, I have a bunch of files showing up twice in my file system with the same inode
15:05 MichaelBode joined #gluster
15:07 mattapperson joined #gluster
15:07 sjoeboo joined #gluster
15:09 satheesh1 joined #gluster
15:10 jake[work] joined #gluster
15:12 jake[work] hi!  my setup is two nodes, 1 brick.  took 1 offline and added test files to brick.  put 1 online - no resync.  tried: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
15:12 glusterbot <http://goo.gl/60uJV> (at gluster.org)
15:13 wushudoin joined #gluster
15:13 dbruhn are you adding the files directly to the brick, or through the mount point?
15:13 jake[work] through mount point
15:13 dbruhn did you allow a self heal to run on it?
15:13 dbruhn or force one?
15:14 dbruhn that should update the out of date brick
15:14 ababu joined #gluster
15:14 jake[work] i just followed that procedure on the link
15:15 bstr joined #gluster
15:15 jake[work] i didn't try the self heal links on the bottom
15:15 jake[work] do i need to do that?  or is it automatic?
15:16 jake[work] i would like it to be automatic if possible
15:16 dbruhn it will automatically happen on the backend if you wait for it, or anything that triggers a stat on the file will cause it to happen
15:16 dbruhn or you can force a self heal to happen with the self heal functions
15:17 jake[work] how long does it usually take?  been sitting for around 5-10 minutes.  5 files on brick
15:17 jake[work] ah - you're saying if any changes on the working server?
15:18 dbruhn I am assuming your system is configured in a replica2 setup?
15:18 jake[work] i believe so.  it's not distributed
15:18 zerick joined #gluster
15:19 dbruhn If that's the case and one of the replica's goes offline, it should stop writing to that brick/server
15:19 dbruhn when the system comes back online, the next pass the self heal makes over it should correct the out of sync data
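A sketch of the two trigger paths dbruhn mentions; the "gluster volume heal" commands only exist from 3.3 onwards (jake[work] turns out to be on 3.2 below), and the volume/mount names are placeholders:

    # ask the self-heal daemon to heal files already flagged as out of sync (3.3+)
    gluster volume heal testvol
    # or crawl the whole volume (3.3+)
    gluster volume heal testvol full
    # on any version, a stat through a client mount also triggers a heal per file
    find /mnt/testvol -noleaf -print0 | xargs --null stat > /dev/null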
15:20 jake[work] ok.  i almost feel like the brick is not mounted
15:20 lpabon joined #gluster
15:20 dbruhn gluster volume info
15:20 dbruhn crap sec
15:21 dbruhn run "gluster volume status"
15:21 jake[work] hmm.  that shows it is
15:21 dbruhn info doesn't show it's connections, just the configuration
15:21 jake[work] ah
15:21 jake[work] i'm on 3.2
15:21 jake[work] status doesn't work
15:21 dbruhn why such an old version?
15:22 jake[work] from debian repo
15:22 dbruhn I am assuming this is still just test and get comfortable at this point
15:22 jake[work] couldn't figure out another way
15:22 jake[work] i need to do production
15:22 dbruhn hmm, I thought semiosis made 3.4 debian packages
15:23 dbruhn Yep, there are a lot of production systems out there
15:23 dbruhn assuming tcp/ip not RDMA?
15:23 jake[work] if you think version is the issue, i can attempt to upgrade
15:24 jake[work] and yes, tcp
15:24 dbruhn well 3.2 is still in production in a bunch of slow to change shops, but 3.3.2 is rock solid and been in production for quite a while, and 3.4 is the latest with a lot of new features and things that make it a much nicer system to operate
15:24 dbruhn then you should probably be trying to use 3.4
15:25 dbruhn http://packages.debian.org/sid/glusterfs-server
15:25 glusterbot Title: Debian -- Details of package glusterfs-server in sid (at packages.debian.org)
15:25 jake[work] i tried to find a decent step by step on installing 3.4. just caved into apt-get :o)
15:25 dbruhn you want the 3.4.1 stuff
15:25 dbruhn yeah, it happens
15:26 jake[work] ok.  i'll upgrade and try to retest.  tnx!
15:27 kkeithley_ ,,(repos)
15:27 glusterbot See @yum, @ppa or @git repo
15:28 kkeithley_ yes, semiosis has 3.4 in his ,,(ppa)
15:28 glusterbot The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
15:28 B21956 joined #gluster
15:29 jake[work] nice! that is what i was looking for!
15:30 kkeithley_ debian ppa is at http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.1/Debian/
15:30 glusterbot <http://goo.gl/3aVtiv> (at download.gluster.org)
15:30 dbruhn thanks kkeithley, I always forget about glusterbot until someone uses it
15:31 jake[work] just add that to sources.list?
15:32 jake[work] nm - i found it
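Roughly what "add that to sources.list" amounts to, assuming the repository kkeithley_ linked follows a standard apt layout; the exact sub-path, suite name and signing key should be taken from the directory listing itself:

    echo "deb http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.1/Debian/apt wheezy main" \
        > /etc/apt/sources.list.d/gluster.list
    apt-get update
    apt-get install glusterfs-server glusterfs-client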
15:33 jskinner_ joined #gluster
15:34 ProT-0-TypE joined #gluster
15:55 xavih joined #gluster
15:55 daMaestro joined #gluster
15:59 zaitcev joined #gluster
16:15 dbruhn http://fpaste.org/52371/38407151/
16:15 glusterbot Title: #52371 Fedora Project Pastebin (at fpaste.org)
16:15 dbruhn anyone have any idea why I would be seeing duplicate files/directories from the mount point, but nothing indicating that same issue on the bricks
16:17 dbruhn and why wouldn't it be consistent across all clients
16:18 DV joined #gluster
16:21 rastar joined #gluster
16:21 shubhendu joined #gluster
16:28 ira_ joined #gluster
16:29 ira_ joined #gluster
16:29 harish_ joined #gluster
16:30 bulde joined #gluster
16:41 neofob joined #gluster
16:41 kaptk2 joined #gluster
16:51 aliguori joined #gluster
16:52 [o__o] left #gluster
16:54 [o__o] joined #gluster
16:56 mattappe_ joined #gluster
16:57 [o__o] left #gluster
16:59 [o__o] joined #gluster
17:02 raar joined #gluster
17:03 kPb_in_ joined #gluster
17:06 mattapperson joined #gluster
17:07 g4rlic joined #gluster
17:09 g4rlic quick question: does glusterfsd need to bind to tcp/2049 to do its normal job?  eg: I'm not using (to my knowledge) GlusterFS's NFS export capability.
17:09 semiosis g4rlic: no.  you can disable the gluster nfs server by setting 'nfs.disable on' on all your volumes
17:11 g4rlic semiosis: Oooh, excellent.  I will try that.  I'm trying to have both NFSd and glusterd running on the same physical machine (long story), but nfsd can't start correctly on account of glusterfsd already having that port bound.  Thanks!
17:11 semiosis yw
17:12 g4rlic is there a way to do that without having to tear down and rebuild the cluster?  eg: a gluster volume command, or something to that effect?  (pardon noob questions, guy who set this up isn't in yet.)
17:12 g4rlic looks like volume set will do it.  nvm. ;)
17:14 rastar joined #gluster
17:18 g4rlic side-note: when semiosis said all volumes, he meant it.  I only did 1 of 2 volumes (didn't know the second existed!) and until every volume had that option disabled, tcp/2049 wouldn't release.
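Per semiosis and g4rlic's side-note, the option has to be set on every volume before glusterfs releases tcp/2049; a sketch with placeholder volume names:

    for vol in vol1 vol2; do          # every volume in the cluster, not just the one being exported
        gluster volume set "$vol" nfs.disable on
    done
    netstat -tlnp | grep 2049         # check who, if anyone, is still holding the NFS port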
17:18 g4rlic Again, thank you.
17:18 * semiosis says what he means and means what he says
17:18 semiosis yw
17:18 g4rlic :)
17:21 bala joined #gluster
17:32 mattappe_ joined #gluster
17:33 Mo__ joined #gluster
17:36 g4rlic semiosis: any way for me to specify that option during volume creation?  Or must it be set after?
17:41 JoeJulian g4rlic: after, but that's a good feature request. Want to file a bug report?
17:41 glusterbot http://goo.gl/UUuCq
17:42 mattappe_ joined #gluster
17:43 g4rlic JoeJulian: Sure thing.  2 seconds.  (btw, 3.4.1 is considerably more reliable than 3.2.x, thanks!)
17:43 Technicool joined #gluster
17:43 JoeJulian My pleasure.
17:43 g4rlic Suggested component?
17:43 JoeJulian cli
17:50 g4rlic JoeJulian: https://bugzilla.redhat.com/show_bug.cgi?id=1028130
17:50 glusterbot <http://goo.gl/VgZIUG> (at bugzilla.redhat.com)
17:50 glusterbot Bug 1028130: low, unspecified, ---, kaushal, NEW , Permit setting options (specifically nfs.disable) on volume creation
17:50 g4rlic How's that look?
17:52 JoeJulian Looks good to me
17:53 rotbeard joined #gluster
17:54 g4rlic Awesome.  Again, thanks for your help!
17:56 JoeJulian You're welcome.
17:56 palli joined #gluster
17:57 palli Hey everyone.
17:57 semiosis g4rlic: create volume, set option, start volume.
17:57 palli I have a total of 100TB of data, and preferably make it look like one filesystem. I am thinking about 10* 10TB XFS and merge them together with gluster.
17:57 palli Is gluster the right tool for my scenario ?
17:57 g4rlic semiosis: Yep, that's what my salt configs look like now. ;)
17:58 JoeJulian true... (why didn't I think of that?)
17:58 JoeJulian palli: Are the drives on different machines?
17:58 g4rlic Oh, I think I understand.  I can set the options prior to volume start so that gluster's nfs never kicks in.
17:59 semiosis right
17:59 palli JoeJulian: No, they are all on the same machine. Same logical volume group in fact.
17:59 g4rlic I was worried I'd have to start it before options took effect.
17:59 mattapperson joined #gluster
17:59 g4rlic Can I close the bug myself?  (I guess that bug doesn't need to exist)
17:59 semiosis g4rlic: at least, thats how i think it should work.  if it doesn't, file a bug about that please
17:59 glusterbot http://goo.gl/UUuCq
18:00 JoeJulian palli: Being on the same machine, I would use lvm or raid. They'll perform better.
18:00 g4rlic semiosis: Let me try, I'll let you know in a few minutes.
18:00 semiosis if you can't actually close the bug, just make a comment with your solution & saying the bug can be closed
18:00 semiosis and someone will close it for you
18:00 JoeJulian You can close your own bugs.
18:00 mattapp__ joined #gluster
18:01 palli JoeJulian: My problem is size, 100TB in one filesystem is pushing the boundaries of what xfs can do, and i am afraid what will happen if i need to filesystem check it.
18:02 JoeJulian xfs fsck is a noop, but I hear what you're saying. GlusterFS can certainly do what you're asking.
18:04 palli Do you think glusterfs, will handle the filesystem size better, than a big xfs would ?
18:05 semiosis xfs repair
18:06 g4rlic semiosis: yep, that works perfectly.  Closing bug with workaround.
18:08 JoeJulian palli: GlusterFS doesn't really care about how big it is, so yes. Should shorten recovery time in the event you did need to repair the brick filesystem too since each brick would be smaller.
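For palli's layout, a sketch of a plain distribute volume built from ten smaller XFS bricks on one host (hypothetical names; with no replication, losing a brick still loses the files that hash to it):

    gluster volume create bigvol \
        $(for i in $(seq 1 10); do printf 'server1:/bricks/b%d/brick ' "$i"; done)
    gluster volume start bigvol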
18:09 g4rlic semiosis: bug closed with correct procedure.
18:09 semiosis sweet
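The workaround g4rlic recorded in the bug, as a sketch with hypothetical server and brick names:

    gluster volume create myvol replica 2 server1:/bricks/b1/brick server2:/bricks/b1/brick
    gluster volume set myvol nfs.disable on   # the volume exists but is not started, so gluster NFS never binds the port
    gluster volume start myvol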
18:12 glusterbot New news from newglusterbugs: [Bug 1026143] Gluster rebalance --xml doesn't work <http://goo.gl/hVyRoP>
18:14 glusterbot New news from resolvedglusterbugs: [Bug 1028130] Permit setting options (specifically nfs.disable) on volume creation <http://goo.gl/VgZIUG>
18:24 B21956 joined #gluster
18:26 bennyturns joined #gluster
18:37 mattappe_ joined #gluster
18:42 mattappe_ joined #gluster
18:46 rcheleguini joined #gluster
19:00 jake[work] a bit confused. on node 1 i do volume info and i see both bricks - good.  i also do df and i can see my brick.  i go to node 2 and i do df... but my brick isn't listed.  how can i tell on node 2 if the brick on that node is functioning?
19:04 mattappe_ joined #gluster
19:05 KORG joined #gluster
19:08 KORG joined #gluster
19:10 mattappe_ joined #gluster
19:15 g4rlic jake[work]: sounds like something's not mounted that should be.
19:15 jake[work] how do i check?
19:16 jake[work] i just mounted the brick on node2
19:16 jake[work] but how do i see if gluster is actually using it
19:16 jake[work] (pretty sure it's not)
19:16 JoeJulian @glossary
19:16 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
19:17 JoeJulian So your bricks are just part of the gluster volume. To use that volume you have to mount it.
19:17 semiosis jake[work]: gluster volume status, the brick log files, and the brick ,,(processes)
19:17 glusterbot jake I do not know about 'work', but I do know about these similar topics: 'development work flow', 'work flow' : The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd
19:17 glusterbot (for geo-replication) and glustershd (for automatic self-heal). See http://goo.gl/F6jqx for more information.
19:17 semiosis that was weird
19:17 JoeJulian hmm I didn't know about that...
19:17 semiosis !!!
19:17 JoeJulian Oh, right... I know why...
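A quick way to see the processes glusterbot just listed, and the per-brick state, on any server (the volume name is a placeholder):

    ps -ef | grep gluster            # glusterd (management), glusterfsd (one per brick), glusterfs (client/NFS)
    gluster volume status myvol      # 3.3+; shows whether each brick's glusterfsd is online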
19:18 m0zes joined #gluster
19:19 JoeJulian heading down to the office to pick up the replacement server to put back in the CoLo after a "power event" took out the last one...
19:19 jake[work] if i do volume create replica, that should mount the bricks in each node? no?
19:19 JoeJulian no
19:19 jake[work] ok.  that's prob what i missed
19:20 JoeJulian When you create a volume, you tell glusterfs what to use for bricks for that volume. It doesn't mount anything.
19:20 jake[work] but what would tell me if there were actually mounted?
19:20 jake[work] volume info is deceiving
19:20 semiosis mount
19:21 semiosis if you set up your bricks using a subdir on a mounted filesystem then gluster will fail to start the brick process if the mount is missing
19:21 JoeJulian http://www.gluster.org/community/documentation/index.php/QuickStart
19:21 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
19:22 semiosis for example, say you want a brick on /dev/sdb1, then mount /dev/sdb1 at /bricks/sdb1 and set gluster to use server:/bricks/sdb1/brick as the brick path
19:22 jake[work] yep - i did quickstart
19:22 JoeJulian step 2 in that quickstart is where you created and mounted what you were going to use for bricks.
19:22 semiosis now if /bricks/sdb1 is not mounted then the gluster brick path, /bricks/sdb1/brick, will not exist and gluster will fail to start the brick export daemon, and that will be shown by gluster volume status & log files, etc
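Semiosis' example as a runnable sketch, with assumed device and volume names; the mkfs option is the one the quickstart commonly suggests, not gospel:

    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/sdb1
    mount /dev/sdb1 /bricks/sdb1       # and add it to /etc/fstab so it survives reboots
    mkdir /bricks/sdb1/brick           # the brick is a subdir, so a missing mount is detected
    gluster volume create myvol replica 2 server1:/bricks/sdb1/brick server2:/bricks/sdb1/brick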
19:23 jake[work] volume status says both bricks are online
19:23 jake[work] but i know that isnt correct
19:23 jake[work] it said that when the disk wasn't even mounted
19:25 g4rlic jake[work]: the reason I suggested looking at mounts, is if the brick path doesn't show up in df, and you expect it to, it's because it's not mounted.
19:25 jake[work] yep.  i checked there and it is showing mounted now
19:26 jake[work] but no files are in /export/brick1 in node2
19:26 jake[work] i think the answer is adding the brick to the volume
19:33 mattappe_ joined #gluster
19:35 mattap___ joined #gluster
19:39 mattappe_ joined #gluster
19:58 andras joined #gluster
20:02 mattappe_ joined #gluster
20:02 andras hello! I have upgraded from 3.3 to 3.4 and got 5 out of 9 peers rejected. Googled the whole day and tried the solutions I found, but I have no working setup now :-( Any hints or tips on what to check? Is there some magic command like: glusterd --xlator-option *.upgrade=on -N, as there was for the 3.3 upgrade?
20:08 semiosis see ,,(peer-rejected)
20:08 glusterbot http://goo.gl/g0b4Oi
20:08 andras from the logs I saw that there were some checksum mismatch problems. I checked /var/lib/glusterd/vols and got different md5sums for some files (I can not remember which ones, I am at home now)
20:10 andras Thanks, I tried that one. I have deleted all files, and got the info from another peer, but found different versions after the peer probe
20:10 semiosis weird
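For reference, the recovery behind the link glusterbot gave is roughly the following, run on the rejected peer only (keep glusterd.info, it holds that peer's UUID); the peer name is a placeholder:

    service glusterd stop
    find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete
    service glusterd start
    gluster peer probe <a-good-peer>
    service glusterd restart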
20:11 Rav_ joined #gluster
20:11 bugs_ joined #gluster
20:13 andras is there a way to rebuild the cluster from scratch if the data exists in the corresponding volumes? I had replica3
20:15 andras I remember that there were 2 new option lines in some of the volume info files. I also tried rsync-ing the config folders as a hack...... no good
20:21 andras actually at first I had 2 peers rejected, but later "I successfully" downgraded 4 of them. now nothing works. As another "hack" I also tried editing the peer state to zero from 6. Then it retried again, but no success
20:24 davidbierce joined #gluster
20:36 davidbierce joined #gluster
20:48 bugs_ joined #gluster
20:58 bugs_ joined #gluster
21:00 kmai007 joined #gluster
21:00 kmai007 good 3PM to everyone
21:00 basic` joined #gluster
21:01 kmai007 I have a question
21:02 kmai007 i have a 2 node replicated gluster
21:02 kmai007 i was doing some load testing
21:02 kmai007 while writing 50k files to a volume, i drop 1 of the gluster nodes
21:02 kmai007 and the client kept writing to the volume after a short pause
21:03 kmai007 as the 2nd node rejoined the party
21:03 kmai007 the files that were still being written after the drop, have been moved off to .glusterfs
21:04 kmai007 now  to the client the files after the dead node have disappeared
21:04 kmai007 how do I tell gluster to bring those back in?
21:04 kmai007 i tried to initiate the self-heal but no changes
21:05 sprachgenerator joined #gluster
21:08 kobiashyi joined #gluster
21:10 kmai007 also, is there any doc. on how to do snapshots?
21:11 kmai007 i see the gluster cli cmds but, i was hoping there was some best practices with that feature
21:11 kmai007 glusterbot
21:11 kmai007 @glusterbot
21:11 kmai007 are you awake/
21:12 samppah kmai007: glusterfs snapshots are planned for 3.5 but i don't know the status of it
21:13 samppah some people use lvm snapshots for that purpose
21:16 samppah http://www.idera.com/productssolutions/freetools/sblinuxhotcopy this is also an interesting tool for snapshotting
21:16 glusterbot <http://goo.gl/OiSjHJ> (at www.idera.com)
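A sketch of the LVM approach samppah mentions, assuming each brick lives on its own logical volume with free extents in the volume group (names are hypothetical):

    # snapshot the brick LV; the size is the copy-on-write budget, not the brick size
    lvcreate --snapshot --size 10G --name brick1-snap /dev/vg_bricks/brick1
    # mount it read-only to copy data out; nouuid because an XFS snapshot shares the origin's UUID
    mount -o ro,nouuid /dev/vg_bricks/brick1-snap /mnt/brick1-snap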
21:18 mattappe_ joined #gluster
21:22 yinyin_ joined #gluster
21:26 mattappe_ joined #gluster
21:40 irssi joined #gluster
21:48 mattappe_ joined #gluster
21:55 bennyturns joined #gluster
21:56 mattappe_ joined #gluster
21:58 mattap___ joined #gluster
22:02 mattappe_ joined #gluster
22:05 mattapp__ joined #gluster
22:08 diegows_ joined #gluster
22:17 mattapperson joined #gluster
22:20 johnsonetti joined #gluster
22:20 DV joined #gluster
22:23 kmai007 thanks @samppah
22:24 mattappe_ joined #gluster
22:24 mattappe_ joined #gluster
22:30 jake[work] joined #gluster
22:32 jake[work] i just reinstalled using the quick start guide.  glusterfsd is only running on 1 of the 2 nodes.  is this normal?
22:32 jake[work] also the brick on that same node is showing offline.  so i'm thinking it is not normal
22:34 mattappe_ joined #gluster
22:35 davidbierce joined #gluster
22:35 davidbierce joined #gluster
22:36 kmai007 no it should be running on all nodes
22:37 kmai007 what
22:38 kmai007 all you need to do is
22:38 kmai007 run
22:38 kmai007 service glusterd start
22:38 kmai007 on all the gluster nodes
22:38 kmai007 glusterfsd service will list all your volumes if you ps -ef|grep glusterfsd
22:39 kmai007 but the main service is glusterd
22:39 jake[work] yep - i can see it's failing Extended attribute  trusted.glusterfs.volume-id is absent
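That error usually means glusterfsd was pointed at a brick directory without the volume-id extended attribute gluster stamped at create time, most often because the brick filesystem was not mounted when glusterd started. A hedged way to check, with jake[work]'s assumed brick path:

    mount | grep /export/brick1                 # is the brick filesystem actually mounted?
    getfattr -d -m . -e hex /export/brick1      # trusted.glusterfs.volume-id should be listed

If the filesystem simply wasn't mounted at boot, mounting it and restarting glusterd is normally enough; re-stamping the xattr by hand is possible but only worth doing once the cause is understood.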
22:39 kmai007 did you setup your peers?
22:40 kmai007 don't use the quick guide
22:40 jake[work] it was working.  just did a reboot
22:40 MichaelBode_ joined #gluster
22:40 kmai007 oh
22:40 kmai007 then the service is not chkconfig on
22:40 kmai007 chkconfig glusterd on
22:40 kmai007 it will startup on reboot
22:40 jake[work] the service tried starting
22:40 jake[work] it's this error that is shutting everything down
22:41 kmai007 did you create and mount the xfs filesystem on all gluster nodes?
22:41 jake[work] yep - mount went through
22:41 kmai007 strange....i'm not an expert either, but i've burned the midnight oil to get through my initial gluster creations
22:41 jake[work] haha - same boat
22:42 jake[work] i just want to get one time through where i can reboot and it still works
22:42 jake[work] then ready for production
22:42 kmai007 reboot is not going to get you through production, believe me
22:42 jake[work] i know :o)
22:43 jake[work] was it stable after you got through the initial config?
22:43 kmai007 i've built, rebuilt, gluster about 10+ times
22:43 kmai007 to get it into memory
22:43 mattapperson joined #gluster
22:44 kmai007 but 3.4.1 has been my best experience
22:44 jake[work] * got 5 more to go
22:44 kmai007 so are you on what linux distro?
22:44 kmai007 i'm on rhel6
22:44 jake[work] debian
22:44 jake[work] 7
22:44 kmai007 ok i'm not sure where the differences are, sorry i'm not much help
22:45 kmai007 you have a 2 node setup?
22:45 jake[work] yeah - i could build on rhel.  not that big a diff to me.  just want to know that it's stable
22:45 MichaelBode joined #gluster
22:45 jake[work] and yes - only 2 nodes for now.  want to test with 4
22:46 jake[work] has it gone down on you at all?
22:46 raar joined #gluster
22:53 kmai007 nope its pretty stable so far
22:53 kmai007 no issues with nodes crashing
22:53 kmai007 i have it in testing, with about 50 clients split evenly
22:53 jake[work] ok.  tnx!
22:54 kmai007 with specific gluster nodes mounted, it doesn't matter with fuse, but i try to keep it balanced
22:54 kmai007 i use it as a web content storage
22:55 jake[work] got it.  just the config has me a little nervous.  i reinstalled around 5 times so far and i swear i've followed every step
22:55 jake[work] it works... then it doesn't
22:56 jake[work] going to try a few more times and see if i can figure out what is tripping it up
23:00 kmai007 so does the extensive long logs tell you anything?
23:00 kmai007 if you did a "gluster peer status" what do you get?
23:00 kmai007 (disconnected) is what i'm guessing
23:02 jake[work] peer status was fine
23:02 jake[work] network wise everything looked good
23:02 jake[work] only problem was the fsd wasn't starting
23:02 jake[work] complained about this attribute
23:03 jake[work] i'm wiping again.  hopefully better results this time
23:03 kmai007 are u creating xfs or ext4
23:04 jake[work] xfs
23:05 mattappe_ joined #gluster
23:13 johnsonetti joined #gluster
23:14 mattappe_ joined #gluster
23:17 davidbierce joined #gluster
23:29 zerick joined #gluster
23:49 SpeeR joined #gluster
