
IRC log for #gluster, 2015-07-01


All times shown according to UTC.

Time Nick Message
00:01 Gill joined #gluster
00:08 plarsen joined #gluster
00:14 Pupeno joined #gluster
00:29 [o__o] joined #gluster
00:30 atrius` joined #gluster
00:33 codex joined #gluster
00:34 gildub_ joined #gluster
00:35 MrAbaddon joined #gluster
00:45 akay1 hi anoopcs, thanks for that. i've logged the bug as requested https://bugzilla.redhat.com/show_bug.cgi?id=1237375
00:45 glusterbot Bug 1237375: medium, unspecified, ---, rhs-bugs, NEW , Trashcan broken on Distribute-Replicate volume
00:54 siel joined #gluster
01:04 kovshenin joined #gluster
01:06 mckaymatt joined #gluster
01:11 PatNarciso has migrate-brick lost the 'start' option?  I see only force and commit now.
01:12 PatNarciso err, replace-brick.
01:18 smohan joined #gluster
01:21 aravindavk joined #gluster
01:28 gildub joined #gluster
01:37 nangthang joined #gluster
01:43 wkf joined #gluster
01:47 smohan joined #gluster
02:10 davidself joined #gluster
02:12 ndk joined #gluster
02:14 haomaiwang joined #gluster
02:22 pppp joined #gluster
02:27 plarsen joined #gluster
02:28 soumya_ joined #gluster
02:28 Pupeno joined #gluster
02:39 haomaiwa_ joined #gluster
02:53 anoopcs PatNarAFK, Yes, you are right. All other options are deprecated.
02:55 anoopcs akay1, Thanks for reporting the bug. We will look into the issue asap.
02:55 PatNarAFK anoopcs: gotcha.  thank you.
02:56 anoopcs PatNarAFK, np
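
For reference, a minimal sketch of the syntax that remains now that everything except 'commit force' is deprecated (volume name and brick paths are placeholders):

    # the only supported form: swap the old brick for a new, empty one in a single step
    gluster volume replace-brick myvol server1:/bricks/old server1:/bricks/new commit force
    # then watch self-heal populate the new brick
    gluster volume heal myvol info
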
03:00 kdhananjay joined #gluster
03:08 bharata-rao joined #gluster
03:10 haomaiwa_ joined #gluster
03:10 glusterbot News from newglusterbugs: [Bug 1237375] Trashcan broken on Distribute-Replicate volume <https://bugzilla.redhat.com/show_bug.cgi?id=1237375>
03:27 overclk joined #gluster
03:35 sakshi joined #gluster
03:37 TheSeven joined #gluster
03:38 atinm joined #gluster
03:38 RayTrace_ joined #gluster
03:49 RameshN joined #gluster
03:53 rejy joined #gluster
03:54 kanagaraj joined #gluster
03:57 vmallika joined #gluster
03:58 wkf_ joined #gluster
03:59 itisravi joined #gluster
03:59 nishanth joined #gluster
03:59 TheCthulhu1 joined #gluster
04:04 smohan joined #gluster
04:09 raghug joined #gluster
04:12 victori joined #gluster
04:17 Pupeno joined #gluster
04:22 gem joined #gluster
04:25 yazhini joined #gluster
04:28 suliba joined #gluster
04:33 deepakcs joined #gluster
04:34 nbalacha joined #gluster
04:40 glusterbot News from newglusterbugs: [Bug 1220173] SEEK_HOLE support (optimization) <https://bugzilla.redhat.com/show_bug.cgi?id=1220173>
04:40 Pupeno joined #gluster
04:42 shubhendu joined #gluster
04:44 gem joined #gluster
04:46 ramkrsna joined #gluster
04:48 ramteid joined #gluster
04:50 vimal joined #gluster
04:50 jiffin joined #gluster
04:58 ppai joined #gluster
05:00 meghanam joined #gluster
05:05 pppp joined #gluster
05:06 sakshi joined #gluster
05:12 smohan joined #gluster
05:14 ashiq joined #gluster
05:16 Manikandan_ joined #gluster
05:17 kshlm joined #gluster
05:18 ndarshan joined #gluster
05:19 dusmant joined #gluster
05:23 spandit joined #gluster
05:26 rjoseph joined #gluster
05:30 rafi joined #gluster
05:37 haomaiwang joined #gluster
05:38 anil joined #gluster
05:38 anil joined #gluster
05:40 RayTrace_ joined #gluster
05:40 anil joined #gluster
05:41 glusterbot News from newglusterbugs: [Bug 1238047] Crash in Quota enforcer <https://bugzilla.redhat.com/show_bug.cgi?id=1238047>
05:41 glusterbot News from newglusterbugs: [Bug 1238048] Crash in Quota enforcer <https://bugzilla.redhat.com/show_bug.cgi?id=1238048>
05:44 maveric_amitc_ joined #gluster
05:45 PatNarAFK joined #gluster
05:46 Bhaskarakiran joined #gluster
05:47 hagarth joined #gluster
05:49 vmallika joined #gluster
05:55 kdhananjay joined #gluster
05:55 gildub joined #gluster
06:00 kdhananjay joined #gluster
06:00 atalur joined #gluster
06:02 nbalacha joined #gluster
06:02 atalur joined #gluster
06:04 SOLDIERz joined #gluster
06:07 raghu joined #gluster
06:08 akay1 anoopcs, thanks for that
06:11 glusterbot News from newglusterbugs: [Bug 1238054] Consecutive volume start/stop operations when ganesha.enable is on, leads to errors <https://bugzilla.redhat.com/show_bug.cgi?id=1238054>
06:29 ramteid joined #gluster
06:31 saurabh joined #gluster
06:35 NTQ joined #gluster
06:40 aravindavk joined #gluster
06:40 RayTrace_ joined #gluster
06:45 gem joined #gluster
06:45 meghanam_ joined #gluster
06:46 RameshN joined #gluster
06:50 dusmant joined #gluster
06:54 gem_ joined #gluster
07:03 Trefex joined #gluster
07:04 spalai joined #gluster
07:08 rjoseph joined #gluster
07:11 dusmant joined #gluster
07:15 akay1 anyone seen brick log errors like these? [2015-07-01 07:11:06.442137] E [posix-helpers.c:1092:posix_handle_pair] 0-gv0-posix: /data/brick3/brick/aaaa.xxx: key:trusted.glusterfs.dht.linkto flags: 1 length:16 error:File exists
07:15 akay1 [2015-07-01 07:11:06.442246] E [posix.c:1216:posix_mknod] 0-gv0-posix: setting xattrs on /data/brick3/brick/aaaa.xxx failed (File exists)
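
One way to dig into those errors is to look at the DHT link-to attribute on the brick itself; a sketch, using the path from the log:

    # dump the trusted.* xattrs on the affected file, hex-encoded (run on the brick server, as root)
    getfattr -m . -d -e hex /data/brick3/brick/aaaa.xxx
    # DHT link files are normally zero-byte with the sticky bit set (mode ---------T)
    stat /data/brick3/brick/aaaa.xxx
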
07:16 vmallika joined #gluster
07:19 atinm joined #gluster
07:19 pppp joined #gluster
07:22 [Enrico] joined #gluster
07:27 harish_ joined #gluster
07:28 anrao joined #gluster
07:34 anti[Enrico] joined #gluster
07:36 harish_ joined #gluster
07:37 atalur joined #gluster
07:40 Pupeno joined #gluster
07:45 meghanam_ joined #gluster
07:47 lalatenduM I remember seeing a page documenting typical use cases for gluster, but cannot find it now
07:47 lalatenduM can someone send me the link?
07:49 dusmant joined #gluster
07:54 kovshenin joined #gluster
07:54 perpetualrabbit joined #gluster
07:58 perpetualrabbit Hello people, I hope someone can help me. I need to replace a brick in a 20 node disperse gluster cluster. I have no idea what the steps are. I guess something like: 1) remove faulty brick from cluster 2) repair brick and add it back to the cluster 3) do some kind of healing operation and/or health check on the cluster.
08:00 fsimonce joined #gluster
08:01 txomon|fon joined #gluster
08:01 txomon|fon joined #gluster
08:04 jtux joined #gluster
08:04 soumya joined #gluster
08:06 Slashman joined #gluster
08:13 MrAbaddon joined #gluster
08:16 perpetualrabbit I need some help with disperse volume brick replacement, please.
08:16 atinm joined #gluster
08:20 al joined #gluster
08:20 atalur joined #gluster
08:21 tanuck joined #gluster
08:23 lanning joined #gluster
08:24 RayTrace_ joined #gluster
08:27 dusmant joined #gluster
08:32 soumya joined #gluster
08:59 ppai joined #gluster
09:03 perpetualrabbit I need some help with disperse volume brick replacement, please.
09:04 xavih perpetualrabbit: what version of gluster are you using ?
09:04 rgustafs joined #gluster
09:05 Pupeno_ joined #gluster
09:08 pppp joined #gluster
09:09 [Enrico] joined #gluster
09:13 perpetualrabbit 3.7.2
09:14 perpetualrabbit xavih, so that is the latest.
09:15 xavih perpetualrabbit: disperse volumes do not support removal and addition of bricks. You need to use the 'gluster volume replace-brick' command
09:15 perpetualrabbit The situation is this: 20 node cluster, redundancy is 4, so 16 must always run.
09:15 perpetualrabbit Ok, but I tried that and it did not work.
09:16 xavih perpetualrabbit: what error do you get ?
09:17 perpetualrabbit searching for it...
09:18 shubhendu joined #gluster
09:18 nsoffer joined #gluster
09:18 dusmant joined #gluster
09:18 nishanth joined #gluster
09:19 aravindavk joined #gluster
09:19 ndarshan joined #gluster
09:26 perpetualrabbit gluster> volume replace-brick ectest maris022:/export/gluster/brick maris022:/export/gluster/brick commit force
09:26 perpetualrabbit Then I get various errors:
09:26 perpetualrabbit volume replace-brick: failed: Another transaction could be in progress. Please try again after sometime.
09:28 perpetualrabbit I stopped the daemons on maris022 first, then emptied the /export/gluster/brick directory, and started the daemons again. What should I do then?
09:28 Norky joined #gluster
09:28 perpetualrabbit This exercise is to simulate a downed brick, and trying to recover from it, in a disperse volume.
09:29 ramteid joined #gluster
09:29 xavih perpetualrabbit: you cannot replace a brick with itself
09:30 perpetualrabbit So how does one replace a disk then?
09:31 perpetualrabbit It should be possible to take down a node with a failing disk (==brick in my case), replace the disk, and make the disperse volume complete again, right?
09:35 xavih perpetualrabbit: I'm not sure if there's another possibility. If the bad brick completely died, you need to replace it with another different one. If you reuse the same server, you should place the brick in a different directory
09:36 xavih perpetualrabbit: You can also force the use of the same directory, but I think this can only be done manually touching some extended attributes of the new brick
09:37 xavih perpetualrabbit: for example you could do this: gluster volume replace-brick ectest maris022:/export/gluster/brick maris022:/export/gluster/brick_new commit force
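
Put together, the replacement flow xavih outlines might look like this (a sketch using the volume and paths from this discussion, not a verified procedure):

    # on maris022: create a fresh brick directory on the repaired/replaced disk
    mkdir /export/gluster/brick_new
    # from any server in the pool: swap the dead brick for the new one
    gluster volume replace-brick ectest maris022:/export/gluster/brick \
        maris022:/export/gluster/brick_new commit force
    # watch the disperse heal rebuild the new brick
    gluster volume heal ectest info
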
09:38 perpetualrabbit Ah, thanks. I read all of the documentation, but it is rather sparse on the disperse volumes
09:40 perpetualrabbit The outcome of this exercise I'm doing is going to be some documentation. For the university faculty I work for, I'm trying out gluster with disperse volumes. I need to test it, and particularly how to recover from various faults.
09:41 xavih perpetualrabbit: that's great :)
09:41 glusterbot News from newglusterbugs: [Bug 1238135] Initialize daemons on demand <https://bugzilla.redhat.com/show_bug.cgi?id=1238135>
09:43 perpetualrabbit Yeah, but it is tricky. Between commands that don't quite work and mountains of output that I don't understand, it is hard to see what (if anything) is happening.
09:44 xavih perpetualrabbit: btw, there's an important bug in 3.7.2 that prevents self-heal from succeeding (it apparently succeeds, but it doesn't)
09:45 xavih perpetualrabbit: you should try the current release-3.7 branch, a nightly build, or wait until 3.7.3 is released :(
09:45 perpetualrabbit ah, that is unfortunate...for instance when I say this: gluster volume heal ectest help
09:45 perpetualrabbit I get this:
09:45 perpetualrabbit Brick maris003.lorentz.leidenuniv.nl:/export/gluster/brick/
09:45 perpetualrabbit Number of entries: 0
09:45 perpetualrabbit Brick maris004.lorentz.leidenuniv.nl:/export/gluster/brick/
09:45 perpetualrabbit <gfid:9dcbf515-341d-4fe3-a99c-ed6bb76b8f5c>
09:45 perpetualrabbit Number of entries: 1
09:45 perpetualrabbit Brick maris005.lorentz.leidenuniv.nl:/export/gluster/brick/
09:46 perpetualrabbit <gfid:9dcbf515-341d-4fe3-a99c-ed6bb76b8f5c>
09:46 perpetualrabbit Number of entries: 1
09:47 perpetualrabbit So some bricks have no entries (maris003). I don't even know what is meant by 'entries', but it seems not right. Also no clue how to fix it.
09:49 xavih perpetualrabbit: 'entries' are files and directories known to need to be healed (or being healed)
09:50 perpetualrabbit ah ok. So the ones with 1 entries need to be healed, and the others do not. Is there a way to find out more?
09:51 perpetualrabbit ah ok. So the ones with 1 entries need to be healed, and the others do not. Is there a way to find out more?
09:51 perpetualrabbit oops, I pressed up arrow and enter in the wrong window
09:52 xavih perpetualrabbit: I'm not an expert on the self-heal commands, so I can't say for sure whether this means the file needs healing on that brick or simply needs to be checked on all bricks to see if healing is needed (I think it's more the latter)
09:52 xavih perpetualrabbit: there are also some problems with the self-heal daemon in 3.7.2
09:53 xavih perpetualrabbit: it's not always triggered for all files
09:53 xavih perpetualrabbit: are there many files in the volume ?
09:54 xavih perpetualrabbit: 3.7.3 will work much better with all these issues...
09:56 perpetualrabbit Well, I can tell you that I just tested with maris022 (which is the 20th node, counting from maris003). I downed the gluster daemons, removed all the files in the brick directory, including the dotfiles/directories. This to simulate a disk replacement. I could have remade the xfs filesystem too, but I didn't. Then I restarted the daemons. After a while, I see that the brick on maris022 is filling up again. So that seems
09:56 perpetualrabbit hopeful, right?
09:57 xavih perpetualrabbit: it seems to be working then
09:58 xavih perpetualrabbit: anyway, as I said before, there's an important bug in self-heal on 3.7.2. You should check it
09:59 uebera|| joined #gluster
10:00 perpetualrabbit Well yes and no. It has transferred about half a gig so far, but then it stops for a long time and starts again. At this speed it will take weeks to repair.
10:01 perpetualrabbit Probably the bug you mentioned, I suppose. I'll try to find out what it is about.
10:01 Pupeno joined #gluster
10:01 ctria joined #gluster
10:02 perpetualrabbit I have this in my /etc/yum.repos.d directories on all nodes:
10:02 perpetualrabbit baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/Fedora/fedora-$releasever/$basearch/
10:02 perpetualrabbit So I suppose that is the current release branch, right?
10:05 xavih perpetualrabbit: yes, the latest version released is 3.7.2. 3.7.3 will contain the fix
10:05 perpetualrabbit right. So I wait
10:08 surabhi_ joined #gluster
10:08 rjoseph joined #gluster
10:16 MrAbaddon joined #gluster
10:18 ndarshan joined #gluster
10:20 nishanth joined #gluster
10:20 LebedevRI joined #gluster
10:21 soumya joined #gluster
10:21 shubhendu joined #gluster
10:27 curratore joined #gluster
10:28 an joined #gluster
10:30 dusmant joined #gluster
10:38 neofob joined #gluster
10:41 neofob joined #gluster
10:43 vovcia hi o/ I'm searching for assistance with the nfs client
10:43 vovcia I can't mount a share, gluster 3.7 on centos 7
10:49 MrAbaddon joined #gluster
10:54 ppai joined #gluster
10:57 ninkotech joined #gluster
10:57 ninkotech_ joined #gluster
10:58 gildub joined #gluster
11:12 glusterbot News from newglusterbugs: [Bug 1238181] cli : "Usage:"  of gluster commands show replica in case of disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1238181>
11:12 abyss_ joined #gluster
11:15 soumya joined #gluster
11:15 ndevos vovcia: can you ,,(paste) the output of 'mount -vvv -t nfs ....' ?
11:15 glusterbot vovcia: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
11:23 vovcia ndevos: portmap connection refused https://bpaste.net/show/e7d315a260cd
11:25 kdhananjay1 joined #gluster
11:25 ndevos vovcia: portmap (the rpcbind service) should be enabled and allow connections on port 111 (udp and tcp)
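
On CentOS 7 that roughly translates to the following (a sketch assuming firewalld; with plain iptables the equivalent rules for 111/tcp and 111/udp would be needed instead):

    systemctl enable rpcbind && systemctl start rpcbind
    firewall-cmd --permanent --add-port=111/tcp
    firewall-cmd --permanent --add-port=111/udp
    firewall-cmd --reload
    rpcinfo -p      # confirm the portmapper is answering and shows registrations
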
11:26 vovcia hmm it seems rpcbind is broken in centos 7
11:27 vovcia yep permission problem with /var/lib/rpcbind
11:28 ndevos selinux? maybe needs a restorecon?
11:29 ira joined #gluster
11:32 rafi1 joined #gluster
11:35 kdhananjay joined #gluster
11:35 vovcia hmm, false alarm, the permissions are ok
11:36 vovcia strange, after restarting rpcbind and glusterd the error is now 'program not registered' (pasting..)
11:36 vovcia ndevos: https://bpaste.net/show/b51f777fb5f6
11:37 vovcia oh and finally its mounted
11:39 vovcia ndevos: now i cant mkdir: mkdir: cannot create directory ‘/nfs/test’: Remote I/O error
11:42 glusterbot News from newglusterbugs: [Bug 1238188] Not able to recover the corrupted file on Replica volume <https://bugzilla.redhat.com/show_bug.cgi?id=1238188>
11:50 abyss_ joined #gluster
11:52 ndevos vovcia: check the /var/log/glusterfs/nfs.log on the storage server, maybe that contains some helpful info
11:54 meghanam joined #gluster
12:01 mator ndevos, so far I can't reproduce the nfs-related repeated messages... going to post to the bugzilla ticket soon
12:04 ndevos mator: sure, I'll find a way to keep me busy ;-)
12:05 kkeithley Gluster Community Meeting _now_ in #gluster-meeting
12:05 vovcia ndevos: in nfs.log nothing relevant except for [2015-07-01 11:37:28.247505] E [rpcsvc.c:565:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
12:06 rafi joined #gluster
12:06 anmolb joined #gluster
12:09 rwheeler joined #gluster
12:09 jrm16020 joined #gluster
12:10 jtux joined #gluster
12:16 bene2 joined #gluster
12:20 an joined #gluster
12:27 SOLDIERz joined #gluster
12:31 unclemarc joined #gluster
12:34 mribeirodantas joined #gluster
12:37 [Enrico] joined #gluster
13:05 wkf joined #gluster
13:10 klaxa|work joined #gluster
13:11 vovcia ndevos: I think there is an issue with registering in portmap, look at this command sequence: https://bpaste.net/show/6c0ee5a8e22e
13:14 raghug joined #gluster
13:14 ndevos vovcia: you should see some failed portmap registrations in the nfs.log...
13:15 ndevos vovcia: maybe bug 1181779 is what you are hitting? the last lines in the 1st comment might have a workaround
13:15 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1181779 unspecified, unspecified, rc, steved, ON_QA , rpcbind prevents Gluster/NFS from registering itself after a restart/reboot
13:16 vovcia ndevos: yes seems like problem with rpcbind -w
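
If that bug is the culprit, the symptom is Gluster/NFS failing to re-register with a warm-restarted rpcbind; a hedged check follows, not the exact workaround from the bug report:

    # see what is currently registered with the portmapper
    rpcinfo -p
    # restart rpcbind first, then glusterd, so Gluster/NFS registers itself again
    systemctl restart rpcbind
    systemctl restart glusterd
    rpcinfo -p | grep -E 'nfs|mountd'   # the NFS and MOUNT programs should now be listed
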
13:22 kbyrne joined #gluster
13:23 kdhananjay joined #gluster
13:25 hamiller joined #gluster
13:26 surabhi_ joined #gluster
13:26 wkf joined #gluster
13:27 vovcia ndevos: I'm still hitting a strange Remote I/O error - when I set nfs.acl to off and then back to on it magically works
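
The toggle vovcia describes would be the nfs.acl volume option (volume name assumed):

    gluster volume set myvol nfs.acl off
    gluster volume set myvol nfs.acl on
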
13:29 sberry joined #gluster
13:29 sakshi joined #gluster
13:31 ndevos vovcia: hmm, can you file a bug for that, include the nfs.log of a clean boot, and capture a network trace from the mount process and onwards?
13:31 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
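
A network trace for such a report could be captured on the client with something along these lines (interface and port list are assumptions; Gluster/NFS also uses separate MOUNT/NLM ports):

    # capture portmapper and NFS traffic while reproducing the mount and the failing mkdir
    tcpdump -i any -s 0 -w /tmp/gluster-nfs.pcap port 111 or port 2049 or port 38465 &
    mount -t nfs -o vers=3 server:/myvol /nfs      # Gluster/NFS only speaks NFSv3
    mkdir /nfs/test                                # reproduce the Remote I/O error
    kill %1                                        # stop the capture
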
13:33 chirino joined #gluster
13:40 cyberswat joined #gluster
13:42 julim joined #gluster
13:43 vovcia ndevos: I think it's centos related, because a few weeks ago the same setup was working flawlessly
13:44 vovcia but I can :))
13:48 ndevos vovcia: yes, I think it would be good to look into the details, maybe we need to fix something in RHEL ;-)
13:49 andras joined #gluster
13:51 andras Hi Gluster experts! I have this in the logs: [dht-layout.c:640:dht_layout_normalize] 0-gluster0-dht: found anomalies in /. holes=1 overlaps=0  - and cannot mount clients - I have a remove-brick operation ongoing. Is this related? Is it normal?
13:52 theron joined #gluster
13:59 wushudoin joined #gluster
14:00 bennyturns joined #gluster
14:02 kovshenin joined #gluster
14:03 Fidelix joined #gluster
14:04 Fidelix Hello folks. Is it possible to span a gluster filesystem over multiple servers? As in, 1TB spanned across 2 servers would only take 500GB of space on each server?
14:05 shyam joined #gluster
14:05 mator Fidelix, install 2+ servers?
14:06 Fidelix mator: ok... but you're saying it's possible with gluster? That's great.
14:07 Leildin you could span it over 4 servers of 250G
14:08 Leildin you can have a number of servers holding a number of bricks
14:08 Leildin it's all up to what you want/need
14:09 Fidelix Alright... that's fantastic. Are there any known issues when the files of the FS in question are usually 500MB-1GB in size?
14:10 mator probably more problems with small files, than with the bigger ones
14:10 plarsen joined #gluster
14:14 fission_ joined #gluster
14:15 fission_ hey everyone. Is there a way to add basic auth to the gluster command and glusterfs?? Haven't found anything in the docs.
14:16 aaronott joined #gluster
14:23 bene2 joined #gluster
14:24 shyam joined #gluster
14:25 dusmant joined #gluster
14:27 Fidelix Is there a way to convert an existing ext4 filesystem to glusterFS?
14:33 kanagaraj joined #gluster
14:35 Leildin not sure that's possible Fidelix but the better people here might know
14:35 Leildin I've only used xfs for my bricks
14:36 Leildin it all comes down to whether the filesystem has extended attributes or not
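
There is no in-place conversion; a brick is just a directory on a filesystem that supports extended attributes, so the usual route is to build a new brick (XFS is the common recommendation) and copy the existing data in through a Gluster mount. A sketch with assumed device, host and volume names:

    # format a dedicated disk as the brick; 512-byte inodes leave room for gluster's xattrs
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/brick1 && mount /dev/sdb1 /bricks/brick1
    gluster volume create myvol server1:/bricks/brick1/data
    gluster volume start myvol
    # copy the old ext4 data in via a client mount, never directly into the brick directory
    mount -t glusterfs server1:/myvol /mnt/gluster
    cp -a /old/ext4/data/. /mnt/gluster/
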
14:54 ilbot3 joined #gluster
14:54 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
14:55 jeroen_ joined #gluster
14:55 klaxa|work joined #gluster
14:57 jeroen_ when I try to install glusterfs-server on xubuntu 14.04, it always fails when it tries to start the glusterd service, does anybody have any idea why this happens?
14:58 coredump joined #gluster
14:59 ndevos jeroen_: maybe you can ,,(paste) some logs or output?
14:59 glusterbot jeroen_: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
15:02 lezo joined #gluster
15:06 jeroen_ here is the output when I try to install glusterfs-server: http://paste.ubuntu.com/11805079/
15:07 ndevos jeroen_: do you have anything in /var/log/glusterfs/etc-glusterd*.log ?
15:07 and` joined #gluster
15:08 andras hi, I cannot mount glusterFS on any of the servers. I found nothing special in the logs, mount just hangs. Any ideas what to check? volume status says all bricks are online. A remove-brick is ongoing.    http://pastebin.com/JCdeTAnu
15:08 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:09 ndevos andras: maybe firewall?
15:09 andras ndevos: thanks will check. I have no fw, just iptables
15:10 ndevos andras: well, iptables is a firewall :)
15:11 jeroen_ andres: this is the content of the etc-glusterfs-glusterd.vol.log : http://paste.ubuntu.com/11805106/
15:13 glusterbot News from newglusterbugs: [Bug 1238318] NFS mount throws Remote I/O error <https://bugzilla.redhat.com/show_bug.cgi?id=1238318>
15:13 vovcia glusterbot: yes thats my bug :)
15:13 ndevos jeroen_: if you installed glusterfs-server as a package, it should not point to /usr/local/lib/glusterfs/3.7.2/xlator/mgmt/glusterd.so - the /local/ in there is wrong
15:14 ndevos jeroen_: that might be a packaging bug, did you use the packages from the ,,(ppa)?
15:14 glusterbot jeroen_: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
15:15 curratore_ joined #gluster
15:16 ndevos semiosis, kkeithley: that ^^ still points to the 3.4 version, I thought you guys provided the 3.7 ones too?
15:16 jeroen_ I did use them, but at first it did not want to add the ppa, so in the meantime I tried to install it from source, which did not go well either. Then I managed to get the ppa working and afterwards I installed via the package
15:17 ndevos jeroen_: I think you have partially installed the packages from source, and partially from the ppa, and I guess there is a conflict there
15:18 liewegas joined #gluster
15:18 jeroen_ Ok, so I should just remove the glusterfs folder I have now and then reinstall via the ppa?
15:19 tdasilva joined #gluster
15:19 andras service iptables stop , gluster restarted on the server, still can't connect to any servers. Can this be related to the ongoing remove-brick?
15:19 kkeithley @forget ppa
15:19 glusterbot kkeithley: The operation succeeded.
15:20 ndevos jeroen_: you should be able to do a 'make uninstall' from the directory where you did the 'make install'
15:20 aravindavk joined #gluster
15:20 jeroen_ Ok, I will try that :)
15:20 kkeithley learn ppa as The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
15:21 kkeithley @learn ppa as The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
15:21 glusterbot kkeithley: The operation succeeded.
15:21 kkeithley @(,,ppa)
15:21 ndevos jeroen_: but, I'm not 100% sure that cleans out everything... also uninstall the packages - and reinstall from the packages only
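
A hedged cleanup sequence for that kind of mixed install (package and PPA names as discussed above; the source directory path is a placeholder):

    # remove the build that went into /usr/local
    cd /path/to/glusterfs-3.7.2-source && sudo make uninstall
    # purge the half-configured packages, then reinstall from the 3.7 PPA only
    sudo apt-get purge glusterfs-server glusterfs-client glusterfs-common
    sudo add-apt-repository ppa:gluster/glusterfs-3.7
    sudo apt-get update && sudo apt-get install glusterfs-server
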
15:21 kkeithley ,,(ppa)
15:21 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
15:21 ndevos ~ppa | kkeithley
15:21 glusterbot kkeithley: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
15:21 ndevos kkeithley: uh, still 3.4?
15:22 soumya joined #gluster
15:22 kkeithley it's still in the ppa for anyone that wants to use it
15:23 kkeithley @forget ppa
15:23 glusterbot kkeithley: The operation succeeded.
15:23 bene2 joined #gluster
15:23 kkeithley @learn ppa as The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
15:23 glusterbot kkeithley: The operation succeeded.
15:23 kkeithley ,,(ppa)
15:23 glusterbot The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
15:24 andras ndevos: iptables stopped, still no mount :-( I am wondering if it is related to remove-brick. Could it be? Is it safe to stop remove-brick op?
15:24 jeroen_ ndevos: it works now, thank you for your help! :)
15:25 B21956 joined #gluster
15:25 ndevos andras: dont know much about remove-brick... no idea how that actually is done
15:25 ndevos jeroen_: ah, nice to hear!
15:26 psilvao1 joined #gluster
15:26 ndevos andras: you should see something in the logs for the mountpoint (on the client), /var/log/glusterfs/path-to-mountpoint.log
15:27 andras ndevos: It is strange, I see nothing other than what I pasted earlier http://pastebin.com/JCdeTAnu I have no other idea why I can't mount anymore. Only thing changed is that I started remove-brick.
15:27 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:28 andras ndevos: I am watching remove-brick do its job. slowly..slowly
15:28 Pupeno joined #gluster
15:29 ndevos andras: yeah, that does not look too bad... maybe the remove-brick process has a log somewhere that has an error?
15:31 spalai left #gluster
15:31 PatNarciso joined #gluster
15:31 andras ndevos: thanks, will check those too
15:32 PatNarciso good morning all.  hi glusterbot.
15:32 PatNarciso and good afternoon to the Netherlands.
15:33 * ndevos _o/ PatNarciso
15:39 side_control joined #gluster
15:39 PatNarciso Need input please on an ideal single-server setup.  8GB ram.  8x8TB (5400rpm) drives.  Was thinking jbod --> md-raid6 --> single xfs partition --> single brick.  use case: video storage and editing.  also considering ZFS, as I'm concerned about the video editing IO on md-raid6 and believe jbod:ZFS may be > md-raid6:XFS.
15:46 PatNarciso I'm also open minded, and curious, what the IO would be like with an 8x brick disperse volume.
15:49 an joined #gluster
15:51 DV joined #gluster
15:55 PatNarciso I question what the distributed disperse IO would be like on a single server, single volume with 8 bricks.   Is it wise for future server expansion and a possible md/lvm alternative, or am I hanging myself with IO?
15:55 kovshenin joined #gluster
15:56 * PatNarciso would setup a vm to do basic benchmark testing; yet I feel the physical device bottleneck would be masked.
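
For comparison, an 8-brick dispersed volume on one box would be created roughly like this (a sketch; 'mediavol' and the brick paths are placeholders, disperse 8 / redundancy 2 mirrors raid6's two-disk redundancy, and 'force' may be required since every brick sits on the same host):

    gluster volume create mediavol disperse 8 redundancy 2 \
        server1:/bricks/b1 server1:/bricks/b2 server1:/bricks/b3 server1:/bricks/b4 \
        server1:/bricks/b5 server1:/bricks/b6 server1:/bricks/b7 server1:/bricks/b8 force
    gluster volume start mediavol
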
15:58 deepakcs joined #gluster
16:04 CyrilPeponnet joined #gluster
16:04 CyrilPeponnet joined #gluster
16:04 CyrilPeponnet joined #gluster
16:05 jobewan joined #gluster
16:05 CyrilPeponnet joined #gluster
16:06 CyrilPeponnet joined #gluster
16:06 CyrilPeponnet joined #gluster
16:07 cholcombe joined #gluster
16:09 calavera joined #gluster
16:13 CyrilPeponnet joined #gluster
16:20 cyberswat joined #gluster
16:36 NTQ joined #gluster
16:49 calavera joined #gluster
16:49 vmallika joined #gluster
16:53 jiffin joined #gluster
16:55 jmarley joined #gluster
16:56 pppp joined #gluster
17:11 jiffin joined #gluster
17:11 Rapture joined #gluster
17:14 anil joined #gluster
17:14 vmallika joined #gluster
17:18 soumya joined #gluster
17:21 vmallika joined #gluster
17:29 vmallika1 joined #gluster
17:30 firemanxbr joined #gluster
17:32 hagarth joined #gluster
17:32 MrAbaddon joined #gluster
17:35 jiffin joined #gluster
17:35 hagarth left #gluster
17:35 hagarth joined #gluster
17:36 vimal joined #gluster
17:39 dthrvr joined #gluster
17:46 jvandewege_ joined #gluster
17:46 CyrilP joined #gluster
17:46 CyrilP joined #gluster
17:48 CyrilPeponnet joined #gluster
17:51 jiffin joined #gluster
17:58 calavera joined #gluster
17:59 pppp joined #gluster
18:08 rafi joined #gluster
18:10 rafi joined #gluster
18:24 chirino joined #gluster
19:03 Rapture joined #gluster
19:09 bennyturns joined #gluster
19:18 shaunm_ joined #gluster
19:20 MrAbaddon joined #gluster
19:22 veonik joined #gluster
19:27 lexi2 joined #gluster
19:38 calavera joined #gluster
19:39 mckaymatt joined #gluster
19:45 corretico joined #gluster
19:50 Scub joined #gluster
19:50 Scub afternoon gents
19:53 Scub having a spot of trouble getting SSL enabled for management connections
19:54 Scub what im seeing seems to imply that the wrong SSL version is being used to communicate
19:54 Scub [2015-07-01 19:47:23.071924] E [socket.c:2756:socket_server_event_handler] 0-socket.management: server setup failed
19:54 Scub er
19:54 Scub https://gist.github.com/anonymous/973c77204695c4e735af
20:03 alexmipego joined #gluster
20:03 alexmipego hi
20:03 glusterbot alexmipego: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
20:03 hagarth Scub: this might help - https://kshlm.in/network-encryption-in-glusterfs/
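
For reference, the gist of that write-up (a sketch only; the file paths are the defaults glusterd looks for, 'myvol' and the CN list are placeholders, and the CNs must match what auth.ssl-allow expects):

    # each node needs its key, its signed certificate, and a CA bundle that can verify its peers:
    #   /etc/ssl/glusterfs.key  /etc/ssl/glusterfs.pem  /etc/ssl/glusterfs.ca
    touch /var/lib/glusterd/secure-access   # enables TLS on the management (glusterd) connections
    systemctl restart glusterd
    # optional: encrypt the I/O path and restrict clients by certificate CN
    gluster volume set myvol client.ssl on
    gluster volume set myvol server.ssl on
    gluster volume set myvol auth.ssl-allow 'server1-cn,server2-cn,client1-cn'
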
20:04 alexmipego I'm researching gluster and I can't find a clear answer or docs for this. If I have a 2-replica cluster and 1 goes down, it seems the clients still work. But if both go down and 1 of them comes back up I can't connect the clients…
20:05 gh5046 joined #gluster
20:05 gh5046 Howdy folks
20:06 gh5046 The glusterfs-epel.repo files for 3.6 point to LATEST, which is 3.7
20:06 gh5046 For example
20:06 gh5046 http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.3/CentOS/glusterfs-epel.repo
20:06 gh5046 And
20:06 gh5046 http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/CentOS/glusterfs-epel.repo
20:06 gh5046 It was fixed for 3.5 once upon a time, could it be fixed for 3.6 too?
20:10 DV joined #gluster
20:11 alexandregomes joined #gluster
20:11 Scub hagarth: yeah, that's what I'm going through at the moment
20:12 Scub the common-names of the certs match what has been set with auth.ssl-allow
20:12 Scub it looks as though the certs are TLS1.2
20:12 levlaz joined #gluster
20:13 hagarth Scub: if you can't get it going, drop a note on gluster-users & kaushal will respond back with help
20:13 hagarth alexmipego: check your client's log file for more details
20:13 Scub fair enough :)
20:13 Scub thanks man <3
20:14 levlaz hagarth: thank you :)
20:14 levlaz I am working with Scub on this, we are hitting our heads against a wall right now.
20:15 hagarth levlaz, Scub: good luck with this!
20:15 Scub cheers! :3
20:16 obnox joined #gluster
20:21 gh5046 left #gluster
20:56 TheSeven assuming I have a replica3 volume in a gluster cluster with 3 nodes
20:57 TheSeven what's the correct way to reboot all 3 nodes while keeping the volume available?
20:57 TheSeven (one after another of course, but what do I have to do in between to ensure that everything is synced up nicely?)
21:06 shaunm_ joined #gluster
21:17 hagarth TheSeven: perform self-healing before successive reboots
21:17 calavera joined #gluster
21:33 wkf joined #gluster
21:35 TheSeven hagarth: will that happen automatically or do I have to issue any commands? if so, which ones?
21:36 TheSeven also, what exactly do I have to do to check completion?
21:37 TheSeven and how long will that process usually take? is it quick (e.g. just comparing some sequence numbers) or will it basically diff the whole volume?
21:37 TheSeven what are the consequences if e.g. the second node is rebooted too soon after the first?
21:38 TheSeven would that just cause service outages (because it knows that there is a problem), or will it cause irrecoverable problems? (e.g. split brain)
21:38 TheSeven (and in this case I'm considering anything that can't be reliably fixed by an automated process an irrecoverable problem)
21:44 kovsheni_ joined #gluster
21:51 badone joined #gluster
21:55 hagarth TheSeven: there is a pro-active self-heal daemon which has enough hints to go about self-healing after a node comes online
21:56 sysconfig joined #gluster
21:56 hagarth you just need to give it enough time to heal/re-synchronize. If you are using 3.6 or later, gluster volume heal <volname> would give more information about the backlog
21:56 hagarth s/heal <volname>/heal <volname> info/
21:56 glusterbot What hagarth meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
21:58 hagarth TheSeven: there are policies being built in 3.7 to resolve split-brain automatically. hopefully it will not be an irrecoverable problem for much longer :)
22:01 TheSeven hagarth: depends on whether you have any indication which copy is the "right" one, and if e.g. different files that belong to the same thing (e.g. database) might have ended up in inconsistent states
22:01 hagarth TheSeven: of course, yes. It would need an administrative policy to pick the right copy
22:01 TheSeven I guess I should avoid that happening in the first place instead of attempting to fix it ;)
22:02 TheSeven how long does the pro-active self heal take until it realizes that there's something to do?
22:02 hagarth TheSeven: absolutely, waiting for heal <volname> info to have no entries listed would be a bare minimum requirement before successive reboots.
22:02 TheSeven i.e. if I run that heal info command immediately after the node comes up, will it already show that there's pending work?
22:03 hagarth TheSeven: interval of time depends on the amount of data to be synchronized, heal-daemon looks up an index to determine what needs to be healed and hence identifying objects that need healing happens rather quickly. after that you are bound by the disk, network and other latencies.
22:04 TheSeven otherwise I need to take some precautions that whatever does the reboots (be it a human or a script) doesn't misinterpret that as "everything's finished already"
22:04 hagarth TheSeven: yes, usually that would be the case.
22:06 TheSeven whatever "usually" means... if that isn't reliable, I should probably add some precautions. (issuing a heal command manually? just waiting for a minute? ...)
22:06 TheSeven I guess there's some window between the start of the heal operation and the first files that need to be healed appearing in that list
22:06 hagarth TheSeven: I haven't seen any other reports indicating otherwise.
22:06 TheSeven so how do I tell if a scan for files to be healed is going on, and just hasn't found any files to heal yet?
22:07 hagarth TheSeven: the index is the source of truth for both self-heal daemon and the command
22:08 TheSeven ok so it immediately knows what needs to be healed after coming up? i.e. the heal info command will synchronously compare some meta info with the other nodes?
22:08 TheSeven (and not just show an in-memory list of the local daemon that might not have been populated yet)
22:09 klaas_ joined #gluster
22:11 hagarth both query the index independently and perform necessary actions
22:11 hagarth heal-daemon goes about healing and the command prints the result out
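
So a rolling reboot could be scripted along these lines (a sketch; the volume name is assumed, and the grep relies on the 'Number of entries:' lines shown earlier in this log):

    # after rebooting a node and waiting for it to rejoin, block until the heal backlog is empty
    while gluster volume heal myvol info | grep -q '^Number of entries: [1-9]'; do
        sleep 30
    done
    # only then move on to rebooting the next node
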
22:13 jiqiren joined #gluster
22:14 glusterbot News from newglusterbugs: [Bug 1238446] glfs_stat returns bad device ID <https://bugzilla.redhat.com/show_bug.cgi?id=1238446>
22:30 aaronott joined #gluster
22:33 gildub joined #gluster
23:00 bit4man joined #gluster
23:01 nsoffer joined #gluster
23:02 gildub joined #gluster
23:05 theron_ joined #gluster
23:07 nishanth joined #gluster
23:26 jrm16020 joined #gluster
23:26 gildub joined #gluster
23:27 Rapture joined #gluster
23:42 R0ok_ joined #gluster
23:46 jmarley joined #gluster
23:50 an joined #gluster
