
IRC log for #gluster, 2018-01-15


All times shown according to UTC.

Time Nick Message
00:06 msvbhat joined #gluster
00:15 gospod3 joined #gluster
00:19 gospod3 joined #gluster
00:29 ronrib joined #gluster
00:37 john51 joined #gluster
00:51 inodb joined #gluster
01:20 gospod3 joined #gluster
01:44 atinm joined #gluster
01:46 atinm_ joined #gluster
01:50 Shu6h3ndu joined #gluster
02:02 gospod2 joined #gluster
02:02 DV joined #gluster
02:22 ppai joined #gluster
02:26 gospod2 joined #gluster
02:30 armyriad joined #gluster
02:55 nbalacha joined #gluster
02:57 ilbot3 joined #gluster
02:57 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:22 susant joined #gluster
03:31 gospod2 joined #gluster
03:47 ndarshan joined #gluster
03:51 psony joined #gluster
03:56 mbukatov joined #gluster
04:07 sac` joined #gluster
04:08 msvbhat joined #gluster
04:20 Prasad joined #gluster
04:25 gyadav joined #gluster
04:30 skumar joined #gluster
04:33 Shu6h3ndu joined #gluster
04:34 itisravi joined #gluster
04:35 kotreshhr joined #gluster
04:36 sankarshan joined #gluster
04:37 gospod2 joined #gluster
04:37 rwheeler joined #gluster
04:40 ndarshan joined #gluster
04:43 hgowtham joined #gluster
04:46 kdhananjay joined #gluster
04:48 kramdoss_ joined #gluster
04:54 rastar joined #gluster
05:00 gyadav_ joined #gluster
05:07 jiffin joined #gluster
05:14 Vishnu_ joined #gluster
05:15 kotreshhr left #gluster
05:18 msvbhat joined #gluster
05:19 shyu joined #gluster
05:29 apandey joined #gluster
05:34 varshar joined #gluster
05:34 Humble joined #gluster
05:36 karthik_us joined #gluster
05:42 gospod2 joined #gluster
05:43 poornima joined #gluster
06:12 ndarshan joined #gluster
06:14 msvbhat joined #gluster
06:19 sac`` joined #gluster
06:19 ekarlso joined #gluster
06:19 ekarlso Hi
06:19 glusterbot ekarlso: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an
06:19 ekarlso Shouldn't glusterfs volumes self-heal if all bricks are up ?
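Self-heal is normally automatic when all bricks are up: the self-heal daemon (shd) on each node sweeps the heal indices on a periodic cycle (cluster.heal-timeout, 600 seconds by default) and repairs whatever is flagged. A minimal sketch for checking it, with a placeholder volume name:

    # is a Self-heal Daemon listed as online for every node?
    gluster volume status myvol

    # which files are still pending heal?
    gluster volume heal myvol info

    # ask the shd to start a sweep now instead of waiting for the next cycle
    gluster volume heal myvol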
06:22 Shu6h3ndu joined #gluster
06:33 vbellur joined #gluster
06:34 xavih joined #gluster
06:43 sunnyk joined #gluster
06:48 gospod2 joined #gluster
06:53 aravindavk joined #gluster
06:54 omark1 joined #gluster
06:59 voidm joined #gluster
07:10 ThHirsch joined #gluster
07:11 Acinonyx joined #gluster
07:11 kramdoss_ joined #gluster
07:17 Kassandry joined #gluster
07:22 jtux joined #gluster
07:24 [diablo] joined #gluster
07:26 poornima joined #gluster
07:39 msvbhat joined #gluster
07:40 Humble joined #gluster
07:43 jkroon__ joined #gluster
07:44 omark1 joined #gluster
07:48 susant joined #gluster
07:50 ivan_rossi joined #gluster
07:53 gospod2 joined #gluster
08:00 social joined #gluster
08:18 ivan_rossi left #gluster
08:20 marbu joined #gluster
08:21 apandey joined #gluster
08:31 msvbhat joined #gluster
08:40 Klas I have a quite large georeplica with loads of small files running, and needed to restart it from scratch. It seems to take a lot of time to catch up the last few percent; is there any good way to check whether it is actually working on something or not?
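One way to see whether a geo-replication worker is actually making progress is the detailed status output, which shows the crawl type and per-worker counters; a sketch, with master volume, slave host and slave volume names as placeholders (log locations differ slightly between versions):

    # per-worker state, crawl status and entry/data/meta counters
    gluster volume geo-replication mastervol slavehost::slavevol status detail

    # the worker logs on the master side usually show what is being processed right now
    tail -f /var/log/glusterfs/geo-replication/mastervol/*.log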
08:40 gyadav joined #gluster
08:59 gospod2 joined #gluster
09:21 kotreshhr joined #gluster
09:27 p7mo joined #gluster
09:32 jkroon joined #gluster
09:36 varsha_ joined #gluster
09:38 poornima joined #gluster
09:38 Humble joined #gluster
09:39 atinm joined #gluster
09:39 buvanesh_kumar joined #gluster
09:40 apandey joined #gluster
09:40 hgowtham joined #gluster
09:45 mbukatov joined #gluster
09:52 nisroc joined #gluster
10:04 gospod2 joined #gluster
10:05 jri joined #gluster
10:06 gyadav joined #gluster
10:07 rafi joined #gluster
10:10 Humble joined #gluster
10:13 MrAbaddon joined #gluster
10:28 varsha_ left #gluster
10:33 msvbhat joined #gluster
10:41 Humble ndevos, ping
10:41 glusterbot Humble: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
10:41 Humble is that you created glusterfs-csi-driver repo ?
10:41 Humble if yes, can you rename that to gluster-csi-driver
10:42 ndevos Humble: no, I did not create that, maybe it was nigelb?
10:42 Humble nigelb, ^^^
10:42 ndevos Humble: but yeah, I can rename it if nigelb does not pong back :)
10:42 Humble I requested gluster-csi-driver in the bz
10:43 Humble it should be fine.. there are no commits
10:43 Humble I updated the bugzilla as well
10:43 ndevos Humble: renamed - https://github.com/gluster/gluster-csi-driver
10:43 glusterbot Title: gluster/gluster-csi-driver · GitHub (at github.com)
10:43 Humble thanks !!!
10:44 nigelb Humble: Right, I was taking an exam.
10:44 nigelb Sorry about that.
10:45 Humble nigelb, no worries.
10:45 Humble hope u passed the exam :)
10:45 nigelb Dunno, let's see fingers crossed.
10:49 level7 joined #gluster
10:50 mbukatov joined #gluster
10:53 Klas I found the error, seems like the changelog is corrupted somehow
10:53 Klas is there any way to reset it without doing full resync, or even with full resync?
10:55 Humble nigelb, all the best
10:55 Humble ndevos++ nigelb++ Thanks!
10:55 glusterbot Humble: ndevos's karma is now 32
10:55 glusterbot Humble: nigelb's karma is now 2
11:10 gospod2 joined #gluster
11:10 level7_ joined #gluster
11:13 MrAbaddon joined #gluster
11:15 msvbhat joined #gluster
11:22 pioto joined #gluster
11:23 jkroon joined #gluster
11:27 bluenemo joined #gluster
11:30 kotreshhr joined #gluster
11:48 itisravi joined #gluster
11:51 ThHirsch joined #gluster
11:54 itisravi joined #gluster
11:58 rastar joined #gluster
12:11 jri joined #gluster
12:13 jri_ joined #gluster
12:15 gospod2 joined #gluster
12:19 Klas trying to find documentation on how to activate and use NFS in 3.7.15, not having much luck
12:19 szafa joined #gluster
12:20 Klas I'm just wanting it to be the simplest possible variant, but finding the basic commands seems nigh on impossible
12:20 buvanesh_kumar joined #gluster
12:21 kramdoss_ joined #gluster
12:21 ThHirsch joined #gluster
12:23 ThHirsch1 joined #gluster
12:34 MrAbaddon joined #gluster
12:34 itisravi Klas: 3.7 should have the gluster nfs server process running on the bricks by default no?
12:36 poornima joined #gluster
12:41 Klas dunno =)
12:41 Klas I'm assuming not mountable at least
12:42 Klas I somewhat adapted https://serenity-networks.com/how-to-install-glusterfs-nfs-on-centos-7-for-virtual-machine-storage/ and it seems to work
12:43 Klas basically, this is just something I'm doing as an alternative to the earlier, georeplica-based backup which is broken
12:46 rwheeler joined #gluster
12:50 itisravi great. Yeah 3.7 still had gnfs On by default, so "mount -t nfs ip:volname /path-to-mount" should work out of the box.
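For reference, the built-in gluster NFS server speaks NFSv3 over TCP only, so an explicit vers=3 on the client avoids NFSv4 negotiation problems; a sketch with placeholder host, volume and mount point:

    # rpcbind must be running on the server side for NFSv3 mounts to work
    mount -t nfs -o vers=3,proto=tcp server1:/georeptest /mnt/georeptest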
12:52 Klas it does say N/A on status, which seems sane
12:52 msvbhat joined #gluster
12:55 Klas on my lab system, I'm getting a port and so forth when it's working, currently not able to replicate it on test system
12:55 Klas in lab, I successfully started it with:
12:55 Klas gluster vol set georeptest nfs.rpc-auth-allow 130.237.168.81,127.0.0.1
12:55 Klas gluster vol set georeptest nfs.disable off
12:55 Klas service rpcbind start
12:55 Klas service glusterd restart
12:57 Klas or maybe I'm just drunk and it works regardless
13:02 Klas itisravi: thanks, I assumed it wasn't just simply wide open, which was obviously wrong
13:03 itisravi Klas: oh, maybe I was wrong in assuming the "nfs.disable off " did not have to be done in 3.7.x.
13:04 Klas nope
13:04 Klas you didn't
13:04 Klas I just tried it and we've been running for 18 months with it on
13:04 itisravi ah, okay.
13:04 Klas unknowingly
13:04 Klas =P
13:04 Klas gonna run nfs.disable on on all of them now ;)
13:04 itisravi :)
13:05 phlogistonjohn joined #gluster
13:05 Klas that was not a very sane default, and it sounds like it's been changed in newer versions?
13:05 Klas (so you've done things more sane)
13:07 itisravi yeah I think newer versions disable it by default to encourage NFS ganesha.
13:10 omark2 joined #gluster
13:12 jiffin joined #gluster
13:20 Klas for i in $(gluster vol list); do gluster vol set $i nfs.disable on; done
13:20 Klas and some informative mails later and a whole lot less data accessible, I'm way happier than half an hour ago ;)
13:20 gospod2 joined #gluster
13:23 jkroon joined #gluster
13:36 susant joined #gluster
13:45 ThHirsch joined #gluster
13:46 szafa Hey there!
13:47 szafa anyone have an experience in adding new nodes to gluster replica cluster via gluster_volume feature in ansible playbooks ?
13:49 atinm joined #gluster
13:52 rwheeler_ joined #gluster
14:19 MrAbaddon joined #gluster
14:20 barbarbar joined #gluster
14:25 jiffin joined #gluster
14:26 gospod2 joined #gluster
14:32 gyadav joined #gluster
14:47 barbarbar Hi there! I had a crash on a three-node gluster setup with two nodes breaking down simultaneously, which shredded some Distributed+Replica-2 volumes. Can I just copy my files from the backup into the bricks themselves?
14:48 msvbhat joined #gluster
14:51 phlogistonjohn joined #gluster
14:54 sunnyk joined #gluster
14:59 psony joined #gluster
15:03 Rakkin__ joined #gluster
15:03 shyam joined #gluster
15:04 aravindavk joined #gluster
15:06 barbarbar my current procedure for the recovery:
15:06 barbarbar I have heketi automounting the glusterfs-bricks on the nodes for a given volume
15:07 barbarbar I copy over the files from backup to one volume
15:07 barbarbar I run `gluster volume heal <volid> full`
15:07 barbarbar and I'd expect to see the second brick on the other node to be filled automatically
15:07 barbarbar but that doesn't happen.
15:08 barbarbar both bricks are online according to `gluster volume stats <volid>`
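A likely reason the heal has nothing to replicate here: files copied straight into a brick directory bypass gluster's own bookkeeping (the trusted.gfid xattr and the .glusterfs/ hardlink tree), so the self-heal daemon never learns about them. Restoring through a client mount of the volume avoids that; a sketch with placeholder names:

    # restore via the volume, not the brick, so gfids are assigned and
    # replication happens as the files are written
    mount -t glusterfs node1:/myvol /mnt/myvol
    rsync -a /backup/myvol/ /mnt/myvol/

    # afterwards, this should come back (nearly) empty
    gluster volume heal myvol info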
15:10 Asako_ good morning.  Is there a way to force a geo-replication volume to start syncing?
15:10 rwheeler__ joined #gluster
15:10 Asako_ status shows created but it hasn't synced since then
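A geo-replication session that still reports "Created" has been set up but never started, so nothing syncs until it is explicitly started; a sketch with placeholder names:

    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status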
15:17 illwieckz joined #gluster
15:31 gospod2 joined #gluster
15:38 illwieckz joined #gluster
15:38 bluenemo joined #gluster
15:44 msvbhat joined #gluster
15:45 ThHirsch joined #gluster
15:49 alvinstarr joined #gluster
15:51 ic0n joined #gluster
15:58 illwieckz joined #gluster
16:05 kramdoss_ joined #gluster
16:10 Prasad joined #gluster
16:11 Rakkin__ joined #gluster
16:20 illwieckz joined #gluster
16:33 vbellur joined #gluster
16:37 gospod2 joined #gluster
16:39 scubacuda joined #gluster
16:46 deibuji joined #gluster
16:48 deibuji hello; has anyone had a situation where they are getting incorrect volume size on glusterfs clients, after adding subvolumes and then rebalancing the volume? i'm using a Distributed-Disperse vol
16:52 shellclear joined #gluster
16:55 deibuji looking at what it is reporting, the size has been divided by the number of usable bricks (in this case 6)
17:21 illwieckz joined #gluster
17:41 s34n I have 3 server (s1,s2,s3) on which I want to create a glusterfs volume. s1 has 3x the diskspace as s2 and s3. If I create a volume as replica 2 s1/d/1 s2/d s1/d/2 s3/d, I should get the first a volume with 1/2 mirrored on s1 and s2, half mirrored on s1 and s3, with space left over on s1. Do I understand that correctly?
17:42 s34n s/the first//
17:42 glusterbot What s34n meant to say was: I have 3 server (s1,s2,s3) on which I want to create a glusterfs volume. s1 has 3x the diskspace as s2 and s3. If I create a volume as replica 2 s1/d/1 s2/d s1/d/2 s3/d, I should get  a volume with 1/2 mirrored on s1 and s2, half mirrored on s1 and s3, with space left over on s1. Do I understand that correctly?
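Bricks are paired in the order they are listed, so if that understanding is right the layout above would be created roughly like this (volume name and brick paths are illustrative), with the remaining space on s1 simply staying outside the volume:

    # replica pairs: (s1:/d/1, s2:/d) and (s1:/d/2, s3:/d)
    gluster volume create myvol replica 2 \
        s1:/d/1 s2:/d \
        s1:/d/2 s3:/d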
17:42 gospod2 joined #gluster
17:47 ekarlso Hi guys, how do I replace a brick ?
17:47 ekarlso I have formatted and mounted a disk at the same path as the old brick
17:47 cliluw joined #gluster
17:49 cliluw joined #gluster
17:51 deibuji @ekarlso i've followed this guide previously: http://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-faulty-brick
17:56 deibuji so i would mount it at a different place, as suggested in the document
17:58 ekarlso ah ok
18:01 ekarlso hmmm, gluster volume add-brick  data ovirt2:/gluster/brick6_1/data
18:01 ekarlso volume add-brick: failed: Incorrect number of bricks supplied 1 with count 3
18:02 ekarlso howto get around that ?
18:03 deibuji erm, i thought it said replace-brick in there
18:04 deibuji if you type gluster volume replace-brick help
18:04 deibuji that should give you a bit more info
18:08 ekarlso ah that worked yes :)
18:12 deibuji awesome
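For reference, the replace-brick invocation from the guide boils down to a single command; the volume name matches the one above, the old brick path here is only illustrative, and the new brick must be an empty directory:

    # swaps the brick in the volume definition and heals data onto the new one
    gluster volume replace-brick data ovirt2:/gluster/brick6/data \
        ovirt2:/gluster/brick6_1/data commit force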
18:13 prasanth joined #gluster
18:16 ekarlso hmmm
18:16 ekarlso in theory could an arbiter volume
18:16 ekarlso be on a smaller server than the rest ?
18:18 deibuji i'm unsure, i use dispersed vols
18:18 deibuji btw i eventually found this for my issue: http://lists.gluster.org/pipermail/gluster-users/2017-September/032587.html
18:18 glusterbot Title: [Gluster-users] upgrade to 3.12.1 from 3.10: df returns wrong numbers (at lists.gluster.org)
18:18 deibuji just removed option shared-brick-count
18:18 deibuji and i got the correct number
18:19 ekarlso deibuji: what was your issue ?
18:19 deibuji after upgrading from 3.10 to 3.12 on centos, the size of the volume was incorrect
18:19 deibuji it turned out to be 6 times smaller than actual available space
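For anyone hitting the same df discrepancy: the thread above traces it to a wrong shared-brick-count value in the generated brick volfiles (it should only be greater than 1 when several bricks of the volume genuinely share one filesystem). A sketch of how to inspect it and nudge glusterd into regenerating the volfiles; the volume name is a placeholder and the workaround is the one discussed in that thread:

    # inspect the value written into each brick volfile
    grep -R "shared-brick-count" /var/lib/glusterd/vols/myvol/

    # setting (or re-setting) a volume option makes glusterd rewrite the volfiles
    gluster volume set myvol cluster.min-free-inodes 5%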
18:20 ekarlso what is dispersed vols ?
18:22 deibuji http://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-dispersed-volumes :)
18:22 glusterbot Title: Setting Up Volumes - Gluster Docs (at docs.gluster.org)
18:22 deibuji right got to go. ttfn
18:22 deibuji left #gluster
18:28 s34n the man page for gluster does not document an arbiter option for volume create
18:29 ekarlso http://docs.gluster.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
18:29 glusterbot Title: Arbiter volumes and quorum options - Gluster Docs (at docs.gluster.org)
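The arbiter brick stores only file names and metadata, no file data, so it can indeed sit on a much smaller machine than the data bricks; the create syntax from that page looks like this (hosts and paths are placeholders):

    # the third brick in each set of three is the arbiter
    gluster volume create myvol replica 3 arbiter 1 \
        node1:/bricks/brick1 node2:/bricks/brick1 node3:/bricks/arbiter1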
18:29 arpu_ joined #gluster
18:30 s34n shouldn't this be documented in the man page?
18:32 Asako_ should be, but is it?
18:33 s34n no for me
18:47 illwieckz joined #gluster
18:48 gospod2 joined #gluster
18:53 s34n I have probed each peer, but the status of the peers is "Accepted peer request (Connected)" not "Peer in Cluster"
18:53 s34n How do I get the state to progress?
18:56 s34n hmm. in my 3 node cluster, only one of the nodes has 2 peers. The others only have 1 peer
18:57 s34n how do I fix that?
19:10 s34n s1 has peers s2 and s3 which are both state "Accepted peer request (Connected)"
19:11 s34n s2 and s3 only have peer s1, state "Accepted peer request (Connected)"
19:11 jkroon joined #gluster
19:28 s34n hmm. if I detach s1 from s2 and s3, then probe s1 from s2 and s3, everybody sees everybody
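For completeness, the sequence just described in CLI form, using the s1/s2/s3 names from above:

    # run on s2 and on s3: drop the half-established peering, then probe s1 again
    # (detach may need "force" while the state is stuck)
    gluster peer detach s1
    gluster peer probe s1

    # on every node, all peers should now report "State: Peer in Cluster (Connected)"
    gluster peer status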
19:53 gospod2 joined #gluster
20:29 al joined #gluster
20:52 barbarbar joined #gluster
20:53 barbarbar hi guys. I've solved my previous problem of restoring glusterfs, but still have one problem with the memory usage. How much memory should I expect per volume?
20:53 barbarbar I'm currently seeing around 750mb of memory per active volume and node, which seems quite high.
20:59 gospod2 joined #gluster
21:13 MrAbaddon joined #gluster
21:19 Rakkin__ joined #gluster
21:26 msvbhat joined #gluster
22:03 ThHirsch joined #gluster
22:04 gospod2 joined #gluster
22:04 plarsen joined #gluster
22:12 timmmey joined #gluster
22:13 timmmey Hi everyone, i have a small question which i cannot find in the manuals. Which ports need to be exported through NAT for a geo-replication slave?
22:16 timmmey I can only find information for the general case, but hopefully not all ports need to be exposed to the internet?
22:28 mattsup joined #gluster
22:28 msvbhat joined #gluster
23:00 john51 joined #gluster
23:05 john51 joined #gluster
23:07 cliluw joined #gluster
23:08 rwheeler_ joined #gluster
23:09 rwheeler__ joined #gluster
23:09 shyam joined #gluster
23:10 gospod2 joined #gluster
