IRC log for #gluster, 2016-06-24

All times shown according to UTC.

Time Nick Message
00:00 Alghost_ joined #gluster
00:05 ira_ joined #gluster
00:47 wadeholler joined #gluster
00:53 DV joined #gluster
01:08 gluytium joined #gluster
01:08 julim joined #gluster
01:15 shdeng joined #gluster
01:27 djgerm joined #gluster
01:30 djgerm Hello! So, I created a bunch of gluster bricks a while back, and one seems to have gone south. upon reboot, it can't start gluster-server because the /etc/glusterfs/glusterd.vol is bad (I am guessing because I created these with gluster commands instead of ever editing that file?). Now I am wondering: how do I know the name of the volumes and bricks and such from the surviving gluster nodes in order to re-add this node to t
01:30 Lee1092 joined #gluster
01:38 fcoelho joined #gluster
01:46 JoeJulian djgerm: It's very unlikely there's something wrong with glusterd.vol. That's typically an unaltered configuration file that ships with the package. To see why glusterd is failing to start, try "glusterd --debug".
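A minimal sketch of that check, assuming a root shell on the affected node (the glusterd log path varies a little between versions):
    # run glusterd in the foreground with debug-level logging
    glusterd --debug
    # or, after a failed start, scan the existing log for error lines
    grep ' E ' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 20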
01:46 djgerm ah thanks!!!
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 djgerm but the there's only that default entry in the volfile
01:54 djgerm hmm googling around a bit more, looks like others have cleaned up /var/lib/glusterd/peers/*
01:54 baojg joined #gluster
01:54 djgerm and then been able to start glusterd
01:56 djgerm https://bugzilla.redhat.com/show_bug.cgi?id=858732
01:56 glusterbot Bug 858732: high, medium, ---, bugs, CLOSED EOL, glusterd does not start anymore on one node
01:58 djgerm cleaning out /var/lib/glusterd/peers/* didn't rectify the issue.
02:04 ahino joined #gluster
02:27 djgerm after grepping through the output of glusterd --debug for "E"
02:27 djgerm http://pastebin.com/0UjTfSbT
02:27 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
02:28 djgerm oh. thanks glusterbot
02:28 djgerm http://paste.ubuntu.com/17781366/
02:28 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
02:42 djgerm OK! I got somewhere
02:42 djgerm not sure if I am corrupting all my data across all my peers now
02:43 djgerm but I changed the hostname1= entry in all the peer files to be their IP addresses instead of fqdn (even tho their fqdn's resolved)
02:45 luizcpg joined #gluster
02:47 djgerm well… that's supremely peculiar. the peer entries rewrote themselves to fqdn
02:47 djgerm so a stop start failed again
02:47 djgerm editing peer entries back to IP allowed start to work
02:48 harish joined #gluster
03:08 magrawal joined #gluster
03:11 Gambit15 joined #gluster
03:20 cliluw joined #gluster
03:22 JoeJulian djgerm: They should match what was used to define your bricks. If they're hostnames, those hostnames need to resolve to the correct servers.
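A quick way to cross-check that, assuming the default state directory; the hostname shown is a placeholder:
    # peer definitions live in one small file per peer UUID
    cat /var/lib/glusterd/peers/*
    # each file carries lines like uuid=..., state=..., hostname1=server2.example.com
    # confirm every hostname1 value resolves to the right server
    getent hosts server2.example.com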
03:23 djgerm the bricks have shortnames…
03:24 djgerm which also resolve...
03:25 djgerm not sure why the bricks have shortnames… in the states I used to provision these volumes/bricks I used fqdn all the way.
03:25 djgerm oh interesting… actually I see now that one volume is using fqdn, and the other is using short.
03:26 djgerm how bizarre
03:26 JoeJulian both have to resolve correctly.
03:27 djgerm aye, they do.
03:28 djgerm right now things are working fine.
03:28 djgerm I am going to reboot one of the nodes and see if it's fixed fixed...
03:29 djgerm kewl. a new error :)
03:29 JoeJulian btw, that paste is useless (and the reason I suggested --debug) because whatever causes that to fail is somewhere below default logging levels.
03:30 djgerm oh. thanks.
03:30 djgerm my check_glusterfs is saying "glusterfsd brick daemon not running" after reboot.
03:30 djgerm but… peer and volume status are good :)
03:32 JoeJulian more often than not, the reason glusterd fails to start is because of a volume definition mismatch. That's solved by copying everything under /var/lib/glusterd/vols from a good server.
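A sketch of that recovery, with glusterd stopped on the broken node first; 'good-server' is a placeholder for the known-good peer:
    systemctl stop glusterd
    rsync -av --delete good-server:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/
    systemctl start glusterd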
03:32 JoeJulian I've never solved anything by removing peers.
03:33 djgerm ah. good to know. how does one know which server is good? I guess one that doesn't randomly say things are unsynced
03:33 JoeJulian yep
03:34 JoeJulian Unlikely to be a problem from 3.7 up.
03:34 djgerm woo hoo!
03:34 purpleidea JoeJulian: i'm from Montreal :)
03:35 djgerm gluster is pretty impressive, hard to imagine it getting even better!
03:37 JoeJulian purpleidea: They're all the same.
03:38 purpleidea JoeJulian: alright, i'll feed you... you realize Canada is bigger than the United States, eh? ;)
03:40 aphorise joined #gluster
03:41 JoeJulian Sure, but less of it is livable. ;)
03:41 nishanth joined #gluster
03:47 JoeJulian Besides, purpleidea, didn't you move to Massachusetts anyway?
03:49 purpleidea nope!
03:52 baojg joined #gluster
03:55 RameshN joined #gluster
04:00 rafi joined #gluster
04:01 itisravi joined #gluster
04:11 unforgiven512 joined #gluster
04:12 unforgiven512 joined #gluster
04:12 unforgiven512 joined #gluster
04:12 nehar joined #gluster
04:18 atinm joined #gluster
04:20 shubhendu joined #gluster
04:30 ppai joined #gluster
04:31 kramdoss_ joined #gluster
04:31 kramdoss__ joined #gluster
04:33 poornimag joined #gluster
04:37 gem joined #gluster
04:45 nbalacha joined #gluster
04:45 msvbhat_ joined #gluster
04:50 Manikandan joined #gluster
04:51 sakshi joined #gluster
04:52 atrius joined #gluster
04:59 gowtham joined #gluster
05:02 om joined #gluster
05:03 om2 joined #gluster
05:03 Bhaskarakiran joined #gluster
05:04 ndarshan joined #gluster
05:07 jiffin joined #gluster
05:08 raghug joined #gluster
05:09 aravindavk joined #gluster
05:14 Apeksha joined #gluster
05:15 satya4ever joined #gluster
05:16 kdhananjay joined #gluster
05:19 prasanth joined #gluster
05:32 msvbhat joined #gluster
05:40 hgowtham joined #gluster
05:41 kovshenin joined #gluster
05:43 ramky joined #gluster
05:46 mchangir joined #gluster
05:47 pur_ joined #gluster
05:50 karthik___ joined #gluster
05:51 satya4ever joined #gluster
05:55 sabansal_ joined #gluster
05:58 raghug joined #gluster
06:00 [diablo] joined #gluster
06:05 kshlm joined #gluster
06:09 kramdoss_ joined #gluster
06:09 kramdoss__ joined #gluster
06:09 rafi joined #gluster
06:11 Manikandan joined #gluster
06:13 jtux joined #gluster
06:16 atinm joined #gluster
06:17 ashiq joined #gluster
06:18 baojg joined #gluster
06:22 msvbhat joined #gluster
06:22 karnan joined #gluster
06:22 baojg_ joined #gluster
06:23 haomaiwang joined #gluster
06:24 kovshenin joined #gluster
06:27 aspandey joined #gluster
06:29 haomaiwang joined #gluster
06:32 skoduri joined #gluster
06:33 jwd joined #gluster
06:35 baojg joined #gluster
06:37 sbulage joined #gluster
07:00 prasanth joined #gluster
07:02 [Enrico] joined #gluster
07:07 baojg joined #gluster
07:11 jri joined #gluster
07:18 deniszh joined #gluster
07:28 Gnomethrower joined #gluster
07:29 raghug joined #gluster
07:33 karthik___ joined #gluster
07:34 anil joined #gluster
07:35 atinm joined #gluster
07:39 lanning joined #gluster
07:39 takarider1 joined #gluster
07:42 kramdoss_ joined #gluster
07:42 kramdoss__ joined #gluster
07:43 kovsheni_ joined #gluster
07:44 ibotty joined #gluster
07:46 ibotty hchiramm, ashiq: do you have a minute re the gluster docker container?
07:46 ibotty I was wondering why the password gets set (to such an insecure value!)
07:47 jri_ joined #gluster
07:48 arcolife joined #gluster
08:13 arif-ali joined #gluster
08:19 hchiramm ibotty, sure
08:19 hchiramm missed ur ping yesterday
08:19 hchiramm we default to 'redhat'
08:20 anti[Enrico] joined #gluster
08:20 takarider joined #gluster
08:22 hchiramm ibotty, we can pass it as env variable
08:23 hchiramm or unset the password itself
08:23 ibotty i'll get back in a sec. In a phone call.
08:23 hchiramm sure
08:24 takarider1 joined #gluster
08:31 muneerse joined #gluster
08:33 ibotty Let's step back a bit. Why is ssh installed (and not running when starting the container)? Does it get enabled by glusterd?
08:33 ibotty I figure it's to let heketi add bricks
08:35 ibotty same with sudo I suppose.
08:35 Saravanakmr joined #gluster
08:35 baojg joined #gluster
08:36 hchiramm ibotty, not really
08:36 hchiramm we need ssh because geo replication use it
08:36 ibotty ic
08:36 hchiramm isn't it not started when the container is spawned
08:36 hchiramm ?
08:37 ibotty might be. I did not check exactly thorough ;)
08:37 hchiramm yes, we disabled it in latest versions
08:37 hchiramm systemctl will start it though
08:37 hchiramm I see the dockerfile
08:38 ibotty yeah, it's not enabled.
08:38 hchiramm the container actually dont have any relation to heketi
08:38 hchiramm it should be used standalone
08:38 ibotty Well, it does mount /var/lib/heketi/fstab
08:38 ibotty does heketi need anything on the image to use it as a node?
08:38 ibotty i thought not
08:39 hchiramm it need
08:39 hchiramm as u suspected that's the reason for enabling sudo etc.
08:40 hchiramm ashiq, in plain setup we dont need that mount point ..
08:40 hchiramm may be a patch is required to the script
08:40 ibotty it does not do any harm if it's not there
08:40 ibotty I think it's nice to have that auto-mounting
08:41 ibotty It should be specified by a env var though.
08:41 ibotty and fail, if the env var is set but the file does not exist (or something can't be mounted)
08:41 ibotty I can prepare a patch to do that.
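A hypothetical entrypoint fragment along those lines, not the actual patch; the GLUSTER_FSTAB variable name and the mount call are assumptions:
    if [ -n "${GLUSTER_FSTAB:-}" ]; then
        # fail fast if the variable is set but the file was not bind-mounted in
        [ -f "$GLUSTER_FSTAB" ] || { echo "missing $GLUSTER_FSTAB" >&2; exit 1; }
        # mount everything listed in the provided fstab
        mount -a --fstab "$GLUSTER_FSTAB" || exit 1
    fi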
08:41 ibotty but I want to go through the todo list first
08:41 hchiramm yeah, make sense
08:41 ibotty you answered ssh ;)
08:42 hchiramm same can be used for password
08:42 hchiramm it can be an ENV
08:42 hchiramm patches are welcome !!
08:42 hchiramm waiting for it !
08:42 hchiramm ibotty++
08:42 glusterbot hchiramm: ibotty's karma is now 2
08:43 ibotty Re credentials: what about using files (bind-mounted) as authorized_keys?
08:43 ibotty is there a user name that is canonical in any way?
08:44 hchiramm ibotty, authorized keys will work
08:44 hchiramm we are setting the root password in case we need to troubleshoot
08:44 hchiramm yes, exec can be a solution
08:44 hchiramm however if its not local , it may not work
08:45 ibotty maybe I am missing part of the conversation: what about exec? exec as alternative to ssh?
08:45 Slashman joined #gluster
08:45 hchiramm I am referring 'docker exec'
08:46 ibotty ah. I talked about kubeexec with heketi ;)
08:46 hchiramm ibotty, looks like u r related to heketi a lot :)
08:46 hchiramm appreciated if you can share how are you using the container images
08:47 ibotty that's the primary use case I have in mind. I want to go away from my home-grown tool to do automatic volume management
08:47 harish joined #gluster
08:47 ibotty right now, I am using https://github.com/ibotty/gluster-server on (different) openshift clusters
08:48 hchiramm ok..
08:48 ibotty And I'd really like to move to a standard docker image, yours ;)
08:49 hchiramm sure.. much appreciated
08:49 hchiramm dynamic volume provisioner for glusterfs is on the way :)
08:49 kotreshhr joined #gluster
08:49 ibotty based on heketi?
08:49 atinm joined #gluster
08:49 ppai joined #gluster
08:49 hchiramm it can use any rest service
08:50 hchiramm the poc is on heketi..
08:50 hchiramm but if rest service is available in gluster we can use that as well
08:50 ibotty is there a repo somewhere? I wrote quite a few auto-do-things with kubernetes.
08:50 ibotty _for_ kubernetes
08:51 hchiramm https://github.com/kubernetes/kubernetes/compare/master...humblec:gluster-wip-prov?expand=1
08:51 glusterbot Title: Comparing kubernetes:master...humblec:gluster-wip-prov · kubernetes/kubernetes · GitHub (at github.com)
08:51 hchiramm ibotty, its a patch in kubernetes
08:51 ibotty oh in upstream. I did not notice that!
08:51 ibotty great!
08:51 ibotty hchiramm++
08:51 glusterbot ibotty: hchiramm's karma is now 2
08:51 hchiramm I will be refreshing it soon.
08:51 hchiramm ibotty, yw!
08:52 hchiramm ibotty, bit of refactoring going on in upstream , so was waiting for it
08:52 kshlm joined #gluster
08:52 ibotty no hurry from my part.
08:52 ibotty next q, ntp:
08:53 hchiramm however please let us know if you need any help on docker/k8s/openshift wrt gluster
08:53 hchiramm I am happy to discuss about it
08:53 ibotty I have ntpd running on the h
08:53 ibotty .. host
08:53 hchiramm ok.
08:53 ibotty What's the reason to have it in the container?
08:53 hchiramm ibotty, its not really required inside the container .
08:54 ibotty can I disable it? Can we make these things configurable? Say, with env variables?
08:54 hchiramm yes, now a days chronyd
08:55 ibotty in the docker file it's ntpd
08:55 hchiramm yep, however u may notice the container picks a different timezone
08:55 ibotty but that's a different story
08:55 hchiramm yep.
08:55 ibotty k
08:55 hchiramm just thought of giving a heads up :)
08:55 ibotty I'll prepare some patches ;)
08:55 hchiramm thanks !!!
08:55 ibotty I'll send a pull request today
08:55 hchiramm sure..
08:56 hchiramm thanks again ibotty++
08:56 glusterbot hchiramm: ibotty's karma is now 3
08:56 hchiramm please feel free to reach out to us directly or via gluster mailing lists
08:56 ibotty I will
08:56 raghug joined #gluster
09:02 jiffin joined #gluster
09:02 muneerse joined #gluster
09:04 ibotty hchiramm: btw: why is rpcbind running in the container? nfs/ganesha is disabled (or not installed) is that needed?
09:06 nehar joined #gluster
09:07 jri joined #gluster
09:09 kramdoss_ joined #gluster
09:10 kramdoss__ joined #gluster
09:17 hchiramm ibotty, gluster nfs can make use of it
09:17 ibotty i thought that was using ganesha? (I am a gluster noob as you can tell ;)
09:18 hchiramm jiffin, gluster nfs is still enabled by default right
09:19 hchiramm jiffin, ^^^
09:23 karthik___ joined #gluster
09:27 msvbhat joined #gluster
09:32 rouven joined #gluster
09:34 baojg joined #gluster
09:36 jiffin hchiramm: from 3.8 onwards, it is disabled
09:36 jiffin by default
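A way to check or change that per volume, assuming a 3.7+ CLI and a placeholder volume name:
    # show whether the built-in gluster NFS server is enabled
    gluster volume get myvol nfs.disable
    # turn it off explicitly if rpcbind is not wanted in the container
    gluster volume set myvol nfs.disable on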
09:37 kshlm joined #gluster
09:44 hchiramm jiffin++ , thanks
09:44 glusterbot hchiramm: jiffin's karma is now 4
09:45 ppai joined #gluster
09:47 atinm joined #gluster
10:09 hchiramm ashiq++
10:09 glusterbot hchiramm: ashiq's karma is now 4
10:18 nehar joined #gluster
10:29 ghenry joined #gluster
10:29 ghenry joined #gluster
10:32 baojg joined #gluster
10:35 karnan joined #gluster
10:41 a_ok joined #gluster
10:42 a_ok Performance tuning is always hard but I find gluster perticulary hard
10:43 a_ok for example client.event-threads. Nice and descriptive but how is thread usage in gluster? does every client have one connection?
10:55 ibotty joined #gluster
10:55 johnmilton joined #gluster
11:04 gem joined #gluster
11:04 [Enrico] joined #gluster
11:08 ibotty hchiramm: finally pushed the docker container reworking. Please have a look at a suitable time.
11:09 ibotty https://github.com/gluster/docker/pull/18
11:09 glusterbot Title: wip: group RUN's, conditionalize starting of services by ibotty · Pull Request #18 · gluster/docker · GitHub (at github.com)
11:16 karnan joined #gluster
11:19 gem joined #gluster
11:25 kovshenin joined #gluster
11:27 ira_ joined #gluster
11:28 rastar joined #gluster
11:30 hchiramm ibotty, sure
11:30 ashiq joined #gluster
11:30 hchiramm ibotty++ thanks !!!
11:30 glusterbot hchiramm: ibotty's karma is now 4
11:31 hchiramm ashiq, https://github.com/gluster/docker/pull/18/files please have a look at this pr
11:31 glusterbot Title: wip: group RUN's, conditionalize starting of services by ibotty · Pull Request #18 · gluster/docker · GitHub (at github.com)
11:39 ibotty hchiramm: There is still the authorized_keys handling missing. I'll get to it today or next week.
11:41 poornimag joined #gluster
11:44 rafaels joined #gluster
11:44 cloph joined #gluster
11:45 nehar joined #gluster
11:54 cloph Hi *, is it legal to have absolute symlinks in .glusterfs? (e.g. .glusterfs/ac/bf/acbfc720-23db-49dd-b136-81908b506e56 -> /usr/foobar/baz) - or is that invalid/a corruption?
12:01 msvbhat joined #gluster
12:03 hchiramm ibotty, sure
12:05 kkeithley cloph: I'm not sure what you're trying to do, but I'd say it's a corruption.  Unless you really know what you're doing you should never (never never never) touch the brick, and definitely not the .glusterfs dir on the brick.
12:07 cloph well, there are already symlinks like that in there (no idea how that happened though, I never touched that myself)
12:07 robb_nl joined #gluster
12:08 karnan joined #gluster
12:08 cloph geo-replication did stumble over one of these, and I thought it was a single instance, but there are many such links....
12:08 kkeithley cloph: you're the second person I've heard report wacky symlinks in the brick's .glusterfs dir
12:08 kkeithley hmmmm
12:09 cloph if that other person was awerner, then we're looking at the same one
12:09 kkeithley have the clients got any bind mounts in their mounted gluster volume?
12:09 * kkeithley doesn't remember who the other person was
12:09 cloph no, no additional mounts
12:09 cloph (and also no nfs exports)
12:09 cloph it is only mounted on the master and on the geo-replication slave
12:11 cloph it also only contains a single brick, only using glusterfs for that volume to use its geo-replication feature. other volumes have more bricks
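A read-only check for the links being discussed, with a placeholder brick path; directory entries under .glusterfs are normally relative symlinks, so absolute targets stand out:
    # list symlinks under .glusterfs whose target is an absolute path
    find /export/brick1/.glusterfs -type l -lname '/*' -exec ls -l {} +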
12:13 armyriad joined #gluster
12:13 d0nn1e joined #gluster
12:14 ppai joined #gluster
12:16 plarsen joined #gluster
12:18 robb_nl joined #gluster
12:21 7GHABELAA joined #gluster
12:21 5EXAAX427 joined #gluster
12:21 rwheeler joined #gluster
12:21 armyriad joined #gluster
12:22 merlink joined #gluster
12:23 luizcpg joined #gluster
12:28 karnan joined #gluster
12:29 ndarshan joined #gluster
12:34 om2 joined #gluster
12:34 om3 joined #gluster
12:39 jwd joined #gluster
12:39 haomaiwang joined #gluster
12:40 alvinstarr joined #gluster
12:45 haomaiwang joined #gluster
12:49 ben453 joined #gluster
12:54 unclemarc joined #gluster
12:55 gem joined #gluster
12:55 msvbhat joined #gluster
13:07 harish joined #gluster
13:11 shyam joined #gluster
13:16 Manikandan joined #gluster
13:18 muneerse2 joined #gluster
13:21 shyam left #gluster
13:23 merlink joined #gluster
13:24 shyam joined #gluster
13:25 ashka hi, I reckon it's not possible with gluster but is there a way to have a replicated gluster with delayed replication without using nfs ? (e.g. single node copies to a single brick then the brick writes the replica to another brick instead of the node copying to several bricks at the same time)
13:37 cloph ashka: maybe geo-replication fits your needs?
13:38 ashka cloph: I'll look into it, thanks
13:38 gowtham joined #gluster
13:40 dnunez joined #gluster
13:44 dnunez joined #gluster
13:47 mchangir joined #gluster
13:54 squizzi_ joined #gluster
13:59 gowtham joined #gluster
14:03 _nixpanic joined #gluster
14:03 _nixpanic joined #gluster
14:04 dnunez joined #gluster
14:09 msvbhat joined #gluster
14:10 dlambrig joined #gluster
14:12 dlambrig left #gluster
14:16 julim joined #gluster
14:26 kramdoss_ joined #gluster
14:27 kramdoss__ joined #gluster
14:35 dnunez joined #gluster
14:42 dlambrig joined #gluster
14:45 rafaels joined #gluster
14:48 paul98 joined #gluster
14:48 paul98 hey, how come i have a partition with an iscsi target setup, with a windows iscsi target mapped to it, and within centos i get an input/output error when i try to read the directory
14:50 bowhunter joined #gluster
14:52 shaunm joined #gluster
14:54 msvbhat joined #gluster
14:57 vanshyr joined #gluster
15:03 wushudoin joined #gluster
15:19 baojg joined #gluster
15:20 Apeksha joined #gluster
15:22 dnunez joined #gluster
15:28 pdrakeweb joined #gluster
15:35 skylar joined #gluster
15:36 javi404 joined #gluster
15:37 dgandhi1 joined #gluster
15:39 Manikandan joined #gluster
15:43 RameshN joined #gluster
15:48 jwd joined #gluster
15:51 squizzi joined #gluster
15:54 kpease joined #gluster
15:56 haomaiwang joined #gluster
15:59 om joined #gluster
16:10 haomaiwang joined #gluster
16:30 djgerm left #gluster
16:31 Gambit15 joined #gluster
16:51 F2Knight joined #gluster
16:54 skoduri joined #gluster
16:55 RameshN joined #gluster
16:57 dnunez joined #gluster
17:11 rwheeler joined #gluster
17:15 shubhendu joined #gluster
17:17 wushudoin joined #gluster
17:26 Guest44508 joined #gluster
17:28 shubhendu joined #gluster
17:29 karnan joined #gluster
17:29 jwd joined #gluster
17:36 Manikandan joined #gluster
17:42 jhc76 joined #gluster
17:42 om joined #gluster
17:46 jhc76 I'm experiencing high load with gluster. My gluster has three bricks in replication mode. load average is up to 50. I'm not sure where to look first to properly troubleshoot this. I have restarted the gluster services but the load climbs right back up. from what I can see glusterfs is using up to 400% constantly.
17:47 jhc76 400% cpu utilization
17:50 JoeJulian Sounds like healing is being done. What version is this?
17:50 kpease joined #gluster
17:51 nishanth joined #gluster
17:51 jhc76 Hi Joe, this is running on glusterfs 3.7.6
17:52 JoeJulian So you can disable client-side heals (cluster.data-self-heal off) and/or you can upgrade to 3.7.11.
17:55 jhc76 ah. to just disable, is the command gluster volume set myvolume cluster.data-self-heal off ?
17:56 Slashman joined #gluster
17:58 JoeJulian yes
18:02 jhc76 thanks, joe. can't seem to tell any difference however...
18:03 jiffin joined #gluster
18:05 Slashman joined #gluster
18:08 JoeJulian jhc76: What's the performance like from a client?
18:08 chirino joined #gluster
18:09 jhc76 it just takes awhile to list directory files. so naturally, people complain. :D
18:10 unclemarc joined #gluster
18:11 JoeJulian See if it makes a difference disabling metadata-self-heal and entry-self-heal as well (I did all three, myself, without testing each one individually).
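The three options together, sketched against the volume name used earlier in this conversation; the self-heal daemon keeps healing in the background either way:
    gluster volume set myvolume cluster.data-self-heal off
    gluster volume set myvolume cluster.metadata-self-heal off
    gluster volume set myvolume cluster.entry-self-heal off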
18:11 jhc76 Either turning off self-heal has helped or users are not hitting the machine as much. the load is down to 20.
18:11 unclemarc joined #gluster
18:13 JoeJulian So the self-heal daemon will continue to perform the necessary heals, but that should help you meet ,,(joe's performance metric)
18:13 glusterbot nobody complains.
18:18 jhc76 thanks. joe. I appreciate it. do you recommend turning data-self-heal off as well?
18:21 JoeJulian Right, I thought we'd already done that.
18:22 jhc76 god.. you are right. my brain is not here with me today.
18:23 jhc76 so that looks promising. the load is now down to 13. wow. the self-heal really puts stress on the systems.
18:23 cliluw joined #gluster
18:24 jhc76 probably having the volume 90% full doesn't help either. we are trying to migrate unused data off the cluster to an archive system to make room. but not until next week.
18:25 JoeJulian Well, the self-heal doesn't put that much stress, but the bug that's fixed in 3.7.11 does.
18:26 cloph I'll try again with the geo-replication problem I have - I cannot run the config command, for example, because it complains about no geo-replication being set up on other peers in the cluster (the volume only has one brick, the master - and is replicated to a volume with only one brick on the slave)
18:27 cloph https://botbot.me/freenode/gluster/2016-06-16/?msg=68061537&page=4 for output sample
18:27 jhc76 i see. well, I'm stuck with this version until the project is complete. joejulian: thank you so much for your help. Btw, I frequent your blog. You don't know how much your blog helped me in the past. I appreciate everything you are doing for this community.
18:28 JoeJulian I'm happy I could help.
18:29 JoeJulian cloph: Wow, that's still a problem? Yuck. I don't know how that can happen, and am in the middle of a critical $dayjob thing so I can't really read through the source and see how that message gets triggered at the moment.
18:30 JoeJulian I honestly hate passing the buck, but in this case since it's been a problem for so long I'd like to refer you to the gluster-users mailing list.
18:31 cloph thanks anyway
19:19 kpease joined #gluster
19:40 jvargas joined #gluster
19:40 jvargas Hello. I started working with Gluster for replication and it looks fine
19:41 jvargas However, I'd like to consider replacing a contingency strategy I am using now between two servers on distinct sites.
19:42 jvargas Currently I use rsync running each 10 minutes, but I'd like to know if using Gluster over WAN would be a good idea or if there is some risk to consider.
19:43 JoeJulian jvargas: For remote content, geo-replication might be what you're looking for.
19:43 JoeJulian For bidirectional replication, wan adds too much latency.
19:44 jvargas I see. What would be the difference?
19:46 post-factum jvargas: georep is just another rsync, but a little bit more clever
19:56 jvargas Thanks, I'll take a look and make tests.
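A minimal geo-replication sketch, assuming passwordless ssh to the slave host, an existing slave volume, and placeholder names throughout:
    # generate and distribute the pem keys, then create and start the session
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status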
20:15 om joined #gluster
20:21 tom[] joined #gluster
20:34 hackman joined #gluster
20:52 Guest87932 joined #gluster
21:00 om joined #gluster
21:17 gbox Is someone here (post-factum?) a maestro with kvm on gluster?
21:22 post-factum maestro?
21:22 post-factum i'm just msc in telecommunications ;)
21:23 gbox post-factum: maestro: a great or distinguished figure in any sphere
21:24 gbox post-factum: Looking at the logs you use kvm+ceph now?  The gluster docs seem incomplete/outdated: http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt
21:24 level7 joined #gluster
21:26 gbox post-factum:  I'm just wondering if kvm on gluster will work well enough & if config details are documented anywhere.
21:27 pampan joined #gluster
21:30 post-factum correct, we use ceph for that now, but there are ppl that use gluster. you may take a look at the mailing list, there were some questions about it
21:39 twm joined #gluster
21:39 F2Knight joined #gluster
21:43 gbox post-factum:  Thanks.  A lot of gluster stuff is still at POC level so I will wait and keep looking
21:49 gbox OK it worked for CyrilPeponnet in February.  I'll try it
21:49 JoeJulian It works for me. :)
21:50 JoeJulian Won't work with 3.8.1 today though, so stick with 3.7.11.
21:50 JoeJulian s/8.1/8.0/
21:50 glusterbot What JoeJulian meant to say was: Won't work with 3.8.0 today though, so stick with 3.7.11.
21:51 gbox JoeJulian: Thanks!  I'm a late adopter so I'm only now moving from 3.7.6 to 3.7.11
21:53 cloph JoeJulian, *: FYI: my geo-replication problem (complaining about no session from peers not taking part in the geo-replication) is solved by forcefully re-creating the replication entry.
21:54 JoeJulian Ah, cool.
21:54 JoeJulian Not cool that it happened, but cool that you found a solution.
21:54 cloph yeah - hope it sticks :-)
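For reference, re-creating a session over the existing names is a 'create ... force'; the names below are placeholders:
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem force
    gluster volume geo-replication mastervol slavehost::slavevol start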
21:56 shyam joined #gluster
22:34 nishanth joined #gluster
22:46 kovshenin joined #gluster
22:47 plarsen joined #gluster
23:02 om joined #gluster
23:03 uosiu joined #gluster
23:10 luizcpg joined #gluster
23:12 pampan hey guys, I'm using 3.5.1 and will update soon, but we can't do it right now. Are you aware of some bugs on this version that could cause healing to basically do nothing? I've added a new brick 'gluster volume add-brick gv-reports-stg replica 3 glusterfs3.stg:/exports/reports-stg/brick', triggered healing with both glfsheal and 'gluster volume heal gv-reports-stg'... but nothing happens!
23:12 pampan am I missing something?
23:13 JoeJulian iirc, 3.5.1 had a ton of bugs. I wouldn't be surprised if that was one.
23:13 JoeJulian As usual, check the log files for clues.
23:14 pampan it does sync some of the files, but not all of them
23:16 JoeJulian It might do them in chunks 10 minutes apart.
23:20 pampan JoeJulian: is there a way to force it not to wait?
23:20 JoeJulian It shouldn't wait.
23:21 JoeJulian I never used that version so I never had to figure out all the workarounds.
23:22 pampan In any case, thanks for the input... just wanted to make sure I wasn't missing anything obvious
23:23 JoeJulian Not to mention I don't really remember details from 2014 bugs. I just remember advising against 3.5.1 a bunch during that period. :/
23:24 JoeJulian But do check your logs. Perhaps there's a simple answer.
23:24 JoeJulian Not connecting or something.
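A couple of commands worth running while checking, sketched against the volume name from above:
    # ask for a full crawl instead of waiting on the index-based heal
    gluster volume heal gv-reports-stg full
    # see what is still pending and whether every brick is online
    gluster volume heal gv-reports-stg info
    gluster volume status gv-reports-stg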
23:24 pampan We should move at least to 3.5.9, but oh well :)
23:24 pampan I don't see anything weird in the logs for the moment
23:25 JoeJulian bug 1115748 maybe
23:25 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1115748 unspecified, unspecified, ---, ravishankar, CLOSED DUPLICATE, Bricks are unsync after recevery even if heal says everything is fine
23:27 pampan I'm gonna check that
23:34 level7_ joined #gluster
23:45 pampan the only warning, not even error, that I'm seeing in the logs is:  0-gv-reports-stg-client-2: remote operation failed: No such file or directory. Path:  (f761f9c8-3f8e-4421-8e8f-492e470ede91)
23:45 pampan I don't think that's the reason 3000 files won't sync
