
IRC log for #gluster, 2016-02-15


All times shown according to UTC.

Time Nick Message
00:03 owlbot joined #gluster
00:07 owlbot joined #gluster
00:11 johnmilton joined #gluster
00:11 owlbot joined #gluster
00:15 owlbot joined #gluster
00:19 owlbot joined #gluster
00:23 owlbot joined #gluster
00:26 rcampbel3 joined #gluster
00:27 owlbot joined #gluster
00:31 owlbot joined #gluster
00:35 owlbot joined #gluster
00:39 owlbot joined #gluster
00:43 owlbot joined #gluster
00:45 gildub joined #gluster
00:47 owlbot joined #gluster
00:51 owlbot joined #gluster
00:55 owlbot joined #gluster
00:59 owlbot joined #gluster
01:03 owlbot joined #gluster
01:07 owlbot joined #gluster
01:11 owlbot joined #gluster
01:15 owlbot joined #gluster
01:19 owlbot joined #gluster
01:20 rcampbel3 joined #gluster
01:23 owlbot joined #gluster
01:24 haomaiwa_ joined #gluster
01:27 owlbot joined #gluster
01:31 owlbot joined #gluster
01:32 Lee1092 joined #gluster
01:35 owlbot joined #gluster
01:39 owlbot joined #gluster
01:43 owlbot joined #gluster
01:47 owlbot joined #gluster
01:48 baojg joined #gluster
01:49 masterzen joined #gluster
01:51 owlbot joined #gluster
01:55 owlbot joined #gluster
01:59 owlbot joined #gluster
02:01 haomaiwa_ joined #gluster
02:01 nishanth joined #gluster
02:03 owlbot joined #gluster
02:06 Guest91645 joined #gluster
02:07 owlbot joined #gluster
02:11 owlbot joined #gluster
02:14 rcampbel3 joined #gluster
02:15 owlbot joined #gluster
02:18 harish joined #gluster
02:19 owlbot joined #gluster
02:23 owlbot joined #gluster
02:28 owlbot joined #gluster
02:32 owlbot joined #gluster
02:36 owlbot joined #gluster
02:40 owlbot joined #gluster
02:46 owlbot joined #gluster
02:48 owlbot joined #gluster
02:48 unlaudable joined #gluster
02:52 owlbot joined #gluster
02:54 volga629_ joined #gluster
02:55 volga629_ Hello everyone, if I have 3 bricks and add another one, should this affect the running virtual machines?
02:55 unlaudable joined #gluster
02:56 owlbot joined #gluster
02:57 arcolife joined #gluster
03:00 owlbot joined #gluster
03:01 haomaiwa_ joined #gluster
03:04 owlbot joined #gluster
03:08 owlbot joined #gluster
03:08 bharata-rao joined #gluster
03:09 rcampbel3 joined #gluster
03:12 owlbot joined #gluster
03:16 owlbot joined #gluster
03:20 owlbot joined #gluster
03:24 owlbot joined #gluster
03:26 vmallika joined #gluster
03:28 owlbot joined #gluster
03:32 owlbot joined #gluster
03:36 owlbot joined #gluster
03:40 owlbot joined #gluster
03:40 rcampbel3 joined #gluster
03:44 owlbot joined #gluster
03:48 owlbot joined #gluster
03:52 owlbot joined #gluster
03:56 owlbot joined #gluster
04:00 owlbot joined #gluster
04:00 ppai joined #gluster
04:01 haomaiwang joined #gluster
04:04 owlbot joined #gluster
04:06 nbalacha joined #gluster
04:08 owlbot joined #gluster
04:08 atinm joined #gluster
04:12 sloop joined #gluster
04:12 owlbot joined #gluster
04:13 haomaiwa_ joined #gluster
04:16 owlbot joined #gluster
04:19 kbyrne joined #gluster
04:20 abyss^ joined #gluster
04:20 owlbot joined #gluster
04:20 nangthang joined #gluster
04:23 gem joined #gluster
04:24 owlbot joined #gluster
04:25 nehar joined #gluster
04:25 hgowtham joined #gluster
04:28 owlbot joined #gluster
04:29 shubhendu joined #gluster
04:31 sakshi joined #gluster
04:32 owlbot joined #gluster
04:33 vimal joined #gluster
04:36 owlbot joined #gluster
04:37 jiffin joined #gluster
04:40 karthikfff joined #gluster
04:40 owlbot joined #gluster
04:44 owlbot joined #gluster
04:48 owlbot joined #gluster
04:49 rafi joined #gluster
04:52 owlbot joined #gluster
04:56 owlbot joined #gluster
04:59 RameshN joined #gluster
05:00 jermudgeon joined #gluster
05:00 owlbot joined #gluster
05:01 haomaiwa_ joined #gluster
05:02 rcampbel3 joined #gluster
05:02 cpetersen joined #gluster
05:03 atrius joined #gluster
05:04 mzink_gone joined #gluster
05:04 owlbot joined #gluster
05:08 owlbot joined #gluster
05:12 owlbot joined #gluster
05:14 aravindavk joined #gluster
05:16 owlbot joined #gluster
05:17 skoduri_ joined #gluster
05:18 karnan joined #gluster
05:20 ppai joined #gluster
05:20 owlbot joined #gluster
05:23 pppp joined #gluster
05:24 poornimag joined #gluster
05:24 owlbot joined #gluster
05:28 owlbot joined #gluster
05:32 owlbot joined #gluster
05:35 ndarshan joined #gluster
05:36 owlbot joined #gluster
05:39 nehar joined #gluster
05:40 owlbot joined #gluster
05:43 kdhananjay joined #gluster
05:44 owlbot joined #gluster
05:45 atalur joined #gluster
05:47 Bhaskarakiran joined #gluster
05:48 owlbot joined #gluster
05:49 baojg joined #gluster
05:52 owlbot joined #gluster
05:54 kotreshhr joined #gluster
05:55 kanagaraj joined #gluster
05:56 owlbot joined #gluster
05:58 hgowtham_ joined #gluster
06:00 owlbot joined #gluster
06:01 haomaiwa_ joined #gluster
06:01 nishanth joined #gluster
06:04 owlbot joined #gluster
06:08 owlbot joined #gluster
06:10 poornimag joined #gluster
06:12 owlbot joined #gluster
06:14 kshlm joined #gluster
06:14 kotreshhr joined #gluster
06:14 haomaiwang joined #gluster
06:16 owlbot joined #gluster
06:17 vmallika joined #gluster
06:20 skoduri joined #gluster
06:20 Apeksha joined #gluster
06:20 owlbot joined #gluster
06:23 gem joined #gluster
06:24 owlbot joined #gluster
06:25 python_lover joined #gluster
06:28 owlbot joined #gluster
06:31 pppp joined #gluster
06:32 owlbot joined #gluster
06:33 poornimag joined #gluster
06:35 ramky joined #gluster
06:36 owlbot joined #gluster
06:40 owlbot joined #gluster
06:44 owlbot joined #gluster
06:49 owlbot joined #gluster
06:53 owlbot joined #gluster
06:57 owlbot joined #gluster
06:59 poornimag joined #gluster
07:01 owlbot joined #gluster
07:01 haomaiwa_ joined #gluster
07:03 purpleidea joined #gluster
07:03 purpleidea joined #gluster
07:03 lanning joined #gluster
07:04 rossdm joined #gluster
07:04 kblin joined #gluster
07:04 kblin joined #gluster
07:04 zerick joined #gluster
07:04 Gugge joined #gluster
07:04 _nixpanic joined #gluster
07:04 delhage joined #gluster
07:04 _nixpanic joined #gluster
07:04 pocketprotector joined #gluster
07:04 jbrooks joined #gluster
07:04 ron-slc joined #gluster
07:04 troj joined #gluster
07:04 natgeorg joined #gluster
07:04 hagarth joined #gluster
07:04 papamoose joined #gluster
07:05 owlbot joined #gluster
07:06 javi404 joined #gluster
07:06 sjohnsen joined #gluster
07:09 owlbot joined #gluster
07:13 owlbot joined #gluster
07:14 jtux joined #gluster
07:15 mhulsman joined #gluster
07:16 BuffaloCN joined #gluster
07:17 owlbot joined #gluster
07:21 owlbot joined #gluster
07:23 hgowtham__ joined #gluster
07:24 robb_nl joined #gluster
07:25 owlbot joined #gluster
07:25 unlaudable joined #gluster
07:27 nangthang joined #gluster
07:29 owlbot joined #gluster
07:33 owlbot joined #gluster
07:36 hgowtham joined #gluster
07:37 owlbot joined #gluster
07:37 [Enrico] joined #gluster
07:38 poornimag joined #gluster
07:41 owlbot joined #gluster
07:42 Bud joined #gluster
07:44 PingLeeCN joined #gluster
07:45 owlbot joined #gluster
07:47 RameshN joined #gluster
07:48 PingLeeCN left #gluster
07:48 Bud Hello all! I'm using gluster on Debian Jessie with a 3 node distributed replica setup. When I want to expand from 2 nodes to 3 nodes, I see that gluster volume status reports a rebalance task, but its status doesn't update from "not started".
07:48 Bud However, the files do get stored into the new replaced-brick
07:48 Bud If I manually perform rebalance, status does get updated.
07:49 owlbot joined #gluster
07:49 Apeksha joined #gluster
07:49 armyriad joined #gluster
07:51 Bud I've found this line in the logs of particular interest when executing the replace-brick .... commit force:
07:51 Bud [2016-02-15 07:44:24.925892] W [socket.c:588:__socket_rwv] 0-glustershd: readv on /var/run/gluster/1f644f37955ef54ef854b71460e17f46.socket failed (Invalid argument)
07:52 Bud which is on both nodes concerned
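For context, a minimal sketch of the commands involved in what Bud describes, assuming a volume named gv0 (a placeholder, not a name from this log):

    # check whether the reported rebalance task is running and how far it has got
    gluster volume rebalance gv0 status

    # kick off a rebalance manually, as Bud ended up doing
    gluster volume rebalance gv0 start

    # after a replace-brick on a replicated volume, it is self-heal rather than
    # rebalance that copies data onto the new brick; its progress shows up here
    gluster volume heal gv0 info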
07:52 pppp joined #gluster
07:53 owlbot joined #gluster
07:57 kanagaraj joined #gluster
07:57 owlbot joined #gluster
08:01 owlbot joined #gluster
08:01 haomaiwa_ joined #gluster
08:05 owlbot joined #gluster
08:09 Saravanakmr joined #gluster
08:09 owlbot joined #gluster
08:10 kovshenin joined #gluster
08:13 owlbot joined #gluster
08:15 mobaer joined #gluster
08:15 mhulsman joined #gluster
08:17 owlbot joined #gluster
08:18 jww joined #gluster
08:20 jiffin joined #gluster
08:20 kanagaraj_ joined #gluster
08:21 owlbot joined #gluster
08:21 [diablo] joined #gluster
08:21 caveat- joined #gluster
08:21 cuqa hello, what's a good way to delete a whole folder from a brick? colleagues thought it might be a good idea to store PHP sessions in glusterfs
08:22 cuqa however, it was not, and now we have two different folders with a lot of different files
08:22 cuqa i changed to memcached and would like to just delete this folder completely inside the brick, because even ls -f1 fails
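A hedged sketch of the safer route for cuqa's situation: deleting through a client mount rather than directly inside the brick, since every file on a brick has a matching gfid hard link under .glusterfs that would otherwise be left behind. Server, volume, and directory names below are placeholders:

    # mount the volume with the native client
    mount -t glusterfs server1:/webvol /mnt/webvol

    # remove the session directory through the mount, so all replicas and the
    # .glusterfs metadata stay consistent (slow for huge directories, but safe)
    rm -rf /mnt/webvol/php-sessions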
08:23 toshywoshy joined #gluster
08:24 Nakiri_ joined #gluster
08:24 Nakiri_ Hello, could someone point me in the right direction?
08:25 owlbot joined #gluster
08:25 Nakiri_ I have glusterfs installed on two servers with replication going on between them. Everything works fine until I shut down one of the peers.
08:25 Nakiri_ Then the other machine goes basically non-responsive for about a minute.
08:26 Nakiri_ Waiting for some kind of timeout?
08:26 Nakiri_ Yet I've done "gluster volume set fs network.ping-timeout "5""
08:26 Nakiri_ Changed nothing.
08:26 Nakiri_ I'd like to pull that down to 10 seconds tops.
08:27 Bud @Nakiri_: have you checked that all hostnames involved are correctly resolvable, and do not point to 127.0.0.1 by mistake ?
08:27 Bud I've had similar issues before ;)
08:27 Nakiri_ I'll try, just a second.
08:28 ahino joined #gluster
08:29 Nakiri_ Alright, I wrote the proper IP-s to hosts file and deleted all 127.0.0.1
08:29 owlbot joined #gluster
08:30 Nakiri_ time ls during other node shutdown still says 1m16
08:30 Bud you should have kept 127.0.0.1 localhost, that one is ok of course
08:30 Nakiri_ I'll put it back, just for testing.
08:31 Nakiri_ The silly thing is that it works perfectly when the two peers are up or after the long timeout period when one of them is shut down.
08:32 fsimonce joined #gluster
08:33 owlbot joined #gluster
08:37 owlbot joined #gluster
08:38 RameshN joined #gluster
08:38 hgowtham joined #gluster
08:40 Nakiri_ Might the problem be that "gluster peer status" is for some reason reporting each other's hostnames as their IPs? And then their actual hostname is listed under "Other names"?
08:41 armyriad joined #gluster
08:41 owlbot joined #gluster
08:41 Bud I don't have an "Other names" in that output
08:45 owlbot joined #gluster
08:48 kmmndr hi all :-)
08:48 Nakiri_ Hello.
08:48 glusterbot Nakiri_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:49 Nakiri_ Alright, I'll try to get the output to proper format then.
08:49 Nakiri_ Thank you for the help.
08:49 owlbot joined #gluster
08:51 ctria joined #gluster
08:53 owlbot joined #gluster
08:54 ivan_rossi joined #gluster
08:57 owlbot joined #gluster
09:01 haomaiwa_ joined #gluster
09:01 owlbot joined #gluster
09:04 ctria joined #gluster
09:05 ctria joined #gluster
09:05 owlbot joined #gluster
09:09 owlbot joined #gluster
09:13 owlbot joined #gluster
09:15 TonyBurn_ joined #gluster
09:16 Slashman joined #gluster
09:16 poornimag joined #gluster
09:16 Bud joined #gluster
09:16 pppp joined #gluster
09:16 Saravanakmr joined #gluster
09:17 jiffin joined #gluster
09:17 kanagaraj_ joined #gluster
09:17 ahino joined #gluster
09:17 RameshN joined #gluster
09:17 owlbot joined #gluster
09:21 owlbot joined #gluster
09:28 owlbot joined #gluster
09:28 om3 joined #gluster
09:28 rcampbel3 joined #gluster
09:29 deniszh joined #gluster
09:29 owlbot joined #gluster
09:30 scubacuda joined #gluster
09:33 owlbot joined #gluster
09:37 owlbot joined #gluster
09:39 unlaudable joined #gluster
09:41 owlbot joined #gluster
09:42 hgowtham_ joined #gluster
09:45 owlbot joined #gluster
09:45 chirino joined #gluster
09:49 owlbot joined #gluster
09:52 nisroc joined #gluster
09:53 owlbot joined #gluster
09:55 owlbot joined #gluster
09:59 owlbot joined #gluster
10:01 haomaiwa_ joined #gluster
10:03 owlbot joined #gluster
10:16 asdfgh joined #gluster
10:20 asdfgh left #gluster
10:24 ravage joined #gluster
10:24 Ravage_ left #gluster
10:25 Jules- joined #gluster
10:35 owlbot joined #gluster
10:43 chirino_m joined #gluster
10:44 arcolife joined #gluster
10:45 Wizek joined #gluster
10:47 muneerse joined #gluster
10:48 harish_ joined #gluster
10:49 Ravage joined #gluster
10:51 Ravage Hi. I re-added a brick to a volume with 2 bricks and replica 2. The self-heal process uses all my CPU cores and the cluster is practically unavailable. Is there a way to limit the impact somehow?
10:52 Ravage Also the bandwidth used for the resync is about 80 Mbit, which is kind of slow for the CPU resources used
10:59 python_lover joined #gluster
11:00 bharata_ joined #gluster
11:01 karthikfff joined #gluster
11:01 haomaiwang joined #gluster
11:02 vmallika joined #gluster
11:09 anil joined #gluster
11:13 post-factum Ravage: show us gluster volume info plz
11:15 Ravage http://nopaste.linux-dev.org/?942800&download
11:28 post-factum number of cpus?
11:29 post-factum incl. hyperthreading if any
11:33 Ravage 2 CPUs. every CPU 16 cores (32 with hyperthreading)
11:33 Ravage sync takes about 1600% CPU
11:33 owlbot` joined #gluster
11:35 post-factum that is because performance.io-thread-count is set to 16 by default
11:35 post-factum 16*100==1600%
11:35 post-factum you need to lower that value in order to calm down cpu
11:35 Ravage ok. but that uses fewer threads in general. is there no way to reduce the load of the self-heal process alone?
11:38 Ravage but thanks for the answer so far. will try this
11:38 post-factum there are also: performance.high-prio-threads, performance.normal-prio-threads, performance.low-prio-threads, performance.least-prio-threads
11:38 post-factum probably, some of them could be related to self-heal
11:39 post-factum another knob: cluster.background-self-heal-count
11:39 post-factum also 16 by default
11:39 post-factum "This specifies the number of self-heals that can be  performed in background without blocking the fop"
11:39 post-factum try lowering this value to 4 first
11:39 Ravage sounds good
11:40 Ravage found the page with the options
11:40 Ravage "undocumented"
11:40 Ravage thanks
11:40 post-factum "sudo gluster volume set help" :)
11:40 Ravage yep will have a deeper look at that too
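A minimal sketch of the knobs post-factum mentions, applied to a hypothetical volume gv0; the values are illustrative starting points, not recommendations from this conversation:

    # fewer I/O threads per brick process, so heals peak at lower CPU
    gluster volume set gv0 performance.io-thread-count 8

    # fewer self-heals running in the background at once
    gluster volume set gv0 cluster.background-self-heal-count 4

    # list all settable options with defaults and descriptions
    gluster volume set help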
11:42 Nakiri_ Hello, my two node gluster is still taking a long time to time out when one of the peers shuts down.
11:42 Nakiri_ About a minute.
11:42 Nakiri_ And even an "ls /" won't work properly while it's timing out.
11:42 Nakiri_ Could I somehow reduce it to like 10 seconds?
11:43 Nakiri_ tried "gluster volume set fs network.ping-timeout "10"" already
11:45 post-factum have you restarted volume and remounted clients?
11:47 Nakiri_ Yes.
11:48 Nakiri_ I had the problem, I pretty much deleted the volume, detached the peers and remade the whole system.
11:48 Nakiri_ And it still happens.
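For reference, a sketch of setting and checking the timeout Nakiri_ is adjusting, using the volume name fs from the log; whether running clients pick it up without a remount depends on the setup:

    # shorten how long clients wait before declaring a disconnected brick dead
    gluster volume set fs network.ping-timeout 10

    # confirm the reconfigured value is recorded on the volume
    gluster volume info fs | grep ping-timeout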
12:01 haomaiwa_ joined #gluster
12:06 hgowtham__ joined #gluster
12:06 msvbhat joined #gluster
12:14 wnlx joined #gluster
12:14 gem joined #gluster
12:16 nehar joined #gluster
12:33 ahino joined #gluster
12:34 wnlx joined #gluster
12:35 jri joined #gluster
12:41 pkoro joined #gluster
12:41 jri Hi all! I'm new to Gluster, and I'm trying to build a webserver cluster with some apps in PHP and others in Python. I have some performance issues on the PHP sites. All the Python/Django apps are running fast. I've noticed that if I mount my volume with NFS the read performance is dramatically better than with glusterfs/fuse.
12:42 jri Does this behaviour seem logical? Or should I configure the gluster/fuse client better? If you have some experience with that
12:42 jri (hope my english level is not too bad...)
12:49 jiffin jri: NFS gives better read performance due to its client-side caching
12:51 Manikandan joined #gluster
12:53 nbalacha joined #gluster
12:55 jri jiffin: oh I didn't know that. So NFS could be a better option for my webserver environment (99% read)? I didn't find much feedback on this
12:59 johnmilton joined #gluster
13:00 jiffin jri: it should be, if you have a lot more reads than writes
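A hedged sketch of the two mount types being compared, with placeholder server, volume, and path names; Gluster's built-in NFS server in this release speaks NFSv3:

    # native FUSE client: no kernel NFS-style read caching, but transparent
    # failover between replicas
    mount -t glusterfs server1:/webvol /var/www/shared

    # Gluster NFS: benefits from the kernel NFS client's read caching, at the
    # cost of a single server endpoint
    mount -t nfs -o vers=3 server1:/webvol /var/www/shared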
13:01 7YUAAXNE8 joined #gluster
13:10 ira joined #gluster
13:13 pablor joined #gluster
13:16 kotreshhr left #gluster
13:17 ninkotech joined #gluster
13:17 ninkotech_ joined #gluster
13:18 volga629_ joined #gluster
13:19 volga629_ Hello everyone, what could be missing here? 0-xlator: /usr/lib64/glusterfs/3.7.6/xlator/cluster/tier.so: undefined symbol: dht_methods
13:22 ndevos volga629_: that seems to have been fixed with http://review.gluster.org/12877 - in 3.7.8 which was released last week
13:22 glusterbot Title: Gerrit Code Review (at review.gluster.org)
13:23 volga629_ oh, I see mine is still glusterfs-libs-3.7.6-2.fc23.x86_64
13:23 volga629_ so I need to wait
13:24 ndevos it's almost there, see https://apps.fedoraproject.org/packages/glusterfs/overview/ - you can install the version from the updates-testing repo
13:24 glusterbot Title: Package glusterfs (at apps.fedoraproject.org)
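A sketch of pulling the fixed build from updates-testing on Fedora 23, assuming dnf; the exact version available depends on when the update lands in the repo:

    # upgrade the gluster packages from the testing repo
    dnf --enablerepo=updates-testing upgrade 'glusterfs*'

    # restart the daemon so the new xlators are loaded
    systemctl restart glusterd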
13:25 volga629_ When I add or remove a brick, I see auto-heal kick in and then libvirt can't connect to some VMs.
13:25 volga629_ but there is no split-brain or anything like that
13:26 volga629_ is this expected behaviour when adding or removing a brick from the pool?
13:27 chirino joined #gluster
13:28 ndevos not really, when you do a remove-brick, the files on that brick should get migrated to other bricks, without applications noticing anything
13:30 volga629_ that's what I see each time I remove or add a brick. right now I have 4 bricks, but later today I'm going to remove one for maintenance (NIC replacement on the server). I just want to make sure it stays up
13:33 arcolife joined #gluster
13:34 volga629_ how can I diagnose this issue?
13:36 luizcpg joined #gluster
13:37 kdhananjay volga629_: What's the version you are using?
13:38 volga629_ just updated to glusterfs-server-3.7.8-1.fc23.x86_64
13:39 kdhananjay volga629_: did you attempt removal of one brick from a replica set or did you attempt removal of the whole set?
13:39 volga629_ I added one brick
13:39 kdhananjay volga629_: rather, could you share the volume info output and the command line you used to invoke remove-brick?
13:39 kdhananjay ...
13:39 volga629_ ok just
13:42 pablor joined #gluster
13:43 volga629_ http://fpaste.org/322837/45554381/
13:43 glusterbot Title: #322837 Fedora Project Pastebin (at fpaste.org)
13:44 volga629_ gluster volume add-brick datapoint02 replica 4 srv4:/var/lib/vm_store/tmp force
13:44 bluenemo joined #gluster
13:45 volga629_ sometimes in log I see W [socket.c:869:__socket_keepalive] 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 36, Invalid argument
13:45 volga629_ no sure if it related
13:46 luizcpg joined #gluster
13:46 unclemarc joined #gluster
13:46 kdhananjay volga629_: just trying to understand the test better
13:46 kdhananjay volga629_: so the replica count was 3 initially and you wanted to make it 4, is that correct?
13:46 volga629_ yes
13:47 kdhananjay volga629_: and then you did a heal-full?
13:47 volga629_ no I didn't engage the heal procedure
13:48 kdhananjay volga629_: ok what happened after add-brick ?
13:48 volga629_ when I attached the brick I did gluster volume heal datapoint02 info and I saw VMs in heal state
13:48 kdhananjay volga629_: ok ...
13:48 volga629_ right now I tried the same command and it just hangs in the shell with no output
13:49 kdhananjay volga629_: which command? heal-info?
13:49 volga629_ yes
13:49 kdhananjay volga629_: that is interesting, because this hang issue was actually fixed in 3.7.8
13:49 volga629_ it just came back and it says 12 entries again
13:49 volga629_ in heal state
13:49 kdhananjay volga629_: phew! great. :)
13:50 kdhananjay volga629_: ok so what is the issue you are seeing then?
13:51 volga629_ that's what I see
13:51 volga629_ http://fpaste.org/322840/55442661/
13:51 glusterbot Title: #322840 Fedora Project Pastebin (at fpaste.org)
13:51 volga629_ and it's not in split-brain
13:52 volga629_ that causes libvirt to hang and the virtual machine to stop
13:52 chromatin joined #gluster
13:52 volga629_ I saw exactly the same when I tried to add or remove a brick, where the heal procedure kicks in automatically
13:53 kdhananjay volga629_: when you add a new brick, it is clean, with no data.
13:54 kdhananjay volga629_: and the remaining three bricks contain data.
13:54 kdhananjay volga629_: since this fourth brick is also one of the replicas, it also needs to have the contents of the remaining three bricks healed into it
13:54 volga629_ yes, I thought it should copy the data without locking?
13:54 kdhananjay volga629_: does that make sense?
13:54 kdhananjay volga629_: ok can you try the following:
13:55 volga629_ yes, I follow what you saying
13:55 kdhananjay volga629_:  for i in data metadata entry; do gluster volume set <VOLNAME> $i-self-heal off; done
13:55 kdhananjay volga629_:  and tell me if this fixes the issue for you
13:55 volga629_ ok
13:56 kdhananjay volga629_: this should fix the hang in my opinion.
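Expanded, and assuming the volume from this conversation (datapoint02), the loop above amounts to these three commands, which disable the client-side (mount-triggered) self-heals while the self-heal daemon keeps healing in the background:

    gluster volume set datapoint02 data-self-heal off
    gluster volume set datapoint02 metadata-self-heal off
    gluster volume set datapoint02 entry-self-heal off

These show up later in gluster volume info as cluster.*-self-heal: off.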
13:56 kdhananjay volga629_: just curious, how big are your files?
13:56 volga629_ some vm about 50GB
13:57 volga629_ any heal command hangs
13:57 haomaiwa_ joined #gluster
13:57 kdhananjay volga629_: for how long?
13:57 volga629_ Brick srv02:/var/lib/vm_store/tmp
13:57 volga629_ Failed to process entries completely. Number of entries so far: 0
13:57 kdhananjay volga629_: ok there should be something in the logs.
13:57 volga629_ right now it has been sitting for 30 sec
13:57 volga629_ let me try the for loop
13:57 pablor hi
13:57 glusterbot pablor: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:59 pablor I have 6 node gluster with 2 replicas.  What is the best way to take one of the nodes for maintenance who requires a reboot?
13:59 volga629_ I just see this in the log: http://fpaste.org/322845/45554476/
13:59 glusterbot Title: #322845 Fedora Project Pastebin (at fpaste.org)
13:59 volga629_ so
14:00 kdhananjay volga629_: no that's glusterd logs.
14:00 kdhananjay volga629_: what does glfs-heal*.log say?
14:01 haomaiwang joined #gluster
14:02 mhulsman joined #gluster
14:02 kdhananjay pablor: you mean a 3x2 volume?
14:02 volga629_ in this log there are just informational messages, I don't see any warnings or errors
14:03 kdhananjay volga629_: ok then its fine. so are the vms functioning normally now?
14:03 volga629_ checking
14:04 pablor yes.3x2
14:04 kdhananjay pablor: do a gluster volume heal <VOL> info
14:04 volga629_ no, let me show a screenshot
14:05 kdhananjay pablor: it should show number of entries as '0'
14:06 pablor Number of entries: 0 for the 6 bricks
14:06 cyberbootje joined #gluster
14:06 jri_ joined #gluster
14:06 volga629_ https://www.dropbox.com/s/9cqriwvity2c316/Screenshot_canlmail01_2016-02-15_10%3A05%3A41.png?dl=0
14:07 pablor Currently the gluster is OK. The idea is to apply a firmware update to one of the nodes, which requires a reboot.
14:08 volga629_ it doesn't stay up even with 4 bricks in the pool
14:09 kdhananjay pablor: then you can bring one of the nodes down after checking heal info shows 0 entries.
14:10 kdhananjay volga629_: i dont know how to interpret the screenshot :). are the vms working?
14:10 kdhananjay volga629_: on a related note, you might want to try sharding feature. It fits your use case. :) Here's some doc: http://blog.gluster.org/2015/12/introducing-shard-translator/
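A hedged sketch of enabling the sharding feature mentioned above; the block size shown is only an example, and sharding only applies to files created after it is turned on, so it is normally enabled on a fresh volume before the VM images are copied in:

    gluster volume set myvol features.shard on
    gluster volume set myvol features.shard-block-size 64MB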
14:10 volga629_ that screenshot shows the VM is dead and can't write to its virtual disk
14:11 kdhananjay volga629_: What does dead mean? Does reboot of the vm  work?
14:11 pablor Ok. Do I need to do anything special, or just stop the services and reboot the node that requires maintenance?
14:12 volga629_ it can easily break data on a VM, especially if it's a DB. I need to understand first what is not right in the current setup
14:12 kdhananjay volga629_: ok what does the gluster client log say? any warnings/errors?
14:12 volga629_ heal commands hangs
14:13 Wizek joined #gluster
14:13 volga629_ in heal nothing special
14:13 kdhananjay pablor: nope, that should work.
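A rough outline of the maintenance sequence being described, assuming systemd and a placeholder volume name; exact service handling can differ between distributions:

    # before the reboot: make sure nothing is pending heal
    gluster volume heal myvol info    # every brick should report 0 entries

    # stop gluster on the node being serviced, then reboot it
    systemctl stop glusterd
    reboot

    # after it returns: let self-heal catch up, then verify again
    gluster volume heal myvol info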
14:13 volga629_ [root@n ~]#  gluster volume heal datapoint02 info split-brain
14:13 volga629_ that will sit forever
14:15 volga629_ I checked all VMs and the whole pool has the disk-write problem, which is 18 VMs
14:17 volga629_ the heal commands hang not on all bricks, only on 2 of 4
14:17 volga629_ Failed to process entries completely. Number of entries so far: 0
14:17 harish_ joined #gluster
14:21 volga629_ what does "Lock for vol datapoint02 not held" mean?
14:22 volga629_ dead means that the VM can't write to its disk anymore, it's read-only
14:24 archit_ joined #gluster
14:24 ron-slc joined #gluster
14:25 Slashman joined #gluster
14:25 karnan joined #gluster
14:25 plarsen joined #gluster
14:25 kdhananjay joined #gluster
14:25 kdhananjay (07:50:17  IST) kdhananjay: volga629_: ok, there will be a glfs-heal*.log under /var/log/glusterfs
14:25 kdhananjay (07:50:28  IST) kdhananjay: volga629_: could you tell me what you see in the file?
14:25 kdhananjay (07:52:01  IST) kdhananjay: volga629_: sorry, it would be glfsheal-<volname>.log
14:25 volga629_ yes I can paste 100 lines
14:28 volga629_ http://fpaste.org/322864/14555465/
14:28 glusterbot Title: #322864 Fedora Project Pastebin (at fpaste.org)
14:29 kdhananjay volga629_: there are no errors in the logs :(
14:29 volga629_ yes
14:29 plarsen joined #gluster
14:30 volga629_ I need to know how to troubleshoot this, I can't afford the infrastructure going up and down
14:31 volga629_ not sure what the issue is, why it can't stay up or why the heal options are causing the data to lock
14:34 pranithk joined #gluster
14:35 kdhananjay volga629_: i need to head home now. mind if i get back to you in 1.5 hours?
14:35 volga629_ sure, I am here
14:36 kdhananjay volga629_: great. thanks. i'll see you in some time
14:36 volga629_ I need to recover some VMs. I tried to start the firewall VM and it complained about a bad sector
14:36 volga629_ no problem  thanks for help
14:38 hamiller joined #gluster
14:39 pranithk volga629_: Why did you want to make it 4-way replication? Do you want to reduce again to 3-way replication in future by doing remove brick?
14:39 mobaer joined #gluster
14:40 volga629_ yes
14:40 volga629_ we are moving some data around and need to bring one server down today
14:40 pranithk volga629_: so you just want to replace the brick? why are you not using replace-brick command?
14:41 volga629_ because the server will go away completely from this pool
14:42 volga629_ production should be only 3 for right now
14:42 pranithk volga629_: let us say you have 3 servers, A, B, C. Now what will you do?
14:42 volga629_ and D
14:43 pranithk volga629_: okay
14:43 pranithk volga629_: Then? which server do you want to remove?
14:43 pranithk volga629_: let's say 'C'
14:43 volga629_ then I tried to remove C and self-heal kicked in; not sure why that locked all VMs for writes
14:44 volga629_ and the heal commands hang in the terminal
14:44 pranithk volga629_: So essentially you wanted to replace C with D?
14:44 volga629_ yes, correct. That causes libvirt to hang too.
14:44 pranithk volga629_: how did you execute the commands?
14:45 volga629_ gluster volume heal datapoint02 info
14:45 volga629_ that hangs
14:45 pranithk volga629_: No, No. You did add brick of D and then doing remove-brick of C. What we test a lot is replace-brick of C with D
14:46 volga629_ yes, you're correct, I added D and then removed C
14:46 pranithk volga629_: my question is why are you not doing replace-brick. Why are you doing add-brick then remove-brick
14:46 volga629_ I've never tried replace-brick
14:47 volga629_ https://www.dropbox.com/s/9cqriwvity2c316/Screenshot_canlmail01_2016-02-15_10%3A05%3A41.png?dl=0
14:47 pranithk volga629_: Ah! :-). For this use case we suggest replace-brick. We also test it a lot more than increasing and reduce replica count.
14:48 volga629_ then my question is: when adding or removing a brick, it should not lock the VMs from writing, as on the screenshot
14:48 aravindavk joined #gluster
14:48 MessedUpHare joined #gluster
14:48 volga629_ it should just be copying the data to the new disk
14:48 volga629_ is this correct?
14:49 pranithk volga629_: That is true. We will definitely work on fixing it. But your usecase doesn't need add then remove brick. I am suggesting that if you use replace-brick then everything should work fine for you with 3.7.8
14:49 volga629_ or do I need to shut down all VMs before adding or removing a brick?
14:51 pranithk volga629_: Don't do add-brick remove-brick. There seems to be a bug, that we need to fix. Use replace-brick until then.
14:52 volga629_ replace-brick only helps if I need to replace an existing brick. But if I just need to remove a brick or add a new one, for example a 4th or 5th brick, that's not going to help
14:52 pranithk volga629_: That is true. We must fix this bug. But at the moment you seem to want to remove the brick also right?
14:52 volga629_ right now I need to see how to recover and why the heal commands hang
14:52 volga629_ yes, you're correct
14:53 pranithk volga629_: Could you use replace-brick for now? It should work perfectly for your case. We test that command a lot
14:53 volga629_ I need to see first what the heal state says
14:53 nehar joined #gluster
14:54 pranithk volga629_: do "gluster volume replace-brick <volname> C:/brick D:/brick commit force"
14:54 pranithk volga629_: Oh this is not test setup?
14:54 volga629_ no
14:54 volga629_ that brought down production
14:54 pranithk volga629_: okay got it.
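Filled in with names patterned on this conversation (the brick path from the earlier add-brick command; srv03 as the brick being retired and srv4 as its replacement are illustrative, not confirmed by the log), the suggested command would look roughly like:

    gluster volume replace-brick datapoint02 \
        srv03:/var/lib/vm_store/tmp srv4:/var/lib/vm_store/tmp \
        commit force

The volume stays online and the self-heal daemon then copies the retired brick's data onto the replacement.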
14:56 volga629_ you see this
14:56 volga629_ http://fpaste.org/322887/45554815/
14:56 glusterbot Title: #322887 Fedora Project Pastebin (at fpaste.org)
14:56 volga629_ that causes libvirt and the VMs to go down
14:56 pranithk volga629_: heal info doesn't cause pause
14:57 pranithk volga629_: add-brick/remove-brick causes it
14:57 pranithk volga629_: We saw a similar issue reported on gluster-users mailing list by Lindsay. We are working on this issue. We are yet to find why it leads to paused VMs
14:57 bowhunter joined #gluster
14:57 volga629_ yes, I understand. That just says which VMs are in trouble, and when I check, that's exactly what it is
14:57 pranithk volga629_: As a workaround we suggested replace-brick for those users who faced this
14:58 volga629_ I just need to know how to recover for right now
14:59 pranithk volga629_: You said the production is already down right?
15:00 volga629_ yes
15:00 ira joined #gluster
15:00 pranithk volga629_: In that case can you check if the first two bricks have same VMs?
15:00 pranithk volga629_: I mean the ones which are always up
15:00 volga629_ you see this, it says that it can't get info about split-brain on 2 nodes http://fpaste.org/322888/54839114/
15:00 glusterbot Title: #322888 Fedora Project Pastebin (at fpaste.org)
15:00 volga629_ let me check
15:01 pranithk volga629_: I want you to confirm that the md5sum of the two VM images on first two bricks match
15:01 volga629_ ok
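A sketch of the check being asked for, run directly against the brick directories on the first two servers; the image name is a placeholder and the brick path is the one visible earlier in the log:

    # on srv01
    md5sum /var/lib/vm_store/tmp/vm-image.qcow2

    # on srv02
    md5sum /var/lib/vm_store/tmp/vm-image.qcow2

    # the sums should match if the replicas are in sync; hashing a 50GB image
    # can take several minutes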
15:04 baojg joined #gluster
15:06 pranithk volga629_: I need to leave for home. It is 8:35 PM here in India... How long do you think this may take? I mean the md5sum... 50GB vms right?
15:06 volga629_ I checked the two virtual disks with md5sum
15:06 volga629_ and they look the same
15:06 volga629_ yes
15:07 kanagaraj joined #gluster
15:07 volga629_ this hangs: md5sum camailsrv01.qcow2
15:07 volga629_ this disk is in heal state
15:08 pranithk volga629_: Did the VMs pause or are they hung?
15:08 volga629_ hangs
15:08 volga629_ hung
15:08 coredump joined #gluster
15:09 pranithk volga629_: Did you disable heals from mount?
15:09 pranithk volga629_: I mean in gluster volume info, do you see cluster.data-self-heal: off ?
15:09 volga629_ cluster.entry-self-heal: off
15:09 volga629_ cluster.data-self-heal: off
15:09 volga629_ cluster.metadata-self-heal: off
15:09 volga629_ that on
15:10 pranithk volga629_: Could you please unmount and mount the volume?
15:10 pranithk volga629_: From wherever it is mounted right now?
15:10 volga629_ it's mounted only by libvirt
15:10 volga629_ I guess it uses a direct URL
15:10 pranithk volga629_: okay, how do you ask libvirt to remount the volume?
15:10 volga629_ restart libvirt
15:11 pranithk volga629_: Could you do that and see if your VMs start working?
15:11 volga629_ let me try
15:11 volga629_ in libvirt I see
15:11 volga629_ libvirtd[7275]: unable to read 'canlmsg01.qcow2': Input/output error
15:12 pranithk volga629_: okay, don't worry about it. Just remount
15:12 volga629_ ok just restarted
15:13 pranithk volga629_: Start VMs and see if it still hangs
15:13 volga629_ Error starting domain: failed to initialize gluster connection to server: 'srv01': Invalid argument
15:14 pranithk volga629_: What do you see in the logs?
15:14 volga629_ let me check
15:16 Humble joined #gluster
15:16 volga629_ glfsheal-datapoint02.log only informational messages
15:16 pranithk volga629_: Are you using gfapi based mount or fuse mount for libvirt?
15:17 volga629_ I know that until the heal finishes, libvirt will not be able to start any VM
15:17 volga629_ pretty sure it uses a URL like gluster://ip/vol_name/file
15:18 pranithk volga629_: okay, that is gfapi based mount.
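Since libvirt reaches the volume over gfapi (a gluster:// URL rather than a FUSE mount), the connection can also be exercised outside libvirt. A hedged sketch, assuming qemu-img was built with gluster support and using a placeholder image name with the host/volume names from this log:

    # probe the image over gfapi, bypassing libvirt entirely;
    # an error here points at gluster/gfapi rather than at libvirt
    qemu-img info gluster://srv01/datapoint02/vm-image.qcow2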
15:18 pranithk volga629_: All the VMs are giving Invalid argument when you try to restart?
15:19 volga629_ most of them, yes
15:19 volga629_ libvirtd[16703]: failed to connect to gluster://srv01/datapoint02/: Invalid argument
15:20 pranithk volga629_: Okay, it seems like the volume is not even mounting
15:20 nangthang joined #gluster
15:20 volga629_ until the heal is completed, libvirt will not be able to do any VM manipulation
15:21 pranithk volga629_: That is not true. I am a developer, and wrote that code. Believe me
15:21 volga629_ ok, I'm just sharing my observations
15:21 ekuric1 joined #gluster
15:22 pranithk volga629_: Is srv01 still in cluster?
15:22 volga629_ yes
15:22 volga629_ gluster peer status
15:22 pranithk volga629_: could you give the output of `gluster peer status`?
15:23 rwheeler joined #gluster
15:24 volga629_ http://fpaste.org/322898/54983314/
15:24 glusterbot Title: #322898 Fedora Project Pastebin (at fpaste.org)
15:24 pranithk volga629_: As per the logs, it is not able to connect to srv01
15:24 volga629_ yes
15:24 pranithk volga629_: What is the volume name?
15:25 julim joined #gluster
15:25 volga629_ datapoint02
15:25 rideh- joined #gluster
15:26 PsionTheory joined #gluster
15:26 pranithk volga629_: Invalid argument can only come if the volume doesn't exist on srv01
15:26 volga629_ in the peer list, do I need to see all nodes, or the nodes excluding the node itself?
15:27 pranithk volga629_: They exclude themselves
15:27 pranithk volga629_: Could you check the output of "gluster volume info" on srv01 machine?
15:27 volga629_ ok, then that's correct
15:28 pranithk volga629_: Could you check the output of "gluster volume info" on srv01 machine?
15:28 volga629_ http://fpaste.org/322902/50128145/
15:28 glusterbot Title: #322902 Fedora Project Pastebin (at fpaste.org)
15:31 pranithk volga629_: It is not able to mount the volume for some reason :-(
15:31 volga629_ yes
15:31 volga629_ I don't see anything in the logs which gives any indication about it
15:31 pranithk volga629_: on which machine do you have libvirt running? srv01/2/3/4?
15:32 volga629_ all except 4
15:32 volga629_ that one is just for backup
15:32 volga629_ the production servers are 1, 2 and 3
15:32 coredump|br joined #gluster
15:33 dgandhi joined #gluster
15:34 pranithk volga629_: Do you have libvirt log? what does it say?
15:35 volga629_ something like this
15:35 volga629_ 23436: info : virObjectNew:202 : OBJECT_NEW: obj=0x564604ed4770 classname=virDomainEventIOError
15:36 volga629_ 2016-02-15 15:35:01.546+0000: 23436: info : qemuMonitorJSONIOProcessLine:201 : QEMU_MONITOR_RECV_EVENT: mon=0x7fd848001020 event={"timestamp": {"seconds": 1455550501, "microseconds": 546318}, "event": "BLOCK_IO_ERROR", "data": {"device": "drive-ide0-0-0", "nospace": false, "reason": "Operation not permitted", "operation": "read", "action": "report"}}
15:36 unclemarc joined #gluster
15:36 volga629_ "reason": "Operation not permitted" - that means the virtual disk is locked
15:36 pranithk volga629_: It says Operation not permitted here but says Invalid operation on the application :-/
15:38 pranithk volga629_: I am not sure if these logs correspond to the error we are seeing. We should have something with invalid argument.
15:38 pranithk volga629_: Could you do 'grep -i invalid <log-file-path>'?
15:38 volga629_ is this gluster log ?
15:39 pranithk volga629_: Where does gluster logs of the gfapi based mount come?
15:39 volga629_ qemu is the emulator for the VM, but libvirt provides the API and manages the hypervisor
15:40 volga629_ no, I don't see any logs about mount attempts
15:40 pranithk volga629_: When we restarted libvirt, it connects to gluster again (This is similar to mounting)
15:41 pranithk volga629_: so check in libvirt
15:41 volga629_ yes
15:41 farhoriz_ joined #gluster
15:43 pranithk volga629_: so? do you see anything similar? I mean any message about 'Invalid' etc?
15:44 volga629_ I am checking libvirt log
15:44 pranithk volga629_: is 'grep -i invalid <log-file>' give any output?
15:45 volga629_ I don't see anything like this
15:47 pranithk volga629_: that is strange :-(.
15:47 volga629_ yes
15:47 pranithk volga629_: Then where is invalid argument coming from :-(
15:48 pranithk volga629_: We checked that the glusterds are present. The only thing missing is the connection to gluster
15:48 pranithk volga629_: okay my ride is here. I will come back in 40 minutes
15:48 volga629_ sure thank you for all help
16:01 wushudoin joined #gluster
16:01 hagarth joined #gluster
16:01 robb_nl joined #gluster
16:03 kdhananjay joined #gluster
16:04 wushudoin joined #gluster
16:05 kdhananjay volga629_: back, there?
16:05 volga629_ yes
16:05 volga629_ we also troubleshoot with <pranithk>
16:05 kdhananjay volga629_: right, he's my colleague. :)
16:06 kdhananjay volga629_: so what's happening?
16:06 volga629_ after looking it up, I think in some cases libvirt is failing to mount gluster
16:06 kdhananjay volga629_: did you restart libvirt service?
16:07 volga629_ and after a few restarts of libvirt the VMs start coming back, but only the ones not marked as undergoing heal in the heal status
16:08 luizcpg joined #gluster
16:08 kdhananjay volga629_: how many vms do you have?
16:08 volga629_ on one server 18
16:09 mhulsman joined #gluster
16:09 volga629_ error like this
16:09 volga629_ error : virStorageFileBackendGlusterInit:628 : failed to initialize gluster connection to server: 'srv1': No such file or directory
16:09 unclemarc joined #gluster
16:09 wushudoin joined #gluster
16:10 kdhananjay volga629_: is that libvirt log?
16:10 volga629_ yes
16:10 kdhananjay volga629_: ok checking ... give me a min
16:12 kdhananjay volga629_: could you go to srv1
16:12 kdhananjay volga629_: and check the glusterd log
16:12 pdrakeweb joined #gluster
16:12 kdhananjay volga629_: and tell me if you see anything?
16:12 kdhananjay volga629_: any errors/warnings i mean of course
16:13 atinm joined #gluster
16:13 volga629_ [2016-02-15 15:08:15.383249] W [MSGID: 108034] [afr-self-heald.c:445:afr_shd_index_sweep] 0-datapoint02-replicate-0: unable to get index-dir on datapoint02-client-1
16:14 kdhananjay volga629_: etc-glusterfs-glusterd.vol.log
16:15 kdhananjay volga629_: i want the log from srv1 in etc-glusterfs-glusterd.vol.log
16:15 volga629_ in the log etc-glusterfs-glusterd.vol.log there are no warnings or errors
16:15 kdhananjay volga629_: ok checking .
16:15 volga629_ do you want the whole log?
16:16 kdhananjay volga629_: not whole log, maybe the tail of the file
16:16 volga629_ there was a warning earlier today
16:17 volga629_ [2016-02-15 13:31:07.500727] W [glusterd-locks.c:681:glusterd_mgmt_v3_unlock] (-->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4b) [0x7f8e82aae04b] -->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x14a) [0x7f8e82ab78aa] -->/usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x4c3) [0x7f8e82b4ba63] ) 0-management: Lock f
16:17 volga629_ or vol datapoint02 not held
16:17 glusterbot volga629_: ('s karma is now -122
16:17 volga629_ that was before the update
16:18 volga629_ also
16:18 volga629_ [2016-02-15 13:31:17.646309] W [socket.c:869:__socket_keepalive] 0-socket: failed to set TCP_USER_TIMEOUT -1000 on socket 5, Invalid argument
16:18 mhulsman joined #gluster
16:19 kdhananjay volga629_: could you restart glusterd on srv1 and then start the vm one more time
16:19 kdhananjay ?
16:19 volga629_ ok
16:21 volga629_ here are all errors or warnings from etc log
16:21 volga629_ http://fpaste.org/322922/55327914/
16:21 glusterbot Title: #322922 Fedora Project Pastebin (at fpaste.org)
16:22 kdhananjay volga629_: was this before restarting glusterd or after?
16:22 volga629_ that was before, I am restarting right now
16:23 kdhananjay volga629_: ah ok.
16:24 volga629_ that's interesting: [2016-02-15 16:23:08.257344] E [socket.c:2965:socket_connect] 0-management: Failed to set keep-alive: Invalid argument
16:27 volga629_ after the restart the VMs start booting up
16:27 kdhananjay volga629_: wow! good news! :)
16:27 JoeJulian kdhananjay: been stable since we talked last.
16:28 kdhananjay JoeJulian: Awesome :)
16:28 volga629_ that good
16:28 volga629_ yes
16:28 ivan_rossi left #gluster
16:28 kdhananjay volga629_: so all ok now? :)
16:28 volga629_ under heal info only one disk left
16:28 volga629_ canldbm01_disk02.qcow2 - Possibly undergoing heal
16:29 kdhananjay volga629_: no split-brains right?
16:29 volga629_ no split-brain, it's coming back, yes, but the question is why it happened and why 4 or 3 bricks were not enough to stay stable
16:30 kdhananjay volga629_: it looks like a connection/network issue to me.
16:30 kdhananjay volga629_: i mean the part after restarting libvirtd
16:30 volga629_ that's almost impossible, they're in the same datacenter, just different racks
16:31 volga629_ ping is at most 0.6 ms
16:31 kdhananjay volga629_: libvirtd was not able to talk to the glusterd in srv1
16:31 volga629_ that the one
16:31 volga629_ yes too
16:31 kdhananjay volga629_: which is why we saw all those errors and warnings from the glusterd logs about tcp timeout etc
16:31 volga629_ I am not sure if it has anything to do with the healing procedure
16:31 JoeJulian kdhananjay: when you get a chance, I'm just curious about this lock entry. http://fpaste.org/322927/45555377/ It looks like it has three locks, one one of them valid and one of them negative. Is this expected behavior, or is this another bug?
16:31 glusterbot Title: #322927 Fedora Project Pastebin (at fpaste.org)
16:32 kdhananjay volga629_: i didnt understand
16:33 kdhananjay JoeJulian: they are all ACTIVE, which means none of them is blocked for certain.
16:33 kdhananjay JoeJulian: how can you tell if a lock is negative, just curious
16:33 volga629_ when I used add-brick and remove-brick instead of replace-brick, I saw the self-healing start by itself and right after that the VMs started getting a read-only file system
16:34 volga629_ and that hung all the VMs
16:34 JoeJulian Oh, maybe not negative. I was guessing but 9223372036854775805 = 0x7ffffffffffffffd.
16:36 volga629_ that's the ping: 64 bytes from ip : icmp_seq=2 ttl=63 time=0.149 ms
16:40 kdhananjay JoeJulian: thats LLONG_MAX - 2
16:40 JoeJulian Have we disabled client-side self-heals like I had to?
16:40 kdhananjay JoeJulian: which is perfectly normal, metadata transactions in afr take lock in that range.
16:41 JoeJulian Ah, I see now. Cool.
16:41 shubhendu joined #gluster
16:41 kdhananjay volga629_: i have had two more people report issues with add-brick and remove-brick.
16:42 kdhananjay volga629_: we need to recreate it and debug it. but is replace-brick a good enough workaround until then for you?
16:43 volga629_ my plan is to remove one brick so that production will have 3 for right now. Hope the volume will stay up
16:43 volga629_ does the vol info look ok in terms of options?
16:45 volga629_ I think 3 bricks should be enough to keep the pool up and running
16:45 kdhananjay volga629_: yes 3 bricks should be fine
16:50 kdhananjay volga629_: you intend to use 4 bricks always? or it was just a temporary state?
16:50 kdhananjay volga629_: i mean what was the initial brick count in your volume?
16:50 volga629_ initial count was 3
16:51 kdhananjay volga629_: ok and you wanted to replace a brick or replace a node?
16:51 volga629_ yes, I went to 4 so that one of the 3 can be removed
16:51 volga629_ in libvirtd log I see
16:51 volga629_ warning : virKeepAliveTimerInternal:143 : No response from client 0x5565d7620420 after 5 keepalive messages in 30 seconds
16:53 unclemarc joined #gluster
16:53 jwd joined #gluster
16:54 wushudoin joined #gluster
16:54 kdhananjay volga629_: ive not worked on these either before. just googling this message to see why it occurs
16:56 kdhananjay volga629_: it seems to be just a warning.
16:56 kdhananjay volga629_: are you facing any issues with vms?
16:57 volga629_ on the first node I needed to fix a few tables in a database
17:01 hagarth joined #gluster
17:03 EinstCra_ joined #gluster
17:03 kdhananjay volga629_: database? i didnt understand.
17:03 volga629_ the database VM had a few tables which needed to be fixed after recovery
17:03 JoeJulian I suspect he's referring to a database on the vm image.
17:06 kdhananjay JoeJulian: Oh ok.
17:06 volga629_ another warning in log
17:06 volga629_ W [MSGID: 108034] [afr-self-heald.c:445:afr_shd_index_sweep] 0-datapoint02-replicate-0: unable to get index-dir on datapoint02-client-2
17:06 ChrisHolcombe joined #gluster
17:07 kdhananjay volga629_: that log doesn't look right.
17:07 kdhananjay volga629_: did you modify anything in the brick backend? like rm -rf under bricks?
17:07 volga629_ no
17:07 volga629_ ?
17:08 mobaer joined #gluster
17:08 volga629_ http://fpaste.org/322962/55560831/
17:08 glusterbot Title: #322962 Fedora Project Pastebin (at fpaste.org)
17:08 volga629_ that's production, rm -f is prohibited :-)
17:08 kdhananjay volga629_: :)
17:10 kdhananjay volga629_: can you share output of /var/lib/glusterd/vols/<volname>/trusted-<volname>.tcp-fuse.vol
17:10 volga629_ ok
17:13 volga629_ http://fpaste.org/322965/55642514/
17:14 glusterbot Title: #322965 Fedora Project Pastebin (at fpaste.org)
17:16 kdhananjay volga629_: in the brick on srv02, can you do an ls on <brick-path>/.glusterfs/indices/ ?
17:16 volga629_ ok
17:18 volga629_ http://fpaste.org/322971/55671314/
17:18 glusterbot Title: #322971 Fedora Project Pastebin (at fpaste.org)
17:19 pablor joined #gluster
17:21 kdhananjay volga629_: ok so the index directory is there, its just that glustershd is not able to detect that
17:25 kdhananjay volga629_: ok so now that your vms are back in action, what is it that is still not working?
17:27 kdhananjay volga629_: where are you from, by the way?
17:30 volga629_ trying to bring up the second node
17:30 B21956 joined #gluster
17:38 caitnop joined #gluster
17:41 rafi joined #gluster
17:42 nishanth joined #gluster
17:44 raghu` joined #gluster
17:49 papamoose joined #gluster
17:56 mhulsman joined #gluster
18:10 raghu` left #gluster
18:15 rafi joined #gluster
18:22 rafi joined #gluster
18:23 jiffin joined #gluster
18:24 lchabert joined #gluster
18:24 lchabert exit
18:24 rafi joined #gluster
18:24 lchabert left #gluster
18:25 lchabert joined #gluster
18:25 lchabert left #gluster
18:42 DV joined #gluster
18:43 skoduri joined #gluster
18:53 rafi joined #gluster
19:03 ahino joined #gluster
19:04 jwaibel joined #gluster
19:05 jwd_ joined #gluster
19:24 hagarth joined #gluster
19:28 rafi1 joined #gluster
19:30 jiffin joined #gluster
19:51 hagarth joined #gluster
19:55 haomaiwang joined #gluster
19:59 shaunm joined #gluster
20:13 deniszh joined #gluster
20:18 deniszh joined #gluster
20:39 mhulsman joined #gluster
20:40 illogik joined #gluster
21:01 ctria joined #gluster
21:10 BuffaloCN joined #gluster
21:16 Reiner030 joined #gluster
21:17 BuffaloCN joined #gluster
21:18 mhulsman joined #gluster
21:24 ovaistariq joined #gluster
21:31 wushudoin joined #gluster
21:33 wushudoin joined #gluster
21:36 mhulsman joined #gluster
21:41 Melamo joined #gluster
21:43 pablor joined #gluster
21:44 haomaiwa_ joined #gluster
21:57 m0zes joined #gluster
22:11 squeakyneb joined #gluster
22:20 ahino joined #gluster
22:30 volga629_ joined #gluster
22:30 liewegas joined #gluster
22:30 B21956 joined #gluster
22:33 wushudoin joined #gluster
22:36 tom[] joined #gluster
22:38 crashmag joined #gluster
22:51 m0zes joined #gluster
23:02 misc joined #gluster
23:11 Wizek joined #gluster
23:14 misc joined #gluster
23:15 kenansul- joined #gluster
23:25 kenansul- joined #gluster
23:29 hagarth joined #gluster
23:30 unclemarc joined #gluster
23:32 haomaiwa_ joined #gluster
23:32 ovaistariq joined #gluster
