
IRC log for #gluster, 2015-06-15


All times shown according to UTC.

Time Nick Message
00:26 maveric_amitc_ joined #gluster
00:43 bjornar joined #gluster
01:04 TheCthulhu3 joined #gluster
01:25 julim joined #gluster
01:29 B21956 joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:57 harish joined #gluster
02:55 bharata-rao joined #gluster
03:20 kdhananjay joined #gluster
03:22 overclk joined #gluster
03:51 [7] joined #gluster
03:56 shubhendu joined #gluster
03:57 RameshN joined #gluster
03:58 itisravi joined #gluster
04:15 nbalacha joined #gluster
04:18 kshlm joined #gluster
04:18 Bhaskarakiran joined #gluster
04:18 kshlm joined #gluster
04:18 shaunm joined #gluster
04:26 vimal joined #gluster
04:30 stickyboy joined #gluster
04:33 sakshi joined #gluster
04:36 atinm joined #gluster
04:37 ppai joined #gluster
04:40 ramteid joined #gluster
04:42 pppp joined #gluster
04:43 ASDXX joined #gluster
04:44 schandra joined #gluster
04:46 Bhaskarakiran joined #gluster
04:53 zeittunnel joined #gluster
04:55 fleducquede_ joined #gluster
04:58 ndarshan joined #gluster
05:01 gem joined #gluster
05:10 gildub joined #gluster
05:14 maveric_amitc_ joined #gluster
05:21 kdhananjay joined #gluster
05:24 kdhananjay joined #gluster
05:33 spandit joined #gluster
05:34 deepakcs joined #gluster
05:40 soumya joined #gluster
05:42 overclk joined #gluster
05:44 kdhananjay joined #gluster
05:45 hagarth joined #gluster
05:48 anrao joined #gluster
05:51 zeittunnel joined #gluster
05:55 atalur joined #gluster
05:56 ashiq joined #gluster
05:57 Manikandan joined #gluster
05:58 glusterbot News from newglusterbugs: [Bug 1231617] Scrubber crash upon pause <https://bugzilla.redhat.com/show_bug.cgi?id=1231617>
05:58 glusterbot News from newglusterbugs: [Bug 1231619] BitRot :- Handle brick re-connection sanely in bitd/scrub process <https://bugzilla.redhat.com/show_bug.cgi?id=1231619>
06:04 kdhananjay joined #gluster
06:05 nsoffer joined #gluster
06:23 meghanam joined #gluster
06:23 jtux joined #gluster
06:24 owlbot` joined #gluster
06:30 rastar_afk joined #gluster
06:32 al joined #gluster
06:34 [Enrico] joined #gluster
06:37 kovshenin joined #gluster
06:39 anrao joined #gluster
06:48 soumya joined #gluster
06:54 nbalacha joined #gluster
06:55 spalai joined #gluster
06:56 joshin joined #gluster
07:00 karnan joined #gluster
07:03 nangthang joined #gluster
07:05 sakshi joined #gluster
07:07 Philambdo joined #gluster
07:08 soumya joined #gluster
07:08 glusterbot News from resolvedglusterbugs: [Bug 1185259] cli crashes when listing quota limits with xml output <https://bugzilla.redhat.com/show_bug.cgi?id=1185259>
07:11 ASDXX a dir i removed from the brick (stupid, i know) keeps re-appearing but doing an 'ls' on it shows the dir itself is pointing to a broken sym link (glusterfs 3.4.). i remove that, and it keeps coming back. tried removing the thing it's symlinking to with "find /<gluster mount point>/ -name <sym link mix of letters+numbers>" and removing the file in that, but it still re-appears later
07:15 rjoseph joined #gluster
07:18 anrao joined #gluster
07:25 anti[Enrico] joined #gluster
07:34 fsimonce joined #gluster
07:36 ramteid joined #gluster
07:39 anrao joined #gluster
07:47 baoboa joined #gluster
07:49 mbukatov joined #gluster
08:08 liquidat joined #gluster
08:09 shubhendu joined #gluster
08:18 nbalacha joined #gluster
08:25 anrao joined #gluster
08:26 lyang0 joined #gluster
08:28 glusterbot News from newglusterbugs: [Bug 1231678] geo-rep: gverify.sh throws error if slave_host entry is not added to know_hosts file <https://bugzilla.redhat.com/show_bug.cgi?id=1231678>
08:33 arcolife joined #gluster
08:35 rgustafs joined #gluster
08:38 jcastill1 joined #gluster
08:40 lyang0 joined #gluster
08:43 gem joined #gluster
08:43 jcastillo joined #gluster
08:47 deniszh joined #gluster
08:48 nsoffer joined #gluster
08:49 ndarshan joined #gluster
08:50 raghu joined #gluster
08:51 anil joined #gluster
08:54 Bhaskarakiran joined #gluster
08:58 glusterbot News from newglusterbugs: [Bug 1231686] No nightly build for 3.7.x <https://bugzilla.redhat.com/show_bug.cgi?id=1231686>
08:59 glusterbot News from newglusterbugs: [Bug 1231688] Bitrot: gluster volume stop <volname> show logs "[glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped" <https://bugzilla.redhat.com/show_bug.cgi?id=1231688>
09:00 saurabh_ joined #gluster
09:21 harish joined #gluster
09:23 spalai joined #gluster
09:28 ndarshan joined #gluster
09:33 Slashman joined #gluster
09:34 foexle joined #gluster
09:34 foexle left #gluster
09:34 foexle joined #gluster
09:35 foexle hi guys, is there any way to stop the glusterfs self heal?
09:38 ndevos itisravi should know that ;-)
09:38 haomaiwa_ joined #gluster
09:39 itisravi foexle:  `gluster volume set <volname> self-heal-daemon off`
09:41 DV joined #gluster
09:45 foexle itisravi: yeah thanks
09:48 anrao joined #gluster
09:48 autoditac joined #gluster
09:49 itisravi foexle: This only prevents the glustershd from performing heals. Heals via mount will still happen though.
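itisravi's distinction matters in practice: the self-heal-daemon option only silences glustershd, while heals triggered through the mount are governed by separate options. A minimal sketch for turning both paths off, assuming a volume named "myvol" (hypothetical name):

    gluster volume set myvol self-heal-daemon off           # stop the glustershd background healer
    gluster volume set myvol cluster.data-self-heal off     # stop file-content heals triggered via the mount
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off

Leaving all heals off for long lets pending heals pile up, so this is usually a temporary measure.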
10:02 frugo3000|2 joined #gluster
10:02 frugo3000|2 hi there
10:02 frugo3000|2 i have a problem when trying to make a gluster cluster
10:03 frugo3000|2 everytime when i'm trying to add peer i get
10:03 frugo3000|2 "peer probe: failed: 10.22.0.XXX is already part of another cluster"
10:04 frugo3000|2 even if i remove package with gluster and remove /etc/glusterfs and /var/lib/glusterd
10:06 kd1 joined #gluster
10:06 kshlm frugo3000|2, is 10.22.0.XXX part of another cluster?
10:06 spalai joined #gluster
10:06 frugo3000|2 nope
10:06 frugo3000|2 i have 12 nodes
10:06 frugo3000|2 and try to probe each other
10:07 frugo3000|2 there was a cluster on part of them earlier
10:07 kshlm Did you clean out 10.22.0.XXX then?
10:07 frugo3000|2 yeah, new configuration
10:07 kshlm Or did you just clean the node from which you are running the command?
10:07 frugo3000|2 deleted those catalogs
10:07 frugo3000|2 reinstalled package
10:07 frugo3000|2 and restarted services
10:08 kshlm A reinstall doesn't delete configuration.
10:09 frugo3000|2 yeah, that's why i have deleted those files manually
10:14 poornimag joined #gluster
10:20 atinm joined #gluster
10:31 vovcia frugo3000|2: You should clean bricks too
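vovcia's point about bricks: gluster stamps the brick root with extended attributes, so wiping /var/lib/glusterd alone is not enough to reuse a brick. A minimal reset sketch, assuming a brick at /data/brick1 (hypothetical path) and that losing the node's identity is acceptable:

    systemctl stop glusterd                                  # or: service glusterfs-server stop
    rm -rf /var/lib/glusterd/*                               # drops peer and volume state, including the node UUID
    setfattr -x trusted.glusterfs.volume-id /data/brick1     # remove the "part of a volume" marker
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs                           # gfid hardlink store kept inside the brick
    systemctl start glusterd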
10:35 anrao joined #gluster
10:37 Bhaskarakiran joined #gluster
10:37 Bhaskarakiran joined #gluster
10:47 atinm joined #gluster
10:51 RaSTar10 joined #gluster
10:53 LebedevRI joined #gluster
11:00 Pupeno joined #gluster
11:08 frugo3000|2 k, fixed
11:08 frugo3000|2 now i can't mount volume on 4 of 12 hosts
11:09 glusterbot News from resolvedglusterbugs: [Bug 1191486] daemons abstraction & refactoring <https://bugzilla.redhat.com/show_bug.cgi?id=1191486>
11:09 frugo3000|2 127.0.0.1:/static /home/virtual/static glusterfs defaults,_netdev 0 0
11:10 frugo3000|2 gives me "extra arguments at end (ignored)"
11:10 frugo3000|2 after mount
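One hedged guess at that warning: if the fstab-formatted line above is pasted directly onto a mount command line, everything past the mount point becomes extra positional arguments. The two forms differ:

    # in /etc/fstab (six whitespace-separated fields):
    127.0.0.1:/static  /home/virtual/static  glusterfs  defaults,_netdev  0  0

    # on the command line, options go after -o and the dump/pass fields are dropped:
    mount -t glusterfs -o defaults,_netdev 127.0.0.1:/static /home/virtual/static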
11:16 firemanxbr joined #gluster
11:16 kd2 joined #gluster
11:21 Trefex joined #gluster
11:23 jcastill1 joined #gluster
11:28 jcastillo joined #gluster
11:30 spalai joined #gluster
11:39 glusterbot News from resolvedglusterbugs: [Bug 1229228] Ubuntu launchpad PPA packages outdated <https://bugzilla.redhat.com/show_bug.cgi?id=1229228>
11:48 [Enrico] joined #gluster
11:49 Trefex i get this written to my logs every other second
11:49 Trefex [2015-06-15 11:48:44.883862] W [socket.c:620:__socket_rwv] 0-management: readv on /var/run/d1096ce0b0ac38b7c2b15dad9d268651.socket failed (Invalid argument)
11:49 kovshenin joined #gluster
11:49 Trefex does anybody know what is going on?
11:59 glusterbot News from newglusterbugs: [Bug 1231767] tiering:compiler warning with gcc v5.1.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1231767>
12:04 gem joined #gluster
12:06 zeittunnel joined #gluster
12:07 rjoseph joined #gluster
12:12 arcolife joined #gluster
12:12 RameshN joined #gluster
12:16 itisravi_ joined #gluster
12:18 Trefex how do i start gluster nfs?
12:23 ppai joined #gluster
12:42 ndevos Trefex: cat /var/lib/glusterd/nfs/run/nfs.pid | xargs kill ; sleep 3 ; systemctl restart glusterd
12:42 ndevos or, something like that :)
12:43 Trefex ndevos: thanks :)
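A hedged way to confirm the gluster NFS server came back after a restart like the one ndevos suggests, assuming a volume named "myvol" (hypothetical name):

    gluster volume status myvol nfs    # shows the NFS server's PID and port on each node
    showmount -e localhost             # lists exports once the NFS server answers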
12:44 jcastill1 joined #gluster
12:44 Trefex ndevos: would you have any pointers on my "bigger" problem as well?
12:44 * ndevos reads up
12:44 plarsen joined #gluster
12:45 ndevos Trefex: ah, thats an issue that has been fixed in 3.7.1, or will be fixed in 3.7.2 (I cant remember exactly)
12:47 ndevos Trefex: http://article.gmane.org/gmane.comp.file-systems.gluster.devel/11025 contains some details
12:47 Trefex ndevos: oh perfect
12:47 Trefex and the more pressing issue is that
12:47 Trefex i am mounting the gluster volume via fuse and write to it using rsyncd
12:48 Trefex speeds of 1.1 GB /S
12:48 Trefex but then after some hours, the driver or something crashes and the mountpoint becomes unavailable
12:49 jcastillo joined #gluster
12:49 ndevos well, thats not good :-/ you should probably file a bug with details about the volume and steps to reproduce
12:49 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
12:50 wkf joined #gluster
12:50 * ndevos isnt aware of any recent crashes in the fuse process
12:50 Trefex ndevos: problem is i'm not sure where to start investigating :(
12:52 ndevos Trefex: you may have a coredump, and the logs in /var/log/glusterfs/path-to-mount-point.log should have some details
12:55 Trefex ndevos: ah very helpful ! Here is the log of the mount point http://paste.fedoraproject.org/232108/43437288
12:55 Trefex ndevos: i think near the end, there is the information about the crash
12:55 jcastill1 joined #gluster
12:55 Trefex however no clue what these cryptic numbers mean :)
12:58 julim joined #gluster
13:00 ndevos Trefex: well, obviously it crashes, but the stack (those .so + function names) does not look familiar to me
13:00 jcastillo joined #gluster
13:01 ndevos Trefex: I *think* it could be a memory issue, "frame : type(0) op(0)" looks like something that could not get allocated - do you see high memory usage just before the crash?
13:01 Trefex ndevos: mhhh not sure
13:02 arcolife joined #gluster
13:02 ndevos Trefex: maybe a hint in /var/log/messages or 'dmesg'?
13:04 autoditac joined #gluster
13:05 Trefex ndevos: currently, 29 GB mem are free (used free -m)
13:07 ndevos Trefex: is that just before a crash? once the process died, the RAM it used will be free'd
13:08 Trefex nope, this is now during transfer, trying to replicate the problem again
13:08 squizzi joined #gluster
13:08 Trefex ndevos: however, buff/cache is at 89 GB, and slowly increasing
13:08 Trefex maybe it crashes as the buffer fills up
13:10 frugo3000|2 left #gluster
13:10 Trefex ndevos: on which machine would you look in /var/log/messages, the machine that mounted?
13:11 Trefex the machine doing the mount and distributing data to the gluster cluster, has allocated 98% of the RAM
13:18 Slashman joined #gluster
13:19 johnmark joined #gluster
13:19 aaronott joined #gluster
13:24 plarsen joined #gluster
13:24 ekuric joined #gluster
13:27 fleducquede_ joined #gluster
13:28 georgeh-LT2 joined #gluster
13:30 theron joined #gluster
13:32 arcolife joined #gluster
13:32 rwheeler joined #gluster
13:39 Trefex ndevos: crashed right now
13:45 ndevos Trefex: yes, check on the machine that gets the crash, the client
13:46 ndevos Trefex: there is nothing to worry about with 98% ram allocated, ram is there to be used ;-)
13:46 Twistedgrim joined #gluster
13:46 Trefex ndevos: dmesg http://paste.fedoraproject.org/232132/43437596/
13:46 ndevos Trefex: if you see swapping, that could suggest an issue, but ram usage alone does not - unless it's a single process
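A small sketch for separating the two cases ndevos describes, per-process growth of the fuse client versus harmless page-cache use (the interval values are arbitrary):

    watch -n 10 'ps -o pid,rss,vsz,cmd -C glusterfs'   # RSS of the fuse client over time; steady growth suggests a leak
    vmstat 5                                           # non-zero si/so columns indicate actual swapping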
13:47 Trefex /var/log/messages http://paste.fedoraproject.org/232135/14343760/
13:48 squizzi joined #gluster
13:48 ndevos Trefex: many SElinux errors, nothing that suggests anything for the crash
13:48 Trefex pending frames, and then boum, finito
13:49 squizzi joined #gluster
13:50 dgandhi joined #gluster
13:51 Trefex not sure if the errors in nfs.log are relevant ndevos ? http://ur1.ca/mu04z
13:56 txomon|fon joined #gluster
13:57 txomon|fon hi, I am having a problem trying to boot gluster, https://gist.github.com/txomon/8455f165adb3c8f3df28
13:57 txomon|fon I had first one host, and it's working OK
13:57 txomon|fon but this one, it just doesn't boot
14:14 jcastill1 joined #gluster
14:19 jcastillo joined #gluster
14:21 txomon|fon the worrying thing really is the signum it says it has received... supposing signum == SIGFPE
14:23 shubhendu joined #gluster
14:23 wushudoin joined #gluster
14:24 ndevos Trefex: the nfs issue is http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/11438/focus=11439
14:24 [Enrico] joined #gluster
14:35 wushudoin joined #gluster
14:46 archit_ joined #gluster
14:49 txomon|fon glusterd won't start for me
14:49 txomon|fon the daemon
14:49 txomon|fon any idea why ?
14:49 txomon|fon the logs say nothing
14:51 nangthang joined #gluster
14:51 thangnn_ joined #gluster
14:54 ndevos txomon|fon: try to start glusterd from the commandline, and add --log-level=DEBUG or TRACE ?
14:58 txomon|fon thanks ndevos
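A concrete form of ndevos's suggestion; both flags are long-standing glusterd options (-N keeps it in the foreground so output lands on the terminal):

    glusterd -N --log-level=DEBUG    # foreground, debug-level logging
    glusterd --debug                 # implies foreground, DEBUG level, logging to stderr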
15:01 meghanam joined #gluster
15:01 nbalacha joined #gluster
15:02 nsoffer joined #gluster
15:03 bennyturns joined #gluster
15:04 txomon|fon ndevos, why wouldn't glusterd launch the nfs server?
15:04 haomaiwang joined #gluster
15:13 haomaiw__ joined #gluster
15:13 ndevos txomon|fon: I dont know, the logs are not very helpful to me
15:16 spalai joined #gluster
15:17 Slashman joined #gluster
15:17 meghanam joined #gluster
15:17 cholcombe joined #gluster
15:20 vovcia hi :) i'm still trying to get selinux working on gluster, operation not supported, tried with xfs, ext4, -o selinux.... no luck :**(
15:20 vovcia command that is failing is chcon
15:34 hagarth joined #gluster
15:34 krink joined #gluster
15:42 harish joined #gluster
15:44 firemanxbr_ joined #gluster
15:57 shubhendu joined #gluster
16:01 firemanxbr joined #gluster
16:01 premera joined #gluster
16:01 premera joined #gluster
16:03 spalai joined #gluster
16:19 spalai joined #gluster
16:24 woakes07004 joined #gluster
16:27 sage joined #gluster
16:40 craigcabrey joined #gluster
16:43 eljrax Is it about right to see a 70 MB/s difference between a 1x2 and a 2x2 volume for sequential reads?
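For questions like this one, a crude sequential-read comparison between layouts can be made through the mount itself; a minimal sketch, assuming a test file at /mnt/gluster/bigfile (hypothetical path), run as root so caches can be dropped between runs:

    echo 3 > /proc/sys/vm/drop_caches                 # drop the page cache so the read hits the bricks
    dd if=/mnt/gluster/bigfile of=/dev/null bs=1M     # dd reports throughput on completion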
16:49 plarsen joined #gluster
16:51 spalai joined #gluster
16:55 bene-at-car-repa joined #gluster
16:58 eljrax vovcia: I thought selinux didn't work on FUSE mounts. But I'm not 100% sure
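If eljrax is right that per-file SELinux labels are unsupported on the fuse mount, one possible workaround is labeling the whole mount at mount time with the generic context= option instead of chcon; whether mount.glusterfs passes this through varies by version, so treat this as an assumption to test (server, volume, and context are only examples):

    mount -t glusterfs -o context="system_u:object_r:fusefs_t:s0" server:/vol /mnt/gluster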
17:02 bennyturns joined #gluster
17:02 haomaiwang joined #gluster
17:17 nsoffer joined #gluster
17:30 rwheeler joined #gluster
17:37 Slashman joined #gluster
17:39 chirino joined #gluster
17:40 anrao joined #gluster
17:42 joshin joined #gluster
17:44 jcastill1 joined #gluster
17:49 spalai joined #gluster
17:49 jcastillo joined #gluster
17:55 dgandhi joined #gluster
18:02 lpabon joined #gluster
18:04 dave_noob_12345 joined #gluster
18:05 dave_noob_12345 hey.  how do i reset glusterfs settings?
18:05 dave_noob_12345 i am getting an error while trying to probe a peer
18:05 dave_noob_12345 peer probe: failed: backend-prod002 is already part of another cluster
18:06 dave_noob_12345 i tried deleting everything under /var/lib/glusterd/peers/*  but i still get the same error
18:10 rwheeler joined #gluster
18:11 msvbhat dave_noob_12345: Is backend-prod002 part of another cluster, or did you have a setup there already?
18:12 msvbhat dave_noob_12345: if you delete everything under /var/lib/glusterd/peers on all of your nodes, you shouldn't be getting this
18:12 JoeJulian ... with glusterd stopped while you're deleting them.
18:14 msvbhat JoeJulian: Yeah, Sorry.. Forgot to mention it :)
18:15 dave_noob_12345 nope, i am trying to set up a brand new cluster on 8 servers.  i deleted everything in the peer directory on all servers, but i still got the same error.
18:15 dave_noob_12345 but this command worked: sudo service glusterfs-server stop; sudo apt-get purge glusterfs-server --yes; sudo apt-get purge glusterfs-client --yes; sudo apt-get purge glusterfs-common --yes; sudo rm -r /var/lib/glusterd; sudo apt-get install glusterfs-server --yes;
18:16 hagarth joined #gluster
18:22 Rapture joined #gluster
18:30 Slashman joined #gluster
18:31 glusterbot News from newglusterbugs: [Bug 1231983] Upstart job mounting-glusterfs.conf increases unnecessary 30 seconds in Ubuntu boot <https://bugzilla.redhat.com/show_bug.cgi?id=1231983>
18:53 wushudoin| joined #gluster
18:59 wushudoin| joined #gluster
19:09 squizzi joined #gluster
19:27 Susant_ left #gluster
19:31 glusterbot News from newglusterbugs: [Bug 1232002] nfs-ganesha: 8 node pcs cluster setup fails <https://bugzilla.redhat.com/show_bug.cgi?id=1232002>
19:37 papamoose joined #gluster
19:45 ekuric left #gluster
19:48 Rapture getting a lot of this in my gluster log: W [fuse-bridge.c:2167:fuse_writev_cbk] 0-glusterfs-fuse: 3038304951: WRITE => -1 (Input/output error).
19:48 Rapture Any ideas on how to solve?
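EIO surfacing through fuse_writev_cbk on a replicated volume is a classic split-brain symptom; a hedged first check, assuming a replica volume named "myvol" (hypothetical name):

    gluster volume heal myvol info split-brain    # files the volume refuses to serve
    gluster volume heal myvol info                # all entries with pending heals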
19:55 anrao joined #gluster
20:00 theron_ joined #gluster
20:00 nsoffer joined #gluster
20:21 squizzi joined #gluster
20:22 gnudna joined #gluster
20:23 squizzi joined #gluster
20:27 DV joined #gluster
20:34 gnudna left #gluster
20:37 bennyturns joined #gluster
20:41 hagarth joined #gluster
20:54 badone_ joined #gluster
21:02 craigcabrey joined #gluster
21:02 spalai joined #gluster
21:14 tessier joined #gluster
21:15 anrao joined #gluster
21:26 TheCthulhu joined #gluster
21:27 milkyline joined #gluster
21:35 anrao joined #gluster
21:41 wkf joined #gluster
21:46 krink joined #gluster
21:54 gsaadi joined #gluster
22:00 gsaadi left #gluster
22:00 gsaadi joined #gluster
22:02 surabhi joined #gluster
22:10 craigcabrey joined #gluster
22:25 victori joined #gluster
22:30 bennyturns joined #gluster
22:42 joshin joined #gluster
22:42 joshin joined #gluster
