
IRC log for #gluster, 2015-05-19


All times shown according to UTC.

Time Nick Message
00:00 coredump joined #gluster
00:01 CyrilPeponnet 10.0.2.143 is up ?
00:01 tessier Yes
00:01 CyrilPeponnet gluster peer status
00:01 tessier hmm...10.0.2.143 does not appear in the gluster peer status output
00:02 CyrilPeponnet try peer list
00:02 CyrilPeponnet status doesn't show you the node from which you are running the cmd
00:02 tessier ah
00:02 CyrilPeponnet pool list
00:03 tessier http://fpaste.org/223162/19937641/ It shows up in pool list as localhost since that is the machine from which I am running the command
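
For readers following along: `gluster peer status` lists only the other members of the pool, while `gluster pool list` includes the node you run it from (shown as localhost). A minimal sketch, assuming two hypothetical servers storage1 and storage2:

    # Run on storage1; 'peer status' omits the local node:
    gluster peer status        # lists storage2 only
    # 'pool list' shows every member, the local one as "localhost":
    gluster pool list          # lists localhost and storage2
    # If a peer shows as disconnected, confirm glusterd is running on it:
    service glusterd status    # or: systemctl status glusterd
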
00:03 CyrilPeponnet from what I see only diska is fine
00:03 CyrilPeponnet maybe your brick was not up when you restarted
00:03 CyrilPeponnet try to restart gluster
00:03 CyrilPeponnet I have to go, sorry (and check the logs)
00:04 coredump joined #gluster
00:04 tessier Thanks!
00:04 ppai joined #gluster
00:07 coredump joined #gluster
00:10 tessier hmm...there are not nearly as many gluster processes running on .143
00:10 coredump joined #gluster
00:13 coredump joined #gluster
00:13 harish joined #gluster
00:17 tessier DOH
00:17 tessier Fixed it. SELinux came up enabled after we rebooted the box.
00:18 tessier It's resynching now. Although it does worry me that it is using 36% of the CPU.
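
The SELinux surprise above is easy to check for after a reboot. A minimal sketch, assuming that switching the host to permissive mode is acceptable at the site:

    # Show the current SELinux mode (Enforcing, Permissive or Disabled):
    getenforce
    # Relax it for the running system only (does not survive a reboot):
    setenforce 0
    # Make the change persistent by editing /etc/selinux/config and rebooting:
    sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
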
00:18 coredump joined #gluster
00:23 coredump joined #gluster
00:27 coredump joined #gluster
00:30 Prilly joined #gluster
00:32 coredump joined #gluster
00:36 coredump joined #gluster
00:40 coredump joined #gluster
00:42 coredump joined #gluster
00:43 ppai joined #gluster
00:45 Gill joined #gluster
00:46 DV joined #gluster
00:48 coredump joined #gluster
00:53 coredump joined #gluster
00:54 badone joined #gluster
00:55 coredump joined #gluster
01:00 coredump joined #gluster
01:11 David_H_Smith joined #gluster
01:20 wushudoin joined #gluster
01:21 tessier Is there any way to tell how long until a brick will be resynched or to monitor progress?
01:23 B21956 joined #gluster
01:23 B21956 left #gluster
01:27 JoeJulian The service does not know how many files are on the brick, nor how large they are, nor how large the changes to a file were. Without all this information, prediction is impossible.
01:28 JoeJulian ... for a full sync.
01:28 JoeJulian gluster volume heal $vol info will show what files need synced after a disconnect.
01:29 JoeJulian It still doesn't know how large the changes are or what size the files are though.
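
A rough way to watch the heal backlog that `gluster volume heal $vol info` reports, where $vol is a placeholder for the volume name; the exact output format varies between releases:

    # List the entries still pending heal on each brick:
    gluster volume heal $vol info
    # Re-run it periodically and watch the per-brick counters shrink:
    watch -n 60 "gluster volume heal $vol info | grep 'Number of entries'"
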
01:29 coredump joined #gluster
01:29 B21956 joined #gluster
01:30 tessier Ah, ok.
01:30 tessier Thanks
01:45 gildub joined #gluster
02:08 Gill joined #gluster
02:16 nangthang joined #gluster
02:16 kdhananjay joined #gluster
02:19 harish joined #gluster
02:28 stopbyte joined #gluster
02:36 jmarley joined #gluster
02:39 stopbyte i'm having an issue where a volume that previously had both client.ssl and server.ssl set to "on", but has since been changed to "off", now fails during mount on a client - the client log shows the failure as an SSL connect error
02:40 stopbyte are there additional steps i've missed to switch a volume to non-ssl? no clients were connected when the volume set commands were issued, volume info shows both options as 'off', and all glusterd processes were even restarted for good measure
02:43 julim joined #gluster
02:44 smohan joined #gluster
02:44 cholcombe joined #gluster
03:11 hagarth joined #gluster
03:15 rjoseph joined #gluster
03:18 DV joined #gluster
03:20 stopbyte issue solved - fwiw, each client also needed the 0-byte /var/lib/glusterd/secure-access file removed to signal to it that it should no longer attempt negotiating ssl
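
Putting stopbyte's findings together, turning TLS off for a volume appears to need both the volume options and the per-client marker file; a sketch with placeholder names (myvol, server1, /mnt/myvol):

    # On the servers: disable TLS for the volume.
    gluster volume set myvol client.ssl off
    gluster volume set myvol server.ssl off
    # On every client: remove the marker file that tells glusterfs to use TLS
    # for its management connection, then remount.
    rm -f /var/lib/glusterd/secure-access
    umount /mnt/myvol
    mount -t glusterfs server1:/myvol /mnt/myvol
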
03:26 overclk joined #gluster
03:29 kdhananjay joined #gluster
03:32 sakshi joined #gluster
03:40 TheSeven joined #gluster
03:47 itisravi joined #gluster
03:53 shubhendu joined #gluster
03:57 akay1 hi guys, I've got lots of entries showing as split-brain gfids. I've used gfid_resolver.sh to find which files they are and deleted them from the mount (as I don't actually need them), but the split-brain entries still appear every minute. is it ok to find the file in the /.glusterfs/ folder and just delete it?
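
For context on akay1's question: a gfid maps to a fixed location under the brick, and for regular files that entry is a hard link to the data file. A sketch with a made-up gfid and a placeholder brick path:

    GFID=11111111-2222-3333-4444-555555555555   # made-up gfid for illustration
    BRICK=/data/brick1                          # placeholder brick path
    # Backend layout: <brick>/.glusterfs/<first 2 chars>/<next 2 chars>/<gfid>
    ls -li "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
    # A link count of 1 on a regular file here means the user-visible path is
    # already gone and only the .glusterfs link remains.
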
03:58 coredump joined #gluster
03:58 rjoseph joined #gluster
04:02 coredump joined #gluster
04:17 rajesh joined #gluster
04:18 rejy joined #gluster
04:18 coredump joined #gluster
04:19 nbalacha joined #gluster
04:22 yazhini joined #gluster
04:22 yazhini_ joined #gluster
04:23 prabu joined #gluster
04:23 kanagaraj joined #gluster
04:24 RameshN joined #gluster
04:24 prabu left #gluster
04:24 prabu joined #gluster
04:25 yazhini__ joined #gluster
04:26 yazhini joined #gluster
04:30 prabu joined #gluster
04:30 coredump joined #gluster
04:31 atinmu joined #gluster
04:34 prabu_ joined #gluster
04:34 coredump joined #gluster
04:35 prabu_ left #gluster
04:37 coredump joined #gluster
04:40 DV joined #gluster
04:41 ndarshan joined #gluster
04:42 coredump joined #gluster
04:43 glusterbot News from newglusterbugs: [Bug 1222748] server_readdirp_cbk links inodes that don't have the .glusterfs/gfid link <https://bugzilla.redhat.com/show_bug.cgi?id=1222748>
04:45 dusmant joined #gluster
04:48 ashiq joined #gluster
04:52 rafi joined #gluster
04:52 bharata-rao joined #gluster
04:53 glusterbot News from resolvedglusterbugs: [Bug 1220058] Disable known bad tests <https://bugzilla.redhat.com/show_bug.cgi?id=1220058>
04:54 julim joined #gluster
04:54 prabu joined #gluster
04:59 deepakcs joined #gluster
05:01 anil_ joined #gluster
05:06 jiffin joined #gluster
05:08 rafi joined #gluster
05:09 Bhaskarakiran joined #gluster
05:09 anrao joined #gluster
05:10 spandit joined #gluster
05:10 Apeksha joined #gluster
05:11 pppp joined #gluster
05:14 glusterbot News from newglusterbugs: [Bug 1222750] non-root geo-replication session goes to faulty state, when the session is started <https://bugzilla.redhat.com/show_bug.cgi?id=1222750>
05:17 schandra joined #gluster
05:17 Anjana joined #gluster
05:26 gem joined #gluster
05:28 ramteid joined #gluster
05:32 DV joined #gluster
05:52 hgowtham joined #gluster
05:54 julim joined #gluster
06:04 Anjana joined #gluster
06:06 Manikandan joined #gluster
06:06 mbukatov joined #gluster
06:07 dusmant joined #gluster
06:08 poornimag joined #gluster
06:08 Le22S joined #gluster
06:10 coredump joined #gluster
06:12 coredump joined #gluster
06:16 julim joined #gluster
06:17 jtux joined #gluster
06:18 coredump joined #gluster
06:21 coredump joined #gluster
06:21 aravindavk joined #gluster
06:24 coredump joined #gluster
06:27 coredump joined #gluster
06:31 coredump joined #gluster
06:31 Guest72040 joined #gluster
06:32 kumar joined #gluster
06:33 poornimag joined #gluster
06:33 meghanam joined #gluster
06:33 coredump joined #gluster
06:36 coredump joined #gluster
06:39 coredump joined #gluster
06:41 raghu joined #gluster
06:43 liquidat joined #gluster
06:43 coredump joined #gluster
06:43 rgustafs joined #gluster
06:44 glusterbot News from newglusterbugs: [Bug 1222769] libglusterfs: fix uninitialized argument value <https://bugzilla.redhat.com/show_bug.cgi?id=1222769>
06:44 nangthang joined #gluster
06:47 coredump joined #gluster
06:50 coredump joined #gluster
06:51 spalai joined #gluster
06:51 kdhananjay joined #gluster
06:53 coredump joined #gluster
06:54 glusterbot News from resolvedglusterbugs: [Bug 1212062] [Geo-replication] cli crashed and core dump was observed while running gluster volume geo-replication vol0 status command <https://bugzilla.redhat.com/show_bug.cgi?id=1212062>
06:54 glusterbot News from resolvedglusterbugs: [Bug 1217939] Have a fixed name for common meta-volume for nfs, snapshot and geo-rep and mount it at a fixed mount location <https://bugzilla.redhat.com/show_bug.cgi?id=1217939>
06:54 atalur joined #gluster
06:55 poornimag joined #gluster
06:55 coredump joined #gluster
07:03 dusmant joined #gluster
07:07 [Enrico] joined #gluster
07:09 Guest72040 joined #gluster
07:09 atinmu joined #gluster
07:12 arao joined #gluster
07:15 coredump joined #gluster
07:17 karnan joined #gluster
07:19 coredump joined #gluster
07:22 coredump joined #gluster
07:22 spalai1 joined #gluster
07:24 coredump joined #gluster
07:30 Anjana joined #gluster
07:39 fsimonce joined #gluster
07:41 wtracz2 joined #gluster
07:42 coredump joined #gluster
07:43 arao joined #gluster
07:44 gildub joined #gluster
07:44 atinmu joined #gluster
07:44 sac joined #gluster
07:45 harish_ joined #gluster
07:46 nishanth joined #gluster
07:49 social joined #gluster
07:51 dusmant joined #gluster
07:53 ira joined #gluster
07:54 TvL2386 joined #gluster
08:04 coredump joined #gluster
08:07 wtracz2 If .dht attrs vary across the cluster, I guess that is a big problem?
08:10 _shaps_ joined #gluster
08:21 coredump joined #gluster
08:22 necrogami joined #gluster
08:25 Norky joined #gluster
08:33 coredump joined #gluster
08:39 anrao joined #gluster
08:40 kdhananjay joined #gluster
08:43 nsoffer joined #gluster
08:50 Slashman joined #gluster
08:57 yazhini joined #gluster
09:04 ghenry joined #gluster
09:04 ju5t joined #gluster
09:16 aravindavk joined #gluster
09:16 jcastill1 joined #gluster
09:17 schandra joined #gluster
09:22 jcastillo joined #gluster
09:22 autoditac joined #gluster
09:27 Manikandan joined #gluster
09:36 deniszh joined #gluster
09:37 LebedevRI joined #gluster
09:47 Manikandan joined #gluster
09:49 wtracz2 If there are no xattrs on a directory (i.e. vol-client-2 and vol-client-3 lack them compared to vol-client-0 in {{0,1},{2,3}} setup), heal shows no entries yet we get I/O errors, any ideas?
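
One way to see the asymmetry wtracz2 describes is to dump the xattrs of the same directory on each brick and compare; the brick path and directory below are placeholders:

    # Run on every server that hosts a brick of the volume:
    getfattr -m . -d -e hex /data/brick1/path/to/dir
    # Healthy replicated directories normally carry trusted.gfid,
    # trusted.glusterfs.dht and trusted.afr.<volname>-client-N attributes;
    # a brick where these are missing is a plausible source of the I/O errors.
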
09:51 autoditac joined #gluster
09:54 sripathi joined #gluster
09:54 glusterbot News from resolvedglusterbugs: [Bug 1056085] logs flooded with invalid argument errors with quota enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1056085>
09:54 glusterbot News from resolvedglusterbugs: [Bug 1184885] Quota: Build ancestry in the lookup <https://bugzilla.redhat.com/show_bug.cgi?id=1184885>
09:55 ashiq joined #gluster
09:59 autoditac_ joined #gluster
09:59 prabu_ joined #gluster
10:02 ashiq joined #gluster
10:10 ira joined #gluster
10:11 ira joined #gluster
10:15 glusterbot News from newglusterbugs: [Bug 1211949] [Snapshot Doc] Upstream feature page needs to be changed for snapshot scheduler as per latest changes/implementation <https://bugzilla.redhat.com/show_bug.cgi?id=1211949>
10:15 glusterbot News from newglusterbugs: [Bug 1218060] [SNAPSHOT]: Initializing snap_scheduler from all nodes at the same time should give proper error message <https://bugzilla.redhat.com/show_bug.cgi?id=1218060>
10:15 glusterbot News from newglusterbugs: [Bug 1210185] [RFE- SNAPSHOT] : Provide user the option to list the failed jobs <https://bugzilla.redhat.com/show_bug.cgi?id=1210185>
10:15 glusterbot News from newglusterbugs: [Bug 1218573] [Snapshot] Scheduled job is not processed when one of the node of shared storage volume is down <https://bugzilla.redhat.com/show_bug.cgi?id=1218573>
10:15 glusterbot News from newglusterbugs: [Bug 1218164] [SNAPSHOT] : Correction required in output message after initilalising snap_scheduler <https://bugzilla.redhat.com/show_bug.cgi?id=1218164>
10:27 AndroUser joined #gluster
10:35 rgustafs joined #gluster
10:46 kkeithley1 joined #gluster
10:46 ashiq joined #gluster
10:47 bene2 joined #gluster
10:53 nsoffer joined #gluster
10:54 wtracz2 Is the split brain heal triggered if you do stat via NFS or must it be fuse mounted?
11:17 Anjana joined #gluster
11:17 nsoffer joined #gluster
11:31 wkf joined #gluster
11:40 ndevos REMINDER: the weekly Gluster Bug Triage starts in ~20 minutes in #gluster-meeting
11:44 meghanam_ joined #gluster
11:45 haomaiwa_ joined #gluster
11:57 dusmant joined #gluster
12:01 rafi1 joined #gluster
12:01 ndevos REMINDER: the weekly Gluster Bug Triage starts now in #gluster-meeting
12:07 prabu_ joined #gluster
12:07 yazhini joined #gluster
12:08 nangthang joined #gluster
12:15 glusterbot News from newglusterbugs: [Bug 1222898] geo-replication: fix memory leak in gsyncd <https://bugzilla.redhat.com/show_bug.cgi?id=1222898>
12:15 glusterbot News from newglusterbugs: [Bug 1220270] nfs-ganesha: Rename fails while exectuing Cthon general category test <https://bugzilla.redhat.com/show_bug.cgi?id=1220270>
12:17 rafi joined #gluster
12:18 spalai1 left #gluster
12:25 glusterbot News from resolvedglusterbugs: [Bug 1220338] unable to start the volume with the latest beta1 rpms <https://bugzilla.redhat.com/show_bug.cgi?id=1220338>
12:26 ira_ joined #gluster
12:27 prabu_ joined #gluster
12:27 yazhini joined #gluster
12:30 rgustafs joined #gluster
12:33 atalur joined #gluster
12:34 jcastill1 joined #gluster
12:39 jcastillo joined #gluster
12:41 ju5t joined #gluster
12:42 shaunm_ joined #gluster
12:45 glusterbot News from newglusterbugs: [Bug 1221578] nfs-ganesha: cthon general category test fails with vers=4 <https://bugzilla.redhat.com/show_bug.cgi?id=1221578>
12:45 glusterbot News from newglusterbugs: [Bug 1220021] bitrot testcases fail spuriously <https://bugzilla.redhat.com/show_bug.cgi?id=1220021>
12:45 glusterbot News from newglusterbugs: [Bug 1220173] SEEK_HOLE support (optimization) <https://bugzilla.redhat.com/show_bug.cgi?id=1220173>
12:45 glusterbot News from newglusterbugs: [Bug 1222915] usage text is wrong for use-readdirp mount default <https://bugzilla.redhat.com/show_bug.cgi?id=1222915>
12:45 glusterbot News from newglusterbugs: [Bug 1222917] usage text is wrong for use-readdirp mount default <https://bugzilla.redhat.com/show_bug.cgi?id=1222917>
12:45 glusterbot News from newglusterbugs: [Bug 1221489] nfs-ganesha +dht :E [server-rpc-fops.c:1048:server_unlink_cbk] 0-vol2-server: 2706777: UNLINK (Permission denied) <https://bugzilla.redhat.com/show_bug.cgi?id=1221489>
12:45 glusterbot News from newglusterbugs: [Bug 1220703] nfs-ganesha: issue with registering or deregistering of services with port 875 <https://bugzilla.redhat.com/show_bug.cgi?id=1220703>
12:45 glusterbot News from newglusterbugs: [Bug 1221457] nfs-ganesha+posix: glusterfsd crash while executing the posix testuite <https://bugzilla.redhat.com/show_bug.cgi?id=1221457>
12:45 glusterbot News from newglusterbugs: [Bug 1220713] Scrubber should be disabled once bitrot is reset <https://bugzilla.redhat.com/show_bug.cgi?id=1220713>
12:45 glusterbot News from newglusterbugs: [Bug 1221511] nfs-ganesha: OOM killed for nfsd process while executing the posix test suite <https://bugzilla.redhat.com/show_bug.cgi?id=1221511>
12:45 glusterbot News from newglusterbugs: [Bug 1220996] Running `gluster volume heal testvol info` on a volume that is not started results in a core. <https://bugzilla.redhat.com/show_bug.cgi?id=1220996>
12:46 shubhendu joined #gluster
12:48 arao joined #gluster
12:57 dgandhi joined #gluster
12:59 R0ok_ joined #gluster
13:03 shubhendu joined #gluster
13:09 pdrakeweb joined #gluster
13:10 vikumar joined #gluster
13:12 pppp joined #gluster
13:12 edong23 joined #gluster
13:15 glusterbot News from newglusterbugs: [Bug 1221473] BVT: Posix crash while running BVT on 3.7beta2 build on rhel6.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1221473>
13:15 glusterbot News from newglusterbugs: [Bug 1221560] `-bash: fork: Cannot allocate memory' error seen regularly on nodes on execution of any command <https://bugzilla.redhat.com/show_bug.cgi?id=1221560>
13:15 glusterbot News from newglusterbugs: [Bug 1222942] BVT: Posix crash while running BVT on 3.7beta2 build on rhel6.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1222942>
13:15 glusterbot News from newglusterbugs: [Bug 1219346] nfs: posix test errors out with multiple failures <https://bugzilla.redhat.com/show_bug.cgi?id=1219346>
13:15 glusterbot News from newglusterbugs: [Bug 1221866] DHT Layout selfheal code should log errors <https://bugzilla.redhat.com/show_bug.cgi?id=1221866>
13:15 glusterbot News from newglusterbugs: [Bug 1221737] Multi-threaded SHD support <https://bugzilla.redhat.com/show_bug.cgi?id=1221737>
13:16 glusterbot News from newglusterbugs: [Bug 1221869] Even after reseting the bitrot and scrub demons are running <https://bugzilla.redhat.com/show_bug.cgi?id=1221869>
13:16 wkf joined #gluster
13:18 dusmant joined #gluster
13:19 rejy joined #gluster
13:19 aaronott joined #gluster
13:21 bene2 joined #gluster
13:25 georgeh-LT2 joined #gluster
13:25 rgustafs joined #gluster
13:25 squizzi joined #gluster
13:25 glusterbot News from resolvedglusterbugs: [Bug 1138897] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=1138897>
13:31 badone_ joined #gluster
13:31 nsoffer joined #gluster
13:33 hamiller joined #gluster
13:35 arao joined #gluster
13:39 coredump|br joined #gluster
13:42 Twistedgrim joined #gluster
13:46 glusterbot News from newglusterbugs: [Bug 1220347] Read operation on a file which is in split-brain condition is successful <https://bugzilla.redhat.com/show_bug.cgi?id=1220347>
13:46 glusterbot News from newglusterbugs: [Bug 1220348] Client hung up on listing the files on a perticular directory <https://bugzilla.redhat.com/show_bug.cgi?id=1220348>
13:46 glusterbot News from newglusterbugs: [Bug 1221390] Replication is active but skips files and Rsync reports errcode 23 <https://bugzilla.redhat.com/show_bug.cgi?id=1221390>
13:46 glusterbot News from newglusterbugs: [Bug 1221584] Disperse volume: gluster volume heal info lists entries of all bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1221584>
13:46 glusterbot News from newglusterbugs: [Bug 1184626] Community Repo RPMs don't include attr package as a dependency <https://bugzilla.redhat.com/show_bug.cgi?id=1184626>
13:46 anil_ joined #gluster
13:47 wushudoin joined #gluster
13:50 spiekey joined #gluster
13:50 spiekey Hello
13:50 glusterbot spiekey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:50 spiekey is this a serious error? E [glusterd-syncop.c:961:_gd_syncop_commit_op_cbk] 0-management: Failed to aggregate response from  node/brick
13:56 harish_ joined #gluster
13:56 itisravi joined #gluster
14:15 pdrakeweb joined #gluster
14:16 glusterbot News from newglusterbugs: [Bug 1218961] snapshot: Can not activate the name provided while creating snaps to do any further access <https://bugzilla.redhat.com/show_bug.cgi?id=1218961>
14:17 dusmant joined #gluster
14:17 swebb joined #gluster
14:20 kdhananjay joined #gluster
14:29 nsoffer joined #gluster
14:29 nsoffer joined #gluster
14:30 DV joined #gluster
14:45 aravindavk joined #gluster
14:50 badone_ joined #gluster
14:53 anil_ joined #gluster
14:58 itisravi joined #gluster
15:01 kanagaraj joined #gluster
15:04 neofob joined #gluster
15:06 jcastill1 joined #gluster
15:11 jcastillo joined #gluster
15:11 julim joined #gluster
15:16 glusterbot News from newglusterbugs: [Bug 1220623] Seg. Fault during yum update <https://bugzilla.redhat.com/show_bug.cgi?id=1220623>
15:17 cholcombe joined #gluster
15:20 kumar joined #gluster
15:20 badone_ joined #gluster
15:24 wtracz2 joined #gluster
15:25 anil_ joined #gluster
15:26 nsoffer joined #gluster
15:28 Fuz1on joined #gluster
15:28 Fuz1on HI Guys
15:28 Fuz1on congrats to the new GlusteFS release :)
15:28 srepetsk left #gluster
15:28 Fuz1on +GlusterFS
15:28 hagarth Fuz1on: thanks :)
15:28 Fuz1on it brings some very interresting features
15:28 Fuz1on i look forward to test it
15:28 ndevos Fuz1on: thanks!
15:29 Fuz1on just trying to a way to give it a try on ubuntu LTS14.04
15:29 hagarth Fuz1on: your feedback would be very welcome!
15:29 Fuz1on i will
15:35 spiekey i have this situation: http://fpaste.org/223403/04968314/
15:35 spiekey maybe it's healing itself right now, but if I do an iftop on my network interface it does not use much bandwidth
15:35 spiekey does it heal very slowly, or does it heal at all?
15:39 spiekey if i do a dd bs=1M count=2048 if=/dev/zero of=test1 conv=fdatasync  on my mounted glusterfs i get about 150MB/sec
15:44 ndevos spiekey: I dont know, but you could check the glustershd.log (shd=self-heal-daemon) and see if there are any messages
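
The self-heal daemon log ndevos mentions lives in the standard glusterfs log directory on each server; the message wording differs between versions:

    # Follow the self-heal daemon log while the heal runs:
    tail -f /var/log/glusterfs/glustershd.log
    # Progress usually shows up as "Completed data selfheal ..." /
    # "Completed metadata selfheal ..." lines; repeated connection errors to a
    # brick point at a different problem.
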
15:45 [Enrico] joined #gluster
15:46 glusterbot News from newglusterbugs: [Bug 1206461] sparse file self heal fail under xfs version 2 with speculative preallocation feature on <https://bugzilla.redhat.com/show_bug.cgi?id=1206461>
15:49 spiekey ndevos: looks good
15:50 kanagaraj joined #gluster
15:50 ndevos spiekey: well, thats all I can contribute to it... maybe itisravi or someone else has ideas
15:50 ndevos not sure who's online and reading here though
15:51 ninkotech joined #gluster
15:51 ninkotech_ joined #gluster
15:51 ctria joined #gluster
15:54 spiekey ndevos: i only have big files. maybe it has something to do with that?
15:57 ndevos spiekey: my healing foo is very weak
16:06 poornimag joined #gluster
16:07 Le22S joined #gluster
16:14 seth12345 joined #gluster
16:15 squizzi joined #gluster
16:15 seth12345 hello all, I'm wondering if someone in here can help me out.
16:15 seth12345 I just upgraded to version 3.7 from version 3.6 and now anytime I run a gluster command (gluster volume status xxx) it says Locking failed on GUID.  Please check log file for details.
16:20 anil_ seth12345, which rpm version you are using ?
16:21 rejy joined #gluster
16:21 seth12345 glusterfs-3.7.0-1.el6.x86_64.rpm
16:26 badone_ joined #gluster
16:29 mbrgm joined #gluster
16:30 mbrgm hello! is there anything that speaks against creating a single-brick volume on one node and then mounting that volume from the same node, for testing purposes?
16:31 kkeithley_ mbrgm: no, that works.
16:32 mbrgm ok. I have an error saying it can't find the port for the brick?
16:33 mbrgm I only found out when I tried to mount the volume and `mount ...` would not return
16:33 mbrgm kkeithley_: which is the best way to debug that?
16:35 kkeithley_ @ports
16:35 glusterbot kkeithley_: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
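
If a firewall does turn out to be in the way, here is a sketch of iptables rules matching the ports glusterbot lists above; widen the brick-port range to cover the number of bricks on the host:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT   # brick daemons (3.4+)
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS / NLM
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # rpcbind/portmap
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS
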
16:35 swebb joined #gluster
16:35 kkeithley_ do you have a firewall running? iptables?
16:36 mbrgm none of them, vanilla debian machine
16:37 seth12345 does anyone know how to solve locking issues after performing an upgrade?
16:37 kkeithley_ nfs or gluster-native?  `mount -t glusterfs $hostname:$volname $mntpoint` should do it
16:37 seth12345 it seems like any attempt to run gluster volume status results in a locking failed error
16:38 mbrgm kkeithley_: the command does not return
16:38 mbrgm also, ls on the mountpoint hangs
16:39 pdrakeweb joined #gluster
16:39 kkeithley_ and you 'started' (exported) the volume before you tried to mount it?
16:40 mbrgm yup
16:40 mbrgm gluster volume start <volname> if you mean that?
16:40 JoeJulian Don't mount over your brick.
16:41 kkeithley_ and what does `gluster volume info` and `gluster volume status` show you? (use fpaste or equivalent)
16:41 kkeithley_ @fpaste
16:42 JoeJulian mbrgm: Don't mount over your brick.
16:43 mbrgm JoeJulian: didn't do it...
16:43 mbrgm i recreated the volume and now it works...
16:43 mbrgm strange
16:43 mbrgm could it be because I had '%' in the path of the brick?
16:44 JoeJulian I wouldn't think so
16:44 mbrgm ok. I'll try tomorrow
16:44 mbrgm thank you anyways guys! :)
16:46 coredump joined #gluster
16:48 itisravi_ joined #gluster
16:57 mbrgm ok, back again :D
16:58 mbrgm one more question I have is: how do I safely remove a (set of) brick(s) from a volume
16:58 itisravi_ joined #gluster
16:58 mbrgm let's say I have a volume consisting of 4 bricks with replica count 2
17:00 mbrgm now all bricks have part of the data in the volume. I see that 2 bricks would be enough to store the data, which is why I want to remove the superfluous pair.
17:00 mbrgm what is the best way to do that and ensure that the data is moved to the two remaining bricks?
17:02 mbrgm does rebalancing the volume defragment it in a way that leaves no data on the redundant set of bricks
17:02 mbrgm ?
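
mbrgm's question goes unanswered in the log; for reference, the usual way to shrink a distribute-replicate volume is the remove-brick workflow, sketched here with hypothetical hosts and brick paths:

    # Remove one replica pair from a 2x2 volume, migrating its data off first:
    gluster volume remove-brick myvol server3:/bricks/b1 server4:/bricks/b1 start
    # Watch the migration; wait until it reports completed for both bricks:
    gluster volume remove-brick myvol server3:/bricks/b1 server4:/bricks/b1 status
    # Only then make the removal permanent:
    gluster volume remove-brick myvol server3:/bricks/b1 server4:/bricks/b1 commit
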
17:06 ninkotech__ joined #gluster
17:10 badone_ joined #gluster
17:11 devilspgd joined #gluster
17:21 Rapture joined #gluster
17:30 autoditac joined #gluster
17:32 jiffin joined #gluster
17:40 autoditac joined #gluster
17:44 spiekey joined #gluster
17:47 spiekey left #gluster
17:51 nbalacha joined #gluster
18:04 rafi joined #gluster
18:10 corretico joined #gluster
18:20 ppai joined #gluster
18:21 shaunm_ joined #gluster
18:28 kumar joined #gluster
18:28 shylesh__ joined #gluster
18:36 stickyboy joined #gluster
18:36 stickyboy joined #gluster
18:40 badone_ joined #gluster
18:41 gluster-user joined #gluster
18:43 pdrakeweb joined #gluster
18:43 gluster-user Hello, is anyone around that might be able to help me troubleshoot an issue I am having?
18:45 JoeJulian hello
18:45 glusterbot JoeJulian: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:45 JoeJulian gluster-user: ^
18:47 glusterbot News from newglusterbugs: [Bug 1223083] daemons abstraction & refactoring <https://bugzilla.redhat.com/show_bug.cgi?id=1223083>
18:47 gluster-user after performing an upgrade from version 3.6.1 to version 3.7.0, the gluster management is having a locking issue.  For instance, when I run gluster volume status, I get a line that looks like this for each node in my cluster "Locking failed on 3e8e21dd-03cd-4509-82b4-41af3869183f. Please check log file for details."
18:47 gluster-user I checked the log file, but I am really unsure what I need to be looking for
18:50 JoeJulian You're not the first one to report this.
18:51 JoeJulian 3e8e21dd-03cd-4509-82b4-41af3869183f should be the uuid of a server.
18:51 JoeJulian Did you do this as a live upgrade?
18:52 gluster-user well, before I performed the upgrade, I shut down gluster on all the servers
18:53 gluster-user I do see something in the log "[2015-05-19 18:49:09.999332] E [glusterd-syncop.c:562:_gd_syncop_mgmt_lock_cbk] 0-management: Could not find peer with ID 12000000-0000-0000-5008-218c7f7f0000", but I have no idea where it's getting the 12000000-0000-0000-5008-218c7f7f0000 from
18:54 plarsen joined #gluster
18:55 jiffin joined #gluster
18:55 JoeJulian Looks ugly to me. Too many 0s.
18:55 gluster-user yeah, and none of my nodes have that ID
18:56 JoeJulian is 218c7f7f in any of them?
18:56 gluster-user no, it doesn't appear so
18:57 glusterbot News from resolvedglusterbugs: [Bug 1191486] daemons abstraction & refactoring <https://bugzilla.redhat.com/show_bug.cgi?id=1191486>
18:58 JoeJulian I'd go back to 3.6.3. That looks like an invalid pointer assignment to me.
18:58 gluster-user are there any instructions on downgrading... that's something I've never had to do
18:59 JoeJulian There shouldn't be anything more to it than uninstalling and installing.
19:01 deniszh joined #gluster
19:17 gluster-user is there some process I can follow to manually remove the volumes and nodes and re-add them?
19:17 squizzi joined #gluster
19:24 JoeJulian gluster-user: yes, the quick and dirty way: you can wipe /var/lib/glusterd/vols
19:24 JoeJulian If you have data, ,,(path or prefix)
19:24 glusterbot http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
19:24 JoeJulian If you have *no* data, you can just format the bricks.
19:24 gluster-user I do have data
19:25 JoeJulian Just be sure to recreate your volume with the bricks in the same order.
19:28 gluster-user alright, so order of operations... shut down gluster on all servers
19:28 gluster-user then delete /var/lib/glusterd/vols
19:28 gluster-user and then run the commands in your link
19:28 gluster-user and then re-add bricks in same order
19:37 ira_ joined #gluster
19:38 JoeJulian yep
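
A condensed sketch of the cleanup JoeJulian's "path or prefix" link describes, to be run on each old brick before recreating the volume; the brick paths, hosts and volume name are placeholders:

    # Strip the xattrs that make "volume create" reject a previously used brick:
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs
    # Recreate the volume with the bricks in their original order, then start it:
    gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1
    gluster volume start myvol
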
19:39 pdrakeweb joined #gluster
19:49 neofob left #gluster
20:26 squizzi joined #gluster
20:31 plarsen joined #gluster
20:38 vimal joined #gluster
20:48 pdrakeweb joined #gluster
20:56 neofob joined #gluster
21:03 Bardack joined #gluster
21:05 Bardack joined #gluster
21:09 Bardack joined #gluster
21:10 dgandhi joined #gluster
21:11 Bardack joined #gluster
21:14 Bardack joined #gluster
21:16 wtracz left #gluster
21:22 Bardack joined #gluster
21:44 ppai joined #gluster
21:47 mbrgm left #gluster
21:56 premera joined #gluster
21:59 Prilly joined #gluster
22:12 gnudna joined #gluster
22:13 gnudna hi guys, I have an issue with one node of a replica going down due to an hdd issue, and now after I rebooted the other node in the replica it does not want to mount
22:13 gnudna is there a simple fix for this?
22:14 Twistedgrim joined #gluster
22:15 gnudna any pointers on the above question?
22:19 JoeJulian check the client log
22:20 gnudna Hi JoeJulian
22:21 JoeJulian o/
22:21 gnudna in this case the the log just says "All subvolumes are down. Going offline until atleast one of them comes back up."
22:21 gnudna the issue is i have a replicated setup and one of the nodes is dead
22:21 JoeJulian gluster volume status
22:22 JoeJulian Yep, got that. The client's saying neither replica is up.
22:22 gnudna yes
22:23 gnudna #gluster volume status did you want anything specific from it?
22:24 gnudna i see this Brick flux:/data/gluster/brickN/ANN/A
22:24 JoeJulian Is that the one that should be up?
22:24 gnudna yes i would think so
22:25 JoeJulian "gluster volume start $volname force" on that server.
22:25 JoeJulian I think you may have to kill glusterfsd for the broken brick if that works.
22:26 gnudna yes that worked
22:27 gnudna when you say kill glusterfsd what do you mean exactly
22:27 JoeJulian Do you have more than one brick per server?
22:28 gnudna i have another volume but 1 brick per volume
22:28 gnudna replicated across what used to be 2 nodes
22:28 JoeJulian So more than one brick per server. "pkill -f $broken_brick_path" should work.
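
Each brick is served by its own glusterfsd process whose command line contains the brick path, which is why the pkill above targets only the broken one; a sketch with a placeholder path:

    # Confirm which glusterfsd belongs to the broken brick before killing it:
    ps ax | grep '[g]lusterfsd' | grep /data/gluster/brick2
    pkill -f /data/gluster/brick2
    # 'gluster volume status <vol>' also lists every brick's PID if you prefer
    # to kill by PID instead.
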
22:29 gnudna would this be required on reboot?
22:29 JoeJulian Assuming you don't need to kill more than one brick.
22:29 JoeJulian perhaps, if you are not going to fix the broken brick. :)
22:30 gnudna well in this case i need to re-install
22:30 gnudna the issues of raid0
22:30 JoeJulian With currently supported versions, as long as the brick isn't mounted it shouldn't be able to start.
22:31 gnudna well for now im good till i fix the issue itself shortly
22:31 JoeJulian cool
22:32 gnudna thanks again for the help
22:34 pdrakeweb joined #gluster
22:36 gnudna any howto on how to join a new node to the existing replicated setup
22:36 gnudna since i am doing a re-install
22:37 gnudna aka in this case to replace the dead node
22:37 gnudna or do i break the volume and recreate it like i have done in the past?
22:48 JoeJulian gnudna: I like to follow this process: http://www.gluster.org/community/documentation/index.php/Archives/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
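
A rough, non-authoritative outline of the style of recovery that page describes, assuming the rebuilt machine reuses the dead node's hostname; the UUID, host and volume names below are placeholders, and the linked page has the complete steps:

    # On a surviving peer, note the dead node's UUID:
    gluster peer status
    # On the freshly installed node, give glusterd that UUID so the pool treats
    # it as the returning member:
    service glusterd stop
    echo "UUID=<uuid-of-dead-node>" > /var/lib/glusterd/glusterd.info
    service glusterd start
    # Re-join the pool and let self-heal repopulate the bricks:
    gluster peer probe survivingnode
    gluster volume heal myvol full
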
22:59 nsoffer joined #gluster
23:02 mkzero joined #gluster
23:18 ShaunR 3.7 repo is broken
23:18 gildub joined #gluster
23:20 JoeJulian So it 3.7, imho.
23:20 JoeJulian *is
23:21 JoeJulian It looks unbroken to me. What are you seeing?
23:53 ShaunR looks ok now, was giving dep errors earlier
