
IRC log for #gluster, 2015-03-04


All times shown according to UTC.

Time Nick Message
00:28 lpabon joined #gluster
00:44 daMaestro joined #gluster
00:53 topshare joined #gluster
00:57 topshare_ joined #gluster
01:03 DV joined #gluster
01:15 sprachgenerator joined #gluster
01:26 kripper joined #gluster
01:26 kripper Hi Joe
01:26 kripper JoeJulian: can you please help me to enable nfs on a gluster volume?
01:26 JoeJulian @nfs
01:26 glusterbot JoeJulian: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
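glusterbot's factoid, turned into a concrete invocation — a minimal sketch in which "server1" and "myvol" are placeholder names, not anything from this channel:

```shell
# Mount a Gluster volume via its built-in NFS server. Gluster's NFS
# server only speaks NFSv3 over TCP, hence the explicit options; an rpc
# port mapper must be running and the kernel nfsd must be stopped on
# the server, as glusterbot notes.
mount -t nfs -o tcp,vers=3 server1:/myvol /mnt/myvol
```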
01:28 kripper JoeJulian: NFS Server on localhost                                 N/A     N       N/A
01:28 JoeJulian Unless you've actively disabled the nfs server it's enabled by default.
01:29 kripper JoeJulian: I remember there was a global NFS option somewhere
01:29 JoeJulian Does "gluster volume status" list it?
01:29 kripper I used  nfs.disable off
01:29 JoeJulian Just "gluster volume reset $vol nfs.disable"
01:30 kripper JoeJulian: It is listed, but with N/A
01:30 JoeJulian "off" should be false - so it should work, but I have always just reset the options I'm not using.
01:30 kripper JoeJulian: reset and still not working
01:30 JoeJulian @ports
01:30 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
01:30 kripper netstat -tnlp | grep 2049
01:31 JoeJulian maybe?
01:31 kripper # netstat -tnlp | grep 111
01:31 kripper tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      2662/rpcbind
01:31 kripper tcp6       0      0 :::111                  :::*                    LISTEN      2662/rpcbind
01:31 JoeJulian check /var/log/glusterfs/nfs.log
01:32 kripper [2015-03-04 01:20:44.499506] E [rpcsvc.c:1303:rpcsvc_program_register_portmap] 0-rpc-service: Could not register with portmap 100021 4 38468
01:32 kripper [2015-03-04 01:20:44.499538] E [nfs.c:331:nfs_init_versions] 0-nfs: Program  NLM4 registration failed
01:32 kripper [2015-03-04 01:20:44.499553] E [nfs.c:1342:init] 0-nfs: Failed to initialize protocols
01:32 kripper [2015-03-04 01:20:44.499567] E [xlator.c:425:xlator_init] 0-nfs-server: Initialization of volume 'nfs-server' failed, review your volfile again
01:32 kripper [2015-03-04 01:20:44.499577] E [graph.c:322:glusterfs_graph_init] 0-nfs-server: initializing translator failed
01:32 kripper [2015-03-04 01:20:44.499598] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
01:32 JoeJulian @paste
01:32 glusterbot JoeJulian: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
01:33 JoeJulian Please use a pastebin instead of flooding IRC channels. :D
01:33 kripper http://pastebin.com/wVLqCk4d
01:33 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
01:34 kripper some missing packages?
01:34 kripper (CentOS 7)
01:34 JoeJulian Doesn't look that way. Looks like it failed to register with portmap.
01:35 kripper I also disabled file locking in the /etc/nfsmount.conf file with the line Lock=False
01:35 kripper do I need to reload something?
01:36 gildub joined #gluster
01:37 JoeJulian rpcinfo -p localhost
01:37 sprachgenerator joined #gluster
01:37 kripper yep, it is portmapped
01:37 kripper 100003    3   tcp   2049  nfs
01:37 kripper is it kernel's NFS?
01:37 JoeJulian looks like it
01:37 kripper how do I disable it?
01:37 JoeJulian 100021 is also the lock manager.
01:38 kripper 100021    3   udp  41387  nlockmgr
01:39 JoeJulian systemctl stop nfs-server nfs-lock nfs-idmap ; systemctl disable nfs-server nfs-lock nfs-idmap
01:40 kripper still visible in portmap:
01:40 kripper 100003    3   tcp   2049  nfs
01:43 luis_silva joined #gluster
01:47 kripper mmm...NFS exports are still mounted
01:50 bala joined #gluster
01:54 kripper should nfs-lock be loaded?
01:56 JoeJulian Argh, I should have guessed...
01:56 JoeJulian it's probably bug 1184661
01:56 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1184661 unspecified, unspecified, rc, steved, CLOSED WONTFIX, systemd service contains -w switch
01:56 JoeJulian or not...
01:57 JoeJulian or yes... AND selinux
02:02 kripper any workaround?
02:08 JoeJulian sed 's/-w //' /usr/lib/systemd/system/rpcbind.service >/etc/systemd/system/rpcbind.service ; systemctl daemon-reload ; setenforce 0 ; systemctl restart glusterd
02:08 JoeJulian Then edit /etc/sysconfig/selinux and set it to permissive (for now).
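The effect of that sed can be reproduced on a throwaway file; the ExecStart line below is illustrative (modeled on the EL7 rpcbind unit), not copied from it:

```shell
# The workaround writes the edited unit to /etc/systemd/system instead
# of using sed -i on the original, so a later rpcbind package upgrade
# can't clobber it: units under /etc/systemd/system take precedence
# over the packaged copies in /usr/lib/systemd/system.
src=$(mktemp); dst=$(mktemp)
printf '[Service]\nExecStart=/sbin/rpcbind -w $RPCBIND_ARGS\n' > "$src"
sed 's/-w //' "$src" > "$dst"
grep ExecStart "$dst"   # -> ExecStart=/sbin/rpcbind $RPCBIND_ARGS
rm -f "$src" "$dst"
```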
02:11 gem joined #gluster
02:14 kripper working
02:14 kripper I guess rpcbind restart was also needed after removing the "-w"
02:15 kripper thanks!!!
02:16 kripper why the "WONTFIX"?
02:17 kripper this is CentOS 7
02:18 JoeJulian it is, and I disagree with his comments. Feel free to add your own and change the status to "NEW".
02:18 kripper ok
02:19 sprachgenerator joined #gluster
02:21 sprachgenerator_ joined #gluster
02:22 RameshN joined #gluster
02:26 kripper JoeJulian: just added some noise
02:29 * JustinClift me too
02:30 JustinClift Btw, with sed, wouldn't it be easier to do "sed -i" for inplace edit instead of the > version?
02:30 JustinClift I haven't checked if the > version mucks up SELinux labels or not... kind of suspect it might
02:32 JoeJulian JustinClift: no, because the next time rpcbind is upgraded, it would overwrite the one in /usr/lib.
02:32 JoeJulian The correct way to override a packaged service file is to put the override in /etc/systemd/system
02:33 JustinClift Gah.  I didn't notice it wasn't going over the same file
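A variant of the same override, sketched as a systemd drop-in rather than a full copy of the unit (the drop-in file name is mine; the empty ExecStart= resets the inherited one before replacing it):

```shell
# Drop-ins under <unit>.service.d/ override individual directives while
# the rest of the packaged unit file in /usr/lib is still inherited, so
# package upgrades keep working without touching the override.
mkdir -p /etc/systemd/system/rpcbind.service.d
cat > /etc/systemd/system/rpcbind.service.d/no-warm-start.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/sbin/rpcbind
EOF
systemctl daemon-reload
```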
02:33 JoeJulian The selinux problem seems to be related to the named pipes.
02:33 JustinClift I should probably hit the sack, but waiting for stuff to finish ;?
02:33 JoeJulian I'm seeing it on F21 also.
02:34 JustinClift Wonder if we should open a GlusterFS bug along the lines of "Doesn't work with rpcbind in EL7/F21 due to -w switch"
02:35 JustinClift Just in case there's a code workaround we can do, and the rpcbind people really won't budge :/
02:36 harish joined #gluster
02:38 gem joined #gluster
02:38 JoeJulian He responded in less than a week last time. Either way, we're going to need to change that switch when we install.
02:39 JustinClift JoeJulian: Is it important enough I should email him directly and ask him to reconsider?
02:40 JoeJulian meh, let's see how it goes first.
02:40 JustinClift No worries. :)
02:41 nangthang joined #gluster
02:50 sprachgenerator joined #gluster
02:50 luis_silva joined #gluster
02:56 sprachgenerator joined #gluster
02:56 bala joined #gluster
02:59 bharata-rao joined #gluster
03:03 mat1010 joined #gluster
03:26 Maya_ joined #gluster
03:33 kanagaraj joined #gluster
03:47 rjoseph joined #gluster
03:56 nbalacha joined #gluster
04:02 badone_ joined #gluster
04:02 h4rry joined #gluster
04:04 haomaiwa_ joined #gluster
04:04 kripper left #gluster
04:06 bala joined #gluster
04:07 gburiticato joined #gluster
04:07 shubhendu_ joined #gluster
04:07 dgandhi joined #gluster
04:11 bharata-rao joined #gluster
04:27 schandra joined #gluster
04:30 dockbram joined #gluster
04:34 bala joined #gluster
04:34 anoopcs joined #gluster
04:48 shubhendu_ joined #gluster
04:48 ndarshan joined #gluster
04:50 gem joined #gluster
04:53 rafi joined #gluster
04:53 anil joined #gluster
04:55 jiffin joined #gluster
04:56 meghanam joined #gluster
05:09 gem_ joined #gluster
05:14 Apeksha joined #gluster
05:17 ppai joined #gluster
05:19 prasanth_ joined #gluster
05:23 o5k joined #gluster
05:23 DV joined #gluster
05:24 badone_ joined #gluster
05:29 spandit joined #gluster
05:33 kshlm joined #gluster
05:34 kumar joined #gluster
05:34 Maya_ joined #gluster
05:34 vikumar joined #gluster
05:35 overclk joined #gluster
05:36 atalur joined #gluster
05:43 lalatenduM joined #gluster
05:55 atalur joined #gluster
05:56 sputnik13 joined #gluster
05:57 ramteid joined #gluster
05:58 Bhaskarakiran joined #gluster
06:04 bala joined #gluster
06:14 raghu joined #gluster
06:16 rjoseph joined #gluster
06:28 anrao joined #gluster
06:29 kalzz joined #gluster
06:33 topshare joined #gluster
06:35 topshare_ joined #gluster
06:37 victori joined #gluster
06:40 delhage joined #gluster
06:46 atalur joined #gluster
06:47 nbalacha joined #gluster
06:48 nangthang joined #gluster
06:50 javi404 joined #gluster
07:01 kalzz joined #gluster
07:03 sac`away joined #gluster
07:03 rotbeard joined #gluster
07:05 mbukatov joined #gluster
07:05 deepakcs joined #gluster
07:17 sac`away joined #gluster
07:18 jtux joined #gluster
07:31 glusterbot News from newglusterbugs: [Bug 1191919] Disperse volume: Input/output error when listing files/directories under nfs mount <https://bugzilla.redhat.com/show_bug.cgi?id=1191919>
07:39 poornimag joined #gluster
07:40 atalur joined #gluster
07:40 Netbulae joined #gluster
07:40 Netbulae left #gluster
07:43 topshare joined #gluster
07:55 Philambdo joined #gluster
07:58 aravindavk joined #gluster
08:06 badone_ joined #gluster
08:07 VeggieMeat_ joined #gluster
08:12 deniszh joined #gluster
08:16 [Enrico] joined #gluster
08:28 fsimonce joined #gluster
08:30 topshare joined #gluster
08:31 bala joined #gluster
08:45 VeggieMeat joined #gluster
08:55 DV__ joined #gluster
08:56 liquidat joined #gluster
08:58 liquidat joined #gluster
09:00 [Enrico] joined #gluster
09:06 fattaneh1 joined #gluster
09:13 Slashman joined #gluster
09:20 DV joined #gluster
09:21 T0aD joined #gluster
09:22 vikumar joined #gluster
09:22 Norky joined #gluster
09:25 fattaneh1 left #gluster
09:37 anti[Enrico] joined #gluster
09:44 Netbulae joined #gluster
09:45 prasanth_ joined #gluster
09:46 ppai joined #gluster
09:48 anil joined #gluster
09:50 owlbot joined #gluster
09:54 vikumar joined #gluster
09:57 topshare joined #gluster
10:02 kshlm joined #gluster
10:03 kshlm joined #gluster
10:12 LebedevRI joined #gluster
10:12 vikumar joined #gluster
10:28 badone_ joined #gluster
10:42 nshaikh joined #gluster
10:47 harish_ joined #gluster
10:47 Prilly joined #gluster
10:53 prasanth_ joined #gluster
10:54 Bhaskarakiran joined #gluster
10:56 Bhaskarakiran_ joined #gluster
11:06 mtpmoni joined #gluster
11:14 schandra joined #gluster
11:27 rolfb joined #gluster
11:35 firemanxbr joined #gluster
11:49 partner joined #gluster
11:51 atalur joined #gluster
11:57 meghanam joined #gluster
11:58 JustinClift *** REMINDER: Weekly GlusterFS Community meeting is in Freenode #gluster-meeting in 2 minutes :) ***
12:02 glusterbot News from newglusterbugs: [Bug 1191423] upgrade to gluster 3.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1191423>
12:03 bene2 joined #gluster
12:06 doekia joined #gluster
12:08 ira_ joined #gluster
12:15 kanagaraj joined #gluster
12:17 ppai joined #gluster
12:18 kanagaraj_ joined #gluster
12:20 Prilly joined #gluster
12:29 Paul_ joined #gluster
12:32 jdarcy joined #gluster
12:32 glusterbot News from newglusterbugs: [Bug 1198573] libgfapi APIs overwrite the existing THIS value when called from other xlators like snapview <https://bugzilla.redhat.com/show_bug.cgi?id=1198573>
12:36 RameshN joined #gluster
12:36 julim joined #gluster
12:37 snewpy joined #gluster
12:41 Paul_ Hi All, I've got a very simple setup to try-out glusterfs. 2 Nodes on OL6.6, 1 volume with a brick on each node. Volume is running, iptables is off etc.
12:41 Paul_ When trying to mount the volume on one of two nodes, it fails
12:42 Paul_ showmount -e <node> shows the volume properly
12:43 Paul_ The only thing I can find in the logs is a single warning
12:43 Paul_ [2015-03-04 08:47:51.442746] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (15), shutting down
12:43 glusterbot Paul_: ('s karma is now -60
12:50 bala joined #gluster
12:51 harish_ joined #gluster
12:55 harish_ joined #gluster
13:02 meghanam joined #gluster
13:03 nbalacha joined #gluster
13:03 Leildin what exactly is the problem here Paul_ ?
13:04 lalatenduM_ joined #gluster
13:04 Paul_ The mount is failing, that's where i'm currently stuck
13:05 Paul_ mount -t glusterfs gc12:/vol1 /mnt/gluster/ Mount failed. Please check the log file for more details.
13:10 bala joined #gluster
13:17 Leildin is your gc12 resolved correctly ?
13:19 prasanth_ joined #gluster
13:19 Paul_ It is, "nslookup gc12" gives the right response
13:22 Leildin and a gluster volume info says the volume is up and running fine ?
13:24 Paul_ yes. 'volume info vol1' says status: Started
13:24 nshaikh joined #gluster
13:24 _shaps_ joined #gluster
13:25 Leildin I can only think of the hosts not being resolved correctly but your volume wouldn't be working I don't think
13:25 Leildin or maybe it's the /mnt/gluster which isn't allowed to be created ?
13:26 Paul_ /mnt/gluster is an existing directory
13:26 Paul_ which is empty
13:27 Paul_ which is hosted on mount-point /, which only has as mount option (rw), so no restrictions it seems
13:28 rjoseph joined #gluster
13:31 Paul_ I just now tried to mount the same volume on a host that is not part of the gluster servers. It fails with the same error.
13:36 Paul_ error 15 suggests 'Block device required' but having searched the interwebs for this issue I've seen this error more often in different error context. So it's probably more of a generic error in glusterfs case
13:37 chirino joined #gluster
13:48 theron joined #gluster
13:52 B21956 joined #gluster
13:53 B21956 left #gluster
13:54 Apeksha joined #gluster
13:54 B21956 joined #gluster
13:56 meghanam joined #gluster
13:57 RameshN joined #gluster
14:01 kovshenin joined #gluster
14:03 glusterbot News from newglusterbugs: [Bug 1165938] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1165938>
14:03 kovshenin joined #gluster
14:07 wkf joined #gluster
14:09 mayae joined #gluster
14:10 geaaru joined #gluster
14:11 geaaru hi, i'm trying to enable the --sparse option on rsync for geo-replication (gluster 3.5.3), but from the log I see that the rsync command uses --inplace
14:11 geaaru an option that can't be combined with --sparse... is there a way to enable --sparse and disable --inplace ?
14:12 geaaru thanks in advance
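For background on what geaaru is after: --sparse makes rsync preserve holes, which matters because a sparse file's apparent size can dwarf its allocated blocks. A quick local illustration, unrelated to any particular volume:

```shell
# Create a 10 MiB file that occupies (almost) no disk blocks.
f=$(mktemp)
truncate -s 10M "$f"
stat -c %s "$f"   # apparent size: 10485760 bytes
du -k "$f"        # allocated size: ~0 KiB on most filesystems
rm -f "$f"
```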
14:19 coredump joined #gluster
14:27 georgeh-LT2 joined #gluster
14:29 plarsen joined #gluster
14:29 td_ joined #gluster
14:30 Philambdo joined #gluster
14:31 Philambdo joined #gluster
14:31 malevolent joined #gluster
14:32 td_ Hi.  How do I remove a dead brick?  remove-brick results in "Incorrect brick" for me http://dpaste.com/30MDYQH
14:33 glusterbot News from newglusterbugs: [Bug 1198614] geo-replication create command must have an option to avoid slave verification. <https://bugzilla.redhat.com/show_bug.cgi?id=1198614>
14:33 glusterbot News from newglusterbugs: [Bug 1198615] geo-replication create command must have an option to avoid slave verification. <https://bugzilla.redhat.com/show_bug.cgi?id=1198615>
14:40 geaaru fyi: I find that the --inplace option is present in the /usr/libexec/glusterfs/python/syncdaemon/resource.py file. I'll try removing it directly from the file to permit use of the --sparse option.
14:40 deepakcs joined #gluster
14:44 bennyturns joined #gluster
14:48 luis_silva joined #gluster
14:59 geaaru ndevos: hi, sorry to interrupt. Is there a way from the gluster shell to disable use of the --inplace option on the geo-replication rsync command without changing the resource.py file ? thanks in advance
15:00 dgandhi joined #gluster
15:02 dgandhi joined #gluster
15:02 dgandhi joined #gluster
15:03 dgandhi joined #gluster
15:05 aravindavk joined #gluster
15:05 squizzi joined #gluster
15:07 elico joined #gluster
15:11 mayae joined #gluster
15:13 sputnik13 joined #gluster
15:15 mayae joined #gluster
15:16 sputnik1_ joined #gluster
15:17 T3 joined #gluster
15:19 mayae joined #gluster
15:26 Prilly joined #gluster
15:27 yosafbridge joined #gluster
15:34 pelox joined #gluster
15:34 kshlm joined #gluster
15:39 jobewan joined #gluster
15:43 nbalacha joined #gluster
15:49 B21956 left #gluster
15:49 B21956 joined #gluster
16:03 tigert joined #gluster
16:04 bene2 joined #gluster
16:04 papamoose left #gluster
16:07 papamoose joined #gluster
16:12 xavih joined #gluster
16:12 malevolent joined #gluster
16:16 wushudoin joined #gluster
16:16 sprachgenerator joined #gluster
16:26 rwheeler joined #gluster
16:45 julim joined #gluster
17:01 ira joined #gluster
17:07 gem joined #gluster
17:10 vipulnayyar joined #gluster
17:11 olim joined #gluster
17:18 RameshN joined #gluster
17:19 bennyturns joined #gluster
17:24 elico joined #gluster
17:24 DV joined #gluster
17:27 virusuy joined #gluster
17:34 lalatenduM joined #gluster
17:38 tetreis joined #gluster
17:47 JoeJulian geaaru: No, there's no option for replacing --inplace in geo-replication. The reason it uses --inplace is to work around rsync's use of temporary filenames. When a temporary filename is used, the filename hash often points to a dht brick that will be different from what the final filename will be.
17:47 JoeJulian I imagine that if you're not using a target with multiple dht subvolumes, that wouldn't matter. Feel free to modify that source file, and file a bug report with that feature request.
17:47 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
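JoeJulian's point about temporary filenames can be sketched with a toy hash. Gluster's DHT actually hashes the filename with a Davies-Meyer hash; here cksum, the 4-brick layout, and the .Gx7Tz temp suffix are all stand-ins for illustration:

```shell
# DHT places a file on a brick chosen by hashing its *name*. rsync
# normally writes to a dotted temporary name first, which usually
# hashes to a different brick than the final name -- hence the forced
# --inplace in geo-replication.
bucket() { printf '%s' "$1" | cksum | awk '{ print $1 % 4 }'; }
echo "file.txt        -> brick $(bucket file.txt)"
echo ".file.txt.Gx7Tz -> brick $(bucket .file.txt.Gx7Tz)"
```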
17:49 geaaru JoeJulian: in my scenario i have a source volume with replica 2x2 to a replication volume with only a brick
17:50 deniszh joined #gluster
17:50 mayae joined #gluster
17:51 geaaru so in that case is correct remove --inplace or probably is more conveniente use rsync command directly between gluster mount fs to handle correctly sparse file ?
17:51 geaaru s/conveniente/correct/
17:51 glusterbot What geaaru meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
17:54 victori joined #gluster
17:56 Rapture joined #gluster
18:06 sputnik13 joined #gluster
18:10 xiu joined #gluster
18:22 DV joined #gluster
18:23 lalatenduM joined #gluster
18:24 hchiramm__ joined #gluster
18:28 geaaru JoeJulian: thanks for support.
18:32 ira joined #gluster
18:33 PeterA joined #gluster
18:34 glusterbot News from newglusterbugs: [Bug 1198746] Volume passwords are visible to remote users <https://bugzilla.redhat.com/show_bug.cgi?id=1198746>
18:44 qtmn joined #gluster
18:44 qtmn left #gluster
18:44 jmarley joined #gluster
18:47 jackdpeterson @purpleidea -- added new commit to handle OS params for the nobootwait option. Seeking some peer review on that change
18:47 jackdpeterson ^^ https://github.com/purpleidea/puppet-gluster/pull/34
18:48 skroz joined #gluster
18:56 rotbeard joined #gluster
19:08 purpleidea jackdpeterson: will try to review today! thanks
19:08 balacafalata joined #gluster
19:08 jackdpeterson @purpleidea Cheers
19:09 sputnik13 joined #gluster
19:15 lpabon joined #gluster
19:16 jmarley joined #gluster
19:18 purpleidea jackdpeterson: review done
19:18 quantm joined #gluster
19:19 jackdpeterson @puepleidea - thanks, will make modifications
19:20 quantm Hi! I use proxmox 3.1 with gluster 3.4. I upgraded proxmox ve from 3.1 to 3.4 and gluster from 3.4 to 3.5.2. After reboot the gluster server works normally, but the gluster client fails. I try #gluster peer status - gluster: symbol lookup error: gluster: undefined symbol: xdr_gf1_cli_probe_rsp
19:20 quantm How to fix it?
19:22 quantm google doesn't help me
19:24 sputnik13 joined #gluster
19:25 JoeJulian quantm: My guess is that after upgrading, some services didn't get restarted.
19:25 quantm1 joined #gluster
19:27 quantm1 joined #gluster
19:30 DV joined #gluster
19:32 quantm joined #gluster
19:33 sputnik13 joined #gluster
19:37 sputnik13 joined #gluster
19:38 papamoose1 joined #gluster
19:41 quantm joined #gluster
19:44 sputnik13 joined #gluster
19:54 quantm joined #gluster
19:56 sputnik13 joined #gluster
19:58 quantm1 joined #gluster
20:09 quantm joined #gluster
20:11 jmarley_ joined #gluster
20:11 quantm joined #gluster
20:14 jmarley joined #gluster
20:18 quantm1 joined #gluster
20:20 quantm joined #gluster
20:21 gem joined #gluster
20:22 jmarley joined #gluster
20:24 quantm1 joined #gluster
20:34 quantm joined #gluster
20:36 quantm1 joined #gluster
20:39 quantm joined #gluster
21:04 glusterbot News from newglusterbugs: [Bug 1198810] vm creation from template fails when multiple builds running concurrently <https://bugzilla.redhat.com/show_bug.cgi?id=1198810>
21:57 hagarth joined #gluster
22:34 T0aD joined #gluster
22:39 T3 joined #gluster
22:40 deniszh joined #gluster
22:45 theron joined #gluster
22:47 victori joined #gluster
22:57 pelox joined #gluster
23:05 glusterbot News from newglusterbugs: [Bug 1162910] mount options no longer valid: noexec, nosuid, noatime <https://bugzilla.redhat.com/show_bug.cgi?id=1162910>
23:05 glusterbot News from newglusterbugs: [Bug 1198849] Minor improvements and cleanup for the build system <https://bugzilla.redhat.com/show_bug.cgi?id=1198849>
23:06 gildub joined #gluster
23:08 theron joined #gluster
23:28 gildub joined #gluster
