
IRC log for #gluster, 2014-04-09


All times shown according to UTC.

Time Nick Message
00:15 doekia joined #gluster
00:21 gdubreui joined #gluster
00:23 nightwalk joined #gluster
01:05 doekia joined #gluster
01:13 doekia_ joined #gluster
01:16 vpshastry joined #gluster
01:18 bala joined #gluster
01:34 bharata-rao joined #gluster
01:47 haomaiwang joined #gluster
01:56 harish joined #gluster
02:07 gdubreui joined #gluster
02:09 jag3773 joined #gluster
02:11 haomaiwang joined #gluster
02:41 ceiphas_ joined #gluster
02:43 badone joined #gluster
02:47 AaronGr joined #gluster
02:48 AaronGr joined #gluster
03:00 jag3773 joined #gluster
03:01 nightwalk joined #gluster
03:02 rjoseph joined #gluster
03:13 wgao joined #gluster
03:17 primechuck joined #gluster
03:17 kdhananjay joined #gluster
03:23 shubhendu joined #gluster
03:27 shylesh joined #gluster
03:32 rastar joined #gluster
03:37 ravindran1 joined #gluster
03:37 itisravi joined #gluster
03:40 ravindran1 left #gluster
03:44 marcoceppi joined #gluster
03:45 gmcwhistler joined #gluster
03:51 jayunit100 joined #gluster
03:55 kdhananjay joined #gluster
03:55 dusmant joined #gluster
04:17 ndarshan joined #gluster
04:26 deepakcs joined #gluster
04:27 _ndevos joined #gluster
04:28 harish joined #gluster
04:29 raghu joined #gluster
04:31 ppai joined #gluster
04:33 sripathi1 joined #gluster
04:45 kdhananjay joined #gluster
04:48 rjoseph joined #gluster
05:05 vpshastry joined #gluster
05:06 sputnik13 joined #gluster
05:06 ceiphas_ good morning
05:09 atinm joined #gluster
05:27 ravindran1 joined #gluster
05:28 vkoppad joined #gluster
05:34 prasanth_ joined #gluster
05:34 sahina joined #gluster
05:34 Durzo ceiphas, any luck with your issue?
05:34 ceiphas yes and no
05:34 ceiphas https://bugzilla.redhat.com/show_bug.cgi?id=1085425
05:35 glusterbot Bug 1085425: high, unspecified, ---, csaba, NEW , Input/Output Errors with 64bit Server and 32bit client
05:35 RameshN joined #gluster
05:36 bala1 joined #gluster
05:42 lalatenduM joined #gluster
05:44 kanagaraj joined #gluster
05:54 spandit joined #gluster
06:03 sripathi2 joined #gluster
06:05 sripathi2 joined #gluster
06:08 RobertLaptop joined #gluster
06:15 rgustafs joined #gluster
06:17 ppai joined #gluster
06:20 aravindavk joined #gluster
06:21 hagarth joined #gluster
06:25 jtux joined #gluster
06:26 rjoseph joined #gluster
06:36 ekuric joined #gluster
06:37 psharma joined #gluster
06:38 glusterbot New news from newglusterbugs: [Bug 1085671] [barrier] reconfiguration of barrier time out does not work <https://bugzilla.redhat.com/show_bug.cgi?id=1085671>
06:40 benjamin_____ joined #gluster
06:42 fidevo joined #gluster
06:46 ceiphas is the gluster fuse client blocking the kNFS daemon on this host? i get "address already in use" although nfs is not running
06:46 sputnik13 joined #gluster
06:46 an joined #gluster
06:50 vimal joined #gluster
06:57 ctria joined #gluster
07:01 ravindran1 joined #gluster
07:06 glusterbot New news from resolvedglusterbugs: [Bug 1066837] File creation on cifs mount of a gluster volume fails <https://bugzilla.redhat.com/show_bug.cgi?id=1066837> || [Bug 1054696] Got debug message in terminal while qemu-img creating qcow2 image <https://bugzilla.redhat.com/show_bug.cgi?id=1054696>
07:07 ngoswami joined #gluster
07:07 eseyman joined #gluster
07:13 haomaiwa_ joined #gluster
07:15 cyber_si ceiphas, rpcinfo -p localhost
07:16 ceiphas it shows nfs running, but i didnt start it
07:16 cyber_si tcp   2049  nfs ?
07:16 ceiphas is the glusterfs fuse running its own nfs daemon?
07:16 cyber_si no
07:17 ceiphas but "ps ax | grep nfs" shows this
07:18 ceiphas /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/e33736816767dee71faa9bb609d22e04.socket
07:18 ceiphas on the client that just mounts the fs with fuse
07:19 cyber_si unmount it and look again
07:20 cyber_si you don't have a gluster server on this node?
07:20 ceiphas i even stopped glusterd, but still have these processes
07:21 cyber_si the init script stops only glusterd, e.g. on debian
07:22 _ndevos ceiphas: you can disable the nfs-server, you have to 'gluster volume set $VOLUME set nfs.disable true' for all volumes
07:22 ceiphas i know, i killed these by hand now
07:22 ndevos uh, that command without the 2nd 'set'
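
For reference, the corrected command (one 'set', run once per volume) would look like this, as a sketch assuming a volume named myvol:

    gluster volume set myvol nfs.disable true        # stop gluster's built-in NFS server for this volume
    gluster volume info myvol | grep nfs.disable     # verify the option took effect
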
07:23 ceiphas ndevos, is already disabled... i am curious why my machine claims that nfs is running when it isnt, gluster was my first suspect, but maybe it is innocent
07:24 ndevos ceiphas: in /var/lib/glusterd/nfs/nfs.vol all volumes should be listed that have nfs enabled
07:24 ceiphas file not found
07:24 ppai joined #gluster
07:24 cyber_si /var/log/glusterfs/nfs.log
07:25 ndevos ceiphas: can it be that the process was running from before you set nfs.disable?
07:25 ceiphas any news about the 32bit/64bit problem we talked about yesterday?
07:25 ndevos no, I did not have time to test that, fixing these kind of issues isnt really my job ;)
07:26 ceiphas ndevos, maybe... i still need an init-script that stops gluster completely
07:27 ndevos ceiphas: I can build some test RPMs in case you can test a 64-bit client with your 32-bit volume, the fix is completely client-side
07:27 ceiphas ndevos, i use gentoo, lemme just check which file my ebuild needs
07:27 Joe630 joined #gluster
07:28 ndevos ah, I've got no idea about gentoo, a tar.gz or a single patch would work for you?
07:28 ceiphas normally a patch-file would work
07:28 sripathi1 joined #gluster
07:29 ndevos okay, and what version are you on? just to make sure that the patch applies :)
07:29 ndevos was that 3.4.2?
07:29 ceiphas glusterfs-3.4.2
07:29 ndevos right, I'll check for the patch now
07:35 nshaikh joined #gluster
07:35 fsimonce joined #gluster
07:36 glusterbot New news from resolvedglusterbugs: [Bug 1020848] Enable per client logging for gluster shares served by Samba <https://bugzilla.redhat.com/show_bug.cgi?id=1020848> || [Bug 1080970] SMB:samba and ctdb hook scripts are not present in corresponding location after installation of 3.0 rpm's <https://bugzilla.redhat.com/show_bug.cgi?id=1080970> || [Bug 1084964] SMB: CIFS mount fails with the latest glusterfs rpm's <https://bugzi
07:37 nightwalk joined #gluster
07:40 Pavid7 joined #gluster
07:41 Philambdo joined #gluster
07:42 ndevos ceiphas: http://paste.fedoraproject.org/92782/97028833/raw/ should apply and compile cleanly
07:44 ndevos ceiphas: when it does not resolve the problem for you, you want to capture a complete log with TRACE enabled: rm /var/log/glusterfs/<mntpnt>.log ; mount -t glusterfs -o log-level=TRACE ...
07:45 ndevos that patch adds some extra logging so that similar issues can get diagnosed a little easier
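
Spelled out, the trace capture ndevos describes might look like this (a sketch; server, volume, and mountpoint are placeholders, and the client log name is derived from the mountpoint path):

    umount /mnt/gv0
    rm /var/log/glusterfs/mnt-gv0.log                          # start from a clean client log
    mount -t glusterfs -o log-level=TRACE server:/gv0 /mnt/gv0
    # reproduce the failure, then attach /var/log/glusterfs/mnt-gv0.log to the bug
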
07:45 shireesh joined #gluster
07:47 sripathi joined #gluster
07:49 ceiphas ndevos, installing
07:49 ceiphas gentoo is perfect for patch-testing
07:49 ndevos ceiphas: cool!
07:50 ceiphas the package is called "sys-cluster/glusterfs" so i just have to put the patch into /etc/portage/patches/sys-cluster/glusterfs and it will be picked up
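
ceiphas's Gentoo workflow, roughly (a sketch, assuming the glusterfs ebuild picks up user patches via epatch_user):

    mkdir -p /etc/portage/patches/sys-cluster/glusterfs
    cp 32bit-client-fix.patch /etc/portage/patches/sys-cluster/glusterfs/
    emerge --oneshot sys-cluster/glusterfs    # rebuild; the user patch is applied during the prepare phase
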
07:51 ceiphas seems as if it works now
07:51 ceiphas any tests i should perform?
07:53 ceiphas ndevos, it works now (forgot to add your name)
07:55 ndevos ceiphas: great, can you leave a confirmation in that bug you filed? I'll mark it as a duplicate of the other one then (or you could do that too)
07:57 Durzo hooray for squashed bugs
07:57 Andyy2 joined #gluster
07:59 ceiphas ndevos, could you please upload the patch to one or both bugs as i'm not able to do that, something blocks me
07:59 ceiphas i'll leave my confirmation afterwards
08:00 ndevos ceiphas: sure, what was the 64-bit server 32-bit client bug again?
08:00 ceiphas https://bugzilla.redhat.com/show_bug.cgi?id=1085425
08:00 glusterbot Bug 1085425: high, unspecified, ---, csaba, NEW , Input/Output Errors with 64bit Server and 32bit client
08:01 cyber_si left #gluster
08:02 cyber_si joined #gluster
08:02 cyber_si left #gluster
08:02 TvL2386 joined #gluster
08:04 Slash joined #gluster
08:06 X3NQ joined #gluster
08:06 ndevos ceiphas: it's there now
08:07 calum_ joined #gluster
08:09 cyber_si joined #gluster
08:10 an joined #gluster
08:11 social joined #gluster
08:14 kdhananjay joined #gluster
08:23 bala1 joined #gluster
08:23 andreask joined #gluster
08:23 ceiphas ndevos, my confirmation too
08:33 Calum joined #gluster
08:37 hagarth joined #gluster
08:46 23LAAA7TB joined #gluster
09:01 tonyxx joined #gluster
09:02 tonyxx hello, anybody could help me?
09:02 tonyxx i have ubuntu 12.04 with gluster server 3.4
09:02 tonyxx in two servers
09:03 tonyxx gluster had been working fine until now
09:03 tonyxx just i reboot node2
09:03 tonyxx and gluster daemons not start
09:03 tonyxx my log
09:03 tonyxx is
09:04 tonyxx E [rdma.c:4485:init] 0-rdma.management: Failed to initialize IB Device
09:04 tonyxx E [rpc-transport.c:320:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
09:04 tonyxx and then
09:05 tonyxx [2014-04-09 08:58:52.034066] E [glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
09:05 tonyxx [2014-04-09 08:58:52.034094] D [store.c:566:gf_store_iter_get_next] 0-: Returning with 0
09:05 tonyxx [2014-04-09 08:58:52.034119] E [glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
09:05 tonyxx at the end of log shows
09:05 tonyxx E [xlator.c:390:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
09:05 tonyxx [2014-04-09 08:58:52.515693] E [graph.c:292:glusterfs_graph_init] 0-management: initializing translator failed
09:05 tonyxx [2014-04-09 08:58:52.515713] E [graph.c:479:glusterfs_graph_activate] 0-graph: init failed
09:05 tonyxx [2014-04-09 08:58:52.515972] W [glusterfsd.c:1002:cleanup_and_exit] (-->glusterd(main+0x3cd) [0x7f778f44a85d] (-->glusterd(glusterfs_volumes_init+0xc0) [0x7f778f44d650] (-->glusterd(glusterfs_process_volfp+0x103) [0x7f778f44d553]))) 0-: received signum (0), shutting down
09:06 tonyxx very thanks dev-team
09:07 tonyxx can anybody help me
09:08 tonyxx ??
09:08 nightwalk joined #gluster
09:08 an joined #gluster
09:16 meghanam joined #gluster
09:16 meghanam_ joined #gluster
09:18 social tonyxx: what volume? grep brick-0 -R /var/lib/glusterd
09:18 Pavid7 joined #gluster
09:20 andreask joined #gluster
09:20 RameshN joined #gluster
09:21 elico I erased a file on one brick of a mirror and I would like it to be fixed/restored. what command should I use?
09:24 nshaikh joined #gluster
09:27 social elico: stat it on mounted volume
09:28 elico social: what?
09:28 elico like on the nfs?
09:28 social elico: for example
09:29 elico can I do it on the FS itself?
09:29 social elico: the rule is not to touch bricks directly
09:30 social if you have replica you stat it, it will cause afr to kick in and heal it or you'll have split brain and you'll have to copy it manually
09:31 elico OK and by stat what do you mean?
09:31 elico (not sure about what you mean)
09:31 social stat command
09:31 social like "stat ./a.out"
09:32 elico stat it on a mounted nfs ? did I understand right?
09:33 social yes, you have volume mounted on some node or just mount it on server like mount -t glusterfs server:volume /mnt/fixup; stat /mnt/fixup/broken_file
09:33 elico social: thanks!
09:34 social please check if it got healed
09:34 bala1 joined #gluster
09:35 elico I will verify it later.
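
Put together, social's heal-by-stat recipe looks like this (a sketch assuming a replica volume named myvol served from server1):

    mount -t glusterfs server1:/myvol /mnt/fixup
    stat /mnt/fixup/path/to/erased_file     # the lookup lets AFR copy the file back from the good replica
    gluster volume heal myvol info          # confirm nothing is left pending (or spot a split brain)
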
09:36 prasanth|mtg joined #gluster
09:37 hagarth joined #gluster
09:40 haomaiwang joined #gluster
09:50 lalatenduM joined #gluster
09:53 Pavid7 joined #gluster
09:55 haomai___ joined #gluster
09:58 meghanam joined #gluster
09:59 meghanam_ joined #gluster
09:59 an joined #gluster
09:59 cyber_si https://bugzilla.redhat.com/show_bug.cgi?id=1037511
10:00 glusterbot Bug 1037511: high, unspecified, ---, vbellur, NEW , Operation not permitted occurred during setattr of <nul>
10:00 ppai joined #gluster
10:00 cyber_si ndevos, can you help me ?
10:01 ndevos cyber_si: sorry, not atm
10:02 deepakcs joined #gluster
10:04 spandit joined #gluster
10:05 rjoseph joined #gluster
10:10 kdhananjay joined #gluster
10:12 derelm joined #gluster
10:19 aravindavk joined #gluster
10:19 abyss_ Can I update glusterfs online? For example on debian: apt-get upgrade glusterfs-server, then apt-get on the clients and that's all, or is it better to do it offline?
10:24 sputnik13 joined #gluster
10:34 Pavid7 joined #gluster
10:39 smithyuk1 joined #gluster
10:46 rjoseph joined #gluster
10:49 shyam joined #gluster
10:58 an joined #gluster
11:02 lalatenduM joined #gluster
11:02 andreask joined #gluster
11:18 kdhananjay joined #gluster
11:27 Slash joined #gluster
11:35 vsa joined #gluster
11:35 stickyboy It seems gluster writes gmt times in logs... but the system isn't in gmt...
11:35 vsa Hi all. In when using geo-replication, does anybody see high usage swap space&
11:36 vsa Hi all. In when using geo-replication, does anybody see high usage swap space?
11:39 qdk joined #gluster
11:40 ira_ joined #gluster
11:40 morse joined #gluster
11:48 micu joined #gluster
11:49 deepakcs joined #gluster
12:07 glusterbot New news from resolvedglusterbugs: [Bug 1085425] Input/Output Errors with 64bit Server and 32bit client <https://bugzilla.redhat.com/show_bug.cgi?id=1085425>
12:08 sputnik13 joined #gluster
12:10 chirino joined #gluster
12:11 diegows joined #gluster
12:13 shubhendu joined #gluster
12:14 RameshN joined #gluster
12:16 vsa Hi all. when using geo-replication, does anybody see high usage swap space?
12:16 social vsa: what version?
12:16 vsa glusterfs 3.4.2
12:19 tonyxx please, does somebody know how i can monitor gluster node replication with nagios ?
12:20 tonyxx thank u
12:21 harish joined #gluster
12:22 Ark joined #gluster
12:25 social vsa: there was a memleak bug fixed in 3.4.3 which caused georeplication to leak a lot. Could you check whether gluster is eating a ton of ram? If yes I'd suggest restarting georeplication and considering an upgrade
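
The check social suggests could be done along these lines (a sketch; the session names are placeholders, take them from the geo-replication status output):

    ps -o rss,cmd -C glusterfs | sort -n    # a gsyncd-related process with a huge RSS points at the leak
    gluster volume geo-replication mastervol SLAVE status
    gluster volume geo-replication mastervol SLAVE stop
    gluster volume geo-replication mastervol SLAVE start
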
12:27 japuzzo joined #gluster
12:28 benjamin_____ joined #gluster
12:31 sputnik13 joined #gluster
12:33 sputnik13 joined #gluster
12:35 sputnik13 joined #gluster
12:39 sputnik13 joined #gluster
12:42 sputnik13 joined #gluster
12:44 sputnik13 joined #gluster
12:46 bennyturns joined #gluster
12:46 rgustafs joined #gluster
12:49 sputnik13 joined #gluster
12:50 sputnik1_ joined #gluster
12:51 sroy joined #gluster
12:52 sputnik13 joined #gluster
12:55 sputnik13 joined #gluster
13:01 sputnik13 joined #gluster
13:02 sputnik13 joined #gluster
13:03 jmarley joined #gluster
13:03 jmarley joined #gluster
13:05 sputnik13 joined #gluster
13:06 ravindran1 left #gluster
13:10 sputnik13 joined #gluster
13:11 gmcwhistler joined #gluster
13:13 sputnik13 joined #gluster
13:13 tdasilva joined #gluster
13:15 itisravi joined #gluster
13:16 sputnik13 joined #gluster
13:17 Slash joined #gluster
13:18 sputnik13 joined #gluster
13:19 [o__o] joined #gluster
13:21 sputnik13 joined #gluster
13:25 sputnik13 joined #gluster
13:28 sputnik13 joined #gluster
13:31 sputnik13 joined #gluster
13:34 sputnik13 joined #gluster
13:36 lalatenduM joined #gluster
13:37 gmcwhistler joined #gluster
13:37 sputnik13 joined #gluster
13:38 deepakcs joined #gluster
13:38 wgao joined #gluster
13:39 vpshastry left #gluster
13:40 theron joined #gluster
13:40 wgao_ joined #gluster
13:42 sputnik13 joined #gluster
13:43 sputnik1_ joined #gluster
13:44 lmickh joined #gluster
13:45 sputnik1_ joined #gluster
13:48 sputnik13 joined #gluster
13:48 T0aD joined #gluster
13:49 ekuric joined #gluster
13:50 primechuck joined #gluster
13:50 an joined #gluster
13:50 sputnik13 joined #gluster
13:53 sputnik13 joined #gluster
13:55 nikk joined #gluster
14:08 rgustafs joined #gluster
14:08 itisravi joined #gluster
14:09 wushudoin joined #gluster
14:09 Slash joined #gluster
14:11 dbruhn joined #gluster
14:14 wushudoin joined #gluster
14:16 T0aD joined #gluster
14:19 davinder joined #gluster
14:20 rpowell joined #gluster
14:22 primechuck joined #gluster
14:24 zaitcev joined #gluster
14:34 sputnik13 joined #gluster
14:34 ccha hmm, with glusterfs-client 3.4.3 there is no dependency on fuse anymore ?
14:36 plarsen joined #gluster
14:41 Pavid7 joined #gluster
14:45 Ark joined #gluster
14:45 dbruhn ccha the client is still fuse
14:48 sputnik13 joined #gluster
14:55 kkeithley GlusterFS Community Meeting in five minutes in #gluster-meeting @freenode
14:56 chirino_m joined #gluster
14:58 rpowell left #gluster
15:01 jclift Gluster Community Meeting time
15:02 lpabon joined #gluster
15:07 jdarcy joined #gluster
15:09 Georgyo joined #gluster
15:16 RobertLaptop joined #gluster
15:16 benjamin_____ joined #gluster
15:16 ircolle joined #gluster
15:18 primechuck joined #gluster
15:19 hchiramm__ joined #gluster
15:22 wushudoin joined #gluster
15:28 andreask joined #gluster
15:38 jayunit100 joined #gluster
15:50 daMaestro joined #gluster
15:52 nage joined #gluster
15:54 plarsen joined #gluster
15:55 primechuck joined #gluster
15:56 in joined #gluster
15:58 jobewan joined #gluster
16:02 primechuck joined #gluster
16:04 an joined #gluster
16:07 Georgyo joined #gluster
16:09 primechu_ joined #gluster
16:13 Mo__ joined #gluster
16:18 hagarth joined #gluster
16:24 T0aD joined #gluster
16:28 in joined #gluster
16:29 sputnik13 joined #gluster
16:31 dbruhn_ joined #gluster
16:35 dbruhn joined #gluster
16:43 semiosis wow!
16:43 semiosis so this ec2 bug prevents connections from privileged source ports
16:43 semiosis i used iptables SNAT to translate priv ports to 50000
16:43 semiosis and it worked!
16:44 semiosis (after allowing insecure ports on gluster of course)
16:44 semiosis this is insane
16:45 zerick joined #gluster
16:46 semiosis well, it worked with a netcat test.  gluster client doesnt seem to like it
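
Reconstructed as a sketch, semiosis's workaround would be something like this (interface, addresses, and port range are assumptions):

    # rewrite privileged client source ports to an unprivileged range on the way out
    iptables -t nat -A POSTROUTING -o eth0 -p tcp --sport 1:1023 -d 10.0.0.5 \
        -j SNAT --to-source 10.0.0.4:50000-59999
    # and let the bricks accept connections from unprivileged ports
    gluster volume set myvol server.allow-insecure on
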
16:49 John_HPC joined #gluster
16:54 semiosis @ports
16:54 glusterbot semiosis: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
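
As a sketch, glusterbot's port list for 3.4 translates into server-side iptables rules like these (one brick assumed; add one 4915x port per extra brick):

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management (+ rdma)
    iptables -A INPUT -p tcp --dport 49152 -j ACCEPT         # first brick
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT   # gluster NFS + NLM
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT          # NFS
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
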
16:57 failshell joined #gluster
17:07 semiosis anyone know if it's possible to force a glusterfs fuse client to use unprivileged source ports?
17:13 purpleidea semiosis: allow insecure you mean?
17:13 purpleidea semiosis: i know there are options to glusterfs -- ... for the client mounting that aren't exposed in the normal mount
17:14 cfeller joined #gluster
17:15 Matthaeus joined #gluster
17:17 Matthaeus joined #gluster
17:19 Pavid7 joined #gluster
17:30 semiosis well, there's that, https://forums.aws.amazon.com/thread.jspa?threadID=149933
17:30 * semiosis not optimistic
17:34 dbruhn joined #gluster
17:40 lalatenduM joined #gluster
17:40 primechuck joined #gluster
17:47 John_HPC semiosis: whats that link? I can't seem to access it
17:48 semiosis John_HPC: it's a link to the public forums for Amazon EC2 where I describe a nasty bug affecting the new generation of EC2 instances
17:48 John_HPC lovely
17:51 shubhendu joined #gluster
17:56 dtrainor joined #gluster
17:58 chirino joined #gluster
17:58 vpshastry joined #gluster
17:59 failshell joined #gluster
18:01 in joined #gluster
18:01 failshel_ joined #gluster
18:03 _dist joined #gluster
18:05 T0aD joined #gluster
18:08 plarsen joined #gluster
18:16 vpshastry left #gluster
18:25 SFLimey joined #gluster
18:29 chirino_m joined #gluster
18:29 T0aD joined #gluster
18:39 _Bryan_ joined #gluster
18:44 shyam joined #gluster
18:48 in_ joined #gluster
18:50 dbruhn_ joined #gluster
18:53 John_HPC left #gluster
19:00 chirino joined #gluster
19:04 SFLimey joined #gluster
19:08 Georgyo joined #gluster
19:09 primechuck joined #gluster
19:13 JoeJulian semiosis: run it as a non-privileged user?
19:14 semiosis couldn't figure out how to do that in <10 minutes of trying
19:15 JoeJulian @lucky how to mount a gluster volume as an unprivileged user
19:15 glusterbot JoeJulian: http://joejulian.name/blog/mounting-a-glusterfs-volume-as-an-unprivileged-user/
19:16 JoeJulian Good glusterbot
19:16 JoeJulian Of course, that does nothing for peer communications.
19:18 wgao_ joined #gluster
19:20 wgao__ joined #gluster
19:27 criticalhammer1 joined #gluster
19:28 criticalhammer1 Hello, anyone know of any good blogs that support mediawiki markup?
19:28 semiosis JoeJulian: peer communications is the easy part :)
19:28 criticalhammer1 I have some test data i'd like to share with the community.
19:28 wgao__ joined #gluster
19:31 wgao__ joined #gluster
19:34 criticalhammer1 left #gluster
19:36 criticalhammer1 joined #gluster
19:37 wgao__ joined #gluster
19:39 wgao__ joined #gluster
19:41 wgao_ joined #gluster
19:43 wgao_ joined #gluster
19:45 JoeJulian criticalhammer: Go ahead an make a page on the gluster wiki.
19:46 criticalhammer http://wiki.gluster.org/ ??
19:47 ndevos semiosis: maybe you can use /proc/sys/net/ipv4/ip_local_reserved_ports or similar to force a fuse client to use certain ports
19:47 semiosis oooohh
19:49 ndevos semiosis: some more hints are in <kernel-source>/Documentation/networking/ip-sysctl.txt
19:50 semiosis any day i get to tune kernel parameters is a good day
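
The knob ndevos points at does this (a sketch; the range is an assumption, and note it only keeps the kernel's automatic port allocator away from those ports, it does not force or prevent explicit binds):

    sysctl -w net.ipv4.ip_local_reserved_ports="49152-49664"
    echo 'net.ipv4.ip_local_reserved_ports = 49152-49664' >> /etc/sysctl.conf   # persist across reboots
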
19:50 * ndevos remembers a support case where a customer used that, but somehow it didnt really work and needed a patch for something (gluster maybe?)
19:52 ndevos seems to be related to bug 762989
19:52 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=762989 low, high, ---, rabhat, CLOSED CURRENTRELEASE, Possibility of GlusterFS port clashes with reserved ports
19:52 semiosis ah yes, people have had problems with imaps (iirc) due to use of upper priv ports
19:52 JoeJulian yeah, that's been addressed (hopefully fixed).
19:53 semiosis sure would be nice if gluster would just drop the whole priv ports thing
19:53 semiosis any chance of that happening?
19:53 wgao__ joined #gluster
19:53 JoeJulian I think that's why they're working in ssl.
19:55 wgao__ joined #gluster
19:57 ndevos yeah, before the unprivileged ports are used by default, some form of ssl/auth is wanted
19:58 Philambdo joined #gluster
20:00 ndevos semiosis: oh, you're lucky, in xlators/protocol/client/src/client.c there is the option client-bind-insecure, I think thats a volume option you could set
20:00 semiosis sweet
20:02 wgao__ joined #gluster
20:05 ndevos hmm, if you figure out how/where to set that option, please let me know... a quick test didnt let me set it...
20:09 ndevos semiosis: ah, probably something like: mount -t glusterfs -o xlator-option=protocol/client,client-bind-insecure=true ...
20:09 ndevos or, maybe protocol client should be replaced by the name of the section in the fuse-$VOL.vol file
20:12 JoeJulian bug 764600
20:12 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=764600 medium, medium, ---, sgowda, CLOSED CURRENTRELEASE, Add xlator-option to support insecure-bind for clients
20:12 kmai007 joined #gluster
20:12 kmai007 can someone tell me what this means?
20:12 kmai007 0-prodstatic-client-5: fdctx not valid
20:13 kmai007 its after my FUSE client disconnects and reconnects to a storage server
20:13 kmai007 http://fpaste.org/93008/97074414/
20:13 glusterbot Title: #93008 Fedora Project Pastebin (at fpaste.org)
20:17 nage joined #gluster
20:18 JoeJulian Looks like the locks table for the reconnected server has existing locks for files that need released. The client tries to release those locks and the server says the file descriptor context, which is how those locks are identified, is no longer valid. Presumably this is because the server process was restarted and no longer has any lock table.
20:20 kmai007 thanks JoeJulian
20:21 kmai007 I had an issue today
20:21 kmai007 i have some web clients mounting FUSE, and I am looking to add a glusterNFS mount
20:21 JoeJulian semiosis, ndevos: I would bet xlator-option=*client*.client-bind-insecure=true
20:22 kmai007 I did that today for 20 clients, and it appears that the FUSE mounts on the clients started to disconnect
20:22 ndevos JoeJulian: this gets accepted, but it still connects from a < 1024 port: mount -t glusterfs -o xlator-option=rpc-transport.client-bind-insecure=true localhost:/VOL /mnt/
20:23 ndevos but that may be because I have set the other insecure options on the volume
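
Collected in one place, the pieces being tried above (all hedged: the thread never confirms a working client-side syntax on 3.4):

    # per volume: let bricks accept connections from ports >= 1024
    gluster volume set myvol server.allow-insecure on
    # per server, in /etc/glusterfs/glusterd.vol (restart glusterd afterwards):
    #   option rpc-auth-allow-insecure on
    # client-side variants from the discussion:
    mount -t glusterfs -o xlator-option=rpc-transport.client-bind-insecure=true server:/myvol /mnt
    mount -t glusterfs -o xlator-option=*client*.client-bind-insecure=true server:/myvol /mnt
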
20:23 kmai007 my volume network.ping-timeout is set to 10 seconds.
20:24 semiosis whyyyyyyyy didnt the devs document how to use in the bug?????
20:24 semiosis s/to use/TO TEST/
20:24 glusterbot What semiosis meant to say was: whyyyyyyyy didnt the devs document how TO TEST in the bug?????
20:24 JoeJulian I was wondering the same thing.
20:24 semiosis srsly wtf
20:24 JoeJulian "I tested..." .. ok, but HOW?!
20:25 dbruhn_ "I looked at a screen and didn't see an error"
20:25 ndevos well, uhm, yeah... thats what you can call support for
20:25 kmai007 could the additional glusterNFS mounts cause stress on the gluster pool ?
20:26 sijis if my volume named app1, has a path in it named /static ... is there a way to mount it like mount -t glusterfs gluster1:app1/static /mnt ? that seems to be failing for me
20:28 daMaestro joined #gluster
20:28 ndevos sijis: no, that is not possible, if you need that you can use nfs
20:28 ndevos sijis: that is a feature request, filed as bug 892808
20:28 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=892808 low, low, ---, aavati, NEW , [FEAT] Bring subdirectory mount option with native client
20:29 sijis ndevos: ahh. ok. that's how we did it with nfs. so i wasn't sure if tha'ts was possible with gluster
20:29 ndevos sijis: it's just not supported by the fuse client (yet)
20:30 * ndevos is a little annoyed with that too, but not enough yet to fix it, its not a trivial thing to code
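
The NFS route ndevos mentions, as a sketch (assumes the volume is app1 and the subdirectory is exported via nfs.export-dir):

    gluster volume set app1 nfs.export-dir /static
    mount -t nfs -o vers=3 gluster1:/app1/static /mnt
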
20:31 sijis what would it take to code :)
20:31 JoeJulian kmai007: There shouldn't be any problem with what you're trying to do. Check server loads? Are the clients losing connection with the same server? different servers? Is your switch overheating and rebooting?
20:31 sijis or how can i make annoying :)
20:32 ndevos sijis: if you're good at writing C, you should be able to get it functioning in 3-4 days - there might be some corner cases
20:33 sijis damn. i don't know C. i know some python
20:33 JoeJulian I bet if you bought RHS and absolutely had to have that feature, it would have someone assigned to it. :D
20:33 sijis but that doesn't help in this case.
20:34 JoeJulian Theoretically you could write a python translator to translate a subdirectory to root I bet.
20:34 ndevos oh, yes, if you open a support case and have a good use-case of why you need it, we can put some pressure on getting such a feature in
20:35 ndevos in theory, that should be possible, JoeJulian
20:35 sijis or a softlink ;)
20:35 nightwalk joined #gluster
20:35 JoeJulian The last time I saw anything about it, the thing you don't want is a bind-mount.
20:36 ndevos I think bind mounts should work now, at least when I triaged a bug about it
20:36 * JoeJulian wonders if doing the subdirectory mount as a root translator might actually be the best way to accomplish it.
20:37 sijis is a translator a hook into glusterfs?
20:37 JoeJulian glusterfs is a series of "microkernel" translators that each do a single thing.
20:37 ndevos JoeJulian: nah, it should be in the fuse-bridge part, libgfapi and other clients dont need it
20:38 JoeJulian sijis: If you look at .vol files in /var/lib/glusterd/vols/$volume_name you can see how the translators are stacked to accomplish things.
20:39 andreask joined #gluster
20:40 lpabon joined #gluster
20:41 adpaolucci joined #gluster
20:41 * sijis looks
20:41 adpaolucci One question for you gentlemen.
20:41 glusterbot New news from resolvedglusterbugs: [Bug 764600] Add xlator-option to support insecure-bind for clients <https://bugzilla.redhat.com/show_bug.cgi?id=764600>
20:41 adpaolucci does gluster run masterless?
20:41 semiosis JoeJulian: whats wrong with bind mounts?  used to work for me
20:41 semiosis adpaolucci: yes gluster is fully distributed
20:41 * semiosis loves saying that
20:41 * adpaolucci begins to drool
20:41 adpaolucci I love it.
20:42 semiosis no MDS, no namenode, no head/master
20:42 semiosis we love it too
20:44 JoeJulian semiosis: iirc, it was a deadlock issue.
20:48 dbruhn joined #gluster
20:49 sijis JoeJulian: that translator must be on the server side
20:54 tdasilva left #gluster
21:08 chirino joined #gluster
21:11 glusterbot New news from resolvedglusterbugs: [Bug 1060259] 3.4.3 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1060259>
21:12 Matthaeus My technophobe wife just texted me:  "Hey, did you know about this SSL vulnerability?"
21:15 semiosis saw two(!) articles about it in the NYT this morning
21:30 diegows joined #gluster
21:42 kmai007 can glusterFS handle a client that has both FUSE volumes and glusterNFS volume mounted ?  So apache servers would be FUSE to read it, and coldfusion servers glusterNFS writing to the same volume ?
21:43 kmai007 would you guys foresee any disasters
21:43 dbruhn test it, but I can't see an issue
21:44 kmai007 yep
21:44 dbruhn You're not trying to manipulate the same files your reading at the same time right?
21:45 kmai007 not concurrently no
21:45 dbruhn then, I can't see any issues with locking or anything
21:45 kmai007 but a report could be generated by coldfusion, that apache would serve up
21:47 dbruhn as long as you're not dealing with a race condition I can't see an issue
21:47 kmai007 if a client disconnects and reconnects to a storage node multiple times in a span of 25 mins... but I know that both servers are up... what else would cause the client to want to disconnect ?
21:48 dbruhn kmai007, back to the first question first
21:48 dbruhn you might want to use this setting on your volumes
21:48 dbruhn performance.write-behind: off
21:49 dbruhn I've had issues where gluster will not be done writing the file but report it written, before the same app accesses it
21:49 kmai007 gotcha
21:49 dbruhn this resolves that if it becomes an issue
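
dbruhn's suggested setting as a one-liner (myvol is a placeholder):

    gluster volume set myvol performance.write-behind off
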
21:49 dbruhn have you checked your network connections, and made sure they are stable?
21:50 fidevo joined #gluster
21:50 kmai007 as far as I know yes, for network, our monitors for hosts never alerted
21:50 JoeJulian kmai007: Are your storage servers also coldfusion servers?
21:51 kmai007 no, i have 8 dedicated storage gluster servers
21:51 kmai007 i'm wondering if the replication is killing me
21:51 kmai007 fudge, i should have took a dump of the gluster servers
21:51 kmai007 while putting out fires
21:52 kmai007 basically our "web environment" degraded, when I started to mount up my glusterNFS coldfusion volume
21:53 kmai007 to my apache servers, 20 of them, but only to test the mount
21:53 kmai007 it wasn't like it was actively being used
21:53 kmai007 but all the volumes are sourced from the same 8 gluster storage servers
21:53 dbruhn Any resource issues causing timeouts when you started connecting like that?
21:53 dbruhn and what OS?
21:54 kmai007 the thing i find most interesting is that for the FUSE-mounted static content volume, the client logs show it was killing off connections due to the network.ping-timeout i have set to 10 sec., when in actuality the network was available
21:54 dbruhn I had some nasty NFS bugs in RH 6.5 and CentOS 6.5 for a bit with kernel crashes and what not
21:54 kmai007 Linux Redhat 2.6.32-431.3.1.el6.x86_64
21:54 dbruhn why did you change it from the 47 seconds?
21:55 Ark joined #gluster
21:55 JoeJulian 42
21:55 dbruhn +1
21:55 dbruhn sorry
21:55 JoeJulian The answer to life, the universe, and everything.
21:55 kmai007 the theory was that if an apache client experienced a "hang" from the mount, 42 seconds is too long, and our proxy governor will not be able to kill off requests to keep up.
21:57 crashmag joined #gluster
21:58 kmai007 JoeJulian: is network.ping-timeout solely on a ping timer?
21:59 kmai007 if it blips for 1 sec it does nothing, but if it is out for 43 seconds it moves on to the next storage?
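
For reference, the option under discussion, with its 42-second default (myvol is a placeholder):

    gluster volume set myvol network.ping-timeout 42
    gluster volume info myvol | grep ping-timeout    # shown only if it was changed from the default
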
22:00 semiosis wow, coldfusion?  haven't heard of anyone using that since the 90s
22:00 dbruhn last project I worked on that used cold fusion was because I was patching for y2k...
22:01 dbruhn I have a buddy who works on a huge analysis system that has a bunch of cold fusion stuff, I feel bad for him.
22:07 primechu_ joined #gluster
22:12 primechuck joined #gluster
22:13 jag3773 joined #gluster
22:13 hchiramm__ joined #gluster
22:14 jag3773 left #gluster
22:25 primechu_ joined #gluster
22:28 primechuck joined #gluster
22:31 primechu_ joined #gluster
22:58 MacWinner joined #gluster
23:15 jbrooks left #gluster
23:15 sputnik13 joined #gluster
23:22 primechuck joined #gluster
23:23 gmcwhistler joined #gluster
23:33 chirino joined #gluster
23:36 badone joined #gluster
23:38 badone joined #gluster
23:40 gdubreui joined #gluster
23:50 badone joined #gluster
