IRC log for #gluster, 2015-10-02


All times shown according to UTC.

Time Nick Message
00:19 gildub joined #gluster
00:34 dlambrig joined #gluster
00:40 nangthang joined #gluster
01:02 EinstCrazy joined #gluster
01:20 dlambrig joined #gluster
01:25 dlambrig joined #gluster
01:31 Lee1092 joined #gluster
01:38 mjrosenb gah, I need to check irc more often.
01:40 mjrosenb JoeJulian: bummer; #gluster-dev seems to be ignoring me
01:46 EinstCrazy joined #gluster
01:47 beeradb_ joined #gluster
02:01 nangthang joined #gluster
03:30 maveric_amitc_ joined #gluster
03:34 julim joined #gluster
03:42 TheSeven joined #gluster
03:43 gildub joined #gluster
03:53 ramteid joined #gluster
04:13 spalai left #gluster
04:44 raghug joined #gluster
04:58 arcolife joined #gluster
05:29 arcolife joined #gluster
05:31 RameshN_ joined #gluster
05:32 DV joined #gluster
05:38 hchiramm joined #gluster
05:50 vimal joined #gluster
05:57 arcolife joined #gluster
05:58 TvL2386 joined #gluster
06:15 mhulsman joined #gluster
06:21 jtux joined #gluster
06:24 arcolife joined #gluster
06:25 [Enrico] joined #gluster
06:25 LebedevRI joined #gluster
06:32 rgustafs joined #gluster
06:40 jwaibel joined #gluster
07:03 [Enrico] joined #gluster
07:11 arcolife joined #gluster
07:23 DV joined #gluster
07:38 [Enrico] joined #gluster
07:44 haomaiwa_ joined #gluster
07:57 ctria joined #gluster
08:01 haomaiwang joined #gluster
08:35 So4ring joined #gluster
08:38 mator left #gluster
08:45 jvandewege joined #gluster
09:01 haomaiwa_ joined #gluster
09:04 nishanth joined #gluster
09:12 kovsheni_ joined #gluster
09:15 nixpanic_ joined #gluster
09:15 natgeorg joined #gluster
09:15 nixpanic_ joined #gluster
09:15 m0zes_ joined #gluster
09:15 the-me_ joined #gluster
09:15 obnox_ joined #gluster
09:16 suliba_ joined #gluster
09:16 lbarfiel1 joined #gluster
09:17 __NiC joined #gluster
09:17 ndevos_ joined #gluster
09:17 ndevos_ joined #gluster
09:17 julim joined #gluster
09:18 ndevos_ joined #gluster
09:18 ndevos_ joined #gluster
09:18 morse joined #gluster
09:19 [o__o] joined #gluster
09:20 hflai joined #gluster
09:26 squaly joined #gluster
09:32 tdasilva joined #gluster
09:39 stickyboy joined #gluster
09:41 [Enrico] joined #gluster
09:55 muneerse2 joined #gluster
10:01 haomaiwa_ joined #gluster
10:09 jamesc joined #gluster
10:13 pcaruana joined #gluster
10:16 marlinc joined #gluster
10:28 morse_ joined #gluster
10:49 Philambdo joined #gluster
10:59 jwaibel joined #gluster
11:01 haomaiwa_ joined #gluster
11:18 overclk joined #gluster
11:44 julim joined #gluster
12:01 haomaiwa_ joined #gluster
12:11 17SADP04P joined #gluster
12:21 morse joined #gluster
12:25 xavih joined #gluster
12:36 unclemarc joined #gluster
12:48 dlambrig joined #gluster
12:52 ninkotech joined #gluster
12:52 ninkotech_ joined #gluster
13:01 unclemarc joined #gluster
13:11 Raide joined #gluster
13:12 dblack joined #gluster
13:12 rmgroth joined #gluster
13:13 shyam joined #gluster
13:15 nishanth joined #gluster
13:20 TvL2386 joined #gluster
13:21 rmgroth Hello, I am on 3.7.4, trying to do "gluster v set gv0 cluster.lookup-optimize on" and getting "volume set: failed: One or more connected clients cannot support the feature being set". I don't have the volume mounted anywhere.
13:22 Philambdo joined #gluster
13:22 luis_silva joined #gluster
13:25 haomaiwang joined #gluster
13:25 dgandhi joined #gluster
13:28 harold joined #gluster
13:35 mhulsman joined #gluster
13:38 haomaiwa_ joined #gluster
13:38 unclemarc joined #gluster
13:38 ajneil are all your servers 3.7.4?
13:44 haomaiwa_ joined #gluster
13:48 rmgroth yes they are
13:48 Slashman joined #gluster
13:48 rmgroth it is only 2 servers - glusterfsd -V and glusterfs -V on both show 3.7.4
13:49 ajneil what does gluster volume status volname clients show?
13:53 rmgroth it shows 4 clients, two of which are the servers themselves - checking on the 3rd ip shown now, maybe it's my culprit
13:53 rmgroth was not aware of the "clients show" - thanks for that
13:54 jvandewege joined #gluster
13:54 ajneil yeah, it's not as useful as it could be; it would help if it actually reported the fqdn and client version, though I have submitted an RFE for that
13:55 ajneil I have been bitten by this too and it's tough to track down when you have 60 clients or so
14:01 haomaiwang joined #gluster
14:02 rmgroth ajneil: the 3rd client was in fact a 3.6.3, so thank you
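
The diagnostic flow from the exchange above, condensed into a minimal sketch (the volume name gv0 comes from rmgroth's report; all commands are standard gluster 3.7 CLI):

    # list the IP:port of every client currently connected to the volume
    gluster volume status gv0 clients
    # on each client found, confirm the installed version matches the servers
    glusterfs -V
    # once every client is on 3.7.x, the option should apply cleanly
    gluster volume set gv0 cluster.lookup-optimize on
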
14:04 haomaiwang joined #gluster
14:07 atrius joined #gluster
14:10 rwheeler joined #gluster
14:11 bowhunter joined #gluster
14:12 kenansulayman joined #gluster
14:12 pwu__ joined #gluster
14:12 haomaiwang joined #gluster
14:16 atrius joined #gluster
14:18 coredump joined #gluster
14:18 pwu__ Hey everybody, I have a big issue with Gluster. I tried to add 2 replicas to my 2-node distributed volume, but something went wrong, I don't know why. I tried to rebalance, but that slowed down our application workflow (ls on 100 files took 3 to 5 minutes), so I tried to remove bricks, but the process failed. I tried to rebalance again and it's very, very slow. Currently, this message pops into the logs every 4 seconds:
14:18 pwu__ [2015-10-02 14:17:30.983035] W [dict.c:1055:data_to_str] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/rpc-transport/socket.so(+0x4f31) [0x7f06b8b24f31] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0x4a) [0x7f06b8b2b48a] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/rpc-transport/socket.so(client_fill_address_family+0x20d) [0x7f06b8b2b10d]))) 0-dict: data is NULL
14:18 ctria joined #gluster
14:18 glusterbot pwu__: ('s karma is now -107
14:18 glusterbot pwu__: ('s karma is now -108
14:18 glusterbot pwu__: ('s karma is now -109
14:18 neofob joined #gluster
14:18 pwu__ [2015-10-02 14:17:30.983081] E [name.c:147:client_fill_address_family] 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
14:19 pwu__ Does that mean anything to somebody?
14:19 pwu__ thanks for any help :D
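
The error pwu__ pastes says glusterfs 3.5.2 could not determine transport.address-family. A hedged sketch of the kind of setting the message is asking for, in /etc/glusterfs/glusterd.vol; whether this actually resolves pwu__'s rebalance slowness is not confirmed anywhere in this log:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        # the value the error message says it could not guess a default for
        option transport.address-family inet
    end-volume
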
14:20 kanagaraj joined #gluster
14:22 mpietersen joined #gluster
14:23 shubhendu__ joined #gluster
14:24 _maserati joined #gluster
14:24 aravindavk joined #gluster
14:26 _Bryan_ joined #gluster
14:27 plarsen joined #gluster
14:29 kanagaraj joined #gluster
14:32 squizzi joined #gluster
14:35 haomaiwa_ joined #gluster
14:36 spcmastertim joined #gluster
14:39 squizzi joined #gluster
14:44 firemanxbr joined #gluster
14:45 tessier joined #gluster
14:47 kanagaraj_ joined #gluster
14:49 meteormanaged joined #gluster
14:49 orksnork Good morning ;)
14:50 orksnork Trying to work out the last little bits of my hardware set up before placing some orders today
14:57 luis_silva Morning all, anyone know off the top of their head the max latency Gluster replication can tolerate?
14:57 luis_silva it's just for a basic storage system.
15:01 plarsen joined #gluster
15:02 kanagaraj__ joined #gluster
15:02 RedW joined #gluster
15:03 luis_silva @pwu__ I'm not a pro at this, but you may want to check your heal stats: gluster volume heal <vol_name> info. Especially make sure there's no split-brain, with gluster volume heal <vol_name> info split-brain.
15:03 ChrisNBlum joined #gluster
15:04 GB21 joined #gluster
15:04 GB21_ joined #gluster
15:08 skylar joined #gluster
15:09 coredump|br joined #gluster
15:09 wushudoin joined #gluster
15:10 kovshenin joined #gluster
15:13 JoeJulian luis_silva: per read operation, first to respond should be the shortest possible. If a file doesn't exist, <= max latency * 2. Write operations should be just over max latency. All latencies are measured from the client.
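
A worked instance of JoeJulian's bounds, assuming a hypothetical 10 ms maximum client-to-server latency:

    read of an existing file:  bounded by the fastest replica's response, so possibly well under 10 ms
    lookup of a missing file:  <= 2 x 10 ms = 20 ms
    write:                     just over 10 ms (completion waits on the slowest replica)
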
15:25 skoduri joined #gluster
15:29 coredump joined #gluster
15:33 Trefex joined #gluster
15:36 stickyboy joined #gluster
15:38 coredump|br joined #gluster
15:40 luis_silva Sounds good, Thx Joe
15:47 cholcombe joined #gluster
15:50 primehaxor joined #gluster
16:04 bennyturns joined #gluster
16:14 gnudna joined #gluster
16:18 shyam joined #gluster
16:25 Rapture joined #gluster
16:26 julim joined #gluster
16:28 So4ring joined #gluster
16:30 F2Knight joined #gluster
16:48 maveric_amitc_ joined #gluster
16:54 julim joined #gluster
16:56 kotreshhr joined #gluster
16:59 So4ring joined #gluster
17:06 kotreshhr1 joined #gluster
17:06 kotreshhr1 left #gluster
17:11 maveric_amitc_ joined #gluster
17:16 bluenemo joined #gluster
17:18 JoeJulian mjrosenb: As far as the channel itself goes, your posts did show up. I can see them. As for people, they're not accepting my configuration changes. I think they have a bug in their firmware.
17:19 bennyturns joined #gluster
17:30 coredump joined #gluster
17:32 jobewan joined #gluster
17:47 ramky joined #gluster
17:48 kotreshhr joined #gluster
17:51 coredump|br joined #gluster
17:54 beeradb_ joined #gluster
17:55 maserati joined #gluster
18:30 julim joined #gluster
18:47 kotreshhr joined #gluster
18:47 bennyturns joined #gluster
18:50 kotreshhr left #gluster
18:54 ctria joined #gluster
18:58 primehaxor joined #gluster
19:05 julim joined #gluster
19:11 julim joined #gluster
19:16 julim_ joined #gluster
19:27 bluenemo I think I found a nifty new use case for glusterfs: HA OpenVPN. I'm thinking about sharing /etc/openvpn with multiple server configs via glusterfs geo-replication. There will be one master node that "edits" /etc/openvpn and then geo-replicates.
19:29 JoeJulian seems reasonable.
19:32 bluenemo geo-replication is one-way, right? There is no way for the clients to write something?
19:32 bluenemo I thought about just using rsync at first, since it rarely changes, but then I don't want to be running rsync against each of 200 vpn servers..
19:39 JoeJulian bluenemo: right, one way.
19:39 JoeJulian And it's encrypted so there's that benefit as well.
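
A minimal sketch of the one-way geo-replication session being discussed, using the standard gluster CLI (the volume name vpnconf and the slave host backupnode are hypothetical):

    # one-time setup, run on the master cluster
    gluster volume geo-replication vpnconf backupnode::vpnconf create push-pem
    gluster volume geo-replication vpnconf backupnode::vpnconf start
    # verify the session is syncing
    gluster volume geo-replication vpnconf backupnode::vpnconf status
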
19:50 csim I tend to use configuration management for that, so it seems a bit uncommon to do that
19:51 JoeJulian As do I, but I also love out-of-the-box ideas.
19:54 csim I wonder now if you could place the whole /usr on glusterfs
19:54 JoeJulian Puppetlabs has (or at least had) "Bad Idea Thursday" meetings where the object was to come up with the worst possible solution to a problem. It got people thinking out of the box and often came up with some really good "Yeah, that's awful but if you did this with it..." moments.
20:19 Slashman joined #gluster
20:26 bluenemo lol
20:26 bluenemo well, the problem is that I want to create server/client certificates using configuration management, and I don't want to upload those files from the client to the master just to then download them to the next client.
20:27 bluenemo there is also no reason why all vpn servers shouldn't have all configs, as this way they can just take over
20:27 bluenemo which server handles which vpn at the moment is then managed by config management, using something like supervisord or systemd
20:31 maserati joined #gluster
20:31 brandon joined #gluster
20:32 _maserati_ joined #gluster
20:33 maserati joined #gluster
20:35 mhulsman joined #gluster
20:36 mhulsman joined #gluster
20:48 julim joined #gluster
20:51 gnudna left #gluster
20:53 julim joined #gluster
20:58 side_control joined #gluster
21:27 Rapture joined #gluster
21:28 timotheus1_ joined #gluster
21:30 timotheus1__ joined #gluster
21:34 Slashman joined #gluster
21:38 stickyboy joined #gluster
21:39 timotheus1 joined #gluster
21:48 ayma joined #gluster
21:49 ayma having trouble with nfs-ganesha tied to glusterfs
21:51 ayma I'm trying to mount the nfs-ganesha export but keep getting that the mount failed, reason given by server: No such file or directory
21:52 ayma so I'm wondering, for the path and pseudo path sections in the conf file, do you put the brick path or the gluster volume path?
21:54 ayma when I say gluster volume path I mean the gluster volume name that comes from gluster volume status
21:54 JoeJulian The volume name
21:57 JoeJulian ayma: Here's a config for a home volume I have. The volume name is "public". http://fpaste.org/274214/38230221/
21:57 glusterbot Title: #274214 Fedora Project Pastebin (at fpaste.org)
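
That paste has since expired. A minimal sketch of what a FSAL_GLUSTER export of that shape looks like in ganesha.conf, with values assumed to match the "public" volume JoeJulian describes:

    EXPORT {
        Export_Id = 1;
        Path = "/public";           # the gluster volume name, not a brick path
        Pseudo = "/public";         # where it appears in the NFSv4 pseudo-filesystem
        Access_Type = RW;
        FSAL {
            Name = GLUSTER;
            Hostname = "localhost"; # glusterd runs on the same host as ganesha
            Volume = "public";
        }
    }
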
22:00 ayma JoeJulian: and are you just doing mount -t nfs -o vers=4 192.168.2.10:/public /mnt?
22:02 JoeJulian Well, I use dns, not IP, but yes.
22:03 amye hagarth, published on that post. Apologies on the delay, today's a travel day for me.
22:06 ayma JoeJulian: and 192.168.2.10 is also where the glusterfs is installed?
22:06 JoeJulian It is. That's why I specify "localhost".
22:06 ayma JoeJulian: is there anything I need to download on the client side to do the mount?
22:06 JoeJulian Nope
22:07 JoeJulian You have nfs disabled on the volume, kernel nfs server is not running...
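
The two pre-flight checks JoeJulian lists, as a sketch (the volume name gvol0 comes from ayma's later messages; the systemd unit name assumes CentOS 7):

    # gluster's built-in NFS server must be off so it doesn't hold port 2049
    gluster volume set gvol0 nfs.disable on
    # the kernel NFS server must not be running either
    systemctl status nfs-server    # should report inactive (dead)
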
22:12 ayma JoeJulian: here's what i have for my ganesha.conf and the error i keep getting with mount
22:12 ayma https://paste.fedoraproject.org/274224/38239011/
22:12 glusterbot Title: #274224 Fedora Project Pastebin (at paste.fedoraproject.org)
22:14 mhulsman joined #gluster
22:14 JoeJulian showmount doesn't work with ganesha
22:14 JoeJulian So the fact that you see it perplexes me.
22:15 maveric_amitc_ joined #gluster
22:16 ayma i verified that the nfsd service is dead
22:16 neofob left #gluster
22:16 ayma running ganesha 2.2 and glusterfs 3.7
22:16 ayma on centos7...not sure if that matters
22:17 JoeJulian What about "netstat -tlnp | grep 2049"
22:19 ayma tcp6       0      0 :::2049                 :::*                    LISTEN      20139/ganesha.nfsd
22:20 ayma one thing I do notice: with gluster's built-in nfs, showmount -e shows "/gvol0 *", whereas with ganesha it shows "/gvol0 (everyone)"
22:23 JoeJulian Yeah, that's weird. It should be empty.
22:32 JoeJulian Maybe it's rpcbind...
22:33 JoeJulian In fact... I'm almost sure of it. "ps ax| grep rpcbind" I bet shows "/usr/bin/rpcbind -w"
22:33 ayma i verified rpcbind was running
22:33 ayma 12132 ?        Ss     0:00 /sbin/rpcbind -w
22:33 JoeJulian If so, kill it and run it without the "-w"
22:34 JoeJulian bug 1181779
22:34 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1181779 unspecified, unspecified, rc, steved, VERIFIED , rpcbind prevents Gluster/NFS from registering itself after a restart/reboot
22:34 julim joined #gluster
22:34 JoeJulian I'm betting the same is true for ganesha.
22:34 Raide joined #gluster
22:35 ayma now getting clnt_create: RPC: Program not registered for showmount
22:35 ayma ganesha.nfsd -f /etc/ganesha/ganesha.conf -L ganesha.log -N NIV_DEBUG
22:35 ayma that's what i had
22:35 ayma hmm still same error
22:35 JoeJulian restart nfs-ganesha to get it to register
22:36 ayma tcp6       0      0 :::2049                 :::*                    LISTEN      20255/ganesha.nfsd
22:37 ayma yep restarted it
22:37 ayma pkill ganesha and then ganesha.nfsd -f /etc/ganesha/ganesha.conf -L ganesha.log -N NIV_DEBUG
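
The workaround sequence from this exchange, consolidated (per bug 1181779, rpcbind's -w warm-start flag preserves stale registrations across restarts):

    pkill rpcbind
    rpcbind                  # restart WITHOUT -w so old registrations are dropped
    pkill ganesha
    ganesha.nfsd -f /etc/ganesha/ganesha.conf -L ganesha.log -N NIV_DEBUG
    showmount -e localhost   # re-check the export list
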
22:37 JoeJulian So now does showmount return nothing?
22:38 ayma returns: clnt_create: RPC: Program not registered
22:39 JoeJulian :/ I don't get that, I just get
22:40 JoeJulian $ showmount -e
22:40 JoeJulian Export list for strabo:
22:41 JoeJulian Look for differences in rpcinfo maybe? Here's mine: http://termbin.com/gbzc
22:42 ayma how did u generate that output?
22:43 JoeJulian rpcinfo | nc termbin.com 9999
22:44 ayma https://paste.fedoraproject.org/274242/43825864/
22:44 ayma that's mine
22:44 glusterbot Title: #274242 Fedora Project Pastebin (at paste.fedoraproject.org)
22:45 ayma also it seems now showmount -e is back to showing /gvol0 (everyone)
22:46 JoeJulian What timezone are you in?
22:46 ayma rpcinfo -p 172.192.0.50
22:46 ayma is the cmd I ran... I'm at 22:46:30 UTC 2015
22:46 ayma in the US, it's Central time
22:47 ayma if it's too late for you, do you mind if we reconvene on Monday? just pick a time
22:47 JoeJulian I'd like to point you at ndevos. He's in Europe so if you could ping him early in the morning you should have luck.
22:47 ayma sure thanks
22:47 JoeJulian He knows the nfs tools inside and out.
22:48 ayma yep, this is a really odd one
22:48 ayma thanks for the ref
22:48 JoeJulian You're welcome.
23:10 mjrosenb I love it when the devs on a project are unhelpful :-/
23:11 julim joined #gluster
23:11 JoeJulian Who?
23:12 JoeJulian Which one?
23:13 JoeJulian I hope you don't mean me. I'm not a dev.
23:13 mjrosenb JoeJulian: I do not mean you.  You have been the most helpful person by far.
23:14 JoeJulian I've had a lot of good interaction with both the gluster devs and the ceph devs.
23:15 JoeJulian With ceph, though, the testing isn't as thorough and they seem to fear their own codebase.
23:17 JoeJulian Probably the biggest problem for this hemisphere in dealing with gluster devs is that they're (for the most part) in Bangalore.
23:18 csim JoeJulian: I am surprised, because given the respective sizes of the gluster and ceph CI, I would have supposed they test ceph a lot more
23:18 julim joined #gluster
23:22 JoeJulian Maybe I've just been touching the bits that don't have tests written.
23:23 JoeJulian Probably just perspective.
23:23 Guest70345 joined #gluster
23:23 csim maybe they do different tests too
23:24 julim joined #gluster
23:24 plarsen joined #gluster
23:45 JoeJulian Ah, crap. I can't use nfs-ganesha at work like I'd hoped. It's totally in violation of copyrights and has no open source license.
23:47 csim mhh ?
23:48 JoeJulian https://github.com/nfs-ganesha/nfs-ganesha/issues/69
23:48 glusterbot Title: non-free file(s) / please clarify license · Issue #69 · nfs-ganesha/nfs-ganesha · GitHub (at github.com)
23:48 JoeJulian There's also a bsd-4 header in there.
23:49 csim yeah, quite bad
23:49 JoeJulian Also, unless I'm missing something somewhere, there's no license included at all.
23:49 csim there is one
23:50 csim https://github.com/nfs-ganesha/nfs-ganesha/blob/master/src/LICENSE.txt
23:50 glusterbot Title: nfs-ganesha/LICENSE.txt at master · nfs-ganesha/nfs-ganesha · GitHub (at github.com)
23:50 JoeJulian Ah, ok. Found it.
23:50 JoeJulian Which is, of course, invalid if it's using proprietary code.
23:52 csim guess I could escalate to legal, as we are shipping it in rhel
23:52 csim mhh, in epel only
23:54 julim joined #gluster
23:58 csim https://bugzilla.redhat.com/show_bug.cgi?id=1268505
23:58 glusterbot Bug 1268505: high, unspecified, ---, kkeithle, NEW , Licensing issue in src/FSAL/FSAL_PT/fsal_attrs.c
