IRC log for #gluster, 2016-02-19


All times shown according to UTC.

Time Nick Message
00:01 johnmilton joined #gluster
00:05 amye joined #gluster
00:35 gildub left #gluster
01:08 social joined #gluster
01:10 chirino joined #gluster
01:12 johnmilton joined #gluster
01:14 nangthang joined #gluster
01:28 EinstCrazy joined #gluster
01:35 EinstCrazy joined #gluster
01:48 baojg joined #gluster
02:05 nangthang joined #gluster
02:05 harish joined #gluster
02:16 haomaiwa_ joined #gluster
02:23 plarsen joined #gluster
02:24 nishanth joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:48 haomaiwang joined #gluster
03:01 haomaiwang joined #gluster
03:08 unlaudable joined #gluster
03:18 Lee1092 joined #gluster
03:43 ahino joined #gluster
03:53 kovshenin joined #gluster
03:58 jiffin joined #gluster
03:58 nbalacha joined #gluster
04:01 haomaiwa_ joined #gluster
04:05 itisravi joined #gluster
04:05 sakshi joined #gluster
04:08 atinm joined #gluster
04:12 alghost joined #gluster
04:13 RameshN joined #gluster
04:14 kanagaraj joined #gluster
04:17 gem joined #gluster
04:26 poornimag joined #gluster
04:39 ppai joined #gluster
04:41 DV joined #gluster
04:44 DV__ joined #gluster
04:52 pppp joined #gluster
04:53 nehar joined #gluster
04:54 jbrooks joined #gluster
04:59 sakshi joined #gluster
04:59 Wizek joined #gluster
04:59 sakshi joined #gluster
05:01 haomaiwa_ joined #gluster
05:04 ndarshan joined #gluster
05:08 hgowtham joined #gluster
05:15 Manikandan joined #gluster
05:29 jiffin joined #gluster
05:32 Apeksha joined #gluster
05:40 overclk joined #gluster
05:45 DV joined #gluster
05:45 baojg joined #gluster
05:51 kdhananjay joined #gluster
05:53 gowtham joined #gluster
05:57 ovaistariq joined #gluster
05:57 ovaistar_ joined #gluster
06:01 haomaiwang joined #gluster
06:01 rafi joined #gluster
06:03 Bhaskarakiran joined #gluster
06:11 atalur joined #gluster
06:16 karthikfff joined #gluster
06:22 dmnchild1 left #gluster
06:24 arcolife joined #gluster
06:27 yoavz joined #gluster
06:28 ashiq_ joined #gluster
06:29 nangthang joined #gluster
06:34 karnan joined #gluster
06:36 skoduri joined #gluster
06:45 nishanth joined #gluster
06:48 anil joined #gluster
06:51 merp_ joined #gluster
06:55 ekuric joined #gluster
06:56 Wizek joined #gluster
06:57 EinstCra_ joined #gluster
07:01 64MAABNRU joined #gluster
07:04 badone joined #gluster
07:07 BuffaloCN left #gluster
07:18 mobaer joined #gluster
07:35 Ulrar joined #gluster
07:36 karnan joined #gluster
07:47 [diablo] joined #gluster
08:01 haomaiwa_ joined #gluster
08:09 dariol joined #gluster
08:13 robb_nl joined #gluster
08:15 ovaistariq joined #gluster
08:20 [Enrico] joined #gluster
08:20 aravindavk joined #gluster
08:26 fsimonce joined #gluster
08:39 jri joined #gluster
08:43 skoduri joined #gluster
08:48 ahino1 joined #gluster
09:01 skoduri joined #gluster
09:01 ivan_rossi joined #gluster
09:01 haomaiwa_ joined #gluster
09:06 Bhaskarakiran joined #gluster
09:11 itisravi joined #gluster
09:20 ctria joined #gluster
09:24 mbukatov joined #gluster
09:25 mhulsman joined #gluster
09:29 Slashman joined #gluster
09:44 7YUAABNX6 joined #gluster
09:44 gem joined #gluster
09:45 baojg joined #gluster
09:50 mobaer joined #gluster
09:55 petan joined #gluster
09:58 mobaer1 joined #gluster
10:01 haomaiwa_ joined #gluster
10:14 jvandewege_ joined #gluster
10:16 ovaistariq joined #gluster
10:25 sakshi joined #gluster
10:25 mobaer joined #gluster
10:33 ivan_rossi left #gluster
10:44 skoduri joined #gluster
10:48 harish joined #gluster
10:53 mobaer joined #gluster
10:58 baojg joined #gluster
11:01 haomaiwa_ joined #gluster
11:04 matclayton joined #gluster
11:22 mobaer joined #gluster
11:30 matclayton joined #gluster
11:42 sakshi joined #gluster
11:52 nehar joined #gluster
11:55 skoduri joined #gluster
11:57 itisravi joined #gluster
12:02 Bhaskarakiran joined #gluster
12:07 fale joined #gluster
12:14 matclayton_ joined #gluster
12:17 ovaistariq joined #gluster
12:27 EinstCrazy joined #gluster
12:46 johnmilton joined #gluster
13:00 skoduri joined #gluster
13:00 kanagaraj joined #gluster
13:06 Wizek joined #gluster
13:07 theron joined #gluster
13:10 karnan joined #gluster
13:18 Leildin joined #gluster
13:31 anti[Enrico] joined #gluster
13:36 Leildin joined #gluster
13:43 ashiq_ joined #gluster
13:46 plarsen joined #gluster
13:48 unclemarc joined #gluster
13:48 skoduri joined #gluster
13:59 chirino joined #gluster
14:04 mobaer joined #gluster
14:07 [diablo] Afternoon #gluster .... guys, we're using CTDB for the SMB shares, and are about to do the NFS shares (HA)... we see that ganesha is used in the docs (RHGS), but we also see that CTDB can handle HA NFS... anyone here using the CTDB for NFS?
14:07 baojg joined #gluster
14:13 harish_ joined #gluster
14:16 kdhananjay joined #gluster
14:26 ndk joined #gluster
14:28 unclemarc joined #gluster
14:32 coredump joined #gluster
14:34 nehar joined #gluster
14:41 portante joined #gluster
14:43 bennyturns joined #gluster
14:44 bennyturns joined #gluster
14:49 skylar joined #gluster
14:52 hamiller joined #gluster
14:52 [Enrico] joined #gluster
15:01 robb_nl joined #gluster
15:06 rwheeler joined #gluster
15:07 nbalacha joined #gluster
15:28 theron joined #gluster
15:29 mhulsman1 joined #gluster
15:30 ahino joined #gluster
15:34 robb_nl joined #gluster
15:35 jri_ joined #gluster
15:42 farhoriz_ joined #gluster
15:52 jwd joined #gluster
15:53 jwaibel joined #gluster
15:54 jwd joined #gluster
15:57 JoeJulian [diablo]: I haven't heard of anybody in here doing that, but that's not to say it hasn't been done.
16:02 kovshenin joined #gluster
16:03 haomaiwang joined #gluster
16:05 ovaistariq joined #gluster
16:06 [diablo] hi JoeJulian ok thanks
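
For reference, the CTDB-managed NFS setup [diablo] asks about is driven from CTDB's sysconfig file; a minimal sketch, assuming a RHEL-style layout of that era, with the address, netmask, and interface below as placeholders:

    # /etc/sysconfig/ctdb
    CTDB_MANAGES_NFS=yes
    CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
    CTDB_NODES=/etc/ctdb/nodes

    # /etc/ctdb/public_addresses -- one floating IP per line
    192.0.2.100/24 eth0
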
16:14 shaunm joined #gluster
16:19 unclemarc joined #gluster
16:20 johnmark joined #gluster
16:21 JoeJulian Hiya johnmark! How's it goin?
16:23 kovshenin joined #gluster
16:30 fgd joined #gluster
16:33 JoeJulian Hey, hagarth, red/black gluster server updates wouldn't work, would it? Consider 3.7.8 running in a container. 3.7.9 comes out, I clone my 3.7.8 image, update the gluster version and start it. Once it's all up and listening, I migrate the IP to the new container. Any chance of avoiding an inconsistency on an open file and thus avoiding self-heal?
16:46 mhulsman joined #gluster
16:47 mhulsman1 joined #gluster
16:49 mhulsman joined #gluster
16:52 hagarth JoeJulian: heading out now. will get back to you after I am back.
16:55 matclayton joined #gluster
16:59 rcampbel3 joined #gluster
17:04 decay joined #gluster
17:17 ovaistariq joined #gluster
17:17 Wizek joined #gluster
17:20 calavera joined #gluster
17:29 ashiq_ joined #gluster
17:31 jiffin joined #gluster
17:33 cvstealth We are currently running 3.7.2 in our production environment and have run into an issue: starting a new volume causes some of the glusterfsd processes that are servicing NFS requests in the cluster to stop processing traffic. Ultimately the end clients get NFS timeouts and we have to restart the glusterd process for everything to function again. Looking in the log files gives entries like
17:33 cvstealth "readv on /var/run/gluster/ff3086e5aa7ec07280132815af9b1bea.socket failed" / "De-registered MOUNTV3 successfully" / "De-registered NFSV3 successfully". I don't really see any stacktraces or similar messages in the logs that would indicate the underlying issue, any thoughts on how to troubleshoot this a bit further?
17:36 jri joined #gluster
17:37 bennyturns joined #gluster
17:44 plarsen joined #gluster
17:48 kenansulayman joined #gluster
17:48 jiffin cvstealth: Let me guess, ur glusterfsd (brick process) got hung while handling requests from the NFS server, which leads to timeout errors for the nfs clients
17:49 matclayton joined #gluster
17:49 hagarth joined #gluster
17:52 haomaiwa_ joined #gluster
17:53 B21956 joined #gluster
17:54 squizzi joined #gluster
17:55 caveat- left #gluster
17:57 cvstealth jiffin: you're correct, any known bugs of that type? I looked through the release notes for any bugs that would be fixed in newer version but nothing really stuck out at me.
17:58 cvstealth jiffin: when stracing the main glusterd process we just repeatedly see the socket connection failures
18:00 jiffin cvstealth: volume configuration?
18:03 merp_ joined #gluster
18:04 cvstealth jiffin: we have maybe 15 volumes spread across 6 hosts, I'd say all but 2 volumes are 2 bricks with replica 1 set. The other 2 are 4 bricks with a replica of 1.
18:05 jiffin cvstealth: if the connection between nfs-client -> nfs-server -> glusterfsd is established, then glusterd has a less important role
18:05 cvstealth Sorry that should be replica 2 not replica 1
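
In gluster terms, the two shapes cvstealth describes — a 2-brick replica-2 volume and a 4-brick replica-2 (distributed-replicate) volume — would be created roughly like this; hostnames and brick paths are placeholders:

    gluster volume create vol1 replica 2 host1:/bricks/b1 host2:/bricks/b1
    gluster volume create vol2 replica 2 host1:/bricks/b2 host2:/bricks/b2 host3:/bricks/b2 host4:/bricks/b2
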
18:06 jiffin cvstealth: you didn't get any hint from the logs? (/var/log/glusterfs/)
18:06 cvstealth jiffin: I looked at most of the logs in there and also looked in the individual brick logs
18:07 jiffin k fine
18:07 cvstealth jiffin: The incident we had today impacted peers that the volume wasn't even being started on.
18:08 jiffin cvstealth: best way to debug ur issue(hang) is using statedump and tcpdump
18:11 cvstealth jiffin: thanks, we'll do the statedump next time it gets into this weird state
18:11 jiffin cvstealth: it would be better if you drop a mail to the gluster-user and gluster-dev ML with ur logs
18:11 robb_nl joined #gluster
18:13 jiffin cvstealth: since most of the contributors are from India(IST), right now they are offline
18:14 cvstealth jiffin: noted, will clean up the logs and get something posted in a few hours... thx for the pointers.
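
A sketch of the statedump/tcpdump approach jiffin suggests, assuming default paths; VOLNAME is a placeholder:

    # dump the internal state of the brick processes, and of the gluster NFS server
    gluster volume statedump VOLNAME
    gluster volume statedump VOLNAME nfs
    # dumps land under /var/run/gluster by default (server.statedump-path)
    ls /var/run/gluster/*.dump.*

    # capture NFS traffic on the server while the hang reproduces
    tcpdump -i any -s 0 -w /tmp/gnfs.pcap port 2049 or port 38465
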
18:25 ironhalik joined #gluster
18:25 squeakyneb joined #gluster
18:25 delhage joined #gluster
18:25 Bardack joined #gluster
18:25 Champi joined #gluster
18:25 Ramereth joined #gluster
18:25 msvbhat joined #gluster
18:26 doekia joined #gluster
18:26 mattmcc joined #gluster
18:26 ghenry joined #gluster
18:26 Dave_____ joined #gluster
18:26 Vaizki joined #gluster
18:26 klaxa joined #gluster
18:27 devilspgd joined #gluster
18:27 Gugge joined #gluster
18:27 suliba joined #gluster
18:27 NuxRo joined #gluster
18:27 bio_ joined #gluster
18:27 rastar joined #gluster
18:27 ccha2 joined #gluster
18:27 dgandhi joined #gluster
18:27 python_lover joined #gluster
18:27 dlambrig_ joined #gluster
18:28 edong23 joined #gluster
18:28 shruti joined #gluster
18:28 monotek joined #gluster
18:28 owlbot joined #gluster
18:29 glusterbot joined #gluster
18:29 Telsin joined #gluster
18:29 dgandhi joined #gluster
18:30 dgandhi joined #gluster
18:31 rastar joined #gluster
18:31 shruti joined #gluster
18:33 owlbot joined #gluster
18:37 owlbot joined #gluster
18:38 merp_ joined #gluster
18:41 owlbot joined #gluster
18:41 morse joined #gluster
18:45 owlbot joined #gluster
18:46 nishanth joined #gluster
18:48 Telsin left #gluster
18:49 owlbot joined #gluster
18:49 kovshenin joined #gluster
18:53 owlbot joined #gluster
18:57 owlbot joined #gluster
19:01 owlbot joined #gluster
19:05 owlbot joined #gluster
19:06 ovaistariq joined #gluster
19:08 hagarth joined #gluster
19:09 owlbot joined #gluster
19:11 theron joined #gluster
19:13 owlbot joined #gluster
19:17 owlbot joined #gluster
19:21 owlbot joined #gluster
19:25 owlbot joined #gluster
19:27 Wizek joined #gluster
19:29 owlbot joined #gluster
19:33 owlbot joined #gluster
19:37 owlbot joined #gluster
19:40 Wizek_ joined #gluster
19:41 owlbot joined #gluster
19:45 owlbot joined #gluster
19:48 hagarth JoeJulian: gluster bricks would be bind mounted from the host?
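
A rough sketch of the clone-and-swap flow JoeJulian describes, assuming bricks and glusterd state are bind mounted from the host as hagarth asks; the image name, tag, paths, and container names are illustrative, and whether an open file survives the swap without self-heal is exactly the open question:

    # start a cloned 3.7.9 container sharing the old one's bricks and glusterd state
    docker run -d --name gluster-379 \
        -v /bricks:/bricks \
        -v /var/lib/glusterd:/var/lib/glusterd \
        gluster/gluster-centos:3.7.9

    # once the new glusterd is up and listening, migrate the VIP to it
    # and stop the old container
    docker stop gluster-378
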
19:49 owlbot joined #gluster
19:52 atrius joined #gluster
19:53 owlbot joined #gluster
19:57 owlbot joined #gluster
20:01 owlbot joined #gluster
20:04 tswartz joined #gluster
20:05 owlbot joined #gluster
20:05 skylar joined #gluster
20:05 voobscout joined #gluster
20:09 owlbot joined #gluster
20:11 DV joined #gluster
20:13 owlbot joined #gluster
20:17 owlbot joined #gluster
20:21 owlbot joined #gluster
20:22 kovshenin joined #gluster
20:23 ashiq joined #gluster
20:25 owlbot joined #gluster
20:27 * hagarth shakes his head at the troll
20:29 owlbot joined #gluster
20:30 dlambrig_ joined #gluster
20:31 calavera joined #gluster
20:33 owlbot joined #gluster
20:37 owlbot joined #gluster
20:40 bitpushr joined #gluster
20:41 haomaiwang joined #gluster
20:41 owlbot joined #gluster
20:45 owlbot joined #gluster
20:49 owlbot joined #gluster
20:52 Slashman joined #gluster
20:53 owlbot joined #gluster
20:57 theron joined #gluster
20:57 owlbot joined #gluster
21:00 Wizek_ joined #gluster
21:01 owlbot joined #gluster
21:02 post-factum cvstealth: well, and I was wondering why my NFS clients stall when I create a new GlusterFS volume...
21:02 post-factum cvstealth: ping me plz when you are available
21:05 owlbot joined #gluster
21:05 CyrilPeponnet Hey boys
21:06 CyrilPeponnet I'm having trouble setting up libgfapi with kvm
21:06 CyrilPeponnet works as root but not as the qemu user, even though I followed everything from http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt
21:07 CyrilPeponnet @JoeJulian I think you have some feedback to provide ;p
21:09 rwheeler joined #gluster
21:09 owlbot joined #gluster
21:13 owlbot joined #gluster
21:15 kovshenin joined #gluster
21:17 rGil joined #gluster
21:17 owlbot joined #gluster
21:18 dthrvr joined #gluster
21:19 CyrilPeponnet forget it, it works :/ Looks like puppet was not restarting glusterd after adding the rpc-auth-allow-insecure part
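
For anyone hitting the same thing, the settings from that page amount to roughly this (VOLNAME is a placeholder); note the glusterd.vol change only takes effect after restarting glusterd, which is the step puppet skipped:

    # /etc/glusterfs/glusterd.vol on each server
    option rpc-auth-allow-insecure on

    # per volume
    gluster volume set VOLNAME server.allow-insecure on

    # pick up the glusterd.vol change
    systemctl restart glusterd
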
21:21 owlbot joined #gluster
21:25 owlbot joined #gluster
21:29 owlbot joined #gluster
21:29 dlambrig_ joined #gluster
21:30 theron joined #gluster
21:33 owlbot joined #gluster
21:37 owlbot joined #gluster
21:40 verdurin joined #gluster
21:41 owlbot joined #gluster
21:42 theron joined #gluster
21:45 mobaer joined #gluster
21:45 owlbot joined #gluster
21:49 owlbot joined #gluster
21:53 owlbot joined #gluster
21:57 owlbot joined #gluster
22:01 owlbot joined #gluster
22:05 owlbot joined #gluster
22:07 DV joined #gluster
22:10 owlbot joined #gluster
22:13 owlbot joined #gluster
22:14 doekia left #gluster
22:18 owlbot joined #gluster
22:19 merp_ joined #gluster
22:22 owlbot joined #gluster
22:26 owlbot joined #gluster
22:29 ovaistariq joined #gluster
22:29 haomaiwa_ joined #gluster
22:30 owlbot joined #gluster
22:34 owlbot joined #gluster
22:37 cpetersen Whelp...
22:38 cpetersen JoeJulian: I'm having a hard time believing I can use Gluster to store VMs for ESXi.
22:38 owlbot joined #gluster
22:38 cpetersen Is anyone around?
22:42 owlbot joined #gluster
22:46 owlbot joined #gluster
22:47 calavera joined #gluster
22:48 cpetersen I failed one of my replicated storage nodes and the VIP failed over (NFS3) to another node.  The two remaining nodes went into a brief split-brain.  Now I am pretty sure that my VM is corrupted.
22:48 cpetersen =)
22:50 owlbot joined #gluster
22:51 MACscr joined #gluster
22:52 MACscr what can i do to stop gluster from eating all my ram (4gb) and then swapping? It's a very low activity cluster
22:53 MACscr is it because i actually have its other node down right now? I had to get things stable again for the data its serving and i actually had to take down the second node in order to stabilize things
22:54 owlbot joined #gluster
22:54 post-factum cpetersen: that is why i still prefer using ceph rbd for vm image storage
22:54 cpetersen :(
22:55 post-factum MACscr: are you talking about client-side ram consumption?
22:56 Wizek joined #gluster
22:58 owlbot joined #gluster
22:59 cpetersen post-factum:  Could I do a three node, highly available, replicated cluster with Ceph?
22:59 cpetersen What kind of overhead is there?
23:00 plarsen joined #gluster
23:02 owlbot joined #gluster
23:02 MACscr post-factum: client and server are the same as im using the gluster cluster as an iscsi target
23:06 owlbot joined #gluster
23:06 post-factum cpetersen: sure you can
23:07 post-factum cpetersen: client talks to one osd at given time, and server-side does synchronous replication by itself
23:07 post-factum so at least on client-server connection there is no triple traffic overhead
23:07 cpetersen Can I do it with only 3 servers?
23:08 post-factum yep, 3 servers is ok for ceph
23:08 post-factum 3 mons + 3 osds
23:08 post-factum mon and osd lives ok on the same node
23:08 cpetersen Where is the client situated?  How do I advertise iSCSI (I'm assuming?) for ESXi?
23:09 post-factum sorry guys, no experience with iscsi :(
23:09 post-factum cpetersen: http://www.sebastien-han.fr/blog/2014/07/07/start-with-the-rbd-support-for-tgt/
23:09 glusterbot Title: Start with the RBD support for TGT - Sébastien Han (at www.sebastien-han.fr)
23:09 cpetersen Well if I were to use NFS then, where does the client go logically?
23:10 post-factum probably, that could help
23:10 owlbot joined #gluster
23:10 haomaiwa_ joined #gluster
23:10 post-factum reexporting cephfs via nfs is generally bad idea
23:10 post-factum you need to explore reexporting ceph rbd via iscsi
23:11 cpetersen Okie.
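
A minimal targets.conf sketch along the lines of the linked post, assuming tgt was built with RBD support; the IQN, pool, and image names are placeholders:

    # /etc/tgt/conf.d/rbd.conf
    <target iqn.2016-02.example.local:vmstore>
        driver iscsi
        bs-type rbd
        backing-store rbd/vmstore
    </target>
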
23:11 post-factum MACscr: and gluster version is?..
23:12 post-factum cpetersen: qemu is better with that -- it has ceph rbd client built in, and it is pretty trivial to attach network block device from ceph to vm
23:13 post-factum but i know, esxi enterprise blah-blah :)
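
The qemu-side attachment post-factum mentions looks roughly like this; the pool, image, and ceph user names are placeholders:

    # attach a ceph rbd image directly as a vm disk via qemu's built-in rbd client
    qemu-system-x86_64 -m 2048 \
        -drive format=raw,file=rbd:rbd/vm-disk:id=admin,if=virtio
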
23:14 owlbot joined #gluster
23:14 post-factum btw, why not just stick to vmware storage solution in that case?
23:15 post-factum virtual san they call it afaik
23:16 cpetersen VSAN = expensive
23:16 cpetersen As hell.
23:16 post-factum yep :(
23:16 post-factum so, why esxi?
23:17 cpetersen Kind of a sticky situation.  It's what we're most comfortable with.
23:18 owlbot joined #gluster
23:18 cpetersen Three servers with storage on each with 2 critical applications that need to be HA.
23:19 post-factum sounds like perfect hardware platform for small fully opensource cluster with kvm+gluster+ceph
23:20 cpetersen Yes and I'm thinking that would work too.,
23:20 cpetersen Too bad we've already bought the vmware essentials licensing (minus VSAN).  ;)
23:22 owlbot joined #gluster
23:22 post-factum hah
23:22 post-factum vendor locking works
23:24 cpetersen sure does
23:24 cpetersen blah
23:26 owlbot joined #gluster
23:28 MACscr post-factum: glusterfs 3.7.6 built on Nov  9 2015 15:17:09
23:30 post-factum 3.7.6 is subject to memory leaks; please try 3.7.8 or even 3.7.8 + the memleak-related patches
23:30 owlbot joined #gluster
23:30 post-factum or better wait for 3.7.9. i hope those patches will be merged
23:30 Wizek_ joined #gluster
23:31 post-factum 3.7.8 is subject to a write-behind+replica performance issue, so it could be quite a tricky trade-off
23:34 owlbot joined #gluster
23:35 MACscr Just my luck
23:36 MACscr that was nov 9th. you would hope patches would have already been merged by now
23:37 cpetersen post-factum: Shouldn't a split-brain always be a manual intervention?
23:38 cpetersen Why would a split-brain show up and then resolve itself in gluster?
23:38 owlbot joined #gluster
23:40 cpetersen What confuses me is, why would split-brain happen at all.  There is no multi-path IO going on.  The share is mounted from one VIP only, not multiple.  When it moves, the files are the same on the other two replicas.
23:40 cpetersen Baahhh!
23:42 owlbot joined #gluster
23:46 owlbot joined #gluster
23:46 post-factum MACscr: not all of them :)
23:47 post-factum cpetersen: server-side quorum prevents split-brain from happening
23:47 post-factum that is achievable with gluster 3-node replica, 2-node+arbiter replica or ceph :)
23:47 cpetersen I have that enabled
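
For the record, the quorum knobs post-factum refers to, plus the command for inspecting split-brain entries (VOLNAME is a placeholder):

    # server-side quorum: glusterd stops bricks when quorum is lost
    gluster volume set VOLNAME cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%

    # client-side quorum for replica volumes
    gluster volume set VOLNAME cluster.quorum-type auto

    # list entries gluster currently flags as split-brain
    gluster volume heal VOLNAME info split-brain
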
23:48 cpetersen OK. SO.
23:48 cpetersen I powered up the host again and get this, the VM started up on the one it was on originally...
23:48 cpetersen Whaaaat....
23:48 cpetersen Seems I have some troubleshooting of VMware to do now... ffs
23:49 post-factum justrelax and enjoy it
23:49 post-factum s/justrelax/just relax/
23:49 glusterbot What post-factum meant to say was: just relax and enjoy it
23:50 owlbot joined #gluster
23:50 merp_ joined #gluster
23:50 cpetersen Oh man... this is insane
23:54 owlbot joined #gluster
23:58 owlbot joined #gluster
23:59 theron joined #gluster
