
IRC log for #gluster, 2015-10-12


All times shown according to UTC.

Time Nick Message
00:05 theron joined #gluster
00:20 haomaiwang joined #gluster
01:00 vimal joined #gluster
01:03 EinstCrazy joined #gluster
01:05 zhangjn joined #gluster
01:13 gildub joined #gluster
01:25 zhangjn_ joined #gluster
01:27 najib joined #gluster
01:35 Lee1092 joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 CyrilPeponnet joined #gluster
01:50 haomaiwa_ joined #gluster
01:51 CyrilPeponnet joined #gluster
01:53 nangthang joined #gluster
01:54 thangnn_ joined #gluster
01:58 DV joined #gluster
01:59 zhangjn joined #gluster
02:01 baojg joined #gluster
02:13 zhangjn joined #gluster
02:14 RameshN joined #gluster
02:20 zhangjn joined #gluster
02:46 zhangjn joined #gluster
02:51 baojg joined #gluster
02:51 nishanth joined #gluster
02:51 kshlm joined #gluster
02:54 bharata-rao joined #gluster
03:00 zhangjn joined #gluster
03:03 clutchk joined #gluster
03:04 cvstealth joined #gluster
03:05 uebera|| joined #gluster
03:08 nishanth joined #gluster
03:09 rafi joined #gluster
03:18 julim joined #gluster
03:23 zhangjn joined #gluster
03:29 najib left #gluster
03:30 zhangjn joined #gluster
03:30 atinm joined #gluster
03:32 [7] joined #gluster
03:35 Muhammad_Najib joined #gluster
03:39 Muhammad_Najib Hi, I am using gluster nfs for ovirt with an HA configuration. I am wondering what the minimum number of nodes is if we want the gluster volume to stay online for read/write when two nodes are down?
03:40 stickyboy joined #gluster
03:40 Muhammad_Najib Currently I am using 3 nodes, with replica 3.
03:40 ramteid joined #gluster
03:42 Muhammad_Najib After some testing with that configuration, the gluster storage keeps running fine if one of the nodes is down, but the volume goes offline if two nodes are down (only one node alive).
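(That behaviour matches how quorum works on replica 3 volumes: client quorum is commonly set to "auto", which requires a majority of the replica bricks, two of three, before writes are allowed, so with only one node alive the volume refuses I/O to avoid split-brain. A minimal sketch of the relevant options, assuming a volume named "datavol":

    # require a majority of replica bricks before allowing writes
    gluster volume set datavol cluster.quorum-type auto
    # optionally enforce pool-wide server-side quorum as well
    gluster volume set datavol cluster.server-quorum-type server

Staying read/write with two of three nodes down would mean relaxing quorum, which reintroduces the split-brain risk that replica 3 is meant to prevent.)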
03:42 ppai joined #gluster
03:43 shubhendu joined #gluster
03:47 baojg joined #gluster
03:56 itisravi joined #gluster
03:57 kotreshhr joined #gluster
04:02 kdhananjay joined #gluster
04:05 hagarth joined #gluster
04:05 kotreshhr left #gluster
04:08 zhangjn joined #gluster
04:09 zhangjn joined #gluster
04:10 nbalacha joined #gluster
04:15 gem joined #gluster
04:17 bdurette joined #gluster
04:23 hagarth joined #gluster
04:26 rafi joined #gluster
04:28 harish joined #gluster
04:33 itisravi joined #gluster
04:34 sakshi joined #gluster
04:34 ppai joined #gluster
04:38 pppp joined #gluster
04:43 kotreshhr joined #gluster
04:44 yazhini joined #gluster
04:45 PaulCuzner joined #gluster
04:45 Muhammad_Najib joined #gluster
04:48 Bhaskarakiran joined #gluster
04:50 Bhaskarakiran joined #gluster
04:53 armyriad joined #gluster
04:54 PaulCuzner joined #gluster
04:54 armyriad joined #gluster
04:55 ramky joined #gluster
04:58 hchiramm_home joined #gluster
05:00 skoduri|afk joined #gluster
05:03 maveric_amitc_ joined #gluster
05:03 kanagaraj joined #gluster
05:04 shubhendu joined #gluster
05:06 ndarshan joined #gluster
05:13 rafi joined #gluster
05:13 aravindavk joined #gluster
05:16 nbalacha joined #gluster
05:18 neha_ joined #gluster
05:30 hgowtham joined #gluster
05:31 Bhaskarakiran joined #gluster
05:35 atalur joined #gluster
05:43 PaulCuzner left #gluster
05:48 deepakcs joined #gluster
05:53 jiffin joined #gluster
05:56 rjoseph joined #gluster
06:17 Manikandan joined #gluster
06:18 ashiq joined #gluster
06:20 poornimag joined #gluster
06:21 itisravi joined #gluster
06:22 mhulsman joined #gluster
06:22 msvbhat joined #gluster
06:24 raghu joined #gluster
06:30 vmallika joined #gluster
06:35 jtux joined #gluster
06:36 dusmant joined #gluster
06:40 spalai joined #gluster
06:52 anil joined #gluster
06:53 nangthang joined #gluster
06:58 kdhananjay joined #gluster
07:00 Humble joined #gluster
07:11 ashiq joined #gluster
07:14 [Enrico] joined #gluster
07:19 Philambdo joined #gluster
07:19 auzty joined #gluster
07:21 deniszh joined #gluster
07:21 R0ok_ joined #gluster
07:30 kdhananjay joined #gluster
07:34 kdhananjay joined #gluster
07:42 mbukatov joined #gluster
07:51 ctria joined #gluster
07:52 Slashman joined #gluster
08:17 hgowtham joined #gluster
08:17 jwd joined #gluster
08:31 RameshN joined #gluster
08:33 atalur_ joined #gluster
08:39 mufa joined #gluster
08:40 Norky joined #gluster
08:47 haomaiwa_ joined #gluster
08:52 zhangjn joined #gluster
08:52 R0ok_ anyone on 3.7.5 yet ?
08:53 R0ok_ still no release notes for 3.7.5 -> https://github.com/gluster/glusterfs/tree/release-3.7/doc/release-notes
08:53 glusterbot Title: glusterfs/doc/release-notes at release-3.7 · gluster/glusterfs · GitHub (at github.com)
08:58 LebedevRI joined #gluster
09:01 haomaiwa_ joined #gluster
09:05 jiffin R0ok_: you can check with pranithk, he is the maintainer for 3.7.5
09:07 R0ok_ I'm contemplating upgrading from 3.5.5 to 3.7.5
09:08 Leildin I just did 3.6.2 to 3.7.4
09:08 Leildin no problems
09:08 sripathi joined #gluster
09:09 Leildin I think there's a problem upgrading from 3.4
09:09 Leildin http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.7
09:10 R0ok_ Leildin: I've read the upgrade docs a couple of times. I'm also watching the mailing list for issues related to the upgrade process
09:11 jiffin1 joined #gluster
09:15 Leildin I had problems going from 3.3 to 3.6, but I think the version gap there was far bigger
09:18 R0ok_ Leildin: why did you decide to use 3.6 ?
09:18 R0ok_ I think I prefer using stable branches on production :)
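(The upgrade guide linked above describes the usual rolling procedure, one server at a time; a rough sketch of the steps on each node, with package and service names assumed rather than taken from the log:

    # on one server at a time
    systemctl stop glusterd                      # or the distro's glusterfs-server service
    yum update glusterfs glusterfs-server glusterfs-fuse
    systemctl start glusterd
    # let self-heal catch up before touching the next node
    gluster volume heal VOLNAME info
)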
09:21 maveric_amitc_ joined #gluster
09:21 poornimag joined #gluster
09:22 rjoseph joined #gluster
09:23 beeradb joined #gluster
09:24 Leildin we've had weird stuff happen because we have a strange use case
09:24 Leildin we started with a normal 4-node, 12-brick distribute volume
09:24 Leildin had some lock-file issues when accessing it from Samba and FUSE clients
09:25 Leildin we had to move back to a single node with 12 bricks, and that prompted us to upgrade to 3.6 at the same time
09:26 Leildin 3.6 did have some volume optimisation options we liked, if I remember correctly, like disabling nfs so the log file wouldn't be spammed, and things like that
09:26 Leildin nothing major, just improving upon the PoC model we'd made
09:26 spalai left #gluster
09:26 Leildin while cutting off all node redundancy, stupidly. (not my decision)
09:27 spalai joined #gluster
09:30 hchiramm_home joined #gluster
09:31 hgowtham joined #gluster
09:32 _shaps_ joined #gluster
09:36 Leildin oh and we went from 3.6 to 3.7 to try and resolve a rebalance issue but to no avail :)
09:36 Willy joined #gluster
09:38 stickyboy joined #gluster
09:41 zhangjn joined #gluster
09:44 hagarth Leildin: what is the rebalance issue?
09:48 Leildin the rebalance is weird
09:48 Leildin just a sec I've got a bug report open
09:48 Leildin https://bugzilla.redhat.com/show_bug.cgi?id=1264520
09:48 glusterbot Bug 1264520: medium, unspecified, ---, bugs, NEW , volume rebalance start is successfull but status returns failed status
09:48 Leildin my .vol files are "corrupted"
09:48 Leildin instead of having normal 0,1,2,3,4,5,6,7,8 I have 0,0,0,0,0,0,0,1,2
09:49 Leildin so the parsing to get brick details fails and rebalance fails
09:49 Leildin it happened following an add-brick that had no problems
09:50 hagarth Leildin: is this the generated volume file?
09:50 haomaiwa_ joined #gluster
09:51 Leildin data-rebalance.vol has this as well as trusted-data.tcp-fuse.vol
09:51 Leildin and data.tcp-fuse.vol
09:51 maveric_amitc_ joined #gluster
09:51 Leildin joejulian gave me a command to regenerate the files, they had the same problem
09:51 hagarth can you provide the output of gluster volume info?
09:51 hagarth hmm, it is in the bug right?
09:51 Leildin yes
09:52 hagarth i have several single node volumes, never run into this problem. wondering what could trigger this.
09:52 Leildin me too ;)
09:53 Leildin if I redo a rebalance now it 'corrupts' the files again
09:53 Leildin I have to manually change the files and restart the rebalance it's very strange
09:53 hagarth yeah, looks quite strange
09:54 hagarth is it possible to create a test volume in this setup and see if the problem recurs?
10:01 haomaiwa_ joined #gluster
10:03 Bhaskarakiran joined #gluster
10:04 Leildin I'll do it when I get a bit more time
10:05 hagarth Leildin: If the problem recurs, please update the bug with glusterd log file and the generated volume files
10:06 Leildin will do !
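(For anyone hitting something similar: the generated volume files in question live under glusterd's working directory, so the mangled subvolume numbering can be inspected directly, and any volume-set operation rewrites them. A sketch assuming a volume named "data"; paths and the chosen option are illustrative:

    # look at the generated client and rebalance graphs
    less /var/lib/glusterd/vols/data/data.tcp-fuse.vol
    less /var/lib/glusterd/vols/data/data-rebalance.vol
    # any "gluster volume set" regenerates the volfiles; re-applying a default
    # value is one low-impact way to trigger that
    gluster volume set data cluster.min-free-disk 10%
)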
10:20 Bhaskarakiran_ joined #gluster
10:28 ju5t_ joined #gluster
10:31 najib joined #gluster
10:31 harish_ joined #gluster
10:41 maveric_amitc_ joined #gluster
10:48 ivan_rossi joined #gluster
10:49 haomaiwang joined #gluster
10:53 maveric_amitc_ joined #gluster
10:57 zhangjn joined #gluster
10:58 EinstCrazy joined #gluster
10:58 ju5t joined #gluster
10:59 TheSeven joined #gluster
11:00 EinstCrazy joined #gluster
11:01 haomaiwang joined #gluster
11:06 gem joined #gluster
11:18 plarsen joined #gluster
11:19 frozengeek joined #gluster
11:24 jiffin1 joined #gluster
11:28 Manikandan joined #gluster
11:28 haomaiwa_ joined #gluster
11:31 kdhananjay joined #gluster
11:42 skoduri joined #gluster
11:43 ju5t_ joined #gluster
11:44 Bhaskarakiran_ joined #gluster
11:44 jiffin1 joined #gluster
11:49 haomaiwa_ joined #gluster
11:49 Manikandan joined #gluster
11:50 atalur_ joined #gluster
11:50 rjoseph joined #gluster
11:52 hchiramm_home joined #gluster
11:59 julim joined #gluster
12:01 jiffin1 joined #gluster
12:01 haomaiwa_ joined #gluster
12:02 mufa 1 ms to read a file on gluster (strace: lstat, open, close), is that a normal value?
12:02 mufa 3k file if it matters
12:05 shubhendu joined #gluster
12:09 spcmastertim joined #gluster
12:16 unclemarc joined #gluster
12:16 ppai joined #gluster
12:19 jtux joined #gluster
12:19 shubhendu joined #gluster
12:22 Philambdo joined #gluster
12:23 theron joined #gluster
12:32 chirino joined #gluster
12:35 neha_ joined #gluster
12:35 kotreshhr left #gluster
12:38 B21956 joined #gluster
12:44 Leildin mufa, what time do you get when reading from the root mount?
12:49 ppai joined #gluster
12:54 rafi1 joined #gluster
12:55 jwaibel joined #gluster
12:56 mhulsman joined #gluster
13:00 ayma joined #gluster
13:00 Akee joined #gluster
13:01 lpabon joined #gluster
13:05 mufa 140 microseconds
13:05 atrius joined #gluster
13:12 shubhendu joined #gluster
13:21 ira joined #gluster
13:23 theron joined #gluster
13:26 rjoseph joined #gluster
13:27 Philambdo joined #gluster
13:27 bowhunter joined #gluster
13:31 haomaiwa_ joined #gluster
13:32 Leildin mufa, gluster has a slight problem with small files in large quantities but they're revamping some stuff to make it better.
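(Timing each syscall on the FUSE mount and on the brick's local filesystem side by side makes the FUSE/network overhead visible; an illustrative example with assumed paths:

    # -T prints the time spent in each syscall
    strace -T -e trace=lstat,open,read,close cat /mnt/glustervol/small.txt
    # compare against the brick's local filesystem directly
    strace -T -e trace=lstat,open,read,close cat /data/brick1/small.txt

Roughly 1 ms per open on the mount versus ~140 µs locally is consistent with each lookup costing at least one network round trip.)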
13:39 hamiller joined #gluster
13:50 hagarth joined #gluster
13:51 theron joined #gluster
13:54 skylar joined #gluster
13:54 nbalacha joined #gluster
14:01 haomaiwa_ joined #gluster
14:20 dgandhi joined #gluster
14:21 dgandhi joined #gluster
14:22 shyam joined #gluster
14:22 rwheeler joined #gluster
14:26 bennyturns joined #gluster
14:30 DV joined #gluster
14:32 atinm joined #gluster
14:33 turkleton joined #gluster
14:34 turkleton Does anyone here know about the GFS scheduler and propagation to client side config? When you change the configuration on the volume (server-side), does the change automatically take effect on the client side as well? Is a remount necessary?
14:34 turkleton @JoeJulian ^ If you have a second :)
14:37 mpietersen joined #gluster
14:38 haomaiwa_ joined #gluster
14:41 spalai left #gluster
14:46 jobewan joined #gluster
14:48 aravindavk joined #gluster
14:49 atinm joined #gluster
14:52 wushudoin joined #gluster
14:56 theron joined #gluster
14:59 David_Varghese joined #gluster
15:00 poornimag joined #gluster
15:00 ira joined #gluster
15:01 haomaiwa_ joined #gluster
15:04 shaunm joined #gluster
15:12 Philambdo joined #gluster
15:21 skoduri joined #gluster
15:25 JoeJulian turkleton: When you make changes that affect the client, the client loads the new graph and transfers all the open fds to the new graph, then releases the old graph. So yes, changes take effect on the client without a remount.
15:27 turkleton Awesome! Thanks! We're running into some performance issues on AWS with the m3.medium nodes. We attempted to make some volume changes that we figured might help, but it appears we're actually throttled right now by network throughput. :-/ We're moving to a larger instance class with more network throughput, and we're hoping that'll help.
15:30 muneerse joined #gluster
15:32 ju5t joined #gluster
15:35 jwd joined #gluster
15:35 turkleton To confirm, if we set this variable on the volume (http://www.gluster.org/community/documentation/index.php/Translators/performance/io-cache) it should propagate to the client as well, yes?
15:39 stickyboy joined #gluster
15:42 JoeJulian Yes, if you turn off the io-cache option, it will take effect on the client.
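(Concretely, the change is made once through the CLI on any server and live FUSE mounts pick it up when they load the regenerated graph; a minimal example assuming a volume named "datavol":

    # toggle the client-side io-cache translator; takes effect without remounting
    gluster volume set datavol performance.io-cache off
    gluster volume info datavol    # the option appears under "Options Reconfigured"
)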
15:42 daedeloth joined #gluster
15:42 daedeloth hi there
15:42 daedeloth I'm trying to figure out how gluster handles symlinks
15:43 JoeJulian exactly the same way as any other filesystem.
15:43 JoeJulian literally
15:43 daedeloth alright, my question seems dumb now :) sorry
15:44 JoeJulian No, I think I know what you were wondering.
15:44 daedeloth yea I was wondering if it replicated the content of the symlink on the other .. euh... mount points? I'm not very clear on the terminology yet
15:45 jamesc joined #gluster
15:46 JoeJulian So if you make a symlink on your gluster volume pointing to, for instance, /etc/hostname, each client that mounts that volume would see its own hostname if you cat $symlink.
15:46 daedeloth yea, I understand
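(A quick illustration of the /etc/hostname point, with the mount point path assumed:

    # on any client with the volume mounted at /mnt/gv0
    ln -s /etc/hostname /mnt/gv0/whoami
    cat /mnt/gv0/whoami    # prints the hostname of whichever client runs the cat

The symlink itself is replicated like any other entry, but its target is resolved on the client that reads it.)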
15:46 daedeloth we're trying to set up a cdn where clients can choose in what countries their data can live
15:46 JoeJulian nifty
15:47 plarsen joined #gluster
15:47 daedeloth and from what I see gluster isn't the best fit for that :)
15:48 JoeJulian Could be done with middleware in swift as another idea to toy with.
15:48 daedeloth how do you mean?
15:51 bennyturns joined #gluster
15:51 JoeJulian use some metadata tag as you put your object, write a middleware bit to weight the object to the zone chosen.
15:52 atinm joined #gluster
15:52 daedeloth but the files need to be servable from an Apache server at that location
15:52 daedeloth so that server will have to mount the cluster as well
15:52 daedeloth and it would have access to everything
15:58 maserati joined #gluster
15:59 turkleton JoeJulian: I'm expanding a GFS cluster to add a 3rd and 4th replica (all replication, no striping), how does the GFS client handle this? Will it attempt to send read requests to an unhealthy node and potentially result in a file read miss?
16:00 armyriad joined #gluster
16:01 JoeJulian turkleton: on lookup() if the file needs healed, it'll get queued and the FD will be assigned to the first healthy server.
16:01 haomaiwa_ joined #gluster
16:02 turkleton Will the check for whether it needs to be healed potentially be slow?
16:02 shubhendu joined #gluster
16:03 JoeJulian It's the same whether it's healthy or not.
16:04 jwaibel joined #gluster
16:06 plarsen joined #gluster
16:07 bluenemo joined #gluster
16:09 JoeJulian If those self-heal checks take too long for your use case, you can lean more toward AP at the cost of C by disabling self-heal checks on the client. (see gluster volume set help for the settings).
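(For reference, the expansion and the client-side self-heal trade-off described above would look roughly like this; hostnames, brick paths, and the volume name are placeholders:

    # grow a replica 2 volume to replica 4 by adding one brick per new node
    gluster volume add-brick datavol replica 4 server3:/bricks/b1 server4:/bricks/b1
    # optionally stop clients from doing heal checks themselves, leaving healing
    # to glustershd (more availability, weaker consistency after failures)
    gluster volume set datavol cluster.data-self-heal off
    gluster volume set datavol cluster.metadata-self-heal off
    gluster volume set datavol cluster.entry-self-heal off
)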
16:09 hagarth joined #gluster
16:17 haomaiw__ joined #gluster
16:17 nbalacha joined #gluster
16:21 nage joined #gluster
16:31 [Enrico] joined #gluster
16:32 skoduri joined #gluster
16:33 gmaruzz joined #gluster
16:34 ivan_rossi left #gluster
16:35 gmaruzz left #gluster
16:40 RedW joined #gluster
16:45 Rapture joined #gluster
16:48 coredump joined #gluster
16:49 jwd joined #gluster
16:50 anil joined #gluster
16:59 frozengeek joined #gluster
17:01 haomaiwa_ joined #gluster
17:03 maveric_amitc_ joined #gluster
17:15 maveric_amitc_ joined #gluster
17:16 RameshN joined #gluster
17:18 monotek joined #gluster
17:18 turkleton JoeJulian: What would be the best resource for learning the internals of how gluster and FUSE handle reads/writes and direction of where to go for said reads/writes?
17:38 hchiramm_home joined #gluster
17:39 JoeJulian turkleton: "gluster volume set help" afaict. Or the source code.
17:40 JoeJulian The default is "first to respond" where the requests go out in order from the first brick.
17:40 JoeJulian (or the local server if the client is on a server)
17:41 rafi joined #gluster
17:41 turkleton Gotcha. Does it do any latency checks, or is the first brick always hit first?
17:44 mufa joined #gluster
17:46 cholcombe joined #gluster
17:49 JoeJulian first to respond is it (by default)
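(If "first to respond" isn't the desired policy, the replica read-child selection can be tuned; a sketch of commonly cited AFR options, whose exact availability depends on the release (see "gluster volume set help"):

    # spread reads across replicas by hashing on the file's gfid
    gluster volume set datavol cluster.read-hash-mode 1
    # prefer the brick local to the client when the client is also a server
    gluster volume set datavol cluster.choose-local on
)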
17:53 frozengeek joined #gluster
18:01 haomaiwa_ joined #gluster
18:04 theron_ joined #gluster
18:11 Philambdo joined #gluster
18:16 turkleton Is it possible to stop a full heal operation?
18:21 plarsen joined #gluster
18:23 JoeJulian There's no command to do that.
18:24 JoeJulian Theoretically if you stop all glusterd and ensure all glustershd have stopped, then restart all glusterd it should forget that's what it was doing.
18:24 JoeJulian I think
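(A sketch of that workaround, run on every server in the pool; service and process names are assumed, and it is speculative, per the "I think" above:

    systemctl stop glusterd              # stop the management daemon on each node
    pkill -f glustershd                  # stopping glusterd does not kill the self-heal daemon
    # once every node is down, bring them back
    systemctl start glusterd
    gluster volume heal VOLNAME info     # check whether a crawl is still queued
)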
18:28 shaunm joined #gluster
18:31 ayma joined #gluster
18:34 B21956 joined #gluster
18:35 Ludo- joined #gluster
18:35 Ludo- hi! any experience to use glusterfs as an origin for a CDN?
18:41 spalai joined #gluster
18:42 cholcombe joined #gluster
18:54 _maserati_ joined #gluster
19:00 cholcombe joined #gluster
19:07 jwd joined #gluster
19:17 mikedep333 joined #gluster
19:26 squaly joined #gluster
20:09 ayma joined #gluster
20:20 ira joined #gluster
20:21 cliluw joined #gluster
20:25 cholcombe joined #gluster
20:28 cliluw joined #gluster
20:33 _maserati_ joined #gluster
20:34 _maserati_ joined #gluster
20:54 mhulsman joined #gluster
21:04 _maserati_ joined #gluster
21:21 _maserati_ joined #gluster
21:22 turkleton JoeJulian: Is it fairly straight-forward to make GlusterFS NFS HA or is the FUSE client the only way to achieve "true" HA?
21:25 JoeJulian nfs is a single network connection. To make nfs HA, you would have to float an ip address depending on state. corosync/pacemaker, ucarp, etc.
21:39 cliluw joined #gluster
21:41 stickyboy joined #gluster
21:51 turkleton JoeJulian: When using the GFS NFS client, can you point NFS clients to both nodes or should you point to only one?
21:58 ira joined #gluster
22:01 mufa you should point to the floating IP address I guess
22:01 JoeJulian There is no GlusterFS NFS client. The NFS client is the kernel's (if using linux/nfs) or whatever it is in Windows.
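(Since the built-in Gluster NFS server speaks NFSv3 over TCP, clients would mount through the floating address rather than an individual node; a minimal example with placeholder names:

    # gluster-vip is whatever address ucarp/pacemaker floats between the servers
    mount -t nfs -o vers=3,mountproto=tcp gluster-vip.example.com:/datavol /mnt/datavol
)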
22:02 mufa I've added a new brick to a replica volume, what do I need to do to get all files from the original bricks onto the new one?
22:07 JoeJulian That would happen automatically.
22:07 DV__ joined #gluster
22:07 mufa it didn't
22:07 JoeJulian ~pasteinfo | mufa
22:07 glusterbot mufa: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
22:09 mufa JoeJulian: http://fpaste.org/278558/44687741/
22:09 glusterbot Title: #278558 Fedora Project Pastebin (at fpaste.org)
22:12 _Bryan_ joined #gluster
22:12 JoeJulian looks right. check the self-heal logs, /var/log/glusterfs/glustershd.log, firewall(s), I assume the new server is on the same subnet.
22:13 mufa same subnet, firewall: is there a heal port to be opened?
22:15 opal joined #gluster
22:18 opal left #gluster
22:23 JoeJulian Nope, the glustershd service just has to be able to connect to the bricks.
22:23 JoeJulian @ports
22:23 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
22:24 mufa thanks for your help JoeJulian, it's kinda late here so I'll have a look again tomorrow
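(Translating glusterbot's port list into firewall rules for the new server, as an iptables sketch; widen the brick range to cover however many bricks the node hosts:

    iptables -A INPUT -p tcp -m multiport --dports 24007,24008 -j ACCEPT           # glusterd management (24008 only for rdma)
    iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT                         # brick processes (glusterfsd)
    iptables -A INPUT -p tcp -m multiport --dports 38465:38468,2049,111 -j ACCEPT  # gluster NFS, NLM, portmapper
    iptables -A INPUT -p udp --dport 111 -j ACCEPT                                 # rpcbind/portmap over UDP
)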
22:28 daMaestro joined #gluster
22:31 Chr1st1an joined #gluster
22:35 delhage_ joined #gluster
22:41 gildub joined #gluster
23:01 lpabon joined #gluster
23:12 theron joined #gluster
23:15 zhangjn joined #gluster
23:20 sc0 joined #gluster
23:50 zhangjn joined #gluster
23:52 justinmburrous joined #gluster
