IRC log for #gluster, 2016-11-07


All times shown according to UTC.

Time Nick Message
00:21 johnmilton joined #gluster
01:02 vinurs joined #gluster
01:05 shdeng joined #gluster
01:39 haomaiwang joined #gluster
02:22 Gnomethrower joined #gluster
02:31 haomaiwang joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:10 Lee1092 joined #gluster
03:28 mchangir joined #gluster
03:33 magrawal joined #gluster
03:49 kramdoss_ joined #gluster
03:49 atinm joined #gluster
04:02 RameshN joined #gluster
04:06 itisravi joined #gluster
04:37 kdhananjay joined #gluster
04:39 apandey joined #gluster
04:49 nbalacha joined #gluster
04:55 kotreshhr joined #gluster
04:59 satya4ever joined #gluster
05:08 prasanth joined #gluster
05:09 sanoj joined #gluster
05:12 pfactum joined #gluster
05:14 apandey joined #gluster
05:17 ankitraj joined #gluster
05:25 ndarshan joined #gluster
05:27 ashiq joined #gluster
05:28 karthik_us joined #gluster
05:29 suliba joined #gluster
05:32 d0nn1e joined #gluster
05:33 karnan joined #gluster
05:40 Bhaskarakiran joined #gluster
05:45 rafi joined #gluster
05:48 jiffin joined #gluster
05:50 shubhendu joined #gluster
05:54 k4n0 joined #gluster
05:56 bkunal joined #gluster
05:57 msvbhat joined #gluster
06:00 mchangir joined #gluster
06:00 Alghost joined #gluster
06:12 skoduri joined #gluster
06:19 nishanth joined #gluster
06:22 Philambdo joined #gluster
06:28 kramdoss_ joined #gluster
06:29 arcolife joined #gluster
06:33 rastar joined #gluster
06:40 bluenemo joined #gluster
06:51 karthik_us joined #gluster
06:52 apandey joined #gluster
06:55 kramdoss_ joined #gluster
06:56 mhulsman joined #gluster
07:00 [diablo] joined #gluster
07:05 devyani7 joined #gluster
07:07 karthik_us joined #gluster
07:16 PaulCuzner joined #gluster
07:16 ahino joined #gluster
07:17 Saravanakmr joined #gluster
07:18 jtux joined #gluster
07:22 prasanth joined #gluster
07:30 hchiramm joined #gluster
07:37 derjohn_mob joined #gluster
07:42 pdrakewe_ joined #gluster
07:45 jtux joined #gluster
07:48 jvandewege joined #gluster
07:48 PaulCuzner joined #gluster
07:50 mbukatov joined #gluster
07:56 PaulCuzner joined #gluster
07:58 tomaz__ joined #gluster
08:02 k4n0 joined #gluster
08:15 ivan_rossi joined #gluster
08:16 PaulCuzner joined #gluster
08:21 k4n0 joined #gluster
08:33 abyss^ Is the only way to fix split-brain on a directory to remove that directory? Gluster's tools don't work on directories, and getfattr -n replica.split-brain-status is unable to obtain values...
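For context, a minimal sketch of how a directory's split-brain state can be inspected, assuming a FUSE mount at /mnt/gluster, a brick at /bricks/brick1 and a volume named myvol (all hypothetical, not taken from the log):

    # From a client mount: query the virtual split-brain attribute (may return nothing for directories)
    getfattr -n replica.split-brain-status /mnt/gluster/path/to/dir

    # On each brick: dump the AFR changelog xattrs to see which copies disagree
    getfattr -d -m . -e hex /bricks/brick1/path/to/dir

    # Ask the self-heal daemon which entries it considers split-brain
    gluster volume heal myvol info split-brain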
08:34 flying joined #gluster
08:38 ahino joined #gluster
08:44 riyas joined #gluster
08:44 ivan_rossi helsinki92: Yes, small files are problematic; however, I personally know of many successful cases, given that the web stack and gluster are coupled appropriately.
08:46 k4n0 joined #gluster
08:47 Gnomethrower joined #gluster
08:52 om2 joined #gluster
08:55 karthik_us joined #gluster
08:56 percevalbot joined #gluster
08:58 hackman joined #gluster
09:01 arc0 joined #gluster
09:19 luizcpg joined #gluster
09:21 jri joined #gluster
09:29 panina joined #gluster
09:33 percevalbot joined #gluster
09:59 nbalacha joined #gluster
10:00 kdhananjay Ulrar: there?
10:01 kdhananjay Ulrar: had a question on the add-brick + vm corruption issue.
10:03 kdhananjay Ulrar: so we just root-caused (rc'd) a vm corruption bug with replace-brick as part of the 3.9 RC verification work.
10:03 jkroon joined #gluster
10:04 kdhananjay Ulrar: i want to test if the same patch fixes the add-brick issue you'd reported. So if you could tell me what steps it took to corrupt the vms, i can try that out on my setup and confirm that the same patch fixes both issues.
10:08 karnan joined #gluster
10:09 Ulrar kdhananjay:
10:10 kdhananjay Ulrar: Hey!
10:10 Ulrar kdhananjay: I had a 3-brick replica 3 setup with about 25 VMs using it on Proxmox. The Proxmox nodes were also the bricks
10:10 Ulrar I just ran the add-brick, adding 3 new nodes to go from 1x3 to 2x3, and everything got corrupted immediately
10:11 Ulrar So if you install 3 proxmox with a gluster volume on it and try to add 3 more bricks, you should get the problem
10:11 kdhananjay Ulrar: ok trying it out .. thanks!
10:11 Ulrar kdhananjay: Np ! Got the problem with 3.7.12, and Lindsay was saying it's still there on 3.8
10:12 Ulrar Maybe try it with a "bad" version first to be sure you have the correct steps
10:12 Ulrar kdhananjay: Thanks for looking into it anyway :)
10:12 kdhananjay Ulrar: yes, saw the mail. I was a bit confused since Lindsay's bug report said a rebalance was required to corrupt the file. Looks like that's not necessary
10:13 Ulrar I had corruption before the rebalance, it was really instant when I added the brick
10:13 kdhananjay Ulrar: well, thank YOU for bringing the bug to our notice :)
10:13 Ulrar But then again, 3.7.12 isn't that recent so maybe it doesn't work exactly the same in 3.8
10:13 Ulrar I just never had the courage to update something that was working fine
10:14 kdhananjay Ulrar: no, looks like the bug exists even in the latest versions of 3.8 and 3.7
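For reference, a rough sketch of the reproduction Ulrar describes: a replica 3 volume serving running VMs is expanded from 1x3 to 2x3 with add-brick. Host names, brick paths and the volume name are hypothetical placeholders, not taken from his setup:

    # Existing 1x3 replica volume serving VM images (via gfapi from Proxmox, or FUSE)
    gluster volume info vmstore

    # With VM I/O still running, add a second replica set to go 1x3 -> 2x3
    gluster peer probe node4; gluster peer probe node5; gluster peer probe node6
    gluster volume add-brick vmstore replica 3 \
        node4:/bricks/vmstore node5:/bricks/vmstore node6:/bricks/vmstore

According to the discussion above, the corruption appeared immediately after the add-brick on 3.7.12, before any rebalance.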
10:19 derjohn_mob joined #gluster
10:25 mhulsman joined #gluster
10:34 msvbhat joined #gluster
10:34 Wizek joined #gluster
10:35 kdhananjay Ulrar: hmm so i did an add-brick while IO was going on in 3 vms and nothing bad happened.
10:36 panina joined #gluster
10:36 Ulrar You had IO on gfapi ?
10:38 kdhananjay Ulrar: ah no. FUSE in my case, since Lindsay confirmed the issue occurs on FUSE too.
10:38 kdhananjay Ulrar: ttyl, gotta go now.
10:46 kramdoss_ joined #gluster
10:48 B21956 joined #gluster
10:53 bkunal joined #gluster
10:58 B21956 joined #gluster
10:58 gem joined #gluster
11:00 Ashutto joined #gluster
11:03 B21956 joined #gluster
11:06 Ashutto joined #gluster
11:06 karnan joined #gluster
11:09 Philambdo joined #gluster
11:09 B21956 joined #gluster
11:37 ira_ joined #gluster
11:39 nbalacha joined #gluster
11:51 atinm joined #gluster
12:02 msvbhat joined #gluster
12:13 [diablo] joined #gluster
12:14 mhulsman joined #gluster
12:15 nishanth joined #gluster
12:16 tdasilva joined #gluster
12:22 bkunal joined #gluster
12:31 kdhananjay joined #gluster
12:41 kotreshhr left #gluster
12:42 mchangir joined #gluster
12:44 shubhendu joined #gluster
12:49 f0rpaxe joined #gluster
12:52 prasanth joined #gluster
12:53 arpu joined #gluster
12:55 johnmilton joined #gluster
12:59 prasanth joined #gluster
13:01 Gnomethrower joined #gluster
13:07 prasanth joined #gluster
13:11 rafi joined #gluster
13:15 atinm joined #gluster
13:15 kdhananjay joined #gluster
13:24 Gnomethrower joined #gluster
13:31 unclemarc joined #gluster
13:32 atinm joined #gluster
13:35 nbalacha joined #gluster
13:35 [diablo] joined #gluster
13:39 luizcpg joined #gluster
13:41 shyam joined #gluster
13:54 haomaiwang joined #gluster
13:57 rwheeler joined #gluster
13:57 plarsen joined #gluster
14:01 skoduri joined #gluster
14:05 gem joined #gluster
14:07 martin_pb joined #gluster
14:16 Philambdo joined #gluster
14:16 nbalacha joined #gluster
14:19 plarsen joined #gluster
14:21 mchangir joined #gluster
14:27 nbalacha joined #gluster
14:32 skylar joined #gluster
14:37 jiffin joined #gluster
14:40 kpease joined #gluster
14:46 shaunm joined #gluster
14:48 mchangir joined #gluster
14:55 [diablo] joined #gluster
14:59 circ-user-6WWwA joined #gluster
15:03 RameshN joined #gluster
15:05 hagarth joined #gluster
15:08 Wizek joined #gluster
15:08 mchangir joined #gluster
15:13 k4n0 joined #gluster
15:15 helsinki joined #gluster
15:15 arc0 joined #gluster
15:26 msvbhat joined #gluster
15:26 Gambit15 joined #gluster
15:37 unclemarc joined #gluster
15:40 farhorizon joined #gluster
15:45 AppStore joined #gluster
15:46 scubacuda_ joined #gluster
15:48 Lee1092 joined #gluster
15:48 twisted` joined #gluster
15:48 telius joined #gluster
15:58 hchiramm joined #gluster
15:59 wushudoin joined #gluster
16:09 mchangir joined #gluster
16:14 daryllee joined #gluster
16:16 farhorizon joined #gluster
16:16 Caveat4U joined #gluster
16:27 farhorizon joined #gluster
16:28 luizcpg joined #gluster
16:28 farhorizon joined #gluster
17:06 derjohn_mob joined #gluster
17:45 msvbhat joined #gluster
18:03 helsinki Is it possible to export subdirectories of a Gluster volume?
18:04 JoeJulian helsinki: Not currently.
18:07 helsinki Thanks JoeJulian
18:10 prth joined #gluster
18:10 klaas joined #gluster
18:17 Caveat4U joined #gluster
18:21 aocole joined #gluster
18:28 wushudoin joined #gluster
18:29 jiffin joined #gluster
18:37 SlickNik joined #gluster
18:43 fcami joined #gluster
18:51 jiffin joined #gluster
18:58 shaunm joined #gluster
19:00 aocole I have a distributed-replicated setup. One large brick per machine, and many volumes. Let's say a machine goes down. If I want to replace the machine (and brick) it seems like I have to do replace-brick on every volume. Is that correct?
19:01 MidlandTroy joined #gluster
19:05 aocole I expected there to be a way to replace a brick as a top-level operation and have that take effect for every volume that used that brick
19:06 prth joined #gluster
19:12 d0nn1e joined #gluster
19:17 JoeJulian aocole: I always just replace a machine while keeping the same hostname and server uuid (/var/lib/glusterd/glusterd.info) so I can just start the volumes (start ... force) and let self-heal handle the rest.
19:19 Caveat4U joined #gluster
19:19 aocole thanks @JoeJulian. I saw some docs about that here: https://support.rackspace.com/how-to/recover-from-a-failed-server-in-a-glusterfs-array/
19:19 glusterbot Title: Recover from a failed server in a GlusterFS array (at support.rackspace.com)
19:20 aocole what about a catastrophic case where no data from the failed machine is available?
19:20 JoeJulian That's what replication is for.
19:20 aocole right.. I meant the /var/lib/gluster data
19:21 k4n0 joined #gluster
19:21 JoeJulian Since I use saltstack to build my servers, I let salt create the glusterd.info file before I start glusterd. Once glusterd is up with the correct uuid, peer probe another server to get the rest of the configuration.
19:22 JoeJulian If you don't have the uuid, you can get it from one of the other servers peer info.
19:23 aocole i see. so salt creates a generic glusterd.info file and then peer probe fills it in?
19:23 aocole (forgive me for my ignorance, I'm picking up someone else's work on this)
19:23 JoeJulian peer probe fills in /var/lib/glusterd/{peers,vols,...}
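A sketch of the keep-the-identity rebuild JoeJulian describes: restore the failed server's UUID before glusterd first starts, probe a surviving peer to pull the configuration back, then force-start and heal. Hostnames, the volume name and the placeholder values are hypothetical:

    # On a surviving peer: look up the failed server's UUID
    gluster peer status          # or: grep -r uuid /var/lib/glusterd/peers/

    # On the rebuilt server, before starting glusterd, restore the old identity
    cat > /var/lib/glusterd/glusterd.info <<EOF
    UUID=<uuid-of-the-failed-server>
    operating-version=<copy the value from a surviving peer's glusterd.info>
    EOF
    systemctl start glusterd

    # Probe any surviving peer to repopulate /var/lib/glusterd/{peers,vols,...}
    gluster peer probe node2

    # Recreate the brick directories, then force-start volumes and let self-heal do the rest
    gluster volume start myvol force
    gluster volume heal myvol full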
19:23 JoeJulian That's why I hang out here, to help. :)
19:24 aocole Thanks :-)
19:25 aocole I'll have to see if it's possible in our setup to bring up a new machine with the same IP/hostname in order to do it this way. We're working with a few different IaaS providers
19:26 JoeJulian different IP *should* be ok, as long as you have the right hostname.
19:28 k4n0 joined #gluster
19:29 aocole but to the original question: If I want to replace a brick with a new one on a different host, I would need to replace-brick on every volume. Correct?
19:29 JoeJulian Correct
19:30 JoeJulian Besides, the destination brick couldn't necessarily be "guessed" for each volume.
19:31 JoeJulian Also, sometimes it's beneficial to do one-at-a-time and let self-heal finish between each.
19:31 JoeJulian Just to manage server load
19:31 aocole understood, makes sense
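And a sketch of the per-volume route the question started from, pointing each affected volume at a brick on a replacement host; volume names, hosts and paths are hypothetical:

    # Repeat for every volume that had a brick on the failed host
    for vol in vol1 vol2 vol3; do
        gluster volume replace-brick "$vol" \
            deadhost:/bricks/"$vol" newhost:/bricks/"$vol" commit force
        gluster volume heal "$vol" full
    done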
19:32 aocole is there a command to list volumes using brick X?
19:32 JoeJulian No. It would only be one volume though.
19:35 aocole hm-- I will need to look over our code again in that case. I thought we were provisioning multiple volumes/brick
19:35 glusterbot aocole: hm's karma is now -1
19:36 aocole sorry hm
19:36 Caveat4U joined #gluster
19:43 JoeJulian aocole: you may be provisioning multiple volumes per storage device, but a brick is a specific path.
19:44 aocole ok i'm going to read the docs and come back. thanks/sorry. I think folks around my office may have been using the term "brick" incorrectly
19:44 JoeJulian It happens.
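There is no dedicated command for mapping a brick path back to its (single) volume, but the information is all in gluster volume info; a small sketch, with a hypothetical host:path:

    # Print the volume whose brick list contains the given host:path
    gluster volume info | awk '/^Volume Name:/ {vol=$3}
                               /^Brick[0-9]+: server1:\/data\/brick1/ {print vol}'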
19:46 panina joined #gluster
19:50 k4n0 joined #gluster
19:51 k4n0 joined #gluster
20:00 plarsen joined #gluster
20:01 Caveat4U joined #gluster
20:02 irated halp..
20:03 irated What does this mean to you guys? https://gist.github.com/anonymous/a63db3a97e718a250e38ff71d7cfd6d7
20:03 glusterbot Title: Client Debug · GitHub (at gist.github.com)
20:03 mhulsman joined #gluster
20:04 JoeJulian looks like a dht miss
20:04 JoeJulian https://joejulian.name/blog/dht-misses-are-expensive/
20:04 glusterbot Title: DHT misses are expensive (at joejulian.name)
20:05 irated JoeJulian: risk of this? "Finally, you can set lookup-unhashed off. This will cause distribute to assume that if the file or linkfile cannot be found using the hash, the file doesn't exist."
20:07 JoeJulian I never looked in to what processes could cause that.
20:07 JoeJulian I wouldn't *expect* anything to, but Murphy's Law suggests that if there is a way, you'll probably find it at the most inopportune time.
20:08 JoeJulian If you're never renaming files or adding bricks, it seems pretty safe to me.
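For reference, the setting being discussed is an ordinary volume option; a sketch with a hypothetical volume name, keeping in mind JoeJulian's caveat that renames and add-brick make it risky:

    # Tell DHT not to fall back to a cluster-wide lookup when the hashed subvolume misses
    gluster volume set myvol cluster.lookup-unhashed off

    # Inspect the current value (on releases that have 'volume get')
    gluster volume get myvol cluster.lookup-unhashed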
20:09 irated JoeJulian: let me back up. Right, we are experiencing high load on a client and on one of the volumes on one datastore. It seems the servers' issue is only triggered after the heal daemon starts. The client's is ongoing.
20:09 irated different volumes too
20:11 irated We rename files a lot
20:11 JoeJulian irated: Try disabling client-side self-heals, cluster.{data,metadata,entry}-self-heal off
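The brace expansion above is shell shorthand for three separate volume options; expanded with a hypothetical volume name:

    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off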
20:11 JoeJulian What version?
20:12 irated sec
20:16 irated JoeJulian: PM.
20:17 jobig joined #gluster
20:17 jobig hello, could someone tell me which is better between NFS and GlusterFS servers in terms of HA and performance?
20:18 JoeJulian irated: Ok, it's not the one bug I was looking for.
20:18 irated Good to know
20:18 irated so i disabled cluster.{data,metadata,entry}-self-heal
20:18 irated and it's still blowing up
20:18 JoeJulian jobig: That's comparing apples to orchards.
20:19 irated brb going to my monitors been at the table all day
20:20 jobig @joejulian, I agree with you, but could you help me lay out the pros vs cons? I'm getting challenged. Some people want NFS instead of FUSE. They claim FUSE is very slow, and that it's in userspace...
20:21 JoeJulian Ah, that's different. I thought you were asking to compare a single nfs server with a glusterfs cluster.
20:22 Caveat4U joined #gluster
20:23 irated 1ping
20:23 irated left #gluster
20:23 irated joined #gluster
20:23 glusterbot irated: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
20:23 JoeJulian I think the biggest thing you got with nfs was the kernel cache, which didn't always get properly invalidated, so you could end up with stale metadata (and data in some rare cases).
20:23 JoeJulian With the fuse mount, you get higher overall throughput, which probably won't matter much for your use case.
20:23 JoeJulian heh
20:23 irated glusterbot: dont tell me what to do :)
20:23 irated Kidding
20:25 irated JoeJulian: Anymore ideas for troubleshooting this mess
20:25 irated ?
20:27 JoeJulian jobig: With features.cache-invalidation on, features.cache-invalidation-timeout set to a long time, and performance.md-cache-timeout set very long (30 minutes?) they should be pretty equal.
20:27 JoeJulian jobig: You could also suggest your developers use gfapi and avoid the kernel swaps altogether.
20:28 irated Are those valid in the version I'm on?
20:29 JoeJulian irated: Look for a busy log (without the debug log-level) first. If there's a problem it's unlikely to be a debug message.
20:29 JoeJulian irated: I think only performance.md-cache-timeout is in 3.7.
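A sketch of the three options JoeJulian names, with illustrative values (option availability, defaults and maximum timeouts differ between releases; per the note above, only performance.md-cache-timeout exists on 3.7):

    gluster volume set myvol features.cache-invalidation on
    gluster volume set myvol features.cache-invalidation-timeout 1800
    gluster volume set myvol performance.md-cache-timeout 1800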
20:31 jobig ok
20:33 Caveat4U joined #gluster
20:33 jobig @joejulian, so in terms of performance, the fuse client could scale better? But for single-node performance, NFS is slightly better, though it doesn't cope well with node failure? Is that correct?
20:34 JoeJulian That sounds about right.
20:35 jobig @joejulian.. ok great.. and what about support ?  is fuse well supported compared to NFS ?
20:35 jobig glusterfs (fuse)
20:35 luizcpg joined #gluster
20:36 JoeJulian About the same, if you use ganesha for nfs.
20:39 jobig @joejulian, what would be today's recommendation for building a file server (mainly for storing logs) with very high file integrity/stability, redundancy and adequate performance?
20:50 farhorizon joined #gluster
21:01 rwheeler joined #gluster
21:08 akanksha__ joined #gluster
21:13 panina joined #gluster
21:13 panina joined #gluster
21:15 irated What happens after a self-heal of a bunch of files completes?
21:20 hagarth joined #gluster
21:26 farhoriz_ joined #gluster
21:30 farhoriz_ joined #gluster
21:34 Caveat4U joined #gluster
21:35 Caveat4U_ joined #gluster
21:43 shyam joined #gluster
21:46 Wizek__ joined #gluster
21:48 shyam left #gluster
21:52 helsinki Hello all, when using LVM to create bricks, is it possible to extend the LVM volume live while the gluster volume is started, or do I need to stop the Gluster volume first?
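For reference, an LVM-backed brick can generally be grown online, without stopping the Gluster volume: extend the LV, then grow the filesystem. A minimal sketch assuming an XFS-formatted brick (device, VG/LV and mount names are hypothetical):

    # Grow the logical volume and the filesystem in one step
    lvextend -r -L +100G /dev/vg_bricks/lv_brick1

    # Or do it in two steps (XFS is grown via its mount point)
    lvextend -L +100G /dev/vg_bricks/lv_brick1
    xfs_growfs /bricks/brick1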
22:00 panina joined #gluster
22:10 Caveat4U joined #gluster
22:15 Wizek__ joined #gluster
22:27 gem joined #gluster
22:46 luizcpg joined #gluster
22:52 Caveat4U joined #gluster
22:58 JoeJulian jobig: Sorry, my answer to questions like that is always, "it depends". Even common use cases have a wide variety of expectations and constraints.
22:59 JoeJulian irated: After a bunch of files self-heal, they're done. Nothing further happens.
23:00 johnmilton joined #gluster
23:00 Klas joined #gluster
23:00 farhorizon joined #gluster
23:01 irated okay weird
23:01 irated not sure why we are seeing the high load then
23:01 irated Is it possible to rebuild/replace a node?
23:03 Caveat4U joined #gluster
23:05 JoeJulian Sure, but then you'll just have a high load on the new node while it self-heals everything.
23:05 * JoeJulian whacks his own hand for typing "node" without thinking.
23:10 Caveat4U joined #gluster
23:14 skylar joined #gluster
23:22 ws2k3 joined #gluster
23:31 farhoriz_ joined #gluster
23:33 Wizek_ joined #gluster
23:37 Marbug joined #gluster
23:49 Caveat4U joined #gluster
23:49 Caveat4U_ joined #gluster
23:51 Caveat4U joined #gluster
23:56 Caveat4U_ joined #gluster
23:59 Caveat4U joined #gluster
