IRC log for #gluster, 2016-03-07


All times shown according to UTC.

Time Nick Message
00:21 gbox joined #gluster
00:23 plarsen joined #gluster
01:06 nangthang joined #gluster
01:21 EinstCrazy joined #gluster
01:44 d0nn1e joined #gluster
02:04 calavera joined #gluster
02:08 baojg joined #gluster
02:09 harish joined #gluster
02:13 [1]Ethical joined #gluster
02:13 [1]Ethical Hi
02:13 glusterbot [1]Ethical: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
02:13 Lee1092 joined #gluster
02:13 [1]Ethical Can't wait for 3.8 , any ETA ?
02:35 haomaiwa_ joined #gluster
02:49 Pilgrim_ joined #gluster
02:53 tyler274 how do I remove a peer from the pool?
02:54 tyler274 as in the case of the peer no longer existing
03:01 haomaiwa_ joined #gluster
03:02 tyler274 nvm figured it out
03:12 nishanth joined #gluster
03:17 ovaistariq joined #gluster
03:29 kdhananjay joined #gluster
03:38 atinm joined #gluster
03:43 haomaiwang joined #gluster
03:44 nathwill joined #gluster
03:46 [1]Ethical @tyler274 you forced it ?
03:46 [1]Ethical -force
03:48 tyler274 yea
03:48 tyler274 I'm still figuring out many of the semantics of the cli
03:49 tyler274 and the mailing list posts take a bit longer to read and parse
03:49 tyler274 thank you
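For reference, the forced detach tyler274 alludes to is normally spelled as below; the hostname is a placeholder for the peer that no longer exists.

    gluster peer detach deadnode.example.com force
    gluster peer status

Running peer status afterwards should confirm the stale entry is gone.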
04:00 natarej joined #gluster
04:01 haomaiwang joined #gluster
04:02 shubhendu joined #gluster
04:05 gbox joined #gluster
04:09 haomai___ joined #gluster
04:10 haomai___ joined #gluster
04:11 haomaiwa_ joined #gluster
04:11 [1]Ethical @tyler274 np
04:12 haomaiwang joined #gluster
04:12 kanagaraj joined #gluster
04:13 haomaiwa_ joined #gluster
04:13 ramteid joined #gluster
04:14 haomaiwa_ joined #gluster
04:15 haomaiwang joined #gluster
04:16 haomaiwa_ joined #gluster
04:16 nangthang joined #gluster
04:17 haomaiwang joined #gluster
04:18 haomaiwa_ joined #gluster
04:18 Manikandan joined #gluster
04:19 ppai joined #gluster
04:19 haomaiwa_ joined #gluster
04:20 haomaiwang joined #gluster
04:21 sakshi joined #gluster
04:21 haomaiwa_ joined #gluster
04:22 haomaiwang joined #gluster
04:23 hchiramm joined #gluster
04:23 haomaiwa_ joined #gluster
04:24 haomaiwa_ joined #gluster
04:24 tyler274 what would be the command to reduce the replica count of a volume?
04:25 haomaiwa_ joined #gluster
04:25 tyler274 @[1]Ethical as in I have 1 brick on each server, with the 4th designated as an arbiter, but I want only 2 replicated copies on the cluster
04:26 haomaiwa_ joined #gluster
04:27 haomaiwa_ joined #gluster
04:28 haomaiwa_ joined #gluster
04:28 jiffin joined #gluster
04:29 haomaiwa_ joined #gluster
04:30 haomaiwa_ joined #gluster
04:31 haomaiwang joined #gluster
04:32 haomaiwang joined #gluster
04:33 haomaiwang joined #gluster
04:33 tyler274 perhaps I'm looking for a Distributed Replicated volume instead
04:34 haomaiwang joined #gluster
04:35 haomaiwa_ joined #gluster
04:36 nangthang joined #gluster
04:36 haomaiwa_ joined #gluster
04:37 haomaiwang joined #gluster
04:38 overclk joined #gluster
04:38 haomaiwa_ joined #gluster
04:39 haomaiwa_ joined #gluster
04:40 haomaiwa_ joined #gluster
04:40 tyler274 but I would still like to make use of the arbiter node...
04:41 haomaiwa_ joined #gluster
04:41 tyler274 gluster volume create important replica 3 arbiter 1 server1:/var/gluster/important/brick1 server2:/var/gluster/important/brick1 server3:/var/gluster/important/brick1 master:/var/gluster/important/brick1 fails due to "number of bricks is not a multiple of replica count"
04:42 haomaiwa_ joined #gluster
04:43 pppp joined #gluster
04:43 kshlm joined #gluster
04:43 haomaiwang joined #gluster
04:43 nehar joined #gluster
04:44 haomaiwa_ joined #gluster
04:45 haomaiwa_ joined #gluster
04:46 haomaiwa_ joined #gluster
04:47 tyler274 maybe a Distributed Dispersed Volume?
04:47 haomaiwang joined #gluster
04:47 RameshN joined #gluster
04:48 haomaiwang joined #gluster
04:49 kdhananjay tyler274: I presume you want a 3-way replicated volume with arbiter support.
04:49 P0w3r3d joined #gluster
04:49 kdhananjay tyler274: If that's indeed the case, you need to specify only three brick paths, and not four.
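In other words, a sketch of the corrected command kdhananjay is describing, reusing tyler274's paths and keeping master as the arbiter (which data brick to drop is a judgment call, not something stated in the log):

    gluster volume create important replica 3 arbiter 1 \
        server1:/var/gluster/important/brick1 \
        server2:/var/gluster/important/brick1 \
        master:/var/gluster/important/brick1

The last brick listed in each triplet becomes the arbiter.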
04:49 haomaiwa_ joined #gluster
04:50 haomaiwa_ joined #gluster
04:51 haomaiwa_ joined #gluster
04:52 haomaiwang joined #gluster
04:52 tyler274 @kdhananjay I have a volume that I want to exist on 2 servers to guard against one point of failure, 3 storage arrays, and 1 client server which mounts gluster. to optimize for space it would be better if the files in the volume could be spread across all 3 storage arrays but only exist on 2 of them at any time.
04:53 haomaiwa_ joined #gluster
04:54 haomaiwang joined #gluster
04:54 tyler274 I'd like the arbiter to be the client server as that is smaller and will be ssd backed
04:55 haomaiwang joined #gluster
04:55 gem joined #gluster
04:56 haomaiwa_ joined #gluster
04:56 tyler274 although I could go without an arbiter I'm hesitant to as I have had issues with metadata loss before on moosefs
04:57 haomaiwang joined #gluster
04:57 kdhananjay tyler274: Hmm I don't completely understand your reqs. I will pass on your questions to the arbiter dev, once he's online. You can talk to him then.
04:58 EinstCrazy joined #gluster
04:58 haomaiwa_ joined #gluster
04:58 pur joined #gluster
04:58 tyler274 @kdhananjay thanks.
04:59 haomaiwa_ joined #gluster
05:00 haomaiwa_ joined #gluster
05:01 haomaiwa_ joined #gluster
05:02 haomaiwang joined #gluster
05:03 haomaiwang joined #gluster
05:04 haomaiwa_ joined #gluster
05:05 haomaiwa_ joined #gluster
05:06 haomaiwang joined #gluster
05:07 haomaiwang joined #gluster
05:08 haomaiwa_ joined #gluster
05:09 haomaiwa_ joined #gluster
05:10 haomaiwa_ joined #gluster
05:11 Manikandan_ joined #gluster
05:11 haomaiwang joined #gluster
05:12 haomaiwang joined #gluster
05:13 haomaiwa_ joined #gluster
05:13 ramky joined #gluster
05:14 haomaiwang joined #gluster
05:15 haomaiwang joined #gluster
05:16 haomaiwa_ joined #gluster
05:17 ndarshan joined #gluster
05:17 haomaiwa_ joined #gluster
05:18 haomaiwa_ joined #gluster
05:19 haomaiwang joined #gluster
05:19 ovaistariq joined #gluster
05:20 haomaiwa_ joined #gluster
05:21 haomaiwang joined #gluster
05:22 haomaiwang joined #gluster
05:23 Gaurav_ joined #gluster
05:23 rafi joined #gluster
05:23 haomaiwang joined #gluster
05:24 ahino joined #gluster
05:24 ppai joined #gluster
05:24 haomaiwang joined #gluster
05:25 haomaiwa_ joined #gluster
05:25 nehar joined #gluster
05:26 haomaiwang joined #gluster
05:27 haomaiwang joined #gluster
05:28 haomaiwang joined #gluster
05:29 haomaiwa_ joined #gluster
05:30 haomaiwang joined #gluster
05:31 R0ok_ joined #gluster
05:31 haomaiwa_ joined #gluster
05:32 poornimag joined #gluster
05:32 haomaiwang joined #gluster
05:33 haomaiwang joined #gluster
05:34 haomaiwa_ joined #gluster
05:35 haomaiwa_ joined #gluster
05:36 haomaiwa_ joined #gluster
05:37 haomaiwa_ joined #gluster
05:38 haomaiwa_ joined #gluster
05:38 karnan joined #gluster
05:39 haomaiwa_ joined #gluster
05:40 haomaiwa_ joined #gluster
05:41 haomaiwa_ joined #gluster
05:42 haomaiwa_ joined #gluster
05:43 nehar joined #gluster
05:43 haomaiwang joined #gluster
05:44 haomaiwa_ joined #gluster
05:45 haomaiwa_ joined #gluster
05:46 16WAACSXB joined #gluster
05:47 haomaiwa_ joined #gluster
05:48 haomaiwang joined #gluster
05:49 haomaiwa_ joined #gluster
05:50 karthik__ joined #gluster
05:50 haomaiwa_ joined #gluster
05:51 arcolife joined #gluster
05:51 haomaiwa_ joined #gluster
05:52 haomaiwa_ joined #gluster
05:52 tyler274 @kdhananjay the reqs are to have more storage space than any single node can provide, with 3 nodes, and redundancy such that 1 node can die and there is still a working copy available on another node.
05:53 tyler274 with something like lizard or moosefs I can configure a replication goal of 2
05:53 haomaiwa_ joined #gluster
05:53 tyler274 and the volume (directory is moose/lizard) will exist on 2 of the nodes at any time
05:54 haomaiwa_ joined #gluster
05:55 haomaiwang joined #gluster
05:56 ashiq joined #gluster
05:56 Saravanakmr joined #gluster
05:56 tyler274 the arbiter is just a desired insurance of metadata backup and to protect against split brain
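Gluster's closest analogue to a moosefs-style "replication goal of 2" across three storage nodes is a distribute-replicate volume with replica 2 and two bricks per node, chained so that each replica pair spans two different machines. A minimal sketch with hypothetical hosts and paths (it does not cover the arbiter part, which in gluster would turn each pair into a replica 3 arbiter 1 triplet):

    gluster volume create data replica 2 \
        nodeA:/bricks/b1 nodeB:/bricks/b1 \
        nodeB:/bricks/b2 nodeC:/bricks/b2 \
        nodeC:/bricks/b3 nodeA:/bricks/b3

Each file then lives on exactly one of the three pairs, i.e. on two of the three nodes.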
05:56 7JTAADPFL joined #gluster
05:57 nishanth joined #gluster
05:57 haomaiwa_ joined #gluster
05:58 kotreshhr joined #gluster
05:58 haomaiwa_ joined #gluster
05:59 haomaiwa_ joined #gluster
06:00 haomaiwa_ joined #gluster
06:00 kdhananjay joined #gluster
06:01 tyler274 what's the difference between "disperse-data" and just "disperse"
06:01 haomaiwa_ joined #gluster
06:01 tyler274 ah I see they're the same
06:02 haomaiwa_ joined #gluster
06:03 haomaiwa_ joined #gluster
06:03 gowtham joined #gluster
06:04 jiffin1 joined #gluster
06:04 haomaiwa_ joined #gluster
06:05 haomaiwa_ joined #gluster
06:05 Bhaskarakiran joined #gluster
06:06 haomaiwang joined #gluster
06:07 5EXAAB7VD joined #gluster
06:08 haomaiwang joined #gluster
06:09 16WAACS0V joined #gluster
06:10 haomaiwang joined #gluster
06:11 haomaiwa_ joined #gluster
06:12 haomaiwa_ joined #gluster
06:13 haomaiwa_ joined #gluster
06:14 haomaiwang joined #gluster
06:15 haomaiwa_ joined #gluster
06:15 atinm joined #gluster
06:16 haomaiwang joined #gluster
06:17 haomaiwa_ joined #gluster
06:18 haomaiwang joined #gluster
06:19 ppai joined #gluster
06:19 7GHAAGWD8 joined #gluster
06:20 haomaiwa_ joined #gluster
06:21 haomaiwang joined #gluster
06:22 kotreshhr joined #gluster
06:22 haomaiwa_ joined #gluster
06:23 atalur joined #gluster
06:23 7YUAAHB86 joined #gluster
06:24 haomaiwang joined #gluster
06:25 [Enrico] joined #gluster
06:25 haomaiwa_ joined #gluster
06:26 haomaiwang joined #gluster
06:27 haomaiwang joined #gluster
06:28 haomaiwa_ joined #gluster
06:29 haomaiwa_ joined #gluster
06:30 haomaiwang joined #gluster
06:31 Gnomethrower joined #gluster
06:31 haomaiwang joined #gluster
06:32 haomaiwa_ joined #gluster
06:33 18VAADM4W joined #gluster
06:34 haomaiwa_ joined #gluster
06:35 haomaiwa_ joined #gluster
06:35 ekuric joined #gluster
06:36 haomaiwang joined #gluster
06:37 haomaiwa_ joined #gluster
06:38 haomaiwa_ joined #gluster
06:39 haomaiwang joined #gluster
06:40 haomaiwa_ joined #gluster
06:41 haomaiwa_ joined #gluster
06:42 haomaiwa_ joined #gluster
06:43 haomaiwa_ joined #gluster
06:44 haomaiwa_ joined #gluster
06:45 haomaiwa_ joined #gluster
06:46 haomaiwang joined #gluster
06:47 haomaiwang joined #gluster
06:48 16WAACS6O joined #gluster
06:49 haomaiwang joined #gluster
06:50 haomaiwang joined #gluster
06:51 haomaiwang joined #gluster
06:52 haomaiwa_ joined #gluster
06:52 vmallika joined #gluster
06:53 haomaiwa_ joined #gluster
06:53 atalur joined #gluster
06:54 baojg joined #gluster
06:54 haomaiwa_ joined #gluster
06:54 nehar joined #gluster
06:55 haomaiwa_ joined #gluster
06:56 haomaiwa_ joined #gluster
06:57 haomaiwang joined #gluster
06:58 haomaiwa_ joined #gluster
06:59 haomaiwa_ joined #gluster
06:59 rwheeler joined #gluster
07:00 haomaiwang joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 7GHAAGWMD joined #gluster
07:03 haomaiwa_ joined #gluster
07:04 haomaiwa_ joined #gluster
07:05 haomaiwa_ joined #gluster
07:06 7F1AAE61A joined #gluster
07:07 haomaiwang joined #gluster
07:08 haomaiwa_ joined #gluster
07:09 haomaiwang joined #gluster
07:10 haomaiwang joined #gluster
07:11 haomaiwang joined #gluster
07:12 haomaiwang joined #gluster
07:13 7YUAAHCIQ joined #gluster
07:14 haomaiwa_ joined #gluster
07:15 haomaiwang joined #gluster
07:16 arcolife joined #gluster
07:16 haomaiwa_ joined #gluster
07:17 haomaiwa_ joined #gluster
07:18 haomaiwang joined #gluster
07:19 haomaiwang joined #gluster
07:20 ovaistariq joined #gluster
07:20 haomaiwa_ joined #gluster
07:21 haomaiwa_ joined #gluster
07:22 haomaiwa_ joined #gluster
07:23 kdhananjay joined #gluster
07:23 haomaiwa_ joined #gluster
07:23 mhulsman joined #gluster
07:24 haomaiwa_ joined #gluster
07:24 overclk joined #gluster
07:25 kotreshhr joined #gluster
07:26 jiffin1 joined #gluster
07:37 atinm joined #gluster
07:39 ppai joined #gluster
07:39 DV joined #gluster
07:39 vmallika joined #gluster
07:41 hackman joined #gluster
07:43 csaba joined #gluster
07:43 Apeksha joined #gluster
07:45 DV joined #gluster
07:47 unlaudable joined #gluster
07:50 DV joined #gluster
07:58 armyriad joined #gluster
08:01 haomaiwa_ joined #gluster
08:04 deniszh joined #gluster
08:04 jri joined #gluster
08:08 aravindavk joined #gluster
08:11 DV joined #gluster
08:15 [diablo] joined #gluster
08:20 nix0ut1aw joined #gluster
08:22 ivan_rossi joined #gluster
08:26 baojg joined #gluster
08:28 overclk joined #gluster
08:35 mhulsman joined #gluster
08:44 baojg joined #gluster
08:44 haomaiwa_ joined #gluster
08:53 skoduri joined #gluster
09:04 bitchecker hi @ all
09:04 bitchecker i've reported this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1312421
09:04 glusterbot Bug 1312421: medium, medium, ---, ndevos, NEW , glusterfs mount-point return permission denied
09:04 bitchecker can anyone give me some help?
09:09 ekuric joined #gluster
09:12 jiffin ndevos: ^^
09:15 haomaiwa_ joined #gluster
09:15 robb_nl joined #gluster
09:17 muneerse joined #gluster
09:17 spalai joined #gluster
09:18 lh joined #gluster
09:33 atalur_ joined #gluster
09:33 kdhananjay joined #gluster
09:47 gem joined #gluster
10:00 mbukatov joined #gluster
10:01 haomaiwang joined #gluster
10:04 rastar bitchecker: the brick processes don't seem to have permission to perform setxattr on brick mount paths
10:05 bitchecker rastar, thanks for reply
10:05 bitchecker how can i solve it?
10:05 rastar bitchecker: also, you seem to be using container for clients, could you first try mounting on the server itself?
10:06 bitchecker so, there are 2 servers
10:06 bitchecker they can mount volume
10:06 bitchecker 3 clients
10:06 bitchecker they can mount volume
10:07 bitchecker both servers and clients can't write on it
10:07 rastar bitchecker: ok, so mount succeeds but operations don't
10:08 rastar bitchecker: what is the type of filesystem on bricks?
10:08 bitchecker xfs
10:08 rastar bitchecker: and user performing writes on mount point?
10:09 rastar bitchecker: did you try as root user?
10:10 bitchecker for read only i think that is not so much useful
10:10 bitchecker yes
10:10 rastar even read failed?
10:10 bitchecker also root get Permission Denied
10:10 bitchecker i can't read because i can't create a file on volume! xD
10:10 poornimag joined #gluster
10:12 rastar bitchecker: :), yes foolish of me to ask that question
10:12 rastar anoopcs: ^ any problems you know with trash xlator that can cause this?
10:13 ppai joined #gluster
10:13 rastar bitchecker: please update the bug with 1. gluster volume status 2. df -h on servers 3. ps aux | grep gluster on servers
10:15 rastar bitchecker: Any reason you have set server.root-squash: ON
10:15 bitchecker you're logged on bugzilla as rastar?
10:15 rastar ?
10:16 harish_ joined #gluster
10:16 bitchecker can i say on my update that you say me that?
10:16 rastar bitchecker: You can use rastar
10:17 rastar bitchecker: This might fix the issue "gluster vol set volume server.root-squash off". Give it a try.
10:20 nachosmooth joined #gluster
10:25 rastar bitchecker: after the volume set, you will have to restart the volume
10:25 rastar bitchecker: stop and start
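Putting rastar's suggestion together, with the volume name as a placeholder (as it turns out just below, the restart may not strictly be needed):

    gluster volume set myvol server.root-squash off
    gluster volume stop myvol
    gluster volume start myvol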
10:25 bitchecker rastar, it works
10:25 bitchecker without restart! °_°
10:26 rastar bitchecker: I did not remember if restart was required so added that.
10:26 bitchecker it worked for me
10:26 bitchecker but i also stopped and started
10:26 bitchecker now
10:26 rastar bitchecker: cool! that particular setting converts all requests from uid/gid 0 to anon uid.
10:27 bitchecker *_*
10:27 rastar bitchecker: and by default the anon uid won't have permission to write.
10:27 bitchecker but for security is a good choice?
10:28 rastar bitchecker: I don't think so. I am not the best authority on that, but the use-case for that option is the following:
10:29 rastar 1. as root, you set mode bits or acls on the dirs in your volume for named users
10:29 ctria joined #gluster
10:29 rastar 2. then you set this root-squash to on
10:30 rastar 3. that way only users having their acls set on the dirs would be allowed to write in respective dirs
10:30 rastar 4. anyone logging in and writing as root user would be denied writes.
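A hedged sketch of that sequence, with a hypothetical user, volume and mount path (the client generally needs to be mounted with the acl option for setfacl to take effect):

    # as root, grant a named user access to their directory on the mounted volume
    setfacl -m u:alice:rwx /mnt/myvol/projects/alice
    # then squash root, so writes arriving as uid 0 are mapped to the anonymous uid
    gluster volume set myvol server.root-squash on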
10:31 jiffin1 joined #gluster
10:32 rastar bitchecker: please update the bug with the solution so that we can close it.
10:32 kdhananjay joined #gluster
10:40 skoduri joined #gluster
10:45 bitchecker i'll try to use it before update and close
10:48 bitchecker rastar, i need gluster for persistence on my kubernetes cluster
10:48 baojg joined #gluster
10:49 bitchecker i'm trying to use the volume as a mount-point for containers
10:51 bitchecker rastar, i see: FailedMount        {kubelet node02}    Unable to mount volumes for pod "mysql-v5v1g_default": glusterfs: mount failed: exit status 1
10:52 haomaiwang joined #gluster
10:54 mdavidson joined #gluster
10:55 ppai joined #gluster
10:57 tyler274 can ovirt function as a web status page like lizardfs/moosefs's web ui?
10:58 atalur_ joined #gluster
10:59 bitchecker rastar, i see that error only on one node of the cluster; on the others it seems to work
11:01 gem joined #gluster
11:01 haomaiwa_ joined #gluster
11:02 Debloper joined #gluster
11:13 kshlm joined #gluster
11:38 ppai joined #gluster
11:40 mhulsman joined #gluster
11:47 jiffin1 joined #gluster
11:49 spalai joined #gluster
12:01 haomaiwang joined #gluster
12:12 mhulsman joined #gluster
12:15 ira joined #gluster
12:19 johnmilton joined #gluster
12:21 EinstCrazy joined #gluster
12:33 armyriad joined #gluster
12:33 sakshi joined #gluster
12:45 EinstCrazy joined #gluster
12:51 kanagaraj joined #gluster
12:51 sebamontini joined #gluster
12:51 kanagaraj joined #gluster
13:01 haomaiwang joined #gluster
13:03 bluenemo joined #gluster
13:17 kdhananjay joined #gluster
13:22 Lee1092 joined #gluster
13:22 ovaistariq joined #gluster
13:23 plarsen joined #gluster
13:31 EinstCrazy joined #gluster
13:51 EinstCrazy joined #gluster
13:57 unclemarc joined #gluster
13:57 shubhendu joined #gluster
13:58 anmol joined #gluster
13:58 shubhendu anmol, hi
14:11 haomaiwa_ joined #gluster
14:16 mpietersen joined #gluster
14:32 DV__ joined #gluster
14:32 rafi1 joined #gluster
14:33 dgandhi joined #gluster
14:34 rwheeler joined #gluster
14:35 dgandhi joined #gluster
14:37 kotreshhr left #gluster
14:37 B21956 joined #gluster
14:37 dgandhi joined #gluster
14:38 moss joined #gluster
14:39 dgandhi joined #gluster
14:40 hamiller joined #gluster
14:41 ayma joined #gluster
14:43 dabukalam joined #gluster
14:43 skylar joined #gluster
14:52 rafi joined #gluster
15:01 haomaiwa_ joined #gluster
15:02 shaunm joined #gluster
15:06 bennyturns joined #gluster
15:12 chirino_m joined #gluster
15:15 spalai joined #gluster
15:23 and` joined #gluster
15:23 robb_nl joined #gluster
15:35 DV__ joined #gluster
15:36 shubhendu joined #gluster
15:37 nishanth joined #gluster
15:37 farhorizon joined #gluster
15:38 wushudoin joined #gluster
15:39 coredump joined #gluster
15:40 wushudoin joined #gluster
15:41 jiffin joined #gluster
15:41 jiffin joined #gluster
15:44 hagarth joined #gluster
15:44 squizzi joined #gluster
15:49 amye joined #gluster
16:01 haomaiwa_ joined #gluster
16:13 shaunm joined #gluster
16:14 rafi1 joined #gluster
16:20 spalai joined #gluster
16:21 Gaurav_ joined #gluster
16:22 EinstCrazy joined #gluster
16:22 timotheus1_ joined #gluster
16:28 shyam joined #gluster
16:30 Akee joined #gluster
16:37 arcolife joined #gluster
16:41 gem joined #gluster
16:43 cdhouch joined #gluster
16:44 cdhouch Hey guys, crazy question... I had an admin do an rm -rf through an nfs mount to the entire gluster volume, that's around 100TB of data.  On the extreme off chance, is there some magic way to get it back?
16:45 cdhouch it's distributed but not replicated
16:46 msvbhat cdhouch: Huh, Which version? Does the version have trash xlator?
16:46 msvbhat https://gluster.readthedocs.org/en/release-3.7.0-1/Features/trash_xlator/
16:46 glusterbot Title: trash_xlator - Gluster Docs (at gluster.readthedocs.org)
16:47 msvbhat cdhouch: ^^
16:47 cdhouch I'll check
16:47 msvbhat cdhouch: see if that helps. I do not know any other clean method
16:47 cdhouch drat.  glusterfs 3.6.1
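For later readers: the trash xlator only helps if it was enabled before the deletion, and 3.6.1 predates it anyway; on 3.7 and newer it is switched on per volume, roughly:

    gluster volume set myvol features.trash on

Deleted files are then moved into the trash directory (.trashcan by default) instead of being unlinked outright.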
16:48 JoeJulian http://linuxwebdev.blogspot.com/2005/06/xfs-undelete-howto-how-to-undelete.html
16:48 glusterbot Title: multi-core workouts (at linuxwebdev.blogspot.com)
16:49 cdhouch Thanks Joe, I'll check into that one.  You've saved my bacon more than once before :)
16:50 JoeJulian Good luck!
16:51 JoeJulian I always love being the hero when someone else screws up. :D
16:53 cdhouch Hopefully see you in SF in June heh.
17:00 farhoriz_ joined #gluster
17:01 haomaiwa_ joined #gluster
17:18 nathwill joined #gluster
17:19 Vicomte joined #gluster
17:23 pur joined #gluster
17:24 ovaistariq joined #gluster
17:28 robb_nl joined #gluster
17:46 rafi joined #gluster
17:52 ninjaryan joined #gluster
17:52 spalai joined #gluster
17:53 EinstCrazy joined #gluster
18:01 haomaiwang joined #gluster
18:04 bennyturns joined #gluster
18:10 sebamontini joined #gluster
18:17 robb_nl joined #gluster
18:31 karnan joined #gluster
18:44 dlambrig joined #gluster
18:59 ahino joined #gluster
19:01 haomaiwa_ joined #gluster
19:10 ovaistariq joined #gluster
19:12 ovaistariq joined #gluster
19:15 dlambrig left #gluster
19:17 squizzi joined #gluster
19:19 hagarth joined #gluster
19:25 _Bryan_ joined #gluster
19:32 7JTAADU3K joined #gluster
19:50 dlambrig joined #gluster
19:51 calavera joined #gluster
20:01 haomaiwa_ joined #gluster
20:04 hackman joined #gluster
20:07 csterling joined #gluster
20:08 csterling Hey all - I’m trying to understand some gluster semantics, and I’m hoping the gluster gurus can help me.
20:08 calavera joined #gluster
20:10 csterling We have 4 gluster clients, and they are all different sizes. Right now, the smallest one is running out of space. I understand that gluster replicates files across nodes somewhat akin to RAID, but I don’t understand why it doesn’t distribute more across the larger nodes
20:10 csterling Why is the smallest node the bottleneck?
20:11 JoeJulian First, I'm going to correct you slightly as I believe you have four gluster *servers* that you're concerned with. Clients being full would be irrelevant. :)
20:12 JoeJulian DHT (Distributed hash table) distribution is done by creating a hash of the filename and comparing it with a hash map on distribute subvolumes to determine which subvolume it should be part of.
20:13 JoeJulian If the dht target is full, only then does it choose to create the file on a different subvolume.
20:13 JoeJulian @lucky dht misses are expensive
20:13 glusterbot JoeJulian: https://joejulian.name/blog/dht-misses-are-expensive/
20:13 JoeJulian See ^ that article for a detailed version of how that works.
20:15 ovaistar_ joined #gluster
20:15 JoeJulian Very large files (like 20T vm images on 56T bricks <grumble, grumble>) can easily skew brick utilization.
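The hash map JoeJulian refers to is stored per directory as an extended attribute on each brick; one way to peek at the layout ranges, run directly on a brick (the path is a placeholder):

    getfattr -n trusted.glusterfs.dht -e hex /data/brick1/some/dir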
20:15 csterling So, if the server was part of a popular hash location, then it could get overwhelmed - I understand
20:16 csterling Is there a way to alter the DHT default behavior? Retarget it somehow?
20:16 JoeJulian A rebalance might help.
20:16 B21956 joined #gluster
20:16 csterling Gotcha -
20:16 JoeJulian Otherwise, there are ways to manually manipulate the hash map.
20:17 csterling Which sounds…dangerous?
20:17 JoeJulian left #gluster
20:17 csterling JoeJulian++
20:17 glusterbot csterling: JoeJulian's karma is now 25
20:17 JoeJulian joined #gluster
20:17 JoeJulian Not that dangerous, no.
20:18 csterling I like the idea of the secondary cache table you suggest in your article to improve access time
20:18 ovaistariq joined #gluster
20:18 deniszh joined #gluster
20:19 csterling I’m assuming (possibly incorrectly), that would play a good part in manipulating the default behavior
20:23 csterling @JoeJulian - if I do a rebalance, that would help for now, but for the future, this server would probabilistically get overwhelmed again without a change to the DHT, would that be a correct statement?
20:24 JoeJulian Seems probably
20:24 JoeJulian probable even
20:24 JoeJulian since I can't seem to type.
20:24 csterling I’ve made more typos than you have and you’ve done a majority of the conversation
20:24 JoeJulian I always recommend similar brick sizes. Sometimes that's easiest to achieve by splitting up larger bricks in to multiple smaller ones.
20:24 csterling So, errors per word, you’re clean
20:25 rafi joined #gluster
20:25 JoeJulian That's because I self-edit a lot. :D
20:26 csterling I’ll look into breaking those up and see if that’s feasible - we’ll probably just take them down and grow their volumes, or rotate a new (larger) partition in
20:27 JoeJulian https://github.com/gluster/glusterfs/blob/master/extras/rebalance.py
20:27 glusterbot Title: glusterfs/rebalance.py at master · gluster/glusterfs · GitHub (at github.com)
20:28 JoeJulian There's a tool that jdarcy wrote for rebalancing based on size.
20:30 ctria joined #gluster
20:31 csterling Awesome
20:31 csterling Thank you so much - I’ll go play with this knowledge/power
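For completeness, the plain rebalance JoeJulian suggested earlier (as opposed to jdarcy's size-aware script) is just the stock CLI operation, with the volume name as a placeholder:

    gluster volume rebalance myvol start
    gluster volume rebalance myvol status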
20:37 ayma joined #gluster
20:50 dlambrig joined #gluster
20:52 deniszh joined #gluster
20:56 ninjaryan joined #gluster
20:56 cyberbootje joined #gluster
21:00 csterling_ joined #gluster
21:01 haomaiwa_ joined #gluster
21:07 farhoriz_ joined #gluster
21:09 farhorizon joined #gluster
21:10 hagarth joined #gluster
21:10 farhoriz_ joined #gluster
21:10 farhorizon joined #gluster
21:14 farhorizon joined #gluster
21:25 ovaistariq joined #gluster
21:29 ovaistariq joined #gluster
21:29 farhorizon joined #gluster
21:37 csterling @JoeJulian - would there be any reason why I couldn’t create a second brick on the same server in a separate disk? So 3 servers with 1 partition and 1 server with 2 partitions
21:37 JoeJulian No problem. Just make sure you keep your replicas on separate computers (assuming you're replicating).
21:38 csterling Makes sense :-)
21:39 post-factum btw
21:39 post-factum JoeJulian: how arbiter brick(s) fit into distributed-replicated scheme?
21:40 post-factum should the arbiter node have the same number of distributed bricks as the replica nodes do?
21:40 JoeJulian Arbiters are a metadata-only store. They will have no block data.
21:40 post-factum i know that
21:40 post-factum that is not my question
21:40 bennyturns joined #gluster
21:40 post-factum :)
21:41 post-factum i mean, you know, during volume creation bricks order matters
21:41 post-factum replica1.1 replica2.1 arbiter3.1 — this is classic replica with arbiter
21:41 post-factum what if i want to add more bricks to make it distributed?
21:42 JoeJulian Since they can only be replica 2 + 1 arbiter, all dht subvolumes must have 2 replica + 1 arbiter. (I was typing this out as you were clarifying. I'm going to press enter anyway just to get it out there).
21:42 post-factum how arbiter bricks fit?
21:42 post-factum oh sorry for speeding things :)
21:42 JoeJulian You would add 2+1 again.
21:42 post-factum so, on arbiter node i should keep separate folders for each arbiter brick as usual
21:42 post-factum like for ordinary replica
21:42 JoeJulian The question that I haven't given any thought... can you use the same arbiter for the second pair of replica...
21:43 post-factum that is what i thought about
21:43 post-factum but it doesn't sound like a good idea
21:43 JoeJulian You probably could, but think about how it's going to affect your availability.
21:43 post-factum "same arbiter" — you mean brick?
21:43 JoeJulian No, server.
21:44 JoeJulian Multiple arbiter bricks on one server.
21:44 post-factum ah, server is ok. i talk about bricks
21:44 JoeJulian I can't think of any compelling reason why you shouldn't use one server for multiple arbiter bricks.
21:44 post-factum that is what i'm talking about :)
21:45 post-factum but the question was whether i should create 2+1, or 1 in total is enough
21:45 JoeJulian And, of course, the bricks are just a directory. They could share the same phyisical storage device. Probably even the same filesystem.
21:45 post-factum it seems i cannot just use one arbiter brick for all subvolumes
21:45 post-factum yep, sure
21:45 post-factum so, the scheme is:
21:45 JoeJulian No, one arbiter brick per dht subvolume.
21:46 post-factum replica1.1 replica2.1 arbiter3.1 replica1.2 replica2.2 arbiter3.2
21:46 post-factum and arbiter3.1/arbiter3.2 just reside on same device or partition or whatever
21:46 JoeJulian sure
21:47 post-factum ok, thx then
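Written out as a create command, the layout post-factum sketches would look roughly like this, with hypothetical hosts and paths; the third brick of each triplet becomes the arbiter, and both arbiter bricks sit on the same arbiter host:

    gluster volume create myvol replica 3 arbiter 1 \
        node1:/bricks/r1 node2:/bricks/r1 arb:/bricks/arb1 \
        node1:/bricks/r2 node2:/bricks/r2 arb:/bricks/arb2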
21:48 post-factum anyway, i did some research on arbiter size, and pranithk told me they will do arbiter hot-add before i finish my research :)
21:48 post-factum where is my arbiter hotplug :)?
21:52 ovaistariq joined #gluster
21:53 post-factum btw, wanna look at glusterfs memory consumption before and after memleak patches?
21:54 post-factum we use a VM to re-export the GlusterFS volume via Samba for Windows users
21:54 deniszh joined #gluster
21:56 post-factum http://i.piccy.info/i9/397acde87be756eedd41a97efb3545f5/1457387800/47977/951663/vm_samba_memory.png
21:57 post-factum guess what happened just before 28.02
22:00 ovaistariq joined #gluster
22:01 JoeJulian 3.7.8?
22:01 post-factum nope, unfortunately
22:01 haomaiwa_ joined #gluster
22:01 post-factum 3.7.6 + cherry-picked fixes
22:01 JoeJulian Well that's still a pretty good sign.
22:01 post-factum 3.7.8 suffers from performance degradation
22:02 post-factum so plan to upgrade to 3.7.9 if it is ok
22:02 post-factum but now 3.7.6 + patches just works
22:05 post-factum another very interesting chart:
22:06 post-factum http://i.piccy.info/i9/79a52fcda54349f4dedf32b3a0c8d86a/1457388358/44082/951663/glfs_fuse_vs_api.png
22:06 post-factum this server still uses glusterfs version that leaks. all the jumps are OOM trigger actually
22:06 post-factum but then we replaced fuse mount with api calls
22:06 post-factum and you see
22:07 JoeJulian Interesting.
22:08 post-factum the issue is that it is el6, which i don't like, and it's quite difficult to build packages for it. i guess i will just update it to 3.7.9 as well
22:08 post-factum but that is how glusterfs 3.7.6 leaks in real production. obvious picture
22:08 JoeJulian Pfft.. it's easy to build packages for any EL.
22:08 post-factum yep, but i prefer to kill el6 at all
22:08 JoeJulian Well, there is always that.
22:08 post-factum we have migrated almost all the servers to el7
22:09 post-factum and i should fire up el6 instance just to build gluster packages... no way, die in hell
22:09 JoeJulian Nah, use koji
22:09 post-factum or opensuse build system
22:10 post-factum (which i do use for kernel builds)
22:10 JoeJulian I've never used theirs.
22:11 post-factum pretty handy. build power for free
22:11 JoeJulian same with koji
22:12 post-factum i mean, obs builds packages in their cloud. does koji do the same?
22:12 JoeJulian yes
22:12 post-factum umm, sounds good then
22:12 JoeJulian https://fedoraproject.org/wiki/Using_the_Koji_build_system
22:12 glusterbot Title: Using the Koji build system - FedoraProject (at fedoraproject.org)
22:13 post-factum but that doesn't change that i hate building rpms at all. kinda arch guy, love pkgbuild, you know
22:13 JoeJulian I actually do know.
22:13 post-factum will try to build glusterfs with koji, thx
22:13 post-factum at least just for fun
22:14 JoeJulian I'm usually the one that marks the gluster package out-of-date in the pacman repo.
22:15 post-factum :D
22:15 post-factum using arch?
22:15 JoeJulian extensively
22:16 post-factum mate!
22:16 JoeJulian Hopefully within the next few months, we'll be able to share our arch wayback machine where we're snapshotting the daily mirror.
22:17 post-factum do you maintain any packages?
22:17 JoeJulian Nothing upstream. We re-package a number of things internally.
22:18 post-factum i look after some handy stuff in aur. and out-of-tree pf-kernel, of course
22:19 post-factum nothing upstream as well
22:24 post-factum .ощшт дщк
22:24 post-factum oops, sorry
22:28 ninjaryan joined #gluster
22:33 deniszh joined #gluster
22:34 farhorizon joined #gluster
22:36 ovaistariq joined #gluster
22:38 dblack joined #gluster
22:46 farhorizon joined #gluster
22:58 farhorizon joined #gluster
23:01 haomaiwa_ joined #gluster
23:12 jwang_ joined #gluster
23:20 csterling Hey @JoeJulian - last question of the day - promise -
23:21 csterling I know that when I add a new brick to a distributed replicated file system, I need to add a second brick to be the replica
23:21 csterling But do you know where I can find documentation for specifying that the second brick is that replica?
23:21 JoeJulian @brick order
23:22 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
23:25 shyam joined #gluster
23:27 csterling The issue I have is I’m expanding an existing volume
23:28 csterling so when I run the add command, it will oscillate between being a master and slave depending on x%2 basically?
23:29 csterling I have 4 bricks right now
23:30 csterling Ahhh - I just have to add both of them at the same time on the add
23:30 * csterling hangs his head in shame
23:35 JoeJulian Right
23:35 JoeJulian (Was on the phone)
23:36 JoeJulian And there's no oscillating. The only master/slave relationship is for geo-sync.
23:36 JoeJulian replication happens at the client. They are live replicas.
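So for csterling's expansion, bricks are added to an existing replica 2 volume in pairs, and the pairing follows the order on the command line; a sketch with placeholder hosts, typically followed by a rebalance:

    gluster volume add-brick myvol server5:/data/brick1 server6:/data/brick1
    gluster volume rebalance myvol start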
23:37 juhaj joined #gluster
23:38 juhaj Can I use glusterfs to "live sync" a home directory from a laptop to a server?
23:39 JoeJulian If your latency is decent.
23:39 JoeJulian You could possibly use geo-replicate, stopping it when you're away and starting it up again when you're back.
23:40 juhaj I was thinking geo-replicating ~once a day; that's probably live enough
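If juhaj does try the geo-replication route, the stop/start cycle JoeJulian describes maps onto the standard session commands; the hosts and volume names below are placeholders, and the session needs the usual passwordless-ssh/pem setup beforehand:

    gluster volume geo-replication homevol backupbox::homeslave create push-pem
    gluster volume geo-replication homevol backupbox::homeslave start
    gluster volume geo-replication homevol backupbox::homeslave stop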
23:40 ovaistariq joined #gluster
23:41 juhaj I find it impossible to believe that there does not seem to be any useable solution for syncing home directories on laptops – on desktops one can always mount over the network, but on a laptop that fails (and complicates backups horribly)
23:41 JoeJulian I used to use unity.
23:42 JoeJulian More recently I've taken to using git.
23:47 juhaj git hardly syncs automatically (dvcs-autosync scripts around that using inotify but needs manual intervention when anything goes wrong, like network fails mid-push)
23:48 hagarth joined #gluster
23:48 juhaj unity used to be rather rubbish, but that was ~10 years ago, so perhaps I should give that a try. I'd prefer a proper filesystem based solution so it can be a) relied on and b) fire-and-forget
23:49 juhaj geo-repl will only work one-way, though, right?
23:49 JoeJulian Right, it's not automatic. I use zfs with little dots that tell me my sync state and if I've done anything that's critical I push. I have a ton of ignores and a few things that if they change git status, I revert them.
23:49 JoeJulian Right, one-way.
23:49 JoeJulian Not to say that git's right for everybody, of course.
23:50 JoeJulian Probably an easy thing for me since half my day involves git.
23:50 juhaj Doing it one-way is not much more useful than making backups from the laptops
23:50 juhaj Yea, git is ok for most things, but I would not git, say ~/.config
23:51 juhaj And I do not want history here, just synced up $HOME's and ease-of-backing-up (from single location)
23:51 JoeJulian No, most of .config is ignored. There are a few anti-ignores for things like hexchat.
