IRC log for #gluster, 2017-03-20

All times shown according to UTC.

Time Nick Message
00:10 nishanth joined #gluster
01:06 javi404 joined #gluster
01:07 shdeng joined #gluster
01:09 javi404 joined #gluster
01:12 moneylotion joined #gluster
01:56 BatS9_ joined #gluster
02:00 prasanth joined #gluster
02:18 plarsen joined #gluster
02:40 skoduri joined #gluster
02:41 javi404 joined #gluster
02:46 javi404 joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:49 jbrooks joined #gluster
02:54 jbrooks joined #gluster
03:01 sbulage joined #gluster
03:04 kramdoss_ joined #gluster
03:14 derjohn_mob joined #gluster
03:30 msvbhat joined #gluster
03:34 magrawal joined #gluster
03:44 nthomas joined #gluster
03:53 rejy joined #gluster
04:01 itisravi joined #gluster
04:19 satya4ever_ joined #gluster
04:25 karthik_us joined #gluster
04:25 jiffin joined #gluster
04:30 Shu6h3ndu joined #gluster
04:39 RameshN joined #gluster
04:39 gyadav joined #gluster
04:46 skumar joined #gluster
04:52 kdhananjay joined #gluster
04:56 buvanesh_kumar joined #gluster
04:59 Prasad joined #gluster
05:00 skumar_ joined #gluster
05:02 ankitr joined #gluster
05:23 ndarshan joined #gluster
05:27 Humble joined #gluster
05:37 sbulage joined #gluster
05:38 apandey joined #gluster
05:41 ashiq joined #gluster
05:41 sanoj joined #gluster
05:43 riyas joined #gluster
05:45 kotreshhr joined #gluster
06:03 msvbhat joined #gluster
06:04 susant joined #gluster
06:05 kraynor5b_ joined #gluster
06:09 shdeng joined #gluster
06:12 skoduri joined #gluster
06:12 ankush joined #gluster
06:13 shdeng joined #gluster
06:14 Saravanakmr joined #gluster
06:15 Karan joined #gluster
06:16 ashiq joined #gluster
06:22 hgowtham joined #gluster
06:23 gyadav joined #gluster
06:24 skumar_ joined #gluster
06:25 susant joined #gluster
06:25 [diablo] joined #gluster
06:30 mb_ joined #gluster
06:35 mhulsman joined #gluster
06:36 mhulsman joined #gluster
06:45 mb_ joined #gluster
06:46 nishanth joined #gluster
06:50 ahino joined #gluster
06:58 rastar joined #gluster
07:10 shdeng joined #gluster
07:21 jkroon joined #gluster
07:25 jiffin1 joined #gluster
07:25 jtux joined #gluster
07:26 ankitr joined #gluster
07:35 kdhananjay joined #gluster
07:42 atm0sphere_work joined #gluster
07:43 ankitr joined #gluster
07:53 mbukatov joined #gluster
07:56 ankush joined #gluster
08:02 msvbhat joined #gluster
08:08 mbukatov joined #gluster
08:09 mbukatov joined #gluster
08:14 mhulsman joined #gluster
08:15 ivan_rossi joined #gluster
08:15 ivan_rossi left #gluster
08:24 ashiq joined #gluster
08:26 Philambdo joined #gluster
08:29 fsimonce joined #gluster
08:31 ahino joined #gluster
08:42 derjohn_mob joined #gluster
08:57 ShwethaHP joined #gluster
09:14 inodb joined #gluster
09:15 Prasad_ joined #gluster
09:17 atinm joined #gluster
09:27 ankush joined #gluster
09:29 Prasad joined #gluster
09:33 itisravi joined #gluster
09:36 flying joined #gluster
09:37 Seth_Karlo joined #gluster
09:45 jiffin joined #gluster
09:51 ashiq joined #gluster
09:52 RameshN joined #gluster
09:53 msvbhat joined #gluster
09:57 kdhananjay joined #gluster
10:11 derjohn_mob joined #gluster
10:17 Seth_Kar_ joined #gluster
10:20 buvanesh_kumar joined #gluster
10:22 itisravi_ joined #gluster
10:22 kdhananjay joined #gluster
10:29 ahino joined #gluster
10:33 flying joined #gluster
10:44 hybrid512 joined #gluster
10:49 izkasi joined #gluster
10:53 kpease joined #gluster
11:05 ahino joined #gluster
11:17 flying joined #gluster
11:48 ahino joined #gluster
11:50 kpease joined #gluster
11:55 RameshN joined #gluster
11:59 ashiq joined #gluster
12:18 Vytas_ joined #gluster
12:32 sona joined #gluster
12:35 baber joined #gluster
12:44 unclemarc joined #gluster
12:48 mhulsman joined #gluster
12:50 jiffin joined #gluster
12:52 atinm joined #gluster
12:56 shyam joined #gluster
12:59 mb_ joined #gluster
12:59 shaunm joined #gluster
13:02 vbellur joined #gluster
13:04 jbrooks joined #gluster
13:17 vbellur joined #gluster
13:18 skumar joined #gluster
13:18 vbellur joined #gluster
13:19 vbellur joined #gluster
13:19 vbellur joined #gluster
13:21 vbellur joined #gluster
13:25 kramdoss_ joined #gluster
13:26 ic0n joined #gluster
13:28 squizzi joined #gluster
13:28 skylar joined #gluster
13:35 atinm joined #gluster
13:40 plarsen joined #gluster
13:44 kettlewell joined #gluster
13:45 Akram joined #gluster
13:45 skylar joined #gluster
13:51 fubada joined #gluster
13:51 fubada Hi. I have about a terabyte of data to delete, which is happening super slow over the gluster client
13:52 fubada are there any tricks to do deletes directly on bricks w/out causing split-brain
13:55 TBlaar joined #gluster
14:01 jiffin fubada: if it contains a lot of small files, there's not much gluster can do
14:02 jiffin try mounting a gluster client on the same servers and deleting those files
14:02 jiffin from that client
14:02 jiffin don't know how much it will improve things
14:02 fubada it's a terabyte of git clones, so yes, small files
14:03 fubada thanks, jiffin
14:08 msvbhat joined #gluster
14:11 Karan joined #gluster
14:13 cloph never did try it out myself, but at least some old google hits claim that a "rsync -r --delete /empty_dir /dir_to_delete" would be faster than a rm -r /dir_to_delete
14:14 fubada cloph: what I'm working off of is a giant list of directories in a file
14:14 fubada I guess that could still work
14:14 cloph if you're doing it one-by-one, also consider trying to do multiple dirs in parallel
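
A minimal shell sketch of the approach jiffin and cloph describe above: delete through a gluster client mount on the server itself, use the rsync-against-an-empty-directory trick, and run several directories at once. The list file name, the paths, and the parallelism of 4 are placeholders, and GNU xargs is assumed.

    # dirs_to_delete.txt is assumed to hold one absolute directory path per line,
    # all under a local gluster client mount (never on the bricks directly)
    mkdir -p /tmp/empty

    # rsync'ing an empty directory over a target with --delete empties the target;
    # some report this beats rm -rf for large trees of small files.
    # -P4 runs up to four deletions concurrently (GNU xargs).
    xargs -a dirs_to_delete.txt -I{} -P4 rsync -a --delete /tmp/empty/ {}/

    # remove the now-empty directory entries themselves
    xargs -a dirs_to_delete.txt -I{} -P4 rmdir {}
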
14:18 ashka hi, I have an issue with 3.5.2 (wish I was up to date, but this is the stable version in debian): 20 bricks in distribute for about 150 glusterfs clients. I do backups at night on a reliable internal network, but sometimes I'll get connection reset by peer to a random brick during high writes.. here are the settings I have for the volume: http://paste.awesom.eu/x1df is there any way to avoid that issue, or ask the glusterfs client to retry
14:18 glusterbot Title: Paste it § (at paste.awesom.eu)
14:18 ashka silently? thanks
14:21 thatgraemeguy joined #gluster
14:21 thatgraemeguy joined #gluster
14:27 ira joined #gluster
14:35 fubada cloph: thanks, using your suggestion in parallel
14:35 fubada would removing the 'acl' mount option speed up deletes over gluster?
14:40 farhorizon joined #gluster
14:44 farhoriz_ joined #gluster
14:47 gyadav joined #gluster
14:51 chawlanikhil24 joined #gluster
14:58 RameshN joined #gluster
15:02 atinm joined #gluster
15:04 wushudoin joined #gluster
15:15 magrawal joined #gluster
15:16 fargox joined #gluster
15:17 ankush joined #gluster
15:21 ahino joined #gluster
15:21 oajs joined #gluster
15:23 jiffin joined #gluster
15:26 unclemarc joined #gluster
15:27 gem joined #gluster
15:32 msvbhat joined #gluster
15:34 Shu6h3ndu joined #gluster
15:40 atinm joined #gluster
15:41 morganb joined #gluster
15:46 bwerthmann joined #gluster
15:49 sbulage joined #gluster
15:50 atinm joined #gluster
15:54 d0nn1e joined #gluster
15:54 izkasi_ joined #gluster
16:13 gyadav joined #gluster
16:14 plarsen joined #gluster
16:17 mhulsman joined #gluster
16:19 anoopcs joined #gluster
16:19 riyas joined #gluster
16:22 jiffin joined #gluster
16:23 Gambit15 joined #gluster
16:31 ghenry joined #gluster
16:31 ghenry joined #gluster
16:38 Saravanakmr joined #gluster
16:51 baber joined #gluster
16:52 vbellur joined #gluster
16:54 sona joined #gluster
16:57 Vide joined #gluster
16:59 Vide Hello, I have a replica 3 volume in production, and now I've got more servers (with storage) to add space to this volume. Is there anything I should be aware of? I'm using Gluster 3.8
16:59 Vide the original servers are running CentOS 7.2 and the new ones CentOS 7.3, is that a problem?
17:01 nishanth joined #gluster
17:05 major random idea .. anyone ever thought of using tiering for clients?
17:07 cloph Vide: if you add them in the appropriate count (3), then there should be no issues
17:07 baber joined #gluster
17:13 Vide cloph, you mean by multiples of 3, right?
17:13 atinm joined #gluster
17:14 Vide Right now it's a 4-node cluster with 6 bricks in replica 3 (one host has an arbiter + a data brick of the following... shard?)
17:14 cloph yes, so it matches your existing setup, unless you also want to change replica
17:14 Vide I'm not really sure what's the name for my current topology
17:14 cloph if you have arbiter, then you don't have replica3
17:14 Vide well, replica 3 with arbiter
17:15 cloph no, it is replica 2 with arbiter, or (2+1)
17:15 Vide this is how I created it: https://paste.fedoraproject.org/paste/XX9Z0iMv0CjQnnUPDmJDRF5M1UNdIGYhyRLivL9gydE=
17:15 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
17:18 Vide is this a Distributed Striped Replicated or a Distributed Replicated ?
17:18 Vide I'd say the former but as I said I'm not really sure
17:18 Vide volume size is 2x brick size
17:18 cloph no, striping has nothing to do with that. it is a distributed 2+1
17:19 wushudoin joined #gluster
17:21 JoeJulian cloph: He's right, in a way. You "create volume replica 3 arbiter 1 ..."
17:21 JoeJulian Which I, personally, find misleading.
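
For reference, a generic example of the syntax JoeJulian is quoting; the volume name, hostnames, and brick paths here are hypothetical placeholders. Despite the "replica 3" in the command, only the first two bricks of each set hold file data; the third stores only metadata, which is why cloph calls it a 2+1.

    gluster volume create demo replica 3 arbiter 1 \
        host1:/bricks/data1 host2:/bricks/data1 host3:/bricks/arbiter1
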
17:26 Vide so, assuming that I want to add 7 more machines with the same brick size, is this correct? https://paste.fedoraproject.org/paste/43V6tbsWYuMwetTeWwFx715M1UNdIGYhyRLivL9gydE=
17:26 glusterbot Title: Fork of Untitled - Modern Paste (at paste.fedoraproject.org)
17:27 Vide it should increase to a "5 x (2+1)" bricks volume
17:27 Vide am I right or am I misunderstanding anything?
17:27 JoeJulian Vide: Looks good to me.
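
A rough sketch of what such an expansion might look like in generic form; the volume name, hostnames, and brick paths are hypothetical placeholders (not Vide's actual paste), and depending on the release the replica/arbiter keywords may or may not need to be repeated on add-brick.

    # add one more 2+1 set; with an arbiter volume, the last brick
    # listed in each set of three is the arbiter
    gluster volume add-brick demo replica 3 arbiter 1 \
        host4:/bricks/data2 host5:/bricks/data2 host6:/bricks/arbiter2

    # spread existing data onto the new subvolume afterwards
    gluster volume rebalance demo start
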
17:28 Vide good... I'll try it in a lab environment first, nonetheless :)
17:30 JoeJulian Always a good plan.
17:32 cloph depending on what you use those servers for: you don't need to have the arbiter on the same server that also has a real brick
17:33 JoeJulian His paste has them rotating that responsibility. Did I miss something?
17:34 cloph no, just wanting to say that there is no need. But yeah, if you do then make sure it is not on the same servers as the corresponding data bricks.
17:39 JoeJulian After a weekend full of 6 & 7 year olds, bouncy castles, and cake - I very well could miss things. Keep me honest today. https://goo.gl/photos/KxhKZmaEwb6Dok5U7
17:40 major heh
17:41 morganb joined #gluster
17:45 Vide cloph, because I had 4 servers initially and I want to use all the storage available
17:46 Vide without bricks (arbiter+data) sharing a server I'd have lost half the capacity
17:46 major in tiering .. is the hot-tier data periodically copied back to the cold tier?
17:46 cloph I guess you misunderstood. Arbiter bricks don't contribute to storage.
17:46 cloph They only store metadata, and that doesn't really take much.
17:46 masber joined #gluster
17:47 Vide cloph, and that's why I put a data brick next to it
17:47 Vide (actually I was given this advice on the ML :P)
17:48 * cloph is using the same, but rather because I don't have enough other peers ;-)
17:52 Vapez joined #gluster
17:52 Vapez joined #gluster
17:56 major okay .. extra insane question .. has anyone tried to submount a volume?
17:57 major there is always something disconcerting about the frequency with which I hear crickets ;)
17:58 MrAbaddon joined #gluster
17:58 JoeJulian I have submounted volumes - assuming you mean to mount a volume into a subdirectory under another mounted volume.
18:01 major was more thinking of: mount -t glusterfs:<volume>/<subdir> <destdir>
18:04 major bah
18:04 major brain .. coffee
18:04 major mount -t glusterfs node:<volume>/<subdir> <destdir>
18:04 JoeJulian I think that's supposed to work with 3.10.
18:06 major hmm .. just tried it and didn't get any love :(
18:06 major error about a lack of volfile on the server
18:08 major la sigh
18:08 major [2017-03-20 17:58:48.484632] I [MSGID: 101190] [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
18:08 major [2017-03-20 17:58:48.485531] E [glusterfsd-mgmt.c:1756:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
18:08 major woulda been so nice too
18:08 shyam joined #gluster
18:08 JoeJulian https://github.com/gluster/glusterfs-specs/blob/master/under_review/subdirectory-mounts.md
18:08 glusterbot Title: glusterfs-specs/subdirectory-mounts.md at master · gluster/glusterfs-specs · GitHub (at github.com)
18:09 JoeJulian Still under review.
18:10 major bleh .. gonna have to add that to my list of things to play with ..
18:17 major it's curious because I find that to be the more intuitive use-case
18:18 major basically build a large volume pool of gluster bricks, and just create subdirectories as subvolumes and mount them out to clients w/ their own individual quotas/acls/perms
18:18 major makes it more intuitive for mapping things like auto.home as well
18:22 sona joined #gluster
18:29 rastar joined #gluster
18:36 arpu joined #gluster
18:41 baber joined #gluster
18:44 gem joined #gluster
18:46 raghu joined #gluster
18:47 raghu joined #gluster
18:57 JoeJulian major: in the interim you can just mount the volume someplace with only root access permissions, then bind-mount the user-accessible directory somewhere.
18:57 major yah.. for what I am doing I may have to do that
18:57 major though it can create quite the race in fstab for systemd .. will have to make a home.mount Unit
18:58 major it will work .. just .. sort of clunky
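
A rough sketch of the workaround JoeJulian describes above; the volume name, server, and paths are placeholders.

    # mount the whole volume somewhere only root can reach
    mkdir -p -m 0700 /mnt/glustervol
    mount -t glusterfs node1:/myvol /mnt/glustervol

    # expose only the wanted subdirectory to users via a bind mount
    mount --bind /mnt/glustervol/home /home

    # in fstab, adding x-systemd.requires-mounts-for=/mnt/glustervol to the bind
    # entry is one way to avoid the ordering race major mentions
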
18:58 major so .. on the subject of quasars .. erm ... tiers
18:58 masber joined #gluster
18:59 major tiers are really just caches .. right?
18:59 major it's kinda akin to a gluster-level fscache for nfs
19:00 major i.e. gluster-tier = nfs+fscache ?
19:00 major "sort of"
19:00 JoeJulian The file is literally moved to the hot tier when it's hot, then moved to the cold tier when it cools.
19:02 major hmm
19:02 major less exciting
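
For context, the 3.7/3.8-era tiering CLI looked roughly like this; the volume name and bricks are placeholders, and the exact subcommands vary between releases.

    # attach an SSD-backed hot tier, replicated across two bricks
    gluster volume tier myvol attach replica 2 node1:/ssd/hot node2:/ssd/hot

    # later: migrate hot files back to the cold tier, then remove the tier
    gluster volume tier myvol detach start
    gluster volume tier myvol detach commit
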
19:04 major hurm .. https://www.redhat.com/archives/linux-cachefs/2014-September/msg00000.html
19:04 glusterbot Title: [Linux-cachefs] Feature proposal - FS-Cache support in FUSE (at www.redhat.com)
19:04 derjohn_mob joined #gluster
19:09 major need to figure out if the infrastructure for upcalling might be usable for fs-notify
19:11 major https://github.com/libfuse/libfuse/wiki/Fsnotify-and-FUSE
19:11 glusterbot Title: Fsnotify and FUSE · libfuse/libfuse Wiki · GitHub (at github.com)
19:11 major all the things I would like to have working ..
19:12 major damnit .. I need to finish up my current list before adding to it >.<
19:15 Vapez_ joined #gluster
19:15 major man .. the weird stuff I find when looking up "fuse fscache": https://indico.cern.ch/event/581101/contributions/2356267/attachments/1364229/2065860/cvmfs-coord-nov16.pdf
19:16 major looks like CERN's document describing caching data over fuse as dumped from the LHC?
19:16 major for some random reason I expected them to be using an existing distributed FS solution ..
19:19 major huh .. what do you know: https://cernvm.cern.ch/portal/filesystem
19:19 glusterbot Title: CernVM File System | cernvm.web.cern.ch (at cernvm.cern.ch)
19:21 gem joined #gluster
19:26 derjohn_mob joined #gluster
19:36 raghu joined #gluster
19:37 JoeJulian http://linux.web.cern.ch/linux/ssa/
19:37 glusterbot Title: Linux @ CERN: /linux/ssa/index.shtml (at linux.web.cern.ch)
19:39 major heh
19:40 JoeJulian I need to get some lunch. Have you eaten yet?
19:40 major food is one of those rare things ..
19:40 major like .. sometimes I remember to do it
19:40 major and when I have it I sometimes remember eating it :)
19:41 JoeJulian hehe
19:42 major there is an irish place closer to you than it is to me that has steamed cabbage and corned beef served with horseradish...
19:43 major alas .. no haggis..
19:43 JoeJulian Yeah, it's right downstairs. I had that for dinner on Friday and leftovers last night, though, so I'm thinking I'll do something else.
19:44 JoeJulian Have you been to the Berliner? berlinerseattle.com
19:44 major nope
19:45 JoeJulian Me neither. I'm going to go give it a try.
19:45 major should I show up and start singing bawdy irish songs?
19:46 JoeJulian On friday they had all that and bagpipes.
19:46 major dunno how that relates to berlin inspired turkish street food .. but it sounds fun
19:46 major damn...
19:46 JoeJulian I'm leaving now. If I see you there, I'll see you there. :D
19:46 major that woulda brought a tear to ma eyes
19:46 major if you can find me :P
19:47 major oh .. I know where that is
19:48 Wizek joined #gluster
19:50 masber joined #gluster
19:52 Asako joined #gluster
19:56 bwerthmann joined #gluster
20:01 major Hmmm.. finding people might be easier if they had irc on their phone...
20:02 atinm joined #gluster
20:14 kraynor5b__ joined #gluster
20:15 Vapez joined #gluster
20:15 Vapez joined #gluster
20:15 glusterbot` joined #gluster
20:16 sloop- joined #gluster
20:17 jerrcs__ joined #gluster
20:18 jackhill_ joined #gluster
20:19 siel_ joined #gluster
20:19 malevolent_ joined #gluster
20:19 tom][ joined #gluster
20:19 pioto_ joined #gluster
20:19 d0nn1e_ joined #gluster
20:20 ic0n_ joined #gluster
20:22 lkoranda_ joined #gluster
20:22 scuttle|` joined #gluster
20:22 wistof_ joined #gluster
20:22 eryc_ joined #gluster
20:22 glusterbot joined #gluster
20:23 DV__ joined #gluster
20:24 lanning joined #gluster
20:24 rofl_____ joined #gluster
20:24 puiterwi1 joined #gluster
20:26 markd_ joined #gluster
20:28 varesa_ joined #gluster
20:37 baber joined #gluster
20:40 oajs joined #gluster
20:43 malevolent joined #gluster
20:53 vbellur joined #gluster
20:54 vbellur joined #gluster
20:55 vbellur joined #gluster
21:04 raghu joined #gluster
21:08 vbellur joined #gluster
21:19 BatS9 joined #gluster
21:19 derjohn_mob joined #gluster
21:26 social joined #gluster
21:30 Intensity joined #gluster
22:11 percevalbot joined #gluster
22:19 bitonic joined #gluster
22:24 wushudoin joined #gluster
22:29 Jacob843 joined #gluster
22:45 masber joined #gluster
23:00 bitonic joined #gluster
23:01 buvanesh_kumar joined #gluster
23:04 baber joined #gluster
23:30 plarsen joined #gluster
23:36 gem joined #gluster
23:41 bluenemo joined #gluster
23:41 major I have a feeling that something like this should be part of official documentation: https://github.com/carmstrong/multinode-glusterfs-vagrant
23:41 glusterbot Title: GitHub - carmstrong/multinode-glusterfs-vagrant: This guide walks users through setting up a 3-node GlusterFS cluster, creating and starting a volume, and mounting it on a client. (at github.com)
23:45 major damn I have a huge stack of testing to do .. and tests to write for that matter ...
23:45 major bleh
