
IRC log for #gluster, 2016-01-04


All times shown according to UTC.

Time Nick Message
00:23 sankarshan_ joined #gluster
00:28 sankarshan_ joined #gluster
00:53 farhorizon joined #gluster
01:00 hagarth joined #gluster
01:41 Lee1092 joined #gluster
01:52 harish joined #gluster
01:52 zhangjn joined #gluster
01:56 farhorizon joined #gluster
01:57 nishanth joined #gluster
02:12 haomaiwa_ joined #gluster
02:33 haomaiwa_ joined #gluster
02:46 zhangjn joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:01 21WAAO1NS joined #gluster
03:05 DV joined #gluster
03:20 Peppard joined #gluster
03:32 vmallika joined #gluster
03:50 nbalacha joined #gluster
03:54 shruti joined #gluster
04:01 haomaiwa_ joined #gluster
04:01 atinm joined #gluster
04:07 zhangjn joined #gluster
04:17 kshlm joined #gluster
04:18 gem joined #gluster
04:19 nehar joined #gluster
04:31 shubhendu joined #gluster
04:33 ramky joined #gluster
04:49 MACscr joined #gluster
04:49 MACscr joined #gluster
04:50 MACscr joined #gluster
04:51 RameshN joined #gluster
04:51 skoduri joined #gluster
04:52 rafi joined #gluster
04:56 kanagaraj joined #gluster
04:57 itisravi joined #gluster
04:58 ppai joined #gluster
04:58 itisravi joined #gluster
05:00 hgowtham joined #gluster
05:01 haomaiwang joined #gluster
05:03 Manikandan joined #gluster
05:13 ndarshan joined #gluster
05:18 pppp joined #gluster
05:21 hgowtham joined #gluster
05:22 kotreshhr joined #gluster
05:22 deepakcs joined #gluster
05:23 kdhananjay joined #gluster
05:29 zhangjn joined #gluster
05:35 Bhaskarakiran joined #gluster
05:39 overclk joined #gluster
05:40 Apeksha joined #gluster
05:48 vimal joined #gluster
06:01 zhangjn joined #gluster
06:01 haomaiwang joined #gluster
06:02 sakshi joined #gluster
06:10 atalur joined #gluster
06:14 aravindavk joined #gluster
06:20 nishanth joined #gluster
06:22 Humble joined #gluster
06:35 anil joined #gluster
06:38 arcolife joined #gluster
06:43 karnan joined #gluster
06:49 arcolife joined #gluster
06:50 zhangjn joined #gluster
06:52 lalatenduM joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 zhangjn joined #gluster
07:04 jiffin joined #gluster
07:12 SOLDIERz joined #gluster
07:20 mhulsman joined #gluster
07:27 jtux joined #gluster
07:29 zhangjn joined #gluster
07:31 jwd joined #gluster
07:32 rwheeler joined #gluster
07:33 itisravi joined #gluster
08:01 harish joined #gluster
08:01 haomaiwang joined #gluster
08:04 zhangjn joined #gluster
08:06 zhangjn joined #gluster
08:08 Akee joined #gluster
08:09 vmallika joined #gluster
08:15 Saravana_ joined #gluster
08:19 nangthang joined #gluster
08:30 auzty joined #gluster
08:30 itisravi joined #gluster
08:34 sjohnsen joined #gluster
08:35 arcolife joined #gluster
08:43 zhangjn joined #gluster
08:46 deniszh joined #gluster
08:58 ahino joined #gluster
09:01 sankarshan_ joined #gluster
09:01 haomaiwa_ joined #gluster
09:02 fsimonce joined #gluster
09:03 d0nn1e joined #gluster
09:09 nangthang joined #gluster
09:14 nehar joined #gluster
09:14 ramky joined #gluster
09:16 Saravana_ joined #gluster
09:17 Slashman joined #gluster
09:22 R0ok_ joined #gluster
09:27 bluenemo joined #gluster
09:37 skoduri joined #gluster
09:42 dusmant joined #gluster
09:44 nangthang joined #gluster
09:45 kovshenin joined #gluster
10:01 haomaiwa_ joined #gluster
10:15 Bhaskarakiran joined #gluster
10:16 jwaibel joined #gluster
10:21 dusmant joined #gluster
10:24 sakshi joined #gluster
10:37 itisravi joined #gluster
10:54 Akee joined #gluster
11:01 7JTABGPX3 joined #gluster
11:22 mbukatov joined #gluster
11:23 dusmant joined #gluster
11:49 RedW joined #gluster
11:54 Bhaskarakiran joined #gluster
12:01 haomaiwa_ joined #gluster
12:02 Andreas joined #gluster
12:04 kotreshhr left #gluster
12:05 Bhaskarakiran joined #gluster
12:12 bfoster joined #gluster
12:26 ramky joined #gluster
12:32 nbalacha joined #gluster
12:36 nangthang joined #gluster
12:47 rwheeler joined #gluster
12:59 nangthang joined #gluster
13:01 haomaiwa_ joined #gluster
13:09 16WAAG9X6 joined #gluster
13:11 kkeithley joined #gluster
13:16 dusmant joined #gluster
13:16 nishanth joined #gluster
13:16 shubhendu joined #gluster
13:18 unclemarc joined #gluster
13:26 mhulsman joined #gluster
13:41 dblack joined #gluster
13:53 mhulsman joined #gluster
13:54 skoduri joined #gluster
14:01 kdhananjay joined #gluster
14:04 chirino joined #gluster
14:05 rafi joined #gluster
14:05 shaunm joined #gluster
14:06 haomaiwa_ joined #gluster
14:13 ira joined #gluster
14:23 julim joined #gluster
14:25 B21956 joined #gluster
14:29 morse joined #gluster
14:30 squizzi joined #gluster
14:34 squizzi joined #gluster
14:39 harold joined #gluster
14:42 dusmant joined #gluster
14:49 jwd joined #gluster
14:50 aravindavk joined #gluster
14:57 hatchetjack joined #gluster
14:58 hatchetjack we have a gluster cluster.  From a client when we do an 'ls' on a volume the first time, it takes 45 to 55 seconds for listing results.  After that an 'ls' shows results very fast because they are cached I'm guessing.  Any idea why the initial 'ls' is so slow?
15:01 nehar joined #gluster
15:01 haomaiwang joined #gluster
15:09 arcolife joined #gluster
15:12 arcolife joined #gluster
15:15 nbalacha joined #gluster
15:15 ironhalik joined #gluster
15:17 ndevos hatchetjack: it depends... if you have many files, it may be that each file requires an extra stat(), could you check if executing /bin/ls is faster?
15:19 jbrooks joined #gluster
15:20 nerdcore left #gluster
15:27 farhorizon joined #gluster
15:33 mhulsman joined #gluster
15:35 PatNarciso ndevos, I'm curious, would '/bin/ls' be faster than 'ls'?
15:36 ndevos PatNarciso: on many systems "ls" is actually a shell alias and does something like "ls --color" or such, those extra ls options need more details about directory entries
15:37 PatNarciso boom - got it - that makes a lot of sense.
15:38 ndevos PatNarciso: you can check with "alias ls" to see if there is an alias in your environment
15:39 PatNarciso indeed there is.  also as a test, did an '/bin/ls -la' and noticed the color change you mentioned.
15:40 ndevos we do have something called "readdirp" (p=plus), and that returns the directory entries including most/all of the stat() information, but it might be disabled or unavailable for some reason
15:41 ndevos yes, the color that ls shows depends on the type of the directory entry, and that needs the information from a stat() systemcall
15:41 ramky joined #gluster
15:42 ndevos because a stat() is very expensive on Gluster, "ls --color" (the bash alias) is much slower than /bin/ls
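A quick way to verify what ndevos describes (the mount point /mnt/gv0 is hypothetical, standing in for any Gluster client mount):

    alias ls                   # shows whether "ls" is really "ls --color=auto" or similar
    time ls /mnt/gv0           # aliased ls: an extra stat() per entry for colouring
    time /bin/ls /mnt/gv0      # plain readdir, usually much faster on a Gluster mount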
15:44 PatNarciso my primary gluster volume offers millions of files via a samba service, over an xfs on a raid6.  lots of random io (as it's used for video post production, storage and editing).  any performance suggestions to make the user experience better are welcome.
15:45 ndevos depends a little on how your users use it, if they execute "ls", you could remove the standard bash alias ;-)
15:46 ndevos there is also a lot of work being done to improve the Samba experience, ira, obnox and co have made huge progress, you should keep up with the updates that they push out
15:48 dgandhi joined #gluster
16:00 wushudoin joined #gluster
16:01 haomaiwa_ joined #gluster
16:13 cholcombe joined #gluster
16:17 bowhunter joined #gluster
16:21 shyam joined #gluster
16:21 muneerse joined #gluster
16:21 TijG joined #gluster
16:24 shaunm joined #gluster
16:31 dgbaley joined #gluster
16:31 tom[] i see log messages like these when files are written to a mount https://gist.github.com/tom--/03fe189cc9136ff4ae0a it amounts to a lot of log messages. i looked at bug reports and i have the impression i can ignore these as it is probably a bug
16:31 tom[] is there a better workaround than to set aggressive log rotation?
16:31 tom[] glusterfs 3.4.2 on Ubuntu 14.04.3 LTS
16:31 glusterbot tom[]: https://gist.github.com/tom's karma is now -7
16:31 glusterbot Title: gluster log flooding · GitHub (at gist.github.com)
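For the aggressive log rotation tom[] mentions as a workaround, a minimal logrotate sketch (assuming the default log directory /var/log/glusterfs and that truncate-in-place is acceptable):

    /var/log/glusterfs/*.log {
        daily
        rotate 7
        compress
        missingok
        copytruncate    # rotate without signalling the gluster processes
    }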
16:34 arcolife joined #gluster
16:37 CyrilPeponnet joined #gluster
16:40 lkoranda joined #gluster
16:45 bennyturns joined #gluster
16:51 jwaibel joined #gluster
16:52 lkoranda_ joined #gluster
16:54 14WAALHYN joined #gluster
16:59 cornfed78 joined #gluster
17:00 ironhalik joined #gluster
17:01 haomaiwa_ joined #gluster
17:12 bash1235123 joined #gluster
17:13 bash1235123 Hi, concerning the "folder not being healed" email thread on the glusterfs mailing list. Is anybody here?
17:17 calavera joined #gluster
17:31 Manikandan joined #gluster
17:31 calavera joined #gluster
17:33 Manikandan joined #gluster
17:34 ironhalik Hello - a bit noobish question - is there an official/documented way of solving split-brain issues on a two brick volume? (something except manually hunting down files, hardlinks and removing them)?
17:34 ironhalik got over 500 split-brains on brick1 and over 1000 on brick2 :)
17:36 ironhalik also - how does gluster decide where the split-brain is? ie if two files are inconsistent on two bricks, why does it report split-brain on brick1 for one file, and split-brain on brick2 for another file?
17:44 squizzi joined #gluster
17:45 cornfed78 joined #gluster
17:59 plarsen joined #gluster
18:00 mhulsman joined #gluster
18:01 haomaiwa_ joined #gluster
18:12 kotreshhr joined #gluster
18:13 calavera joined #gluster
18:15 Rapture joined #gluster
18:19 togdon joined #gluster
18:25 chirino joined #gluster
18:30 julim_ joined #gluster
18:41 F2Knight joined #gluster
18:43 plarsen joined #gluster
18:47 dgbaley joined #gluster
18:51 ahino joined #gluster
18:53 dlambrig joined #gluster
18:55 PatNarciso @ndevos - thanks for the ira, obnox and co heads up.  I'll keep an eye out for the messages/updates.
19:01 haomaiwa_ joined #gluster
19:09 jwaibel joined #gluster
19:14 kovshenin joined #gluster
19:33 mowntan joined #gluster
19:33 jwd joined #gluster
19:37 jwaibel joined #gluster
19:38 jwd_ joined #gluster
19:44 nathwill joined #gluster
19:44 B21956 joined #gluster
19:44 mowntan So I think I have a fairly reasonable scenario, but the process to implement seems rather counter-intuitive.
19:46 mowntan I currently have a 3 node Replicate volume that I would like to convert to a 4 node Distributed-Replicate volume, but it seems like I need to remove a brick (decrease the replica count) and then re-add it
19:47 mowntan Is that right?
19:54 neofob joined #gluster
19:57 jermudgeon_ joined #gluster
20:01 haomaiwa_ joined #gluster
20:10 dlambrig joined #gluster
20:10 DV joined #gluster
20:14 calavera joined #gluster
20:16 deniszh joined #gluster
20:26 rafi joined #gluster
20:33 JoeJulian mowntan: Yes, you would need to remove a brick while changing the replica count to 2, wipe the removed brick, then add it back with the other new brick
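The steps JoeJulian outlines might look roughly like this (volume and brick names are hypothetical, and exact syntax can vary by release):

    # drop from replica 3 to replica 2 by removing one brick
    gluster volume remove-brick myvol replica 2 server3:/export/brick1 force
    # wipe the removed brick directory (including .glusterfs) before reusing it,
    # then add it back together with the new brick to form a 2x2 distributed-replicate volume
    gluster volume add-brick myvol replica 2 server3:/export/brick1 server4:/export/brick1
    gluster volume rebalance myvol start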
20:33 JoeJulian ~splitbrain | ironhalik
20:33 glusterbot ironhalik: To heal split-brains, see https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md . Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/ . For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
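For ironhalik's question, the documentation glusterbot links covers, among other things, CLI-driven resolution. A sketch of the usual commands (volume, brick and file names hypothetical; the policy-based variants need GlusterFS 3.7 or newer):

    gluster volume heal myvol info split-brain
    # newer releases can resolve a file directly from the CLI, e.g.:
    gluster volume heal myvol split-brain bigger-file /path/inside/volume/file
    gluster volume heal myvol split-brain source-brick server1:/export/brick1 /path/inside/volume/file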
20:43 mowntan JoeJulian: thanks!
20:45 ira joined #gluster
20:48 om joined #gluster
20:48 om I am trying to mount glusterfs with acl.  It mounts but does not show the acl option with mount cmd
20:49 deni left #gluster
20:49 om the fs has acl on
20:49 om any ideas?
20:49 om here is my command;
20:50 om mount -t glusterfs 127.0.0.1:/gv_sftp0 /gluster_nfs -o acl,selinux
20:50 om the output of mount:
20:50 om 127.0.0.1:/gv_sftp0 on /gluster_nfs type fuse.glusterfs (rw,allow_other,max_read=131072)
20:50 om no acl :(
20:51 om the filesystem is mounted with acl though
20:51 om just glusterfs doesn't mount the volume with acl no matter what I have tried
20:55 kotreshhr left #gluster
20:56 PsionTheory joined #gluster
20:57 Logos01 joined #gluster
20:58 Logos01 Hello, folks. I'm trying to gain some insight into how clients (FUSE) schedule disk IO to bricks. I understand that using the FUSE client allows for automatic failover, but what about IO balancing for read events?
20:58 bash1235123 joined #gluster
20:59 Logos01 If I have a floating VIP between two hosts, which have replica 2 on their bricks, would the clients only ever send read requests to the brick whose host had the VIP at the time of mounting (excluding failovers, again) ?
20:59 Logos01 http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Understanding_Load_Balancing <-- this page really doesn't help me here. It seems to assume striping of data.
20:59 glusterbot Logos01: <'s karma is now -19
21:01 Logos01 Ahhh... "
21:01 haomaiwa_ joined #gluster
21:01 Logos01 "When accessing the file, Gluster Filesystem uses load balancing to access replicated instances."
21:01 Logos01 Comprehension dawns. Sorry for bothering you folks.
21:06 om Logos01: have you ever gotten responses in this irc?
21:06 JoeJulian ~mount server | Logos01
21:06 glusterbot Logos01: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
21:07 JoeJulian om: Do you have ACL enabled for the volume?
21:07 om yes
21:07 om thanks JoeJulian
21:07 JoeJulian And it's not working?
21:07 om I mean, what do you mean by acl enabled on the volume?
21:07 JoeJulian gluster volume set help | grep acl
21:08 om Option: nfs.acl
21:08 om that's the only option that came up
21:09 JoeJulian Ah, ok... I don't use acl but I remembered there being something...
21:09 om what is the correct command to set acl for glusterfs volume?
21:10 Logos01 om: I hadn't, no.
21:11 JoeJulian Ok, yeah, the "acl" mount option. If you've mounted with this option, "ps ax | grep glusterfs" and you should find the command which spawns the fuse mount. That should have the --acl switch.
21:11 JoeJulian As for response, everyone in here is a volunteer. Patience is sometimes required as we all have our own jobs to do.
21:12 om oh ok
21:12 om I thought there were glusterfs tech in here... sorry about that
21:12 JoeJulian No worries.
21:12 om the --acl option is on, per the ps ax
21:12 om it's on
21:13 JoeJulian Ok, then it should be working. If it's not, check the client log in /var/log/glusterfs
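The checks JoeJulian walks om through, collected in one place (user name and test path are hypothetical):

    gluster volume set help | grep -i acl      # lists ACL-related volume options, e.g. nfs.acl
    ps ax | grep glusterfs                     # the fuse client process should carry --acl
    setfacl -m u:alice:rwX /gluster_nfs/test   # should succeed if ACL support is active
    getfacl /gluster_nfs/test                  # and the new entry should be visible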
21:15 om hmmm
21:15 om adding option 'cache-posix-acl' for volume 'gv_sftp0-md-cache' with value 'true'
21:15 om it says it's on but mount cmd doesn't show it
21:15 om and I cannot setfacl
21:15 Logos01 om: setfacl is really for posix, not NFS, ACL's.
21:16 om hmmm... right, but this is not an nfs mount
21:16 Logos01 Oh. /me learns to read.
21:16 om it's a glusterffs mount
21:16 Logos01 om: Do the underlying filesystem(s) (on all bricks) also have the ACL option on their mounts?
21:17 om yea
21:17 Logos01 (If XFS, that's a default. extN, no.)
21:17 om the glusterfs volume mounts on root, and this is root
21:17 om /dev/vda1 on / type ext4 (rw,acl,user_xattr)
21:17 Logos01 Try remounting the gluster volume's mountpoint then.
21:18 JoeJulian If it's not supported, there will be an error shown in the client log for the xattr callback.
21:19 om mount -t glusterfs 127.0.0.1:/gv_sftp0 /gluster_nfs -o acl,selinux
21:19 om that is my command
21:19 om I'm sifting through logs but no error so far
21:20 JoeJulian Ah, check and see if it's selinux that's preventing the setfacl.
21:21 Logos01 O_o
21:21 om hmmm...
21:21 om I can setfacl on the gluster volume now
21:21 om interesting
21:22 JoeJulian excellent
21:22 om but the nfs exported mount to other servers cannot do that
21:22 JoeJulian Did you change something, or was it just insanity?
21:22 Logos01 Right -- because NFS and POSIX don't share ACLs with each other.
21:23 om nfs4_setfacl doesn't work though
21:23 om on the remote nfs client host
21:23 Logos01 om: Is it an nfs4 mount?
21:23 om there are 2 level mounts:
21:23 Logos01 If it shows up in showmount -e or exportfs -v it's NOT NFSv4.
21:23 om 1- mount gluster volume as gluster on localhost
21:24 om 2 - mount exported nfs4  for that gluster mount on remote host
21:24 Logos01 om: Right; are you *sure* it's nfsv4 and not nfsv3 ?
21:25 om good question Logos
21:25 om Gonna check it's nfs4
21:25 om the client is mounting nfs4
21:25 JoeJulian Native gluster nfs is version 3. To have an NFSv4 share, you should use nfs-ganesha.
21:25 om but not sure the server is set to that
21:25 Logos01 And ... nfs-ganesha is unreliable to say the least.
21:25 om I haven't tried ganesha yet
21:26 * Logos01 has had more issues w/ ganesha than you can shake a stick at
21:26 om I just installed nfs4 server and disabled nfs on gluster itself
21:26 JoeJulian Interesting, I haven't had one single issue with it yet.
21:27 JoeJulian btw, using a kernel nfs share for any fuse mounted filesystem is a deadlock waiting to happen.
21:27 om type nfs4
21:27 om is reported by mount -v on nfs client remote host
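To confirm which NFS protocol version a client mount is actually using, a sketch (output format differs between distributions):

    nfsstat -m           # per-mount options on the NFS client, look for vers=3 or vers=4
    mount | grep ' nfs'  # kernel NFS mounts show "type nfs" (v3) vs "type nfs4"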
21:27 Logos01 JoeJulian: I've been using it for two clusters in my local environment. They both routinely run into OOM locking events; and getting the ganesha shares back requires full reboot of the OS.
21:28 Logos01 Which is ... irksome.
21:28 om JoeJulian: "btw, using a kernel nfs share for any fuse mounted filesystem is a deadlock waiting to happen."  So I guess I should use ganesha...
21:29 om after all
21:29 JoeJulian Logos01: Do you have a bug report for that? I'd like to see how that happens.
21:29 Logos01 JoeJulian: I've never caught it in the act. But I strongly suspect it's got something to do with the method I'm using the NFS daemon for.
21:30 skylar joined #gluster
21:30 Logos01 (I.e.; multiple nfs daemons each allowing the local brick-host to mount the gluster volumes as clients.)
21:30 JoeJulian I know how it happens with knfs where there's a kernel-userspace contention for memory allocation where they're dependent on each other, but with the pure userspace operation of ganesha+libgfapi, it /shouldn't/ be happening.
21:31 JoeJulian Oh, ok... I could maybe think of a way that could happen.
21:31 Logos01 Yeah, no -- for whatever reason I can't get it to reliably kill off the RPC daemons/processes either.
21:32 Logos01 And it also can't reliably update the local client with information about transactions performed by the other host.
21:32 Logos01 (I.e.; write a new file, it doesn't show up on the other host for at least a few minutes).
21:32 Logos01 Now, this IS abnormal usage -- and I get that.
21:32 JoeJulian Well, that's why you're using NFS.
21:32 JoeJulian For the kernel cache.
21:32 Logos01 True.
21:33 JoeJulian Which is precisely why I don't. :D
21:33 Logos01 I just wish that ganesha's HA stuff didn't have the SPOF metadata host thing.
21:33 Logos01 JoeJulian: I'm kind of between a rock and a hard-place on this one; the content in question is user session credential caching ... so lots of tiny files.
21:33 Logos01 Lots of read.
21:34 JoeJulian I'd probably use redis.
21:34 Logos01 God, if only I could.
21:34 Logos01 I'm not in charge of the codebase.
21:35 Logos01 Gotta go attend a PCI auditor meeting. Yay
21:35 JoeJulian Have fun.
21:35 JoeJulian Flip them the bird under the table for me, too.
21:46 om JoeJulian: trying to use nfs-ganesha.  building from git source on ubuntu 14.04 but the make cmd fails
21:46 om /root/nfs-ganesha/src/include/gsh_rpc.h:17:28: fatal error: rpc/xdr_inline.h: No such file or directory
21:46 om #include <rpc/xdr_inline.h>
21:46 om ^
21:46 om compilation terminated.
21:46 JoeJulian kkeithley: ^
21:47 JoeJulian kkeithley is planning on adding it to the gluster ppa. Not sure what your schedule is like, om.
21:48 om that would be great
21:48 om schedule is tough though for sure
21:48 neofob joined #gluster
21:48 om supposed to get this cluster within a couple weeks tops
21:49 JoeJulian Let's see what Kaleb has to say.
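The header om is missing normally comes from libntirpc, which nfs-ganesha bundles as a git submodule; a guess at the usual source-build steps (not verified on Ubuntu 14.04):

    git clone https://github.com/nfs-ganesha/nfs-ganesha.git
    cd nfs-ganesha
    git submodule update --init --recursive   # pulls libntirpc, which provides rpc/xdr_inline.h
    mkdir build && cd build
    cmake ../src
    make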
21:49 om JoeJulian: do you think using ganesha will resolve my nfs4_setfacl issues on the remote nfs client?
21:50 om after all, the glusterfs mount works, but the nfs4 kernel server mount does not take the setfacl or nfs4_setfacl
21:51 JoeJulian ndevos would be the better guy to ask, but he's on a much earlier day. I think he's at gmt+0.
21:51 om the local glusterfs mount is taking the acl's
21:51 om and thanks for your support on that
21:51 om perhaps I'll just use glusterfs mount remotely...
21:53 JoeJulian That's usually my recommendation.
21:54 om will see how that works with setfacl and get back here
21:54 om Thanks JoeJulian !!
21:54 JoeJulian You're welcome.
21:57 om Quick question... I set up heartbeat with a VIP that is set up as failover on the glusterfs servers.
21:57 JoeJulian No need unless you're using it for NFS, but go on...
21:57 om Is that appropriate?  Or is using the glusterfs client mount not supposed to use a VIP because glusterfs manages it?
21:58 JoeJulian @mount server
21:58 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
21:58 JoeJulian So... if you use a floating ip for the hostname of one of your bricks, that'll really break the clients.
21:58 om oh
21:59 om Thing is the gluster volume spans 2 DCs
21:59 om in different networks
21:59 JoeJulian bad plan
21:59 om the remote gluster client can only access it's local DC gluster server
22:00 JoeJulian Well...
22:00 JoeJulian What's the latency like?
22:00 om not too bad for replication only
22:01 om if I use the gluster client without a VIP, it will try to connect to other brick servers right?
22:01 om and if it does that for brick servers in the other dc it will fail
22:01 haomaiwa_ joined #gluster
22:01 om besides, I don't want gluster client to use bricks in another dc anyhow
22:02 JoeJulian It will *always* connect to *all* the bricks in the volume.
22:02 om I'm just trying to figure out if using heartbeat for gluster client failover is bad
22:02 JoeJulian The replication and distribution logic is in the client.
22:02 om so if one brick server goes down, it will failover
22:03 JoeJulian If one brick server goes down, there's no failover necessary because it's already connected to its replica.
22:03 om hmmm
22:03 om didn't know that!
22:03 om but will the client try to mount a replica that is not accessible?
22:03 om and fail?
22:03 om or just discard it?
22:06 JoeJulian It will try to connect to all the bricks. As long as it can connect to one replica of each replica set, the mount will succeed (unless you've set a quorum to effect otherwise).
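Since the mount server is only needed to fetch the volume definition, clients are often pointed at more than one server instead of a VIP; a sketch (server names hypothetical, and the option name varies by release, e.g. backupvolfile-server= on older clients):

    mount -t glusterfs server1:/gv_sftp0 /gluster_nfs \
        -o acl,backup-volfile-servers=server2:server3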
22:06 om hmmm
22:06 om that's interesting
22:06 om will try that and not use heartbeat
22:07 DV joined #gluster
22:09 om wohoo!
22:09 om setfacl worked now!
22:10 om Thanks JoeJulian
22:10 om can I buy you a drink?
22:12 calavera joined #gluster
22:12 om thanks either way!
22:12 JoeJulian I've got a better idea, use the link on my blog to donate to open-licensed cancer research. https://joejulian.name
22:12 glusterbot Title: JoeJulian.name (at joejulian.name)
22:13 om excellent idea
22:17 plarsen joined #gluster
22:48 ahino joined #gluster
22:53 d0nn1e joined #gluster
22:57 calavera joined #gluster
23:00 hagarth @channelstats
23:00 glusterbot hagarth: On #gluster there have been 411357 messages, containing 15376987 characters, 2528999 words, 9049 smileys, and 1286 frowns; 1792 of those messages were ACTIONs.  There have been 193563 joins, 4746 parts, 189095 quits, 29 kicks, 2418 mode changes, and 8 topic changes.  There are currently 237 users and the channel has peaked at 314 users.
23:01 haomaiwa_ joined #gluster
23:02 Logos01 om: Re: Use of a VIP -- it can be useful if you have many clients and want them to have uniformity of configuration.
23:02 JoeJulian no
23:02 Logos01 om: That is, they'll connect to the VIP IPaddr to get the brick's configuration.
23:03 JoeJulian Just use rrdns. It's fewer moving parts to have fail.
23:03 Logos01 JoeJulian: Depends on who has control of what.
23:03 Logos01 Getting RRDNS setup here would be ... a challenge.
23:04 Logos01 pcsd for a VIP addr on the other hand is straightforward.
23:04 Logos01 You're not wrong about the number of total components.
23:04 JoeJulian Not really. It's pretty easy to use dnsmasq or powerdns to process your local dns lookups first, and forward misses on upstream. For that matter, you can even use /etc/hosts.
23:05 JoeJulian You could even use mdns
23:05 JoeJulian scratch that.. I don't know if you can do rrdns with mdns.
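One way to get the rrdns JoeJulian suggests is to publish several A records for a single name in dnsmasq; a sketch (names and addresses hypothetical, assuming dnsmasq returns all matching host-record entries):

    # /etc/dnsmasq.d/gluster.conf
    host-record=gluster.example.com,192.168.1.11
    host-record=gluster.example.com,192.168.1.12
    # clients then mount gluster.example.com:/gv_sftp0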
23:06 Logos01 Ehhh... between using dnsmasq as a local caching dns on each client over using a floating VIP on the servers... I'd use the VIP. That's just me, maybe.
23:07 nathwill ditto here; service-VIPs-4-lyfe
23:07 JoeJulian I think you can disable caching with dnsmasq... I prefer pdns, but I have to use dnsmasq at $dayjob.
23:09 Logos01 JoeJulian: You *CAN*, but that can interfere with NetworkManager if you have that anywhere.
23:09 Logos01 Plus ... it can give you some surprising results unless you get a robust configuration.
23:09 Logos01 I've had this happen.
23:09 JoeJulian You're running desktop clients with gluster mounts? Interesting.
23:09 Logos01 NetworkManager isn't a desktop client.
23:10 Logos01 So sayeth the developers on-high.
23:10 Logos01 I usually rip it out of my server configs anywhere I can, but... you can't always rely on that.
23:10 JoeJulian No, but I haven't seen anyone use it in production. Only on user-facing machines.
23:10 Logos01 I had an environment whose original architect was a huge fan of it.
23:11 Logos01 I thankfully no longer work there, but the point is made.
23:11 JoeJulian I guess I'm just a lot more stubborn. ;)
23:11 Logos01 When it's a few hundred machines it's pretty difficult to just rip it out.
23:11 JoeJulian It helps that I usually end up saying "I told you so."
23:11 Logos01 Indeed.
23:12 JoeJulian I got to do that today. :D I'm such a meanie.
23:12 Logos01 lel
23:13 * Logos01 goes back to setting up a dual-hosted zfs-backed gluster volume setup w/ georeplication between two datacenters
23:13 Logos01 Weee
23:13 JoeJulian lol
23:13 Logos01 2TB of pre-existing data.
23:14 Logos01 Tens -- if not hundreds -- of thousands of less-than-a-megabyte-in-size files.
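For the geo-replication setup Logos01 mentions, the usual command sequence looks roughly like this (volume and host names hypothetical; syntax depends on the GlusterFS release):

    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status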
23:26 neofob joined #gluster
23:56 om Logos01: thanks for the thoughts for VIP's
23:57 om I suppose that is specific to using NFS?
23:57 om not sure what you meant by uniform configs
23:58 om but thanks anyhow!
23:58 farhorizon joined #gluster
23:59 JoeJulian om: He means that every client could have identical fstab entries.
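An example of the identical fstab entry JoeJulian means, reusing the volume name from om's setup (hostname hypothetical; rrdns or backup volfile servers make a single shared name safe across clients):

    gluster.example.com:/gv_sftp0  /gluster_nfs  glusterfs  defaults,_netdev,acl  0 0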
