
IRC log for #gluster, 2013-12-20


All times shown according to UTC.

Time Nick Message
00:44 mkzero joined #gluster
00:46 gdubreui joined #gluster
01:06 GoDrive joined #gluster
01:45 vpshastry joined #gluster
01:55 mohankumar joined #gluster
01:55 hybrid5121 joined #gluster
01:59 shubhendu joined #gluster
02:10 lyang0 joined #gluster
02:22 gmcwhistler joined #gluster
02:29 vpshastry joined #gluster
02:46 mattappe_ joined #gluster
02:48 mohankumar joined #gluster
02:51 mattappe_ joined #gluster
02:58 hybrid512 joined #gluster
03:00 kshlm joined #gluster
03:11 bharata-rao joined #gluster
03:28 RameshN joined #gluster
03:29 vpshastry joined #gluster
03:33 vpshastry left #gluster
03:36 gmcwhistler joined #gluster
03:42 kameda joined #gluster
03:45 kameda Hello. I use glusterfs on CentOS6. Today I tried to install glusterfs-server using yum, but the glusterfs-server package could not be found. Does anyone know where the package went?
03:47 kameda I know that glusterfs-server existed in the yum repository half a year ago, but now it is gone.
03:48 saurabh joined #gluster
03:50 solid_liq joined #gluster
03:50 solid_liq joined #gluster
03:52 sticky_afk joined #gluster
03:53 kameda I understand now that the glusterfs-server yum package is not available for version 3.4. hmm...
03:53 stickyboy joined #gluster
03:58 diegows joined #gluster
04:00 itisravi joined #gluster
04:00 mohankumar joined #gluster
04:20 gdubreui joined #gluster
04:20 randallman joined #gluster
04:21 randallman howdy, anyone know if there's a way to tell ovirt and/or qemu-kvm about a backup volfile server? :P
04:21 randallman when using the gluster protocol mounts (i.e. not fuse)
04:22 randallman I was reading the code; apparently backup-volfile-server is handled by the mount script, which stacks extra arguments onto the glusterfs command, but in libgfapi the prototype for glfs_set_volfile_server doesn't really accept a 2nd (or Nth, where N > 1) server
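
For readers following along: the libgfapi call under discussion has roughly the shape below in the 3.4-era API. This is a minimal sketch, not gluster's own code; the volume name "myvol" and the host are placeholders, and it can be built with something like gcc test.c $(pkg-config --cflags --libs glusterfs-api). It shows the point being made above: glfs_set_volfile_server() accepts one host, so the backup-volfile-server list that the mount script stacks onto the glusterfs command line has no obvious counterpart here.

    /* Minimal sketch (not gluster's own code) of pointing a libgfapi
     * client at a volfile server.  "myvol" and the host name are
     * placeholders. */
    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("myvol");
        if (!fs)
            return 1;

        /* One transport, one host, one port (24007 = glusterd). */
        if (glfs_set_volfile_server(fs, "tcp", "gluster1.example.com", 24007) != 0) {
            fprintf(stderr, "glfs_set_volfile_server failed\n");
            glfs_fini(fs);
            return 1;
        }

        /* glfs_init() fetches the volfile; if this one server is down,
         * there is no second server to fall back to. */
        if (glfs_init(fs) != 0) {
            fprintf(stderr, "glfs_init failed\n");
            glfs_fini(fs);
            return 1;
        }

        /* ... glfs_open()/glfs_read()/glfs_write() as needed ... */
        glfs_fini(fs);
        return 0;
    }
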
04:32 mattappe_ joined #gluster
04:33 kanagaraj joined #gluster
04:35 mattappe_ joined #gluster
04:36 mattappe_ joined #gluster
04:42 mattappe_ joined #gluster
04:42 aravindavk joined #gluster
04:43 mattapperson joined #gluster
04:45 divbell joined #gluster
04:47 ababu joined #gluster
04:49 mattappe_ joined #gluster
04:50 mattapperson joined #gluster
04:51 kdhananjay joined #gluster
04:52 jag3773 joined #gluster
04:55 MiteshShah joined #gluster
04:57 ppai joined #gluster
05:12 overclk joined #gluster
05:14 vpshastry joined #gluster
05:14 psharma joined #gluster
05:14 prasanth joined #gluster
05:14 hagarth joined #gluster
05:14 satheesh1 joined #gluster
05:27 shylesh joined #gluster
05:28 bala joined #gluster
05:31 CheRi joined #gluster
05:44 satheesh joined #gluster
05:52 ndarshan joined #gluster
05:56 spandit joined #gluster
05:59 shri joined #gluster
05:59 shri hagarth: Hi
06:12 rastar joined #gluster
06:16 lalatenduM joined #gluster
06:21 anands joined #gluster
06:23 RameshN joined #gluster
06:24 hagarth joined #gluster
06:24 hagarth shri: Hello
06:25 shri hagarth Hi
06:25 shri hagarth: any luck for libgfapi
06:25 shri on devstack ?
06:26 glusterbot New news from newglusterbugs: [Bug 1045309] "volfile-max-fetch-attempts" was not deprecated correctl.. <https://bugzilla.redhat.com/show_bug.cgi?id=1045309>
06:28 hagarth shri: heard from a few others that it's working for them, haven't had time to go through the strace logs.
06:28 zeittunnel joined #gluster
06:29 shri hagarth: Ok
06:38 Dave2 joined #gluster
06:39 twx joined #gluster
06:42 RameshN joined #gluster
06:57 shri_ joined #gluster
07:04 satheesh joined #gluster
07:21 ricky-ti1 joined #gluster
07:21 anands1 joined #gluster
07:26 glusterbot New news from newglusterbugs: [Bug 1040348] mount.glusterfs needs cleanup and requires option validation using getopt <https://bugzilla.redhat.com/show_bug.cgi?id=1040348>
07:31 ngoswami joined #gluster
07:31 jtux joined #gluster
07:40 satheesh1 joined #gluster
07:41 fidevo joined #gluster
07:44 RameshN joined #gluster
07:45 aravindavk joined #gluster
07:54 satheesh joined #gluster
07:56 VerboEse joined #gluster
08:01 ctria joined #gluster
08:12 blook joined #gluster
08:18 anands joined #gluster
08:39 mgebbe__ joined #gluster
08:39 mgebbe___ joined #gluster
08:42 mgebbe joined #gluster
08:43 hagarth joined #gluster
08:48 PatNarciso joined #gluster
08:49 twx joined #gluster
08:54 shylesh joined #gluster
09:00 KORG joined #gluster
09:01 andreask joined #gluster
09:11 TomKa joined #gluster
09:13 TomKa joined #gluster
09:13 FarbrorLeon joined #gluster
09:13 thommy_ka joined #gluster
09:24 andreask joined #gluster
09:28 mohankumar joined #gluster
09:46 FarbrorLeon joined #gluster
09:50 ababu joined #gluster
09:57 spandit joined #gluster
10:05 vpshastry1 joined #gluster
10:09 psyl0n joined #gluster
10:17 overclk joined #gluster
10:23 psharma_ joined #gluster
10:24 psyl0n left #gluster
10:33 anands joined #gluster
10:46 vpshastry joined #gluster
10:47 ira joined #gluster
10:49 hagarth joined #gluster
10:52 FarbrorLeon joined #gluster
10:53 ndarshan joined #gluster
10:54 FarbrorLeon joined #gluster
10:55 MiteshShah joined #gluster
10:55 RameshN joined #gluster
10:55 spandit joined #gluster
10:59 ppai joined #gluster
11:00 calum_ joined #gluster
11:10 diegows joined #gluster
11:54 mohankumar joined #gluster
11:57 glusterbot New news from newglusterbugs: [Bug 1045426] geo-replication failed with: (xtime) failed on peer with OSError, when use non-privileged user <https://bugzilla.redhat.com/show_bug.cgi?id=1045426>
11:59 ndarshan joined #gluster
12:06 yosafbridge joined #gluster
12:07 brimstone joined #gluster
12:09 ppai joined #gluster
12:17 JoeJulian randallman: Can you use rrdns with a single server name?
12:26 anands1 joined #gluster
12:28 hagarth joined #gluster
12:39 edward1 joined #gluster
13:07 wgao joined #gluster
13:08 ira joined #gluster
13:09 anands joined #gluster
13:09 ira joined #gluster
13:10 ira joined #gluster
13:10 ira joined #gluster
13:11 bennyturns joined #gluster
13:19 mattappe_ joined #gluster
13:20 mattapperson joined #gluster
13:22 zeittunnel joined #gluster
13:23 bala joined #gluster
13:27 satheesh1 joined #gluster
13:34 primechuck joined #gluster
13:36 eseyman joined #gluster
13:36 RicardoSSP joined #gluster
13:36 RicardoSSP joined #gluster
13:37 gmcwhistler joined #gluster
13:46 zwu joined #gluster
13:59 B21956 joined #gluster
14:09 ira_ joined #gluster
14:10 ira joined #gluster
14:19 randallman JoeJulian: Does having multiple A (or AAAA) records actually get account for with glfs_set_volfile_server?
14:19 randallman err accounted
14:19 randallman I was reading the code, but it gets deep into its RPC rather quickly :)
14:19 randallman I suppose I could test it easily enough
14:20 randallman Lots of calls to 'getnameinfo' in src/common-utils.c
14:20 randallman which it seems houses the gf_resolve_ip6 function
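
A quick aside on the rrdns suggestion: a round-robin DNS name simply returns several address records to whoever resolves it, as the plain-libc sketch below shows (the host name gluster.example.com is a placeholder; 24007 is the glusterd port). Whether the gluster client actually walks that whole list when the first address fails is exactly the open question here.

    /* Plain libc sketch (not gluster code): resolve a round-robin name
     * and list every address record it returns. */
    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints, *res, *p;
        char addr[INET6_ADDRSTRLEN];

        memset(&hints, 0, sizeof(hints));
        hints.ai_family   = AF_UNSPEC;      /* A and AAAA records */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("gluster.example.com", "24007", &hints, &res) != 0)
            return 1;

        for (p = res; p != NULL; p = p->ai_next) {
            void *in = (p->ai_family == AF_INET)
                ? (void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
                : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
            inet_ntop(p->ai_family, in, addr, sizeof(addr));
            printf("candidate volfile server: %s\n", addr);
        }

        freeaddrinfo(res);
        return 0;
    }
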
14:21 nocturn joined #gluster
14:22 mattappe_ joined #gluster
14:23 japuzzo joined #gluster
14:25 social joined #gluster
14:29 social ndevos: sorry, I'm on vacation and got a bit off. Well, it's a hard question; we had a long discussion with colleagues about this. The thing is that applications usually expect a local posix filesystem, and with that setup it's quite OK not to call fsync on close, as the write is transparent to other applications and they see the data the moment it was written to the VFS. You don't require super safety so you don't need to be sure it's on disk but yo
14:30 social ndevos: yet, on the other hand, our devs are working on a networked filesystem, and they could realize that it's not that easy :/
14:32 shapemaker joined #gluster
14:33 social ndevos: and I really think that NFS does fsync when you close
14:35 social https://www.ietf.org/rfc/rfc3530.txt > 1.4.6.  Client Caching and Delegation " Also, when the file is closed, any modified data is written to the server."
14:40 ndevos social: oh, nice find, in that case I think that glusterfs-fuse should behave like that too
14:40 social well but that's NFS4
14:41 ndevos social: yes, but it would be good if users knew how NFSv4 behaves and glusterfs matched that
14:42 ndevos social: it also does not say when the data should be written with a close()...
14:43 social I think the best way to test this is just to spin up NFS and try it
14:43 ndevos yeah, probably
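
One way to run that experiment against either an NFS or a glusterfs-fuse mount is a small program that writes a file and closes it without fsync(), then checks from another client (or after a deliberate server crash) whether the data made it to the server. A rough sketch, with the mount path /mnt/glustervol as a placeholder:

    /* Rough test sketch: write through a client mount and close()
     * without fsync(), then verify from elsewhere whether the data
     * reached the server.  The mount path is a placeholder. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/mnt/glustervol/close-flush-test";
        const char *data = "written, then closed, never fsynced\n";

        int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, data, strlen(data)) < 0) {
            perror("write");
            close(fd);
            return 1;
        }
        /* Deliberately no fsync(fd): if the client follows the NFSv4
         * wording quoted above, close() alone should push the dirty
         * data to the server. */
        if (close(fd) < 0) {
            perror("close");
            return 1;
        }
        return 0;
    }
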
14:54 TDJACR joined #gluster
14:56 sghosh joined #gluster
15:01 japuzzo left #gluster
15:02 calum_ joined #gluster
15:09 kaptk2 joined #gluster
15:09 plarsen joined #gluster
15:09 wushudoin| joined #gluster
15:10 jobewan joined #gluster
15:11 dbruhn joined #gluster
15:13 calum_ joined #gluster
15:13 65MAAGS1O joined #gluster
15:13 bala joined #gluster
15:14 mattappe_ joined #gluster
15:16 bugs_ joined #gluster
15:17 mattappe_ joined #gluster
15:18 hybrid5121 joined #gluster
15:20 social joined #gluster
15:22 anands joined #gluster
15:24 andreask joined #gluster
15:25 zapotah joined #gluster
15:25 zapotah joined #gluster
15:29 hagarth joined #gluster
15:30 mattappe_ joined #gluster
15:32 mattapperson joined #gluster
15:34 ozux joined #gluster
15:35 mattappe_ joined #gluster
15:36 mattappe_ joined #gluster
15:41 pravka joined #gluster
15:42 asku joined #gluster
15:48 zerick joined #gluster
15:54 mattapperson joined #gluster
15:54 luisp joined #gluster
15:55 LoudNoises joined #gluster
15:57 lpabon joined #gluster
15:57 mattappe_ joined #gluster
15:58 lpabon joined #gluster
15:58 lpabon joined #gluster
15:58 sjoeboo joined #gluster
16:02 bugs_ joined #gluster
16:07 asku joined #gluster
16:16 semiosis :O
16:20 badone joined #gluster
16:24 dbruhn :p
16:25 vpshastry joined #gluster
16:26 vpshastry left #gluster
16:33 hagarth :O
16:36 mattap___ joined #gluster
16:40 sroy_ joined #gluster
16:51 sroy_ joined #gluster
16:52 zaitcev joined #gluster
16:52 sticky_afk joined #gluster
16:52 chirino joined #gluster
16:53 stickyboy joined #gluster
17:10 mattappe_ joined #gluster
17:10 sroy_ joined #gluster
17:12 badone joined #gluster
17:22 Technicool joined #gluster
17:23 zapotah joined #gluster
17:23 vpshastry joined #gluster
17:29 vpshastry joined #gluster
17:32 ctria joined #gluster
17:37 JonnyNomad joined #gluster
17:39 Mo__ joined #gluster
17:49 thogue joined #gluster
17:51 mattappe_ joined #gluster
17:56 Peanut left #gluster
17:57 pravka left #gluster
18:11 lalatenduM joined #gluster
18:11 neofob joined #gluster
18:12 mattappe_ joined #gluster
18:14 mattapp__ joined #gluster
18:16 RedShift joined #gluster
18:20 sroy_ joined #gluster
18:26 zwu joined #gluster
18:38 jbrooks joined #gluster
18:51 failshell joined #gluster
19:06 daMaestro joined #gluster
19:18 jobewan is anyone using gluster as a backend to distributed puppet masters?
19:26 jobewan anyone here?
19:27 rotbeard joined #gluster
19:35 JoeJulian jobewan: I haven't heard of that being done, but I can't imagine any problems.
19:36 jobewan I am, and having HUGE issues, but I think it's in my gluster design...
19:37 jobewan we have 19 regions all around the country.  Each site w/ a puppet master and a single foreman server to manage them all
19:37 JoeJulian high-latency replicated volumes?
19:37 jobewan I want to distribute all of my puppet modules on a gluster volume across all of those sites
19:37 jobewan EXTREME latency
19:37 JoeJulian Yeah, geo-replication for that.
19:37 jobewan well...
19:38 jobewan If I dd a 1gb file, it takes about 15 seconds
19:38 JoeJulian yep
19:38 jobewan but when I do my puppet modules I get maybe 5K a second it seems
19:38 jobewan so how does geo replication work?
19:39 failshell joined #gluster
19:39 JoeJulian http://gluster.org/community/documentation/index.php/Gluster_3.2:_GlusterFS_Geo-replication_Deployment_Overview I've got to run in to Seattle so I'll be back in around an hour.
19:39 glusterbot Title: Gluster 3.2: GlusterFS Geo-replication Deployment Overview - GlusterDocumentation (at gluster.org)
19:39 jobewan gotcha, looking now
19:39 jobewan thanks for the quick pointer
19:40 JoeJulian Well, I'm actually not going to run. That would take considerably longer.
19:40 jobewan :)
19:40 B21956 joined #gluster
19:44 mattappe_ joined #gluster
19:45 mattapperson joined #gluster
19:51 semiosis jobewan: how many nodes connect to the puppetmaster at a site?
19:52 jobewan each site puppet master manages a virtual infrastructure of 100-300+ vms
19:53 jobewan We also have international sites I'm going to add in, so I think geo-rep is definitely the way to go.  I'm just torn between setting up a cron to do a git pull every 'x' minutes, or setting up the geo-rep w/ gluster now
19:54 semiosis i would do the cron/git pull, personally
19:55 semiosis if you had many puppetmasters & thousands of nodes, then maybe a gluster volume to distribute the FS load from all the puppet masters, but since all your puppet nodes will be going through a single puppetmaster anyway, i would avoid the complexity of a distributed filesystem if I were you
19:56 semiosis a single ssd in the puppetmaster server should be able to handle your load, right?  if not, then a couple of them in RAID0 i'm sure
19:57 jobewan there are thousands of nodes and 19 puppet masters
19:58 semiosis but at each region, only one puppetmaster, and a node only connects to the single puppetmaster in its region... do I understand that right?
19:59 jobewan each region has 1 puppet master and 300 nodes.  I need all 19 region/masters to keep the module data in sync
20:01 semiosis my advice would be git-pull in a cron job
20:01 jobewan I think I'm now just trying to make a case to use gluster when a script to leverage git would probably be better though.
20:01 jobewan yea
20:02 jobewan I've never used gluster in an environment this distributed.  The read performance is not what I was expecting.
20:02 jobewan I thought it would be poor... but not this poor (although it's very understandable why it is)
20:02 semiosis imho here's the only way I can imagine gluster making sense for a global puppet deploy...
20:03 mattappe_ joined #gluster
20:03 semiosis each region has a gluster volume with replica 3+ (more replicas give higher read performance) and multiple puppetmasters mounting that volume
20:04 semiosis a "master" volume which pulls from git & uses geo-rep to distribute changes to the gluster clusters in all the regions
20:04 semiosis could skip that second part, and just have each region do the git-pull in a cron job, and store the output in a gluster volume
20:05 jobewan gotcha
20:05 jbrooks joined #gluster
20:05 semiosis point being that the gluster volume is on a LAN, each region having its own local volume, replicated across different parts of the LAN, with replicas spread out to be closer to the puppetmasters
20:05 semiosis so you might have a gluster replica server + a puppetmaster in each of three availability zones
20:06 jobewan that makes sense.  If I had more puppet masters it would be more efficient.  Maybe something to look into when I add more masters to each site
20:06 semiosis but i think that would be way overkill for just a few hundred puppet nodes
20:07 semiosis sjoeboo: if you're around, maybe you can chime in with how you do puppet?  you have a large deployment :)
20:11 diegows joined #gluster
20:37 juhaj joined #gluster
20:39 juhaj Can anyone help me: on client A I can write a file to a glusterfs mount but on client B I cannot. The only difference between the clients is the subnet (in fact, even client B can write when it is in the same subnet as A). But auth.allow allows both subnets (it is 10.*). Is there some other subnet-related access control somewhere?
20:44 dbruhn juhaj, are you using DNS or IP addresses between your peers?
20:45 juhaj DNS
20:45 dbruhn if you run gluster peer status what is returned?
20:46 juhaj "peer status: No peers present", but that's normal as there is just one peer
20:46 juhaj This is a client problem
20:46 juhaj (Or, it manifests on a client only)
20:48 juhaj Hmm, I switched to IP only, and now B cannot even mount it
20:48 dbruhn assuming you are using the fuse client?
20:49 juhaj Aha, using IP only causes DNS lookups, which give crazy IPs...
20:49 juhaj Yes, I am
20:50 juhaj DNS issue dealt with: "search home.net" is the craziest you can have in your /etc/resolv.conf as their DNS will resolve *anything*
20:51 dbruhn The gluster client when it connects to the first server gets a manifest from the server about what servers it can connect to. Unsure how to resolve it with only one gluster server...
20:51 dbruhn why are you using gluster with a single server?
20:53 juhaj Because with two servers I only ever ran into problems. Very similar to this, in fact: I was never able to get all clients to write to shared directories. (It seemed acl support was completely broken: everyone was able to write to directories if the user or the user's primary group owned the directory or if world had +w, but if it was owned by someone else and a non-primary group, it failed regardless of any acl)
20:55 dbruhn I guess I was more interested why you chose a distributed networked file system instead of a more traditional thing like a NFS or Samba server?
20:55 juhaj replication, which of course is a bit pointless now that I replicate onto the same server
20:56 dbruhn What version are you running?
20:56 juhaj Besides nfsv4+gss was so unreliable when I tried it a couple of years ago that no thanks. And samba does not seem like something I want to work with
20:57 juhaj 3.4.0-1
20:57 dbruhn What file system are you running on your bricks?
20:58 failshell joined #gluster
20:59 juhaj Splendid! Changing to IP only fixed it
20:59 juhaj xfs
21:00 dbruhn Good!
21:02 dbruhn Seems like you have a couple other things you should be checking out though. Seems like you would want to work out the issues, or maybe look into a solution that better fits your requirements.
21:03 juhaj The second one is certainly on my todo-list, but there aren't really that many options
21:03 dbruhn What kind of data are you storing on the system and how much data are you storing?
21:04 juhaj text, photos, videos
21:05 dbruhn How much of it?
21:05 juhaj It varies, but not much
21:05 juhaj (To me much is hundreds of TB)
21:06 dbruhn Well, these are relative terms
21:07 dbruhn If you're talking about 1TB of data, that's very different than 10TB, and that's very different than 20, 30, 40, 50 TB of data when it comes to a solution for your problem.
21:08 juhaj I guess 10 GB or so is fine here, perhaps 100
21:12 sroy_ joined #gluster
21:24 anands joined #gluster
21:29 neofob left #gluster
21:35 gmcwhistler joined #gluster
21:37 juhaj So, what I have and what I would like are: I have a redundant filesystem with proper (=acl) shared directories available everywhere I or any member of the family goes. What I would like to have is a dropbox-like system with an automatic local cache and sync onto the remote replica, but under my own control.
21:38 dbruhn Maybe you are better off using something like owncloud? and then using gluster under it for the replication?
21:42 juhaj At first thought: excellent idea
21:42 juhaj thanks
21:42 dbruhn np
21:42 juhaj But does it do cache and filesystem?
21:42 juhaj I thought owncloud is just wedav
21:42 juhaj *webdav
21:43 juhaj Hm.. and I am not so keen on running apache: too much attack surface unless I keep it on the 10/8 network, too
21:44 dbruhn owncloud is an open source dropbox-style product; it allows you to run an agent just like dropbox
21:44 dbruhn the server side I am not sure about, just offering a suggestion as to what might solve your needs based on the requirements
21:47 juhaj So, owncloud is not just a server-side webdav?
21:47 gmcwhistler joined #gluster
21:48 dbruhn Looks like it is built around webdav from the site.
21:48 dbruhn There are other projects out there that do the same thing in different ways, but most of them are stale
21:49 juhaj I had a look at it and I think my conclusion was that none of those offer anything even close to posix
21:49 juhaj (it = owncloud, those = owncloud + "those other projects"; I only remember sparkleshare for now)
21:50 dbruhn I can't remember what that novel one was, but they had one too
22:05 dbruhn is the rebalance status command broken in 3.3.2?
22:16 chirino joined #gluster
22:27 calum_ joined #gluster
22:44 srsc joined #gluster
22:45 srsc in gluster 3.4.1, is it possible to geo-replicate specific directories in a volume to a slave? i.e. to not replicate the entire volume?
22:46 srsc or should we just manually rsync for that use case?
22:52 mattappe_ joined #gluster
22:56 srsc and am i missing something, or is geo-rep in 3.4.1 just a managed, wrapped one-way rsync? seems like the slave is just a remote directory and not a gluster volume.
23:04 mattappe_ joined #gluster
23:04 chirino joined #gluster
23:05 primechuck joined #gluster
23:09 semiosis srsc: yes it's just managed rsync.  slave can be another gluster volume, but doesn't have to be
23:27 badone joined #gluster
23:39 shapemaker joined #gluster
23:44 srsc semiosis: ok, and there's no way to select specific directories from the local (master) volume to be replicated, instead of the whole volume?
23:47 srsc also, getting this when i try to geo-rep to a remote gluster volume over ipsec vpn:
23:47 srsc http://fpaste.org/63652/
23:47 glusterbot Title: #63652 Fedora Project Pastebin (at fpaste.org)
23:48 chirino joined #gluster
