
IRC log for #gluster, 2014-11-05


All times shown according to UTC.

Time Nick Message
00:02 failshell joined #gluster
00:02 coredump joined #gluster
00:06 B21956 joined #gluster
00:19 badone_ joined #gluster
00:23 hollaus joined #gluster
00:24 mbukatov joined #gluster
00:37 Pupeno joined #gluster
01:01 DougBishop joined #gluster
01:13 glusterbot New news from newglusterbugs: [Bug 1158262] Rebalance failed to rebalance files <https://bugzilla.redhat.com/show_bug.cgi?id=1158262>
01:42 hollaus joined #gluster
01:47 ira joined #gluster
01:53 DV__ joined #gluster
01:59 harish joined #gluster
02:06 bala joined #gluster
02:17 msmith_ joined #gluster
02:21 ira joined #gluster
02:49 hchiramm joined #gluster
02:49 bharata-rao joined #gluster
03:04 hollaus joined #gluster
03:11 chirino joined #gluster
03:18 David_H__ joined #gluster
03:23 ira joined #gluster
03:30 David_H_Smith joined #gluster
03:32 David_H__ joined #gluster
03:33 DV joined #gluster
03:34 David_H_Smith joined #gluster
03:34 shubhendu__ joined #gluster
03:35 David_H_Smith joined #gluster
03:35 bharata-rao joined #gluster
03:45 kshlm joined #gluster
04:07 kumar joined #gluster
04:10 nishanth joined #gluster
04:29 RameshN joined #gluster
04:30 anoopcs joined #gluster
04:37 ira joined #gluster
04:38 SOLDIERz joined #gluster
04:40 rafi1 joined #gluster
04:40 Rafi_kc joined #gluster
04:46 atinmu joined #gluster
04:46 lalatenduM joined #gluster
04:48 coredump joined #gluster
04:48 prasanth_ joined #gluster
04:52 nbalachandran joined #gluster
04:52 spandit joined #gluster
04:56 smohan joined #gluster
04:57 Pupeno_ joined #gluster
04:58 sahina joined #gluster
04:58 badone_ joined #gluster
05:00 ricky-ti1 joined #gluster
05:01 kanagaraj joined #gluster
05:01 saurabh joined #gluster
05:03 soumya joined #gluster
05:11 ndarshan joined #gluster
05:11 aravindavk joined #gluster
05:14 ira joined #gluster
05:20 hagarth joined #gluster
05:25 ricky-ticky1 joined #gluster
05:28 atalur joined #gluster
05:30 davemc joined #gluster
05:39 meghanam joined #gluster
05:39 meghanam_ joined #gluster
05:50 ramteid joined #gluster
05:51 dusmant joined #gluster
05:51 hollaus joined #gluster
06:00 kshlm joined #gluster
06:00 kdhananjay joined #gluster
06:15 bala joined #gluster
06:15 anoopcs1 joined #gluster
06:17 ekuric joined #gluster
06:17 dusmant joined #gluster
06:19 SOLDIERz joined #gluster
06:19 ppai joined #gluster
06:20 anoopcs joined #gluster
06:22 ekuric left #gluster
06:22 ekuric joined #gluster
06:25 anoopcs joined #gluster
06:29 Philambdo joined #gluster
06:33 kaushal_ joined #gluster
06:37 kshlm joined #gluster
06:38 badone_ joined #gluster
06:39 raghu` joined #gluster
06:39 nshaikh joined #gluster
06:46 mariusp joined #gluster
06:48 haomaiwang joined #gluster
06:50 nishanth joined #gluster
06:50 ekuric1 joined #gluster
06:51 pdrakewe_ joined #gluster
06:54 azar joined #gluster
06:56 hollaus joined #gluster
07:00 haomaiwa_ joined #gluster
07:02 ctria joined #gluster
07:06 topshare joined #gluster
07:18 pkoro joined #gluster
07:19 atinmu joined #gluster
07:26 jiffin joined #gluster
07:28 ppai joined #gluster
07:31 d-fence joined #gluster
07:33 Fen2 joined #gluster
07:37 nishanth joined #gluster
07:38 rgustafs joined #gluster
07:41 haomaiwang joined #gluster
07:46 dusmant joined #gluster
07:49 prasanth_ joined #gluster
07:57 gildub joined #gluster
07:58 hollaus joined #gluster
07:58 ppai joined #gluster
08:09 ramteid joined #gluster
08:11 RameshN joined #gluster
08:17 prasanth_ joined #gluster
08:19 T0aD joined #gluster
08:20 hollaus joined #gluster
08:23 krullie Hi guys, I've got a serious situation here. Setup: 2 nodes in distributed replicated mode, each node with 2 bricks formatted with ZFS. We keep having locks on certain subfolders. We can't do a simple touch, for example. The situation has been going on and off now for almost 5 days and we just can't get to the root of the problem. Any help would be very much appreciated!
08:23 deniszh joined #gluster
08:32 ira joined #gluster
08:41 overclk joined #gluster
08:43 vimal joined #gluster
08:50 bala joined #gluster
08:50 dusmant joined #gluster
08:51 shubhendu__ joined #gluster
08:52 nishanth joined #gluster
09:02 Slashman joined #gluster
09:06 ira joined #gluster
09:09 MugginsM joined #gluster
09:11 Lethalman joined #gluster
09:11 Lethalman hi
09:11 glusterbot Lethalman: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:12 Lethalman I have 3 iscsi, and I'm going to have 3 bricks
09:12 Lethalman when a server goes down, I'd like the others to mount the iscsi fs RO and serve its files
09:12 Lethalman is there any translator like this or should I write my own?
09:13 Lethalman also, when the other server comes back online, unmount the RO fs
09:13 dusmant joined #gluster
09:21 social joined #gluster
09:29 shubhendu__ joined #gluster
09:31 dusmant joined #gluster
09:33 nishanth joined #gluster
09:48 Slydder joined #gluster
09:48 shubhendu__ joined #gluster
09:48 gildub joined #gluster
09:48 Slydder morning all
09:49 dusmant joined #gluster
09:50 soumya joined #gluster
09:53 liquidat joined #gluster
09:57 krullie morning everyone
09:58 SOLDIERz joined #gluster
09:59 krullie can anybody give me a hint about why i keep getting a "Another transaction is in progress. Please try again after sometime." message when i do a "volume heal storage info" ?
09:59 krullie I also have complete directories that are not accessible. for example touch /path/to/file doesn't return
10:03 Slydder ndevos: you there?
10:06 ndevos Slydder: sure
10:08 ndevos Lethalman: you can configure client-side quorum, that would not remount the iscsi side, but the client would detect that there is an issue and the volume would be seen as read-only
10:09 Lethalman ndevos, this must happen on the server not the client
10:09 Lethalman ndevos, or you mean to add a glusterfs client on the server itself to manage this case?
10:10 Lethalman not sure I understood sorry
10:10 ndevos Lethalman: I assumed you have the bricks on a iscsi mount, but maybe I'm wrong?
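A quick sketch for readers following along: the client-side quorum ndevos mentions is a per-volume option. Assuming a replica volume named myvol (a placeholder, not from the log), it would look roughly like this; check "gluster volume set help" on your version before relying on it:

    # with quorum-type auto, clients stop allowing writes (the volume
    # effectively turns read-only) when they cannot reach a majority
    # of the bricks in a replica set; myvol is a placeholder name
    gluster volume set myvol cluster.quorum-type auto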
10:13 meghanam joined #gluster
10:24 nishanth joined #gluster
10:24 mariusp joined #gluster
10:27 Slydder ndevos: got a question about localhost mounts. I read somewhere that there is a shortcut directly to local media when mounting localhost. If this is true, then the patch for address binding needs to be changed a bit to allow multiple address binds.
10:28 ndevos Slydder: I think this is about the NUFA xlator?
10:29 * ndevos has no idea how it is implemented, and if "localhost" in that case means "127.0.0.1"
10:30 XpineX joined #gluster
10:31 Slydder the bind ip address config option that was being ignored. now you can bind to a single address and it works great. but if the shortcut reads and writes set up for localhost connections are not active, then the current patch that you made carries a massive performance hit. unless, that is, we are allowed to bind to multiple addresses (i.e. option transport.socket.bind-address 10.1.5.3 127.0.0.1) which would allow for the local read/w
10:33 ndevos Slydder: nah, reads are done from the brick that responds the quickest to the LOOKUP, that is not bound to 127.0.0.1 or localhost
10:33 Slydder ok. then I am at a complete loss as to why I am having such bad read performance.
10:34 ndevos how bad is bad?
10:35 Slydder takes magento about 20 to 60 seconds to respond. and that is with everything being cached in redis (backend and session)
10:35 Slydder compared to the 0.404 ms returns I get on the standard server
10:36 ndevos I never heard of magento, and no idea what redis actually is (but at least I've seen that name before)
10:36 ndevos and, I'd call that not bad, but horrible
10:36 Slydder redis is like memcached but a lot better
10:36 ndevos ah, ok
10:37 Slydder no. that is horrible. like I said, most of the site is in cache (i.e. memory and not hard drive) and it takes upwards of 60 seconds to respond. on my standard server magento responds in less than a second (0.404 ms).
10:38 ndevos do you happen to have anything in the logs of the mountpoint? like /var/log/glusterfs/${PATH_TO_MNT}.log
10:38 ndevos or, are you mounting over NFS?
10:39 Slydder nope and nope
10:39 ndevos NFS-clients have a better caching for small files than that the fuse-client has
10:39 ndevos maybe ,,(php) has some hints too?
10:39 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
10:39 glusterbot --fopen-keep-cache
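For context, glusterbot's second hint corresponds to starting the fuse client with longer-lived kernel caching. A rough sketch, where myvol, server1 and the 600-second values are illustrative assumptions rather than tested recommendations:

    # mount a volume with the glusterfs fuse client and raise the
    # attribute/entry cache timeouts (values are in seconds;
    # server1 and myvol are placeholders)
    glusterfs --volfile-server=server1 --volfile-id=myvol \
              --attribute-timeout=600 --entry-timeout=600 \
              --negative-timeout=600 --fopen-keep-cache /mnt/myvol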
10:40 Slydder apc.stat is at 0 and has been for a while. still no help. will post a bit of perf profiling so you can see.
10:40 ndevos okay, but without knowing the details of the workload, perf stats do not really help :-/
10:41 Slydder oh. there are a few points that just jump out at you when you see it.
10:41 ndevos ok :)
10:42 Slydder http://pastebin.com/eFc0EquU
10:42 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
10:42 ndevos hah!
10:43 ndevos you really should install one of the ,,(paste) utilities
10:43 glusterbot For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
10:43 Slydder hmmm. not seeing any ads on pastebin. ah. maybe glusterbot doesn't know how to install an adblocker? ;)
10:44 * ndevos sees many adas
10:44 ndevos *ads
10:45 Lethalman ndevos, iscsi -> mount -> brick, yes
10:45 Lethalman ndevos, e.g. iscsi1, iscsi2... when the server holding the iscsi2 brick goes down, I'd like the server with iscsi1 to also mount iscsi2 RO and serve the files
10:46 ndevos Slydder: so, those stats show *many* LOOKUPs, could you check in the client-side log if performance/md-cache is in the list of xlators?
10:47 Slydder ndevos: sorry?
10:47 ndevos Lethalman: what you intend is happening below the Gluster layer, it is not something Gluster can do for you
10:48 Lethalman ndevos, or I can write a translator for that
10:48 ndevos Lethalman: Gluster *can* make the mount of the volume read-only in case one of your iscsi-bricks fails
10:49 ndevos Slydder: md-cache is an xlator that caches the stat() for files (client-side), that may improve performance for you
10:49 Slydder am googleing for it now.
10:49 ndevos Slydder: are those many small-files in many directories?
10:49 Slydder jepp
10:49 pkoro joined #gluster
10:50 ndevos Slydder: then you surely would be interested in http://www.gluster.org/community/documentation/index.php/Features/Feature_Smallfile_Perf
10:51 ndevos Slydder: maybe you can 'pastebinit' your /var/lib/glusterd/vols/nvwh2/nvwh2-fuse.vol ?
10:53 Slydder http://paste.debian.net/130370/
10:53 glusterbot Title: debian Pastezone (at paste.debian.net)
10:56 ndevos Slydder: md-cache seems to be on - that should be fine
10:56 ndevos Slydder: ah, thats a replica-3 setup?
10:57 ndevos in that case, a LOOKUP is sent to 3 bricks and until all 3 answered, the LOOKUP is blocked
10:57 harish joined #gluster
10:57 ndevos LOOKUP triggers a check for consistency and that is quite expensive
10:58 Slydder yeah. and mostly unneeded.
10:58 ndevos I think that disabling the heal-check on LOOKUP would improve performance, but you'll have to rely on the self-heal daemon in that case
10:59 Slydder fine by me
10:59 ndevos check "gluster volume set help" and see if you can find the option?
10:59 * ndevos would need to do the same, pranithk would know, but he's not online
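The option they converge on a bit further down is the self-heal toggle. A hedged sketch of the relevant "gluster volume set" calls, with myvol as a placeholder volume name, and with the caveat ndevos gives: once these are off, only the self-heal daemon repairs files:

    # stop clients from performing self-heal checks in the I/O path;
    # myvol is a placeholder volume name
    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off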
11:01 ndevos Lethalman: it is probably easier to configure pacemaker to remount the iscsi-bricks read-only in case one is down
11:02 ndevos Lethalman: note that you should also disable the posix.health-check-interval, otherwise a read-only brick will make the glusterfsd exit
11:03 ndevos Lethalman: and, you will have to think of stopping/starting the self-heal daemon too
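A sketch of the two knobs ndevos mentions for Lethalman's read-only-brick scenario, again with myvol as a placeholder; the health check is exposed through "gluster volume set" as storage.health-check-interval, which is an assumption worth verifying on your release:

    # 0 disables the periodic brick health check that would otherwise
    # make glusterfsd exit when the brick filesystem goes read-only
    gluster volume set myvol storage.health-check-interval 0
    # the self-heal daemon can be toggled per volume as needed
    gluster volume set myvol cluster.self-heal-daemon off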
11:03 Lethalman ndevos, thought about that, but there's no integration between pacemaker and gluster... don't want requests to be lost
11:03 aravindavk joined #gluster
11:09 Lethalman ndevos, is it possible for a xlator to receive events about connect/disconnect of another brick?
11:09 Lethalman I see nothing related in xlator.h
11:10 ndevos Lethalman: no, I do not think they can, there is no brick <-> brick communication, I think
11:10 Lethalman ndevos, right
11:10 ndevos Lethalman: btw, there are ocf resource scripts that you can use with pacemaker
11:11 bala joined #gluster
11:11 Slydder ndevos: do you mean lookup-unhashed?
11:11 Lethalman ndevos, e.g. replacing brick1 which failed with another brick1ro mounted on the fly?
11:12 ndevos Lethalman: like http://www.hastexo.com/misc/static/presentations/lceu2012/glusterfs.html
11:12 glusterbot Title: GlusterFS in High Availability Clusters (at www.hastexo.com)
11:13 ndevos Lethalman: mounting really is one layer below Gluster, Gluster does not mount or manage the bricks themselves, only the contents on the bricks
11:13 Lethalman ndevos, that's ok
11:13 ndevos Slydder: no, I do not think that is the option
11:13 Lethalman ndevos, I mean, is gluster able to understand that I'm replacing the failed brick1 with brick1ro added on the fly?
11:14 Lethalman ndevos, btw that talk doesn't really say anything special :S
11:14 ndevos Lethalman: in general, gluster does not like read-only bricks - but you can probably make them work
11:14 Lethalman ndevos, ok not about read-only... is it possible for gluster to replace brick1 with another brick with the same contents mounted elsewhere?
11:14 Lethalman have to try
11:15 ndevos Lethalman: I think that should work
11:24 shubhendu joined #gluster
11:26 ira joined #gluster
11:35 mbukatov joined #gluster
11:36 meghanam joined #gluster
11:36 meghanam_ joined #gluster
11:39 ppai joined #gluster
11:48 pkoro joined #gluster
11:50 diegows joined #gluster
11:54 rolfb joined #gluster
12:02 jdarcy joined #gluster
12:05 chirino_m joined #gluster
12:05 edward1 joined #gluster
12:06 chirino joined #gluster
12:07 Slydder ndevos: cluster.data-self-heal: off
12:07 ndevos Slydder: right, that sounds more like it
12:08 Slydder unfortunately no big change
12:08 Slydder will do a perf profile later on and let you know how it turns out though.
12:11 SOLDIERz joined #gluster
12:16 chirino joined #gluster
12:18 nbalachandran joined #gluster
12:18 SOLDIERz joined #gluster
12:19 rjoseph joined #gluster
12:27 bala joined #gluster
12:28 kanagaraj joined #gluster
12:30 calisto joined #gluster
12:30 chirino_m joined #gluster
12:41 kshlm joined #gluster
12:46 Zordrak Soo.. gluster's NFS server and the standard nfsd are mutually exclusive, right? You can't run the two on the same server. So.. is there any possibility to use the gluster nfs server to serve a normal filesystem directory rather than an explicitly defined gluster brick?
12:46 ndevos Zordrak: correct, and no
12:47 ndevos Zordrak: you could do that with nfs-ganesha
12:47 inodb joined #gluster
12:47 overclk joined #gluster
12:47 Zordrak ndevos: thanks, worth considering
12:48 ndevos Zordrak: we're working to improve the integration with nfs-ganesha, things look good
12:48 haomaiw__ joined #gluster
12:48 Zordrak basically im looking to set up a separate export domain for my ovirt cluster to use for backups, and there's no reason to use a single-replica gluster brick other than gluster requires me to
12:48 Zordrak s/brick/volume/
12:48 glusterbot What Zordrak meant to say was: basically im looking to set up a separate export domain for my ovirt cluster to use for backups, and there's no reason to use a single-replica gluster volume other than gluster requires me to
12:48 smohan_ joined #gluster
12:50 Zordrak i dont know if theres any reason *not* to use a single-replica volume, just seems like a complexity-layer that a backup volume could easily do without
12:53 ndevos yeah, a single-replica (one brick?) would work, but it indeed seems like overkill to go through gluster for that
12:53 Zordrak *nod*
12:53 ndevos but, you can then mount your backup through any peer in the gluster cluster
12:54 Zordrak the cluster is 100M connected (inorite) so there's no intent to use multiple replicas.. it's single-writes to an NFS share as a backup
12:54 Zordrak i just dont have another box to do it from and ive 600GiB going spare in one of the ovirt hosts
12:55 Zordrak i miss things being simple when i used libvirtd :D
12:56 bene2 joined #gluster
13:01 bennyturns joined #gluster
13:02 LebedevRI joined #gluster
13:04 kdhananjay joined #gluster
13:05 chirino joined #gluster
13:11 prasanth_ joined #gluster
13:12 Fen1 joined #gluster
13:15 glusterbot New news from newglusterbugs: [Bug 1160709] libgfapi: use versioned symbols in libgfapi.so for compatibility <https://bugzilla.redhat.com/show_bug.cgi?id=1160709> || [Bug 1160710] libgfapi: use versioned symbols in libgfapi.so for compatibility <https://bugzilla.redhat.com/show_bug.cgi?id=1160710> || [Bug 1160711] libgfapi: use versioned symbols in libgfapi.so for compatibility <https://bugzilla.redhat.com/show_bug.cgi?id=
13:18 kaushal_ joined #gluster
13:19 topshare joined #gluster
13:23 calisto1 joined #gluster
13:25 parallax-lawrenc joined #gluster
13:25 parallax-lawrenc Hey, we’ve been using Gluster in production for a year or so now but always in replicated setup
13:26 parallax-lawrenc I’m trying to set up replicated + distributed across 4 nodes and am struggling with the volume file
13:26 parallax-lawrenc I tried making two replicated volumes out of the bricks, then a distributed one with the two replicas as subvolumes
13:27 parallax-lawrenc df -h shows the correct disk space (2x brick) but once I’d written files to the mount I got IO errors
13:27 parallax-lawrenc is using a volfile the wrong way of doing it?
13:27 parallax-lawrenc (and in relation to this, I’m guessing the way I setup the volfile is awfully wrong)
13:31 topshare joined #gluster
13:39 partner umm, i think due to me kicking off a rebalance (in the hope of emptying some of the bricks that are way under the defined min-disk-free limit) i managed to make the gluster command unresponsive. what exactly should i kick to bring it back to life? i cannot even stop the rebalance, and that will lead to disaster in a couple of days (due to a memory leak -> running out of mem)
13:40 partner its serving files just fine and according to logs the rebalance is moving some files and deleting stale linkfiles so in that sense its all fine, until..
13:43 partner maybe kick the daemon (but not the brick ones) ?
13:44 John_HPC joined #gluster
13:45 parallax-lawrenc btw - fixed the above. copied the volume file from the gluster logs, where it gets written automatically when you just mount it on one node :)
13:46 John_HPC http://paste.ubuntu.com/8803392/
13:46 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:46 John_HPC I am still having trouble. I have found most of the files that the gfids point to.
13:47 John_HPC but never seems to heal
13:47 John_HPC even after doing a full heal
13:48 topshare ZFS in Linux?
13:48 John_HPC ext3 linux
13:49 topshare Has anyone used ZFS with Gluster?
13:50 John_HPC Not I. I inherited this aging cluster and was told to keep it running =/
13:55 rafi1 joined #gluster
13:57 ira joined #gluster
14:01 bala joined #gluster
14:16 glusterbot New news from newglusterbugs: [Bug 1160732] gluster-devel@nongnu.org should not be accepting email anymore <https://bugzilla.redhat.com/show_bug.cgi?id=1160732>
14:24 Pupeno joined #gluster
14:25 parallax-lawrenc Hey Guys, still having issues with the above distributed + replicated setup
14:26 parallax-lawrenc it works fine if i write a few files to the cluster
14:26 mojibake joined #gluster
14:26 chirino_m joined #gluster
14:26 parallax-lawrenc then if i upload a load of files quickly, it shows an input/output error as it would in a split-brain scenario
14:36 theron joined #gluster
14:36 partner parallax-lawrenc: i guess one can do it with the volfiles too but probably need to do it without any gluster running as the volfile is updated dynamically. i've always used the gluster commands to achieve any volume setups
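For the record, the command-line route partner refers to looks roughly like this: with four bricks and "replica 2", gluster pairs the bricks in the order given, producing a 2x2 distributed-replicated volume. Host names and brick paths below are placeholders:

    # server1/server2 form one replica pair, server3/server4 the other
    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick1 server4:/data/brick1
    gluster volume start myvol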
14:37 rgustafs joined #gluster
14:37 theron_ joined #gluster
14:39 itisravi joined #gluster
14:46 parallax-lawrenc thanks partner - i’m gonna try and mount it using the normal mount syntax so it gets vol files from servers
14:46 parallax-lawrenc see what happens i guess
14:46 parallax-lawrenc any idea on how to mount and reference multiple servers?
14:47 parallax-lawrenc don’t want a client to go down just because one’s not available
14:51 B21956 joined #gluster
14:52 B21956 joined #gluster
14:52 tdjb joined #gluster
14:53 tdjb Hi
14:53 glusterbot tdjb: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:53 failshell joined #gluster
14:57 tdjb I have two nodes replicating, and my logs are filling up my disk really fast. I tried to add a brick to let gluster handle the replication when there was no data on node2 and node1 had ~5TB of data. I had to force-remove the brick, rsync the files, remove the file attributes and remove .glusterfs. Now, after the rsync, I re-added the brick and I'm spammed with errors like: https://bpaste.net/show/56d3a0c1d153
14:57 glusterbot Title: show at bpaste (at bpaste.net)
15:03 partner parallax-lawrenc: umm say what? not following on how you're setting up or mounting.. its enough for the client to reach any one of your servers (part of the volume) to get the volume file for the client
15:03 partner assuming you're using native (glusterfs / fuse)
15:05 clutchk joined #gluster
15:08 partner what i have done is i've given a round-robin dns entry including all the hosts serving. that way there is never a need to touch the client side configs even if the whole gluster infra is changed, just update the dns
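Besides the round-robin DNS that partner describes, the fuse mount helper can also be pointed at a fallback volfile server; this only matters at mount time, since afterwards the client talks to all bricks directly. A sketch with placeholder hostnames; the exact option name (backupvolfile-server vs. backup-volfile-servers) differs between releases, so treat it as an assumption to check against your mount.glusterfs:

    # server2 is only consulted if server1 cannot hand out the volfile
    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol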
15:08 virusuy joined #gluster
15:08 virusuy joined #gluster
15:09 bene2 joined #gluster
15:11 jmarley joined #gluster
15:12 wushudoin joined #gluster
15:14 msmith_ joined #gluster
15:16 glusterbot New news from newglusterbugs: [Bug 1138992] gluster.org broken links <https://bugzilla.redhat.com/show_bug.cgi?id=1138992>
15:17 doubt joined #gluster
15:18 _dist joined #gluster
15:19 DougBishop joined #gluster
15:20 Andreas-IPO joined #gluster
15:21 siel joined #gluster
15:24 Arrfab joined #gluster
15:24 theron joined #gluster
15:28 RameshN joined #gluster
15:38 tdjb Is there some best practice on how to add a replicated brick to a volume with only one brick? As I said earlier, I have 5 TB, and the copies are already not too far off from each other; I hoped that the remaining data could be healed by gluster.
16:01 calisto joined #gluster
16:01 nshaikh joined #gluster
16:01 partner i would be happy to hear that also, i'm supposed to move over 250+ TB of data and i'm just a bit worried if gluster can handle it and in what time frame
16:05 msmith__ joined #gluster
16:08 partner maybe, haven't thought the move much yet, couple of alternative options at hand
16:15 kr0w joined #gluster
16:16 msmith_ joined #gluster
16:18 gdavis331 left #gluster
16:18 bene joined #gluster
16:25 Pupeno_ joined #gluster
16:26 edwardm61 joined #gluster
16:27 pdrakeweb joined #gluster
16:27 kumar joined #gluster
16:36 jiffin joined #gluster
16:36 John_HPC http://paste.ubuntu.com/8803392/ These files don't seem to heal. I checked both bricks, files are there and md5sums check out.
16:36 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
16:37 ndevos John_HPC: maybe difference in the 'stat' attributes or xattrs?
16:39 John_HPC 1sec
16:41 jiffin joined #gluster
16:43 nishanth joined #gluster
16:45 hagarth joined #gluster
16:47 jiffin joined #gluster
16:47 John_HPC ndevos: The stats seem to show a change time that's off by 1 second
16:48 John_HPC So far, most stats of the two files are the same and the xattrs seem consistent
16:48 ndevos John_HPC: right, that would be the issue, and I guess some afr changelog (xattr) has some values in there too
16:49 ndevos did you check the "trusted.*" xattrs?
16:50 lmickh joined #gluster
16:53 jobewan joined #gluster
16:53 jiffin joined #gluster
16:56 John_HPC ndevos: I don't appear to have them set.
16:57 ndevos John_HPC: checked with "getfattr -m. -ehex -d $path_to_file" on the bricks?
16:58 R0ok_ joined #gluster
16:59 John_HPC ndevos: http://paste.ubuntu.com/8838249/   it appears trusted.afr.glustervol01-client-1 is different
16:59 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
17:00 John_HPC the bricks themselves seem identical
17:02 ndevos John_HPC: that is a meta-data split-brain (I think its called like that), see https://github.com/gluster/glusterfs/blob/release-3.5/doc/split-brain.md
17:02 glusterbot Title: glusterfs/split-brain.md at release-3.5 · gluster/glusterfs · GitHub (at github.com)
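Very roughly, the linked document walks through: inspect the trusted.afr.* changelog xattrs on both bricks, decide which copy to keep, clear the pending changelog on the copy you are discarding, and then let a heal run. A heavily abbreviated sketch only, reusing the xattr name from John_HPC's paste and a placeholder file path; follow the document itself for the real decision steps:

    # on the brick whose copy you chose NOT to keep, reset the changelog
    # it holds against the other brick, then trigger a heal
    setfattr -n trusted.afr.glustervol01-client-1 \
        -v 0x000000000000000000000000 /path/on/brick/to/file
    gluster volume heal glustervol01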
17:04 RameshN joined #gluster
17:05 John_HPC ndevos: thanks.
17:08 ndevos you're welcome
17:12 n-st joined #gluster
17:12 j2b joined #gluster
17:13 j2b left #gluster
17:20 ndk joined #gluster
17:21 meghanam joined #gluster
17:21 meghanam_ joined #gluster
17:22 portante joined #gluster
17:26 edward1 joined #gluster
17:29 j2b joined #gluster
17:32 jiffin joined #gluster
17:35 jiffin joined #gluster
17:35 ctria joined #gluster
17:38 jiffin joined #gluster
17:40 lkoranda joined #gluster
17:48 parallax-lawre-1 joined #gluster
17:49 nshaikh joined #gluster
17:50 jbrooks_ joined #gluster
17:52 parallax-lawrenc joined #gluster
17:52 lalatenduM joined #gluster
17:54 Pupeno joined #gluster
18:03 David_H_Smith joined #gluster
18:13 jiffin joined #gluster
18:13 PeterA joined #gluster
18:22 jiffin1 joined #gluster
18:26 qubit joined #gluster
18:27 qubit Is there a way to perform a `gluster --remote-host=... peer probe ...`? When I try, the daemon just logs out a "GlusterD svc cli read-only" message. Guessing this is for security purposes, but I would like to perform remote modifications. Some sort of authentication possible?
18:35 ndevos qubit: I'm not sure, maybe do a peer probe over ssh, like: ssh <remote-hos>t gluster peer probe $HOSTNAME
18:37 karnan joined #gluster
18:44 kanagaraj joined #gluster
18:45 necrogami joined #gluster
18:53 MacWinner joined #gluster
18:56 SOLDIERz joined #gluster
18:58 elico joined #gluster
18:59 _Bryan_ joined #gluster
19:20 ninkotech joined #gluster
19:21 ninkotech_ joined #gluster
19:26 B21956 joined #gluster
19:30 MugginsM joined #gluster
20:00 Pupeno_ joined #gluster
20:06 hollaus joined #gluster
20:08 rolfb joined #gluster
20:17 plarsen joined #gluster
20:21 failshell joined #gluster
20:23 plarsen joined #gluster
20:24 social joined #gluster
20:26 hollaus joined #gluster
20:34 chirino joined #gluster
20:36 failshell is there a way to change gluster's configuration so that it creates its logs with 644 perms instead of 600?
20:38 chirino joined #gluster
20:50 semiosis failshell: what distro?
20:51 semiosis maybe setting umask would work
20:51 semiosis which would be either in the initscript or upstart job, depending on your distro
20:52 failshell rhel
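What semiosis suggests would look roughly like adding a umask near the top of the glusterd init script, before the daemon is started, so new log files are created world-readable. On RHEL that script is presumably /etc/init.d/glusterd; treat the path, and whether every log-writing process is actually started from it, as assumptions to verify:

    # in the init script, before the daemon is launched
    umask 022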
20:52 * qubit just uses filesystem ACLs to grant admins read access to all of /var/log. It's much easier than fighting with all the apps that write logs
20:52 chirino_m joined #gluster
20:52 failshell in this case, it's our monitoring system that needs to read the logs and check whether it finds the split-brain string
20:53 failshell but ACLs might work. forgot about those actually
20:55 chirino_m joined #gluster
20:56 failshell qubit: thanks for the tip. totally had forgotten about those and it solves my problem without having to worry about the app
20:56 qubit you're welcome
20:57 failshell qubit: how does it handle logrotation? do you need to set the ACLs again? or that's transparent?
20:57 qubit failshell: Set the default ACL on the directory, new files inherit the default
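Spelled out as commands, qubit's approach might look like this; "sensu" is the monitoring user that shows up in failshell's later gist, and the exact set of directories is an assumption:

    # give the monitoring user read access to the existing logs
    setfacl -R -m u:sensu:rX /var/log
    # default ACL on the directories, so newly created log files
    # inherit the read permission automatically
    setfacl -d -m u:sensu:rX /var/log /var/log/glusterfs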
20:58 sijis joined #gluster
20:58 chirino joined #gluster
20:59 zerick joined #gluster
21:26 Pupeno joined #gluster
21:26 Pupeno joined #gluster
21:38 JoeJulian well that's a failed experiment
21:47 andreask joined #gluster
21:50 davemc joined #gluster
21:51 rotbeard joined #gluster
21:55 failshell qubit: strangely, i've tested ACLs with /var/log/messages and /var/log/glusterfs/etc-foo.log. messages is fine, but etc-foo.log loses its effective perms once you delete the file
21:56 qubit failshell: did you set a default ACL on /var/log/glusterfs ?
21:56 failshell qubit: yes recursively on /var/log
21:57 failshell qubit: https://gist.github.com/failshell/4a3938f81d7cf20a15a9 sensu being the user that loses its perms
21:57 glusterbot Title: gist:4a3938f81d7cf20a15a9 (at gist.github.com)
21:59 qubit failshell: is it being masked, or removed entirely? (getfacl /var/log/glusterfs/etc-foo.log)
22:00 failshell qubit: user:sensu:r--#effective:---
22:00 glusterbot failshell: user:sensu:r's karma is now -1
22:00 glusterbot failshell: #effective:-'s karma is now -1
22:03 qubit the app must be doing an explicit `chmod` instead of a umask. I don't know of any way around that :-/
22:06 qubit I also don't have this issue on my systems. my glusterfs logs are 644
22:07 failshell which distro and version of gluster?
22:08 qubit ubuntu, 3.4.1
22:08 failshell rhel 3.5.2
22:08 failshell oh well, looks like ill have to keep digging tomorrow
22:08 failshell thanks for your help
22:09 semiosis JoeJulian: epic fail?
22:09 qubit np. good luck
22:10 failshel_ joined #gluster
22:23 Diddi joined #gluster
22:24 Pupeno_ joined #gluster
22:24 JoeJulian semiosis: yeah, I tried playing with ovirt at home.
22:25 JoeJulian That is far from production ready.
22:25 semiosis wow
22:30 Maitre I dunno, I use oVirt in production here.  :P
22:33 d4nku joined #gluster
22:34 sijis is there a way to temporarily stop replication on a replicated volume with only 2 bricks?
22:36 sijis here's why ... i want to see if replication is related to the problem i'm seeing when copying a large amount (by number, not size) of files to a volume. so the thought is.. stop replication, do the 'dump', then re-enable replication
22:39 d4nku Hello all, is it recommended to break up a large DAS (40TB) attached to the host server (brick)? Should the DAS be split into two or four RAIDs, with each volume as its own gv01 - gv04?
22:41 semiosis d4nku: hard to say whats best in general, but i usually recommend having more, smaller bricks, rather than fewer larger bricks.  exact sizes & configurations depend on the use case
22:42 d4nku semiosis: Gotcha
22:44 lflores joined #gluster
22:44 lflores joined #gluster
22:47 lflores left #gluster
22:48 lflores joined #gluster
22:49 d4nku @semiosis I'm going to be working with millions of image files around 1 MB in size. Would creating many bricks create more latency? I mean bandwidth usage; I'll be limited to a 2x 1GbE bond.
22:51 sijis ,,gluster services
23:00 hollaus joined #gluster
23:10 badone joined #gluster
23:18 glusterbot New news from newglusterbugs: [Bug 1160900] cli segmentation fault with remote ssl (3.6.0) <https://bugzilla.redhat.com/show_bug.cgi?id=1160900>
23:21 Pupeno joined #gluster
23:21 Pupeno joined #gluster
