
IRC log for #gluster, 2014-09-26


All times shown according to UTC.

Time Nick Message
00:26 daMaestro joined #gluster
00:42 theron joined #gluster
00:48 msmith_ joined #gluster
00:50 cyberbootje joined #gluster
01:08 an joined #gluster
01:09 msmith_ joined #gluster
01:12 bala joined #gluster
01:19 gildub joined #gluster
01:41 harish_ joined #gluster
01:47 ultrabizweb joined #gluster
01:47 lyang0 joined #gluster
01:57 haomaiwa_ joined #gluster
02:02 msmith_ joined #gluster
02:14 haomaiwa_ joined #gluster
02:18 coredump joined #gluster
02:19 haomaiwa_ joined #gluster
02:25 fubada joined #gluster
02:27 msmith_ joined #gluster
02:28 bharata-rao joined #gluster
02:35 haomai___ joined #gluster
02:41 haomaiwa_ joined #gluster
02:48 pdrakewe_ joined #gluster
02:56 haomai___ joined #gluster
03:07 msmith_ joined #gluster
03:13 anoopcs joined #gluster
03:17 aravindavk joined #gluster
03:26 shubhendu joined #gluster
03:31 rejy joined #gluster
03:35 nishanth joined #gluster
03:44 davdunc` joined #gluster
03:50 itisravi joined #gluster
03:52 kshlm joined #gluster
03:53 nbalachandran joined #gluster
04:10 haomaiwang joined #gluster
04:18 haomai___ joined #gluster
04:19 msmith_ joined #gluster
04:22 smohan joined #gluster
04:30 anoopcs joined #gluster
04:32 Rafi_kc joined #gluster
04:32 rafi1 joined #gluster
04:48 ramteid joined #gluster
05:02 jiffin joined #gluster
05:04 kdhananjay joined #gluster
05:11 spandit joined #gluster
05:12 ndarshan joined #gluster
05:15 atalur joined #gluster
05:16 ricky-ti1 joined #gluster
05:16 kdhananjay joined #gluster
05:18 deepakcs joined #gluster
05:31 atinmu joined #gluster
05:32 Philambdo joined #gluster
05:40 raghu joined #gluster
05:40 justinmburrous joined #gluster
05:42 ppai joined #gluster
06:00 saurabh joined #gluster
06:00 sputnik13 joined #gluster
06:02 bala joined #gluster
06:07 sputnik13 joined #gluster
06:08 lalatenduM joined #gluster
06:10 Guest42780 joined #gluster
06:11 hagarth joined #gluster
06:15 overclk joined #gluster
06:19 partner joined #gluster
06:21 _Bryan_ joined #gluster
06:21 nshaikh joined #gluster
06:21 pkoro joined #gluster
06:30 aulait joined #gluster
06:31 purpleidea joined #gluster
06:31 JonathanS joined #gluster
06:31 kodapa joined #gluster
06:37 klaxa joined #gluster
06:40 Guest42780 joined #gluster
06:41 fubada joined #gluster
06:49 Slydder joined #gluster
06:49 Slydder morning all
06:51 Slydder is there a way I can tell whether replication is correct or not? I am replicating across 4 nodes and would like a way to check that the replicas are consistent on all nodes.
06:52 fubada joined #gluster
06:53 JoeJulian "gluster volume heal info" should tell you if there's any pending heals.
06:53 JoeJulian To actually check, you'd have to do that yourself by crawling your trees and collecting hash values for all your files and comparing them. That only works, of course, if your filesystem isn't being used.
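The manual check JoeJulian describes (crawl each replica's tree, hash every file, compare) could be sketched like this; the brick path and hostnames in the comments are hypothetical, and this is only valid while the volume is idle:

```shell
# hash_brick: emit a sorted md5 manifest of every file under a brick
# path, skipping gluster's internal .glusterfs directory.
hash_brick() {
  ( cd "$1" && find . -path ./.glusterfs -prune -o -type f -print0 \
      | sort -z | xargs -0 md5sum )
}
# Run on each replica node while the volume is quiesced, e.g.:
#   hash_brick /bricks/vol > /tmp/sums.$(hostname -s)
# then collect the manifests on one node and diff them; any diff
# output means the replicas disagree for that file:
#   diff /tmp/sums.node1 /tmp/sums.node2
```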
06:55 bala joined #gluster
06:59 wmp joined #gluster
06:59 wmp hello, the links in the TOC don't work: http://www.gluster.org/documentation/Getting_started_overview/
06:59 glusterbot Title: Gluster (at www.gluster.org)
07:01 wmp can I use an ext4 partition instead of xfs?
07:02 JoeJulian yes
07:03 JoeJulian Out of curiosity, why do you have that preference?
07:08 ekuric joined #gluster
07:09 wmp JoeJulian: because I use ext4 on all servers
07:09 JoeJulian Fair enough.
07:13 Fen1 joined #gluster
07:13 kumar joined #gluster
07:16 aulait joined #gluster
07:20 gildub joined #gluster
07:24 dmachi joined #gluster
07:28 deepakcs joined #gluster
07:31 zerick joined #gluster
07:34 haomaiwa_ joined #gluster
07:39 fsimonce joined #gluster
07:42 milka joined #gluster
07:51 zerick joined #gluster
07:53 haomai___ joined #gluster
07:54 zerick joined #gluster
07:54 zerick joined #gluster
08:00 vimal joined #gluster
08:00 haomaiwa_ joined #gluster
08:07 xavih joined #gluster
08:09 Norky joined #gluster
08:11 liquidat joined #gluster
08:12 wmp left #gluster
08:14 ricky-ticky joined #gluster
08:16 haomai___ joined #gluster
08:19 asku joined #gluster
08:20 fsimonce joined #gluster
08:32 Fen1 joined #gluster
08:37 milka joined #gluster
08:38 milka joined #gluster
08:42 ppai joined #gluster
08:51 nshaikh joined #gluster
08:51 aulait joined #gluster
08:52 pkoro joined #gluster
09:04 kanagaraj joined #gluster
09:16 jmarley joined #gluster
09:17 unsigned joined #gluster
09:17 Slashman joined #gluster
09:18 unsignedmark joined #gluster
09:21 unsignedmark I am running a few VMs from gluster volumes, but when I migrate a guest from one host to another, the guest OS remounts the root fs read-only because of errors and basically crashes. Does anyone know why? Gluster 3.4.2
09:21 unsignedmark A reboot of the machine then brings it up fine on the new host though
09:22 unsignedmark I should say we're using kvm and libvirtd
09:24 hagarth joined #gluster
09:25 fubada joined #gluster
09:26 kumar joined #gluster
09:29 RaSTar joined #gluster
09:32 harish joined #gluster
09:35 spandit joined #gluster
09:36 Guest42780 joined #gluster
09:40 fubada joined #gluster
09:52 bala joined #gluster
09:56 LebedevRI joined #gluster
10:00 kanagaraj joined #gluster
10:00 kshlm joined #gluster
10:10 pkoro joined #gluster
10:17 hagarth joined #gluster
10:20 glusterbot New news from newglusterbugs: [Bug 1146902] Stopping or restarting glusterd on another node when volume start is in progress gives error messages but volume is started <https://bugzilla.redhat.com/show_bug.cgi?id=1146902> || [Bug 1146903] New 32 bits issues introduced by a recent patch <https://bugzilla.redhat.com/show_bug.cgi?id=1146903>
10:25 nshaikh joined #gluster
10:26 an joined #gluster
10:44 ira joined #gluster
10:50 glusterbot New news from newglusterbugs: [Bug 1146904] New 32 bits issues introduced by a recent patch <https://bugzilla.redhat.com/show_bug.cgi?id=1146904> || [Bug 1117822] Tracker bug for GlusterFS 3.6.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1117822>
11:11 pkoro joined #gluster
11:25 harish joined #gluster
11:33 tdasilva joined #gluster
11:43 kkeithley joined #gluster
11:47 diegows joined #gluster
11:51 ppai joined #gluster
11:55 Fen1 joined #gluster
12:01 fubada joined #gluster
12:02 jvdm joined #gluster
12:03 B21956 joined #gluster
12:04 blu_ joined #gluster
12:06 jvdm geo-replication needs a password-less ssh login; now, assume the slave runs ssh on a different port (e.g. 2222). How would you configure geo-replication in this case?
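Since classic geo-replication drives plain ssh under the hood, one commonly suggested workaround for jvdm's question (a sketch, not an official recipe; "slavehost" is a placeholder for the slave's address) is to pin the port in root's ssh client config on the master, so every ssh invocation to that host uses 2222:

```
Host slavehost
    Port 2222
```

Some gsyncd versions also expose an ssh-command knob (`gluster volume geo-replication MASTER SLAVE config ssh-command ...`) where a `-p 2222` could be injected, but verify that against your version's documentation before relying on it.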
12:08 blu___ What is the best way to configure 3 nodes with only one parity drive (a la RAID5)?
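For reference, the RAID5-style layout blu___ asks about maps to gluster's dispersed (erasure-coded) volumes, which were only arriving with 3.6 at the time of this log (note the 3.6.0 tracker bug glusterbot posts above). A sketch of that syntax, with hypothetical hostnames and brick paths:

```shell
# 3 bricks, survive the loss of any 1 (roughly 2 bricks of data,
# 1 brick of redundancy, analogous to RAID5 across 3 disks):
gluster volume create dispvol disperse 3 redundancy 1 \
  node1:/bricks/b1 node2:/bricks/b2 node3:/bricks/b3
gluster volume start dispvol
```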
12:16 itisravi_ joined #gluster
12:22 mkzero joined #gluster
12:22 jmarley joined #gluster
12:27 gildub joined #gluster
12:30 itisravi joined #gluster
12:33 calum_ joined #gluster
12:39 virusuy joined #gluster
12:43 Pupeno joined #gluster
12:50 sprachgenerator joined #gluster
12:50 ricky-ticky joined #gluster
12:52 bene2 joined #gluster
12:54 ekuric joined #gluster
13:19 chirino joined #gluster
13:20 gildub joined #gluster
13:23 partner upgraded yesterday from 3.3.2 to 3.4.5 and now our logs are flooded with "disk layout missing" followed by "mismatching layout" messages, like this: http://pastie.org/private/48otxyeuda3mfima7jnmq
13:23 glusterbot Title: Private Paste - Pastie (at pastie.org)
13:25 partner not sure why fix-layout would be needed as the layout did not change, but that's the only hint I've found on the internet so far
13:27 msmith_ joined #gluster
13:27 partner that will anyway break all the hosts as the /var mounts will fill up, so any advice is most welcome
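The fix-layout hint partner found corresponds to a real rebalance mode that rewrites directory layout xattrs without migrating any file data. A sketch, with "myvol" standing in for the affected volume:

```shell
# Recompute/repair directory layout xattrs only (no data movement):
gluster volume rebalance myvol fix-layout start
gluster volume rebalance myvol status
# Possible stopgap for the log flood until the layout is fixed
# (verify the option name on your version):
gluster volume set myvol diagnostics.client-log-level ERROR
```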
13:30 sputnik13 joined #gluster
13:32 liquidat joined #gluster
13:33 theron joined #gluster
13:33 rjoseph joined #gluster
13:34 davdunc` joined #gluster
13:35 davdunc joined #gluster
13:40 nbalachandran joined #gluster
13:44 sputnik13 joined #gluster
13:47 ninkotech joined #gluster
13:47 ninkotech_ joined #gluster
13:47 plarsen joined #gluster
13:50 plarsen joined #gluster
13:54 rjoseph joined #gluster
13:55 anoopcs joined #gluster
13:58 sputnik13 joined #gluster
13:59 rwheeler joined #gluster
14:00 l0uis joined #gluster
14:05 XpineX_ joined #gluster
14:20 chucky_z joined #gluster
14:21 glusterbot New news from newglusterbugs: [Bug 1146985] Patches with "Submitted, Merge Pending " status in GlusterFs gerrit server <https://bugzilla.redhat.com/show_bug.cgi?id=1146985>
14:21 bene2 joined #gluster
14:26 zwevans joined #gluster
14:31 mojibake joined #gluster
14:34 sputnik13 joined #gluster
14:36 jobewan joined #gluster
14:38 lalatenduM joined #gluster
14:39 fubada joined #gluster
14:42 sputnik13 joined #gluster
14:44 fattaneh1 joined #gluster
14:46 coredump joined #gluster
14:52 lmickh joined #gluster
15:03 msmith_ joined #gluster
15:06 coredump joined #gluster
15:07 Zordrak joined #gluster
15:16 sputnik13 joined #gluster
15:25 chirino joined #gluster
15:35 sputnik13 joined #gluster
15:46 hagarth joined #gluster
15:56 virusuy joined #gluster
15:56 virusuy joined #gluster
16:03 theron joined #gluster
16:04 coredump joined #gluster
16:06 sputnik13 joined #gluster
16:08 jiffin joined #gluster
16:10 LebedevRI joined #gluster
16:10 haomaiwa_ joined #gluster
16:13 chirino_m joined #gluster
16:21 daMaestro joined #gluster
16:35 n-st joined #gluster
16:41 pkoro joined #gluster
16:44 n-st hi, how exactly does the glusterfs-client retrieve data from a mounted volume? does it only use the server specified when the volume was mounted, or does it download data from all servers simultaneously (like it does for listing operations)?
16:51 glusterbot New news from newglusterbugs: [Bug 1005344] duplicate entries in volume property <https://bugzilla.redhat.com/show_bug.cgi?id=1005344>
16:53 kkeithley it sends the lookup requests to all the replicas. The first one to respond is the one the client reads from
16:56 n-st kkeithley: that makes sense, thanks.
17:04 CyrilPeponnet joined #gluster
17:05 CyrilPeponnet msg purpleidea Hey James
17:05 purpleidea CyrilPeponnet: hey
17:06 purpleidea lol
17:07 coredump joined #gluster
17:10 n-st is it advisable to keep the /var/www of two webservers in a shared glusterfs volume? (the average latency between them is around 4.3 milliseconds.)
17:14 haomaiwa_ joined #gluster
17:17 vikumar joined #gluster
17:21 PeterA joined #gluster
17:24 hagarth joined #gluster
17:24 sputnik13 joined #gluster
17:27 chucky_z can you still alter variables such as glusterfs cache with the newest versions?
17:35 uebera|| joined #gluster
17:35 semiosis chucky_z: ,,(options)
17:35 glusterbot chucky_z: See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
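As a concrete example of the workflow glusterbot describes (the volume name and cache value here are hypothetical):

```shell
# Discover the option name and its default:
gluster volume set help | grep -A2 cache-size
# Change it on a volume:
gluster volume set myvol performance.cache-size 256MB
# Modified (non-default) options show up in volume info:
gluster volume info myvol
```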
17:35 tdasilva joined #gluster
17:37 msmith_ joined #gluster
17:40 rwheeler joined #gluster
17:42 chucky_z ok... and before I break something, is noatime a good idea?
17:42 cfeller joined #gluster
17:43 semiosis i like it
17:43 chucky_z I mean, can I just throw it in the fstab like a normal ext3/ext4 mount without any caveats?
17:47 semiosis ah, you put the noatime on the brick mount, not the client mount
17:48 semiosis iirc newer versions will complain if you put that on a client mount
17:48 semiosis i think older versions silently ignored
17:49 fubada joined #gluster
17:49 chucky_z ok, unfortunately I have to mount the brick/client on the same partition. :(
17:49 chucky_z it seems to work great though
17:50 semiosis as long as the client mount point is not same as, or underneath, the brick path you should be OK
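The separation semiosis describes, sketched in fstab form (device, volume name, and paths are hypothetical): noatime belongs on the brick's backing filesystem, while the client mount of the same volume lives on a distinct path outside the brick:

```
# brick backing store (server side) - noatime goes here
/dev/sdb1         /export/brick1  xfs        noatime           0 0
# gluster client mount - not inside /export/brick1
localhost:/myvol  /mnt/myvol      glusterfs  defaults,_netdev  0 0
```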
17:50 chucky_z nope!
17:56 glusterbot New news from resolvedglusterbugs: [Bug 1094860] Puppet-Gluster should support building btrfs bricks <https://bugzilla.redhat.com/show_bug.cgi?id=1094860>
18:00 RicardoSSP joined #gluster
18:00 RicardoSSP joined #gluster
18:00 coredump joined #gluster
18:07 coredump joined #gluster
18:30 coredump joined #gluster
18:50 skippy semiosis: newer clients complain about noatime on a client mount?  Using fuse driver?
18:50 skippy I've been making noatime mounts with 3.5.2. I don't see complaints. But maybe I missed them.
19:06 an joined #gluster
19:24 dmachi1 joined #gluster
19:27 bjuanico joined #gluster
19:51 bjuanico joined #gluster
20:16 theron joined #gluster
20:23 adamdrew joined #gluster
20:32 semiosis skippy: i must have been mistaken then
20:37 n-st joined #gluster
20:46 theron joined #gluster
20:56 quique joined #gluster
20:57 quique what's the difference between having a replica 4 with four bricks and a replica 2 with 4 bricks?
21:00 skippy replica 4 will write the same data on each of the four bricks.
21:00 skippy replica 2 will create two replica sets of two bricks each.
21:01 skippy quique: https://github.com/gluster/glusterfs/raw/master/doc/admin-guide/en-US/images/Distributed_Replicated_Volume.png
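In volume-create terms, the difference skippy describes looks like this (hostnames and brick paths hypothetical); brick order matters, since consecutive bricks form each replica set:

```shell
# replica 4: one replica set, every file written to all four bricks
gluster volume create vol4 replica 4 \
  n1:/b/r n2:/b/r n3:/b/r n4:/b/r
# replica 2 with 4 bricks: distributed-replicated, two replica sets
# (n1,n2) and (n3,n4); each file lands on exactly one pair
gluster volume create vol2 replica 2 \
  n1:/b/r n2:/b/r n3:/b/r n4:/b/r
```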
21:28 adamdrew joined #gluster
21:29 an joined #gluster
21:32 MacWinner joined #gluster
21:55 partner 16:23 < partner> upgraded yesterday from 3.3.2 to 3.4.5 and now our logs are flooded with the "disk layout missing" with following "mismatching layout", like this: http://pastie.org/private/48otxyeuda3mfima7jnmq
21:55 glusterbot Title: Private Paste - Pastie (at pastie.org)
22:09 partner fuck, that'll screw up the production over the weekend
22:21 partner we're simply running out of /var capacity due to this flood
22:36 gildub joined #gluster
22:50 cfeller joined #gluster
23:08 wgao_ joined #gluster
23:09 MacWinner joined #gluster
23:23 glusterbot New news from newglusterbugs: [Bug 1147107] Cannot set distribute.migrate-data xattr on a file <https://bugzilla.redhat.com/show_bug.cgi?id=1147107>
23:38 Telsin joined #gluster
