
IRC log for #gluster, 2015-03-27


All times shown according to UTC.

Time Nick Message
00:09 T3 joined #gluster
00:13 bennyturns joined #gluster
00:18 ctria joined #gluster
00:37 Pupeno joined #gluster
00:40 T3 joined #gluster
00:41 gnudna joined #gluster
00:47 plarsen joined #gluster
00:47 penglish2 joined #gluster
00:48 gnudna left #gluster
00:48 prg3 joined #gluster
00:56 gnudna joined #gluster
01:00 MugginsM joined #gluster
01:00 T3 joined #gluster
01:03 gnudna left #gluster
01:05 gnudna joined #gluster
01:08 kripper1 joined #gluster
01:12 elico left #gluster
01:17 T3 joined #gluster
01:20 gnudna left #gluster
01:26 gnudna joined #gluster
01:40 T3 joined #gluster
01:42 ecchcw joined #gluster
01:54 nangthang joined #gluster
01:59 plarsen joined #gluster
02:03 T3 joined #gluster
02:14 gildub joined #gluster
02:14 haomaiwa_ joined #gluster
02:15 harish joined #gluster
02:21 nishanth joined #gluster
02:27 dgandhi joined #gluster
02:40 T3 joined #gluster
02:40 bharata-rao joined #gluster
03:01 spandit joined #gluster
03:23 shubhendu joined #gluster
03:44 T3 joined #gluster
03:50 daMaestro joined #gluster
03:58 nbalacha joined #gluster
04:01 T3 joined #gluster
04:03 kanagaraj joined #gluster
04:10 nbalacha joined #gluster
04:13 atinmu joined #gluster
04:16 kumar joined #gluster
04:17 ppai joined #gluster
04:20 gnudna left #gluster
04:20 T3 joined #gluster
04:33 anoopcs joined #gluster
04:36 ppai joined #gluster
04:38 jiffin joined #gluster
04:39 T0aD- joined #gluster
04:39 glusterbot News from newglusterbugs: [Bug 1206429] Maintainin local transaction peer list in op-sm framework <https://bugzilla.redhat.com/show_bug.cgi?id=1206429>
04:42 nishanth joined #gluster
04:46 T3 joined #gluster
04:48 rafi joined #gluster
04:51 lalatenduM joined #gluster
04:55 ndarshan joined #gluster
04:56 RameshN joined #gluster
05:02 dusmant joined #gluster
05:14 anil joined #gluster
05:16 aravindavk joined #gluster
05:18 Manikandan joined #gluster
05:18 ashiq joined #gluster
05:19 hagarth joined #gluster
05:20 hgowtham joined #gluster
05:20 ppai joined #gluster
05:20 soumya_ joined #gluster
05:21 kasturi joined #gluster
05:25 overclk joined #gluster
05:26 schandra joined #gluster
05:27 atalur joined #gluster
05:40 kdhananjay joined #gluster
05:41 atalur joined #gluster
05:42 nshaikh joined #gluster
05:48 vikumar joined #gluster
05:49 jackdpeterson joined #gluster
05:49 Bhaskarakiran joined #gluster
05:56 atinmu joined #gluster
06:01 anrao joined #gluster
06:12 hagarth joined #gluster
06:13 soumya_ joined #gluster
06:15 LebedevRI joined #gluster
06:31 atinmu joined #gluster
06:49 lalatenduM_ joined #gluster
06:53 atinmu joined #gluster
07:16 jtux joined #gluster
07:18 raghu joined #gluster
07:19 RameshN joined #gluster
07:20 kdhananjay1 joined #gluster
07:20 gem joined #gluster
07:20 hgowtham joined #gluster
07:26 anrao joined #gluster
07:27 atinmu joined #gluster
07:30 Arminder joined #gluster
07:47 osc_khoj hello, I'm hitting a problem when using GlusterFS compiled from source
07:47 osc_khoj env: Ubuntu 12.04 LTS + glusterfs 3.6.1
07:47 osc_khoj 2-node (master) Distributed + Replicated volume + 1-node (slave) Distributed volume, linked by geo-replication
07:47 osc_khoj master volume: krprdnas  ->  slave volume: krprdnas-bk
07:47 osc_khoj server names: KR01/KR02 + JP01 on AWS
07:47 osc_khoj There was no problem when using the packaged builds,
07:47 osc_khoj but we needed to change the install location, so we compiled from source.
07:47 osc_khoj The 2-node Distributed + Replicated volume itself works fine,
07:47 osc_khoj but geo-replication fails with the error below when we run this command:
07:47 osc_khoj gluster volume geo-replication krprdnas 192.168.0.139::krprdnas-bk create push-pem force
07:47 osc_khoj 192.168.0.139 not reachable.
07:47 osc_khoj geo-replication command failed
07:47 osc_khoj ---------------------------------------------------------------------------------
07:47 glusterbot osc_khoj: -------------------------------------------------------------------------------'s karma is now -1
07:47 osc_khoj [2015-03-27 06:16:05.616302] E [glusterd-geo-rep.c:2012:glusterd_verify_slave] 0-: Not a valid slave
07:47 osc_khoj [2015-03-27 06:16:05.618358] E [glusterd-geo-rep.c:2240:glusterd_op_stage_gsync_create] 0-: 192.168.0.139::krprdnas-bk is not a valid slave volume. Error: 192.168.0.139 not reachable.
07:47 osc_khoj --we ran a procedure as below------------------------------------------------------------------------------------
07:47 glusterbot osc_khoj: below----------------------------------------------------------------------------------'s karma is now -1
07:47 osc_khoj download file : wget https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6/+files/glusterfs_3.6.1.orig.tar.gz
07:47 osc_khoj glusterfs_3.6.1.orig.tar.gz (5.7 MiB)
07:47 osc_khoj compile prerequisites: apt-get install gcc flex bison openssl libssl-dev libxml2 libxml2-dev make
07:47 osc_khoj compile : ./configure --prefix=/opt/gluster
07:47 osc_khoj make && make install
07:47 osc_khoj / set up passwordless ssh
07:47 osc_khoj ----------------------------------------------------------------------------------------
07:47 glusterbot osc_khoj: --------------------------------------------------------------------------------------'s karma is now -1
07:47 osc_khoj we found that when we run the "gluster volume geo-replication" command, it cannot deliver /opt/gluster/var/lib/glusterd/geo-replication/common_secret.pem.pub to the slave
07:47 osc_khoj we copied it to the geo-replication slave manually, but that didn't work.
07:47 osc_khoj I gathered the log at
07:47 osc_khoj http://paste.ubuntu.com/10687918/
07:48 osc_khoj Is there anyone who can help me out?
07:49 osc_khoj please
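
A minimal sketch of the checks usually done before retrying the geo-replication create above; the hostnames, volume names, and the /opt/gluster prefix come from the report, everything else (ports, placeholder paths) is an assumption to verify:

    # confirm glusterd on the slave answers and the slave volume exists
    ssh root@192.168.0.139 'gluster --version && gluster volume info krprdnas-bk'
    # regenerate the common pem on the master cluster (standard geo-rep step)
    gluster system:: execute gsec_create
    # with a non-default --prefix the pem lives under that prefix; confirm it is there
    ls -l /opt/gluster/var/lib/glusterd/geo-replication/common_secret.pem.pub
    # then retry the session creation
    gluster volume geo-replication krprdnas 192.168.0.139::krprdnas-bk create push-pem force
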
07:58 spiekey joined #gluster
08:03 bala joined #gluster
08:07 kovshenin joined #gluster
08:10 glusterbot News from newglusterbugs: [Bug 1206461] sparse file self heal fail under xfs version 2 with speculative preallocation feature on <https://bugzilla.redhat.com/show_bug.cgi?id=1206461>
08:11 spandit joined #gluster
08:13 [Enrico] joined #gluster
08:22 ron-slc joined #gluster
08:23 jbrooks joined #gluster
08:24 Philambdo joined #gluster
08:26 hchiramm joined #gluster
08:38 fsimonce joined #gluster
08:40 lalatenduM joined #gluster
08:42 hgowtham joined #gluster
08:45 nbalacha joined #gluster
08:59 kshlm joined #gluster
09:04 liquidat joined #gluster
09:05 bharata_ joined #gluster
09:10 Arminder- joined #gluster
09:11 anrao joined #gluster
09:21 nbalacha joined #gluster
09:23 bala joined #gluster
09:23 Arminder joined #gluster
09:24 meghanam joined #gluster
09:25 aravindavk osc_khoj, Looks like some issue with Geo-rep on Ubuntu systems. I will look into the issue
09:25 bene2 joined #gluster
09:33 prilly joined #gluster
09:33 Slashman joined #gluster
09:35 ppai joined #gluster
09:40 ktosiek joined #gluster
09:40 hagarth joined #gluster
09:53 dusmant joined #gluster
09:55 Norky joined #gluster
09:56 Arminder joined #gluster
09:57 ira joined #gluster
09:58 Arminder- joined #gluster
10:03 anoopcs joined #gluster
10:03 bala joined #gluster
10:05 Arminder joined #gluster
10:06 DV joined #gluster
10:09 Arminder- joined #gluster
10:11 AndreeeCZ joined #gluster
10:12 AndreeeCZ hi again! What is wrong with this iptables config? Gluster stopped working after I applied it:
10:12 AndreeeCZ http://pastie.org/10057028
10:13 AndreeeCZ that is - allow only ssh, http and gluster, reject others
10:14 prilly joined #gluster
10:14 kshlm AndreeeCZ, what version of GlusterFS are you using?
10:15 AndreeeCZ kshlm, glusterfs 3.2.7
10:15 kshlm That should be correct for 3.2.7
10:16 AndreeeCZ kshlm, if i remove the last input entry (reject all), it starts working again
10:17 kshlm I'm not familiar with iptables rules, so I don't have any idea why you're seeing that behaviour.
10:18 kshlm With GlusterFS-3.4 and above, bricks use ports starting from 49152. I thought that could be the reason for your problem.
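
For reference, a hedged iptables sketch for a gluster server; the brick port range depends on the version and brick count (3.3 and earlier allocate from 24009 upward, 3.4+ from 49152), so treat the numbers as assumptions and check what the daemons actually listen on:

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management
    iptables -A INPUT -p tcp --dport 24009:24024 -j ACCEPT   # bricks on 3.2/3.3 (use 49152:49251 on 3.4+)
    iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT   # gluster NFS, only if used
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT           # portmapper, only if NFS is used
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    netstat -tlnp | grep gluster                             # verify the real listen ports
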
10:21 fattaneh joined #gluster
10:22 AndreeeCZ kshlm, this looks like gluster is also using ports 1017-1021?? http://pastie.org/10057050
10:25 kshlm Those are not listen ports used for serving. They are the client ports used to connect to the listen ports.
10:25 harish joined #gluster
10:25 AndreeeCZ right, and since I have allowed all outgoing, it should be good
10:26 kshlm Should be.
10:29 ndevos AndreeeCZ: any particular reason why you are still using a version that does not get any updates? 3.4 is the current oldest release that gets bugfixes, and that's almost phased out too already
10:29 AndreeeCZ ndevos, it's in the repos
10:30 ndevos AndreeeCZ: well, newer versions too, it just depends on what repo you're using
10:31 AndreeeCZ stock debian repos
10:33 AndreeeCZ pfff iptables will be my damnation
10:34 AndreeeCZ this is what it looks like with the default drop policy http://pastie.org/10057068
10:36 karnan joined #gluster
10:39 gildub joined #gluster
10:41 glusterbot News from newglusterbugs: [Bug 1206517] Data Tiering:Distribute-replicate type Volume not getting converted to a tiered volume on attach-tier <https://bugzilla.redhat.com/show_bug.cgi?id=1206517>
10:45 rafi1 joined #gluster
10:49 soumya_ joined #gluster
10:50 dusmant joined #gluster
10:51 Dw_Sn joined #gluster
10:52 atinmu joined #gluster
10:53 ndevos AndreeeCZ: if you are planning to use it for something else than testing, you really should think about using a more recent version, like 3.5 or 3.6, see http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/Debian/
10:55 Manikandan joined #gluster
10:56 Arminder joined #gluster
10:59 nbalacha joined #gluster
11:00 Arminder joined #gluster
11:01 Arminder joined #gluster
11:02 Arminder joined #gluster
11:02 anoopcs joined #gluster
11:03 spiekey joined #gluster
11:05 Arminder joined #gluster
11:05 ctria joined #gluster
11:06 lalatenduM_ joined #gluster
11:07 Arminder- joined #gluster
11:07 spiekey Hi
11:07 glusterbot spiekey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:08 spiekey I have lost a hard drive/brick on a 3-node replica cluster. How do I replace my failed brick?
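
A hedged sketch of the usual way a dead disk is replaced in a replicated volume; the volume name, server, and brick paths below are placeholders, and the syntax should be checked against the installed release:

    # prepare a new filesystem for the replacement brick
    mkfs.xfs /dev/sdX && mkdir -p /bricks/brick1-new && mount /dev/sdX /bricks/brick1-new
    # swap the failed brick for the new one on the same server
    gluster volume replace-brick myvol server3:/bricks/brick1 server3:/bricks/brick1-new commit force
    # let self-heal copy the data back and watch its progress
    gluster volume heal myvol full
    gluster volume heal myvol info
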
11:08 Arminder- joined #gluster
11:09 DV joined #gluster
11:09 ninkotech joined #gluster
11:09 ninkotech_ joined #gluster
11:11 dusmant joined #gluster
11:12 Arminder joined #gluster
11:12 Manikandan joined #gluster
11:13 Arminder- joined #gluster
11:14 Arminder- joined #gluster
11:15 Arminder- joined #gluster
11:17 rafi joined #gluster
11:18 atinmu joined #gluster
11:20 Arminder joined #gluster
11:21 Arminder- joined #gluster
11:29 Manikandan joined #gluster
11:31 nbalacha joined #gluster
11:34 aravindavk joined #gluster
11:41 glusterbot News from newglusterbugs: [Bug 1202212] Performance enhancement for RDMA <https://bugzilla.redhat.com/show_bug.cgi?id=1202212>
11:45 Arminder joined #gluster
11:47 Arminder- joined #gluster
11:49 Arminder joined #gluster
11:54 fattaneh1 joined #gluster
11:55 gem joined #gluster
12:03 john joined #gluster
12:05 calisto joined #gluster
12:09 bene2 joined #gluster
12:10 vikumar joined #gluster
12:11 glusterbot News from newglusterbugs: [Bug 1206547] [Backup]: Glusterfind create session unable to correctly set passwordless ssh to its peer(s) <https://bugzilla.redhat.com/show_bug.cgi?id=1206547>
12:11 glusterbot News from newglusterbugs: [Bug 1206546] Data Tiering:Need a way from CLI to identify hot and cold tier bricks easily <https://bugzilla.redhat.com/show_bug.cgi?id=1206546>
12:11 glusterbot News from newglusterbugs: [Bug 1206553] Data Tiering:Need to allow detaching of cold tier too <https://bugzilla.redhat.com/show_bug.cgi?id=1206553>
12:11 glusterbot News from newglusterbugs: [Bug 1206539] Tracker bug for GlusterFS documentation Improvement. <https://bugzilla.redhat.com/show_bug.cgi?id=1206539>
12:13 nbalacha joined #gluster
12:14 Arminder- joined #gluster
12:15 Arminder joined #gluster
12:16 dusmant joined #gluster
12:16 jmarley joined #gluster
12:21 lalatenduM_ joined #gluster
12:22 Manikandan joined #gluster
12:23 Arminder- joined #gluster
12:25 Arminder joined #gluster
12:28 firemanxbr joined #gluster
12:30 nbalacha joined #gluster
12:34 ktosiek joined #gluster
12:37 kripper joined #gluster
12:39 aravindavk joined #gluster
12:42 Gill joined #gluster
12:47 kanagaraj joined #gluster
12:47 up2late joined #gluster
12:48 up2late does anyone know if a gluster client can query its quota size by bytes?
12:50 up2late I need to start deleting files from the client side automatically if it's at its quota hard limit
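
A hedged sketch of two common ways to get byte-level quota numbers; features.quota-deem-statfs is a real volume option, but its availability and behaviour depend on the version in use, so treat this as an assumption to verify:

    # on a server: per-directory limits and usage
    gluster volume quota myvol list
    # make df on clients report the quota-limited size instead of the brick size
    gluster volume set myvol features.quota-deem-statfs on
    # on the client: byte-accurate used/available for the mount
    df -B1 /mnt/myvol
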
12:52 wkf joined #gluster
12:54 firemanxbr joined #gluster
12:57 B21956 joined #gluster
12:58 Arminder- joined #gluster
12:58 T3 joined #gluster
13:01 Slashman_ joined #gluster
13:05 shaunm joined #gluster
13:18 theron joined #gluster
13:30 georgeh-LT2 joined #gluster
13:40 aravindavk joined #gluster
13:42 glusterbot News from newglusterbugs: [Bug 1206592] Data Tiering:Allow adding brick to hot tier too(or let user choose to add bricks to any tier of their wish) <https://bugzilla.redhat.com/show_bug.cgi?id=1206592>
13:42 glusterbot News from newglusterbugs: [Bug 1206596] Data Tiering:Adding new bricks to a tiered volume(which defaults to cold tier) is messing or skewing up the dht hash ranges <https://bugzilla.redhat.com/show_bug.cgi?id=1206596>
13:42 glusterbot News from newglusterbugs: [Bug 1206587] Replace contrib/uuid by a libglusterfs wrapper that uses the uuid implementation from the OS <https://bugzilla.redhat.com/show_bug.cgi?id=1206587>
13:43 dgandhi joined #gluster
13:44 dgandhi joined #gluster
13:46 dgandhi joined #gluster
13:46 dgandhi joined #gluster
13:47 dgandhi joined #gluster
13:48 kripper left #gluster
13:59 plarsen joined #gluster
13:59 ecchcw joined #gluster
14:00 fattaneh1 joined #gluster
14:07 shubhendu joined #gluster
14:12 glusterbot News from newglusterbugs: [Bug 1206602] Data Tiering: Newly added bricks not getting tier-gfid <https://bugzilla.redhat.com/show_bug.cgi?id=1206602>
14:15 kanagaraj joined #gluster
14:16 jmarley joined #gluster
14:19 bene2 in glusterfs-3.6.0.53-1.el6rhs.x86_64, why am I seeing messages like this in FUSE mount log?   Isn't this going to slow down any app that does a lot of renames?     [2015-03-27 14:13:08.592464] I [dht-rename.c:1345:dht_rename] 0-vol0-dht: renaming /bulked/_run/SAS_work79A100000E17_gprfc073.sbu.lab.eng.bos.redhat.com/mp_connect/submitted_jobs.sas7bdat.lck (hash=vol0-replicate-0/cache=vol0-replicate-0) => /bulked/_run/SAS_work79A100000E17_
14:19 bene2 gprfc073.sbu.lab.eng.bos.redhat.com/mp_connect/submitted_jobs.sas7bdat (hash=vol0-replicate-1/cache=vol0-replicate-0)
14:21 wushudoin joined #gluster
14:29 ckotil I have 1 brick each on 2 hosts, and when I cut off one side there is about 30-45 seconds where any access to the FS just hangs. Anyone know how to tune that, or what I'm doing wrong?
14:29 ckotil it sounds like this http://serverfault.com/questions/619355/how-to-lower-gluster-fs-down-peer-timeout-reduce-down-peer-impact but I tuned the ping-timeout to 3 seconds, and it didn't change the failure mode
14:35 roost joined #gluster
14:36 ndevos bene2: yeah, and that is still the same in the current upstream/master - file bug for it please
14:36 ndevos file a bug ...
14:36 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:36 gnudna joined #gluster
14:36 gnudna joined #gluster
14:36 gnudna left #gluster
14:36 bene2 ndevos, thx, will do
14:36 gnudna joined #gluster
14:37 edong23 joined #gluster
14:42 Leildin hi guys, in gluster 3.5, if I remove a brick from a distributed volume there's no copying of files to another node before removing it, right ?
14:46 ndevos ~ping-timeout | ckotil
14:46 glusterbot ckotil: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
14:47 kkeithley GlusterFS-3.4.7beta4 is available. Release Notes at http://blog.gluster.org/2015/03/glusterfs-3-4-7beta4-is-now-available-for-testing/
14:47 kkeithley Tentative GA is April 6. Please test at your earliest convenience; time is short.
14:48 ckotil ya, I read that. The problem is that this blocks a fast failover. I'm serving web pages, trying to make it HA
14:48 anrao joined #gluster
14:48 gnudna wonders about 3.6.x
14:49 gnudna is that not considered stable?
14:49 monotek1 joined #gluster
14:50 ndevos ckotil: it all depends on your expectations, ha is never(?) instantaneous
14:50 ckotil a few seconds would be ideal
14:51 fattaneh1 left #gluster
14:51 ndevos ckotil: sure, but there needs to be some time for detection and recovery, you can lower the ping-timeout, but that can also cause fail-overs when they should not be needed
14:54 ckotil sure, I don't plan on keeping the ping-timeout at 3 seconds. I was just trying to get things to fail over quicker, and so far I haven't been able to improve on the 30-40 seconds
14:54 ckotil my underlying FS is ext4; would switching to xfs help? There are a lot of directories with a couple of small files in each
14:56 lpabon joined #gluster
15:02 CyrilPeponnet joined #gluster
15:04 corretico joined #gluster
15:08 kkeithley gnudna: yes, 3.6.2 is considered stable. Many are using it in production. It's the basis of RHS 3.0, including the recently announced (and renamed) Red Hat Gluster Storage 3.0.4.
15:08 julim joined #gluster
15:12 Arminder joined #gluster
15:14 julim joined #gluster
15:16 ckotil ndevos: there's nothing else I can tune to shorten the failover time when I lose a node?
15:18 ecchcw joined #gluster
15:19 gnudna ckotil i believe there is an option you can set on the volume
15:20 ckotil http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options nothing jumped out at me here
15:20 gnudna network.ping-timeout
15:20 gnudna i think that is the one
15:21 gnudna by default it is 42s
15:21 ckotil ah, I tried setting that way down, to 3 seconds. Didn't change a thing.
15:22 gnudna thanks kkeithley
15:23 gnudna not sure if there are other options but in my case it did not work as expected either
15:24 gnudna but I use replicated volumes for KVM, and my testing showed it did not work; I have since gone about it differently
15:24 ckotil I've been using iptables to test this. If I block off one of the other nodes, any kind of stat, read, or write takes about 30 seconds to complete. Then subsequent requests are fast again
15:25 gnudna ckotil did you stop and start the volume?
15:25 ckotil no
15:25 gnudna I believe I read that the volume needs to be stopped and started when you make that change
15:25 ckotil I'm simulating connectivity failure or power failure, i.e. loss of a node
15:25 ckotil ah
15:25 * ckotil tries
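
A minimal sketch of the sequence being tried here; the volume name is a placeholder, and whether a full stop/start (or a client remount) is strictly required can vary by version:

    gluster volume set myvol network.ping-timeout 3
    gluster volume stop myvol     # restart so bricks pick the value up (clients may need a remount)
    gluster volume start myvol
    gluster volume info myvol | grep ping-timeout   # confirm the option took effect
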
15:26 Arminder joined #gluster
15:26 Arminder joined #gluster
15:27 soumya_ joined #gluster
15:28 Gill left #gluster
15:28 fattaneh1 joined #gluster
15:28 ckotil grabbing lunch. bbiab
15:28 gnudna later
15:30 Arminder joined #gluster
15:31 Arminder joined #gluster
15:32 Arminder joined #gluster
15:33 Arminder joined #gluster
15:34 Arminder joined #gluster
15:34 ndevos ckotil: it also depends a little on what causes the fail-over to get delayed, ping-timeout would be the 1st option to test (not sure if you need to remount everything to apply the option)
15:35 Arminder joined #gluster
15:35 ndevos ckotil: if the process is using file locks, those locks need to be released (or cleaned up in case of disconnects) before other applications can get those locks
15:36 theron_ joined #gluster
15:36 Arminder joined #gluster
15:37 ndevos ckotil: there are more details about this locking stuff in bug 1129787
15:37 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1129787 high, high, ---, ndevos, POST , file locks are not released within an acceptable time when a fuse-client uncleanly disconnects
15:38 shaunm joined #gluster
15:38 obnox hmmm trying to build glusterfs (master) locally as per the instructions here:
15:38 obnox http://www.gluster.org/community/documentation/index.php/Building_GlusterFS
15:38 obnox on fedora 21
15:38 obnox the instructions tell me to install libcmocka-devel
15:38 obnox but the build fails due to missing header:
15:38 obnox cmockery/cmockery_override.h
15:39 obnox libcmocka is supposed to replace cmockery, and it ships cmockery/cmockery.h but not this one?
15:39 ndevos obnox: the current master branch requires cmocka-1.0+
15:39 haomaiwa_ joined #gluster
15:39 obnox ndevos: $ rpm -q libcmocka
15:39 obnox libcmocka-1.0.0-1.fc21.x86_64
15:40 ndevos maybe 1.0.1?
15:40 gnudna obnox look for the devel file
15:40 obnox cmocka does not ship cmockery_override.h
15:40 obnox i checked cmocka master code (upstream)
15:40 gnudna something like libcmocka-devel
15:40 ndevos obnox: yes, you need libcmocka-devel-1.0.1
15:40 obnox hm, $ rpm -q libcmocka-devel
15:40 obnox libcmocka-devel-1.0.0-1.fc21.x86_64
15:41 * obnox wonders where the cmocka package gets cmockery_override.h from
15:42 ndevos obnox: version 1.0.1, not 1.0.0 -> http://paste.fedoraproject.org/203843/70923142
15:42 obnox it is not in cmocka code and not in cmocka fedora rpm spec file
15:42 gnudna cmockery2-devel-1.3.8-2.el7.x86_64
15:42 gnudna it is in epel
15:42 obnox ndevos: but what is missing is not included in your paste
15:42 obnox :)
15:43 obnox ndevos: can you check whether you have not installed cmockery2 ?
15:43 gnudna so i guess in fedora it is part of the package i pasted above
15:43 obnox I suspect that the website is simply incomplete
15:43 _shaps_ joined #gluster
15:43 ndevos obnox: ah, wait, what branch are you building?
15:43 gnudna yum provides */cmockery_override.h
15:43 obnox master
15:44 ndevos no, you are not, those cmockery headers are not required in the master branch anymore
15:45 obnox ndevos: they seem to be.
15:45 obnox ndevos: wait...
15:45 ndevos hmm, http://review.gluster.org/9738 should have gotten rid of those
15:45 obnox i just cloned the repo from review.gluster.org
15:45 ndevos and do you get any includes when you 'git grep cmockery'?
15:45 obnox ndevos: o sorry for the noise!
15:46 * ndevos only sees the updates in the .spec
15:46 obnox I seem to have been in the wrong dir.
15:46 * obnox hides
15:46 obnox checking again
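
For anyone hitting the same confusion, a small sketch of the sanity checks being done here; the clone path is a placeholder:

    cd ~/src/glusterfs               # assumption: wherever the clone actually lives
    git rev-parse --show-toplevel    # confirm which checkout is really being built
    git rev-parse --abbrev-ref HEAD  # confirm it is the master branch
    git grep -l cmockery             # on current master this should return nothing
    rpm -q libcmocka libcmocka-devel # confirm which cmocka packages/versions are installed
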
15:47 ndevos also, this probably is more like a #gluster-dev issue, oh well
15:47 obnox oh,
15:47 obnox wasn't aware...
15:47 ckotil gnudna: totally worked
15:48 ckotil down to 3 seconds
15:48 * ckotil reads up
15:48 ckotil cool ndevos thanks. I'm not sure what kind of locking is going on. Shouldn't be anything, really. It's just apache reading and writing to the mount
15:49 ckotil the DBs aren't on the GlusterFS
15:49 ndevos ckotil: I have no idea how apache uses (or not) locking
15:49 ckotil me either
15:52 ckotil this is good though. I'll mess around with ping-timeout, but that's really helped the failover situation. I can shut down one side and apache starts serving again within 3 seconds. I'm using mod_proxy_balancer.
15:54 hagarth joined #gluster
15:55 ckotil If I added another brick, and/or went replicated, would there be any failover time at all from just losing a single node?
15:55 bennyturns joined #gluster
15:56 ckotil added another brick on an additional node that is
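
If the idea is to grow the replica set onto a third node, a hedged sketch of the usual commands follows (names are placeholders); note that clients still wait out network.ping-timeout before failing over, regardless of replica count:

    gluster peer probe server3
    gluster volume add-brick myvol replica 3 server3:/bricks/brick1
    gluster volume heal myvol full   # populate the new brick
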
15:57 Slashman joined #gluster
16:13 bala joined #gluster
16:29 theron joined #gluster
16:32 deniszh joined #gluster
16:32 fattaneh1 joined #gluster
16:32 fattaneh1 left #gluster
16:34 Leildin JoeJulian, Hello kind sir would you have a second to spare a small mind ?
16:35 Arminder joined #gluster
16:35 * JoeJulian checks to ensure his camera is off...
16:35 JoeJulian I literally just sat down at my desk.
16:35 Leildin I'm lucky then !
16:35 Leildin I'm wondering about two things in 3.5
16:36 JoeJulian That's far fewer than me.
16:36 Leildin I want to remove two bricks on a distributed volume then rebalance
16:37 Leildin does the remove-brick copy files prior to removal in 3.5? And does the rebalance have any effect on a distributed vol other than creating the folder structure?
16:38 Leildin I've gotten multiple sources of info that contradict each other, so I'm reluctant to go ahead with the removal
16:38 JoeJulian Leildin: It's *supposed to* copy the files off, yes. I don't have a lot of confidence in that process and bug 1136702 makes me even less so.
16:38 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1136702 high, high, ---, bugs, MODIFIED , Add a warning message to check the removed-bricks for any files left post "remove-brick commit"
16:39 JoeJulian Rebalance will change the subvolume hash mapping and move files to match that mapping unless the target is more full than the source.
16:39 JoeJulian @dht
16:39 glusterbot JoeJulian: I do not know about 'dht', but I do know about these similar topics: 'dd'
16:40 JoeJulian I could have sworn I added something for that....
16:40 JoeJulian @lucky dht misses are expensive
16:40 glusterbot JoeJulian: http://joejulian.name/blog/dht-misses-are-expensive/
16:40 JoeJulian See ^ if you want to learn how dht is used.
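
A hedged sketch of the remove-brick flow being discussed, including the verification step that bug 1136702 is about; volume and brick names are placeholders:

    # start migrating data off the brick(s) to be removed
    gluster volume remove-brick myvol server2:/bricks/brick2 start
    # poll until the migration shows completed
    gluster volume remove-brick myvol server2:/bricks/brick2 status
    # commit only after checking the removed brick for leftover files
    gluster volume remove-brick myvol server2:/bricks/brick2 commit
    # on the removed node, anything still under the brick path was not migrated
    find /bricks/brick2 -path '*/.glusterfs' -prune -o -type f -print
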
16:42 glusterbot News from newglusterbugs: [Bug 1206655] glusterd crashes on brick op <https://bugzilla.redhat.com/show_bug.cgi?id=1206655>
16:45 deniszh joined #gluster
16:52 T3 joined #gluster
16:53 deepakcs joined #gluster
16:59 Rapture joined #gluster
17:02 shaunm joined #gluster
17:02 haomai___ joined #gluster
17:08 7GHAA8XA9 joined #gluster
17:10 T3 joined #gluster
17:11 Leildin Thanks JoeJulian !
17:18 nshaikh joined #gluster
17:31 6A4ABY5Z4 joined #gluster
17:40 haomaiwang joined #gluster
17:50 rafi joined #gluster
17:53 fattaneh joined #gluster
17:55 haomaiwa_ joined #gluster
18:04 vipulnayyar joined #gluster
18:23 fattaneh1 joined #gluster
18:42 daMaestro joined #gluster
18:49 nshaikh joined #gluster
19:03 fattaneh1 joined #gluster
19:19 calisto_ joined #gluster
19:22 ekuric joined #gluster
19:44 Arminder joined #gluster
19:45 Arminder joined #gluster
19:46 Arminder joined #gluster
19:47 Arminder joined #gluster
19:48 Arminder joined #gluster
19:51 Arminder joined #gluster
19:52 Arminder joined #gluster
19:53 Arminder joined #gluster
19:55 Arminder joined #gluster
19:55 vipulnayyar joined #gluster
19:56 Arminder joined #gluster
19:57 shaunm joined #gluster
19:57 Arminder joined #gluster
20:01 fattaneh1 left #gluster
20:13 coredump joined #gluster
20:17 fattaneh1 joined #gluster
20:18 fattaneh1 joined #gluster
20:19 rafi joined #gluster
20:25 roost left #gluster
20:28 DV joined #gluster
20:33 coredump joined #gluster
20:35 coredump joined #gluster
20:55 gnudna left #gluster
21:01 rafi joined #gluster
21:10 ctria joined #gluster
21:12 coredump joined #gluster
21:14 Mo joined #gluster
21:23 brianw joined #gluster
21:25 brianw I have 2 gluster volumes set up. If I stop glusterd on one host, I can still see files being synced to the associated bricks. On the system with glusterd still running, gluster peer status shows the peer as "Peer in Cluster (Disconnected)". How can it still sync files to the brick???
21:35 penglish1 joined #gluster
21:49 wkf joined #gluster
21:52 brianw I can't understand how it is syncing the bricks w/ glusterd stopped...
21:55 brianw systemctl stop glusterfsd
21:55 brianw Got it...
21:55 brianw Trying to use a simple snapshot procedure by just taking a node offline and running some backups off the brick, then starting back up...
22:02 brianw Works well. moving on. :)
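
A hedged outline of the procedure described above; service names and paths are assumptions that depend on the distribution and packaging:

    # on the node used for backups: stop the brick processes (stopping glusterd alone is not enough)
    systemctl stop glusterfsd
    # back up straight from the brick, skipping gluster's internal metadata
    rsync -a --exclude=.glusterfs /bricks/brick1/ /backup/brick1/
    # bring the bricks back and let self-heal catch the node up
    systemctl start glusterd
    gluster volume start myvol force   # respawns any brick processes that are still down
    gluster volume heal myvol full
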
22:08 o5k joined #gluster
22:10 ctria joined #gluster
22:21 theron joined #gluster
22:44 plarsen joined #gluster
22:50 Gill joined #gluster
23:11 T3 joined #gluster
23:21 badone joined #gluster
