IRC log for #gluster, 2015-03-13


All times shown according to UTC.

Time Nick Message
00:02 theron joined #gluster
00:08 theron joined #gluster
00:17 wkf joined #gluster
00:22 theron_ joined #gluster
00:24 T3 joined #gluster
00:29 bennyturns joined #gluster
00:52 topshare joined #gluster
01:05 gildub joined #gluster
01:08 hagarth joined #gluster
01:23 lalatenduM joined #gluster
01:24 bharata-rao joined #gluster
01:37 T3 joined #gluster
01:38 snewpy joined #gluster
01:38 RicardoSSP joined #gluster
01:59 pelox joined #gluster
02:14 nangthang joined #gluster
02:16 kshlm joined #gluster
02:16 purpleidea lol
02:20 Folken_ hey glusterfs guys, I have a disperse gluster volume on 3 bricks, I've mounted the gluster volume via fuse, when I do rsync -av /olddata/datapoint1/ /mnt where /mnt is the gluster volume it appears to get stuck on large files
02:21 Folken_ running gluster 3.6.2 on ubuntu 14.04.2 with ext4 on lvm (thick provisioned)
02:25 victori joined #gluster
02:30 cyberbootje joined #gluster
02:33 cfeller joined #gluster
02:38 haomaiwa_ joined #gluster
02:39 Norky joined #gluster
03:01 meghanam joined #gluster
03:37 victori joined #gluster
03:41 nbalacha joined #gluster
03:43 ppai joined #gluster
03:45 shylesh__ joined #gluster
03:46 itisravi joined #gluster
03:47 Folken_ it's a bug
03:47 Folken_ http://review.gluster.org/#/c/9475/
03:48 T3 joined #gluster
03:51 spandit joined #gluster
04:01 atinmu joined #gluster
04:02 nishanth joined #gluster
04:04 Hemanth1 joined #gluster
04:07 meghanam joined #gluster
04:10 RameshN joined #gluster
04:20 DV joined #gluster
04:22 kanagaraj joined #gluster
04:24 schandra joined #gluster
04:41 SOLDIERz joined #gluster
04:43 sputnik13 joined #gluster
04:50 Hemanth1 joined #gluster
04:52 ppp joined #gluster
04:55 anoopcs joined #gluster
04:56 rafi joined #gluster
04:57 jiffin joined #gluster
04:57 dusmant joined #gluster
05:04 vimal joined #gluster
05:08 gem joined #gluster
05:08 aravindavk joined #gluster
05:14 kshlm joined #gluster
05:15 Manikandan_ joined #gluster
05:16 Apeksha joined #gluster
05:17 ashiq joined #gluster
05:21 DV joined #gluster
05:23 poornimag joined #gluster
05:24 meghanam joined #gluster
05:25 atalur joined #gluster
05:27 soumya joined #gluster
05:30 soumya joined #gluster
05:33 ashiq joined #gluster
05:37 T3 joined #gluster
05:45 raghu joined #gluster
05:45 kdhananjay joined #gluster
05:48 anil joined #gluster
05:49 dusmant joined #gluster
05:56 Bhaskarakiran joined #gluster
05:56 Bhaskarakiran_ joined #gluster
06:00 Bhaskarakiran joined #gluster
06:01 ramteid joined #gluster
06:10 SOLDIERz joined #gluster
06:12 SOLDIERz_ joined #gluster
06:12 SOLDIERz_ joined #gluster
06:24 victori joined #gluster
06:30 vimal joined #gluster
06:32 bala joined #gluster
06:35 smohan joined #gluster
06:36 shubhendu joined #gluster
06:37 aravindavk joined #gluster
06:38 DV_ joined #gluster
06:47 elico joined #gluster
06:48 kovshenin joined #gluster
06:55 nshaikh joined #gluster
06:57 nangthang joined #gluster
07:00 kovshenin joined #gluster
07:04 Bhaskarakiran joined #gluster
07:06 lifeofguenter joined #gluster
07:13 kdhananjay joined #gluster
07:13 kshlm joined #gluster
07:14 anrao joined #gluster
07:19 bala joined #gluster
07:20 gildub joined #gluster
07:24 jtux joined #gluster
07:26 T3 joined #gluster
07:26 rjoseph joined #gluster
07:27 glusterbot News from newglusterbugs: [Bug 1201631] Dist-geo-rep: With new Active/Passive switching logic, mgmt volume mountpoint is not cleaned up. <https://bugzilla.redhat.com/show_bug.cgi?id=1201631>
07:36 Manikandan_ joined #gluster
07:46 topshare joined #gluster
07:54 bala joined #gluster
07:55 Philambdo joined #gluster
08:06 topshare joined #gluster
08:10 lifeofguenter joined #gluster
08:11 Leildin joined #gluster
08:20 deniszh joined #gluster
08:26 topshare joined #gluster
08:27 [Enrico] joined #gluster
08:27 glusterbot News from newglusterbugs: [Bug 1201648] Conditionally destroy mutex and conditional variables. <https://bugzilla.redhat.com/show_bug.cgi?id=1201648>
08:27 hchiramm_ joined #gluster
08:32 pelox joined #gluster
08:34 harish_ joined #gluster
08:54 RameshN joined #gluster
09:00 liquidat joined #gluster
09:03 DV joined #gluster
09:12 ctria joined #gluster
09:14 T3 joined #gluster
09:15 harish_ joined #gluster
09:30 Pupeno joined #gluster
09:30 T0aD joined #gluster
09:32 Dw_Sn joined #gluster
09:32 hgowtham joined #gluster
09:36 masterzen joined #gluster
09:48 masterzen joined #gluster
09:51 vikumar joined #gluster
09:52 rotbeard joined #gluster
09:58 ira joined #gluster
10:02 Prilly joined #gluster
10:05 Prilly joined #gluster
10:11 jflf joined #gluster
10:20 nbalacha joined #gluster
10:20 anrao_afk joined #gluster
10:20 Norky joined #gluster
10:21 nbalacha joined #gluster
10:21 rjoseph joined #gluster
10:22 raz joined #gluster
10:23 rafi1 joined #gluster
10:26 raz left #gluster
10:37 wkf_ joined #gluster
10:37 yossarianuk hi - how can I find out if my mounted glusterfs partition is mounted with 'direct-io-mode' enabled?
10:38 yossarianuk I can't see that option just using the 'mount' command
10:39 Bhaskarakiran joined #gluster
10:45 rafi joined #gluster
10:47 firemanxbr joined #gluster
10:47 ctria joined #gluster
10:53 topshare_ joined #gluster
10:56 nangthang joined #gluster
10:57 glusterbot News from newglusterbugs: [Bug 1201724] Handle the review comments in bit-rot patches <https://bugzilla.redhat.com/show_bug.cgi?id=1201724>
10:58 rwheeler joined #gluster
10:59 rjoseph joined #gluster
11:01 nbalacha joined #gluster
11:03 T3 joined #gluster
11:09 pietschee joined #gluster
11:09 foster joined #gluster
11:11 necrogami joined #gluster
11:12 Prilly performance.cache-size
11:12 Prilly how to calculate it?
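
    Prilly's question goes unanswered in the log. Purely to illustrate where the knob lives, a minimal
    sketch with a placeholder volume name ("gv0") and an arbitrary size, not a sizing recommendation:

        # defaults and a short description for the option
        gluster volume set help | grep -A 3 performance.cache-size
        # raise the io-cache size on volume gv0 (value is a placeholder)
        gluster volume set gv0 performance.cache-size 256MB
        # reconfigured options are listed under "Options Reconfigured"
        gluster volume info gv0
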
11:13 hgowtham joined #gluster
11:14 pietschee it seems that the load over my 4 node distributed replicated cluster is imbalanced. seems one of the nodes got all the work to do... how to get (nearly) the same load on all servers?
11:21 kkeithley1 joined #gluster
11:23 dusmant joined #gluster
11:27 pietschee left #gluster
11:27 glusterbot News from resolvedglusterbugs: [Bug 921215] Cannot create volumes with a . in the name <https://bugzilla.redhat.com/show_bug.cgi?id=921215>
11:30 jiffin joined #gluster
11:32 bala joined #gluster
11:33 itisravi joined #gluster
11:35 Leildin Hi guys, if anyone has any idea if gluster with loads of small files can be made to work faster with some tweak, I would give my firstborn for that tweak
11:37 smohan_ joined #gluster
11:38 calisto_ joined #gluster
11:43 kdhananjay joined #gluster
11:54 LebedevRI joined #gluster
11:54 lalatenduM joined #gluster
11:57 dusmant joined #gluster
11:57 diegows joined #gluster
11:59 aravindavk joined #gluster
12:02 SOLDIERz_ joined #gluster
12:04 T3 joined #gluster
12:15 rjoseph joined #gluster
12:16 andrewlsd joined #gluster
12:17 T3 joined #gluster
12:23 SOLDIERz_ joined #gluster
12:30 edwardm61 joined #gluster
12:38 jflf Leildin: I'll give them your firstborn if anyone can help with that, too.
12:40 Folken_ I have not tried it...
12:40 Folken_ but could you create a block device using loop to overcome the small file issue?
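
    A rough sketch of the loop-device idea Folken_ floats, untested and with placeholder paths; the
    trade-off is that the image can only safely be mounted by one client at a time:

        # create a sparse image file on the gluster fuse mount
        truncate -s 50G /mnt/smallfiles.img
        # put a local filesystem inside it (-F: it's a file, not a block device)
        mkfs.ext4 -F /mnt/smallfiles.img
        # loop-mount it; the many small files now live in ext4, gluster only sees one big file
        mkdir -p /srv/smallfiles
        mount -o loop /mnt/smallfiles.img /srv/smallfiles
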
12:41 dgandhi joined #gluster
12:42 dgandhi joined #gluster
12:43 Folken_ has anybody created a nightly ppa build for debian/ubuntu
12:43 shubhendu joined #gluster
12:49 lalatenduM joined #gluster
12:55 kanagaraj joined #gluster
12:56 rafi joined #gluster
13:02 bala joined #gluster
13:03 hybrid5121 joined #gluster
13:09 B21956 joined #gluster
13:10 nishanth joined #gluster
13:11 RameshN joined #gluster
13:11 elico joined #gluster
13:13 smohan joined #gluster
13:14 xavih joined #gluster
13:14 malevolent joined #gluster
13:14 Leildin jflf : We have a deal !
13:18 theron joined #gluster
13:29 andreask joined #gluster
13:29 andreask left #gluster
13:31 jmarley joined #gluster
13:31 georgeh-LT2 joined #gluster
13:32 andrewlsd Folken_, the only PPA I know of for 'buntu is https://launchpad.net/~semiosis
13:42 smohan joined #gluster
13:43 bene2 joined #gluster
13:47 Apeksha joined #gluster
13:47 SOLDIERz_ joined #gluster
13:51 andrewlsd left #gluster
13:52 nishanth joined #gluster
13:58 pkoro joined #gluster
13:59 plarsen joined #gluster
13:59 Dw_Sn joined #gluster
14:11 bennyturns joined #gluster
14:13 wushudoin joined #gluster
14:30 Marqin joined #gluster
14:31 Marqin hello, is it possible to connect gluster v3.2 or even v3.5 to a volume running on v3.1 glusters?
14:31 Marqin or do all servers in a volume have to be the same version?
14:33 johnnytran joined #gluster
14:33 rwheeler joined #gluster
14:34 Dw_Sn joined #gluster
14:35 pelox joined #gluster
14:37 aravindavk joined #gluster
14:38 sadbox joined #gluster
14:44 jcastillo joined #gluster
14:44 plarsen joined #gluster
14:50 Norky there is some cross-version compatibility, but I don't know if it goes back as far as 3.1
14:52 [o__o] joined #gluster
14:56 _polto_ joined #gluster
15:01 [Enrico] joined #gluster
15:06 SOLDIERz_ joined #gluster
15:09 victori joined #gluster
15:14 corretico joined #gluster
15:17 soumya joined #gluster
15:23 SOLDIERz_ joined #gluster
15:28 Folken_ Marqin: it'd be best if all are same version
15:29 harish_ joined #gluster
15:30 sputnik13 joined #gluster
15:42 meghanam joined #gluster
15:43 yossarianuk hi  - to disable direct IO - should i use 'direct-io-mode=off' or 'direct-io-mode=disable' - also how can I check if an existing glusterfs mount is using directIO or not?
15:46 meghanam joined #gluster
15:46 meghanam joined #gluster
15:49 gem joined #gluster
15:50 meghanam joined #gluster
15:54 virusuy joined #gluster
15:54 virusuy joined #gluster
15:55 _polto_ hi guys, for the second time I broke my GlusterFS production installation. I get this error while trying to create a directory or a file: Stale file handle
15:55 _polto_ how can I repair my gluster volume ?
15:55 _polto_ apparently this problem appears with long sub-directory names
16:00 side_control joined #gluster
16:02 the-me joined #gluster
16:03 deniszh joined #gluster
16:11 ctria joined #gluster
16:15 kovshenin joined #gluster
16:16 ildefonso joined #gluster
16:17 jflf yossarianuk: man glusterfs says that the default is enable, so I guess it's disable
16:17 jflf I have tried with off and disable, it seems to work with both. Disabled (with the final d) fails.
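
    For reference, how the option is normally passed at mount time, plus one way to check an existing
    mount; since `mount` doesn't show it, inspecting the fuse client's command line is a best guess
    rather than an official interface (server, volume and mount point are placeholders):

        # mount with direct I/O explicitly disabled
        mount -t glusterfs -o direct-io-mode=disable server1:/myvol /mnt/gluster
        # the fuse client is a glusterfs process; its arguments show the flag if it was given
        ps ax | grep '[g]lusterfs' | grep direct-io-mode
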
16:18 fubada hi purpleidea
16:18 rjoseph joined #gluster
16:18 fubada was curious if you had a chance to fix that issue with the puppet-gluster module
16:20 yossarianuk jflf: thanks
16:21 fubada purpleidea: https://gist.github.com/aamerik/390be8bcac269871fddc this issue
16:45 T3 joined #gluster
16:51 jmarley joined #gluster
16:52 bala joined #gluster
16:52 _polto_ joined #gluster
16:57 side_control joined #gluster
16:58 nbalacha joined #gluster
16:58 JoeJulian Leildin: It's simple. Stop performing lookups on small files. Keep them open, cache them in your app, cache them in front of your app... When your app opens and closes them, reads their attributes, re-reads their attributes (php is horrible for this) all the time, you're spending too large a percent of time in the lookup() function. Even worse is if you have a huge path within the gluster mount that you have to search every time you open that
16:58 JoeJulian small file, causing lookups for files that don't even exist (again, notorious php apps).
16:59 matclayton joined #gluster
17:00 JoeJulian Folken_: We started down the road to building nightlies when we moved the ppa to https://launchpad.net/~gluster but never quite got that far.
17:01 matclayton We have a gluster cluster (3.6.1 on ubuntu 12.04) going nuts, maxing out a brick at 100% utilization. The interesting thing is that under 'volume status all' it isn't showing any ports for the brick in question, any ideas what might be wrong?
17:02 JoeJulian Marqin: No, you cannot connect cross-version back to 3.1. I would strongly recommend trying to use all the same version anyway.
17:02 matclayton also will a ppa for 3.6.2 on ubuntu 12.04 (LTS) be released?
17:02 Leildin you've put your finger on my biggest problem. php webservice that HAS to check files seeing as it's a workflow that implies loads of small files shifting ahead of the large ones
17:03 Leildin they change each time, I can't really get around it with caching
17:03 sputnik13 joined #gluster
17:05 JoeJulian _polto_: If you have a consistent way to reproduce that error, please file a bug report. A remount, if mounted nfs, should clear the stale file handle. If not, check your client and brick logs. Also check the brick filesystems. That error can also come from a filesystem corruption on a brick.
17:05 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
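
    A sketch of the checks JoeJulian lists, using the usual default log locations and placeholder
    brick/mount names; the fsck example assumes XFS bricks and should only be run with the brick offline:

        # client side: the fuse mount log is named after the mount point
        less /var/log/glusterfs/mnt-gluster.log
        # server side: per-brick logs
        less /var/log/glusterfs/bricks/bricks-brick1.log
        # for an NFS mount, a remount usually clears a stale file handle
        umount /mnt/gluster && mount /mnt/gluster
        # check the brick filesystem itself for corruption (dry run, no changes)
        xfs_repair -n /dev/vg0/brick1
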
17:06 JoeJulian Leildin: ,,(php)
17:06 glusterbot Leildin: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH
17:06 glusterbot --negative-timeout=HIGH --fopen-keep-cache
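
    Those factoid flags as an actual client invocation, for reference; "600" (seconds) is just a
    stand-in for "HIGH", and the server/volume names are placeholders, so treat this as a sketch
    rather than tuned advice:

        glusterfs --volfile-server=server1 --volfile-id=myvol \
            --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
            --fopen-keep-cache /mnt/gluster

    Most of the same options can also be passed through mount(8), e.g.
    -o attribute-timeout=600,entry-timeout=600 on a -t glusterfs mount.
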
17:06 lifeofguenter joined #gluster
17:07 Leildin if you solve this for me, you have my first born to look after !
17:08 JoeJulian Leildin: I already had a new baby 16 years after our first born. I'm feeling way too old to add another one.
17:09 Leildin thanks Joe
17:10 JoeJulian matclayton: Check the brick log and see if there's any clue. Check self-heal info to see if you're in the middle of a self-heal. If all else fails, bounce the brick (kill that glusterfsd and "gluster volume start $volname force")
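
    Spelled out with a placeholder volume name, the steps JoeJulian suggests look roughly like this:

        # brick PIDs and ports (a missing port here is what matclayton reported)
        gluster volume status myvol
        # is a self-heal in flight?
        gluster volume heal myvol info
        # if all else fails: kill just that brick's glusterfsd, then respawn it
        kill <brick-pid>
        gluster volume start myvol force
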
17:11 matclayton JoeJulian: will do
17:12 JoeJulian As far as 3.6.2 on precise, not sure. If you check /topic and search the channel logs, I seem to recall semiosis saying something about the build failing, but I could be mistaken.
17:14 matclayton JoeJulian: just a bunch of this http://dpaste.com/0K2YQRD
17:18 matclayton JoeJulian: what's strange is that the instant we reboot the IO comes back
17:18 jcarter2 joined #gluster
17:19 JoeJulian A "lot" of clients disconnecting? Sounds very odd.
17:19 plarsen joined #gluster
17:20 centran joined #gluster
17:22 matclayton this brick appears to be at 100% disk IO, it's a 17-drive RAID6 array. We might have a disk in it which is about to fail
17:23 centran I'm using gluster to mirror files on two servers so using replication. I want to keep the volume started because it is also mounted on each server. However, I want the two servers to stop replicating to each other so I can update a bunch of files to test and then later tell them to start replicating again to push those changes
17:24 centran does the rebalance command do that for me?
17:24 _polto_ joined #gluster
17:26 centran cause I thought rebalance was for the normal gluster setup of multiple bricks and making sure those are distributed so it would seem stopping rebalance on replicated bricks wouldn't do the trick
17:26 lifeofguenter joined #gluster
17:30 matclayton JoeJulian: status commands were all dying, we just had to issue a cluster-wide reset to fix it. Just got them back up, and we can't see anything in the shd running on these bricks right now. IO appears to have dropped for now.
17:31 matclayton JoeJulian: can you confirm if the SHD goes from brick group to brick group and then repeats?
17:32 Rapture joined #gluster
17:32 victori joined #gluster
17:33 kovshenin joined #gluster
17:39 SOLDIERz_ joined #gluster
17:45 matclayton JoeJulian: is there a way to ratelimit the SHD?
17:45 JoeJulian shd uses the dirents in a combined volume from .glusterfs/indices/xattrop and checks for new ones every 10 minutes, or when a connection is reestablished.
17:46 JoeJulian The self-heal operations are on a lower priority queue so shouldn't interfere with normal operations.
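
    To see what the shd is actually working through, with placeholder volume and brick paths:

        # pending heal entries, per volume
        gluster volume heal myvol info
        # the raw index the shd crawls, as mentioned above, lives on each brick
        ls /bricks/brick1/.glusterfs/indices/xattrop | head
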
17:47 matclayton ok, it looks like we get high IO on bricks in a very specific order, and that's the cause/effect of our problem
17:48 JoeJulian Are your disks using the deadline scheduler?
17:48 matclayton cfq I think
17:48 JoeJulian try changing that.
17:48 matclayton (the default, we can change it though)
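
    Checking and switching the scheduler is per block device and takes effect immediately; "sdb" is a
    placeholder, and the echo below does not survive a reboot (persisting it is distribution-specific,
    e.g. the elevator= kernel parameter or a udev rule):

        # the active scheduler is shown in brackets
        cat /sys/block/sdb/queue/scheduler
        # switch to deadline at runtime
        echo deadline > /sys/block/sdb/queue/scheduler
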
17:49 jobewan joined #gluster
17:50 JoeJulian I hope that makes the difference. I've never had a problem. Most users haven't had a problem. But sometimes there's been people reporting resource starvation during self-heal. It's been so rare I've kind-of dismissed it as a hardware issue so I'm hoping I've been wrong all along.
17:50 calisto joined #gluster
17:51 centran so is there a way to basically "pause" replication
17:51 matclayton annoyingly this has stopped right now, and it's only doing it on one server, which implies it might be an underlying hardware issue
17:51 JoeJulian pkill -f glustershd
17:52 JoeJulian On all your servers, of course.
17:52 JoeJulian ... actually..
17:52 centran I don't want to stop the volume though because it is still mounted on the server
17:53 JoeJulian Oh good. I thought I remembered a setting for that. "cluster.self-heal-daemon off"
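
    As a sketch of that toggle with a placeholder volume name; note it only stops the background
    daemon's crawls, heals triggered by client access can still happen:

        # pause the self-heal daemon for this volume
        gluster volume set myvol cluster.self-heal-daemon off
        # ...and resume it later
        gluster volume set myvol cluster.self-heal-daemon on
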
17:53 rotbeard joined #gluster
17:58 matclayton is there a list of all the options anywhere?
17:58 JoeJulian gluster volume set help
17:59 Hemanth1 joined #gluster
17:59 jmarley joined #gluster
17:59 matclayton awesome
18:07 theron joined #gluster
18:15 kovshenin joined #gluster
18:20 Apeksha joined #gluster
18:20 fandi joined #gluster
18:21 Pupeno joined #gluster
18:25 lalatenduM joined #gluster
19:01 shaunm joined #gluster
19:03 TyphooN`work joined #gluster
19:05 TyphooN`work is it possible to rename volumes with glusterfs 3.5.x?  I am getting a strange error when trying to rename volumes and it seems like several others are running into it (via my quick google search).
19:05 TyphooN`work unrecognized word: rename (position 1)
19:08 lalatenduM joined #gluster
19:12 TyphooN`work left #gluster
19:13 JoeJulian is it possible to have an attention span longer than 7 minutes?
19:14 JoeJulian @tell TyphooN`work there has never been a command to rename volumes.
19:14 glusterbot JoeJulian: The operation succeeded.
19:30 centran TyphooN`work: you delete it but not the data and make a new one pointing to the old
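
    Roughly what centran describes, as an untested sketch with placeholder names; gluster will normally
    refuse to create a new volume on bricks that still carry an old volume's markers, so the xattr
    cleanup step is the commonly cited workaround, not an official rename:

        gluster volume stop oldvol
        gluster volume delete oldvol    # removes only the volume definition, data stays on the bricks
        # on every server, clear the old volume's marker from each brick root
        setfattr -x trusted.glusterfs.volume-id /bricks/brick1
        # recreate under the new name on the same bricks, then start it
        gluster volume create newvol replica 2 server1:/bricks/brick1 server2:/bricks/brick1
        gluster volume start newvol
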
19:33 lifeofguenter joined #gluster
19:47 silentlight joined #gluster
19:47 silentlight left #gluster
20:10 Pupeno joined #gluster
20:20 matclayton joined #gluster
20:22 SOLDIERz_ joined #gluster
20:22 R0ok_ joined #gluster
20:24 _polto_ joined #gluster
20:29 shaunm joined #gluster
20:34 roost joined #gluster
20:45 kovshenin joined #gluster
20:47 kovshenin joined #gluster
20:54 plarsen joined #gluster
21:16 wushudoin joined #gluster
21:18 purpleidea fubada: i've been a bit busy and it hasn't been a priority for me, sorry. I guess can you open up a "github issue" so at least we can track it there, and others might help patch it too?
21:22 _polto_ joined #gluster
21:41 roost joined #gluster
21:56 T3 joined #gluster
22:03 Pupeno_ joined #gluster
22:18 Pupeno joined #gluster
22:24 _polto_ joined #gluster
22:36 neofob left #gluster
23:02 Pupeno_ joined #gluster
23:03 T0aD joined #gluster
23:05 T0aD joined #gluster
23:21 bala joined #gluster
23:34 luis_silva joined #gluster
23:42 T3 joined #gluster
