
IRC log for #gluster, 2015-04-02


All times shown according to UTC.

Time Nick Message
00:26 DV joined #gluster
00:28 gildub joined #gluster
00:29 badone_ joined #gluster
00:31 jmarley joined #gluster
00:33 badone__ joined #gluster
00:35 jaank joined #gluster
00:37 plarsen joined #gluster
00:43 tg2 is there logic that tells clients to not use a brick if it's in remove-brick progress?
00:44 tg2 i see my clients still putting files on it when it's in remove-brick
00:44 tg2 does replace-brick mitigate this?
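For reference, the remove-brick workflow tg2 is describing usually looks like the sketch below, assuming a distribute volume named myvol and a brick at server1:/export/brick1 (both hypothetical names):
# start draining data off the brick being decommissioned
gluster volume remove-brick myvol server1:/export/brick1 start
# watch the migration until it reports completed
gluster volume remove-brick myvol server1:/export/brick1 status
# detach the brick once migration has finished
gluster volume remove-brick myvol server1:/export/brick1 commit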
00:44 diegows joined #gluster
00:47 jaank joined #gluster
00:51 _Bryan_ joined #gluster
00:57 bala joined #gluster
00:58 vijaykumar joined #gluster
01:07 purpleidea joined #gluster
01:10 T3 joined #gluster
01:16 lyaunzbe joined #gluster
01:17 lyaunzbe Hey guys! Had a quick question. Can I change the allowed IPs for a gluster cluster while it is running? Do I have to restart the cluster if I add a new IP address?
01:18 lyaunzbe I am talking in the context of auth.allow
01:18 JoeJulian You can change it while it's running.
01:18 lyaunzbe Very awesome! Thank you JoeJulian, I appreciate the quick response.
01:19 JoeJulian You're welcome.
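As JoeJulian notes, auth.allow is an ordinary volume option and can be changed on a live volume; a minimal sketch, assuming a volume named myvol and example client IPs (both hypothetical):
# replace the allowed-client list on a running volume (comma-separated addresses)
gluster volume set myvol auth.allow 192.168.1.10,192.168.1.11
# confirm the option took effect under "Options Reconfigured"
gluster volume info myvol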
01:21 MugginsM joined #gluster
01:21 ira joined #gluster
01:30 purpleid1a joined #gluster
01:51 harish joined #gluster
02:05 julim joined #gluster
02:05 T3 joined #gluster
02:18 nbalacha joined #gluster
02:21 purpleidea joined #gluster
02:28 lyaunzbe joined #gluster
02:31 haomaiwa_ joined #gluster
02:42 purpleidea joined #gluster
02:42 purpleidea joined #gluster
02:52 nangthang joined #gluster
02:52 ira joined #gluster
03:00 bharata-rao joined #gluster
03:02 purpleidea joined #gluster
03:04 kdhananjay joined #gluster
03:07 osc_khoj joined #gluster
03:08 plarsen joined #gluster
03:12 vijaykumar joined #gluster
03:20 thangnn_ joined #gluster
03:23 meghanam joined #gluster
03:30 jiku joined #gluster
03:41 spandit joined #gluster
03:53 atinmu joined #gluster
03:54 anrao joined #gluster
03:59 RameshN joined #gluster
04:04 itisravi joined #gluster
04:05 sripathi joined #gluster
04:23 kanagaraj joined #gluster
04:31 gildub joined #gluster
04:32 nbalacha joined #gluster
04:32 poornimag joined #gluster
04:36 gem joined #gluster
04:42 purpleidea joined #gluster
04:42 purpleidea joined #gluster
04:52 ndarshan joined #gluster
04:54 meghanam joined #gluster
04:56 maveric_amitc_ joined #gluster
04:58 kotreshhr joined #gluster
04:58 schandra joined #gluster
05:01 gem joined #gluster
05:04 kumar joined #gluster
05:11 purpleidea joined #gluster
05:13 siel joined #gluster
05:13 siel joined #gluster
05:15 lalatenduM joined #gluster
05:23 gem joined #gluster
05:24 bala joined #gluster
05:25 smohan joined #gluster
05:26 kanagaraj joined #gluster
05:27 deniszh joined #gluster
05:29 nbalacha joined #gluster
05:30 deniszh joined #gluster
05:31 dusmant joined #gluster
05:34 Bhaskarakiran joined #gluster
05:34 deniszh1 joined #gluster
05:38 hagarth joined #gluster
05:39 vipulnayyar joined #gluster
05:40 Manikandan joined #gluster
05:40 purpleidea joined #gluster
05:40 purpleidea joined #gluster
05:41 soumya_ joined #gluster
05:42 karnan joined #gluster
05:44 ashiq joined #gluster
05:49 kshlm joined #gluster
05:50 raghu joined #gluster
05:51 vimal joined #gluster
05:54 hgowtham joined #gluster
06:00 anil joined #gluster
06:03 overclk joined #gluster
06:05 anrao joined #gluster
06:07 ppai joined #gluster
06:10 lalatenduM joined #gluster
06:10 glusterbot News from newglusterbugs: [Bug 1208367] Data Tiering:Data moving to the tier(inertia) where data already exists instead of moving to hot tier first by default <https://bugzilla.redhat.com/show_bug.cgi?id=1208367>
06:11 purpleidea joined #gluster
06:11 purpleidea joined #gluster
06:13 ashiq joined #gluster
06:17 rjoseph joined #gluster
06:19 kovshenin joined #gluster
06:19 nishanth joined #gluster
06:20 aravindavk joined #gluster
06:23 MugginsM joined #gluster
06:26 kdhananjay joined #gluster
06:32 ashiq joined #gluster
06:34 hchiramm joined #gluster
06:53 smohan joined #gluster
06:53 aravindavk joined #gluster
07:04 kanagaraj joined #gluster
07:05 hgowtham joined #gluster
07:05 nbalacha joined #gluster
07:10 glusterbot News from newglusterbugs: [Bug 1208386] Data Tiering:Replica type volume not getting converted to tier type after attaching tier <https://bugzilla.redhat.com/show_bug.cgi?id=1208386>
07:10 kevein joined #gluster
07:10 atinmu joined #gluster
07:12 overclk joined #gluster
07:17 dusmant joined #gluster
07:17 nishanth joined #gluster
07:22 Leildin joined #gluster
07:28 fsimonce joined #gluster
07:39 nishanth joined #gluster
07:40 maveric_amitc_ joined #gluster
07:40 atinmu joined #gluster
07:43 overclk joined #gluster
07:45 deniszh joined #gluster
07:48 T0aD joined #gluster
07:53 liquidat joined #gluster
07:55 user_42 joined #gluster
07:57 o5k__ joined #gluster
07:59 user_42 Hi. Is it possible to restrict access for root users of some servers on a volume or do they have root rights if ip is auth.allow(ed) ?
07:59 o5k_ joined #gluster
08:00 dusmant joined #gluster
08:01 pkoro joined #gluster
08:08 hagarth user_42: you might want to look at option root-squashing in gluster volume set.
08:10 glusterbot News from newglusterbugs: [Bug 1208404] [Backup]: Behaviour of backup api in the event of snap restore - unknown <https://bugzilla.redhat.com/show_bug.cgi?id=1208404>
08:12 atalur joined #gluster
08:12 Slashman joined #gluster
08:19 smohan joined #gluster
08:19 user_42 hagarth: thanks for the hint. I just read about it. If I get it right, one can prevent the root user from accessing volumes as root, but a superuser can (if allowed to) still write/read files as another user to get access... this sounds like a good solution. thanks!
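The root-squashing option hagarth refers to is a per-volume setting; a minimal sketch, assuming a volume named myvol (hypothetical name):
# map root on clients to an anonymous uid/gid for this volume
gluster volume set myvol server.root-squash on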
08:24 aravindavk joined #gluster
08:34 bala1 joined #gluster
08:41 _shaps_ joined #gluster
08:55 vipulnayyar joined #gluster
08:56 ktosiek joined #gluster
08:56 anil joined #gluster
08:58 brianw_ joined #gluster
08:58 purpleidea joined #gluster
08:59 o5k joined #gluster
09:08 kdhananjay joined #gluster
09:10 glusterbot News from resolvedglusterbugs: [Bug 1188968] Geo-replication changelog crawl causes cpu spike - creates lot of stale changelog files <https://bugzilla.redhat.com/show_bug.cgi?id=1188968>
09:12 o5k joined #gluster
09:15 ira joined #gluster
09:26 purpleidea joined #gluster
09:31 itisravi joined #gluster
09:33 safea joined #gluster
09:33 safea hi everybody
09:34 T3 joined #gluster
09:36 kotreshhr joined #gluster
09:36 safea does anyone implement with this guide, but only use 3 nodes? http://www.severalnines.com/blog/high-availability-file-sync-and-share-deploying-owncloud-galera-cluster-mysql-and-glusterfs
09:38 ctria joined #gluster
09:39 itisravi_ joined #gluster
09:39 kanagaraj joined #gluster
09:40 kanagaraj joined #gluster
09:46 Leildin Do you have a question more specifically about gluster ?
09:46 Leildin safea, I'm not sure how to help you ^^
09:46 kotreshhr1 joined #gluster
09:51 aravindavk joined #gluster
09:54 hgowtham joined #gluster
09:55 vipulnayyar joined #gluster
09:57 brianw_ joined #gluster
09:57 bitpushr joined #gluster
09:57 smohan_ joined #gluster
10:00 jiffin joined #gluster
10:01 smohan_ joined #gluster
10:05 smohan joined #gluster
10:06 nbalacha joined #gluster
10:11 glusterbot News from newglusterbugs: [Bug 1208452] installing client packages or glusterfs-api requires libgfdb.so <https://bugzilla.redhat.com/show_bug.cgi?id=1208452>
10:11 xucanjie joined #gluster
10:15 dusmant joined #gluster
10:17 kotreshhr joined #gluster
10:28 itisravi_ joined #gluster
10:43 hgowtham joined #gluster
10:44 harish joined #gluster
10:48 kanagaraj joined #gluster
10:49 kanagaraj joined #gluster
10:50 Pupeno joined #gluster
10:50 Pupeno joined #gluster
11:04 smohan joined #gluster
11:06 T3 joined #gluster
11:07 kkeithley1 joined #gluster
11:21 dusmant joined #gluster
11:29 kanagaraj joined #gluster
11:36 bivak joined #gluster
11:41 glusterbot News from newglusterbugs: [Bug 1208482] pthread cond and mutex variables of fs struct has to be destroyed conditionally. <https://bugzilla.redhat.com/show_bug.cgi?id=1208482>
11:42 soumya joined #gluster
11:43 RameshN joined #gluster
11:52 sac` joined #gluster
12:02 sac joined #gluster
12:05 wkf_ joined #gluster
12:21 T3 joined #gluster
12:23 jmarley joined #gluster
12:26 haomaiwa_ joined #gluster
12:27 LebedevRI joined #gluster
12:31 hgowtham joined #gluster
12:32 B21956 joined #gluster
12:34 nangthang joined #gluster
12:35 kanagaraj_ joined #gluster
12:36 shaunm joined #gluster
12:41 glusterbot News from newglusterbugs: [Bug 1208520] [Backup]: Glusterfind not working with change-detector as 'changelog' <https://bugzilla.redhat.com/show_bug.cgi?id=1208520>
12:42 user_42 joined #gluster
12:49 karnan joined #gluster
12:49 kotreshhr left #gluster
12:54 meghanam joined #gluster
12:55 hagarth joined #gluster
12:56 smohan_ joined #gluster
12:57 T0aD joined #gluster
12:58 Gill joined #gluster
13:02 dgandhi joined #gluster
13:10 Gill left #gluster
13:12 RayTrace_ joined #gluster
13:14 DV joined #gluster
13:16 dusmant joined #gluster
13:19 soumya joined #gluster
13:19 hamiller joined #gluster
13:32 firemanxbr joined #gluster
13:35 georgeh-LT2 joined #gluster
13:44 hchiramm_ joined #gluster
13:47 kovsheni_ joined #gluster
13:50 jmarley joined #gluster
13:58 jiffin joined #gluster
14:01 vipulnayyar joined #gluster
14:02 Slashman joined #gluster
14:06 jiffin joined #gluster
14:06 RameshN joined #gluster
14:06 kasturi joined #gluster
14:10 T3 joined #gluster
14:13 joseki joined #gluster
14:13 overclk joined #gluster
14:13 joseki hello gluster! looking for a quick answer to how to change a hostname on a distributed cluster?
14:14 hamiller joseki, I think that information is saved several places, including the volume. It may be tricky..
14:15 joseki this is what i've found
14:16 joseki http://www.gluster.org/pipermail/gluster-users/2014-May/017328.html
14:21 hamiller joseki, that's a year old, but basically correct. After my morning meeting, I'll see if there is anything better.
14:24 jiffin joined #gluster
14:24 joseki thank you!
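The approach in the mailing-list thread joseki found boils down to rewriting the old hostname in glusterd's state files. A rough sketch, assuming the default /var/lib/glusterd state directory and hypothetical old/new names; run it on every node with glusterd stopped and the directory backed up first:
# stop the management daemon before touching its state
service glusterd stop
# rewrite every occurrence of the old hostname in the state files
grep -rl 'oldhost.example.com' /var/lib/glusterd | xargs sed -i 's/oldhost.example.com/newhost.example.com/g'
# restart and verify afterwards with: gluster peer status
service glusterd start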
14:30 wushudoin joined #gluster
14:31 gnudna joined #gluster
14:34 plarsen joined #gluster
14:38 hagarth joined #gluster
14:39 DV__ joined #gluster
14:42 glusterbot News from newglusterbugs: [Bug 1208564] ls operation hangs after the first brick is killed for distributed-disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1208564>
14:49 deepakcs joined #gluster
14:53 gnudna left #gluster
14:53 gnudna joined #gluster
14:55 bennyturns joined #gluster
14:56 lpabon joined #gluster
14:57 bene2 joined #gluster
14:58 Slashman joined #gluster
15:09 wkf joined #gluster
15:12 DV__ joined #gluster
15:13 20WAAZNEC joined #gluster
15:13 17SAB8NFB joined #gluster
15:21 Gill joined #gluster
15:24 ProT-0-TypE joined #gluster
15:27 wsirc_4067 joined #gluster
15:27 roost joined #gluster
15:28 T3 joined #gluster
15:28 purpleidea joined #gluster
15:36 hagarth joined #gluster
15:52 vipulnayyar joined #gluster
16:07 prilly_ joined #gluster
16:08 _Bryan_ joined #gluster
16:08 karnan joined #gluster
16:12 wsirc_4067 joined #gluster
16:15 kbyrne joined #gluster
16:19 Norky joined #gluster
16:23 Rapture joined #gluster
16:23 DV joined #gluster
16:24 lalatenduM joined #gluster
16:26 RayTrace_ joined #gluster
16:40 hagarth joined #gluster
16:44 _Bryan_ joined #gluster
16:45 nangthang joined #gluster
16:47 lyaunzbe joined #gluster
17:05 R0ok_|kejani joined #gluster
17:08 shaunm joined #gluster
17:08 shaunm joined #gluster
17:15 RicardoSSP joined #gluster
17:15 RicardoSSP joined #gluster
17:23 ernetas Hey guys.
17:23 ernetas After upgrading to 3.6 from 3.5, I can't mount using volume files.
17:23 ernetas Where could be the problem?
17:24 ernetas http://fpaste.org/206575/14279954/
17:27 user_42 joined #gluster
17:37 daMaestro joined #gluster
17:40 d-fence joined #gluster
17:53 JoeJulian ernetas: https://github.com/gluster/glusterfs/blob/release-3.6/xlators/mount/fuse/utils/mount.glusterfs.in#L240-L252
17:57 ernetas JoeJulian: and?
17:57 ernetas The domains resolve to the correct hosts.
17:57 JoeJulian Sorry, you're running vol files so I assumed you were an expert.
17:57 JoeJulian So if it can't find the vol file (-Z) then the rest of that host checking happens.
17:58 ernetas But it should be able to find the vol file.
17:58 JoeJulian That's what you'll need to diagnose then.
17:59 JoeJulian Running from vol files hasn't been officially supported since 3.0.
18:00 ernetas What's the proper way of mounting gluster volume from multiple hosts?
18:01 JoeJulian mount -t glusterfs servername:/volumename /mountpoint
18:02 ernetas That's a single servername.
18:03 JoeJulian @mount server
18:03 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
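Since the mount server is only used to fetch the volume definition, the usual ways to avoid a single point of failure at mount time are round-robin DNS or a backup volfile server mount option. A minimal sketch, assuming the backup-volfile-servers option available in recent 3.x releases and hypothetical host names:
# server1 serves the volfile; server2/server3 are tried only if it is unreachable at mount time
mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/myvol /mnt/myvol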
18:04 d-fence joined #gluster
18:10 DV joined #gluster
18:13 purpleidea joined #gluster
18:13 purpleidea joined #gluster
18:14 RayTrace_ joined #gluster
18:15 _Bryan_ joined #gluster
18:28 ScopeDial joined #gluster
18:29 bennyturns joined #gluster
18:39 o5k joined #gluster
18:45 ildefonso joined #gluster
18:53 hagarth joined #gluster
18:58 roost joined #gluster
18:59 DV joined #gluster
19:03 Gill joined #gluster
19:12 ernetas JoeJulian: how long does it usually take to release a patch and package it? I patched the bug (https://github.com/gluster/glusterfs/pull/31), but it seems that on my system (running Ubuntu 14.04, with Glusterfs from ppa:gluster/glusterfs-3.6), the script is slightly outdated from the Github version of the 3.6 branch, although it was modified 1 month ago. Also, my pull request seems to be to the master of glusterfs, will someone cherry pick it to the re
19:14 JoeJulian ernetas: Not sure. I would either ask in #gluster-dev or on the gluster-devel mailing list.
19:15 ernetas JoeJulian: thanks! :)
19:19 JoeJulian Oh, hey ernetas, I just noticed you issued a PR on github. Please see ,,(hack)
19:19 glusterbot The Development Work Flow is at http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow
19:39 RayTrace_ joined #gluster
19:41 _Bryan_ joined #gluster
19:43 elico joined #gluster
19:47 quique left #gluster
19:53 tg2 Anybody know if this is expected behavior and/or if it has been patched in a newer version of gluster than 3.3, https://gist.github.com/mandb/93369097139c6cc3ff98
20:02 JoeJulian tg2: Sorry, man, no clue. Rebalance has always been a huge mystery.
20:03 penglish1 joined #gluster
20:05 tg2 does replace-brick do the same thing
20:05 tg2 or will writes go on the new brick
20:05 tg2 logically they should
20:05 tg2 but then again, logically, a brick in remove-brick state should not accept new files either lol
20:05 tg2 who would know better?
20:07 JoeJulian Even the developers don't trust it - it seems. bug 1136702
20:07 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1136702 high, high, ---, bugs, MODIFIED , Add a warning message to check the removed-bricks for any files left post "remove-brick commit"
20:09 JoeJulian I believe replace-brick does some replica translator trickery to replicate the new brick from the old brick, but "replace-brick start" is deprecated and somehow we're supposed to have the same functionality after the fact with replace-brick..commit force. I disagree with that, too.
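For the record, the commit-force form JoeJulian mentions looks like the sketch below, assuming a volume named myvol and hypothetical brick paths; on a pure distribute volume there is no replica for the new brick to heal from, which is the reservation he is voicing:
# swap the old brick for the new one in the volume definition
gluster volume replace-brick myvol server1:/export/brick1 server2:/export/brick1 commit force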
20:10 wkf joined #gluster
20:10 virusuy joined #gluster
20:11 virusuy Good evening everyone
20:11 JoeJulian Good $greeting_time yourself.
20:13 glusterbot News from newglusterbugs: [Bug 1208676] No support for mounting volumes with volume files <https://bugzilla.redhat.com/show_bug.cgi?id=1208676>
20:13 tg2 I know remove-brick start was deprecated and became default
20:13 tg2 at some version
20:13 tg2 3.4 maybe
20:15 tg2 trying to decide if it makes more sense to just take it off and rsync back into the volume which unfortunately is not transparent.
20:15 JoeJulian Is your volume distribute-only?
20:16 tg2 yeah sadly
20:16 tg2 I have to schedule downtime to do a copy and it's ~15TB
20:16 JoeJulian Yuck.
20:16 tg2 ya
20:16 tg2 last time took 8 hours
20:16 tg2 with a hacked up rsync-fu copy script which maxes out disk and network IO
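A copy-back of that sort is typically a plain rsync from the removed brick into a fuse mount of the volume; a rough sketch, assuming the old brick data sits at /bricks/old and the volume is mounted at /mnt/myvol (hypothetical paths):
# copy brick contents back through the volume, skipping gluster's internal metadata directory
rsync -a --progress --exclude=.glusterfs /bricks/old/ /mnt/myvol/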
20:17 tg2 this is the only volume left running 3.3
20:17 tg2 the new one on 3.6.2 is doing well
20:17 tg2 haven't tried a remove-brick start to see if that same issue has been resolved
20:17 JoeJulian Unfortunately, I wouldn't hold my breath.
20:18 tg2 let me add to the patch
20:18 tg2 bug *
20:18 JoeJulian So you're just trying to reduce your volume by one brick? or is there a longer term goal associated with that process?
20:18 tg2 yeah I want to decomission that node so I can do some upgrades on it
20:18 tg2 bbu batteries need to be changed
20:18 tg2 firmwards on the lsi cards need updating
20:18 tg2 new network card
20:18 tg2 OS
20:18 tg2 etc
20:18 tg2 its almost 3 years old
20:19 tg2 but this remove-brick just never finishes
20:19 JoeJulian Will those upgrades take less than 8 hours?
20:19 tg2 cause it keeps accepting files while it's offloading them
20:19 tg2 yeah they can
20:19 tg2 but i'm also changing the disks ;)
20:19 tg2 if not I see what you mean and that's what I'd do
20:19 JoeJulian Just trying to think of alternatives.
20:19 tg2 I wonder who would know for sure if replace-brick will send new writes to the new brick
20:19 tg2 that would solve the issue
20:20 JoeJulian There's a maintainers list... let me find it...
20:20 tg2 since it's a simple distribute volume I can just in-place update gluster to any version
20:20 tg2 if it were fixed in a newer version
20:20 tg2 but I don't know if it is
20:21 tg2 the other issue is the network card is erroring out and can only sustain about 1.6gbps instead of full 10gbps so this node is really due for some TLC
20:22 JoeJulian https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L81-L85
20:23 JoeJulian ouch, 1.6 out of 10 would really suck with that much data to move.
20:23 JoeJulian Almost be worth taking the downtime just to replace that hardware as an interim step.
20:23 tg2 ya
20:25 tg2 I would think that a replace-brick would send new writes (and reads) to the new brick, which would then relay requests for misses to the old node
20:25 tg2 while copying
20:25 tg2 :\
20:25 tg2 time for some production experimentation ;D
20:25 tg2 I don't mind if it takes a week to move the data, as long as when it's moved there is none left haha
20:27 tg2 I'll try replace-brick instead
20:27 tg2 and then try remove-brick on my new 3.6.2 cluster and see if it does the same, accepting new reads while in remove-brick state
20:28 DV joined #gluster
20:28 JoeJulian If I were testing 3.6, I would test against 3.6.3rc*
20:34 tg2 I added it to the bug post
20:34 tg2 why not 3.6.2
20:43 tg2 when you did replace-brick in: https://joejulian.name/blog/replacing-a-glusterfs-server-best-practice/
20:43 tg2 what was the hardware spec and how much data was it moving /hr
20:43 balacafalata joined #gluster
20:46 tg2 says 3.5 is the latest stable
20:56 gnudna left #gluster
21:02 lpabon joined #gluster
21:05 deniszh joined #gluster
21:06 jermudgeon joined #gluster
21:08 deniszh1 joined #gluster
21:13 JoeJulian tg2: I forget what those were. They were horribly old and my total data was only in the hundreds of gigabytes.
21:14 JoeJulian tg2: I know 3.5 is, but I would still test against the latest 3.6.3 release candidate.
21:16 nangthang joined #gluster
21:16 o5k_ joined #gluster
21:17 tg2 hm
21:18 tg2 is there a variable that controls the speed of the migration?
21:18 JoeJulian No
21:18 tg2 seems like it's not even running near line rate or disk rate in terms of i/o or throughput
21:20 JoeJulian When you email Raghavendra and Shyamsundar, could you please CC me on it?
21:23 tg2 only throttling stuff I see is for rpc throttling and bit-rot scrub throttle
21:26 penglish1 left #gluster
21:36 glusterbot` joined #gluster
21:36 siel_ joined #gluster
21:36 julim_ joined #gluster
21:36 fyxim joined #gluster
21:36 brianw__ joined #gluster
21:37 bfoster1 joined #gluster
21:37 oxidane_ joined #gluster
21:38 haomaiwa_ joined #gluster
21:40 sankarshan_away joined #gluster
21:44 glusterbot joined #gluster
21:53 jermudgeon joined #gluster
21:57 _Bryan_ joined #gluster
22:07 rotbeard joined #gluster
22:33 DV joined #gluster
22:53 prilly_ joined #gluster
23:10 DV joined #gluster
23:13 balacafalata-bil joined #gluster
23:29 plarsen joined #gluster
23:41 DV joined #gluster
23:50 FatBack joined #gluster
23:58 Pupeno joined #gluster
23:58 lyaunzbe hey guys! quick conceptual question, is the volfile replicated across bricks?
23:58 JoeJulian No, across servers in the trusted peer group.
23:59 lyaunzbe I think I worded that wrong, Joe. So I have two servers that I'm using as nodes in my glusterfs cluster. If I set auth.allow from one server, will that be applied to the other node in the cluster?
