
IRC log for #gluster, 2015-08-12


All times shown according to UTC.

Time Nick Message
00:02 woakes070048 joined #gluster
00:13 woakes070048 hey what are you guys using for a vip ctdb vs keepalived?
00:16 JoeJulian I just set up ucarp for my home stuff. Looks like gluster is leaning toward pacemaker/corosync.
00:16 ipmango_ joined #gluster
00:18 woakes070048 JoeJulian: im using it for ovirt storage and I notice that is what most people are using. I haven't used pacemaker since my drbd days
00:19 srepetsk woakes070048: i'm using ctdb myself; it works pretty well
00:19 JoeJulian You found what it takes to make me really not like something... mention it in the same breath as drbd.
00:20 JoeJulian Good thing nobody's mentioned that alongside gelato.
00:20 srepetsk you don't like gelato?!
00:20 woakes070048 srepetsk: that is what I used in the staging cluster i made. I'm kinda sad there isn't a puppet script for it.
00:20 dingdong hey guys I have 1 more question maybe a bit silly but if one of the 2 bricks is offline the other should take over right?
00:21 dingdong losing my connection on both
00:26 dingdong nevermind silly told you
00:26 dingdong hosts file error
00:26 dingdong :P
00:28 calavera joined #gluster
00:36 woakes07004 joined #gluster
00:49 nishanth joined #gluster
00:58 gildub joined #gluster
01:01 akay1 if a client connects with ctdb to a particular host, does that connection get spread across all nodes or does it only change when it drops the connection to the existing one?
01:06 srepetsk ctdb is only assigning the IP to a specific server; if that server fails, the IP gets elected to another
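
A minimal ctdb layout along the lines srepetsk describes needs only two files on each node; the addresses and interface below are illustrative placeholders, not taken from this channel:

    # /etc/ctdb/nodes -- the fixed per-node addresses ctdb uses internally
    10.0.0.11
    10.0.0.12

    # /etc/ctdb/public_addresses -- the floating VIP(s) ctdb moves on failure
    10.0.0.100/24 eth0

Clients mount via the VIP (10.0.0.100 here); when the node holding it fails, ctdb "elects" another node from /etc/ctdb/nodes to take the address over.
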
01:07 woakes070048 joined #gluster
01:09 tru_tru joined #gluster
01:24 papamoose1 joined #gluster
01:31 Lee1092 joined #gluster
01:56 aaronott joined #gluster
02:03 nangthang joined #gluster
02:04 dingdong hey guys can someone help me with adding a brick to the volume?
02:04 dingdong currently having a 2 replica is it possible to change it to 3?
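
dingdong's question doesn't get answered in-channel, but for reference the usual way to grow a 2-brick replica to replica 3 is add-brick with the new replica count; the volume name, host and brick path here are placeholders:

    gluster volume add-brick VOLNAME replica 3 server3:/export/brick1
    gluster volume heal VOLNAME full   # trigger self-heal to populate the new brick
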
02:19 auzty joined #gluster
02:21 haomaiwang joined #gluster
02:23 PaulCuzner joined #gluster
02:32 dewey joined #gluster
02:37 PaulCuzner left #gluster
02:45 VeggieMeat joined #gluster
02:52 kalzz joined #gluster
03:09 tertiary joined #gluster
03:13 [1]tertiary joined #gluster
03:20 bharata-rao joined #gluster
03:28 calavera joined #gluster
03:30 sakshi joined #gluster
03:30 prg3 joined #gluster
03:31 ndk joined #gluster
03:32 scuttle|afk joined #gluster
03:36 shubhendu joined #gluster
03:39 autoditac joined #gluster
03:43 TheSeven joined #gluster
03:47 vmallika joined #gluster
03:50 overclk joined #gluster
03:52 ppai joined #gluster
03:58 atinm joined #gluster
04:00 nishanth joined #gluster
04:01 kanagaraj joined #gluster
04:12 nbalacha joined #gluster
04:18 RameshN joined #gluster
04:19 harish joined #gluster
04:19 dewey joined #gluster
04:21 pppp joined #gluster
04:26 yazhini joined #gluster
04:34 gem joined #gluster
04:37 meghanam joined #gluster
04:39 neha joined #gluster
04:42 ramky joined #gluster
04:43 kotreshhr joined #gluster
04:43 vimal joined #gluster
04:44 jwd joined #gluster
04:46 rafi joined #gluster
04:46 jwaibel joined #gluster
04:47 _ndevos joined #gluster
04:47 ramteid joined #gluster
04:50 tertiary joined #gluster
04:52 poornimag joined #gluster
04:53 [1]tertiary joined #gluster
04:55 ndarshan joined #gluster
04:59 jiffin joined #gluster
05:00 cliluw joined #gluster
05:01 RameshN joined #gluster
05:01 poornimag joined #gluster
05:05 calavera joined #gluster
05:05 meghanam joined #gluster
05:07 sahina joined #gluster
05:07 tertiary joined #gluster
05:09 [1]tertiary joined #gluster
05:12 deepakcs joined #gluster
05:13 aravindavk joined #gluster
05:14 jcastill1 joined #gluster
05:17 elico joined #gluster
05:19 jcastillo joined #gluster
05:25 vmallika joined #gluster
05:28 jcastill1 joined #gluster
05:33 jcastillo joined #gluster
05:34 pppp joined #gluster
05:45 jwd joined #gluster
05:46 atalur joined #gluster
05:49 hgowtham joined #gluster
05:50 Manikandan joined #gluster
05:51 ashiq joined #gluster
05:53 dijuremo joined #gluster
05:54 javi404 joined #gluster
05:54 haomaiwa_ joined #gluster
05:56 raghu joined #gluster
05:59 maveric_amitc_ joined #gluster
06:06 dingdong joined #gluster
06:08 TvL2386 joined #gluster
06:15 haomaiwa_ joined #gluster
06:21 Bhaskarakiran joined #gluster
06:21 Manikandan joined #gluster
06:27 TvL2386 joined #gluster
06:28 aravindavk joined #gluster
06:30 pppp joined #gluster
06:36 kshlm joined #gluster
06:37 anil joined #gluster
06:42 kotreshhr joined #gluster
06:43 atalur joined #gluster
06:43 atinm joined #gluster
06:45 karnan joined #gluster
06:46 atalur joined #gluster
06:49 ramky joined #gluster
06:51 aravindavk joined #gluster
06:52 Manikandan joined #gluster
06:58 rjoseph joined #gluster
06:59 kshlm joined #gluster
07:01 Bhaskarakiran joined #gluster
07:02 kotreshhr joined #gluster
07:05 kaushal_ joined #gluster
07:07 kovshenin joined #gluster
07:07 rtalur joined #gluster
07:14 nangthang joined #gluster
07:18 R0ok_ joined #gluster
07:18 kdhananjay joined #gluster
07:21 TvL2386 joined #gluster
07:31 papamoose joined #gluster
07:32 rastar joined #gluster
07:35 fsimonce joined #gluster
07:39 Slashman joined #gluster
07:43 autoditac joined #gluster
07:48 Debloper joined #gluster
07:50 badone_ joined #gluster
08:02 ctria joined #gluster
08:09 LebedevRI joined #gluster
08:12 aravindavk joined #gluster
08:18 akay1 srepetsk, if i have lots of clients connecting with ctdb, does it choose the least busy node or just pick one at random? and if random is there any difference to using normal and rrdns?
08:18 autoditac_ joined #gluster
08:22 RaSTar joined #gluster
08:25 ajames-41678 joined #gluster
08:25 RaSTar joined #gluster
08:27 jordie joined #gluster
08:29 jordie [in Chinese] On a two-node replicated glusterfs cluster, data transfer is only half as fast as a plain filesystem. How can this be improved?
08:29 jordie [in Chinese] Is there anyone here who can read Chinese?
08:32 spalai joined #gluster
08:36 autoditac_ joined #gluster
08:40 Philambdo joined #gluster
08:49 ndarshan joined #gluster
08:50 JoeJulian Sorry, jordie, we do require english in this channel.
08:50 nishanth joined #gluster
09:04 meghanam joined #gluster
09:17 jordie Thanks JoeJulian! I thought someone could read Chinese!
09:18 ramky joined #gluster
09:23 ppai joined #gluster
09:28 Pupeno joined #gluster
09:28 badone__ joined #gluster
09:30 ndarshan joined #gluster
09:34 RayTrace_ joined #gluster
09:37 kaushal_ joined #gluster
09:42 SOLDIERz joined #gluster
09:42 harish joined #gluster
09:51 ndarshan joined #gluster
09:57 gem joined #gluster
10:05 B21956 joined #gluster
10:07 ajames-41678 joined #gluster
10:21 poornimag joined #gluster
10:22 rtalur left #gluster
10:26 ramky joined #gluster
10:26 woakes070048 joined #gluster
10:37 rastar joined #gluster
10:38 rastar joined #gluster
10:42 elico joined #gluster
10:56 kaushal_ joined #gluster
10:56 gem joined #gluster
10:56 autoditac__ joined #gluster
11:00 ndarshan joined #gluster
11:00 kshlm joined #gluster
11:02 rjoseph joined #gluster
11:02 firemanxbr joined #gluster
11:04 karnan joined #gluster
11:05 kaushal_ joined #gluster
11:07 shyam joined #gluster
11:19 nthomas joined #gluster
11:24 poornimag joined #gluster
11:26 R0ok_ joined #gluster
11:38 julim joined #gluster
11:44 unclemarc joined #gluster
11:53 bennyturns joined #gluster
11:59 nsoffer joined #gluster
11:59 rafi1 joined #gluster
12:00 raghu joined #gluster
12:02 rafi joined #gluster
12:05 spalai left #gluster
12:08 overclk joined #gluster
12:10 gletessier joined #gluster
12:11 kaushal_ joined #gluster
12:19 Manikandan_ joined #gluster
12:26 srepetsk akay1: i don't know how it elects the server to run on, sorry, but ovirt should generally load balance to some extent
12:27 kaushal_ joined #gluster
12:32 kanagaraj joined #gluster
12:34 aaronott joined #gluster
12:40 jcastill1 joined #gluster
12:41 pppp joined #gluster
12:48 rafi joined #gluster
12:57 jcastillo joined #gluster
13:00 hchiramm_ joined #gluster
13:01 kaushal_ joined #gluster
13:02 shaunm joined #gluster
13:02 TheCthulhu joined #gluster
13:14 TheCthulhu1 joined #gluster
13:15 eljrax bennyturns: Are you around?
13:15 bennyturns eljrax, yepo wasup?
13:15 eljrax Was wondering if you got anywhere with the distributed benchmarks. You mentioned you might have noticed the same thing I did (2x2 being noticeably slower than 1x2)
13:15 ajames-41678 joined #gluster
13:15 eljrax We spoke last week or so
13:16 bennyturns eljrax, nope, just got a BZ open
13:16 eljrax Alright, public?
13:16 bennyturns eljrax, I'll dig it up in a min, can you post it in it?
13:16 bennyturns eljrax, yaya
13:16 eljrax Grand
13:17 bennyturns in the middle of something right now, gimme a few
13:17 eljrax Sure, no worries
13:17 bennyturns eljrax, awesome catch man!
13:19 rastar joined #gluster
13:23 rwheeler joined #gluster
13:26 mator http://www.gartner.com/technology/reprints.do?id=1-28XVMOC&ct=150130&st=sb
13:26 glusterbot Title: Critical Capabilities for Scale-Out File System Storage (at www.gartner.com)
13:27 mator Critical Capabilities for Scale-Out File System Storage
13:27 ninkotech joined #gluster
13:27 ninkotech_ joined #gluster
13:30 daMaestro joined #gluster
13:33 bennyturns eljrax, I wasnt able to get a BZ open, I looked for it
13:33 bennyturns eljrax, if you wanna open one I can comment or I iwll open one this afternoon
13:37 eljrax Feel free to open it, I reckon you have more experience with that and gravitas. Happy to chime in on it though :)
13:38 hamiller joined #gluster
13:40 aaronott joined #gluster
13:42 dgandhi joined #gluster
13:43 doekia joined #gluster
13:44 pk1_ joined #gluster
13:45 pk1_ gletessier: hi
13:47 tertiary joined #gluster
13:47 Twistedgrim joined #gluster
13:47 overclk joined #gluster
13:47 pk1_ kdhananjay: could you give me the link to pad?
13:47 kdhananjay pk1_: https://public.pad.fsfe.org/p/geoffrey
13:48 glusterbot Title: FSFE Etherpad: public instance (at public.pad.fsfe.org)
13:48 pk1_ gletessier: Please ping me once you are back
13:48 pk1_ kdhananjay: thanks, I will speak to gletessier once he is back
13:50 RayTrace_ joined #gluster
13:54 cholcombe joined #gluster
13:57 ekuric joined #gluster
14:02 wehde joined #gluster
14:03 wehde does anyone know if there is a way to find out how long a self heal will take?
14:03 kotreshhr left #gluster
14:03 wehde gluster volume heal data info ---- doesn't say a whole lot
14:03 glusterbot wehde: --'s karma is now -2
14:09 wushudoin joined #gluster
14:13 wehde does anyone know if there is a way to find out how long a self heal will take?
14:20 doekia joined #gluster
14:25 theron joined #gluster
14:28 ahab joined #gluster
14:29 wehde does anyone know if there is a way to find out how long a self heal will take?
14:35 eljrax wehde: It's usually considered bad form to constantly repeat your question.
14:35 eljrax But you can do the maths yourself, I guess. Check how much it has moved in a given time period
14:35 eljrax Multiply by how much is left
14:36 eljrax But that's assuming you know how much there is to heal
14:38 wehde so i would have to get the file size from each brick and compare that way?
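
A rough sketch of the arithmetic eljrax suggests, sampling the heal backlog twice and extrapolating; VOLNAME is a placeholder and the estimate assumes the backlog shrinks roughly linearly:

    #!/bin/bash
    # count pending heal entries across all bricks
    count() { gluster volume heal VOLNAME info | awk '/Number of entries/ {s+=$NF} END {print s+0}'; }
    before=$(count); sleep 600; after=$(count)
    rate=$(( (before - after) / 10 ))          # entries healed per minute
    if [ "$rate" -gt 0 ]; then
        echo "roughly $(( after / rate )) minutes left"
    else
        echo "no measurable progress in the sample window"
    fi
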
14:40 papamoose left #gluster
14:42 timotheus1 joined #gluster
14:45 squizzi_ joined #gluster
14:46 jcastill1 joined #gluster
14:51 jcastillo joined #gluster
14:51 tertiary joined #gluster
14:54 victori joined #gluster
14:57 shyam joined #gluster
14:57 blubberdi joined #gluster
14:58 pk1_ wehde: no :-(. It only says if it needs self-heal or not.
14:58 blubberdi Hey, can somebody tell me how I can produce a split-brain on a replicated volume with two bricks? I'm not able to break it...
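
blubberdi's question goes unanswered here, but one common recipe for forcing a split-brain on a 2-brick replica in a test environment looks roughly like this (VOL, serverA/serverB and the file are placeholders; the idea is that each brick ends up with a write the other never saw):

    gluster volume set VOL cluster.self-heal-daemon off   # keep the shd from repairing between steps
    pkill -f "glusterfsd.*VOL"        # on serverA: kill its brick process
    echo one >> /mnt/VOL/testfile     # on a client: write while only serverB's brick is up
    gluster volume start VOL force    # on serverA: bring the brick back
    pkill -f "glusterfsd.*VOL"        # on serverB: kill its brick before any heal happens
    echo two >> /mnt/VOL/testfile     # on the client: write while only serverA's brick is up
    gluster volume start VOL force    # on serverB: bring the brick back
    gluster volume heal VOL info split-brain   # should now list the file
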
14:59 jcsp_ joined #gluster
14:59 [1]tertiary joined #gluster
15:02 theron_ joined #gluster
15:03 kdhananjay joined #gluster
15:07 kshlm joined #gluster
15:07 Trefex joined #gluster
15:09 twisted` joined #gluster
15:16 cyberswat joined #gluster
15:19 mband joined #gluster
15:23 haomaiwa_ joined #gluster
15:24 mband Hi, I have just done a complete reinstall of my glusterfs hosting servers, and now I can't seem to figure out how to add an existing glusterfs volume to the gluster master - I tried to create a new volume but it told me the brick(s) already belonged to a volume, so that is not how to do it (don't want to force it, since the data is kind of valuable) - any suggestions on how to reassemble my previous glusterfs volume?
15:24 Trefex joined #gluster
15:25 _maserati joined #gluster
15:28 bennyturns joined #gluster
15:33 Trefex joined #gluster
15:34 vimal joined #gluster
15:36 Trefex joined #gluster
15:42 nangthang joined #gluster
15:43 cyberswat joined #gluster
15:46 RedW joined #gluster
15:47 Norky joined #gluster
15:48 _Bryan_ joined #gluster
15:51 ipmango joined #gluster
15:59 nzero joined #gluster
16:00 jwd joined #gluster
16:02 kdhananjay pk1_++
16:02 glusterbot kdhananjay: pk1_'s karma is now 1
16:03 Twistedgrim joined #gluster
16:10 deepakcs joined #gluster
16:16 RayTrace_ joined #gluster
16:17 scuttle|afk joined #gluster
16:17 overclk joined #gluster
16:21 victori joined #gluster
16:25 pk1_ kdhananjay++
16:25 glusterbot pk1_: kdhananjay's karma is now 4
16:28 hagarth pk1_: :O
16:32 RameshN joined #gluster
16:44 Rapture joined #gluster
16:48 shaunm joined #gluster
16:48 nthomas joined #gluster
16:48 mpietersen joined #gluster
16:58 RayTrace_ joined #gluster
16:58 justicefries joined #gluster
17:02 ajames41678 joined #gluster
17:03 plarsen joined #gluster
17:12 RayTrace_ joined #gluster
17:13 jockek joined #gluster
17:15 jockek uhr, so, trying to get glusterfs working in an ipv6-only environment
17:15 jockek browsing the world wide web, I get mixed results; some say it should work, others state it doesn't, and point to (several) bugs (which are still open)
17:15 kotreshhr joined #gluster
17:16 jockek out-of-the-box the latest 3.6 and 3.7 (RPM's) don't seem to work
17:16 jockek (it fails on the probe, and glusterd only listens to 24007 on ipv4)
17:19 neofob joined #gluster
17:25 jockek is there any version that supposedly works better than the others?
17:25 jockek i'm about to try 3.5 now, see if that works any better
17:26 JoeJulian I thought they fixed it for 3.7...
17:26 jockek apparently not out-of-the-box, at least
17:26 jockek glusterd only listens to ipv4, so you're kinda stuck there even if the rest of the stuff works
17:27 JoeJulian bug 1117886
17:27 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1117886 unspecified, unspecified, ---, anders.blomdell, POST , Gluster not resolving hosts with IPv6 only lookups
17:29 jockek ah, well that explains
17:29 jockek "waiting till 3.6 or 3.7 is stable enough"
17:29 jockek maybe 3.5 is the way to go regardless, then? :-P
17:31 JoeJulian Looks like the problem is in mixed environments. If you're not mixed, you should be ok I guess.
17:31 jockek no mixed here, two fresh centos7.1 installs, using the same gluster-RPM's on both
17:31 JoeJulian No, mixed ipv4+ipv6
17:31 jockek ah
17:32 jockek well, both are v6-only
17:32 jockek with only quad-a dns-entries
17:32 JoeJulian "The problem was more complex than I anticipated, the problem occurs in multiple places, and the lookup of hosts and connecting to them is unfortunately decoupled, which means that replacing gethostbyname with getaddrinfo will (always?) return the IPv6 address, which means that hosts without IPv6 connectivity will be hosed (this was the AFAICT the [unexplained/unexplored] reason for removing IPv6 support in the first place."
17:32 jockek yes, but that wouldn't apply in my scenario
17:33 jockek as they only have v6, no v4 at all (except localhost/127.0.0.1)
17:35 troble joined #gluster
17:35 jockek latest 3.5 also listens on ipv4 only
17:35 troble Hey guys just a question does the amount of bricks have any impact on speed/performance?
17:36 JoeJulian troble: It can, depending on your workload and configuration.
17:36 overclk joined #gluster
17:36 JoeJulian If you're write heavy and have too many replica, you'll bottleneck there as the client performs the write to all the replica.
17:37 JoeJulian If you have a huge data set, but the needs from that data set are very diverse (like Pandora), having a huge number of distribute subvolumes can prevent bottlenecks.
17:38 troble ahh thanks and the read performance when the write is over?
17:38 JoeJulian Having a large number of replica for a read-heavy environment where there are popular files being read frequently (like Netflix), having a larger number of replica can prevent bottlenecks.
17:39 troble I am talking about a setup which has more read than write jobs
17:39 troble sounds good for me
17:39 JoeJulian how big? netflix big?
17:40 troble naahh unfortunately just a couple of mounts between several hosts reading the same file
17:41 JoeJulian I would start small. You can always add replicas later. If you start out too big you have to make other compromises to achieve that kind of scale.
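
For a concrete picture of how replica count and distribute subvolumes combine, a small 2x2 distributed-replicated volume (two replica pairs, files distributed across the pairs) would be created like this; hostnames and brick paths are placeholders:

    gluster volume create VOLNAME replica 2 \
        server1:/export/b1 server2:/export/b1 \
        server3:/export/b2 server4:/export/b2
    gluster volume start VOLNAME

Bricks are grouped into replica sets in the order given, so server1/server2 mirror each other and server3/server4 mirror each other.
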
17:41 troble would like to be a part of Netflix by the way (did you know they give 1 year of holidays when you get a kid) doesn't matter man or wife
17:41 JoeJulian Yeah, just made the news.
17:41 troble yeah I am planning to start small
17:42 troble Just found out that you need to add bricks 2 by 2 if you create a volume, just adding 1 to the existing 2 isn't possible
17:43 troble took me some time to figure that out  :(
17:44 troble for now I'll stay working at eBay, it's amazing (not netflix amazing but amazing) :P
17:44 troble Again thanks JoeJulian! will come back if I need you !! :)
17:45 JoeJulian troble: Which state are you working for ebay in?
17:45 troble it :D
17:45 troble noc
17:45 JoeJulian Ah, ok.
17:45 troble not that good :P
17:45 JoeJulian I know a bunch of people up here in Seattle that work there.
17:46 troble ahh I am working in the netherlands but know them (almost) all
17:51 B21956 joined #gluster
17:52 gem joined #gluster
17:54 gem joined #gluster
17:54 RayTrace_ joined #gluster
17:57 gem joined #gluster
17:59 gem joined #gluster
18:01 gem joined #gluster
18:03 gem joined #gluster
18:05 gem joined #gluster
18:06 shyam joined #gluster
18:07 gem joined #gluster
18:09 gem joined #gluster
18:11 gem joined #gluster
18:11 nsoffer joined #gluster
18:14 gem joined #gluster
18:14 cliluw joined #gluster
18:15 apahim joined #gluster
18:16 gem joined #gluster
18:16 shyam1 joined #gluster
18:16 rastar joined #gluster
18:16 MarceloLeandro_ joined #gluster
18:18 gem joined #gluster
18:19 MarceloLeandro_ good afternoon, I would like to ask some questions about tiering.
18:20 gem joined #gluster
18:20 MarceloLeandro_ someone help me?
18:22 gem joined #gluster
18:22 rastar joined #gluster
18:24 gem joined #gluster
18:24 nsoffer joined #gluster
18:25 theron joined #gluster
18:26 gem joined #gluster
18:27 RameshN joined #gluster
18:28 gem joined #gluster
18:30 _maserati so what happens when I tack on another replication server to about 2TB of data? Will it still be usable?
18:37 PatNarciso joined #gluster
18:38 PatNarciso _maserati, your gluster (and its data) will be available when you add additional bricks to your volume.
18:47 shyam joined #gluster
18:48 kotreshhr left #gluster
18:59 JoeJulian MarceloLeandro_: It's not complete yet.
19:08 shyam joined #gluster
19:09 msvbhat MarceloLeandro_: Yes, it's not ready yet. But shoot your question anyway. If no one answers here, send them to @gluster-users mailing list
19:12 B21956 left #gluster
19:12 B21956 joined #gluster
19:38 shyam joined #gluster
19:40 Rapture joined #gluster
19:44 MarceloLeandro__ joined #gluster
19:44 Philambdo joined #gluster
19:50 nzero joined #gluster
20:08 daMaestro joined #gluster
20:28 nzero joined #gluster
20:39 doekia joined #gluster
21:10 CyrilPeponnet hey guys
21:10 CyrilPeponnet I have very strange behaviour
21:11 CyrilPeponnet backend 3.5.2, client 3.6.4 using gluster fuse; a stat /path/to/symlink is hanging forever
21:12 CyrilPeponnet then it's impossible to kill even with kill -9
21:12 CyrilPeponnet and I have plenty of INFO: task ls:5154 blocked for more than 120 seconds.
21:12 CyrilPeponnet with a stack trace
21:15 daMaestro joined #gluster
21:20 CyrilPeponnet in fact
21:20 daMaestro joined #gluster
21:24 nzero joined #gluster
21:28 CyrilPeponnet I think there is a serious bug here
21:30 daMaestro|isBack joined #gluster
21:32 badone__ joined #gluster
21:35 timotheus1 joined #gluster
21:36 theron joined #gluster
21:44 shyam joined #gluster
22:01 _maserati You guys see this explosion in china from today? https://www.youtube.com/watch?v=MHQPX2TJPQc&feature=youtu.be   holy jeebus
22:04 JoeJulian CyrilPeponnet: If you kill the fuse client (glusterfs) the block will be released. That dismounts the volume and doesn't solve your problem, but at least you don't have to reboot to clear it.
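
A sketch of what JoeJulian means by killing the fuse client for one mount; the mount point, volume name and pid are placeholders:

    ps ax | grep "[g]lusterfs" | grep /mnt/vol   # find the client process serving /mnt/vol
    kill -9 <pid>                                 # processes blocked on the mount error out and return
    umount -l /mnt/vol                            # clean up the dead mount
    mount -t glusterfs server:/vol /mnt/vol       # remount
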
22:06 CyrilPeponnet good to know
22:07 JoeJulian CyrilPeponnet: So... did anything change?
22:07 CyrilPeponnet yes it closes all my stuck processes
22:07 CyrilPeponnet but I think the issue is on my volume....
22:08 CyrilPeponnet because a stat on the faulty file still hangs with the 3.6 client but not with the 3.5 client
22:13 CyrilPeponnet when it gets stuck the only client message I get is [afr-self-heal-common.c:476:afr_log_selfheal] 0-usr_global-replicate-0: Completed metadata selfheal on c097f083-4961-496e-9702-eacc9bd255c1. source=0 sinks=
22:14 nsoffer joined #gluster
22:19 nzero joined #gluster
22:19 jatone joined #gluster
22:20 jatone random question: has gluster fixed the create lookup bug that causes creates to slow down as the cluster grows in nodes?
22:21 jatone and/or any recent articles about gluster performance for large clusters?
22:22 jatone sadly the article I'm currently looking at was written over a year ago (blog.gluster.org/category/performance/), and says a fix is in the works, but no bug is linked to the issue.
22:30 CyrilPeponnet @JoeJulian I got a debug trace (https://gist.github.com/CyrilPeponnet/a104834e815e64555848#file-gistfile1-txt-L561) line 561 is when I perform a stat /path/to/file and the whole thing hangs. Can't see anything relevant but maybe you can :)
22:30 JoeJulian There was a create lookup bug?
22:31 JoeJulian CyrilPeponnet: What is /images/7750dc/0.0/I1681/decore ? file or directory?
22:32 CyrilPeponnet file
22:34 CyrilPeponnet I think this file may be in a bad state but getfattr -d -m . -e hex on each occurrence of the file in the bricks (replica 2) gives me the same thing
22:34 CyrilPeponnet except for trusted.glusterfs.f8f5ef65-4678-47be-bcb1-6c3cdaef545e.xtime=
22:35 JoeJulian xtime is for geo-rep
22:35 CyrilPeponnet ok so it's fine as we don't use it any more
22:35 JoeJulian Anything useful in the brick logs?
22:36 CyrilPeponnet let me check
22:36 CyrilPeponnet hmm all brick log files are empty
22:36 CyrilPeponnet size 0
22:37 JoeJulian is /var/log full?
22:38 MrAbaddon joined #gluster
22:39 CyrilPeponnet nope
22:39 JoeJulian I bet the log got rotated but the server never was HUPed.
22:40 JoeJulian That's why I always do copytruncate.
22:40 CyrilPeponnet could be yes...
22:41 CyrilPeponnet but I regularly issue systemctl restart glusterd
22:41 CyrilPeponnet (because with 3.5.2 if I let it run for say 3 days, it gets OOM killed eating 64GB of RAM...)
22:43 CyrilPeponnet @JoeJulian nothing relevant in that log ? (except we really should not set a gluster mount in the PATH)
22:46 nsoffer joined #gluster
22:47 Pupeno joined #gluster
22:51 CyrilPeponnet how can I make the bricks log things without killing everything in production (kind of afraid of that)
23:01 CyrilPeponnet ok I have some brick logs
23:03 ajames-41678 joined #gluster
23:04 CyrilPeponnet @JoeJulian https://gist.github.com/CyrilPeponnet/d1e6d971c2eb9c85a6fa#file-brick1-L4 first brick top of file you can see the 3.6.4 client connection
23:05 CyrilPeponnet brick2 https://gist.github.com/CyrilPeponnet/b6e20b29499be7cabb25
23:05 glusterbot Title: brick2 · GitHub (at gist.github.com)
23:06 CyrilPeponnet I have a LOT of permission denied errors I can't explain
23:07 CyrilPeponnet this volume is mounted as RO by the clients
23:11 CyrilPeponnet I really need some advice here... our gluster setup is degrading itself. I can't find anything relevant.
23:13 nzero joined #gluster
23:17 the-me joined #gluster
23:18 JoeJulian Sorry, CyrilPeponnet, $dayjob...
23:19 CyrilPeponnet I know :( as is mine :p trying to bring gluster back to life
23:21 JoeJulian selinux?
23:21 CyrilPeponnet nope
23:22 CyrilPeponnet looks like the corresponding links are not created
23:22 CyrilPeponnet I mean
23:22 CyrilPeponnet trusted.gfid=0x90be07e9e61940b68e64a5a7ce3d180b I can't find the .glusterfs/90/be/90be07e9e61940b68e64a5a7ce3d180b hardlink
23:22 JoeJulian But then fsetattr should return ENOENT not EPERM.
23:23 JoeJulian Odd.
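
For reference, the hardlink CyrilPeponnet is looking for is derived from the gfid: the first two and next two hex characters become directory levels under .glusterfs on the brick, and the file name is the dashed uuid form; the brick path below is a placeholder:

    # trusted.gfid=0x90be07e9e61940b68e64a5a7ce3d180b maps to:
    ls -li /export/brick/.glusterfs/90/be/90be07e9-e619-40b6-8e64-a5a7ce3d180b
    # for a regular file this entry should share an inode (hard link) with the file itself
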
23:23 CyrilPeponnet files belong to a user called test lab (499.499) this volume is root squashed
23:25 JoeJulian maybe...
23:31 CyrilPeponnet setting root squash to off stops the denied issue
23:33 JoeJulian ok, root squash must be handled by the posix translator, I think.
23:33 JoeJulian And I just checked and marker is below posix so it has to go up through it. That's probably where EPERM was coming from.
23:34 volga629 joined #gluster
23:34 CyrilPeponnet ok so this is a real issue or not ? (I mean use root-squash)
23:34 JoeJulian Did you say you're not using geo-rep anymore?
23:35 volga629 Hello Everyone, trying enable encryption gluster vol set datapoint auth.ssl-allow and getting error volume set: failed: option : auth.ssl-allow does not exist
23:35 CyrilPeponnet yep
23:35 volga629 any help thank you
23:35 JoeJulian then disable marker and you *should* be able to re-enable root squash
23:35 CyrilPeponnet how to disable marker ?
23:36 CyrilPeponnet geo-replication.indexing: on -> reset it ?
23:36 JoeJulian gluster volume set $vol geo-replication.indexing off
23:36 JoeJulian and/or reset
23:36 CyrilPeponnet geo-replication.ignore-pid-check: on
23:36 CyrilPeponnet changelog.changelog: on
23:36 CyrilPeponnet nfs.acl: off
23:36 CyrilPeponnet geo-replication.indexing: on
23:36 CyrilPeponnet (sorry)
23:36 CyrilPeponnet can I reset the two geo-rep and changelog as well ?
23:37 JoeJulian I noticed that changing log-level then doing a reset didn't change the log-level back to info, so I don't have a lot of faith in reset.
23:37 JoeJulian But yes
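
Taken together, the changes being discussed would look something like this; VOLNAME is a placeholder, and whether reset or an explicit off is used matters little as long as the options end up disabled:

    gluster volume set VOLNAME geo-replication.indexing off
    gluster volume reset VOLNAME geo-replication.ignore-pid-check
    gluster volume reset VOLNAME changelog.changelog
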
23:38 CyrilPeponnet it works for me with reset for log level at least
23:38 JoeJulian They may have fixed that then.
23:39 CyrilPeponnet (I'm on 3.5.2 are you running older ?)
23:40 JoeJulian volga629: what version?
23:40 JoeJulian CyrilPeponnet: I may have been when I lost faith in reset doing what I wanted every time....
23:40 volga629 glusterfs 3.5.5 built on Jul  8 2015 18:58:36
23:41 JoeJulian volga629: looks like that feature was added in 3.6
23:41 JoeJulian You'll need 3.6.4 or 3.7.3
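
Once on 3.6.4 or newer, the TLS setup volga629 is after looks roughly like this; the certificate CNs are placeholders and "datapoint" is the volume name from the failed command above:

    # place glusterfs.pem, glusterfs.key and glusterfs.ca under /etc/ssl/ on every server and client, then:
    gluster volume set datapoint client.ssl on
    gluster volume set datapoint server.ssl on
    gluster volume set datapoint auth.ssl-allow 'client1-cn,client2-cn'
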
23:42 CyrilPeponnet ok I will reenable root squashing and see...
23:42 CyrilPeponnet (note that this is a separate issue and my gluster client 3.6 is still stuck)
23:42 volga629 I see, I will need to use auth.allow ?
23:43 JoeJulian Did you want encryption?
23:43 volga629 yes for server side and then tls client authentication
23:43 JoeJulian Then you'll need to upgrade.
23:44 JoeJulian @latest
23:44 volga629 that should do it http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.2/Fedora/fedora-21/x86_64/
23:44 glusterbot JoeJulian: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
23:44 glusterbot Title: Index of /pub/gluster/glusterfs/3.6/3.6.2/Fedora/fedora-21/x86_64 (at download.gluster.org)
23:45 JoeJulian volga629: 3.6.4 or 3.7.3
23:46 volga629 I am just looking for repo file
23:46 CyrilPeponnet one question, when I want to set a client to debug for now I have to do that on the volume side... this means that my 1k clients get into debug mode and start logging things like crazy. Is there a way to set DEBUG on the client side only ?
23:47 JoeJulian http://download.gluster.org/pub/gluster/glusterfs/LATEST/Fedora/glusterfs-fedora.repo
23:47 CyrilPeponnet (I think I found my answer in mount.glusterfs)
23:47 JoeJulian right
23:48 JoeJulian might work.
23:48 JoeJulian I remember arguing at one time that command line log levels should override the volume settings.
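
The mount.glusterfs answer CyrilPeponnet found boils down to passing the log level as a mount option so only that one client is affected; server, volume and mount point here are placeholders:

    mount -t glusterfs -o log-level=DEBUG server:/VOLNAME /mnt/VOLNAME
    # or in fstab:
    # server:/VOLNAME  /mnt/VOLNAME  glusterfs  defaults,log-level=DEBUG  0 0
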
23:49 volga629 that was not nice
23:49 volga629 /var/tmp/rpm-tmp.S6nt1a: line 45: 18204 Segmentation fault      (core dumped) glusterd --xlator-option *.upgrade=on -N
23:50 JoeJulian Nifty
23:51 volga629 http://fpaste.org/254545/23452143/
23:51 glusterbot Title: #254545 Fedora Project Pastebin (at fpaste.org)
23:54 edwardm61 joined #gluster
23:57 volga629 0-rpc-transport: /usr/lib64/glusterfs/3.6.4/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
23:59 dlambrig_ joined #gluster
23:59 cyberswat joined #gluster
