IRC log for #gluster, 2013-08-22

All times shown according to UTC.

Time Nick Message
00:00 JoeJulian paying my hard earned cash on this one... It'll probably cost me a whole dime!
00:01 a2_ :O
00:07 glusterbot joined #gluster
00:09 chirino joined #gluster
00:16 jporterfield joined #gluster
00:17 SteveCooling a2_: so, no need for me to bring my i386 vm online then?
00:17 SteveCooling pretty much finished the scheduled job here
00:18 JoeJulian that would be good.
00:19 JoeJulian oh geez... that was dumb... I should just go home...
00:21 JoeJulian I have no idea what I was thinking... I can't create a 32bit vm at rackspace...
00:21 chirino joined #gluster
00:21 a2_ SteveCooling, do you have an i386 vm then?
00:21 SteveCooling hehe
00:21 SteveCooling yes, somewhere :)
00:21 JoeJulian It would take some serious trickery to reinstall 32bit over the 64bit image...
00:22 SteveCooling hehe.. give me a few... i'll boot it.
00:31 SteveCooling alright
00:31 SteveCooling who is logging in?
00:31 a2_ me
00:32 SteveCooling check privmsg
00:33 a2_ got it
00:33 chirino joined #gluster
00:35 itisravi joined #gluster
00:46 chirino joined #gluster
00:55 jporterfield joined #gluster
00:57 chirino joined #gluster
00:58 JoeJulian yay, and I just finished building a vm to test that patch in.
01:03 a2_ JoeJulian, thanks! please vote +1 verified if it fixes the bug for you
01:04 a2_ wow, there are a *LOT* of warnings building on a 32bit system!
01:05 chirino joined #gluster
01:05 JoeJulian I never looked. I just build the src.rpm and send it off to koji.
01:07 a2_ the sheer number of warnings tempts me to say we don't support 32bit :(
01:07 JoeJulian aack!
01:07 JoeJulian This company can't afford to replace all the hardware at once...
01:07 glusterbot New news from resolvedglusterbugs: [Bug 999356] client crash in el5 i386 <http://goo.gl/jsVaEo>
01:07 glusterbot New news from newglusterbugs: [Bug 997902] GlusterFS mount of x86_64 served volume from i386 breaks <http://goo.gl/RTWzcG>
01:07 mmalesa joined #gluster
01:09 a2_ we really need 32-bit CI.. it is ignored way too much
01:09 a2_ our jenkins is 64bit
01:10 a2_ wondering if gcc -m32 is sufficient to at least get the warnings
01:13 SteveCooling a2_: i opened the patch at review.gluster... what do i click to confirm it works?
01:13 a2_ hmm you need 32bit runtime and dev glibc for it
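
A minimal sketch of the -m32 idea above, assuming the 32-bit glibc and dependency headers are installed; the package name below is an assumption and varies by distribution:

    yum install glibc-devel.i686            # plus 32-bit -devel packages for the other dependencies
    ./configure CFLAGS=-m32 LDFLAGS=-m32
    make 2>&1 | grep -ci warning            # rough count of compiler warnings

Linking will still fail without every 32-bit dependency installed, but the compile stage alone is usually enough to surface the warnings.
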
01:13 a2_ review button
01:13 a2_ and set 1 for 'verified'
01:13 SteveCooling nice
01:14 SteveCooling done.
01:14 a2_ thanks!
01:15 SteveCooling np
01:15 a2_ alright folks, time to head home, thanks for killing one more bug
01:16 JoeJulian Thank you Avati.
01:16 SteveCooling time to go home here too. time is 03:16
01:16 SteveCooling :P
01:16 a2_ woops!
01:17 JoeJulian That bug kept me up 'till 4:30am last night...
01:18 SteveCooling ouch. well, glad i could help
01:19 SteveCooling good night
01:28 chirino joined #gluster
01:30 kevein joined #gluster
01:34 andrewklau joined #gluster
01:35 andrewklau Has anyone had any good experience with doing dual nic bonding for glusterfs storage networks?
01:37 chirino joined #gluster
01:44 purpleidea andrewklau: i have. it worked, need more info?
01:45 andrewklau purpleidea, I was just wondering how the performance improvement was. Few people I've spoken to said they haven't had any benefits running dual nics
01:45 purpleidea andrewklau: it's a fair question. it obviously depends on your workloads of course, replication level, etc etc... i can explain why if that's not obvious...
01:46 purpleidea size of your cluster, speed of disk io, etc...
01:46 andrewklau purpleidea, we just have a few nodes in a dual replication with sata disks.
01:47 purpleidea andrewklau: do you have the extra nic's on hand, or are you considering the expense ?
01:47 andrewklau considering the expense
01:49 purpleidea andrewklau: so quite honestly, don't bother, see if the cluster as is gives appropriate performance for what you need, and check to see if you're nics and getting saturated first. also, keep in mind that for certain types of nic bonding, you need support for your switch. a list of modes is here: https://www.kernel.org/doc/Documentation/networking/bonding.txt
01:49 glusterbot <http://goo.gl/DC0bT> (at www.kernel.org)
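
As a rough illustration of the switch-support point, a sketch of a bond interface on a RHEL/CentOS-style host; the device name, address and mode are assumptions, and mode=802.3ad (LACP) needs matching switch configuration while mode=balance-alb does not:

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=10.0.0.11
    NETMASK=255.255.255.0
    BONDING_OPTS="mode=802.3ad miimon=100"
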
01:49 purpleidea your*
01:49 harish joined #gluster
01:49 purpleidea are*
01:49 purpleidea from*
01:49 purpleidea geez typo city
01:50 chirino joined #gluster
01:51 andrewklau purpleidea, haha, thanks that's what I've been reading. We haven't been able to put the cluster on a full workload yet, but in our testing the single gigabit nic was working fine.
01:51 purpleidea well happy hacking then! for repeatedly testing and redeploying gluster, check out my puppet module :P
01:52 andrewklau Will do, thanks again.
01:52 purpleidea andrewklau: np
01:57 dmojoryder I believe v3.4 resolved the ext4 issue. Given that, is there a preferred filesystem to use with gluster anymore? I know the recommendation had been xfs but it was unclear if that was due to the ext4 issue or not
02:13 jporterfield joined #gluster
02:23 jporterfield joined #gluster
02:27 awheeler joined #gluster
02:28 lalatenduM joined #gluster
02:31 itisravi joined #gluster
02:47 bala joined #gluster
02:51 vshankar joined #gluster
02:52 gluslog joined #gluster
02:54 beejeebus joined #gluster
02:56 shubhendu joined #gluster
02:58 beejeebus hi, i'm trying to debug a volume that is in different states on different nodes
02:59 beejeebus https://gist.github.com/beejeebus/6302702
02:59 glusterbot Title: gist:6302702 (at gist.github.com)
02:59 beejeebus different brick counts, different status (started on one node, stopped on the other)
03:08 rastar joined #gluster
03:09 bharata_rao joined #gluster
03:15 jporterfield joined #gluster
03:23 jporterfield joined #gluster
03:38 chirino joined #gluster
03:43 bulde joined #gluster
03:48 ndarshan joined #gluster
03:53 shapemaker joined #gluster
03:57 chirino joined #gluster
03:58 ppai joined #gluster
03:58 RameshN joined #gluster
03:59 shubhendu joined #gluster
04:08 chirino joined #gluster
04:09 bulde joined #gluster
04:13 raghu joined #gluster
04:15 sgowda joined #gluster
04:16 chirino_m joined #gluster
04:24 mattf joined #gluster
04:25 pono joined #gluster
04:26 rastar_ joined #gluster
04:28 psharma joined #gluster
04:29 rjoseph joined #gluster
04:32 itisravi joined #gluster
04:36 satheesh1 joined #gluster
04:40 kanagaraj joined #gluster
04:42 CheRi joined #gluster
04:48 mohankumar joined #gluster
04:55 ndarshan joined #gluster
05:08 hagarth joined #gluster
05:15 ndarshan joined #gluster
05:22 shylesh joined #gluster
05:31 beejeebus any pointers for debugging/fixing volumes that report differently on different nodes?
05:31 beejeebus i have two nodes, which report each other as peers
05:31 beejeebus one reports a volume as started, one as stopped
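
A hedged checklist for that kind of mismatch; the volume name is a placeholder, and `gluster volume sync` overwrites the local copy of the volume definition with the peer's, so it should only be run on the node whose view is known to be stale:

    gluster peer status                      # both nodes should show each other as connected
    gluster volume info myvol                # compare brick list and status on each node
    gluster volume sync <good-node> myvol    # run on the stale node only
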
05:44 itisravi joined #gluster
05:54 asias joined #gluster
05:55 rjoseph joined #gluster
06:01 lalatenduM joined #gluster
06:05 lalatenduM joined #gluster
06:07 harish joined #gluster
06:09 zombiejebus joined #gluster
06:11 mooperd joined #gluster
06:12 ndarshan joined #gluster
06:13 bulde joined #gluster
06:14 jtux joined #gluster
06:18 guigui1 joined #gluster
06:21 nshaikh joined #gluster
06:26 rgustafs joined #gluster
06:27 rgustafs joined #gluster
06:29 vimal joined #gluster
06:33 jporterfield joined #gluster
06:35 beejeebus1 joined #gluster
06:42 manik2 joined #gluster
06:43 anands joined #gluster
06:48 kanagaraj joined #gluster
06:49 manik2 joined #gluster
06:49 vpshastry joined #gluster
06:56 ajha joined #gluster
06:56 andreask joined #gluster
06:57 eseyman joined #gluster
07:02 ndarshan joined #gluster
07:02 hybrid5121 joined #gluster
07:13 sahina joined #gluster
07:15 dusmant joined #gluster
07:15 aravindavk joined #gluster
07:16 ngoswami joined #gluster
07:23 |miska| joined #gluster
07:26 satheesh1 joined #gluster
07:28 |miska| Hi, I have troubles mounting my gluster on client machine
07:28 spandit joined #gluster
07:28 |miska| I have a server with one volume and I can mount it localy on server, remote doesn't work
07:29 |miska| Says failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running
07:29 |miska| Tried that, brick is running and I'm accessing gluster on the server
07:30 |miska| Any typical rookie mistake?
07:30 andreask firewall?
07:30 |miska| I'm giving VPN IP and firewall is disabled on VPN
07:36 |miska| Log also shows given volfile which looks kinda correct (shows correct path to the brick on the server)
07:36 ricky-ticky joined #gluster
07:38 tjikkun_work joined #gluster
07:41 |miska| hmmm
07:41 andrewklau joined #gluster
07:44 |miska| Are there any global permission that might prevent me from accessing the server?
07:44 |miska| I set auth on volume
07:50 |miska| hmmm, nfs mount works, gluster one doesn't :-/
07:56 |miska| nfs mounting gluster volume
07:58 andreask can you pastebin  "gluster volume" info and "gluster volume status" output?
08:01 |miska| http://susepaste.org/70072818 volume info
08:01 glusterbot Title: SUSE Paste (at susepaste.org)
08:03 |miska| http://susepaste.org/33644890
08:03 glusterbot Title: SUSE Paste (at susepaste.org)
08:03 |miska| hmmm, this is weird
08:03 |miska| Before restarting gluster there was a port for the brick
08:03 |miska| Ok, now the error message makes sense
08:05 ujjain joined #gluster
08:06 X3NQ joined #gluster
08:07 andreask so its really not online
08:08 |miska| Yep
08:08 |miska| Used to be
08:08 |miska| Now I need to figure out why
08:08 JonnyNomad joined #gluster
08:08 |miska| But got stuff working temporarily over NFS and now there is a clear error on server
08:08 |miska| So it should be much simpler :-)
08:18 jporterfield joined #gluster
08:29 |miska| Ok, so I managed to get to the state where my volume doesn't start anymore :-D
08:31 king_slayer joined #gluster
08:31 king_slayer Hello
08:31 glusterbot king_slayer: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:33 king_slayer So, we have 4 web pages, with 2 web servers running each one and using pound to distribute the http requests, each server has a copy of the virtualhost content... I'm looking for a central solution, just found glusterfs, so my idea is to implement glusterfs + heartbeat to put all the virtualhost folders on one server and make the web servers import that folder with nfs
08:34 king_slayer seems like a good idea, I'll remove the duplication of data and will have 2 nodes in the glusterfs with heartbeat, providing automatic failover... looking for someone with some experience doing the same I found this http://www.bauer-power.net/2012/03/glusterfs-is-not-ready-for-san-storage.html#.UhXMe-YW3Sw
08:34 glusterbot <http://goo.gl/ZakJJk> (at www.bauer-power.net)
08:37 king_slayer I was wondering about using glusterfs in a production environment... I've been reading posts on google about it, but I was wondering if someone here has an opinion about it
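
For what it's worth, a minimal sketch of the layout being described; the hostnames, brick paths and volume name are placeholders, and whether NFS via a heartbeat-managed VIP or the native client suits better depends on the workload:

    gluster volume create www replica 2 web1:/export/www web2:/export/www
    gluster volume start www
    # native client, which fails over between replicas on its own:
    mount -t glusterfs -o backupvolfile-server=web2 web1:/www /var/www
    # or NFS against the floating heartbeat address:
    mount -t nfs -o vers=3 vip.example.com:/www /var/www
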
08:38 |miska| Don't know what I did, but my problem is fixed now :-D
08:38 |miska| Thanks
08:46 ngoswami joined #gluster
08:54 mohankumar joined #gluster
08:55 tjikkun_work joined #gluster
08:57 itisravi_ joined #gluster
08:58 satheesh joined #gluster
09:03 andreask1 joined #gluster
09:04 sgowda joined #gluster
09:06 mmalesa joined #gluster
09:14 jporterfield joined #gluster
09:18 spider_fingers joined #gluster
09:20 sgowda joined #gluster
09:26 andrewklau left #gluster
09:26 duerF joined #gluster
09:41 harish_ joined #gluster
09:44 sgowda joined #gluster
09:49 sahina joined #gluster
09:49 shubhendu joined #gluster
09:52 nshaikh left #gluster
09:52 dusmant joined #gluster
10:01 shruti joined #gluster
10:02 bala1 joined #gluster
10:17 aravindavk joined #gluster
10:22 raghu joined #gluster
10:23 satheesh1 joined #gluster
10:30 jporterfield joined #gluster
10:31 kkeithley1 joined #gluster
10:33 mmalesa joined #gluster
10:33 sahina joined #gluster
10:37 dusmant joined #gluster
10:41 shubhendu joined #gluster
10:46 meghanam joined #gluster
10:46 meghanam_ joined #gluster
10:48 msciciel_ joined #gluster
10:58 lpabon joined #gluster
11:09 social Hi, is it normal for a distributed-replica setup to constantly heal with traffic?
11:18 satheesh1 joined #gluster
11:20 ppai joined #gluster
11:22 yinyin joined #gluster
11:23 mmalesa_ joined #gluster
11:25 satheesh joined #gluster
11:35 vpshastry joined #gluster
11:38 andreask social: depends on your usage, doing a stat call on a file triggers a self-heal
11:41 rastar joined #gluster
11:42 ujjain joined #gluster
11:49 Guest53741 joined #gluster
11:52 dusmant joined #gluster
11:58 rgustafs joined #gluster
12:03 B21956 joined #gluster
12:05 DV joined #gluster
12:09 jtux joined #gluster
12:13 andreask joined #gluster
12:15 rgustafs joined #gluster
12:19 chirino joined #gluster
12:23 plarsen joined #gluster
12:24 failshell joined #gluster
12:30 chirino joined #gluster
12:32 tjikkun_work joined #gluster
12:35 yinyin joined #gluster
12:37 ricky-ticky joined #gluster
12:41 tjikkun_work joined #gluster
12:43 chirino joined #gluster
12:51 chirino joined #gluster
12:59 JoeJulian andreask, social: correction - doing a lookup() triggers a self-heal *check*, contacting each replica and ensuring that the metadata for each doesn't include pending updates for the other to ensure you don't retrieve a stale copy. stat() just triggers this lookup() with the least amount of additional overhead.
13:01 andreask JoeJulian: thanks for clarification ;-)
13:02 Norky joined #gluster
13:03 chirino joined #gluster
13:07 social JoeJulian: so in case of heavy traffic where tons of mkdir/stat/ls/.. etc are being done it's normal to have a lot of files in gluster volume heal <volname> info as it gets triggered continually
13:08 robo joined #gluster
13:11 chirino joined #gluster
13:11 glusterbot New news from newglusterbugs: [Bug 990089] do not unlink the gfid handle upon last unlink without checking for open fds <http://goo.gl/V4YGIB>
13:11 deepakcs joined #gluster
13:13 tobias- I'm trying to find out which ports glusterfs uses when mounting a client; since I need to do this through a firewall, I need to know which ports are needed. I can't find the documentation for this. I believe it exists, I'm just having a hard time googling it =/
13:15 kkeithley_ @ports
13:15 glusterbot kkeithley_: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
13:16 tobias- kkeithley_: thanks!
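
A hedged iptables sketch matching glusterbot's list for a 3.4 setup; the brick range is an assumption (one port per brick counting up from 49152), so check `gluster volume status` for the ports actually in use:

    iptables -A INPUT -p tcp -m multiport --dports 111,24007,24008 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT             # rpcbind, only if NFS mounts are used
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT     # gluster NFS + NLM
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT     # 3.4 bricks; widen as bricks are added
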
13:19 JoeJulian social: I think they're added to that list if they're in a transition state, ie. pending attributes are set
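
The heal views social is working from, for reference; `myvol` is a placeholder:

    gluster volume heal myvol info                # entries currently flagged with pending changes
    gluster volume heal myvol info split-brain    # entries needing manual resolution
    gluster volume heal myvol info heal-failed    # entries the self-heal daemon could not fix
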
13:20 ngoswami joined #gluster
13:22 hagarth joined #gluster
13:24 jtux joined #gluster
13:24 bennyturns joined #gluster
13:27 chirino joined #gluster
13:28 awheeler joined #gluster
13:36 rwheeler joined #gluster
13:38 chirino joined #gluster
13:43 bugs_ joined #gluster
13:44 mooperd joined #gluster
13:46 chirino joined #gluster
13:46 mibby- joined #gluster
13:48 Guest53741 joined #gluster
13:53 jebba joined #gluster
13:57 spider_fingers left #gluster
13:57 aliguori joined #gluster
13:57 chirino joined #gluster
14:11 bala1 joined #gluster
14:12 glusterbot New news from newglusterbugs: [Bug 1000019] Bogus dates in RPM changelog <http://goo.gl/ewX0kX>
14:12 chirino joined #gluster
14:23 chirino joined #gluster
14:31 chirino joined #gluster
14:39 robo joined #gluster
14:39 jskinner_ joined #gluster
14:39 chirino joined #gluster
14:43 jclift_ joined #gluster
14:44 vpshastry joined #gluster
14:45 failshell joined #gluster
14:48 chirino joined #gluster
14:51 hagarth joined #gluster
14:51 guigui joined #gluster
14:59 chirino joined #gluster
14:59 vimal joined #gluster
15:00 zerick joined #gluster
15:04 gkleiman joined #gluster
15:08 sac joined #gluster
15:08 chirino joined #gluster
15:08 sac`away joined #gluster
15:09 lalatenduM joined #gluster
15:11 daMaestro joined #gluster
15:13 jclift_ joined #gluster
15:17 jclift___ joined #gluster
15:18 chirino joined #gluster
15:19 JoeJulian ndevos: re the bogus dates... I guess that's what happens when you work on that stuff in the middle of the night... It was still my Monday... ;)
15:19 Norky joined #gluster
15:19 rwheeler joined #gluster
15:19 ndevos JoeJulian: yeah, I understand, that happened to me several times too already
15:23 LoudNoises joined #gluster
15:23 lpabon joined #gluster
15:24 kaptk2 joined #gluster
15:25 TuxedoMan joined #gluster
15:25 TuxedoMan should my quorum be on a separate server?
15:25 lpabon joined #gluster
15:26 JoeJulian @dictionary quorum
15:27 JoeJulian @dict quorum
15:27 glusterbot JoeJulian: wn, bouvier, devil, gcide, and moby-thes responded: wn: quorum n 1: a gathering of the minimal number of members of an organization to conduct business; devil: QUORUM, n. A sufficient number of members of a deliberative body to have their own way and their own way of having it. In the United States Senate a quorum consists of the chairman of the Committee on Finance and
15:27 glusterbot JoeJulian: a messenger from the White House; in the House of Representatives, of the Speaker and the devil. ; moby-thes: 48 Moby Thesaurus words for "quorum": assemblee, assembly, assignation, at home, ball, brawl, caucus, colloquium, commission, committee, conclave, concourse, congregation, congress, conventicle, convention, convocation, council, dance, date, diet, eisteddfod, festivity, (6 more messages)
15:28 JoeJulian good enough...
15:29 JoeJulian So, with that understanding, you don't set quorum "on a separate server" you set quorum to require a minimum number of members.
15:30 JoeJulian With 3.4 a non-participating server (a server that doesn't host bricks for that particular volume) may be a quorum member.
15:30 bugs_ joined #gluster
15:31 jbrooks joined #gluster
15:32 robo joined #gluster
15:37 _pol joined #gluster
15:38 TuxedoMan back
15:38 TuxedoMan got pulled away, let me catchup
15:42 TuxedoMan So do you need to probe the non-participating server and set quorum on all servers + non participating?
15:45 JoeJulian You would need to probe it..
15:48 JoeJulian "cluster.server-quorum-type: If set to server, enables the specified volume to participate in quorum." - Huh??? That doesn't seem to make any sense...
15:51 TuxedoMan I'm seeing it's 2 commands? gluster volume set <volname> cluster.server-quorum-type none/server
15:51 TuxedoMan gluster volume set all cluster.server-quorum-ratio <percentage%>
15:51 TuxedoMan My confusion was that I thought I had to somehow establish that "this is the stand-alone quorum server"
15:51 TuxedoMan I guess I just run those commands on each server, including the non-participating and then it just works
15:52 JoeJulian Ah, I see. I hadn't fully read about that new feature
15:53 JoeJulian The "how to test" from http://www.gluster.org/community/documentation/index.php/Features/Server-quorum is the best documentation so far.
15:53 glusterbot <http://goo.gl/vrw2D> (at www.gluster.org)
15:54 JoeJulian The volume quorum is different from the server quorum. The volume quorum goes read-only if the quorum isn't met. The server quorum kills its bricks if it recognizes a loss of quorum from the total peer group.
15:54 TuxedoMan Yea
15:54 TuxedoMan I'm going for quorum server
15:55 JoeJulian I haven't really given any thought, yet, as to why one would be better than another though.
15:55 JoeJulian And I need espresso...
15:55 TuxedoMan In my mind -- I saw it going as... Hey node1&node2 -- this is your quorum server. Once one of you drops off, the quorum server does its thing, calculates if the quorum% is met, and when it's not IT shuts down the bricks
15:55 TuxedoMan err
15:55 TuxedoMan server1&server2 ;)
15:57 JoeJulian No it's hey, I'm part of a quorum of 5 servers. I can only see 1 other peer right now. Oh, crap,  I don't have a quorum, kill my bricks.
16:00 semiosis JoeJulian: server quorum prevents even reading when there's no quorum
16:00 semiosis so you are always guaranteed to see consistent data
16:02 jclift_ joined #gluster
16:03 TuxedoMan JoeJulian: that makes sense then
16:04 TuxedoMan So on a trusted pool of 3, I would set quorum on all 3 servers, they would talk to each other, and set a percentage so that if 1 server drops out of quorum, bricks get killed.
16:05 TuxedoMan It's not necessarily a dedicated/separate server overlooking your active pool saying "everything is OK" or "I detected a server dropped, we're not in quorum anymore, KILL BRICKS!!"
16:05 JoeJulian The default is 50%+1 so in a pool of three, if two servers are available, stay up. If I'm all alone, kill my bricks.
16:06 TuxedoMan makes sense
16:06 TuxedoMan thanks for clarifying
16:06 JoeJulian Right. The other way wouldn't work. If you can't communicate with a server, you can't tell it to kill its bricks. :D
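
Pulling the thread together, a sketch of the server-quorum setup being discussed; `arbiter1` and `myvol` are placeholders and 51% is just an example ratio:

    gluster peer probe arbiter1                                 # optional non-participating third peer
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51%
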
16:10 chirino joined #gluster
16:11 shylesh joined #gluster
16:14 sprachgenerator joined #gluster
16:16 Guest53741 joined #gluster
16:20 chirino joined #gluster
16:25 jclift joined #gluster
16:25 pono joined #gluster
16:25 neofob how do i configure glusterfs source to have "Block Device backend"?
16:29 Mo___ joined #gluster
16:29 neofob nm, i got it ./configure --enable-bd-xlator
16:29 neofob with lib lvm2-dev and all
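
neofob's build sequence, roughly; the package name is an assumption (lvm2-devel on Fedora/EL, liblvm2-dev on Debian/Ubuntu):

    yum install lvm2-devel             # LVM2 development headers
    ./autogen.sh
    ./configure --enable-bd-xlator
    make && make install
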
16:32 jporterfield joined #gluster
16:32 chirino joined #gluster
16:38 mattf joined #gluster
16:44 mohankumar neofob: ping me if you are stuck using bd xlator
16:45 neofob mohankumar: thanks, i'm setting up a test env to run openstack to play with
16:45 johnmark neofob: excellent. let us know how it goes
16:45 mohankumar neofob: i developed bd_map xlator, but now pushing next version of BD xlator with more features
16:45 mohankumar fyi ^^
16:45 neofob where can i find more documentation on this than the output from `gluster help`?
16:46 social hmm gluster just hit oomkiller randomly :/
16:47 mohankumar neofob: if you are looking for BD related help, it should be in docs/features/bd.txt
16:48 neofob mohankumar: thanks
16:48 neofob btw, on centos 6.4, i can mount the gluster volume on the server itself but not from another machine
16:49 mohankumar neofob: is it specific to BD or generic?
16:49 neofob oh, this is generic
16:50 chirino joined #gluster
17:00 chirino joined #gluster
17:01 lalatenduM joined #gluster
17:04 zombiejebus joined #gluster
17:12 chirino joined #gluster
17:13 glusterbot New news from newglusterbugs: [Bug 998967] gluster 3.4.0 ACL returning different results with entity-timeout=0 and without <http://goo.gl/B2gFno>
17:14 Technicool joined #gluster
17:21 chirino joined #gluster
17:32 chirino joined #gluster
17:39 awheeler joined #gluster
17:40 chirino joined #gluster
17:44 SteveWatt joined #gluster
17:44 jclift joined #gluster
17:51 chirino joined #gluster
17:51 neilvana joined #gluster
17:55 neilvana We have been evaluating using gluster to replace a stand-alone RAID NAS.  Our initial impressions have been very positive.  However, I'm trying to overcome an issue when integrating gluster with samba and active directory.
17:56 neilvana Some of our users have a very large number of groups >100 assigned in active directory.
17:56 neilvana This makes using the gluster mount fail with a "cannot open directory .: Transport endpoint is not connected" error.
17:57 neilvana I fired up gdb and did some poking around.
17:58 neilvana Apparently in rpc/rpc-lib/src/rpc-clnt.h it is failing to encode the rpc header.
17:58 robo joined #gluster
17:59 chirino joined #gluster
18:00 neilvana Specifically the call xdr_size = xdr_sizeof in the rpc_client_fill_request function returns 0 if the number of groups is ~100 or greater.
18:00 neilvana I tracked this down to a limitation in glibc.
18:00 neilvana Specifically glibc limits auth data to 400 bytes.
18:00 neilvana I'm not sure if there is a good reason for that, but as a result it fails to encode the header.
18:01 neilvana in glibc you can take a look at sunrpc/rpc/auth.h and see MAX_AUTH_BYTES is set to 400.
18:02 neilvana Really I think it is a bit ridiculous that we have so many dang AD groups in our company but I have little control over that.
18:03 neilvana Does anyone have any thoughts about what to do?  I'm rebuilding glibc with a bigger max right now just to see if it works, but I thought I'd post it here too.
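
A quick way to check whether an account trips the limit neilvana describes; the username is a placeholder and the header path is an assumption about where this distribution installs glibc's sunrpc headers:

    id -G someuser | wc -w                            # number of groups the user belongs to
    grep -n MAX_AUTH_BYTES /usr/include/rpc/auth.h    # glibc's 400-byte AUTH_UNIX ceiling
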
18:07 neilvana FYI I'm using gluster 3.4, the latest build as far as I can tell.
18:11 chirino joined #gluster
18:12 rwheeler joined #gluster
18:30 awheeler joined #gluster
18:31 chirino joined #gluster
18:32 awheeler joined #gluster
18:32 SteveWatt joined #gluster
18:44 chirino joined #gluster
18:58 chirino joined #gluster
19:04 JoeJulian neilvana: Could you use ACLs instead?
19:05 JoeJulian Also, file a bug report. It'll probably end up a wontfix but you never know... these guys are pretty clever.
19:05 glusterbot http://goo.gl/UUuCq
19:05 neilvana I'm not assigning group permissions to files.  The user just belongs to those groups.
19:05 neilvana My ultimate plan will be to use ACLs.
19:06 neilvana I'm just barely setting up a couple nodes to do some testing so right now the volume just contains a couple files.
19:07 neilvana And I started writting up a bug already.
19:13 glusterbot New news from newglusterbugs: [Bug 1000131] Users Belonging To Many Groups Cannot Access Mounted Volume <http://goo.gl/JOatTA>
19:14 chirino joined #gluster
19:14 mooperd joined #gluster
19:16 dewey joined #gluster
19:24 chirino joined #gluster
19:25 Recruiter joined #gluster
19:26 JoeJulian Erm... this looks like an interesting bug... http://ur1.ca/f61cw
19:26 glusterbot Title: #34108 Fedora Project Pastebin (at ur1.ca)
19:30 JoeJulian How can nobody have encountered this yet?
19:31 JoeJulian hmm, maybe it's just me...
19:31 JoeJulian ,,(meh) I'm just reading this wrong.. :(
19:31 glusterbot I'm not happy about it either
19:32 JoeJulian ,,(thanks) for caring, glusterbot.
19:32 glusterbot you're welcome
19:32 chirino joined #gluster
19:35 zombiejebus joined #gluster
19:35 thomasrt hey, it looks like accessing the .landfill directory from a client causes the client to hang
19:36 JoeJulian well that's nifty
19:36 JoeJulian Anything in the client log when that happens?
19:36 thomasrt is this a known issue?  I think .landfill is not exposed to the client, but if they know it's there and stat it ....
19:36 thomasrt nothing
19:36 thomasrt gluster is not functional enough even to log it seems
19:38 thomasrt I'm 3.3.2 BTW
19:38 JoeJulian ah
19:40 thomasrt I'm hanging around to see if there's even a timeout message written to the client log.  So far nothing and it's been 15 minutes
19:40 thomasrt anyone want to DoS your gluster clients?
19:41 JoeJulian I would set the client log level to TRACE and trigger the hang again. Perhaps running it in gdb and breaking during the hang and getting a "thread apply all bt". All that data would be added when you file a bug report.
19:41 glusterbot http://goo.gl/UUuCq
19:41 JoeJulian I'm diagnosing one now that I need to file or I'd see if I could duplicate it.
19:46 thomasrt this one's easy enough to describe, that I hope one of the #gluster lurkers pick up on
19:52 chirino joined #gluster
19:52 JoeJulian thomasrt: btw... the timeout you would be waiting for would be the frame-timeout of 30 minutes
19:54 SteveWatt joined #gluster
19:56 aliguori joined #gluster
20:01 JoeJulian a2_, hagarth, avati: I need a little help on clear-locks. I have a deadlock that I think might be lock related. http://ur1.ca/f61mp frames 9 and 10 for pid 5513. How would I identify a lock and what kind and inode|entry|posix to clear?
20:01 glusterbot Title: #34117 Fedora Project Pastebin (at ur1.ca)
20:02 chirino joined #gluster
20:11 JoeJulian Ooh, neat... I have a zombie glusterd
20:11 chirino joined #gluster
20:11 glusterbot New news from resolvedglusterbugs: [Bug 927146] AFR changelog vs data ordering is not durable <http://goo.gl/jfrtO>
20:12 JoeJulian hmm
20:13 SteveWatt joined #gluster
20:13 plarsen joined #gluster
20:20 chirino joined #gluster
20:38 chirino joined #gluster
20:41 neofob if i have a failed mounting (mount -t glusterfs server:/vol /mnt/mount_point)
20:41 neofob /mnt/mount_point is crapped now, how do i fix it?
20:41 johnmark JoeJulian: heya, can you hook up glusterbot to the gluster blog?
20:41 neofob i kill the mounting process but it doesn't help
20:47 JoeJulian johnmark: I can.
20:47 JoeJulian johnmark: what's in it for me? ;)
20:47 JoeJulian jk
20:47 JoeJulian neofob: you could kill the glusterfs process
20:49 syntheti_ joined #gluster
20:49 syntheti_ joined #gluster
20:56 neofob JoeJulian: that's what i did, i killed the mount.glusterfs
20:57 neofob but i still have d???? when i `ls -l` /mnt
20:57 JoeJulian mount.glusterfs is a bash script. Is the glusterfs process still running?
20:58 JoeJulian If not, you should be able to umount
20:58 neofob no, it's not running for that mount point
20:58 neofob i still have two other gluster volumes mounted
20:59 neofob hah, i can umount the failed mount point
20:59 neofob thanks
20:59 JoeJulian sure, but just umount that one. What you have is fuse is still connected to that directory but the application that fuse is expecting to pass calls to isn't there.
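
The cleanup being described, as a sketch against neofob's mount point:

    ps ax | grep '[g]lusterfs.*mount_point'   # is the fuse client process still alive?
    kill <pid>                                # only if it is hung; <pid> comes from the line above
    umount /mnt/mount_point                   # or umount -l if the kernel still considers it busy
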
21:01 neofob this is centos 6.4 server, i open the port 2400[7-9] on the server (one brick), but the mounting still fails on client (debian wheezy)
21:02 JoeJulian @ports
21:02 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
21:02 JoeJulian gluster volume status should show you which specific port the brick is listening on
21:03 neofob arh, i use tcp and 3.4, that's why, thanks
21:04 neofob btw, what is typical raid controller card people use? i happen to have this legacy lsi megaraid 84016E at work
21:04 neofob it's a nightmare to get it to work with linux
21:05 * JoeJulian uses none.
21:06 awheele__ joined #gluster
21:07 badone joined #gluster
21:08 spligak I see multi-master geo replication listed in the "Planning for Gluster 3.4" doc. Trying to dig up more info - was there any progress made on that for 3.4?
21:11 y4m4 joined #gluster
21:18 JoeJulian http://www.gluster.org/community/documentation/index.php/Features34
21:18 glusterbot <http://goo.gl/4MvOh> (at www.gluster.org)
21:21 JoeJulian Interesting... didn't make the proposal deadline for 3.5 either: http://www.gluster.org/community/documentation/index.php/Planning35
21:21 glusterbot <http://goo.gl/l2gjSh> (at www.gluster.org)
21:21 neofob just added a nicely formatted table in markdown to my note: http://bit.ly/17NXqz0
21:21 glusterbot Title: notes/env/glusterfs.md at master · neofob/notes · GitHub (at bit.ly)
21:22 neofob for ports that gluster[d|fs] uses
21:27 rwheeler joined #gluster
21:30 spligak JoeJulian, that makes me a little sad. were there any notes on that? it didn't have a link to a new feature page if I recall correctly.
21:31 spligak I'm interested to know how folks thought it might be implemented.
21:33 dmueller joined #gluster
21:35 helloadam joined #gluster
21:42 JoeJulian Right, it does not have a link to a feature page, though it lists Csaba Henk, Jeff Darcy and "venky" (not sure who that is).
21:43 JoeJulian I kind-of wonder if that's where Jeff is going with his new style afr...
21:43 JoeJulian http://www.gluster.org/community/documentation/index.php/Features/new-style-replication
21:43 glusterbot <http://goo.gl/6eUNW6> (at www.gluster.org)
21:47 ninkotech__ joined #gluster
21:54 social hmm 3.5 could have memleaks in georeplication fixed >.>
21:56 JoeJulian is there a bug report?
21:59 social JoeJulian: yeah against 3.3 and not sure about the one I'm chasing now. But don't worry there will be one ,)
22:06 SteveWatt joined #gluster
22:14 dmueller is there a page on the gluster.org site that talks about governance?
22:18 zerick joined #gluster
22:23 JoeJulian http://www.gluster.org/community/documentation/index.php/Gluster_Community_Governance_Overview
22:23 glusterbot <http://goo.gl/c2gHFl> (at www.gluster.org)
22:23 johnmark dmueller: that's the one :)
22:24 JoeJulian "Governance rules - we have drafts of documents, but we need to vote and approve them"
22:24 johnmark which will probably change a bit at the next board meeting, when we vote on it
22:24 kkeithley_ @learn governance as http://www.gluster.org/community/documentation/index.php/Gluster_Community_Governance_Overview
22:24 glusterbot kkeithley_: The operation succeeded.
22:24 jebba joined #gluster
22:25 johnmark JoeJulian: precisely :)
22:25 dmueller thanks guys..
22:26 dmueller johnmark: how do you pick who's on your board? is it open voting by community?
22:26 badone joined #gluster
22:26 JoeJulian Ouiji board
22:27 dmueller that's what i thought ;-)
22:27 johnmark lol
22:27 johnmark dmueller: no, it's people who have a vested interest in it
22:27 johnmark the initial board consists of people and orgs that I reached out to
22:27 dmueller johnmark are you around tomorrow am 9:00 am ish to join a community hangout to talk about it? Pacific Time
22:28 johnmark and for the next go round, the board will vote on new members
22:28 dmueller it's still wicked early days for your board, correct?
22:28 johnmark dmueller: not sure. I'm booked tomorrow from 8am - 10am your time, and then I have to go a-traveling
22:28 johnmark dmueller: yes, correct
22:28 johnmark dmueller: but I'm open next week
22:28 johnmark also, are you coming to the community day?
22:29 johnmark because if you or your guys are, we can meetup in SFO
22:29 dmueller which one & where?
22:29 dmueller url?
22:29 johnmark SF - 8/27
22:29 dmueller no
22:29 dmueller no travel in august for me..
22:29 johnmark http://glusterday-sfo.eventbrite.com/
22:29 johnmark gah
22:29 JoeJulian The rule, thus far, is they have to contribute in some way. Be it through hours and hours of BSing on IRC... I mean helping people, committing a project to the forge, hacking glusterfs, or $$$.
22:29 glusterbot Title: Gluster Community Day San Francisco- Eventbrite (at glusterday-sfo.eventbrite.com)
22:29 johnmark ok
22:29 johnmark heh heh :)
22:29 johnmark especially valuable is BS on IRC ;)
22:30 dmueller i like ouiji board approach..
22:30 johnmark and we don't yet allow $$$, although it would  be nice, on account of there not being a "gluster corporation" to take the cash
22:30 johnmark haha... it sounds nice about now!
22:30 johnmark dmueller: but setting up a community day in SEA might be something to consider
22:30 dmueller you saw krishnan s's blog post using you as the poster child for governance..
22:30 johnmark JoeJulian: wouldn't you agree?
22:31 johnmark dmueller: oh, that's why he pinged me
22:31 johnmark no, I didn't see that :)
22:31 johnmark perhaps I should take a look
22:31 dmueller we could do one in seattle together..
22:31 johnmark dmueller: that would be awesome
22:31 dmueller let me grab the url 2 secs
22:31 johnmark and JoeJulian is around the corner, so...
22:31 dmueller or could you come to vancouver..
22:31 JoeJulian Sea would be good. There's a lot of industry in this town.
22:32 johnmark dmueller: forgot, you're in Vancouver :)
22:32 dmueller vancouver is my hometown..
22:32 dmueller ;-)
22:32 johnmark heh
22:32 JoeJulian Facebook, google, ebay... There's a "big data" group that meets up in Bellevue every month...
22:32 johnmark oh well - I'd love either or both
22:32 dmueller k
22:32 johnmark JoeJulian: nice
22:33 dmueller look at lanyrd and see if there's anything we can piggy back off of..another big event cloudy or bigdata
22:33 JoeJulian Just my luck.. the only international gig johnmark would give me is 4 hours away...
22:35 dmueller http://allthingsplatforms.com/opensource/when-open-source-foundation-makes-sense/ is the blog i mentioned
22:35 glusterbot <http://goo.gl/5giqQK> (at allthingsplatforms.com)
22:35 dmueller sorry joe
22:35 JoeJulian I still think he should have sent me to Scotland. :D
22:37 awheeler joined #gluster
22:39 johnmark dmueller: ah, thanks - Ian Skerritt also chimed in. Looks liek I should respond
22:39 johnmark JoeJulian: he hheh ;)
22:39 johnmark you're a funny guy
22:39 johnmark dmueller: I'll take a look at other conferences in the region. Definitely want to piggy-back on other events
22:42 glusterbot New news from resolvedglusterbugs: [Bug 839950] libgfapi: API access to gluster volumes (a.p.k.a libglusterfsclient) <http://goo.gl/YEXSd>
22:45 dmueller yeh..
22:46 dmueller IMHO - what you guys are doing could be a good first step for openshift, but i kinda think it all leads to a foundation eventually
22:46 dmueller but that's just one evangelist's opinion
22:47 fidevo joined #gluster
23:03 rosmo_ joined #gluster
23:03 rosmo_ hi guys
23:03 rosmo_ i'm in a bit of bind, i can't mount my gluster volume presumably because my replicate node is down
23:04 rosmo_ all i get is: [2013-08-22 23:03:02.137877] E [client-handshake.c:1741:client_query_portmap_cbk] 0-virtpool2-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
23:04 rosmo_ [2013-08-22 23:03:02.138175] W [socket.c:514:__socket_rwv] 0-virtpool2-client-1: readv failed (No data available)
23:04 rosmo_ [2013-08-22 23:03:02.138235] I [client.c:2097:client_rpc_notify] 0-virtpool2-client-1: disconnected
23:04 rosmo_ [2013-08-22 23:03:02.138256] E [afr-common.c:3735:afr_notify] 0-virtpool2-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
23:06 JoeJulian Looks like it can't read the volume info from the second server either.
23:06 rosmo_ that's because the second server is down
23:06 rosmo_ and it's not coming up in a few ours
23:06 rosmo_ hours even
23:08 JoeJulian "0-virtpool2-client-0: failed to get the port number" That's the first server and "0-virtpool2-client-1: readv failed (No data available)" is the second.
23:08 JoeJulian So does gluster volume status give a port number?
23:08 rosmo_ Brick virt2.ovirtsan:/bricks/virtpool2    N/A    N    4966
23:08 rosmo_ NFS Server on localhost    N/A    N    N/A
23:08 rosmo_ Self-heal Daemon on localhost    N/A    N    N/A
23:09 rosmo_ it says Online N.. i just upgraded from 3.4.0 alpha to 3.4.0..
23:09 edong23 pastie dude
23:09 edong23 pastie
23:09 rosmo_ edong23: sorry :/
23:09 edong23 www.pastie.org
23:09 edong23 its easier for everyone
23:09 JoeJulian Thanks, edong23, but <=3 lines we don't usually get uptight about.
23:09 rosmo_ http://pastie.org/8261104
23:09 glusterbot Title: #8261104 - Pastie (at pastie.org)
23:10 JoeJulian ... and I do know how to get uptight... ;)
23:10 edong23 i dont get uptight about anything, but generally it is easier to read even a few lines in a pastie
23:10 JoeJulian So anyway....
23:10 edong23 look how nice that is
23:10 rosmo_ i just don't get it.. if i have a replicate volume, how come i can't mount it with one of the bricks missing?
23:10 JoeJulian I'm guessing when you upgraded glusterfsd didn't get killed. killall glusterfsd and restart glusterd
23:11 rosmo_ did that already
23:11 rosmo_ actually i rebooted the whole box after upgrading
23:11 JoeJulian hmm
23:11 chjohnst_work joined #gluster
23:11 JoeJulian Well, is glusterfsd running for bricks/virtpool2 (check with ps)
23:12 JoeJulian Looks like it is on pid 4966...
23:13 rosmo_ seems like there's no glusterfsd anymore
23:13 JoeJulian Ah, check the brick log then.
23:13 JoeJulian Perhaps the brick isn't mounted?
23:13 rosmo_ it is
23:15 DV joined #gluster
23:15 rosmo_ seems like because it can't connect to other server for replicate, it just fails
23:17 JoeJulian The brick doesn't connect to the other server.
23:17 JoeJulian Did you check the brick log?
23:17 rosmo_ agch
23:18 rosmo_ i've used too many hosts-aliases and it was pointing to itself
23:18 JoeJulian At least that's easy to fix. :D
23:19 rosmo_ basically i had two bricks on two servers
23:19 rosmo_ two different hostnames but both pointing to the same ip address
23:20 rosmo_ also a protip: 200 gigs of snapshots make for a very long boot
23:25 JoeJulian Not on xfs
23:25 rosmo_ lvm snapshot
23:25 rosmo_ of xfs though
23:26 a2_ lvm dm-thin snapshots?
23:28 JoeJulian lvm snapshots have left me very disappointed.
23:28 awheeler joined #gluster
23:28 JoeJulian For my snapshot needs, now, I'm using btrfs (not under gluster or for anything critical).
23:29 a2_ JoeJulian, have you tried dm-thin snapshots?
23:29 a2_ it will be cool to integrate gluster with btrfs snapshots
23:29 JoeJulian No. That wasn't available when I needed a solution.
23:29 JoeJulian it would
23:30 JoeJulian I'm hoping that the snapshot mechanism will be pluggable.
23:30 JoeJulian Doesn't zfs have a snapshot method also?
23:31 a2_ does i guess
23:33 robo joined #gluster
23:42 MugginsM joined #gluster
23:43 MugginsM hi, I'm having trouble with an upgrade to 3.4. Clients seem to be trying to connect to port 49153, which I don't see listed in the docs anywhere
23:43 MugginsM 3.3 didn't do that I don't think
23:44 JoeJulian @3.4 release notes
23:44 glusterbot JoeJulian: http://goo.gl/AqqsC
23:44 JoeJulian https://github.com/gluster/glusterfs/blob/release-3.4/doc/release-notes/3.4.0.md#brick-port-changes
23:44 glusterbot <http://goo.gl/ufWxTq> (at github.com)
23:45 MugginsM from what I can tell, our network doesn't let through anything above 32768
23:45 * MugginsM sighs
23:46 MugginsM is there some way to move it back to 24007+ ?
23:46 JoeJulian The port change had something to do with adhering to standards.
23:46 JoeJulian recompile
23:47 JoeJulian RFC 6335
23:47 JoeJulian xlators/mgmt/glusterd/src/glusterd-pmap.h:#define GF_IANA_PRIV_PORTS_START 49152 /* RFC 6335 */
23:48 JoeJulian That's the line you'd have to change.
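
If the firewall genuinely cannot be changed, the rebuild JoeJulian mentions would look roughly like this; the replacement value is an assumption (pick something the firewall allows and nothing else listens on), and a source patch like this has to be reapplied on every upgrade:

    sed -i 's/GF_IANA_PRIV_PORTS_START 49152/GF_IANA_PRIV_PORTS_START 24009/' \
        xlators/mgmt/glusterd/src/glusterd-pmap.h
    ./autogen.sh && ./configure && make
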
23:48 MugginsM 'k
23:48 MugginsM looking to see if I can fix the network first (although it's not ours :-/ )
23:51 zombiejebus joined #gluster
23:56 MugginsM ok, can't fix the network easily :(
23:56 MugginsM change control on firewall is .... slow
23:58 MugginsM presumably if a 3.3 client is able to talk to a 3.4 server, the 3.4 server must be able to use 24007?
23:58 MugginsM aha, I see it listens on it
23:58 JoeJulian @ports
23:58 glusterbot JoeJulian: glusterd's management port is 24007/tcp and 24008/tcp if you use rdma. Bricks (glusterfsd) use 24009 & up for <3.4 and 49152 & up for 3.4. (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111.
23:59 MugginsM ah
