
IRC log for #gluster, 2012-11-10


All times shown according to UTC.

Time Nick Message
00:05 eurower Hi. If I run rpm/yum on a standard filesystem it's ok, but on a glusterfs it fails (100% cpu and never ending). Problem with locking or something else?
00:07 JoeJulian eurower: Are your bricks ,,(ext4)?
00:07 glusterbot eurower: Read about the ext4 problem at http://goo.gl/PEBQU
00:09 eurower yes, the bricks are ext4. I read that ...
00:11 seanh-ansca joined #gluster
00:14 eurower Meanwhile I'm on Fedora 15 / Kernel 2.6.32
00:35 blendedbychris I'm replacing a brick in a replica (2 bricks). right now it's limping along with 1 brick, how do i readd an empty brick and remove the old?
00:38 jbrooks joined #gluster
00:47 JoeJulian eurower: Fedora 15 started on 2.6.38.6-26 and is up to 2.6.43-8 so your kernel may have that bug/enhancement.
00:47 TSM2 joined #gluster
00:48 JoeJulian blendedbychris: peer probe if it's a new server with the new brick, then gluster volume replace-brick $vol $deadbrick $newbrick commit force ; gluster volume heal $vol full
00:48 blendedbychris JoeJulian: i added the new node so far and now i have two nodes, one with the old ip and one with a hostname
00:49 blendedbychris probed i should say
00:49 cyberbootje joined #gluster
00:49 blendedbychris it'd be replace-brick with itself?
00:49 blendedbychris is that right?
00:50 JoeJulian paste your volume info for me...
00:50 stefanha joined #gluster
00:50 blendedbychris http://pastie.textmate.org/private/6vtscozuxahfeuuz2y4mg
00:50 glusterbot <http://goo.gl/MTCRN> (at pastie.textmate.org)
00:51 JoeJulian Ok, so -2 is the dead brick. Where's the new brick?
00:52 JoeJulian Also, this peering needs the hostname assigned. probe -1 from -2 to assign the name.
00:52 blendedbychris sld-wowza-2
00:52 blendedbychris 10.16.26.139 is the old brick
00:52 JoeJulian Oh! ok, I see.
00:53 blendedbychris so far i have probed
00:53 penglish JoeJulian: I forgot to mention - if you don't mind posting your talk slides somewhere, I'm sure the SASAG folks would appreciate it
00:53 JoeJulian penglish: I was planning on it. Today's kind-of been crappy so far... Damned windows machine.
00:54 penglish Joanna said she was having a bit of trouble focusing because she was so tired and wanted me to explain things which you did during your live demo. Unfortunately I was out of the room at the time!
00:54 blendedbychris why does wowza-2 return only ips and wowza-1 returns hostname (and dead ip address) :\
00:54 blendedbychris weird
00:54 blendedbychris w/e
00:55 JoeJulian Hmm, interesting... so peer status on -2 shows an ip for -1?
00:55 blendedbychris JoeJulian: ya
00:56 JoeJulian penglish: I also plan on doing a video of that demo.
00:56 JoeJulian blendedbychris: you can fix that by probing -1 by name from -2.
00:57 blendedbychris ah indeed
00:57 blendedbychris okay what's next though in getting the data on this new -2
00:57 JoeJulian blendedbychris: Ok, that done and assuming you have the filesystem desired on sld-wowza-2:/srv/glusterd/bricks/b0, then on -2 do gluster volume start wowza force
00:58 penglish JoeJulian: +1 :-)
00:58 blendedbychris JoeJulian: by file system desired, do i need to rsync first?
00:58 JoeJulian no
00:59 JoeJulian Once it's running just do a "gluster volume heal wowza full" and it should start replicating.
00:59 blendedbychris cool
01:00 blendedbychris is this kind of stuff documented?
01:01 semiosis doc-u-mented?
01:01 JoeJulian yep, it's in the ,,(rtfm) - though that peer probe anomaly isn't very noticeable.
01:01 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
01:02 blendedbychris JoeJulian: the hostname issue or the dead ip vs. live hostname
01:04 JoeJulian hostname
01:04 JoeJulian For the dead ip, you should be able to peer detach it.
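A minimal sketch of the sequence described above, assuming the volume is named wowza, the replacement brick is sld-wowza-2:/srv/glusterd/bricks/b0 (from the paste), the surviving peer's hostname is sld-wowza-1 (hypothetical; only -2's name appears in the log), and 10.16.26.139 is the dead peer's address:

    # on sld-wowza-2: probe the other peer by name so hostnames replace bare IPs
    gluster peer probe sld-wowza-1
    # bring up the brick process on the new, empty brick
    gluster volume start wowza force
    # trigger a full self-heal so data replicates onto the empty brick
    gluster volume heal wowza full
    # once the dead address no longer backs any brick, drop it from the pool
    gluster peer detach 10.16.26.139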
01:05 * JoeJulian is harassing blendedbychris in two channels now. :D
01:05 JoeJulian Or was before he disappeared.
01:08 semiosis you're taking over freenode
01:08 JoeJulian All your channel are belong to ME!
01:15 Triade1 joined #gluster
01:23 tc00per left #gluster
02:24 bala joined #gluster
02:59 y4m4 joined #gluster
03:00 blendedbychris joined #gluster
03:00 blendedbychris joined #gluster
03:00 y4m4 joined #gluster
03:01 y4m4 joined #gluster
03:03 ika2810 joined #gluster
03:10 stopbit joined #gluster
03:51 bala joined #gluster
04:09 Triade joined #gluster
04:12 Triade1 joined #gluster
04:24 Triade joined #gluster
04:52 theron joined #gluster
05:13 chacken joined #gluster
05:49 dsj joined #gluster
05:51 dsj Hi.  I have a 6x3 distribute-replicate volume.  One of the nodes has a simulated failure and I'm testing moving its bricks to another machine.  replace-brick doesn't work because it requires connectivity to the dead peer.  remove-brick and then add-brick of just one of the dead bricks at a time doesn't work either because it reduces the replica count for that subvolume to 2, which is inconsistent.
05:51 dsj Do I have to remove the third replica of all distribute subvolumes in order to make this work?
06:18 Humble joined #gluster
06:33 dsj And BTW, if I do "commit force" rather than "start", I get "brick: <B> does not exist in volume: <V>"
06:33 dsj "gluster volume info" does show it, but not "gluster volume status"
06:44 xinkeT joined #gluster
06:57 ika2810 left #gluster
07:34 ramkrsna joined #gluster
07:34 ramkrsna joined #gluster
07:56 theron_ joined #gluster
08:12 theron joined #gluster
08:12 xinkeT joined #gluster
08:12 dsj joined #gluster
08:12 chacken joined #gluster
08:12 stopbit joined #gluster
08:12 y4m4 joined #gluster
08:12 stefanha joined #gluster
08:12 cyberbootje joined #gluster
08:12 jbrooks joined #gluster
08:12 duerF joined #gluster
08:12 nightwalk joined #gluster
08:12 ctria joined #gluster
08:12 tryggvil joined #gluster
08:12 wN joined #gluster
08:12 berend joined #gluster
08:12 nick5 joined #gluster
08:12 eurower joined #gluster
08:12 khushildep joined #gluster
08:12 oneiroi joined #gluster
08:12 chaseh joined #gluster
08:12 bennyturns joined #gluster
08:12 esm_ joined #gluster
08:12 layer3 joined #gluster
08:12 gmcwhistler joined #gluster
08:12 guigui3 joined #gluster
08:12 crashmag joined #gluster
08:12 Shdwdrgn joined #gluster
08:12 saz joined #gluster
08:12 kevein joined #gluster
08:12 arusso joined #gluster
08:12 xymox joined #gluster
08:12 bulde joined #gluster
08:12 thekev joined #gluster
08:12 JordanHackworth joined #gluster
08:12 atrius joined #gluster
08:12 ola` joined #gluster
08:12 XmagusX joined #gluster
08:12 sr71 joined #gluster
08:12 Dave2 joined #gluster
08:12 unalt_ joined #gluster
08:12 tripoux joined #gluster
08:12 joeto joined #gluster
08:12 mtanner joined #gluster
08:12 Daxxial_1 joined #gluster
08:12 gcbirzan joined #gluster
08:12 purpleidea joined #gluster
08:12 mnaser joined #gluster
08:12 UnixDev_ joined #gluster
08:12 HeMan joined #gluster
08:12 m0zes joined #gluster
08:12 samppah joined #gluster
08:12 jiqiren joined #gluster
08:12 snarkyboojum joined #gluster
08:12 eightyeight joined #gluster
08:12 NcA^ joined #gluster
08:12 morse joined #gluster
08:12 rcheleguini joined #gluster
08:12 rz__ joined #gluster
08:12 redsolar_office joined #gluster
08:12 zwu|afk joined #gluster
08:12 dec joined #gluster
08:12 rubbs joined #gluster
08:12 niv joined #gluster
08:12 sensei joined #gluster
08:12 plantain joined #gluster
08:12 quillo joined #gluster
08:12 vincent_vdk joined #gluster
08:12 elyograg joined #gluster
08:12 kkeithley joined #gluster
08:12 jdarcy joined #gluster
08:12 rudebwoy joined #gluster
08:12 Psi-Jack joined #gluster
08:12 NuxRo joined #gluster
08:12 Ramereth joined #gluster
08:12 the-dude_ joined #gluster
08:12 samkottler|bbl joined #gluster
08:12 Melsom joined #gluster
08:12 rosco joined #gluster
08:12 frakt joined #gluster
08:12 circut joined #gluster
08:12 TSM joined #gluster
08:12 spn joined #gluster
08:12 joscas joined #gluster
08:12 benner joined #gluster
08:12 juhaj joined #gluster
08:12 lanning joined #gluster
08:12 SpeeR joined #gluster
08:12 sjoeboo joined #gluster
08:12 VeggieMeat joined #gluster
08:12 clag_ joined #gluster
08:12 H__ joined #gluster
08:12 JoeJulian joined #gluster
08:12 bdperkin joined #gluster
08:12 RobertLaptop joined #gluster
08:12 raghavendrabhat joined #gluster
08:12 gluslog joined #gluster
08:12 primusinterpares joined #gluster
08:12 gm__ joined #gluster
08:12 stigchristian joined #gluster
08:12 zoldar joined #gluster
08:12 Zengineer joined #gluster
08:12 social_ joined #gluster
08:12 pull_ joined #gluster
08:12 tjikkun joined #gluster
08:12 ste99 joined #gluster
08:12 smellis joined #gluster
08:12 masterzen joined #gluster
08:12 klubko joined #gluster
08:12 rbergeron joined #gluster
08:12 jds2001 joined #gluster
08:12 meshugga joined #gluster
08:12 trapni joined #gluster
08:12 haidz joined #gluster
08:12 yosafbridge joined #gluster
08:12 SteveCooling joined #gluster
08:12 sadsfae joined #gluster
08:12 al joined #gluster
08:12 Eimann joined #gluster
08:12 haakond joined #gluster
08:12 flowouffff joined #gluster
08:12 ndevos joined #gluster
08:12 VisionNL joined #gluster
08:12 ackjewt joined #gluster
08:12 jmara joined #gluster
08:12 hagarth_ joined #gluster
08:12 abyss^ joined #gluster
08:12 jiffe1 joined #gluster
08:12 wintix joined #gluster
08:12 pdurbin joined #gluster
08:12 jiffe98 joined #gluster
08:12 MinhP joined #gluster
08:12 maxiepax joined #gluster
08:12 z00dax joined #gluster
08:12 cbehm joined #gluster
08:12 tru_tru joined #gluster
08:12 edoceo joined #gluster
08:12 linux-rocks joined #gluster
08:12 helloadam joined #gluster
08:12 _Bryan_ joined #gluster
08:12 flin joined #gluster
08:12 a2 joined #gluster
08:12 xiu joined #gluster
08:12 johnmark joined #gluster
08:12 er|c joined #gluster
08:12 misuzu joined #gluster
08:12 madphoenix joined #gluster
08:12 sac_ joined #gluster
08:12 penglish joined #gluster
08:12 _br_ joined #gluster
08:12 xavih joined #gluster
08:14 Daxxial_ joined #gluster
08:14 ramkrsna joined #gluster
08:19 ramkrsna joined #gluster
08:58 bala joined #gluster
09:09 eurower JoeJulian: in fact I don't have the Fedora 15 kernel, but an older kernel (with OpenVZ): 2.6.32-042
09:24 sshaaf joined #gluster
09:40 eurower I have GlusterFS 3.3.0. According to JoeJulian's blog, is this ext4 problem fixed in 3.3.1?
09:42 ola` left #gluster
09:45 gcbirzan I'd recommend upgrading to 3.3.1 anyway
09:54 eurower gcbirzan : yes, compilation in progress ^^
10:02 Eimann joined #gluster
10:30 eurower Okay, it's fixed in 3.3.1 => thanks ;)
10:38 eurower hmm, some errors persist with lock files ...
11:03 eurower files appear twice. hmm :s
11:18 morse joined #gluster
11:26 morse joined #gluster
11:38 manik joined #gluster
11:50 manik joined #gluster
11:56 eurower gluster volume status : Volume nz101 is not started / gluster volume start nz101 : Volume nz101 does not exist (what ?)
12:19 spn joined #gluster
13:00 nightwalk joined #gluster
13:20 sunus joined #gluster
13:21 nueces joined #gluster
14:03 eurower Creation of volume nz101 has been successful. Please start the volume to access data / start : Volume nz101 does not exist
14:26 purpleidea joined #gluster
14:26 nightwalk joined #gluster
14:33 bala1 joined #gluster
14:48 eurower Ok. I'm starting from scratch: GlusterFS 3.3.1, a new replica 2 volume created from zero. All is ok, but rpm/yum blocks again. CPU is at 100% for yum. No gluster process seems to be looping anymore.
14:50 marcelle joined #gluster
14:51 marcelle hi, the documentation seems really strange; nothing is mentioned about what ports should be open between the nodes. i need to have a replicated (mirror) disk between 3 nodes and can not probe the servers. what ports should be open?
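For reference, a sketch of the firewall openings a GlusterFS 3.3 storage node typically needs with default settings (assumed here: brick ports are allocated one per brick counting up from 24009 in 3.3 and earlier, and the built-in NFS server uses 38465-38467):

    # glusterd management (and rdma)
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    # brick ports, one per brick from 24009; widen the range to cover your brick count
    iptables -A INPUT -p tcp --dport 24009:24020 -j ACCEPT
    # built-in NFS server and portmapper, only needed if clients mount over NFS
    iptables -A INPUT -p tcp --dport 38465:38467 -j ACCEPT
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT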
14:56 sunus joined #gluster
14:57 benner joined #gluster
15:11 sunus1 joined #gluster
15:13 NuxRo eurower: what fs for bricks?
15:20 marcelle i keep getting this error: Server and Client lk-version numbers are not same, reopening the fds
15:20 glusterbot marcelle: This is normal behavior and can safely be ignored.
15:21 marcelle lol but its not working
15:29 eurower NuxRo : ext4. I upgraded to 3.3.1 due to a bug in 3.3.0 with ext4
15:31 NuxRo oh, didnt know they fixed it in 3.3.1
15:31 NuxRo I'd still recommend switching to xfs
15:38 nightwalk joined #gluster
15:47 sjoeboo_ joined #gluster
15:48 TSM2 joined #gluster
16:22 oneiroi joined #gluster
16:25 nightwalk joined #gluster
16:35 eurower oki, hmm, same problem with ext3. GlusterFS is not compatible with ext3 or ext4 (rpm/yum fails) :s
16:37 NuxRo well, yum issues may indicate other problems
16:38 NuxRo i dont see it related to glusterfs
16:40 eurower NuxRo : I tried the same files on a local disk, it's ok. It fails only if the files are placed on a glusterfs mountpoint
16:40 NuxRo what is yum having to do with glusterfs?
16:42 eurower rpm/yum seems to use file locking or something else that gluster doesn't like. All is ok in my tests; it's only when I try exactly the same root files on a gluster mount that it fails.
16:43 NuxRo wait, have you mounted /var on glusterfs?
16:44 eurower yes, the whole system from / is mounted on glusterfs
16:45 NuxRo oh, right
16:45 NuxRo glusterfs natively or nfs?
16:45 NuxRo nfs might behave better
16:45 eurower glusterfs natively, not nfs
16:45 NuxRo well, feel free to open a bug
16:47 inodb joined #gluster
16:47 eurower okay. snif :)
17:01 ika2810 joined #gluster
17:10 nightwalk joined #gluster
17:17 mary_987 joined #gluster
17:18 mary_987 i keep getting the message unknown option _netdev (ignored) when trying to mount using fstab, anyone faced this? after a reboot my gluster setup is not functional until i manually mount again or put it in rc.local, which looks like a workaround instead of fixing the issue :(
17:33 nightwalk joined #gluster
17:53 eurower I'm going to test MooseFS. Gluster is not stable. big snif snif :s
17:53 NuxRo ....
18:03 sshaaf joined #gluster
18:08 nightwalk joined #gluster
18:52 inodb joined #gluster
18:56 sjoeboo_ joined #gluster
19:04 inodb joined #gluster
19:11 nightwalk joined #gluster
19:14 khushildep joined #gluster
19:39 JoeJulian @later tell eurower GlusterFS is stable, you just don't listen when people tell you what's wrong and how to fix it. Good luck getting ANYTHING to work correctly with that attitude.
19:39 glusterbot JoeJulian: The operation succeeded.
19:40 JoeJulian mary_987: That's just noise. _netdev is always ignored by mount for any filesystem. Most of them just don't tell you about it. _netdev is a startup script flag.
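A typical fstab entry of the kind mary_987 is describing (hypothetical server, volume and mount-point names); the "unknown option _netdev (ignored)" message is harmless, since _netdev is consumed by the boot scripts (e.g. the netfs service on RHEL/CentOS) rather than by mount itself:

    # /etc/fstab -- _netdev defers the mount until the network is up
    server1:/myvolume  /mnt/myvolume  glusterfs  defaults,_netdev  0 0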
20:05 inodb joined #gluster
20:05 nightwalk joined #gluster
20:36 mary_987 JoeJulian: but once the system is booted they are not mounted :( i have to mount them again :(
20:41 JoeJulian mary_987: Which distro?
21:11 manik joined #gluster
21:19 nightwalk joined #gluster
21:26 lanning a FUSE filesystem as /? wouldn't there be catch 22 issues?
21:27 JoeJulian I don't know how you would do it unless you're building it into an initrd.
21:27 JoeJulian But he didn't seem smart enough to be doing that.
21:28 JoeJulian Sorry, that was mean. He kind-of pissed me off.
21:28 lanning hmm... you would have to make sure the whole binary was loaded into memory before the pivotroot
21:29 sensei I suppose he could've chrooted himself onto a gluster mount
21:29 lanning hey, has anyone tried to link the fuse client with nfs-ganesha?
21:35 JoeJulian That guy's big problem was that he was running a kernel with the ext4 bug. When I told him, he claimed that he was running fedora 15 so it wouldn't affect him. It does. Then he read in the comments on my blog when I stated, based on feedback I received at the time I made the comment, that 3.3.1 would have a fix. It doesn't.
21:37 JoeJulian I haven't heard of anybody using that. It has a posix FSAL so it should work fine.
21:37 lanning ya, I was looking more to their FSAL_FUSELIKE
21:37 JoeJulian Yeah, I was just reading that.
21:37 JoeJulian interesting.
21:38 lanning stop traversing in/out of the kernel so much
21:38 JoeJulian It claims to avoid the fuse layer
21:38 JoeJulian Of course, you would have the same advantage using the native nfs.
21:39 lanning ya, but a bit better if clients mount localhost
21:39 JoeJulian You can still do that.
21:39 lanning without a kernel patch?
21:40 lanning (NFS export a FUSE mount)
21:40 JoeJulian Yep, install glusterfs-server, start glusterd, probe the client, nfs mount from localhost.
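A minimal sketch of that localhost-NFS setup, assuming a RHEL/Fedora-style client, a volume named myvol (hypothetical), and commonly suggested gluster-NFS mount options (NFSv3 over TCP):

    # on the client
    yum install glusterfs-server
    service glusterd start
    # on an existing pool member: gluster peer probe <client-hostname>
    # back on the client, mount the volume from its own built-in NFS server
    mount -t nfs -o vers=3,proto=tcp,nolock localhost:/myvol /mnt/myvol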
21:40 lanning if I rsync the config around for updates, does it auto update the daemon?
21:41 lanning or do I have to restart it?
21:41 JoeJulian I'm lost there... Why would you rsync a config?
21:41 lanning I have 96 clients, that could dynamically grow/shrink
21:42 JoeJulian Configs are dynamic by design. The only issue would be adding them to the trusted pool.
21:42 lanning this is a cloud
21:43 lanning ok, so a node goes down, I detect this and have to remove it from the cluster
21:44 JoeJulian yep
21:45 lanning ok, so I need to write a REST service to accept auto cluster registrations from all clients...
21:46 lanning then another that scans peers and auto removes all disconnected ones, that are not serving bricks.
21:47 JoeJulian That would work. Should be pretty easy to do too. You could even do an unreg for the clients if you know you can remove them in an orderly way.
21:49 lanning hmmm...
21:50 JoeJulian If you're going to use the cli as an api layer, you might be interested in the --xml switch.
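For example, assuming a 3.3-era CLI where these subcommands accept --xml:

    gluster volume info wowza --xml      # volume definition, machine-readable
    gluster volume status wowza --xml    # brick, NFS and self-heal daemon status
    gluster peer status --xml            # peer list with connection state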
21:50 lanning the FUSE->nfs-ganesha is a lot simpler (way less state)
21:50 JoeJulian good point.
21:50 lanning ya, I would use that.
21:52 JoeJulian May I ask what product you're using this behind?
21:52 lanning the only problem with nfs-ganesha is that while it is under active development by Panasas and IBM, no one is maintaining the FUSE module
21:52 lanning a private IaaS
21:54 lanning Xen based
21:54 JoeJulian If you put together a test bed and can document any bugs you might find clearly, I suspect that some of the gluster developers would be willing to make contributions to that. It sounds like something that would interest at least a couple of the guys.
21:55 lanning the FUSE module doesn't compile.  seems half ported to their new memory management system
21:56 lanning I will have a test bed shortly.
21:56 JoeJulian Send an email to gluster-devel with what you know and I'll see if I have enough of my mind left to remember to ping jdarcy and avati on monday.
21:57 lanning ok, will do
21:58 JoeJulian Could be interesting to take that a step further and write a glfs library interface for it.
22:01 lanning it's cool they have an SNMP and LDAP FSAL... walk the tree with find... :)
22:08 nightwalk joined #gluster
22:47 Jippi joined #gluster
22:55 khushildep joined #gluster
23:04 nightwalk joined #gluster
23:46 mtanner joined #gluster
23:49 nightwalk joined #gluster
23:53 TSM2 joined #gluster
