IRC log for #gluster, 2012-11-13

All times shown according to UTC.

Time Nick Message
00:00 jayunit100_ hi Gluster people
00:02 FU5T joined #gluster
00:03 semiosis hi
00:03 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
00:03 semiosis jayunit100_: ^^
00:03 tryggvil joined #gluster
00:06 jayunit100_ Is anyone using https://github.com/gluster/hadoop-glusterfs
00:06 glusterbot Title: gluster/hadoop-glusterfs · GitHub (at github.com)
00:07 jayunit100_ I've pulled the source down.  Noticing that it has an obvious dependency (cluster).  Was wondering if it is possible to mock cluster for testing the source code, or whether that might be a good idea.
00:07 jayunit100_ s/cluster/gluster
00:25 nightwalk joined #gluster
00:41 robo joined #gluster
01:01 robo joined #gluster
01:09 Technicool joined #gluster
01:49 robo joined #gluster
02:06 robo joined #gluster
02:22 kevein joined #gluster
02:35 FU5T joined #gluster
02:36 the-dude_ joined #gluster
02:40 mnaser joined #gluster
03:03 seanh-ansca joined #gluster
03:09 sunus joined #gluster
03:13 ika2810 joined #gluster
03:18 jayunit100_ https://gist.github.com/3826617 <-- vagrant file for running cluster against ubuntu.  Wonder if one exists for fedora… or how it would be different.
03:18 glusterbot Title: Create GlusterFS clusteredVolume for Vagrant Gist (at gist.github.com)
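
The gist targets Ubuntu; on Fedora the provisioning step would differ mainly in the package manager and service name. A rough sketch of the difference (package and service names here are assumptions, not taken from the gist):

    # Ubuntu provisioning
    apt-get install -y glusterfs-server   # init script starts glusterd

    # Fedora equivalent
    yum install -y glusterfs-server
    service glusterd start                # or: systemctl start glusterd.service
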
03:26 sunus the STACK_WIND and STACK_UNWIND macros, how should I understand these two?
03:26 sunus by direction or something else?
03:31 nightwalk joined #gluster
04:12 mdarade1 joined #gluster
04:18 nightwalk joined #gluster
04:22 kevein joined #gluster
04:27 pranithk joined #gluster
04:35 Humble joined #gluster
04:38 ika2810 left #gluster
04:43 seanh-ansca joined #gluster
05:37 mdarade1 left #gluster
06:23 kevein joined #gluster
06:38 Humble joined #gluster
06:43 hchiramm_ joined #gluster
07:05 hchiramm_ joined #gluster
07:27 dobber joined #gluster
07:32 pkoro joined #gluster
07:42 hchiramm_ joined #gluster
07:44 nightwalk joined #gluster
07:48 ctria joined #gluster
07:55 lkoranda joined #gluster
08:00 Azrael808 joined #gluster
08:07 inodb joined #gluster
08:08 inodb joined #gluster
08:09 tryggvil joined #gluster
08:19 ika2810 joined #gluster
08:24 nightwalk joined #gluster
08:25 clag_ joined #gluster
08:27 clag_ joined #gluster
08:29 andreask joined #gluster
08:34 rotbart joined #gluster
08:36 rotbart hi there, I have two load balanced and independent (so far) webservers and need common storage. I plan a dedicated gluster cluster in the future, but for now my idea is to use glusterfs directly on those two webservers. Is this possible, or are there any reasons for not doing this?
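
What rotbart describes is a common small setup: each web server is both a gluster server and a client of the same volume. A minimal sketch, with hypothetical hostnames web1/web2 and brick paths:

    # on web1, after installing glusterfs-server on both machines
    gluster peer probe web2
    gluster volume create webdata replica 2 web1:/data/brick web2:/data/brick
    gluster volume start webdata

    # on each web server, mount the volume locally
    mount -t glusterfs localhost:/webdata /var/www/shared

The usual caveat is that every write is replicated synchronously to the other server, so write-heavy workloads feel the latency of the link between the two machines.
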
08:43 humka joined #gluster
08:43 quillo joined #gluster
08:46 Nr18 joined #gluster
08:48 Nr18 joined #gluster
08:53 tjikkun_work joined #gluster
09:03 Triade joined #gluster
09:07 nightwalk joined #gluster
09:08 gbrand_ joined #gluster
09:13 kevein joined #gluster
09:18 hurdman joined #gluster
09:20 hurdman hi, i have a problem. In a replicated volume, I need to replace a brick that is down on one network ( 10.5.0.0/16 ) with another brick on a different network ( 10.30.0.0/16 ). The brick that is still up is on both networks and has both bricks as peers
09:20 hurdman is there any solution, or do i have to destroy the replica and rebuild one? ( i don't want to stop my services :'( )
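
With one replica brick already down, the usual approach on 3.3 (hedged: the exact flow is version-dependent, so test on a scratch volume first) is replace-brick with commit force, which swaps the dead brick's address without stopping the volume, followed by a self-heal to repopulate the new brick. Hostnames and paths below are hypothetical:

    gluster peer probe new-server   # the 10.30.0.0/16 host
    gluster volume replace-brick myvol \
        dead-server:/bricks/b1 new-server:/bricks/b1 commit force
    gluster volume heal myvol full  # 3.3+; triggers a full self-heal
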
09:23 tryggvil joined #gluster
09:23 manik joined #gluster
09:24 tryggvil joined #gluster
09:34 DaveS_ joined #gluster
10:01 tryggvil_ joined #gluster
10:03 humka left #gluster
10:18 kevein joined #gluster
10:18 gbrand_ joined #gluster
10:18 nightwalk joined #gluster
10:18 Triade joined #gluster
10:18 tjikkun_work joined #gluster
10:18 Nr18 joined #gluster
10:18 quillo joined #gluster
10:18 rotbart joined #gluster
10:18 andreask joined #gluster
10:18 clag_ joined #gluster
10:18 inodb joined #gluster
10:18 lkoranda joined #gluster
10:18 ctria joined #gluster
10:18 pkoro joined #gluster
10:18 sunus joined #gluster
10:18 mnaser joined #gluster
10:18 the-dude_ joined #gluster
10:18 FU5T joined #gluster
10:18 jayunit100_ joined #gluster
10:18 rubbs joined #gluster
10:18 arusso joined #gluster
10:18 XmagusX joined #gluster
10:18 circut joined #gluster
10:18 joeto joined #gluster
10:18 JordanHackworth joined #gluster
10:18 xymox joined #gluster
10:18 dstywho joined #gluster
10:18 xillver joined #gluster
10:18 rwheeler joined #gluster
10:18 davdunc joined #gluster
10:18 balunasj|mtg|awa joined #gluster
10:18 nick5 joined #gluster
10:18 elyograg joined #gluster
10:18 atrius_away joined #gluster
10:18 jbrooks_ joined #gluster
10:18 ninkotech_ joined #gluster
10:18 rcheleguini joined #gluster
10:18 koodough joined #gluster
10:18 hagarth joined #gluster
10:18 chaseh joined #gluster
10:18 chacken2 joined #gluster
10:18 badone joined #gluster
10:18 mtanner joined #gluster
10:18 spn joined #gluster
10:18 Eimann joined #gluster
10:18 theron joined #gluster
10:18 xinkeT joined #gluster
10:18 cyberbootje joined #gluster
10:18 jbrooks joined #gluster
10:18 Shdwdrgn joined #gluster
10:18 saz joined #gluster
10:18 thekev joined #gluster
10:18 sr71 joined #gluster
10:18 Dave2 joined #gluster
10:18 unalt_ joined #gluster
10:18 tripoux joined #gluster
10:18 HeMan joined #gluster
10:18 m0zes joined #gluster
10:18 samppah joined #gluster
10:18 jiqiren joined #gluster
10:18 snarkyboojum joined #gluster
10:18 eightyeight joined #gluster
10:18 NcA^ joined #gluster
10:18 redsolar_office joined #gluster
10:18 zwu joined #gluster
10:18 dec joined #gluster
10:18 plantain joined #gluster
10:18 vincent_vdk joined #gluster
10:18 kkeithley joined #gluster
10:18 jdarcy joined #gluster
10:18 Psi-Jack joined #gluster
10:18 NuxRo joined #gluster
10:18 Ramereth joined #gluster
10:18 samkottler|out joined #gluster
10:18 Melsom joined #gluster
10:18 frakt joined #gluster
10:18 joscas joined #gluster
10:18 lanning joined #gluster
10:18 VeggieMeat joined #gluster
10:18 H__ joined #gluster
10:18 JoeJulian joined #gluster
10:18 RobertLaptop joined #gluster
10:18 raghavendrabhat joined #gluster
10:18 gluslog joined #gluster
10:18 primusinterpares joined #gluster
10:18 gm__ joined #gluster
10:18 stigchristian joined #gluster
10:18 misuzu joined #gluster
10:18 er|c joined #gluster
10:18 johnmark joined #gluster
10:18 xiu joined #gluster
10:18 a2 joined #gluster
10:18 flin joined #gluster
10:18 _Bryan_ joined #gluster
10:18 helloadam joined #gluster
10:18 linux-rocks joined #gluster
10:18 tru_tru joined #gluster
10:18 z00dax joined #gluster
10:18 maxiepax joined #gluster
10:18 MinhP joined #gluster
10:18 jiffe98 joined #gluster
10:18 pdurbin joined #gluster
10:18 wintix joined #gluster
10:18 jiffe1 joined #gluster
10:18 abyss^ joined #gluster
10:18 hagarth_ joined #gluster
10:18 jmara joined #gluster
10:18 ackjewt joined #gluster
10:18 VisionNL joined #gluster
10:18 ndevos joined #gluster
10:18 haakond joined #gluster
10:18 zoldar joined #gluster
10:18 Zengineer joined #gluster
10:18 social_ joined #gluster
10:18 pull_ joined #gluster
10:18 tjikkun joined #gluster
10:18 smellis joined #gluster
10:18 masterzen joined #gluster
10:18 jds2001 joined #gluster
10:18 meshugga joined #gluster
10:18 trapni joined #gluster
10:18 haidz joined #gluster
10:18 yosafbridge joined #gluster
10:18 SteveCooling joined #gluster
10:18 sadsfae joined #gluster
10:18 al joined #gluster
10:22 kevein joined #gluster
10:22 nightwalk joined #gluster
10:22 Triade joined #gluster
10:22 quillo joined #gluster
10:22 davdunc joined #gluster
10:22 jbrooks_ joined #gluster
10:22 rcheleguini joined #gluster
10:22 hagarth joined #gluster
10:22 m0zes joined #gluster
10:22 NcA^ joined #gluster
10:22 Psi-Jack joined #gluster
10:22 JoeJulian joined #gluster
10:22 RobertLaptop joined #gluster
10:22 stigchristian joined #gluster
10:22 _Bryan_ joined #gluster
10:22 flin joined #gluster
10:22 a2 joined #gluster
10:22 xiu joined #gluster
10:22 johnmark joined #gluster
10:22 er|c joined #gluster
10:22 misuzu joined #gluster
10:23 sunus joined #gluster
10:23 arusso joined #gluster
10:23 chaseh joined #gluster
10:23 chacken2 joined #gluster
10:23 cyberbootje joined #gluster
10:23 thekev joined #gluster
10:23 samkottler|out joined #gluster
10:23 gm__ joined #gluster
10:23 primusinterpares joined #gluster
10:26 Triade joined #gluster
10:26 jbrooks_ joined #gluster
10:26 JoeJulian joined #gluster
10:26 RobertLaptop joined #gluster
10:26 johnmark joined #gluster
10:27 nightwalk joined #gluster
10:27 quillo joined #gluster
10:27 rcheleguini joined #gluster
10:27 Psi-Jack joined #gluster
10:27 misuzu joined #gluster
10:27 er|c joined #gluster
10:27 xiu joined #gluster
10:27 a2 joined #gluster
10:27 flin joined #gluster
10:27 _Bryan_ joined #gluster
10:28 gluslog joined #gluster
10:28 raghavendrabhat joined #gluster
10:29 Humble joined #gluster
10:30 mnaser joined #gluster
10:30 snarkyboojum joined #gluster
10:50 bauruine joined #gluster
10:55 ika2810 joined #gluster
11:12 badone joined #gluster
11:14 lh joined #gluster
11:14 lh joined #gluster
11:17 saz joined #gluster
11:45 nightwalk joined #gluster
12:01 ika2810 left #gluster
12:07 saz joined #gluster
12:09 sunus what is the write-behind xlator?
12:11 Technicool joined #gluster
12:21 kkeithley1 joined #gluster
12:31 kkeithley1 joined #gluster
12:39 gcbirzan_ joined #gluster
12:39 sunus how can i disable a translator?
12:40 nightwalk joined #gluster
12:40 gcbirzan_ so, on a host that's running gluster, sometimes reading from /proc blocks for glusterfs processes.
12:41 Technicool sunus, which translator?   typically that would be done via gluster volume set <foo>
12:41 Technicool gcbirzan, do you have a specific example?
12:41 sunus Technicool: hi, i want to disable write-behind and see how it goes
12:42 gcbirzan_ Yeah.
12:42 sunus Technicool: because i think i might have found a bug in write-behind
12:42 gcbirzan_ http://bpaste.net/show/n8MNPxWUmExpUqCX7MWJ/
12:42 glusterbot Title: Paste #n8MNPxWUmExpUqCX7MWJ at spacepaste (at bpaste.net)
12:42 gcbirzan_ I have that machine on right now, dunno what you want me to try but... :P
12:43 gcbirzan_ if I kill -9 the process, when it starts again, it breaks
12:43 sunus Technicool: i want to disable write-behind
12:45 Technicool sunus, what is the bug you think you hit?  It looks like disabling is via "performance.flush-behind: ON|OFF"
12:45 Technicool gcbirzan, looking
12:45 sunus Technicool: i am not sure whether flush-behind and write-behind are the same thing
12:45 gcbirzan_ this happened after the update to 3.3.1
12:46 sunus Technicool: but i will try it now, i am testing qemu with gluster integration
12:46 Technicool sunus:  gluster volume set help | grep write-behind -B 2
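
For what it's worth, flush-behind and write-behind are not the same switch: flush-behind is one option of the write-behind translator (it makes flush/close asynchronous), while the translator's own on/off knob is performance.write-behind. The option names below are as listed by 3.3's "volume set help"; confirm with the grep above:

    gluster volume set <volname> performance.write-behind off
    # restore the default later with:
    gluster volume reset <volname> performance.write-behind
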
12:46 gcbirzan_ and since I can't read its cmdline, no idea which it is
12:46 gcbirzan_ ah, hm
12:47 gcbirzan_ I think I got it
12:47 Technicool gcbirzan, is this happening as a result of an application or script, or just happening sporadically
12:47 Technicool nm, maybe ^^
12:48 gcbirzan_ this happens... dunno, we just upgraded our hosts to 3.3.1 and this is happening
12:48 sunus Technicool: i have got that disabled, and am testing now
12:48 Technicool sunus, what is the issue you are encountering?
12:49 guigui3 joined #gluster
12:49 sunus when qemu wants to write a lot of data to an image on a glusterfs node, it gets a segmentation fault
12:50 Technicool gcbirzan, trying to understand if the hang you notice from the cmdline is from a script calling gluster or not
12:50 sunus Technicool: and the backtrace shows it happened somewhere in the write-behind part of the code
12:50 gcbirzan_ Technicool: ah. hm, sorry, don't understand your question
12:50 gcbirzan_ the hang is when I do ps aux
12:50 Technicool gcbirzan, ps aux from client or server?
12:51 gcbirzan_ from the client
12:51 gcbirzan_ which is also a server
12:51 Technicool ;)
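
A generic way to narrow down which /proc entry is blocking (a sketch, not gluster-specific; it just walks the glusterfs PIDs and reads each file under a one-second timeout):

    for pid in $(pgrep -f glusterfs); do
      for f in cmdline status stack; do
        timeout 1 cat /proc/$pid/$f > /dev/null 2>&1 \
          || echo "blocked or unreadable: /proc/$pid/$f"
      done
    done

Note that /proc/<pid>/stack needs root; a process stuck in uninterruptible sleep will show up here, and its kernel stack usually names the culprit.
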
12:52 Technicool sunus, let me know if disabling makes a difference, there could be other factors as well but best to isolate the issue if possible
12:53 sunus Technicool: i will:)
12:53 Technicool gcbirzan, so you updated to 3.3.1, and now running a ps aux consistently hangs?
12:53 sunus Technicool: i will get the result in a few minutes
12:55 gcbirzan_ yeah
12:55 manik joined #gluster
12:55 Technicool gcbirzan, anything else updated at the same time?
12:56 sunus Technicool: do i need to restart the glusterd services after turning off glushbehind?
12:56 Technicool im on airport wifi but there is a possible bug in NFS that looked similar
12:56 sunus Technicool:  flush-behind
12:56 Technicool in five or six years when the bug opens ill let you know  ;)
12:57 Technicool sunus, i would recommend it, it may also be required on the clients
12:58 gcbirzan_ Technicool: eh.
12:59 Technicool gcbirzan, reading through the bug still
13:00 Technicool gcbirzan as a quick check, are you seeing anything like "contains a readdir loop." if you grep /var/log/messages*
13:00 gcbirzan_ nope
13:00 Tarok joined #gluster
13:00 Tarok Hi
13:00 glusterbot Tarok: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:01 edward1 joined #gluster
13:01 Technicool had to find power, probably not the issue then but it looked promising at first
13:02 bauruine_ joined #gluster
13:04 Tarok I have a problem with glusterfs on ubuntu 12.04. Gluster refuses to start!
13:04 Tarok I checked the log and I have:
13:04 gcbirzan_ I hate this.
13:04 Tarok getaddrinfo failed (Name or service not known)
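
"getaddrinfo failed (Name or service not known)" means a hostname in the peer or volume configuration does not resolve on that machine. A quick check, with a hypothetical peer name:

    getent hosts server1 || echo "server1 does not resolve"
    # fix it in DNS, or add a static entry:
    echo "10.0.0.1 server1" >> /etc/hosts
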
13:06 Technicool gcbirzan, what OS and kernel are you on?  and, has gluster been remounted since you upgraded?
13:07 gcbirzan_ no idea about the latter, fedora with 3.6.1
13:07 gcbirzan_ er
13:08 gcbirzan_ 3.6.6-1.fc17.x86_64
13:09 mooperd_ joined #gluster
13:09 mooperd_ hello. I can't work out where to get the swift components from...
13:11 Technicool gcbirzan, ill try to test out before my flight if i can, connection is dropping every few minutes so no promises
13:12 Technicool mooper, moment ill ping the link
13:12 theron joined #gluster
13:13 mooperd_ Technicool: ta. I have heard talk of a repro
13:13 mooperd_ repo
13:13 ctria joined #gluster
13:14 Technicool mooperd_, http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/
13:14 glusterbot <http://goo.gl/Wf7Yx> (at www.gluster.org)
13:15 Technicool everything you need
13:15 mooperd_ Technicool: aah. no centos RPMs
13:15 mooperd_ I suppose the RHEL6 ones will work
13:16 Technicool mooperd_, haven't tested it myself but would be very surprised if they didn't
13:17 mooperd_ Technicool: are these parts of the official Gluster project?
13:17 duerF joined #gluster
13:18 Technicool mooperd_, unofficially, yes  ;)
13:18 mooperd_ Technicool: Unofficially official
13:18 mooperd_ ok
13:19 Technicool all the UFO stuff is official, the repos from kkeithley i'm not sure
13:20 nightwalk joined #gluster
13:20 juhaj joined #gluster
13:21 niv joined #gluster
13:27 nueces joined #gluster
13:28 y4m4 joined #gluster
13:36 Tarok 0-rpc-transport/rdma: No IB devices found
13:36 Tarok any ideas
13:36 Tarok ?
13:37 cbehm joined #gluster
13:43 kkeithley_wfh Not sure what you mean by no CentOS rpms. My repo has EPEL rpms that work on CentOS.
13:45 mooperd_ kkeithley: oh hello
13:45 kkeithley_wfh hi
13:45 glusterbot kkeithley_wfh: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an
13:45 glusterbot answer.
13:45 kkeithley_wfh go away glusterbot
13:46 kkeithley_wfh I've been building the glusterfs rpms for Fedora for the past year or so, and the rpms in my repo are built from the same rpm spec file, on Fedora Koji build machines for the most part too. They're about as official as anything can get IMO
13:47 mooperd_ kkeithley: I'm just about to try and get this working on some old IBM blades that I have in my dungeon
13:47 kkeithley_wfh sounds good
13:47 mooperd_ kkeithley: But they might be a little too old. The "gigabit" switch only seems to be able to manage about 43kbit/s
13:50 mooperd_ kkeithley: would you install the swift stuff on all your nodes
13:50 mdarade1 joined #gluster
13:50 mdarade1 left #gluster
13:50 mooperd_ and load balance between them?
13:51 nightwalk joined #gluster
13:51 kkeithley_wfh I think that makes sense. I haven't tried on more than a single node.
13:51 mooperd_ So the swift component just attaches to the client
13:51 kkeithley_wfh And I don't have a load balancer
13:51 mooperd_ and does all the http diggety stuff
13:52 kkeithley_wfh Not sure I follow you.
13:52 mooperd_ Well, the swift component translates REST requests into filesystem requests?
13:52 kkeithley_wfh There's swift, there's the Gluster "layer" kind of in the middle of swift. The clients read and write to swift.
13:53 kkeithley_wfh using http
13:53 mooperd_ http://pastie.org/5371664
13:53 glusterbot Title: #5371664 - Pastie (at pastie.org)
13:53 kkeithley_wfh The gluster component just integrates swift with gluster back end storage.
13:53 mooperd_ kkeithley: ah, so its not using the gluster client
13:54 mooperd_ swift connects directly to gluster backend?
13:54 kkeithley_wfh If you use yum to install, yum will fetch all the dependencies for you and install them too
13:55 nick5 joined #gluster
13:56 tryggvil_ joined #gluster
13:56 mooperd_ kkeithley: yum --enablerepo=epel-glusterfs install glusterfs glusterfs-server glusterfs-fuse glusterfs-swift glusterfs-swift-account glusterfs-swift-container glusterfs-swift-object glusterfs-swift-plugin glusterfs-swift-proxy
13:56 mooperd_ I was doing it with this command
13:56 mooperd_ broken yum perhaps
13:56 kkeithley_wfh okay, wasn't sure. Yes, seems like broken yum
14:01 puebele joined #gluster
14:04 aliguori joined #gluster
14:08 changi joined #gluster
14:16 tryggvil joined #gluster
14:20 Tarok left #gluster
14:21 changi Hi, i have a gluster 3.2 setup on debian squeeze with a replica volume on 2 nodes. I'm seeing poor performance with svn checkout: it takes 11 minutes instead of 32s on a local fs. Does anyone have this issue?
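
An svn checkout is close to a worst case for a replicated network filesystem: thousands of small files, each costing several network round trips. The profiling commands below (available in the 3.2 series) can show where the time goes; <volname> is a placeholder:

    gluster volume profile <volname> start
    # run the slow checkout, then inspect per-operation latency:
    gluster volume profile <volname> info
    gluster volume profile <volname> stop
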
14:21 glusterbot New news from newglusterbugs: [Bug 874348] mount point broke on client when a lun from a storage backend offline or missing . After there the data are scrap <http://goo.gl/CjwrE>
14:25 Staples84 joined #gluster
14:34 robo joined #gluster
14:34 puebele joined #gluster
14:36 andreask joined #gluster
14:38 ctrianta joined #gluster
14:51 Daxxial_ joined #gluster
14:53 puebele1 joined #gluster
14:57 koodough joined #gluster
15:01 guigui1 joined #gluster
15:04 stopbit joined #gluster
15:16 badone_ joined #gluster
15:20 nightwalk joined #gluster
15:22 glusterbot New news from newglusterbugs: [Bug 876214] Gluster "healed" but client gets i/o error on file. <http://goo.gl/eFkPQ>
15:29 wushudoin joined #gluster
15:39 davdunc joined #gluster
15:39 davdunc joined #gluster
15:52 glusterbot New news from newglusterbugs: [Bug 876222] Gluster "healed" but client gets i/o error on file. <http://goo.gl/hwxIg>
15:54 semiosis changi: i have apache/svn serving from a glusterfs volume and it works great, quite fast
15:58 daMaestro joined #gluster
16:04 puebele joined #gluster
16:04 tc00per joined #gluster
16:09 ika2810 joined #gluster
16:20 robo joined #gluster
16:22 plarsen joined #gluster
16:40 robo joined #gluster
16:44 robo joined #gluster
16:46 robo joined #gluster
16:54 aliguori joined #gluster
16:54 robo joined #gluster
16:57 zaitcev joined #gluster
17:01 robo joined #gluster
17:07 stat1x joined #gluster
17:17 Mo__ joined #gluster
17:20 Bullardo joined #gluster
17:49 JoeJulian changi: Do you have like 20000 files in one directory?
17:52 tryggvil_ joined #gluster
17:56 JoeJulian @learn volstat as Please paste the output from "gluster volume status" to fpaste.org or dpaste.org and post the url that's generated here.
17:56 glusterbot JoeJulian: The operation succeeded.
17:56 Technicool joined #gluster
17:56 JoeJulian ~volstat | gcbirzan_
17:56 glusterbot gcbirzan_: Please paste the output from gluster volume status to fpaste.org or dpaste.org and post the url that's generated here.
17:59 tryggvil joined #gluster
18:01 obryan joined #gluster
18:02 obryan I think I am doing something wrong with my test volume I am trying to create.  It does not throw an error when i run create but i can't start testvol and it does not show in the volume information
18:02 obryan BASH> gluster volume create testvol replica 2 transport tcp server1:/mnt/shared server2:/mnt/shared
18:03 obryan yes, the shared dir exists in both servers
18:03 JoeJulian which version?
18:04 obryan 3.2.5
18:04 obryan if I try to start the volume I get "operation failed"
18:05 JoeJulian try restarting both glusterd
18:05 obryan is there a verbose/debug for the create?
18:05 obryan kk
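
On Ubuntu the management daemon is controlled by the glusterfs-server init script, so the restart-and-retry JoeJulian suggests looks roughly like this (run on both servers):

    service glusterfs-server restart
    # then retry the create and verify it registered:
    gluster volume create testvol replica 2 transport tcp \
        server1:/mnt/shared server2:/mnt/shared
    gluster volume info testvol
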
18:05 JoeJulian And, actually, I probably should have asked for ,,(pastestatus) first, but I'm tired...
18:05 glusterbot Please paste the output of "gluster peer status" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:06 obryan that's just it, I don't get any output
18:06 JoeJulian I had to work on windows machines yesterday and that just completely drains my will to live.
18:06 obryan i run the create vol and it gives no output
18:06 obryan oi
18:06 JoeJulian Oh, I remember something like that from back then...
18:06 robo joined #gluster
18:07 JoeJulian Yeah, restarting glusterd was, iirc, how I worked around it.
18:07 obryan is there a log I can refer to?
18:07 JoeJulian Yep. All logs are in /var/log/glusterfs.
18:07 JoeJulian glusterd is named etc-glusterfs-glusterd.vol.log and the client is cli.log.
18:08 obryan yes I am looking at it now
18:08 JoeJulian Essentially there was a bug back then where if there was a communication error between the client and the server, that would happen.
18:08 JoeJulian It's a REALLY old bug, btw...
18:08 obryan what's the current v??
18:09 JoeJulian Is this in production, or are you just getting started?
18:09 JoeJulian 3.3.1
18:09 obryan this is what was in the EC2 repo and the docs said 3.2.5 so I figured it was close to current
18:09 obryan god no not production
18:09 JoeJulian @yum repo
18:09 glusterbot JoeJulian: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
18:09 obryan no, ubuntu repo
18:09 JoeJulian @ppa
18:09 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
18:09 obryan ah
18:10 obryan but i'm getting a chance to prove Gluster could work as a shared log across regions in EC2
18:10 obryan (it will also be the first time i've used Gluster...)
18:10 * JoeJulian prods semiosis
18:10 obryan i'm going to pasty this bit from the log
18:11 obryan http://pastebin.com/ELZP0DYU
18:11 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
18:11 obryan http://dpaste.org/GmQoj/
18:11 glusterbot Title: dpaste.de: Snippet #213041 (at dpaste.org)
18:11 obryan there ya go
18:12 JoeJulian Did you know that you can apt-get install dpaste and then you could just do like "tail -n 50 etc-glusterfs-glusterd.vol.log | dpaste"
18:12 obryan interesting, i still like my KDE-desktop widget for that sort of thing :)
18:12 JoeJulian :)
18:12 obryan but a useful little trick to know :)
18:13 JoeJulian 3.2.5 had several little annoying bugs. (3.2.7 is the current in that series). I'd say just uninstall and install from the ppa rather than worrying about this.
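
A sketch of that swap on Ubuntu. The PPA name below is an assumption, since the shortened link above isn't expanded here:

    apt-get purge glusterfs-server glusterfs-client glusterfs-common
    add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.3   # assumed PPA path
    apt-get update && apt-get install glusterfs-server
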
18:13 obryan now I did make sure to add the glusterfs ports 111, 24007-24018 into my EC2 security settings
18:14 obryan sounds like a plan
18:21 esm_ joined #gluster
18:31 obryan crapnaggit
18:31 obryan same result
18:31 obryan no output
18:31 obryan can't start the volume
18:41 aliguori joined #gluster
18:45 gmcwhistler joined #gluster
18:47 quillo joined #gluster
18:48 andreask joined #gluster
18:52 obryan oh, it's no longer saying "operation failure" just "No volumes present"
19:07 Tekni joined #gluster
19:12 DaveS_ joined #gluster
19:14 DaveS joined #gluster
19:40 aricg joined #gluster
19:43 nightwalk joined #gluster
19:49 obryan FYI - I had to remove 3.3.1, kept segfaulting.  I am however, in both versions, getting no output of any kind when i try to create a volume
19:52 saz joined #gluster
20:03 Humble joined #gluster
20:09 semiosis JoeJulian: pong
20:10 semiosis afk today training new developer
20:10 semiosis but back for a bit
20:18 andreask left #gluster
20:21 Bullardo joined #gluster
20:28 Bullardo joined #gluster
20:32 Tarok_ joined #gluster
20:36 semiosis ok afk again
20:45 zaitcev joined #gluster
20:58 * obryan sighs
21:05 Alpinist joined #gluster
21:23 aricg hey, what is an ib driver?  ibv_devinfo
21:23 aricg Failed to get IB devices list: Function not implemented
21:34 gbrand_ joined #gluster
21:37 aricg im trying to get gluster working between some KVM machines on my workstation.... perhaps it cannot be virtualized as such?
21:38 aricg to clarify the KVM machines are the glusterfs-servers
21:44 aricg hmm, perhaps I should have used the virtio driver...
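
"ib" here is InfiniBand (as JoeJulian notes further down): the rdma transport probes for InfiniBand hardware at startup, which a KVM guest won't have, so "Failed to get IB devices list" is harmless as long as the volume uses tcp. Creating it explicitly over tcp (hostnames are hypothetical; the volume name matches the log below):

    gluster volume create gluster_volume1 replica 2 transport tcp \
        vm1:/export/brick vm2:/export/brick
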
21:56 chandank joined #gluster
21:56 chandank left #gluster
21:56 chandank joined #gluster
21:56 y4m4 joined #gluster
22:04 chandank joined #gluster
22:06 aricg hmm. switching to VirtIO disks didn't help https://privatepaste.com/87e5bfeb79
22:06 glusterbot Title: privatepaste.com :: Paste ID 87e5bfeb79 (at privatepaste.com)
22:07 TSM2 joined #gluster
22:17 tryggvil joined #gluster
22:19 badone joined #gluster
22:26 aricg 0-socket.management: reading from socket failed. Error Transport endpoint is not connected
22:28 blendedbychris joined #gluster
22:28 blendedbychris joined #gluster
22:31 inodb_ joined #gluster
22:35 aricg no errors until I try to mount ..  0-gluster_volume1-client-1: failed to get the port number for remote subvolume
22:46 lh joined #gluster
22:47 aricg um. it works now.
22:48 aricg the gluster volume was not started...
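
That mount error is consistent with the volume not being started: brick ports are only assigned once the volume starts. For the record:

    gluster volume start gluster_volume1
    gluster volume info gluster_volume1    # Status should read "Started"
    mount -t glusterfs vm1:/gluster_volume1 /mnt/gluster
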
23:17 duerF joined #gluster
23:27 blubberdi joined #gluster
23:27 blubberdi ,,ppa
23:27 blubberdi ,,(ppa)
23:27 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
23:29 chandank does it have all the latest fixes?
23:30 chandank all the fixes that are in kkeithley's epel repo?
23:30 blubberdi Hi, which ppa should I use if I'm running ubuntu 10.04? I can't install this because it needs libc6 (>= 2.14) but 2.11.1-0ubuntu7.11 is to be installed
23:35 Bullardo joined #gluster
23:44 JoeJulian aricg: IB is infiniband.
23:45 JoeJulian chandank: Yes, kkeithley's fedorapeople repo is the official one for Fedora/EL. ,,(yum repo)
23:45 glusterbot chandank: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
23:46 chandank JoeJulian, Thanks :-)
23:58 JoeJulian blubberdi: Regarding lucid: http://irclog.perlgeek.de/gluster/2012-11-07#i_6133751
23:58 glusterbot <http://goo.gl/dXbqT> (at irclog.perlgeek.de)
