
IRC log for #gluster, 2013-01-09


All times shown according to UTC.

Time Nick Message
00:24 yinyin joined #gluster
01:22 edong23_ joined #gluster
01:29 edong23_ i was looking through the channel logs in the topic... i cannot find an answer for this
01:29 edong23_ the gluster volume doesnt always automount
01:29 edong23_ sometimes it does
01:30 edong23_ sometimes it doesnt
01:30 edong23_ _netdev doesnt make a difference
01:32 edong23_ 3.3.1
01:35 nueces joined #gluster
01:39 sunus joined #gluster
01:43 yinyin_ joined #gluster
02:01 kikupotter joined #gluster
02:04 kikupotter i have a problems :[socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (192.168.15.129:989)
02:04 glusterbot kikupotter: That's just a spurious message which can be safely ignored.
02:05 kikupotter i have no idea for this problem ,please help me .thanks!! :(
02:07 ultrabizweb joined #gluster
02:18 duffrecords in Gluster 3.3.1 is direct-io-mode enabled or disabled by default?  I have not specified it in fstab and it doesn't appear when I run "gluster volume info"
02:18 duffrecords just trying to figure out what it's currently set to
02:20 raven-np joined #gluster
02:47 __Bryan__ joined #gluster
03:04 bharata joined #gluster
03:11 hagarth joined #gluster
03:28 duffrecords left #gluster
03:39 sripathi joined #gluster
03:46 shylesh joined #gluster
03:58 hagarth joined #gluster
04:11 theron joined #gluster
04:23 yinyin joined #gluster
04:27 sgowda joined #gluster
04:37 lala joined #gluster
04:40 kikupotter joined #gluster
04:41 yinyin joined #gluster
04:45 dbruhn joined #gluster
04:45 lala_ joined #gluster
04:54 vpshastry joined #gluster
05:05 deepakcs joined #gluster
05:07 17WAAYBOX joined #gluster
05:17 yinyin joined #gluster
05:22 bulde joined #gluster
05:29 shireesh joined #gluster
05:31 bulde1 joined #gluster
05:31 raghu joined #gluster
05:31 glusterbot New news from resolvedglusterbugs: [Bug 799861] nfs-nlm: cthon lock test hangs then crashes the server <http://goo.gl/6WCTV> || [Bug 800735] NLM on IPv6 does not work as expected <http://goo.gl/OvsQB> || [Bug 802767] nfs-nlm:unlock within grace-period fails <http://goo.gl/oHwe1> || [Bug 803180] Error logs despite not errors to user (client) <http://goo.gl/OwIxy> || [Bug 803637] nfs-nlm: lock not honoured if tried from
05:35 glusterbot New news from newglusterbugs: [Bug 885424] File operations occur as root regardless of original user on 32-bit nfs client <http://goo.gl/BiF6P> || [Bug 885802] NFS errors cause Citrix XenServer VM's to lose disks <http://goo.gl/xil6p> || [Bug 847619] [FEAT] NFSv3 pre/post attribute cache (performance, caching attributes pre- and post fop) <http://goo.gl/qbDjE> || [Bug 847626] [FEAT] nfsv3 cluster aware rpc.statd for NLM
05:40 vijaykumar joined #gluster
05:40 rastar joined #gluster
05:50 ramkrsna joined #gluster
05:50 ramkrsna joined #gluster
05:56 tk421 joined #gluster
05:56 tk421 hi everyone
05:57 tk421 I just installed gluster 3.3.1 in one server and one client
05:57 tk421 and the client cannot mount the server
05:57 tk421 [2013-01-09 16:50:31.955389] E [client-handshake.c:1717:client_query_portmap_cbk] 0-images-graphs-client-0: failed to get the port
05:57 tk421 any ideas ?
06:00 tk421 ok, I forgot to start the volume :S
06:00 tk421 quit
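The "failed to get the port" handshake error above is the classic symptom of a volume that was created but never started, so no brick is listening when the client asks the portmapper. A minimal sketch of the fix, using the volume name images-graphs from the log line and a placeholder hostname gluster1 and mount point:

    # on the server: start the volume and confirm the bricks are listening
    gluster volume start images-graphs
    gluster volume status images-graphs
    # on the client: retry the native mount
    mount -t glusterfs gluster1:/images-graphs /mnt/images-graphs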
06:01 glusterbot New news from resolvedglusterbugs: [Bug 823763] nfs: showmount does not display the new volume <http://goo.gl/WLZQa> || [Bug 823768] nfs: showmount does not display the new volume <http://goo.gl/jGVKb> || [Bug 825197] ping_pong hangs on nfs mount <http://goo.gl/bOHZC> || [Bug 831949] NFS: Missing directories on nfs mount <http://goo.gl/uKZOB> || [Bug 835573] NFS localhost <http://goo.gl/SGPFj> || [Bug 836005] vmware
06:02 sripathi joined #gluster
06:03 sripathi joined #gluster
06:21 hagarth joined #gluster
06:25 bala1 joined #gluster
06:33 overclk joined #gluster
06:45 vimal joined #gluster
06:46 ngoswami joined #gluster
06:57 Nevan joined #gluster
06:58 cyr_ joined #gluster
07:12 vpshastry joined #gluster
07:20 b00tbu9 joined #gluster
07:20 b00tbu9 Hi
07:20 glusterbot b00tbu9: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:21 b00tbu9 I am getting error Connection failed. Please check if gluster daemon is operational.
07:21 b00tbu9 using ubuntu installed glusterfs-common glusterfs-server
07:21 b00tbu9 when trying to add peers
07:21 b00tbu9 I am getting the above error..
07:22 b00tbu9 using glusterfs 3.2
07:22 b00tbu9 the command I used is  sudo gluster peer probe servername
07:24 jtux joined #gluster
07:28 theron joined #gluster
07:31 sripathi joined #gluster
07:34 guigui1 joined #gluster
07:35 guigui1 left #gluster
07:42 puebele joined #gluster
07:50 sripathi joined #gluster
07:52 ekuric joined #gluster
07:53 ekuric joined #gluster
07:56 vpshastry joined #gluster
08:00 puebele joined #gluster
08:01 sjoeboo joined #gluster
08:01 ctria joined #gluster
08:06 sgowda joined #gluster
08:12 andreask joined #gluster
08:19 sjoeboo joined #gluster
08:23 gbrand_ joined #gluster
08:38 hagarth joined #gluster
08:39 bauruine joined #gluster
09:02 JoeJulian b00tbu9: Check to see that glusterd is running.
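The "Connection failed. Please check if gluster daemon is operational." message from gluster peer probe almost always means the management daemon is not running on the node where the command was issued. A rough sketch of the check, assuming the usual service names of that era (glusterfs-server on Ubuntu/Debian packages, glusterd on RPM-based installs):

    ps aux | grep '[g]lusterd'         # is the management daemon up at all?
    service glusterfs-server start     # Ubuntu/Debian package name
    service glusterd start             # RPM-based installs
    gluster peer probe servername
    gluster peer status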
09:04 JoeJulian edong23: ubuntu lucid?
09:05 JoeJulian edong23: If yes, I was able to get it to work with https://gist.github.com/4472816
09:05 glusterbot Title: The glusterfs-server upstart conf would start before the brick was mounted. I added this ugly hack to get it working. I ripped the pre-start script from mysql and decided to leave the sanity checks in place, just because. (at gist.github.com)
09:06 JoeJulian edong23: Otherwise, it should work even without _netdev in ubuntu and with _netdev on an rpm based install.
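For reference, the mount being discussed is an ordinary fstab entry for the native client; _netdev only delays it until networking is up, which is enough on RPM-based distros but not when glusterd itself (or the brick filesystem) is not ready yet, which is what the upstart hack linked above works around. Hostname, volume and mount point below are placeholders:

    # /etc/fstab
    server1:/myvolume   /mnt/myvolume   glusterfs   defaults,_netdev   0 0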
09:19 badone joined #gluster
09:23 Alpinist joined #gluster
09:27 dobber joined #gluster
09:30 4JTAAXQ78 joined #gluster
09:32 b00tbu9 JoeJulian it is working now.. but now client mount has permission error..
09:33 khushildep joined #gluster
09:33 b00tbu9 some ??? is shown and when I try to access it, it says Transaction end point not connected
09:34 sjoeboo_ joined #gluster
09:35 chirino_m joined #gluster
09:36 overclk_ joined #gluster
09:37 Alpinist_ joined #gluster
09:38 nueces_ joined #gluster
09:50 hateya joined #gluster
10:11 rcheleguini joined #gluster
10:13 sjoeboo joined #gluster
10:15 yinyin joined #gluster
10:28 lorderr joined #gluster
10:29 sjoeboo joined #gluster
10:46 khushildep joined #gluster
10:50 rastar1 joined #gluster
10:59 manik joined #gluster
11:10 rastar joined #gluster
11:14 yinyin joined #gluster
11:28 toruonu joined #gluster
11:30 toruonu it seems our gluster has gotten into a weird state where most processes that have some i/o are stuck even if the i/o is marginal. Then again ls for example works even though it takes time
11:30 toruonu when I monitor one brick node logs they don't seem to get any entries at all as if the system's dead
11:30 toruonu on some node I can't even start top to see who's doing what
11:31 toruonu ideas how to figure out who's offending without going over 30+ servers that have it mounted?
11:31 toruonu especially considering some of those nodes aren't really responsive right now
11:33 stigchri_ joined #gluster
11:39 sripathi left #gluster
11:40 edward1 joined #gluster
11:41 ndevos toruonu: so, this just happened without a reason you can think of? maybe some update was done?
11:41 tru_tru joined #gluster
11:41 toruonu no gluster's been running as is
11:41 toruonu for weeks
11:41 toruonu everyone's doing their work and all of a sudden when I was merging 8 files together it got stuck
11:41 ndevos toruonu: updates of non-gluster components, like the kernel?
11:41 toruonu no things just were running
11:41 ndevos hmm
11:41 toruonu no touching this year
11:42 toruonu I'm getting such errors in nfs.log:
11:42 toruonu http://fpaste.org/hS1M/
11:42 glusterbot Title: Viewing [2013-01-09 13:31:49.025001] E [rpc- ... -01-09 13:01:44.808274. timeout = 18 ... Transport endpoint is not connected (at fpaste.org)
11:42 toruonu on one of the brick nodes
11:43 toruonu on some nodes top when started doesn't really start, you can't kill it with ctrl+c you can't suspend it with ctrl+z and logging in again kill -9 won't kill things
11:44 ndevos toruonu: well, the nfs-service is just a glusterfs-client, but the times of the lines in that log are pretty close together
11:44 toruonu well that's also ca 15 min ago
11:44 toruonu local time
11:44 toruonu it's 13:44 right now
11:44 toruonu and the crap started around the time it says 13:01 +- ...
11:45 toruonu don't see any entries in messages file for i/o errors on any of the brick nodes so likely the disks are fine… according to gluster volume status it's all nice and dandy
11:46 ndevos toruonu: call_bail caused bug 843003, maybe thats what you're hitting?
11:46 glusterbot Bug http://goo.gl/kYcf2 high, urgent, ---, kaushal, ON_QA , call_bail of a frame in glusterd might lead to stale locks in the cluster
11:47 ndevos although I'm not sure how that would affect the processes on the gluster servers...
11:47 toruonu etc-glusterfs-glusterd.vol.log:[2013-01-09 13:45:51.317305] I [glusterd-op-sm.c:2653:glusterd_op_txn_complete] 0-glusterd: Cleared local lock
11:47 toruonu so don't think so
11:47 ndevos ah, ok
11:48 toruonu I can do an ls on the volume, it works, but takes a long time
11:48 toruonu but processes are/get stuck that are started on the volume
11:48 toruonu it's /home so basically anything a user does :)
11:49 ndevos right, but processes run with a different $PWD that do not access any files undet $HOME run fine?
11:49 toruonu a simple ls of a users home directory takes normally under a second, now took 25s
11:49 ndevos s/undet/under/
11:49 glusterbot ndevos: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
11:50 toruonu seems glusterbot's hosed too :p
11:50 toruonu ndevos: yeah logging in as root who's home's not there works fine
11:50 toruonu I can start top and it works
11:51 toruonu logging in as an ordinary user that top got stuck and is still stuck
11:51 toruonu http://fpaste.org/V5v1/
11:51 glusterbot Title: Viewing [mario@phys ~]$ uptime 11:34:34 up 4 ... :00, 11 users, load average: 21.81, ... 83 [mario@phys ~]$ top ^C^C^C^Z ^Z^Z (at fpaste.org)
11:51 toruonu it's now 13:51
11:51 toruonu so it's been stuck for nearly 20 minutes
11:52 toruonu ah this system's got a whacko time
11:52 ndevos right, so either the glusterfs-client (nfs?) can not contact the bricks in a timely manner, or the bricks fail to respond
11:52 toruonu it's 11:58 on that node :P I guess I need to set ntpd on that node
11:53 ndevos I guess that each process you start as a user will result in D-state (waiting on I/O), and adding +1 to the loadavg
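A quick way to test that theory is to list the processes in uninterruptible sleep; every task stuck in D state adds one to the load average even when no CPU is being used. A sketch, assuming the usual procps output fields:

    # processes in D (uninterruptible) state and the kernel function they wait in
    ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'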
11:54 toruonu but it's uniform, all clients are hosed
11:55 toruonu I just logged into a node that's not doing anything
11:55 toruonu and has /home mounted
11:55 toruonu and doing an ls /home it took quite some time
11:57 ndevos well, thats expected if the bricks do not respond quickly
11:58 ndevos so, can you check what the glusterfsd processes are doing? there may be some hints in the brick logs
11:59 toruonu if they did log anything indeed...
11:59 toruonu I'll have to go over ALL bricks
11:59 toruonu I think the one main downside of gluster is very bad debugging capacity… :)
12:00 ndevos yeah, I agree, but what would make things easier?
12:00 ndevos if we can propose some changes that help with debugging, the devs can be convinced to implement that :)
12:01 toruonu it's a valid point, but it really would help if one could understand what gluster's doing right now :) volume status lists just the services
12:01 toruonu brick health information, last check, activity …
12:01 toruonu performance measurement might say something, but ...
12:02 toruonu brick logs don't have entries for today, last entries from *.log; *.log.1 list on some bricks some unlink errors from yesterday, but nothing from today
12:03 ndevos you can get a state-dump from the glusterfsd process by sending it a USR1 or USR2 signal with kill - the file will be under /tmp and contain the PID
12:04 ndevos now, the only issue is understanding that state-dump :-/
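For the record, the state-dump procedure described above looks roughly like this; the PID is a placeholder, USR1 is the signal normally used (or USR2, per the comment above), and on this release the dump lands under /tmp with the PID embedded in the filename:

    pgrep -f glusterfsd        # one PID per brick process on the server
    kill -USR1 <pid>           # ask that brick process for a statedump
    ls -lt /tmp | head         # the newest file there should be the dump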
11:59 toruonu well right now the debugging has gone long enough that users are sharpening their killing utensils so trying a shutdown of the two users servers and remounting /home to see if that helps
12:11 toruonu the only thing that I can consider that might have caused it is that I was adding together 8 files, discovered that one file was too many and moved the file to a new filename, but the original process was still running… I guess that might have hosed gluster? though if it did it's not a very safe system...
12:12 ndevos I guess the issue is with the bricks, and less with the glusterfs-clients - based on that nfs-log you pasted earlier
12:13 ndevos not sure what you mean by "adding together 8 files", but as long as you've done it all through a client-mount, it should not break the system IMHO
12:14 toruonu yeah they were done on the filesystem over nfs
12:14 toruonu anyway interesting state now
12:14 toruonu I did on all gluster daemons service glusterd stop; service glusterfsd stop
12:15 toruonu after that I still have gluster processes running
12:15 toruonu http://fpaste.org/jfZr/
12:15 glusterbot Title: Viewing root 5799 1 0 Jan01 ? 01:03:12 /usr/ ... host --volfile-id gluster/glustershd ... 0 14:15 pts/0 00:00:00 grep gluster (at fpaste.org)
12:15 toruonu wanted a clean restart to get rid of all stale handles
12:15 hagarth joined #gluster
12:16 cyr_ joined #gluster
12:16 ndevos those are clients, I would expect "service glusterd stop" to kill them, but it's also not really required
12:17 ndevos oen is the nfs-server process, maybe it does not exit when nfs-clients still use it?
12:17 ndevos s/oen/one/
12:17 glusterbot What ndevos meant to say was: one is the nfs-server process, maybe it does not exit when nfs-clients still use it?
12:17 ndevos glusterbot: oh, well done!
12:18 yinyin joined #gluster
12:18 dustint joined #gluster
12:18 * ndevos has to go now, might be back later
12:18 toruonu ok, seems that killing all of the gluster processes stopped all the stale processes and things may recover now
12:18 khushildep joined #gluster
12:19 ndevos okay, that would point to an issue in the nfs-server part
12:19 toruonu yes, managed to merge the files now
12:19 toruonu so we're back, but that was one nasty hiccup
12:20 ndevos maybe you can reproduce that problem again? abort some merging of some kind? if you manage that, file a bug with a good description so that it can be fixed
12:20 glusterbot http://goo.gl/UUuCq
12:20 toruonu I'd use the fuse mount, but it's too slow for day-to-day operations
12:21 yinyin_ joined #gluster
12:21 toruonu well right now we're kind of in the middle of something that's really urgently needed
12:21 ndevos sure, and maybe not reproduce on your production environment either :)
12:21 toruonu :D
12:22 toruonu I guess I'll have to indeed set up a testbed as well
12:22 ndevos I highly recommend that if you do not have one!
12:22 * ndevos <- gone
12:24 edong23 joined #gluster
12:28 edong23 joined #gluster
12:41 deckid joined #gluster
12:48 sjoeboo joined #gluster
13:01 yinyin joined #gluster
13:02 mohankumar joined #gluster
13:07 petersaints joined #gluster
13:08 rastar joined #gluster
13:08 manik joined #gluster
13:27 theron joined #gluster
13:33 bala joined #gluster
13:34 akenney joined #gluster
13:40 vpshastry joined #gluster
13:53 vpshastry joined #gluster
13:55 aliguori joined #gluster
13:56 jtux joined #gluster
13:59 svx_ joined #gluster
14:00 lala_ joined #gluster
14:02 khushildep joined #gluster
14:03 wN joined #gluster
14:07 hagarth @channelstats
14:07 glusterbot hagarth: On #gluster there have been 69393 messages, containing 3097813 characters, 519017 words, 2123 smileys, and 273 frowns; 510 of those messages were ACTIONs. There have been 23500 joins, 870 parts, 22685 quits, 8 kicks, 43 mode changes, and 5 topic changes. There are currently 177 users and the channel has peaked at 192 users.
14:12 semiosis :O
14:13 hagarth :O
14:22 balunasj joined #gluster
14:23 obryan joined #gluster
14:26 cicero not that anyone's counting or anything.
14:29 smellis interesting that there are many more smiles than frowns
14:30 bfoster heh, I think we could improve the ratio though. :) :) :)
14:30 semiosis it used to count :O as 'yells'... where'd that go JoeJulian?
14:35 smellis hah
14:37 rastar joined #gluster
14:38 hagarth more from me to improve the ratio :) :) :D
14:39 * jdarcy o_0
14:44 x4rlos 8 kicks? hehehe
14:44 plarsen joined #gluster
14:46 stopbit joined #gluster
14:46 sjoeboo joined #gluster
14:48 edong23 JoeJulian: no, centos 6.3
14:48 edong23 not ubuntu
14:48 edong23 sometimes, reboot and it mounts
14:48 edong23 sometimes reboot and it doesnt
14:49 smellis edong23: found a patch from a while ago for the netfs ra that could mount it, which is how I'd rather do it
14:49 smellis edong23: but I also tried to mount the backing store using an ra, and I couldn't get it to work, so banging away at that tonight
14:53 edong23 smellis: ... so, no cluster resource mounting?
14:53 lh joined #gluster
14:53 lh joined #gluster
14:55 rwheeler joined #gluster
14:56 edong23 smellis: rc.local
14:56 dbruhn__ joined #gluster
14:57 smellis ha
14:57 smellis edong23: it wasn't mount the xfs store for me, but it was 1am, so I can't be sure about the config
14:58 smellis edong23: i'm confident it will work, now that I'm fresh, prolly leave work early
14:58 edong23 k
14:58 smellis edong23: cluster comes up nicely though
14:58 edong23 ill knock the shit out of my customers and their crappy appointments and head over early too
14:58 smellis and it'll start libvirt
14:58 smellis haha
14:58 smellis kewl
14:59 vimal joined #gluster
14:59 smellis we'll need to do some patching for netfs ra and maybe make an installer so that we don't have to do all this by hand everytime
15:00 edong23 i know you think im joking... but trying an rc.local mount will tell us whether its a timing issue, or indeed something in fstab or whatnot
15:00 edong23 rc.local would be after all other startup
15:01 smellis doesn't really matter, because the cluster needs to manage it so that it doesn't try to start vms without disk images
15:01 smellis we're damn close
15:03 dbruhn__ I use the rc.local method too, and have an application that relies on gluster running. I just mount, then put a sleep count in there, and then start the application
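A minimal sketch of that rc.local approach, with the volume name, mount point, sleep interval and application all as placeholders (the resource-agent route discussed above remains the cleaner option):

    # appended to /etc/rc.local, which runs after the regular init scripts
    mount -t glusterfs localhost:/myvolume /mnt/myvolume
    sleep 30                       # crude wait for the volume to become usable
    service myapp start            # whatever depends on the mount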
15:08 erik49 joined #gluster
15:09 tru_tru joined #gluster
15:32 dustint joined #gluster
15:34 theron joined #gluster
15:35 dustint joined #gluster
15:39 bugs_ joined #gluster
15:41 ndevos edong23: http://review.gluster.org/3043 adds resource agents for glusterfs, maybe thats what you're looking for?
15:41 glusterbot Title: Gerrit Code Review (at review.gluster.org)
15:41 ndevos edong23: and http://review.gluster.org/4130 shows some commands on how to build it as rpm
15:41 glusterbot Title: Gerrit Code Review (at review.gluster.org)
15:44 dustint joined #gluster
15:48 smellis ndevos: agreed, but our cluster nodes are also gluster clients who need to mount the volume, so we'll need to facilitate that as well
15:50 rvrangel joined #gluster
15:53 ndevos smellis: right, but you can use a normal 'fs' ra for that, cant you?
15:54 sjoeboo joined #gluster
15:56 jbrooks joined #gluster
15:57 smellis it has to be modified because it checks for known fs types, extX,xfs,jfs,etc...
15:58 smellis and I'll probably modify (read: abuse) the netfs ra
16:02 daMaestro joined #gluster
16:06 rastar joined #gluster
16:09 Azrael808 joined #gluster
16:12 manik joined #gluster
16:26 zaitcev joined #gluster
16:33 JuanBre joined #gluster
16:42 theron joined #gluster
16:44 ne19 joined #gluster
16:48 lala joined #gluster
17:03 tc00per Answer to my problem with high/constant load on 'new' idle gluster servers running CentOS 6.3 and GlusterFS 3.3.1-6 seems to be unrelated to GlusterFS but is an issue with xfs. There is a bug in bugzilla (https://bugzilla.redhat.com/show_bug.cgi?id=813137) but I don't know if/when it'll be resolved. Updated here as documentation.
17:03 glusterbot <http://goo.gl/l6eky> (at bugzilla.redhat.com)
17:03 glusterbot Bug 813137: high, urgent, rc, bfoster, VERIFIED , [xfs/xfstests 273] heavy cp workload hang
17:17 johnmark tc00per: that's good to know
17:17 johnmark thanks for sharing
17:22 JoeJulian tc00per: Great... Hope that's fixed before it filters down to CentOS.
17:23 JoeJulian johnmark: I need to finalize plans for doing that workshop at cascadia.
17:24 ascii0x3F joined #gluster
17:26 theron joined #gluster
17:26 lala_ joined #gluster
17:40 Mo__ joined #gluster
17:44 nueces joined #gluster
17:50 bauruine joined #gluster
17:59 y4m4 joined #gluster
18:03 tc00per Sorry to be chatty on this but it looks like a different bug is more appropriate for my high load on XFS backed glusterfs and now comments are specifically mentioning problems with GlusterFS. Might be good to follow this one (https://bugzilla.redhat.com/show_bug.cgi?id=883905) and/or to chime in with any problems if you have had any. I haven't had any because this is a new system/build without data but it makes me hesitant to put this into production.
18:03 glusterbot <http://goo.gl/0OitI> (at bugzilla.redhat.com)
18:03 glusterbot Bug 883905: unspecified, unspecified, rc, bfoster, NEW , xfsaild in D-state
18:04 JoeJulian Don't be sorry. This is good stuff. :D
18:09 bfoster tc00per: I'm not sure how valid the last comment is in that bug
18:09 bfoster (wrt to the application to gluster/rhev)
18:09 johnmark JoeJulian: +1
18:09 JoeJulian johnmark: I need to finalize plans for doing that workshop at cascadia.
18:10 johnmark JoeJulian: ok. please let me know what you need from me. I will do anthing to make that happen
18:11 johnmark I assume that means sponsoring the conference and paying them
18:11 JoeJulian Hookers and blow, buddy...
18:12 JoeJulian Actually you were going to put me in touch with someone at Dell to see about providing equipment. Are you going to send someone to do the workshop(s) with me? I can do Gluster, but if we have the space and the setup, why not do oVirt and whatever else you can think of.
18:15 johnmark JoeJulian: will definitely send speakers and others to help
18:15 johnmark including yours truly!
18:15 johnmark JoeJulian: what kind of equipment do you need?
18:16 tc00per bfoster: Agreed... and good reason to follow the bug I imagine which obviously you are doing. Good to know you are in both places at once... :)
18:17 JoeJulian I was thinking some VM hosts and as many small clients as we can fit in a room.
18:23 RicardoSSP joined #gluster
18:27 johnmark JoeJulian: ok
18:29 johnmark JoeJulian: I'll put in a request this week
18:29 johnmark JoeJulian: we need to start getting demo equipment anyway, given how much we're going to be on the road this year
18:30 RicardoSSP hey guys, does someone know if there is any prospect of being able to compile gluster on freebsd?
18:31 RicardoSSP I've seen some patches for netbsd but when trying to compile on freebsd, it gave me some errors
18:32 JoeJulian I'm afraid I don't know much more than you on that one.
18:32 johnmark RicardoSSP: yeah, I've been looking for someone to help us port to FreeBSD
18:33 johnmark because right now, it works on NetBSD, thanks to our NetBSD contributor
18:33 johnmark he patches all of our releases whenever it breaks the NetBSD build
18:33 RicardoSSP johnmark, I've tried a lot of things, also applying the netbsd patches but it doesn't work. I can help, but I don't know anything about C coding
18:33 JoeJulian I assume manu could help with the errors though. Go ahead and file a bug report and comment on it to the gluster-devel mailing list.
18:33 glusterbot http://goo.gl/UUuCq
18:33 johnmark RicardoSSP: yeah, this requires someone who can code
18:34 johnmark RicardoSSP: what JoeJulian said
18:34 johnmark RicardoSSP: manu once told me that the amount of work is significant and that the NetBSD patches wouldn't work
18:35 RicardoSSP johnmark, but do you know the main cause of this behaviour? I really couldn't understand the error, just saw on the code a define for netbsd that changes something about the inodes, i think
18:36 RicardoSSP I can try to reproduce this bug and open something on bugzilla, this could help just to show the error to the developers, right?
18:36 johnmark RicardoSSP: that I couldn't tell you. I would need a freebsd box to say anything semi-intelligent
18:36 johnmark RicardoSSP: yes
18:36 RicardoSSP johnmark, nice, I will do this :)
18:36 RicardoSSP thanks :)
18:37 RicardoSSP I want to try gluster with zfs features as compression and other stuffs (ssd caches, etc) but on linux the performance is too poor, and I'm not so good on opensolaris / openindiana to take the shot ;)
18:39 johnmark RicardoSSP: if you open a bug on bugzilla, make sure to send a note to gluster-devel
18:39 RicardoSSP johnmark, right, on the channel or on email?
18:40 JoeJulian I would recommend email for Manu.
18:41 RicardoSSP right :) thanks a lot
18:41 JoeJulian I could be wrong though, but I notice emails from him more often than him talking on IRC.
18:42 johnmark JoeJulian: yes
18:44 JoeJulian johnmark: So GlusterFS and oVirt (I assume). Any other workshops you can think of that you guys would want to do? I don't want to say, hey, give us a whole room for the conference for these three (I'm going to con someone from puppetlabs into doing something) workshops.
18:45 lh joined #gluster
18:45 JoeJulian I wonder if I could get whack to come do something on logstash...
18:47 johnmark JoeJulian: I was thinking of puppetconf. but I could also see openshift
18:47 johnmark but my priorities are gluster and ovirt :)
18:51 rwheeler joined #gluster
19:04 grosso joined #gluster
19:14 bauruine joined #gluster
19:18 duffrecords joined #gluster
19:21 duffrecords in Gluster 3.3.1 is direct-io-mode enabled or disabled by default?  I have not specified it in fstab and it doesn't appear when I run "gluster volume info"
19:22 a2 duffrecords, by default it is a somewhat "hybrid" mode
19:23 duffrecords I was interested in trying "disable" because of this post http://gluster.org/pipermail/gluster-users/2012-October/034526.html but if the results don't turn out satisfactory is there a way to set it back to this "hybrid?"
19:23 glusterbot <http://goo.gl/7zLRF> (at gluster.org)
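direct-io-mode is a client-side option of the fuse mount rather than a volume setting, which is why it never shows up in "gluster volume info"; switching it back and forth is just a remount. A sketch with placeholder server, volume and mount point:

    mount -t glusterfs -o direct-io-mode=disable server1:/myvolume /mnt/myvolume
    # to return to the default ("hybrid") behaviour, remount without the option:
    umount /mnt/myvolume
    mount -t glusterfs server1:/myvolume /mnt/myvolume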
19:29 fubada joined #gluster
19:29 fubada hi all
19:29 fubada what should be the order of operations for upgrading machines that have gluster installed?
19:29 fubada servers first?
19:29 fubada clients first?
19:32 JoeJulian servers first
19:32 fubada thanks
19:32 JoeJulian But going between minor versions (thus far) they're incompatible so you have to do both (3.2 to 3.3 for instance).
19:33 fubada glusterfs-server-3.3.1-3.el6.x86_64 is whats installed
19:33 fubada glusterfs                    x86_64 3.3.1-6.el6                       glusterfs-x86_64           1.8 M
19:33 fubada is whats offered
19:33 JoeJulian No worries then.
19:34 fubada thanks
19:34 JoeJulian In fact, iirc, there's nothing between -3 and -6 that'll even affect you unless you're using swift.
19:34 fubada im not
19:35 fubada whoa hang on
19:35 fubada it did something funky
19:36 fubada warning: /var/lib/glusterd/vols/teldata-public/teldata-public.gluster2.app.us1.teladoc.com.gluster-bricks-teldata-public.vol saved as /var/lib/glusterd/vols/teldata-public/teldata-public.gluster2.app.us1.teladoc.com.gluster-bricks-teldata-public.vol.rpmsave
19:36 fubada how come?
19:36 JoeJulian Because they're recreated from the info file when glusterd starts.
19:36 fubada ok
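Sketching the server-first order discussed above for an RPM-based node moving within the same minor series (3.3.1-3 to 3.3.1-6 here); package and service names are the usual ones for that packaging but treat them as assumptions. The .rpmsave warning seen above is harmless because the per-brick .vol files are regenerated from the info file when glusterd starts:

    # on each server, one at a time
    service glusterd stop
    yum update glusterfs glusterfs-server glusterfs-fuse
    service glusterd start
    gluster volume status
    # then update and remount the clients
    yum update glusterfs glusterfs-fuse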
19:38 glusterbot New news from newglusterbugs: [Bug 861947] Large writes in KVM host slow on fuse, but full speed on nfs <http://goo.gl/UWw7a>
20:02 rwheeler joined #gluster
20:04 DaveS__ joined #gluster
20:08 andreask joined #gluster
20:30 kkeithley johnmark: any word on a date for the CERN visit?
20:36 badone joined #gluster
20:40 johnmark kkeithley: looking like 2/6
20:40 johnmark I'll forward you the latest
20:41 gbrand_ joined #gluster
20:43 isomorphic joined #gluster
20:50 polfilm joined #gluster
21:02 andreask1 joined #gluster
21:05 chouchins joined #gluster
21:06 sipane joined #gluster
21:10 polfilm joined #gluster
21:16 sipane joined #gluster
21:17 polfilm joined #gluster
21:30 Sachin_ joined #gluster
21:34 nueces joined #gluster
21:36 cyr_ joined #gluster
21:38 andreask joined #gluster
21:40 duffrecords1 joined #gluster
21:57 sipane Does anyone have any experience of running glusterfs as a backend for KVM guests?
21:58 sipane I'm interested in setting up a couple of KVM hosts but would like live failover of the backend if possible.
22:04 sipane it's like a crypt in here :(
22:04 JoeJulian Only for the impatient. :P
22:04 haidz ^
22:04 JoeJulian Yep, I do that myself.
22:05 * JoeJulian notes the irony....
22:05 sipane Not dead yet :)
22:05 JoeJulian Hehe
22:06 JoeJulian Hey, I'm wearing that T-Shirt today.
22:07 sipane I haven't yet tried either gluster or drbd, currently use rsync + heartbeat
22:07 VSpike I'm hitting a problem trying to do a tar backup on a gluster FS from a client machine where I get "file changed while we read it" errors for each file...
22:07 JoeJulian ouch... drbd has never made me happy...
22:08 VSpike I did a search on this and results seemed to suggest this happens with NFS not FUSE (I'm using the native gluster client)...
22:08 VSpike and that there's no fix but only a workaround (use star instead)
22:08 VSpike Is that correct?
22:08 sipane What about drbd made gluster the better choice?
22:08 JoeJulian I assume you're not changing the files as you're tarring them?
22:08 VSpike I'm not, no
22:09 semiosis VSpike: i get that too but i ignore it and life seems to go on ok
22:09 VSpike semiosis: that would have been my next question, probably :) can I just ignore it
22:09 * JoeJulian digs up drbd memories and attempts to suppress the rage...
22:10 sipane Thanks for putting yourself through the pain
22:10 andreask joined #gluster
22:10 sipane It seemed a fairly straightforward choice, but some of the info on the web about gluster 2-3 years ago made me feel a bit wary about considering it for production
22:11 JoeJulian The only rough patch was that prior to 3.3, healing was a blocking operation so the vm's would have to wait if you had a network partition.
22:12 sipane Thanks for the info, much appreciated
22:12 JoeJulian Now, that said, I do only use the vm image for the OS. For any data that VM needs, I mount a gluster volume.
22:13 VSpike Hm, perhaps I should be doing this on the server instead
22:13 sipane Cool hadn't thought of that approach
22:13 VSpike And perhaps I should have used something that can do snapshots, like btrfs or lvm
22:13 JoeJulian It's currently much more efficient that way. File system block operations across a network are not very efficient.
22:13 JoeJulian However...
22:13 neofob left #gluster
22:14 JoeJulian 3.4 has a library interface that's already supported in qemu-kvm.
22:14 theron joined #gluster
22:14 VSpike semiosis: tar eventually fails... too many warnings perhaps?
22:14 JoeJulian So qemu will establish a direct connection to the bricks instead of going through fuse. This makes it significantly faster than going through fuse.
22:15 sipane That sounds very encouraging.
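For reference, the qemu side of that libgfapi integration ends up looking roughly like this once a gluster-enabled qemu build and GlusterFS 3.4 are in place; the hostname, volume and image name are placeholders and the exact option syntax depends on the qemu version:

    qemu-img create gluster://server1/myvolume/vm1.img 20G
    qemu-system-x86_64 -drive file=gluster://server1/myvolume/vm1.img,if=virtio,cache=none ...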
22:15 semiosis VSpike: never heard of too many warnings adding up to an error
22:16 JoeJulian Ooh, I wonder...
22:17 BobbyD_FL joined #gluster
22:17 VSpike semiosis: my mistake - my script spotted the non-zero exit code and said there was an error, but I think it's just tar's code for "some files changed"
22:17 semiosis ah
22:17 BobbyD_FL left #gluster
22:18 JoeJulian VSpike: I wonder if it throws that warning if you set the mount option "attribute-timeout=0"
22:21 sipane JoeJulian thanks very much for the info
22:22 JoeJulian Any time
22:23 andrewbogott joined #gluster
22:23 sipane left #gluster
22:23 andrewbogott joined #gluster
22:24 duffrecords joined #gluster
22:25 VSpike Darn it .. fuse doesn't support -o remount :)
22:30 lh joined #gluster
22:30 lh joined #gluster
22:30 JoeJulian VSpike: Just mount the volume again in another directory, if you've got users using it, and try from the new mount.
22:31 VSpike Good point
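What is being tried here, roughly: a second mount of the same volume with the fuse attribute cache disabled, sitting alongside the mount the users keep (server, volume name and paths are placeholders):

    mkdir -p /mnt/home-test
    mount -t glusterfs -o attribute-timeout=0 server1:/homevol /mnt/home-test
    tar -czf /backup/home.tar.gz -C /mnt/home-test .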
22:32 edong23_ joined #gluster
22:34 semiosis if that works please put a q&a on community.gluster.org i'm sure lots of people would appreciate getting rid of those annoying warnings
22:37 VSpike JoeJulian: It reduces them by a huge amount :) I only get a few. Also, it seems to run an order of magnitude faster
22:38 erik48 joined #gluster
22:38 petersaints joined #gluster
22:38 JoeJulian Interesting...
22:43 VSpike Sorry, I was completely wrong
22:43 VSpike Forgive me :) It's late here..
22:44 VSpike Script doesn't use -v, which makes it look like there are more errors - with -v it writes more output, making it look like it goes quicker. In fact time, and grep/wc -l, tell me that with attribute-timeout=0 it's slower and produces more errors
22:45 VSpike Apologies for misleading you
22:46 raven-np joined #gluster
22:47 VSpike A few repetitions and it's pretty consistent
22:47 VSpike http://pastie.org/5657768
22:47 glusterbot Title: #5657768 - Pastie (at pastie.org)
22:48 VSpike first one has the attribute-timeout=0
23:06 RicardoSSP JoeJulian, johnmark opened the bugzilla: 893795
23:06 RicardoSSP :)
23:07 hateya joined #gluster
23:09 glusterbot New news from newglusterbugs: [Bug 893795] Gluster 3.3.1 won't compile on Freebsd <http://goo.gl/U8QFF>
23:11 hattenator joined #gluster
23:25 andreask1 joined #gluster
23:31 H__ joined #gluster
23:47 theron joined #gluster
