
IRC log for #gluster, 2013-04-17


All times shown according to UTC.

Time Nick Message
00:05 duerF joined #gluster
00:10 genewitch 2.6.32
00:10 genewitch they're already on 3.8
00:11 genewitch i think, i haven't looked since i stopped using gentoo for servers every day
00:18 nueces joined #gluster
00:27 vrturbo joined #gluster
00:55 yinyin joined #gluster
00:57 m0zes joined #gluster
01:18 cyberbootje joined #gluster
01:27 harish joined #gluster
01:36 d3O joined #gluster
01:57 balunasj joined #gluster
02:05 JoeJulian genewitch: Yes, enterprise linux uses a "stable" kernel, backporting bug fixes and the occasional structure change that breaks ext4. :)
02:07 d3O_ joined #gluster
02:09 d3O__ joined #gluster
02:12 jikz joined #gluster
02:26 brunoleon joined #gluster
02:27 tyl0r joined #gluster
02:42 jikz joined #gluster
02:43 d3O joined #gluster
02:44 lalatenduM joined #gluster
02:53 kevein joined #gluster
03:04 vshankar joined #gluster
03:27 hagarth joined #gluster
03:55 flrichar joined #gluster
04:02 vpshastry1 joined #gluster
04:08 jiqiren is log concurrent safe? as in can I just go log.Println("stuff") over and over?
04:09 raghu joined #gluster
04:23 itisravi joined #gluster
04:29 hagarth joined #gluster
04:34 mohankumar joined #gluster
04:35 sjoeboo_ joined #gluster
04:44 fidevo joined #gluster
04:46 sgowda joined #gluster
04:52 hchiramm_ joined #gluster
05:02 bala joined #gluster
05:14 y4m4 joined #gluster
05:27 aravindavk joined #gluster
05:36 rastar joined #gluster
05:38 glusterbot New news from newglusterbugs: [Bug 952975] Files appear in stat but are lost to readdir <http://goo.gl/wKTYT>
05:41 rotbeard joined #gluster
05:44 bulde joined #gluster
05:51 vimal joined #gluster
05:55 satheesh joined #gluster
06:03 Nevan joined #gluster
06:03 rgustafs joined #gluster
06:13 deepakcs joined #gluster
06:19 satheesh joined #gluster
06:21 sjoeboo_ joined #gluster
06:22 mnaser joined #gluster
06:32 guigui3 joined #gluster
06:37 rgustafs joined #gluster
06:38 ricky-ticky joined #gluster
06:38 Uzix joined #gluster
06:40 Uzix Hi! What disadvantages of mounting glusterfs on client through NFS?
06:44 samppah native client has better performance and it's fault tolerant by design because it connects to all servers, whereas the NFS client connects only to one
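For reference, the two client mount styles being compared here might look like the following on a client box; "server1" and "myvol" are placeholder names, and gluster's built-in NFS server speaks NFSv3 only:

    # native (FUSE) client: connects to every brick server, fails over by itself
    mount -t glusterfs server1:/myvol /mnt/myvol

    # NFS client: talks only to the single server it mounted from
    mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/myvol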
06:45 ekuric joined #gluster
06:48 Uzix ty
06:49 itisravi joined #gluster
06:56 puebele joined #gluster
07:05 dobber_ joined #gluster
07:08 rb2k joined #gluster
07:13 ollivera joined #gluster
07:14 puebele joined #gluster
07:16 vpshastry1 joined #gluster
07:28 Guest42893 joined #gluster
07:34 andreask joined #gluster
07:35 social_ joined #gluster
07:36 social_ Hi, I'm a bit puzzled, does gluster support flock? Or should this question be more like: does fuse support flock? I've put splunk search head pooling on gluster thinking it should work, and well, it did not.
07:37 social_ anyone tried splunk on gluster? :)
07:39 atpas joined #gluster
07:47 mtanner_w joined #gluster
07:48 efries joined #gluster
07:51 semiosis joined #gluster
07:56 phix social_: no
07:56 phix Apparently glusterfs isn't too great at running between hosts on a slow, intermittent link
08:05 ngoswami joined #gluster
08:09 flrichar joined #gluster
08:14 jikz joined #gluster
08:16 hchiramm__ joined #gluster
08:19 spider_fingers joined #gluster
08:19 Norky joined #gluster
08:26 sohoo joined #gluster
08:27 Chiku|dc why when I stop my volume, I got this message :
08:27 Chiku|dc [glusterd-utils.c:1316:glusterd_brick_unlink_socket_file] 0-glusterd: Failed to remove /tmp/8e561e0da529589512c8bf756731aa80.socket error: Resource temporarily unavailable
08:28 Chiku|dc but still the file is removed
08:31 mnaser joined #gluster
08:40 jclift joined #gluster
09:00 sohoo i see brick pairs with slightly different sizes (around 2G difference on some of them). what is the best way to check the integrity of replication pairs? what would be the best rsync options/switches to sync the HDs (outside gluster), xattrs etc..
09:05 bulde joined #gluster
09:06 sohoo will rsync -aHAX --delete do ok?
09:06 rnts left #gluster
09:08 badone joined #gluster
09:08 gbrand_ joined #gluster
09:17 jclift sohoo: As a thought, maybe look into the gluster healing commands?
09:17 jclift sohoo: That should do the checking itself, and also "fix" any problems it encounters.
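The healing commands jclift is pointing at are presumably the "gluster volume heal" family (GlusterFS 3.3 and later); "myvol" is a placeholder volume name:

    gluster volume heal myvol              # heal files the self-heal daemon knows need it
    gluster volume heal myvol full         # crawl the whole volume and heal everything
    gluster volume heal myvol info         # list entries still pending heal
    gluster volume heal myvol info split-brain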
09:19 hagarth joined #gluster
09:19 Nagilum_ no, not any problems
09:21 sohoo jclift tnx, what would do the checking?
09:24 sohoo you mean the rsync?
09:25 duerF joined #gluster
09:27 Nagilum_ sohoo: stat() on all files, that should bring the replicas in sync, unless there is a split-brain
09:27 rastar1 joined #gluster
09:28 Nagilum_ "find . -type f >/dev/null" will do such a stat()
09:28 sohoo thanks, i know ls -R /mnt.. it was the old way before the auto self-heal daemon. does the self-heal daemon do that now or do we still need ls -R?
09:29 jclift Heh, my knowledge is so lopsided with Gluster still.
09:29 jclift I know some parts well, other parts are still black boxes to me. :(
09:35 rastar joined #gluster
09:37 sohoo ok, so testing yourself is still the best practice. I'll try that and see if it fixes the size issues
09:37 bleon joined #gluster
09:38 duerF joined #gluster
09:40 rastar1 joined #gluster
09:43 joehoyle- joined #gluster
09:45 Nagilum_ sohoo: ls -lR /mnt.. , without -l it will just do a readdir() and no stat
09:51 aravindavk joined #gluster
09:52 andreask joined #gluster
10:01 ujjain joined #gluster
10:07 sohoo tnx, i know. the -l is the most important to gluster :)
10:08 H__ Nagilum_: can one do a find without invoking stat ?
10:08 Nagilum_ find .
10:08 Nagilum_ will only do readdir
10:09 H__ but you said : "find . -type f >/dev/null" will do such a stat()
10:09 Nagilum_ yes
10:09 H__ so it's the type f that makes it stat ?
10:09 Nagilum_ yep
10:09 H__ cool. makes sense. thanks.
10:09 neofob joined #gluster
10:10 Nagilum_ what type doesn't really matter, but in order to determine the type, find has to do the stat
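A minimal sketch of the stat-trigger approach described above, run against the client mountpoint (never directly on the bricks); /mnt/myvol is a placeholder path, and as noted the -type test is what forces find to stat() each entry:

    # readdir only, does not trigger the self-heal check
    find /mnt/myvol >/dev/null

    # stat() on every entry, which triggers the self-heal check
    find /mnt/myvol -type f >/dev/null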
10:23 tryggvil joined #gluster
10:29 vshankar joined #gluster
10:30 hagarth joined #gluster
10:43 andreask joined #gluster
10:43 Oneiroi joined #gluster
10:45 duerF joined #gluster
10:47 edward1 joined #gluster
10:53 RicardoSSP joined #gluster
10:53 RicardoSSP joined #gluster
11:06 RobertLaptop joined #gluster
11:07 itisravi joined #gluster
11:14 itisravi_ joined #gluster
11:14 hybrid512 joined #gluster
11:20 dustint joined #gluster
11:22 kkeithley joined #gluster
11:31 SteveCooling Hi! Getting weird rpm/yum dependency problems on update on one of my machines.. ("Requires: switch.so.0()(64bit)" and other .so files). Any ideas?
11:32 SteveCooling This is a test machine so when it failed my "yum update" i did a "yum remove glusterfs" to get it running. Now it gives the same error on trying to install again.
11:33 Nagilum_ yum provides "*lib*/switch.so.0" ?
11:33 SteveCooling well... glusterfs provides them
11:34 Nagilum_ um..weird!
11:34 SteveCooling indeed
11:34 SteveCooling https://dl.dropboxusercontent.com/u/683331/glusterfsck.png
11:34 glusterbot <http://goo.gl/jX9iS> (at dl.dropboxusercontent.com)
11:34 Nagilum_ hmm
11:35 Nagilum_ do I read that right that the glusterfs rpm was not found on the epel repo?
11:35 SteveCooling i'm using a fedorapeople repo i was once recommended in here
11:35 SteveCooling theres an older one in epel i think
11:36 Nagilum_ you're on RHEL?
11:36 Nagilum_ or FC?
11:37 SteveCooling centos 6.4 64bit
11:37 SteveCooling https://dl.dropboxusercontent.com/u/683331/glusterfsck2.png
11:37 glusterbot <http://goo.gl/rsfcc> (at dl.dropboxusercontent.com)
11:37 Nagilum_ wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
11:37 glusterbot <http://goo.gl/5beCt> (at download.gluster.org)
11:37 Nagilum_ maybe use that instead?
11:38 SteveCooling *gives it a try*
11:38 Nagilum_ thats what I use..
11:39 SteveCooling well.. that one gives an older build it seems
11:40 SteveCooling but installs like it should
11:42 Nagilum_ "- Remove useless provides for xlator .so files and private libraries (3.4.x)"
11:42 Nagilum_ maybe thats causing it?
11:42 SteveCooling https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=952122 ?
11:42 glusterbot <http://goo.gl/oxFdY> (at bugzilla.redhat.com)
11:42 Nagilum_ http://arm.koji.fedoraproject.org/koji/buildinfo?buildID=136281
11:42 glusterbot <http://goo.gl/a96nd> (at arm.koji.fedoraproject.org)
11:42 Nagilum_ yeah
11:43 SteveCooling seems to be related, but it isn't obvious to me exactly what makes it behave like that
11:43 Nagilum_ well, they removed that the rpm provides the libswitch
11:43 Nagilum_ but it does provide it
11:43 Nagilum_ and the other packages depend on it
11:44 SteveCooling even seems like it depends on itself
11:44 SteveCooling it bombed out even on the "glusterfs" package
11:45 Nagilum_ yeah, the "depend-upon" list is partially automatically generated using ldd
11:45 Nagilum_ so it will usually list all the lib*.so file correctly
11:45 Nagilum_ files
11:46 badone joined #gluster
11:47 SteveCooling so the newest rpm build has half a bugfix in it?
11:47 SteveCooling :)
11:48 Nagilum_ I'd call it simply broken
11:49 Nagilum_ as it seems these provides aren't useless at all
11:50 Nagilum_ maybe you can comment on the bug
11:50 Nagilum_ that this change break updating/installing
11:50 Nagilum_ breaks
11:51 SteveCooling i'll give it a try :)
11:51 SteveCooling btw: this is the repo i was using http://repos.fedorapeople.org/repos/kkeithle/glusterfs/epel-6/x86_64/
11:51 glusterbot <http://goo.gl/XKKcq> (at repos.fedorapeople.org)
11:53 bulde joined #gluster
12:02 Nagilum_ but I don't really see anything in the changelog that would make me want to switch, there are a few nice-to-haves but nothing really important
12:05 hybrid5121 joined #gluster
12:05 Nagilum_ not sure, but doesn't yum allow you to revert the update? yum history list, then do an undo..
12:07 SteveCooling no problem. not sure what build i have elsewhere, but this is only a test machine
12:08 SteveCooling the -1 rpms work fine, will just use them
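The revert Nagilum_ mentions, plus an explicit downgrade to the previous ("-1") build, might look roughly like this; the exact package set installed will differ per machine, and the older packages must still be available in an enabled repo:

    yum history list glusterfs        # find the transaction id of the bad update
    yum history undo <id>             # roll that transaction back
    # or force the previous build explicitly
    yum downgrade glusterfs glusterfs-fuse glusterfs-server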
12:12 ricky-ticky joined #gluster
12:17 jdarcy joined #gluster
12:19 H__ what would be a light way to log all gluster access, including if the file exists or not ?
12:20 ndevos 'light' makes it more difficult, but I'd go with tcpdumping and some scripts using tshark
12:20 * ndevos actually wanted to have a look into that, and produce apache-like logs
12:21 joehoyle joined #gluster
12:24 ndevos H__: http://people.redhat.com/ndevos/talks/debugging-glusterfs-with-wireshark.d/ contains some scripts that can be used as a base
12:24 glusterbot <http://goo.gl/3nM9n> (at people.redhat.com)
12:25 H__ ndevos: cool, thanks. will check that out
12:26 ndevos H__: I'm not sure when I have time for writing such a script, but if you have questions/ideas/something I'm interested :)
12:28 ndevos H__: you can use tcpdump to capture X MB at the time, and rotate to a next file, you'll have to check what '-s' option to use to get enough bytes of the packets to analyze the contents you need
12:28 ndevos LOOKUP and OPEN are probably the most important ones
12:28 vrturbo joined #gluster
12:29 H__ I need statistics like #queries per gluster mount, type of query (r/w, lookup, stat), whether the file was at the dht-expected location, and timing info.
12:29 ndevos thats quite a bit to gather :)
12:29 ndevos have you looked into the 'gluster volume top ...' option already?
12:30 H__ a bit
12:30 ndevos there are also some integrated profiling options, maybe they suffice?
12:30 H__ and killed glusterd's while doing that
12:30 ndevos ah :-/
12:30 H__ found out that that was caused by mistyping a node name
12:31 H__ still need to reproduce and file a bug
12:31 glusterbot http://goo.gl/UUuCq
12:31 ndevos I like the tcpdump way because it is not interfering with gluster itself - it just may cause higher load on the system...
12:32 H__ yes, gluster could use a lightweight access point for this info. (i consider top a heavyweight solution)
12:32 H__ just apache logstyle like you mentioned
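A rough sketch of the capture-and-rotate idea ndevos describes; the snap length, file sizes and brick port range are assumptions (24007 is glusterd, brick ports start at 24009 on 3.3 but differ on other releases), and the "glusterfs" read filter assumes a wireshark/tshark build that includes the gluster dissector:

    # keep the last 10 capture files of ~100 MB each
    tcpdump -i any -s 512 -C 100 -W 10 -w /var/tmp/gluster.pcap \
        port 24007 or portrange 24009-24108

    # later, pull the gluster RPCs (e.g. LOOKUP/OPEN) out of a rotated capture file
    tshark -r <one-of-the-capture-files> -R glusterfs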
12:36 ricky-ticky joined #gluster
12:36 deepakcs joined #gluster
12:40 aliguori joined #gluster
12:45 bet_ joined #gluster
12:55 samppah can someone config if libgfapi is available in red hat storage 2 update 4?
12:56 samppah it seems to work with qemu gluster but i'm bit worried if there is something i should know about :)
12:58 samppah s/config/confirm/
12:59 glusterbot What samppah meant to say was: can someone confirm if libgfapi is available in red hat storage 2 update 4?
13:00 Norky libgfapi is part of GlusterFS 3.4, while RHS 2u4 is still on GlusterFS 3.3.1, so I believe the answer is no
13:07 samppah Norky: yes that's what i understood as well.. however i have been doing some tests with qemu with glusterfs support and it seems to work with RHS 2u4 as well
13:08 lalatenduM joined #gluster
13:12 flrichar joined #gluster
13:23 hagarth joined #gluster
13:33 mohankumar joined #gluster
13:33 portante joined #gluster
13:35 aliguori joined #gluster
13:39 rcheleguini joined #gluster
13:46 y4m4 joined #gluster
13:47 vpshastry1 joined #gluster
13:47 y4m4 joined #gluster
13:50 piotrektt_ joined #gluster
13:53 Nagilum_ cdTv: put your ski goggles on!
13:53 Nagilum_ oops
13:54 spider_fingers left #gluster
13:56 Nagilum_ Scorpi: not at all
13:56 Nagilum_ *hrm*
14:00 manik joined #gluster
14:06 bugs_ joined #gluster
14:13 mohankumar joined #gluster
14:16 Humble_ joined #gluster
14:16 neofob joined #gluster
14:19 wushudoin joined #gluster
14:19 vpshastry joined #gluster
14:19 wushudoin left #gluster
14:23 jskinner_ joined #gluster
14:34 tyl0r joined #gluster
14:49 ngoswami joined #gluster
14:52 shylesh joined #gluster
15:06 _pol joined #gluster
15:07 _pol joined #gluster
15:08 jbeitler joined #gluster
15:12 jbeitler So I have a question and I'm not finding an answer very easily. If I were to srm a file in a brick, would it srm the files on any other brick attached? or just delete them normally?
15:16 jdarcy srm?
15:16 daMaestro joined #gluster
15:16 jbeitler secure remove
15:16 jbeitler so not rm -rf xxx.txt but srm -rf xxx.txt
15:17 jdarcy When you remove a file through the native/NFS mountpoint, it's removed from all bricks that have it.  If you remove it directly from the brick on the server, which you absolutely shouldn't be doing, then it's removed only from that brick.
15:18 jbeitler yeah I understand that, my question is will it securely remove the other files on the other bricks (using the native/NFS Mount point)
15:19 jbeitler if I securely remove the file from one
15:20 jdarcy Any call for a file through the mountpoint will go to every copy of the file (on different bricks).  That applies to write, unlink, etc.
15:20 jbeitler ahh so it will work
15:20 jbeitler okay thank you very much
15:20 jdarcy No problem.
15:27 jbeitler left #gluster
15:29 botton joined #gluster
15:29 botton left #gluster
15:47 phix https://sphotos-b.xx.fbcdn.net/hphotos-ash4/2497_365303673586604_1870371452_n.jpg
15:47 puebele1 joined #gluster
15:47 glusterbot <http://goo.gl/tmTXp> (at sphotos-b.xx.fbcdn.net)
15:53 samppah phix: lol :)
15:53 phix I know right?
15:53 phix I got banned from two channels for pasting that and now I got a lol :) it was worth it :P
15:54 samppah hah :)
15:54 phix ##java and #Python ops need to get humour
16:24 semiosis well it's off topic to say the least.  do you do much participation in irc channels besides the random off topic, off color meme?
16:24 semiosis i can see how that might not be appreciated, if it's all you're offering
16:25 semiosis phix:
16:25 phix semiosis: I do
16:26 phix but still, people in IRC channels need to look outside every once in a while
16:26 phix then again looking outside may be considered offtopic to these types of people
16:27 phix too narrow minded if you ask me
16:27 semiosis i think i speak for most people here when i say we'd rather not have jokes posted in channel that disparage any ethnic groups, or people's physical appearance
16:28 phix Are you from ireland?
16:28 semiosis i respect all people
16:28 phix as do I
16:28 semiosis not irish
16:29 phix but even me being respectful of all people found that funny
16:29 phix I seriously did not see her the first time I looked at that pic
16:29 phix it was spot on
16:29 semiosis anyway, not going to kickban you over this, but consider yourself warned.  find another channel for that kind of thing
16:29 phix I wasn't being discriminatory or disrespectful at all
16:29 semiosis we appreciate jokes about filesystems, performance measurement with dd, that kind of thing :)
16:30 samppah semiosis :D
16:30 phix ok, I have a joke for you, which filesystem's founder has 20 years to life?
16:30 semiosis this will be funny if it's anyone other than reiser
16:30 semiosis who?
16:31 phix hehe
16:31 phix it was reiser :)
16:31 semiosis ok well it was a good try :)
16:31 phix I know right
16:31 phix reiserfs is as dead as his wife
16:31 phix who even uses that now?
16:31 phix I mean after ext4 came out
16:32 puebele joined #gluster
16:32 phix reiserfs4 was supposed to be awesome but it never came to being stable, and 3 has hashing / btree issues
16:32 semiosis http://www.quickmeme.com/meme/3se83p/
16:32 glusterbot Title: One Does Not Simply - one does not simply build glusterfs on nonlinux systems (at www.quickmeme.com)
16:33 phix hey semiosis, can I use glusterfs on intermittent VPN connections?
16:33 semiosis @wonka
16:33 phix wonka has golden tickets?
16:33 semiosis had another qkmeme but cant find it
16:33 semiosis (another on-topic quickmeme)
16:34 semiosis i have had success using nfs clients over openvpn to access glusterfs volumes
16:34 semiosis although my connection was very solid, between datacenters, it was temporary but not intermittent
16:34 phix haha #ReiserFS is empty
16:35 phix Like the look in the founders wife's eyes
16:36 * semiosis gbtw
16:36 semiosis if you have more glusterfs questions feel free to ask, i'll bbiab
16:36 semiosis otherwise, joke time is over for me
16:47 nueces joined #gluster
16:48 lh joined #gluster
16:48 lh joined #gluster
16:52 xymox joined #gluster
16:53 y4m4 joined #gluster
16:54 puebele1 joined #gluster
16:58 y4m4 joined #gluster
17:05 portante joined #gluster
17:15 hagarth joined #gluster
17:24 nat_ joined #gluster
17:24 ladd joined #gluster
17:26 nat hi folks. we're using gluster on a couple of machines, and are finding that our web sites crawl without stat-prefetch turned off, but when we turn it on the mail server slows to a crawl.
17:27 nat is there a known set of workarounds to deal with that sort of thing.
17:27 nat or area where i should look first?
17:30 dustint joined #gluster
17:30 semiosis see ,,(php)
17:30 glusterbot php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://goo.gl/uDFgg for details.
17:31 semiosis is your web site using php?
17:31 semiosis if it is, that page has some helpful info
17:31 semiosis but whether your web site is php or not, you should use caching to accelerate static content
17:32 nat semiosis: some of them are.
17:32 nat semiosis: and varnish was indeed already on my todo list.
17:32 semiosis apache's mod_cache is helpful, as is varnish
17:32 semiosis i like varnish a lot
17:32 semiosis it's great
17:32 nat i've used it other places... its great.
17:32 nat thanks for the tips reading the page now. looks interesting.
17:32 semiosis but mod_cache is a bit easier to use, you can just turn it on and it mostly works
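A minimal "just turn it on" mod_disk_cache setup for Apache 2.2 on CentOS 6 might look like the sketch below; it assumes the stock httpd.conf already loads mod_cache/mod_disk_cache (as the CentOS package does), and the cache path and expiry are arbitrary choices:

    mkdir -p /var/cache/httpd/mod_disk_cache
    chown apache:apache /var/cache/httpd/mod_disk_cache
    printf '%s\n' 'CacheEnable disk /' \
        'CacheRoot /var/cache/httpd/mod_disk_cache' \
        'CacheDefaultExpire 300' > /etc/httpd/conf.d/disk_cache.conf
    service httpd restart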
17:34 the-me joined #gluster
17:34 hagarth joined #gluster
17:36 xymox joined #gluster
17:39 Alpinist joined #gluster
17:45 the-me joined #gluster
17:47 manik joined #gluster
17:48 gbrand_ joined #gluster
17:54 aliguori joined #gluster
18:00 hagarth joined #gluster
18:07 andreask joined #gluster
18:24 rb2k joined #gluster
18:38 joehoyle joined #gluster
18:45 piotrektt_ hey. on gentoo after installation when i try to peer probe nothing happens
18:45 piotrektt_ what may be the cause?
18:46 semiosis iptables?
18:47 piotrektt_ ok. its fine
18:47 piotrektt_ :)
18:48 semiosis does that mean your problem is solved?
18:48 semiosis great
18:52 Alpinist joined #gluster
18:55 jruggiero joined #gluster
18:55 jruggiero left #gluster
18:59 jskinn___ joined #gluster
19:00 rotbeard joined #gluster
19:07 brunoleon__ joined #gluster
19:08 jskinner_ joined #gluster
19:14 piotrektt_ yeah. i've added it to rc and restarted both nodes, and it worked. dunno why.
19:15 piotrektt_ cause the daemon was running
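What piotrektt_ describes roughly corresponds to the following on Gentoo/OpenRC, assuming the packaged init script is named glusterd; glusterd listens on 24007/tcp, so that port also has to be open between the peers:

    /etc/init.d/glusterd start          # make sure the management daemon is running
    rc-update add glusterd default      # "added it to rc": start it at boot
    gluster peer probe other-node       # probing should now get a response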
19:43 x4rlos joined #gluster
19:54 Helfrez joined #gluster
19:55 tziOm joined #gluster
19:56 tziOm Hello
19:56 glusterbot tziOm: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:56 tziOm huh, nice welcome! :/
19:56 JoeJulian It's a pretty standard irc thing.
19:57 JoeJulian Had to add it 'cause I want to help but people would come in the channel, say hello, wait 30 seconds and leave.
20:02 tziOm I need to export smb volumes. I am aware of the work on the samba vfs layer, but it is not usable in production yet, is it?
20:02 tziOm My question is then, should I run samba instances on the gluster brick servers? samba/ctdb
20:04 kkeithley jvyas is MIA today, he could tell you when the Samba vfs work will be done.
20:05 tziOm But anyway, is it common to run samba on the storage servers?
20:05 kkeithley you need to mount the gluster volume(s) somewhere, then serve them with Samba. That _could_ be on the brick servers.
20:06 JoeJulian Personally, I just have a single samba vm.
20:06 JoeJulian But then I have reduced the number of windows machines I have to support to a bare minimum.
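The "mount it somewhere, then serve it with Samba" approach kkeithley describes could look roughly like this on a CentOS-style box; server, volume and share names are placeholders:

    mount -t glusterfs server1:/myvol /mnt/myvol
    printf '%s\n' '[myvol]' '    path = /mnt/myvol' '    read only = no' \
        >> /etc/samba/smb.conf
    service smb restart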
20:09 JoeJulian Dammit... another instance of a client not reconnecting to a brick. Hrm.
20:09 tziOm I am working on a quite big setup, ~100TB usable space (replicated 3x)
20:37 humbug__ joined #gluster
20:42 awheeler_ joined #gluster
20:44 sjoeboo tziOm: we run samba directly on our brick servers, with RRDNS in front of them to spread clients out.
20:44 sjoeboo this works okay, but i would rather get a dual 10GB nic'd box multihomed and have that be a single file_mover basically
20:48 tziOm ok.. why exactly?
20:48 tziOm and how many mounts do you have and what load?
20:48 tziOm I will have about 10k mounts
20:49 tziOm Thats why I want to spread samba load, and resources are there (1U 4x3TB quad core bricks)
20:56 badone joined #gluster
20:57 \_pol joined #gluster
21:02 tziOm sjoeboo, ?
21:03 sjoeboo we don't have that many mounts YET
21:03 tziOm But do you have any numbers for me?
21:03 tziOm and eventually where you are seeing problems
21:04 sjoeboo but its a shared storage pool, mostly native gluster mounts (about 3K), then maybe a few hundred CIFS mounts, mostly at client systems/instruments
21:04 sjoeboo we have been bitten once by someone doing about 300 threads to the storage via CIFS
21:04 sjoeboo so that node/brick had a huge load since it was all hairpinning through that one node (this is currently a 5x2 dist-replica)
21:05 sjoeboo though it will soon be doubled then tripled
21:05 tziOm but you say you want to move away from samba on bricks..
21:06 sjoeboo i'd like to try it, yes, and have a pool of gateways basically
21:11 \_pol joined #gluster
21:21 the-me joined #gluster
21:25 kai_office joined #gluster
21:29 humbug joined #gluster
21:32 kai_office I seem to be missing something. I have 4 CentOS 6.4 VMs (ran yum -y update yesterday). I have had success creating a "replica 4" volume, and everything seems to work. I am currently trying to run a pure distributed volume by executing the command: gluster volume create vol1 gluster{1,2,3,4}:/data/brick1/vol1/ . If I start the volume, and mount the new glusterfs volume on each of the 4 nodes, I am able to write files to the mounted glusterfs volume. Bu
21:34 JoeJulian In case you thought it was all there, the line limit hit at "But".
21:35 kai_office But as soon as there is more than 1 file, ls hangs until I run 'gluster volume stop vol1'. I am 100% certain that the path to /data/brick1/vol1 is my xfs volume on each of the 4 gluster machines, so I don't think I'm being bitten by the ext4 bug.
21:35 kai_office JoeJulian: thanks :)
21:35 JoeJulian ~ext4 | kai_office
21:35 glusterbot kai_office: (#1) Read about the ext4 problem at http://goo.gl/xPEYQ or (#2) Track the ext4 bugzilla report at http://goo.gl/CO1VZ
21:35 kai_office JoeJulian: I'm certain my bricks are not on an ext4 filesystem.
21:36 kai_office [root@gluster1 ~]# df -h /data/brick1/vol1/
21:36 kai_office Filesystem            Size  Used Avail Use% Mounted on
21:36 kai_office /dev/vdb1              20G   33M   20G   1% /data/brick1
21:36 kai_office [root@gluster1 ~]# blkid /dev/vdb1
21:36 kai_office /dev/vdb1: UUID="86d4edeb-d430-4aeb-8cce-53d03b54f3b4" TYPE="xfs"
21:36 JoeJulian Sorry, read as far as "ls hangs"... ADHD
21:36 kai_office JoeJulian: I understand :)
21:36 kai_office I'd be happy to find out it *is* the ext4 bug, but as far as I can tell, it's not
21:37 kai_office when I run 'gluster volume stop vol1', ls finishes by printing out hundreds of copies of each file
21:37 JoeJulian That sure sounds the same.
21:37 JoeJulian Did you check all the servers?
21:37 kai_office ya
21:37 kai_office I have Konsole attached to all 4 servers
21:38 kai_office sonuva
21:38 kai_office lol
21:38 kai_office One sec...
21:38 kai_office Ya, looks like server 4 is missing its second disk....
21:39 kai_office Thanks :) :)
21:39 JoeJulian You're welcome.
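The check that caught this (confirming every brick path really sits on the intended xfs filesystem, on every server) is easy to script across nodes; the hostnames and device follow kai_office's naming:

    for h in gluster1 gluster2 gluster3 gluster4; do
        echo "== $h =="
        ssh "$h" 'df -hT /data/brick1; blkid /dev/vdb1'
    done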
21:41 Nagilum_ is there a significant performance difference between mounting -t glusterfs and -t nfs3 ?
21:46 semiosis sometimes
21:49 Nagilum_ and which is faster?
21:50 semiosis depends
21:50 semiosis anyway, gotta run
21:50 andreask correct answer would be: 42
21:50 Nagilum_ :-/
21:50 semiosis (if there was an easy answer, i'd have given it already :)
21:50 kai_office JoeJulian: ok, I put everything together again and it's working as expected. Thanks! :)
21:51 JoeJulian Woot!
21:53 fidevo joined #gluster
22:23 jiffe98 joined #gluster
22:42 humbug joined #gluster
22:43 joehoyle- joined #gluster
23:16 joehoyle joined #gluster
23:49 humbug joined #gluster
23:50 dustint joined #gluster
