IRC log for #gluster, 2013-09-10


All times shown according to UTC.

Time Nick Message
00:05 asias joined #gluster
00:08 neofob left #gluster
00:19 StarBeast joined #gluster
00:20 haritsu joined #gluster
00:24 nueces joined #gluster
00:30 diegows_ joined #gluster
00:31 bet_ joined #gluster
00:39 glusterbot New news from resolvedglusterbugs: [Bug 965995] quick-read and open-behind xlator: Make options (volume_options ) structure NULL terminated. <http://goo.gl/kOtWms> || [Bug 961691] CLI crash upon executing "gluster peer status " command <http://goo.gl/1QcVzK> || [Bug 846240] [FEAT] quick-read should use anonymous fd framework <http://goo.gl/FDbuE> || [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.
00:49 sprachgenerator joined #gluster
00:53 glusterbot New news from newglusterbugs: [Bug 1000131] Users Belonging To Many Groups Cannot Access Mounted Volume <http://goo.gl/JOatTA> || [Bug 1004091] SMB:smbd crashes while doing volume operations <http://goo.gl/2hQgXT> || [Bug 1004519] SMB:smbd crashes while doing volume operations <http://goo.gl/DMsNHh> || [Bug 1004100] smbd crashes in libglusterfs under heavy load <http://goo.gl/XRcTC1>
01:17 hchiramm_ joined #gluster
01:21 haritsu joined #gluster
01:22 lkthomas joined #gluster
01:23 glusterbot New news from newglusterbugs: [Bug 997576] glusterd becomes unresponsive when acting as mountbroker <http://goo.gl/D5yfwL> || [Bug 988642] quota: hitting "D" state for dd command <http://goo.gl/bGKKcu> || [Bug 997902] GlusterFS mount of x86_64 served volume from i386 breaks <http://goo.gl/RTWzcG> || [Bug 998352] [RHEV-RHS] vms goes into paused state after starting rebalance <http://goo.gl/mPCKdv>
01:25 bala joined #gluster
01:28 kevein joined #gluster
01:33 nueces joined #gluster
01:34 \_pol joined #gluster
01:44 hchiramm_ joined #gluster
01:55 \_pol joined #gluster
01:58 dmojoryder anyone seen a full /var filesystem corrupting gluster configs so glusterd will not start again? If so, any ideas on a fix to get it running again?
01:59 mohankumar joined #gluster
02:05 nueces joined #gluster
02:08 hchiramm_ joined #gluster
02:21 haritsu joined #gluster
02:34 hchiramm_ joined #gluster
02:36 nueces joined #gluster
02:36 kanagaraj joined #gluster
02:37 Cooly joined #gluster
02:44 ghena1986 joined #gluster
02:44 ghena1986 left #gluster
02:55 ajha joined #gluster
03:09 harish joined #gluster
03:11 kshlm joined #gluster
03:22 haritsu joined #gluster
03:23 hchiramm_ joined #gluster
03:26 bharata-rao joined #gluster
03:32 jporterfield joined #gluster
03:33 shubhendu joined #gluster
03:40 shylesh joined #gluster
03:42 mohankumar joined #gluster
03:52 itisravi joined #gluster
03:57 mohankumar joined #gluster
04:01 davinder joined #gluster
04:11 kPb_in joined #gluster
04:13 ndarshan joined #gluster
04:15 anands joined #gluster
04:21 sgowda joined #gluster
04:22 haritsu joined #gluster
04:26 dusmant joined #gluster
04:29 ppai joined #gluster
04:30 ababu joined #gluster
04:38 jporterfield joined #gluster
04:39 nshaikh joined #gluster
04:44 shruti joined #gluster
04:44 jporterfield joined #gluster
04:55 31NAAK27G joined #gluster
05:00 bala joined #gluster
05:11 lalatenduM joined #gluster
05:13 lalatenduM joined #gluster
05:14 RameshN joined #gluster
05:19 jag3773 joined #gluster
05:23 haritsu joined #gluster
05:26 hchiramm_ joined #gluster
05:30 spandit joined #gluster
05:31 timothy joined #gluster
05:38 vpshastry1 joined #gluster
05:38 bulde joined #gluster
05:45 jporterfield joined #gluster
05:46 bala joined #gluster
05:49 dusmant joined #gluster
05:50 rjoseph joined #gluster
06:03 vpshastry joined #gluster
06:06 samkottler joined #gluster
06:06 samkottler joined #gluster
06:06 glusterbot joined #gluster
06:08 vincent_vdk joined #gluster
06:11 [o__o] joined #gluster
06:11 JordanHackworth joined #gluster
06:12 haritsu joined #gluster
06:13 mohankumar joined #gluster
06:13 satheesh joined #gluster
06:21 TDJACR joined #gluster
06:23 jtux joined #gluster
06:33 kPb_in joined #gluster
06:38 ricky-ticky joined #gluster
06:39 vshankar joined #gluster
06:41 avati joined #gluster
06:47 mohankumar joined #gluster
06:58 ndevos dmojoryder: I've seen that before, in my case there were empty files under /var/lib/glusterd/peers/ that I needed to delete
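A minimal sketch of the cleanup ndevos describes, assuming the default state directory /var/lib/glusterd and a sysvinit-style service script (both assumptions; adjust for your distro):
    service glusterd stop
    # remove the zero-length peer files left behind by the full /var filesystem
    find /var/lib/glusterd/peers/ -type f -empty -delete
    service glusterd start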
07:02 ajha joined #gluster
07:10 jporterfield joined #gluster
07:11 hagarth joined #gluster
07:11 andreask joined #gluster
07:19 StarBeast joined #gluster
07:30 StarBeast joined #gluster
07:37 morse joined #gluster
07:43 ProT-0-TypE joined #gluster
07:45 Excolo joined #gluster
07:46 ngoswami joined #gluster
07:49 kanagaraj_ joined #gluster
07:52 kanagaraj joined #gluster
07:52 mohankumar joined #gluster
07:54 mooperd_ joined #gluster
07:54 mgebbe_ joined #gluster
07:57 harish__ joined #gluster
07:59 kanagaraj joined #gluster
08:00 StarBeast joined #gluster
08:01 fyxim joined #gluster
08:01 dusmant joined #gluster
08:03 spresser joined #gluster
08:04 johnmwilliams joined #gluster
08:07 vimal joined #gluster
08:10 TDJACR joined #gluster
08:12 flrichar joined #gluster
08:16 puebele joined #gluster
08:25 dusmant joined #gluster
08:25 hybrid5122 joined #gluster
08:27 jporterfield joined #gluster
08:33 jporterfield joined #gluster
08:36 puebele1 joined #gluster
08:37 atrius joined #gluster
08:47 jporterfield joined #gluster
08:49 tryggvil joined #gluster
08:54 kshlm joined #gluster
08:56 Excolo joined #gluster
09:01 itisravi joined #gluster
09:07 kshlm joined #gluster
09:07 itisravi_ joined #gluster
09:07 mohankumar joined #gluster
09:12 jjohn joined #gluster
09:12 micu3 joined #gluster
09:13 jjohn hello, can a gluster native client on 32-bit Linux talk to gluster volumes on 64-bit Linux? Is it supported?
09:17 ricky-ticky1 joined #gluster
09:20 jjohn anybody?
09:21 _Bryan_ joined #gluster
09:21 crashmag joined #gluster
09:21 kwevers joined #gluster
09:21 JonathanD joined #gluster
09:21 semiosis joined #gluster
09:21 sjoeboo joined #gluster
09:21 ThatGraemeGuy joined #gluster
09:21 arusso joined #gluster
09:21 anands joined #gluster
09:21 glusterbot joined #gluster
09:21 JordanHackworth joined #gluster
09:21 StarBeast joined #gluster
09:21 vincent_1dk joined #gluster
09:21 tryggvil joined #gluster
09:21 darshan joined #gluster
09:21 edong23_ joined #gluster
09:21 shylesh joined #gluster
09:21 Excolo joined #gluster
09:21 jporterfield joined #gluster
09:21 atrius joined #gluster
09:21 hybrid5122 joined #gluster
09:21 dusmant joined #gluster
09:21 flrichar joined #gluster
09:21 TDJACR joined #gluster
09:21 johnmwilliams joined #gluster
09:21 spresser joined #gluster
09:21 kanagaraj joined #gluster
09:21 mgebbe_ joined #gluster
09:21 mooperd_ joined #gluster
09:21 ProT-0-TypE joined #gluster
09:21 morse joined #gluster
09:21 andreask joined #gluster
09:21 hagarth joined #gluster
09:21 ajha joined #gluster
09:21 avati joined #gluster
09:21 vshankar joined #gluster
09:21 [o__o] joined #gluster
09:21 rjoseph joined #gluster
09:21 bala joined #gluster
09:21 bulde joined #gluster
09:21 spandit joined #gluster
09:21 hchiramm_ joined #gluster
09:21 jag3773 joined #gluster
09:21 RameshN joined #gluster
09:21 lalatenduM joined #gluster
09:21 shruti joined #gluster
09:21 nshaikh joined #gluster
09:21 ababu joined #gluster
09:21 davinder joined #gluster
09:21 shubhendu joined #gluster
09:21 bharata-rao joined #gluster
09:21 Cooly joined #gluster
09:21 kevein joined #gluster
09:21 asias joined #gluster
09:21 gluslog joined #gluster
09:21 toad joined #gluster
09:21 badone joined #gluster
09:21 chirino joined #gluster
09:21 Jasson joined #gluster
09:21 xavih joined #gluster
09:21 msciciel_ joined #gluster
09:21 tjikkun_work joined #gluster
09:21 bstr_ joined #gluster
09:21 bivak joined #gluster
09:21 jurrien_ joined #gluster
09:21 wirewater joined #gluster
09:21 JoeJulian joined #gluster
09:21 foster joined #gluster
09:21 mtanner_ joined #gluster
09:21 schrodinger joined #gluster
09:21 dmojoryder joined #gluster
09:21 nixpanic joined #gluster
09:21 stopbit joined #gluster
09:21 DV joined #gluster
09:21 gmcwhistler joined #gluster
09:21 spligak joined #gluster
09:21 Norky joined #gluster
09:21 DataBeaver joined #gluster
09:21 yosafbridge joined #gluster
09:21 masterzen joined #gluster
09:21 stickyboy joined #gluster
09:21 RobertLaptop joined #gluster
09:21 dneary joined #gluster
09:21 mattf joined #gluster
09:21 X3NQ joined #gluster
09:21 purpleidea joined #gluster
09:21 sac`away joined #gluster
09:21 GabrieleV joined #gluster
09:21 LoofAB joined #gluster
09:21 soukihei joined #gluster
09:21 SteveCooling joined #gluster
09:21 tru_tru joined #gluster
09:21 tg2 joined #gluster
09:21 jones_d joined #gluster
09:21 torbjorn___ joined #gluster
09:21 jbrooks joined #gluster
09:21 kkeithley joined #gluster
09:21 portante joined #gluster
09:21 morsik joined #gluster
09:21 duerF joined #gluster
09:21 eightyeight joined #gluster
09:21 tqrst joined #gluster
09:21 johnmorr joined #gluster
09:21 ryan_t joined #gluster
09:21 xymox joined #gluster
09:21 MinhP joined #gluster
09:21 hflai joined #gluster
09:21 risibusy joined #gluster
09:21 poptix joined #gluster
09:21 jmeeuwen joined #gluster
09:21 bradfirj_ joined #gluster
09:21 jiffe99 joined #gluster
09:21 Amanda joined #gluster
09:21 tjstansell joined #gluster
09:21 clag_ joined #gluster
09:21 the-me joined #gluster
09:21 ndevos joined #gluster
09:21 sac joined #gluster
09:21 twx_ joined #gluster
09:21 basic` joined #gluster
09:21 wcchandler joined #gluster
09:21 haidz joined #gluster
09:21 msvbhat joined #gluster
09:21 hagarth_ joined #gluster
09:21 social joined #gluster
09:21 delhage joined #gluster
09:21 codex joined #gluster
09:21 abyss^ joined #gluster
09:21 sysconfig joined #gluster
09:21 MediaSmurf joined #gluster
09:21 ingard joined #gluster
09:21 mibby- joined #gluster
09:21 NeatBasis joined #gluster
09:21 lanning joined #gluster
09:21 pachyderm joined #gluster
09:21 fleducquede joined #gluster
09:21 RichiH joined #gluster
09:21 brosner joined #gluster
09:21 bdperkin joined #gluster
09:21 mrEriksson joined #gluster
09:21 NuxRo joined #gluster
09:21 gGer joined #gluster
09:21 samppah_ joined #gluster
09:21 zwu joined #gluster
09:21 jfield joined #gluster
09:21 Ramereth joined #gluster
09:21 sonne joined #gluster
09:21 haakon_ joined #gluster
09:21 _NiC joined #gluster
09:21 ofu__ joined #gluster
09:21 pull joined #gluster
09:21 partner joined #gluster
09:21 ke4qqq joined #gluster
09:21 atrius` joined #gluster
09:21 Dave2 joined #gluster
09:21 Gugge joined #gluster
09:21 tw joined #gluster
09:21 m0zes joined #gluster
09:21 al joined #gluster
09:21 roidelapluie joined #gluster
09:21 mriv joined #gluster
09:21 georgeh|workstat joined #gluster
09:21 stigchristian joined #gluster
09:21 tobias- joined #gluster
09:21 paratai joined #gluster
09:21 cyberbootje joined #gluster
09:21 mjrosenb joined #gluster
09:21 ricky-ticky joined #gluster
09:21 kshlm joined #gluster
09:21 itisravi_ joined #gluster
09:21 jjohn hello, can a gluster native client on 32-bit Linux talk to gluster volumes on 64-bit Linux? Is it supported?
09:23 mgebbe joined #gluster
09:30 eseyman joined #gluster
09:32 itisravi joined #gluster
09:36 psharma joined #gluster
09:36 sgowda joined #gluster
09:37 limyreth Is there a way to keep an entire volume/directory cached on the client? Perhaps the closest I can get to that is setting performance.cache-size to the size of the volume, which is 30GB in my case?
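For reference, that option is set per volume; a minimal sketch assuming a volume named myvol (whether a value this large is actually honored on the client is exactly the open question here):
    gluster volume set myvol performance.cache-size 30GB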
09:39 satheesh1 joined #gluster
09:42 kshlm 12
09:44 ndarshan joined #gluster
10:01 wgao joined #gluster
10:02 jjohn can a gluster native client on 32-bit Linux talk to gluster volumes on 64-bit Linux? Is it supported?
10:07 samppah_ jjohn: it can, but it's not supported (meaning it's not tested very well)..
10:07 Excolo joined #gluster
10:09 jjohn samppah: thank you. what about the other way around? 64-bit client connected to 32-bit volumes...I have a mix of 32-bit and 64-bit linux clients which need shared storage
10:10 satheesh1 joined #gluster
10:12 samppah_ jjohn: same thing with that afaik.. i think that JoeJulian is mixing 64 bit and 32 bit clients and servers
10:13 jjohn samppah_: thank you
10:17 Bonelli joined #gluster
10:18 Bonelli hello guys, I'm new to gluster and i'm trying to use it for an office LAN configuration, i'd like to ask something about replicas
10:18 Bonelli is anybody online?
10:19 Bonelli what i'd like to achieve is a simple mirror configuration with the possibility to remove nodes when we can afford shrinking the volume
10:20 Bonelli but it seems that it's not possible to remove a brick without removing the brick that mirrors that very same brick
10:21 Bonelli in that way i suppose i lose the data too, and i'd like to avoid it
10:22 Bonelli how can i remove just one copy of the replicated data, and then rebalance everything?
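A rough sketch of the two remove-brick shapes being asked about, using placeholder volume and brick names; the exact syntax, and whether the replica count can be reduced this way, depends on the GlusterFS version in use (an assumption here, not something confirmed in this channel):
    # shrink a distribute-replicate volume by one replica pair, migrating its data off first
    gluster volume remove-brick myvol server3:/brick server4:/brick start
    gluster volume remove-brick myvol server3:/brick server4:/brick status
    gluster volume remove-brick myvol server3:/brick server4:/brick commit
    # drop one copy of the data (replica 2 -> 1) without migration
    gluster volume remove-brick myvol replica 1 server2:/brick force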
10:27 kPb_in joined #gluster
10:40 edward2 joined #gluster
10:45 andreask joined #gluster
10:48 kkeithley1 joined #gluster
10:55 jtux joined #gluster
10:57 mooperd_ joined #gluster
10:58 mbukatov joined #gluster
10:58 sgowda joined #gluster
10:59 failshell joined #gluster
11:02 nullck joined #gluster
11:09 satheesh3 joined #gluster
11:11 manik joined #gluster
11:14 bnh2 joined #gluster
11:16 bnh2 Is there any GlusterFS expert in here that can help?
11:18 hagarth joined #gluster
11:18 Bonelli i'm looking for one too
11:18 bnh2 I recently installed GlusterFS server on 2 debian machines and glusterfs client on 1
11:19 bnh2 Bonelli shall we try and help each other
11:19 bnh2 ?
11:19 Bonelli if you wanna
11:19 Bonelli what's your problem?
11:19 bnh2 great
11:19 Bonelli i'm experimenting on a similar configuration myself
11:21 bnh2 My issue is: I installed GlusterFS server on 2 servers and the GlusterFS client on 1 machine. At the moment, if I create a file on the brick on server 1 it doesn't get replicated to server 2 or the client; if I create a file on server 2 I won't see it on server 1 but I will see it on the client; and if I create a file on the client it gets replicated to server 1 and server 2
11:21 bnh2 does that make any sense to you?
11:21 dusmant joined #gluster
11:21 Elendrys joined #gluster
11:21 shubhendu joined #gluster
11:22 Bonelli yes, it does make sense
11:22 Bonelli the replication is managed on the client side
11:22 Bonelli you should access the content of the bricks only with the client
11:23 Bonelli you should _never_ touch the content of the bricks directly
11:23 Bonelli it will certainly break things
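In other words, all reads and writes should go through a native-client mount rather than the brick directories; a minimal sketch with placeholder names:
    # any server in the trusted pool can be used as the mount source
    mount -t glusterfs server1:/myvol /mnt/myvol
    # writes made here are replicated to both bricks by the client
    cp somefile /mnt/myvol/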
11:23 limyreth Could it be that performance.cache-size is limited to about 4GB client-side?
11:23 dblack joined #gluster
11:23 bnh2 Bonelli that's exactly what i was looking for thanks for your help mate
11:23 Bonelli no no, it's just that you cannot create files in the server's bricks
11:24 Bonelli no problem bnh2
11:24 Excolo joined #gluster
11:25 bnh2 Bonelli - do you want to share your problem here and see if I or any one else better here can help
11:26 Bonelli my problem is a bug (i suppose), when i copy a bunch of files into the glusterfs mount directory
11:26 Bonelli they are all copied in the bricks and replicated as needed, but some of them are not listed in the glusterfs directory
11:26 Bonelli as seen by the client
11:27 Bonelli most odd unreliability, i must be using a wrong configuration
11:29 vpshastry2 joined #gluster
11:30 bala joined #gluster
11:32 jiqiren joined #gluster
11:35 Elendrys joined #gluster
11:36 Bonelli joined #gluster
11:57 harish__ joined #gluster
11:59 shireesh joined #gluster
12:01 B21956 joined #gluster
12:07 vpshastry2 left #gluster
12:11 sgowda joined #gluster
12:19 Bonelli can anybody help me with a configuration? I can't even get "ls" output from a glusterfs folder
12:20 ndevos Bonelli: could you hit the ,,(ext4) bug?
12:20 glusterbot Bonelli: Read about the ext4 problem at http://goo.gl/Jytba or follow the bug report here http://goo.gl/CO1VZ
12:22 Bonelli this is nasty
12:23 Bonelli so ubuntu 13.04 gluster packages are basically trash with ext4 bricks?
12:23 ndevos I'm not sure, you would need to check the kernel version (and possible 'stable' patches to ext4)
12:24 Norky joined #gluster
12:24 Bonelli kernel is: 3.8.0-30-generic
12:24 ndevos Bonelli: maybe the ,,(ppa) version contains a fix in glusterfs for it
12:24 glusterbot Bonelli: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
12:25 Bonelli nice, thanks bot
12:25 Bonelli glusterbot
12:31 rwheeler joined #gluster
12:32 stickyboy I think the recommendation for filesystem from June, 2012 until at least now is XFS
12:32 stickyboy Bonelli: ^^^
12:32 stickyboy Unless you know what you're doing :)
12:32 Bonelli i absolutely have no idea whatsoever of what i'm doing
12:32 stickyboy hehehe
12:32 Bonelli but i won't be using dedicated servers to hold the bricks
12:33 stickyboy I have an ok clue, and I decided to use XFS when I deployed a few months ago.
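A sketch of the commonly recommended way to prepare an XFS brick, assuming a dedicated device /dev/sdb1 (device name and mount point are placeholders):
    # larger inodes leave room for gluster's extended attributes
    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/brick1
    mount /dev/sdb1 /bricks/brick1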
12:33 diegows_ joined #gluster
12:33 Bonelli i will use a lot of spare disk space from our workstations at the office as shared storage, and everything is already running with ext4
12:34 Bonelli can't really decide
12:38 stickyboy Ohhhh.
12:38 stickyboy Sounds interesting... but you'd better use >= 3.4, as I think the ext4 bug was mitigated in userspace from 3.4+.
12:39 sgowda joined #gluster
12:40 sprachgenerator joined #gluster
12:42 Bonelli i've replaced my configuration with the ppa for 3.4
12:42 Bonelli and testing now
12:42 Bonelli crossing fingers
12:42 robo joined #gluster
12:42 Bonelli fact is that somebody at the office uses 10% of the disk
12:43 tziOm joined #gluster
12:43 Bonelli and others 99% and always want more
12:43 Bonelli i figured this would be a nice solution
12:44 Bonelli :D now it works like a charm
12:44 Bonelli i'm happy
12:44 Bonelli thank
12:44 Bonelli s
12:45 awheeler joined #gluster
12:46 awheeler joined #gluster
12:48 bala joined #gluster
12:49 spandit joined #gluster
12:54 stickyboy Bonelli: Really?  Damn.  That is nice and quick hehe.
12:57 rcheleguini joined #gluster
12:58 Bonelli aye, it was quick as i was trying for the whole morning
12:58 Bonelli just changed the repository and baaam!
12:58 Bonelli working
12:58 Bonelli so happy
12:58 Bonelli bye bye, i'm off on another task now
13:08 bulde1 joined #gluster
13:19 shubhendu joined #gluster
13:26 ababu joined #gluster
13:28 robo joined #gluster
13:29 bennyturns joined #gluster
13:53 Excolo joined #gluster
13:54 Excolo Quick question about geo-replication. I have two datacenters, and want the same data replicated at each. Is geo-replication a master slave setup only? Where as if I have the master at DC1 DC2 has to write across the internet to that brick?
13:55 saurabh joined #gluster
13:57 dmojoryder re: /var filesystem filling up and glusterd not starting. A number of files were corrupted/truncated under /var/lib/glusterd including glusterd.info (appears to have the local servers trusted hash). To resolve I got the servers hash from gluster peer status (from a gluster server in a good state). Then on the bad host I removed the /var/lib/glusterd dir. On another gluster server I removed the 'bad' server with a 'gluster peer detach <hostname> force'.
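Roughly, the recovery described above could look like the following; the hostname is a placeholder and the sequencing is taken from the description, not verified here:
    # on a healthy server: note the bad server's UUID for reference
    gluster peer status
    # on the bad server: clear the corrupted state
    service glusterd stop
    rm -rf /var/lib/glusterd
    # on a healthy server: drop the bad server's stale membership
    gluster peer detach badhost force
    # re-joining the pool afterwards (peer probe / restoring glusterd.info) is not shown here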
13:57 kaptk2 joined #gluster
13:59 sprachgenerator joined #gluster
14:01 TomKa joined #gluster
14:01 jclift joined #gluster
14:09 rwheeler joined #gluster
14:10 hagarth joined #gluster
14:13 dusmant joined #gluster
14:16 Norky Excolo, yes, at present, the geo-replication is master/slave only
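For context, geo-replication in this era is configured one-way from a master volume to a slave; a hedged sketch with placeholder names (the ssh:// slave spec is one of several accepted forms, assumed here rather than taken from the log):
    gluster volume geo-replication myvol ssh://root@dc2-host:/data/slave-dir start
    gluster volume geo-replication myvol ssh://root@dc2-host:/data/slave-dir status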
14:16 Sonicos joined #gluster
14:17 Excolo shit.... so the only way I can do that is with normal replication across the datacenters? (thats how the previous sysadmin did it) but we keep getting split brains
14:17 bulde joined #gluster
14:19 Norky I understand they are working on peer/peer replication, but for now it doesn't do what you want
14:21 Excolo Thanks for the info... another point on a long list of reasons why I think the previous sysadmin shouldn't have used gluster in our setup (it's a fantastic product, but our entire setup is meant to be master-master at each DC so that a DC can be turned off and the site still run)
14:21 Norky you could possibly layer rsync on top of glusterfs
14:22 Norky but if you want a single thing to keep two remote sites in sync, this probably ain't it :)
14:23 Excolo probably wouldn't work. One of our directories is where developers over the years just put everything rather than set up permissions on a new dir. It might as well be named misc. As such, there are probably tens of thousands of files in there, so an rsync would slow it all down
14:28 itisravi joined #gluster
14:33 bugs_ joined #gluster
14:34 limyreth left #gluster
14:38 jdarcy joined #gluster
14:40 Excolo Hey Norky, I was reading somewhere they were going to do master master in 3.4, was that done? or delayed? I don't see it in any 3.4 documentation I can find (maybe I can just upgrade and get it to do what I need)
14:41 flrichar joined #gluster
14:45 wgao joined #gluster
14:45 harish__ joined #gluster
14:45 Norky joined #gluster
14:47 TDJACR joined #gluster
14:47 TDJACR joined #gluster
14:53 Norky to answer your question, Excolo (no reason not to ask in channel): I believe I recall reading about it being intended for the "next version" a while ago, but I might have made that memory up
14:55 gkleiman joined #gluster
14:55 kkeithley_ we were planning to do master-master in 3.4, but it wasn't ready in time
14:56 zerick joined #gluster
14:59 Norky ty kkeithley . I don't see it mentioned in the v3.5 plans either: http://www.gluster.org/community/documentation/index.php/Planning35
14:59 glusterbot <http://goo.gl/l2gjSh> (at www.gluster.org)
14:59 sgowda joined #gluster
15:03 kkeithley_ Our (i.e. Red Hat's) proposal is that we will commit resources to get multi-master geo-rep into 3.7
15:06 kkeithley_ Our proposal to the gluster community.
15:08 mgebbe joined #gluster
15:09 mgebbe_ joined #gluster
15:14 abassett joined #gluster
15:15 abassett hey I'm trying to figure out if i can setup a box to be a "proxy" for gluster, in that it mounts a gluster share and re-exports it with gluster or nfs
15:16 LoudNoises joined #gluster
15:17 kkeithley_ You certainly could do that. Whether that's a good idea or not I won't venture to say.
15:17 Norky how would the xattrs work?
15:18 davinder Hey ... I am getting 300Kb speed over the network while transferring files on the gluster file system
15:18 davinder on other filesystems it is 200MBps ...
15:18 Norky are xattrs of files on a FUSE (or NFS) gluster mount somehow stored differently on the brick filesystem?
15:19 davinder how can I improve performance for write IO?
15:19 davinder i am using rsync
15:20 B21956 joined #gluster
15:21 abassett kkeithley: heh fair enough, i'm just doing a proof of concept at the moment, but I can't quite figure out how to do it
15:22 mharrigan joined #gluster
15:24 jdarcy davinder: What other filesystems are giving you 200MBps *to server disk* (not just buffered somewhere in memory) on your network?
15:25 kkeithley_ create VolumeA on NodeA, start it. Mount NodeA:VolumeA on NodeB at /mnt/VolumeA, create VolumeB on NodeB using brick at /mnt/VolumeB, start it. Mount NodeB:VolumeB on clients.
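Spelled out, that experiment would look roughly like this, with placeholder names and brick paths (and, as kkeithley_ finds a few minutes later, the second create currently fails with an "already part of a volume" error):
    # on NodeA
    gluster volume create VolumeA NodeA:/bricks/a
    gluster volume start VolumeA
    # on NodeB: mount VolumeA, then try to re-export the mountpoint as a new volume's brick
    mount -t glusterfs NodeA:/VolumeA /mnt/VolumeA
    gluster volume create VolumeB NodeB:/mnt/VolumeA
    gluster volume start VolumeB
    # clients would then mount NodeB:/VolumeB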
15:26 abassett ah i see
15:26 abassett thanks
15:26 jdarcy davinder: Also, what size files are we talking about, and how many concurrent rsync threads are you using?
15:27 kkeithley_ try it and see. I honestly have no idea what's going to happen, but I think it ought to work.
15:29 zaitcev joined #gluster
15:29 jdarcy kkeithley: IIRC, re-exporting via kernel NFS *mostly* works but can fall prey to a kind of memory-exhaustion deadlock under high load.
15:31 kkeithley_ indeed
15:31 jdarcy Re-exporting native over native should be fine, though there will be lots of context switches between the client and server daemons.  The only way to avoid that would be a hand-crafted single-process volfile.
15:32 jdarcy Basically the VolumeA client volfile, with the protocol/server part from the VolumeB server volfile spliced on.
15:32 kkeithley_ just trying to create a brick using a native mount tells me "volume create: volx: failed: /mnt/volx or a prefix of it is already part of a volume"
15:32 glusterbot kkeithley_: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
15:32 davinder jdarcy
15:32 abassett kkeithley: yup i got the same thing
15:32 Norky kkeithley, see, that's what I thought would happen
15:32 abassett oh, thanks, glusterbot
15:33 davinder jdarcy : rsync started with 3 threads
15:33 Norky the 'layered' volume checks the gfid on its brick - and it gets the gfid from the underlying volume
15:34 kkeithley_ well, it was a thought. ;-)
15:34 Norky well, I was guessing that would happen, wasn't certain by any means :)
15:34 kkeithley_ <carlsagan>You have to do the experiment.</carlsagan>
15:35 Norky true
15:35 Norky and breaking computer systems is always fun :_)
15:35 ctria joined #gluster
15:36 schrodinger Hi, 3.2.5-1ubuntu1 on Ubuntu 12.04. When I do "# gluster volume top myvol readdir brick xxx.xxx.xxx.xxx:/gluster/name_of_share list-cnt 10" I get "operation failed" but I can't figure out why. Could anyone point me in the right direction ? Thanks.
15:36 schrodinger *gluster 3.2.5-1ubuntu1
15:36 schrodinger :)
15:37 abassett so i guess I'll have to try another approach… thanks for the input kkeithley and Norky
15:37 Norky abassett, I have to wonder what you are trying to achieve
15:38 abassett building a compute cluster in the cloud
15:38 Norky layering gluster atop itself seems a bit... bonkers :)
15:38 abassett but storage is in our dc
15:38 jdarcy davinder: Well, the first thing to try for a small-file/metadata workload (like rsync usually is) would be to mount the GlusterFS volume via NFS.  NFS is a better protocol for that sort of thing, though weak in other areas.
15:39 jdarcy davinder: You still get all of the benefits of using GlusterFS (scale, replication etc.) except for automatic client-side failover.  There are recipes for dealing with that.
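A sketch of the NFS mount being suggested, with placeholder names; gluster's built-in NFS server speaks NFSv3 over TCP, so those options are the usual recommendation (assumed here, not quoted from the log):
    mount -t nfs -o vers=3,tcp server1:/myvol /mnt/myvol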
15:39 Norky can you not just access a gluster volume at your own DC directly?
15:40 abassett well im trying to limit exposure and only let the compute head through the vpn
15:40 Norky hrm, that makes the head node a bottle neck for all file I/O....
15:41 abassett yea i know
15:41 abassett like i said, proof of concept
15:41 Norky what's the code you're running on the cluster? Would stagein/out be a reasonable fit?
15:41 abassett and I'm just putting high compute/low io tasks out here
15:42 jdarcy Norky: You could set up multiple heads like that if there are low levels of sharing.
15:43 abassett yea im just kind of feeling it out at the moment
15:43 abassett anyways, I have another tack i can try
15:44 jdarcy Hm.  For certain HPC or Big Data applications I could see creating such caching heads "on the fly" per job.
15:46 GLHMarmot joined #gluster
15:47 johnbot11 joined #gluster
15:51 mooperd_ joined #gluster
15:56 ngoswami joined #gluster
16:01 jre1234 joined #gluster
16:07 kanagaraj joined #gluster
16:11 kanagaraj joined #gluster
16:14 diegows_ joined #gluster
16:25 Mo_ joined #gluster
16:30 StarBeast joined #gluster
16:45 shylesh joined #gluster
16:52 StarBeast joined #gluster
16:58 davinder joined #gluster
17:01 itisravi joined #gluster
17:06 Technicool joined #gluster
17:12 jasson joined #gluster
17:30 Mo__ joined #gluster
17:39 lpabon joined #gluster
17:39 \_pol joined #gluster
17:40 shruti joined #gluster
17:44 nueces joined #gluster
17:45 tryggvil joined #gluster
17:47 \_pol_ joined #gluster
17:48 \_pol_ joined #gluster
17:52 \_pol joined #gluster
18:04 hagarth joined #gluster
18:05 ndk joined #gluster
18:09 ndk left #gluster
18:10 plarsen joined #gluster
18:19 ndk joined #gluster
18:24 aliguori joined #gluster
18:27 \_pol_ joined #gluster
18:32 \_pol joined #gluster
18:50 dneary joined #gluster
19:14 jasson I'm having a problem with a gluster 3.2.5 server. I have 2 networks it's connected to, and I want it to replicate across one network (a private storage network) and serve clients off the other network. If I define peers on the private network, it won't replicate unless the volume is defined against the DNS name on the private network as well, and then clients aren't able to connect on the public interface.
19:15 jasson Any tricks or something I'm overlooking for clients connecting to the public network side?
19:15 NeatBasis joined #gluster
19:24 18WAENSG7 joined #gluster
19:25 zerick joined #gluster
19:29 mooperd_ joined #gluster
19:30 ngoswami joined #gluster
19:38 tziOm joined #gluster
19:41 vincent_vdk joined #gluster
20:00 semiosis jasson: using fuse or nfs clients?
20:01 semiosis jasson: fuse clients do replication client-side.  nfs clients get server-side replication.
20:01 jasson using fuse, the 3.2 doesn't do the NFS natively.
20:01 andreask joined #gluster
20:01 jasson sorry, by client I mean just mounting the volume as a share, not replicating.
20:02 semiosis pretty sure 3.2 does nfs
20:02 semiosis pretty sure 3.1 did nfs
20:02 semiosis jasson: ,,(pasteinfo)
20:02 glusterbot jasson: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
20:02 jasson I'm trying to separate out replication from the client interaction, since we have 30 servers that mount a shared folder and that folder is replicated across storage servers for redundancy.
20:04 jasson ah, we tried gluster 3.3 and loved the nfs features, but ran into other issues with write permissions being denied on the gluster mount, and nfs broke on a 20TB share on the client side. Basically we're using gluster with fuse instead of NFS for the replication, and seeing better performance.
20:04 semiosis jasson: fuse clients do the replication themselves... they send data to all the replicas.
20:04 jasson the replication is actually running as geo-replication
20:05 jasson but if the clients are sending the data to both servers then I can't actually separate them out that way.
20:06 dneary joined #gluster
20:06 jasson so second question, when creating the volume, the command defines <server>:/folder, is it possible to get clients on both networks talking to that same volume?  since it would be seen as <server-nic1>  and <server-nic2> in DNS?
20:07 jasson the second network isn't actually connected to the internet, only the network on nic1
20:09 jasson basically nic2 is an isolated 10-gigabit network that I'm trying to limit to just gluster traffic, while nic1 is the normal network.
20:10 semiosis split horizon dns then
20:11 semiosis servers resolve server hostnames to 10g ip addrs, clients resolve server hostnames to 1g ip addrs
20:12 jasson yeah, but the problem I'm running into is the client machines that only have 1g nics can't get to a share that's been defined against the 10g network when the volume is created.
20:12 jasson and I do need some clients to access via 1g, while most of the servers I want on the 10g
20:13 semiosis you should use hostnames in your brick addresses, hostnames that aren't tied to a specific machine or interface.
20:14 purpleidea joined #gluster
20:19 jasson this is where I wish I was really using DNS; it's all a hosts-file driven system that I inherited. So I need to sort out sending just the gluster traffic to the 10g private network while those same servers conduct all other traffic over the 1g network with each other. I was hoping I could define the volume by interface and bridge that somehow easily.
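With a hosts-file driven setup, the split-horizon approach semiosis describes can still be done by giving the same neutral hostnames different addresses on servers and clients; a sketch with invented names and addresses:
    # /etc/hosts on the storage servers: gluster hostnames resolve to the 10g network
    10.10.0.1    gluster1
    10.10.0.2    gluster2
    # /etc/hosts on the 1g-only clients: same names, 1g addresses
    192.168.0.11 gluster1
    192.168.0.12 gluster2
    # bricks and client mounts are then defined against gluster1/gluster2, never raw IPs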
20:27 glusterbot New news from newglusterbugs: [Bug 1002556] running add-brick then remove-brick, then restarting gluster leads to broken volume brick counts <http://goo.gl/YqOYSj>
20:31 gmcwhistler joined #gluster
20:32 gmcwhistler joined #gluster
20:32 jag3773 joined #gluster
20:45 glusterbot New news from resolvedglusterbugs: [Bug 997140] Gluster NFS server dies <http://goo.gl/7aXct0>
20:55 atrius joined #gluster
20:55 sprachgenerator joined #gluster
20:57 jporterfield joined #gluster
21:03 johnmark joined #gluster
21:18 plarsen joined #gluster
21:35 B21956 joined #gluster
21:53 sonne left #gluster
21:57 Excolo joined #gluster
22:04 B21956 left #gluster
22:18 jporterfield joined #gluster
22:32 bivak joined #gluster
22:49 awheele__ joined #gluster
22:56 asias joined #gluster
23:08 bennyturns joined #gluster
23:14 jporterfield joined #gluster
23:24 jporterfield joined #gluster
23:38 jporterfield joined #gluster
23:51 johnbot1_ joined #gluster
23:52 johnbot11 joined #gluster
23:53 johnbot1_ joined #gluster
