IRC log for #gluster, 2012-12-10

All times shown according to UTC.

Time Nick Message
00:34 robo joined #gluster
00:40 robo joined #gluster
00:45 bala joined #gluster
01:08 yinyin joined #gluster
01:55 kevein joined #gluster
03:06 bharata joined #gluster
03:07 Ownage joined #gluster
03:07 Ownage Hi, I've just completed this walkthrough, but on centos. However there's something I'm missing: http://martinlanner.com/2012/07/26/glusterfs-build-a-three-node-storage-cluster/
03:07 glusterbot <http://goo.gl/EBKLK> (at martinlanner.com)
03:08 Ownage Coming to the end of this walkthrough, I have 3 nodes which all have /data and gluster running, etc., but when I write a file to /data on one of them, it's not there on the others
03:26 yinyin joined #gluster
03:29 Ownage I've now mounted each node's own volume on itself, e.g. mount -t glusterfs servername:/data /mnt
03:30 Ownage I find an hgfs in each mounted location, but making a file in here or outside of it doesn't replicate in any way to the other nodes
03:30 Ownage what am I missing?
03:36 Shdwdrgn joined #gluster
03:55 Shdwdrgn joined #gluster
03:57 sripathi joined #gluster
04:03 sgowda joined #gluster
04:10 Shdwdrgn joined #gluster
04:12 vpshastry joined #gluster
04:20 Shdwdrgn joined #gluster
04:34 yinyin joined #gluster
04:44 quillo joined #gluster
04:58 yinyin joined #gluster
05:01 chacken3 joined #gluster
05:06 shylesh joined #gluster
05:06 shylesh_ joined #gluster
05:13 hchiramm_ joined #gluster
05:18 yinyin joined #gluster
05:24 bala joined #gluster
05:27 avati joined #gluster
05:37 hagarth joined #gluster
05:38 yinyin joined #gluster
05:51 vimal joined #gluster
06:10 raghu joined #gluster
06:14 overclk joined #gluster
06:31 sripathi joined #gluster
06:48 JoeJulian Ownage: You have to mount the volume and write through that volume mountpoint. Your bricks are just storage for GlusterFS, not for you.
06:50 Ownage isn't that what I'm doing when I mount it with mount -t glusterfs server1:/data /mnt
06:50 JoeJulian I'll check. I've never read Martin Lanner's walkthrough.
06:51 JoeJulian I don't like it already though... He should have used the ,,(ppa)
06:51 glusterbot The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
06:52 JoeJulian Ok, you mounted /dev/vdb on /data. That's the brick.
06:52 JoeJulian He has you touch /data/test.txt but never delete it, potentially polluting the not-yet-existent volume.
06:53 JoeJulian And he has you create and start the volume, but never mount it.
06:53 JoeJulian When you mount -t glusterfs gluster01:testvol /data, and your brick is /data, that's not going to work.
06:54 rgustafs joined #gluster
06:54 JoeJulian When you mount over a filesystem, that filesystem is mounted over. Nothing can get on /dev/vdb now because you've mounted the volume (that needs that /dev/vdb by the way) over the top of it.
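
A minimal sketch of the separation JoeJulian is describing, reusing the walkthrough's gluster01/testvol names but keeping the brick directory distinct from the client mountpoint (exact paths are illustrative):

    # /dev/vdb holds brick data; mount it somewhere that is NOT the client mountpoint
    mount /dev/vdb /export/brick1
    # create and start a 3-way replicated volume from one brick per node
    gluster volume create testvol replica 3 gluster01:/export/brick1 gluster02:/export/brick1 gluster03:/export/brick1
    gluster volume start testvol
    # mount the volume (not the brick) and do all reads and writes through this mountpoint
    mount -t glusterfs gluster01:/testvol /mnt/testvol
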
07:06 ramkrsna joined #gluster
07:16 kevein joined #gluster
07:18 hagarth joined #gluster
07:18 ngoswami joined #gluster
07:19 ekuric joined #gluster
07:24 vijaykumar joined #gluster
07:25 vpshastry joined #gluster
07:27 sripathi joined #gluster
07:32 Nevan joined #gluster
07:38 kevein joined #gluster
07:46 vijaykumar left #gluster
07:47 vijaykumar joined #gluster
07:50 puebele1 joined #gluster
07:56 mooperd joined #gluster
08:00 sripathi joined #gluster
08:09 puebele joined #gluster
08:13 vijaykumar joined #gluster
08:21 hagarth joined #gluster
08:46 badone joined #gluster
08:46 tryggvil joined #gluster
08:51 lkoranda joined #gluster
08:52 vpshastry joined #gluster
08:54 vijaykumar joined #gluster
08:54 Oneiroi Hi All, trying to use gluster for a clustered FS for Openstack (3.3.1) constantly getting "Directory Not Empty" when deleting an instance and openstack tries to clean up … any ideas?
08:55 Oneiroi selinux is in permissive mode, OS is Fedora 17
08:57 rastar joined #gluster
09:00 mooperd Oneiroi: I am just about to set up the same system myself
09:00 mooperd Oneiroi: Well, if I can get bloody openstack installed.
09:00 mooperd It is eluding me at the moment
09:00 Oneiroi mooperd: what os ?
09:00 mooperd Centos 6.3
09:01 Oneiroi ah you'll want epel installed and use the epel openstack packages
09:02 mooperd Oneiroi: Yep, thats what I have been trying to do
09:02 mooperd https://github.com/beloglazov/openstack-centos-kvm-glusterfs/
09:02 glusterbot <http://goo.gl/KC498> (at github.com)
09:02 Oneiroi They are in the process of moving to Folsom atm, so the docs are a bit mish-mash ...
09:02 mooperd This is the most……..eloquent
09:02 mooperd howto
09:02 Oneiroi this: https://fedoraproject.org/w/index.php?title=Getting_started_with_OpenStack_on_Fedora_17&oldid=306694 can be applied using the epel packages
09:02 glusterbot <http://goo.gl/HBkT9> (at fedoraproject.org)
09:02 mooperd But still
09:02 mooperd It's bloody weird
09:03 mooperd 100 or more individual steps
09:03 mooperd All of which are liable to explode in your face!
09:03 mooperd "cut the blue wire". "THEY ARE ALL BLUE WIRES!!"
09:03 mooperd etc
09:04 mooperd I had a lot more fun with RHEV
09:07 shireesh joined #gluster
09:12 DaveS_ joined #gluster
09:28 dobber joined #gluster
09:31 vijaykumar joined #gluster
09:33 manik joined #gluster
09:35 gbrand_ joined #gluster
09:49 rotbeard joined #gluster
09:52 rotbeard hi there, I have glusterfs 3.0 (debian squeeze) on 2 webservers. Server and clients both run on the local machines. I started a 'find' and my gluster daemons push my system load to > 100. I killed that 'find' and one server looks pretty good, but the second doesn't. If I start glusterfs on server 2 it pushes the load to >100 again and this will push the load on server 1 too. How can I debug this?
09:53 dobber joined #gluster
09:54 yinyin joined #gluster
09:54 nightwalk joined #gluster
09:56 Oneiroi mooperd: pretty much
09:56 Oneiroi heavy workload this morning, replies will be sporadic
09:57 JoeJulian rotbeard: Start it and wait until it finishes re-populating the open fds and locks.
09:58 rotbeard thanks JoeJulian, I will do this tonight..can I monitor the status somehow?
09:59 shireesh joined #gluster
09:59 JoeJulian Not really, but it shouldn't last more than a few minutes (depending on number of open fd and locks)
10:00 rotbeard is a load of > 100 ( 2 native 6 core xeons with hyperthreading) normal while repopulating?
10:01 rotbeard I would expect a high I/O instead
10:01 JoeJulian Yes, especially back in the 3.0 days.
10:01 rotbeard ok
10:01 JoeJulian No, there's no actual i/o happening. It's re-establishing structures.
10:02 JoeJulian And you are aware that 3.0 is really old and buggy, right?
10:02 vpshastry joined #gluster
10:02 rotbeard of course :/ I should compile it by myself instead of using debian binaries
10:03 rotbeard this is the disadvantage of debian...some packages are really, really old
10:04 JoeJulian @ppa
10:04 glusterbot JoeJulian: The official glusterfs 3.3 packages for Ubuntu are available here: http://goo.gl/7ZTNY
10:04 JoeJulian Can't you use those as well? I'm more of an rpm guy.
10:07 rotbeard I never used ubuntu packages in debian before, but I will have a look at this
10:07 rotbeard in the next debian stable release, they use 3.2...not as up-to-date as I thought
10:10 rotbeard mh, the toolchain is different, I think I can not use ubuntu packages :(
10:15 rotbeard I could try 3.2 from debian backports, this should be better than 3.0
10:18 yinyin joined #gluster
10:18 rotbeard btw is it normal in 3.0 that a simple find command increases the load to a very high value?
10:22 shireesh joined #gluster
10:27 JordanHackworth_ joined #gluster
10:28 masterzen_ joined #gluster
10:29 gm____ joined #gluster
10:29 carrar_ joined #gluster
10:29 hagarth__ joined #gluster
10:30 JoeJulian Probably if you have a replicated volume and you're leaving one disconnected all the time. It should need to self-heal.
10:32 rotbeard normally both are online + connected
10:32 rotbeard anyway, I will have a look at installing a newer glusterfs
10:34 atoponce joined #gluster
10:35 social_ joined #gluster
10:40 rgustafs joined #gluster
10:40 tjikkun joined #gluster
10:40 tjikkun joined #gluster
10:41 hagarth joined #gluster
10:45 tru_tru joined #gluster
10:50 kore_ joined #gluster
10:50 duerF joined #gluster
10:51 kore left #gluster
10:56 Oneiroi JoeJulian: I have a gluster 3.3.1 setup, replica 2 across 3 nodes (2 bricks on each); whether mounted using nfs or glusterfs (fuse), an odd thing occurs: the mounted FS will show a directory as empty (at least to ls), but attempts to remove it lead to a "directory not empty" error. Any ideas?
10:56 Oneiroi I can confirm finding files in the bricks themselves, and the removal of these manually allows the directory to be removed on the mounted fs.
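
A rough sketch of the check Oneiroi describes, comparing what the client mount shows with what is actually left on each brick (mount and brick paths are hypothetical):

    # through the client mount the directory appears empty, yet rmdir fails
    ls -la /var/lib/nova/instances/instance-00000001
    # on each server, the matching directory under the brick may still contain files
    ls -la /export/brick1/instance-00000001
    ls -la /export/brick2/instance-00000001
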
10:57 ctria joined #gluster
11:11 vpshastry joined #gluster
11:18 shireesh joined #gluster
11:58 vpshastry joined #gluster
12:13 hagarth joined #gluster
12:24 jim`` joined #gluster
12:25 jim`` Hi, I'm running gluster 3.2 and have a dead node in a 2 way replicated configuration
12:26 jim`` the dead node is completely inaccessible, I'd like to add a new peer and add the replicated bricks to that from the remaining good node
12:26 jim`` However replace-brick isn't working for me (replace-brick commit failed)
12:26 jim`` And I can't detach from the peer
12:27 jim`` nor add-brick (Another operation is in progress, please retry after some time)
12:27 jim`` I'd appreciate any suggestions on what to try next
12:49 lkoranda joined #gluster
12:52 guest2012 joined #gluster
12:52 guest2012 left #gluster
12:52 guest2012 joined #gluster
12:52 guest2012 hi all. Does anyone know if gluster 3.3.1 can coexist with 3.3.0 in the same volume?
12:57 kkeithley We (red hat, gluster) don't test mixed 3.3.0/3.3.1 volumes. It's not a supported configuration. If you have problems, people are going to tell you to upgrade the 3.3.0 boxes to 3.3.1. That said, the wire protocols (rpc) are unchanged. There's no reason to think it wouldn't work. YMMV. Try it and see.
12:58 pepe123 joined #gluster
12:58 kkeithley (and after you try it, then upgrade the 3.3.0 boxes to 3.3.1 ;-))
13:00 guest2012 nice, thanks.
13:03 gbr joined #gluster
13:08 Oneiroi anyone?: "I have a gluster 3.3.1 setup, replica 2 across 3 nodes (2 bricks on each); whether mounted using nfs or glusterfs (fuse), an odd thing occurs: the mounted FS will show a directory as empty (at least to ls), but attempts to remove it lead to a "directory not empty" error. Any ideas? I can confirm finding files in the bricks themselves, and the removal of these manually allows the directory to be removed on the mounted fs"
13:08 gbrand__ joined #gluster
13:09 kkeithley Oneiroi: are they real files? Or are they zero length files with some xattrs on them?
13:09 vijaykumar left #gluster
13:10 Oneiroi kkeithley: real files it would appear
13:10 lkoranda joined #gluster
13:12 Oneiroi manual removal of said files directly from the bricks allows removal of the directory on the mounted fs … it's extremely odd, there's nothing in the gluster logs to suggest what the issue might be either :-/
13:14 pepe123 hi everyone! I'm using GlusterFS native clients (version 3.2.5-1). I have one volume that should be exported as read-only to some clients and as rw to others. How can I enforce these restrictions from the server? I've seen the features/filter translator (read-only option) but can't figure out how to use it from the command line..
13:22 jrossi left #gluster
13:24 kkeithley Oneiroi: that is strange. I gather you can reproduce this easily?
13:24 chirino joined #gluster
13:25 Oneiroi kkeithley: 100% of the time it would appear, openstack is using the mounted fs (flusterfs or nfs both do the same), for /var/lib/nova/instances creationg of an nistance is fine, it's only the deletion that appears to fail with a "directory not empty" error
13:25 Oneiroi s/flusterfs/glusterfs/
13:25 glusterbot What Oneiroi meant to say was: kkeithley: 100% of the time it would appear, openstack is using the mounted fs (glusterfs or nfs both do the same), for /var/lib/nova/instances creationg of an nistance is fine, it's only the deletion that appears to fail with a "directory not empty" error
13:25 Oneiroi s/creationg/creation/
13:25 glusterbot What Oneiroi meant to say was: kkeithley: 100% of the time it would appear, openstack is using the mounted fs (flusterfs or nfs both do the same), for /var/lib/nova/instances creation of an nistance is fine, it's only the deletion that appears to fail with a "directory not empty" error
13:26 balunasj joined #gluster
13:26 pepe123 any hints?
13:28 Oneiroi pepe123:  no idea myself :(
13:29 pepe123 Oneiroi: :(
13:30 edward1 joined #gluster
13:32 kkeithley pepe123: had to google for it myself. E.g. see https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Storage_Software_Appliance/3.2/html/User_Guide/chap-User_Guide-Managing_Volumes.html#sect-User_Guide-Managing_Volumes-Tuning
13:32 glusterbot <http://goo.gl/QeDUU> (at access.redhat.com)
13:32 kkeithley looks like you can only set rw or ro on a per-volume basis. No way to say some clients are rw and others are ro
13:33 kkeithley Seems like a reasonable enhancement if you want to file a BZ
13:36 pepe123 kkeithley: maybe I am missing something but the options on that link seem to be only for nfs. I found the filter translator but can't see how to use it from the command line..
13:36 pepe123 http://gluster.org/community/documentation/index.php/Translators/features/filter
13:36 glusterbot <http://goo.gl/b5BQ0> (at gluster.org)
13:36 pepe123 glusterbot: yes, but how to use from the command line
13:36 pepe123 it
13:38 pepe123 since i am using the command line already, i dont think it would be a good idea to edit some files by hand.
13:39 m0zes joined #gluster
13:41 pepe123 I could also use something like mount -t glusterfs -o ro .. but in my case some of the machines are not really 'secure', so this measure is not as secure as I would like
13:41 pepe123 so that is why I would like to enforce this from the server and not on clients
13:41 ekuric joined #gluster
13:42 pepe123 so my idea is to create some volumes with features/filter (for read-only) and others without features/filter but with the same underlying subvolume
13:43 lkoranda joined #gluster
13:44 rastar left #gluster
13:44 rastar joined #gluster
13:49 ndevos pepe123: the glusterfs client will download the volume file from the server and use that to connect to the bricks; it is also possible to use a different volume file (one that does not have the ro option)
13:50 ndevos pepe123: the only option would be to set the ro restriction on the bricks, not sure if that's easily done though
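
This is roughly why a client-side restriction is weak: the fuse client normally fetches the volume file from glusterd at mount time, but it can just as well be started against a locally edited copy (the custom volfile path below is hypothetical):

    # normal mount: the client asks the server for the volfile
    mount -t glusterfs server1:/myvol /mnt/myvol
    # alternative: start the client against its own volume file, e.g. one without the ro option
    glusterfs --volfile=/etc/glusterfs/myvol-custom.vol /mnt/myvol
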
13:50 pepe123 ndevos: thanks! but do you know how to make this config from the command line?
13:50 glusterbot New news from newglusterbugs: [Bug 883785] RFE: Make glusterfs work with FSCache tools <http://goo.gl/FLkUA>
13:50 ndevos pepe123: I very much doubt that's possible from the command line
13:51 pepe123 ndevos: bad news then :( but thanks anyway.
13:52 ndevos pepe123: well, it may be possible, but I dont see how atm :-/
13:53 pepe123 ndevos: ;)
13:55 kkeithley it is possible to set ro on a volume from the cli (see the link I posted). I don't see anything on that page that makes me think it's for nfs only. But you can't set rw for some clients and ro for other clients.
13:56 aliguori joined #gluster
13:56 kkeithley er, I guess nfs.volume-access kinda implies nfs only. :-}
13:57 * kkeithley blushes
13:57 pepe123 :)
13:57 kkeithley should read the whole line
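
So from the CLI the restriction can only be applied per volume, not per client; a hedged sketch of what appears to be available in 3.2/3.3 (volume name is illustrative, and features.read-only may not be settable in every build):

    # NFS clients only: export the volume read-only via the gluster NFS server
    gluster volume set myvol nfs.volume-access read-only
    # native (fuse) clients: make the whole volume read-only for everyone, if the option is supported
    gluster volume set myvol features.read-only on
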
14:04 ekuric left #gluster
14:06 UnixDev /join #opensips
14:11 rwheeler joined #gluster
14:15 plarsen joined #gluster
14:20 glusterbot New news from newglusterbugs: [Bug 885739] RedHat init script doesn't support overwriting of options from sysconfig <http://goo.gl/fMJ1p>
14:23 Shdwdrgn joined #gluster
14:49 guigui3 joined #gluster
14:54 ngoswami joined #gluster
14:57 rastar left #gluster
14:59 lh joined #gluster
14:59 lh joined #gluster
15:02 stopbit joined #gluster
15:10 johnmark jdarcy: ping
15:10 theron joined #gluster
15:11 johnmark anybody going to LISA this week? we'll have a BoF, a booth and jdarcy is giving a tutorial
15:13 baczy joined #gluster
15:13 baczy Woe is me, GlusterFS geo replication
15:13 noob2 joined #gluster
15:18 noob2 so i'm not sure if i asked this before but can i convert a replica 2 volume into a replica 3?  i thought i would have to recreate the volume and migrate
15:22 baczy http://pastebin.com/HxQZL4bQ Anyone seen this silliness with GlusterFS Geo-Replication?
15:22 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
15:22 baczy Trying to get it set up and that's all I get
15:24 wushudoin joined #gluster
15:29 johnmark noob2: hrm... I thought the ability to switch volume types was a feature of 3.3.x
15:35 rastar joined #gluster
15:39 rastar left #gluster
15:39 duerF joined #gluster
15:41 noob2 oh ok i'll check again
15:41 noob2 thanks :)
15:43 ron-slc joined #gluster
15:45 vpshastry joined #gluster
15:46 gbr joined #gluster
15:58 tqrst is it normal that "gluster volume status" doesn't show that two of my servers - and all their bricks - are down?
15:58 tqrst they're not on the list at all
15:58 nightwalk joined #gluster
15:59 robo joined #gluster
16:01 daMaestro joined #gluster
16:03 ndevos tqrst: are they listed in "gluster peer status"?
16:10 tqrst ndevos: yes, as disconnected
16:11 ndevos tqrst: right, so thats why "gluster volume status" can not get the details
16:12 bdperkin_gone joined #gluster
16:12 tqrst ndevos: I would expect them to at least show that some bricks are missing, though
16:12 tqrst s/them/it
16:12 ndevos tqrst: how about "gluster volume info"?
16:12 tqrst if I were to only look at gluster volume status, I would get the impression that everything is just fine and dandy
16:12 bdperkin joined #gluster
16:13 tqrst ndevos: gvi shows all 40 bricks of my 20x2 volume
16:13 baczy joined #gluster
16:14 tqrst so it still knows about the bricks, but gvs doesn't feel like showing them for some reason
16:15 tqrst ah, bug 858275
16:15 glusterbot Bug http://goo.gl/s7hy7 medium, medium, ---, kaushal, ASSIGNED , Gluster volume status doesn't show disconnected peers
16:15 ndevos ah
16:16 gbr nfs failing on a regular basis: http://pastebin.com/YksxCQRS
16:16 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
16:16 tqrst is there a "+1" type thing for redhat's bugzilla?
16:17 ndevos tqrst: just comment on it with your opinion :)
16:23 tqrst ndevos: done, thanks
16:26 Humble joined #gluster
16:32 gbr I think I need to dump gluster and go back to drbd and iscsi
16:34 johnmark gbr: doh. is there an associated bug?
16:37 gbr johnmark: I thought I found one earlier, but can't seem to find it again.  I'll file one, even if it gets marked as a duplicate.  Too late for my client though.  He is MAD.
16:38 johnmark gbr: Ok.
16:38 ramkrsna joined #gluster
16:38 ramkrsna joined #gluster
16:42 gbr Bug 885802 - NFS errors cause Citrix XenServer VM's to lose disks
16:43 glusterbot Bug http://goo.gl/xil6p urgent, unspecified, ---, ksriniva, NEW , NFS errors cause Citrix XenServer VM's to lose disks
16:47 gbr I may try to disable the gluster NFS server and use the kernel one instead.
16:48 gbr can I disable gluster NFS on a single node (replicate)?  That way I can do the switchover with limited downtime.
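
As far as I know the built-in NFS server is toggled per volume rather than per node, so the switch would look something like this (volume name is illustrative):

    # stop gluster's built-in NFS export of this volume (takes effect on all nodes serving it)
    gluster volume set myvol nfs.disable on
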
16:48 ndevos gbr: not using ,,(ext4) by any chance?
16:48 glusterbot gbr: Read about the ext4 problem at http://goo.gl/PEBQU
16:49 gbr nope, using XFS
16:50 ndevos ok
16:51 glusterbot New news from newglusterbugs: [Bug 885802] NFS errors cause Citrix XenServer VM's to lose disks <http://goo.gl/xil6p>
17:00 elyograg noob2: I know that you can add a replica to a straight replicated volume.  I'm not clear on whether you can add a replica to a distributed-replicated volume.
17:01 elyograg I *think* the syntax for adding it is just to add a brick with a new "replica n" clause.
17:03 elyograg noob2: I do not know what version is required for this functionality.  I've only used 3.3.0 and 3.3.1.  My knowledge about this capability has come from idling in this channel.
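
The syntax elyograg has in mind appears to be the 3.3.x add-brick form below; for a distributed-replicated volume you would need one new brick per existing replica set (server and brick names are made up):

    # grow a replica 2 volume to replica 3 by adding a third brick along with the new count
    gluster volume add-brick myvol replica 3 server3:/export/brick1
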
17:06 nueces joined #gluster
17:09 guigui3 left #gluster
17:09 hagarth joined #gluster
17:11 tqrst any news regarding the ext4 issues? My bricks are all ext4, and we might have to update our kernel soon due to a non-gluster issue :\
17:20 rastar joined #gluster
17:21 jbrooks joined #gluster
17:25 chacken joined #gluster
17:30 Mo__ joined #gluster
17:33 penglish joined #gluster
17:33 baczy Georep makes me sad
17:35 semiosis :O
17:36 noob2 elyograg: thanks :)
17:50 zaitcev joined #gluster
17:52 ShaunR joined #gluster
17:54 raghu joined #gluster
18:04 DataBeaver joined #gluster
18:05 DataBeaver Any clue why I sometimes get directories replaced with broken symlinks like this: lrwxrwxrwx 2 root root       54 Dec  1 09:12 debug -> ../../d3/dc/d3dcc135-0a62-44ba-a9a7-c0a3f865a60e/debug
18:07 semiosis that looks bad
18:07 semiosis DataBeaver: what version are you using?
18:08 DataBeaver 3.3.0
18:08 semiosis i hope that whatever is causing that is fixed in 3.3.1 but tbh i have no idea why/how that would happen
18:09 semiosis maybe JoeJulian or kkeithley have heard of that
18:10 kkeithley doesn't ring any bells
18:13 DataBeaver I think they've appeared in place of files I have deleted, but I'm not entirely sure
18:16 semiosis DataBeaver: deleted through a client, right?  not deleted from the backend bricks directly?
18:16 DataBeaver I'm quite sure I deleted them through a client, yes
18:19 semiosis possible that happened while the client was disconnected from one of the servers?
18:19 semiosis (just guessing around here)
18:22 Humble joined #gluster
18:22 plarsen joined #gluster
18:24 johnmark FYI... the call for papers for SCALE 11x closes today
18:24 johnmark if you want to talk about GlusterFS, I'll pay for your travel
18:25 DataBeaver Is glusterfs even capable of operation if the network connection is broken?
18:26 semiosis DataBeaver: sometimes... moreso when using replication
18:26 DataBeaver The timestamp does indicate it happened fairly soon after I woke the client machine from sleep in the morning.
18:27 DataBeaver I don't have replication or any other fancy features in use here
18:28 semiosis ~pasteinfo | DataBeaver
18:28 glusterbot DataBeaver: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:29 DataBeaver http://fpaste.org/9VHc//
18:29 glusterbot Title: Viewing Paste #259086 (at fpaste.org)
18:29 semiosis single-brick volumes??
18:30 DataBeaver Yeah, this is just my home network
18:31 DataBeaver I was using NFS for a long time, but gradually it stopped working, so I had to look for an alternative and glusterfs was the best candidate.
18:33 DataBeaver Is there something bad about single-brick volumes, or were you just surprised at an uncommon use case?
18:35 elyograg DataBeaver: it does seem counterintuitive to add the considerable overhead of gluster if you're not going to get replication or extreme capacity through distribution.
18:36 semiosis +1
18:36 DataBeaver It does give me a working networked file system though, which NFS has been incapable of providing lately.
18:37 DataBeaver Aside from NFS, all the other options seem to be rather heavy duty, and many of them are in various states of abandonment.
18:37 semiosis imho, get nfs working
18:37 semiosis if you dont need the scale-out ability of glusterfs
18:38 DataBeaver I tried to debug NFS, but since it's composed of a bazillion components, I couldn't make much headway and got frustrated.
18:40 Humble joined #gluster
18:41 DataBeaver At first NFSv4 started mysteriously failing so that I could list files and even get correct username mappings, but trying to read the contents of any file caused the process to hang.  That was somewhere around a year ago.  Then, a few months ago, NFSv3 started freezing under heavy load.  Trying to work on my digital photos was guaranteed to cause a freeze in short order.  There were no useful messages to indicate what was going wrong.
18:42 rastar left #gluster
18:43 DataBeaver As a software developer and a person who likes to know what's going on in their system, I like the simplicity of glusterfs's single-process approach, as opposed to the monstrosity which is NFS and especially NFSv4.
18:58 baczy joined #gluster
18:59 JoeJulian DataBeaver: Nope, never seen that. I wish I had time to diagnose it right now, but I'm running way behind already. If you can keep that broken state intact, I might be able to look at it this afternoon.
19:03 DataBeaver hrm, I cleaned up the symlinks already.  The server logs contain entries like this: [2012-12-01 09:12:32.686756] W [posix-handle.c:529:posix_handle_soft] 0-home-posix: symlink ../../88/79/88792efd-4138-4dea-90d9-27b68d03b124/debug -> /home/.glusterfs/88/79/88792efd-4138-4dea-90d9-27b68d03b124/debug failed (No such file or directory)
19:06 edward1 joined #gluster
19:08 DataBeaver I can try deleting some directories right before I put the machine to sleep tonight and after I wake it up tomorrow to see if that reproduces the problem
19:15 DataBeaver btw, is it normal that glusterfs is slow with handling large amounts of small files and causes extremely high load on the server?
19:19 noob2 yeah that's normal
19:20 DataBeaver okay, guess I'll just have to live with it then
19:45 gbr joined #gluster
19:45 kkeithley JoeJulian: EPEL upgraded to Folsom? You're referring to the openstack-swift rpms, right?
19:47 kkeithley The gluster-swift rpms are still Essex, including the new ones I put up in my repo this morning.
19:48 kkeithley And I hope we're going to have Folsom in a couple days.
19:51 y4m4 joined #gluster
19:52 Humble joined #gluster
19:53 JoeJulian kkeithley: Yes
19:54 kkeithley so a) I'm not sure what openstack-swift rpms going to Folsom have to do with us, and b) what broke, maybe I can avoid repeating that breakage.
19:54 elyograg I saw the blog post about better swift coming.  it talked about keystone.  is that there?
19:54 JoeJulian a) It broke my keystone integration into glusterfs-swift... :D
19:54 kkeithley We'll get keystone after we get to Folsom. Dunno if it's going to make 3.4
19:55 kkeithley I'll crack the whip some more
19:55 JoeJulian I had it working.
19:55 elyograg ok.  i'm curious about something related - will the horrendous 32 character hex volume names still be required?
19:55 obryan joined #gluster
19:56 kkeithley 32 char hex vol names?  I give my volumes names like vol0 or the_volume when I'm testing. Do you mean the uuids?
19:57 elyograg i was reading something else, and JoeJulian said something about it too ... when using keystone currently, apparently the tenant ID in the database has to be the name of the volume.
19:57 JoeJulian elyograg: I can't see any way around that without hacking swift. It doesn't offer the tenant-name, just the tenant-id.
19:58 johnmark kkeithley: crack it! crack it!
19:58 JoeJulian I /think/ I saw that changing in grizzly, but I should make sure and file a bug if it's not.
19:58 glusterbot http://goo.gl/UUuCq
19:58 JoeJulian hehe, got myself.
19:59 elyograg does gluster have any kind of 'alias' capability where you can use two names to refer to a volume?  would that be something worth putting in bugzilla?
19:59 johnmark JoeJulian: :)
19:59 JoeJulian elyograg: Sounds like a reasonable workaround, and /probably/ not that hard to implement. file a bug. ;)
19:59 glusterbot http://goo.gl/UUuCq
20:00 nueces joined #gluster
20:00 elyograg JoeJulian: will do.  which component?
20:00 JoeJulian hmm... probably client I think...
20:02 JoeJulian https://plus.google.com/u/0/communities/110022816028412595292
20:02 glusterbot <http://goo.gl/f7xoc> (at plus.google.com)
20:02 JoeJulian Pfft... That wasn't what I was expecting glusterbot to say...
20:03 JoeJulian Join the new Google+ Gluster Community at http://goo.gl/f7xoc
20:04 JoeJulian johnmark: have you put the link on gluster.org yet?
20:04 Mo____ joined #gluster
20:08 elyograg getting a 500 error from that google+ page.
20:08 elyograg now it'sok.
20:12 zaitcev joined #gluster
20:19 Technicool joined #gluster
20:22 glusterbot New news from newglusterbugs: [Bug 885861] implement alias capability so more than one name can refer to the same volume <http://goo.gl/p3Kc1>
20:22 johnmark JoeJulian: not yet. I should
20:25 baczy joined #gluster
20:29 greylurk So last night our NFS connections to Gluster failed from some legacy boxes.  I'm having trouble making sense of the logs.  Any suggestions? https://gist.github.com/e027f1b44e3cc25861a2
20:29 glusterbot Title: Gluster error log Gist (at gist.github.com)
20:34 nueces joined #gluster
20:49 badone joined #gluster
20:53 gbrand_ joined #gluster
21:02 kkeithley greylurk: you built from source? What linux dist?
21:21 greylurk kkeithley - Not from source…  Installed the ubuntu repo packages in 12.04.
21:22 Staples84 joined #gluster
21:25 gbr Anyone here using kernel NFS to serve a gluster share instead of GlusterNFS?
21:26 Oneiroi joined #gluster
21:28 kkeithley JoeJulian: my preliminary build of gluster+UFO with Folsom "just worked."
21:30 jbrooks joined #gluster
21:40 xymox joined #gluster
21:42 gbr Anyone here using kernel NFS to serve a gluster share instead of GlusterNFS?
21:48 tryggvil joined #gluster
21:53 plarsen joined #gluster
22:00 H__ gbr: you mean exporting NFS over a local glusterfs mount ?
22:01 penglish H__: I took him to mean: mount the glusterfs locally using the FUSE module, and export it with knfsd
22:01 penglish s/him/gbr/
22:02 glusterbot What penglish meant to say was: H__: I took gbr to mean: mount the glusterfs locally using the FUSE module, and export it with knfsd
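
A sketch of that setup, assuming a local fuse mount at /mnt/gv0 (volume name, paths and export options are illustrative); FUSE filesystems exported through knfsd generally need an explicit fsid:

    # mount the gluster volume locally over FUSE
    mount -t glusterfs localhost:/gv0 /mnt/gv0
    # export it via the kernel NFS server; fsid gives knfsd a stable identifier for the FUSE mount
    echo '/mnt/gv0 *(rw,fsid=14,no_subtree_check)' >> /etc/exports
    exportfs -ra
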
22:02 redsolar joined #gluster
22:05 quillo joined #gluster
22:08 H__ penglish: yes, that's what i tried to say, you worded it better :)
22:12 bdperkin_gone joined #gluster
22:13 bdperkin joined #gluster
22:16 gbr H__: Nope, exporting a gluster mount over kernel NFS
22:16 gbr Gluster NFS is giving me grief.
22:17 noob2 left #gluster
22:30 H__ gbr: gluster NFS did not work for me either, i switched our servers to use the gluster mount. It's better, still dies from time to time.
22:44 andreask joined #gluster
22:45 ShaunR How does gluster work when it comes to large storage pools?  I have a 16 drive server now that I'm going to put gluster on, and I'm wondering what I would do when I want to add another 16 drives.  Would I just deploy another 16 drive server, install gluster on that, and then the two would become one?  How is this normally done?
22:46 ShaunR I have a buddy who just daisy-chains disk shelves but he doesn't require a lot of throughput; he just needs a lot of storage and doesn't care about the bottleneck... I'm the opposite though, I need high performance for throughput and IOPS
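
The usual pattern, as I understand it, is exactly that: the new server's storage becomes one or more extra bricks in the existing volume, followed by a rebalance (hostnames and paths are illustrative):

    # from an existing cluster member, add the new server to the trusted pool
    gluster peer probe server2
    # add its brick(s) to the existing distributed volume
    gluster volume add-brick myvol server2:/export/brick1
    # spread existing data across the enlarged volume
    gluster volume rebalance myvol start
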
22:49 gbr H__: I mount from Citrix XenServer, so a gluster mount won't work.
22:56 a2 gbr, that is a bad idea (using knfs w/ glusterfs+fuse)
22:57 a2 *knfsd
22:57 a2 gbr, H__: what exactly was the problem with gluster NFS?
22:58 johnmark gbr: also, I'm curious why mounting from GlusterFS client won't work with XenServer
22:59 AK6L left #gluster
23:04 gbr a2: Gluster NFS was stable, but since November 1st, I get this:  http://goo.gl/xil6p
23:04 glusterbot Title: Bug 885802 NFS errors cause Citrix XenServer VM's to lose disks (at goo.gl)
23:05 gbr johnmark: XenServer will only mount its storage from NFS, iSCSI or its own storage protocol.
23:06 gbr johnmark: if I could get XenServer to mount gluster shares, I'd be happy.
23:08 a2 gbr, what version are you using?
23:08 a2 what glusterfs version I mean
23:10 a2 gbr, did you perform a rebalance?
23:10 gbr a2:  3.3.1 and I have 2 servers in replica mode.
23:17 atrius joined #gluster
23:17 Alpinist joined #gluster
23:24 redsolar joined #gluster
23:30 gbr joined #gluster
