
IRC log for #gluster, 2013-01-22


All times shown according to UTC.

Time Nick Message
00:04 ninkotech_ joined #gluster
00:12 raven-np joined #gluster
00:15 gauravp joined #gluster
00:20 chacken1 left #gluster
00:37 fedora joined #gluster
00:38 fedora Does anybody know if the top issue from http://gluster.org/community/documentation/index.php/Gluster_3.2_Release_Notes:_Known_Issues is fixed in 3.3?
00:38 glusterbot <http://goo.gl/WwNcl> (at gluster.org)
00:42 dustint joined #gluster
01:06 chirino joined #gluster
01:15 nik__ joined #gluster
01:22 ShaunR joined #gluster
01:24 dustint joined #gluster
01:31 xmltok joined #gluster
01:44 raven-np joined #gluster
01:48 xmltok joined #gluster
02:02 zwu joined #gluster
02:19 bdperkin joined #gluster
02:22 abkenney joined #gluster
02:26 bharata joined #gluster
02:31 lala joined #gluster
02:39 hchiramm_ joined #gluster
02:40 fedora joined #gluster
02:42 raven-np joined #gluster
02:48 overclk joined #gluster
02:53 xmltok_ joined #gluster
03:04 hagarth joined #gluster
03:04 xmltok_ joined #gluster
03:56 H__ joined #gluster
03:58 shylesh joined #gluster
04:07 helloadam joined #gluster
04:22 rodlabs joined #gluster
04:22 sripathi joined #gluster
04:27 maek joined #gluster
04:27 maek can I use gluster for a single directory on boxes that have already been partitioned up after the fact?
04:29 semiosis confused by your question... could you say more?
04:29 semiosis what do you have?  what do you want to do?
04:32 maek oh sorry
04:33 maek I have 1 dir
04:33 maek on a chef-server
04:33 maek in order to put these behind a load balancer
04:33 maek i need to share this dir
04:33 maek to each node
04:33 maek so I was under the impression I could like "wrap" an existing dir or make a new one in gluster
04:33 maek vs having to make a new partition
04:33 maek or disk
04:33 maek for a brick
04:33 maek semiosis: ^ thanks
04:34 maek and each node needs to be able to write, etc. could use nfs but im trying to avoid a single point of failure
04:34 maek Im also just getting to understand gluster. sorry for lack of understanding
04:34 sgowda joined #gluster
04:34 semiosis no prob
04:36 semiosis you could do a replicated volume with one directory mirrored between two servers to get HA
04:36 maek Ideally i have 3 servers
04:36 maek and I think I set replication to 3
04:36 maek if I understand correctly
04:36 maek so each node has a copy
04:36 maek this is a tiny amount of data
04:37 maek semiosis: this volume does it have to be a non-formatted partition
04:37 maek or can it just be a random dir in /var?
04:37 maek or i make /opt/gluster or something
04:37 semiosis gluster uses a directory on a server
04:37 semiosis xfs is recommended
04:38 semiosis cant use a raw block device, has to be formatted and mounted
04:38 semiosis theres an issue with recent kernels and ,,(ext4) so if your root partition is ext that may not work
04:38 glusterbot Read about the ext4 problem at http://goo.gl/PEBQU
04:38 maek can you point me to the docs on how to make gluster work on not the entire partition.
04:38 maek I see bricks being made of brand new clean partitions
04:39 semiosis what docs?
04:39 maek but Im having trouble with google fu to find 'how to do this with a single dir on an existing FS'
04:39 maek haha?
04:39 maek oh
04:39 semiosis ,,(glossary)
04:39 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
04:39 semiosis so a "brick" is a directory on a server
04:39 semiosis not a raw device
04:39 semiosis there's ,,(rtfm)
04:39 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
04:40 semiosis also ,,(quickstart)
04:40 glusterbot I do not know about 'quickstart', but I do know about these similar topics: 'quick start'
04:40 semiosis ,,(quick start)
04:40 glusterbot http://goo.gl/CDqQY
04:41 maek semiosis: so in this doc its just by coincidence they are using the entire partition for the brick
04:41 maek but actually i could point the gluster colume create at any dir?
04:41 maek gluster volume create, even.
04:41 maek http://gluster.org/community/documentation/index.php/QuickStart
04:41 glusterbot <http://goo.gl/CDqQY> (at gluster.org)
04:42 semiosis yeah, but like i said, if your root mount is ext4 that could be a problem if you're using a recent kernel
04:42 maek ok, thanks for the help
04:42 semiosis also good idea to use quorum since you'll have 3-way replication
04:42 maek appreciate it
04:42 semiosis yw
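
A minimal sketch of the setup discussed above, assuming three servers (server1, server2, server3 are placeholder hostnames) each contributing an existing directory such as /opt/gluster as its brick; the ext4 caveat mentioned earlier still applies to whatever filesystem those directories live on:

    # from server1, once glusterd is running everywhere
    gluster peer probe server2
    gluster peer probe server3
    # one 3-way replicated volume, one directory-brick per server
    gluster volume create gv0 replica 3 server1:/opt/gluster server2:/opt/gluster server3:/opt/gluster
    gluster volume start gv0
    # every node that needs the data mounts the volume (not the brick directory)
    mount -t glusterfs server1:/gv0 /mnt/gv0
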
04:42 test joined #gluster
04:42 maek quorum is something else ontop of this?
04:43 maek the gluster server doesnt provide?
04:43 semiosis it's an option you can enable on a volume
04:43 maek ah ok
04:43 maek thanks
04:43 semiosis yw
04:46 mohankumar joined #gluster
04:49 lala joined #gluster
04:51 deepakcs joined #gluster
04:53 ram joined #gluster
04:53 maek semiosis: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#cluster.quorum-type (quorum method)
04:53 maek http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#cluster.quorum-count (# needed for quorum)
04:53 glusterbot <http://goo.gl/dZ3EL> (at gluster.org)
04:53 glusterbot <http://goo.gl/e3aXk> (at gluster.org)
04:53 maek are those the two options for quorum?
04:54 semiosis afaik all you need to do is set quorum-type to 'auto'
04:55 maek thanks
04:55 semiosis yw
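
A sketch of the quorum setting referenced above, assuming the volume is named gv0; cluster.quorum-type and cluster.quorum-count are the option names from the docs maek links just above:

    # let glusterfs derive quorum from the replica count
    gluster volume set gv0 cluster.quorum-type auto
    # alternatively, require a fixed number of bricks to be up
    # gluster volume set gv0 cluster.quorum-type fixed
    # gluster volume set gv0 cluster.quorum-count 2
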
04:55 maek lets say something gets out of whack. is there a "this guy is the source, replicate from him" type of option?
04:55 maek does that make sense?
04:57 semiosis if you have a tiny amount of data just copy the data off a "good" brick, start over with empty bricks and copy the data back in through a client mount point
04:58 maek wow that was simple
04:58 maek semiosis: awesome.
04:59 semiosis there's no option to designate the "good" brick yet, though it may happen in a future release
04:59 maek is there a merge?
04:59 maek so what would happen in my case
04:59 maek if the gluster cluster split brains
04:59 maek the node that was getting the writes would have newer info
04:59 maek and I would want to force that out to all the other nodes
05:00 maek but it would be really hard to determine which one got the write
05:00 semiosis writes go to all bricks
05:00 maek unless its split
05:00 maek or would it just sync back up
05:00 maek once the split was resolved?
05:00 semiosis if a file is split brain then you wont be able to write to it at all, glusterfs will lock out any access to that file
05:00 maek sorry this new high tech stuff makes me so confused
05:00 maek ok
05:00 maek thanks again
05:00 semiosis yw
05:01 semiosis with quorum you should have a hard time getting split brain
05:01 maek ok
05:01 semiosis i think the volume becomes read-only if there's not enough bricks online to make a quorum
05:01 maek and mounting via -t glusterfs is using fuse to let me write into the FS without worries of api, etc
05:01 semiosis right
05:01 maek oh, awesome!
05:02 maek so glad smart people work on this stuff.
05:02 semiosis hahaha yeah they sure are smart
05:02 maek appreciate the help. have a good evening.
05:02 semiosis sure any time
05:02 semiosis have a good evening as well :)
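
A rough outline of the recovery semiosis describes, for a small data set; the brick and backup paths are placeholders, and the key point is that data goes back in through a client mount, never by writing to the bricks directly:

    # 1. copy the data off a brick you trust (reading the brick directory is fine)
    cp -a /opt/gluster/. /root/gv0-backup/
    # 2. stop and delete the volume, then empty the brick directories on all servers
    gluster volume stop gv0
    gluster volume delete gv0
    # 3. recreate and start the volume with the now-empty bricks, mount it on a client
    # 4. copy the data back in through the client mount point
    cp -a /root/gv0-backup/. /mnt/gv0/
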
05:11 vpshastry joined #gluster
05:12 hagarth joined #gluster
05:23 melanor9 joined #gluster
05:27 raghu joined #gluster
05:52 koodough joined #gluster
06:17 vpshastry joined #gluster
06:18 melanor91 joined #gluster
06:27 theron joined #gluster
06:31 shireesh joined #gluster
06:33 bala joined #gluster
06:46 glusterbot New news from resolvedglusterbugs: [Bug 848750] Gluster UFO (Gluster Swift) crashes after umount <http://goo.gl/u6VGn> || [Bug 765313] [FEAT] Unit test framework for UFO. <http://goo.gl/rzljE> || [Bug 767575] object-storage: all the processes not getting stopped and again a start of swift gives a problem <http://goo.gl/yl1QL>
06:50 glusterbot New news from newglusterbugs: [Bug 826512] [FEAT] geo-replication checkpoint support <http://goo.gl/O6N3f>
06:51 test__ joined #gluster
06:52 Nevan joined #gluster
07:00 vikumar joined #gluster
07:09 ngoswami joined #gluster
07:20 ramkrsna joined #gluster
07:20 ramkrsna joined #gluster
07:25 bala joined #gluster
07:27 jtux joined #gluster
07:28 vpshastry joined #gluster
07:33 pkoro joined #gluster
07:37 raven-np joined #gluster
07:43 Nuxr0 joined #gluster
07:44 jh4cky joined #gluster
07:49 raven-np joined #gluster
07:50 vpshastry joined #gluster
07:51 guigui1 joined #gluster
07:57 ekuric joined #gluster
07:59 ctria joined #gluster
08:01 jtux joined #gluster
08:11 rgustafs joined #gluster
08:15 ngoswami joined #gluster
08:17 andreask joined #gluster
08:17 Nr18 joined #gluster
08:31 Joda joined #gluster
08:37 tjikkun_work joined #gluster
08:38 bulde joined #gluster
08:39 vikumar joined #gluster
08:49 pai joined #gluster
08:54 hagarth joined #gluster
08:56 raven-np joined #gluster
09:01 dobber joined #gluster
09:05 bauruine joined #gluster
09:10 * x4rlos just read the page torbjorn__: sent yesterday.
09:13 x4rlos thanks for that :-)
09:17 duerF joined #gluster
09:21 glusterbot New news from newglusterbugs: [Bug 902684] Crash seen on ssl_setup_connection() <http://goo.gl/GY7rw>
09:22 Norky_ joined #gluster
09:26 Norky joined #gluster
09:28 rnts joined #gluster
09:29 DaveS joined #gluster
09:30 shireesh joined #gluster
09:41 bulde joined #gluster
09:48 melanor9 joined #gluster
09:49 manik joined #gluster
09:59 ram_ joined #gluster
10:01 guigui1 left #gluster
10:01 ram_raja joined #gluster
10:04 tryggvil joined #gluster
10:09 harshpb joined #gluster
10:10 Azrael808 joined #gluster
10:13 36DACS7N5 joined #gluster
10:17 glusterbot New news from resolvedglusterbugs: [Bug 879078] Impossible to overwrite split-brain file from mountpoint <http://goo.gl/eR0Ki>
10:35 rcheleguini joined #gluster
10:42 melanor91 joined #gluster
10:44 melanor9 joined #gluster
10:46 melanor92 joined #gluster
10:47 melanor93 joined #gluster
11:04 guigui3 joined #gluster
11:13 hagarth joined #gluster
11:14 shylesh joined #gluster
11:18 olri joined #gluster
11:24 sripathi1 joined #gluster
11:30 guigui3 joined #gluster
11:33 ngoswami joined #gluster
11:36 hagarth joined #gluster
11:46 dobber joined #gluster
11:51 Staples84 joined #gluster
11:52 andreask joined #gluster
11:54 kkeithley1 joined #gluster
11:57 shireesh joined #gluster
12:08 edward1 joined #gluster
12:13 Elendrys joined #gluster
12:21 glusterbot New news from newglusterbugs: [Bug 895656] geo-replication problem (debian) [resource:194:logerr] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory <http://goo.gl/ZNs3J> || [Bug 895528] 3.4 Alpha Tracker <http://goo.gl/hZmy9>
12:24 DataBeaver joined #gluster
12:27 rnts We have a 2-server 'Distributed-Replicate' setup where one server has crashed and come back online after a few days, when this happened before we simply restarted the server and did a rebalance and everything was fine, now we're getting IO-errors and filenotfound errors randomly throughout the system. What's the standard way of dealing with that? We run 3.2.6
12:27 plarsen joined #gluster
12:31 raven-np joined #gluster
12:31 Elendrys Hi there, I have a question about my Gluster config. Everything works fine, except that we have frequent "page allocation failure" messages in the system log. We are running glusterfs 3.3.0-1.el6.x86_64 (server and client, on both servers) and it seems that these errors are related to xattr operations ( GETXATTR(system.posix_acl_access) ). Servers run Scientific Linux 6.3 (Redhat like) x64, on ext4
12:31 Elendrys partitions mounted with -oacl option. Kernel version 2.6.32. Is there a chance to get rid of these problems by updating to a newer version ? Or fs tuning ? Thanks
12:34 rwheeler joined #gluster
12:45 tryggvil joined #gluster
12:47 glusterbot New news from resolvedglusterbugs: [Bug 889382] Glusterd crashes in volume delete <http://goo.gl/oAAVp>
12:48 Staples84 joined #gluster
12:51 Elendrys joined #gluster
13:05 melanor9 joined #gluster
13:10 iperovic joined #gluster
13:11 iperovic hi
13:11 glusterbot iperovic: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:16 rastar joined #gluster
13:16 overclk Elendrys: is this the same problem you were experiencing last time (geo-replication related)?
13:16 kkeithley1 @ext4
13:16 glusterbot kkeithley1: Read about the ext4 problem at http://goo.gl/PEBQU
13:17 iperovic I'm interested in using glusterfs for keeping two directories of about 100MB synchronized between servers. This directory used to be on an NFS3 export on a third server but recently we had problems with file locking. Since the old setup doesn't provide any HA, I tested the glusterfs solution. So far it did great. I'm interested whether anyone has any suggestions about such a setup. And a...
13:17 iperovic ...question - is it important to have separate disks / disk partitions for bricks? Can I just export some directory on each server, considering the relatively low performance required?
13:19 kkeithley1 Elendrys: I'd be cautious about updating to a newer kernel. See ext4 ^^^
13:21 kkeithley1 iperovic: you can certainly use a directory for the brick. Whether you do that or use a dedicated volume is up to you.
13:26 iperovic kkeithley1: Thanx. It seemed so, but the docs I went through didn't mention it.
13:32 iperovic left #gluster
13:33 dustint joined #gluster
13:34 abkenney joined #gluster
13:41 melanor91 joined #gluster
13:47 hateya joined #gluster
13:49 melanor9 joined #gluster
13:59 aliguori joined #gluster
13:59 aliguori_ joined #gluster
14:03 1JTAAEZCH joined #gluster
14:03 melanor9 joined #gluster
14:08 theron joined #gluster
14:09 puebele1 joined #gluster
14:11 hagarth joined #gluster
14:11 sgowda joined #gluster
14:15 melanor9 joined #gluster
14:18 sgowda joined #gluster
14:22 glusterbot New news from newglusterbugs: [Bug 885424] File operations occur as root regardless of original user on 32-bit nfs client <http://goo.gl/BiF6P> || [Bug 885802] NFS errors cause Citrix XenServer VM's to lose disks <http://goo.gl/xil6p> || [Bug 893778] Gluster 3.3.1 NFS service died after <http://goo.gl/NLoE3>
14:25 manik joined #gluster
14:28 haidz joined #gluster
14:31 vpshastry joined #gluster
14:33 puebele joined #gluster
14:33 harshpb joined #gluster
14:36 Elendrys overclk : yes. I checked about geo-replication. Everything looks ok when we use the system. These errors stopped for 3-4 days after a server reboot but the logs are full of them, and even if users are not experiencing any issue when using the file server, i'm afraid that we're going to run into trouble if we do not correct these memory pagination errors.
14:37 Elendrys I may add that obviously, we set up a Samba 3 server on top of the volumes (Original volumes are RW and replicas are RO)
14:39 stopbit joined #gluster
14:40 Elendrys overclk : i had to move to another campus, sorry for the late answer
14:42 tjikkun_work joined #gluster
14:47 goofy21 joined #gluster
14:47 goofy21 Hi All!
14:48 maek trying to figure this replicated brick stuff out. I have 2 boxes and a replication count of 2. I would like to add a 3rd box/brick and up the replication count to 3. cant seem to figure out the command. any input would be appreciated. thanks!
14:49 goofy21 (newbie) just add the peer
14:49 goofy21 and then:
14:49 goofy21 gluster volume add-brick...
14:50 maek so I did the peer probe
14:50 maek and that seemed to work
14:50 maek but then when I try and add the brick
14:50 maek # gluster volume add-brick gv0 172.21.66.20:/opt/gluster
14:50 maek Incorrect number of bricks supplied 1 with count 2
14:51 m0zes maek: gluster volume add-brick gv0 replica 3 172.21.66.20:/opt/gluster
14:51 m0zes should work.
14:51 m0zes if you are running 3.3.x
14:51 goofy21 I'm working on a proof-of-concept  and it seems to work fine as expected...
14:51 goofy21 but I have seen something I couldn't understand...
14:52 goofy21 in a single test with 2 nodes...
14:52 sjoeboo_ joined #gluster
14:52 goofy21 Why I can see the data in /export/brick1 and also in /export/brick1/.glusterfs ?
14:53 goofy21 are the files twice in the brick?
14:53 elyograg are you mounting the volume or the brick?
14:53 goofy21 the gluster volume is started...
14:53 goofy21 but not mounted on the node
14:54 goofy21 I used the Quick start guide...
14:54 elyograg the .glusterfs directory should not be visible from a client mount.
14:54 goofy21 that's true
14:54 goofy21 I can't see on client side...
14:54 goofy21 but data is twice?
14:55 elyograg the .glusterfs directory is part of the server operation.  the only thing that should be in there should be directories and hardlinks, so they'll take up hardly any space at all.
14:55 elyograg i think there may be symlinks too.
14:55 goofy21 thats it! :)
14:55 goofy21 hard links...
14:55 goofy21 I didnt think on them! :)
14:57 maek m0zes: that did it. Thanks!
14:58 m0zes maek: np. I'm glad it worked. that is not a feature I've tested yet :)
14:58 maek :D is there a way to add quorum-type after the volume has been created?
14:58 maek m0zes: ^?
14:59 m0zes maek: I think that is a gluster volume set operation. I
15:00 maek m0zes: great thanks. I found that
15:01 maek 1 more question. How do I get the data replicated out to the 3rd volume I just added?
15:01 m0zes ideally with the change in replica count, the self heal daemon will fix it. if it doesn't, you may need to stat the files.
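
If the self-heal daemon does not copy everything to the new brick on its own, the usual fallback is to stat every file through a client mount point (not on the brick itself), which makes the replicate translator check and heal each file it touches; a sketch, assuming the volume is mounted at /mnt/gv0:

    find /mnt/gv0 -print0 | xargs -0 stat > /dev/null
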
15:03 goofy21 Does anybody a real production env running on gluster?
15:03 goofy21 We are testing for an HPC solution...
15:03 goofy21 and I'm litle worried about that...
15:04 m0zes goofy21: I am running in an hpc env.
15:04 maek m0zes: gluster volume status says self-heal daemon on localhost "Online Y, with a pid" but no replication. you suggest for i in `ls` stat $i in my gluster brick dir with the .glusterfs ?
15:04 goofy21 how long?
15:04 goofy21 When I looked for references on internet... all of them were bad ones..
15:04 goofy21 but I can't believe that...
15:04 m0zes goofy21: for > 1yr at the moment. I started with 3.2.1 iirc.
15:04 DrVonNostren Hi everybody, I have created a 6 x 2 = 12 distributed replicated cluster out of 750GB xfs bricks, however, when mounted on my client (using gluster native) it only shows up as a size of 3.8T when I believe I should be getting 4.5T, can anyone help me shed some light on this?
15:05 m0zes goofy21: there are oddities at times, but the 3.3 line has been very stable.
15:05 goofy21 any significant problem during this time?
15:06 goofy21 we will use it with IB connection
15:06 goofy21 two nodes at the beggining...
15:06 goofy21 and starting with one of them geo-replicated
15:06 balunasj joined #gluster
15:07 m0zes in the early 3.2 line there were problems with the nfs server component degrading over time. in 3.3 I haven't been able to get rdma working (considered Proof of Concept, should be working in 3.4)
15:07 vpshastry left #gluster
15:08 goofy21 is it possible to disable NFS globally?
15:08 goofy21 I think we can use fuse on clients
15:08 m0zes gluster volume set <volume> nfs.disable on
15:08 m0zes iirc
15:09 goofy21 yes...
15:09 goofy21 but this is just for a volume...
15:09 goofy21 I mean... disable this service globally
15:09 hateya joined #gluster
15:09 goofy21 without starting up nfs service
15:09 danishman joined #gluster
15:10 sashko joined #gluster
15:10 m0zes not that I know of.
15:10 goofy21 ok
15:10 goofy21 thanks a lot for your help
15:12 Azrael808 joined #gluster
15:13 wushudoin joined #gluster
15:14 maek m0zes: the rebalance volume command is not working. says Volume gv0 is not a distribute volume or contains only 1 brick.
15:14 maek Not performing rebalance
15:15 neofob joined #gluster
15:15 m0zes maek: right, rebalance is for the distributed portions of volumes, not replicated portions of volumes. self-heal is for replicated portions.
15:15 hateya joined #gluster
15:15 maek is there a way to force self heal?
15:15 m0zes gluster volume heal <volname> heal
15:16 maek thanks. sorry :(
15:16 maek didnt mean to reduce you to a man page
15:16 harshpb joined #gluster
15:17 msgq joined #gluster
15:18 maek awesome. thanks!
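
The heal command syntax drifted a bit between releases; in the 3.3 line the commonly used forms are the ones below (volume name gv0 assumed):

    # kick off a full self-heal across all bricks
    gluster volume heal gv0 full
    # list entries that still need healing
    gluster volume heal gv0 info
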
15:18 _msgq_ joined #gluster
15:19 obryan joined #gluster
15:19 maek m0zes: one last question. https://gist.github.com/4595408 Does that info mean I have a replicated volume with 3 bricks and I have 3 bricks listed meaning each brick has all the data?
15:19 glusterbot Title: gist:4595408 (at gist.github.com)
15:32 nueces joined #gluster
15:34 bugs_ joined #gluster
15:35 Nicolas_Leonidas joined #gluster
15:35 Nicolas_Leonidas hi
15:35 glusterbot Nicolas_Leonidas: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an
15:35 glusterbot answer.
15:36 Nicolas_Leonidas Hi, this mounts the drive and it works properly, mount -t glusterfs newwww1.mydomain:rimagesvolume /r_images
15:36 Nicolas_Leonidas what do I need to put in /etc/fstab to make this auto mount on restart?
15:37 m0zes maek: yes
15:38 Nicolas_Leonidas shouldn't it be newwww1.mydomain:/rimagesvolume /r_images glusterfs defaults, _netdev 0 0
15:38 Elendrys Nicolas_Leonidas : yes
15:38 twx_ without the space
15:39 Elendrys but you'll have to change the last numbers according to your current partitions
15:39 Elendrys no space between defaults,_netdev
15:39 Nicolas_Leonidas Elendrys: how do I do that? what are those numbers?
15:41 Elendrys 0 0 : check your current fstab
15:41 m0zes those numbers tell the init scripts information about fsck. as it is a network filesystem, 0 0 will tell it to not care about fsck
15:41 Nicolas_Leonidas m0zes: is that bad or good? to not care about fsck?
15:42 m0zes Nicolas_Leonidas: there is no fsck command for glusterfs, so it is fine.
15:43 Elendrys thanks m0zes i just thought it would be in fact useless
15:43 Elendrys sorry :/
15:44 maek m0zes: thanks again for the help!
15:46 RicardoSSP joined #gluster
15:46 RicardoSSP joined #gluster
15:50 Nicolas_Leonidas It worked, now if I want to add a backup server that replicates the same volume, do I need to specify that in /etc/fstab?
15:53 Nicolas_Leonidas is it just newwww1.mydomain:/rimagesvolume /r_images glusterfs defaults,_netdev,backupvolfile-server=newwww3.mydomain 0 0 ?
15:53 Nicolas_Leonidas will that take care of the fail over?
15:56 DrVonNostren Hi everybody, I have created a 6 x 2 = 12 distributed replicated cluster out of 750GB xfs bricks, however, when mounted on my client (using gluster native) it only shows up as a size of 3.8T when I believe I should be getting 4.5T, can anyone help me shed some light on this?
15:58 kkeithley1 no. Since you're using native/fuse mounts and "replica 2", the client knows about both servers. If one fails the client will continue to write and read from the remaining server.
15:58 sjoeboo DrVonNostren: gluster volume status, make sure all your bricks are really in there
15:59 DrVonNostren sjoeboo: yes, all 12 are listed with Y in the online field
15:59 Nicolas_Leonidas kkeithley1: is that for me?
15:59 kkeithley1 yes
16:00 Nicolas_Leonidas kkeithley1: tnx
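
Putting the fstab pieces together, the entry being discussed would look roughly like this (hostnames and mount point are the ones from the conversation; backupvolfile-server only matters at mount time, since afterwards the fuse client talks to all replicas listed in the volfile):

    newwww1.mydomain:/rimagesvolume  /r_images  glusterfs  defaults,_netdev,backupvolfile-server=newwww3.mydomain  0 0
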
16:02 olri joined #gluster
16:05 Nicolas_Leonidas so the manual says just do yum install fuse fuse-libs, and then you can mount a volume on a client
16:05 Nicolas_Leonidas but when I do that I receive mount: unknown filesystem type 'glusterfs'
16:05 Nicolas_Leonidas what else needs to be installed on a client machine?
16:05 kkeithley1 glusterfs and glusterfs-fuse
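
A sketch of the client-side setup kkeithley1 is pointing at, assuming a CentOS/RHEL client and the volume from the earlier fstab discussion:

    # packages needed for the native (fuse) mount
    yum install glusterfs glusterfs-fuse
    mkdir -p /r_images
    mount -t glusterfs newwww1.mydomain:/rimagesvolume /r_images
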
16:06 m0zes DrVonNostren: hdd sizes are measured in base-10, not base-2. at *most* you'd get 4.1T
16:06 m0zes DrVonNostren: what filesystem did you use on your bricks?
16:06 kkeithley1 he said 750GB xfs bricks
16:06 m0zes wait, saw it :)
16:06 kkeithley1 /s/said/wrote/
16:07 m0zes not sure what would get you a 300GB discrepancy, though.
16:09 kkeithley1 the question is, are those 750G drives, or are they larger drives with 750G xfs fs on them?
16:10 kkeithley1 and then what does /usr/bin/df say about their size
16:11 DrVonNostren hey guys, i figured it out, I typoed the directory name on one of my bricks (which caused gluster to create a folder on that particular server), a directory without a 750 GB mount behind it
16:12 kkeithley1 et voila
16:12 DrVonNostren sorry, when I add a brick (hopefully correctly this time) how do I make sure it replicates with the other brick it should have been replicating with the entire time
16:13 kkeithley1 I believe your safest bet is to delete the volume and recreate it.
16:14 DrVonNostren kkeithley1: If i delete the volume is it going to bitch and moan about the various bricks already being part of a volume?
16:15 kkeithley1 yes, you'll need to clean out the .glusterfs dir and the xattrs. (That's why some of us suggest creating your bricks in a subdir of the volume. Then you can just rm -rf the subdir(s))
16:16 Azrael808 joined #gluster
16:16 DrVonNostren kkeithley1: sorry, im a noob, what does cleaning out the xattrs entail?
16:17 Elendrys Hi, can someone help me about page allocation failure problem ?
16:18 kkeithley1 just deleting the xattrs on the brick. One easy way is to mkfs.xfs your brick volumes again. Fortunately xfs is pretty quick.
16:18 kkeithley1 although that's like driving in finish nails with a sledge hammer
16:18 DrVonNostren okay, so i can just recreate the filesystem and call it a day?
16:19 DrVonNostren whats the peen hammer technique?
16:19 aliguori joined #gluster
16:19 * kkeithley1 should know this like the back of his hand. Every unix/unixlike is different
16:20 DrVonNostren centos 6.2
16:21 kkeithley1 I meant linux, *BSD, Solaris, all have different xattr commands
16:21 kkeithley1 I can't get linux xattr stuff to stick in my brain ;-)
16:25 kkeithley1 thus I use subdirs and just rm -rf the subdir
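
For reference, reusing a brick directory without reformatting generally means removing both the .glusterfs directory and the extended attributes glusterd set on the brick root; a sketch with a placeholder brick path (check what is actually present with getfattr before deleting anything):

    # show the trusted.* attributes on the brick root
    getfattr -d -m . -e hex /export/brick1
    # remove gluster's markers, then the metadata directory
    setfattr -x trusted.glusterfs.volume-id /export/brick1
    setfattr -x trusted.gfid /export/brick1
    rm -rf /export/brick1/.glusterfs
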
16:26 greylurk joined #gluster
16:30 kkeithley1 because {set,get}fattr. fattr for xattr? Whose idea was that?
16:32 errstr joined #gluster
16:33 jdarcy It's pretty annoying, isn't it?
16:34 polfilm joined #gluster
16:34 kkeithley1 yeah, but even worse is that I can't seem to wrap my brain around it.
16:34 jdarcy What gets me is all the -n and -v and -m instead of sane syntax, and especially "setfattr -x" to delete.
16:36 kkeithley1 ooh, gluster is making selinux go batshit crazy on f18. First time I've seen that.
16:40 guigui3 left #gluster
16:40 semiosis joined #gluster
16:43 bdperkin joined #gluster
16:45 chouchins joined #gluster
16:54 jdarcy kkeithley1: AVC denials?
16:56 johnmorr jdarcy: re: the gluster EEXIST thing we talked about friday, remounting the fs on the two problem clients fixed it. if we see that again, is there something we can do to further debug the client side?
16:59 semiosis jdarcy: toruonu was asking about negative lookups yesterday... hows your negative lookup xlator these days?
17:00 kkeithley1 must be AVC denials, e.g. Jan 22 11:53:26 f18node1 setroubleshoot: SELinux is preventing /usr/sbin/glusterfsd from getattr access on the lnk_file /var/tmp/bricks/volX/X/.glusterfs/00/00/00000000-0000-0000-0000-000000000001. blah blah blah
17:01 sashko joined #gluster
17:03 kkeithley1 This is in a VM, and since I'm @home I can't see the console.
17:06 zaitcev joined #gluster
17:08 kkeithley1 I had (and still have) SELinux enabled in enforcing mode on my F17 guests, but never saw this.
17:08 jdarcy semiosis: Same as it has been, I guess.  Haven't done anything with it for a while.
17:09 jdarcy johnmorr: I think that's the data point we really needed.  There must be some info cached somewhere on those clients that should have been evicted.  Thanks!
17:10 xmltok joined #gluster
17:16 lala joined #gluster
17:21 harshpb joined #gluster
17:21 theron joined #gluster
17:22 wN joined #gluster
17:26 theron joined #gluster
17:38 Mo___ joined #gluster
17:39 andreask joined #gluster
17:44 _msgq_ joined #gluster
18:01 portante joined #gluster
18:05 mweichert joined #gluster
18:10 testarossa joined #gluster
18:23 bauruine joined #gluster
18:23 Nr18 joined #gluster
18:24 Nr18 joined #gluster
18:34 johnmorr jdarcy: you're welcome. also, i upgraded the gluster servers for that volume to 3.3.1 today (they were 3.3.0, and the clients have been 3.3.1). i'm getting ENOTCONN and EINVAL on clients trying to access the volume
18:34 johnmorr jdarcy: i upgraded the servers one at a time, and all the bricks are up/available. if i remount the fs on the clients, it's fine.
18:34 johnmorr jdarcy: http://gluster.org/pipermail/gluster-users/2012-May/010276.html makes me think the remount should happen automatically?
18:34 glusterbot <http://goo.gl/QJeyI> (at gluster.org)
18:37 jdarcy johnmorr: Seems like they should.
18:43 johnmorr jdarcy: the glusterfs process on one client i checked is spinning in a loop trying to poll a file descriptor that it's already closed: http://pastebin.com/dS5AmnmE
18:44 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
18:46 nightwalk joined #gluster
18:47 jjnash joined #gluster
18:50 johnmorr [2013-01-22 13:47:29.513912] I [socket.c:1798:socket_event_handler] 0-transport: disconnecting now
18:50 johnmorr seems to insist it's disconnecting; that's been logged on the client sporadically since the servers got restarted
18:55 hattenator joined #gluster
18:59 xmltok joined #gluster
19:00 jdarcy johnmorr: Would you like to file the bug, or should I?
19:01 johnmorr jdarcy: i can; at bugzilla.redhat.com? anything i should include beyond the strace, lsof, and log output?
19:04 johnmorr jdarcy: looks like 3.3.1 needs to be added to the version list in bugzilla
19:05 johnmorr unless 'mainline' is the current release?
19:05 jdarcy johnmorr: That should be enough to get started.  Let me know the number and I can take care of component assignments etc.  Thanks!
19:05 jdarcy When in doubt, just mainline.  ;)
19:06 johnmorr not freebase?
19:06 johnmorr ^____^
19:15 jdarcy Wow.  Going from Nightwish to Sandra Boynton (thank you Amazon AutoRip) is like skydiving into a three-year-old's birthday party.
19:17 y4m4 joined #gluster
19:23 JoeJulian jdarcy: I think your email about designating bricks and mine about xlator xli integration need to complement each other. Your example of a translator that lets you place some filetype onto a specific brick is a good example.
19:23 JoeJulian s/xli/cli/
19:23 JoeJulian Don't you say it glusterbot...
19:23 glusterbot JoeJulian: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
19:23 JoeJulian Bad glusterbot...
19:25 jdarcy JoeJulian: Yep, they're related.
19:26 jdarcy You listened to Philadelphia Chickens. Customers who bought this song also bought...Mean by Taylor Swift.  Really?
19:28 JoeJulian hehe
19:29 * johnmorr chuckles.
19:29 JoeJulian jdarcy: I just don't think that custom (and therefore outside of the core development team) translators are going to be developed and/or adopted unless there's a clear and easy way to integrate them.
19:29 jdarcy JoeJulian: Agreed.  Ran up against this with HekaFS.
19:32 jdarcy JoeJulian: The problem is, how do you specify a possibly-complex transformation on a graph (the translator graph in this case) in reasonably concise form that fits on a sane command line?  Never seen a good notation for that.
19:33 jdarcy I think "an X above/below every Y" is probably about the maximum complexity that fits on a command line, and covers 90% of the real cases.
19:34 jdarcy Maybe add "an X *instead of* every Y" for things like erasure codes instead of AFR.
19:36 kkeithley1 If we have an erasure code xlator, and people wanted that instead of replicate, why wouldn't it be `gluster volume create $volname erasure...` instead of `gluster volume create $volname replicate...`  ?
19:36 johnmorr jdarcy: https://bugzilla.redhat.com/show_bug.cgi?id=902953
19:36 glusterbot <http://goo.gl/YhZf5> (at bugzilla.redhat.com)
19:36 glusterbot Bug 902953: unspecified, unspecified, ---, amarts, NEW , Clients return ENOTCONN or EINVAL after restarting brick servers in quick succession
19:40 DaveS joined #gluster
19:42 _msgq__ joined #gluster
19:46 DaveS__ joined #gluster
19:48 johnmark JoeJulian: +1 - thanks for  bringing that up
19:48 JoeJulian Maybe iptables style... "gluster xlator insert misscache [before] distribute"
19:48 johnmark <to no one in particular> - we really need to think about a portable runtime that will make it easier to do things like compile and add in a custom xlator
19:49 johnmark without needing to build the whole damn thing from scratch
19:49 johnmark GPR, anyone?
19:49 JoeJulian The pieces are there in glusterfs-devel to do that without anything additional, I think.
19:49 johnmark JoeJulian: hrm
19:52 kkeithley1 I gather you mean replace xlator X with a  3rd party xlator that's not known to gluster at build time.
19:52 jdarcy Yep.
19:53 glusterbot New news from newglusterbugs: [Bug 902955] [enhancement] Provide a clear and easy way to integrate 3rd party translators <http://goo.gl/O60es> || [Bug 902953] Clients return ENOTCONN or EINVAL after restarting brick servers in quick succession <http://goo.gl/YhZf5>
19:54 kkeithley1 yeah, but we only have a gluster-devel in RPM packages, not in debian/ubuntu .debs IIRC
19:58 johnmark kkeithley1: oy
19:59 kkeithley1 what about the API (libgfapi) runtime, is there a need to be able to tweak that the same way?
20:00 kkeithley1 Red Hat Storage Servers 2.0 for Hybrid and Public Cloud Win Cloud Computing Cloud Storage Excellence Award  http://www.tmcnet.com/news/2013/01/15/6854267.htm
20:00 glusterbot <http://goo.gl/rDxTU> (at www.tmcnet.com)
20:04 jdarcy kkeithley1: Seems like if someone's able to use libgfapi they can handle custom volfiles as well.
20:06 kkeithley1 Okay, I'll take your word on that.
20:08 johnmark jdarcy: er, not sure about that
20:08 johnmark there's the idea that "they can" and then there's the idea of forcing them to
20:09 johnmark I'm sure I *could* use corncobs, but I'm rather attached to Charmin
20:09 tryggvil joined #gluster
20:09 Technicool joined #gluster
20:10 jdarcy johnmark: OK.  If there's a way for users to generate a "standard" volfile that refers to a third-party translator, and there's a way for libgfapi to use a standard translator, what needs to change in libgfapi?
20:10 jdarcy johnmark: And BTW thank you for that image.  :O~
20:11 johnmark jdarcy: lulz... I knew you'd appreciate that
20:11 johnmark jdarcy: ok, that sounds manageable
20:12 johnmark jdarcy: but from your earlier comment, it sounded hairier than that
20:12 jdarcy I blame Taylor Swift.
20:13 johnmark ha! I find she's the cause of 90% of the world's technology problems
20:14 theron left #gluster
20:14 theron joined #gluster
20:15 jdarcy I need to go burn off some of this energy with exercise.  BBL.
20:22 fixxxermet joined #gluster
20:25 fixxxermet I've installed glusterfs on two servers, added a peer to server1, and then created a volume on server1, which server2 can see via gluster volume info.
20:25 fixxxermet Then I started the volume.  When do the file systems start synchronizing?
20:26 semiosis fixxxermet: please ,,(pasteinfo)
20:26 glusterbot fixxxermet: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
20:28 fixxxermet semiosis: http://fpaste.org/1Obv/
20:28 glusterbot Title: Viewing Paste #269513 (at fpaste.org)
20:28 kkeithley1 fixxxermet: server2 is the peer you added to server1?
20:28 fixxxermet yes, yum002
20:28 andreask joined #gluster
20:29 semiosis did /opt/repo have data before you created the volume?
20:29 fixxxermet Yes, on yum001
20:29 fixxxermet Data is still there
20:29 fixxxermet I then added a test file and it wasn't replicated either
20:29 semiosis what version of glusterfs?
20:29 fixxxermet 3.2.7 from epel on centos 6.3
20:30 fixxxermet Time is within a second of eachother
20:30 semiosis ok, so first of all, once you create a volume you should not access the brick directories any more
20:30 kkeithley1 @repos
20:30 glusterbot kkeithley1: See @yum, @ppa or @git repo
20:30 kkeithley1 @yum repo
20:30 glusterbot kkeithley1: kkeithley's fedorapeople.org yum repository has 32- and 64-bit glusterfs 3.3 packages for RHEL/Fedora/Centos distributions: http://goo.gl/EyoCw
20:30 semiosis fixxxermet: writing directly to the bricks is bad.  all access should go through a glusterfs client mount point
20:30 semiosis (or nfs client mount point)
20:30 kkeithley1 fixxxermet: I suggest you start over with newer release from my repo. ^^^
20:31 semiosis since glusterfs 3.3.0 there is a self-heal daemon which should start replicating.  +1 to kkeithley1, upgrade :)
20:31 fixxxermet I will, but first let me make sure I understand what I'm doing
20:31 fixxxermet These are local centos repo mirrors that clients access via http
20:31 semiosis in 3.2.x and previous you would have to stat files through a client mount point to trigger self-heal
20:31 semiosis fixxxermet: all access should go through a client mount point
20:32 fixxxermet The clients in this sense are workstations
20:32 fixxxermet using yum
20:32 semiosis ,,(glossary)
20:32 glusterbot A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
20:32 semiosis i mean "client" as in mount -t glusterfs...
20:32 semiosis (or nfs)
20:32 fixxxermet So these should be treated as file servers
20:32 fixxxermet Then the actual mirror servers would mount their exports
20:33 semiosis yes pretty much
20:33 fixxxermet ok, that makes much more sense
20:33 semiosis although you can have a client mount point on the server(s)
20:33 fixxxermet To mount itself?
20:33 fixxxermet OK, let me upgrade and start over
20:33 kkeithley1 my repo works just like the epel repo once you install the .repo file in /etc/yum.repos.d). The rpms are built on the same build servers that build for epel. The only reason 3.3.x isn't in the 'official' epel repo is because jdarcy and I have hekafs in epel which needs 3.2.7.
20:34 semiosis just use one data dir for bricks, like gluster volume create repo replica 2 server1:/bricks/repo1 server2:/bricks/repo1, then mount -t glusterfs localhost:repo /opt/repo
20:35 fixxxermet alright kkeithley1
20:35 fixxxermet thanks for the info @ both of you
20:35 semiosis s/for bricks/for bricks and a different dir for the client mount/
20:35 glusterbot What semiosis meant to say was: just use one data dir for bricks and a different dir for the client mount, like gluster volume create repo replica 2 server1:/bricks/repo1 server2:/bricks/repo1, then mount -t glusterfs localhost:repo /opt/repo
20:35 semiosis yw, good luck
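
Spelled out, the layout semiosis and glusterbot suggest looks roughly like this; server1/server2 are placeholders, and the brick directory and the client mount point must be different paths:

    # on each server: a dedicated brick directory that nothing else writes to
    mkdir -p /bricks/repo1
    # create and start the replicated volume
    gluster volume create repo replica 2 server1:/bricks/repo1 server2:/bricks/repo1
    gluster volume start repo
    # mount the volume where the data should actually live, then copy data in through it
    mount -t glusterfs localhost:repo /opt/repo
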
20:35 * JoeJulian was going to say something about johnmark's Charmin comment, but decided not to.
20:36 * kkeithley1 thanks JoeJulian for exercising restraint
20:51 fixxxermet So should the brick that I create already have my data in it, or should I create an empty brick and mount it where my data is?
20:53 semiosis normally you start with empty bricks and copy data in through a client mount point, but you should also be able to start with one "preloaded" brick and the other replica brick(s) empty
20:53 semiosis i've not tried that with 3.3 but afaik it still works
20:53 fixxxermet I'll just make an empty brick and copy the data
20:53 semiosis you sure dont want to mount over an existing directory because the mount will "hide" whatever is under the mount point on the rootfs
20:53 fixxxermet yup :)
20:54 fixxxermet Was wondering if gluster did some kind of magic to prevent that
20:54 semiosis ha no :)
20:54 kkeithley1 that'd be pretty powerful magic
20:56 Ryan_Lane joined #gluster
20:56 Ryan_Lane is gluster's NFS safe to use with inode64 in xfs?
21:00 melanor9 joined #gluster
21:22 Technicool joined #gluster
21:25 Bohrnag joined #gluster
21:32 Bohrnag Hello. I've just been asked to look into some options for shared filesystems on Linux, and GlusterFS was mentioned. I've read about GlusterFS performing less than optimally with smaller files. Does anyone know about any benchmarks, whitepapers or other information about this? How bad is the performance, etc?
21:37 semiosis best thing you can do is try it yourself with your real workload and see how it does
21:37 semiosis what's your use case?
21:38 m0zes joined #gluster
21:38 Bohrnag I've been asked to set up some shared redundant filesystem for 2 servers running ActiveMQ, pushing something like 150-200 messages per second. I'm not familiar with ActiveMQ myself, but have been told it's using fairly small files for each message in the queue.
21:39 Bohrnag And no, I can't go overboard by adding tons of servers and/or huge number of physical disks either.
21:41 Bohrnag So at this moment I have just started looking into what my options were, and was hoping someone could tell me whether this is something I should completely forget about for one reason or another, or if it might be worth spending the time to set it up and see with our specific workload :D
21:41 maek left #gluster
21:41 Bohrnag (if someone knows this isn't a good idea, that would save me a fair bit of time :D)
21:44 semiosis Bohrnag: why dont you use the shared-nothing HA arch for activemq?
21:44 Bohrnag As in, each ActiveMQ server has its own spool?
21:44 semiosis http://activemq.apache.org/pure-master-slave.html
21:44 glusterbot <http://goo.gl/2dDLe> (at activemq.apache.org)
21:45 chirino Bohrnag: the file size ActiveMQ uses is configurable.
21:45 Bohrnag This feature has been deprecated and will be removed in version 5.8 ?
21:45 semiosis ah, oops!  :)
21:46 semiosis lol, last time i looked into activemq they were recommending shared nothing, now they're back to shared fs/db
21:46 chirino Don't think we ever actually 'recommeded it'
21:47 chirino Bohrnag: switch to the levedb store.  Not only is it faster but it should work better /w gluster since it only appends to files.
21:47 chirino and if you ever have to deal /w a conflict resolution, that should make it easier to resolve those.
21:47 semiosis chirino: ok i'm sure you're right, i looked at this stuff a while back my memory is obvs not too great
21:47 Bohrnag chirino: Ok, thanks. I made a note of that, and will suggest it to the guys who are asking me for some shared storage :D
21:48 semiosis Bohrnag: you will probably want to use replica 3 with quorum in glusterfs because a split brain of your activemq data would be bad
21:49 semiosis chirino: would you agree?
21:49 chirino agree.
21:49 semiosis i remember running into that and it caused me to switch to jdbc persistence :)
21:49 semiosis that was before gluster had quorum tho
21:51 chirino BTW hoping our default HA strategy for ActiveMQ can be 'run on glusterfs'.
21:56 Bohrnag So, to sum it up, using GlusterFS as shared storage might not necessarily be a bad idea?
21:58 DrVonNostren with gluster, when i have a replication pair, does the gluster client send the data to the "head node" which in turn distributes out the data it receives from the client, or does the client itself round robin data amongst the servers in the cluster?
21:58 Ryan_Lane is gluster's NFS safe to use with inode64 in xfs?
22:06 Bohrnag chirino: levedb store? Would that be LevelDB?
22:28 Bohrnag
22:50 raven-np joined #gluster
