
IRC log for #gluster, 2013-03-27


All times shown according to UTC.

Time Nick Message
00:00 lh joined #gluster
00:04 luckybambu joined #gluster
00:11 VSpike joined #gluster
00:15 helloadam joined #gluster
00:25 dustint joined #gluster
00:25 lpabon joined #gluster
00:27 Rocky joined #gluster
00:35 yinyin joined #gluster
00:41 lh joined #gluster
00:41 lh joined #gluster
00:53 yinyin joined #gluster
00:55 luckybambu joined #gluster
00:59 RicardoSSP joined #gluster
01:07 jag3773 joined #gluster
01:09 jules_ joined #gluster
01:12 bulde joined #gluster
01:16 kevein joined #gluster
01:19 lh joined #gluster
01:19 lh joined #gluster
01:24 luckybambu joined #gluster
01:29 lh joined #gluster
01:31 bala joined #gluster
01:35 neofob joined #gluster
01:37 lh joined #gluster
01:47 rwheeler joined #gluster
02:05 avati joined #gluster
02:10 jules_ joined #gluster
02:29 avati__ joined #gluster
02:32 jdarcy joined #gluster
02:39 hagarth joined #gluster
02:47 bulde joined #gluster
02:51 bharata joined #gluster
02:53 jdarcy joined #gluster
02:58 lh joined #gluster
02:58 lh joined #gluster
03:07 stat1x joined #gluster
03:10 vshankar joined #gluster
03:21 aravindavk joined #gluster
03:29 dr3amc0d3r2 joined #gluster
03:32 brunoleon__ joined #gluster
03:41 lh joined #gluster
03:44 bala joined #gluster
03:47 bulde joined #gluster
04:11 rastar joined #gluster
04:16 bala joined #gluster
04:18 hagarth joined #gluster
04:20 mohankumar joined #gluster
04:24 timothy joined #gluster
04:24 21WAADH0T joined #gluster
04:25 rastar joined #gluster
04:27 jskinner joined #gluster
04:30 avati joined #gluster
04:30 rastar1 joined #gluster
04:34 sripathi joined #gluster
04:34 sgowda joined #gluster
04:35 lalatenduM joined #gluster
04:37 bulde joined #gluster
04:38 bala joined #gluster
04:39 vpshastry joined #gluster
04:48 deepakcs joined #gluster
04:56 badone joined #gluster
05:00 dr3amc0d3r2 joined #gluster
05:03 dr3amc0d3r3 joined #gluster
05:04 shylesh joined #gluster
05:12 test joined #gluster
05:18 bala joined #gluster
05:19 pai joined #gluster
05:23 satheesh joined #gluster
05:26 avati__ joined #gluster
05:37 dr3amc0d3r joined #gluster
05:41 raghu joined #gluster
05:47 saurabh joined #gluster
05:55 yinyin joined #gluster
05:57 bulde joined #gluster
06:02 sripathi1 joined #gluster
06:07 bala joined #gluster
06:08 vimal joined #gluster
06:23 ricky-ticky joined #gluster
06:30 rastar joined #gluster
06:46 guigui3 joined #gluster
06:46 Rocky_ joined #gluster
06:58 stickyboy joined #gluster
07:04 18WAC5I3D joined #gluster
07:14 stickyboy joined #gluster
07:15 sripathi joined #gluster
07:16 msmith_ joined #gluster
07:19 ekuric joined #gluster
07:24 dr3amc0d3r2 joined #gluster
07:26 glusterbot New news from newglusterbugs: [Bug 928204] [rpc-transport.c:174:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket <http://goo.gl/u4g2d>
07:35 sripathi joined #gluster
07:53 dobber_ joined #gluster
08:00 camel1cz joined #gluster
08:01 andreask joined #gluster
08:08 DWSR joined #gluster
08:08 DWSR joined #gluster
08:09 hybrid5121 joined #gluster
08:11 bharata Hmm looks like I can't rename a volume ?
08:13 adria joined #gluster
08:20 camel1cz joined #gluster
08:22 camel1cz left #gluster
08:23 timothy joined #gluster
08:30 cw joined #gluster
08:41 ProT-0-TypE joined #gluster
08:46 shylesh joined #gluster
08:51 hagarth bharata: there's no support for renaming a volume - do you have a use case for that?
08:52 stickyboy Spelling mistakes? :)
08:55 bleon_ joined #gluster
08:57 hagarth stickyboy: how about delete followed by a create? :)
09:09 bharata hagarth, just found a need for this today, created a volume of the same name in two nodes and then thought of making them peers at which point gluster complained. Then had to delete one of the volumes
09:10 hagarth bharata: we have also been thinking about disabling peer probe if the server being probed has volumes already defined in it.
09:10 bharata hagarth, I found out that gluster already enforces that - it fails the probing in such a case
09:11 bharata hagarth, you mean _any_ volumes ?
09:12 hagarth bharata: yeah, the server being probed should have 0 volumes in it.
09:12 bharata hagarth, ok
09:17 guigui1 joined #gluster
09:20 rastar1 joined #gluster
09:27 rastar joined #gluster
09:49 hagarth joined #gluster
09:55 bala joined #gluster
10:01 jskinner joined #gluster
10:03 manik joined #gluster
10:10 Staples84 joined #gluster
10:14 rotbeard joined #gluster
10:20 test_ joined #gluster
10:22 camel1cz joined #gluster
10:23 camel1cz left #gluster
10:27 bala joined #gluster
10:36 pai joined #gluster
10:38 Rocky_ joined #gluster
10:45 camel1cz joined #gluster
10:45 ujjain joined #gluster
10:47 jskinner joined #gluster
10:56 camel1cz left #gluster
11:01 adria left #gluster
11:01 rastar joined #gluster
11:04 manik joined #gluster
11:05 andreask joined #gluster
11:09 vpshastry joined #gluster
11:33 jskinner joined #gluster
11:34 guigui3 joined #gluster
11:38 maxiepax joined #gluster
11:40 satheesh1 joined #gluster
11:40 vpshastry joined #gluster
11:43 dustint joined #gluster
11:50 timothy joined #gluster
11:54 mgebbe_ joined #gluster
11:56 plarsen joined #gluster
12:00 rastar1 joined #gluster
12:10 jclift joined #gluster
12:14 robo joined #gluster
12:16 jmara joined #gluster
12:19 vpshastry joined #gluster
12:20 jskinner joined #gluster
12:21 jmara joined #gluster
12:22 layer3switch joined #gluster
12:30 jmara joined #gluster
12:30 balunasj joined #gluster
12:35 vpshastry joined #gluster
12:37 plarsen joined #gluster
12:47 hagarth joined #gluster
12:48 vpshastry1 joined #gluster
12:49 bennyturns joined #gluster
12:53 dorkyspice joined #gluster
12:55 lalatenduM joined #gluster
12:55 rwheeler joined #gluster
12:55 vpshastry joined #gluster
13:01 bet_ joined #gluster
13:01 vpshastry1 joined #gluster
13:01 mohankumar joined #gluster
13:01 nueces joined #gluster
13:02 manik joined #gluster
13:05 jskinner joined #gluster
13:06 timothy joined #gluster
13:08 jskinner joined #gluster
13:16 16WAAN37O joined #gluster
13:21 hagarth joined #gluster
13:21 vpshastry joined #gluster
13:28 glusterbot New news from newglusterbugs: [Bug 928341] Enable write-behind for gluster nfs server process <http://goo.gl/Z4VeZ>
13:28 x4rlos Anyone have any best-practice suggestions for deleting a large number of files in a gluster volume once they are older than a given time period?
13:30 bulde joined #gluster
13:37 joeto joined #gluster
13:38 NuxRo x4rlos: delete files older than x period?
13:38 NuxRo `find` should be able to do this
13:40 manik joined #gluster
13:42 manik joined #gluster
13:45 stickyboy x4rlos: `find /mnt/gluster -newermt "2 weeks ago" -delete`     This is nice if your `find` supports "-newermt"
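A minimal sketch for the age-based cleanup being discussed, assuming GNU find and a hypothetical mount point /mnt/gluster/wal; note that -newermt "2 weeks ago" matches files modified within the last two weeks, so deleting files older than a cutoff needs the test negated or an age test such as -mtime. Run it against the mounted volume, not the brick:

    # dry run: list WAL files older than roughly four weeks
    find /mnt/gluster/wal -type f -mtime +28 -print
    # then delete them
    find /mnt/gluster/wal -type f -mtime +28 -delete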
13:47 zykure|uni joined #gluster
13:50 zykure|uni hi
13:50 glusterbot zykure|uni: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:50 manik1 joined #gluster
13:50 zykure|uni i'm trying to set up a system with one host, several clients, clients connected both via 1Gb ethernet and InfiniBand
13:50 manik1 joined #gluster
13:51 zykure|uni i came to the conclusion that it's not possible to use both tcp/rdma as transport type in gluster 3.3.0, is that correct?
13:51 bdperkin joined #gluster
13:51 shylesh joined #gluster
13:52 zykure|uni if so, i would use ipoib to get it working; but is it possible to have both ethernet and infiniband clients with this strategy?
13:52 x4rlos NuxRo: stickyboy: Thanks for the reply - but my concern is: won't that method take a long time on gluster (and also delete the .gluster folder) if run from a machine that has the gluster mount points mounted?
13:52 zykure|uni btw, hey edong23 .. you are here too? :D
13:53 x4rlos (it's actually for a database that will write all the WALs to this mountpoint - I intend to keep 4 weeks worth of data, rotating weekly).
13:53 NuxRo x4rlos: yes, true, it might take a while
13:53 NuxRo the .gluster dir is only visible on the brick directly, i hope you are not messing with the bricks but with the mounted volume
13:54 NuxRo you could mount the volume via nfs and delete from there, it will probably be faster
13:54 manik joined #gluster
13:54 x4rlos NuxRo: Ah, i didn't know about the client not seeing the .gluster folder (learn something new)  - i was however just trying to list a few things on both the client and the server. So i may have seen that pop up.
13:55 samppah zykure|uni: iirc, rdma is broken in 3.3.0.. you might want to try with 3.3.1
13:55 samppah it should be possible to use tcp and rdma at the same time
13:55 manik joined #gluster
13:55 x4rlos NuxRo: It's possible then to just let it run its presumably lengthy course from a find :-/
13:56 NuxRo anything is possible :)
13:56 NuxRo how many files are we talking about?
13:56 aliguori joined #gluster
13:56 NuxRo I'm currently testing deletion and `rm -rf 7million_small_files` has been running for 24h now :)
13:57 x4rlos NuxRo: Not that many :-)
13:57 NuxRo i need to test this via nfs as well and see how big the improvements are
13:57 NuxRo then it might be acceptable to let find do its job
13:57 x4rlos It's a fair few 16MB wal files though, from the last few months.
13:57 NuxRo you should test this on some unimportant directory first, just to make sure it deletes what it's supposed to :)
13:58 puebele joined #gluster
13:58 x4rlos I have a test box at home with considerably fewer files (on vms on the same machine though).
13:58 x4rlos Its only backup files i guess, so not the end of the world if it goes grazy-town.
13:58 x4rlos s/grazy-town/crazytown
13:59 bdperkin joined #gluster
13:59 NuxRo then go for it
14:01 theron joined #gluster
14:02 x4rlos cheers :-)
14:04 rcheleguini joined #gluster
14:05 zykure|uni samppah: argh, i'm using 3.3.1 actually .. but i didn't get it to work yesterday
14:05 x4rlos NuxRo: Interestingly enough - I may look for my "this is my latest file i need" file and then simply execute a delete of files older than that one
14:06 zykure|uni it was still using TCP when i used both transport types
14:06 x4rlos not sure if it will make the same amount of calls though. (/me talking out loud )
14:06 glusterbot New news from resolvedglusterbugs: [Bug 928343] Enable write-behind for gluster nfs server process <http://goo.gl/KCqTw>
14:06 zykure|uni and with only RDMA it fails to mount the volume
14:07 zykure|uni i can't find any good documentation on this topic :(
14:07 hateya_ joined #gluster
14:09 NuxRo x4rlos: don't know what to say. anyhow, I'd first run find without the -delete switch and see what gets caught :)
14:09 x4rlos NuxRo: Yup, am playing now. :-)
14:09 NuxRo good luck
14:09 * NuxRo - snack time, &
14:14 gmcwhistler joined #gluster
14:16 puebele joined #gluster
14:22 zetheroo joined #gluster
14:23 zetheroo ok, I performed a test of the gluster setup here... I shut down one of the two KVM hosts in the gluster, and the VMs on the KVM host that was not shut off were all still running. The gluster volume was not accessible but the brick was - which is fine.
14:24 zetheroo what was not fine, however, was that the running VMs suddenly seemed to hit issues with their own filesystems becoming read-only
14:24 zetheroo after a reboot and manual fsck on each VM they were running ok again
14:25 samppah zetheroo: have you configured network.ping-timeout?
14:26 samppah it's 42 secs by default and it may cause problems
14:27 vpshastry joined #gluster
14:28 zetheroo what does that do?
14:29 jskinner joined #gluster
14:30 piotrektt_ joined #gluster
14:36 vpshastry joined #gluster
14:38 samppah zetheroo: during that time it's waiting for a connection to the server and blocking io.. that may be the reason why the vms ended up with their filesystems read-only
14:40 samppah bbl, i have to take the dogs for a walk :)
14:41 zetheroo should the number of seconds be diminished?
14:41 zetheroo where is this configured?
14:43 semiosis zetheroo: gluster volume set
14:43 semiosis ,,(rtfm)
14:43 glusterbot Read the fairly-adequate manual at http://goo.gl/E3Jis
14:43 semiosis zetheroo: ,,(pasteinfo)
14:43 glusterbot zetheroo: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
14:44 guigui3 joined #gluster
14:45 zetheroo http://paste.ubuntu.com/5652462/
14:45 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:47 semiosis no quorum? hmm
14:47 zetheroo :) forgot about that
14:54 puebele1 joined #gluster
14:54 hateya joined #gluster
14:55 lh joined #gluster
14:55 daMaestro joined #gluster
14:56 mohankumar joined #gluster
15:05 robos joined #gluster
15:07 zetheroo is there a way to check what the current setting is for network.ping-timeout
15:07 zetheroo ?
15:09 semiosis if it's not listed on the gluster volume info output, then it's the default, which can usually be found with 'gluster volume set help'
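A minimal sketch of checking and changing the timeout discussed here, assuming a hypothetical volume name myvol (10 seconds is only an example value, not a recommendation):

    # the built-in default (42 seconds) shows up in the option help
    gluster volume set help | grep -A 2 ping-timeout
    # lower it for the volume
    gluster volume set myvol network.ping-timeout 10
    # once set, the non-default value is listed in volume info
    gluster volume info myvol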
15:12 zetheroo hmmm .. not finding it with that command ...
15:13 zetheroo http://paste.ubuntu.com/5652520/
15:13 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
15:14 dumbda joined #gluster
15:18 vpshastry joined #gluster
15:19 semiosis hmm, see ,,(options) maybe
15:19 glusterbot http://goo.gl/dPFAf
15:19 semiosis going afk, traveling, will be in & out over the next few days.
15:21 jclift semiosis: Have fun?
15:24 puebele1 joined #gluster
15:25 rastar joined #gluster
15:25 zetheroo I thought you could only have 2 bricks in a replica 2 ... here someone is using 4 bricks
15:26 zetheroo gluster volume create glustervmstore replica 2 10.1.1.11:/vmstore 10.1.1.12:/vmstore 10.1.1.13:/vmstore 10.1.1.14:/vmstore
15:26 zetheroo how does that work?
15:28 elyograg zetheroo: that creates a distributed volume consisting of two replica sets.
15:28 zetheroo and regarding quorum ... it says in the documentation that "This feature when enabled kills the bricks in the volume that do not meet the quorum because of network splits/outages."
15:29 zetheroo ok but how does it decide which are the sets?
15:29 zetheroo is 11 and 12 a set, and 13 and 14 another set ?
15:29 elyograg zetheroo: with quorum enabled, the volume goes read only if you don't have a majority of replicas available.  this means that on a replica 2 volume, if one of them goes away, it goes read only.  there is a feature coming to make quorum useful for replica 2.
15:29 elyograg 11 and 12 are a set.  with replica n, every n bricks (in order) are made into a set.
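To make the grouping explicit, the four-brick command quoted above creates two replica pairs; bricks are paired in the order they are listed:

    # pair 1: 10.1.1.11 + 10.1.1.12, pair 2: 10.1.1.13 + 10.1.1.14
    gluster volume create glustervmstore replica 2 \
        10.1.1.11:/vmstore 10.1.1.12:/vmstore \
        10.1.1.13:/vmstore 10.1.1.14:/vmstore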
15:29 ricky-ticky joined #gluster
15:29 zetheroo ok
15:30 zetheroo well it seems like that is already happening in our test case, but I never enabled quorum manually
15:31 zetheroo is it automatic?
15:32 Chiku|dc are all volume options in Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf ?
15:32 Chiku|dc I don't find nfs.mem-factor
15:33 elyograg zetheroo: I am pretty sure that quorum must be specifically enabled.
15:33 zaitcev joined #gluster
15:34 zetheroo ok, and once the brick goes into this read-only mode ... how do you get it back to read and write ?
15:34 zetheroo does this take a remount of the brick?
15:34 dumbda what if i currently have only 1 server, with 2 bricks and 2 distributed volumes on those bricks, but now i want to add another replica of those 2 volumes on another server?
15:34 dumbda What is the best way to do it?
15:35 dumbda Right now the volumes are configured as distributed only, but i want distributed-replicated.
15:37 elyograg zetheroo: It's not the brick that goes read-only, it's the volume.  once you restore a majority of bricks, the volume would automatically go back to read-write.
15:38 zetheroo ok
15:39 elyograg dumbda: Except for testing or proof-of-concept, it's a bad idea to have replicas be on the same server.
15:39 zetheroo seems that in our test case the brick still running went into this read-only state, but when the other brick was back online the volume did not go back to being read/write ... or something of that nature
15:39 dumbda no, the replica will be on the new server.
15:40 dumbda Right now i just have one server, but i built a new one to which i want to replicate those 2 volumes.
15:40 manik joined #gluster
15:40 elyograg dumbda: so right now you have a volume with no replication, right?
15:40 dumbda yeah on server 1.
15:41 zykure|uni any idea why peer probing fails on my two-host system? it just says "rpc-service: Failed authentication" in the log on the peer-probed host (server2)
15:41 dumbda i built server 2 and did all preparations, opened ports, installed gluster-server.
15:41 zykure|uni port 24007 seems to be accessible
15:41 zetheroo elyograg: so to enable quorum one just has to do this command? gluster volume set <volname> cluster.server-quorum-type none/server
15:42 zetheroo and this one too I suppose?
15:42 zetheroo gluster volume set all cluster.server-quorum-ratio <percentage%>
15:42 elyograg I believe that if you do an add-brick with replica 2, listing the new bricks so they match up to the existing bricks in the volume in the right order, it will add them as replicas.  I would wait for confirmation from an expert, though.
15:43 elyograg zetheroo: that looks right.  I only have a replica 2 volume, so I haven't used quorum.  Anxiously waiting for 3.4.1 so I can.
15:43 zetheroo I also have only a replica 2 volume ... is quorum not needed in this case?
15:43 puebele1 joined #gluster
15:44 zetheroo I am just trying to figure out why the VMs running on the brick that was not shut down were suddenly all running on read-only filesystems ...
15:45 elyograg zetheroo: if you want it to go read only in the event of a server failure, then you want quorum.  if you want it to still work, then you don't want quorum until bug 914904 is fixed and available.
15:46 glusterbot Bug http://goo.gl/O4eim medium, unspecified, ---, dlehman, NEW , RFE: more MD RAID status properties
15:46 elyograg oops. wrong one.  bug 914804
15:46 glusterbot Bug http://goo.gl/NWJpx medium, unspecified, ---, jdarcy, POST , [FEAT] Implement volume-specific quorum
15:46 zetheroo ok, but the above occurred without having enabled quorum ...
15:47 elyograg I'm told that it won't make it into 3.4.0, but 3.4.1 should have it.
15:47 elyograg I don't know what's wrong there.
15:47 zetheroo ok
15:47 zetheroo I had to reboot the VM's and perform a manual fsck on each one ...
15:47 dumbda okay. so right now i have /data/export and /data/export2 on fs. Gluster volumes are test1 -> /data/export and test2 -> /data/export2.
15:47 zetheroo and then all was well it seems ..
15:48 dumbda So now on the new server's filesystem I should create the same empty folders under /data ?
15:49 dumbda They will be empty, as the current folders have around 80G between them.
15:49 rastar1 joined #gluster
15:49 dumbda Should i rsync the content first to the 2nd server?
15:50 elyograg dumbda: I can't tell what's a brick and what's a volume from that.  when you add replica bricks, gluster should automatically copy the data.
15:53 xiu joined #gluster
15:53 xiu hi
15:53 glusterbot xiu: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:53 elyograg dumbda: if your purely distributed volume has server1:/bricks/b1 and server1:/bricks/b2, then I think you'd want to do add-brick with "replica2 server2:/bricks/b1 server2:/bricks/b2" ... but I don't have enough experience to know for sure whether this will work.
15:53 elyograg s/replica2/replica 2/
15:53 bala joined #gluster
15:53 xiu i keep getting 0-ftp/inode: no dentry for non-root inode
15:53 glusterbot elyograg: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
15:54 xiu is this a known problem ?
15:56 dumbda but i thought replica 2 requires 4 servers
15:56 dumbda or am i mistaken?
15:57 elyograg replica n would imply that you have some number of servers that is a multiple of n.  i have 2 servers in my volume, each of those servers has four bricks.
15:58 elyograg actually, each server has 8 bricks.
15:58 elyograg 8 bricks on each server, 5 tb each.  the whole volume is 40TB.
15:59 dumbda okay.
15:59 dumbda so if i have 2 servers n will be 2?
16:00 elyograg replica is something you explicitly state when you create the volume or add bricks.
16:00 elyograg if you have replica 2 and it's set up right, you will have 2 servers, or 4 servers, or 6 servers, etc.
16:01 dumbda oh i see.
16:03 xiu it seems that i have this problem: http://permalink.gmane.org/gmane.comp.file-systems.gluster.user/3594 but i can't find any solution (i'm running 3.2.6)
16:03 glusterbot <http://goo.gl/k2rE5> (at permalink.gmane.org)
16:03 elyograg it's actually possible to set up a replica 2 volume with 3 servers where each server has 2 bricks (or 4, or 6, etc), but it complicates things just a little bit, and I wouldn't want to deal with it ... but it works perfectly.
16:04 bugs_ joined #gluster
16:04 manik joined #gluster
16:05 hagarth joined #gluster
16:06 dumbda ok, so since i already have 2 bricks on one server with 2 volumes, should i schedule downtime for that existing server, stop glusterd, remove the existing distributed volumes, and recreate them as distributed-replicated, adding a new server to them?
16:09 elyograg If you're using the newest version (3.3.1) then I know that taking things down won't be required.  the newest version is probably required to change replica counts on the fly, too.  earlier versions I have no knowledge about, having never used them.
16:10 dumbda I just want to make sure that if i recreate the config, i will not lose any data on the file system itself.
16:10 dumbda Right?
16:11 dumbda It is all just metadata that glusterfs puts on top of ext4 anyway.
16:13 elyograg The way I understand it, if you delete your volume, you can recreate it without losing data, as long as the bricks with the data are the "left-hand" bricks - those listed first in each replica set.
16:15 dumbda well right now on the current setup i do not really have any replica set.
16:15 dumbda it is only distributed from 1 server.
16:17 elyograg I know.  So if you took that route, you'd have to put the existing bricks in as the left-hand bricks.  what version are you running?
16:19 zetheroo left #gluster
16:19 dumbda well that is the thing.
16:19 dumbda i am running on existing server 3.2.5
16:20 dumbda so i need to upgrade it to 3.3 first.
16:20 elyograg I think 3.2 doesn't have the ability to add a replica on the fly, so the delete/recreate might be the only way to go, unless you want to upgrade, but an upgrade to 3.3 from 3.2 involves downtime anyway.
16:22 dumbda yeah.
16:22 elyograg the upgrade is probably the safer route, of course.
16:22 dumbda I actually would like to upgrade for the future convenience.
16:23 dumbda and then once i upgrade, i start the glusterd daemon and probe the second server.
16:23 dumbda and add the empty bricks on the second server to the existing volumes?
16:24 dumbda will they change from distributed type to distributed-replicated automatically?
16:24 elyograg oh, someone mentioned a problem with probing earlier.  for that person: do you have the firewall enabled on the server? it will either need to be turned off or reconfigured for gluster.
16:24 dumbda I just can't find anything in the docs,
16:24 dumbda on how to replicate existing distributed volumes.
16:25 elyograg dumbda: If adding replica bricks to an existing distributed volume is supported (and I think it is) then it should change type automatically.
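A sketch of the add-brick route elyograg describes, using dumbda's volume names and brick paths from above plus hypothetical empty bricks on server2; this assumes glusterfs 3.3+ and, as elyograg cautions, should be verified on a throwaway volume first:

    # pair each existing brick with a new empty one, raising the replica count to 2
    gluster volume add-brick test1 replica 2 server2:/data/export
    gluster volume add-brick test2 replica 2 server2:/data/export2
    # then let self-heal copy the existing data onto the new bricks
    gluster volume heal test1 full
    gluster volume heal test2 full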
16:25 dumbda I already configured the firewall for those ports
16:25 dumbda gluster uses.
16:26 dumbda okay i will try to do that tonight.
16:27 dumbda Thank you for the help.
16:27 elyograg good luck.  It might be a good idea to try to get verification of the ability and the command required before actually trying it.  I'd hate to have given you advice that wrecks your volume.
16:29 dumbda Well it seems, no one is here . -))
16:29 dumbda but you.
16:30 elyograg it does seem that way.  that's a little unusual for this place. :)
16:33 manik joined #gluster
16:36 andreask joined #gluster
16:43 Mo____ joined #gluster
16:46 jclift joined #gluster
16:50 edong23 <zykure|uni> btw, hey edong23 .. you are here too? :D
16:50 edong23 hi
16:50 glusterbot edong23: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:55 jclift_ joined #gluster
16:55 verdurin_ joined #gluster
17:00 zykure|uni if anyone's interested, my peer probe problems resulted from using another version of glusterfs on the peer node :]
17:00 jdarcy joined #gluster
17:03 x4rlos zykure|uni: That's common. :-) Did it not say clearly in the logs?
17:04 x4rlos (i don't mean to ask sarcastically)
17:04 zykure|uni dunno, i was pretty sure that i had installed it :)
17:04 zykure|uni i guess that's what happens if you get distracted in the middle of something ;)
17:05 x4rlos zykure|uni: hehe, i did the same thing a while back. Think i asked if someone could implement a check when you try to connect, as there are quite a few versions floating about.
17:06 zykure|uni hm yeah that would be helpful
17:06 zykure|uni in fact, better error handling would be helpful in general :)
17:07 zykure|uni uhm, when i try to create a striped volume it says "Host server2 not connected"
17:07 andreask joined #gluster
17:07 zykure|uni server2 being the peer node
17:07 kincl joined #gluster
17:08 x4rlos try using the ip?
17:08 zykure|uni and "peer status" says "Peer Rejected (Connected)"
17:08 zykure|uni with ip it's the same
17:10 zykure|uni looks like a firewall issue?
17:11 zykure|uni okay that's interesting, on server2 i can even see the volumes on server1 (volume info)
17:12 zykure|uni but it's not consistent with the info on server1 .. wtf
17:32 zykure|uni still not working, i purged & reinstalled glusterfs on server2 (deleted configs too)
17:32 zykure|uni and removed peer info on server1
17:32 zykure|uni it still says "Peer Rejected (Connected)" for server2 after "peer probe server2" ends with success
17:41 plarsen joined #gluster
17:47 verdurin_ joined #gluster
17:47 JoeJulian "[09:30] <dumbda> Well it seems, no one is here . -))" seems almost rude since elyograg was talking to him. elyograg is someone.
17:50 JoeJulian xiu: no dentry means no directory entry. Like the directory does not exist. Where are you getting that error? Have you looked in your client and/or brick logs for clues?
17:51 JoeJulian zyk|off: RDMA failing to mount is a known issue. There has been a bug reported and a patch submitted. It's not yet in a 3.3 release, but may be in the 3.4 alpha.
17:52 JoeJulian ,,(meh), I'm not scrolling back any further..
17:52 glusterbot I'm not happy about it either
17:54 nick5 joined #gluster
18:03 lalatenduM joined #gluster
18:03 zykure JoeJulian: okay thank you! any idea when 3.4 is going to be released?
18:03 verdurin_ joined #gluster
18:03 zykure i guess for now i'll stick with tcp then
18:05 JoeJulian Not really. When it passes review I guess. I keep seeing new bugs reported against it. I can't test it myself yet.
18:06 JoeJulian If you're not in production, test 3.4. You can get it at http://www.gluster.org/download/
18:06 glusterbot Title: Download | Gluster Community Website (at www.gluster.org)
18:07 zykure well it's intended to be a fast "scratch" fs for temp data, and users will know that nobody guarantees the data will be there next morning
18:07 zykure but using alpha is a bit too much ;)
18:08 zykure as for my other problem with the rejected peers, i just can't get it to work, and google isn't helping either :'(
18:09 ujjain2 joined #gluster
18:14 JoeJulian The glusterd log?
18:15 JoeJulian For that matter, have you just tried restarting all your glusterd?
18:18 zykure yeah i did
18:18 zykure glusterd, glusterfs
18:18 zykure on both hosts
18:18 plarsen joined #gluster
18:18 nick5 hello.  anyone know why a repair/heal wouldn't work until the gluster client umounts the filesystem?
18:19 nick5 using gluster 3.3.1, with 2 bricks in a replicate setup, and 1 brick loses networking while the client writes a file out.
18:21 JoeJulian zykure: you didn't respond to the question about the glusterd log. Did you look in there to see if there's anything informative about your peer status?
18:21 zykure oh
18:21 zykure yes i looked, but i didn't find anything
18:21 JoeJulian nick5: Sounds vaguely familiar... I filed a bug about that once, but I thought it was fixed before 3.3.1
18:21 zykure but let me check again
18:22 nick5 JoeJulian: do you have info on that bug?  it's driving me crazy and preventing me from moving forward.
18:22 JoeJulian searching... please wait...
18:23 JoeJulian ^ reminds me of  computers in the 80s.
18:24 JoeJulian bug 821056
18:24 glusterbot Bug http://goo.gl/FXfOl high, unspecified, 3.4.0, pkarampu, ON_QA , file does not remain in sync after self-heal
18:24 JoeJulian So it didn't make it into 3.3
18:24 zykure ah i think i looked at the wrong log .. but it's only info and a few warnings
18:24 zykure on server2
18:25 JoeJulian fpaste it
18:27 zykure JoeJulian: http://pastebin.com/6DHedEYi
18:27 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
18:27 zykure :(
18:27 zykure i was just going to ask if fpaste is better ^^
18:27 JoeJulian Plus pastebin give me this little tiny window to look at.
18:28 zykure there you go: http://fpaste.org/JPte/
18:28 glusterbot Title: Viewing Paste #288045 (at fpaste.org)
18:28 nick5 Thanks JoeJulian.
18:28 JoeJulian And if you use rpm based installs... "yum install fpaste" so you can just "fpaste /var/log/glusterfs/*glusterd.vol.log"
18:29 zykure even better, thanks! i love cli tools for these kind of things
18:29 zykure ompldr as well :)
18:30 zykure btw i don't understand the error in the log .. i can remove that socket manually
18:30 zykure but it doesn't seem to matter
18:34 JoeJulian Gah, damned salesmen... I hate when they want to spend 20 minutes to tell me what they're selling.
18:35 JoeJulian Received RJT from uuid: aaa605f5-ebf1-4314-a3d0-63cc09fc6f82, host: 10.0.2.1
18:35 JoeJulian So that says that we need to look at the glusterd log from 10.0.2.1
18:37 zykure okay
18:38 dustint joined #gluster
18:38 zykure JoeJulian: http://fpaste.org/0w2J/
18:38 glusterbot Title: Viewing Paste #288049 by zykure (at fpaste.org)
18:39 zykure it says checksums differ for some other volume, but i don't understand that
18:39 zykure that volume exists only on server1 (10.0.2.1)
18:40 JoeJulian So that says that the data in /var/lib/glusterd/vols is wrong on 10.0.1.2. The vol information for every volume in the peer group resides on every peer.
18:41 zykure hmm how can that happen?
18:42 JoeJulian Two ways of fixing that. On 10.0.1.2 stop glusterd. "rm -rf /var/lib/glusterd/vols/*". start glusterd. "gluster volume sync all". Or stop glusterd. rsync server1:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/. Start glusterd.
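A sketch of the first repair path, assuming server2 is the rejected peer, server1 holds the good copy, and a sysvinit-style service script; in the exchange that follows, the restart alone was enough to resync the volume definitions:

    # on server2 (the peer showing "Peer Rejected"):
    service glusterd stop
    rm -rf /var/lib/glusterd/vols/*
    service glusterd start
    # if the definitions do not reappear on their own, pull them from server1
    gluster volume sync server1 all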
18:42 zykure i removed /etc/glusterd/* on server2 before
18:42 JoeJulian oops
18:42 zykure okay i get it, will try that
18:42 JoeJulian That must be before you reinstalled the package though, or glusterd wouldn't have even started.
18:43 zykure yes
18:43 zykure remove gluster, clean config, reinstall is what i did
18:44 JoeJulian /etc is config, but not state. State is /var/lib
18:44 zykure ah okay
18:45 zykure well "gluster volume sync all" says: please delete all the volumes before full sync
18:45 zykure it seems to recreate them once i start glusterd
18:45 JoeJulian It may have already done it.
18:45 JoeJulian Check peer status
18:45 zykure damn you're right :D
18:45 zykure Peer in Cluster (Connected)
18:45 JoeJulian Woo-hoo!
18:46 zykure let me just check if i can create the volume now :)
18:46 JoeJulian So how it happened is this server was probably down when you created that other volume.
18:47 JoeJulian .. or when you changed the volume. This is precisely why jdarcy started a thread on moving configuration management out of glusterd and into something that already exists (like Zookeeper).
18:47 zykure yes that could be, there were some network issues before
18:47 zykure maybe related to the hostnames not being present in /etc/hosts, could that be?
18:48 JoeJulian Probably.
18:48 zykure but anyway, now it works (volume is running now) and i know how to fix it if it happens again
18:48 zykure thank you for your help, very much appreciated! :)
18:48 JoeJulian You're welcome.
18:50 lpabon joined #gluster
18:53 zykure JoeJulian: i have another question actually ... since i won't be using RDMA for now, i need to fall back to TCP (with IPoIB)
18:53 zykure but there are also clients who use just ethernet
18:53 zykure is that possible? using TCP with two different interfaces?
18:55 JoeJulian Yes. The gluster services listen on 0.0.0.0 so just make sure the clients resolve the hostnames to the address you wish them to.
18:58 larsks joined #gluster
18:59 zykure okay, so it doesn't matter what i put in the brick address?
19:00 zykure i.e. i could use "server_eth:/mybrick" or "server_ib:/mybrick"
19:00 zykure from the clients it's pretty straightforward how to do it, just use "server_eth" for mounting (i guess)
19:02 stickyboy joined #gluster
19:04 JoeJulian Just use "server:/mybrick" and an ib host will use either split-horizon dns to resolve server to the ib address, or /etc/hosts. The ethernet clients should resolve server to be the ethernet address.
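A sketch of the split-horizon /etc/hosts approach JoeJulian describes, with purely hypothetical addresses and volume name; each client resolves the same server name to whichever interface it should use:

    # /etc/hosts on an IPoIB client
    192.168.100.10   server1
    # /etc/hosts on an Ethernet-only client
    192.168.1.10     server1
    # both mount the volume the same way
    mount -t glusterfs server1:/myvol /mnt/myvol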
19:07 zykure hm okay, i know nothing about IPoIB tbh .. i'll just check it out and see what happens :)
19:08 andreask joined #gluster
19:19 nick5 JoeJulian:  I read that bug report and i'm not quite sure that's what i'm seeing.  Perhaps i don't understand it properly.
19:19 nick5 in looking at the mailing list archives, there's one that documents the exact problem i'm seeing but in 3.3.1 not 3.2.
19:19 nick5 http://www.gluster.org/pipermail/gluster-users/2011-June/008056.html
19:19 glusterbot <http://goo.gl/qbtj3> (at www.gluster.org)
19:21 nick5 i read through that thread, and was trying to figure out some timeout options, but the 3.3.1 vol options seem to have changed (even from 3.3.0).
19:21 nick5 any ideas of what if any options might help?
19:21 JoeJulian Gah! I hate when lines don't wrap.
19:22 nick5 try this one: http://www.gluster.org/pipermail/gluster-users/2011-June/008059.html
19:22 glusterbot <http://goo.gl/j38l6> (at www.gluster.org)
19:22 nick5 the problem seems to be with locks?
19:23 disarone joined #gluster
19:26 msmith_ interesting.  was just sent a couple of RHSS docs that state that how we are trying to use gluster is not supported or recommended :/
19:27 zykure joined #gluster
19:32 JoeJulian They would tell me the same thing about how I run mysql on a volume.
19:36 stickyboy JoeJulian: :D
19:37 dustint joined #gluster
19:38 jiffe99 I have a gluster mount which I am re-exporting and importing via local nfs and I'm serving web content off of it.  Seems to work fine except after a while various scripts think directories aren't writable and files don't exist even though when I go and ls them it all looks fine
19:38 jiffe99 if I umount and remount the nfs mount it works again
19:39 JoeJulian Are you mounting from localhost?
19:39 jiffe99 the nfs mount is using 127.0.0.1
19:40 JoeJulian There's a known race condition with regard to kernel memory allocation. Doing NFS mounts from localhost is known to be a problem.
19:41 jiffe99 gotcha
19:41 JoeJulian The problem resides in the kernel and there's no workaround.
19:44 hateya joined #gluster
19:53 aliguori joined #gluster
20:02 msmith_ what about fuse mounting from localhost?
20:06 samppah msmith_: what about it? and how are you using glusterfs?
20:07 msmith_ was just asking if fuse mounting from localhost has the same issue as nfs mounting from localhost.  I wouldn't think it would, since fuse is userspace, but never hurts to ask.
20:07 msmith_ trying to use gluster for dovecot mail storage
20:08 samppah msmith_: mounting fuse from localhost should be fine
20:09 samppah at least it's a very popular way to use glusterfs and i haven't heard that there's anything really wrong with it
20:10 JoeJulian msmith_: Sorry, got called away. Yes, samppah's correct. With the exception being a fairly narrow window of redhat patched kernel versions, fuse mounting from localhost is fine.
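For contrast with the problematic localhost NFS re-export above, a FUSE mount from localhost might look like this in /etc/fstab (volume name and mount point are hypothetical):

    localhost:/mailvol  /srv/mail  glusterfs  defaults,_netdev  0 0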
20:11 samppah JoeJulian: can you tell more about redhat patched kernels?
20:12 JoeJulian @thp
20:12 glusterbot JoeJulian: There's an issue with khugepaged and it's interaction with userspace filesystems. Try echo never> /sys/kernel/mm/redhat_transparent_hugepage/enabled . See http://goo.gl/WUNSx for more information.
20:12 JoeJulian Ugh, that ddos attack today is annoying.
20:13 samppah ahh
20:14 JoeJulian lunch time... :D
20:18 robinr joined #gluster
20:19 robinr hi, after issuing the gluster volume heal command, I got:  E [afr-self-heald.c:1128:afr_start_crawl] Could not create the task for 0 ret -1. What are the likely causes ? Is a copy already running ? How to verify ?
20:27 plarsen joined #gluster
20:27 jdarcy joined #gluster
21:02 phox joined #gluster
21:13 bronaugh joined #gluster
21:14 bronaugh so, got a bit of a problem. I'd -like- glusterfs to be accessible from a single machine, but with multiple IP addresses.
21:14 bronaugh however, it seems that each brick description consists of a single hostname and a path.
21:14 bronaugh this would seem to preclude this configuration. am I missing anything?
21:15 phox and actually, in the same case, it breaks having multiple transport protocols configured, unless clients on the different fabrics see different DNS... or so it seems
21:16 jag3773 joined #gluster
21:20 JoeJulian bronaugh: split-horizon dns
21:20 phox JoeJulian: that's kind of a broken solution
21:20 JoeJulian Why?
21:20 phox say a client blows its IB card and needs to fall back to TCP.  not a very functional situation there.
21:21 phox I mean, I can see why it has this sort of behaviour, as the brick isn't necessarily on the host one tried to mount the volume from in the first place
21:22 phox I guess the more logical solution here would be to have different rdma and ip hostnames attached to bricks in multi-transport environments
21:24 JoeJulian my own poorly assembled thoughts on the subject involve the utilization of ptr records for mount prioritization. Could also be used for transport-specific identity... I keep meaning to find the time to compose an rfc about that, but have been way too busy between work and home lately.
21:24 phox heh
21:25 phox and then you have to spend time and energy dealing with everyone who has some illogical knee-jerk disagreement to a perfectly good "better than the gong show we have now" solution :)
21:25 JoeJulian Eh, that's fine with me. I throw stuff out there and if someone disagrees, let them write up their own proposal.
21:28 phox really though, to solve this I think the right thing to do is multiple A records plus actually having sane iface metrics, which doesn't always happen... I doubt our IPoIB ifaces have sufficiently low metrics to encourage the correct behaviour :)
21:31 dumbda joined #gluster
21:36 JoeJulian Sorry, phone call. I disagree on multiple A records for a few reasons.
21:37 JoeJulian Dynamically, I'd prefer to have a single host have a single name that represents it.
21:38 pmuller_ joined #gluster
21:38 JoeJulian CNAMEs that point to that A record for identification of a host by service name.
21:38 JoeJulian SRV records to define services. (probably could have worded the previous line better).
21:39 GLHMarmot joined #gluster
21:41 jag3773 joined #gluster
21:44 JoeJulian So a host's A record might be "belinda". A CNAME for belinda might be "brickhost1". A volume SRV record might be "_myvol1._tcp.domain.dom 600 IN 10 10 24007 brickhost1.domain.dom". Something similar could be done for bricks.
21:47 JoeJulian s/IN /IN SRV /
21:47 glusterbot What JoeJulian meant to say was: So a host's A record might be "belinda". A CNAME for belinda might be "brickhost1". A volume SRV record might be "_myvol1._tcp.domain.dom 600 IN SRV 10 10 24007 brickhost1.domain.dom". Something similar could be done for bricks.
21:48 JoeJulian Then, of course, you could have multiple SRV records. A mount could then just be "mount -t glusterfs myvol1 /mnt/foo".  A SRV record lookup would be done to get the mount hosts and you'd be off and running.
22:06 zykure hey that bot has some pretty nice features
22:09 JoeJulian :)
23:10 robinr joined #gluster
23:17 mtanner_ joined #gluster
23:19 Uguu joined #gluster
23:20 BSTR joined #gluster
23:22 zykure joined #gluster
23:22 jag3773 joined #gluster
23:23 kbsingh joined #gluster
23:24 fleducquede joined #gluster
23:33 bronaugh btw, fwiw, glusterfs is way better behaved than, say, nfs as a simple nfs replacement.
23:34 bronaugh nfs is a joke :/
23:40 bronaugh been using it and pounding on it at 300M/sec for a few days no.
23:40 bronaugh now*
23:40 bronaugh NFS falls over if you do that.
23:41 JoeJulian Good news. :)
23:41 bronaugh (it's been mounted for longer, just wasn't getting used much)
