IRC log for #gluster, 2013-03-05


All times shown according to UTC.

Time Nick Message
00:00 hagarth joined #gluster
00:02 dowillia joined #gluster
00:12 m0zes joined #gluster
00:18 hagarth joined #gluster
00:38 hagarth joined #gluster
00:46 hagarth joined #gluster
00:47 zaitcev joined #gluster
00:58 vex how difficult is it to move from a Replicated to a Distributed+Replicated gluster setup?
01:00 yinyin joined #gluster
01:02 yinyin_ joined #gluster
01:06 yinyin joined #gluster
01:06 hagarth joined #gluster
01:11 yinyin joined #gluster
01:16 purpleidea rnts: if you're going to reboot, but you've never done so properly before, don't make a change at the same time. First reboot normally and test. Then reboot and replace hardware. Change one variable at a time. Get it?
01:17 jdarcy joined #gluster
01:30 johnmark jdarcy: howdy
01:37 kevein joined #gluster
01:39 _pol joined #gluster
01:47 kevein joined #gluster
01:50 hagarth joined #gluster
01:58 jbrooks joined #gluster
02:10 yinyin joined #gluster
02:28 sahina joined #gluster
03:01 bulde joined #gluster
03:16 jdarcy joined #gluster
03:16 dowillia joined #gluster
03:20 jdarcy joined #gluster
03:27 sgowda joined #gluster
03:31 vshankar joined #gluster
03:33 anmol joined #gluster
03:33 hagarth joined #gluster
03:40 Humble joined #gluster
04:13 alex88 joined #gluster
04:16 fidevo joined #gluster
04:35 twx joined #gluster
04:38 jdarcy joined #gluster
04:40 deepakcs joined #gluster
04:50 sonne joined #gluster
04:52 sahina joined #gluster
04:52 glusterbot New news from newglusterbugs: [Bug 917901] Mismatch in calculation for quota directory <http://goo.gl/W23o9>
04:54 bala joined #gluster
04:59 lala_ joined #gluster
05:01 vpshastry joined #gluster
05:13 sripathi joined #gluster
05:20 mohankumar joined #gluster
05:22 kevein joined #gluster
05:29 dowillia joined #gluster
05:34 bala joined #gluster
05:34 vijaykumar joined #gluster
05:34 test_ joined #gluster
05:40 satheesh joined #gluster
05:43 sas joined #gluster
05:48 aravindavk joined #gluster
05:50 rastar joined #gluster
05:54 bharata joined #gluster
05:54 johnmark interesting - root-squashing for NFS: http://review.gluster.org/#change,4619
05:54 glusterbot Title: Gerrit Code Review (at review.gluster.org)
05:54 shylesh joined #gluster
06:00 zwu joined #gluster
06:02 dowillia joined #gluster
06:06 jdarcy joined #gluster
06:08 ramkrsna joined #gluster
06:08 ramkrsna joined #gluster
06:13 jdarcy joined #gluster
06:21 sripathi joined #gluster
06:32 jdarcy joined #gluster
06:49 ngoswami joined #gluster
06:53 misuzu joined #gluster
06:53 rgustafs joined #gluster
06:58 vpshastry joined #gluster
06:59 spai joined #gluster
07:00 ndevos joined #gluster
07:01 sripathi joined #gluster
07:03 Humble joined #gluster
07:08 sahina joined #gluster
07:13 jdarcy joined #gluster
07:16 guigui joined #gluster
07:17 dowillia joined #gluster
07:21 sripathi joined #gluster
07:22 jdarcy joined #gluster
07:22 jtux joined #gluster
07:22 _benoit_ joined #gluster
07:24 puebele joined #gluster
07:26 tryggvil joined #gluster
07:29 Humble joined #gluster
07:38 mooperd joined #gluster
07:42 sas rastar, ping
07:43 puebele joined #gluster
07:48 ctria joined #gluster
07:52 ekuric joined #gluster
07:53 Humble joined #gluster
07:54 dowillia joined #gluster
08:02 jtux joined #gluster
08:04 Nevan joined #gluster
08:11 Humble joined #gluster
08:15 vpshastry joined #gluster
08:23 jdarcy joined #gluster
08:35 ProT-0-TypE joined #gluster
08:35 puebele1 joined #gluster
08:41 vpshastry joined #gluster
08:45 alex88 left #gluster
08:46 _benoit_ joined #gluster
08:53 rnts purpleidea: yeah no problem :)
08:53 sripathi joined #gluster
08:57 ndevos joined #gluster
08:59 sripathi joined #gluster
09:00 Staples84 joined #gluster
09:00 sripathi1 joined #gluster
09:01 hchiramm_ joined #gluster
09:02 vimal joined #gluster
09:05 Humble joined #gluster
09:07 Humble joined #gluster
09:13 tryggvil joined #gluster
09:15 bulde joined #gluster
09:16 ThatGraemeGuy joined #gluster
09:18 tryggvil joined #gluster
09:19 _benoit_ joined #gluster
09:21 jdarcy joined #gluster
09:25 kevein joined #gluster
09:32 Humble joined #gluster
09:34 Humble joined #gluster
09:44 rotbeard joined #gluster
09:56 joehoyle joined #gluster
10:00 dobber_ joined #gluster
10:08 vpshastry joined #gluster
10:08 layer3switch joined #gluster
10:15 layer3switch joined #gluster
10:18 glusterbot New news from resolvedglusterbugs: [Bug 764919] geo-replication fails if glusterfs installed from src and glusterd started using /etc/init.d/glusterd <http://goo.gl/TuDXK>
10:22 bulde joined #gluster
10:25 vpshastry joined #gluster
10:30 H__ semiosis: I've added the nobootwait and see "mountall: Skipping mounting /mnt/vol01 since Plymouth is not available" during boot. Here's the full (since this boot) gluster logs : http://dpaste.org/oFXoq/
10:30 glusterbot Title: dpaste.de: Snippet #220573 (at dpaste.org)
10:33 misuzu joined #gluster
10:38 joehoyle joined #gluster
10:39 joehoyle joined #gluster
10:45 vpshastry joined #gluster
10:46 H__ semiosis: also, a "mount -a -tglusterfs" in rc.local does not work. The command works when I log in but complains about "unknown option _netdev (ignored)"
10:50 bstansell joined #gluster
10:57 misuzu joined #gluster
11:05 tryggvil joined #gluster
11:10 duerF joined #gluster
11:16 _pol joined #gluster
11:21 tryggvil joined #gluster
11:24 vimal joined #gluster
11:29 bulde joined #gluster
11:30 bulde1 joined #gluster
11:30 H__ semiosis: More attempts to mount gluster 3.3.1 at boot : http://dpaste.org/wV9pG/
11:30 glusterbot Title: dpaste.de: Snippet #220578 (at dpaste.org)
11:33 jclift_ joined #gluster
11:34 H__ semiosis: some success : doing a 'sleep 10' in rc.local before the mount command makes the mount command succeed. We have a race.
11:37 cyberbootje hi, remember my windows NFS issue?
11:37 H__ semiosis: a 'sleep 10' before 'mount -a -tglusterfs' in /etc/rc.local *also* works.
11:38 tryggvil_ joined #gluster
11:48 jdarcy joined #gluster
11:50 sgowda joined #gluster
11:54 glusterbot New news from newglusterbugs: [Bug 918052] Failed getxattr calls are throwing E level error in logs. <http://goo.gl/7yXTH>
12:05 tryggvil joined #gluster
12:10 tryggvil_ joined #gluster
12:13 edward1 joined #gluster
12:13 Staples84 joined #gluster
12:25 Norky joined #gluster
12:34 yinyin joined #gluster
12:36 timothy joined #gluster
12:39 Humble joined #gluster
12:42 dustint joined #gluster
12:44 yinyin joined #gluster
12:49 vijaykumar left #gluster
12:51 16WAAJ33Y joined #gluster
12:57 dowillia joined #gluster
12:58 bulde joined #gluster
13:02 guigui1 joined #gluster
13:03 jdarcy joined #gluster
13:10 puebele1 joined #gluster
13:25 vimal joined #gluster
13:25 vici joined #gluster
13:27 vici I've got an ovirt/kvm setup where the VMs are stored on a gluster volume. The VMs mainly run oracle databases. Is there any guide to tuning IO performance for glusterfs? Data integrity is a non-issue; it's a test farm setup.
13:30 samppah Celestar: what version of gluster are you using? gluster version 3.4 boosted performance for me but it's still in alpha state
13:31 joehoyle joined #gluster
13:33 m0zes left #gluster
13:33 m0zes joined #gluster
13:35 Celestar samppah: We are using 3.4, it was an improvement over 3.3, but by far not enough.
13:36 Celestar dd'ing something to the gluster device with a block size of 4k gave << 20MB/sec.
13:37 Celestar erm sec. 4k was 100MB/sec
13:37 samppah umm
13:37 Celestar smaller block sizes then dropped performance dramatically.
13:37 Celestar down to < 1MB/sec
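
    [A sketch of the kind of dd probe Celestar describes; the mount point and sizes are placeholders, and conv=fdatasync is added so the figures reflect data flushed to the volume rather than page cache.]
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=4k  count=25600  conv=fdatasync   # ~100 MB with 4k blocks
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=512 count=204800 conv=fdatasync   # same size, smaller blocks
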
13:39 samppah Celestar: have you tested if setting io=threads or aio=native in VM settings has any effect?
13:39 samppah i don't have much experience on tuning glusterfs
13:39 cw joined #gluster
13:40 Celestar samppah: no not yet. We've tried a bit with setting the fs to async, but I will test that in a bit..
13:41 Celestar as I said. I don't give a damn whether data is lost :P
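
    [For reference, the io=/aio= settings samppah mentions map roughly to qemu -drive flags like the following; the image path and memory size are placeholders, and aio=native generally requires cache=none (direct I/O).]
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=/var/lib/libvirt/images/test.img,if=virtio,cache=none,aio=native
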
13:42 aliguori joined #gluster
13:50 jmara joined #gluster
13:50 bennyturns joined #gluster
13:52 H__ semiosis: I have a patch prototype: add a post-start stanza ("post-start script / sleep 5 / end script") to /etc/init/glusterd.conf (a 2 sec sleep is not enough). Of course we need something better than a blunt sleep here. Ideas?
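
    [Roughly what that prototype looks like as an upstart stanza; a blunt workaround, as H__ notes.]
    # /etc/init/glusterd.conf
    post-start script
        sleep 5
    end script
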
14:05 sahina joined #gluster
14:07 tqrst joined #gluster
14:10 jack joined #gluster
14:11 tqrst I started a rebalance yesterday on my 25x2 volume. Memory usage of the rebalance process has been steadily climbing by ~1 megabyte/minute on all my servers and shows no sign of stopping. "echo 2 > /proc/sys/vm/drop_caches" doesn't change anything. Any ideas?
14:11 tqrst this is 3.3.1
14:12 tqrst some of them are already at 10G memory
14:13 tqrst all I could find on the mailing list are old emails related to 3.2.5
14:19 tqrst at this rate I will have to stop rebalancing (again), which is a shame given that I just added a bunch of new bricks that are just sitting there, half empty, while the others are at 85% capacity
14:22 lh joined #gluster
14:28 vpshastry joined #gluster
14:33 rubbs anyone here set up a 3.3.1 on RHEL 5.9 before? I keep running into issues when trying to ls a dir. I used EXT4 because there is no official xfs package for RHEL. Is there any way to confirm or deny that I'm running into the EXT4 bug?
14:36 Norky I have a very strange problem involving gluster and samba (on Red Hat Storage server). An application running on Windows XP that writes a file produces only about half of the expected output
14:36 H__ rubbs : http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/ I have a test script to see if the issue exists
14:37 glusterbot <http://goo.gl/PEBQU> (at joejulian.name)
14:37 H__ rubbs: also, see this http://review.gluster.com/#change,3679
14:37 glusterbot Title: Gerrit Code Review (at review.gluster.com)
14:38 Norky the file should be about 2MB in size, and writing to an 'ordinary' Samba export works fine; however, when writing to the Samba export of the Gluster mount, it produces a file truncated at anywhere from 800KB - 1.1MB
14:38 rubbs H__: thanks I'll read and get back
14:40 rwheeler joined #gluster
14:42 dowillia joined #gluster
14:50 sas joined #gluster
14:53 klaxa|web joined #gluster
14:53 rubbs H__: I'm using 2.6.18-348.1.1.el5 so I'm not sure I should be affected.
14:53 flrichar joined #gluster
14:53 rubbs interesting: ls: memory exhausted
14:54 elyograg I often get so tired that my memory doesn't work. :)
14:54 rubbs don't we all
14:54 rubbs I'm glad I'm running into all these problems now, when I'm just farting around with gluster rather than on a production setup.
14:56 stopbit joined #gluster
14:57 Norky rubbs, there is official XFS for RHEL, it's an optional extra: http://www.redhat.com/products/enterprise-linux-add-ons/file-systems/
14:57 glusterbot <http://goo.gl/mGqbG> (at www.redhat.com)
14:57 twx joined #gluster
14:57 Norky or if you buy the official Red Hat Storage server product, you get a cut-down RHEL with XFS support
14:58 klaxa|web hi, we set up a rather simple glusterfs setup: two nodes, replica. now on Machine B i corrupt a file in the brick-directory, and in the mounted gluster filesystem i get input/output errors upon file access (that's good, i want that). however, how would i go about getting the uncorrupted file from machine A onto the gluster filesystem?
14:58 rubbs Norky: ah, thanks. I'll take a look into that.
14:59 Norky RHSS is moderately expensive
14:59 klaxa|web xattrs in both brick directories show valid values
15:00 lpabon joined #gluster
15:00 rcheleguini joined #gluster
15:00 klaxa|web i.e. trusted.afr.storage-client-{0-1}=0x0...0 and the same trusted.gfid hash
15:01 GLHMarmot joined #gluster
15:11 Norky klaxa|web, ,,{split-brain}
15:11 Norky klaxa|web, ,,(split-brain)
15:11 glusterbot (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
15:11 Norky bah, I give up
15:11 Nagilum joined #gluster
15:11 klaxa|web thanks
15:11 Norky ahh, glusterbot was a bit laggy
15:11 Norky see the second link
15:12 klaxa|web like i said xattr values are both valid, we are running 3.2.7
15:12 Norky basically, in the case of mismatched files in a replica (split-brain), you delete the 'bad' file and its .glusterfs entry
15:13 klaxa|web if i delete the file from the corrupted brick-directory it also gets deleted from the gluster filesystem
15:13 Norky ahh, 3.2.7, I don't think that link applies then
15:13 klaxa|web i guess the easiest way is to overwrite the "bad" file with the "good" file?
15:13 tqrst Norky: there was no .glusterfs in 3.2.7 :)
15:13 klaxa|web :(
15:13 tqrst I think it was all xattr back then
15:13 Norky tqrst, indeed :)
15:13 klaxa|web well my boss wants to rebuild the production system in this test-system
15:14 tqrst fixing split brains was actually simpler back then
15:14 Norky I *think* you'd remove the corrupt file from the brick, and initiate a self-heal
15:14 klaxa|web so we are more or less forced to use 3.2.7
15:14 Norky but I imagine tqrst knows better than me
15:14 klaxa|web i hope so, i tried that, it removed the file from the gluster filesystem
15:15 klaxa|web in that case we would just copy the file from the uncorrupted brick-directory to th-- wait a sec
15:15 klaxa|web if i then initialize a self-heal would it copy the uncorrupted file to the gluster filesystem and thus sync it back?
15:16 klaxa|web hmm no i can only initialize self-heals in the gluster filesystem, no? so if the file doesn't exist in the first place, i can't heal it
15:17 klaxa|web so i have to delete the corrupt file, then copy the uncorrupt file back to the gluster filesystem and then it gets synced again, right?
15:17 Norky I think a self heal would copy the good file to the 'bad' brick from which the corrupt file has been deleted
15:18 klaxa|web so i can do a self-heal on a file that's not in the gluster filesystem? sounds good
15:18 klaxa|web ,,(self-heal)
15:18 glusterbot klaxa|web: Error: No factoid matches that key.
15:18 klaxa|web :(
15:19 Norky klaxa|web, I probably shouldn't be saying anything, I don't know the system well enough myself
15:19 klaxa|web heh, well you probably know more than me
15:19 klaxa|web and saying /something/ is better than me sitting here, being stupid all by myself
15:20 hagarth joined #gluster
15:20 bugs_ joined #gluster
15:22 klaxa|web okay from the documentation it looks like i can only initialize a self-heal from files within the gluster filesystem by calling xargs --null stat >/dev/null
15:22 klaxa|web but if the file doesn't exist how can i touch it?
15:23 Norky try
15:23 Norky touch it/cat it/ do anything that calls an open() or a stat() on the filename
15:24 BSTR joined #gluster
15:24 klaxa|web i predict error 2: file not found
15:24 klaxa|web yeah file got deleted from the gluster filesystem
15:24 Norky hmm, I was wrong then
15:25 Norky I assumed that would trigger a self heal, and gluster would notice that the file exists on one brick but not the other and so replicate it
15:26 klaxa|web ah
15:26 klaxa|web if i cat the file, at least it says input/output error instead of file not found
15:26 Nagilum I'm playing around with glusterfs for the first time, I'm running RHEL6 (2.6.32-279.11.1.el6.x86_64), is xfs still preferred over ext4 ?
15:26 klaxa|web file is 0 bytes though
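
    [For reference, the 3.2-era manual procedure Norky is describing is roughly the following; paths are placeholders, and as the exchange above shows results can vary, so quarantining the bad copy rather than deleting it outright is the safer move.]
    # on the server holding the corrupt replica: move the bad copy out of the brick
    mv /export/brickB/some/dir/badfile /root/badfile.quarantine
    # then, from a client mount, stat the tree to trigger self-heal
    # (the xargs/stat form klaxa|web quotes from the documentation)
    find /mnt/gluster -noleaf -print0 | xargs --null stat >/dev/null
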
15:28 Norky Nagilum, I believe that recent ext4 causes a problem for Gluster, yes
15:28 Nagilum Norky: ok, thanks!
15:31 jbrooks joined #gluster
15:36 rotbeard Nagilum, depends on your base system
15:36 lpabon_ joined #gluster
15:36 rotbeard if possible, use xfs to avoid problems in the future
15:36 Nagilum rotbeard: k, I just switched them to xfs
15:39 rotbeard in my case xfs is not as fast as ext4 is
15:40 rotbeard but I used the default mkfs.xfs, I am pretty sure I can handle this with some tuning
15:40 hagarth joined #gluster
15:44 Nagilum after deleting a volume it seems that I can't simply add the old bricks to a new volume
15:44 Nagilum "/export/b01 or a prefix of it is already part of a volume"
15:45 glusterbot Nagilum: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
15:45 Nagilum thx!
15:45 Nagilum :>
15:50 Nagilum hmm, when I try to extract a tar into the new glusterfs I get "Cannot write: Invalid argument" for many (but not all) files
15:50 kr4d10 joined #gluster
15:51 Nagilum http://pastebin.com/fn8sYwaF
15:51 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
15:51 Norky do all bricks have the same mode (permissions)?
15:52 Norky "ls -ld /export/b0?" on all machines
15:52 Norky ahh, it's striped
15:52 Nagilum I'm doing this as root
15:52 Norky I had a similar problem with striped
15:53 Nagilum all the dirs are 755 root:root
15:53 vpshastry joined #gluster
15:53 Norky and you might not want striping, distributed is better for most cases
15:53 Norky ,,(striping)
15:53 glusterbot I do not know about 'striping', but I do know about these similar topics: 'stripe'
15:53 Nagilum I see
15:53 Norky ,,(stripe)
15:53 glusterbot Please see http://goo.gl/5ohqd about stripe volumes.
15:54 Nagilum Norky: the man page is quite thing on "distributed" :>
15:54 Nagilum s/thing/thin/
15:55 glusterbot Nagilum: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
15:55 Norky glusterbot really does seem to be lagging otday
15:55 Norky s/otday/today/
15:55 glusterbot What Norky meant to say was: glusterbot really does seem to be lagging today
15:56 bala joined #gluster
15:58 tryggvil joined #gluster
15:59 en0x joined #gluster
16:00 Norky there's not much to say about distributed... it's distributed
16:00 glusterbot joined #gluster
16:00 JoeJulian @meh
16:00 glusterbot JoeJulian: I'm not happy about it either
16:01 vpshastry1 joined #gluster
16:07 aliguori joined #gluster
16:22 aliguori_ joined #gluster
16:23 aliguori__ joined #gluster
16:24 Gugge joined #gluster
16:25 aliguori joined #gluster
16:25 dowillia joined #gluster
16:36 __Bryan__ joined #gluster
16:38 lpabon joined #gluster
16:39 en0x hi. is there any info on setting up glusterfs (few replica servers) behind a load balancer on aws?
16:41 rubbs crap, now I'm in licensing hell
16:41 rubbs opps. sorry wrong chan
16:47 tjstansell joined #gluster
16:48 JoeJulian en0x: yes and no...
16:49 JoeJulian en0x: What are you trying to accomplish?
16:50 rubbs I'm repeating myself I'm sure, but I've got some basic questions that I'm still hung up on. First, is it reasonable to want to set up KVM disk images on a replicated volume? Or is performance going to be too bad for it to be reasonable (over gigabit eth)?
16:50 en0x i'm just reading about glusterfs... what i'm trying to accomplish is to not have any downtime. I've seen examples for fstab where you mount the share only from one gluster server... or can I use something like this? http://www.wklej.org/id/973257/
16:50 glusterbot Title: Wklejka #973257 – Wklej.org (at www.wklej.org)
16:51 en0x i thought that if I use: mount -t glusterfs amazon-load-balancer:/test-volume /mnt/glusterfs
16:51 en0x would be easier
16:51 JoeJulian @mount server
16:51 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrnds, or (#2) Learn more about the role played by the server specified on the mount command here: http://goo.gl/0EB1u
16:52 en0x oh i see thanks JoeJulian
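
    [In practice that means any reachable server name works in the mount; it only supplies the volume definition, after which the client connects to every brick directly. A minimal example, with backupvolfile-server shown as an optional fallback for the volfile fetch if the mount helper supports it.]
    mount -t glusterfs server1:/test-volume /mnt/glusterfs
    # fstab equivalent
    server1:/test-volume  /mnt/glusterfs  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0
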
16:55 _pol joined #gluster
16:55 luckybambu joined #gluster
16:55 phase5 joined #gluster
16:55 m0zes rubbs: I've got KVM disk images on a dist+repl volume, I try to write as little as possible to the files themselves. I see better performance mounting the gluster volume inside the vm.
16:56 phase5 hello all
16:56 luckybambu Anyone have experience running a geo replication on a >2million file gluster?
16:56 m0zes s/files/disk images/
16:56 glusterbot What m0zes meant to say was: rubbs: I've got KVM disk images on a dist+repl volume, I try to write as little as possible to the disk images themselves. I see better performance mounting the gluster volume inside the vm.
16:56 _pol joined #gluster
16:56 rubbs m0zes: that's what I was thinking. I also thought about trying to just pxeboot into a very small image with what I need on it, but that may not be feasible.
16:57 rubbs essentially having diskless VMs
16:58 phase5 can someone help me with a problem i do not understand? I have 3 nodes providing a replicated gluster. I mount the drives via fstab (nfs). all is fine except that if one node goes down, the mounted directory is no longer accessible on all other nodes until the missing node comes back. I mount on each node just the local nfs
16:59 phase5 my nfs line in fstab looks like this:
16:59 phase5 176.9.196.237:/www /var/www nfs acl,noatime,_netdev,mountproto=tcp,vers=3 0 0
16:59 phase5 while the ip is the local one
17:00 phase5 i give in addition extra options to my brick:
17:00 phase5 performance.read-ahead: off
17:00 phase5 performance.cache-refresh-timeout: 1
17:00 phase5 cluster.lookup-unhashed: off
17:00 phase5 performance.cache-size: 256MB
17:00 phase5 performance.flush-behind: off
17:00 phase5 performance.cache-max-file-size: 1GB
17:00 phase5 nfs.addr-namelookup: on
17:00 phase5 nfs.trusted-write: on
17:00 phase5 performance.quick-read: on
17:00 phase5 was kicked by glusterbot: message flood detected
17:00 phase5 joined #gluster
17:01 phase5 *grr* — sorry for flooding
17:01 phase5 can anybody help me ?
17:02 phase5 i use glsuterfs 3.3
17:03 phase5 noone?
17:03 fleducquede hello
17:03 fleducquede any errors in the logs ?
17:04 fleducquede that could lead us to the root cause
17:04 zaitcev joined #gluster
17:04 phase5 no - at least I do not see any
17:04 phase5 but this looks to me like an nfs timeout
17:04 fleducquede grep -ir " E " /var/log/glusterfs/* ?
17:05 phase5 but i do not understand why this matters, as i mount the local IP
17:05 phase5 one moment - doing the grep
17:05 phase5 wow - there is a lot
17:06 phase5 so — private msg?
17:06 fleducquede try to grep in the bricks directory
17:06 fleducquede instead of the whole folder
17:06 phase5 i would say the key msg would be that one :
17:06 phase5 E [afr-common.c:1853:afr_lookup_done] 0-www-replicate-0: Failing lookup for /usage, LOOKUP on a file without gfid is not allowed when some of the children are down
17:07 _br_ joined #gluster
17:07 phase5 [2013-03-05 18:04:40.274835] E [posix.c:184:posix_lookup] 0-www-posix: buf->ia_gfid is null for /data/export/www/manual
17:07 fleducquede let me google the error message
17:09 phase5 google does not say much about: glusterfs "buf->ia_gfid is null for"
17:09 rubbs Ok, second question: If I'm sticking to RHEL 5.4+ and not moving to RHEL 6.x yet, is EXT4 OK? I guess I could add XFS tools to it manually, since I'm not running RH supported versions of gluster anyway.
17:10 rubbs I've been trying some things in test runs on EXT4 but I keep getting problems when I try to "ls" in a glusterfs mounted directory
17:10 rubbs so I'm wondering if that's the EXT problem.
17:10 fleducquede what are the mount options on bricks ?
17:10 rubbs me or phase5?
17:10 fleducquede phase5 :)
17:10 rubbs ah, just checking
17:12 phase5 @fleducquede: got my private msg?
17:12 fleducquede phase5, ?
17:12 fleducquede tro ty grep with " W " instead
17:12 fleducquede try to grep with " W " instead
17:13 _br_ joined #gluster
17:14 phase5 @fleducquede: got my private msg?
17:14 phase5 I sent you a personal msg in order not to spam here
17:19 joeto joined #gluster
17:19 hagarth joined #gluster
17:20 rubbs phase5: you can use dpaste and link here
17:20 rubbs that's the preferred way of getting support so that multiple people can look at your stuff
17:22 phase5 I am running 3.3
17:22 phase5 not 3.2
17:23 phase5 3.3.1-1 to be precise
17:24 vpshastry joined #gluster
17:26 glusterbot New news from newglusterbugs: [Bug 913699] Conservative merge fails on client3_1_mknod_cbk <http://goo.gl/ThGYk>
17:31 _pol joined #gluster
17:32 _pol joined #gluster
17:34 phase5 joined #gluster
17:36 luckybambu_ joined #gluster
17:40 daMaestro joined #gluster
17:46 Mo___ joined #gluster
17:49 * kr4d10 is taking down mysql on be5.stage2 for a few minutes
17:49 kr4d10 sorry all
17:56 hybrid5122 joined #gluster
18:05 nueces joined #gluster
18:06 luckybambu joined #gluster
18:20 timothy joined #gluster
18:27 hagarth joined #gluster
18:33 disarone joined #gluster
18:34 suku joined #gluster
18:41 _benoit_ joined #gluster
18:43 lpabon joined #gluster
18:48 hagarth joined #gluster
18:48 _br_ joined #gluster
18:53 _br_ joined #gluster
18:56 _br_ joined #gluster
18:58 phase5 hello all again - I just found the option which causes my other nodes to lose access to the mounted volume when a node goes down
18:58 phase5 the option is: network.ping-timeout
18:58 phase5 after the timeout the volume gets accessible again
18:59 phase5 What does "This reconnect is a very expensive operation and should be avoided at the cost of client wait time for a network disconnect." mean?
18:59 phase5 is there a reason not to set the timeout to, let's say, 5s or so?
19:00 phase5 is there maybe another option which still allows me to read the volume during the timeout and just locks the write operations until the timeout hits?
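
    [For reference, lowering the timeout phase5 found would look like this, using the volume name from the fstab line pasted earlier; the quoted documentation is the caveat: a short timeout makes every transient network blip trigger the expensive disconnect/reconnect cycle.]
    gluster volume set www network.ping-timeout 10
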
19:04 timothy joined #gluster
19:22 nemish joined #gluster
19:24 nemish I was wondering if someone could help me
19:24 nemish http://pastie.org/6394630
19:24 glusterbot Title: #6394630 - Pastie (at pastie.org)
19:24 nemish I can't get the mounts to sync
19:24 nemish I believe its configured correctly
19:31 Humble_away joined #gluster
19:31 semiosis nemish: most likely name resolution isn't working right, or iptables is blocking traffic.  your client log file(s) will say for sure if it's one of those, or a different problem.  please pastie.org the client log, /var/log/glusterfs/var-lib-puppet-ssl.log probably
19:32 lpabon joined #gluster
19:36 nemish looks fine in logs… putting on paste… 1 sec
19:37 nemish http://pastie.org/6394791
19:37 glusterbot Title: #6394791 - Pastie (at pastie.org)
19:41 nemish there is no fw… i have the entries for these hosts in local hosts file on each node
19:41 nemish so there shouldn't be resolution issue
19:43 semiosis those logs aren't big enough, don't show the problem
19:44 semiosis when the client first starts up it prints a dump of the volume config, please include the log from that point on
19:46 nemish semiosis: I restarted gluster but it didn't seem to make a difference… here are the full logs.. you'll see the restart in there http://pastie.org/6394920
19:46 glusterbot Title: #6394920 - Pastie (at pastie.org)
19:48 Humble_away joined #gluster
19:49 nemish semiosis: http://pastie.org/6394934 here is the netstat showing the connection
19:49 glusterbot Title: #6394934 - Pastie (at pastie.org)
19:52 m0zes joined #gluster
19:52 BSTR any of you guys ever seen a replicated volume not replicate to one side, but place the following in the data brick: /path-to-brick/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
19:53 BSTR volume *appears* to be healthy from volume info
20:00 JoeJulian Is that directory entry a symlink (should be) or a directory?
20:01 BSTR JoeJulian : it's a directory, non-symlink
20:01 JoeJulian That's the problem then.
20:04 BSTR JoeJulian : what needs to be a symlink? i'm seeing this on the brick2 node
20:04 BSTR Volume Name: SADMIN
20:04 BSTR Type: Replicate
20:04 BSTR Volume ID: da8de27f-6293-4285-8187-057818c86acf
20:04 BSTR Status: Started
20:04 BSTR Number of Bricks: 1 x 2 = 2
20:04 BSTR Transport-type: tcp
20:04 BSTR Bricks:
20:04 BSTR Brick1: rd-adminl07:/data/SADMIN
20:04 BSTR Brick2: rd-adminl08:/data/SADMIN
20:04 BSTR was kicked by glusterbot: message flood detected
20:04 BSTR joined #gluster
20:05 sborza joined #gluster
20:06 timothy joined #gluster
20:10 elyograg BSTR: use pastie.org or another pastebin site.  using pastebin.com will get you chastised by glusterbot, so don't do that.
20:10 glusterbot Please use http://fpaste.org or http://dpaste.org . pb has too many ads. Say @paste in channel for info about paste utils.
20:10 mooperd_ joined #gluster
20:10 elyograg like that. :)
20:11 BSTR elyograg : will do that going forward, thanks
20:15 _pol joined #gluster
20:16 _pol joined #gluster
20:19 JoeJulian BSTR: ln -sf ../../.. /path-to-brick/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
20:19 JoeJulian There's a self-heal bug that I filed a bug report on, but since nobody can figure out how to repro it, the bug was closed as "works for me"
20:20 dustint joined #gluster
20:23 JoeJulian bug 859581
20:23 glusterbot Bug http://goo.gl/60bn6 high, unspecified, ---, vsomyaju, ASSIGNED , self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
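
    [A small check/repair sketch for the entry JoeJulian describes; path-to-brick is a placeholder, and if the entry is already a directory it has to be removed before the symlink can be created.]
    stat -c %F /path-to-brick/.glusterfs/00/00/00000000-0000-0000-0000-000000000001   # expect: symbolic link
    # if it reports "directory" instead (and the directory is empty):
    rmdir /path-to-brick/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
    ln -s ../../.. /path-to-brick/.glusterfs/00/00/00000000-0000-0000-0000-000000000001
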
20:23 pipopopo joined #gluster
20:28 nueces joined #gluster
20:39 _pol joined #gluster
20:40 _pol joined #gluster
20:44 nemish semiosis: sorry had to step away, any idea why this isn't working?
20:45 semiosis nemish: not sure, logs have some messages about connectivity issues, but it's not clear to me what's failing
20:46 semiosis nemish: could you pastie the output of 'gluster volume status'
20:46 nemish semiosis: ok what can i do to help?
20:46 nemish yeah it was in a pastie already but i will again.. 1 min
20:46 semiosis oops, will recheck
20:47 semiosis i only see 'gluster volume info' output, not 'gluster volume status'
20:47 nemish semiosis: oh there is no "gluster volume status" - it gives an error. you mean "gluster volume info all"?
20:47 semiosis nemish: what version of glusterfs?
20:47 semiosis volume status is new in 3.3
20:47 semiosis iirc
20:47 nemish 3.2.5 i think it said in log
20:47 jag3773 joined #gluster
20:48 semiosis if you're just starting out, please use the ,,(latest)
20:48 glusterbot The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
20:49 JoeJulian For me ,,(split-brain)
20:49 glusterbot (#1) learn how to cause split-brain here: http://goo.gl/nywzC, or (#2) To heal split-brain in 3.3, see http://goo.gl/FPFUX .
20:52 JoeJulian That damned Q&A site... You can't search and find that split-brain article.
20:56 jskinner_ joined #gluster
21:00 _pol_ joined #gluster
21:02 nemish semiosis: i've used gluster before (2.0.9) on another set of servers and it works fine… much different config though… I just upgraded to 3.3 from 3.2.5 on these servers… volumes and peers were lost… recreated and everything is working now… maybe a bug in 3.2.5?
21:03 jbrooks joined #gluster
21:04 BSTR JoeJulian : figured out what i was doing wrong, was trying to geo-replicate to the slave brick as opposed to the slave mount point. Think im a little loopy today <shakes head>
21:05 semiosis JoeJulian: you wont have "that damned Q&A site" to kick around for much longer
21:37 tqrst semiosis: he'll still have the mailing list, blogs, pdf guide, and official documentation left to kick though :p
21:38 tqrst (which is my main gripe about documentation - it's all over the place, and the parts I care about are on potentially outdated mailing list / blog posts)
21:38 tqrst oh and the unsearchable irc logs
21:39 flrichar joined #gluster
21:41 hybrid512 joined #gluster
21:51 _pol joined #gluster
21:53 _pol joined #gluster
21:54 edong23 joined #gluster
21:54 _pol joined #gluster
22:00 hattenator joined #gluster
22:01 klaxa joined #gluster
22:18 ultrabizweb joined #gluster
22:19 nemish semiosis: sorry to bother you but could you help me with this error: "is in use as a brick of a gluster volume" I've uninstalled and wiped /var/lib/gluster* and /etc/gluster*
22:19 nemish can't find where that reference is
22:20 semiosis path or a prefix of it is already part of a volume
22:20 glusterbot semiosis: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
22:20 semiosis nemish: maybe that ^?
22:20 semiosis JoeJulian: most difficult factoid trigger... EVER
22:21 nemish semiosis: like i said, i stopped gluster, removed packages and removed all the configurations, and still can't clear it
22:22 semiosis the instructions at that link give you another thing to check... the ,,(extended attributes) on the brick paths (if you're reusing the same brick paths, that is)
22:22 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
22:22 JoeJulian tqrst: The more recent irc logs are searchable.
22:25 JoeJulian @learn reuse brick as To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
22:25 glusterbot JoeJulian: The operation succeeded.
22:25 glusterbot Bug 877522: medium, unspecified, ---, jdarcy, ON_QA , Bogus "X is already part of a volume" errors
22:25 H__ i'm thinking of adding strace to the upstart scripts to find out what's causing the race. Do any of you have recommendations on that matter ?
22:31 JoeJulian H__: Maybe this? https://gist.github.com/anonymous/4472816
22:31 glusterbot Title: The glusterfs-server upstart conf would start before the brick was mounted. I added this ugly hack to get it working. I ripped the pre-start script from mysql and decided to leave the sanity checks in place, just because. (at gist.github.com)
22:31 daMaestro joined #gluster
22:34 H__ JoeJulian: almost, i have a workaround with a sleep 10 in the post-start :) (pasted url in gluster-dev earlier today). But i want the sleep out of it, i want guarantees that the mount can proceed after glusterd started.
22:35 H__ my brick is mounted already though, it's a test-setup with the bricks just directories on the root filesystem. So that's not it
22:37 misuzu joined #gluster
22:37 vpshastry joined #gluster
22:40 H__ strange thing is that the race between the two gluster upstart scripts never showed with 3.2.5, and reproduces with 3.3.1
22:45 hagarth joined #gluster
22:46 vpshastry joined #gluster
22:49 _pol joined #gluster
22:50 _pol joined #gluster
22:51 JoeJulian H__ I think the glusterfsd processes started faster in 3.2. There seems to be a slight delay in 3.3 between glusterd starting and the bricks being ready.
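
    [One way to replace the blunt sleep H__ mentions would be a post-start loop that polls glusterd before the mount jobs run; this only confirms glusterd is answering CLI requests, not that every brick port is up, so it is a sketch rather than a guaranteed fix for the race JoeJulian describes.]
    # /etc/init/glusterd.conf (hypothetical)
    post-start script
        # give up after ~30s rather than blocking boot indefinitely
        for i in $(seq 1 30); do
            gluster volume info >/dev/null 2>&1 && break
            sleep 1
        done
    end script
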
22:55 JoeJulian Anybody here want to volunteer to document 2991 error messages, their meaning, causes, and possible solutions? ;)
22:56 misuzu joined #gluster
22:59 kr4d10 joined #gluster
23:03 hagarth joined #gluster
23:05 semiosis hmmm interesting
23:23 jdarcy joined #gluster
23:51 vigia joined #gluster
23:51 yinyin joined #gluster
23:54 JoeJulian jdarcy: I'm referring to being backported to 3.3
23:59 pipopopo joined #gluster
23:59 Celestar_ joined #gluster
